\section{The Role of Quantum Simulations}
The motivation for numerical solutions to the quantum many-body problem is the intractability of analytic solutions except in limited circumstances. Indeed, the extension of the analytic solution of the single electron Hydrogen atom to two electrons is already problematic, as emphasized in the classic text of Bethe and Salpeter~\cite{bethe57}. As a consequence, quantum simulation approaches appeared in chemistry~\cite{anderson75,ceperley81}, condensed matter~\cite{ceperley80,hirsch82}, nuclear~\cite{negele86,lynn19}, and high energy physics~\cite{kogut79,blankenbecler81,kogut83} almost as soon as computers became reasonably available for scientific work.
There is considerable methodological linkage in quantum simulation between these fields. For example, in the case of the DQMC method used in this work, methods for treating the accumulation of round-off errors at low temperatures were first developed in the nuclear physics community~\cite{sugiyama86} before being adapted to condensed matter. Likewise, the linear scaling methods of lattice gauge theory (LGT)~\cite{batrouni85,duane85,gottleib87}
were soon adapted to condensed matter physics~\cite{scalettar86,scalettar87}, albeit with only
limited success owing to the extremely anisotropic (non-relativistic) nature of the condensed matter space-imaginary time lattices. Many of the algorithms, like Fourier acceleration, which are crucial to LGT, were first implemented and tested for classical spin models~\cite{davies88}.
\vskip0.10in \noindent
\section{Determinant Quantum Monte Carlo}
The DQMC method is a specific type of AFQMC~\cite{blankenbecler81,Buendia86,white89,chen92,assaad02,gubernatis16,alhassid17,hao19,he19}.
In DQMC, the partition function ${\cal Z}$ is expressed as a path integral
and the Trotter approximation is used to isolate the quartic terms,
\begin{align}
{\cal Z} = {\rm Tr} \, e^{-\beta \hat H}
= {\rm Tr} \, [ e^{-\Delta \tau \hat H} ]^{L_\tau}
\sim {\rm Tr} \, [ e^{-\Delta \tau \hat H_t} e^{-\Delta \tau \hat H_U} ]^{L_\tau}
\, ,
\end{align}
where $\hat H_t$ includes the hopping and chemical potential (together with all other bilinear terms in the fermionic operators), and $\hat H_U$ the on-site interactions, in the Hubbard Hamiltonian of Eq.~\eqref{eq:ham_spinful}. The latter are then decomposed via,
\begin{align}
e^{-\Delta \tau U (n_{i\uparrow} - \frac{1}{2})
(n_{i\downarrow} - \frac{1}{2})}
= \frac{1}{2} e^{-U \Delta \tau/4} \sum_{s_i=\pm 1} e^{\lambda s_i (n_{i\uparrow} - n_{i\downarrow})}
\, ,
\label{eq:HS_spinful}
\end{align}
where ${\rm cosh} \, \lambda = e^{U \Delta \tau / 2}$. A Hubbard-Stratonovich variable (bosonic field) $s_i$ must be introduced at each spatial site $i$ and on each of the $L_\tau$ imaginary time slices. The prefactor $\frac{1}{2} e^{-U \Delta \tau/4}$ is an irrelevant constant which may be dropped.
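Since both sides of Eq.~\eqref{eq:HS_spinful} are diagonal in the occupation number basis, the identity can be verified directly on the four states $(n_{i\uparrow}, n_{i\downarrow}) \in \{0,1\}^2$. A minimal numerical check (the values of $U$ and $\Delta\tau$ below are arbitrary and chosen only for illustration):

```python
import numpy as np

# Hypothetical parameter values, chosen only for illustration.
U, dtau = 4.0, 0.1
lam = np.arccosh(np.exp(U * dtau / 2))  # cosh(lam) = exp(U * dtau / 2)

max_err = 0.0
for n_up in (0, 1):
    for n_dn in (0, 1):
        # Left-hand side: quartic (density-density) exponential.
        lhs = np.exp(-dtau * U * (n_up - 0.5) * (n_dn - 0.5))
        # Right-hand side: sum over the Ising field s = +/-1.
        rhs = 0.5 * np.exp(-U * dtau / 4) * sum(
            np.exp(lam * s * (n_up - n_dn)) for s in (-1, 1))
        max_err = max(max_err, abs(lhs - rhs))
# max_err is at machine-precision level: the identity holds exactly.
```

The same check, with $U \to V$ and the two spin occupations replaced by the occupations on the two sites of a bond, applies to the spinless decoupling of Eq.~\eqref{eq:HS_spinless} below.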
The key observation is that the right hand side of Eq.~\eqref{eq:HS_spinful} is now
quadratic in the fermions, so that the partition function ${\cal Z}$ is a trace over a product of quadratic forms of fermionic operators:
\begin{align}
{\cal Z} = \sum_{\{s_{i\tau} \}}{\rm Tr_{\uparrow}} \, [
e^{\vec c^{\, \dagger}_{\uparrow} K \vec c^{\phantom{\dagger}}_\uparrow}
e^{\vec c^{\, \dagger}_{\uparrow} P^1 \vec c^{\phantom{\dagger}}_\uparrow}
\cdots
e^{\vec c^{\, \dagger}_{\uparrow} K \vec c^{\phantom{\dagger}}_\uparrow}
e^{\vec c^{\, \dagger}_{\uparrow} P^{L_\tau} \vec c^{\phantom{\dagger}}_\uparrow}
\, ]
\,\,\, {\rm Tr_{\downarrow}} \, [
e^{\vec c^{\, \dagger}_{\downarrow} K \vec c^{\phantom{\dagger}}_\downarrow}
e^{- \vec c^{\, \dagger}_{\downarrow} P^1 \vec c^{\phantom{\dagger}}_\downarrow}
\cdots
e^{\vec c^{\, \dagger}_{\downarrow} K \vec c^{\phantom{\dagger}}_\downarrow}
e^{- \vec c^{\, \dagger}_{\downarrow} P^{L_\tau} \vec c^{\phantom{\dagger}}_\downarrow}
\, ]
\, ,
\label{eq:Z_tr}
\end{align}
with
$ \vec c^{\, \dagger}_{\sigma} =
(\, c^{\, \dagger}_{\sigma 1 }
\, c^{\, \dagger}_{\sigma 2 }
\cdots
\, c^{\, \dagger}_{\sigma N }
\, )$.
The matrix $K$ is the same for all time slices; it contains $\mu \Delta \tau$ along its diagonal and $t\Delta \tau$ for pairs of sites connected by the hopping. (In the case of the ionic Hubbard model, the staggered site energy also appears along the diagonal.) The matrices $P^\tau$ are diagonal, with $P^\tau_{ii} = \lambda s_{i \tau}$; the spin dependence enters through the explicit signs in the exponentials of Eq.~\eqref{eq:Z_tr} ($+$ for $\uparrow$, $-$ for $\downarrow$). All matrices have dimension equal to the number of spatial lattice sites $N$.
The trace of a quadratic form such as Eq.~\eqref{eq:Z_tr} can be done analytically~\cite{blankenbecler81,white89,chen92,assaad02,gubernatis16,alhassid17,hao19,he19}, resulting in
\begin{align}
{\cal Z} = \sum_{\{s_{i\tau} \}}
{\rm det} [ I + e^{K} e^{P^1}
\cdots
e^{K} e^{P^{L_\tau}} ]
\,\, {\rm det} [ I + e^{K} e^{-P^1}
\cdots
e^{K} e^{-P^{L_\tau}} ]
\, .
\label{eq:Z_det}
\end{align}
The expression for ${\cal Z}$ of Eq.~\eqref{eq:Z_det} contains no quantum operators, just the matrices $K, \{ P^\tau \}$ of the quadratic forms. Its calculation is thereby reduced to a classical Monte Carlo problem in which the sum over $\{ s_{i\tau} \}$ must be done stochastically with a weight equal to the product of the two determinants. That these might become negative is the origin of the SP in DQMC.
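For a toy cluster, the sum over Hubbard-Stratonovich fields in Eq.~\eqref{eq:Z_det} can be enumerated exactly. The sketch below (a hypothetical two-site cluster with two imaginary-time slices; \texttt{expm\_sym} is a small helper defined here, not part of any DQMC code) accumulates every weight as the product of the two determinants and confirms that at $\mu=0$ on a bipartite cluster all weights are positive, so $\langle{\cal S}\rangle = 1$; turning on $\mu$ generically allows negative weights, which is the origin of the SP.

```python
import numpy as np
from itertools import product

def expm_sym(M):
    """Matrix exponential of a real symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

# Hypothetical toy parameters: two-site cluster, two imaginary-time slices.
N, Ltau = 2, 2
t, mu, U, dtau = 1.0, 0.0, 4.0, 0.25
lam = np.arccosh(np.exp(U * dtau / 2))
K = dtau * np.array([[mu, t],
                     [t, mu]])          # hopping + chemical potential
eK = expm_sym(K)

Z, sgn_sum, nconf = 0.0, 0.0, 0
for fields in product((-1, 1), repeat=N * Ltau):
    s = np.array(fields).reshape(Ltau, N)
    M_up, M_dn = np.eye(N), np.eye(N)
    for tau in range(Ltau):             # build the e^K e^{+-P^tau} strings
        M_up = M_up @ eK @ np.diag(np.exp(+lam * s[tau]))
        M_dn = M_dn @ eK @ np.diag(np.exp(-lam * s[tau]))
    w = np.linalg.det(np.eye(N) + M_up) * np.linalg.det(np.eye(N) + M_dn)
    Z += w                              # Monte Carlo weight of this config
    sgn_sum += np.sign(w)
    nconf += 1

avg_sign = sgn_sum / nconf              # equals 1: no sign problem at mu = 0
```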
Besides this spinful case, we have also investigated interacting spinless fermions. The problem formulation in QMC simulations is almost identical to that of the spinful case, but now the decoupling of the interactions $V$ on each nearest-neighbor bond $\langle i,j\rangle$ reads
\begin{align}
e^{-\Delta \tau V (n_i - \frac{1}{2}) (n_j - \frac{1}{2})}
= \frac{1}{2} e^{-V \Delta \tau/4} \sum_{s_{ij}=\pm 1} e^{\lambda s_{ij} (n_i - n_j)}
\, ,
\label{eq:HS_spinless}
\end{align}
where ${\rm cosh} \, \lambda = e^{V \Delta \tau / 2}$. The Hubbard-Stratonovich variable $s$ now lives on the bonds, and its total number for the rhombus-shaped space-time lattices used here is $3L^2 L_\tau$. After a similar procedure for tracing out the fermions, one ends up with
\begin{align}
{\cal Z} = \sum_{\{s_{ij,\tau} \}}
{\rm det} [ I + e^{K} e^{P^1}
\cdots
e^{K} e^{P^{L_\tau}} ],
\label{eq:Z_det_spinless}
\end{align}
where the elements of the diagonal matrix $P^\tau$ are $P_{ii}^\tau = (-1)^i \lambda \sum_j s_{ij,\tau}$. A major difference from the spinful case is that the protection available on bipartite lattices, where the signs of the up and down spin determinants are locked together~\cite{Hirsch1985,Iazzi2016}, is no longer present. Generically this can give rise to an even more detrimental SP, yet it allows us to systematically locate the QCP of Eq.~(\ref{eq:ham_spinless}) with high accuracy.
\vskip0.10in \noindent
\section{Physical Observables}
The central object in the QMC simulations is the Green's function $\mathbf{G}^\sigma$, whose matrix elements are $G_{ij}^\sigma(\tau^\prime,\tau)=\langle c_{i\sigma}^{\phantom{\dagger}}(\tau^\prime) c_{j\sigma}^\dagger(\tau)\rangle$. By using Wick contractions and the fermionic anti-commutation relations one can define all quantities used in the main text. These are the double occupancy,
\begin{equation}
\langle n_{\uparrow\downarrow}\rangle = \langle n_\uparrow n_\downarrow\rangle ,
\end{equation}
the static susceptibility
\begin{equation}
\chi(\textbf{q}) = \frac{\beta}{N}\sum_{i,j}e^{{\rm i} \textbf{q}\cdot (\textbf{R}_i - \textbf{R}_j)} \langle (n_{i,\uparrow} - n_{i,\downarrow})(n_{j,\uparrow} - n_{j,\downarrow})\rangle,
\end{equation}
and the pair-susceptibility
\begin{equation}
P_\alpha = \int_0^\beta d\tau \langle \Delta_\alpha^{\phantom{\dagger}}(\tau)\Delta_\alpha^\dagger(0)\rangle,
\label{eq:pairsusc}
\end{equation}
with the momentum-dependent pair operator given by
\begin{equation}
\Delta_\alpha^\dagger = \frac{1}{N}\sum_{\textbf{k}} f_\alpha (\textbf{k})\,c_{\textbf{k}\uparrow}^\dagger c_{-\textbf{k}\downarrow}^\dagger.
\end{equation}
The form factors $f_\alpha (\textbf{k})$ describe the various symmetry channels investigated:
\begin{equation}
f_d (\textbf{k}) = \cos k_x - \cos k_y;\ \ f_{s^*} (\textbf{k}) = \cos k_x + \cos k_y; \ \ f_s(\textbf{k}) = 1,
\end{equation}
for $d$-wave, extended $s$-wave, and $s$-wave pairings, respectively.
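The symmetry content of these form factors can be made explicit by their behavior under a $90^\circ$ rotation of $\mathbf{k}$: the $d$-wave factor is odd, while the extended $s$-wave factor is even. A minimal sketch (the test momentum is arbitrary):

```python
import numpy as np

def f_d(kx, ky):        # d-wave form factor
    return np.cos(kx) - np.cos(ky)

def f_sstar(kx, ky):    # extended s-wave form factor
    return np.cos(kx) + np.cos(ky)

kx, ky = 0.3, 1.1       # arbitrary test momentum
# Under the 90-degree rotation (kx, ky) -> (ky, -kx):
d_odd = np.isclose(f_d(ky, -kx), -f_d(kx, ky))          # sign flips
s_even = np.isclose(f_sstar(ky, -kx), f_sstar(kx, ky))  # invariant
```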
In all data presented, we subtract the uncorrelated (non-vertex) contribution, $P_\alpha^{\rm nv}$, in which pairs of fermionic operators are first averaged before taking the product, i.e., terms in Eq.~(\ref{eq:pairsusc}) such as $\langle c_{i\downarrow}^\dagger(\tau)c_{j\downarrow}^{\phantom{\dagger}}(0)c_{l\downarrow}^\dagger(\tau)c_{m\downarrow}^{\phantom{\dagger}}(0)\rangle$ get replaced by their decoupled contributions $\langle c_{i\downarrow}^\dagger(\tau)c_{j\downarrow}^{\phantom{\dagger}}(0)\rangle \, \langle c_{l\downarrow}^\dagger(\tau)c_{m\downarrow}^{\phantom{\dagger}}(0)\rangle$.
Other quantities used exclusively in this supplemental material are introduced in the corresponding sections.
\vskip0.10in \noindent
\section{Sign Problem: General Importance}
In lattice gauge theory, the SP is triggered by a non-zero chemical potential $\mu$.
Attempts to solve, or reduce, the SP include analytic continuation from complex to real chemical potentials $\mu$~\cite{alford99,toulouse16}, Taylor expansion about zero $\mu$~\cite{deforcrand02,gavai03}, re-weighting approaches~\cite{allton02}, complex Langevin methods~\cite{adami01,berger19}, and Lefschetz thimble techniques~\cite{witten10,cristoforetti12}. For a review of the SP in LGT, see Ref.~\cite{banuls19}. A similar litany of papers, ideas, and new methods characterizing, ameliorating, or solving the SP can be found in the condensed matter~\cite{loh90,ortiz93,zhang97,chandrasekharan99,henelius00,bergkvist03,troyer05,nyfeler08,nomura14,mukherjee14,shinaoka15,kaul15,iglovikov15,he19,fukuma19,ulybyshev20,kim20}, nuclear physics~\cite{wiringa00,alhassid17,lynn19,gandolfi19}, and quantum chemistry~\cite{umrigar07,shi14,roggero18,rothstein19} communities.
These approaches differ in detail, depending on whether they are addressing the SP in real space versus lattice models, quantum spin versus itinerant fermion, etc.
One of the dominant themes in atomic, molecular, and optical (AMO) physics over the last decade has been the possibility that ultracold atoms in an optical lattice might serve as emulators of fundamental models of condensed matter physics~\cite{bloch05,esslinger10,bloch12,tarruell18,schafer20}. The initial motivation was the possibility that simplified model Hamiltonians could be more precisely realized in the AMO context than in the solid state, where materials `complications' are unavoidable and often significant, if not dominant. However, it was quickly understood that an equally important advantage was that, because of the SP, the solution of these simplified models was possible only at temperatures well above those needed to access phenomena like $d$-wave superconductivity.
Thus it is fair to say that the SP has been a significant driver of the enormous efforts and progress in this domain of AMO physics.
A similar theme is present in quantum computing~\cite{ortiz01,brown10,preskill18,clemente20}, which promises the general possibility of solving problems much more rapidly (`quantum supremacy') than with a classical computer. The exponential scaling time of solutions of model Hamiltonians in the presence of the SP offers one of the most significant targets for such endeavors~\cite{Steudtner2018,Arute2020}.
\vskip0.10in \noindent
\section{The sign problem in other QMC algorithms}
In exploring the linkage between the SP and critical properties, we have focused exclusively on DQMC. As noted in the previous discussion, the SP also occurs in a plethora of QMC methods: world-line quantum Monte Carlo (WLQMC)~\cite{kawashima04}, Green's function (diffusion) Monte Carlo (GFMC)~\cite{reynolds90}, dynamical mean field theory (DMFT) and its cluster extensions (including continuous time approaches)~\cite{georges96,beard96,hettler00,kotliar01,rubtsov05,maier05,kyung06,gull08,gull11}, and diagrammatic QMC~\cite{prokofev96,vanhoucke10,rossi17,rohringer18}. It would be interesting to explore these situations as well, checking the direct correlation between the SP and regimes of quantum critical behavior. In WLQMC, it is known that the onset of the sign problem occurs at much higher temperature than in DQMC, and arises from particle exchange in the world-lines. Indeed, this provides one of the earliest examples of the dependence of the sign problem on the algorithm used. It seems probable that the restriction of WLQMC to only very high temperatures ($T/W \gtrsim 1/5$ is quite typical) might preclude any association of the sign problem with interesting low temperature correlations and transitions. It is worth noting that an attempt to reconcile WLQMC and DQMC was put forward by introducing more generic Hubbard-Stratonovich transformations~\cite{Hirsch1986}; it has further been argued that the SP in WLQMC has two origins: one from the fact that Slater determinants are anti-symmetric sums of world-line configurations, and another intrinsic one, akin to that in DQMC, which has been claimed to have a topological origin~\cite{Iazzi2016}. In the same way, alternative choices of Hubbard-Stratonovich transformations in DQMC~\cite{batrouni90,chen92} significantly worsen the sign problem.
DMFT~\cite{georges92,jarrell92,georges96}, on the other hand, is at the opposite end of the spectrum, with a sign problem which is greatly reduced relative to that of DQMC. Unfortunately, its cluster extensions, using QMC as the cluster solver, exhibit an increasingly serious SP at low enough temperatures as the number of points in the momentum grid increases~\cite{jarrell01,maier05,kyung06}.
\begin{figure}[b]
\includegraphics[scale=0.33]{spinful_SM}
\caption{\textbf{Local observables in the vicinity of the quantum critical point on the SU(2) honeycomb Hubbard model.} The derivative of the double occupancy (\textbf{A}) and the kinetic energy (\textbf{B}) at $T/t = 1/20$ when approaching the quantum critical point at $\mu\to0$ and $U_c\simeq3.85t$. As in the main text for other models, the noisy data at large chemical potentials denote the regime of small $\langle{\cal S}\rangle$ with a vanishing signal-to-noise ratio. (\textbf{C}) and (\textbf{D}) display the same quantities when approaching $T/t\to0$, with a small chemical potential $\mu/t=0.1$. As in the $T=0$ results of Ref.~\cite{meng10}, the derivative of the kinetic energy displays a peak within the AFMI phase. Our data show this persists to finite temperatures. The lattice size used is $L=9$, and all data are averaged over 20 independent runs.}
\label{fig:spinful_SM}
\end{figure}
We finish by noting that while we have argued that when a QCP is present, the SP provides quantitative information about its location, the converse is {\it not} necessarily true: an SP can exist even in the absence of a QCP. In particular, in WLQMC, free fermions have an SP in $D>1$ without possessing any sort of phase transition. As noted above, DQMC is SP free when the interactions vanish, so this simple counter-example is not present in that algorithm.
\vskip0.10in \noindent
\section{The sign problem in other Hamiltonians}
In our work, the spinless fermion Hamiltonian on a honeycomb lattice offered a particularly concrete case where a sign-problem-free approach allows a detailed study of quantum critical behavior. We exploited this to make a very quantitative test of the connection of the SP to the location of the QCP. In addition to exploring other algorithms, a promising further line of inquiry is to turn on a chemical potential in other `sign problem free' models~\cite{chandrasekharan10,berg12,wang15,li16,li19}. The Kane-Mele-Hubbard model would be especially interesting, since it presents a framework to understand the transition from a topological phase (the quantum spin Hall insulator) towards a (topologically trivial) Mott insulator with antiferromagnetic order~\cite{Hohenadler2012,Bercx2014,Toldin2015}. The Hubbard-Stratonovich transformation is slightly more complicated, and in fact a `phase problem' appears instead. In exploring this competition between ordered and topological phases, the Haldane-Hubbard model presents a challenging case: owing to the absence of time-reversal symmetry, it gives rise to a severe SP in the simulations~\cite{Imriska2016}. Whether an analysis similar to the one conducted here can help locate the QCP associated with a topological transition, in system sizes exceeding those amenable to ED~\cite{Vanhala2016,Shao2021}, is an open question left for future studies.
A further possibility for future work includes situations where disorder drives a quantum phase transition~\cite{huscroft97}. This is of particular interest because different types of disorder can either possess particle-hole symmetry or not~\cite{denteneer01}, and this dichotomy is known to be linked both to the presence of the sign problem as well as to the occurrence of metal to insulator transitions~\cite{denteneer99,denteneer03}. Thus disordered systems might provide an especially rich arena to explore the connection between the SP and the underlying physics of the Hamiltonian.
\vskip0.10in \noindent
\section{More details on the Spinful Hubbard Model on a Honeycomb Lattice (main text, Sec.~I)}
\paragraph{Energy and Double Occupation.---}The quantum critical point separating the paramagnetic semimetal and the antiferromagnetic insulator (AFMI) phases of the honeycomb Hubbard Hamiltonian has been characterized using observables ranging from the magnetic structure factor to the conductivity. In the main text, we focused on using the average sign as a signal of the QCP. Here we provide context by showing some of the traditional measurements. More detailed results are contained in the literature~\cite{sorella92,paiva05,herbut06,meng10,giuliani10,ma11,clark11,sorella12,raczkowski20}.
Figure~\ref{fig:spinful_SM} shows the derivatives of the average kinetic energy $\langle K \rangle = \frac{-t}{2L^2}\sum_{\langle i,j\rangle,\sigma} (c_{i \sigma}^{\dagger} c_{j \sigma}^{\phantom{}} + c_{j\sigma}^{\dagger} c_{i \sigma}^{\phantom{}})$ and double occupation $\langle n_{\uparrow \downarrow} \rangle$ with respect to $U$ across the honeycomb lattice transition from semimetal to AFMI. Both show clear signals in the vicinity of $U_c$. The accurate indication of antiferromagnetic long range order requires a careful finite size scaling analysis of the antiferromagnetic structure factor, which can be found in Refs.~\cite{paiva05,meng10,sorella12}.
\begin{figure}[b]
\includegraphics[width=0.6\columnwidth]{spin_resolved_sign_hc_N_vs_U.pdf}
\caption{\textbf{Spin resolved sign for the SU(2) Hubbard model on the honeycomb lattice.} The scaling with the inverse of the linear system size $L$ of the average sign of individual determinants in a case where there is no sign problem: $\mu/t = 0$. The star marker depicts the best known estimate of the critical value $U_c/t = 3.869$~\cite{sorella12} at the ground state for the onset of the Mott insulating phase with AFM order. All data are extracted at $T/t = 1/20$, with $\Delta\tau = 0.1$.}
\label{fig:spin_resolv_sign}
\end{figure}
\paragraph{Individual spin channel average sign in the Spinful Hubbard Model on a Honeycomb Lattice.---} In the main text, we have demonstrated how the average sign can be used as a `tracker' of quantum critical behavior. In the case of models within regimes where a sign problem is absent, e.g., for an SU(2) Hubbard model on a bipartite lattice, this ability is no longer available if the chemical potential $\mu = 0$, since the determinants of the two spin species always have the same sign so that their product is positive. This can be proven by considering a staggered particle-hole transformation (PHT) $c^{\dagger}_{i\downarrow} \rightarrow (-1)^i c^{\phantom{\dagger}}_{i\downarrow}$ on the down spin species. Here $(-1)^i = +1(-1)$ on sublattice A(B). Under the PHT, the kinetic energy matrix $K$ of Eqs.~\eqref{eq:Z_tr},\eqref{eq:Z_det} remains invariant, but the matrices $P^\tau$ in the down spin trace change sign, making the down spin determinant the same as the up spin determinant, up to a positive factor $e^{\lambda \sum_{i \tau} s_{i\tau}}$~\cite{Hirsch1985}.
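This sign locking is easy to confirm numerically on a toy bipartite cluster. The sketch below (a hypothetical four-site ring at $\mu = 0$, with parameters chosen only for illustration; \texttt{expm\_sym} is a small helper defined here, not part of any DQMC code) evaluates the up and down spin determinants of Eq.~\eqref{eq:Z_det} for random field configurations and checks that their product is always positive:

```python
import numpy as np

def expm_sym(M):
    """Matrix exponential of a real symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

rng = np.random.default_rng(0)
# Hypothetical illustration: bipartite four-site ring at mu = 0.
N, Ltau = 4, 4
t, U, dtau = 1.0, 8.0, 0.125
lam = np.arccosh(np.exp(U * dtau / 2))
K = np.zeros((N, N))
for i in range(N):                      # nearest-neighbor hopping, periodic
    K[i, (i + 1) % N] = K[(i + 1) % N, i] = dtau * t
eK = expm_sym(K)

products = []
for _ in range(200):                    # random HS field configurations
    s = rng.choice((-1, 1), size=(Ltau, N))
    M_up, M_dn = np.eye(N), np.eye(N)
    for tau in range(Ltau):
        M_up = M_up @ eK @ np.diag(np.exp(+lam * s[tau]))
        M_dn = M_dn @ eK @ np.diag(np.exp(-lam * s[tau]))
    d_up = np.linalg.det(np.eye(N) + M_up)
    d_dn = np.linalg.det(np.eye(N) + M_dn)
    products.append(d_up * d_dn)        # signs locked => positive product
```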
While the {\it product} of the determinants is always positive in this situation, the QCP remains imprinted in the average sign of the determinants for {\it individual} spin components $\langle {\cal S}_\sigma\rangle$. To illustrate this, we consider the first model used in the main text, the repulsive spinful Hubbard model on the (bipartite) honeycomb lattice. Figure~\ref{fig:spin_resolv_sign} plots $\langle {\cal S}_\sigma\rangle$ ($\sigma=\uparrow,\downarrow$) at fixed $T/t=1/20$. The individual signs are largely positive in the metallic phase, but rather abruptly become equally positive and negative ($\langle {\cal S} \rangle \sim 0$) in the AFMI phase $U>U_c$. The match between this drop in the sign and the position of the QCP becomes increasingly precise in the thermodynamic limit $1/L \rightarrow 0$. The sharpness of the drop in $\langle {\cal S}_\sigma\rangle$ with increasing system size is suggestive of a possible scaling form for this quantity. Preliminary data, to be presented elsewhere, indicate a scaling with critical exponents compatible with the ones obtained from physical observables~\cite{Assaad2013}.
\vskip0.10in \noindent
\section{More details on the Ionic Hubbard Hamiltonian (main text, Sec.~II)}
\paragraph{Finite-size effects.---} The ionic Hubbard model presents a unique situation in our study: instead of displaying a quantum critical point at half-filling, it exhibits a quantum critical regime, associated with a correlated metal (CM) phase~\cite{paris07,bouadim07,fabrizio99,kampf03,manmana04,garg06,paris07,bouadim07,craco08,garg14,bag15}. We argued in the main text that this phase, sandwiched between the band-insulator (BI) at large $\Delta$, and the Mott insulator (MI) at large $U$, can be indicated by a vanishing average sign in the DQMC simulations. We now explore the influence of finite-size effects on those phase boundaries, for a specific value of the staggered potential $\Delta/t=0.5$.
\begin{figure*}[th!]
\includegraphics[width=1\columnwidth]{ionic_SM.pdf}
\caption{\textbf{Finite-size analysis: Ionic Hubbard model.} (\textbf{A, B, C}) The average sign $\langle{\cal S}\rangle$ for decreasing temperatures (growing $\beta = 1/T$) as a function of the Hubbard interaction $U/t$ in lattices with $L=8, 12$ and 16 with a staggered potential $\Delta/t=0.5$. (\textbf{D, E, F}) The corresponding magnitude of the derivative of $\langle{\cal S}\rangle$ with respect to $U$ at each lattice size. (\textbf{G}) Peak position of the derivative of the average sign at different inverse temperatures $\beta$; the inset presents an extrapolation to the thermodynamic limit of the estimated transition from the BI to the CM phase, as inferred from the first peak of $|d\langle {\cal S}\rangle/dU|$. The gray dashed lines in all panels display the corresponding transition value obtained in Ref.~\cite{bouadim07}. All data are averaged over 24 independent runs, with $\Delta\tau = 0.1$.}
\label{fig:ionic_SM}
\end{figure*}
Figure~\ref{fig:ionic_SM} indicates that the first transition which occurs upon increasing $U/t$ from zero, that from BI to CM, is well marked by a fast drop of $\langle{\cal S}\rangle$ at lower temperatures. A quantitative estimate of the transition point can be extracted by differentiating the average sign with respect to the interaction strength, $d \langle {\cal S} \rangle / d U$ (lower panels in Fig.~\ref{fig:ionic_SM}). As the temperature is lowered, the peak position quickly approaches the best known values of the transition for this set of parameters~\cite{bouadim07}, see Fig.~\ref{fig:ionic_SM}(\textbf{G}); the system size dependence is reasonably small. The second transition, from the CM to the antiferromagnetic (AF) insulator, on the other hand, displays characteristics reminiscent of a crossover for the system sizes and temperatures investigated. The estimate given by the peak of the average sign also displays a stronger dependence on $L$ and, overall, is larger than the position of the metal-AF transition at this $\Delta$ obtained in Ref.~\cite{bouadim07}. It is worth mentioning that these literature values were extracted at smaller lattice sizes than the largest $L$ used here. A finite size extrapolation of the `traditional' correlations used to obtain $U_c$ for the metal-AF transition, similar to the one we perform here, would be useful to undertake.
\paragraph{QMC vs. ED.---} A valuable test of the conjecture that $\langle {\cal S}\rangle$ tracks a quantum phase transition (or regime) can be made by comparing QMC results with exact ones, obtained at smaller lattice sizes. For this purpose, we contrast in Fig.~\ref{fig:ionic_QMC_ED_comparison_SM} the average sign in a lattice with $L=4$ with numerical results obtained from exact diagonalization (ED). At this small lattice size, the quantum critical region shrinks, and at the lowest temperature studied ($T/t = 1/24$) $\langle{\cal S}\rangle$ displays a sharp dip at around $U/t\simeq 2$. Turning to the ED results, we probe the transition via the analysis of the low lying spectrum $E_\alpha$ ($E_0$ is the ground-state energy), the many-body excitation gaps $\Delta_{\rm ex}^{(\alpha)} = E_\alpha - E_0$, the spin and charge staggered structure factors, $S_{\rm sdw} = (1/N)\sum_{i,j} \eta_{ij} \langle (n_{i\uparrow} - n_{i\downarrow})(n_{j\uparrow} - n_{j\downarrow})\rangle$ and $S_{\rm cdw} = (1/N)\sum_{i,j} \eta_{ij} \langle (n_{i\uparrow} + n_{i\downarrow})(n_{j\uparrow} + n_{j\downarrow})\rangle$ [$\eta_{ij} = +1\ (-1)$ when $i,j$ belong to the same (different) sublattices], and the fidelity metric $g_{\tiny U} = \frac{2}{N}\frac{1-|\langle \Psi_0(U)|\Psi_0(U+dU)\rangle|}{dU^2}$. This last quantity displays a peak whenever a quantum phase transition is crossed between the parameters $U$ and $U+dU$. These results describe a single transition in the range of parameters investigated, displaying a first-order character, given the level crossings shown in Fig.~\ref{fig:ionic_QMC_ED_comparison_SM}(\textbf{B}) and the vanishing excitation gap $\Delta_{\rm ex}^{(1)} = E_1 - E_0$ at $U/t \simeq 1.99$ [Fig.~\ref{fig:ionic_QMC_ED_comparison_SM}(\textbf{C})].
In turn, the fidelity metric displays a sharp peak at this interaction value [Fig.~\ref{fig:ionic_QMC_ED_comparison_SM}(\textbf{E})], and the structure factors computed in the ground state swap their characteristics, from charge- to spin-ordered [Fig.~\ref{fig:ionic_QMC_ED_comparison_SM}(\textbf{D})].
It remains an open question whether exact methods such as ED can capture the intermediate correlated metal phase.
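The behavior of the fidelity metric $g_U$ can be illustrated on a toy two-level Hamiltonian with an avoided crossing (a hypothetical stand-in, not the ionic Hubbard computation, which requires the full many-body ground state): the same formula, applied to the exact ground state, peaks where the ground state changes fastest.

```python
import numpy as np

def ground_state(U):
    # Hypothetical two-level Hamiltonian with an avoided crossing at U = 0.
    H = np.array([[U, 1.0],
                  [1.0, -U]])
    _, vecs = np.linalg.eigh(H)
    return vecs[:, 0]                   # lowest-energy eigenvector

def fidelity_metric(U, dU=1e-3, N=2):
    # g_U = (2/N) * (1 - |<psi0(U)|psi0(U+dU)>|) / dU^2
    overlap = abs(ground_state(U) @ ground_state(U + dU))
    return (2.0 / N) * (1.0 - overlap) / dU**2

vals = [fidelity_metric(U) for U in (-0.5, 0.0, 0.5)]
# vals is largest at U = 0, where the ground state rotates fastest.
```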
Stepping back from the technical details, the central message of Fig.~\ref{fig:ionic_QMC_ED_comparison_SM} is that it extends the evidence presented in the main text: the sign problem metric for the QCP in panel \textbf{A} lines up well with the `traditional observables' in panels \textbf{B--E}.
\begin{figure*}[th!]
\includegraphics[width=0.6\columnwidth]{ionic_QMC_ED_comparison_SM.pdf}
\caption{\textbf{QMC vs. ED on a $4\times4$ lattice.} (\textbf{A}) The average sign $\langle{\cal S}\rangle$ extracted from the DQMC calculations for increasing inverse temperatures $\beta = 1/T$ as a function of the Hubbard interaction $U/t$ in a lattice with $L=4$ and a staggered potential $\Delta/t=0.5$. (\textbf{B--E}) Data extracted with ED, including: (\textbf{B}) the low-lying eigenspectrum $E_\alpha$ in the zero-momentum sector $\mathbf{k} = (0,0)$; (\textbf{C}) the excitation gaps $\Delta_{\rm ex}^{(\alpha)}$; (\textbf{D}) the spin and charge structure factors; and (\textbf{E}) the fidelity metric under variations of the interaction magnitude $U$, using $dU=10^{-3}t$. In (\textbf{A}), data are averaged over 24 independent runs, with $\Delta\tau = 0.1$.}
\label{fig:ionic_QMC_ED_comparison_SM}
\end{figure*}
\vskip0.10in \noindent
\section{More details on Spinless Honeycomb Hubbard (main text, Sec.~III)}
\paragraph{The finite-temperature transition.---} As we have argued in the main text, the interacting spinless fermion Hamiltonian has a special property in AFQMC simulations: with an appropriate choice of the basis one uses to write the fermionic matrix, it has been proven that the sign problem can be eliminated~\cite{ZiXiang2015,li16,li19}. Nonetheless, using a standard single-particle basis, where the sign problem is manifest, we demonstrated in Fig.~\ref{fig:3} that $\langle{\cal S}\rangle$ can be used as a way to track the quantum phase transition. Concomitantly, we have shown that a local observable (the derivative of the nearest neighbor density correlations with respect to the interactions $V$) exhibits a steep downturn once the quantum (i.e.~zero temperature) phase transition is approached.
As a by-product of this analysis, we use our original approach, based on the standard BSS algorithm in the standard fermionic basis, to show that one can also obtain an estimate of the \textit{finite-temperature} transition (pertaining to the universality class of the 2D Ising model) with relatively high accuracy, provided the system is not too close to the quantum critical point, see Fig.~\ref{fig:spinless_hc_SM}. We compute both the derivative of the nearest-neighbor density correlations and the CDW structure factor, i.e., a summation of all density-density correlations with a $+1$ $(-1)$ factor for sites belonging to the same (different) sublattice, on the largest lattice size we have investigated, $L=18$. These finite-size results for $T_c$ are in good agreement with recent results obtained after system size scaling of data extracted with continuous-time QMC methods~\cite{Wang2014,Hesselmann2016}, where the sign problem is absent.
\begin{figure}[th!]
\includegraphics[width=0.7\columnwidth]{spinless_hc_SM.pdf}
\caption{\textbf{Finite-temperature transition for spinless fermions on the honeycomb lattice.} As a by-product of the analysis of the temperature dependence of the average sign, we notice that, other than in regimes very close to the QCP, $\langle {\cal S}\rangle$ is very close to 1. (\textbf{A}) The derivative of the nearest-neighbor density correlations in the $T/t$ vs. $V/t$ plane and (\textbf{B}) the CDW structure factor for a lattice with $L=18$. In both plots, the markers are the continuous-time QMC results extracted after finite-size scaling in Ref.~\cite{Hesselmann2016}. The imaginary-time discretization is fixed at $\Delta\tau = 0.1$, and all data are obtained as an average of 20 independent runs.}
\label{fig:spinless_hc_SM}
\end{figure}
\begin{figure}[th!]
\includegraphics[width=0.6\columnwidth]{spinless_dtau_sign.pdf}
\caption{\textbf{The U(1) model on the honeycomb lattice: Average sign dependence on the imaginary time-discretization.} The dependence of $\langle{\cal S}\rangle$ on the imaginary-time discretization $\Delta\tau$ with growing interactions $V/t$, in a lattice with $L = 9$. Data is extracted as an average of 24 independent runs, for a temperature $T/t=1/24$. The best known value for the Mott insulating transition~\cite{ZiXiang2015} ($V_c/t=1.355$) is signalled by the star marker.}
\label{fig:spinless_dtau_sign}
\end{figure}
An important outcome of these results is that the imprint a transition leaves on the average sign is restricted to quantum phase transitions, rather than thermal ones, as shown above.
\paragraph{Imaginary-time discretization.---}
Here we consider the effect of the imaginary-time discretization $\Delta\tau$ on the average sign at fixed inverse temperature, and show that `Trotter errors'~\cite{fye86} do not affect our conclusions. Figure~\ref{fig:spinless_dtau_sign} shows the average sign for a fixed lattice size $L=9$ and $\Delta\tau$ ranging from 0.025 to 0.2, at a fixed temperature $T/t=1/24$. The drop in $\langle{\cal S}\rangle$ when approaching $V_c$ is indicative of the QCP, but using a denser imaginary-time discretization does not produce substantial changes in the average sign; similar behavior was observed in the other models studied.
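The scaling of such Trotter errors can be illustrated with a pair of small non-commuting symmetric matrices standing in for $\hat H_t$ and $\hat H_U$ (a schematic sketch, not a DQMC calculation; \texttt{expm\_sym} is a helper defined here). At fixed $\beta$, the operator-norm deviation of the asymmetric split product from $e^{-\beta(A+B)}$ shrinks linearly with $\Delta\tau$; errors in traces and observables are smaller still, ${\cal O}(\Delta\tau^2)$~\cite{fye86}.

```python
import numpy as np

def expm_sym(M):
    """Matrix exponential of a real symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

# Toy non-commuting matrices standing in for H_t and H_U (illustration only).
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
B = np.diag([0.7, -0.3])
beta = 2.0

def trotter_error(dtau):
    """Norm deviation of the first-order split product at fixed beta."""
    L = int(round(beta / dtau))
    step = expm_sym(-dtau * A) @ expm_sym(-dtau * B)
    approx = np.linalg.matrix_power(step, L)
    return np.linalg.norm(approx - expm_sym(-beta * (A + B)))

ratio = trotter_error(0.1) / trotter_error(0.05)
# ratio ~ 2: halving dtau roughly halves the norm error of the propagator.
```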
\vskip0.10in \noindent
\section{More details for the homogeneous Hubbard model on the square lattice (main text, Sec.~IV)}
\paragraph{Finite-size effects.---} An important aspect that deserves further attention concerns finite-size effects on the data presented in the main text. The average sign, for example, when not protected by some underlying symmetry of the Hamiltonian, is known to decrease for larger system sizes~\cite{white89}. Figure~\ref{fig:finite_size_effect_sq_spinful}
compares the original quantities displayed in the main text [Fig.~\ref{fig:4}] at different lattice sizes.
The ``growth'' of the $\langle {\cal S}\rangle\to 0$ dome is relatively small considering that the lattices differ by an order of magnitude in the number of sites. The $d$-wave pair susceptibility
also qualitatively preserves its overall features, with a tendency toward local pair formation at higher electronic densities, encapsulating the $\langle {\cal S}\rangle\to 0$ dome. Similarly, the static spin susceptibility and its accompanying `pseudogap' line are largely unaltered when the linear lattice size varies from $L=8$ to 16.
These results for different $L$ indicate that our analysis of the link between the SP and quantum criticality is not a finite-size effect.
\begin{figure*}[th!]
\includegraphics[width=1.\columnwidth]{finite_size_effect_sq_spinful.pdf}
\caption{\textbf{Finite size effects for the SU(2) Hubbard model on the square lattice.} An analysis of the main quantities and their size dependence for the model with potential connection with the physics of the cuprates. As in the main text, we choose $U/t = 6$ and $t^\prime/t = -0.2$.
The top, middle, and bottom rows of plots refer to $L = 8, 12$ and 16, respectively. In turn, the columns from left to right depict the average sign, the correlated $d$-wave pair susceptibility, and the long-wavelength static spin susceptibility. Imaginary-time discretization is fixed at $\Delta\tau = 0.0625$, and all data are extracted as an average of 24 independent runs.}
\label{fig:finite_size_effect_sq_spinful}
\end{figure*}
\paragraph{Pair-symmetry channels.---} Another important check on the suitability of the chosen parameters for describing the physics of pairs with the same experimentally inferred symmetry as in the cuprates is to directly compare the correlated susceptibility maps for different symmetry channels. This was done in early QMC studies~\cite{white89a}, and here, as a side aspect of the analysis of the average sign, we provide a systematic investigation. Figure~\ref{fig:s_sx_d_wave_pairing_sq_spinful} displays the dependence of the correlated pair susceptibility $\langle P_{\alpha} - P_{\alpha}^{\rm nv}\rangle$ on the temperature and electronic filling, with $\alpha = d, s^*$ or $s$-wave. Clearly, the symmetric local pairing channel ($s$-wave), which directly
confronts the on-site $U$, is not favored over the whole range of $T$ and $\rho$ investigated. The correlated pair susceptibility is always negative for finite values of $\langle {\cal S}\rangle$, indicating the vertex is {\it repulsive}. In contrast, both the extended $s$-wave and $d$-wave pairings exhibit positive correlated pairing susceptibilities in the vicinity of the $\langle {\cal S}\rangle\to0$ dome, the effect being more pronounced in the latter. This emphasizes the dominance of $d$-wave pairing in the Hubbard model, in direct analogy to a wide class of materials displaying high-temperature superconductivity.
\begin{figure*}[th!]
\includegraphics[width=1.\columnwidth]{s_sx_d_wave_pairing_sq_spinful.png}
\caption{\textbf{Comparison of the pairing channels for the SU(2) Hubbard model on the square lattice.} Taking as a starting point for the physics of the cuprates the parameters $U/t = 6$ and $t^\prime/t = -0.2$~\cite{Hirayama2018,Hirayama2019}, we display the comparison of the correlated pair susceptibilities considering different symmetry channels in the left ($d$-wave), center (extended s-wave, $s^*$) and right ($s$-wave) columns, both in $\mu/t$ vs. $T/t$ (upper row) and $\rho$ vs. $T/t$ (lower row) parametric space. Imaginary-time discretization is fixed at $\Delta\tau = 0.0625$, and all data are extracted as an average of 24 independent runs in a lattice with $L=16$.}
\label{fig:s_sx_d_wave_pairing_sq_spinful}
\end{figure*}
\paragraph{Spectral weight at the anti-nodal point.---}
In describing the physics of cuprates, a common focus is the anisotropy of the single-particle gap as extracted from ARPES techniques~\cite{Damascelli2003}, which contrasts with the standard isotropic behavior seen in conventional BCS-type superconductors. At the root of the discussion is the shape of the Fermi surface when doping the parent spin-ordered Mott insulator. In particular, to classify the onset of the pseudogap phase at low enough temperatures $T$, i.e., the regime where single-particle, low-energy excitations are suppressed, one tracks the peak of the anti-nodal spectral weight at the Fermi energy, $A_{(\pi,0)}(\omega=0) \equiv -\frac{1}{\pi}{\rm Im}G_{(\pi,0)}(\omega=0)$ as $T$ is varied.
In QMC simulations, this quantity can in principle be extracted by means of an analytical continuation of the data, in which the imaginary-time dependence of the Green's function $G$ is converted to real frequency. To avoid the well known difficulty of such a calculation~\cite{jarrell96}, a proxy valid at low enough temperatures, $A_{(\pi,0)}^{\rm proxy} = \beta G_{(\pi,0)}(\tau=\beta/2)$ is often used~\cite{trivedi95,Wu2018}. Figure~\ref{fig:A_0_sq_spinful_proxy} shows the results for the anti-nodal spectral function. The pseudogap line extracted from the proxy is qualitatively close to the one obtained from the peak of the static long-wavelength spin susceptibility (Figs.~\ref{fig:4} and \ref{fig:finite_size_effect_sq_spinful}) and, like it, terminates on the $\langle {\cal S}\rangle\to 0$ dome.
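To make the proxy concrete, consider a single fermionic level at energy $E$ relative to the Fermi energy, for which $G(\tau) = e^{-E\tau}/(1+e^{-\beta E})$. Then $\beta G(\beta/2) = \beta/[2\cosh(\beta E/2)]$: it grows as $\beta/2$ for a gapless level but is exponentially suppressed when the level is gapped, which is why it serves as a low-temperature stand-in for $A(\omega=0)$. A minimal sketch of this toy case (illustrative only, not our simulation code):

```python
import numpy as np

def proxy_spectral_weight(E, beta):
    """beta * G(tau = beta/2) for a single level at energy E (measured
    from the Fermi energy), with G(tau) = exp(-E*tau)/(1 + exp(-beta*E)).
    Closed form: beta / (2*cosh(beta*E/2))."""
    return beta / (2.0 * np.cosh(beta * E / 2.0))

beta = 24.0
print(proxy_spectral_weight(0.0, beta))  # gapless level: beta/2 = 12.0
print(proxy_spectral_weight(1.0, beta))  # gapped level: exponentially small
```

In the actual simulations the same quantity is built from the measured imaginary-time Green's function at the anti-nodal momentum, $\beta G_{(\pi,0)}(\tau=\beta/2)$.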
\begin{figure}[th!]
\includegraphics[width=0.7\columnwidth]{A_0_sq_spinful_proxy.png}
\caption{\textbf{Comparison of the single-particle spectral weight at the anti-nodal point.} Temperature extrapolation of the extracted $A_{(\pi,0)}$ at the Fermi energy vs the chemical potential $\mu/t$ (\textbf{A}) and the electronic density $\rho$ (\textbf{B}), on a lattice with $L=16$; other parameters are $U/t = 6$ and $t^\prime/t = -0.2$. The maximum values at each temperature are denoted by the white markers. Imaginary-time discretization is fixed at $\Delta\tau = 0.0625$, and all data are extracted as an average of 24 independent runs.}
\label{fig:A_0_sq_spinful_proxy}
\end{figure}
\paragraph{Spectral function and the Lifshitz transition.---} The extensive dataset and associated analysis we have undertaken in this investigation of the sign problem enable us to check other important physical aspects of the Hubbard model on the square lattice, and their relation to cuprate phenomenology. We include them here in order to further link their behavior to that of the sign. One such aspect is the change of the topology of the Fermi surface with increasing hole doping (decreasing electron density) away from half-filling: at some critical $\langle n\rangle$, the Fermi surface changes its shape from hole-like to a closed, electron-like one. This transition, referred to as the Lifshitz transition, has been investigated in the context of strongly interacting electrons, and was inferred to
occur concomitantly with the presence of a van Hove singularity at the Fermi level~\cite{Chen2012}.
\begin{figure}[th!]
\includegraphics[width=0.8\columnwidth]{spectral_function.png}
\caption{\textbf{Lifshitz transition.} The evolution of the spectral function in one quadrant of the Brillouin zone with increasing densities, after an interpolation of the results for a $24\times24$ lattice at temperature $T/t=1/3.25$, with $U/t = 6$ and $t^\prime/t = -0.2$. The different panels depict results with various chemical potentials as marked, and the average density is 0.25 in (\textbf{A}), 0.65 in (\textbf{B}), 0.95 in (\textbf{C}), and 1.00 in (\textbf{D}). Imaginary-time discretization is fixed at $\Delta\tau = 0.0625$, and all data are extracted as an average of 20 independent runs.}
\label{fig:spectral_function}
\end{figure}
Due to the presence of the SP, however, we can only investigate the Lifshitz transition at finite temperatures, where the Fermi `surface' is thermally broadened. Nonetheless, Fig.~\ref{fig:spectral_function} displays the spectral function (obtained via the `proxy' scheme explained previously) at the Fermi energy on lattices with linear size $L=24$, at a temperature
right above the $\langle{\cal S}\rangle\to0$ dome, probing dopings above and below the pseudogap line in Figs.~\ref{fig:4} and \ref{fig:finite_size_effect_sq_spinful}. As one increases the density, the change of topology precisely confirms the Lifshitz scenario, with the formation of hole-pocket regions along the anti-nodal direction, in direct analogy to the phenomenology of high-$T_c$ superconductors~\cite{Damascelli2003}.
\paragraph{Comparison to the near-neighbor hopping only Hubbard model.---}
While it is remarkable how well the simplest Hubbard Hamiltonian captures the physics of the cuprates~\cite{scalapino94}, refinements of the model are known to provide more accurate comparisons to experiments~\cite{Piazza2012,Hirayama2018,Hirayama2019}. One such refinement, the inclusion of next-nearest-neighbor hopping $t^\prime$, was employed in the analysis in the main text. It is significant in the context of the sign problem because it breaks particle-hole symmetry and, for example, induces a SP even at half-filling. In this section we address the extent to which including $t^\prime$ affects our conclusions. To this end, we contrast the results of Fig.~\ref{fig:4} with those arising from the Hubbard model with $t'=0$ (Fig.~\ref{fig:t2_0_sq_spinful}). The key qualitative aspects are similar, including the presence of a $\langle {\cal S} \rangle\to0$ dome, the tendency toward $d$-wave pair formation (due to the enhanced pair susceptibility with this symmetry around the dome), and a peak of the spin susceptibility ending at the dome. The differences are: (i) when approaching half-filling, the average sign displays a sudden jump towards 1 (as one would expect for this bipartite case); (ii) the extension of the dome, within the temperatures investigated ($T/t \geq 1/6$), is more constrained in density compared to the $t^\prime/t=-0.2$ case; and, more importantly, (iii) the pseudogap region, signified by the temperatures below the $\chi(\textbf{q}=0)$ peak, is significantly reduced. Our data thus provide another argument in support of an added next-nearest-neighbor hopping to explain the phase diagram of high-$T_c$ materials, which display a robust pseudogap region. However, the main point for the purposes of this manuscript is that, whether $t^\prime$ is included or not, the behavior of the sign is correlated with the physics of pairing and magnetism.
\begin{figure*}[th!]
\includegraphics[width=1.\columnwidth]{t2_0_sq_spinful.png}
\caption{\textbf{The `phase diagram' of the bipartite Hubbard model on the square lattice.} The same as Fig.~\ref{fig:4} in the main text, but instead removing the non-bipartite contribution $t^\prime$, i.e., here the `vanilla' Hubbard model results are presented. Other parameters are the same, as $L=16$, $\Delta\tau=0.0625$ and $U/t=6$.}
\label{fig:t2_0_sq_spinful}
\end{figure*}
\end{document}
\section{Bilayer AF-Singlet Transition}
The transition from long-range AF order to independent singlet formation
is one of the earliest and most well-studied QPTs. The original underlying
physics considered localized $f$-orbitals which
hybridize with delocalized $d$-orbitals.
When the hybridization $t_{\rm fd}$ is weak,
the magnetic $f$-orbitals are indirectly coupled via
their polarization of the conduction electron cloud,
the Ruderman-Kittel-Kasuya-Yosida (RKKY)
interaction~\cite{ruderman54,kasuya56,yosida57},
and order antiferromagnetically. For large $t_{\rm fd}$
however, the conduction and localized electrons instead tend to
form spin-0 singlets, the Kondo effect, which quenches the magnetic
order~\cite{kondo70}.
This competition is most well-studied in the context of the
periodic Anderson model (PAM).
These phenomena are believed to underlie the behavior of
a number of strongly correlated materials, including, for
example, the volume collapse transition in lanthanides and
actinides~\cite{allen82,mcmahan98}.
QMC studies have located the QCP of the PAM~\cite{vekic95}.
Here we study the AF to SL transition in the closely-related bilayer Hubbard
Hamiltonian consisting of two square lattices with intralayer
hybridization $t$ which are coupled by interlayer $t'$.
Thus $t'$ plays the role of $t_{\rm fd}$ in the discussion above.
At strong coupling $U$ this model becomes the bilayer ($J,J'$)
Heisenberg model, for which the location of the QCP is
known very precisely~\cite{hida90,millis93,sandvik94,wang06}.
Figure \ref{fig:signbilayer} shows
$\langle {\cal S} \rangle$
as the interlayer hopping is increased from zero.
A signal is seen at two values of $t'$. At small $t'$,
the interlayer exchange $J' \sim t'^{\,2}/U$ is very small,
e.g.~for $t' = 0.5$ and $U=4$ we have $J' \sim 1/16$.
Thus, unless $\beta \gtrsim 1/J' \sim 16$, the ground-state AF order is
not apparent. Hence, at any fixed $\beta$, as a function of $t'$
there is a `transition' to AF order~\cite{vekic95,chang14,mendes17}.
This is the origin of the signal in
$\langle {\cal S} \rangle$
at smaller $t' \sim 0.6$.
More significantly, there is a second signal in
$\langle {\cal S} \rangle$
at larger $t' \sim 1.6$.
This is close to the position of the QCP associated
with the AF to singlet transition~\cite{scalettar94}.
Thus, as with the evolution from SM to AFMI for Dirac fermions,
the average sign appears to provide a barometer
of the onset of Kondo physics.
Figure~\ref{fig:spinlayer} shows the more standard correlation functions
for the analysis of the AF-singlet transition: the near-neighbor
intra- and inter-plane correlation functions.
As the interplane hybridization increases, the interlayer correlations
grow. In the limit of large $t'$, and
as $\beta \rightarrow \infty$ and $U \rightarrow \infty$
they would take the perfect singlet value $C^{\rm nn}_{\rm inter}=-3/4$.
Two spins on adjacent sites in the same plane become less and less
coupled.
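The value $-3/4$ follows directly from the spin algebra: writing ${\bf S}_1\cdot{\bf S}_2 = \tfrac{1}{2}\left[\left({\bf S}_1+{\bf S}_2\right)^2 - {\bf S}_1^2 - {\bf S}_2^2\right]$, a total-spin singlet ($S_{\rm tot}=0$) of two spin-$\tfrac{1}{2}$ operators gives
\begin{align}
\langle {\bf S}_1\cdot{\bf S}_2\rangle
= \frac{1}{2}\left[S_{\rm tot}(S_{\rm tot}+1) - \frac{3}{4} - \frac{3}{4}\right]
= -\frac{3}{4}.
\end{align}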
\begin{figure}[t]
\includegraphics[scale=0.33]{spinlayerA.pdf}
\caption{
Inter and intra-layer spin correlations as functions of interlayer
hybridization $t'$.
}
\label{fig:spinlayer}
\end{figure}
1,941,325,221,089 | arxiv | \section{The background torque density in a three-dimensional disc with
a fixed temperature profile}\label{tbg_deriv}
We give a brief derivation of the angular momentum exchange between
linear perturbations and the background disc. We consider a
three-dimensional disc in which the equilibrium pressure and density
are related by
\begin{align}\label{iso_cond}
p = c_s^2(R,z)F(\rho),
\end{align}
where $F(\rho)$ is an arbitrary function of $\rho$ with dimensions of
mass per unit volume, and $c_s^2$ is a prescribed function of
position with dimensions of velocity squared. The equilibrium disc
satisfies
\begin{align}
R\Omega^2(R,z) &= \frac{1}{\rho}\frac{\partial p}{\partial R} +
\frac{\partial\Phi_\mathrm{tot}}{\partial R}, \\
0 &= \frac{1}{\rho}\frac{\partial p}{\partial z} + \frac{\partial\Phi_\mathrm{tot}}{\partial
z}.
\end{align}
Note that, in general, the equilibrium rotation $\Omega$ depends on
$R$ and $z$.
We begin with the linearised equation of motion in terms of the
Lagrangian displacement $\bm{\xi}$ as given by \cite{lin93b} but with an
additional potential perturbation,
\begin{align}\label{lagragian_pert}
&\frac{D^2\bm{\xi}}{Dt^2} +
2\Omega\hat{\bm{z}}\times\frac{D\bm{\xi}}{Dt} \notag \\ &= -
\frac{\nabla \delta p }{\rho} + \frac{\delta\rho}{\rho^2}\nabla p
-\nabla\delta\Phi_d - R
\hat{\bm{R}}\left(\bm{\xi}\cdot\nabla\Omega^2\right) \notag \\
& = -\nabla\left(\frac{\delta p}{\rho} + \delta \Phi_d\right) -
\frac{\delta p}{\rho}\frac{\nabla\rho}{\rho} +
\frac{\delta\rho}{\rho}\frac{\nabla p}{\rho} - R
\hat{\bm{R}}\left(\bm{\xi}\cdot\nabla\Omega^2\right),
\end{align}
where $D/Dt \equiv \partial_t + \mathrm{i} m \Omega$ for perturbations with
azimuthal dependence in the form $\exp\left(\mathrm{i} m \phi\right)$, and
the $\delta$ quantities denote Eulerian perturbations.
As explained in \cite{lin11b}, a conservation law for the angular
momentum of the perturbation may be obtained by taking the dot product
between Eq. \ref{lagragian_pert} and $(-m/2)\rho\bm{\xi}^*$, then
taking the imaginary part afterwards. The left hand side becomes the
rate of change of angular momentum density. The first term on the right hand side (RHS)
becomes
\begin{align}\label{angflux1}
&-\frac{m}{2}\imag\left[-\rho\bm{\xi}^*\cdot\nabla\left(\frac{\delta p}{\rho} + \delta
\Phi_d\right)\right] \notag\\
&= \frac{m}{2}\imag\left\{\nabla\cdot\left[\rho\bm{\xi}^*\left(\frac{\delta p}{\rho} + \delta
\Phi_d \right) + \frac{1}{4\pi
G}\delta\Phi_d\nabla\delta\Phi_d^*\right]\right\} \notag\\
&+ \frac{m}{2}\imag\left(\delta\rho^*\frac{\delta p}{\rho}\right),
\end{align}
where $\delta\rho = - \nabla\cdot\left(\rho\bm{\xi}\right)$ and
$\nabla^2\delta\Phi_d = 4\pi G \delta \rho$ have been used. The
term in square brackets on the RHS of Eq. \ref{angflux1} is (minus) the
angular momentum flux. The second term on the RHS of Eq. \ref{angflux1},
together with the remaining terms on the RHS of
Eq. \ref{lagragian_pert} constitutes the background torque. That is,
\begin{align}\label{tbg_3d}
T_\mathrm{BG} = \frac{m}{2}\imag\left[
\frac{\delta p}{\rho} \Delta\rho^* -
\frac{\delta\rho}{\rho}\bm{\xi}^*\cdot\nabla p
+ \rho \xi_R^*\xi_z\frac{\partial\left(R\Omega^2\right)}{\partial z}\right],
\end{align}
where $\Delta\rho = \delta\rho + \bm{\xi}\cdot\nabla\rho$ is the
Lagrangian density perturbation.
So far we have not invoked an energy equation. For
adiabatic perturbations $T_\mathrm{BG}$ is zero, and we recover the
same statement of angular momentum conservation as in \cite{lin93b} but
modified by self-gravity in the fluxes.
However, if we impose the equilibrium relation
Eq. \ref{iso_cond} to hold in the perturbed state, then
\begin{align}
\delta p = c_s^2(R,z) F^\prime \delta\rho,
\end{align}
where $F^\prime = dF/d\rho$. Inserting this into Eq. \ref{tbg_3d}, we
obtain
\begin{align}\label{tbg_3d_2}
T_\mathrm{BG} = -\frac{m}{2}\frac{p}{\rho
c_s^2}\imag\left[\delta\rho\bm{\xi}^*\cdot\nabla c_s^2 +
\xi_R^*\xi_z \left(\frac{\partial\rho}{\partial z}\frac{\partial c_s^2}{\partial R} -
\frac{\partial\rho}{\partial R}\frac{\partial c_s^2}{\partial z}\right)\right],
\end{align}
where the equilibrium equations were used.
At this point setting $\xi_z=0$ gives $T_\mathrm{BG}$ for
perturbations with no vertical motion,
\begin{align}
T_\mathrm{BG,2D} = -\frac{m}{2}\frac{p}{\rho
c_s^2}\imag\left(\delta\rho \xi_R^* \partial_R c_s^2 \right),
\end{align}
and is equivalent to the 2D
expression, Eq. \ref{baroclinic_torque}, with $\delta\rho $ replaced
by $\delta \Sigma$ and $p=c_s^2\rho$.
In fact, we can simplify Eq. \ref{tbg_3d_2} in the general case by
using $\delta\rho = - \rho\nabla\cdot\bm{\xi} -
\bm{\xi}\cdot\nabla\rho$, giving
\begin{align}\label{tbg_general}
T_\mathrm{BG} = \frac{m}{2}\frac{p}{\rho
c_s^2}\imag\left[\rho\left(\nabla\cdot\bm{\xi}\right)\bm{\xi}^*\cdot\nabla
c_s^2\right].
\end{align}
For a barotropic fluid $p=p(\rho)$, the function $c_s^2$ can be
taken as constant (Eq. \ref{iso_cond}) for which $T_\mathrm{BG}$
vanishes. When there is a forced temperature gradient,
Eq. \ref{tbg_general} indicates a torque is applied to compressible
perturbations ($\nabla\cdot\bm{\xi}\neq0$) if there is motion along
the temperature gradient ($\bm{\xi}\cdot\nabla c_s^2 \neq 0$).
\section{Relation between horizontal Lagrangian displacements for
local, low frequency disturbances}\label{horizontal_displacements}
Here, we aim to relate the horizontal Lagrangian displacements $\xi_R$
and $\xi_\phi$ in the local approximation. Using the local solution to
the Poisson equation
\begin{align}
\delta \Phi_m = -\frac{2\pi G}{|k|} \delta\Sigma_m
\end{align}
\citep{shu91}, the linearised azimuthal equation of motion becomes
\begin{align}
- \mathrm{i}\bar{\sigma} \delta v_{\phi m} + \frac{\kappa^2}{2\Omega}\delta v_{Rm} = -\frac{\mathrm{i}
m}{R\Sigma}\left(c_s^2 - \frac{2\pi G
\Sigma}{|k|}\right)\delta\Sigma_m.
\end{align}
Next, we replace the surface density perturbation
$\delta \Sigma_m = -\mathrm{i} k \Sigma \xi_R$, and use the expressions
\begin{align}
&\delta v_{Rm} = -\mathrm{i}\bar{\sigma}\xi_R,\\
&\delta v_{\phi m} = -\mathrm{i}\bar{\sigma}\xi_\phi - \frac{\mathrm{i} R
\partial_R\Omega}{\bar{\sigma}} \delta v_{Rm}
\end{align}
\citep{papaloizou85} to obtain
\begin{align}
-\bar{\sigma}^2\xi_\phi - 2\mathrm{i}\bar{\sigma}\Omega \xi_R =
\frac{m}{kR}\left(\kappa^2 - \bar{\sigma}^2\right)\xi_R,
\end{align}
where the dispersion relation Eq. \ref{dispersion} was used.
In the local approximation, $|kR|\gg m$ by assumption.
Hence the RHS of this equation can be neglected. Then
\begin{align}
\xi_\phi \simeq -\frac{2\mathrm{i}\Omega}{\bar{\sigma}}\xi_R.
\end{align}
For low-frequency modes we have $\bar{\sigma} \simeq -m\Omega$, so
$\xi_\phi\simeq 2\mathrm{i}\xi_R/m$, as used in the main text.
\section{The confined spiral as an external potential}\label{disc-planet}
Let us treat the one-armed spiral confined in $R\in[R_1,R_2]$ as an
external potential of the form $\Phi_\mathrm{ext}(R)\cos{\left(\phi -
\Omega_p t\right)}$. We take
\begin{align}
\Phi_\mathrm{ext}
=-\frac{GM_\mathrm{ring}}{\overline{R}}b^{1}_{1/2}(\beta),
\end{align}
where $M_\mathrm{ring}$ is the disc mass contained within
$R\in[R_1,R_2]$, $\overline{R} = (R_1+R_2)/2$ is the approximate radial
location of the spiral, $b_{n}^m(\beta)$ is the Laplace coefficient
and $\beta = R/\overline{R}$. This form of
$\Phi_\mathrm{ext}$ is the $m=1$ component of the gravitational
potential of an external satellite on a circular orbit
\citep{goldreich79}.
We expect the external potential to exert a torque on the disc at the
Lindblad and co-rotation resonances. At the outer Lindblad resonance
(OLR), this torque is
\begin{align}
\Gamma_L =
\frac{\pi^2\Sigma_L}{3\Omega_L\Omega_p}
\left[\left.R_L\frac{d\Phi_\mathrm{ext}}{dR}\right|_L + 2\left(1 -
\frac{\Omega_p}{\Omega_L}\right)\Phi_\mathrm{ext}\right]^2,
\end{align}
where a Keplerian disc has been assumed and subscript $L$ denotes
evaluation at the OLR, $R=R_L$. (The inner Lindblad resonance does not
exist for the pattern speeds observed in our simulations.)
If we associate the external potential with an angular momentum magnitude of
$J_\mathrm{ext} = M_\mathrm{ring}\overline{R}^2\Omega_p$, we can
calculate a rate of change of angular momentum $\gamma_L=\Gamma_L/J_\mathrm{ext}$. Then
\begin{align}
\frac{\gamma_L}{\Omega_p} = &\frac{\pi
h}{3Q_L}\left(\frac{M_\mathrm{ring}}{M_*}\right)\left(\frac{R_L}{\overline{R}}\right)\left(\frac{R_c}{\overline{R}}\right)
^3\left(\frac{R_L}{R_c}\right)^{-3/2}\notag\\
&\times
\left\{\frac{R_L}{\overline{R}}\left.\frac{db_{1/2}^1}{d\beta}\right|_L
+ 2\left[1 -
\left(\frac{R_c}{R_L}\right)^{-3/2}\right]b_{1/2}^1(\beta_L)\right\}^2.
\end{align}
Inserting $h=0.05$, $Q_L=10$, $M_\mathrm{ring} = 0.05M_*$,
$R_L=7.2R_0$, $R_c=4.4R_0$ and $\overline{R}=1.5R_0$ from our fiducial
FARGO simulation, we get
\begin{align}
\gamma_L \sim 5\times10^{-4}\Omega_p.
\end{align}
For the co-rotation torque, we use the result
\begin{align}
\Gamma_c = \left.
\pi^2\Phi_\mathrm{ext}^2\left(\frac{d\Omega}{dR}\right)^{-1}\frac{d}{dR}\left(\frac{2\Sigma}{\Omega}\right)\right|_{c}
\end{align}
for a Keplerian disc, where subscript $c$ denotes evaluation at
co-rotation radius $R=R_c$. For a power-law surface density profile
$\Sigma\propto R^{-s}$ we have
\begin{align}
\frac{\gamma_c}{\Omega_p} = \frac{4}{3}\frac{\pi h}{Q_c}
\left(\frac{M_\mathrm{ring}}{M_*}\right)\left(\frac{R_c}{\overline{R}}\right)^4\left(s-\frac{3}{2}\right)
\left[b_{1/2}^1(\beta_c)\right]^2
\end{align}
Using the above parameter values with $s=2$ and $Q_c=10$, we obtain a
rate
\begin{align}
\gamma_c\sim 6\times 10^{-4}\Omega_p.
\end{align}
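These order-of-magnitude estimates are straightforward to reproduce numerically. The sketch below (a stand-alone check, not part of our simulation code) evaluates the Laplace coefficient from its integral definition, $b_s^m(\beta) = (2/\pi)\int_0^\pi \cos(m\theta)\,(1-2\beta\cos\theta+\beta^2)^{-s}\,d\theta$, by midpoint quadrature, and assembles $\gamma_c$ from the parameter values quoted above:

```python
import numpy as np

def laplace_coefficient(m, s, beta, n=200000):
    """b_s^m(beta) = (2/pi) * int_0^pi cos(m*t) (1 - 2*beta*cos(t) + beta^2)^(-s) dt,
    via the midpoint rule (adequate for beta != 1, where the integrand is smooth)."""
    dt = np.pi / n
    t = (np.arange(n) + 0.5) * dt
    f = np.cos(m * t) / (1.0 - 2.0 * beta * np.cos(t) + beta**2) ** s
    return (2.0 / np.pi) * f.sum() * dt

# Parameter values quoted in the text (fiducial run); M_ring in units of M_*
h, Q_c, M_ring, s_index = 0.05, 10.0, 0.05, 2.0
R_c, R_bar = 4.4, 1.5
beta_c = R_c / R_bar

b1 = laplace_coefficient(1, 0.5, beta_c)
gamma_c = (4.0 / 3.0) * (np.pi * h / Q_c) * M_ring \
          * (R_c / R_bar) ** 4 * (s_index - 1.5) * b1**2
print(gamma_c)  # ~ 6e-4 (in units of Omega_p), as quoted above
```

The same routine, together with a finite-difference derivative of $b_{1/2}^1$, can be used to evaluate the Lindblad-torque rate $\gamma_L$.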
The torque exerted on the disc at the OLR by an external potential is
positive, while that at co-rotation depends
on the gradient of potential vorticity there \citep{goldreich79}. For
our disc models with surface density $\Sigma\propto R^{-2}$ in the
outer disc, this co-rotation torque is positive.
This means that the one-armed spiral loses
angular momentum by launching density waves with positive angular
momentum at the OLR, and by applying a
positive co-rotation torque on the disc. In principle, this
interaction is destabilising because the one-armed spiral has
negative angular momentum \citep{lin11b}.
However, the above estimates for $\gamma_L$ and $\gamma_c$ are much smaller than the rate due to the
imposed temperature gradient as measured in the simulations
($\gamma\sim 0.1\Omega_p$). We conclude that for our disc models
the Lindblad and co-rotation resonances have negligible effects
on the growth of the one-armed spiral in the inner disc (though they could be important
in other parameter regimes).
\section{Discussion}\label{discussions}
We discuss below several issues related to self-gravitating
discs in the context of our numerical simulations. However, it is
important to keep in mind that
the growth of the one-armed spiral in our simulations is
\emph{not} a gravitational instability in the sense that
destabilisation is through the background torque associated with a
forced temperature gradient, and not by self-gravitational torques\footnote{In fact, additional simulations with $Q_\mathrm{out}=4$ (giving $Q\gtrsim2.5$ throughout the disc) still
develop the one-armed spiral, but with a smaller growth rate.}.
\subsection{Motion of the central star}
In our models we have purposefully neglected the indirect potential
associated with a non-inertial reference frame to avoid
complications arising from the motion of the central star.
Although it has been established that such motion can destabilise an
$m=1$ disturbance in the disc \citep{adams89,shu90,michael10},
the disc masses in our models ($M_d\; \raisebox{-.8ex}{$\buildrel{\textstyle<}\over\sim$}\;
0.1 M_*$) are not expected to be sufficiently massive for this effect to be
significant. Indeed, simulations including the
indirect potential, carried out in the early stages of this project,
produced similar results.
\subsection{Role of Lindblad and co-rotation torques}
One effect of self-gravity is to allow the one-armed spiral,
confined to $R\sim R_0$ in our models, to act as an external potential for the
exterior disc in $R>R_0$. This is
analogous to disc-satellite interaction
\citep{goldreich79}, where the embedded satellite exerts a torque on
the disc at Lindblad and co-rotation resonances.
In Appendix \ref{disc-planet} we estimate the magnitude of this effect
using basic results from disc-planet theory \citep[see, e.g.][and
references therein]{papaloizou07}. There, we find that the angular
momentum exchange between the one-armed spiral and the exterior disc
is insignificant compared to the background torque.
We confirmed this with additional FARGO simulations that exclude the
co-rotation and outer Lindblad resonances (OLR) by reducing radial domain
size to $R_\mathrm{max} = 3 R_0$, which still developed
the one-armed spiral.
\subsection{Applicability to protoplanetary discs}
\subsubsection{Thermodynamic requirements}
A locally isothermal equation of state represents the ideal limit
of infinitely short cooling and heating timescales, so the
disc temperature instantly returns to its initial value when
perturbed. The background torque is generally non-zero if the
resulting temperature profile has a non-zero radial gradient.
A short cooling timescale $t_c$ can occur in the outer
parts of protoplanetary discs
\citep{rafikov05,clarke09,rice09,cossins10b,tsukamoto15}.
However, if a disc with $Q\simeq 1$ is cooled (towards zero
temperature) on a timescale $t_c\; \raisebox{-.8ex}{$\buildrel{\textstyle<}\over\sim$}\;\Omega_k^{-1}$, it will
fragment following gravitational instability
\citep{gammie01,rice05,paardekooper12}.
Fragmentation can be avoided if the disc is heated to maintain
$Q>Q_c$, the threshold for fragmentation \citep[$Q_c\simeq
1.4$ for isothermal discs,][]{mayer04}. This may be
possible in the outer parts of protoplanetary discs due to
stellar irradiation \citep{rafikov09,kratter11,zhu12}. Sufficiently strong
external irradiation is expected to suppress the linear gravitational
instabilities altogether \citep{rice11}.
The background torque may thus exist in the outer
parts of protoplanetary discs that are irradiated, such
that the disc temperature is set externally with a non-zero radial
gradient \citep[e.g.][]{stamatellos08}. Of course, if external irradiation sets a strictly
isothermal outer disc \citep[e.g.][]{boley09}, then the background
torque vanishes.
\subsubsection{Radial disc structure}
In our simulations the
$m=1$ spiral is confined between two $Q$-barriers, where real solutions to the local
dispersion relation are possible (Eq. \ref{wavenumber}). The
existence of such a cavity results from the adopted initial surface
density bump (Eq. \ref{sig_bump}).
Thus, in our disc models the main role of disc structure
and self-gravity is to allow a local $m=1$ mode to be set up, which is then
destabilised by the background torque.
In order to confine an $m=1$ mode between two $Q$-barriers, we
should have $Q^2(1-\nu^2)=1$ at two radii. Assuming
Keplerian rotation and a slow pattern speed $\Omega_p\ll\Omega$,
this amounts to
\begin{align}\label{qb_cond}
\left(\frac{R_{Qb}}{R_c}\right)^{-3/2} = 2Q^2(R_{Qb}).
\end{align}
Then two $Q$-barriers may exist when the $Q^2$ profile
rises more rapidly (decays more slowly) than $R^{-3/2}$
for decreasing (increasing) $R$. Note that
Eq. \ref{qb_cond} does not necessarily require strong self-gravity
if $R_c$ is large.
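For completeness, we sketch how Eq. \ref{qb_cond} follows, assuming the standard definition $\nu = m(\Omega_p - \Omega)/\kappa$, with $m=1$ and $\kappa = \Omega$ for Keplerian rotation. Then
\begin{align}
1 - \nu^2 = \frac{\Omega_p}{\Omega}\left(2 - \frac{\Omega_p}{\Omega}\right)
\simeq \frac{2\Omega_p}{\Omega}
\end{align}
for $\Omega_p \ll \Omega$, so $Q^2(1-\nu^2) = 1$ becomes $\Omega(R_{Qb})/\Omega_p = 2Q^2(R_{Qb})$. Since $\Omega \propto R^{-3/2}$ and $\Omega_p = \Omega(R_c)$, the left-hand side is $(R_{Qb}/R_c)^{-3/2}$, which reproduces Eq. \ref{qb_cond}.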
A surface density bump can develop in
`dead zones' of protoplanetary
discs, where there is reduced mass accretion because the magneto-rotational
instability is ineffective for angular momentum transport
\citep{gammie96,turner08,landry13}. The dead zone becomes
self-gravitating with sufficient mass build-up
\citep{armitage01,martin12,martin12b,zhu09,zhu10,zhu10b,bae13}.
However, conditions in a dead zone may not be suitable for
sustaining a background torque, because the gas may not cool or heat fast enough
to maintain a fixed temperature profile. Recently,
\cite{bae14} presented numerical models of
dead zones including a range of heating and
cooling processes, which show that dead zones developed large-scale
(genuine) gravitational instabilities with multiple spiral
arms. Although this does not prove the absence of the background torque,
it is probably insignificant compared to gravitational torques.
Another possibility is a gap opened by an embedded planet. In that case $Q$
rises rapidly towards the gap centre since it is a region of low
surface density. This can satisfy Eq. \ref{qb_cond}. Then the inner
edge of our bump function mimics the outer gap edge. The outer gap
edge is then a potential site for the growth of a low-frequency
one-armed spiral through the background torque. However, the
locally isothermal requirement would limit this process to the
outer disc, or require that the temperature
profile about the gap edge be set by the planet luminosity.
Here, it is worth mentioning the transition disc
around HD 142527, the outer parts of which display an $m=1$
asymmetry \citep{fukagawa13} and spiral arms
\citep{christiaens14} just outside a disc gap. These authors estimate
$Q\simeq 1$--$2$ in the outer disc, implying self-gravity is
important, but the disc may remain gravitationally stable
\citep{christiaens14}. This is a
possible situation that our disc models represent.
\section{Governing equations}\label{model}
We consider an inviscid fluid disc of mass $M_d$
orbiting a central star of mass $M_*$. We will mainly examine 2D
(or razor-thin) discs, which permit higher numerical
resolution, but have also carried out some 3D disc
simulations. Hence, for generality we describe the system in
3D, using both cylindrical $(R,\phi,z)$ and spherical polar
coordinates $(r,\theta,\phi)$ centred on the star.
The governing fluid equations in 3D are
\begin{align}
&\frac{\partial\rho}{\partial t} + \nabla\cdot\left(\rho\bm{v}\right) =
0,\label{cont_eq}\\
&\frac{\partial\bm{v}}{\partial t} + \bm{v}\cdot\nabla\bm{v} = -\frac{1}{\rho}\nabla
p -\nabla \Phi_\mathrm{tot}\label{mom_eq},\\
& \nabla^2\Phi_d = 4 \pi G \rho \label{poisson},
\end{align}
where $\rho$ is the mass density, $\bm{v}$ is the
fluid velocity (the angular velocity being $\Omega \equiv v_\phi/R$),
$p$ is the pressure and the total potential
$\Phi_\mathrm{tot} = \Phi_* + \Phi_d$ consists of that from the
central star,
\begin{align}
\Phi_*(r) = -\frac{GM_*}{r},
\end{align}
where $G$ is the gravitational constant, and the disc potential
$\Phi_d$. We impose a locally isothermal equation of state
\begin{align}
p = c_s^2(R)\rho,
\end{align}
where the sound-speed $c_s$ is given by
\begin{align}\label{sound-speed}
c_s(R) = c_{s0}\left(\frac{R}{R_0}\right)^{-q/2}
\end{align}
with $c_{s0} = h R_0\Omega_k(R_0)$, where
$h$ is the disc aspect-ratio at the reference radius $R=R_0$,
$\Omega_k=\sqrt{GM_*/R^3}$ is the midplane Keplerian frequency, and
$-q$ is the imposed temperature gradient since, for an ideal gas, the
temperature is proportional to $c_s^2$. For convenience we will
refer to $c_s^2$ as the disc temperature. The vertical disc scale-height is
defined by $H=c_s/\Omega_k$. Thus, a strictly isothermal disc with
$q=0$ has $H\propto R^{3/2}$, and $q=1$ corresponds to a
disc with constant $H/R$.
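These scalings are straightforward to verify numerically. The following Python sketch (illustrative only; the function names and the unit choice $G=M_*=R_0=1$ are ours) evaluates $c_s$ and $H$ for the two limiting temperature profiles:

```python
import numpy as np

# Illustrative parameters in units G = M_* = R0 = 1
G = M_star = R0 = 1.0
h = 0.05  # aspect ratio at the reference radius R0

def omega_k(R):
    """Midplane Keplerian frequency, Omega_k = sqrt(G M_* / R^3)."""
    return np.sqrt(G * M_star / R**3)

def sound_speed(R, q):
    """Locally isothermal sound speed, c_s = c_s0 (R/R0)^(-q/2)."""
    cs0 = h * R0 * omega_k(R0)
    return cs0 * (R / R0)**(-q / 2.0)

def scale_height(R, q):
    """Vertical scale-height H = c_s / Omega_k."""
    return sound_speed(R, q) / omega_k(R)

R = np.array([1.0, 4.0])
H_iso = scale_height(R, q=0.0)    # strictly isothermal disc
print(H_iso[1] / H_iso[0])        # ~ 8 = 4^(3/2): H grows as R^(3/2)
print(scale_height(R, q=1.0) / R) # ~ [0.05, 0.05]: constant H/R
```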
The 2D disc equations are obtained by replacing $\rho$ with the surface
mass density $\Sigma$, $p$ becomes the vertically-integrated pressure,
and the 2D fluid velocity $\bm{v}$ is evaluated at the midplane, as are
the forces in the momentum equations. In the Poisson equation, $\rho$ is
replaced by $\Sigma\delta(z)$, where $\delta(z)$ is the
delta function.
\section{Linear density waves}\label{wkb}
We describe a key feature of locally isothermal discs that
enables angular momentum exchange between small disturbances and the
background disc through an imposed radial temperature gradient. This
conclusion results from the consideration of angular momentum
conservation within the framework of linear perturbation theory.
For simplicity, in this section we consider 2D discs.
In a linear analysis, one assumes a steady axisymmetric background state,
which is then perturbed such that
\begin{align}
\Sigma \to \Sigma(R) + \delta\Sigma_m(R)\exp{\left[\mathrm{i}\left(-\sigma t +
m\phi\right)\right]},
\end{align}
and similarly for other variables, where $\sigma=\omega+\mathrm{i}\gamma$ is
a complex frequency with $\omega$ being the real frequency,
$\gamma$ the growth rate, and $m$ is an integer. We take $m>0$ without
loss of generality.
The linearised mass and momentum equations are
\begin{align}
&-\mathrm{i}\bar{\sigma} \delta\Sigma_m = -\frac{1}{R}\frac{d}{dR}\left(R\Sigma\delta
v_{Rm}\right) - \frac{\mathrm{i} m \Sigma}{R}\delta v_{\phi m}, \\
&-\mathrm{i}\bar{\sigma}\delta v_{Rm} - 2\Omega \delta v_{\phi m} = -
c_s^2(R)\frac{d}{dR}\left(\frac{\delta\Sigma_m}{\Sigma}\right) - \frac{d}{dR}\delta\Phi_m,\label{radmom}\\
& -\mathrm{i}\bar{\sigma}\delta v_{\phi m} + \frac{\kappa^2}{2\Omega}\delta v_{Rm} =
-\frac{\mathrm{i} m }{R}\left(c_s^2\frac{\delta\Sigma_m}{\Sigma} + \delta\Phi_m\right),
\end{align}
where $\bar{\sigma} = \sigma - m\Omega$ and $\kappa^2 =
R^{-3}\partial_R(R^4\Omega^2)$ is the square of the epicyclic frequency. A
locally isothermal equation of state has been assumed in
Eq. \ref{radmom}. The linearised Poisson equation is
\begin{align}
\nabla^2\delta\Phi_m = 4\pi G \delta\Sigma_m \delta(z).
\end{align}
These linearised equations can be combined into a single
integro-differential eigenvalue problem. We defer a full
numerical exploration of the linear problem to a future study. Here, we
discuss some
general properties of the linear perturbations.
\subsection{Global angular momentum conservation for linear
perturbations} \label{global_cons}
It can be shown that linear perturbations with $\phi$-dependence in the form
$\exp{(\mathrm{i} m\phi)}$ satisfy angular momentum conservation
in the form
\begin{align}\label{angcons}
\frac{\partial j_\mathrm{lin}}{\partial t} + \nabla\cdot\bm{F} = T_\mathrm{BG},
\end{align}
\citep[e.g.][]{narayan87,ryu92,lin93b} where
\begin{align}\label{lin_ang_mom_cons}
j_\mathrm{lin} \equiv
-\frac{m\Sigma}{2}\imag\left(\bm{\xi}^*\cdot\frac{\partial\bm{\xi}}{\partial
t} + \Omega \hat{\bm{k}}\cdot\bm{\xi}\times\bm{\xi}^* + \mathrm{i}
m \Omega |\bm{\xi}|^2\right)
\end{align}
is the angular momentum density of the linear disturbance (which may
be positive or negative), $\bm{\xi}$ is the Lagrangian
displacement and $^*$ denotes complex conjugation, and $\bm{F}$ is the
vertically-integrated angular momentum flux consisting of a Reynolds
stress and a gravitational stress \citep{lin11b}. Explicit expressions
for $\bm{\xi}$ can be found in, e.g. \cite{papaloizou85}.
In Eq. \ref{angcons}, the background torque density $T_\mathrm{BG}$ is
\begin{align}\label{baroclinic_torque}
T_\mathrm{BG} \equiv
-\frac{m}{2}\imag\left(\delta\Sigma_m\xi_R^*\frac{dc_s^2}{dR}\right),
\end{align}
and arises because we have adopted a locally isothermal equation of
state in the perturbed disc. We outline the
derivation of $T_\mathrm{BG}$ in Appendix \ref{tbg_deriv}.
In a barotropic fluid, such as a strictly isothermal disc,
$T_\mathrm{BG}$ vanishes and the total angular momentum associated with
the perturbation is conserved, provided that there is no net angular
momentum flux. However, as noted in \cite{lin11b}, if there is an imposed
temperature gradient, as in the disc models we consider,
then $T_\mathrm{BG}\neq 0$ in general, which corresponds to a local torque
exerted by the background disc on the perturbation.
The important consequence of the background torque is the possibility
of instability if $T_\mathrm{BG}j_\mathrm{lin}>0$. That is, if $j_\mathrm{lin}$ is positive
(negative) and $T_\mathrm{BG}$ is also positive (negative),
then the local angular momentum density of the linear disturbance
will further increase (decrease) with time. This implies the amplitude of the disturbance
may grow by exchanging angular momentum with the background
disc.
We demonstrate instability for low-frequency modes
($|\omega|\ll m\Omega$) by explicitly showing that their
angular momentum density
$j_\mathrm{lin}<0$ and the background torque $T_\mathrm{BG}<0$ for
appropriate perturbations and radial temperature gradients.
\subsection{Angular momentum density of non-axisymmetric low-frequency modes}
From Eq. \ref{lin_ang_mom_cons} and assuming a time-dependence of the
form $\exp{(-\mathrm{i} \sigma t)}$ with $\real{\sigma} = \omega$,
the angular momentum density associated with linear waves is
\begin{align}
j_\mathrm{lin} = \frac{m\Sigma}{2}\left[\left(\omega -
m\Omega\right)|\bm{\xi}|^2 + \mathrm{i}\Omega\left(\xi_R\xi_\phi^* -
\xi_R^*\xi_\phi\right)\right].\label{ang_mom_def}
\end{align}
For a low-frequency mode, $|\omega|\ll m\Omega$. Then neglecting the
term $\omega|\bm{\xi}|^2$, we find
\begin{align}
j_\mathrm{lin} &\simeq \frac{m\Sigma\Omega}{2}\left[-m|\bm{\xi}|^2 + \mathrm{i}\left(\xi_R\xi_\phi^* -
\xi_R^*\xi_\phi\right)\right]\notag\\
& = -\frac{m\Sigma\Omega}{2}\left[ (m-1)|\bm{\xi}|^2 + |\xi_R + \mathrm{i}\xi_\phi|^2\right].
\end{align}
Thus, non-axisymmetric ($m\geq1$) low-frequency modes generally have
negative angular momentum. If the mode loses (positive) angular momentum
to the background, then we can expect instability. We show
below how this is possible through a forced temperature
gradient via the background torque. It is simplest to calculate
$T_\mathrm{BG}$ in the local approximation, which we review first.
\subsection{Local results}\label{local_approx}
In the local approximation, perturbations are assumed to vary rapidly
relative to any background gradients. The
dispersion relation for tightly-wound density
waves of the form $\exp{[\mathrm{i}(-\sigma t + m \phi + kR)]}$ in a razor-thin
disc is
\begin{align}\label{dispersion}
(\sigma - m\Omega)^2 = \kappa^2 + k^2c_s^2 - 2\pi G \Sigma |k|,
\end{align}
where $k$ is a real wavenumber such that $|kR|\gg1$ \citep{shu91}.
Note that in the strictly local approximation, where all
global effects are neglected, only axisymmetric perturbations
($m=0$) can be unstable.
Given the real frequency $\omega$ or pattern speed $\Omega_p\equiv
\omega/m$ of a non-axisymmetric neutral mode,
Eq. \ref{dispersion} can be solved
for $|k|$,
\begin{align}\label{wavenumber}
|k| = k_c\left[1 \pm \sqrt{1 -
Q^2(1-\nu^2)}\right],
\end{align}
where
\begin{align}
k_c \equiv \frac{\pi G \Sigma}{c_s^2}
\end{align}
is a characteristic wavenumber,
\begin{align}
Q \equiv \frac{c_s\kappa}{\pi G \Sigma}
\end{align}
is the usual Toomre parameter, and
\begin{align}
\nu \equiv \frac{(\omega - m\Omega)}{\kappa}
\end{align}
is a dimensionless frequency. In
Eq. \ref{wavenumber}, the upper (lower) sign corresponds to short
(long) waves, and $k>0$ ($k<0$) corresponds to trailing (leading)
waves.
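The two branches of Eq. \ref{wavenumber} are simple to evaluate. Here is a minimal Python sketch (our own variable names; $k_c$ is supplied directly rather than computed from a disc model):

```python
import numpy as np

def wavenumbers(Q, nu, kc):
    """Long- and short-wave branches of the local dispersion relation,
    |k| = kc [1 -/+ sqrt(1 - Q^2 (1 - nu^2))].
    Returns (k_long, k_short); NaN where Q^2 (1 - nu^2) > 1, i.e.
    where waves are evanescent (a Q-barrier)."""
    disc = 1.0 - Q**2 * (1.0 - nu**2)
    root = np.sqrt(disc) if disc >= 0.0 else np.nan
    return kc * (1.0 - root), kc * (1.0 + root)

# At co-rotation (nu = 0) with Q = 1.5, Q^2 (1 - nu^2) > 1:
print(wavenumbers(1.5, 0.0, kc=1.0))  # -> (nan, nan): evanescent

# At a Lindblad resonance (nu^2 = 1) the long branch has k = 0:
k_long, k_short = wavenumbers(1.5, 1.0, kc=1.0)
print(k_long, k_short)  # -> 0.0 2.0
```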
At the \emph{co-rotation radius} $R_c$ the pattern speed matches
the fluid rotation,
\begin{align}
\Omega(R_c) = \Omega_p.
\end{align}
\emph{Lindblad resonances} $R_L$ occur where
\begin{align}
\nu^2(R_L) = 1.
\end{align}
Finally, \emph{Q-barriers} occur at radii $R_{Qb}$ where
\begin{align}
Q^2(R_{Qb})\left[1-\nu^2(R_{Qb})\right] = 1.
\end{align}
According to Eq. \ref{wavenumber}, purely wave-like solutions with
real $k$ are only possible where $Q^2(1-\nu^2)\leq1$.
A detailed discussion of the properties of local density waves
is given in \cite{shu91}. An important result, which holds for
waves of all frequencies, is that
waves interior to co-rotation ($R<R_c$) have negative angular momentum, while
waves outside co-rotation ($R>R_c$) have positive angular
momentum.
\subsection{Unstable interaction between low-frequency modes
and the background disc due to an imposed temperature gradient}
Here we show that the torque density acting on a
local mode due to the background temperature gradient can be negative, which
reinforces low-frequency modes because they have negative angular momentum.
The Eulerian surface density perturbation is given by
\begin{align}\label{den_pert}
\delta\Sigma_m = -\nabla\cdot\left(\Sigma\bm{\xi}\right)
= -\frac{1}{R}\frac{d}{dR}\left(R\Sigma \xi_R\right) - \frac{\mathrm{i} m}{R}\Sigma\xi_\phi.
\end{align}
We invoke local theory by setting $d/dR \to \mathrm{i} k$ where $k$ is
real, and assume $|kR|\gg m$ so that the second term on the right hand
side of Eq. \ref{den_pert} can be neglected. Then
\begin{align}
\delta\Sigma _m \simeq -\mathrm{i} k \Sigma \xi_R.
\end{align}
Inserting this into the expression for the background torque,
Eq. \ref{baroclinic_torque}, we find
\begin{align}
T_\mathrm{BG} = \frac{m}{2}\frac{dc_s^2}{dR}k\Sigma |\xi_R|^2. \label{baroclinic_torque1}
\end{align}
This torque density is negative for trailing waves ($k>0$) in discs with
a fixed temperature profile decreasing outwards ($dc_s^2/dR<0$). Note
that this conclusion does not rely on the low-frequency approximation.
However, if the linear disturbance under consideration \emph{is} a
low-frequency mode, then it has negative angular
momentum. If it is trailing and $dc_s^2/dR<0$, as is typical
in astrophysical discs, then $T_\mathrm{BG}<0$ and
the background disc applies a negative torque on the disturbance,
which further decreases its angular momentum. This suggests the
mode
amplitude will grow.
Using $j_\mathrm{lin}$ and $T_\mathrm{BG}$, we can estimate the
growth rate $\gamma$ of linear perturbations due to the background
torque as
\begin{align}
2\gamma \sim \frac{T_\mathrm{BG}}{j_\mathrm{lin}},
\end{align}
where the factor of two accounts for the fact that the angular momentum
density is quadratic in the linear perturbations. Inserting the above
expressions for $j_\mathrm{lin}$ and $T_\mathrm{BG}$ gives
\begin{align}\label{theoretical_rate0}
2\gamma \sim
-\frac{dc_s^2}{dR}
\frac{k}{\Omega}\frac{|\xi_R|^2}{\left[(m-1)|\bm{\xi}|^2 + |\xi_R +
\mathrm{i}\xi_\phi|^2 \right]}
= -\frac{dc_s^2}{dR}
\frac{k}{m\Omega},
\end{align}
where the second equality uses $\xi_\phi \simeq
2\mathrm{i} \xi_R/m$ for low-frequency modes in the local approximation, as shown in Appendix
\ref{horizontal_displacements}.
Then for the temperature profiles $c_s^2 = c_{s0}^2 (R/R_0)^{-q}$ as
adopted in our disc models,
\begin{align}
2\gamma \sim q\frac{c_s^2}{R}\frac{k}{m\Omega} \sim q h
\left(\frac{kH}{m}\right)\Omega,\label{theoretical_rate}
\end{align}
where we used $c_s\sim H\Omega\sim hR\Omega$.
Eq. \ref{theoretical_rate} suggests that perturbations with small
radial length-scales ($kH\;\raisebox{-.8ex}{$\buildrel{\textstyle>}\over\sim$}\; m$) are most favourable for
destabilisation, so taking the local approximation is
appropriate.
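To get a feel for the magnitude of Eq. \ref{theoretical_rate}, here is a minimal numerical evaluation (the parameter values are illustrative, not taken from our fiducial model):

```python
def growth_rate_estimate(q, h, kH, m, Omega):
    """Order-of-magnitude growth rate from 2 gamma ~ q h (kH/m) Omega
    (Eq. theoretical_rate), for trailing waves with dT/dR < 0."""
    return 0.5 * q * h * (kH / m) * Omega

# q = 1, h = 0.05, kH ~ 1, m = 1: gamma ~ 0.025 Omega, i.e. growth
# over a few tens of orbits -- slow compared to dynamical GI.
gamma = growth_rate_estimate(q=1.0, h=0.05, kH=1.0, m=1, Omega=1.0)
print(gamma)  # -> 0.025
```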
Note that the derivations of Eq. \ref{baroclinic_torque1} and
Eq. \ref{theoretical_rate} do not
require the disc to be self-gravitating. Thus, destabilisation by the
background torque is not directly associated with disc
self-gravity. However, in order to evaluate
Eq. \ref{baroclinic_torque1} or Eq. \ref{theoretical_rate} in terms
of disc parameters (as done in \S\ref{fargo_m1}), we need to insert a
value of $k$, which may depend on disc
self-gravity (e.g. from Eq. \ref{wavenumber}).
\section{Numerical simulations}\label{methods}
We demonstrate the destabilising effect of a fixed temperature
gradient using numerical simulations. The above discussion is
generic for low-frequency non-axisymmetric modes, but for
simulations we will consider specific examples.
There are two parts to the destabilising mechanism:
the disc should support low-frequency modes, which are then
destabilised by an imposed temperature gradient. The latter is
straightforward to implement by adopting a locally isothermal
equation of state as described in \S\ref{model}. For the former, we
consider discs with a
radially-structured Toomre $Q$ profile. We use local theory to show
that such discs can trap
low-frequency one-armed ($m=1$) modes.
This is convenient because Eq. \ref{theoretical_rate} indicates that
modes with small $m$ are
more favourable for destabilisation.
\subsection{Disc model and initial conditions}
For the initial disc profile we adopt a modified power-law disc with
surface density given by
\begin{align}
\Sigma(R) = \Sigma_\mathrm{ref} \left(\frac{R}{R_0}\right)^{-s}\times B(R;
R_{1}, R_{2}, \epsilon, \delta R),
\end{align}
where $s$ is the power-law index describing the smooth disc and
$\Sigma_\mathrm{ref}$ is a
surface density scale chosen by specifying $Q_\mathrm{out}$,
the Keplerian Toomre parameter at $R=R_{2}$,
\begin{align}
Q_\mathrm{out} = \left.\frac{c_s\Omega_k}{\pi G
\Sigma}\right|_{R=R_{2}}.
\end{align}
The bump function
$B(R)$ represents a surface density boost between
$R\in[R_{1},R_{2}]$ by a factor $\epsilon^{-1}>1$,
and $\delta R$ is the transition width between the bump and the
smooth disc. We choose
\begin{align}\label{sig_bump}
&B(R) = f_1(R)\times f_2(R),\\
&f_1(R) = \frac{1}{2}\left(1 - \epsilon\right)\left[1 +
\tanh\left(\frac{R-R_{1}}{\Delta_1}\right)\right] + \epsilon,\\
&f_2(R) = \frac{1}{2}\left(1 - \epsilon\right)\left[1 -
\tanh\left(\frac{R-R_{2}}{\Delta_2}\right)\right] + \epsilon,
\end{align}
where $\Delta_{1,2} = \delta R \times H(R_{1,2})$.
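For reference, the bump profile of Eq. \ref{sig_bump} transcribes directly into code. The sketch below (our own variable names; the scale-height function is a stand-in for the $q=1$ disc) confirms the intended limits: $B\approx\epsilon$ in the smooth disc and $B$ of order unity within the bump:

```python
import numpy as np

def bump(R, R1, R2, eps, dR, H):
    """Surface density bump B = f1 * f2 (Eq. sig_bump): a boost by a
    factor 1/eps between R1 and R2, with transition widths
    dR * H(R1) and dR * H(R2). H is a callable local scale-height."""
    d1, d2 = dR * H(R1), dR * H(R2)
    f1 = 0.5 * (1.0 - eps) * (1.0 + np.tanh((R - R1) / d1)) + eps
    f2 = 0.5 * (1.0 - eps) * (1.0 - np.tanh((R - R2) / d2)) + eps
    return f1 * f2

H = lambda R: 0.05 * R          # H = hR for a q = 1 disc
R = np.array([0.4, 1.5, 5.0])   # inside, within, and outside the bump
B = bump(R, R1=1.0, R2=2.0, eps=0.1, dR=5.0, H=H)
print(B)  # ~ eps outside [R1, R2], order unity in between
```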
The 3D disc structure is obtained by assuming vertical hydrostatic
balance
\begin{align}
0 = \frac{1}{\rho}\frac{\partial p}{\partial z} + \frac{\partial\Phi_*}{\partial z} + \frac{\partial
\Phi_d}{\partial z},
\end{align}
which gives the mass density as
\begin{align}
\rho = \frac{\Sigma}{\sqrt{2\pi}H}Z(R,z),
\end{align}
where $Z(R,z)$ describes vertical stratification. In practice, we
numerically solve for $Z(R,z)$ by neglecting the radial self-gravity
force compared to vertical self-gravity, which reduces the equations
for vertical hydrostatic equilibrium to ordinary differential
equations. This procedure is described in \cite{lin12b}.
Our fiducial parameter values are: $s=2$, $R_{1}=R_0$, $R_{2}=2R_0$,
$\epsilon=0.1$, $\delta R=5$, $h=0.05$ and
$Q_\mathrm{out}=2$. An example of the initial surface density and the
Toomre $Q$ parameter is shown in
Fig. \ref{initial_surf}. Since $Q>1$, the disc is stable to local
axisymmetric perturbations \citep{toomre64}.
The transition between the self-gravitating and
non-self-gravitating portions of the disc occurs smoothly across
$\sim10H$. Initially there is no vertical or radial velocity
($v_R = v_r = v_\theta = 0$). The azimuthal velocity is initialised to
satisfy centrifugal balance with pressure and gravity,
\begin{align}
\frac{v_\phi^2}{r} = \frac{1}{\rho}\frac{\partial p}{\partial r} + \frac{\partial
\Phi_\mathrm{tot}}{\partial r}
\end{align}
and similarly in 2D.
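A minimal sketch of this initialisation in 2D (with our own function names, and a zero-pressure-gradient test profile for which the balance reduces exactly to Keplerian rotation):

```python
import numpy as np

def init_vphi(R, Sigma, cs2, dPhi_dR):
    """Azimuthal velocity from centrifugal balance (2D form):
    vphi^2 / R = (1/Sigma) d(cs^2 Sigma)/dR + dPhi_tot/dR."""
    dp_dR = np.gradient(cs2 * Sigma, R)  # vertically-integrated pressure gradient
    return np.sqrt(R * (dp_dR / Sigma + dPhi_dR))

# Test profile: constant Sigma and cs^2, point-mass potential (G M_* = 1).
R = np.linspace(0.5, 2.0, 50)
Sigma = np.ones_like(R)
cs2 = np.full_like(R, 0.0025)
vphi = init_vphi(R, Sigma, cs2, dPhi_dR=1.0 / R**2)
# the pressure gradient vanishes, so vphi = R^(-1/2) (Keplerian)
```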
\begin{figure}
\includegraphics[width=\linewidth,clip=true,trim=0cm 1.7cm 0cm
0cm]{figures/compare_profiles_dens000}
\includegraphics[width=\linewidth]{figures/compare_profiles_Q000}
\caption{Fiducial profiles of the surface density (top) and Toomre
parameter (bottom) used in this work.\label{initial_surf}}
\end{figure}
\subsection{Codes}
We use three independent grid-based codes to simulate the above
system. We adopt computational units such that
$G=M_*=R_0=1$. Time is measured in the Keplerian orbital period at
the reference radius, $P_0\equiv 2\pi/\Omega_k(R_0)$.
\subsubsection{FARGO}
Our primary code is FARGO with self-gravity \citep{baruteau08}. This
is a popular, simple finite-difference code for 2D discs. `FARGO' refers
to its azimuthal transport algorithm, which removes the mean azimuthal
velocity of the disc, thereby permitting larger time-steps than would otherwise be allowed
by the usual Courant condition based on the full azimuthal
velocity \citep{masset00a,masset00b}.
The 2D disc occupies
$R\in[R_\mathrm{min},R_\mathrm{max}],\,\phi\in[0,2\pi]$ and is
divided into $(N_R,N_\phi)$ zones, logarithmically spaced in radius and
uniformly spaced in azimuth. At radial boundaries we set the
hydrodynamic variables to their initial values.
The 2D Poisson equation is solved in integral form,
\begin{align}\label{2d_grav}
&\Phi_{d,z=0}(R,\phi) \notag \\
&=-\int_{R_\mathrm{min}}^{R_\mathrm{max}} \int_0^{2\pi}
\frac{G\Sigma(R^\prime,\phi^\prime)R^\prime dR^\prime d\phi^\prime}{\sqrt{R^2+R^{\prime 2} -
2RR^\prime\cos{(\phi - \phi^\prime)} + \epsilon_g^2}},
\end{align}
using Fast Fourier Transform (FFT), where $\epsilon_g$ is a softening
length to prevent a numerical singularity. The FFT approach requires
$\epsilon_g\propto R$ \citep{baruteau08}. In FARGO, $\epsilon_g$ is
set to a fraction of $hR$.
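The integral in Eq. \ref{2d_grav} can be checked by brute force. The following sketch evaluates it by direct summation over a polar grid (an $O(N^2)$ cross-check of our own devising, not the FFT algorithm FARGO actually uses):

```python
import numpy as np

def midplane_potential(R, phi, Sigma, eps_g, Rt, phit):
    """Softened 2D disc potential (Eq. 2d_grav) at the target point
    (Rt, phit), by direct summation over cell-centred polar grids.
    R and phi are 1D arrays; Sigma has shape (len(R), len(phi))."""
    G = 1.0
    dR = np.gradient(R)                  # radial cell widths
    dphi = phi[1] - phi[0]               # uniform azimuthal spacing
    Rg, Pg = np.meshgrid(R, phi, indexing="ij")
    dM = G * Sigma * Rg * dR[:, None] * dphi  # G * mass per cell
    dist = np.sqrt(Rt**2 + Rg**2
                   - 2.0 * Rt * Rg * np.cos(phit - Pg) + eps_g**2)
    return -np.sum(dM / dist)

# Sanity check: a narrow uniform ring of radius ~1 should give
# Phi ~ -G M_ring at the origin.
R = np.linspace(0.99, 1.01, 8)
phi = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
Sigma = np.ones((R.size, phi.size))
Phi = midplane_potential(R, phi, Sigma, eps_g=1e-4, Rt=1e-6, phit=0.0)
```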
\subsubsection{ZEUS-MP}
ZEUS-MP is a general-purpose finite difference
code \citep{hayes06}. We use the code in 3D spherical geometry, covering
$r\in[r_\mathrm{min},r_\mathrm{max}]$, $\theta\in[\theta_\mathrm{min},\pi/2]$,
$\phi\in[0,2\pi]$. The vertical domain is chosen to cover $n_H$
scale-heights at $R=R_0$, i.e. $\tan{(\pi/2 - \theta_\mathrm{min})}/h=n_H$.
The grid is logarithmically spaced in radius and uniformly spaced in the angular
coordinates. We assume symmetry across the midplane, and
apply reflective boundary conditions at radial boundaries and the
upper disc boundary.
ZEUS-MP solves the 3D Poisson equation using a conjugate gradient
method. To supply boundary conditions to the linear solver, we
expand the boundary potential in spherical harmonics $Y_{lm}$
as described in \cite{boss80}. The expansion is truncated at
$(l,m)=(l_\mathrm{max},m_\mathrm{max})$. This code was used in \cite{lin12b} for
self-gravitating disc-planet simulations.
\subsubsection{PLUTO}
PLUTO is a general-purpose Godunov code \citep{mignone07}. The grid
setup is the same as that adopted in ZEUS-MP above. We configure the
code similarly to that used in \cite{lin14}: piece-wise linear
reconstruction, a Roe solver and second order Runge-Kutta time
integration. We also enable the FARGO algorithm for azimuthal
transport.
We solve the 3D Poisson equation throughout the domain using spherical
harmonic expansion \citep{boss80}, as used for the boundary potential
in ZEUS-MP. This version of PLUTO was used in \cite{lin14b} for
self-gravitating disc-planet simulations, producing similar results to
those of ZEUS-MP and FARGO.
\subsection{Diagnostics}
\subsubsection{Evolution of non-axisymmetric modes}
The disc evolution is quantified using mode amplitudes and angular
momenta as follows. We list the 2D definitions with obvious 3D generalisations.
A hydrodynamic variable $f$ (e.g. $\Sigma$) is written as
\begin{align}
f(R,\phi,t) &= \sum_{m=-\infty}^{\infty}f_m(R,t)\exp{(\mathrm{i} m \phi)} \notag\\
&= f_0 + 2 \real\left[\sum_{m=1}^\infty f_m \exp{(\mathrm{i}
m\phi)}\right],
\end{align}
where the $f_m$ may be obtained from Fourier transform in $\phi$.
The normalised surface density with azimuthal wavenumber $m$ is
\begin{align}
\Delta\Sigma_m = \frac{2}{\Sigma_{00}} \real\left[\Sigma_m \exp{(\mathrm{i}
m\phi)}\right]
\end{align}
where $\Sigma_{00} = \Sigma_0(t=0)$. The time evolution of the
$m^\mathrm{th}$ mode can be characterised by
assuming $\Sigma_m\propto\exp{(-\mathrm{i} \sigma t)}$ as in linear
theory. The total non-axisymmetric surface density is
\begin{align}
\Delta\Sigma = \frac{\Sigma - \Sigma_0}{\Sigma_0}.
\end{align}
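In practice the $f_m$ are computed with a discrete Fourier transform. A minimal sketch (our own names; note the $1/N_\phi$ normalisation that makes the components match the expansion above):

```python
import numpy as np

def mode_amplitudes(f):
    """Azimuthal Fourier components f_m(R) of f(R, phi) on a uniform
    phi grid, normalised so that f = f_0 + 2 Re[sum_{m>=1} f_m e^{im phi}].
    f has shape (NR, Nphi); column m holds f_m."""
    return np.fft.fft(f, axis=1) / f.shape[1]

# A disc with a pure m = 1 disturbance of 10 per cent amplitude:
NR, Nphi = 4, 128
phi = np.linspace(0.0, 2.0 * np.pi, Nphi, endpoint=False)
Sigma = np.ones((NR, 1)) * (1.0 + 0.1 * np.cos(phi))
Sm = mode_amplitudes(Sigma)
print(abs(Sm[0, 0]))  # ~ 1.0:  axisymmetric component Sigma_0
print(abs(Sm[0, 1]))  # ~ 0.05: |Sigma_1|, so 2|Sigma_1|/Sigma_0 = 0.1
```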
\subsubsection{Angular momentum decomposition}
The total disc angular momentum is
\begin{align}
J &= \int_{R_\mathrm{min}}^{R_\mathrm{max}}\int_0^{2\pi} \Sigma Rv_\phi RdRd\phi \notag\\
&= 2\pi\int_{R_\mathrm{min}}^{R_\mathrm{max}} R\Sigma_0 v_{\phi0} R dR \notag\\
&\phantom{=}+
\sum_{m=1}^\infty2\pi\int_{R_\mathrm{min}}^{R_\mathrm{max}} 2R\real\left[\Sigma_m v_{\phi
m}^*\right] RdR
= \sum_{m=0}^\infty J_m.
\end{align}
We will refer to $J_m$ as the
$m^\mathrm{th}$ component of the total angular momentum, and use it to
monitor numerical angular momentum conservation in the simulations.
It is important to distinguish this empirical definition from the
angular momentum of linear perturbations given in \S\ref{wkb}, which
is defined through a conservation law.
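By Parseval's theorem the $J_m$ must sum to the direct double integral, which gives a quick consistency check on the decomposition. A sketch with a synthetic disc (our own names; uniform grids and a simple rectangle rule, so the two evaluations agree to round-off):

```python
import numpy as np

def J_components(R, phi, Sigma, vphi, mmax=4):
    """Angular momentum components J_m: the m = 0 term uses the
    axisymmetric parts; for m >= 1 the integrand is
    2 Re[Sigma_m vphi_m^*]. Uniform R, phi grids; rectangle rule in R."""
    N = phi.size
    dR = R[1] - R[0]
    Sm = np.fft.fft(Sigma, axis=1) / N
    Vm = np.fft.fft(vphi, axis=1) / N
    J = []
    for m in range(mmax + 1):
        integrand = (Sm[:, m] * np.conj(Vm[:, m])).real
        fac = 1.0 if m == 0 else 2.0
        J.append(2.0 * np.pi * fac * np.sum(R**2 * integrand) * dR)
    return np.array(J)

# Synthetic disc with m = 1 disturbances in Sigma and vphi:
R = np.linspace(0.5, 2.0, 64)
phi = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
Sigma = 1.0 + 0.1 * np.cos(phi)[None, :] * np.ones((R.size, 1))
vphi = R[:, None]**-0.5 * (1.0 + 0.05 * np.cos(phi)[None, :])
Jm = J_components(R, phi, Sigma, vphi)
# Direct evaluation of J = int Sigma R vphi R dR dphi:
dR, dphi = R[1] - R[0], phi[1] - phi[0]
J_direct = np.sum(R[:, None]**2 * Sigma * vphi) * dR * dphi
```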
\subsubsection{Three-dimensionality}
In 3D simulations we measure the importance of vertical motion with
$\Theta$, where
\begin{align}\label{theta}
\Theta^2 \equiv \frac{\avg{v_z^2}}{\avg{v_R^2}+\avg{v_\phi^2}},
\end{align}
and $\avg{\cdot}$ denotes the density-weighted average, e.g.,
\begin{align}
\avg{v_z^2} \equiv\frac{
\int_{R_1}^{R_2}\int_{\theta_\mathrm{min}}^{\pi/2} \int_{0}^{2\pi}
\rho v_z^2 dV}{
\int_{R_1}^{R_2}\int_{\theta_\mathrm{min}}^{\pi/2} \int_{0}^{2\pi}
\rho dV
},
\end{align}
and similarly for the horizontal velocities. Thus $\Theta^2$ is the
ratio of the average kinetic energy associated with vertical motion to
that in horizontal motion. The radial range of
integration is taken over $r\in[R_1,R_2]$ since this is where
we find the perturbations to be confined.
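A sketch of this diagnostic for flattened arrays of cell data (our own names; uniform cell volumes are assumed so that the volume element cancels):

```python
import numpy as np

def theta_parameter(rho, vR, vphi, vz):
    """Three-dimensionality parameter Theta of Eq. (theta): the square
    root of the density-weighted <vz^2> over <vR^2> + <vphi^2>.
    Uniform cell volumes assumed, so dV cancels in the averages."""
    def avg(v2):
        return np.sum(rho * v2) / np.sum(rho)
    return np.sqrt(avg(vz**2) / (avg(vR**2) + avg(vphi**2)))

# Example: vertical speeds ten times smaller than the orbital speed.
rho = np.ones(100)
vR = np.full(100, 0.1)
vphi = np.full(100, 1.0)
vz = np.full(100, 0.1)
Theta = theta_parameter(rho, vR, vphi, vz)
print(Theta)  # ~ 0.0995: vertical motion carries ~1% of the kinetic energy
```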
\section{Introduction}\label{intro}
An exciting development in the study of circumstellar
discs is the direct observation of large-scale, non-axisymmetric
structures within them. These
include lopsided dust distributions
\citep{marel13,fukagawa13,casassus13,isella13,perez14,follette14,plas14} and
spiral arms
\citep{hashimoto11,muto12,boccaletti14,grady13,christiaens14,avenhaus14}.
An attractive explanation for asymmetries in circumstellar discs is
disc-planet interaction. In particular, spiral structures
naturally arise from the gravitational interaction between a
planet and the gaseous protoplanetary disc it is embedded in
\citep[see, e.g.][for a recent review]{baruteau13b}. Thus, the
presence of spiral arms in circumstellar discs could be signposts of
planet formation \citep[but see][]{juhasz14}.
Spiral arms are also characteristic of global gravitational
instability (GI) in differentially rotating discs
\citep{goldreich65,laughlin96b,laughlin98,nelson98,lodato05,forgan11}. Large-scale
spiral arms can provide significant angular momentum transport
necessary for mass accretion \citep{lynden-bell72,
papaloizou91,balbus99,lodato04}, and spiral structures due to GI are
potentially observable with the Atacama Large
Millimeter/sub-millimeter Array \citep{cossins10,dipierror14}. GI can
be expected in the earliest stages of circumstellar disc formation
\citep{kratter10b,inutsuka10,tsukamoto13}, and may be possible in the
outer parts of the disc \citep{rafikov05,matzner05,kimura12}.
Single-arm spirals, or eccentric modes, corresponding to perturbations
with azimuthal wavenumber $m=1$, have received interest in
the context of protoplanetary discs because of their global nature
\citep{adams89,heemskerk92,laughlin96,tremaine01,papaloizou02,hopkins10}.
In the `SLING' mechanism proposed by \cite{shu90}, an $m=1$
gravitational instability arises from the motion of the
central star induced by the one-armed perturbation, and requires a
massive disc \citep[the former may have observable consequences,
][]{michael10}.
In this work we identify a new mechanism that leads to the growth of
one-armed spirals in astrophysical discs. We show
that when the disc temperature is prescribed (called locally isothermal
discs), the usual statement of the conservation of angular momentum
for linear perturbations acquires a source term proportional to the
temperature gradient. This permits angular momentum exchange between
linear perturbations and the background disc. This `background
torque' can destabilise low-frequency non-axisymmetric
trailing waves when the disc temperature decreases outwards.
We employ direct hydrodynamic simulations using three different
grid-based codes to demonstrate how this `background torque'
can lead to the growth of one-armed spirals in radially structured,
self-gravitating discs. This is despite the fact that our disc
models do not meet the requirements for the `SLING'
mechanism. Although our numerical simulations consider
self-gravitating discs, this `background torque' is generic for
locally isothermal discs and its existence does not require
disc self-gravity. Thus, the destabilisation effect we
describe should also be applicable to non-self-gravitating discs.
This paper is organised as follows. In \S\ref{model} we describe the system of interest
and list the governing equations. In \S\ref{wkb} we use linear theory to show
how a fixed temperature profile
can destabilise non-axisymmetric waves in discs. \S\ref{methods} describes
the numerical setup and hydrodynamic codes we use to
demonstrate the growth of one-armed spirals due to an imposed
temperature gradient. Our
simulation results are presented in \S\ref{results2d} for two-dimensional (2D)
discs and in \S\ref{results3d} for three-dimensional (3D) discs, and we further
discuss them in \S\ref{discussions}. We summarise in \S\ref{summary}
with some speculations for future work.
\input{model}
\input{results}
\input{results_3d}
\input{discussion}
\input{summary}
\section*{Acknowledgments}
I thank K. Kratter, Y. Wu and A. Youdin for valuable discussions, and the
anonymous referee for comments that significantly improved this paper. All
computations were performed on the El Gato cluster at the University
of Arizona. This material is based upon work supported by the National
Science Foundation under Grant No. 1228509.
\bibliographystyle{mn2e}
\section{Three-dimensional simulations}\label{results3d}
In this section we present 3D simulations carried
out using ZEUS-MP and PLUTO. The main purpose is to verify
the above results with different numerical codes, and validate
the 2D approximation.
The 3D disc has radial size
$[r_\mathrm{min},r_\mathrm{max}]=[0.4,10]R_0$ and vertical extent
$n_H=2$ scale-heights at $R=R_0$. The resolution is $N_r\times N_\theta\times
N_\phi=256\times32\times512$, or about $4$ cells per
$H$. Because of the reduced resolution
compared to 2D, we use a smooth perturbation by setting
$\delta = 10^{-3}$ and $M=1$ in Eq. \ref{randpert}. This corresponds
to a single $m=1$ disturbance in $R\in[R_1,R_2]$.
The 3D discs are initialised in approximate equilibrium only, so we
first evolve the disc without perturbations using
$(l_\mathrm{max},m_\mathrm{max})=(32,0)$ up to $t=10P_0$, during which
meridional velocities are damped out. We then restart the simulation
with the above perturbation and $(l_\mathrm{max},m_\mathrm{max})=(32,32)$, and damp
meridional velocities near the radial boundaries.
Fig. \ref{3d_ampmax} plots the evolution of the $m=1$ spiral amplitudes measured
in the ZEUS-MP and PLUTO runs. We also ran simulations
with a strictly isothermal equation of state ($q=0$), which display no
growth, in contrast to those with a temperature gradient. This confirms
that the temperature gradient effect also operates in 3D.
In the ZEUS-MP run, we observed high-$m$ disturbances developing near
the inner boundary at early times, which are likely responsible for the
growth seen at $t<50P_0$. This is a numerical artifact and effectively
seeds the simulation with a larger perturbation. Results
from ZEUS-MP are therefore off-set from PLUTO by $\sim50P_0$. However,
once the coherent $m=1$ spiral begins to grow ($t\;\raisebox{-.8ex}{$\buildrel{\textstyle>}\over\sim$}\; 100P_0$),
we measure similar growth rates in both codes:
\begin{align*}
&\gamma \simeq 0.0073\Omega_k(R_0) \quad\quad \mathrm{PLUTO},\\
&\gamma \simeq 0.0085\Omega_k(R_0) \quad\quad \mathrm{ZEUS\mbox{-}MP}.
\end{align*}
Both are somewhat smaller than the 2D simulations. This is
possibly because of the lower resolutions adopted in 3D
and/or because the effective Toomre parameter is larger in 3D
\citep{mamat10} which, from Eq. \ref{theoretical_rate1}, is
stabilising.
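The growth rates quoted above come from the linear-growth phase of the amplitude curves in Fig. \ref{3d_ampmax}. The measurement amounts to a log-linear fit; a minimal sketch with synthetic data (not the actual simulation output):

```python
import numpy as np

def fit_growth_rate(t, amplitude):
    """Least-squares slope of ln(amplitude) against t, i.e. the
    growth rate gamma assuming amplitude ~ exp(gamma t)."""
    gamma, _ = np.polyfit(t, np.log(amplitude), 1)
    return gamma

# Synthetic curve with the PLUTO growth rate, in units of Omega_k(R0):
t = np.linspace(0.0, 500.0, 200)
amp = 1e-4 * np.exp(0.0073 * t)
print(fit_growth_rate(t, amp))  # recovers 0.0073 up to round-off
```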
\begin{figure}
\includegraphics[width=\linewidth]{figures/m1_analysis_plot_ampmax3d}
\caption{Evolution of the maximum $m=1$ density component in $r\in[R_1,R_2]$
in the 3D simulations. Results from discs with a
temperature gradient ($q=1$) and a strictly isothermal disc
($q=0$) are shown.
\label{3d_ampmax}}
\end{figure}
Visualisations of the 3D simulations are shown in
Fig. \ref{3d_prelim} for the disc midplane and near the upper disc
boundary. The snapshots are chosen when the one-armed spirals in the two codes
have reached comparable amplitudes.
Both codes show similar one-armed patterns at either height, and the
midplane snapshot is similar to the 2D simulation
(Fig. \ref{fargo_2d}).
The largest spiral amplitude is found in the
self-gravitating region $R\in[R_1,R_2]$, independent of
height. However, notice the spiral pattern extends into
the non-self-gravitating outer disc ($R>R_2$) at $z\sim 2H$, i.e.
the disturbance becomes more global away from the midplane.
\begin{figure}
\begin{center}
\subfigure[ZEUS-MP]{
\includegraphics[scale=0.305,clip=true,trim=0cm 0cm 0cm 0cm]{figures/polarxy2_dens015_z0}
\includegraphics[scale=0.305,clip=true,trim=0.8cm 0cm 0cm
0cm]{figures/polarxy2_dens015_zmax}
}
\subfigure[PLUTO]{
\includegraphics[scale=0.305,clip=true,trim=0cm 0cm 0cm 0cm]{figures/pdiskxy_023_z0}
\includegraphics[scale=0.305,clip=true,trim=0.8cm 0cm 0cm
0cm]{figures/pdiskxy_023_zmax}
}
\end{center}
\caption{Three-dimensional simulations using the ZEUS-MP (top) and
PLUTO (bottom) codes. The $m=1$ density component $\Delta\rho_1$
at the midplane (left) and approximately
two scale-heights above the midplane (right) is shown. Here $\psi
\equiv \pi/2 - \theta$ is the angular height
from the midplane. \label{3d_prelim}}
\end{figure}
\subsection{Vertical structure}
Fig. \ref{3d_rz} shows the vertical structure of the one-armed
spiral in the PLUTO run. The spiral is vertically confined to $z
\; \raisebox{-.8ex}{$\buildrel{\textstyle<}\over\sim$}\; H$ at $R\sim R_0$ (the self-gravitating region). Thus, a 2D
disc model, representing dynamics near the disc midplane, is
sufficient to capture the instability. However, for $R>2R_0$ the spiral amplitude
increases away from the midplane. It remains
small in our disc model ($|\Delta\rho_1| \; \raisebox{-.8ex}{$\buildrel{\textstyle<}\over\sim$}\; 0.1$), but could become
significant with a larger vertical domain. This means that 3D
simulations are necessary to study the effect of the one-armed spiral
on the exterior disc.
\begin{figure}
\includegraphics[scale=0.47,clip=true,trim=0cm 0.79cm 0cm
0cm]{figures/pdisk_rz_023_sg}
\includegraphics[scale=0.47,clip=true,trim=0cm 0cm 0cm
0.64cm]{figures/pdisk_rz_023_nsg}
\caption{The $m=1$ density component in the meridional plane in the
PLUTO simulation. The slices are taken at the azimuth of
$\mathrm{max}[\Delta\rho_1(r,\pi/2,\phi)]$. Arrows represent the vector
$(v_r/R_0,-v_\theta/rh\sin^2{\theta})$. The top (bottom) panel corresponds
to the inner (outer) portions of the disc.
\label{3d_rz}}
\end{figure}
Although Fig. \ref{3d_rz} appears to display significant vertical motion,
we measure the three-dimensionality parameter $\Theta < 10^{-2}$
(Eq. \ref{theta}), so vertical motions are insignificant compared to
horizontal motions. This supports a 2D approximation. On the other
hand, we find $\mathrm{max}|v_z/c_s|\sim 0.2$ which, although
sub-sonic, is not very small.
\subsection{Angular momentum conservation}
Fig. \ref{3d_angmom} shows the angular momentum evolution in the 3D
runs during the linear growth of the one-armed spiral. Because the
ZEUS-MP simulation is offset in time from the PLUTO one, the time interval for the
plot was chosen such that the changes in the angular momentum
components are comparable in the two codes.
ZEUS-MP does not conserve angular momentum very well, but the
variation in total angular momentum $|\Delta J/J|< O(10^{-6})$ is
small compared to the individual components $|\Delta J_{0,1}/J|\sim
10^{-4}$.
PLUTO reaches similar values of $|\Delta J_{0,1}|$, but achieves better
conservation, with $|\Delta J/J|=O(10^{-8})$. These plots are again
similar to the 2D simulations, i.e. angular momentum lost by $J_1$ is
gained by $J_0$. This confirms that the interaction between $J_1$ and
$J_0$ operates similarly in 2D and 3D.
\begin{figure}
\includegraphics[scale=.41,clip=true,trim=0cm 1cm 0cm 0cm]{figures/nonaxi_evol_ang_zeus}
\includegraphics[scale=.41]{figures/nonaxi_evol_ang_pluto}
\caption{Evolution of angular momentum components in the 3D
simulations. The perturbation
relative to $t=10P_0$, during the growth of the one-armed spiral,
is shown in units of the initial total angular momentum
$J_\mathrm{ref}$.\label{3d_angmom}}
\end{figure}
\section{Results}\label{results2d}
We first present results from FARGO simulations. The 2D disc spans
$[R_\mathrm{min}, R_\mathrm{max}] = [0.4,10]R_0$. This gives a total
disc mass $M_{d}=0.086M_*$. The mass within
$R\in[R_\mathrm{min},R_{1}]$ is $0.017M_*$, that within
$R\in[R_{1},R_{2}]$ is $0.049M_*$, and that within
$R\in[R_{2},R_\mathrm{max}]$ is $0.021M_*$. We use a resolution of
$N_R\times N_\phi = 1024\times 2048$, or about $16$ grid cells per $H$, and
adopt $\epsilon_g=10^{-4}hR$ for the
self-gravity softening length\footnote{In 2D self-gravity, $\epsilon_g$ also
accounts for the vertical disc thickness, so a more appropriate
value would be $\epsilon_g\sim H$ \citep{muller12}. However, because
$\epsilon_g\propto R$ is needed in FARGO, the Poisson kernel
(Eq. \ref{2d_grav}) is no longer symmetric in $(R,R^\prime)$. We
choose a small
$\epsilon_g$ in favour of angular momentum conservation, keeping in
mind that the strength of self-gravity will be over-estimated.}.
In these simulations the disc is subject to initial perturbations in
cylindrical radial velocity,
\begin{align}\label{randpert}
v_R \to v_R+ c_s\frac{\delta}{M}
\exp{\left[-\frac{1}{2}\left(\frac{R-\overline{R}}{\Delta
R}\right)^2\right]}\sum_{m=1}^M\cos{m\phi},
\end{align}
where the amplitude $\delta\in[-10^{-3},10^{-3}]$ is set randomly but
independent of $\phi$, $\overline{R} = (R_{1}+R_{2})/2$
and $\Delta R = (R_{2}-R_{1})/2$.
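As an illustration, the perturbation of Eq. \ref{randpert} could be applied to gridded velocity data as in the sketch below. The array layout, function name, and the assumption that $\delta$ is drawn independently per radial cell (but is $\phi$-independent) are our interpretation, not taken from the FARGO source:

```python
import numpy as np

def perturb_vr(vR, cs, R, phi, R1, R2, M=10, seed=0):
    """Apply the initial radial-velocity perturbation of Eq. (randpert).

    vR, cs : (NR, Nphi) arrays; R : 1D radii; phi : 1D azimuths.
    delta is random in [-1e-3, 1e-3], varying with R but not with phi.
    """
    rng = np.random.default_rng(seed)
    Rbar = 0.5 * (R1 + R2)                        # Gaussian envelope centre
    dR = 0.5 * (R2 - R1)                          # Gaussian envelope width
    env = np.exp(-0.5 * ((R - Rbar) / dR) ** 2)   # (NR,)
    delta = rng.uniform(-1e-3, 1e-3, size=R.size) # random, phi-independent
    msum = sum(np.cos(m * phi) for m in range(1, M + 1))  # (Nphi,)
    return vR + cs * (delta * env)[:, None] * msum[None, :] / M
```

Since $|\sum_m \cos m\phi| \leq M$, the added velocity is bounded by $10^{-3}c_s$, as in the simulations.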
\subsection{Reference run}
To obtain a picture of the overall disc evolution, we describe a
fiducial run initialised with $M=10$ in Eq. \ref{randpert}.
Fig. \ref{fargo_modeamp} plots evolution of the maximum
non-axisymmetric surface density amplitudes in $R\in[R_{1},R_{2}]$
for $m\in[1,10]$. Snapshots from the simulation are shown in
Fig. \ref{fargo_2d}.
At early times $t\lesssim100P_0$ the disc is
dominated by low-amplitude high-$m$ perturbations. The $m\geq4$ modes
grow initially and saturate (or decay) after $t=40P_0$. Notice the
low-$m$ ($m\leq 2$) modes decay initially, but grow between $t\in[20,40]P_0$,
possibly due to non-linear interaction of the high-$m$ modes
\citep{laughlin96,laughlin97}. However, the $m=1$ mode begins to grow
again after $t=70P_0$, and eventually dominates the annulus.
\begin{figure}
\includegraphics[width=\linewidth]{figures/nonaxi_evol_DZ_fargo}
\caption{Evolution of non-axisymmetric surface density maxima
in the FARGO simulation initialised with perturbations
with $m\in[1,10]$.\label{fargo_modeamp}}
\end{figure}
\begin{figure*}
\includegraphics[scale=0.55]{figures/polarxy_dens050}\includegraphics[scale=0.55,clip=true,trim=2.26cm
0cm 0cm
0cm]{figures/polarxy_dens110}\includegraphics[scale=0.55,clip=true,trim=2.26cm
0cm 0cm 0cm]{figures/polarxy_dens130}
\caption{Visualisation of the FARGO 2D simulation in
Fig. \ref{fargo_modeamp}. The total
non-axisymmetric surface density
$\Delta\Sigma$ is shown. \label{fargo_2d}}
\end{figure*}
Fig. \ref{2d_angmom} shows the evolution of disc angular momentum
components. Only the $m=0,\,1$ components are
plotted since they are dominant. The $m=1$ structure has
an associated negative angular momentum, which indicates it is a
low-frequency mode.
Its growth is compensated by an increase in the axisymmetric
component of angular momentum, such that $\Delta J_0 + \Delta
J_1 \sim 0$. Note that FARGO does not conserve angular momentum
exactly. However, we find the total angular momentum varies by
$|\Delta J/J|= O(10^{-6})$, and is much smaller than the
change in the angular momenta components, $|\Delta J_{0,1}/J|>
O(10^{-5})$. Fig. \ref{2d_angmom} then suggests that angular momentum
is transferred from the one-armed spiral to the background disc.
\begin{figure}
\includegraphics[width=\linewidth]{figures/nonaxi_evol_ang_fargo}
\caption{Evolution of angular momentum components in the
FARGO simulation in Fig. \ref{fargo_modeamp}---\ref{fargo_2d}. The
perturbation relative to $t=0$ in 2D is shown in units of the
initial total angular momentum $J_\mathrm{ref}$.\label{2d_angmom}}
\end{figure}
\subsection{Dependence on the imposed temperature profile}
We show that the growth of the $m=1$ spiral is
associated with the imposed temperature gradient by performing a
series of simulations with $q\in[0,1]$. However, to maintain similar Toomre $Q$
profiles, we adjust the surface density power-law index such
that $s = (3+q)/2$. For clarity these simulations are initialised
with $m=1$ perturbations only.
Fig. \ref{fargo_varq} compares the $m=1$ spiral amplitudes as a
function of $q$. We indeed observe slower growth with decreasing
$q$. Although the figure indicates growth for the strictly isothermal
disc ($q=0$), we did not observe a coherent one-armed spiral upon
inspection of the $m=1$ surface density field. The growth in this case
may be associated with high-$m$ modes, which dominated the
simulation.
\begin{figure}
\includegraphics[width=\linewidth]{figures/m1_analysis_plot_fargo_varq}
\caption{Evolution of the $m=1$ spiral amplitude as a function of
the negative of the imposed temperature gradient $q$. The maximum value of the
$m=1$ surface density in $R\in[R_1,R_2]$ is shown.
\label{fargo_varq}}
\end{figure}
We plot growth rates of the $m=1$ mode as a function of $q$ in
Fig. \ref{fargo_varq_growth}. The correlation can be fitted with a
linear relation
\begin{align*}
\gamma \simeq \left[0.015 q - 7.9\times10^{-4}\right] \Omega_k(R_0).
\end{align*}
As the background torque is proportional to $q$
(Eq. \ref{theoretical_rate}),
this indicates that the temperature gradient is responsible for the
development of the one-armed spirals observed in our simulations.
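The linear fit quoted above can be reproduced with an ordinary least-squares straight line. The $(q,\gamma)$ values below are hypothetical placeholders of the right magnitude, not the measured rates:

```python
import numpy as np

# Hypothetical (q, growth-rate) pairs in units of Omega_k(R0); the real
# values underlie Fig. (fargo_varq_growth).
q = np.array([0.25, 0.5, 0.75, 1.0])
gamma = np.array([0.0030, 0.0067, 0.0105, 0.0142])

# Least-squares straight line gamma = a*q + b
a, b = np.polyfit(q, gamma, 1)
print(f"gamma ~ [{a:.3f} q + {b:.1e}] Omega_k(R0)")
```

With the real simulation rates, this procedure yields the fit $\gamma \simeq [0.015 q - 7.9\times10^{-4}]\,\Omega_k(R_0)$ quoted in the text.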
\begin{figure}
\includegraphics[width=\linewidth]{figures/m1_analysis_plot_ratemax_fargo_varq}
\caption{Growth rates of the $m=1$ spiral mode as a function of the
imposed sound-speed gradient $q$ (asterisks). A linear fit is also
plotted (dotted line).
\label{fargo_varq_growth}}
\end{figure}
We also performed a series of simulations with variable aspect-ratio
$h\in[0.03,0.07]$ but fixed $q=1$. This affects the magnitude of the temperature
gradient since $c_s \propto h$. However, with other parameters equal
to that in the fiducial simulation, varying $h$ also changes the disc
mass. For $h\in[0.03,0.07]$ the total disc mass ranges from
$M_d=0.052M_*$ to $M_d=0.12M_*$ and the
mass within $R\in[R_1,R_2]$ ranges from $0.033M_*$ to $0.062M_*$.
Fig. \ref{fargo_varh_growth} shows the growth rates of the $m=1$
spiral in $R\in[R_1,R_2]$ as a function of $h$. Growth rates increase
with $h$, roughly as
\begin{align*}
\gamma \simeq \left[0.10h + 8.3\times10^{-3}\right]\Omega_k(R_0).
\end{align*}
However, the linear fit is poorer than in the variable-$q$ cases above. This
may be due to the change in the total disc mass when $h$ is varied. We
find no qualitative difference in the spiral patterns that emerge.
\begin{figure}
\includegraphics[width=\linewidth]{figures/m1_analysis_plot_ratemax_fargo_varh}
\caption{Growth rates of the $m=1$ spiral mode as a function of the
disc aspect-ratio $h$ (asterisks). A linear fit is also
plotted (dotted line).
\label{fargo_varh_growth}}
\end{figure}
\subsection{Properties of the $m=1$ spiral and its growth}\label{fargo_m1}
Here we analyse the $q=1$ case in Fig. \ref{fargo_varq_growth} in
more detail.
Fig. \ref{2d_fargo_viz} shows a snapshot of the $m=1$ surface
density of this run.
By measuring the $m=1$ surface density amplitude and its
pattern speed, we obtain a co-rotation radius and growth rate
\begin{align*}
&R_c \simeq 4.4R_0,\\
&\gamma\simeq 0.014\Omega_k(R_0) = 0.13\Omega_p.
\end{align*}
This one-armed spiral can be considered low-frequency because its
pattern speed $\Omega_p \simeq 0.1\Omega_k(R_0)\; \raisebox{-.8ex}{$\buildrel{\textstyle<}\over\sim$}\; 0.3\Omega$ in
$R\in[R_{1},R_{2}]$ (where it has the largest amplitude). Thus, the spiral
pattern appears nearly stationary. The growth rate $\gamma$ is also slow
relative to the local rotation, although the characteristic growth
time $\gamma^{-1} \simeq 10P_0$ is not very long.
\begin{figure}
\includegraphics[width=\linewidth]{figures/polarxy2_dens120_fargo}
\caption{Cartesian visualisation of the $m=1$ surface density
structure in the FARGO simulation initialised with only $m=1$
perturbations.
\label{2d_fargo_viz}}
\end{figure}
Next, we write $\Sigma_1 =
|\Sigma_1|\exp{(\mathrm{i} kR)}$, where $k$ is real, and assume the amplitude
$|\Sigma_1|$ varies slowly compared to the complex phase. This is the
main assumption in local theory. We calculate $k$ numerically and plot
its normalised value in Fig. \ref{fargo_wavenumber}. We find
\begin{align*}
kR \sim \frac{\pi G \Sigma}{c_s^2}R \sim \frac{1}{hQ},
\end{align*}
where we used $Q\sim c_s\Omega/\pi G \Sigma$ and $R\Omega/c_s\sim
h^{-1}$. Since $Q=O(1)$ and $h\ll 1$, we have $|kR|\gg 1$ and
can apply results from local theory
(\S\ref{local_approx}). Note also that $k\simeq k_c$.
Fig. \ref{2d_fargo_viz} shows the $m=1$
spiral is trailing, consistent with $k>0$.
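A minimal sketch of this wavenumber measurement, assuming the mode $\Sigma_1(R)$ is available as a complex array, differentiates the unwrapped complex phase:

```python
import numpy as np

def radial_wavenumber(R, Sigma1):
    """Local radial wavenumber of a complex mode Sigma1(R).

    Writes Sigma1 = |Sigma1| exp(i k R) locally and estimates
    k(R) = d(arg Sigma1)/dR from the unwrapped phase; a slowly varying
    amplitude (the local-theory assumption) does not affect the result.
    """
    phase = np.unwrap(np.angle(Sigma1))
    return np.gradient(phase, R)

# Check on a synthetic trailing wave with known k > 0:
R = np.linspace(1.0, 2.0, 400)
k_true = 30.0
Sigma1 = (1.0 + 0.1 * R) * np.exp(1j * k_true * R)  # slowly varying amplitude
k_est = radial_wavenumber(R, Sigma1)
```

The phase unwrapping requires the grid to resolve the wave, $k\,\Delta R < \pi$, which is comfortably satisfied here.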
\begin{figure}
\includegraphics[width=\linewidth]{figures/m1_analysis_kr120_fargo}
\caption{Normalised radial wavenumber of the $m=1$ spiral in
Fig. \ref{2d_fargo_viz}.\label{fargo_wavenumber}}
\end{figure}
Using the estimated value of $R_c$, we plot in
Fig. \ref{fargo_qbarrier} the quantity $\nu^2 - 1 + Q^{-2}$, which is
required to be positive in local theory for purely wave-like
solutions to the dispersion relation (Eq. \ref{dispersion}) when the
mode frequency is given.
Fig. \ref{fargo_qbarrier} shows two $Q$-barriers located in
the inner disc, at $R_{Qb}=R_0$ and $R_{Qb}=1.6R_0$; the bounded region is indeed
where the $m=1$ spiral develops. This shows that the one-armed
spiral is trapped. Note in this region, $\nu^2 - 1 +
Q^{-2}\simeq 0.1\ll 1$, which is necessary for consistency with
the measured wavenumber $k$ and Eq. \ref{wavenumber}.
There is one outer Lindblad resonance at $R_L\simeq
7.2R_0$. Thus, acoustic waves may be launched in $R\;\raisebox{-.8ex}{$\buildrel{\textstyle>}\over\sim$}\; 7.2R_0$ by
the spiral disturbance in the inner disc \citep{lin11b}.
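The $Q$-barriers can be located numerically by checking the sign of $\nu^2-1+Q^{-2}$. The profiles below are toy assumptions (including a radially constant $\nu$), not the simulation data:

```python
import numpy as np

def wave_region(nu2, Q):
    """True where nu^2 - 1 + 1/Q^2 > 0, i.e. where the dispersion relation
    (Eq. dispersion) admits purely wave-like solutions for a real frequency."""
    return nu2 - 1.0 + 1.0 / Q**2 > 0.0

# Toy profiles: a Q dip near R = 1.5 traps a low-frequency (nu^2 < 1)
# wave between two Q-barriers, outside of which the wave is evanescent.
R = np.linspace(0.5, 8.0, 200)
Q = 3.0 - 1.6 * np.exp(-0.5 * ((R - 1.5) / 0.5) ** 2)
nu2 = 0.85 * np.ones_like(R)
mask = wave_region(nu2, Q)
```

The edges of the `True` region of `mask` are the $Q$-barrier radii bounding the trapped mode.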
\begin{figure}
\includegraphics[width=\linewidth]{figures/m1_analysis_Qbar_fargo}
\caption{Dimensionless mode frequency $\nu$ for the $m=1$ spiral in
Fig. \ref{2d_fargo_viz}. For a given real mode frequency, the
dispersion relation for local density waves, Eq. \ref{dispersion},
permits purely wave-like solutions in regions where $\nu^2 - 1 +
Q^{-2}>0$.
\label{fargo_qbarrier}}
\end{figure}
We can estimate the expected growth rate of the $m=1$ mode due to the
temperature gradient. Setting $k = k_c$ and $m=1$ into
Eq. \ref{theoretical_rate} gives
\begin{align}\label{theoretical_rate1}
\gamma \sim \frac{qh}{2Q}\Omega.
\end{align}
Inserting $q=1$, $h=0.05$ and $Q\simeq 1.5$ gives
$\gamma \simeq 0.017\Omega$, consistent with numerical
results.
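The estimate is simple arithmetic; as a check of Eq. \ref{theoretical_rate1} with the fiducial parameters:

```python
# Order-of-magnitude growth-rate estimate gamma ~ q*h/(2*Q) * Omega
# (Eq. theoretical_rate1), for the fiducial run parameters.
q, h, Q = 1.0, 0.05, 1.5
gamma_over_Omega = q * h / (2.0 * Q)
print(gamma_over_Omega)  # ~0.017, consistent with the measured rate
```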
\subsubsection{Angular momentum exchange with the background disc}
We explicitly show that the growth of the $m=1$ spiral is due to
the forced temperature gradient via the background torque described
in \S\ref{wkb}. We integrate the statement for angular momentum
conservation for linear perturbations, Eq. \ref{lin_ang_mom_cons},
assuming boundary fluxes are negligible, to obtain
\begin{align}\label{baroclinic_torque_int}
\frac{d}{dt}\underbrace{\int_{R_\mathrm{min}}^{R_\mathrm{max}}j_\mathrm{lin}
2\pi R dR}_{J_\mathrm{lin}}
=\int_{R_\mathrm{min}}^{R_\mathrm{max}}T_\mathrm{BG} 2\pi R dR,
\end{align}
where we recall $T_\mathrm{BG}$ is the torque density associated with the imposed
sound-speed profile (Eq. \ref{baroclinic_torque}). We
compute both sides of Eq. \ref{baroclinic_torque_int} using
simulation data, and compare them in Fig. \ref{fargo_angmom_ex}. There
is a good match between the two torques, especially at early times
$t\lesssim110P_0$. The average discrepancy is $\simeq 5\%$.
The match is poorer later on, when the spiral
amplitude is no longer small ($\mathrm{max}\Delta\Sigma_1\sim 0.2$ at $t=110P_0$
and $\mathrm{max}\Delta\Sigma_1\sim 0.4$ by $t=120P_0$) and linear theory becomes
less applicable. Fig. \ref{fargo_angmom_ex} confirms that the $m=1$ spiral
wave experiences a negative torque that further reduces its (negative) angular
momentum, leading to its amplitude growth. This is consistent
with angular momentum component measurements (Fig. \ref{2d_angmom}).
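Both sides of Eq. \ref{baroclinic_torque_int} can be evaluated from gridded data as sketched below; the function names and the $(N_t, N_R)$ sampling of $j_\mathrm{lin}$ are our assumptions:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule along the last axis (small version-proof helper)."""
    return 0.5 * np.sum((y[..., 1:] + y[..., :-1]) * (x[1:] - x[:-1]), axis=-1)

def total_torque(R, T_BG):
    """Right-hand side of Eq. (baroclinic_torque_int): int T_BG 2*pi*R dR."""
    return _trapz(T_BG * 2.0 * np.pi * R, R)

def dJlin_dt(t, R, j_lin):
    """Left-hand side: d/dt of J_lin = int j_lin 2*pi*R dR, with j_lin
    sampled on a (Nt, NR) grid at times t."""
    J = _trapz(j_lin * 2.0 * np.pi * R[None, :], R)
    return np.gradient(J, t)
```

For a linear mode growing as $e^{2\gamma t}$ with torque density $T_\mathrm{BG}=2\gamma j_\mathrm{lin}$, the two sides agree by construction, which is the consistency check applied to the simulation data in Fig. \ref{fargo_angmom_ex}.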
\begin{figure}
\includegraphics[width=\linewidth]{figures/m1_analysis_ang_fargo}
\caption{Rate of change of the $m=1$ wave angular momentum as defined by
Eq. \ref{baroclinic_torque_int} (solid) compared to the torque
exerted on the wave associated with the background temperature
gradient (dotted).
\label{fargo_angmom_ex}}
\end{figure}
\section{Summary and conclusions}\label{summary}
In this paper, we have described a destabilising
effect of adopting a fixed temperature profile to model
astrophysical discs. By applying angular momentum conservation
within linear theory, we showed that a forced temperature gradient
introduces a torque on linear perturbations. We call this the
background torque because it represents an exchange of angular
momentum between the background disc and the perturbations. This
offers a previously unexplored pathway to instability in locally
isothermal discs.
In the local approximation, we showed that this background torque is
negative for non-axisymmetric trailing waves in discs with a fixed temperature or
sound-speed profile that decreases outwards. A negative background torque amplifies
low-frequency non-axisymmetric modes because they are associated
with negative angular momentum.
We demonstrated the destabilising effect of the background torque by
carrying out direct numerical hydrodynamic simulations of
locally isothermal discs with a self-gravitating surface density
bump.
We find such systems are unstable to low-frequency perturbations
with azimuthal wavenumber $m=1$, which leads to the development of a one-armed
trailing spiral that persists for at least $O(10^2)$ orbits. The spiral
pattern speed is smaller than the local disc rotation and
growth rates are $O(10^{-2}\Omega)$ which gives a characteristic
growth time of $O(10)$ orbits.
We used three independent numerical codes --- FARGO in 2D, ZEUS-MP and
PLUTO in 3D --- to show that the growth of
one-armed spirals in our disc model is due to the imposed
temperature gradient: growth rates increased linearly
with the magnitude of the imposed temperature gradient, and one-armed
spirals did not develop in strictly isothermal simulations. This
one-armed spiral instability can be interpreted as an initially
neutral, tightly-wound $m=1$ mode being destabilised by the
background torque. The spiral
is mostly confined between two $Q$-barriers in the surface density bump.
We find the instability behaves similarly in 2D and 3D, but in 3D
the spiral disturbance becomes more radially global away from
the midplane.
\subsection{Speculations and future work}
There are several issues that remain to be addressed in future
works:
\emph{Thermal relaxation.} The locally isothermal assumption
can be relaxed by including an energy equation with
a source term that restores the disc temperature over a
characteristic timescale $t_\mathrm{relax}$. Preliminary FARGO
simulations indicate a thermal relaxation timescale $t_\mathrm{relax} <
0.1\Omega_k^{-1}$ is needed for the one-armed spiral to
develop. However, this value is likely model-dependent. For example,
a longer $t_\mathrm{relax}$ may be permitted with larger temperature
gradients. This issue, together with a parameter survey, will be
considered in a follow-up study.
\emph{Non-linear evolution}. In the deeply non-linear regime, the
one-armed spiral may
shock and deposit negative angular momentum onto
the background disc. The spiral amplitude would saturate by gaining
positive angular momentum. However, if the temperature gradient is
maintained, it may be possible to achieve a balance between the gain
of negative angular momentum through the background torque, and the
gain of positive angular momentum through shock dissipation. We remark
that fragmentation is unlikely because the co-rotation radius is
outside the bulk of the spiral arm \citep{durisen08,rogers12}. In order
to study these possibilities, improved numerical models are needed to
ensure total angular momentum conservation on timescales much longer
than that considered in this paper.
\emph{Other applications of the background torque.}
The background torque is a generic feature in
discs for which the temperature is set externally. It may therefore
be relevant in other astrophysical contexts.
One possibility is in Be star discs \citep{rivinius13},
for which one-armed oscillations may explain long-timescale variations
in their emission lines \citep[see e.g.][and references
therein]{okasaki97,papaloizou06c,ogilvie08}. These studies
invoke alternative mechanisms to produce \emph{neutral} one-armed
oscillations (e.g. rotational deformation of the star), but consider
strictly isothermal discs. It would be interesting to explore
the effect of a radial temperature gradient on the stability of these
oscillations.
\section{Calculation of the integral \rfs{eq:integral}}
\label{sec:A}
We would like to evaluate
\begin{equation} \label{eq:eee} I_N=\frac{1}{\pi^N} \int_{-\infty}^\infty \prod_{n=1}^N \frac{dx_n}{1+x_n^2}
\ln \left( 1+ \epsilon^2 X \right),
\end{equation}
where \begin{equation} X = \sum_{n=1}^N \left( x_n^2 x_{n+1}^2 + 2 x_n x_{n+2} + 2 x_n x_{n+1}^2 x_{n+2} \right),
\end{equation}
and show that for small $\epsilon$ it is approximately equal to
\begin{equation} - \frac{4N}{\pi} \epsilon \ln \epsilon.
\end{equation} We will be able to show this for even $N$, although this result very likely holds for any $N$. Therefore, from now on we take $N$ to be even.
In anticipation of the answer, we will calculate
\begin{equation} \alpha =- \lim_{\epsilon \rightarrow 0} \left[ \epsilon \frac{\partial^2 I_N}{\partial \epsilon^2} \right],
\end{equation}
and show that
\begin{equation} \alpha = \frac{4N}{\pi}.
\end{equation}
Carrying out the differentiation we find
\begin{equation} \label{eq:alpha}
\alpha = \lim_{\epsilon \rightarrow 0} \frac{2}{\pi^N} \int_{-\infty}^\infty \prod_{n=1}^N \frac{dx_n}{1+x_n^2}
\frac{ \epsilon X \left( \epsilon^2 X - 1 \right)}{\left( 1+ \epsilon^2 X \right)^2}. \end{equation}
At this point it is convenient to change the odd-labelled integration variables $x_{2n-1}$ according to
\begin{equation} \label{eq:change} x_{2n-1}=\frac {z_n}{\epsilon}. \end{equation} We observe that
\begin{equation} \frac{1}{\pi} \frac{dx_{2n-1}}{1+x_{2n-1}^2} = \frac 1 \pi
\frac{\epsilon dz_n}{\epsilon^2+z_n^2}.
\end{equation}
In turn, for small $\epsilon$, we can take advantage of the expansion
\begin{equation} \label{eq:expd} \frac 1 \pi \frac{\epsilon }{\epsilon^2+z_n^2} \approx \delta(z_n) + \frac{\epsilon}{\pi z_n^2}.
\end{equation}
This can for example be derived by Fourier transforming this expression with respect to $z_n$ obtaining $e^{-\epsilon \left|k \right|}$, where $k$ is the variable conjugate to $z_n$. Expanding in powers of $\epsilon$ and transforming back, we obtain \rfs{eq:expd}.
When applied to \rfs{eq:alpha} this becomes, with the convenient relabeling $x_{2n}=y_n$,
\begin{eqnarray} \alpha &=& 2 \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon \pi^{N/2}} \int_{-\infty}^\infty \prod_{n=1}^{N/2} dz_n \left[ \delta(z_n) + \frac{\epsilon}{\pi z_n^2} \right] \times \cr
&& \prod_{n=1}^{N/2} \frac{dy_{n}}{1+y_n^2} \frac{Y (Y-1)}{\left( 1+ Y \right)^2}. \label{eq:expande}
\end{eqnarray}
Here
\begin{equation} Y = \sum_{n=1}^{N/2} \left( z_n^2 (y_n+y_{n+1})^2 + 2 z_n \left( y_n^2+1 \right) z_{n+1} +
2 \epsilon^2 y_n y_{n+1} \right).
\end{equation}
The term in $Y$ proportional to $\epsilon^2$ is small and can be dropped. The rest of the integral can be calculated as an expansion over $\epsilon$. The term where delta functions are employed to do all
integrals over $z_n$ can be seen to be zero. The first non-vanishing term is the one where the
integral over one of $z_n$ is done with the help of the second term in the square brackets of
\rfs{eq:expande}. There are $N/2$ such terms, one for each $z_n$, and they are all identical.
They give
\begin{eqnarray} \alpha &=& \frac{N}{\pi^3} \int_{-\infty}^\infty \frac{dz dy_1 dy_2}{(1+y_1^2)(1+y_2^2)}
\times \cr &&
\frac{\left(y_1+y_2\right)^2 \left( z^2 \left(y_1+y_2 \right)^2-1\right)}{\left( 1+ z^2 \left(y_1+y_2 \right)^2\right)^2}.
\end{eqnarray}
To compute this integral, care needs to be taken because if $y_1+y_2=0$, then the integral over $z$ is divergent. Therefore, first one has to integrate over $y_1$ and $y_2$, and only then over $z$. The integrals over $y_1$ and $y_2$ give
\begin{equation} \alpha = \frac{4N}{\pi} \int_{-\infty}^\infty \frac{dz}{\left( 1+ 2 \left| z \right| \right)^2}.
\end{equation} Doing this integral results in
\begin{equation} \alpha = \frac{4N}{\pi},
\end{equation}
which is the advertised result.
Finally, one may worry that terms higher order in $\epsilon$, dropped in the derivation of \rfs{eq:eee}, will not be small since upon \rfs{eq:change} $\epsilon$ may drop out of these higher order terms. However, all such terms are zero since they involve products of more than one $z_n$ and vanish thanks to the delta functions in
\rfs{eq:expande}.
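The leading small-$\epsilon$ behaviour can also be checked by direct Monte Carlo integration, since the measure in \rfs{eq:eee} is a product of standard Cauchy distributions. The sketch below does this for $N=2$, assuming the indices in $X$ are periodic ($x_{N+n}=x_n$), in which case $X=2x_1^2+2x_2^2+6x_1^2x_2^2\geq 0$; agreement with $-(4N/\pi)\,\epsilon\ln\epsilon$ holds only up to $O(1/\ln\epsilon)$ corrections at finite $\epsilon$:

```python
import numpy as np

def I2_monte_carlo(eps, samples=2_000_000, seed=1):
    """Monte Carlo estimate of I_N (Eq. eee) for N = 2: the x_n are
    independent standard Cauchy variables, so I_2 = E[ln(1 + eps^2 X)].
    With periodic indices, X = 2 x1^2 + 2 x2^2 + 6 x1^2 x2^2 >= 0."""
    rng = np.random.default_rng(seed)
    x1, x2 = rng.standard_cauchy((2, samples))
    X = 2 * x1**2 + 2 * x2**2 + 6 * (x1 * x2) ** 2
    return np.mean(np.log1p(eps**2 * X))

eps = 1e-3
mc = I2_monte_carlo(eps)
pred = -(4 * 2 / np.pi) * eps * np.log(eps)  # leading small-eps asymptotics
```

At $\epsilon=10^{-3}$ the estimate agrees with the predicted leading term to within the expected logarithmic corrections.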
\section{Introduction}
\label{intro}
Jets are collimated sprays of particles produced in high energy particle collisions. They are remnants of hard-scattered partons, which are the fundamental objects of pQCD. At RHIC, jets can be used as a probe of the hot and dense matter created in heavy ion collisions~\cite{MP,EB}. To quantify the signals observed in heavy ion collisions in comparison to p+p collisions, it is necessary to measure the cold nuclear matter effects in systems such as d+Au.
Cold nuclear matter effects can be described by partonic rescattering~\cite{vitev} and by modification of parton distribution functions~\cite{eps}. Measurements of single particle $R_\mathrm{dAu}$ have shown modification even at midrapidity~\cite{singleRdAu}. It is important to verify these observations using jets: they provide much higher kinematic reach as they are not prone to fragmentation biases and one can also avoid uncertainties coming from imprecise knowledge of fragmentation functions. An alternative approach to full jet reconstruction is a method using di-hadron correlations, discussed in detail in~\cite{Mriganka}.
\section{Jet reconstruction}
\label{jets}
This analysis is based on $\sqrt{s_\mathrm{NN}} = 200~\gev$ data from the STAR experiment, recorded during RHIC run 8 (2007-2008). The Beam Beam Counter detector, located in the Au nucleus fragmentation region, was used to select the 20\% highest multiplicity events in d+Au collisions. The Barrel Electromagnetic Calorimeter (BEMC) detector was used to measure the neutral component of jets, and the Time Projection Chamber (TPC) detector was used to measure the charged component of jets. In the case of a TPC track pointing to a BEMC tower, its momentum was subtracted from the tower energy to avoid double counting (electrons, MIP and possible hadronic showers in the BEMC).
To minimize the effect of BEMC backgrounds and dead areas, the jet neutral energy fraction is required to be within $(0.1,0.9)$. An upper $\pT < 15~\gevc$ cut was applied to TPC tracks due to uncertainties in TPC tracking performance at high-$\pT$ in run 8 (under further investigation). The acceptance of TPC and BEMC together with experimental details (calibration, primary vertex position cuts) limit the jet fiducial acceptance to $|\eta|<0.55 \; (R=0.4), |\eta|<0.4 \; (R=0.5)$, where $R$ is the resolution parameter used in jet finding.
Recombination jet algorithms kt and anti-kt, part of the FastJet package~\cite{fj}, are used for jet reconstruction. To subtract d+Au underlying event background, a method based on active jet areas~\cite{bgsub} is applied event-by-event: $\pT^{Rec} = \pT^{Candidate} - \rho \cdot A$, with $\rho$ estimating the background density per event and $A$ being the jet active area. Due to the asymmetry of the colliding d+Au system, the background is asymmetric in $\eta$. This dependence was fit with a linear function in $\eta$ and included in the background subtraction procedure.
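A sketch of this subtraction follows; the linear $\rho(\eta)$ parametrisation is from the text, while the function names and per-patch fitting inputs are our assumptions about the implementation:

```python
import numpy as np

def fit_rho_eta(eta_patches, rho_patches):
    """Linear fit rho(eta) = rho0 + rho1*eta to per-patch background
    densities, modelling the eta-asymmetric d+Au underlying event."""
    rho1, rho0 = np.polyfit(eta_patches, rho_patches, 1)
    return rho0, rho1

def subtract_background(pt_cand, area, eta, rho0, rho1):
    """Event-by-event area-based subtraction: pT_rec = pT_cand - rho(eta)*A."""
    rho = rho0 + rho1 * eta
    return pt_cand - rho * area
```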
Pythia 6.410 and GEANT detector simulations were used to correct for experimental effects. Jet reconstruction was run at MC hadron level (PyMC) and at detector level (PyGe). To study residual effects of the d+Au background (such as background fluctuations), a sample with added background (PyBg) was created by mixing Pythia events with 0-20\% highest multiplicity d+Au events (minimum bias online trigger). This mixing was done at detector level (reconstructed TPC tracks and BEMC towers).
\section{Nuclear $\kt$ broadening}
Comparing azimuthal correlations of jets in di-jet events in p+p and d+Au can provide information on nuclear $\kt$ broadening. To increase di-jet yield, BEMC high tower (HT) online trigger was employed (one tower with $\et > 4.3~\gev$) for both p+p and d+Au data (run 8). Resolution parameter $R=0.5$ was used for jet finding and a cut $\pT > 0.5~\gevc$ applied for tracks and towers to reduce background.
To select a clean di-jet sample two highest energy jets ($p_\mathrm{T,1} > p_\mathrm{T,2}$) in each event were used, with $p_\mathrm{T,2} > 10~\gevc$~\cite{qm09}. Distributions of $k_\mathrm{T,raw} = p_\mathrm{T,1} \sin(\Delta\phi)$ were constructed for di-jets and Gaussian widths, $\sigma_{k_\mathrm{T,raw}}$, were obtained for the two jet algorithms and two ($10 - 20~\gevc$, $20 - 30~\gevc$) $p_\mathrm{T,2}$ bins.
Detector and residual background effects on the $\kt$ widths were studied by comparing PyMC, PyGe and PyBg distributions as shown in Fig.~\ref{fig:ktsimu}. The widths are the same within statistical errors, most likely due to the interplay between jet $\pT$ and $\Delta\phi$ resolutions. Fig.~\ref{fig:ktdata} shows an example of the $k_\mathrm{T,raw}$ distributions for data.
The Gaussian fit to p+p data is not ideal and the precise shape of the distribution is under study. RMS widths of these distributions have therefore been checked and they agree with the sigma widths of the fits.
The values extracted from the Gaussian fits are $\sigma_{k_\mathrm{T,raw}}^{p+p} = 2.8 \pm 0.1~\mathrm{(stat)}~\gevc$ and $\sigma_{k_\mathrm{T,raw}}^{d+Au} = 3.0 \pm 0.1~\mathrm{(stat)}~\gevc$. Possible nuclear $\kt$ broadening therefore seems rather small.
The systematic uncertainties on extracted $\kt$ widths come from Jet Energy Scale (JES) uncertainty (discussed in more detail in section~\ref{systematics}) and from the way they were extracted. The latter is estimated to be $0.2~\gevc$ and includes a weak dependence on the $|\Delta\phi - \pi|$ cut for back-to-back di-jet selection (varied between 0.5 and 1.0), differences in the $p_\mathrm{T,2}$ range and jet algorithm selections and the precision with which the detector and background effects were found to be negligible.
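The width extraction can be sketched as follows; for an unbinned Gaussian sample a maximum-likelihood fit reduces to the sample standard deviation, whereas the binned fits behind the figures differ in detail:

```python
import numpy as np

def kt_raw(pt1, dphi):
    """k_T,raw = p_T,1 * sin(dphi) for each di-jet pair."""
    return pt1 * np.sin(dphi)

def kt_width(pt1, dphi):
    """Gaussian sigma of the k_T,raw distribution, taken here as the
    sample standard deviation of an unbinned di-jet sample."""
    return np.std(kt_raw(pt1, dphi))
```

For back-to-back pairs, $\Delta\phi$ scatters around $\pi$, so small acoplanarity fluctuations map almost linearly onto $k_\mathrm{T,raw}$.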
\begin{figure}[htb]
\centering
\includegraphics[width=0.9\textwidth]{gr/kt_simu_AkT.eps}
\vspace{-0.5cm}
\caption{\label{fig:ktsimu}Distributions of $k_\mathrm{T,raw} = p_\mathrm{T,1} \sin(\Delta\phi)$ for simulation ($10 < p_\mathrm{T,2} < 20~\gevc$).}
\end{figure}
\section{Inclusive jet spectra}
\label{spectra}
Run 8 d+Au data with a minimum bias online trigger were used for this study. 10M 0-20\% highest multiplicity events after event cuts were used for jet finding (anti-kt algorithm) with resolution parameter $R = 0.4$; a $\pT > 0.2~\gevc$ cut was applied to tracks and towers. The jet $\pT$ spectrum is normalized per event and the high multiplicity of d+Au events also guarantees the trigger efficiency is independent of the $\pT$ of the hard scattering. Therefore, no correction related to the trigger is applied to the jet $\pT$ spectra.
A bin-by-bin correction is used to correct jet $\pT$ spectrum to hadron level. It is based on the generalized efficiency, constructed as the ratio of PyMC to PyBg jet $\pT$ spectra, applied to the measured jet $\pT$ spectrum. It therefore corrects for detector effects (tracking efficiency, unobserved neutral energy, jet $\pT$ resolution) as well as for residual background effects. As the impact of these effects on jet $\pT$ spectrum differ substantially depending on jet $\pT$ spectrum shape, the shapes have to be consistent between the PyBg and the measured jet $\pT$ spectra. Fig.~\ref{fig:ratio} shows that this is indeed the case.
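The bin-by-bin correction amounts to multiplying each measured $\pT$ bin by the generalized efficiency; a minimal sketch (array names assumed):

```python
import numpy as np

def bin_by_bin_correct(measured, pymc, pybg):
    """Correct a measured jet pT spectrum to hadron level by the
    generalized efficiency PyMC/PyBg in each pT bin; bins with no
    PyBg yield are left empty."""
    eff = np.divide(pymc, pybg, out=np.zeros_like(pymc, dtype=float),
                    where=pybg > 0)
    return measured * eff
```

Because the correction is applied bin by bin, it is only reliable when the PyBg and measured spectra have consistent shapes, which is the check made in Fig.~\ref{fig:ratio}.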
As the jet $\pT$ spectrum is very sensitive to the jet energy scale, an additional correction was applied here to account for the lower TPC tracking efficiency in d+Au compared to that from the used p+p Pythia simulation. The d+Au efficiency was determined by simulating single pions and embedding them at the raw detector level into real d+Au minimum bias events.
The tracking efficiency in the Pythia simulation was then artificially lowered, prior to jet finding at PyGe and PyBg level, so that it matches the one obtained from d+Au embedding.
To compare per event jet yield in d+Au to jet cross section in p+p collisions, an input from MC Glauber study is utilized: $\langle N_\mathrm{bin} \rangle = 14.6 \pm 1.7$ for 0-20\% highest multiplicity d+Au collisions and $\sigma_\mathrm{inel,pp} = 42~\mathrm{mb}$. These factors were used to scale the p+p jet cross section measured previously by the STAR collaboration~\cite{ppjetprl} using Mid Point Cone (MPC) jet algorithm with $R = 0.4$.
The resulting d+Au jet $\pT$ spectrum is shown in Fig.~\ref{fig:spectrum} together with the scaled p+p jet spectrum. The systematic errors are indicated by dashed lines and by the gray boxes. The dominant contribution to p+p systematic uncertainty is the Jet Energy Scale (JES) uncertainty. Within these systematic uncertainties, the d+Au jet spectrum shows no significant deviation from the scaled p+p spectrum.
\section {Systematic uncertainties}
\label{systematics}
The JES uncertainty dominates the uncertainties of the d+Au measurement and is marked by the dashed lines in Fig.~\ref{fig:spectrum}. Part of it comes from the BEMC calibration uncertainty of 5\%: it is applied to the neutral component of the jet ($\approx 40\%$ for the minimum-bias online trigger, $\approx 55\%$ for the HT online trigger).
An uncertainty of 10\% in TPC tracking efficiency is applied to the charged component of jets. Embedding of jets into real d+Au events at the raw detector level will allow us to decrease this uncertainty in the future. As the JES uncertainty is expected to be largely correlated between run 8 p+p and d+Au data, we plan to measure the jet $\pT$ spectrum in run 8 p+p collisions to decrease uncertainties in $R_\mathrm{dAu}$.
Caution is needed due to the use of different pseudorapidity acceptances and different jet algorithms in Fig.~\ref{fig:spectrum}. The effect of $\eta$ acceptance on jet $\pT$ spectrum is illustrated in Fig.~\ref{fig:etadep} using Pythia simulation: it is less than 10\% in the $\pT$ range covered by the present measurement. The effect of jet algorithm is mainly due to hadronization and can exceed 20\% for $\pT < 20~\gevc$~\cite{gregory}. Therefore, the same acceptance and the same algorithm have to be used in p+p and d+Au to obtain jet $R_\mathrm{dAu}$.
\begin{figure}[htb]
\begin{minipage}[h]{0.62\textwidth}
\includegraphics[width=\textwidth]{gr/kt_data_AkT.eps}
\vspace{-0.95cm}
\caption{\label{fig:ktdata}Distributions of $k_\mathrm{T,raw}$ for p+p, d+Au ($10 < p_\mathrm{T,2} < 20~\gevc$).}
\end{minipage}
\hfill
\begin{minipage}[h]{0.33\textwidth}
\includegraphics[width=\textwidth]{gr/pt_datacomp.eps}
\vspace{-0.99cm}
\caption{\label{fig:ratio}Ratio of jet $\pT$ spectra between d+Au and simulation.}
\end{minipage}
\end{figure}
\begin{figure}[htb]
\begin{minipage}[h]{0.61\textwidth}
\centering
\includegraphics[width=\textwidth]{gr/ptspectrum.eps}
\vspace{-0.9cm}
\caption{\label{fig:spectrum}Jet $\pT$ spectrum: d+Au collisions compared to scaled p+p~\protect\cite{ppjetprl}.}
\end{minipage}
\hfill
\begin{minipage}[h]{0.37\textwidth}
\includegraphics[width=\textwidth]{gr/etadep.eps}
\vspace{-0.9cm}
\caption{\label{fig:etadep}Effect of $\eta$ acceptance on jet $\pT$ spectra (Pythia).}
\end{minipage}
\end{figure}
\section{Summary}
\label{summary}
Di-jet $\kt$ widths were measured in 200 GeV p+p and d+Au collisions: $\sigma_{k_\mathrm{T,raw}}^{p+p} = 2.8 \pm 0.1~\mathrm{(stat)}~\gevc$, $\sigma_{k_\mathrm{T,raw}}^{d+Au} = 3.0 \pm 0.1~\mathrm{(stat)}~\gevc$. No significant broadening due to Cold Nuclear Matter effects was observed.
The jet $\pT$ spectrum from minimum bias 200 GeV d+Au collisions is consistent with the scaled p+p jet spectrum within systematic uncertainties. A precise tracking efficiency determination from jet embedding in raw d+Au data and a jet cross section measurement in run 8 p+p data will allow us to construct the jet $R_\mathrm{dAu}$.
\section*{Acknowledgement}
\label{acknowledgement}
This work was supported in part by grants LC07048 and LA09013 of the Ministry of Education of the Czech Republic and by the grant SVV-2010-261 309.
\vspace{-0.35cm}
An oriented hypersurface $M$ in $\mathbb{R}^{n+1}$ is called a translating soliton (or translator) if
$$M+ t \; \mathbf{e}_{n+1}$$
is a mean curvature flow. This is equivalent to
\[
\vec{\bf{H}}={\bf e}_{n+1}^{\bot},
\]
where $\vec{\bf{H}}$ denotes the mean curvature vector field of $M$ and $\bot$ indicates the projection over the normal bundle of $M.$ Thus the scalar mean curvature satisfies:
\begin{equation}\label{TS-Eq.}
H=\langle N,{\bf e}_{n+1}\rangle,
\end{equation}
where $N$ indicates the unit normal along $M.$ Recall that $H$ is just the trace of the second fundamental form of $M.$
In 1994, T. Ilmanen \cite{Ilmanen} showed that translating solitons are minimal hypersurfaces in $\mathbb{R}^{n+1}$ endowed with the conformal metric $g:=e^{\frac{2}{n}x_{n+1}}\langle\cdot.\cdot\rangle.$ From now on, we shall always assume that $\mathbb{R}^{n+1}$ is endowed with the metric $g.$
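To make Ilmanen's observation explicit, recall a standard computation (signs depend on the orientation convention): since $g=e^{\frac{2}{n}x_{n+1}}\langle\cdot,\cdot\rangle$, the $g$-area of a hypersurface $\Sigma$ is the Euclidean area weighted by $e^{x_{n+1}}$,
\[
\mathcal{A}_g[\Sigma]=\int_{\Sigma}e^{x_{n+1}}\,{\rm d}\mu,
\]
and the first variation in the direction of a normal field $\varphi N$ is
\[
\delta\mathcal{A}_g[\Sigma](\varphi N)=-\int_{\Sigma}\bigl(H-\langle N,\mathbf e_{n+1}\rangle\bigr)\varphi\, e^{x_{n+1}}\,{\rm d}\mu,
\]
so $\Sigma$ is $g$-minimal if and only if \eqref{TS-Eq.} holds.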
We will say that a translating soliton $M$ in $\mathbb{R}^{n+1}$ is {\bf complete} if $M$ is complete as hypersurface in $\mathbb{R}^{n+1}$ with the Euclidean metric.
This duality, which allows one to regard translating solitons as minimal hypersurfaces, was the key point that allowed F. Mart\'in and the author \cite{Gama-Martin} to use tools from the theory of varifolds to conclude that a translating soliton $C^1$-asymptotic to two half-hyperplanes outside a non-vertical cylinder in $\mathbb{R}^{n+1}$ must be either a hyperplane parallel to $\mathbf e_{n+1}$ or an element of the family of the tilted grim reaper cylinders.
The family of the tilted grim reaper cylinders is the family of graphs given by the one-parameter family of functions \[f_\theta:\left(-\frac{\pi}{2\cos \theta},\frac{\pi}{2\cos \theta}\right)\times\mathbb{R}^{n-1}\to\mathbb{R}\] given by $f_\theta(x_1,\ldots,x_n)=x_n\tan(\theta)-\sec^2(\theta)\log\cos(x_1\cos(\theta)), \; \theta\in[0,\pi/2)$. Besides the previous result, the method used in \cite{Gama-Martin} also implies, for a vertical cylinder and dimension $n < 7$, that the hypersurface $M$ must coincide with a hyperplane parallel to $\mathbf e_{n+1}$. Thus it remained open whether the same type of result is true in any dimension, i.e., whether the hyperplanes parallel to $\mathbf e_{n+1}$ are the unique examples that, outside a vertical cylinder, are $C^1-$asymptotic to two half-hyperplanes.
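For $\theta=0$ and $n=1$, $f_0(x)=-\log\cos x$ is the classical grim reaper curve, and \eqref{TS-Eq.} can be checked directly; the following numerical verification is our own illustrative sketch, not part of the argument:

```python
import math

# Grim reaper curve y(x) = -log(cos x) on (-pi/2, pi/2), n = 1.
# Its graph is a translator: the curvature
#   k = y'' / (1 + y'^2)^{3/2}
# equals <N, e_2> = 1 / sqrt(1 + y'^2); both sides reduce to cos x.
for x in [-1.2, -0.5, 0.0, 0.7, 1.3]:
    yp = math.tan(x)             # y'  = tan x
    ypp = 1.0 / math.cos(x) ** 2  # y'' = sec^2 x
    k = ypp / (1.0 + yp ** 2) ** 1.5
    n_dot_e2 = 1.0 / math.sqrt(1.0 + yp ** 2)
    assert abs(k - n_dot_e2) < 1e-12
print("translator equation verified at sample points")
```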
In this paper, we will prove that a variation of the method used in \cite{Gama-Martin}, together with the result on the connectedness of the regular set of a stationary varifold due to Ilmanen \cite{Ilmanen-maximum} and a sharp version of the maximum principle due to N. Wickramasekera \cite{Wickramasekera}, allows us to conclude that the hyperplanes parallel to $\mathbf e_{n+1}$ are the unique examples of translating solitons $C^1-$asymptotic to two half-hyperplanes outside a vertical cylinder in $\mathbb{R}^{n+1}$, in any dimension.
It is important to point out here that the main theorem of this paper and the main theorem obtained in \cite{MPGSHS15} (for dimension three) and in \cite{Gama-Martin} (for arbitrary dimension) give a complete characterization of all the translators which are $C^1-$asymptotic to two half-hyperplanes outside a cylinder in $\mathbb{R}^{n+1}$, up to rotations fixing $\mathbf e_{n+1}$ and translations. More precisely, the next result holds for all dimensions. Here ${\bf u}_\theta=-\sin (\theta) \cdot \mathbf e_n+\cos(\theta) \cdot \mathbf e_{n+1}.$
\begin{Theorem} \label{th:41}
Let $M\hookrightarrow \mathbb{R}^{n+1}$ be a complete, connected, properly embedded translating soliton and consider the cylinder $\textstyle{\mathcal{C}_\theta(r):=\{x\in \mathbb{R}^{n+1} \; : \; \langle x,\mathbf e_{1}\rangle^2+\langle {\bf u}_{\theta},x\rangle^2 \leq r^2\}},$ where $r>0.$ Assume that $M$ is $C^{1}$-asymptotic to two half-hyperplanes outside $\mathcal{C}_\theta(r)$.
\begin{itemize}
\item[i.] If $\theta\in[0,\pi/2)$, then we have one, and only one, of these two possibilities:
\begin{enumerate}
\item[a.] Both half-hyperplanes are contained in the same hyperplane $\Pi$ parallel to $\mathbf e_{n+1}$ and $M$ coincides with $\Pi$;
\item[b.] The half-hyperplanes are included in different parallel hyperplanes and $M$ coincides with a vertical translation of the tilted grim reaper cylinder associated to $\theta$.
\end{enumerate}
\item[ii.]If $\theta=\pi/2$, then $M$ coincides with a hyperplane parallel to $\mathbf e_{n+1}$.
\end{itemize}
\end{Theorem}
Notice that this theorem is sharp in several senses. If we increase the number of half-hyperplanes, then there are many counterexamples. The cylinder over the pitchfork translator obtained recently by D. Hoffman, F. Mart\'in and B. White in \cite{Hoffman-New} is an example of a complete, connected, properly embedded translating soliton which is $C^1-$asymptotic to 4 half-hyperplanes outside a cylinder in $\mathbb{R}^{n+1}$. In general, the cylinders over the examples obtained by X. Nguyen in \cite{Nguyen-1},\cite{Nguyen-2} and \cite{Nguyen-3} give similar examples which are $C^1-$asymptotic to $2 k$ half-hyperplanes outside a cylinder, for any $k \geq 2$. The examples given by Nguyen have infinite topology, whereas the pitchfork translator is simply connected.
We would like to point out that the number of asymptotic half-hyperplanes cannot be odd, because each loop in $\mathbb{R}^{n+1}$ must intersect each properly embedded hypersurface in $\mathbb{R}^{n+1}$ at an even number of points (counting multiplicity); so if such an example existed, we could find a loop intersecting it at an odd number of points.
On the other hand, the hypothesis about the asymptotic behaviour outside a cylinder is also necessary as it is shown by the examples obtained by Hoffman, Ilmanen, Mart\'in and White in \cite{Hoffman}.
\section*{Acknowledgements}
I would like to thank Francisco Mart\'in for valuable conversations and suggestions about this work. I would like to thank the referee for his valuable suggestions about the manuscript.
\section{Preliminaries}\label{Preliminaries}
Let $\Pi$ be a hyperplane in $\mathbb{R}^{n+1}$ and $\nu$ a unit normal along $\Pi$ with respect to the Euclidean metric in $\mathbb{R}^{n+1}$. Suppose that $u:\Omega\to\mathbb{R}$ is a smooth function on a domain $\Omega\subset\Pi$. The set ${\rm Graph}^\Pi[u]$ defined by
\[{\rm Graph}^{\Pi}[u]:=\{x+u(x)\nu\colon x\in\overline{\Omega}\},\]
is called the graph of $u$. Notice that we can orient ${\rm Graph}^{\Pi}[u]$ by the unit normal
\[
N=\frac{1}{W}(\nu- D u),
\]
where $Du$ indicates the gradient of $u$ on $\Pi$ with respect to the Euclidean metric and $W^2=1+|D u|^2.$
With this orientation for ${\rm Graph}^{\Pi}[u]$ we have that $x\mapsto\langle N,\nu\rangle$ is a positive Jacobi field on ${\rm Graph}^\Pi[u]$ for the Jacobi operator associated to the metric $g$ in $\mathbb{R}^{n+1}$. Therefore, applying a strategy similar to that used by Shahriyari in \cite{SHAHRIYARI15}, we conclude that all such graphs are stable in $\mathbb{R}^{n+1}$ with the metric $g$.
Actually, these graphs satisfy a stronger property than stability. Using the method developed by Solomon in \cite{Solomon} (see also \cite{Gama}) we shall conclude that graphs are area-minimizing inside the cylinder over their domain. More precisely, we have the next proposition. Here $\mathcal{A}_g[\Sigma]$ indicates the area of the hypersurface $\Sigma$ in $\mathbb{R}^{n+1}$ with the Ilmanen's metric $g.$
\begin{Proposition}\label{homology-general.}
Let $u:\overline{\Omega}\to\mathbb{R}$ be a smooth function on a domain $\overline{\Omega}\subset\Pi$, where $\Pi$ is a hyperplane in $\mathbb{R}^{n+1}$. Suppose that ${\rm Graph}^{\Pi}[u]$ is a translating graph in $\mathbb{R}^{n+1}.$ Assume that $\Sigma$ is any other hypersurface inside the cylinder $\mathcal{C}(\Omega)=\{x+s\nu\colon x\in\overline{\Omega}\ {\rm and}\ s\in\mathbb{R}\}$ with $\partial \Sigma=\partial {\rm Graph}^{\Pi}[u].$ Then we have
\[
\mathcal{A}_g[{\rm Graph}^{\Pi}[u]]\leq \mathcal{A}_g[\Sigma].
\]
\end{Proposition}
\begin{proof}
Let $N=\frac{1}{W}(\nu-D u)$ be the unit normal along ${\rm Graph}^{\Pi}[u].$ Suppose first that $\Sigma$ is a hypersurface in $\mathcal{C}(\Omega)$ that lies on one side of ${\rm Graph}^{\Pi}[u]$, and let $U$ be the domain in $\mathcal{C}(\Omega)$ bounded by $\Sigma$ and ${\rm Graph}^{\Pi}[u]$. Consider the vector field $X$ in $\mathcal{C}(\Omega)$ obtained from the unit normal $N$ of ${\rm Graph}^{\Pi}[u]$ by parallel transport along the flow lines of $\nu$. That is, $X$ is given by
\[
X(p,s)=\frac{e^{x_{n+1}}}{W}(\nu-D u)\ {\rm for\ all}\ (p,s)\in\mathcal{C}(\Omega).
\]
Using that ${\rm Graph}^{\Pi}[u]$ satisfies \eqref{TS-Eq.} in $\mathbb{R}^{n+1}$ one gets
\begin{eqnarray*}
\operatorname{div}_{\mathbb{R}^{n+1}} X=0.
\end{eqnarray*}
Thus the divergence theorem, applied to $U$ and $X$ in $\mathbb{R}^{n+1}$ with respect to the Euclidean metric, implies, up to a sign, that
\begin{eqnarray*}
0&=&\int_{{\rm Graph}^{\Pi}[u]}\langle X,N\rangle{\rm d}\mu_{{\rm Graph}^{\Pi}[u]}-\int_{\Sigma}\langle X,N_{\Sigma}\rangle{\rm d}\mu_\Sigma\\
&\geq&\int_{{\rm Graph}^{\Pi}[u]}e^{x_{n+1}}{\rm d}\mu_{{\rm Graph}^{\Pi}[u]}-\int_{\Sigma}e^{x_{n+1}}{\rm d}\mu_\Sigma=\mathcal{A}_g[{\rm Graph}^{\Pi}[u]]-\mathcal{A}_g[\Sigma].
\end{eqnarray*}
This completes the proof when $\Sigma$ lies on one side of ${\rm Graph}^{\Pi}[u]$. The general case can be obtained by breaking the hypersurface $\Sigma$ into finitely many parts so that each part lies on one side of ${\rm Graph}^{\Pi}[u].$
\end{proof}
\begin{Remark}
This proposition was also obtained by Xin in \cite{Xin} for $\nu=\mathbf e_{n+1}.$
\end{Remark}
Next we define what it means for a hypersurface to be $C^1-$asymptotic to a half-hyperplane.
\begin{Definition}\label{Def. Asymptotic}
Let $\mathcal{H}$ be an open half-hyperplane in $\mathbb{R}^{n+1}$ and let $w$ be the unit inward pointing normal of $\partial \mathcal{H}$. For a fixed positive number $\delta$, denote by $\mathcal{H}(\delta)$ the set given by
\begin{equation*}
\mathcal{H}(\delta):=\left\{p+tw:p\in \partial \mathcal{H}\ \operatorname{and}\ t>\delta\right\}.
\end{equation*}
We say that a smooth hypersurface $M$ is $C^{k}-$asymptotic to the open half-hyperplane $\mathcal{H}$ if $M$ can be represented as the graph of a $C^{k}$ function $\varphi:\mathcal{H}\longrightarrow \mathbb{R}$ such that for every $\epsilon>0$ there exists $\delta>0$ so that for any $j\in\{1,2,\ldots,k\}$ it holds
\begin{equation*}
\sup_{\mathcal{H}(\delta)}|\varphi|<\epsilon\ \operatorname{and}\ \sup_{\mathcal{H}(\delta)}|D^j\varphi|<\epsilon.
\end{equation*}
We say that a smooth hypersurface $M$ is $C^{k}-$asymptotic outside a cylinder to two half-hyperplanes $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ provided there exists a solid cylinder $\mathcal{C}$ such that:
\begin{itemize}
\item[i.]The solid cylinder $\mathcal{C}$ contains the boundaries of the half-hyperplane $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$,
\item[ii.]$M\setminus \mathcal{C}$ consists of two connected components $M_{1}$ and $M_{2}$ that are $C^{k}-$asymptotic to $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$, respectively.
\end{itemize}
\end{Definition}
\begin{Remark}
Observe that the solid cylinders in $\mathbb{R}^{n+1}$ in the definition are those isometric to $D(r)\times\mathbb{R}^{n-1},$ where $D(r)$ indicates the disk of radius $r$ in $\mathbb{R}^2$.
\end{Remark}
We need some notation from the theory of varifolds (see \cite{Simon} for more information about this subject). Let $V$ be an n-dimensional varifold in $U$, where $U$ is an open subset of $\mathbb{R}^{n+1}.$
\begin{Definition}
We define ${\rm reg} V$ as the set of all points $p\in U\cap\operatorname{spt} V$ for which there exists an open ball $B_\epsilon(p)\subset U$ such that $B_\epsilon(p)\cap\operatorname{spt} V$ is a hypersurface of class $C^1$ in $B_\epsilon(p)$ without boundary. The set ${\rm reg} V$ is called the regular set of $V$. The complement of ${\rm reg} V$ in $U$, denoted by ${\rm sing} V:=U\setminus{\rm reg} V$, is called the singular set of $V.$
\end{Definition}
\begin{Definition}
We say that an $n-$dimensional varifold $V$ is connected provided that $\operatorname{spt} V$ is a connected subset in $U.$
\end{Definition}
The following compactness result (in the class varifolds) was proven in \cite{Gama-Martin}.
\begin{Lemma}\label{remarkcompctness}
Let $M^n\hookrightarrow \mathbb{R}^{n+1}$ be a complete, connected, properly embedded translating soliton and $\displaystyle{\mathcal{C}(r):=\{x\in \mathbb{R}^{n+1} \; : \; \langle x,\mathbf e_{1}\rangle^2+\langle x, \mathbf e_{n}\rangle^2 \leq r^2\}},$ for $r>0.$ Assume that $M$ is $C^{1}$-asymptotic to two half-hyperplanes outside $\mathcal{C}(r)$. Suppose that $\left\{b_{i}\right\}_{i\in \mathbb{N}}$ is a sequence in $[\mathbf e_1,\mathbf e_{n}]^\perp$ and let $\left\{M_{i}\right\}_{i\in \mathbb{N}}$ be the sequence of hypersurfaces given by $M_{i}:=M+b_{i}.$ Then there exists a connected stationary integral varifold $M_\infty$ and a subsequence $\{M_{i_k}\}\subset\{M_i\}$ so that
\begin{itemize}
\item[$($i$)$]$M_{i_k}\stackrel{*}{\rightharpoonup} M_{\infty}$ in $\mathbb{R}^{n+1}$;
\item[$($ii$)$]${\rm sing}\ M_\infty$ satisfies $\mathcal{H}^{n-7+\beta}({\rm sing}\ M_{\infty}\cap (\mathbb{R}^{n+1}\setminus \mathcal{C}(r)))=0$ for all $\beta>0$ if $n\geq7$, ${\rm sing}\ M_{\infty}\cap (\mathbb{R}^{n+1}\setminus \mathcal{C}(r))$ is discrete if $n=7$ and ${\rm sing}\ M_{\infty}\cap (\mathbb{R}^{n+1}\setminus \mathcal{C}(r))=\varnothing$ if $1\leq n\leq6$;
\item[$($iii$)$] $M_{i_k}\to\operatorname{spt} M_{\infty}$ in $\mathbb{R}^{n+1}\setminus (\mathcal{C}(r)\cup{\rm sing}\ M_{\infty}).$
\end{itemize}
\end{Lemma}
\section{Main theorem}\label{Main theorem}\label{Main}
Now we are going to see how we can get the main result from the results in Section \ref{Preliminaries}.
\begin{Theorem}\label{limitcase}
Let $M^n\hookrightarrow \mathbb{R}^{n+1}$ be a complete, connected, properly embedded translating soliton and $\displaystyle{\mathcal{C}(r):=\{x\in \mathbb{R}^{n+1} \; : \; \langle x,\mathbf e_{1}\rangle^2+\langle x, \mathbf e_{n}\rangle^2 \leq r^2\}},$ for $r>0.$ Assume that $M$ is $C^{1}$-asymptotic to two half-hyperplanes outside $\mathcal{C}(r)$. Then $M$ must coincide with a hyperplane parallel to $\mathbf e_{n+1}.$
\end{Theorem}
\begin{proof}
We start by proving the following.
\begin{claim}\label{parallel}
The half-hyperplanes $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ are parallel.
\end{claim}
\begin{proof}[Proof of the Claim \ref{parallel}]
Assume this is not true. Then we could take a hyperplane $\tilde{\Pi}$ parallel to $\mathbf e_{n+1}$ such that it does not intersect $M$ and such that the normal vector $v$ to $\tilde{\Pi}$ is not perpendicular to $w_{1}$ and $w_{2}$, where $w_i$ denotes the unit inward pointing normal of $\partial \mathcal{H}_i$. Next we translate $\tilde{\Pi}$ by $t_0 \in \mathbb{R}$ in the direction of $v$ until we get a hyperplane $\tilde{\Pi}_{t_0}:=\tilde{\Pi}+t_{0}v$ in such a way that either $\tilde{\Pi}_{t_0}$ and $M$ have a first point of contact or $\operatorname{dist} \left(\tilde{\Pi}_{t_0},M\right)=0$ and $\tilde{\Pi}_{t_0}\cap M=\varnothing.$ The first case is not possible by the maximum principle. The second case implies that there exists a sequence $\{p_i=(p_1^i,\ldots,p_{n+1}^i)\}$ in $M$ so that $\lim_i\operatorname{dist}(p_i,\tilde{\Pi}_{t_0})=0$, $\langle p_i,\mathbf e_1\rangle\to a_1$ and $\langle p_i,\mathbf e_n\rangle\to a_n$. In particular, we also have $(a_1,0,\ldots,0,a_n,0)\in\tilde{\Pi}_{t_0}$. Consider the sequence of hypersurfaces
\(
\{M_i:=M-(0,p_2^i,\ldots,p_{n-1}^i,0,p_{n+1}^i)\}.
\)
By Lemma \ref{remarkcompctness}, after passing to a subsequence, $M_i\stackrel{*}{\rightharpoonup} M_\infty$, where $M_\infty$ is a connected stationary integral varifold in $\mathbb{R}^{n+1}$ and $(a_1,0,\ldots,0,a_n,0)\in\operatorname{spt} M_\infty$ by \cite{Simon}[Corollary 17.8]. Thus by \cite{White}[Theorem 4] we conclude $\operatorname{spt} M_\infty=\tilde{\Pi}_{t_0}$, which is impossible because of the behaviour of $\operatorname{spt} M_\infty$ outside $\mathcal{C}(r).$
\end{proof}
It is clear that neither $\mathcal{H}_{1}\subset\mathcal{H}_{2}$ nor $\mathcal{H}_{2}\subset\mathcal{H}_{1}.$
Denote by $\Pi_{1}$ and $\Pi_{2}$ the hyperplanes that contain the half-hyperplanes $\mathcal{H}_{1}$ and $\mathcal{H}_{2},$ respectively. Observe that the previous claim implies that $\Pi_{1}$ and $\Pi_{2}$ are parallel. We would like to conclude that $\Pi_1=\Pi_2.$ Assume the contrary, i.e., that the hyperplanes $\Pi_1$ and $\Pi_2$ are different.
\begin{claim}\label{slablimitation}
$M$ lies in the slab between $\Pi_1$ and $\Pi_2.$
\end{claim}
\begin{proof}[Proof of the Claim \ref{slablimitation}]
Let $S$ be the closed slab bounded by $\Pi_1$ and $\Pi_2$ in $\mathbb{R}^{n+1}.$ If $M\setminus S\neq\varnothing,$ then proceeding as in the first paragraph we could find a hyperplane $\tilde{\Pi}$ parallel to $\Pi_1$ in $\mathbb{R}^{n+1}\setminus S$ so that either $\tilde{\Pi}$ and $M$ have a point of contact or $\operatorname{dist}(\tilde{\Pi},M)=0$. However, arguing as in the first paragraph, and taking into account the behaviour of $M$, we conclude that both situations are impossible. So $M$ must lie in $S.$ Notice that $M$ intersects neither $\Pi_1$ nor $\Pi_2$, by the maximum principle.
\end{proof}
Next, we need to study the behaviour of $M$ inside the solid cylinder $\mathcal{C}(s)$ with respect to the hyperplane $\Pi_1$ and $\Pi_2.$
\begin{claim}\label{Cylinder bounded}
For all $s\geq r$ and $j\in\{1,2\}$ we have $\operatorname{dist}(M\cap\mathcal{C}(s),\Pi_j)>0$.
\end{claim}
\begin{proof}[Proof of the Claim \ref{Cylinder bounded}]
Otherwise we could find a sequence $\{p_i=(p_1^i,\ldots,p_{n+1}^i)\}$ in $M\cap\mathcal{C}(s)$ so that \(\lim_i\operatorname{dist}(p_i,\Pi_j)=0\). Considering the sequence of hypersurfaces \(\{M_i:=M-(0,p_2^i,\ldots,p_{n-1}^i,0,p_{n+1}^i)\}\), by Lemma \ref{remarkcompctness} we would have that $M_i\stackrel{*}{\rightharpoonup} M_\infty$, after passing to a subsequence, where $M_\infty$ is a connected n-dimensional stationary integral varifold. Using that $\{p_i\}$ lies in $\mathcal{C}(s)$ we may also assume $\langle p_i,\mathbf e_1\rangle\to a_1$ and $\langle p_i,\mathbf e_n\rangle\to a_n$. Now $(a_1,0,\ldots,0,a_n,0)\in\operatorname{spt}\ M_\infty\cap \Pi_j$. So by \cite{White}[Theorem 4] we would have $\operatorname{spt} M_\infty=\Pi_j,$ which is impossible because $\Pi_1\neq\Pi_2$ and part of $\operatorname{spt} M_\infty$ is close to $\Pi_1$ and part is close to $\Pi_2.$
\end{proof}
We know, thanks to our hypothesis over $M$, that $M\setminus \mathcal{C}(r)={\rm Graph}^{\Pi_1}[\varphi_1]\cup{\rm Graph}^{\Pi_2}[\varphi_2]$, where $\varphi_j:\mathcal{H}_j\to\mathbb{R}$ is a smooth function and it holds
\[
\sup_{\mathcal{H}_j(\delta)}|\varphi_j|<\epsilon\ {\rm and}\ \sup_{\mathcal{H}_j(\delta)}|D\varphi_j|<\epsilon,
\]
where $\delta$ depends on $\epsilon$ and $\delta\to+\infty$ as $\epsilon\to0.$ Fix some $s>r$ and define \[\epsilon:=\frac{1}{10}\min_{j}\{\operatorname{dist}(M\cap\mathcal{C}(s),\Pi_j)\}>0.\] For this choice of $\epsilon$ we take $\delta>0$ so that
\[
\sup_{\mathcal{H}_j(\delta)}|\varphi_j|<\epsilon\ {\rm and}\ \sup_{\mathcal{H}_j(\delta)}|D\varphi_j|<\epsilon.
\]
If we assume that $\Pi_1\neq\Pi_2,$ then these choices lead us to a contradiction as follows: let $\nu$ be the unit normal vector to $\Pi_{1}$ pointing outside $S$ and define $s_{0}=\operatorname{dist}(\Pi_1,\Pi_2)>0$. Notice that for this choice of $s_0$ the translate $M+s_{0}\nu$ does not intersect $S,$ but the wing of $M+s_0\nu$ corresponding to $\mathcal{H}_2+s_0\nu$ is asymptotic to a half-hyperplane in $\Pi_1$ whose boundary has unit inward pointing normal $-w_1$.
Define $M_\epsilon:=\{x\in M\colon \min\{\operatorname{dist}(x,\Pi_1),\operatorname{dist}(x,\Pi_2)\}\geq\epsilon\}$. By Claim \ref{Cylinder bounded} one has $M\cap\mathcal{C}(s)\subset M_\epsilon.$ Notice that, taking $t_{0}>0$ sufficiently large, we may assume that $M_\epsilon+s_{0}\nu+t_{0}w_{1}$ lies in $\mathcal{Z}_{1,2\delta}^{+}$ and $\mathcal{C}(s)\cap(\mathcal{C}(s)+s_{0}\nu+t_{0}w_{1})=\varnothing,$ where $\mathcal{Z}_{1,2\delta}$ denotes the half-space in $\mathbb{R}^{n+1}$ that contains $\mathcal{H}_1(2\delta)$, $\partial \mathcal{Z}_{1,2\delta}$ has unit inward pointing normal $w_1$ and $\partial\mathcal{H}_1 \subset\partial \mathcal{Z}_{1,2\delta}$.
Define the set \(\displaystyle{\mathcal{A}:=\{s\in[0,s_0]\colon (M+s\nu+t_{0}w_{1})\cap M=\varnothing\}},\) and let $s_{1}:=\inf\mathcal{A}.$ Notice that from our assumptions on $s_0$ and $\epsilon$ we have $s_1>0$. We have two possibilities for $s_1$: either $s_{1}\notin\mathcal{A}$ or $s_{1}\in\mathcal{A}$. The first case implies that $M+s_{1}\nu+t_{0}w_{1}$ and $M$ have points of contact, which is impossible by the maximum principle and our hypothesis on $M.$ Consequently $s_{1}\in\mathcal{A},$ and so \(\displaystyle{\operatorname{dist}\left(M+s_{1}\nu+t_{0}w_{1},M\right)}=0\) and $\textstyle{\{M+s_{1}\nu+t_{0}w_{1}\}\cap M=\varnothing}.$ This fact together with our choice of $\epsilon$ implies that there exist sequences $\{p_{i}\}$ in $M\setminus \mathcal{C}(s)$ and $\{q_{i}\}$ in $(M\setminus\mathcal{C}(s))+s_{1}\nu+t_{0}w_{1}$ such that
$\operatorname{dist}(p_i,\mathcal{C}(s)\cap M)>2\epsilon,$ $\operatorname{dist}(q_i,\mathcal{C}(s)\cap M)>2\epsilon,$ $\operatorname{dist}(p_i,\mathcal{C}(s)\cap M+s_{1}\nu+t_{0}w_{1})>2\epsilon,$ $\operatorname{dist}(q_i,\mathcal{C}(s)\cap M+s_{1}\nu+t_{0}w_{1})>2\epsilon$ and
$\operatorname{dist}(p_{i},q_{i})=0.$ Observe that we can assume $\{\langle q_{i},\mathbf e_{1}\rangle\}, \{\langle p_{i},\mathbf e_{1}\rangle\}\to a$ and $\{\langle q_{i},\mathbf e_{n}\rangle\}, \{\langle p_{i},\mathbf e_{n}\rangle\}\to b$.
In $\mathbb{R}^{n+1}\setminus(\mathcal{C}(s)\cup\mathcal{C}(s)+s_{1}\nu+t_{0}w_{1})$ consider the following sequences \[\displaystyle{\{M_{i}:=(M^1\setminus\mathcal{C}(s))-(0,p_{2}^i,\ldots,p_{n-1}^i,0,p_{n+1}^i)\}}\] and \[\displaystyle{\{\widehat{M}_{i}:=(M^2\setminus\mathcal{C}(s))+s_{1}\nu+t_{0}w_{1}-(0,q_{2}^i,\ldots,q_{n-1}^i,0,q_{n+1}^i)\}},\] where $M^j$ indicates the wing of $M$ which is asymptotic to $\mathcal{H}_j$. In particular, each $M_i$ and $\widehat{M}_i$ is a graph over an open half-hyperplane in $\mathcal{H}_1$. Hence they are stable hypersurfaces, and the sequences $\{M_i\}$ and $\{\widehat{M}_i\}$ have locally bounded area by Proposition \ref{homology-general.}.
Now \cite{Wickramasekera}[Theorem 18.1] implies that, after passing to a subsequence, $M_{i}\rightharpoonup M_{\infty}$ and $\widehat{M}_{i}\rightharpoonup\widehat{M}_{\infty}$ in $\mathbb{R}^{n+1}\setminus(\mathcal{C}(s)\cup\mathcal{C}(s)+s_{1}\nu+t_{0}w_{1}),$ where $M_{\infty}$ and $\widehat{M}_{\infty}$ are connected stable integral varifolds so that ${\rm sing}\ M_\infty$ and ${\rm sing}\ \widehat{M}_\infty$ satisfy $\mathcal{H}^{n-7}({\rm sing}\ M_\infty)=\mathcal{H}^{n-7}({\rm sing}\ \widehat{M}_\infty)=0$ in $\mathbb{R}^{n+1}\setminus(\mathcal{C}(s)\cup\mathcal{C}(s)+s_{1}\nu+t_{0}w_{1})$, and $(a,0,\ldots,0,b,0)\in\operatorname{spt} M_{\infty}\cap\operatorname{spt} \widehat{M}_{\infty}$ by \cite{Simon}[Corollary 17.8]. The connectedness of the support can be obtained as in \cite{Gama-Martin}[Lemma 3.1].
On the other hand, \cite{Ilmanen-maximum}[Theorem A (ii)] implies that ${\rm reg}\ M_{\infty}$ and ${\rm reg}\ \widehat{M}_{\infty}$ are connected subsets of $\mathbb{R}^{n+1}\setminus(\mathcal{C}(s)\cup\mathcal{C}(s)+s_{1}\nu+t_{0}w_{1})$. Consequently, the asymptotic behaviour of $\operatorname{spt} M_{\infty}$ and $\operatorname{spt} \widehat{M}_{\infty}$ implies that ${\rm reg}\ M_{\infty}$ does not intersect ${\rm reg}\ \widehat{M}_{\infty}$. Thus $\operatorname{spt} M_{\infty}\cap\operatorname{spt} \widehat{M}_{\infty}\subset{\rm sing}\ M_{\infty}\cup{\rm sing}\ \widehat{M}_{\infty},$ and so $\mathcal{H}^{n-1}(\operatorname{spt} M_{\infty}\cap\operatorname{spt} \widehat{M}_{\infty})=0$; hence \cite{Wickramasekera}[Theorem 19.1 (a)] implies that $\operatorname{spt} M_{\infty}\cap\operatorname{spt} \widehat{M}_{\infty}=\varnothing$, which is impossible since $(a,0,\ldots,0,b,0)\in\operatorname{spt} M_{\infty}\cap\operatorname{spt} \widehat{M}_{\infty}.$ Therefore, we must have $\Pi_1=\Pi_2$, and proceeding as in Claim \ref{slablimitation} we conclude that $M=\Pi_1$. This concludes the proof of the theorem.
\end{proof}
\bibliographystyle{amsplain}
\section{Conclusion}
We developed a novel theory for signed normalized cuts
as well as an algorithm for finding the discrete solution.
We showed that we can find superior synonym clusters which
do not require new word embeddings, but simply overlay thesaurus information.
The clusters are general and can be used with several out of the box word embeddings.
By accounting for antonym relationships, our algorithm greatly outperforms simple normalized cuts, even with Huang's word embeddings, which are designed to capture semantic relations.
Finally, we examined our clustering method on the sentiment analysis task from the \citet{socher2013recursive} sentiment treebank dataset and showed improved performance versus comparable models.
This method could be applied to a broad range of NLP tasks, such as prediction of social group clustering, identification of personal versus non-personal verbs, and analysis of clusters which capture positive, negative, and objective emotional content. It could also be used to explore multi-view relationships, such as aligning synonym clusters across multiple languages. Another possibility is to use thesauri and word vector representations together with word sense disambiguation
to generate synonym clusters for multiple senses of words.
Finally, our signed clustering could be extended to evolutionary signed clustering.
\section{Introduction}
While vector space models \citep{turney2010frequency} such as Eigenwords, GloVe, or word2vec capture relatedness, they do not adequately encode synonymy and similarity \citep{mohammad2013computing,scheible2013uncovering}.
Our goal was to create clusters of synonyms or semantically-equivalent words
and linguistically-motivated unified constructs.
We developed a novel theory and method that extend multiclass normalized cuts
(K-cluster) to signed graphs \citep{gallier2015spectral}, which
allows the incorporation of semi-supervised
information. Negative edges serve as repellent or opposite relationships between nodes.
In distributional vector representations opposite relations
are not fully captured. Take, for example, words such as ``great'' and ``awful'', which can appear with similar frequency in the same sentence structure: ``today is a great day'' and ``today is an awful day''.
Word embeddings, which are successful in a wide array of NLP tasks, fail to capture this antonymy because they follow the {\it distributional hypothesis} that
similar words are used in similar contexts \citep{harris1954distributional}, thus assigning small cosine or euclidean distances between the
vector representations of ``great'' and ``awful''.
Our signed spectral normalized graph cut algorithm (henceforth, signed clustering) builds antonym relations into the vector space, while
maintaining distributional similarity. Furthermore, another strength of K-clustering of signed graphs
is that it can be used collaboratively with other methods for augmenting semantic
meaning. Signed clustering leads to improved
clusters over spectral clustering of word embeddings, and has better coverage than thesaurus look-up.
This is because thesauri erroneously give equal weight to rare senses of word, such as ``absurd'' and its rarely used synonym ``rich''.
Also, the overlap between thesauri is small, due to their manual creation.
\citet{lin1998automatic} found an overlap of only 0.178397 between the synonym sets of Roget's Thesaurus and WordNet 1.5.
We also found similarly small overlap between all three thesauri tested.
We evaluated our clusters by comparing them to different
vector representations. In addition, we evaluated our clusters against SimLex-999.
Finally, we tested our method on the sentiment analysis task.
Overall, signed spectral clustering provides a clean and elegant augmentation to current methods, and may have broad application to many fields.
Our main contributions are the novel method for signed clustering of signed graphs from \citet{gallier2015spectral},
the application of this method to create semantic word clusters that are agnostic to both the vector space representation and the thesaurus,
and the systematic creation and evaluation of word clusters using thesauri.
\subsection{Related Work}
Semantic word clusters and distributional thesauri have been well studied \citep{lin1998automatic,curran2004distributional}.
Recently there has been much work on incorporating synonyms and antonyms into word embeddings.
Most recent models either attempt to build richer contexts in order to capture semantic similarity,
or overlay thesaurus information in a supervised or semi-supervised manner.
\citet{tang2014learning} created sentiment-specific word embeddings (SSWE), which
were trained for Twitter sentiment.
\citet{yih2012polarity} proposed
polarity induced latent semantic analysis (PILSA) using thesauri,
which was extended by \citet{chang2013multi} to a multi-relational setting.
The Bayesian tensor factorization model (BPTF) was introduced in order to
combine multiple sources of information \citep{zhang2014word}.
\citet{faruqui2014retrofitting} used belief propagation to modify existing vector space representations.
The word embeddings on Thesauri and Distributional
information (WE-TD) model \citep{ono2015word} incorporated thesauri by
altering the objective function for word embedding representations.
Similarly, \citet{marcobaroni2multitask} introduced the multitask Lexical Contrast Model,
which extends the word2vec Skip-gram method to optimize for both context and synonym/antonym relations.
Our approach differs from the aforementioned methods in that
we created word clusters using the antonym relationships as negative links.
Similar to \citet{faruqui2014retrofitting} our signed clustering method uses existing vector representations to create word clusters.
To our knowledge, \citet{gallier2015spectral} is the first theoretical foundation of multiclass signed normalized cuts.
\citet{hou2005bounds} used positive degrees of nodes in the degree matrix
of a signed graph with weights (-1, 0, 1), which was advanced by
\citet{kolluri2004spectral,kunegis2010spectral} using absolute values of weights in the degree matrix.
Must-link / cannot-link soft spectral clustering \citep{rangapuramconstrained} shares similarities with our method, but only in cases where cannot-link edges are present: our objective omits the cannot-link weight term, as well as the volume of cannot-link edges within the clusters.
Furthermore, our optimization method differs from that of must-link / cannot-link algorithms.
We developed a novel theory and algorithm that extends the
clustering of \citet{shi2000normalized,yu2003multiclass} to the multi-class signed graph case \citep{gallier2015spectral}.
\section{Metrics}
The evaluation of clusters is non-trivial to generalize, so we used both intrinsic and extrinsic methods of evaluation.
Intrinsic evaluation is twofold. First, we examine cluster entropy, purity, the number of disconnected components, and the
number of negative edges, comparing multiple word embeddings and thesauri to show the stability of our method.
Second, we evaluate against a gold standard
designed for the task of capturing word similarity, reporting detailed
accuracy and recall.
For extrinsic evaluation, we use our clusters to
identify polarity and apply this to the sentiment analysis task.
\subsection{Similarity Metric and Edge Weight}
For clustering there are several choices to make, the first being the similarity metric.
In this paper we chose the heat kernel based on the Euclidean distance between word vector
representations. We define the distance between two words $w_i$ and $w_j$ as
$dist(w_i, w_j) = \norme{w_i - w_j}$.
Following \citet{belkin2003laplacian}, who motivate the heat kernel for graph construction, we set
\[
W_{ij} =
\begin{cases}
0 & \text{ if } e^{-\frac{dist(w_i, w_j)^2}{\sigma}} < thresh \\
e^{-\frac{dist(w_i, w_j)^2}{\sigma}} & \text{ otherwise}
\end{cases}.
\]
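As a concrete illustration, the thresholded heat-kernel weight matrix above can be computed in a few lines of NumPy. This is a minimal sketch; the function name and default values are ours, not from the paper:

```python
import numpy as np

def heat_kernel_weights(X, sigma=1.0, thresh=0.05):
    """Pairwise heat-kernel similarities W_ij = exp(-||w_i - w_j||^2 / sigma),
    zeroed out below `thresh` and on the diagonal (no self-loops)."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / sigma)
    W[W < thresh] = 0.0
    np.fill_diagonal(W, 0.0)
    return W
```

Rows of `X` are word embedding vectors; the result is a symmetric, non-negative weight matrix in which distant word pairs are disconnected by the threshold.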
The next choice, how to combine the word embeddings with the thesauri into a signed
graph, also has hyperparameters. We represent a thesaurus as a matrix where
\[
T_{ij} =
\begin{cases}
1 & \text{ if words $i$ and $j$ are synonyms} \\
-1 & \text{ if words $i$ and $j$ are antonyms} \\
0 & \text{ otherwise} \\
\end{cases}.
\]
Another alternative is to only look at the antonym information, so
\[
T^{ant}_{ij} =
\begin{cases}
-1 & \text{ if words $i$ and $j$ are antonyms} \\
0 & \text{ otherwise} \\
\end{cases}.
\]
We can write the signed graph as $\hat{W}_{ij} = \beta T_{ij} W_{ij}$, or in matrix form
$\hat{W} =\beta T \odot W$, where $\odot$ denotes the Hadamard product (element-wise multiplication); however,
this graph only contains the vocabulary that overlaps with the thesaurus.
To retain the full vocabulary we instead use $\hat{W} = \gamma W + \beta^{ant} T^{ant} \odot W + \beta T \odot W$.
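The combination $\hat{W} = \gamma W + \beta^{ant} T^{ant} \odot W + \beta T \odot W$ is a one-liner once $W$, $T$, and $T^{ant}$ are matrices over the same vocabulary. A minimal sketch (function name and defaults are ours):

```python
import numpy as np

def signed_graph(W, T, T_ant, gamma=1.0, beta=1.0, beta_ant=1.0):
    """Combine embedding similarities W with thesaurus matrices T (+1 synonyms,
    -1 antonyms, 0 otherwise) and T_ant (-1 antonyms only) into a signed
    weight matrix:
        W_hat = gamma * W + beta_ant * (T_ant o W) + beta * (T o W)
    The gamma * W term keeps words outside the thesaurus vocabulary connected."""
    return gamma * W + beta_ant * (T_ant * W) + beta * (T * W)
```

With the default weights, an antonym pair's positive embedding similarity is pushed negative, while a synonym pair's similarity is reinforced.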
\subsection{Evaluation Metrics}
We use thesaurus information to define two novel metrics: the number of
negative (antonym) edges (NNE) within the clusters, and the number of disconnected components (NDC) within the clusters when only synonym edges are considered.
Neither metric requires a gold standard, and for both, smaller is better.
\begin{align*}
NDC &= \sum_{r=1}^{k}{C_r}
\end{align*}
where $C_r$ is the number of connected components of the synonym-only subgraph induced by cluster $r$.
The NDC has the disadvantage of thesaurus coverage.
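One way to compute NDC is to run a graph search over synonym edges inside each cluster. The sketch below (our own, for illustration) uses a plain breadth-first search so that a singleton cluster counts as one component:

```python
import numpy as np

def ndc(synonym_adj, labels):
    """Number of disconnected components (NDC): for each cluster, count the
    connected components of the synonym-only subgraph induced by that cluster,
    then sum over all clusters."""
    total = 0
    for c in np.unique(labels):
        unseen = set(np.flatnonzero(labels == c).tolist())
        while unseen:                      # BFS over synonym edges
            stack = [unseen.pop()]
            while stack:
                u = stack.pop()
                for v in list(unseen):
                    if synonym_adj[u, v] > 0:
                        unseen.remove(v)
                        stack.append(v)
            total += 1                     # one finished component
    return total
```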
Figure 2 shows a graphical representation of the number of disconnected components and negative edges.
\begin{figure}[ht]
\label{fig:dc2}
\begin{subfigure}
\par
\label{fig:figure2}
\includegraphics[width=\linewidth]{cluster_2_dc.jpg}
{\small {\it Figure 2.1.} Cluster with two disconnected components.
All edges represent synonymy relations. The edge colors are only meant to highlight the different components.}
\end{subfigure}
\begin{subfigure}
\par
\label{fig:ant1}
\includegraphics[width=\linewidth]{cluster_1_ant.jpg}
{\small {\it Figure 2.2.} Cluster with one antonym relation.
The red edge represents the antonym relation. Blue edges represent synonymy relations.}
\end{subfigure}
\caption{Disconnected component and number of antonym evaluations.}
\end{figure}
Next we evaluate our clusters using an external gold standard.
Cluster purity and entropy \citep{zhao2001criterion} are defined as
\begin{align*}
Purity &= \sum_{r=1}^{k}{\frac{1}{n}\max_i(n_r^i)} \\
Entropy &= \sum_{r=1}^{k}{\frac{n_r}{n}\left({-\frac{1}{\log q}\sum_{i=1}^{q}{\frac{n_r^i}{n_r}\log \frac{n_r^i}{n_r}}}\right)}
\end{align*}
where $q$ is the number of classes, $k$ the number of clusters, $n_r$ is the size of cluster $r$, and $n_r^i$ is the
number of data points of class $i$ assigned to cluster $r$.
The purity and entropy measures improve (increased purity, decreased entropy) monotonically with the number of clusters.
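Both measures follow directly from their definitions; a minimal NumPy sketch (assuming at least two gold classes, so the $\log q$ normalization is well defined):

```python
import numpy as np

def purity_entropy(labels, classes):
    """Cluster purity and entropy (Zhao & Karypis, 2001). `labels` are cluster
    assignments, `classes` the gold classes; higher purity and lower entropy
    indicate better clusters."""
    n = len(labels)
    q = len(np.unique(classes))
    purity, entropy = 0.0, 0.0
    for r in np.unique(labels):
        members = classes[labels == r]
        n_r = len(members)
        counts = np.array([np.sum(members == c) for c in np.unique(classes)])
        purity += counts.max() / n
        p = counts[counts > 0] / n_r           # class proportions in cluster r
        entropy += (n_r / n) * (-(p * np.log(p)).sum() / np.log(q))
    return purity, entropy
```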
\section{Empirical Results}
In this section we begin with intrinsic analysis of the resulting clusters.
We then compare empirical clusters with
SimLex-999 as a gold standard for semantic word similarity.
Finally, we evaluate our metric using the sentiment prediction task.
Our synonym clusters are well suited for
this task, as including antonyms in clusters results in incorrect predictions.
\subsection{Simulated Data}
In order to evaluate our signed graph clustering method, we first focused on intrinsic measures of cluster quality.
Figure 3.2 demonstrates that the number of negative edges within a cluster is minimized using our clustering algorithm on simulated data. However, as the number of clusters becomes large, the number of disconnected components, which includes clusters of size one, increases. For our further empirical analysis, we used both the number of disconnected components as well as the number of antonyms within clusters in order to set the cluster size.
\begin{figure}[ht]
\label{fig:dcnemetric}
\begin{subfigure}
\par
\label{fig:simgraph}
\includegraphics[width=\linewidth]{simulated_signed_graph.jpg}
{\small {\it Figure 3.1.} Simulated signed graph.}
\end{subfigure}
\begin{subfigure}
\par
\label{fig:dcne}
\includegraphics[width=\linewidth]{DC_NE_vs_NC.jpg}
{\small {\it Figure 3.2.} This is a plot of the relationship between the number of disconnected components and negative edges within the clusters.}
\end{subfigure}
\caption{Graph of Disconnected Component and Negative Edge Relations}
\end{figure}
\subsection{Data}
\subsubsection{Word Embeddings}
For comparison, we used four different word embedding methods: Skip-gram vectors (word2vec) \citep{mikolov2013efficient}, Global vectors (GloVe) \citep{pennington2014glove}, Eigenwords \citep{dhillon2015eigenwords}, and Global Context (GloCon) \citep{huang2012improving} vector representation.
We used word2vec 300 dimensional embeddings which were trained using the word2vec code on several billion words of English comprising the entirety of Gigaword and the English discussion forum data gathered as part of BOLT. A minimal tokenization was performed based on CMU's twokenize\footnote{\url{https://github.com/brendano/ark-tweet-nlp}}.
For GloVe we used pretrained 200 dimensional vector embeddings\footnote{\url{http://nlp.stanford.edu/projects/GloVe/}} trained using Wikipedia 2014 + Gigaword 5 (6B tokens).
Eigenwords were trained on English Gigaword with no lowercasing or cleaning.
Finally, we used 50 dimensional vector representations from \citet{huang2012improving}, which
used the April 2010 snapshot of the Wikipedia corpus \cite{lin1998automatic,shaoul2010westbury},
with a total of about 2 million articles and 990 million tokens.
\subsubsection{Thesauri}
Several thesauri were used in order to test robustness: Roget's Thesaurus,
the Microsoft Word English (MS Word) thesaurus from \citet{samsonovic2010principal}, and WordNet 3.0 \citep{miller1995wordnet}.
\citet{jarmasz2004roget,hale1998comparison} have shown that Roget's thesaurus has better semantic similarity than WordNet.
This is consistent with our results using a larger dataset of SimLex-999.
We chose a subset of 5108 words for the training dataset, which had high overlap between various sources. Changes to the training dataset had minimal effects on the optimal parameters.
Within the training dataset, each of the thesauri had roughly 3700 antonym pairs, and combined they had 6680. However, the number of distinct connected components varied, with Roget's Thesaurus having the least (629), and MS Word Thesaurus (1162) and WordNet (2449) having the most. These ratios were consistent across the full dataset.
\subsection{Cluster Evaluation}
One of our main goals was to go beyond qualitative analysis into quantitative measures of
synonym clusters and word similarity.
In Table \ref{tab:qualclusters}, we show the 4 most-associated words with ``accept'', ``negative'' and ``unlike''.
\begin{table*}[!tbh]
\centering
\small
\begin{tabular}{|l||l|l|l|l|l|l|l|}
\hline
{\bf Ref word} & {\bf Roget} & {\bf WordNet} & {\bf MS Word} & {\bf W2V} & {\bf GloCon} & {\bf EW} & {\bf Glove} \\ \hline
accept
& adopt & agree & take & accepts & seek & approve & agree \\
& accept your fate & get & swallow & reject & consider & declare & reject \\
& be fooled by & fancy & consent & agree & know & endorse & willin \\
& acquiesce & hold & assume & accepting & ask & reconsider & refuse \\
negative
& not advantageous & unfavorable & severe & positive & reverse & unfavorable & positive \\
& pejorative & denial & hard & adverse & obvious & positive & impact \\
& pessimistic & resisting & wasteful & Negative & calculation & dire & suggesting \\
& no & pessimistic & charged & negatively & cumulative & worrisome & result \\
\hline
unlike
& {\bf no synonyms} & incongruous & different & Unlike & whereas & Unlike & instance \\
& & unequal & dissimilar & Like & true & Like & though \\
& & separate & & even & though & Whereas & whereas \\
& & hostile & & But & bit & whereas & likewise \\
\hline
\end{tabular}
\caption{Qualitative comparison of clusters.}
\label{tab:qualclusters}
\end{table*}
\subsubsection{Cluster Similarity and Hyperparameter Optimization}
For a similarity metric between any two words, we use the heat kernel of Euclidean distance, so $sim(w_i, w_j) = e^{-\frac{\norme{w_i-w_j}^2}{\sigma}}$.
The thesaurus matrix entry $T_{ij}$ has a weight of 1 if words $i$ and $j$ are synonyms, -1 if words $i$ and $j$ are antonyms, and 0 otherwise. Thus the weight matrix entries $W_{ij} = T_{ij}e^{-\frac{\norme{w_i-w_j}^2}{\sigma}}$.
\begin{table*}[!tbh]
\centering
\small
\begin{tabular}{|l||p{1cm}|c|c|c|c|c|}
\hline
\multicolumn{1}{|c||}{\bf Method} &
{\bf $\sigma$} &
{\bf thresh} &
{\bf \# Clusters} &
{\bf Error $\downarrow$ } &
{\bf Purity $\uparrow$} &
{\bf Entropy $\downarrow$} \\
& & & & $\frac{(NNE+NDC)}{|V|}$ & & \\ \hline
Word2Vec & 0.2 & 0.04 & 750 & 0.716 & 0.88 & 0.14 \\ \hline
Word2Vec + Roget & 0.7 & 0.04 & 750 & 0.033 & 0.94 & 0.09 \\ \hline
Eigenword & 2.0 & 0.07 & 200 & 0.655 & 0.84 & 0.25 \\ \hline
Eigenword + MSW & 1.0 & 0.08 & 200 & 0.042 & 0.95 & 0.01 \\ \hline
GloCon & 3.0 & 0.09 & 100 & 0.691 & 0.98 & 0.03 \\ \hline
GloCon + Roget & 0.9 & 0.06 & 750 & 0.048 & 0.94 & 0.02 \\ \hline
Glove & 9.0 & 0.09 & 200 & 0.657 & 0.72 & 0.33 \\ \hline
Glove + Roget & 11.0 & 0.01 & 1000 & 0.070 & 0.91 & 0.10 \\ \hline
\end{tabular}
\caption{Clustering evaluation after parameter optimization minimizing error using grid search.}
\label{tab:optimalparams}
\end{table*}
Table \ref{tab:optimalparams} shows results from the grid search of hyperparameter optimization.
Here we show that Eigenword + MSW outperforms Eigenword + Roget, which is in contrast
with the other word embeddings where the combination with Roget performs better.
As a baseline, we created clusters using K-means where the number of K clusters was set to 750.
All K-means clusters have a statistically significant difference in the number of antonym pairs relative to
random assignment of labels. When compared with the MS Word thesaurus, Word2Vec, Eigenword, GloCon, and GloVe word embeddings had a total of 286, 235, 235, 220 negative edges, respectively. The results are similar with the other thesauri. This shows that there are a significant number of antonym pairs in the K-means clusters derived from the word embeddings. By optimizing the hyperparameters using normalized cuts without thesauri information, we found a significant decrease in the number of negative edges, which was indistinguishable from random assignment and corresponded to a roughly ninety percent decrease across clusters. When analyzed using an out-of-sample thesaurus and 27081 words, the number of antonym clusters decreased to under 5 for all word embeddings, with the addition of antonym relationship information.
Examining the number of distinct connected components within the different word clusters, we
observed that when K-means was used, the number of disconnected components was statistically significantly different from random labelling. This suggests that the word embeddings capture synonym relationships. By optimizing the hyperparameters we found roughly a 10 percent decrease in distinct connected components using normalized cuts. When we added the signed antonym relationships using our signed clustering algorithm, on average we found a thirty-nine percent decrease over the K-means clusters. Again, this shows that incorporating the signed antonym information is highly effective.
\subsubsection{Evaluation Using Gold Standard}
SimLex-999 is a gold standard resource for semantic similarity, not relatedness, based on ratings by human annotators.
The differentiation between relatedness and similarity was a problem with previous datasets such as WordSim-353.
\citet{hill2014simlex} has a further comparison of SimLex-999 to previous datasets.
Table \ref{tab:simlex} shows the difference between SimLex-999 and WordSim-353.
\begin{table*}[!tbh]
\centering
\small
\begin{tabular}{|c||c|c|}
\hline
{\bf Pair} & {\bf Simlex-999 rating} & {\bf WordSim-353 rating} \\ \hline
coast - shore & 9.00 & 9.10 \\ \hline
clothes - closet & 1.96 & 8.00 \\ \hline
\end{tabular}
\caption{Comparison between SimLex-999 and WordSim-353,
from \url{http://www.cl.cam.ac.uk/~fh295/simlex.html}.}
\label{tab:simlex}
\end{table*}
SimLex-999 comprises multiple parts-of-speech, with 666 Noun-Noun pairs, 222 Verb-Verb pairs and 111 Adjective-Adjective pairs.
In a perfect setting, all word pairs rated highly similar by human annotators would be in the same cluster, and all words which were rated dissimilar would be in different clusters.
Since our clustering algorithm produced sets of words, we used this evaluation instead of the more commonly-reported correlations.
\begin{table}[!tbh]
\centering
\small
\begin{tabular}{|l||c|c|}
\hline
\multicolumn{1}{|c||}{\bf Method} & {\bf Accuracy} & {\bf Coverage} \\ \hline
MS Thes Lookup & 0.70 & 0.57 \\ \hline
Roget Thes Lookup & 0.63 & 0.99 \\ \hline
WordNet Thes Lookup & 0.43 & 1.00 \\ \hline
Combined Thes Lookup & 0.90 & 1.00 \\ \hline
Word2Vec & 0.36 & 1.00 \\ \hline
Word2Vec+CombThes & 0.67 & 1.00 \\ \hline
Eigenwords & 0.23 & 1.00 \\ \hline
Eigenwords+CombThes & 0.12 & 1.00 \\ \hline
GloCon & 0.07 & 1.00 \\ \hline
GloCon+CombThes & 0.05 & 1.00 \\ \hline
GloVe & 0.33 & 1.00 \\ \hline
GloVe+CombThes & 0.58 & 1.00 \\ \hline
Thes Lookup+W2V+CombThes & 0.96 & 1.00 \\ \hline
\end{tabular}
\caption{Clustering evaluation using SimLex-999 with 120 word pairs having similarity score over 8.}
\label{tab:simlexeval}
\end{table}
In Table \ref{tab:simlexeval} we show the results of the evaluation with SimLex-999.
Accuracy increased for all of the clustering methods aside from Eigenwords+CombThes. However, we achieved better results when we exclusively used the MS Word thesaurus.
Combining thesaurus lookup and word2vec+CombThes clusters yielded an accuracy of 0.96.
\subsubsection{Sentiment Analysis}
We used the \citet{socher2013recursive}
sentiment treebank \footnote{\url{http://nlp.stanford.edu/sentiment/treebank.html}} with coarse grained labels on phrases and
sentences from movie review excerpts.
The treebank is split into training (6920), development (872), and test (1821) datasets.
We trained an $l_2$-norm regularized logistic regression \citep{friedman2001elements} using our word clusters
in order to predict the coarse-grained sentiment at the sentence level.
We compared our model against existing models: Naive
Bayes with bag of words (NB),
sentence word embedding averages (VecAvg),
retrofitted sentence word embeddings (RVecAvg) \citep{faruqui2014retrofitting},
simple recurrent neural network (RNN),
recurrent neural tensor network (RNTN) \citep{socher2013recursive},
and the state-of-the-art convolutional neural network (CNN) \citep{kim2014convolutional}.
Table \ref{tab:sentanalysis} shows that although our model
does not outperform the state-of-the-art,
signed clustering performs better than comparable models, including the recurrent neural network,
which has access to more information.
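To make the setup concrete, a sentence can be represented by counts of cluster membership before being passed to the classifier. The sketch below is our simplifying assumption (whitespace tokenization, hypothetical `word_to_cluster` mapping), not the paper's exact pipeline:

```python
import numpy as np

def cluster_features(sentences, word_to_cluster, n_clusters):
    """Bag-of-clusters sentence features: count how many words of each
    sentence fall in each cluster; out-of-vocabulary words are skipped."""
    F = np.zeros((len(sentences), n_clusters))
    for i, sent in enumerate(sentences):
        for w in sent.lower().split():
            c = word_to_cluster.get(w)
            if c is not None:
                F[i, c] += 1
    return F
```

The resulting feature matrix can then be fed to an $l_2$-regularized logistic regression, e.g.\ scikit-learn's \texttt{LogisticRegression}.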
\begin{table}[!tbh]
\centering
\small
\begin{tabular}{|l||c|c|}
\hline
\multicolumn{1}{|c||}{\bf Model} & {\bf Accuracy} \\ \hline
NB \citep{socher2013recursive} & 0.818 \\ \hline
VecAvg (W2V, GV, GC) & 0.812, 0.796, 0.678 \\
\citep{faruqui2014retrofitting} & \\ \hline
RVecAvg (W2V, GV, GC) & 0.821, 0.822, 0.689 \\
\citep{faruqui2014retrofitting} & \\ \hline
RNN, RNTN \citep{socher2013recursive} & 0.824, 0.854 \\ \hline
CNN \citep{le2015compositional} & 0.881 \\ \hline
\hline
SC W2V & 0.836 \\ \hline
SC GV & 0.819 \\ \hline
SC GC & 0.572 \\ \hline
SC EW & 0.820 \\ \hline
\end{tabular}
\caption{Sentiment analysis accuracy for binary predictions of signed clustering algorithm (SC) versus other models.}
\label{tab:sentanalysis}
\end{table}
\section{Signed Graph Cluster Estimation}
\subsection{Signed Normalized Cut}
Weighted graphs
for which the weight matrix is a symmetric matrix in which negative
and positive entries are allowed are called {\it signed graphs\/}.
Such graphs (with weights $(-1, 0, +1)$) were introduced as early as
1953 by \cite{harary1953notion}, to model social relations involving disliking,
indifference, and liking. The problem of clustering the nodes
of a signed graph arises naturally as a generalization of the
clustering problem for weighted graphs. Figure 1 shows a signed graph of word similarities with a thesaurus overlay.
\begin{figure}[ht]
\label{fig:signedword}
\includegraphics[width=\linewidth]{signed_word_cluster.jpg}
\caption{Signed graph of words using
the heat kernel of the Euclidean distance from the word embedding. The red edges represent the antonym relation while blue edges represent synonymy relations.}
\end{figure}
\citet{gallier2015spectral} extends normalized cuts to signed graphs in order to incorporate antonym information into word clusters.
\theoremstyle{definition}
\begin{definition}
\label{graph-weighted}
A {\it weighted graph\/}
is a pair $G = (V, W)$, where
$V = \{v_1, \ldots, v_m\}$ is a set of
{\it nodes\/} or {\it vertices\/}, and $W$ is a symmetric matrix
called the {\it weight matrix\/}, such that $w_{i\, j} \geq 0$
for all $i, j \in \{1, \ldots, m\}$,
and $w_{i\, i} = 0$ for $i = 1, \ldots, m$.
We say that a set $\{v_i, v_j\}$ is an edge iff
$w_{i\, j} > 0$. The corresponding (undirected) graph $(V, E)$
with $E = \{\{v_i, v_j\} \mid w_{i\, j} > 0\}$,
is called the {\it underlying graph\/} of $G$.
\end{definition}
Given a signed graph $G = (V, W)$ (where $W$ is a symmetric matrix
with zero diagonal entries), the {\it underlying graph\/} of $G$ is
the graph with node set $V$ and set of (undirected) edges
$E = \{\{v_i, v_j\} \mid w_{i j} \not= 0\}$.
If $(V, W)$ is a signed graph, where $W$ is an $m\times m$ symmetric
matrix with zero diagonal entries and with the other entries
$w_{i j}\in \mathbb{R}$ arbitrary, for any node $v_i \in V$, the {\it signed degree\/} of $v_i$ is defined as
\[
\overline{d}_i = \overline{d}(v_i) = \sum_{j = 1}^m |w_{i j}|,
\]
and the {\it signed degree matrix \/} $\overline{D}$ as
\[
\overline{D} = \mathrm{diag}(\overline{d}(v_1) , \ldots, \overline{d}(v_m)).
\]
For any subset $A$ of the set of nodes
$V$, let
\[
\mathrm{vol}(A) = \sum_{v_i\in A} \overline{d}_i =
\sum_{v_i\in A} \sum_{j = 1}^m |w_{i j}|.
\]
For any two subsets $A$ and $B$ of $V$,
define $\mathrm{links}^+(A,B)$,
$\mathrm{links}^-(A,B)$, and $\mathrm{cut}(A,\overline{A})$ by
\begin{align*}
\mathrm{links}^+(A, B) & =
\sum_{\begin{subarray}{c}
v_i \in A, v_j\in B \\
w_{i j} > 0
\end{subarray}
}
w_{i j} \\
\mathrm{links}^-(A,B) & =
\sum_{\begin{subarray}{c}
v_i \in A, v_j\in B \\
w_{i j} < 0
\end{subarray}
}
- w_{i j} \\
\mathrm{cut}(A,\overline{A}) & =
\sum_{\begin{subarray}{c}
v_i \in A, v_j\in \overline{A} \\
w_{i j} \not= 0
\end{subarray}
}
|w_{i j}| .
\end{align*}
Then, the {\it signed Laplacian\/} $\overline{L}$ is defined by
\[
\overline{L} = \overline{D} - W,
\]
and its normalized version $\overline{L}_{\mathrm{sym}}$ by
\[
\overline{L}_{\mathrm{sym}} = \overline{D}^{-1/2}\, \overline{L}\,
\overline{D}^{-1/2}
= I - \overline{D}^{-1/2} W \overline{D}^{-1/2}.
\]
For a graph without isolated vertices, we have $\overline{d}(v_i) > 0$
for $i = 1, \ldots, m$, so $\overline{D}^{-1/2}$ is well defined.
\begin{proposition}
\label{Laplace1s}
For any $m\times m$ symmetric matrix $W = (w_{i j})$, if we let $\overline{L} = \overline{D} - W$
where $\overline{D}$ is the signed degree matrix associated with $W$,
then we have
\[
\transpos{x} \overline{L} x =
\frac{1}{2}\sum_{i, j = 1}^m |w_{i j}| (x_i - \mathrm{sgn}(w_{i j}) x_j)^2
\quad\mathrm{for\ all}\> x\in \mathbb{R}^m.
\]
Consequently, $\overline{L}$ is positive semidefinite.
\end{proposition}
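The signed degree matrix and signed Laplacian translate directly into code, and Proposition \ref{Laplace1s} can be checked numerically by confirming that the eigenvalues are nonnegative. A minimal NumPy sketch (function name ours; a graph without isolated vertices is assumed, so all signed degrees are positive):

```python
import numpy as np

def signed_laplacian(W):
    """Signed Laplacian L = D_bar - W, where the signed degree matrix D_bar
    uses |w_ij|, and its normalized version
    L_sym = D_bar^{-1/2} L D_bar^{-1/2} = I - D_bar^{-1/2} W D_bar^{-1/2}."""
    d = np.abs(W).sum(axis=1)            # signed degrees
    D = np.diag(d)
    L = D - W
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = D_inv_sqrt @ L @ D_inv_sqrt
    return L, L_sym
```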
Given a partition of $V$ into $K$
clusters $(A_1, \ldots, A_K)$, we represent the $j$th block of
this partition by a vector $X^j$ such that
\[
X^j_i =
\begin{cases}
a_j & \text{if $v_i \in A_j$} \\
0 & \text{if $v_i \notin A_j$} ,
\end{cases}
\]
for some $a_j \not= 0$.
\begin{definition}
\label{sncut}
The {\it signed normalized cut\/}
$\mathrm{sNcut}(A_1, \ldots, A_K)$ of the
partition $(A_1, ..., A_K)$ is defined as
\[
\mathrm{sNcut}(A_1, \ldots, A_K) = \sum_{j = 1}^K
\frac{\mathrm{cut}(A_j, \overline{A_j}) +
2 \mathrm{links}^-(A_j, A_j)}{\mathrm{vol}(A_j)}.
\]
\end{definition}
Another formulation is
\[
\mathrm{sNcut}(A_1, \ldots, A_K) =
\sum_{j = 1}^K \frac{\transpos{(X^j)} \overline{L} X^j}
{\transpos{(X^j)} \overline{D} X^j},
\]
where $X$ is the $N\times K$ matrix whose $j$th column is $X^j$.
Observe that minimizing $\mathrm{sNcut}(A_1, \ldots, A_K)$ amounts to
minimizing the number of positive and negative edges between clusters,
and also minimizing the number of negative edges within clusters.
This second minimization captures the intuition that nodes connected
by a negative edge should not be together (they do not ``like''
each other; they should be far from each other).
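Given a partition and a signed weight matrix, $\mathrm{sNcut}$ can be evaluated directly from Definition \ref{sncut}. A minimal sketch (our own, following the ordered-pair sums in the definitions of $\mathrm{cut}$ and $\mathrm{links}^-$):

```python
import numpy as np

def sncut(W, labels):
    """Signed normalized cut: for each cluster, the absolute cut to the rest
    of the graph plus twice the negative links inside the cluster,
    normalized by the cluster's signed volume."""
    total = 0.0
    for c in np.unique(labels):
        in_c = labels == c
        sub = W[np.ix_(in_c, in_c)]
        cut = np.abs(W[np.ix_(in_c, ~in_c)]).sum()
        neg_inside = -sub[sub < 0].sum()     # links^-(A_j, A_j)
        vol = np.abs(W[in_c, :]).sum()       # signed volume of A_j
        total += (cut + 2 * neg_inside) / vol
    return total
```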
\subsection{Optimization Problem}
We have our first formulation of $K$-way clustering
of a graph using normalized cuts, called problem PNC1
(the notation PNCX follows \citet{yu2003multiclass}, Section 2.1):
If we let
\begin{align*}
\s{X} = \Big\{[X^1\> \ldots \> X^K] \mid
X^j = a_j(x_1^j, \ldots, x_N^j) , \\
\>
x_i^j \in \{1, 0\},
a_j\in \mathbb{R}, \> X^j \not= 0
\Big\}
\end{align*}
our solution set is
\[
\s{K} = \big\{
X \in\s{X} \mid \transpos{X} \overline{D}\mathbf{1} = 0
\big\}.
\]
\medskip\noindent
{\bf $K$-way Clustering of a graph using Normalized Cut, Version 1: \\
Problem PNC1}
\begin{align*}
& \mathrm{minimize} & \sum_{j = 1}^K
\frac{\transpos{(X^j)} \overline{L} X^j}{\transpos{(X^j)}\overline{D} X^j}& & & &\\
& \mathrm{subject\ to} &
\transpos{(X^i)} \overline{D} X^j = 0, & & & &\\
& & \quad 1\leq i, j \leq K,\>
i\not= j, & & X\in \s{X}. & & &
\end{align*}
An equivalent version of the optimization problem is
\medskip\noindent
{\bf Problem PNC2}
\begin{align*}
& \mathrm{minimize} & &
\mathrm{tr}(\transpos{X} \overline{L} X)& & & &\\
& \mathrm{subject\ to} & &
\transpos{X} \overline{D} X = I,
& & X\in \s{X}. & &
\end{align*}
The natural relaxation of problem PNC2 is to drop the condition
that $X\in \s{X}$, and we obtain the
\medskip\noindent
{\bf Problem $(*_2)$}
\begin{align*}
& \mathrm{minimize} & &
\mathrm{tr}(\transpos{X} \overline{L} X)& & & &\\
& \mathrm{subject\ to} & &
\transpos{X} \overline{D} X = I,
& & & &
\end{align*}
If $X$ is a solution to the relaxed problem, then $XQ$ is also a solution, where $Q\in\mathbf{O}(K)$.
We make the change of variable $Y = \overline{D}^{1/2} X$, or equivalently
$X = \overline{D}^{-1/2} Y$.
Since $\transpos{Y} Y = I$, we have
\[
Y^+ = \transpos{Y},
\]
so we get the equivalent problem
\medskip\noindent
{\bf Problem $(**_2)$}
\begin{align*}
& \mathrm{minimize} & &
\mathrm{tr}(\transpos{Y}\overline{D}^{-1/2} \overline{L} \overline{D}^{-1/2} Y)& & & &\\
& \mathrm{subject\ to} & &
\transpos{Y} Y = I.
& & & &
\end{align*}
The minimum of problem $(**_2)$
is achieved by any $K$ unit eigenvectors $(u_1, \ldots, u_K)$ associated with the smallest
eigenvalues
\[
0 = \nu_1\leq \nu_2 \leq \ldots \leq \nu_K
\]
of $\overline{L}_{\mathrm{sym}}$.
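Under the assumption of no isolated vertices, the relaxed solution can be computed with a dense eigendecomposition of the normalized signed Laplacian; mapping the eigenvectors back through $\overline{D}^{-1/2}$ recovers a continuous solution of problem $(*_2)$. A minimal sketch (function name ours):

```python
import numpy as np

def relaxed_solution(W, K):
    """Solve the relaxed problem: take the K unit eigenvectors of the
    normalized signed Laplacian with smallest eigenvalues, then map them
    back through D_bar^{-1/2} so that Z satisfies Z^T D_bar Z = I."""
    d = np.abs(W).sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(L_sym)   # ascending eigenvalues
    Y = eigvecs[:, :K]
    Z = D_inv_sqrt @ Y                         # X = D_bar^{-1/2} Y
    return Z, eigvals[:K]
```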
\subsection{Finding an Approximate Discrete Solution}
Given a solution $Z$ of problem $(*_2)$,
we look for pairs
$(X, Q)$ with $X\in \s{X}$ and where $Q$ is a $K\times K$ matrix with
nonzero and pairwise orthogonal columns,
with $\norme{X}_F = \norme{Z}_F$,
that minimize
\[
\varphi(X, Q) = \norme{X - ZQ}_F.
\]
Here, $\norme{A}_F$ is the Frobenius norm of $A$.
This is a difficult nonlinear optimization problem
involving two unknown matrices $X$ and $Q$.
To simplify the problem,
we proceed by alternating steps during which we minimize
$\varphi(X, Q) = \norme{X - ZQ}_F$ with respect to $X$ holding $Q$
fixed, and steps during which we minimize
$\varphi(X, Q) = \norme{X - ZQ}_F$ with respect to $Q$ holding $X$
fixed.
This second step in which $X$ is held fixed has been studied, but it
is still a hard problem for which no closed-form solution is known.
Consequently, we further simplify the problem.
Since $Q$ is of the form $Q = R\Lambda$ where
$R\in \mathbf{O}(K)$ and $\Lambda$ is a diagonal invertible matrix,
we minimize $\norme{X - ZR\Lambda}_F$ in two stages.
\begin{enumerate}
\item
We set $\Lambda = I$ and find $R\in \mathbf{O}(K)$
that minimizes $\norme{X - ZR}_F$.
\item
Given $X$, $Z$, and $R$, find a
diagonal invertible matrix $\Lambda$ that
minimizes $\norme{X - ZR\Lambda}_F$.
\end{enumerate}
The matrix $R\Lambda$ is not a minimizer of
$\norme{X - ZR\Lambda}_F$ in general, but it is an improvement
on $R$ alone, and both stages can be solved quite easily.
In stage 1, the matrix $Q=R$ is orthogonal, so $Q\transpos{Q} = I$, and
since $Z$ and $X$ are given,
the problem reduces to minimizing
$ - 2\mathrm{tr}(\transpos{Q}\transpos{Z}X)$; that is,
maximizing $\mathrm{tr}(\transpos{Q}\transpos{Z}X)$.
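Stage 1 is an orthogonal Procrustes problem, whose maximizer has a standard closed form: if $\transpos{Z}X = U\Sigma\transpos{V}$ is an SVD, then $Q = U\transpos{V}$ maximizes $\mathrm{tr}(\transpos{Q}\transpos{Z}X)$. A minimal sketch (function name ours):

```python
import numpy as np

def best_rotation(Z, X):
    """Stage 1: the orthogonal Q maximizing tr(Q^T Z^T X), i.e. minimizing
    ||X - ZQ||_F over O(K), via the SVD of Z^T X (orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(Z.T @ X)
    return U @ Vt
```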
\section{Signed Graph Cluster Estimation}
\subsection{Signed Normalized Cut}
Signed graphs allow negative and positive entries in a symmetric matrix (the weight matrix $W$).
Such graphs (with weights $(-1, 0, +1)$) were introduced as early as
1953 by \cite{harary1953notion}, to model social relations involving disliking,
indifference, and liking. The problem of clustering the nodes
of a signed graph arises naturally as a generalization of the
clustering problem for weighted graphs. Figure 1 shows a signed graph of word similarities with a thesaurus overlay.
\begin{figure}[ht]
\label{fig:dc2}
\includegraphics[width=\linewidth]{signed_word_cluster.jpg}
\caption{Signed graph of words using
the heat kernel of the Euclidean distance from the word embedding. The red edges represent the antonym relation while blue edges represent synonymy relations.}
\end{figure}
\theoremstyle{definition}
\begin{definition}
\label{graph-weighted}
A {\it weighted graph\/}
is a pair $G = (V, W)$, where
$V = \{v_1, \ldots, v_m\}$ is a set of
{\it nodes\/} or {\it vertices\/}, and $W$ is a symmetric matrix
called the {\it weight matrix\/}, such that $w_{i\, j} \geq 0$
for all $i, j \in \{1, \ldots, m\}$,
and $w_{i\, i} = 0$ for $i = 1, \ldots, m$.
We say that a set $\{v_i, v_j\}$ is an edge iff
$w_{i\, j} > 0$. The corresponding (undirected) graph $(V, E)$
with $E = \{\{v_i, v_j\} \mid w_{i\, j} > 0\}$,
is called the {\it underlying graph\/} of $G$.
\end{definition}
Given a signed graph $G = (V, W)$ (where $W$ is a symmetric matrix
with zero diagonal entries), the {\it underlying graph\/} of $G$ is
the graph with node set $V$ and set of (undirected) edges
$E = \{\{v_i, v_j\} \mid w_{i j} \not= 0\}$.
If $(V, W)$ is a signed graph, where $W$ is an $m\times m$ symmetric
matrix with zero diagonal entries and with the other entries
$w_{i j}\in \mathbb{R}$ arbitrary, for any node $v_i \in V$, the {\it signed degree\/} of $v_i$ is defined as
\[
\overline{d}_i = \overline{d}(v_i) = \sum_{j = 1}^m |w_{i j}|,
\]
and the {\it signed degree matrix \/} $\overline{D}$ as
\[
\overline{D} = \mathrm{diag}(\overline{d}(v_1) , \ldots, \overline{d}(v_m)).
\]
For any subset $A$ of the set of nodes
$V$, let
\[
\mathrm{vol}(A) = \sum_{v_i\in A} \overline{d}_i =
\sum_{v_i\in A} \sum_{j = 1}^m |w_{i j}|.
\]
For any two subsets $A$ and $B$ of $V$,
define $\mathrm{links}^+(A,B)$,
$\mathrm{links}^-(A,B)$, and $\mathrm{cut}(A,\overline{A})$ by
\begin{align*}
\mathrm{links}^+(A, B) =
\sum_{\begin{subarray}{c}
v_i \in A, v_j\in B \\
w_{i j} > 0
\end{subarray}
}
w_{i j} \\
\mathrm{links}^-(A,B) =
\sum_{\begin{subarray}{c}
v_i \in A, v_j\in B \\
w_{i j} < 0
\end{subarray}
}
- w_{i j} \\
\mathrm{cut}(A,\overline{A}) =
\sum_{\begin{subarray}{c}
v_i \in A, v_j\in \overline{A} \\
w_{i j} \not= 0
\end{subarray}
}
|w_{i j}| .
\end{align*}
Then, the {\it signed Laplacian\/} $\overline{L}$ is defined by
\[
\overline{L} = \overline{D} - W,
\]
and its normalized version $\overline{L}_{\mathrm{sym}}$ by
\[
\overline{L}_{\mathrm{sym}} = \overline{D}^{-1/2}\, \overline{L}\,
\overline{D}^{-1/2}
= I - \overline{D}^{-1/2} W \overline{D}^{-1/2}.
\]
For a graph without isolated vertices, we have $\overline{d}(v_i) > 0$
for $i = 1, \ldots, m$, so $\overline{D}^{-1/2}$ is well defined.
\begin{proposition}
\label{Laplace1s}
For any $m\times m$ symmetric matrix $W = (w_{i j})$, if we let $\overline{L} = \overline{D} - W$
where $\overline{D}$ is the signed degree matrix associated with $W$,
then we have
\[
\transpos{x} \overline{L} x =
\frac{1}{2}\sum_{i, j = 1}^m |w_{i j}| (x_i - \mathrm{sgn}(w_{i j}) x_j)^2
\quad\mathrm{for\ all}\> x\in \mathbb{R}^m.
\]
Consequently, $\overline{L}$ is positive semidefinite.
\end{proposition}
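The identity in Proposition \ref{Laplace1s} is easy to verify numerically. The following sketch (NumPy, with a hypothetical $3$-node weight matrix) checks the quadratic-form identity for a random vector and confirms positive semidefiniteness:

```python
import numpy as np

# Hypothetical symmetric weight matrix with zero diagonal.
W = np.array([[ 0.0,  2.0, -1.0],
              [ 2.0,  0.0,  3.0],
              [-1.0,  3.0,  0.0]])
D = np.diag(np.abs(W).sum(axis=1))   # signed degree matrix
L = D - W                            # signed Laplacian

# Check x^T L x = (1/2) sum_{i,j} |w_ij| (x_i - sgn(w_ij) x_j)^2.
rng = np.random.default_rng(0)
x = rng.standard_normal(3)
lhs = x @ L @ x
rhs = 0.5 * sum(abs(W[i, j]) * (x[i] - np.sign(W[i, j]) * x[j]) ** 2
                for i in range(3) for j in range(3))
assert np.isclose(lhs, rhs)

# Consequently L is positive semidefinite: eigenvalues >= 0 (up to round-off).
assert np.linalg.eigvalsh(L).min() > -1e-12
```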
Given a partition of $V$ into $K$
clusters $(A_1, \ldots, A_K)$, we represent the $j$th block of
this partition by a vector $X^j$ such that
\[
X^j_i =
\begin{cases}
a_j & \text{if $v_i \in A_j$} \\
0 & \text{if $v_i \notin A_j$} ,
\end{cases}
\]
for some $a_j \not= 0$.
\begin{definition}
\label{sncut}
The {\it signed normalized cut\/}
$\mathrm{sNcut}(A_1, \ldots, A_K)$ of the
partition $(A_1, \ldots, A_K)$ is defined as
\[
\mathrm{sNcut}(A_1, \ldots, A_K) =
\sum_{j = 1}^K \frac{\transpos{(X^j)} \overline{L} X^j}
{\transpos{(X^j)} \overline{D} X^j} \,,
\]
where $X$ is the $N\times K$ matrix whose $j$th column is $X^j$.
\end{definition}
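To illustrate Definition \ref{sncut}, the following sketch (NumPy, same toy $4$-node signed graph as above) evaluates $\mathrm{sNcut}$ from indicator vectors and checks that it is unchanged under rescaling each $X^j$ by a nonzero $a_j$:

```python
import numpy as np

W = np.array([[ 0.0,  1.0, -2.0,  0.0],
              [ 1.0,  0.0,  0.5, -1.0],
              [-2.0,  0.5,  0.0,  3.0],
              [ 0.0, -1.0,  3.0,  0.0]])
D = np.diag(np.abs(W).sum(axis=1))
L = D - W

def indicator(A, a=1.0, m=4):
    x = np.zeros(m)
    x[A] = a                 # X^j has value a_j on A_j and 0 elsewhere
    return x

def sNcut(clusters, a=1.0):
    return sum((x @ L @ x) / (x @ D @ x)
               for x in (indicator(A, a) for A in clusters))

clusters = [[0, 1], [2, 3]]
# The value is independent of the nonzero scalars a_j.
assert np.isclose(sNcut(clusters, 1.0), sNcut(clusters, 3.7))
# Each term is a Rayleigh quotient of a PSD matrix, so sNcut >= 0.
assert sNcut(clusters) >= 0
```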
Minimizing $\mathrm{sNcut}(A_1, \ldots, A_K)$ amounts to
minimizing the total weight of the positive edges between clusters,
and also of the negative edges within clusters.
\subsection{Optimization Problem}
We have our first formulation of $K$-way clustering
of a graph using normalized cuts, called problem PNC1:
If we let
\begin{align*}
\s{X} = \Big\{[X^1\> \ldots \> X^K] \mid
X^j = a_j(x_1^j, \ldots, x_N^j) , \\
\>
x_i^j \in \{1, 0\},
a_j\in \mathbb{R}, \> X^j \not= 0
\Big\}
\end{align*}
our solution set is
\[
\s{K} = \big\{
X \in\s{X} \mid \transpos{X} \overline{D}\mathbf{1} = 0
\big\}.
\]
\medskip\noindent
{\bf $K$-way Clustering of a graph using Normalized Cut, Version 1: \\
Problem PNC1}
\begin{align*}
& \mathrm{minimize} & &
\sum_{j = 1}^K
\frac{\transpos{(X^j)} \overline{L} X^j}{\transpos{(X^j)}\overline{D} X^j} \\
& \mathrm{subject\ to} & &
\transpos{(X^i)} \overline{D} X^j = 0,
\quad 1\leq i, j \leq K,\> i\not= j, \\
& & & X\in \s{X}.
\end{align*}
We obtain the relaxed problem by dropping the constraint $X\in \s{X}$:
\medskip\noindent
{\bf Problem $(*_2)$}
\begin{align*}
& \mathrm{minimize} & &
\mathrm{tr}(\transpos{Y}\overline{D}^{-1/2} \overline{L} \overline{D}^{-1/2} Y) \\
& \mathrm{subject\ to} & &
\transpos{Y} Y = I.
\end{align*}
The minimum of problem $(*_2)$
is achieved by any $K$ unit eigenvectors $(u_1, \ldots, u_K)$ associated with the smallest
eigenvalues
\[
0 \leq \nu_1\leq \nu_2 \leq \ldots \leq \nu_K
\]
of $\overline{L}_{\mathrm{sym}}$.
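This characterization of the minimum is easy to check numerically. The sketch below (NumPy, same toy graph) compares the trace objective at the bottom $K$ eigenvectors of $\overline{L}_{\mathrm{sym}}$ against a random orthonormal $Y$:

```python
import numpy as np

W = np.array([[ 0.0,  1.0, -2.0,  0.0],
              [ 1.0,  0.0,  0.5, -1.0],
              [-2.0,  0.5,  0.0,  3.0],
              [ 0.0, -1.0,  3.0,  0.0]])
d = np.abs(W).sum(axis=1)
Lsym = np.diag(d ** -0.5) @ (np.diag(d) - W) @ np.diag(d ** -0.5)

K = 2
evals, evecs = np.linalg.eigh(Lsym)       # eigenvalues in ascending order
Y = evecs[:, :K]                          # K smallest unit eigenvectors
best = np.trace(Y.T @ Lsym @ Y)
assert np.allclose(Y.T @ Y, np.eye(K))    # feasible for the relaxed problem
assert np.isclose(best, evals[:K].sum())  # attains nu_1 + ... + nu_K

# No other orthonormal Y' does better.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, K)))
assert np.trace(Q.T @ Lsym @ Q) >= best - 1e-12
```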
\subsection{Finding an Approximate Discrete Solution}
Given a solution $Z$ of problem $(*_2)$,
we look for pairs
$(X, Q)$, where $X\in \s{X}$ and $Q$ is a $K\times K$ matrix with
nonzero, pairwise orthogonal columns,
with $\norme{X}_F = \norme{Z}_F$,
that minimize
\[
\varphi(X, Q) = \norme{X - ZQ}_F.
\]
Here, $\norme{A}_F$ is the Frobenius norm of $A$.
This is a challenging nonlinear optimization problem.
To simplify the problem, we proceed by alternating steps during which we minimize
$\varphi(X, Q)$ with respect to $X$ holding $Q$ fixed, and vice versa.
The second step, in which $X$ is held fixed, has no known closed-form
solution, so we simplify the problem: we write $Q = R\Lambda$ with
$R\in \mathbf{O}(K)$ and $\Lambda$ diagonal and invertible, and
minimize $\norme{X - ZR\Lambda}_F$ in two stages.
In stage 1, minimizing $\norme{X - ZR}_F$ reduces to maximizing
$\mathrm{tr}(\transpos{R}\transpos{Z}X)$ over $R\in \mathbf{O}(K)$.
\begin{enumerate}
\item
We set $\Lambda = I$ and find $R\in \mathbf{O}(K)$
that minimizes $\norme{X - ZR}_F$.
\item
Given $X$, $Z$, and $R$, we find a
diagonal invertible matrix $\Lambda$ that
minimizes $\norme{X - ZR\Lambda}_F$.
\end{enumerate}
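Both stages have standard closed-form solutions: stage 1 is an orthogonal Procrustes problem solved by an SVD, and stage 2 decouples into one scalar least-squares problem per column. A minimal sketch (NumPy, with random stand-ins for $X$ and $Z$, not an actual clustering instance):

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 6, 2
Z = rng.standard_normal((N, K))   # stand-in for a solution of the relaxed problem
X = rng.standard_normal((N, K))   # stand-in for a discrete candidate

# Stage 1 (Lambda = I): orthogonal Procrustes. With SVD Z^T X = U S V^T,
# the minimizer of ||X - Z R||_F over R in O(K) is R = U V^T.
U, _, Vt = np.linalg.svd(Z.T @ X)
R = U @ Vt
assert np.allclose(R.T @ R, np.eye(K))

# Stage 2: for fixed R, ||X - Z R Lambda||_F^2 decouples column by column,
# giving lambda_j = <(ZR)_j, X_j> / ||(ZR)_j||^2 (generically nonzero).
ZR = Z @ R
lam = np.array([(ZR[:, j] @ X[:, j]) / (ZR[:, j] @ ZR[:, j])
                for j in range(K)])

# Stage 2 can only decrease the error relative to Lambda = I.
err1 = np.linalg.norm(X - ZR, 'fro')
err2 = np.linalg.norm(X - ZR @ np.diag(lam), 'fro')
assert err2 <= err1 + 1e-12
```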
\section{Introduction}
\label{sec:intro}
When passing through matter, high energy particles lose energy by
showering, via the splitting processes of hard bremsstrahlung and pair
production. At very high energy, the quantum mechanical duration of
each splitting process, known as the formation time, exceeds the mean
free time for collisions with the medium, leading to a significant
reduction in the splitting rate known as the Landau-Pomeranchuk-Migdal
(LPM) effect \cite{LP,Migdal}.%
\footnote{
The papers of Landau and Pomeranchuk \cite{LP} are also available in
English translation \cite{LPenglish}.
The generalization to QCD was originally carried out by
Baier, Dokshitzer, Mueller, Peigne, and Schiff \cite{BDMPS12,BDMPS3}
and by Zakharov \cite{Zakharov}
(BDMPS-Z).
}
A long-standing problem in field theory has
been to understand how to implement this effect in cases where
the formation times of two consecutive splittings overlap.
The goal of this paper is to (i) present nearly complete results for
the case of two overlapping gluon splittings (e.g.\ $g \to gg \to ggg$)
and (ii)
confirm that earlier leading-log results for these effects
\cite{Blaizot,Iancu,Wu} are reproduced
by our more-complete results in the appropriate soft limit.
As a necessary step, we discuss how to combine the effects of
overlapping real double splitting ($g \to gg \to ggg$) with corresponding
virtual corrections to single splitting (e.g.\ $g \to gg^* \to ggg^* \to gg$)
to cancel spurious infrared (IR) divergences.
In our analysis of virtual corrections, we will also verify that
we reproduce the correct ultraviolet (UV)
renormalization and running of the QCD
coupling $\alpha_{\rm s}$ associated with the high-energy
vertex for single splitting.
In this paper, we will present the formulas for the building blocks
just discussed, but we leave application of those formulas to later
work. In particular, one of the ultimate motivations \cite{qedNfstop}
of our study is to eventually investigate whether the size of overlap
effects is small enough to justify a picture of parton showers, inside
a quark-gluon plasma, as composed of individual high-energy partons; or
whether the splitting of high-energy partons is so strongly-coupled
that high-energy partons lose their individual identity, similar to
gauge-gravity duality studies
\cite{GubserGluon,HIM,CheslerQuark,adsjet12}
of energy loss. But, as will be discussed
in our conclusion, further work will be needed to answer that question.
As a technical matter, our calculations are organized \cite{QEDnf} using
Light-Cone Perturbation Theory (LCPT) \cite{LB,BL,BPP}.%
\footnote{
For readers not familiar with time-ordered
LCPT who would like
the simplest possible example of how it reassuringly
reproduces the results of
ordinary Feynman diagram calculations,
we recommend section
1.4.1 of Kovchegov and Levin's monograph \cite{KL}.
}
As we will explain below, the ``nearly'' in our claim of ``nearly complete
results'' refers to the fact that we have not yet calculated,
for QCD, contributions from diagrams that involve ``instantaneous''
interactions in Light-Cone Perturbation Theory.
The effects of such diagrams have been numerically small in
earlier studies of overlap effects in QED \cite{QEDnf}, and they
do not contribute to our check that our results agree with earlier
leading-log calculations. For these reasons, and because analysis of
the non-instantaneous diagrams is already complicated, we leave the
calculation of instantaneous diagrams for QCD to later work.
For similar reasons, we also leave to later work the effect of
diagrams involving 4-gluon vertices, like those computed for
real double gluon splitting in ref.\ \cite{4point}.
We make a number of simplifying assumptions also
made in the sequence of earlier papers \cite{2brem,seq,dimreg,QEDnf}
leading up to this work:
We take
the large-$N_{\rm c}$ limit, assume that the medium is thick compared to
formation lengths, and use the multiple-scattering ($\hat q$)
approximation appropriate to elastic scattering of high-energy partons
from the (thick) medium. All of these simplifications could be relaxed in
the context of the underlying formalism used for calculations,%
\footnote{
In particular, for a discussion of how one could in principle
eliminate the large-$N_{\rm c}$ approximation, see refs.\ \cite{color,Vqhat}.
}
but
practical calculations would then be quite considerably harder; so we
focus on the simplest situation here.
\subsection {The diagrams we compute}
Previous work \cite{2brem,seq,dimreg} has computed overlap effects
for real double gluon splitting ($g \to gg \to ggg$) depicted by the
interference diagrams of figs.\ \ref{fig:crossed} and \ref{fig:seq}.
Each diagram is time-ordered from left to right and
has the following interpretation: The blue (upper)
part of the diagram represents a contribution to the amplitude for
$g \to ggg$, the red (lower) part represents a contribution to the
conjugate amplitude, and the two together represent a particular
contribution to the {\it rate}. Only high-energy particle lines are
shown explicitly, but each such line is implicitly
summed over an arbitrary number
of interactions with the medium, and the diagram is averaged over
the statistical fluctuations of the medium.
See ref.\ \cite{2brem} for details.
For real double gluon splitting, we will refer to the longitudinal
momentum fractions of the three final-state gluons as $x$, $y$,
and
\begin {equation}
z \equiv 1{-}x{-}y
\end {equation}
relative to the initial gluon.
Also, our nomenclature is that figs.\ \ref{fig:crossed} and \ref{fig:seq}
are respectively called ``crossed'' and ``sequential'' diagrams
because of the way they are drawn.
\begin {figure}[t]
\begin {center}
\includegraphics[scale=0.55]{crossed.eps}
\caption{
\label{fig:crossed}
``Crossed'' time-ordered diagrams for the real double gluon splitting rate.
Labeling of diagrams ($x y \bar y \bar x$, etc.)\ is as in
ref.\ \cite{2brem}. All lines in this and other figures represent
high-energy gluons.
}
\end {center}
\end {figure}
\begin {figure}[t]
\begin {center}
\includegraphics[scale=0.55]{seq.eps}
\caption{
\label{fig:seq}
``Sequential'' time-ordered diagrams for the real double gluon
splitting rate \cite{seq}.
}
\end {center}
\end {figure}
For the case of sequential diagrams (fig.\ \ref{fig:seq}),
it is possible for the two consecutive splittings to be arbitrarily
far separated in time, in which case their formation times do not
overlap. The effect of overlapping formation times in this case
is then determined by subtracting from the sequential diagrams the
corresponding results one would have gotten by treating the two
splittings as independent splittings. Details are given in
ref.\ \cite{seq}, along with discussion of physical interpretation
and application.%
\footnote{
See in particular the discussion of section 1.1 of ref.\ \cite{seq}.
}
Whenever such a subtraction needs to be made
on a double-splitting differential rate $d\Gamma$, we will use the
symbol $\Delta\,d\Gamma$ to refer to the subtracted version that
isolates the effect of overlapping formation times.
In the limit that one of the three final-state gluons---say $y$---is soft,
it was found \cite{seq} that the overlap effect on real double splitting
behaves parametrically as%
\footnote{
See section 1.4 of ref.\ \cite{seq} for a back-of-the-envelope explanation
of why (\ref{eq:realscaling}) is to be expected.
}
\begin {equation}
\left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{g\to ggg}
\sim \frac{C_{\rm A}^2 \alpha_{\rm s}^2}{x y^{3/2}}
\left( \frac{\hat q}{E} \right)^{1/2}
\qquad \mbox{for $y \ll x < z$.}
\label {eq:realscaling}
\end {equation}
As we'll review later, the $y^{-3/2}$ behavior would lead to
{\it power-law} infrared divergences in energy loss calculations.
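As a toy illustration of that statement (using only the $y^{-3/2}$ scaling of (\ref{eq:realscaling}), not the actual rate), cutting off the $y$ integration at $\epsilon$ produces a contribution that grows like $\epsilon^{-1/2}$:

```python
import numpy as np
from scipy.integrate import quad

def tail(eps):
    # toy integrand with the y^{-3/2} small-y behavior, cut off at eps
    val, _ = quad(lambda y: y ** -1.5, eps, 0.5)
    return val

for eps in (1e-2, 1e-3, 1e-4):
    # analytically, the integral is 2 (eps^{-1/2} - 2^{1/2}):
    # power-law growth as the cutoff eps -> 0.
    assert np.isclose(tail(eps), 2.0 * (eps ** -0.5 - np.sqrt(2.0)), rtol=1e-6)
```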
Very crudely analogous to what happens in vacuum bremsstrahlung in
QED, where there are (logarithmic) infrared divergences that cancel
in inclusive calculations between real and virtual emissions, we need
to supplement the real double emission processes ($g \to ggg$)
by a calculation of
corresponding virtual corrections to the single emission process
($g \to gg$) of
fig.\ \ref{fig:LO}. The virtual processes that we calculate in this
paper are shown in fig.\ \ref{fig:virtI} (which we call Class I)
and fig.\ \ref{fig:virtII} (which we call Class II).
There are also cousins of the Class I diagrams
generated by swapping the two final state gluons ($x \to 1{-}x$),
two examples of which are shown in
fig.\ \ref{fig:virtIb}. For Class II diagrams, such a swap does
not generate a new diagram.
\begin {figure}[t]
\begin {center}
\includegraphics[scale=0.55]{LO.eps}
\caption{
\label{fig:LO}
Time-ordered diagrams for the leading-order rate for
single gluon splitting \cite{seq}.
}
\end {center}
\end {figure}
\begin {figure}[t]
\begin {center}
\includegraphics[scale=0.55]{virtI.eps}
\caption{
\label{fig:virtI}
Class I one-loop virtual corrections to fig.\ \ref{fig:LO}.
As with previous figures, not all possible time orderings
of the diagrams have been shown explicitly but
the missing orderings are all
included when one adds in the complex conjugates (``+ conjugates'')
of the diagrams explicitly shown above.
Graphically, taking the conjugate flips a diagram about its
horizontal axis while swapping the colors red and blue.
}
\end {center}
\end {figure}
\begin {figure}[t]
\begin {center}
\includegraphics[scale=0.55]{virtII.eps}
\caption{
\label{fig:virtII}
Class II one-loop virtual corrections to fig.\ \ref{fig:LO}.
}
\end {center}
\end {figure}
\begin {figure}[t]
\begin {center}
\includegraphics[scale=0.55]{virtIb.eps}
\caption{
\label{fig:virtIb}
Two examples of diagrams (right) generated by swapping the
two final-state gluons in Class I diagrams (left) from
fig.\ \ref{fig:virtI}. The swap is equivalent to
replacing $x \to 1{-}x$ in the results for Class I diagrams.
}
\end {center}
\end {figure}
In total, these sets of virtual diagrams include all one-loop
virtual corrections to
single splitting except for processes involving
instantaneous interactions or fundamental 4-gluon vertices.
As mentioned previously, we leave the latter for future work.
A few examples are shown in
fig.\ \ref{fig:later}. The ``instantaneous'' interactions
(indicated by a propagator crossed by a bar) are instantaneous in
light-cone time and correspond to the exchange of a
longitudinally-polarized gluon in light-cone gauge.
See ref.\ \cite{QEDnf} for examples of such diagrams evaluated in QED.
\begin {figure}[t]
\begin {center}
\includegraphics[scale=0.55]{later.eps}
\caption{
\label{fig:later}
A few examples of diagrams involving either (i) instantaneous interactions
via longitudinal gluon exchange or (ii) fundamental 4-gluon vertices.
Longitudinal gluon exchange is represented by a vertical
(i.e.\ instantaneous) line that is crossed by a black bar, following
the diagrammatic notation of Light-Cone Perturbation Theory.
}
\end {center}
\end {figure}
We should clarify that, physically,
the power-law divergences of (\ref{eq:realscaling})
as $y {\to} 0$ are not actually infinite. The scaling (\ref{eq:realscaling})
depends on the $\hat q$ approximation, which breaks down when the
soft gluon energy $yE$ becomes as small as the plasma temperature $T$.%
\footnote{
This will be discussed again later, in section \ref{sec:scales}.
}
In the high-energy limit, however, the cancellation of such
power-law contributions to shower development, even if
only a cancellation of contributions that are
parametrically large in energy rather than truly infinite,
will be critical to extracting the relevant physics that survives after
the cancellation. In this paper, we will be able to ignore the
far-infrared physics (meaning scale $T \ll E$) that regulates the
power-law divergences and can simply analyze the cancellation
of power-law divergences in the
context of the $\hat q$ approximation appropriate for the
high-energy behavior.
\subsection {Infrared Divergences}
\label {sec:introIR}
We will later discuss the calculation of the differential rates
\begin {equation}
\left[ \Delta \frac{d\Gamma}{dx\>dy} \right]_{g\to ggg} ,
\qquad
\left[ \Delta \frac{d\Gamma}{dx} \right]_{{\rm virt\,I}} ,
\qquad
\left[ \Delta \frac{d\Gamma}{dx} \right]_{{\rm virt\,II}} ,
\end {equation}
associated respectively with the real
double emission diagrams
of fig.\ \ref{fig:crossed} plus fig.\ \ref{fig:seq},
the Class I virtual correction diagrams of fig.\ \ref{fig:virtI}, and
the Class II virtual correction diagrams of fig.\ \ref{fig:virtII}.
But here we first preview some results concerning
infrared divergences.
In the virtual diagrams of figs.\ \ref{fig:virtI} and \ref{fig:virtII},
the virtual loop longitudinal momentum fraction $y$
in the amplitude or conjugate
amplitude needs to be integrated over, and it will be convenient to
introduce the notation $[d\Gamma/dx\,dy]_{{\rm virt\,I}}$ and
$[d\Gamma/dx\,dy]_{{\rm virt\,II}}$ for the corresponding integrands of that
$y$ integration. Our calculations are performed in Light-Cone Perturbation
Theory, in which every particle line (virtual as well as real) is
restricted to {\it positive} longitudinal momentum fraction.
The structure of the Class I diagrams of fig.\ \ref{fig:virtI} then
forces $0 < y < 1{-}x$, whereas the structure of the Class II diagrams
of fig.\ \ref{fig:virtII} forces $0 < y < 1$ instead.
So, in our notation,
\begin {equation}
\left[ \Delta \frac{d\Gamma}{dx} \right]_{{\rm virt\,I}}
= \int_0^{1-x} dy \>
\left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{{\rm virt\,I}}
\quad \text{and} \quad
\left[ \Delta \frac{d\Gamma}{dx} \right]_{{\rm virt\,II}}
= \int_0^1 dy \>
\left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{{\rm virt\,II}} .
\label {eq:virtIntegrands}
\end {equation}
We will later give detailed discussion of how infrared divergences appear
in various calculations associated with shower development,
but a good starting point is to consider the net rate
$[d\Gamma/dx]_{\rm net}$ at which all
of the processes represented by figs.\ \ref{fig:crossed}--\ref{fig:virtIb}
produce one daughter of energy $xE$ (plus any other daughters)
from a particle of energy $E$, for a given $x$.
That's given by
\begin {subequations}
\label {eq:dGnet}
\begin {equation}
\left[ \frac{d\Gamma}{dx} \right]_{\rm net}
=
\left[ \frac{d\Gamma}{dx} \right]^{\overline{\rm LO}}
+
\left[ \frac{d\Gamma}{dx} \right]_{\rm net}^{\overline{\rm NLO}}
\end {equation}
where the first term is the rate of the leading-order (LO)
$g\to gg$ process of fig.\ \ref{fig:LO}, and where
the next-to-leading-order (NLO) contribution is%
\footnote{
Here and throughout, the terms leading-order and next-to-leading-order
refer to expansion in the $\alpha_{\rm s}(Q_\perp)$ associated with each
splitting vertex for high-energy partons and not to the $\alpha_{\rm s}(T)$ that
controls whether the quark-gluon plasma is strongly or weakly coupled.
}
\begin {align}
\left[ \frac{d\Gamma}{dx} \right]_{\rm net}^{\overline{\rm NLO}}
&=
\left[ \frac{d\Gamma}{dx} \right]_{g\to gg}^{\overline{\rm NLO}}
+
\frac12
\int_0^{1{-}x} dy \>
\left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{g\to ggg}
\nonumber\\[5pt]
&=
\biggl(
\int_0^{1-x} dy \, \left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{\rm virt\,I}
\biggr)
+ (x \to 1{-}x)
\nonumber\\ & \hspace{10em}
+
\int_0^1 dy \, \left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{\rm virt\,II}
+
\frac12
\int_0^{1{-}x} dy \>
\left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{g\to ggg} .
\label {eq:dGnetNLOdef}
\end {align}
\end {subequations}
[See appendix \ref{app:details} for more discussion.]
The bars above $\overline{\rm LO}$ and $\overline{\rm NLO}$ in (\ref{eq:dGnet}) are
a technical distinction that will
be discussed later and can be ignored for now.
In the integrals above, some virtual or final particle has zero
energy at both the lower {\it and} upper limits of the
$y$ integrations, and so both limits are associated with
infrared divergences. In order to see how divergences behave,
it is convenient to use symmetries and/or change of integration
variables to rewrite the integrals so that the infrared divergences
of $[d\Gamma/dx]_{\rm net}^{\overline{\rm NLO}}$ are associated {\it only} with
$y \to 0$ (for fixed non-zero $x < 1$).
In particular, (\ref{eq:dGnetNLOdef}) can be rewritten
[see appendix \ref{app:details} for details] as
\begin {align}
\left[ \frac{d\Gamma}{dx} \right]_{\rm net}^{\overline{\rm NLO}}
&=
\biggl(
\int_0^{1-x} dy \,
\left\{
\left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{\rm virt\,I}
+
\left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{\rm virt\,II}
\right\}
\biggr)
+ (x \to 1{-}x)
\nonumber\\ & \hspace{10em}
+
\frac12
\int_0^{1{-}x} dy \>
\left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{g\to ggg}
\label {eq:dGnetNLO1}
\end {align}
and thence [appendix \ref{app:details}]
\begin {align}
\left[ \frac{d\Gamma}{dx} \right]_{\rm net}^{\overline{\rm NLO}}
&=
\int_0^{1/2} dy \>
\Bigl\{
\bigl[ v(x,y) \, \theta(y<\tfrac{1-x}{2}) \bigr]
+ [x \to 1-x]
+ r(x,y) \, \theta(y<\tfrac{1-x}{2})
\Bigr\}
\nonumber\\
&=
\int_0^{1/2} dy \>
\Bigl\{
v(x,y) \, \theta(y<\tfrac{1-x}{2})
+ v(1{-}x,y) \, \theta(y<\tfrac{x}{2})
+ r(x,y) \, \theta(y<\tfrac{1-x}{2})
\Bigr\}
,
\label {eq:dGnetNLO}
\end {align}
where contributions from virtual and real double splitting
processes appear in the respective combinations
\begin {subequations}
\label {eq:VRdef}
\begin {align}
v(x,y) &\equiv
\left(
\left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{\rm virt\,I}
+ \left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{\rm virt\,II}
\right)
+ ( y \leftrightarrow z ) ,
\label {eq:Vdef}
\\
r(x,y) & \equiv
\left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{g\to ggg} .
\label {eq:Rdef}
\end {align}
\end {subequations}
The $\theta(\cdots)$ in (\ref{eq:dGnetNLO}) represent unit step
functions [$\theta(\mbox{true})=1$ and $\theta(\mbox{false})=0$],
and they just implement upper limits on the $y$ integration.
The advantage of using the $\theta$ functions is so that we
can combine all the integrals: the
integrals for the separate terms each have power-law
IR divergences, but whether or not those divergences cancel
is now just a question of the $y\to 0$ behavior of the combined
integrand of (\ref{eq:dGnetNLO}).
In the limit $y\to 0$ for fixed $x$, the integrand of
(\ref{eq:dGnetNLO}) approaches
\begin {equation}
v(x,y) + v(1{-}x,y) + r(x,y) .
\label {eq:VVR}
\end {equation}
Using the symmetry of the $g\to ggg$ rate (\ref{eq:Rdef})
under permutations of $x$, $y$, and $z=1{-}x{-}y$,
we have
$r(x,y) = r(1{-}x{-}y,y) \simeq r(1{-}x,y)$ for small $y$, and
so (\ref{eq:VVR}) approaches
\begin {equation}
\bigl[ v(x,y) + \tfrac12 r(x,y) \bigr] + [x \to 1{-}x] .
\label {eq:VR}
\end {equation}
By (\ref{eq:realscaling}), $r(x,y) \sim y^{-3/2}$ for small $y$,
and so the integral of $r(x,y)$ in (\ref{eq:dGnetNLO}) has
a power-law IR divergence proportional to $\int_0 dy/y^{3/2}$.
From the full results for rates that we calculate in this paper, we find
that the $y^{-3/2}$ behavior cancels in the combination
$v(x,y)+\tfrac12 r(x,y)$ appearing in (\ref{eq:VR}).
We also find that left behind after this cancellation is,
at leading logarithmic order,
\begin {equation}
v(x,y) + \tfrac12 r(x,y)
\approx
-\frac{C_{\rm A}\alpha_{\rm s}}{8\pi}
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO} \frac{\ln y}{y}
\,,
\label {eq:VRlimit}
\end {equation}
which generates an IR double log divergence when integrated over
$y$. As we discuss later, this result, applied to (\ref{eq:dGnetNLO}),
exactly matches leading-log
results derived earlier in the literature \cite{Blaizot,Iancu,Wu}
and so provides a crucial check of our calculations.
Though it should be possible to extract (\ref{eq:VRlimit}) from
our results analytically, so far we have only checked it numerically.%
\footnote{
Analytic extraction of double and single IR
logs directly from our full rate formulas
is
complicated because diagram by diagram the logs are
subleading to the power-law IR divergences, and the latter
are already complicated to extract analytically from our
results. Interested readers can see a painful
example in appendix \ref{app:Gxi}.
}
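As a toy numerical check of the bookkeeping (using only the asymptotic form (\ref{eq:VRlimit}), not our full rates), an integrand behaving like $-\ln y/y$ at small $y$ indeed integrates to a double logarithm of the infrared cutoff:

```python
import numpy as np
from scipy.integrate import quad

def dbl_log(eps):
    # toy integrand with the -ln(y)/y small-y behavior of v + r/2
    val, _ = quad(lambda y: -np.log(y) / y, eps, 0.5)
    return val

for eps in (1e-3, 1e-4):
    # analytically: int_eps^{1/2} (-ln y)/y dy = (ln^2 eps - ln^2 2)/2,
    # i.e. a double logarithm of the infrared cutoff eps.
    expected = 0.5 * (np.log(eps) ** 2 - np.log(2.0) ** 2)
    assert np.isclose(dbl_log(eps), expected, rtol=1e-6)
```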
Fig.\ \ref{fig:dbllogCheck} shows a plot of our full results for
\begin {equation}
\frac{
v(x,y) + \tfrac12 r(x,y)
\vphantom{\Big|}
}{
\frac{C_{\rm A}\alpha_{\rm s}}{8\pi}
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO} \frac{1}{y}
\vphantom{\Big|}
}
\label {eq:VRratio}
\end {equation}
vs.\ $\ln y$ for a sample value of $x$.
According to (\ref{eq:VRlimit}),
the slope of (\ref{eq:VRratio}) vs.\ $\ln y$ should approach $-1$
as $\ln y \to -\infty$, which we show in
fig.\ \ref{fig:dbllogCheck} by comparison
to the straight line.
We hope in the future to also provide exact analytic results for
single-log divergences that are subleading to the double-log
divergence. For now we only have numerical results for
those, which we present later with an examination of how
well those numerical results fit an educated guess for their analytic
form.
\begin {figure}[t]
\begin {center}
\includegraphics[scale=0.55]{dbllogCheck.eps}
\caption{
\label{fig:dbllogCheck}
Full numerical results (circular data points)
for the ratio (\ref{eq:VRratio}) plotted vs.\ $\ln y$
for the case $x=0.3$. The blue straight line shows a line of
slope $-1$ for comparison, showing that our numerical results
confirm the leading-log behavior (\ref{eq:VRlimit}).
}
\end {center}
\end {figure}
\subsection {Outline}
The new diagrams needed for this paper are the virtual diagrams of
figs.\ \ref{fig:virtI} and \ref{fig:virtII}.
In the next section, we discuss how we can avoid calculating any of
these diagrams from scratch. All of the $g \to gg$ QCD virtual diagrams
can be obtained either (i) by transformation from known results for the
$g \to ggg$ QCD diagrams of figs.\ \ref{fig:crossed} and \ref{fig:seq}
or (ii) by adapting the known result for one QED virtual diagram.
In section \ref{sec:IR}, we go into much more detail about how to organize
IR divergences in calculations related to energy loss.
We also show that the double-log behavior (\ref{eq:VRlimit}) is
equivalent to earlier leading-log results.
Section \ref{sec:SingleLog} presents numerical results for sub-leading
single-log divergences and shows that the numerics fit very well, but not
quite perfectly, a form one might guess based on the physics of
double-log divergences.
The formalism and calculations that have led to our results for rates
have spanned many papers, and one can reasonably worry about the
possibility of error somewhere along the way.
Section \ref{sec:error} provides a compendium of several non-trivial
cross-checks of our results.
Section \ref{sec:conclusion} offers our conclusion and our outlook
for what needs to be done in future work.
Appendix \ref{app:summary} contains a complete summary of
all our final formulas for rates. Many technical
issues, derivations, and side investigations
are left for the other appendices.
\section{Method for computing diagrams}
\label {sec:diagrams}
\subsection{Symmetry Factor Conventions}
\label{sec:SymFactor}
Before discussing how to find formulas for differential
rates, we should clarify some conventions.
Note that each virtual diagram in fig.\ \ref{fig:virtII}, as well as
the second row of
fig.\ \ref{fig:virtI}, has a loop in the amplitude (an all-blue loop)
or conjugate amplitude (an all-red loop) that should be associated
with a diagrammatic loop symmetry factor of $\tfrac12$.
Our convention in this paper
is that any such diagrammatic symmetry factor associated
with an internal loop is already included in the formula for
what we call $\Delta\,d\Gamma/dx\,dy$ in
(\ref{eq:virtIntegrands}).
Note that the loops in the first row of fig.\ \ref{fig:virtI} do
{\it not}\/ have an associated symmetry factor.
In contrast, we do not include any identical-particle final-state
symmetry factors in our formulas for differential rates.
These must be included by hand whenever integrating over the
longitudinal momentum fractions of daughters if the integration
region double-counts final states. For example, the total rate for
real double-splitting $g\to ggg$ is formally given by
\begin {equation}
\Delta\Gamma_{g\to ggg} =
\frac{1}{3!} \int_0^1 dx \int_0^{1{-}x} dy \>
\left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{g\to ggg}
\end {equation}
because the integration region used above covers all $3!$ permutations
of possible momentum fractions $x$, $y$, and $z=1{-}x{-}y$ for
the three daughter gluons.
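The factor $1/3!$ can be checked symbolically with any toy integrand symmetric under permutations of $(x,y,z)$; e.g.\ with SymPy and the (arbitrary) choice $f = xyz$:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
z = 1 - x - y
f = x * y * z     # arbitrary toy integrand, symmetric in (x, y, z)

# Integral over the full region {x > 0, y > 0, x + y < 1} ...
full = sp.integrate(sp.integrate(f, (y, 0, 1 - x)), (x, 0, 1))
# ... versus the ordered region x <= y <= z, which counts each
# unordered assignment of (x, y, z) to the daughters exactly once.
ordered = sp.integrate(sp.integrate(f, (y, x, (1 - x) / 2)),
                       (x, 0, sp.Rational(1, 3)))

# The full region over-counts by exactly 3! = 6.
assert sp.simplify(full - 6 * ordered) == 0
```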
Similarly, for $g \to gg$ processes, formally
\begin {equation}
\Gamma_{g\to gg}^{\rm LO} =
\frac{1}{2!} \int_0^1 dx
\left[ \frac{d\Gamma}{dx} \right]_{g\to gg}^{\rm LO} ,
\qquad
\Delta\Gamma_{g\to gg}^{\rm NLO} =
\frac{1}{2!} \int_0^1 dx
\left[ \Delta \frac{d\Gamma}{dx} \right]_{g\to gg}^{\rm NLO} .
\end {equation}
We use the caveat ``formally'' because the total splitting rates
$\Gamma$ and $\Delta\Gamma$ above are infrared divergent, but they
provide simple examples for explaining our conventions.
\subsection{Relating virtual diagrams to previous work}
In the context of (large-$N_{\rm f}$) QED, ref.\ \cite{QEDnf} showed
how many diagrams needed for virtual corrections to single splitting
could be obtained from results for real double splitting via
what were named back-end and front-end transformations.
For the current context of QCD,
figs.\ \ref{fig:relateI} and \ref{fig:relateII}
depict diagrammatically how all
but two of the Class I and II virtual diagrams we need
(figs.\ \ref{fig:virtI} and \ref{fig:virtII}) can be related to known results
for crossed and sequential $g \to ggg$ diagrams
(figs.\ \ref{fig:crossed} and \ref{fig:seq}) using
back-end and front-end transformations, sometimes accompanied by
switching the variable names $x$ and $y$ and/or complex conjugation.
Diagrammatically, a back-end transformation corresponds to taking
the {\it latest}-time splitting vertex in one of our rate diagrams
and sliding it around the back end of the diagram from the amplitude
to the conjugate-amplitude or vice versa.
Diagrammatically, a front-end transformation corresponds to taking
the {\it earliest}-time splitting vertex and sliding it around the
front end of the diagram.
\begin {figure}[t]
\begin {center}
\includegraphics[scale=0.4]{relateI.eps}
\caption{
\label{fig:relateI}
Relation of all but one Class I virtual diagram (fig.\ \ref{fig:virtI})
to real $g \to ggg$ diagrams.
The black arrows indicate moving the latest-time
(or earliest-time) vertex using a back-end (or front-end)
transformation \cite{QEDnf}.
}
\end {center}
\end {figure}
\begin {figure}[t]
\begin {center}
\includegraphics[scale=0.4]{relateII.eps}
\caption{
\label{fig:relateII}
Relation of all but one Class II virtual diagram (fig.\ \ref{fig:virtII})
to real $g \to ggg$ diagrams via front-end transformations.
}
\end {center}
\end {figure}
In terms of formulas, the only effect of a back-end transformation
is to introduce an overall minus sign in the corresponding formula
for $d\Gamma/dx\,dy$ \cite{QEDnf}. So, for example,
fig.\ \ref{fig:relateI} tells us that
\begin {equation}
\left[ \frac{d\Gamma}{dx\,dy} \right]_{\bar x y x y}
=
- \left[ \frac{d\Gamma}{dx\,dy} \right]_{x \bar y \bar x y}^*
\end {equation}
and so
\begin {equation}
2\operatorname{Re} \left[ \frac{d\Gamma}{dx} \right]_{\bar x y x y}
=
- \int_0^{1-x} dy \> 2\operatorname{Re}\left[ \frac{d\Gamma}{dx\,dy} \right]_{x \bar y \bar x y}
.
\end {equation}
Similarly,
\begin {equation}
2\operatorname{Re} \left[ \frac{d\Gamma}{dx} \right]_{y x \bar x y}
=
- \int_0^{1-x} dy \>
\left\{
\mbox{Replace $x\leftrightarrow y$ in formula for}~
2\operatorname{Re}\left[ \frac{d\Gamma}{dx\,dy} \right]_{x y \bar y \bar x}
\right\}
.
\end {equation}
When making a back-end transformation, one may also have to include
a loop symmetry factor if the resulting virtual diagram has one,
which the original $g{\to}ggg$ processes do not.
Front-end transformations are more complicated.
In the cases where it is an $x$ emission at the earliest vertex that is
being moved between the amplitude and conjugate amplitude, matching up
the longitudinal momentum fractions of the lines of the diagrams
requires replacing
\begin {equation}
(x,y,E) \longrightarrow
\Bigl( \frac{{-}x}{1{-}x} \,,\, \frac{y}{1{-}x} \,,\, (1{-}x)E \Bigr) ,
\end {equation}
where $E$ is the energy of the initial particle in the real or virtual
double-splitting process.
See section 4.2 of ref.\ \cite{QEDnf} for a more detailed discussion.
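As a quick consistency check of the substitution above: the daughter line
that is not moved keeps its absolute longitudinal momentum, while the
moved $x$ emission picks up an overall sign,
\begin {equation}
  \frac{y}{1{-}x} \times (1{-}x)E = yE ,
  \qquad
  \frac{{-}x}{1{-}x} \times (1{-}x)E = -xE ,
\end {equation}
consistent with relabeling the energy of the initial particle as $(1{-}x)E$.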
There is also an overall normalization factor associated with the
transformation that, for our case here where all the particles are
gluons, amounts to%
\footnote{
See appendix H of ref.\ \cite{QEDnf}, especially eqs.\ (H.13) and (H.14)
there. In (H.13) of ref.\ \cite{QEDnf} there was additionally
an overall factor of $2N_{\rm f} {\cal N}_e/{\cal N}_\gamma$ that arose because
that front-end transformation related a diagram with an initial electron
to one with an initial photon, and the $2N_{\rm f} {\cal N}_e/{\cal N}_\gamma$
reflected the different factors associated with averaging over initial
flavors and helicities. In our case, the initial particle is always
a gluon, so no such adjustment is necessary.
Also, eqs.\ (H.13) and (H.14) of ref.\ \cite{QEDnf} do not have the
overall minus sign of our (\ref{eq:frontend}) above because they
included a back-end transformation in addition to the front-end
transformation.
Note that those equations have also implemented $x\leftrightarrow y$ in
addition to the front-end transformation (\ref{eq:frontend}) above.
}
\begin {equation}
\frac{d\Gamma}{dx\,dy}
\xrightarrow{\text{front-end}}
- (1{-}x)^{-\epsilon}
\left\{
\frac{d\Gamma}{dx\,dy}
~\mbox{with}~
(x,y,E) \longrightarrow
\Bigl( \frac{{-}x}{1{-}x} \,,\, \frac{y}{1{-}x} \,,\, (1{-}x)E \Bigr)
\right\}
\label {eq:frontend}
\end {equation}
in $4{-}\epsilon$ spacetime dimensions.
The overall factor $(1{-}x)^{-\epsilon}$ will be relevant because we will use
dimensional regularization to handle and renormalize
UV divergences in our calculation.
We should note that there are a few additional subtleties in
practically implementing front-end transformations, which we leave
to appendix \ref{app:method}.
As an example of (\ref{eq:frontend}),
the relation depicted by the first case of fig.\ \ref{fig:relateII}
gives
\begin {multline}
2\operatorname{Re} \left[ \frac{d\Gamma}{dx} \right]_{\bar y x \bar y \bar x}
=
- \tfrac12 (1{-}x)^{-\epsilon} \int_0^1 dy
\biggl(
\mbox{Replace $x\leftrightarrow y$ in result of}~
\\
2\operatorname{Re}
\biggl\{
\left[ \frac{d\Gamma}{dx\,dy} \right]_{xy\bar x\bar y}
~\mbox{with substitution (\ref{eq:frontend})}
\biggr\}
\biggr) .
\end {multline}
The overall factor of $\tfrac12$ is included because of
the loop symmetry factor associated with the (red) loop
in the $\bar y x \bar y \bar x$ virtual diagram.
The only two virtual diagrams not covered by figures
\ref{fig:relateI} and \ref{fig:relateII}
are $xyy\bar x$ and $x\bar y\bar y\bar x$.
But these diagrams are related to
each other by combined front-end and back-end transformations,
as depicted in fig.\ \ref{fig:relateFund}.
That means that transformations have given us a short-cut for determining
all virtual diagrams except for one, which we take to be
$xyy\bar x$.
Fortunately, the $xyy\bar x$ diagram has the same form as
the QED diagram of fig.\ \ref{fig:QEDfund}
previously computed in ref.\ \cite{QEDnf},
and the QED result can be easily adapted to QCD.
One just needs to include QCD group factors associated with
splitting vertices; use QCD instead of QED
Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) splitting functions;
correctly account for identical-particle symmetry factors; and
use QCD rather than QED results for the complex frequencies and
normal modes associated with the $\hat q$ approximation to
the propagation of the high-energy particles through the medium.
Details of the conversion are given in appendix \ref{app:methodFund}.
\begin {figure}[t]
\begin {center}
\includegraphics[scale=0.4]{relateFund.eps}
\caption{
\label{fig:relateFund}
Relation to each other of the two virtual diagrams of
figs.\ \ref{fig:virtI} and \ref{fig:virtII} that are
not covered by the relations of
figs.\ \ref{fig:relateI} and \ref{fig:relateII}.
}
\end {center}
\end {figure}
\begin {figure}[t]
\begin {center}
\includegraphics[scale=0.6]{QEDfund.eps}
\caption{
\label{fig:QEDfund}
A QED version \cite{QEDnf} of the $xyy\bar x$ diagram of
fig.\ \ref{fig:virtI}.
}
\end {center}
\end {figure}
We give more detail on implementing the above methods in appendix
\ref{app:method}, and final results for unrenormalized diagrams are
given in appendix \ref{app:summary}
[with $\sigma_{\rm ren}{=}0$ and $\sigma_{\rm bare}{=}1$ in section \ref{app:NLOsummary}].
\subsection{UV divergences, renormalization, and running of $\alpha_{\rm s}$}
The virtual diagrams of figs.\ \ref{fig:virtI} and \ref{fig:virtII} contain
UV-divergent loops in the amplitude or conjugate amplitude.
It may seem surprising that most of them can be related via
figs.\ \ref{fig:relateI} and \ref{fig:relateII} to real double splitting
($g \to ggg$) diagrams that involve only tree-level diagrams in the
amplitude and conjugate amplitude.
This is possible because we are working with {\it time-ordered}\/ diagrams:
individual time-ordered interferences of
tree-level diagrams are UV-divergent even
though the sum of all the different time-orderings is not.
See section 4.1 of ref.\ \cite{QEDnf} for more discussion of this point.
In any case, the original calculations \cite{2brem,seq,dimreg}
of the $g{\to}ggg$ diagrams of figs.\ \ref{fig:crossed} and \ref{fig:seq}
discussed the UV divergence of each diagram and showed that they
indeed canceled.
The corresponding divergences of the virtual diagrams, however, will
not cancel. Indeed, they must conspire to produce the known renormalization
of $\alpha_{\rm s}$.
Ref.\ \cite{QEDnf} demonstrated how this worked out for large-$N_{\rm f}$ QED,
but the diagrammatics of renormalization of the QCD coupling is a little
more complicated.
We will also encounter a well-known annoyance of
Light Cone Perturbation Theory (LCPT): individual diagrams will contain
mixed UV-IR divergences that only cancel when the diagrams are summed
together.%
\footnote{
For an example from calculations that are tangentially related to ours,
see Beuf \cite{Beuf1,Beuf2} and
H\"anninen, Lappi, and Paatelainen \cite{LaP,HLaP} on next-to-leading-order
deep inelastic scattering (NLO DIS).
For a description of the similarities and differences of our problem
and theirs, see appendix B of ref.\ \cite{QEDnf}.
For a very early result on obtaining the correct renormalization of
the QCD coupling with LCPT in the context of vacuum diagrams, see
ref.\ \cite{HarindranathZhang3}.
}
\subsubsection{UV and IR regulators}
We use dimensional regularization in $4{-}\epsilon$
spacetime dimensions for UV divergences.
However, we use the letter $d$ to refer to the number
of \textit{transverse spatial} dimensions
\begin {equation}
d \equiv d_\perp = 2-\epsilon .
\end {equation}
For infrared divergences, we introduce a hard lower cut-off
$(p^+)_{\rm min}$ on light-cone momentum components $p^+$.
Hard momentum cut-offs complicate gauge invariance, but this is a
fairly standard procedure in LCPT, since LCPT is formulated specifically in
light-cone gauge $A^+{=}0$.
Note that $p^+$ is invariant under any
{\it residual}\/ gauge transformation that preserves light-cone gauge.
It would of course be nicer to use a more generally gauge invariant choice
of infrared regulator, but that would lead to more
complicated calculations.%
\footnote{
\label {foot:IRdimreg}
In particular, one might imagine using dimensional regularization for
the infrared as well as the ultraviolet. Unfortunately, the
dimensionally-regulated expansions in $\epsilon$ that we currently have
available \cite{dimreg,QEDnf} for the types of diagrams we need
all made use of the fact that dimensional regularization was
{\it only} needed for the ultraviolet.
}
We will write our IR cut-off on longitudinal momenta $p^+$ as
\begin {equation}
(p^+)_{\rm min} = P^+ \delta
\label {eq:deltadef}
\end {equation}
where $P^+$ is the longitudinal momentum of the initial particle
in the double-splitting process and $\delta$ is an arbitrarily tiny
positive number.%
\footnote{
A technicality concerning orders of limits: One should take the
UV regulator $\epsilon \to 0$ before taking the IR regulator
$\delta \to 0$. Taking $\delta \to 0$ first would be
equivalent to using dimensional
regularization for the IR as well as the UV, which is
currently problematic for the reason given in
footnote \ref{foot:IRdimreg}.
}
For consistency of IR regularization of the theory,
this constraint must be applied to all particles in the process.
For instance, in a $g{\to}ggg$ process where $P^+$ splits into
daughters with longitudinal momenta $x P^+$, $y P^+$, and $z P^+$, we
require that the longitudinal momentum fractions $x$, $y$, and $z$
all exceed $\delta$. (This automatically guarantees
that internal particle lines in $g{\to}ggg$ diagrams also have
$p^+ > P^+ \delta$.)
In a virtual correction to
$g \to gg$ where $P^+$ splits into $x P^+$ and $(1{-}x)P^+$,
we must have $x$ and $1{-}x$ greater than $\delta$, but we must
also impose that the momentum fractions of internal virtual lines
are greater than $\delta$ as well. We'll see explicit examples below.
With this notation, the annoying mixed UV-IR divergences of LCPT are
proportional to $\epsilon^{-1} \ln\delta$, which is the product of
a logarithmic UV divergence $\epsilon^{-1}$ and a logarithmic IR divergence
$\ln\delta$.
\subsubsection{Results for UV (including mixed UV-IR) divergences}
We can read off the results for $1/\epsilon$ divergences from the
complete results given in appendix \ref{app:summary}. However,
we will take the opportunity to be a little more concrete here
in the main text by stepping through the calculation for one
of the diagrams, but focusing on just the UV-divergent ($1/\epsilon$)
terms. Then we'll put the diagrams together to see the
cancellation of mixed UV-IR divergences and the appearance of
the QCD beta function coefficient $\beta_0$.
Consider the first NLO $g{\to}gg$
diagram ($yx\bar xy$) in fig.\ \ref{fig:relateI},
which shows that diagram related by back-end transformation
to the $g{\to}ggg$ diagram $xy\bar y\bar x$.
The $1/\epsilon$ piece of the latter can be taken from ref.\ \cite{dimreg}
and is [see appendix \ref{app:details} of the current paper
for more detail]
\begin {align}
\left[ \frac{d\Gamma}{dx\,dy} \right]_{xy\bar y\bar x} \approx{} &
\frac{C_{\rm A}^2 \alpha_{\rm s}^2}{8\pi^2\epsilon}
\bigl[ (i\Omega \operatorname{sgn} M)_{-1,x,1-x} + (i\Omega \operatorname{sgn} M)_{-(1-y),x,z} \bigr]
\nonumber\\ &\quad\times
x y z^2 (1{-}x)(1{-}y)
\left[
(\alpha + \beta)(1{-}x)(1{-}y)
+ (\alpha + \gamma) x y
\right] ,
\label {eq:xyyxdiv}
\end {align}
where
\begin {equation}
\Omega_{x_1,x_2,x_3} \equiv
\sqrt{ \frac{-i \hat q_{\rm A}}{2 E}
\left(\frac{1}{x_1}+\frac{1}{x_2}+\frac{1}{x_3}\right) } ,
\qquad
M_{x_1,x_2,x_3} \equiv - x_1 x_2 x_3 E ,
\label {eq:OmMdefs}
\end {equation}
and $(\alpha,\beta,\gamma)$ are functions of $x$ and $y$ that represent
various combinations of the helicity-dependent DGLAP splitting functions
associated with the vertices in the diagram.%
\footnote{
Details
of the definition of $(\alpha,\beta,\gamma)$
in terms of DGLAP splitting functions are
given in sections 4.5 and 4.6 of ref.\ \cite{2brem}.
In order to make those definitions work with front-end transformations,
one must additionally include absolute value signs as discussed after
eq.\ (\ref{eq:abc}) of the current paper.
}
In this section we use $\approx$ to indicate that we are only
keeping $1/\epsilon$ terms.
Back-end transforming the above expression and swapping $x{\leftrightarrow}y$,
as indicated in fig.\ \ref{fig:relateI}, gives the corresponding result
for the virtual diagram $yx\bar xy$:
\begin {subequations}
\label {eq:crossedXpieces}
\begin {align}
2\operatorname{Re}\left[ \frac{d\Gamma}{dx} \right]_{yx\bar x y}
\approx&
- \frac{C_{\rm A}^2 \alpha_{\rm s}^2}{4\pi^2\epsilon}
\int_\delta^{1-x-\delta} dy \>
\operatorname{Re}(i\Omega_{-1,y,1-y} + i\Omega_{-(1-x),y,z})
x y z (1{-}x)(1{-}y)
\nonumber\\ &\qquad\times
\left[
(\alpha{+}\beta) z(1{-}x)(1{-}y)
+ (\alpha{+}\gamma) x y z
\right] ,
\end {align}
where we have taken $2\operatorname{Re}(\cdots)$ to include the conjugate diagram
as well.
Doing similar calculations for the other crossed Class I diagrams
(the top line of fig.\ \ref{fig:relateI}), by using $g{\to}ggg$ results
for $x\bar y y\bar x$ and $x\bar y\bar x y$ from ref.\ \cite{dimreg}
and then transforming as in fig.\ \ref{fig:relateI}, gives
\begin {align}
2\operatorname{Re}\left[ \frac{d\Gamma}{dx} \right]_{y\bar x xy}
\approx&
- \frac{C_{\rm A}^2 \alpha_{\rm s}^2}{4\pi^2\epsilon}
\int_\delta^{1-x-\delta} dy \>
\operatorname{Re}(i\Omega_{-1,y,1-y} + i\Omega_{-(1-x),y,z})
x y z (1{-}x)(1{-}y)
\nonumber\\ &\qquad\times
\left[
- (\alpha + \beta) z(1{-}x)(1{-}y)
+ (\beta + \gamma) x y (1{-}x)(1{-}y)
\right] ,
\\
2\operatorname{Re}\left[ \frac{d\Gamma}{dx} \right]_{\bar x yxy}
\approx&
- \frac{C_{\rm A}^2 \alpha_{\rm s}^2}{4\pi^2\epsilon}
\int_\delta^{1-x-\delta} dy \>
\operatorname{Re}(i\Omega_{-1,x,1-x} + i\Omega_{-(1-x),y,z})
x y z (1{-}x)(1{-}y)
\nonumber\\ &\qquad\times
\left[
- (\alpha + \gamma) x y z
- (\beta + \gamma) x y (1{-}x)(1{-}y)
\right] ,
\\
2\operatorname{Re}\left[ \frac{d\Gamma}{dx} \right]_{yxy\bar x}
\approx&
- \frac{C_{\rm A}^2 \alpha_{\rm s}^2}{4\pi^2\epsilon}
\int_\delta^{1-x-\delta} dy \>
\operatorname{Re}(i\Omega_{-1,y,1-y} + i\Omega_{-1,x,1-x})
x y z (1{-}x)(1{-}y)
\nonumber\\ &\qquad\times
\left[
- (\alpha + \gamma) x y z
- (\beta + \gamma) x y (1{-}x)(1{-}y)
\right] .
\label {eq:divyxyx}
\end {align}
\end {subequations}
Eqs.\ (\ref{eq:crossedXpieces}) sum to
\begin {align}
\left[ \frac{d\Gamma}{dx} \right]_{{\rm virt\,I}~\rm crossed}
\approx&
\frac{C_{\rm A}^2 \alpha_{\rm s}^2}{2\pi^2\epsilon}
\operatorname{Re}(i\Omega_{-1,x,1-x})
\int_\delta^{1-x-\delta} dy \>
x^2 y^2 z (1{-}x)(1{-}y)
\nonumber\\ &\qquad\times
\left[
(\alpha + \gamma) z
+ (\beta + \gamma) (1{-}x)(1{-}y)
\right] .
\label{eq:crossedXdiv0}
\end {align}
Since we are focused here just on the $1/\epsilon$ pieces above, the integral
may be done using the explicit $d{=}2$ expressions (\ref{eq:abc})
for $(\alpha,\beta,\gamma)$.
But the combination $(\alpha + \gamma) z + (\beta + \gamma) (1{-}x)(1{-}y)$
appearing in (\ref{eq:crossedXdiv0}) turns out to be dimension-independent
in any case! (See appendix \ref{app:abcdim}.)
Remember that for the crossed virtual diagrams, like all the Class I diagrams of
fig.\ \ref{fig:virtI}, taking $x \to 1{-}x$ generates other distinct
diagrams that need to be included as well.
So, do the $y$ integral in (\ref{eq:crossedXdiv0}),
combine the result with $x\to 1{-}x$
[as in (\ref{eq:dGnetNLOdef}) or (\ref{eq:VVR})],
and take the small-$\delta$ limit. This gives
\begin {equation}
\biggl(\left[ \frac{d\Gamma}{dx} \right]_{{\rm virt\,I}~\rm crossed}\biggr)
+ (x\to 1{-}x)
\approx
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO}
\frac{C_{\rm A}\alpha_{\rm s}}{\pi\epsilon}
\bigl[
- \tfrac{11}{3} + 2\ln\bigl(x(1{-}x)\bigr) - 6\ln\delta
\bigr] ,
\label {eq:divcrossed}
\end {equation}
where $[d\Gamma/dx]^{\rm LO}$ is the leading-order single splitting
result%
\footnote{
The QCD version of the leading-order rate
goes back to BDMPS \cite{BDMPS12,BDMPS3}
and Zakharov \cite{Zakharov}.
For a discussion of how the QED version in our notation matches up
with the original QED result of Migdal \cite{Migdal}, see
appendix C.4 of ref.\ \cite{QEDnf}.
}
\begin {equation}
\left[ \frac{d \Gamma}{dx} \right]^{\rm LO}
= \frac{\alpha_{\rm s}}{\pi} \, P(x) \, \operatorname{Re}(i\Omega_{-1,x,1-x})
+ O(\epsilon)
\label {eq:LO0}
\end {equation}
and $P(x)$ is the DGLAP $g{\to}gg$ splitting function.
A non-trivial feature of (\ref{eq:divcrossed}) is that the $y$ integration
in (\ref{eq:crossedXdiv0}),
combined with the addition of $x \to 1{-}x$, gave a result proportional to
the $P(x)$ in (\ref{eq:LO0}). This is what will later make possible
the absorption of $1/\epsilon$ divergences
by renormalizing the $\alpha_{\rm s}$ in the leading-order result.
For the time being, however,
note the unwanted mixed UV-IR divergence $\epsilon^{-1}\ln\delta$ in
(\ref{eq:divcrossed}).
Now turn to the sequential virtual diagrams.
The sum $2\operatorname{Re}[xy\bar x\bar y + x\bar x y\bar y + x\bar x \bar y y]$
of {\it non}-virtual sequential $g{\to}ggg$ diagrams shown in
fig.\ \ref{fig:seq} (together with their conjugates) represents
the sum of all time orderings of a tree-level process and so does
not give any net $1/\epsilon$ divergence.%
\footnote{
This is shown explicitly by summing the individually divergent
time-ordered diagrams in eq.\ (5.20) of \cite{dimreg}.
}
So there will also be no divergence in its back-end transformation,
which fig.\ \ref{fig:relateI} shows is equivalent to the
sum $2\operatorname{Re}[xy\bar x y + x\bar x y y + x\bar x \bar y \bar y]$
of three Class I sequential virtual diagrams.
Nor will there be any divergence in its front-end transformation
followed by the swap $x\leftrightarrow y$,
corresponding by fig.\ \ref{fig:relateII} to the
sum $2\operatorname{Re}[\bar y x \bar y \bar x + \bar y \bar y x \bar x + y y x \bar x]$
of three Class II sequential diagrams. So none of these groups of
diagrams generate a divergence.
What remains of figs.\ \ref{fig:virtI} and \ref{fig:virtII}
is the Class I virtual diagram $xyy\bar x$ and the Class II
virtual diagram $x\bar y\bar y\bar x$, which are related to
each other via fig.\ \ref{fig:relateFund}.
As mentioned earlier, the result for $2\operatorname{Re}[xyy\bar x]$ can be converted from
the known result \cite{QEDnf} for the similar QED diagram of
fig.\ \ref{fig:QEDfund}.
The UV-divergent $1/\epsilon$ piece of that QED result was%
\footnote{
\label{foot:QEDnfF42}
This can be obtained by expanding eq.\ (F.42) of ref.\ \cite{QEDnf}
in $\epsilon$ and replacing ${\mathfrak y}_e$ there by its definition
${\mathfrak y}_e \equiv y_e/(1-x_e)$.
There was an overall sign error in eq.\ (F.42) of the original
published version of ref.\ \cite{QEDnf}, which is treated
correctly in the version above.
}
\begin {equation}
2\operatorname{Re} \left[ \frac{d\Gamma}{dx_e} \right]_{xyy\bar x}
\approx
-
\frac{N_{\rm f} \alpha_{\scriptscriptstyle\rm EM}^2}{\pi^2 \epsilon} \,
P_{e\to e}(x_e) \, \operatorname{Re}(i \Omega^{\rm QED} \operatorname{sgn} M)_{-1,x_e,1-x_e}
\int_0^{1-x_e} \frac{dy_e}{1-x_e} \> P_{\gamma\to e}\bigl(\frac{y_e}{1-x_e}\bigr) .
\label{eq:QEDdiv}
\end{equation}
The translation from a QED diagram to a QCD diagram is explained
in our appendix \ref{app:methodFund} and gives
\begin {align}
2\operatorname{Re}\left[ \frac{d\Gamma}{dx} \right]_{xyy\bar x}
&\approx
-
\frac{\alpha_{\rm s}^2}{2 \pi^2 \epsilon} \,
P(x) \, \operatorname{Re}(i \Omega \operatorname{sgn} M)_{-1,x,1-x}
\int_\delta^{1-x-\delta} \frac{dy}{1-x} \> P\bigl(\frac{y}{1-x}\bigr)
\nonumber\\
&=
-
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO}
\frac{\alpha_{\rm s}}{2 \pi \epsilon} \,
\int_\delta^{1-x-\delta} \frac{dy}{1{-}x} \> P\bigl(\frac{y}{1{-}x}\bigr) .
\end {align}
Our IR cut-off $\delta$ must now be included in the integration limits
because, unlike in QED, LPM splitting rates in QCD are (non-integrably)
infrared divergent.
The $\operatorname{sgn} M$ factors are included above because, even though
$M_{-1,x,1-x}$ is positive for the $xyy\bar x$ diagram, this more general form
is consistent with the front-end transformation we are about to perform.
Since $xyy\bar x$ above is a Class I diagram, we need to also add in the
other diagram that is generated by $x \to 1{-}x$.
Finally, the transformation of fig.\ \ref{fig:relateFund} gives the
remaining (Class II) diagram $x\bar y\bar y\bar x$.%
\footnote{
As discussed after eq.\ (\ref{eq:Pgg}), one must include an absolute
value sign in the definition of $P(x)$ in order to make it work
with front-end transformations using our conventions.
}
The sum of all three is
\begin {align}
\left[ \frac{d\Gamma}{dx} \right]_{\rm other\,virt}
&\approx
-
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO}
\frac{\alpha_{\rm s}}{2 \pi \epsilon} \,
\left[
\int_\delta^{1-x-\delta}\!\!
\frac{dy}{1{-}x} \> P\bigl(\frac{y}{1{-}x}\bigr)
+ \int_\delta^{x-\delta}\!
\frac{dy}{x} \> P\bigl(\frac{y}{x}\bigr)
+ \int_\delta^{1-\delta}\!\!
dy \> P(y)
\right]
\nonumber\\
&\approx
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO}
\frac{C_{\rm A}\alpha_{\rm s}}{\pi\epsilon}
\bigl[
\tfrac{11}{2} - 2\ln\bigl(x(1{-}x)\bigr) + 6\ln\delta
\bigr] .
\label {eq:divother}
\end {align}
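The small-$\delta$ behavior quoted in the last line of (\ref{eq:divother})
follows from an elementary integral of the splitting function. Using
$P(y) = 2C_{\rm A} \bigl[ \frac{y}{1-y} + \frac{1-y}{y} + y(1{-}y) \bigr]$,
\begin {equation}
  \int_\delta^{1-\delta} dy \> P(y)
  = - C_{\rm A} \bigl( 4\ln\delta + \tfrac{11}{3} \bigr) + O(\delta) ,
\end {equation}
and rescaling $y$ shows that the first two integrals in
(\ref{eq:divother}) are given by the same expression with
$\delta \to \delta/(1{-}x)$ and $\delta \to \delta/x$ respectively.
Summing the three and multiplying by $-\alpha_{\rm s}/2\pi\epsilon$
reproduces the coefficient
$\tfrac{11}{2} - 2\ln\bigl(x(1{-}x)\bigr) + 6\ln\delta$ above.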
Adding (\ref{eq:divcrossed}) and (\ref{eq:divother}) gives
the total UV divergence from virtual corrections to single
splitting:
\begin {equation}
\left[ \Delta \frac{d\Gamma}{dx} \right]^{\rm NLO}_{g\to gg}
\approx
-
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO}
\frac{\beta_0\alpha_{\rm s}}{\epsilon}
\label {eq:divNLO}
\end {equation}
with
\begin {equation}
\beta_0 = -\frac{11C_{\rm A}}{6\pi} \,.
\label {eq:beta0}
\end {equation}
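Explicitly, the bracketed coefficients of (\ref{eq:divcrossed}) and
(\ref{eq:divother}) combine as
\begin {equation}
  \bigl[ -\tfrac{11}{3} + 2\ln\bigl(x(1{-}x)\bigr) - 6\ln\delta \bigr]
  + \bigl[ \tfrac{11}{2} - 2\ln\bigl(x(1{-}x)\bigr) + 6\ln\delta \bigr]
  = \tfrac{11}{6} \,,
\end {equation}
so that the total is
$[d\Gamma/dx]^{\rm LO} \, \frac{11 C_{\rm A} \alpha_{\rm s}}{6\pi\epsilon}
= - [d\Gamma/dx]^{\rm LO} \beta_0\alpha_{\rm s}/\epsilon$,
which is (\ref{eq:divNLO}).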
The $\beta_0$ above is the same coefficient that appears in the
one-loop beta function for $\alpha_{\rm s} = g^2/4\pi$:
\begin {equation}
\frac{d\alpha_{\rm s}}{d(\ln\mu)}
= - \frac{( 11 C_{\rm A} - 2 N_{\rm f})}{6\pi} \,
\alpha_{\rm s}^2 ,
\label {eq:RNG}
\end {equation}
where $N_{\rm f}$ is the number of quark flavors.
The $N_{\rm f}$ term does not appear in (\ref{eq:beta0})
because we have not included quarks in our calculations,
consistent with our choice to work in the large-$N_{\rm c}$ limit
(for $N_{\rm f}$ fixed).
Note that the UV-IR mixed divergences have canceled between
(\ref{eq:divcrossed}) and (\ref{eq:divother}), as well as
the $\ln\bigl( x(1-x) \bigr)$ terms. These cancellations had to occur in
order for the total divergence of the virtual diagrams to be
absorbed by usual QCD renormalization, as we'll now see.%
\footnote{
There is something sloppy one might have tried in the preceding
calculations that would have failed to produce the correct UV divergences,
which we mention here as a caution to others because
we unthinkingly tried it on our first attempt at
this calculation.
Suppose that we had set $\delta$ to zero in all the integration
limits, so that each of the integrals we've done above was
IR divergent and ill-defined. Then suppose that in each integral
we scaled the integration variable $y$ so that each integral
was now from 0 to 1, e.g.
$\int_0^{1-x} dy \> f(y) \to (1{-}x) \int_0^1 dy \> f\bigl((1{-}x)y\bigr)$
and similarly for $x \to 1{-}x$. Now that the integration limits are
the same, one could add together all the integrands for
all the diagrams. The combined integral would be convergent but
does not give the correct result (\ref{eq:divNLO}).
That's because one can get any incorrect answer by manipulating
sums of ill-defined integrals. To properly regularize a theory,
one must first independently define the cut-off on the theory
(in this case the
IR cutoff on longitudinal momenta) and only then
add up all diagrams calculated with that cut-off.
}
\subsubsection{Renormalization}
Following ref.\ \cite{QEDnf},%
\footnote{
Specifically section 4.3.4 and footnote 26 of that reference.
Our $\beta_0$ here corresponds to $2N_{\rm f}\alpha_{\scriptscriptstyle\rm EM}/3\pi$ in QED.
}
we find it simplest to implement
renormalization in this calculation by imagining that all diagrams have
been calculated using the bare (unrenormalized) coupling and then
rewriting $(\alpha_{\rm s})_{\rm bare}$ in terms of $(\alpha_{\rm s})_{\rm ren}$.
For the $\overline{\mbox{MS}}$-renormalization scheme, that's
\begin {equation}
\alpha_{\rm s}^{\rm bare} =
\alpha_{\rm s}^{\rm ren}
+ \frac{\beta_0}{2} (\alpha_{\rm s}^{\rm ren})^2
\Bigl( \frac{2}{\epsilon} - \gamma_{\rm E}^{} + \ln(4\pi) \Bigr)
+ O(\alpha_{\rm s}^3) .
\label {eq:renorm}
\end {equation}
When expressed in terms of renormalized $\alpha_{\rm s}$, the $1/\epsilon$
divergences should then cancel in the combination%
\footnote{
Though the $[\Delta\,d\Gamma/dx]^{\rm LO+NLO}_{g\to gg}$
defined in (\ref{eq:ratebare})
is UV finite, it is power-law IR divergent.
Only in combinations of
the $g{\to}gg$ rates with $g{\to}ggg$ rates, such as
(\ref{eq:dGnet}), are power-law IR divergences eliminated,
leaving double-log IR divergences.
}
\begin {equation}
\left[ \Delta \frac{d\Gamma}{dx} \right]^{\rm LO+NLO}_{g\to gg}
\equiv
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO,bare}
+
\left[ \Delta \frac{d\Gamma}{dx} \right]^{\rm NLO,bare}_{g\to gg}
\label {eq:ratebare}
\end {equation}
through order $\alpha_{\rm s}^2$.
Since the leading-order $[d\Gamma/dx]^{\rm bare}$ is proportional
to $\alpha_{\rm s}^{\rm bare}$, (\ref{eq:renorm}) gives
\begin {equation}
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO,bare}
=
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO,ren}
+
\frac{\beta_0 \alpha_{\rm s}^{\rm ren}}{2}
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO,ren}_{d=2-\epsilon}
\Bigl( \frac{2}{\epsilon} - \gamma_{\rm E}^{} + \ln(4\pi) \Bigr)
+ O(\alpha_{\rm s}^3) .
\label {eq:ren1}
\end {equation}
Note that, because it is multiplied by $2/\epsilon$, we will need to
use a $d{=}2{-}\epsilon$ formula for
$[ d\Gamma/dx ]^{\rm LO}$ in the last term above, as
indicated by the subscript.
We can now use (\ref{eq:ren1}) to
regroup terms in (\ref{eq:ratebare}) to write the
LO+NLO $g{\to}gg$ rate in terms of $\overline{\mbox{MS}}$ renormalized quantities
as
\begin {equation}
\left[ \Delta \frac{d\Gamma}{dx} \right]^{\rm LO+NLO}_{g\to gg}
=
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO,ren}
+
\left[ \Delta \frac{d\Gamma}{dx} \right]^{\rm NLO,ren}_{g\to gg}
\label {eq:rateren}
\end {equation}
with
\begin {equation}
\left[ \Delta \frac{d\Gamma}{dx} \right]^{\rm NLO,ren}_{g\to gg}
=
\left[ \Delta \frac{d\Gamma}{dx} \right]^{\rm NLO,bare}_{g\to gg}
+
\frac{\beta_0 \alpha_{\rm s}^{\rm ren}}{2}
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO,ren}_{d=2-\epsilon}
\Bigl( \frac{2}{\epsilon} - \gamma_{\rm E}^{} + \ln(4\pi) \Bigr) .
\label {eq:NLOren0}
\end {equation}
One can see from (\ref{eq:divNLO}) that the $1/\epsilon$ poles indeed
cancel in this renormalized $[ \Delta d\Gamma/dx ]^{\rm NLO}$.
There are many equivalent ways to introduce the $\overline{\mbox{MS}}$ renormalization
scale into the renormalization procedure outlined above.
Following ref.\ \cite{QEDnf},%
\footnote{
See in particular the discussion of eq.\ (F.31) of ref.\ \cite{QEDnf}.
}
we will introduce it by
writing the dimensionful bare $g^2/4\pi$
in $4{-}\epsilon$ spacetime dimensions as $\mu^\epsilon \alpha_{\rm s}^{\rm bare}$,
where $\alpha_{\rm s}^{\rm bare}$ is the usual dimensionless coupling for
$4$ spacetime dimensions. As a result, every power of $\alpha_{\rm s}$ in
our unrenormalized calculations comes with a power of $\mu^\epsilon$ which,
if multiplied by a $1/\epsilon$ UV divergence and expanded in $\epsilon$, will
generate the correct
logarithms $\ln\mu$ of the renormalization scale in our results,
as we detail next.
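In particular, at $O(\alpha_{\rm s}^2)$ a $1/\epsilon$ pole comes
multiplied by $\mu^{2\epsilon}$, which expands as
\begin {equation}
  \mu^{2\epsilon} \times \frac{1}{\epsilon}
  = \frac{1}{\epsilon} + 2\ln\mu + O(\epsilon) ,
\end {equation}
and the $2\ln\mu$ term is the source of the explicit $\ln\mu$ dependence
displayed in (\ref{eq:lnmu2}).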
\subsubsection{Organization of Renormalized Results}
\label {sec:OrganizeRenorm}
Formulas for the NLO $g{\to}gg$ rate are given in appendix \ref{app:NLOsummary}.
Because multiple diagrams contribute to the cancellation of
$1/\epsilon$ poles in ways that are not particularly simple diagram by diagram,
we have organized our renormalized
result for $[d\Gamma/dx]^{\rm NLO, ren}_{g\to gg}$
slightly differently than the QED case of
ref.\ \cite{QEDnf}, in a way that we will explain here.
Also, we would like to write renormalized formulas in appendix
\ref{app:NLOsummary} in a way that makes transparent the dependence
on explicit renormalization scale logarithms $\ln\mu$.
The running (\ref{eq:RNG}) of $\alpha_{\rm s}$, plus the fact that the
leading-order rate is proportional to $\alpha_{\rm s}$, implies that
the renormalized NLO rate must have explicit $\mu$ dependence
\begin {equation}
\left[ \Delta \frac{d\Gamma}{dx} \right]^{\rm NLO,ren}_{g\to gg}
=
-\left[ \frac{d\Gamma}{dx} \right]^{\rm LO} \beta_0 \alpha_{\rm s} \ln\mu
+ \cdots
\label {eq:lnmu1}
\end {equation}
in order to cancel the implicit $\mu$ dependence
$d\alpha_{\rm s}/d(\ln\mu)=\beta_0\alpha_{\rm s}^2$ of $\alpha_{\rm s}(\mu)$
from the LO rate.
In contrast, the NLO bare rate $[\Delta\,d\Gamma/dx]^{\rm NLO, bare}_{g\to gg}$
is proportional to
$(\mu^\epsilon\alpha_{\rm s})^2$, and so its divergence
(\ref{eq:divNLO}) generates
\begin {equation}
\left[ \Delta \frac{d\Gamma}{dx} \right]^{\rm NLO,bare}_{g\to gg}
=
-
\mu^{2\epsilon} \left[ \frac{d\Gamma}{dx} \right]^{\rm LO}_{d=2}
\frac{\beta_0\alpha_{\rm s}}{\epsilon}
+ \cdots
=
-
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO}_{d=2}
\frac{\beta_0\alpha_{\rm s}}{\epsilon}
-2\left[ \frac{d\Gamma}{dx} \right]^{\rm LO} \beta_0 \alpha_{\rm s} \ln\mu
+ \cdots .
\label {eq:lnmu2}
\end {equation}
The difference between the $\ln\mu$ terms of
(\ref{eq:lnmu1}) and (\ref{eq:lnmu2}) is made up by the last
term of the renormalization (\ref{eq:NLOren0}), as we'll now
make explicit while also keeping track of all $O(\epsilon^0)$
pieces of the conversion.
To start, we need the $d{=}2{-}\epsilon$ dimensional result for the leading-order
single splitting process, which appears in (\ref{eq:NLOren0}).
We'll find it convenient to write this as
\begin {subequations}
\label {eq:LOd}
\begin {equation}
\left[ \frac{d \Gamma}{dx} \right]^{\rm LO}_{d=2-\epsilon}
= 2\operatorname{Re} \left[ \frac{d \Gamma}{dx} \right]_{\substack{x\bar x\hfill\\d=2-\epsilon}} ,
\end {equation}
where complex-valued $[d\Gamma/dx]_{x \bar x}$ is the result for the
$x\bar x$ diagram of fig.\ \ref{fig:LO}:%
\footnote{
Specifically, eqs.\ (3.1), (3.2) and (3.7) of ref.\ \cite{dimreg} give
(\ref{eq:LOd}) above, except one needs to include the factor
of $\mu^\epsilon$ discussed previously.
See also the QED version in
eq.\ (F.44) of ref.\ \cite{QEDnf}.
}
\begin {align}
\left[ \frac{d\Gamma}{dx} \right]_{\substack{x\bar x\hfill\\d=2-\epsilon}}
&=
- \frac{\mu^\epsilon\alpha_{\rm s} d}{8\pi} \,
P(x) \,
\operatorname{B}(\tfrac12{+}\tfrac{d}{4},-\tfrac{d}{4}) \,
\Bigl( \frac{2\pi}{M_0\Omega_0} \Bigr)^{\epsilon/2}
i \Omega_0
\nonumber\\
&= \frac{\alpha_{\rm s}}{2\pi} \, P(x) \,
i\Omega_0
\left[ 1 + \frac{\epsilon}{2} \ln\Bigl( \frac{\pi\mu^2}{M_0 \Omega_0}\Bigr)
+ O(\epsilon^2) \right]
.
\end {align}
\end {subequations}
Here $\operatorname{B}(x,y) \equiv \Gamma(x)\,\Gamma(y)/\Gamma(x{+}y)$ is the
Euler Beta function;
we use the short-hand notations $\Omega_0$ and $M_0$ for
\begin {equation}
\Omega_0 \equiv \Omega_{-1,x,1-x}
= \sqrt{ \frac{-i\hat q_{\rm A}}{2 E}
\left( -1 + \frac{1}{x} + \frac{1}{1-x} \right) }
= \sqrt{ \frac{-i (1-x+x^2) \hat q_{\rm A}}{2x(1-x)E} } \,,
\label {eq:Om0def}
\end {equation}
\begin {equation}
M_0 \equiv M_{-1,x,1-x} = x(1{-}x)E \,;
\end {equation}
and the DGLAP $g\to gg$ splitting function $P(x)$, given by (\ref{eq:Pgg}),
is independent of dimension (see appendix \ref{app:abcdim}).
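For reference, taking the principal branch of the square root in
(\ref{eq:Om0def}) gives $\Omega_0 = e^{-i\pi/4} |\Omega_0|$, so that the
real part appearing in the leading-order rate (\ref{eq:LO0}) evaluates to
\begin {equation}
  \operatorname{Re}(i\Omega_0) = \frac{|\Omega_0|}{\sqrt 2}
  = \sqrt{ \frac{(1-x+x^2)\, \hat q_{\rm A}}{4\, x(1-x) E} } \,.
\end {equation}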
Using (\ref{eq:LOd}), we rewrite the renormalized rate (\ref{eq:NLOren0})
as
\begin {equation}
\left[ \Delta \frac{d\Gamma}{dx} \right]^{\rm NLO,ren}_{g\to gg}
=
\left[ \frac{d\Gamma}{dx} \right]_{\rm ren\,log}
+
\left[ \Delta \frac{d\Gamma}{dx} \right]^{\overline{\rm NLO}}_{g\to gg}
\label {eq:NLOren}
\end {equation}
with
\begin {equation}
\left[ \frac{d\Gamma}{dx} \right]_{\rm ren\,log}
\equiv
- \beta_0\alpha_{\rm s}
\operatorname{Re}\left(
\left[ \frac{d\Gamma}{dx} \right]_{\substack{x\bar x\hfill\\d=2}}
\left[
\ln \Bigl( \frac{\mu^2}{\Omega_0 E} \Bigr)
+ \ln\Bigl( \frac{x(1{-}x)}{4} \Bigr)
+ \gamma_{\rm E}^{}
\right]
\right)
\label {eq:renlog}
\end {equation}
and
\begin {equation}
\left[ \Delta \frac{d\Gamma}{dx} \right]^{\overline{\rm NLO}}_{g\to gg}
\equiv
\left[ \Delta \frac{d\Gamma}{dx} \right]^{\rm NLO,bare}_{g\to gg}
+
2\beta_0 \alpha_{\rm s} \operatorname{Re}\left(
\left[ \frac{d\Gamma}{dx} \right]_{\substack{x\bar x\hfill\\d=2}}
\Bigl[
\frac{1}{\epsilon} + \ln\Bigl(\frac{\pi\mu^2}{\Omega_0 E}\Bigr)
\Bigr]
\right) .
\label {eq:NLObar}
\end {equation}
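One can check that the split into (\ref{eq:renlog}) and (\ref{eq:NLObar})
reproduces (\ref{eq:NLOren0}) through $O(\epsilon^0)$: expanding with
(\ref{eq:LOd}) and using $M_0 = x(1{-}x)E$, the last term of
(\ref{eq:NLOren0}) rearranges as
\begin {multline}
  \frac{\beta_0 \alpha_{\rm s}}{2}
  \left[ \frac{d\Gamma}{dx} \right]^{\rm LO,ren}_{d=2-\epsilon}
  \Bigl( \frac{2}{\epsilon} - \gamma_{\rm E}^{} + \ln(4\pi) \Bigr)
  =
  2\beta_0 \alpha_{\rm s} \operatorname{Re}\left(
    \left[ \frac{d\Gamma}{dx} \right]_{\substack{x\bar x\hfill\\d=2}}
    \Bigl[
      \frac{1}{\epsilon} + \ln\Bigl(\frac{\pi\mu^2}{\Omega_0 E}\Bigr)
    \Bigr]
  \right)
\\
  - \beta_0\alpha_{\rm s}
  \operatorname{Re}\left(
    \left[ \frac{d\Gamma}{dx} \right]_{\substack{x\bar x\hfill\\d=2}}
    \left[
      \ln \Bigl( \frac{\mu^2}{\Omega_0 E} \Bigr)
      + \ln\Bigl( \frac{x(1{-}x)}{4} \Bigr)
      + \gamma_{\rm E}^{}
    \right]
  \right)
  + O(\epsilon) .
\end {multline}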
The first term $[d\Gamma/dx]_{{\rm ren\,log}}$
of (\ref{eq:NLOren}) contains the correct
explicit $\ln\mu$ dependence of (\ref{eq:lnmu1}). The
second term $[d\Gamma/dx]^{\overline{\rm NLO}}$ has,
by virtue of (\ref{eq:lnmu2}),
no net divergence $1/\epsilon$ and no net explicit dependence on
$\ln\mu$. In appendix \ref{app:NLOsummary}, we implement this
combination (\ref{eq:NLObar})
by grouping all $1/\epsilon$ pieces of our unrenormalized
calculations into the form
\begin {equation}
\sigma_{\rm bare} \biggl(
\frac{1}{\epsilon} + \ln\Bigl(\frac{\pi\mu^2}{\Omega_0 E}\Bigr)
\biggr) .
\end {equation}
Setting $\sigma_{\rm bare}{=}1$ displays unrenormalized formulas for
$[d\Gamma/dx]^{\rm NLO}$. Setting
$\sigma_{\rm bare}{=}0$ instead implements the combination
$[d\Gamma/dx]^{\overline{\rm NLO}}$ of (\ref{eq:NLObar}) once all diagrams
are summed over.
In this way, appendix \ref{app:NLOsummary} simultaneously presents
both bare and renormalized expressions for NLO $g\to gg$.
For later convenience, we find it useful to also define
\begin {equation}
\left[ \frac{d\Gamma}{dx} \right]^{\overline{\rm LO}}_{g\to gg}
\equiv
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO,ren}_{g\to gg}
+ \left[ \frac{d\Gamma}{dx} \right]_{\rm ren\,log}
\label {eq:dGLObar}
\end {equation}
so that we can rewrite (\ref{eq:rateren}) as
\begin {equation}
\left[ \Delta \frac{d\Gamma}{dx} \right]^{\rm LO+NLO}_{g\to gg}
=
\left[ \frac{d\Gamma}{dx} \right]^{\overline{\rm LO}}
+
\left[ \Delta \frac{d\Gamma}{dx} \right]^{\overline{\rm NLO}}_{g\to gg} .
\label {eq:raterenbar}
\end {equation}
This is the meaning behind the notation we used back in
(\ref{eq:dGnet}). The notation is convenient because, for our
final renormalized $g{\to}gg$ results listed in appendix
\ref{app:summary}, the notation distinguishes the
parts $[\Delta\,d\Gamma/dx]^{\overline{\rm NLO}}$ of our results that are expressed
in terms of $y$ integrals,%
\footnote{
Specifically, $[\Delta\,d\Gamma/dx]^{\overline{\rm NLO}}$ is given by
(\ref{eq:dGammaNLObar}) and the formulas following it with
$\sigma_{\rm bare}=0$.
}
like in (\ref{eq:dGnetNLO1}),
from the parts $[d\Gamma/dx]^{\overline{\rm LO}}$ above that are not.
\section{IR divergences in Energy Loss Calculations}
\label {sec:IR}
We now discuss in detail how the IR behavior of
various measures of the development of
in-medium high-energy QCD parton showers depends only on the
combination
\begin {equation}
v(x,y) + \tfrac12 r(x,y)
\approx
-\frac{C_{\rm A}\alpha_{\rm s}}{8\pi}
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO} \frac{\ln y}{y}
\label {eq:VRlimit2}
\end {equation}
of virtual and real diagrams introduced in (\ref{eq:VRlimit}),
for which power-law IR divergences cancel.
In this section, $\approx$ indicates an equality that is valid at
leading-log order.
\subsection {General shower evolution}
We start by looking generally at the evolution of the distribution
of partons in such a shower. This will generalize, to NLO,
similar methods that
have been applied by Blaizot et al.\ at leading order
\cite{Nevolve1,Nevolve2}.%
\footnote{
See also earlier leading-order work by Jeon and Moore \cite{JeonMoore},
which avoided the $\hat q$ approximation and treated
the quark-gluon plasma as weakly coupled.
}
In what follows, let $E_0$ be the energy of the initial parton that
starts the entire shower. We will let $\zeta E_0$ refer to the energy
of some parton in the shower as the shower develops, and we will refer
to the distribution of shower partons in $\zeta$ at time $t$ as
$N(\zeta,E_0,t)$. Formally, the total number of partons remaining in
the shower at time $t$ is then $\int_0^1 d\zeta \> N(\zeta,E_0,t)$, but
this particular integral is IR divergent, not least because some fraction of
the energy of the shower will have come to a stop in the medium
($\zeta{=}0$) and thermalized by time $t$.
However,
one may also
use $N(\zeta,E_0,t)$ to calculate IR-safe characteristics of the
shower, including $N(\zeta,E_0,t)$ itself for fixed $\zeta > 0$.%
\footnote{
See the leading-order analysis of $N(\zeta,E_0,t)$ in
refs.\ \cite{Nevolve1,Nevolve2}. (Be aware that their analytic
results depend on approximating $[d\Gamma/dx]^{\rm LO}$ by something
more tractable.)
For a next-to-leading-order example, see the related
discussion of charge stopping distance and other moments of the
charge stopping distribution for large-$N_{\rm f}$ QED in appendix C of
ref.\ \cite{qedNfstop}.
}
\subsubsection {Basic Evolution Equation}
The basic evolution equation to start with is
(see appendix \ref{app:details} for some more detail)%
\footnote{
This equation is only meant to be valid for particle energies
$\zeta E$ large compared to the temperature $T$ of the plasma.
In the high-energy and infinite-medium limit that we are working
in, the evolution of particles in the shower whose energy has degraded
to $\sim T$ has a negligible (i.e.\ suppressed by a power of $T/E$)
effect on questions about in-medium shower development
and calculations of where the shower deposits its energy into the
plasma. See, for example, the discussions in refs.\ \cite{stop}
and \cite{Nevolve1}. For discussion of some of the theory issues
that would be involved in going beyond this high-energy approximation
for single-splitting processes,
see, as two examples, refs.\ \cite{JeonMoore} and \cite{Ghiglieri}.
}
\begin {equation}
\frac{\partial}{\partial t} N(\zeta,E_0,t)
=
- \Gamma(\zeta E_0) \, N(\zeta,E_0,t)
+ \int_\zeta^1 \frac{dx}{x} \>
\left[ \frac{d\Gamma}{dx} \bigl(x,\tfrac{\zeta E_0}{x}\bigr) \right]_{\rm net}
N\bigl( \tfrac{\zeta}{x}, E_0, t \bigr) ,
\label {eq:Nevolve0}
\end {equation}
where
\begin {equation}
\left[ \frac{d\Gamma}{dx} (x,E) \right]_{\rm net}
\end {equation}
refers to the net rate (\ref{eq:dGnet}) to produce one daughter of energy
$xE$ (plus any other daughters) via single splitting or overlapping
double splitting from a parton of energy $E$.
The total splitting rate $\Gamma$ in the loss term is
\begin {equation}
\Gamma(E)
=
\frac{1}{2!} \int_0^1 dx
\left\{
\left[ \frac{d\Gamma}{dx} \right]^{\overline{\rm LO}}
+
\left[ \Delta \frac{d\Gamma}{dx} \right]^{\overline{\rm NLO}}_{g \to gg}
\right\}
+
\frac{1}{3!} \int_0^1 dx \int_0^{1-x} dy
\left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{g \to ggg}
,
\label {eq:Gtot}
\end {equation}
where the $1/2!$ and $1/3!$ are the final-state identical particle factors for
$g \to gg$ and $g \to ggg$.
The first and second terms in (\ref{eq:Nevolve0}) are respectively
loss and gain terms for $N(\zeta,E_0,t)$.
The gain term corresponds to the rate for any higher-energy particle
in the shower (energy $\zeta E_0/x$)
to split and produce a daughter whose energy is $\zeta E_0$.
To keep formulas simple here and throughout this discussion,
we will not explicitly write the IR cut-off $\delta$ in integration
limits.
By comparing (\ref{eq:Gtot}) to (\ref{eq:dGnet}), note that
\begin {equation}
\Gamma(E) \not=
\int_0^1 dx \> \left[ \frac{d\Gamma}{dx} (x,E) \right]_{\rm net}
\end {equation}
because of the different combinatoric factors involved in how
$[d\Gamma/dx]_{\rm net}$ is defined. This is related to the fact
that (\ref{eq:Nevolve0}) should not conserve the total number of
partons: each $g \to gg$ should add a parton, and each
$g \to ggg$ should add two partons.%
\footnote{
One way to see this clearly is to
over-simplify the problem by {\it pretending} that splitting
rates did not depend on energy $E$, then integrate both
sides of (\ref{eq:Nevolve0}) over $\zeta$, and rewrite
$\int_0^1 d\zeta \int_\zeta^1 dx/x = \int_0^1 dx \int_0^1 d\bar\zeta$
with $\bar\zeta \equiv \zeta/x$. Formally, this would give
$\partial{\cal N}/\partial t =
+(\Gamma_{g\to gg} + 2 \, \Delta\Gamma_{g\to ggg}) {\cal N}$,
where ${\cal N}$ is the total number of partons in the shower and
$\Gamma_{g \to gg} \equiv \Gamma^{\rm LO} + \Delta\Gamma^{\rm NLO}_{g\to gg}$.
From the coefficients $+1$ and $+2$ of $\Gamma_{g\to gg}$ and
$\Delta\Gamma_{g\to ggg}$ in this expression, one can see explicitly the
number of partons added by each type of process.
}
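The change of variables used in the footnote above, $\int_0^1 d\zeta \int_\zeta^1 dx/x = \int_0^1 dx \int_0^1 d\bar\zeta$ with $\bar\zeta \equiv \zeta/x$, can be spot-checked numerically for an arbitrary smooth test function. A quick sketch (the specific test function is purely illustrative):

```python
from scipy.integrate import dblquad
import math

# Arbitrary smooth test function F(x, zbar); the specific choice is
# illustrative only -- the identity holds for any integrable F.
def F(x, zbar):
    return (1 + x) * math.exp(-zbar)

# Left side: int_0^1 dzeta int_zeta^1 dx/x F(x, zeta/x).
lhs, _ = dblquad(lambda x, zeta: F(x, zeta/x) / x, 0, 1,
                 lambda zeta: zeta, lambda zeta: 1)

# Right side: with zbar = zeta/x, the region becomes the unit square.
rhs, _ = dblquad(lambda zbar, x: F(x, zbar), 0, 1,
                 lambda x: 0, lambda x: 1)

print(lhs, rhs)  # both equal (3/2)(1 - 1/e) ~ 0.948
```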
The various pieces that go into the calculation of the right-hand side
of the evolution equation (\ref{eq:Nevolve0}) have various power-law
IR divergences which cancel in the combination of all the terms.
We now focus on identifying those divergences and showing how to
reorganize (\ref{eq:Nevolve0}) into an equivalent
form where power-law IR divergences
are eliminated from the integrals that must be done.
\subsubsection {$x\to 0$ or $1$ divergences at leading order}
To start, let's ignore $\overline{\rm NLO}$ corrections for a moment
and look at the leading-order version of (\ref{eq:Nevolve0}):
\begin {equation}
\frac{\partial}{\partial t} N(\zeta,E_0,t)
\simeq
- \Gamma^{\overline{\rm LO}}(\zeta E_0) \, N(\zeta,E_0,t)
+ \int_\zeta^1 \frac{dx}{x} \>
\left[ \frac{d\Gamma}{dx} \bigl(x,\tfrac{\zeta E_0}{x}\bigr)
\right]^{\overline{\rm LO}}
N\bigl( \tfrac{\zeta}{x}, E_0, t \bigr)
\label {eq:NevolveLO}
\end {equation}
with
\begin {equation}
\Gamma^{\overline{\rm LO}}(E)
=
\frac{1}{2!} \int_0^1 dx
\left[ \frac{d\Gamma}{dx} \right]^{\overline{\rm LO}} .
\label {eq:GLO}
\end {equation}
The leading-order rate $[d\Gamma/dx]^{\rm LO}$ diverges as
\begin {equation}
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO}
\sim
\frac{1}{[x(1{-}x)]^{3/2}}
\qquad
\mbox{as $x\to 0$ or $1$}
\label {eq:dGLOlims}
\end {equation}
[see eq.\ (\ref{eq:LO0}) with $\epsilon{=}0$]. Up to logarithmic factors,
this divergence is the same for $[d\Gamma/dx]^{\overline{\rm LO}}$ (\ref{eq:dGLObar})
as well.
This means that the integral (\ref{eq:GLO}) that gives the total
rate $\Gamma^{\overline{\rm LO}}$ generates power-law IR divergences from both
the $x\to 0$ and $x\to 1$ parts of the integration region.
In contrast, the integral for the gain term in (\ref{eq:NevolveLO})
runs from $\zeta{>}0$ to $1$ and so only generates a
divergence from the $x\to 1$ behavior.
That means that we cannot get rid of the IR divergences simply
by directly combining the integrands. However, if we first use
the identical final-particle symmetry $x \leftrightarrow 1{-}x$ of
$[d\Gamma/dx]^{\overline{\rm LO}}$ to rewrite (\ref{eq:GLO}) as
\begin {equation}
\Gamma^{\overline{\rm LO}}(E)
=
\int_{1/2}^1 dx
\left[ \frac{d\Gamma}{dx} \right]^{\overline{\rm LO}} ,
\end {equation}
then we can combine the loss and gain terms in (\ref{eq:NevolveLO})
into
\begin {multline}
\frac{\partial}{\partial t} N(\zeta,E_0,t)
\simeq
\int_0^1 dx
\biggl\{
-
\left[ \frac{d\Gamma}{dx} \bigl(x,\zeta E_0\bigr) \right]^{\overline{\rm LO}}
N\bigl( \zeta, E_0, t \bigr) \,
\theta(x > \tfrac12 )
\\
+
\left[ \frac{d\Gamma}{dx} \bigl(x,\tfrac{\zeta E_0}{x}\bigr) \right]^{\overline{\rm LO}}
N\bigl( \tfrac{\zeta}{x}, E_0, t \bigr) \,
\frac{\theta(x > \zeta)}{x}
\biggr\} .
\label {eq:NevolveLO2}
\end {multline}
Similar to (\ref{eq:dGnetNLO}),
we have implemented the actual limits of integration here using step
functions $\theta(\cdots)$ so that we may combine the integrands.
Because of the $\theta$ functions, the integrand has no support
for $x \to 0$ and so no divergence associated with $x \to 0$.
Because we have combined the integrands, however, one can see
that the integrand behaves like $1/(1{-}x)^{1/2}$ instead of
$1/(1{-}x)^{3/2}$ (\ref{eq:dGLOlims}) as $x \to 1$ because
of cancellation in that limit between the loss and gain contributions.
So the form (\ref{eq:NevolveLO2}) has the advantage that the integral
is completely convergent, and there are no IR divergences in this
equation for any given $\zeta > 0$.
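To make the endpoint cancellation concrete, here is a small numerical sketch (a toy model, not our actual rate): the rate is replaced by its singular endpoint factor $[x(1{-}x)]^{-3/2}$ alone, and $N$ by an arbitrary smooth function. The combined loss-plus-gain integrand of (\ref{eq:NevolveLO2}), multiplied by $(1{-}x)^{1/2}$, then tends to a finite limit as $x \to 1$:

```python
import math

zeta = 0.3  # arbitrary fixed momentum fraction (illustrative)

def dG(x):
    # Toy stand-in for [dGamma/dx]^LO: just its singular factor.
    return (x*(1 - x))**(-1.5)

def N(z):
    # Arbitrary smooth toy distribution (illustrative).
    return math.exp(-z)

def combined(x):
    # Loss + gain integrand for 1/2 < x < 1, where both theta
    # functions in the combined evolution equation are satisfied.
    return dG(x) * (N(zeta/x)/x - N(zeta))

# The gain term alone diverges like (1-x)^(-3/2), but the combined
# integrand times (1-x)^(1/2) approaches a finite limit as x -> 1:
s1 = combined(1 - 1e-4) * math.sqrt(1e-4)
s2 = combined(1 - 1e-6) * math.sqrt(1e-6)
print(s1, s2)  # nearly equal, confirming ~ (1-x)^(-1/2) behavior
```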
\subsubsection {$y\to 0$ divergences at NLO}
As discussed in section \ref{sec:introIR}, $g{\to}ggg$ and NLO $g{\to}gg$
processes generate power-law IR divergences as the energy of the
softest real or virtual gluon (whose longitudinal momentum fraction we
often arrange to correspond to the letter $y$) goes to zero.
We have already discussed how
those power-law IR divergences cancel in the combination
$[\Delta\,d\Gamma/dx]_{\rm net}^{\overline{\rm NLO}}$ (\ref{eq:dGnetNLO}),
which is the combination
that appears in the NLO contribution to the gain term in
the evolution equation (\ref{eq:Nevolve0}).
But the loss term involves a different combination $\Gamma$
(\ref{eq:Gtot})
of real and virtual diagrams, and so we must
check that a similar cancellation occurs there.
Recalling that our NLO $g \to gg$ diagrams consist of our Class I diagrams
(fig.\ \ref{fig:virtI}), their $x \to 1{-}x$ cousins, and
our class II diagrams (fig.\ \ref{fig:virtII}), the NLO contribution
to the total rate
(\ref{eq:Gtot}) is, in more detail,
\begin {multline}
\Delta\Gamma^{\overline{\rm NLO}}(E)
=
\frac{1}{2!} \int_0^1 dx
\biggl\{
\biggl(
\int_0^{1-x} dy \, \left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{\rm virt\,I}
\biggr)
+ (x \to 1{-}x)
+ \int_0^1 dy \, \left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{\rm virt\,II}
\biggr\}
\\
+
\frac1{3!}
\int_0^1 dx
\int_0^{1{-}x} dy \>
\left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{g\to ggg} .
\label {eq:Gtot2}
\end {multline}
[Compare and contrast to (\ref{eq:dGnetNLOdef}).]
Fig.\ \ref{fig:Gxylims} shows the various integration regions corresponding
to the different terms above and the limits of integration
producing IR divergences (which is all of them).
\begin {figure}[t]
\begin {center}
\includegraphics[scale=0.7]{Gxylims.eps}
\caption{
\label{fig:Gxylims}
The integration regions (shaded) for the various terms in (\ref{eq:Gtot2})
corresponding to
(a) Class I virtual diagrams,
(b) their $x \to 1{-}x$ cousins,
(c) Class II virtual diagrams, and
(d) $g \to ggg$ diagrams.
The colored lines correspond to limits of the integration regions,
which generate IR divergences.
See the caption of fig.\ \ref{fig:Gxylims2} for the distinction
between the red vs.\ blue lines here.
}
\end {center}
\end {figure}
We will now align the location of the IR divergences so that we can
eventually combine the different integrals and eliminate
power-law divergences.
First, note that, under the change of integration variable $x \to 1{-}x$,
the ``$(x \to 1{-}x)$'' term in (\ref{eq:Gtot2}) gives the same
result as the ``${\rm virt\,I}$'' term. Second, simultaneously use
the $x \to 1{-}x$ and $y \to 1{-}y$ symmetries of Class II diagrams
to divide the integration region of fig.\ \ref{fig:Gxylims}c in half
diagonally, giving
\begin {multline}
\Delta\Gamma^{\overline{\rm NLO}}(E)
=
\int_0^1 dx \int_0^{1-x} dy \>
\biggl\{
\left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{\rm virt\,I}
+ \left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{\rm virt\,II}
\biggr\}
\\
+
\frac1{3!}
\int_0^1 dx
\int_0^{1{-}x} dy \>
\left[ \Delta \frac{d\Gamma}{dx\,dy} \right]_{g\to ggg} .
\label {eq:Gtot3}
\end {multline}
For the NLO $g{\to}gg$ contributions, we now divide the integration
region into (i) $0 < y < (1{-}x)/2$ and (ii) $(1{-}x)/2 < y < 1{-}x$
and change integration variables $y \to z = 1{-}x{-}y$ in the latter,
similar to the manipulations used earlier to obtain (\ref{eq:dGnetNLO}).
For the $g{\to}ggg$ contributions, note that permutation symmetry for
the three final daughters $(x,y,z)$ implies the integral over each of
the six regions shown in fig.\ \ref{fig:xyzRegions} is the same.
We can therefore replace the integral over all six regions by three
times the integral over the bottom two, depicted by the shaded
region of fig.\ \ref{fig:Gxylims2}d. [We will see later the
advantage of integrating over these two regions instead of reducing
the integral to just one region.]
Eq.\ (\ref{eq:Gtot3}) can
then be written as
\begin {multline}
\Delta\Gamma^{\overline{\rm NLO}}(E)
=
\int_0^1 dx \int_0^{1/2} dy
\biggl\{
v(x,y) \, \theta(y<\tfrac{1-x}{2})
+ \tfrac12 r(x,y) \, \theta(y<x) \, \theta(y<\tfrac{1-x}{2})
\biggr\} ,
\end {multline}
with $v$ and $r$ defined as in (\ref{eq:VRdef}).
We will find it convenient to change
integration variable
$x \to 1{-}x$ in the first term and rewrite the equation as
\begin {multline}
\Delta\Gamma^{\overline{\rm NLO}}(E)
=
\int_0^1 dx \int_0^{1/2} dy
\biggl\{
v(1{-}x,y) \, \theta(y<\tfrac{x}{2})
+ \tfrac12 r(x,y) \, \theta(y<x) \, \theta(y<\tfrac{1-x}{2})
\biggr\} .
\label {eq:Gtot4}
\end {multline}
The integration regions corresponding to the two terms in
(\ref{eq:Gtot4}) are shown in fig.\ \ref{fig:Gxylims2}, where
the only IR divergences correspond to $y{\to}0$ or $x{\to}1$.
\begin {figure}[t]
\begin {center}
\includegraphics[scale=1.5]{xyzRegions.eps}
\caption{
\label{fig:xyzRegions}
Equivalent integration regions for $g \to ggg$ corresponding
to permutations of the daughters $(x,y,z)$.
The common vertex of these regions is at
$(x,y,z) = (\tfrac13,\tfrac13,\tfrac13)$.
}
\end {center}
\end {figure}
\begin {figure}[t]
\begin {center}
\includegraphics[scale=0.7]{Gxylims2.eps}
\caption{
\label{fig:Gxylims2}
The integration regions in (\ref{eq:Gtot4}).
The labels (a,b,c) and (d) indicate the origin
of these terms in fig.\ \ref{fig:Gxylims}.
Colored lines again denote limits of integration associated
with IR divergences. Power-law divergences associated with
the red lines above cancel each other in (\ref{eq:Gtot4}).
Blue line divergences only cancel when loss and gain
terms are combined in (\ref{eq:SevolveNLO}). The origins of the
red vs.\ blue divergences here are depicted by the red vs.
blue lines in fig.\ \ref{fig:Gxylims}.
}
\end {center}
\end {figure}
The rationale for the last change was to convert
$x{\to}0$ divergences into $x{\to}1$ divergences (the blue line
in fig.\ \ref{fig:Gxylims2}), which we will later see then
cancel similar $x{\to}1$ divergences in the gain term of the
evolution equation. For the moment, however, we focus
only on the $y{\to}0$ divergences of (\ref{eq:Gtot4}), depicted
by the red lines in fig.\ \ref{fig:Gxylims2}.
In the limit $y\to 0$ (for fixed $x$), the integrand in
(\ref{eq:Gtot4}) approaches
\begin {equation}
v(1{-}x,y) + \tfrac12 r(x,y)
= v(1{-}x,y) + \tfrac12 r(z,y)
\simeq v(1{-}x,y) + \tfrac12 r(1{-}x,y) ,
\label {eq:VRlimitG}
\end {equation}
where the first equality follows because $g \to ggg$ is symmetric under
permutations of $(x,y,z)$.
The right-hand side of (\ref{eq:VRlimitG})
is the same combination as (\ref{eq:VRlimit}) but with
$x \to 1{-}x$. In fig.\ \ref{fig:dbllogCheck}, we verified
numerically that $y^{-3/2}$ divergences (which generate power-law IR divergences
when integrated) indeed cancel in this combination, leaving behind
the double-log divergence shown in (\ref{eq:VRlimit}) [which happens
to be symmetric under $x \to 1{-}x$]. Interested readers can find
non-numerical information on how the $y^{-3/2}$ divergences cancel in
appendix \ref{app:IRcancel}.
One can now see why we did not replace the integral of $r(x,y)$ over
the two sub-regions shown in fig.\ \ref{fig:Gxylims2} by, for example,
twice the integral of just the left-hand sub-region $(x < y < z)$.
If we had done the latter, there would be no $r$ term for
$x > 1/2$ and so nothing would cancel the $y^{-3/2}$ divergence
of $v(1{-}x,y)$ for $x > 1/2$. We had to be
careful how we organized things to achieve our goal that the $y$ integral in
(\ref{eq:Gtot4}) not generate a power-law IR divergence for
any value of $x$.
Next, we turn to the final goal of this section:
showing that the integrals
in the evolution equation for
$N(\zeta,E_0,t)$ can be arranged to directly avoid
power-law IR divergences for the entire integration over {\it both}\/
$x$ and $y$.
\subsubsection {$x\to 0$ or $1$ divergences at NLO}
\label {sec:xNLOdivs}
By using (\ref{eq:dGnetNLO}), (\ref{eq:NevolveLO2}), and (\ref{eq:Gtot4})
in the shower evolution equation (\ref{eq:Nevolve0}), we can now
combine integrals to avoid all power-law divergences:
\begin {subequations}
\label {eq:NevolveRV}
\begin {equation}
\frac{\partial}{\partial t} N(\zeta,E_0,t)
=
{\cal S}^{\overline{\rm LO}} + {\cal S}^{\overline{\rm NLO}}
\end {equation}
where
\begin {multline}
{\cal S}^{\overline{\rm LO}}
=
\int_0^1 dx
\biggl\{
-
\left[ \frac{d\Gamma}{dx} \bigl(x,\zeta E_0\bigr) \right]^{\overline{\rm LO}}
N\bigl( \zeta, E_0, t \bigr) \,
\theta(x > \tfrac12 )
\\
+
\left[ \frac{d\Gamma}{dx} \bigl(x,\tfrac{\zeta E_0}{x}\bigr) \right]^{\overline{\rm LO}}
N\bigl( \tfrac{\zeta}{x}, E_0, t \bigr) \,
\frac{\theta(x > \zeta)}{x}
\biggr\}
\end {multline}
and
\begin {multline}
{\cal S}^{\overline{\rm NLO}}
=
\int_0^1 dx \int_0^{1/2} dy
\biggl\{
- \Bigl[
v(1{-}x,y) \, \theta(y<\tfrac{x}{2})
+ \tfrac12 r(x,y) \, \theta(y<x) \, \theta(y<\tfrac{1-x}{2})
\Bigr]
N\bigl( \zeta, E_0, t \bigr)
\\
+ \Bigl[
v(x,y) \, \theta(y<\tfrac{1-x}{2})
+ v(1{-}x,y) \, \theta(y<\tfrac{x}{2})
+ r(x,y) \, \theta(y<\tfrac{1-x}{2})
\Bigr]
N\bigl( \tfrac{\zeta}{x}, E_0, t \bigr) \,
\frac{\theta(x > \zeta)}{x}
\biggr\} .
\label {eq:SevolveNLO}
\end {multline}
\end {subequations}
We've previously seen that the LO piece ${\cal S}^{\overline{\rm LO}}$ is free of
divergences. And we've seen that the loss and gain terms of the
NLO piece ${\cal S}^{\overline{\rm NLO}}$ are each free of power-law divergences
associated with $y\to 0$ (with fixed $x$). Now consider divergences
of ${\cal S}^{\overline{\rm NLO}}$ associated with the behavior of $x$.
The integrand in (\ref{eq:SevolveNLO}) has no
support as $x \to 0$ (fixed $y$). And for $x\to 1$ (fixed $y$),
there is a cancellation between the loss and gain terms.
So there is no divergence of ${\cal S}^{\overline{\rm NLO}}$
associated with $x \to 0$ or $x \to 1$.%
\footnote{
This statement relies on the observation that the various
NLO $g \to gg$ differential rates making up $v(x,y)$ diverge
no faster than $s^{-3/2}$ as some parton with longitudinal momentum
fraction $s$ becomes soft, e.g.\ $(1-x)^{-3/2}$ as $x \to 1$.
The cancellation between the gain and loss terms
in (\ref{eq:SevolveNLO}) reduces
that by one power, to $(1-x)^{-1/2}$, which is an integrable
singularity and so generates no divergence for (\ref{eq:SevolveNLO}).
}
In summary, the only IR divergences coming from ${\cal S}^{\overline{\rm NLO}}$
are the uncanceled double-log divergences associated with
$y \to 0$.
\subsection {Absorbing double logs into $\hat q$
and comparison with known results}
Refs.\ \cite{Blaizot,Iancu,Wu} have previously performed leading-log
calculations of overlap corrections and shown that the double-log
IR divergences can be absorbed into the medium parameter $\hat q$.
We will now verify that the double-log piece of our results produces
the same modification \cite{Wu0} of $\hat q$.
\subsubsection {Double-log correction for $[d\Gamma/dx]_{\rm net}$}
\label {sec:qhateff}
Let's start with the relatively simple situation of the
$[d\Gamma/dx]_{\rm net}$ introduced in section \ref{sec:introIR}.
From the discussion of (\ref{eq:dGnetNLO}) through (\ref{eq:VRlimit}),
the double-log divergence of the NLO contribution to
$[d\Gamma/dx]_{\rm net}$ is given by%
\footnote{
Note that (\ref{eq:renlog}) has no IR double log contribution, so the
distinction between (LO,NLO) and $(\overline{\rm LO},\overline{\rm NLO})$ can be ignored
for this discussion.
}
\begin {align}
\left[ \frac{d\Gamma}{dx} \right]_{\rm net}^{\rm NLO}
\approx
- \frac{C_{\rm A}\alpha_{\rm s}}{4\pi} \left[ \frac{d\Gamma}{dx} \right]^{\rm LO}
\int_\delta^{1/2} dy \> \frac{\ln y}{y}
\approx
\frac{C_{\rm A}\alpha_{\rm s}}{8\pi} \ln^2\delta
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO}
,
\label {eq:dGnetNLODblLog}
\end {align}
where we have re-introduced our sharp IR cut-off $\delta$.
Combining (\ref{eq:dGnetNLODblLog}) with
$[d\Gamma/dx]_{\rm net} = [d\Gamma/dx]^{\rm LO} + [d\Gamma/dx]_{\rm net}^{\rm NLO}$
gives
\begin {equation}
\left[ \frac{d\Gamma}{dx} \right]_{\rm net} \simeq
\left[
1 + \frac{C_{\rm A}\alpha_{\rm s}}{8\pi} \ln^2\delta
\right]
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO}
.
\label {eq:dGLOeff}
\end {equation}
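The elementary $y$ integral above is easy to check: since $\int_\delta^{1/2} dy \, \ln y/y = \tfrac12 \ln^2\tfrac12 - \tfrac12\ln^2\delta \approx -\tfrac12 \ln^2\delta$ for small $\delta$, the prefactor $-C_{\rm A}\alpha_{\rm s}/4\pi$ in (\ref{eq:dGnetNLODblLog}) indeed yields the $+C_{\rm A}\alpha_{\rm s}\ln^2\delta/8\pi$ coefficient of (\ref{eq:dGLOeff}). A minimal numeric confirmation (the value of $\delta$ is illustrative):

```python
from scipy.integrate import quad
import math

delta = 1e-4  # illustrative sharp IR cut-off

# int_delta^{1/2} dy ln(y)/y, numerically ...
numeric, _ = quad(lambda y: math.log(y)/y, delta, 0.5)

# ... versus the closed form (1/2)[ln^2(1/2) - ln^2(delta)]:
closed = 0.5*(math.log(0.5)**2 - math.log(delta)**2)

print(numeric, closed)  # agree; ~ -(1/2) ln^2(delta) for small delta
```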
Since $[d\Gamma/dx]^{\rm LO} \propto \sqrt{\hat q/E}$
[see (\ref{eq:OmMdefs}) and (\ref{eq:LO0})],
the double-log correction above can be
absorbed at this order by replacing $\hat q$ by
\begin {equation}
\hat q_{\rm eff} =
\left[
1 + \frac{C_{\rm A}\alpha_{\rm s}}{4\pi} \ln^2\delta
\right]
\hat q .
\label {eq:qhateffdelta}
\end {equation}
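The relative factor of 2 between the $C_{\rm A}\alpha_{\rm s}\ln^2\delta/8\pi$ correction to the rate in (\ref{eq:dGLOeff}) and the $C_{\rm A}\alpha_{\rm s}\ln^2\delta/4\pi$ correction to $\hat q$ in (\ref{eq:qhateffdelta}) is just the statement that the rate scales like $\sqrt{\hat q}$: multiplying the rate by $(1{+}c)$ is equivalent, at first order, to multiplying $\hat q$ by $(1{+}2c)$. A one-line symbolic check (sketch):

```python
import sympy as sp

c = sp.symbols('c')

# sqrt(1 + 2c) = 1 + c + O(c^2): rescaling qhat by (1 + 2c)
# rescales a rate proportional to sqrt(qhat) by (1 + c).
expansion = sp.series(sp.sqrt(1 + 2*c), c, 0, 2).removeO()
ok = sp.simplify(expansion - (1 + c)) == 0
print(ok)  # True
```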
The corresponding
leading-log modification of $\hat q$ from earlier literature
\cite{Wu0,Blaizot,Iancu,Wu} is usually
expressed in the final form
\begin {equation}
\hat q_{\rm eff}(L) =
\left[
1 + \frac{C_{\rm A}\alpha_{\rm s}}{2\pi} \ln^2\Bigl( \frac{L}{\tau_0} \Bigr)
\right]
\hat q ,
\label {eq:qhateffStd}
\end {equation}
where $L$ is the thickness of the medium and $\tau_0$ is taken to be
of order the mean free path for elastic scattering in the medium.
In order to compare (\ref{eq:qhateffdelta}) and (\ref{eq:qhateffStd}),
we need to translate.
First, for simplicity, we have been working in the infinite-medium
approximation, which assumes that
the size of the medium is large compared to all relevant formation lengths.
Eq.\ (\ref{eq:qhateffStd}) instead focuses on the
phenomenologically often-relevant case where the width $L$ of the medium
is $\lesssim$ the formation time $t_{\rm form}(x)$ associated with the harder
splitting $x$. One may convert at leading-log level
by considering the boundary case where
\begin {equation}
L \sim t_{\rm form}(x) .
\label {eq:Lsim}
\end {equation}
Parametric substitutions like this inside the arguments of logarithms
are adequate for a leading-log analysis.
What remains is to translate between the use of two different types
of cut-offs
in (\ref{eq:qhateffdelta}) and (\ref{eq:qhateffStd}): $\delta$ and
$\tau_0$. To understand the effect of the cut-offs, it is useful
to review where double logs come from in the $\hat q$ approximation,
at first ignoring the cut-offs altogether. Parametrically, the IR
double log arises from an integral of the form
\begin {equation}
\iint \frac{dy}{y} \> \frac{d(\Delta t)}{\Delta t}
\label {eq:Dlog0}
\end {equation}
over the integration region shown in fig.\ \ref{fig:region}a, given by%
\footnote{
Using (\ref{eq:Lsim}) and $t_{\rm form}(\xi) \sim \sqrt{\xi E/\hat q}$
for small $\xi$,
(\ref{eq:tregion}) can be put in the form
$y \sqrt{E/x\hat q} \ll \Delta t \ll \sqrt{y E/\hat q}$ presented
in eq.\ (9.3) of ref.\ \cite{2brem} for
$y \ll x \le z$. The equivalence, in turn, with notation used in some
of the original work on double logs in the NLO LPM effect is
discussed in appendix F.1 of ref.\ \cite{2brem}.
}
\begin {subequations}
\label {eq:region}
\begin {equation}
\frac{yE}{\hat q L}
\ll \Delta t
\ll t_{\rm form}(y) \,.
\label {eq:tregion}
\end {equation}
Using
$t_{\rm form}(y) \sim \sqrt{yE/\hat q}$ for small $y$,
these inequalities can
be equivalently expressed as a range on $y$:
\begin {equation}
\frac{\hat q (\Delta t)^2}{E}
\ll y
\ll \frac{\hat q L\,\Delta t}{E}
\,.
\label {eq:yregion}
\end {equation}
\end {subequations}
\begin {figure}[t]
\begin {center}
\begin{picture}(450,150)(0,0)
\put(0,25){\includegraphics[scale=0.5]{region.eps}}
\put(0,115){$\scriptstyle{\Delta t\,\sim\,t_{\rm form}(x)}$}
\put(83,46){\rotatebox{-90}{$\scriptstyle{y\,\sim\,x}$}}
\put(50,140){$\ln y$}
\put(113,90){\rotatebox{-90}{$\ln\Delta t$}}
\put(50,0){(a)}
\put(163,115){$\scriptstyle{\Delta t\,\sim\,t_{\rm form}(x)}$}
\put(246,46){\rotatebox{-90}{$\scriptstyle{y\,\sim\,x}$}}
\put(213,140){$\ln y$}
\put(276,90){\rotatebox{-90}{$\ln\Delta t$}}
\put(213,0){(b)}
\put(178,80){$\color{blue}\scriptstyle{\Delta t\,\sim\,\tau_0}$}
\put(326,115){$\scriptstyle{\Delta t\,\sim\,t_{\rm form}(x)}$}
\put(409,46){\rotatebox{-90}{$\scriptstyle{y\,\sim\,x}$}}
\put(376,140){$\ln y$}
\put(439,90){\rotatebox{-90}{$\ln\Delta t$}}
\put(376,0){(c)}
\put(365,46){\rotatebox{-90}{$\color{red}\scriptstyle{y\,\sim\,\delta}$}}
\end{picture}
\caption{
\label {fig:region}
The region of integration (\ref{eq:region})
giving rise to a double log in the
$\hat q$ approximation with (a) no cut off, (b) the cut off
$\Delta t \sim \tau_0$ used in earlier literature, and (c)
the IR regulator $y \sim \delta$ used in our calculations.
See text for discussion.
}
\end {center}
\end {figure}
Now consider two different ways to evaluate the double logarithm
(\ref{eq:Dlog0}). The first method is to add a lower cut-off
$\tau_0$ on $\Delta t$, as in fig.\ \ref{fig:region}b.
Using (\ref{eq:yregion}), that's
\begin {equation}
\approx
\int_{\tau_0}^L \frac{d(\Delta t)}{\Delta t}
\int_{\hat q(\Delta t)^2/E}^{\hat q L\,\Delta t/E} \frac{dy}{y}
=
\int_{\tau_0}^L \frac{d(\Delta t)}{\Delta t} \,
\ln\Bigl( \frac{L}{\Delta t} \Bigr)
=
\tfrac12 \ln^2\Bigl( \frac{L}{\tau_0} \Bigr) .
\label {eq:Dlogtau}
\end {equation}
Alternatively, adding a lower cut-off $\delta$ on $y$ as in
fig.\ \ref{fig:region}c, using (\ref{eq:Lsim}), and assuming
$x \le \frac12$ so that parametrically
$t_{\rm form}(x) \sim \sqrt{x E/\hat q}$,
the double log (\ref{eq:Dlog0}) is regulated to
\begin {equation}
\approx
\int_\delta^x \frac{dy}{y}
\int_{y E/\hat q L}^{t_{\rm form}(y)} \frac{d(\Delta t)}{\Delta t}
=
\int_\delta^x \frac{dy}{y}
\ln \Bigl( \frac{\hat q L \, t_{\rm form}(y)}{y E} \Bigr)
\approx
\int_\delta^x \frac{dy}{y}
\ln \Bigl( \sqrt{\frac{x}{y}} \Bigr)
=
\tfrac14 \ln^2 \Bigl( \frac{\delta}{x} \Bigr) .
\label {eq:Dlogdelta}
\end {equation}
When we extract just the double log dependence $\ln^2\delta$
on the parameter $\delta$, there is no difference (for fixed $x$) at leading-log
order between $\ln^2( \delta/x )$ and $\ln^2\delta$.
At that level, comparison of (\ref{eq:Dlogtau}) and (\ref{eq:Dlogdelta})
gives the leading-log translation
\begin {equation}
\ln^2\Bigl( \frac{L}{\tau_0} \Bigr)
\longrightarrow
\tfrac12 \ln^2\delta
\label {eq:DlogTranslate}
\end {equation}
between IR-regularization with $\tau_0$ and $\delta$.
Applied to the standard double log result (\ref{eq:qhateffStd}),
this translation exactly reproduces the double log behavior
(\ref{eq:qhateffdelta}) of our own results.
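Both regulated versions of the double-log integral are elementary, and (\ref{eq:Dlogtau}) and (\ref{eq:Dlogdelta}) can be confirmed numerically. The values of $L/\tau_0$ and $x/\delta$ below are illustrative:

```python
from scipy.integrate import quad
import math

L, tau0 = 1.0, 1e-2    # illustrative medium size and Delta-t cut-off
x, delta = 0.5, 1e-3   # illustrative momentum fraction and y cut-off

# Regulating with tau0 < Delta t < L:
I_tau, _ = quad(lambda t: math.log(L/t)/t, tau0, L)

# Regulating with delta < y < x [the log is ln sqrt(x/y)]:
I_delta, _ = quad(lambda y: 0.5*math.log(x/y)/y, delta, x)

print(I_tau, 0.5*math.log(L/tau0)**2)      # ~10.60 each
print(I_delta, 0.25*math.log(x/delta)**2)  # ~9.66 each
```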
We will return to the $x$ dependence of (\ref{eq:Dlogdelta})
when we later examine sub-leading single-log corrections in section
\ref{sec:SingleLog}.
Our $\delta$ is simply a formal IR regulator.
In contrast, there is a plausible physical reason for using
the elastic mean free path $\tau_0$ as an IR regulator at the
double log level: The $\hat q$ approximation used throughout our
discussion and earlier literature
is a multiple-scattering approximation that requires
long time periods compared to the mean free time between collisions.
However, beyond leading-log order, the use of a
$\tau_0$ cut-off would be problematic for full NLO calculations.
In our calculations, a $\tau_0$ cut-off would interfere with
the correct UV-renormalization of $\alpha_{\rm s}$,
which comes from $\Delta t \to 0$
(and small enough time scales that even $\hat q$-approximation propagators
faithfully reproduce vacuum propagators). So in this paper
we have just chosen the formal IR regulator, $\delta$, that seemed most
convenient for our calculations.
In order to use IR-regulated results for NLO splitting rates, one
must either compute quantities that are IR-safe in the $\hat q$
approximation or else make an appropriate matching calculation for
soft emission that takes into account how the QCD LPM effect turns off
for formation lengths $\lesssim \tau_0$.
\subsubsection {Physics scales:
What if you wanted to take $\delta$ more seriously?}
\label {sec:scales}
Though we are simply taking $\delta$ as a formal IR cut-off for
calculations involving the $\hat q$ approximation, we should mention
what the physics scales are where our $\hat q$-based analysis would
break down if one used our results for calculations that were
sensitive to the value of $\delta$. The situation is complicated
because there are potentially two scales to consider,
indicated in
fig.\ \ref{fig:region2}. We have given parametric formulas
for those scales for the case of a weakly-coupled quark-gluon
plasma. One may translate to a strongly-coupled
quark-gluon plasma, in both the figure and the discussion below,
simply by erasing the factors of $g$.
\begin {figure}[t]
\begin {center}
\begin{picture}(180,200)(0,0)
\put(60,75){\includegraphics[scale=0.5]{region2.eps}}
\put(0,137){$\color{blue}{\scriptstyle{
\Delta t\,\sim\,\tau_0 \sim 1/g^2T
}}$}
\put(75,75){\rotatebox{-90}{$\color{red}{
\scriptstyle{yE\,\sim\,\hat q\tau_0^2\,\sim\,T}
}$}}
\put(111,75){\rotatebox{-90}{$\color{red}{
\scriptstyle{yE\,\sim\,\hat q L \tau_0^{}\,\sim\,\sqrt{x E T}}
}$}}
\put(110,190){$\ln y$}
\put(173,140){\rotatebox{-90}{$\ln\Delta t$}}
\end{picture}
\caption{
\label {fig:region2}
Parametric scales associated with various features of
fig.\ \ref{fig:region}b. The expressions in terms of
$\tau_0$, $\hat q$ and $L$ match those
of the original work \cite{Wu0} on double log corrections
to $\hat q$.
}
\end {center}
\end {figure}
Parametrically, the mean free time between (small-angle)
elastic collisions with the medium is $\tau_0 \sim 1/g^2 T$,
and $\hat q$ is $\sim g^4 T^3$. Using the
limits (\ref{eq:yregion}) on $y$, as well as (\ref{eq:Lsim})
and $t_{\rm form}(x) \sim \sqrt{xE/\hat q}$,
one then finds
for $\Delta t \sim \tau_0$ the corresponding
soft gluon energies $yE$ indicated in the figure.
Our formalism breaks down for $yE$ smaller
than the lower limit $yE \sim T$ because gluons of energy $T$
cannot be treated as high-energy compared to the plasma.
Note that if one correspondingly chose $\delta \sim T/E$
without also constraining $\Delta t$, then the resulting double
log region would be larger than has been conventionally assumed
in the literature. In contrast, if one chose
$\delta \sim \sqrt{xT/E}$, corresponding to the other red line
in fig.\ \ref{fig:region2}, then one would guarantee that
$\Delta t \gtrsim \tau_0$ but the resulting double log region
would be smaller than the one used in the literature.
There is no choice of $\delta$ alone that corresponds to the
traditional shaded region of fig.\ \ref{fig:region2}.
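For a rough feel of these scales (purely illustrative; the numbers for $g$, $T$, $x$ and $E$ below are assumptions, not values used in this paper), one can evaluate the parametric formulas directly:

```python
import math

# Illustrative (assumed) inputs for a weakly coupled quark-gluon plasma:
g, T = 2.0, 0.4       # coupling and temperature (temperature in GeV, say)
x, E = 0.5, 100.0     # daughter momentum fraction and parton energy (GeV)

tau0 = 1.0 / (g**2 * T)          # mean free time between elastic collisions
qhat = g**4 * T**3               # jet-quenching parameter

# Soft-gluon energy scales bounding the double-log region (up to O(1) factors):
yE_lower = qhat * tau0**2        # ~ T: gluons softer than this are "thermal"
yE_upper = math.sqrt(x * E * T)  # ~ sqrt(xET), from t_form(x) ~ sqrt(xE/qhat)

# With these parametric formulas, qhat * tau0^2 equals T identically:
assert abs(yE_lower - T) < 1e-12
print(tau0, qhat, yE_lower, yE_upper)
```

Since these are parametric estimates, only the orders of magnitude of the outputs are meaningful.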
\subsubsection {Double-log correction for shower evolution equation}
The gain term of the shower evolution equation (\ref{eq:Nevolve0})
depends only on the combination $[d\Gamma/dx]_{\rm net}$ of rates,
and so the same redefinition (\ref{eq:qhateffdelta}) will absorb
the double logarithmic divergence. One expects that this must
also work for the loss term in (\ref{eq:Nevolve0}), which depends
on the combination $\Gamma$, but we should make sure. Since we found that
only $y \to 0$ ultimately contributes to the double logarithm
in our later version (\ref{eq:NevolveRV}) of the evolution equation,
we can focus on the $y{\to}0$ behavior of the NLO loss term
for fixed $x$, which corresponds to the $y{\to}0$ behavior of the
integrand of (\ref{eq:Gtot4}) for $\Delta\Gamma^{\overline{\rm NLO}}$.
Using (\ref{eq:VRlimitG}) and
(\ref{eq:VRlimit}), the
double log generated by the $y$ integration in (\ref{eq:Gtot4})
is
\begin {equation}
\Delta\Gamma^{\rm NLO}
\approx
- \frac{C_{\rm A}\alpha_{\rm s}}{8\pi}
\int_0^1 dx \left[ \frac{d\Gamma}{dx} \right]^{\rm LO}
\int_\delta^{1/2} dy \> \frac{\ln y}{y}
\approx
\frac{C_{\rm A}\alpha_{\rm s}}{16\pi}
\int_0^1 dx \left[ \frac{d\Gamma}{dx} \right]^{\rm LO}
\ln^2\delta .
\end {equation}
When combined with the leading-order rate $\Gamma^{\rm LO}$ given
by (\ref{eq:GLO}), we have
\begin {equation}
\Gamma
\approx
\frac{1}{2!}
\int_0^1 dx \>
\left[
1 + \frac{C_{\rm A}\alpha_{\rm s}}{8\pi} \ln^2\delta
\right]
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO} ,
\end {equation}
which indeed involves the same correction to $[d\Gamma/dx]^{\rm LO}$,
and so to $\hat q$, as (\ref{eq:dGLOeff}).
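As an independent numerical sanity check of the $y$ integration step (not part of the calculation itself), one can integrate $\ln y/y$ from $\delta$ to $1/2$ and compare with the exact antiderivative $\tfrac12 \ln^2 y$, which reduces to $-\tfrac12\ln^2\delta$ up to a $\delta$-independent constant:

```python
import math

def dbl_log_integral(delta, n=100000):
    """Integrate ln(y)/y from delta to 1/2 by substituting u = ln y,
    which turns the integrand into u du (midpoint rule, exact for linear)."""
    a, b = math.log(delta), math.log(0.5)
    h = (b - a) / n
    return sum((a + (k + 0.5) * h) * h for k in range(n))

delta = 1e-6
numeric = dbl_log_integral(delta)
exact = 0.5 * (math.log(0.5) ** 2 - math.log(delta) ** 2)
leading = -0.5 * math.log(delta) ** 2   # the -(1/2) ln^2(delta) double log
print(numeric, exact, leading)
```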
\subsection {Why not talk about $dE/dL$?}
In the literature, it is common to discuss energy loss per unit
length ($dE/dL$) for a high-energy particle. This makes sense
only if one can unambiguously identify the original particle after
a process that has degraded its energy.
For many applications of the LPM effect, the energy loss occurs by
radiation that is soft compared to the initial particle energy $E$,
and so one can identify the particle afterwards as the only one that
still has very high energy. In this paper, however, we have been
focused on the case of a very thick medium (thick compared to
formation lengths). In that case, hard bremsstrahlung is an
important aspect of energy loss. If the two daughters of a splitting
have comparable energies, it becomes more difficult to say which
is the successor of the original. For a double-splitting process
beginning with a quark, one can unambiguously (for large $N_{\rm c}$) choose
to follow the original quark. But, for processes that begin with
$g{\to}gg$, the distinction is less clear.
One possibility might be to formally define $dE/dL$ for
$g{\to}gg$ processes by always following after each splitting
the daughter gluon that has the highest energy of the two daughters.
Unfortunately, this procedure is ill-defined when analyzing
the effect of overlapping formation times on successive splittings.
Consider the interference shown in fig.\ \ref{fig:ambiguityGlue}
of two different amplitudes for double splitting $g \to gg \to ggg$.
For each amplitude, the red gluon line shows which gluon we would
follow by choosing the highest-energy daughter of each individual
$g{\to}gg$ splitting. The two amplitudes do not agree on which
of the final three gluons is the successor of the original gluon.
That's not a problem if the individual splittings are well
enough separated
that the interference can be ignored, i.e.\ if formation lengths
for the individual splittings do not overlap. But since we are
interested specifically in calculating such interference,
we have no natural way of defining which gluon to follow.
This is why we have avoided $dE/dL$ and
focused on more general measures of shower evolution.
\begin {figure}[t]
\begin {center}
\includegraphics[scale=0.7]{ambiguityGlue.eps}
\caption{
\label{fig:ambiguityGlue}
An example of an interference between two different amplitudes
for double splitting $g \to gg \to ggg$. The numbers show the
energy fractions of gluons relative to the first gluon
that initiated the double-splitting process. The red follows
the highest-energy daughter of each individual $g{\to}gg$
process.
}
\end {center}
\end {figure}
The above argument generalizes to $g \to ggg$ the points made in
ref.\ \cite{qedNfstop} about
$e \to \gamma e \to \bar e e e$, $q \to gq \to \bar q q q$
and $q \to gq \to ggq$.
However, in those cases, ref.\ \cite{qedNfstop} noted that
$dE/dL$ was nonetheless well-defined in the large $N_{\rm f}$ or
$N_{\rm c}$ limits respectively. In contrast, the $g{\to}ggg$ interference
shown in fig.\ \ref{fig:ambiguityGlue} is unsuppressed in
the large-$N_{\rm c}$ limit.
\subsection {Similar power-law IR cancellations}
LPM splitting rates and overlap corrections scale with energy like
$\sqrt{\hat q/E}$, up to logarithms.
For situations where rates are proportional to a power $E^{-\nu}$
of energy,
ref.\ \cite{qedNfstop} discusses how to derive relatively
simple formulas for the stopping distance of a shower, and more
generally formulas for various moments of the distribution of where
the energy of the shower is deposited. Those formulas can
also be adapted to the case where the rates also have
single-logarithmic
dependence $E^{-\nu} \ln E$. This is adequate for
analyzing stopping distances for QED showers \cite{qedNfstop},
but the application to QCD, which has double logs, is unclear.
But even for QCD, one can use those stopping
length formulas as yet
another context in which to explore the cancellation of power-law
IR divergences.
See appendix \ref{app:lstop} for that analysis.
\section{IR single logarithms}
\label {sec:SingleLog}
\subsection{Numerics}
In (\ref{eq:VRlimit}) and section \ref{sec:qhateff}, we extracted the
known IR double logarithm from the slope of a straight-line fit
to the small-$y$ behavior of our full numerical results when plotted
as
\begin {equation}
\frac{
v(x,y) + \tfrac12 r(x,y)
\vphantom{\Big|}
}{
\frac{C_{\rm A}\alpha_{\rm s}}{8\pi}
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO} \frac{1}{y}
\vphantom{\Big|}
}
\label {eq:VRratio2}
\end {equation}
vs.\ $\ln y$, as in fig.\ \ref{fig:dbllogCheck}.
The sub-leading single-log behavior can
be similarly found, for each value of $x$,
from the {\it intercept}\/ of that straight-line fit.
Specifically, refine (\ref{eq:VRlimit}) to include single-log
effects by writing
\begin {equation}
v(x,y) + \tfrac12 r(x,y)
\simeq
-\frac{C_{\rm A}\alpha_{\rm s}}{8\pi}
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO} \frac{\bigl(\ln y + s(x)\bigr)}{y}
\,.
\label {eq:VRwithSingle}
\end {equation}
Here, the $y^{-1} \ln y$ term generates the known double-log behavior
$\propto \ln^2\delta$ after integration over $y$, and the new
$s(x)\,y^{-1}$ term allows for additional single-log behavior
$\propto \ln\delta$.
Then the combination (\ref{eq:VRratio2}) behaves at small $y$ like
\begin {equation}
\frac{
v(x,y) + \tfrac12 r(x,y)
\vphantom{\Big|}
}{
\frac{C_{\rm A}\alpha_{\rm s}}{8\pi}
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO} \frac{1}{y}
\vphantom{\Big|}
}
\simeq
-\bigl( \ln y + s(x) \bigr) .
\label {eq:VRratioSingle}
\end {equation}
The right-hand side represents the straight line fit of
fig.\ \ref{fig:dbllogCheck}, and the intercept of that fit at $\ln y = 0$
gives $-s(x)$.
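The slope/intercept extraction can be sketched as follows on synthetic data of exactly the form $-(\ln y + s)$; the value of $s_{\rm true}$ below is a hypothetical stand-in, not a number from our results:

```python
import math

s_true = 1.7   # hypothetical single-log coefficient s(x), for illustration only

# Synthetic stand-in for the ratio (eq. VRratio2) at small y: exactly -(ln y + s).
ys = [10.0 ** (-k) for k in range(2, 7)]
X = [math.log(y) for y in ys]
R = [-(u + s_true) for u in X]

# Ordinary least-squares fit R = a * ln(y) + b.
n = len(X)
xbar, rbar = sum(X) / n, sum(R) / n
a = sum((u - xbar) * (r - rbar) for u, r in zip(X, R)) / \
    sum((u - xbar) ** 2 for u in X)
b = rbar - a * xbar

# Slope -1 reproduces the double log; minus the intercept gives s(x).
print(a, -b)
```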
Our numerical results for $s(x)$ are shown by circles in
fig.\ \ref{fig:SingleLogs}.
Note that $s(x)$ is not symmetric under $x \to 1{-}x$.
That's because we defined $v(x,y)$ in (\ref{eq:Vdef})
to contain Class I virtual diagrams but not their
$x \to 1{-}x$ cousins.
\begin {figure}[t]
\begin {center}
\includegraphics[scale=0.55]{SingleLogs.eps}
\caption{
\label {fig:SingleLogs}
The single-log coefficients $s$ (circles) and $\bar s$ (diamonds)
as a function of $x$. The left-most and right-most data points are
for $x{=}0.01$ and $x{=}0.99$, while all other data points
are evenly spaced at
$x{=}0.05$, $0.10$, $0.15$, ..., $0.85$, $0.90$, $0.95$.
For comparison,
the dashed blue curve shows the anticipated small-$x$ behavior
(\ref{eq:sbarsmallx}) with constant $c$ fit by (\ref{eq:c}),
and the solid blue curve shows the
educated guess (\ref{eq:sbarform}) for the full $x$ dependence.
}
\end {center}
\end {figure}
We do not have anything interesting to say about the precise shape of
$s(x)$ itself. But we can get to something interesting if we note that our
original discussion (\ref{eq:VRlimit}) of the small-$y$ behavior of
$v(x,y)+\tfrac12 r(x,y)$ was in the context of $[d\Gamma/dx]_{\rm net}$,
where $v(x,y)+\tfrac12 r(x,y)$ appeared in the $x \leftrightarrow 1{-}x$
symmetric combination
\begin {equation}
\bigl[ v(x,y) + \tfrac12 r(x,y) \bigr] + \bigl[ x \to 1{-}x \bigr]
\label {eq:VR2}
\end {equation}
of (\ref{eq:VR}).
For this combination, the single log piece corresponds to twice
the average
\begin {equation}
\bar s(x) \equiv \frac{s(x)+s(1{-}x)}{2}
\end {equation}
of $s(x)$ over $x \leftrightarrow 1{-}x$.
This $\bar s(x)$ is depicted by the diamonds in
fig.\ \ref{fig:SingleLogs}.
And even though we currently have only numerical results for $\bar s(x)$,
we will be able to make some interesting observations about its form
by comparing our numerics to an educated guess that we will discuss in
a moment.
$[d\Gamma/dx]_{\rm net}$, and thus $\bar s(x)$, also appears in
our other discussions of IR behavior, such as the gain term
in the evolution equation (\ref{eq:Nevolve0}) for the gluon
distribution $N(\zeta,E_0,t)$.
The loss term of that equation depends on the total rate
$\Gamma$, which treats the two identical daughters of $g \to gg$
processes $x$ and $1{-}x$ on an equal footing.%
\footnote{
As was true for $[d\Gamma/dx\,dy]_{\rm net}$, the $r(x,y)$ contribution
representing $g \to ggg$ is symmetric in
$x \leftrightarrow z \equiv 1{-}x{-}y$ rather than $x\leftrightarrow 1{-}x$,
but the difference is unimportant in the $y{\to}0$ limit we are using
to extract IR divergences. More specifically, the
difference between $r(x,y) = r(1{-}x{-}y,y)$ and $r(1{-}x,y)$ is
parametrically smaller as $y{\to}0$ than the $1/y$ terms responsible
for the single-log IR divergence under discussion.
}
So $\bar s(x)$ is the relevant function for single log divergences,
regardless of the fact that we found it convenient to rewrite
$\Gamma$ in (\ref{eq:NevolveRV}) in a way
that obscured the $x \leftrightarrow 1{-}x$ symmetry of $g{\to}gg$
so that we could make more explicit the cancellation of
power-law IR divergences.%
\footnote{
If desired, one could achieve both goals by replacing the integrand in
(\ref{eq:NevolveRV}) by its average
over $x \leftrightarrow 1{-}x$.
}
\subsection {Educated guess for form of $\bar s(x)$}
Let's now return to the issue of $x$ dependence in the translation
of the standard double log result $\ln^2(L/\tau_0)$ in (\ref{eq:Dlogtau})
to the $\ln^2\delta$ of our calculations in (\ref{eq:Dlogdelta}).
Previously, when we compared the two, we ignored the $x$ dependence
of the $\ln^2(\delta/x)$ in (\ref{eq:Dlogdelta}). Now keeping track of
that $x$ dependence, the translation (\ref{eq:DlogTranslate}) becomes
\begin {equation}
\ln^2\Bigl( \frac{L}{\tau_0} \Bigr)
\longrightarrow
\tfrac12 \ln^2\Bigl( \frac{\delta}{x} \Bigr) .
\label {eq:DlogTranslatex}
\end {equation}
Here we assume $x < 1{-}x$, and
the arguments of the double logarithms are only {\it parametric} estimates.
Rewrite the right-hand side
of (\ref{eq:DlogTranslatex}) as $\ln^2 \Delta$ with
$\Delta \sim \delta/x$. For $x \ll 1$, this parametric relation
suggests that $\Delta \simeq \#\delta/x$ for some proportionality
constant $\#$.
So (\ref{eq:DlogTranslatex}) suggests
that a more precise substitution for $x \ll 1$ would be
\begin {equation}
\ln^2\Bigl( \frac{L}{\tau_0} \Bigr)
\longrightarrow
\tfrac12 \ln^2\Bigl( \# \frac{\delta}{x} \Bigr)
=
\tfrac12 \ln^2\delta
+ \left[ \ln\Bigl(\frac{1}{x} \Bigr) + \ln\# \right] \ln\delta
+ \mbox{(IR convergent)}
.
\label {eq:DlogTranslatex2}
\end {equation}
Eq.\ (\ref{eq:DlogTranslatex2}) contains information about the
small-$x$ dependence
of the coefficient of the sub-leading, single IR-logarithm $\ln\delta$.
In a moment, we will attempt to generalize to a guess of the
behavior for all values of $x$,
but first let's see how (\ref{eq:DlogTranslatex})
compares to our numerics. Consider the logarithms arising
from a symmetrized $\bar s$ version of (\ref{eq:VRwithSingle}), whose
integral over $y$ would be proportional to
\begin {equation}
- \int_\delta dy \> \frac{\bigl( \ln y + \bar s(x) \bigr)}{y}
= \tfrac12 \ln^2\delta + \bar s(x) \ln \delta
+ (\mbox{IR convergent}) .
\label {eq:sbarlogint}
\end {equation}
Comparison of (\ref{eq:DlogTranslatex2}) with (\ref{eq:sbarlogint})
suggests that
\begin {equation}
\bar s(x) \simeq \ln\Bigl(\frac{1}{x} \Bigr) + c
\qquad (y \ll x \ll 1) ,
\label {eq:sbarsmallx}
\end {equation}
where $c=\ln\#$ is a constant that is not determined by this argument
and must be fit to our numerics. The dashed blue curve in
fig.\ \ref{fig:SingleLogs} shows (\ref{eq:sbarsmallx}) with
\begin {equation}
c = 9.0
\label {eq:c}
\end {equation}
on the graph of our full numerical results.
The form (\ref{eq:sbarsmallx}) works well for small $x$.
To make an educated guess for the full $x$ dependence of
$\bar s(x)$, we need to replace (\ref{eq:sbarsmallx}) by
something symmetric in $x \leftrightarrow 1{-}x$.
The formation time $t_{\rm form}(x)$, related to the harmonic
oscillator frequency $\Omega_0$ of (\ref{eq:Om0def}) by
\begin {equation}
\frac{1}{[t_{\rm form}(x)]^2} \sim |\Omega_0|^2
=
\left|
\frac{-i \hat q_{\rm A}}{2E} \Bigl( -1 + \frac{1}{x} + \frac{1}{1-x} \Bigr)
\right| ,
\label {eq:Om0sqr}
\end {equation}
is symmetric in $x \leftrightarrow 1{-}x$ and plays a major role in
the LPM effect. So, even though our arguments about double logs
have only been parametric, let us see what happens if we guess
that the $1/x$ in
(\ref{eq:sbarsmallx}) is arising from the small $x$ behavior of
(\ref{eq:Om0sqr}), and so we replace (\ref{eq:sbarsmallx}) by
\begin {equation}
\bar s(x) =
\ln \Bigl( -1 + \frac{1}{x} + \frac{1}{1-x} \Bigr) + c .
\label {eq:sbarform}
\end {equation}
This guess is shown by the solid blue curve in fig.\ \ref{fig:SingleLogs}.
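Two properties of this guess are easy to verify directly: the symmetry under $x \to 1{-}x$ and the small-$x$ limit (\ref{eq:sbarsmallx}). A minimal check (using the fitted $c = 9.0$, though any constant would do here):

```python
import math

c = 9.0   # fitted constant from eq. (c)

def sbar_guess(x):
    """Educated guess (eq. sbarform) for the single-log coefficient."""
    return math.log(-1.0 + 1.0 / x + 1.0 / (1.0 - x)) + c

# Symmetric under x -> 1-x:
for x in (0.1, 0.25, 0.4):
    assert abs(sbar_guess(x) - sbar_guess(1.0 - x)) < 1e-12

# Reduces to ln(1/x) + c for small x, matching (eq. sbarsmallx):
x_small = 1e-5
assert abs(sbar_guess(x_small) - (math.log(1.0 / x_small) + c)) < 1e-4
print(sbar_guess(0.5))
```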
\subsection {How well does the educated guess work?}
As the figure shows, (\ref{eq:sbarform}) captures the $x$ dependence
of the single log coefficient $\bar s(x)$
very well. However, it is not quite perfect.
To see the discrepancies, one may use (\ref{eq:VRwithSingle})
together with (\ref{eq:sbarform}) to extract from our numerical
results for $v(x,y)+\tfrac12r(x,y)$ the best choice $c(x)$
of $c$ for each {\it individual}\/ value of $x$:
\begin {equation}
c(x) \equiv
\lim_{y\to 0}
\left\{
\frac{ \frac12\bigl(
[v(x,y) + \tfrac12 r(x,y)] + [x \leftrightarrow 1{-}x]
\bigr) }
{ -\frac{C_{\rm A}\alpha_{\rm s}}{8\pi}
\left[ \frac{d\Gamma}{dx} \right]^{\rm LO} \frac{1}{y}
\vphantom{\Big|} }
-
\left[
\ln y + \ln \Bigl( -1 + \frac{1}{x} + \frac{1}{1-x} \Bigr)
\right]
\right\} .
\label {eq:cx}
\end {equation}
If the guess (\ref{eq:sbarform}) for the form of $\bar s(x)$ were
exactly right, then $c(x)$ would be an $x$-independent constant.
But fig.\ \ref{fig:c} shows a small variation of our
$c(x)$ with $x$. Our educated guess is
a good approximation but appears not to be the entire
story for understanding IR single logs.
The variation of $c(x)$ in fig.\ \ref{fig:c} is the reason that
we have not bothered to determine the small-$x$ value of $c$
in (\ref{eq:sbarsmallx}) to better precision than (\ref{eq:c}).
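To illustrate how the extraction (\ref{eq:cx}) operates, one can feed it synthetic data built to satisfy (\ref{eq:VRwithSingle}) exactly, with $\bar s(x)$ given by the guess (\ref{eq:sbarform}); the extraction then returns the input constant. All prefactors are set to 1 here for illustration; on real data the output is instead the $x$-dependent $c(x)$ of fig.\ \ref{fig:c}.

```python
import math

c_true = 9.0     # input "constant"; real data instead yields an x-dependent c(x)
prefac = 1.0     # stands in for (C_A alpha_s / 8 pi) [dGamma/dx]^LO, set to 1

def omega_arg(x):
    return -1.0 + 1.0 / x + 1.0 / (1.0 - x)

def vr_sym(x, y):
    """Synthetic x <-> 1-x symmetrized v + r/2, built to satisfy
    (eq. VRwithSingle) exactly, with sbar(x) from the guess (eq. sbarform)."""
    sbar = math.log(omega_arg(x)) + c_true
    return -prefac * (math.log(y) + sbar) / y

def c_of_x(x, y=1e-8):
    # Eq. (cx): divide by -prefac/y, then subtract ln y + ln(omega_arg).
    ratio = vr_sym(x, y) / (-prefac / y)
    return ratio - (math.log(y) + math.log(omega_arg(x)))

print(c_of_x(0.3))
```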
\begin {figure}[t]
\begin {center}
\includegraphics[scale=0.55]{c.eps}
\caption{
\label {fig:c}
Extraction via (\ref{eq:cx})
of the $x$-dependence of the ``constant'' $c$
in the form (\ref{eq:sbarform}) for $\bar s(x)$.
}
\end {center}
\end {figure}
We should note that the value of $c$ will be IR-regularization
scheme dependent.
If we had regulated the IR with a smooth cut-off at $p^+ \sim P^+ \delta$
instead of a hard cut off, a different value of $c$ would be needed
to keep the physics the same on the right-hand side of
(\ref{eq:sbarlogint}) with the different meaning of $\delta$.
\section{Theorist Error}
\label {sec:error}
The results presented in Appendix \ref{app:summary} for overlap
effects on double splitting calculations represent the culmination
of a very long series of calculations
\cite{2brem,seq,dimreg,QEDnf} that required addressing many subtle
technical issues as well as many involved arguments computing
expansions in $\epsilon$ for novel dimensionally-regulated quantities.
In the absence of calculations by an independent group using
independent methods,
a natural worry must be whether somewhere our group
might have made a mistake that would noticeably affect our final
results. We refer to this possibility as ``theorist error,'' in
contrast to ``theoretical error'' estimates of
uncertainty arising from the approximations used.
Though we cannot absolutely guarantee the absence of theorist error,
we think it useful to
list a number of cross-checks and features of our calculations.
Some of these check our treatment of technical subtleties of
the calculation.
{\it 1.}
The power-law IR divergences computed for real and virtual diagrams
in the $\hat q$ approximation cancel
each other, as discussed in this paper.
Sub-leading IR divergences, which do not cancel,
correctly reproduce the IR double log \cite{Wu0} known from previous,
independent calculations \cite{Blaizot,Iancu,Wu}
that analyzed overlap effects in leading-log approximation.
{\it 2.}
Our calculation generates the correct $1/\epsilon$ UV divergences for the known
renormalization of $\alpha_{\rm s}$.
This includes the cancellation of mixed UV-IR divergences,
which is one of the subtleties of Light-Cone Perturbation Theory.
{\it 3.}
In the soft limit $y \ll x \ll 1$ of $g \to ggg$,
crossed \cite{2brem} and sequential \cite{seq} diagrams give
contributions to $\Delta\,d\Gamma/dx\,dy$ that behave like
$\ln(x/y)/x y^{3/2}$. But the logarithmic enhancement of
these $1/x y^{3/2}$ contributions {\it cancels}\/ when
all $g {\to} ggg$ processes are
added together, reassuringly consistent with the
Gunion-Bertsch picture presented in appendix B of ref.\ \cite{seq}.
When our formalism is applied instead to large-$N_{\rm f}$ QED \cite{QEDnf},
the analogous logarithm does not cancel. In that case,
its coefficient reassuringly matches what one would expect from
DGLAP-like arguments, as explained in section 2.2.3 of ref.\ \cite{QEDnf}.
{\it 4.}
One of the technical subtleties of our methods has to do with identifying the
correct branch to take for logarithms $\ln C$ of complex or negative numbers,
which may arise in dimensional regularization, for example, from
the expansion of a $C^\epsilon$. See section 4.6 and appendix H of
ref.\ \cite{dimreg}, as well as appendix H.1 of ref.\ \cite{QEDnf}, for
examples where the determination of the appropriate branch requires
care. Making a mistake of $\pm 2\pi i$ in the
evaluation of a logarithm would generally have a significant effect
on our results. But we do have some consistency checks on such
``$\pi$ terms'' that result from the logarithm of the phases of
complex numbers in our calculation. One check is illustrated by
appendix \ref{app:IRcancel}, where $\pi$ terms associated
with individual diagrams must all cancel as one part of the
cancellation of IR power-law divergences.
A different, somewhat indirect cancellation test of
$\pi$ terms generated by dimensional regularization is given
in appendix D of ref.\ \cite{dimreg}.
{\it 5.}
Here is another test of an $O(\epsilon^0)$ term in the expansion of
dimensional regularization of a UV-divergent diagram.
Recall that both $g{\to}ggg$ and NLO $g{\to}gg$ processes have
power-law IR divergences of the form
$\int_\delta dy/y^{3/2} \sim \delta^{-1/2}$, where the power law $y^{-3/2}$
matches a physical argument given in section I.D of ref.\ \cite{seq}.
In the calculation of divergent diagrams, the UV-sensitive piece
of the calculation is isolated
into what are called ``pole'' pieces in refs.\ \cite{2brem,seq,dimreg,QEDnf}
and in appendix \ref{app:summary}.
These pole pieces are evaluated analytically with
dimensional regularization and
yield $1/\epsilon$ divergences plus finite $O(\epsilon^0)$
contributions. The remaining UV-insensitive contributions to the
diagrams are evaluated with numerical integration. For
some of the crossed virtual diagrams (top line of fig.\ \ref{fig:virtI}),
both the $O(\epsilon^0)$ pole piece and the UV-insensitive numerical
integral%
\footnote{
In formulas, the pole piece of the crossed virtual diagrams
corresponds to eq.\ (\ref{eq:ApoleIc}) for $A^{\rm pole}_{\rm virt\,Ic}$,
whereas the UV-insensitive piece is the integral shown in
(\ref{eq:AvirtIc}). For more details on exactly how the pole
piece is defined, see appendix \ref{app:method}.
}
turn out to have spurious IR divergences that are {\it more} IR divergent
than the power-law divergences we have discussed. However,
they also turn out to exactly cancel each other. For example,
in appendix \ref{app:Dxi}, we show how the integral
associated with $2\operatorname{Re}(x y \bar y\bar x)$ has an unwanted
$\int dy/y^2 \sim \delta^{-1}$
divergence from $y{\to}0$ that is canceled by the $O(\epsilon^0)$ piece of the
UV-divergent pole term.%
\footnote{
This is unrelated (as far as we know) to a different class of cases,
where individual diagrams have unwanted
IR divergences that are only canceled by similar divergences of
another diagram. See the two pairs of $\int dz/z^{5/2}$ divergences
in Table \ref{tab:limits} in appendix \ref{app:IRcancel}.
}
\section{Conclusion}
\label {sec:conclusion}
The results of this paper (combined with those of earlier papers)
are the complete formulas in appendix
\ref{app:summary} for the effects of overlapping formation times
associated with the various $g{\to}ggg$ and $g{\to}gg$ processes of
figs.\ \ref{fig:crossed}--\ref{fig:virtII}. But there are still
missing pieces we need before we can answer the qualitative question
which motivates this work:
Are overlap effects small enough that an
in-medium shower can be treated as a collection of individual
high-energy partons, assuming one first absorbs potentially large
double logarithms into the effective value of $\hat q$?
First, for a complete calculation, we will also need processes involving
longitudinal gluon exchange and direct 4-gluon vertices, such as
in fig.\ \ref{fig:later}. The methods for computing those diagrams
are known, and so it should only take an investment of care and time
to include them.
More importantly, our results as given are double-log IR divergent.
The known double-log IR divergence can easily be subtracted away from our
results and absorbed into the effective value of $\hat q$ reviewed
in section \ref{sec:qhateff}. However, this potentially leaves behind
a sub-leading {\it single}-log IR divergence. We've seen from numerics
that much of those single-log divergences can also be absorbed into
$\hat q_{\rm eff}$ by accounting for the $x$ dependence of the
natural choice of scale for the double-log contribution to
$\hat q_{\rm eff}$, but there remains a smaller part of the single-log
IR divergences that is not yet understood.
In order to make progress and understand the structure of the
single logarithms, we hope in the future to extract analytic (as opposed to
numerical) results for them from our full diagrammatic results.
We have also not yet determined whether diagrams involving longitudinal
gluon exchange, which have so far been left out, contribute to
IR single logarithms.
It would be extremely helpful, both conceptually and as a check of our
own work, if someone can figure out a way to directly and independently
compute the sub-leading single-log IR divergences without going through
the entire complicated and drawn-out process that we have used to
compute our full results.
\begin{acknowledgments}
We are very grateful to Risto Paatelainen for valuable conversations
concerning cancellation of divergences in Light Cone Perturbation Theory.
We also thank Yacine Mehtar-Tani for several discussions over
the years concerning double-log corrections.
This work was supported, in part, by the U.S. Department
of Energy under Grant No.~DE-SC0007984 (Arnold and Gorda)
and the National Natural
Science Foundation of China under
Grant Nos.~11935007, 11221504 and 11890714 (Iqbal).
\end{acknowledgments}
\section{Introduction}
Through the emergence of new online channels and information technology, targeted advertising plays a growing role in our society and progressively replaces traditional forms of advertising like newspapers, billboards, etc. Indeed, companies can minimize wasted advertising costs by directly targeting individuals that are potentially interested in the product the advertiser is promoting. Modern targeted media use historical data from the internet (cookies), such as tracking of online or mobile web activities of consumers.
\vspace{1mm}
Optimal control is a suitable mathematical tool for studying advertising problems, and there is already a large literature on this topic.
In the classical approach, a dynamical system for the sales process is modeled and the optimisation is performed over the advertising expenditures process.
We mention the pioneering works by \cite{Nerlove:1962aa}, \cite{Vidale:1957aa}, and then important papers by Sethi and his collaborators, see \cite{Feichtinger:1994aa} for an overview of this research field up to the 90s,
the more recent paper in \cite{sethi21}, and other references in \cite{RGSethi}, as well as in the handbook \cite{hand08}.
We also mention two other works, one about optimal advertising with delay, studied in \cite{2009Gozzi}, and the other \cite{lonzer11} on a model of optimal advertising with singular control.
\vspace{1mm}
The past decade has seen a growing academic interest in the economic and operations research community for digital advertising. We mention for instance the paper \cite{levmil10} on the design of online advertising markets,
\cite{yuan14} for a survey on real-time bidding advertising, \cite{goetal21} for a study of bidding behaviour by learning, \cite{jinetal18} for a multi-agent reinforcement learning algorithm to bid optimisation, or \cite{baletal14}, \cite{tilletal20} for an optimal bid control problem in online ad auctions, see also
\cite{choietal20} for a recent literature review on online display advertising.
\vspace{1mm}
In this paper, we address the following problem. We consider an Agent {\bf A} (company/association) willing to spread some advertising information ${\bf I}$ to Users/Individuals, e.g.\ (i) the existence of a new product or a new service, or (ii) the danger of some behaviour (drugs, viruses, etc.).
This information corresponds to the following two types of advertising models that we shall study:
\begin{enumerate}
\item {\bf Commercial advertising}, modeling situations where informing an individual triggers a reward for the agent.
We shall consider two types of rewards: {\it purchase-based reward}, corresponding to a punctual payment from the individual to the agent, and {\it subscription-based reward}, corresponding to a subscription of the individual to a service proposed by the agent, who then receives a regular fee.
\item {\bf Social marketing}, modeling situations where informing an individual cancels a cost continuously perceived by the agent.
In contrast with the commercial advertising model, the objective of the agent is not to make a profit but is rather philanthropic. The aim is to change people's behaviours and to promote social change by raising awareness about dangers. Classical social marketing campaigns are anti-drug, vaccination, road-safety, or low-fat diet campaigns.
From the agent's viewpoint, any individual who is not behaving safely is considered to represent a continuous cost to her.
\end{enumerate}
The issue for Agent {\bf A} is: how to diffuse the information ${\bf I}$ efficiently by means of ``modern'' online channels (digital ads, social networks, etc.)? With that aim, we propose a continuous-time model for optimal digital advertising strategies. An important feature is to consider the online behaviours of individuals/users, who may interact with each other, and to derive how advertising affects their information states.
Compared to classical models, which focus directly on macroscopic variables such as a sales process controlled by an advertising process, often without an explicit modelling of the underlying mechanism, our approach starts from a more ``atomic'' level by explicitly describing the individuals' behaviours and their social interactions, and is hence easier to justify from an intrinsic point of view. In particular, we encode the feature of auctions for targeted advertising in our models, which is a crucial component of online advertising.
The counterpart, in general, is that such a microscopic model is often less tractable than classical macro-scale models. In this work, we aim to provide a detailed model with a reasonably realistic description while keeping it tractable enough to obtain explicit solutions.
\vspace{1mm}
Auctions, in targeted advertising, are used to determine which company will have its ad displayed to a given individual.
There are marketplaces for digital advertising, called Ad exchanges, that enable automated buying and selling of ad space. Each time an individual browses through a publisher's content (e.g.\ Google, Yahoo, etc.), an ad request is sent out for the ad space to be viewed, and the Ad exchange collects data and information about the viewer via cookies. Then, an auction process is set up in real time where several advertisers (companies, influencers) declare their bids for the ad display, and the highest bidder wins the {\it ad space} by paying a cost according to the first-price or second-price auction rule.
The long history of auctions, starting from the groundbreaking works of John Nash (\cite{Nash:1951aa}) and later William S. Vickrey (\cite{Vickrey:1961aa}), and their omnipresence on Internet, illustrate the crucial importance of auction theory, also evidenced by the 2020 Nobel prize in economics, awarded to Milgrom and Wilson, for their contributions to auction theory.
\vspace{1mm}
The output of auctions can be quite challenging to predict even in simple frameworks, and as the overall framework of our models is already complex, we will not model each bidding company endogenously (which would turn our optimal control models into games/equilibrium models); instead, we assume that at each targeted advertising auction, the exogenous maximal bid from companies other than our agent is a random variable independent from the past and identically distributed across auctions. This assumption has the practical advantage of keeping the control problem tractable.
Finally, one of our models will also encode social interactions allowing individuals who saw the ad to become themselves vectors of information. Again, our modeling of social interactions will be quite simple and symmetric, to keep the problem tractable. For a detailed overview of information spreading models in populations, we refer to \cite{Acemoglu:2011aa}.
\vspace{1mm}
Besides the different nature of their applications, the two aforementioned settings also differ in their goals. On the one hand, commercial targeted advertising is already widespread on the Internet, and in this case, our study
proposes a model that could potentially improve companies' bidding strategies. On the other hand, social marketing does not currently seem to make much use of targeted advertising, relying instead on classical non-targeted advertising, and in this case, our model proposes a method to combine non-targeted advertising with targeted advertising for any organisation or association with such a philanthropic purpose.
\vspace{2mm}
\noindent{\bf Our main contributions.} Our first contribution is to propose four advertising models, based on a common core framework explicitly modelling, on the one hand, individuals' online behaviour via their web-browsing at
Poisson random times, and on the other hand, advertising auctions; each model is designed for one of the types of advertising described above.
For each of these problems, we obtain a semi-explicit form of the optimal value function and optimal bidding policy.
Our second contribution is to propose, in one of these models, a rich population model, involving individuals spontaneously finding the Information, combining targeted and non-targeted advertising auctions, and highlighting the role of social interactions.
Our third contribution is to provide classes of examples where the solutions (optimal value and bidding policy) are fully explicit.
By analysing the form of the solutions, we are able to clearly understand several interesting points: (1) we observe that the optimal bid to make in a given targeted advertising auction depends not only upon the distribution of the other bidders' maximal bids, but also upon the online behaviour of the individual (the intensities at which he connects at random times to the various types of websites); (2) in the fourth model, involving a population, and adding non-targeted advertising and social interactions in the population, we are able to understand (i) how the presence of social interactions impacts the optimal bid to make, and (ii) how the optimal bid to make in non-targeted advertising auctions relates to the optimal bid in targeted advertising auctions and to the proportion of already informed people. More generally, this work shows
how the different sources of information (targeted/non-targeted advertising, websites containing the information, and social interactions) affect each other, and in particular how they affect the optimal bids to make in advertising auctions. This is our fourth contribution.
\vspace{1mm}
The mathematical method for solving these problems is based on martingale tools, in particular on techniques involving Poisson processes and their compensators. By means of these techniques, and with a suitable change of variable in order to reformulate the problem in terms of the proportion of informed individuals, we essentially prove the results in two steps: 1) bounding from above (resp. from below) the optimal value when it is a gain (resp. a cost), and then 2) providing a well-chosen policy such that the inequalities in 1) become equalities, thus simultaneously proving that the optimal value is equal to its bound and obtaining an optimal policy reaching it.
\vspace{1mm}
\noindent{\bf Outline of the paper.} We introduce in Section \ref{sect-core} the core framework of our different models. In Section \ref{sect-com}, we study two targeted advertising models designed for applications to commercial advertising, the first one modeling advertising to trigger a purchase, the second one modeling advertising to trigger a subscription. In Section \ref{sect-soc}, we study two advertising models applied to social marketing,
the first one with an arbitrary discount factor, the second one with no discounting, but with extra features of non-targeted advertising and social interactions. We also derive some insightful properties of the solution related to the
sensitivity of the optimal bidding strategies with respect to the individual's online behaviour and the effect of social interactions. Section \ref{sec:example} presents some classes of examples where fully explicit formulas can be derived.
The proofs of our main results are postponed to Section \ref{sec:proofs}, and we conclude in Section \ref{sec:conclusion} by highlighting some extensions and perspectives.
\section{Basic framework} \label{sect-core}
In this section, we introduce the framework on which all the subsequent models are based, and then enrich it in various ways.
The core framework essentially consists in modeling (i) the concept of information, (ii) an individual's online behaviour, (iii) the targeted advertising auction mechanism, (iv) a targeted advertising bidding strategy, and finally in describing how these four features combine together to determine the information
dynamic of an individual.
\subsection{The Information and the Agent}
In this work, all our models will be about some {\it Information}. We shall denote it with a capital ``${\bf I}$'' to emphasize that it is a specific piece of information. It could a priori be any information. Let us give a few examples, further discussed in this work. The Information can be:
\begin{itemize}
\item the existence of a new company,
\item the existence of a new service (e.g. in Netflix, Amazon, etc),
\item the existence of a new product (smartphone, computer),
\item the unhealthiness or healthiness of a behaviour (drug/alcohol consumption, road safety, sexual safety, etc).
\end{itemize}
In the various models studied in this paper, each model will naturally correspond to one of these types of information, but for now, let us simply consider a generic Information.
The main characteristic of the Information is that any individual can either {\it not know it} or {\it know it}. In other words, the Information is naturally associated to a binary state for any individual: an individual in state $0$ means that he is not aware of the Information, while an individual in state $1$ means that he is aware of
the Information.
\vspace{1mm}
In our work, the Agent {\bf A} will represent any entity (company, association, etc) desiring to spread the Information to individuals or to a population.
\begin{itemize}
\item In the case of a new service or product, she corresponds naturally to the company that proposes this service or sells this product.
\item In the case of the unhealthiness or healthiness of a given behaviour, she represents a philanthropic association or a governmental entity aiming to work for social welfare.
\end{itemize}
The main characteristics of the Agent are that 1) she wants to spread the Information, 2) she has a gain or cost function depending upon how the Information spreads, and 3) she will use a digital advertising strategy as a channel to diffuse the Information.
\subsection{The Individual and the Action}
Let us start by modelling the general behaviour of an individual. Our model is in continuous time. An individual is associated to some random times when he browses the Internet, with the following possible choices:
\begin{itemize}
\item Spontaneously connect to a website providing the Information. Websites intrinsically providing the information are numerous, depending upon the kind of information: specialized websites relaying the Information, company/association's own website, etc. Essentially, any website such that the Information is in the actual website's content, as opposed to the alternative option:
\item Visit a website not providing {\it a priori} the Information, but displaying targeted ads, and thus liable to display the Information whenever the Agent (company, association, etc) wins the ad auction and pays for it. Important websites displaying targeted ads typically are social networks and search engines.
\end{itemize}
An Individual is associated to independent Poisson processes $(N^{\bf I}, N^{\bf T})$ with respective intensities $\eta^{\bf I}$, $\eta^{\bf T}$. $N^{\bf I}$ counts the times when the Individual connects to websites intrinsically providing the Information, while $N^{\bf T}$ counts the times when the Individual connects to websites displaying targeted ads.
We shall, in our fourth model, introduce a population with several individuals modeled on this basis, each with their own Poisson processes, independent across individuals.
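The Individual's browsing behaviour is straightforward to simulate, since the jump times of a Poisson process with rate $\eta$ have i.i.d. exponential inter-arrival times. A minimal sketch (function and variable names are ours, chosen for illustration only):

```python
import random

def simulate_browsing(eta_I, eta_T, horizon, seed=0):
    """Jump times on [0, horizon] of the two independent Poisson processes:
    N^I (visits to websites providing the Information, rate eta_I) and
    N^T (visits to websites displaying targeted ads, rate eta_T)."""
    rng = random.Random(seed)

    def jump_times(rate):
        times, t = [], 0.0
        while True:
            t += rng.expovariate(rate)   # i.i.d. exponential inter-arrivals
            if t > horizon:
                return times
            times.append(t)

    return jump_times(eta_I), jump_times(eta_T)
```

Independence of the two processes is obtained simply by sampling them separately from the same random stream.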
\vspace{2mm}
The Agent aims to spread the Information in order to trigger an {\it Action} from indi\-viduals. The {\it Action} depends upon the type of the Information:
\begin{itemize}
\item If the Information is about the existence of a service, the expected Action is a {\it subscription}.
\item If the Information is about the existence of a product, the expected Action is a {\it purchase}.
\item If the Information is about an unhealthy behaviour, the expected Action is a {\it healthier behaviour}.
\end{itemize}
In this work, we assume that the Agent knows the individuals well enough to be aware of who would do the Action if they had the Information (who would subscribe to the service if he learns that it exists, buy the product if he learns that it exists, or stop some behaviour if he learns that it is unhealthy).
The individuals who would not perform the Action, even if informed, are dismissed: the Agent does not try to send them ads. Therefore, we can assume that the individuals considered in this work are all such that
\begin{eqnarray*}
\text{Getting the Information }\Rightarrow\text{ Doing the Action.}
\end{eqnarray*}
\subsection{The targeted advertising auctions and bidding strategies}
When the Individual connects to websites displaying targeted ads, in reality, many influencers are competing to win the right to display their ads to him. The mechanism used by the website to choose which influencer will display her ad is to make them bid for it. Each influencer may submit a bid tailored to the Individual's characteristics (the intensities of his Poisson processes). This ad emplacement allocation mechanism is what we call {\it targeted advertising auctions}.
Auctions are complex to study. They involve several bidders, and are thus part of game theory. The current framework is even more complicated since it is dynamic: an auction is opened each time the Individual connects to a website displaying targeted ads. Our goal is to focus on providing a {\it strategic} tool to the Agent, and keeping the problem tractable is important in this work.
A good compromise, both taking targeted advertising auctions into account and keeping the problem strategically solvable, is to model the maximal bids made by the other bidders (i.e. other than the Agent) as random variables, i.i.d. across auctions. We thus introduce a sequence of i.i.d. real (nonnegative)
random variables $(B^{{\bf T}}_k)_{k\in\mathbb{N}}$, such that for $k\in\mathbb{N}$, $B^{{\bf T}}_k$ represents the maximal bid of the other bidders during the $k$-th targeted advertising auction of the problem.
\vspace{1mm}
We next introduce the notion of targeted advertising bidding strategies. In essence, a targeted advertising bidding strategy is simply a real-valued process $\beta$ which depends {\it at most} upon the past, i.e. which cannot depend upon the future (in other words, it is non-anticipative), such that at each time $t\in\mathbb{R}_+$, $\beta_t$ represents the bid that the Agent would make if the Individual connects to a website displaying targeted ads.
To rigorously formalize this, let us introduce the filtration $\mathbb{F}$ $=$ $({\cal F}_t)_{t\in \mathbb{R}_+}$ generated by the processes $(N^{{\bf I}}, N^{{\bf T}}, B^{{\bf T}}_{N^{{\bf T}}})$, i.e.,
\begin{eqnarray*}
{\cal F}_t &=& \sigma ((N^{{\bf I}}_s, N^{{\bf T}}_s, B^{{\bf T}}_{N^{{\bf T}}_s})_{0\leq s\leq t}), \quad t \geq 0,
\end{eqnarray*}
which thus represents all the information about events triggered before time $t$.
The set of open-loop bidding controls, denoted by $\Pi_{OL}$, is then the set of nonnegative processes $\beta$ that are predictable, hence progressively measurable, w.r.t. the filtration $\mathbb{F}$.
\subsection{Information dynamic, constant bidding, and advertising cost}
We can now combine all the pieces of modeling previously introduced to define the {\it information dynamic} of the Individual, the notion of constant efficient bidding policy, and the advertising cost. Given an open-loop bidding control $\beta\in\Pi_{OL}$, the information dynamic of the Individual is the $\{0,1\}$-valued process $X^\beta$ satisfying the relation
\begin{equation}
\left\{
\begin{array}{ccl}
X^\beta_{0^-} &=& 0, \\
dX^\beta_t&=&(1-X^\beta_{t-})(dN^{{\bf I}}_t+{\bf 1}_{\beta_t\geq B^{{\bf T}}_{N^{{\bf T}}_t}}dN^{{\bf T}}_t), \quad t \geq 0.
\end{array}
\right.
\end{equation}
Let us interpret this dynamic. The individual starts uninformed ($X^\beta_{0^-}=0$). Once he is informed ($X^\beta_t=1$), he stays informed (hence the $(1-X^\beta_{t-})$ part). As long as he is not informed, the remaining part of the dynamic is effective: when the individual connects to a website intrinsically providing the Information, he becomes informed ($dN^{\bf I}_t$ part). When he connects to a website displaying targeted ads ($dN^{\bf T}_t$ part), he becomes informed if and only if the Agent's ad is displayed to him, which happens if and only if the Agent wins the auction (${\bf 1}_{\beta_t \geq B^{{\bf T}}_{N^{{\bf T}}_t}}$ part).
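This dynamic is easy to simulate under a constant bid $b$, by merging the two Poisson processes (total intensity $\eta^{\bf I}+\eta^{\bf T}$) and thinning each jump. The sketch below returns a simulated time of information; the function name, the `bid_sampler` interface, and the horizon guard are our own illustrative choices:

```python
import random

def information_time(b, eta_I, eta_T, bid_sampler, horizon=1e6, seed=0):
    """Simulated time at which the Individual becomes informed under a
    constant bid b: the first N^I jump, or the first N^T jump whose
    auction the Agent wins (b >= B_k)."""
    rng = random.Random(seed)
    t = 0.0
    while t < horizon:
        # next jump of the merged process, intensity eta_I + eta_T
        t += rng.expovariate(eta_I + eta_T)
        # thinning: the jump belongs to N^I with probability eta_I/(eta_I+eta_T)
        if rng.random() < eta_I / (eta_I + eta_T):
            return t                      # informed via an information website
        if b >= bid_sampler(rng):         # the Agent wins the targeted-ad auction
            return t                      # informed via the displayed ad
    return float('inf')                   # not informed before the horizon
```

For instance, `information_time(0.7, 0.5, 2.0, lambda r: r.random())` simulates the model with the competitors' maximal bids uniform on $[0,1)$, a purely illustrative distributional choice.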
\vspace{3mm}
\noindent{\bf Advertising cost.} In the subsequent models, the gain or cost function of the agent will be the combination of 1) a component depending upon the information dynamic of the Individual, and 2) an advertising cost component. The component 1) will depend upon the model, but the advertising cost will always have the same form, namely:
\begin{align} \label{defCost}
C(\beta) &= \; \mathbb{E}\Big[\int_0^\infty e^{-\rho t}{\bf 1}_{\beta_t\geq B^{{\bf T}}_{N^{{\bf T}}_t}} {\bf c}(\beta_t,B^{{\bf T}}_{N^{{\bf T}}_t})\, d N^{{\bf T}}_t\Big].
\end{align}
The interpretation is the following:
\begin{itemize}
\item $\rho\in \mathbb{R}_+$ is a discount rate. Usually, the discount rate is chosen to be strictly positive in order to avoid infinite rewards or costs. However, in one of our models (the last one), we will specifically assume $\rho=0$, and this will be an important assumption to make the problem solvable. We shall see that in this model, infinite rewards/costs never occur despite this assumption.
\item When the Individual connects to a website displaying targeted advertising ($d N^{{\bf T}}_t$ part), if the targeted advertising auction is won by the agent (${\bf 1}_{\beta_t\geq B^{{\bf T}}_{N^{{\bf T}}_t}}$ part), the agent has to pay a price ${\bf c}(\beta_t,B^{{\bf T}}_{N^{{\bf T}}_t})$, where ${\bf c}:\mathbb{R}^2\rightarrow \mathbb{R}$ is a function depending upon the paying rule defined by the auction. In this paper, the auction payment rule is assumed to be one of the two following standard rules:
\begin{enumerate}
\item {\it First-price auctions.} Under this auction rule, the winner of the auction pays her bid, and thus, we have ${\bf c}(b,B)=b$.
\item {\it Second-price auctions.} Under this rule, the winner of the auction pays the {\it second winning bid}, i.e. the bid that she beat. In this case, we have ${\bf c}(b,B)=B$.
\end{enumerate}
\end{itemize}
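The two payment rules admit a one-line implementation; the helper below (our own naming) returns what the Agent pays at a single auction:

```python
def payment(rule, b, B):
    """Payment of the Agent at one auction when she bids b against the
    maximal competing bid B, under rule "first" or "second"."""
    if b < B:
        return 0.0                        # auction lost: nothing is paid
    return b if rule == "first" else B    # c(b,B) = b or c(b,B) = B
```

Note that under the second-price rule, the payment never exceeds the Agent's own bid, since it is only made on the event $\{b\geq B\}$.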
\vspace{3mm}
\noindent{\bf Constant bidding policy.} A constant bidding policy is a constant $b\in\mathbb{R}_+$. The constant bidding control $\beta^b\in\Pi_{OL}$ associated to a constant bidding policy $b$ is defined by the feedback-form constraint $\beta^b_t= (1-X^{\beta^b}_{t-})b$.
It simply models a strategy where the Agent makes a constant bid $b$ as long as the Individual is not informed (notice that it would be useless to make a positive bid once he is informed).
\vspace{2mm}
We have now introduced all the elements of the core framework. In the sequel, we shall study several advertising problems based on this framework:
\begin{itemize}
\item In Section \ref{sect-com}, we model commercial advertising problems, i.e. problems where the Agent is a company either trying to sell a service or a product. The common property of both situations is that informing the Individual triggers an Action bringing a {\it reward} to the company (subscription regular fee, purchase punctual fee).
\item In Section \ref{sect-soc}, we model social marketing problems, i.e. problems where the Agent is an association or a government trying to alert people about unhealthy behaviours (anti-drug/alcohol campaigns, road-safety campaigns, etc). The particularity of this type of advertising is that informing people does not bring a reward to the Agent; instead, it {\it cancels a cost}: as long as an individual keeps an unhealthy behaviour, he generates a running cost for the philanthropic association. Once informed, he behaves more healthily and stops generating such a cost.
\end{itemize}
\section{Commercial advertising model}\label{sect-com}
In this section, we study models for commercial advertising.
The Agent is thus a company trying to maximize its gain. We will study two types of commercial gains: the subscription-based gain, and the purchase-based gain.
\subsection{Purchase-based gain function} \label{sec:compur}
We consider the situation where the Information is the existence of a product, where the Agent is a company selling this product, and where the Action of the Individual, once informed, is to purchase the product. We thus define the following {\it purchase-based} gain function:
\begin{align} \label{defVpurchase}
V(\beta) &=\; \mathbb{E}\Big[\int_0^\infty e^{-\rho t}KdX^\beta_t\Big]-C(\beta), \quad \mbox{ for } \beta \in\Pi_{OL},
\end{align}
where $K$ is a nonnegative constant.
Let us interpret this gain function. The part $C(\beta)$ is just the advertising cost from the core framework as defined in \eqref{defCost}, and $\rho$ is still the discount rate. The part $\int_0^\infty e^{-\rho t}KdX^\beta_t$ simply represents a lump-sum payment $K$ from the Individual to the Agent when he becomes informed ($dX^\beta_t$ part). This naturally models the reward obtained by the Agent when the Individual buys the product. Therefore, $V(\beta)$ represents the net profit of the Agent when selling a product.
\vspace{2mm}
We now state the main result of this section.
\begin{Theorem}\label{theo-purchase}
We have
\begin{eqnarray*}
V^\star:=\sup_{\beta\in\Pi_{OL}} V(\beta)&=&
\sup_{b\in\mathbb{R}_+} V(\beta^b),
\end{eqnarray*}
with
\begin{align} \label{Vbetab}
V(\beta^b) &= \; \frac{\eta^{{\bf I}}K+\eta^{{\bf T}}\mathbb{E}\big[\big(K-{\bf c}(b, B_1^{{\bf T}})\big) {\bf 1}_{b\geq B^{{\bf T}}_1}\big]}{\eta^{{\bf I}}+ \rho + \eta^{{\bf T}}\mathbb{P}[b\geq B^{{\bf T}}_1]}, \quad \forall b\in\mathbb{R}_+.
\end{align}
Furthermore, any $b^\star$ $\in$
$\argmax_{b\in\mathbb{R}_+}V(\beta^b)$
yields an optimal constant bid policy, i.e. an optimal open-loop bid control $\beta^{b^\star}$ taking the form of a constant bid.
\end{Theorem}
\paragraph{Interpretation.} Let us interpret this result by first understanding the role of $\rho$. It is well known that a discount rate is mathematically equivalent to a random termination date of the problem following an exponential distribution ${\cal E}(\rho)$ with parameter $\rho$.
Up to adding this random termination time, we can thus consider that the problem has no discount rate. Given this interpretation, and assuming that the Agent plays a constant bidding policy $b$, notice that the value $V(\beta^b)$ can be written as
\begin{eqnarray*}
\pi_{{\bf I}}K+\pi_{\rho}\times 0 + \pi_{{\bf T}}\mathbb{E} [K-{\bf c}(b, B_1^{{\bf T}})\mid b\geq B^{{\bf T}}_1]
\end{eqnarray*}
where $(\pi_{{\bf I}}, \pi_{\rho},\pi_{{\bf T}})$ are probability weights proportional to $(\eta^{{\bf I}},\rho , \eta^{{\bf T}}\mathbb{P}[b\geq B^{{\bf T}}_1])$. This expression should be seen as the expected reward of the Agent computed in terms of how the problem terminates:
\begin{itemize}
\item When it terminates with the Individual finding the Information by himself with probability $\pi_{{\bf I}}$, the Agent only perceives the reward $K$.
\item When it terminates at the random time associated to ${\cal E}(\rho)$, the Individual has not had the time to be informed: the Agent perceives nothing.
\item When it terminates with the Individual getting informed by viewing the Agent's targeted ad with probability $\pi_{{\bf T}}$, the Agent perceives $K$ and pays ${\bf c}(b, B^{{\bf T}}_1)$ because he had to pay the auction's price.
\end{itemize}
\vspace{1mm}
Besides the quantitative aspect of this result, an important qualitative property is that a constant bidding policy is enough to reach the optimal value over all open-loop bidding controls. This is particularly interesting from a model-free viewpoint (reinforcement learning), as it means that one can restrict the search for an optimal strategy to the set of nonnegative constant bidding policies,
which is a reasonably ``small'' set.
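Concretely, Theorem \ref{theo-purchase} reduces the problem to a one-dimensional maximization of \eqref{Vbetab}. The sketch below evaluates the formula in the illustrative case where the competitors' maximal bids are uniform on $[0,1]$ (so that $\mathbb{P}[b\geq B^{{\bf T}}_1]$ and the expected payment are in closed form) and searches over a grid; all names and the uniform choice are ours, not part of the model:

```python
def V_purchase(b, K, eta_I, eta_T, rho, rule="second"):
    """Value V(beta^b) of the purchase model for a constant bid b, under the
    illustrative assumption B ~ Uniform(0, 1) for competitors' maximal bids."""
    p_win = min(b, 1.0)                        # P[b >= B]
    if rule == "second":
        exp_pay = p_win ** 2 / 2.0             # E[B 1_{b >= B}]
    else:                                      # first-price rule
        exp_pay = b * p_win                    # E[b 1_{b >= B}]
    return (eta_I * K + eta_T * (K * p_win - exp_pay)) / (eta_I + rho + eta_T * p_win)

def best_constant_bid(K, eta_I, eta_T, rho, rule="second", n=10001):
    """Grid search over [0, rho*K/(eta_I+rho)], an interval that contains the
    smallest optimal constant bid (cf. the sensitivity bounds below)."""
    b_max = rho * K / (eta_I + rho)
    grid = [i * b_max / (n - 1) for i in range(n)]
    return max(grid, key=lambda b: V_purchase(b, K, eta_I, eta_T, rho, rule))
```

Bidding $b=0$ recovers the lower bound $\eta^{{\bf I}}K/(\eta^{{\bf I}}+\rho)$, since the Agent then never wins an auction and simply waits for the Individual to inform himself.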
\begin{Remark}[Cost dual viewpoint]
Another interesting way to formulate the optimal value and bid is from a {\it cost viewpoint} (and this is actually how we prove this formula in this paper): the idea is to consider the best possible scenario for the Agent, which arises when the Individual directly connects to a website containing the information from the very beginning, and then look at the real scenario {\it relatively} to this best scenario. The real scenario necessarily brings a smaller gain than the best scenario, and thus, it is {\it as if} the Agent won the best scenario gain but then pays a cost corresponding to this difference. From this viewpoint, the goal is to minimize this cost. The best scenario gain is clearly equal to $K$, and we can rewrite the optimal value from \eqref{Vbetab} as
\begin{eqnarray*}
\sup_{\beta\in\Pi_{OL}} V(\beta)&=& K-
\inf_{b\in \mathbb{R}_+}
\frac{\rho K+\eta^{{\bf T}}\mathbb{E}\big[{\bf c}(b, B_1^{{\bf T}}) {\bf 1}_{b\geq B^{{\bf T}}_1}\big]}{\eta^{{\bf I}}+ \rho + \eta^{{\bf T}}\mathbb{P}[b\geq B^{{\bf T}}_1]},
\end{eqnarray*}
and any $b^\star\in\mathbb{R}_+$ such that
\begin{eqnarray*}
b^\star&=&
\argmin_{b\in\mathbb{R}_+}
\frac{\rho K+\eta^{{\bf T}}\mathbb{E}\big[{\bf c}(b, B_1^{{\bf T}}) {\bf 1}_{b\geq B^{{\bf T}}_1}\big]}{\eta^{{\bf I}}+ \rho + \eta^{{\bf T}}\mathbb{P}[b\geq B^{{\bf T}}_1]}
\end{eqnarray*}
yields an optimal constant bid.
\end{Remark}
\vspace{1mm}
We next introduce the smallest optimal constant bidding policy.
\begin{Definition}[Smallest optimal constant bid policy]
We denote by $b^\star_{min}$ the constant bidding policy such that
\begin{eqnarray*}
b_{min}^\star &=& \min \argmax_{b\in\mathbb{R}_+}V(\beta^b).
\end{eqnarray*}
$b_{min}^\star$ is called the smallest optimal constant bidding policy.
\end{Definition}
\begin{Remark}
From the proofs of our results, it is possible to see that the open-loop bidding control $\beta^{b_{min}^\star}$ is the smallest optimal open-loop bidding control, i.e. for all optimal open-loop bidding control $\beta$, we have
$\beta^{b_{min}^\star}_t$ $\leq$ $\beta_t$, $(\omega,t)$-a.e.
\end{Remark}
We have the following result about the parameter sensitivity and bounds of the optimal value and of the smallest optimal constant bidding policy.
\begin{Proposition}\label{prop-sensitivity}
The optimal value $V^\star$ is increasing in $\eta^{{\bf I}},\eta^{{\bf T}}$ and decreasing in $\rho$, and the smallest optimal constant bidding policy $b_{min}^\star$ is decreasing in $\eta^{{\bf I}},\eta^{{\bf T}}$ and increasing in $\rho$.
Finally, we have
\begin{eqnarray*}
V^\star\geq \frac{\eta^{{\bf I}} K}{\eta^{{\bf I}}+\rho}, \quad \quad b_{min}^\star \; \leq \; K-V^\star \; \leq \; \frac{\rho K}{\eta^{{\bf I}}+\rho}.
\end{eqnarray*}
\end{Proposition}
\vspace{1mm}
The interpretation of the above Proposition is the following.
\vspace{1mm}
\noindent{\bf Properties for $V^\star$.} The higher $\eta^{{\bf I}}$ is, the more frequently the Individual connects to a website containing the Information, and thus the sooner he learns the Information by this channel, which, at fixed constant bid $b$, naturally increases the expected gain of the Agent. This explains why the gain with any constant bid $b$, and thus the optimal gain of the Agent, is increasing in $\eta^{{\bf I}}$. Given $\tilde{\eta}^{{\bf T}}\leq \eta^{{\bf T}}$, the Agent can always ``emulate'' any scenario associated to a constant bid $b$ and a frequency $\tilde{\eta}^{{\bf T}}$ of connection to websites displaying targeted ads, simply by constraining herself to bid $b$ only with probability $\frac{\tilde{\eta}^{{\bf T}}}{\eta^{{\bf T}}}$ and $0$ otherwise at each auction: by a standard thinning property of Poisson processes, this is equivalent to always bidding $b$ at auctions occurring with intensity $\tilde{\eta}^{{\bf T}}$. Consequently, with the intensity $\eta^{{\bf T}}$, the Agent can replicate all the gains that an intensity $\tilde{\eta}^{{\bf T}}$ could yield, and thus her optimal gain is increasing in $\eta^{{\bf T}}$. Finally, the larger $\rho$ is, the more impatient the Agent is, and thus the less value she gives to potential future rewards, which explains why her optimal gain $V^\star$ is decreasing in $\rho$. The lower bound on $V^\star$ simply corresponds to the gain associated to the constant bidding policy consisting in bidding $0$ at each auction, i.e. never displaying any ad, and simply waiting for the Individual to inform himself on a website containing the Information.
\vspace{1mm}
\noindent{\bf Properties for the smallest optimal constant bid.} When an auction is opened, two scenarios can occur: 1) the Agent wins the auction, receives $K$ and pays the auction price, or 2) the Agent loses the auction, and the problem of informing the Individual keeps going, with an optimal value $V^\star$. In other words, putting the auction price aside, an auction can be seen as providing a reward $K$ when it is won, and $V^\star$ when it is lost. Notice that this is equivalent to considering that $V^\star$ is won anyway, and that the auction is a standard static auction providing the additional reward $K-V^\star$ if the auction is won, and $0$ if it is lost. The larger $V^\star$ is, the smaller $K-V^\star$ is, and thus the smaller the bid that the Agent should be willing to make to win $K-V^\star$. The smallest optimal bid is thus decreasing in $V^\star$, which explains why its sensitivities to all the parameters are reversed w.r.t. those of $V^\star$. In such an auction, it is also clear that the Agent would have no interest in paying more than $K-V^\star$ to win, which justifies the upper bound $K-V^\star$. The looser upper bound $\frac{\rho K}{\eta^{{\bf I}}+\rho}$ directly comes from the lower bound on $V^\star$. In particular, this implies that to obtain an optimal constant bidding strategy, we can restrict the search for the supremum over $b$ in $V(\beta^b)$ to the bounded interval $[0,\frac{\rho K}{\eta^{{\bf I}}+\rho}]$.
\subsection{Subscription-based gain function}
We now consider the situation where the Information is the existence of a service, where the Agent is the company proposing this service, and where the Action of the Individual, once informed, is to subscribe to the service. To that aim, we then simply consider the following {\it subscription-based} gain function:
\begin{align} \label{defVsus}
V(\beta) &= \; \mathbb{E}\Big[\sum_{n\in\mathbb{N}} e^{-(\tau^\beta+n)\rho}K\Big]-C(\beta), \quad \mbox{ for } \beta \in\Pi_{OL},
\end{align}
where $\tau^\beta:= \inf\{t\in\mathbb{R}_+: X^\beta_t=1\}$ is the time of information of the individual.
\vspace{3mm}
Let us interpret this gain function. Again, the part $C(\beta)$ is the advertising cost described in the core framework, and $\rho$ is still the discount rate.
The other part
represents the gain coming from the Individual's information dynamic. It corresponds to a regular payment of $K$ at every unit period of time from the time of information $\tau^\beta$ (and thus the time of subscription) of the Individual.
\vspace{2mm}
We can now state the main result of this section.
\begin{Theorem} \label{theo-sus}
We have
\begin{eqnarray*}
\sup_{\beta\in\Pi_{OL}} V(\beta) &=&
\sup_{b\in\mathbb{R}_+}
V(\beta^b),
\end{eqnarray*}
with
\begin{eqnarray*}
V(\beta^b) &=& \frac{\eta^{{\bf I}}\frac{K}{1- e^{-\rho}}+\eta^{{\bf T}}\mathbb{E}\big[\big(\frac{K}{1-e^{-\rho}}-{\bf c}(b,B_1^{{\bf T}})\big) {\bf 1}_{b\geq B^{{\bf T}}_1}\big]}{\eta^{{\bf I}}+ \rho + \eta^{{\bf T}}\mathbb{P}[b\geq B^{{\bf T}}_1]},
\end{eqnarray*}
and any $b^\star$ $\in$
$\argmax_{b\in\mathbb{R}_+} V(\beta^b)$
yields an optimal constant bid, i.e. an optimal open-loop bid control taking the form of a constant bid.
\end{Theorem}
\noindent{\bf Interpretation.} Notice that the regular payment of $K$ at every period of duration $1$ from the time of information is, from the Agent's viewpoint, equivalent to a single payment of
$\frac{K}{1-e^{-\rho}}$ at the time of information. We are thus reduced to the previous case of the purchase-based gain.
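Indeed, summing the geometric series inside the expectation gives, for $\rho>0$,
\begin{eqnarray*}
\mathbb{E}\Big[\sum_{n\in\mathbb{N}} e^{-(\tau^\beta+n)\rho}K\Big] &=& \mathbb{E}\Big[e^{-\rho\tau^\beta} K \sum_{n=0}^{\infty} e^{-n\rho}\Big] \;=\; \mathbb{E}\big[e^{-\rho\tau^\beta}\big]\,\frac{K}{1-e^{-\rho}},
\end{eqnarray*}
i.e. the discounted value of a single payment of $\frac{K}{1-e^{-\rho}}$ made at the time of information $\tau^\beta$.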
\section{Social marketing models}\label{sect-soc}
We now model a quite different type of advertising, called {\it social marketing}. Social marketing is the activity of making advertising campaigns not to make profit but to alert people, in particular about unhealthy behaviours (anti-drug campaigns, road-safety campaigns, sexual-safety campaigns, etc).
The Agent, here, is either a philanthropic association or a governmental entity working for social welfare, and considers that each Individual not behaving healthily generates a cost for her.
As opposed to commercial advertising from previous section, informing an Individual here does not bring a reward to the Agent, but instead, cancels a cost.
\vspace{2mm}
For this application, our study is split into two sub-cases:
\begin{enumerate}
\item The case with a positive discount rate $\rho$, based on the same framework as previous models but with a cost function, and
\item The important case with no discount rate (i.e. $\rho$ $=$ $0$), where we shall be able to enrich the basic framework by introducing a population of $M$ individuals as well as a non-targeted advertising mechanism, therefore turning the model into a population control problem.
\end{enumerate}
In both cases the Agent's goal will be to {\it minimize} her cost function.
\subsection{Case with a discount rate} \label{secsocialdiscount}
We start with the case with no social interactions nor non-targeted advertising, but with an arbitrary discount rate $\rho$. Besides the processes $N^{\bf I}$ and $N^{\bf T}$, we consider a third Poisson process
$N^{\boldsymbol{D}}$ ($\boldsymbol{D}$ for ``Dangerous behaviour''), independent from the others, with normalized intensity $\eta^{\boldsymbol{D}}=1$, counting the times when the Individual behaves unsafely.
In this social marketing problem, the cost function of the Agent is defined by
\begin{align} \label{defVsocialdis}
V(\beta) &= \; \mathbb{E}\Big[\int_0^\infty e^{-\rho t}K(1-X^\beta_{t-})dN^{\boldsymbol{D}}_t\Big]+C(\beta), \quad \mbox{ for } \beta \in\Pi_{OL}.
\end{align}
The part $C(\beta)$ is the advertising cost, and the part $\mathbb{E}\Big[\int_0^\infty e^{-\rho t}K(1-X^\beta_{t-})dN^{\boldsymbol{D}}_t\Big]$ measures the (discounted) cost incurred over the period before the Individual is informed, assuming that the Individual generates a cost $K$ for the Agent every time he behaves unsafely.
\vspace{2mm}
We have the following result.
\begin{Theorem} \label{theo-socialdis}
We have
\begin{eqnarray*}
V^\star \; := \; \inf_{\beta \in\Pi_{OL}} V(\beta) &=&
\inf_{b\in\mathbb{R}_+} V(\beta^b),
\end{eqnarray*}
with
\begin{eqnarray*}
V(\beta^b) &=& \frac{K+\eta^{{\bf T}}\mathbb{E}\big[{\bf c}(b,B_1^{{\bf T}}) {\bf 1}_{b\geq B^{{\bf T}}_1}\big]}{\eta^{{\bf I}}+ \rho + \eta^{{\bf T}}\mathbb{P}[b\geq B^{{\bf T}}_1]}
\end{eqnarray*}
and any $b^\star$ $\in$
$\argmin_{b\in\mathbb{R}_+} V(\beta^b)$
yields an optimal constant bid, i.e. an optimal open-loop bid taking the form of a constant bid.
\end{Theorem}
\noindent{\bf Interpretation.} Here again, we interpret $\rho$ as the parameter of a random terminal time with exponential distribution. Notice that in the case of social marketing,
there is already a random terminal time: the time when the Individual connects to a website intrinsically containing the Information. Indeed, in such a case, the cost stops and the problem stops as well. Both terminal times are exponential random variables with respective parameters $\eta^{{\bf I}}$ and $\rho$. It is known that they can be combined into a single terminal time (the minimum of the two) with parameter $\eta^{{\bf I}}+\rho$. In other words, up to replacing the original intensity $\eta^{{\bf I}}$ of connection to a website containing the Information by $\eta^{{\bf I}}+\rho$, we are reduced to a problem with no discount rate ($\rho=0$). The fraction in Theorem \ref{theo-socialdis} can be split as follows:
\begin{align} \label{decsocial}
\frac{K+\eta^{{\bf T}}\mathbb{E}\big[{\bf c}(b,B_1^{{\bf T}}) {\bf 1}_{b\geq B^{{\bf T}}_1}\big]}{\eta^{{\bf I}}+ \rho + \eta^{{\bf T}}\mathbb{P}[b\geq B^{{\bf T}}_1]} &=\;
\frac{K}{\eta^{{\bf I}}+ \rho + \eta^{{\bf T}}\mathbb{P}[b\geq B^{{\bf T}}_1]}
+\frac{\eta^{{\bf T}}\mathbb{E}\big[{\bf c}(b,B_1^{{\bf T}}) {\bf 1}_{b\geq B^{{\bf T}}_1}\big]}{\eta^{{\bf I}}+ \rho + \eta^{{\bf T}}\mathbb{P}[b\geq B^{{\bf T}}_1]},
\end{align}
and has the following interpretation: $\eta^{{\bf I}}+ \rho + \eta^{{\bf T}}\mathbb{P}[b\geq B^{{\bf T}}_1]$ is the intensity of the time at which the Individual gets informed, and thus $\frac{1}{\eta^{{\bf I}}+ \rho + \eta^{{\bf T}}\mathbb{P}[b\geq B^{{\bf T}}_1]}$ is the expected time before information. During this period, a continuous cost $K$ is essentially perceived, which explains the first term in the r.h.s. of \eqref{decsocial}. The second term
is essentially the expected cost perceived at the termination time of the problem, given that at that time no reward, and only the ad cost, is paid.
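To make this concrete, the constant-bid value in Theorem \ref{theo-socialdis} can be minimized by a simple grid search. The sketch below assumes a second-price rule ${\bf c}(b,B)=B$ and maximal competing bids $B^{{\bf T}}_1\sim \mathrm{Exp}(1)$, for which $\mathbb{P}[b\geq B^{{\bf T}}_1]=1-e^{-b}$ and $\mathbb{E}[B^{{\bf T}}_1{\bf 1}_{b\geq B^{{\bf T}}_1}]=1-e^{-b}(1+b)$; all parameter values are illustrative.

```python
import numpy as np

def V_constant_bid(b, K=1.0, eta_I=0.5, eta_T=2.0, rho=0.1):
    # Cost of the constant bid b, assuming a second-price rule c(b,B) = B
    # and B^T_1 ~ Exp(1) (illustrative assumptions, as are the parameters).
    win_prob = 1.0 - np.exp(-b)               # P[b >= B^T_1]
    exp_cost = 1.0 - np.exp(-b) * (1.0 + b)   # E[B^T_1 1_{b >= B^T_1}]
    return (K + eta_T * exp_cost) / (eta_I + rho + eta_T * win_prob)

grid = np.linspace(0.0, 10.0, 10001)
values = V_constant_bid(grid)
b_star_min = grid[np.argmin(values)]          # smallest minimizer on the grid
V_star = values.min()
```

On this example, one can check numerically the bounds $b^\star_{min} \leq V^\star \leq \frac{K}{\eta^{{\bf I}}+\rho}$ stated below, as well as the monotonicity of the optimal cost in $\rho$.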
\vspace{1mm}
As in the commercial advertising case, we introduce the following special optimal minimal bidding policy.
\begin{Definition}[Smallest optimal constant bid policy]
We denote $b_{min}^\star$ the constant bidding policy such that
\begin{eqnarray*}
b_{min}^\star &=& \min \argmin_{b\in\mathbb{R}_+}V(\beta^b).
\end{eqnarray*}
$b_{min}^\star$ is called the smallest optimal constant bidding policy.
\end{Definition}
We have the following result about the sensitivity to parameters and upper bounds of the optimal value and of the smallest optimal bidding policy.
\begin{Proposition}\label{propsocial-sensitivity}
The optimal value $V^\star$ and the smallest optimal bid are decreasing in $\eta^{{\bf I}},\eta^{{\bf T}}$, and $\rho$, and we have
\begin{eqnarray*}
b_{min}^\star \; \leq \; V^\star \; \leq \; \frac{K}{\eta^{{\bf I}}+\rho}.
\end{eqnarray*}
\end{Proposition}
\vspace{1mm}
The interpretation of the above proposition is the following.
\vspace{1mm}
\noindent{\bf Properties for $V^\star$.} The higher $\eta^{{\bf I}}$ is, the more frequently the Individual connects to a website containing the Information, and thus the sooner he learns the Information by this channel and stops inducing a cost for the Agent, which naturally decreases the expected cost of the Agent. The justification of the sensitivity in $\eta^{{\bf T}}$ is similar to the corresponding interpretation for commercial advertising.
Finally, the larger $\rho$ is, the more impatient the Agent is, and thus the less value she gives to potential future costs, which explains why her optimal cost $V^\star$ is decreasing in $\rho$. The upper bound on $V^\star$ corresponds to the cost associated with the constant bidding policy consisting in bidding $0$ at each auction, and thus waiting for the Individual to inform himself on a website containing the Information.
\vspace{1mm}
\noindent{\bf Properties for the smallest optimal constant bid.} When an auction is opened, two scenarios can occur: 1) the Agent wins the auction and the cost stops, or 2) the Agent loses the auction and the problem of informing the Individual keeps going, with an optimal cost $V^\star$. In other words, if we put the auction price apart, an auction can be seen as incurring a cost $0$ when it is won, and $V^\star$ when it is lost. Notice that this is equivalent to considering that the cost $V^\star$ is incurred anyway, and that the auction is a standard static auction providing the compensating reward $V^\star$ if the auction is won, and $0$ if it is lost. Thus, the larger $V^\star$ is, the greater the bid the Agent should be willing to make to win $V^\star$. The smallest optimal bid is thus increasing in $V^\star$, which explains why its sensitivities to all the parameters are the same as those of $V^\star$. In such an auction, it is also clear that the Agent has no interest in paying more than $V^\star$ to win, which justifies the upper bound $V^\star$. The larger upper bound $\frac{K}{\eta^{{\bf I}}+\rho}$ follows from the upper bound on $V^\star$.
\subsection{Case with no discount rate, with social interactions and non-targeted advertising} \label{sec:nodiscount}
In this section, we consider a social marketing model with no discounting, but with many more features than the previous models.
We do not simply model websites intrinsically containing the Information and websites displaying targeted ads, but also model the alternative for users to connect to websites displaying non-targeted ads, and to socially interact with each other.
The argument for introducing these two extra features is twofold:
\begin{enumerate}
\item {\it For relevance in terms of applications.} Social marketing nowadays still widely happens via non-targeted advertising (TV campaigns, etc.). Although our model proposes to use targeted advertising, it thus seems important not to completely dismiss the current method, and instead to propose a way to combine both mechanisms.
\item {\it Mathematical reason.} The absence of a discount rate keeps the problem tractable even after adding these features.
\end{enumerate}
Let us reintroduce, for the sake of completeness and self-contained reading, each component of the framework together with these additional features.
\paragraph{The population.}
We now consider a population with $M$ individuals, with online behaviour characterised by:
\begin{itemize}
\item a family of $M$ i.i.d. quadruples $(N^{m,{\bf I}}, N^{m,{\bf T}}, N^{m,{\bf NT}}, N^{m,\boldsymbol{D}})$, for $m\in \llbracket 1,M\rrbracket$, where $N^{m,{\bf I}}$, $N^{m,{\bf T}}$, $N^{m,{\bf NT}}$, and $N^{m,\boldsymbol{D}}$ are four independent Poisson processes with respective intensities $\eta^{{\bf I}}$, $\eta^{{\bf T}}$, $\eta^{{\bf NT}}$, and $\eta^{\boldsymbol{D}}=1$.
Notice that we assume that the population is homogeneous, i.e. each individual shares the same intensities.
\item a family $(N^{m,i, {\bf S}})_{m,i\in \llbracket 1,M\rrbracket}$ of i.i.d. Poisson processes with intensity $\frac{\eta^{{\bf S}}}{M}$, independent from the other Poisson processes.
\end{itemize}
For all $m\in \llbracket 1,M\rrbracket$, the processes $N^{m,{\bf I}}$, $N^{m,{\bf T}}$, and $N^{m,\boldsymbol{D}}$ have the same interpretation as in the previous model: $N^{m,{\bf I}}$ counts the times when individual $m$ visits a website intrinsically containing the Information (in this case, it would be an association's website, a website specialized in health, etc.), $N^{m,{\bf T}}$ counts the times when individual $m$ connects to a website displaying targeted ads, and $N^{m,\boldsymbol{D}}$ counts the times when he behaves unsafely. The new features are: $N^{m,{\bf NT}}$, counting the times when individual $m$ visits a website displaying {\it non}-targeted ads, and, for $m,i\in \llbracket 1,M\rrbracket$, $N^{m,i,{\bf S}}$, counting the social interactions between individuals $m$ and $i$ in the population.
\paragraph{Targeted and non-targeted advertising auctions.}
\begin{itemize}
\item {\it Targeted advertising auctions.} For each individual $m\in \llbracket 1,M\rrbracket$, whenever he connects to a website displaying targeted ads, an auction is automatically opened where several agents bid to win the right to display their ads to the individual. As in previous models,
we model the maximal bid from other bidders (other than our Agent), by introducing
an i.i.d. family of nonnegative random variables $(B^{m,{\bf T}}_k)_{k\in\mathbb{N},m\in \llbracket 1,M\rrbracket}$, where $B^{m,{\bf T}}_k$ represents the maximal bid from other bidders at the $k$-th targeted advertising auction concerning individual $m$. We also denote by ${\bf c}^{{\bf T}}:\mathbb{R}^2\rightarrow \mathbb{R}$ the paying rule function of this targeted auction, which is again assumed to be either of first-price or second-price auction rule.
\item {\it Non-targeted advertising auctions.} In this model, we also consider non-targeted advertising. Each time an individual (regardless of his index) connects to a website displaying non-targeted ads, here again, agents compete to display their ads (with the only difference that they cannot make their bids depend upon the individual who connects to the website, hence the name ``non-targeted advertising'').
An auction is thus also opened at each such connection. As before, we model the maximum bid from other bidders (i.e. not the Agent) by introducing
an i.i.d. family of nonnegative random variables $(B^{{\bf NT}}_k)_{k\in\mathbb{N}}$, where $B^{{\bf NT}}_k$ represents the maximal bid of other bidders during the $k$-th non-targeted advertising auction (in the whole population). The paying rule of this non-targeted auction is defined by a function ${\bf c}^{{\bf NT}}:\mathbb{R}^2\rightarrow \mathbb{R}$, which is also assumed to be either of first-price or second-price auction rule.
We stress that the auction rules used for the targeted advertising and the non-targeted advertising auctions do not necessarily have to be the same.
\end{itemize}
\paragraph{Advertising bidding map strategies.}
Given that there are now $M$ individuals, targeted advertising, and non-targeted advertising, a general bidding map control takes a more complex form than in the previous model.
Informally, a bidding map control is a random process, depending only upon past events (i.e. non anticipative), and valued in $\mathbb{R}^{M+1}$. The idea is that this vector process
will store the $M$ bids that the Agent would like to make for each individual $m\in \llbracket 1,M\rrbracket$ if he were to connect to a website displaying targeted ads, and the remaining coordinate corresponds to the bid that the Agent would like to make if someone (anonymous) connects to a website using non-targeted advertising. Therefore, $M+1$ potential bids are required at any time, hence the term {\it bidding map}.
To formalise this concept, let us introduce the filtration $\mathbb{F}=({\cal F}_t)_{t\in \mathbb{R}_+}$ generated by the processes
\begin{eqnarray*}
\big((N^{m,{\bf I}}, N^{m,{\bf T}}, N^{m,{\bf NT}}, N^{m,\boldsymbol{D}}, B^{m,{\bf T}}_{N^{m,{\bf T}}})_{m\in \llbracket 1,M\rrbracket}, \; B^{{\bf NT}}_{N^{{\bf NT}}}, \; (N^{m,i,{\bf S}})_{m,i\in \llbracket 1,M\rrbracket}\big)
\end{eqnarray*}
where $N^{{\bf NT}}:=\sum_{m=1}^M N^{m,{\bf NT}}$ globally counts the connections to a website displaying non-targeted ads.
An open-loop bidding map control, denoted by $\beta$ $\in$ $\Pi_{OL}$, is a process $\beta=(\beta^m)_{m=0,...,M}$, valued in $\mathbb{R}^{M+1}_+$, predictable and progressively measurable w.r.t. the filtration $\mathbb{F}$.
For $m=1,\ldots,M$, $\beta^m_t$ is the bid that the Agent would make if a targeted advertising auction for individual $m$ happened at time $t$. The remaining coordinate $\beta^0_t$ is the bid that the Agent would make if a non-targeted advertising auction happened at time $t$. In other words, if an individual connects to a website displaying targeted ads (resp. non-targeted ads), the website will open the targeted advertising auction for this individual (resp. the non-targeted auction for this connection), look at the bidding map $\beta_t=(\beta^m_t)_{m=0,\ldots, M}$, and automatically use the bid $\beta^m_t$ for the connected individual $m$ $\in$
$\llbracket 1,M\rrbracket$ (resp. the bid $\beta^0_t$) inscribed in this bidding map as the bid of the Agent for this auction. This allows the Agent to specify a different bid for each individual, which encodes the idea of {\it targeted} advertising, or a bid that does not depend upon who is connecting, which encodes the idea of {\it non-targeted} advertising.
\paragraph{The information dynamic.}
Given an open-loop bidding map control $\beta$, we define the information dynamic process $X^{m,\beta}$ valued in $\{0,1\}$ of any individual $m$ $\in \llbracket 1,M\rrbracket$ of the population as follows:
$$
\begin{cases}
X^{m,\beta}_{0^-} &=\; 0, \\
dX^{m,\beta}_t & = \; (1-X^{m,\beta}_{t-})(dN^{m,{\bf I}}_t+{\bf 1}_{\beta^m_t\geq B^{m,{\bf T}}_{N_t^{m,{\bf T}}}}dN^{m,{\bf T}}_t \\
& \quad \quad \quad \quad \quad + \; {\bf 1}_{\beta^0_t\geq B^{{\bf NT}}_{N^{{\bf NT}}_t}}dN^{m,{\bf NT}}_t+\sum_{i=1}^M X^{i,\beta}_{t-}dN^{m,i,{\bf S}}_t), \quad t \geq 0.
\end{cases}
$$
The interpretation of this dynamic is similar to previous sections for the first two terms (but they are now related to a given individual $m\in \llbracket 1,M\rrbracket$), and with additional terms which are
essentially related to the new features of non-targeted advertising and social interactions.
Each individual $m$ starts uninformed ($X^{m,\beta}_{0^-}=0$). Once individual $m$ is informed ($X^{m,\beta}_t=1$), he stays informed ($(1-X^{m,\beta}_{t-})$ part). As long as he is not informed, he can receive the Information either
by connecting to a website intrinsically containing the Information ($dN^{m,{\bf I}}_t$ part), or by connecting to a website displaying targeted ads ($dN^{m,{\bf T}}_t$ part) when the Agent's ad is displayed to him, i.e. iff the Agent wins the targeted advertising auction (${\bf 1}_{\beta^m_t\geq B^{m,{\bf T}}_{N_t^{m,{\bf T}}}}$ part).
Furthermore, individual $m$ has the possibility to
\begin{itemize}
\item browse through websites displaying non-targeted ads ($dN^{m,{\bf NT}}_t$ part), in which case he will get informed if and only if the Agent's ad is displayed to him, i.e. iff the Agent wins the non-targeted advertising auction (${\bf 1}_{\beta^0_t\geq B^{{\bf NT}}_{N^{{\bf NT}}_t}}$ part),
\item and socially interact with individual $i$ ($dN^{m,i,{\bf S}}_t$ part). In this case, he will get informed whenever individual $i$ is informed ($X^{i,\beta}_t$ part).
\end{itemize}
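As an illustration, the dynamic above can be simulated by superposing all the Poisson clocks and thinning by event type. The sketch below assumes constant bids $b^{{\bf T}}$ and $b^{{\bf NT}}$, maximal competing bids distributed as $\mathrm{Exp}(1)$, pairwise social intensity $\eta^{{\bf S}}/M$, and illustrative parameter values; it only tracks the information states $X^{m,\beta}$ and ignores the advertising costs.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_information(M=20, T=5.0, eta_I=0.2, eta_T=1.0, eta_NT=1.0,
                         eta_S=0.5, b_T=0.5, b_NT=0.3):
    informed = np.zeros(M, dtype=bool)
    # Superposition of all clocks: M individual (I, T, NT) clocks and
    # M^2 social clocks of pairwise intensity eta_S / M.
    total_rate = M * (eta_I + eta_T + eta_NT + eta_S)
    t = 0.0
    while not informed.all():
        t += rng.exponential(1.0 / total_rate)
        if t >= T:
            break
        u = rng.random() * total_rate
        if u < M * eta_I:                        # visit to an informative website
            informed[rng.integers(M)] = True
        elif u < M * (eta_I + eta_T):            # targeted advertising auction
            m = rng.integers(M)
            if not informed[m] and b_T >= rng.exponential():
                informed[m] = True               # Agent wins, ad displayed
        elif u < M * (eta_I + eta_T + eta_NT):   # non-targeted advertising auction
            m = rng.integers(M)
            if b_NT >= rng.exponential():
                informed[m] = True
        else:                                    # social interaction (m, i)
            m, i = rng.integers(M), rng.integers(M)
            if informed[i]:
                informed[m] = True
        # (advertising and unsafe-behaviour costs are omitted in this sketch)
    return informed.mean()                       # final proportion of informed
```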
\paragraph{Proportion-based bidding policy.} A proportion-based bidding policy is defined by a pair of functions $\mathfrak{b}=(\mathfrak{b}^{{\bf T}},\mathfrak{b}^{{\bf NT}})$ defined both from
$\mathbb{P}_M$ $:=$ $\{ \frac{k}{M}: k=0,\ldots,M-1\}$
into $\mathbb{R}_+$.
To any such policy we associate the open-loop bidding map control $\beta^{\mathfrak{b}}$ satisfying the feedback form constraint
\begin{eqnarray*}
\beta^{m,\mathfrak{b}}_t&=&\mathfrak{b}^{{\bf T}}\Big(\frac{1}{M}\sum_{i=1}^M X^{i,\beta^{\mathfrak{b}}}_{t-}\Big)(1-X^{m,\beta^{\mathfrak{b}}}_{t-}), \quad m\in \llbracket 1,M\rrbracket,\\
\beta^{0,\mathfrak{b}}_t&=&\mathfrak{b}^{{\bf NT}}\Big(\frac{1}{M}\sum_{i=1}^M X^{i,\beta^{\mathfrak{b}}}_{t-}\Big){\bf 1}_{\frac{1}{M}\sum_{i=1}^M X^{i,\beta^{\mathfrak{b}}}_{t-}<1}, \quad \quad t \geq 0.
\end{eqnarray*}
In other words, a bidding map control associated to a proportion-based bidding policy formalises a strategy where in the targeted auction, the Agent makes a bid
for an individual $m$ that depends only on the proportion of informed people $\frac{1}{M}\sum_{i=1}^M X^{i,\beta^{\mathfrak{b}}}_{t-}$ at any time $t$, and whether the individual $m$ is informed or not, and where in the non-targeted auction, she makes a bid depending only on the proportion of informed people.
\paragraph{Cost function.}
Given a bidding map control $\beta$, the expected cost incurred to the Agent is defined by
\begin{align}
V(\beta) &= \; \mathbb{E}\Big[\sum_{m=1}^M\Big(\int_0^\infty K(1-X^{m,\beta}_{t-})dN^{m,\boldsymbol{D}}_t+\int_0^\infty {\bf 1}_{\beta^m_t\geq B^{m,{\bf T}}_{N_t^{m,{\bf T}}}} {\bf c}^{{\bf T}}(\beta^m_t,B^{m,{\bf T}}_{N_t^{m,{\bf T}}}) d N_t^{m,{\bf T}} \\
& \hspace{3cm} + \; \int_0^\infty {\bf 1}_{\beta^0_t\geq B^{{\bf NT}}_{N^{{\bf NT}}_t}} {\bf c}^{{\bf NT}} (\beta^0_t,B^{{\bf NT}}_{N^{{\bf NT}}_t}) dN^{m,{\bf NT}}_t\Big)\Big]. \label{defVsocialno}
\end{align}
This cost function is similar to the previous model in Section \ref{secsocialdiscount}, except that there is a cost for each individual $m\in \llbracket 1,M\rrbracket$ in the population ($\sum_{m=1}^M$ part), and that there is an additional term
$\int_0^\infty {\bf 1}_{\beta^0_t\geq B^{{\bf NT}}_{N^{{\bf NT}}_t}} {\bf c}^{{\bf NT}}(\beta^0_t,B^{{\bf NT}}_{N^{{\bf NT}}_t}) dN^{m,{\bf NT}}_t$ that measures the non-targeted advertising cost of the strategy.
\vspace{3mm}
We now state the main result for this model.
\begin{Theorem}\label{theo-social-no-discount}
The minimal cost is given by
\begin{eqnarray*}
V^\star \; := \; \inf_{\beta\in \Pi_{OL}}V(\beta) &=&
\sum_{p\in\mathbb{P}_M}
v(p),
\end{eqnarray*}
where
$v(p)=\inf_{b^{{\bf T}},b^{{\bf NT}}\in\mathbb{R}_+} v^{b^{{\bf T}},b^{{\bf NT}}}(p)$,
with
\begin{eqnarray*}
v^{b^{{\bf T}},b^{{\bf NT}}}(p)&=& \frac{K+ \eta^{{\bf T}}\mathbb{E}\big[{\bf c}^{{\bf T}}(b^{{\bf T}},B^{1,{\bf T}}_1){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_1}\big]+\eta^{{\bf NT}}\mathbb{E}\Big[\frac{{\bf c}^{{\bf NT}}(b^{{\bf NT}},B^{{\bf NT}}_1)}{1-p}{\bf 1}_{b^{{\bf NT}}\geq B^{{\bf NT}}_1}\Big] }{\eta^{{\bf I}}+\eta^{{\bf T}}\mathbb{P}\big[b^{{\bf T}}\geq B^{1,{\bf T}}_1\big]
+\eta^{{\bf NT}}\mathbb{P}\big[b^{{\bf NT}}\geq B^{{\bf NT}}_1\big]+p\eta^{{\bf S}}}.
\end{eqnarray*}
For all $p\in\mathbb{P}_M$,
the set
$\argmin_{b^{{\bf T}},b^{{\bf NT}}\in\mathbb{R}_+} v^{b^{{\bf T}},b^{{\bf NT}}}(p)$
is not empty, and any
proportion-based bidding policy $\mathfrak{b}^\star$ $=$ $(\mathfrak{b}^{\star,{\bf T}},\mathfrak{b}^{\star,{\bf NT}})$ such that
\begin{align} \label{bstarinf}
(\mathfrak{b}^{\star,{\bf T}}(p),\mathfrak{b}^{\star,{\bf NT}}(p)) &\in \;
\argmin_{b^{{\bf T}},b^{{\bf NT}}\in\mathbb{R}_+}
v^{b^{{\bf T}},b^{{\bf NT}}}(p), \quad \forall
p \in \mathbb{P}_M,
\end{align}
yields an optimal bidding map control $\beta^{\mathfrak{b}^\star}$. Moreover, in the case with fully second-price auctions, i.e. ${\bf c}^{{\bf T}}(b,B)={\bf c}^{{\bf NT}}(b,B)=B$, we have
$v(p)=\inf_{b\in\mathbb{R}_+} v^{b,(1-p)b}(p)$.
The set
$\argmin_{b\in\mathbb{R}_+} v^{b,(1-p)b}(p)$
is not empty, and any
proportion-based bidding policy defined by
$\mathfrak{b}^{\star,{\bf T}}(p)$ $=$ $\mrb^\star(p)$, $\mathfrak{b}^{\star,{\bf NT}}(p)$ $=$ $(1-p)\mrb^{\star}(p)$ with
\begin{eqnarray*}
\mrb^\star(p) &\in&
\argmin_{b\in\mathbb{R}_+}
v^{b,(1-p)b}(p), \quad \forall p\in \mathbb{P}_M,
\end{eqnarray*}
yields an optimal bidding map control $\beta^{\mathfrak{b}^\star}$.
\end{Theorem}
\noindent{\bf Interpretation.} Let us provide some interpretations of the formulas in Theorem \ref{theo-social-no-discount}.
\begin{itemize}
\item {\it The sum part ``$\sum_{p\in\mathbb{P}_M}$''.}
We can split the problem into several successive problems, each consisting in optimally going from a proportion $\frac{k}{M}$ of informed people to a proportion $\frac{k+1}{M}$, for $k\in\{0,\ldots,M-1\}$. The fact that there is no discount rate implies that the time when each problem starts does not matter, and therefore these successive problems can be optimised independently, i.e. one by one.
\item {\it The term in the sum.} The justification of the form of the terms in the sum is similar to the justification given for the previous model: the fraction can be split into two fractions, one corresponding to the expected cost perceived during this period, and the other one corresponding to the expected cost perceived at the termination time of this period.
\item {\it The term $\frac{{\bf c}^{{\bf NT}}(b,B_1^{{\bf NT}})}{1-p}$.} Notice that in the formula, $B^{1,{\bf T}}_1$ and $\frac{B^{{\bf NT}}_1}{1-p}$ play symmetric roles. It is {\it as if} the non-targeted advertising mechanism with price $B^{{\bf NT}}_1$ were equivalent to a targeted advertising mechanism with price $\frac{B^{{\bf NT}}_1}{1-p}$. In other words, making the advertising mechanism non-targeted is essentially equivalent to multiplying the ad cost by $\frac{1}{1-p}$. This is natural since, when the ad mechanism is not targeted, there is a probability $p$ that it displays the ad to an already informed individual. This means that only a proportion $1-p$ of the paid ads will be useful, and thus, for each useful ad, an average number of $\frac{1}{1-p}$ ads has to be paid (including the useful one). In other words, we have to pay the price of $\frac{1}{1-p}$ ads to display an ad to an uninformed individual.
\item {\it The term $p\eta^{{\bf S}}$.} Notice that in the formula, $p\eta^{{\bf S}}$ plays the same role as $\eta^{{\bf I}}$. This is consistent with the intuition that socially interacting with an informed individual has the same effect as visiting a website containing the Information: it informs the individual and does not cost anything to the Agent. The more individuals are informed, the more likely such an interaction is to occur. More precisely, each informed individual ``plays the role'' of a website containing the Information, which any given individual ``visits'' with intensity $\frac{1}{M}\eta^{{\bf S}}$; thus, with $k$ informed individuals, this yields an intensity $\frac{k}{M}\eta^{{\bf S}}=p\eta^{{\bf S}}$.
\end{itemize}
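To illustrate Theorem \ref{theo-social-no-discount} numerically, the sketch below evaluates $v(p)$ by grid search in the fully second-price case, using the reduction $v(p)=\inf_{b\in\mathbb{R}_+} v^{b,(1-p)b}(p)$, and assuming (for illustration only) that $B^{1,{\bf T}}_1$ and $B^{{\bf NT}}_1$ are $\mathrm{Exp}(1)$-distributed; all parameter values are illustrative.

```python
import numpy as np

def exp_win(c):
    return 1.0 - np.exp(-c)                  # P[B <= c] for B ~ Exp(1)

def exp_cost(c):
    return 1.0 - np.exp(-c) * (1.0 + c)      # E[B 1_{B <= c}] for B ~ Exp(1)

def v_of_p(p, K=1.0, eta_I=0.5, eta_T=1.0, eta_NT=1.0, eta_S=0.3):
    b = np.linspace(0.0, 10.0, 5001)         # grid of targeted bids
    b_NT = (1.0 - p) * b                     # second-price reduction b^NT = (1-p) b
    num = K + eta_T * exp_cost(b) + eta_NT * exp_cost(b_NT) / (1.0 - p)
    den = eta_I + eta_T * exp_win(b) + eta_NT * exp_win(b_NT) + p * eta_S
    vals = num / den
    i = int(np.argmin(vals))
    return vals[i], b[i]                     # (v(p), smallest optimal bid on grid)

M = 8
V_star = sum(v_of_p(k / M)[0] for k in range(M))   # minimal total cost
```

On this example, one can verify the bound $v(p)\leq \frac{K}{\eta^{{\bf I}}+p\eta^{{\bf S}}}$ as well as $\mathfrak{b}^{\star,{\bf T}}_{min}(p)\leq v(p)$ for each $p\in\mathbb{P}_M$.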
\vspace{1mm}
We introduce the following special optimal proportion-based bidding policy.
\begin{Definition}[Smallest optimal proportion-based bidding policy]\label{def-smallest-proportion-based-bid}
There exists a unique proportion-based bidding policy $\mathfrak{b}_{min}^\star$ $=$ $(\mathfrak{b}_{min}^{\star,{\bf T}},\mathfrak{b}_{min}^{\star,{\bf NT}})$ such that any proportion-based bidding policy $\mathfrak{b}^{\star}$ $=$ $(\mathfrak{b}^{\star,{\bf T}},\mathfrak{b}^{\star,{\bf NT}})$
as in Theorem \ref{theo-social-no-discount} satisfies $\mathfrak{b}^{\star,{\bf T}}_{min}(p)$ $\leq$ $\mathfrak{b}^{\star,{\bf T}}(p)$ and $\mathfrak{b}^{\star,{\bf NT}}_{min}(p)$ $\leq$ $\mathfrak{b}^{\star,{\bf NT}}(p)$ for all $p\in\mathbb{P}_M$.
$\mathfrak{b}_{min}^{\star}$ is called the smallest optimal proportion-based bidding policy.
\end{Definition}
\begin{Remark}
The above result comes from the identity
\begin{eqnarray*}
& & \argmin_{b^{{\bf T}},b^{{\bf NT}}\in\mathbb{R}_+} v^{b^{{\bf T}},b^{{\bf NT}}}(p) \\
&=& \argmax_{b^{{\bf T}}\in \mathbb{R}_+}\mathbb{E}\Big[(v(p)-B^{1,{\bf T}}_{1}){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}\Big] \times \argmax_{b^{{\bf NT}}\in \mathbb{R}_+}\mathbb{E}\Big[\big((1-p)v(p)-B^{{\bf NT}}_{1}\big){\bf 1}_{b^{{\bf NT}}\geq B^{{\bf NT}}_{1}}\Big]
\end{eqnarray*}
which follows from a Bellman-type verification result, essentially allowing one to see, in this dynamic problem, the long-term optimal bid for a targeted advertising auction (resp. for a non-targeted advertising auction) as a greedy optimal bid for a static auction with immediate reward $v(p)$ (resp. $(1-p)v(p)$) when the auction is won. Consequently, we have, for all $p\in\mathbb{P}_M$,
\begin{eqnarray*}
\mathfrak{b}^{\star,{\bf T}}_{min}(p)&=&\min \argmax_{b^{{\bf T}}\in \mathbb{R}_+}\mathbb{E}\Big[(v(p)-B^{1,{\bf T}}_{1}){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}\Big],\\
\mathfrak{b}^{\star,{\bf NT}}_{min}(p)&=&\min \argmax_{b^{{\bf NT}}\in \mathbb{R}_+}\mathbb{E}\Big[\big((1-p)v(p)-B^{{\bf NT}}_{1}\big){\bf 1}_{b^{{\bf NT}}\geq B^{{\bf NT}}_{1}}\Big].
\end{eqnarray*}
It is also possible to see, from the proof of Theorem \ref{theo-social-no-discount}, that the open-loop bidding map control $\beta^{\mathfrak{b}_{min}^\star}$ is the smallest optimal open-loop bidding map control, i.e. for every optimal open-loop bidding map control
$\beta$ $=$ $(\beta^m)_{m\in\llbracket 0,M\rrbracket}$, we have $\beta^{m,\mathfrak{b}_{min}^\star}_t$ $\leq$ $\beta^m_t$ for all $m\in \llbracket 0, M\rrbracket$, $(\omega,t)$-a.e., i.e. for almost every $(\omega,t)\in \Omega\times \mathbb{R}_+$ w.r.t. the measure $\mathbb{P}\otimes \mathrm{Leb}(\mathbb{R}_+)$.
\end{Remark}
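The greedy characterisation above can be checked numerically. The sketch below computes the smallest maximizer of $b\mapsto\mathbb{E}[(v-B){\bf 1}_{b\geq B}]$ on a grid, assuming $B\sim\mathrm{Exp}(1)$ (an illustrative choice); for such a continuous distribution, the smallest maximizer is the truthful second-price bid $b=v$.

```python
import numpy as np

def greedy_bid(v, n=20001):
    # Smallest maximizer of b -> E[(v - B) 1_{b >= B}] over a grid,
    # assuming B ~ Exp(1), for which
    # E[(v - B) 1_{B <= b}] = v (1 - e^{-b}) - (1 - e^{-b}(1 + b)).
    b = np.linspace(0.0, 2.0 * v, n)
    g = v * (1.0 - np.exp(-b)) - (1.0 - np.exp(-b) * (1.0 + b))
    return b[np.argmax(g)]
```

Since the derivative of the objective is $(v-b)e^{-b}$, the maximizer is $b=v$, which the grid search recovers up to the grid step.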
We have the following properties about the sensitivity to parameters and upper bounds of the optimal value and of the smallest optimal bidding policy.
\begin{Proposition}\label{propsocialno-sensitivity}
The optimal value $V^\star$ and the smallest optimal proportion-based bidding policies $\mathfrak{b}^{\star,{\bf T}}_{min}(p)$ and $\mathfrak{b}^{\star,{\bf NT}}_{min}(p)$ for targeted and for non-targeted advertising are decreasing in $\eta^{{\bf I}},\eta^{{\bf T}}, \eta^{{\bf NT}}$, and $\eta^{{\bf S}}$. Furthermore, $\mathfrak{b}^{\star,{\bf NT}}_{min}(p)$ is decreasing in $p$, while
$\mathfrak{b}^{\star,{\bf T}}_{min}(p)$ is
\begin{itemize}
\item decreasing in $p$ when there is no non-targeted advertising ($\eta^{{\bf NT}}=0$),
\item increasing in $p$ when there are no social interactions ($\eta^{{\bf S}}=0$).
\end{itemize}
Finally, we have for all $p$ $\in$ $[0,1]$,
\begin{eqnarray*}
\mathfrak{b}^{\star,{\bf T}}_{min}(p), \; \mathfrak{b}^{\star,{\bf NT}}_{min}(p) \; \leq \; v(p) \; \leq \; \frac{K}{\eta^{{\bf I}}+p\eta^{{\bf S}}}.
\end{eqnarray*}
\end{Proposition}
\vspace{1mm}
The interpretation of the above proposition regarding the monotonicity of the optimal value and of the smallest proportion-based bidding policies with respect to the intensity parameters is similar to that of the previous section. Let us discuss the monotonicity properties with respect to the proportion $p$ of informed individuals. Recall that the Agent has at her disposal four channels of information: (1) her website, which informs the individual without cost, (2) the social interactions, which also inform the individual without cost and with probability $p$, (3) the targeted ad, which surely informs the individual at cost ${\bf c}^{{\bf T}}$, and (4) the non-targeted ad, which informs the individual with probability $1-p$ at cost ${\bf c}^{{\bf NT}}$. Choosing the most efficient channel for diffusing the information is then a trade-off. In the case where the fourth channel is not accessible ($\eta^{{\bf NT}}$ $=$ $0$), an increase of $p$ only affects positively the efficiency of the social interactions (second channel), and therefore the Agent bids less for the targeted ad. On the other hand, in the case where the second channel is absent ($\eta^{{\bf S}}$ $=$ $0$), an increase of $p$ only affects the fourth channel, which loses efficiency since many individuals are already informed; consequently, it becomes more interesting to bid on the targeted ad. In the general case, when $p$ increases, the second channel gains in efficiency but the fourth channel becomes more costly.
\begin{Remark} The monotonicity properties of the smallest optimal proportion-based bidding policies with respect to $p$ have useful implications regarding their computational cost. Indeed, the practical implementation of these optimal bids requires computing (via the search of the infimum in \eqref{bstarinf})
$(\mathfrak{b}_{min}^{\star,{\bf T}}(p),\mathfrak{b}_{min}^{\star,{\bf NT}}(p))$ for any $p$ $\in$ $\mathbb{P}_M$. This is a priori very expensive for large $M$. However, by taking advantage of the monotonicity in $p$ of $(\mathfrak{b}_{min}^{\star,{\bf T}}(p),\mathfrak{b}_{min}^{\star,{\bf NT}}(p))$, one can considerably reduce the computational complexity. For instance, if one starts by computing optimal bids for $p= \frac{1}{2}$, the computation of $\mathfrak{b}^{\star,{\bf NT}}_{min}(\frac{1}{2})$ allows to limit the search for $\mathfrak{b}^{\star,{\bf NT}}_{min}(p)$ to $[0,\mathfrak{b}^{\star,{\bf NT}}_{min}(\frac{1}{2})]$ for $p>\frac{1}{2}$, and to
$[\mathfrak{b}^{\star,{\bf NT}}_{min}(\frac{1}{2}), \frac{K}{\eta^{{\bf I}}+p\eta^{{\bf S}}}]$ for $p<\frac{1}{2}$. In particular, one can search $\mathfrak{b}^{\star,{\bf NT}}_{min}(\frac{3}{4})$ in $[0,\mathfrak{b}^{\star,{\bf NT}}_{min}(\frac{1}{2})]$ and $\mathfrak{b}^{\star,{\bf NT}}_{min}(\frac{1}{4})$ in $[\mathfrak{b}^{\star,{\bf NT}}_{min}(\frac{1}{2}), \frac{K}{\eta^{{\bf I}}+\frac{1}{4}\eta^{{\bf S}}}]$.
The computation of $(\mathfrak{b}^{\star,{\bf NT}}_{min}(p))_{p\in\mathbb{P}_M}$ can thus clearly be made by dichotomy. For example, let us assume that there is only non-targeted advertising, and, to simplify, that $M=2^L$ for some $L\in\mathbb{N}$. Then, only $L=\log_2(M)$ dichotomy levels have to be performed, and at the $\ell$-th level, there are $2^\ell$ minimizers to find in $2^\ell$ consecutive intervals with total (i.e. cumulative) length upper bounded by $\frac{K}{\eta^{{\bf I}}}$. Assuming that the computational time of the search for a minimizer is proportional to the length of the interval on which it is searched, the computational complexity of each dichotomy level is thus ${\cal O}(\frac{K}{\eta^{{\bf I}}})$, and therefore the total computational complexity of the whole minimal bidding policy is ${\cal O}(\frac{K}{\eta^{{\bf I}}}\log_2(M))$, i.e. only logarithmic in the size $M$ of the population, which suggests that this algorithm should be tractable even for large populations.
\end{Remark}
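The dichotomy described above can be sketched as follows, here in the non-targeted-only case ($\eta^{{\bf T}}=0$) with $\mathrm{Exp}(1)$ competing bids and illustrative parameters; the recursion exploits the fact that the smallest optimal bid is decreasing in $p$, so the bid found for the middle proportion bounds the search intervals of both halves.

```python
import numpy as np

def cost(p, b, K=1.0, eta_I=0.5, eta_NT=1.0, eta_S=0.3):
    # v^{b}(p) with non-targeted advertising only, B^NT ~ Exp(1) (assumption)
    num = K + eta_NT * (1.0 - np.exp(-b) * (1.0 + b)) / (1.0 - p)
    den = eta_I + eta_NT * (1.0 - np.exp(-b)) + p * eta_S
    return num / den

def smallest_grid_min(f, lo, hi, n=400):
    xs = np.linspace(lo, hi, n)
    return xs[np.argmin(f(xs))]          # smallest minimizer on the grid

def bids_by_dichotomy(ps, b_lo, b_hi, out):
    # Optimal bids are decreasing in p: the bid for the middle proportion
    # bounds the search from below for smaller p and from above for larger p.
    if len(ps) == 0:
        return
    mid = len(ps) // 2
    b = smallest_grid_min(lambda x: cost(ps[mid], x), b_lo, b_hi)
    out[ps[mid]] = b
    bids_by_dichotomy(ps[:mid], b, b_hi, out)       # smaller p: larger bids
    bids_by_dichotomy(ps[mid + 1:], b_lo, b, out)   # larger p: smaller bids

M = 16
ps = [k / M for k in range(M)]
bids = {}
bids_by_dichotomy(ps, 0.0, 1.0 / 0.5, bids)         # upper bound K / eta_I
```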
\begin{Remark}[Mean-field approximation] \label{rem-meanfield}
As in any population model with enough symmetry, it is expected that when $M$ gets large, we obtain a {\it mean-field limit}. Let us formally check how it is derived.
Notice that the mean over the population of the Agent's optimal value is equal to
\begin{eqnarray*}
\frac{1}{M} V^\star &=& \frac{1}{M}\sum_{p\in \mathbb{P}_M} v(p),
\end{eqnarray*}
which thus takes the form of a Riemann sum; hence, when $M$ $\rightarrow$ $\infty$,
\begin{eqnarray*}
\frac{1}{M} V^\star &\simeq & \int_0^1 v(p)dp,
\end{eqnarray*}
where $v$ is extended to $[0,1)$ with the same expression as in Theorem \ref{theo-social-no-discount}. Such a result can be useful for two reasons:
\begin{enumerate}
\item To obtain an analytical approximation of the optimal value in some cases where the integral can be explicitly computed,
\item and to provide a way to numerically approximate the optimal value, by discretising the integral with a suitable discretisation step. This can be useful for very large populations, where one might want to speed up the computation.
\end{enumerate}
It is also possible to formally derive a differential optimal control problem on the proportion of informed users $(p_t)_{t\in\mathbb{R}_+}$ whose optimal value and optimal control are the limits of the corresponding objects in our model when $M\rightarrow \infty$. The controlled dynamic is defined by
\begin{eqnarray*}
\frac{dp^\beta_t}{dt}= (1-p^\beta_t)\big(\eta^{{\bf I}}+ \eta^{{\bf T}}\mathbb{P}[\beta^{{\bf T}}_t\geq B^{{\bf T}}_1] + \eta^{{\bf NT}}\mathbb{P}[\beta^{{\bf NT}}_t\geq B^{{\bf NT}}_1] +\eta^{{\bf S}} p^\beta_t\big),
\end{eqnarray*}
with deterministic control $\beta=(\beta^{{\bf T}},\beta^{{\bf NT}})$, and with cost functional
\begin{eqnarray*}
V(\beta) &=& \int_0^{\infty}\Big\{ (1-p^\beta_t)\big(K+\eta^{{\bf T}}\mathbb{E}[{\bf c}^{{\bf T}}(\beta^{{\bf T}}_t,B^{{\bf T}}_1){\bf 1}_{\beta^{{\bf T}}_t\geq B^{{\bf T}}_1}]\big) \\
& & \hspace{2cm} + \; \eta^{{\bf NT}}\mathbb{E}\big[{\bf c}^{{\bf NT}}(\beta^{{\bf NT}}_t,B^{{\bf NT}}_1){\bf 1}_{\beta^{{\bf NT}}_t\geq B^{{\bf NT}}_1}\big] \Big\}dt.
\end{eqnarray*}
\end{Remark}
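The Riemann-sum approximation above can be illustrated numerically; the sketch below (targeted advertising only, $\mathrm{Exp}(1)$ competing bids, illustrative parameters) compares the per-individual value $\frac{1}{M}V^\star=\frac{1}{M}\sum_{p\in\mathbb{P}_M}v(p)$ with a fine discretisation of $\int_0^1 v(p)dp$.

```python
import numpy as np

def v_mf(p, K=1.0, eta_I=0.5, eta_T=1.0, eta_S=0.3):
    # v(p) with targeted advertising only, B^T ~ Exp(1) (assumption)
    b = np.linspace(0.0, 10.0, 2001)
    num = K + eta_T * (1.0 - np.exp(-b) * (1.0 + b))
    den = eta_I + eta_T * (1.0 - np.exp(-b)) + p * eta_S
    return (num / den).min()

M = 200
per_individual = np.mean([v_mf(k / M) for k in range(M)])          # (1/M) V*
integral = np.mean([v_mf((k + 0.5) / 1000) for k in range(1000)])  # ~ int_0^1 v
```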
\section{Examples with explicit computations} \label{sec:example}
Notice that all the models presented in this work admit a solution (optimal value and optimal policy) that can be expressed in the form
\begin{eqnarray*}
\inf_{b,{\bf b}\in\mathbb{R}_+}/\sup_{b,{\bf b}\in\mathbb{R}_+}/\argmin_{b,{\bf b}\in\mathbb{R}_+}/\argmax_{b,{\bf b}\in\mathbb{R}_+}\frac{a_1+a_2\mathbb{E}[{\bf c}(b,B^{{\bf T}}_1){\bf 1}_{b\geq B^{{\bf T}}_1}]+a_3\mathbb{E}[{\bf c}({\bf b},B^{{\bf NT}}_1){\bf 1}_{{\bf b}\geq B^{{\bf NT}}_1}]}{a_4+a_5\mathbb{P}[b\geq B^{{\bf T}}_1] + a_6\mathbb{P}[{\bf b}\geq B^{{\bf NT}}_1]}
\end{eqnarray*}
with well chosen parameters $(a_i)_{i\leq 6}$. In this section, we discuss two types of distributions for $B^{{\bf T}}_1$ and $B^{{\bf NT}}_1$ that will lead to fully explicit formulas for the optimal bidding policy.
\subsection{Constant maximal bid from other bidders}
We consider the case where the maximal bids from other bidders, i.e. $(B^{{\bf T}}_k)_{k\in\mathbb{N}}$ for the targeted advertising auctions, and $(B^{{\bf NT}}_k)_{k\in\mathbb{N}}$ for the non-targeted advertising auctions, are constant, i.e. $B^{{\bf T}}_k=B^{{\bf T}}\in\mathbb{R}_+$ and $B^{{\bf NT}}_k=B^{{\bf NT}}\in\mathbb{R}_+$. Under this assumption, the first-price and second-price auction cases essentially become equivalent, and we focus on the second-price type of auction, i.e. the auction payment rule ${\bf c}(b,B)=B$. Let us study two cases:
\begin{enumerate}
\item The commercial advertising problem with purchase-based gain function, and
\item The social marketing problem with no discount factor and with social interactions and non-targeted advertising.
\end{enumerate}
\subsubsection{Commercial advertising with purchase-based gain function}
In this case, we have
\begin{eqnarray*}
V(\beta^b)&=& \frac{\eta^{{\bf I}}K+\eta^{{\bf T}}(K-B^{{\bf T}}) {\bf 1}_{b\geq B^{{\bf T}}}}{\eta^{{\bf I}}+ \rho + \eta^{{\bf T}}{\bf 1}_{b\geq B^{{\bf T}}}},
\end{eqnarray*}
and any $b^\star$ $\in$ $\argmax_{b\in\mathbb{R}_+}$ $V(\beta^b)$ yields an optimal constant bid. Notice that $V(\beta^b)$ only takes two possible values, one for $b<B^{{\bf T}}$ and one for $b\geq B^{{\bf T}}$. The optimisation thus reduces to choosing either $b<B^{{\bf T}}$ (for instance $b=0$) or $b\geq B^{{\bf T}}$ (for instance $b=B^{{\bf T}}$).
Hence, an optimal bid is $b^\star\geq B^{{\bf T}}$ iff
\begin{eqnarray*}
\frac{\eta^{{\bf I}}K+\eta^{{\bf T}}(K-B^{{\bf T}}) }{\eta^{{\bf I}}+ \rho + \eta^{{\bf T}}}> \frac{\eta^{{\bf I}}K}{\eta^{{\bf I}}+ \rho },
\end{eqnarray*}
which can be rewritten equivalently after some straightforward calculation as
\begin{eqnarray*}
B^{{\bf T}} \; < \; \frac{\rho}{\eta^{{\bf I}}+ \rho}K.
\end{eqnarray*}
We clearly see what we had already established in the general case: the optimal bids are ``decreasing'' in $\eta^{{\bf I}}$ and ``increasing'' in $\rho$. Namely, the smallest optimal bid is $B^{{\bf T}}{\bf 1}_{B^{{\bf T}}\leq \frac{\rho}{\eta^{{\bf I}}+ \rho}K}$, which is clearly a decreasing (resp. increasing) function of $\eta^{{\bf I}}$ (resp. of $\rho$).
There is another interesting optimal bid, namely $\frac{\rho}{\eta^{{\bf I}}+ \rho}K$. Indeed, this bid is the only one to be optimal {\it regardless of} $B^{{\bf T}}$. In other words, by assuming (or knowing) that the other bidders' maximal bid is constant,
we obtain a dominant bidding strategy $\frac{\rho}{\eta^{{\bf I}}+ \rho}K$, which is optimal whatever the constant value of $B^{{\bf T}}$ (and hence robust to it).
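This dominant-strategy property is easy to check numerically. The sketch below (with illustrative parameter values only) evaluates the gain $V(\beta^b)$ from the displayed formula and verifies that the bid $\frac{\rho}{\eta^{{\bf I}}+\rho}K$ attains the best of the two achievable values whatever the constant $B^{{\bf T}}$:

```python
def gain(b, K, eta_I, eta_T, rho, B_T):
    # V(beta^b) for a constant competing bid B_T under the second-price rule
    win = b >= B_T
    return (eta_I * K + eta_T * (K - B_T) * win) / (eta_I + rho + eta_T * win)

K, eta_I, eta_T, rho = 10.0, 1.0, 2.0, 0.5   # illustrative values
b_dom = rho * K / (eta_I + rho)              # candidate dominant bid

# b_dom matches the best of the two possible values for every B_T tested
dominant = all(
    abs(gain(b_dom, K, eta_I, eta_T, rho, B_T)
        - max(gain(0.0, K, eta_I, eta_T, rho, B_T),
              gain(B_T, K, eta_I, eta_T, rho, B_T))) < 1e-12
    for B_T in [0.5, 2.0, 3.0, 5.0, 9.0]
)
```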
\subsubsection{Social marketing problem with no discount factor and with social interactions and non-targeted advertising}
In this case, the optimal bidding map control is obtained from a proportion-based bidding policy $\mathfrak{b}^\star$ $=$ $(\mathfrak{b}^{\star,{\bf T}}, \mathfrak{b}^{\star,{\bf NT}})$ with $\mathfrak{b}^{\star,{\bf T}}(p)$ $=$ $\mrb^\star(p)$ and $\mathfrak{b}^{\star,{\bf NT}}(p)$ $=$ $(1-p)\mrb^\star(p)$, where
\begin{eqnarray*}
\mrb^\star(p) & \in & \argmin_{b\in \mathbb{R}_+}\frac{K+\eta^{{\bf T}}B^{{\bf T}}{\bf 1}_{b\geq B^{{\bf T}}} +\eta^{{\bf NT}}\frac{B^{{\bf NT}}}{1-p}{\bf 1}_{b\geq \frac{B^{{\bf NT}}}{1-p}} }{\eta^{{\bf I}}+\eta^{{\bf T}}{\bf 1}_{b\geq B^{{\bf T}}}+\eta^{{\bf NT}}{\bf 1}_{b\geq \frac{B^{{\bf NT}}}{1-p}} +p\eta^{{\bf S}}}.
\end{eqnarray*}
In order to obtain simple and interpretable formulas, let us assume that there is only one type of advertising.
\paragraph{Only targeted advertising.} If there is only targeted advertising, i.e. if $\eta^{{\bf NT}}=0$, we have
\begin{eqnarray*}
\mathfrak{b}^{\star,{\bf T}}(p) & \in & \argmin_{b\in \mathbb{R}_+} \frac{K+\eta^{{\bf T}}B^{{\bf T}}{\bf 1}_{b\geq B^{{\bf T}}}}{\eta^{{\bf I}}+\eta^{{\bf T}}{\bf 1}_{b\geq B^{{\bf T}}} +p\eta^{{\bf S}}}.
\end{eqnarray*}
Here again, we are reduced to comparing two costs:
\begin{eqnarray*}
\frac{K}{\eta^{{\bf I}}+p\eta^{{\bf S}}} & \text{ and } & \frac{K+\eta^{{\bf T}}B^{{\bf T}}}{\eta^{{\bf I}}+\eta^{{\bf T}} +p\eta^{{\bf S}}},
\end{eqnarray*}
the first one being obtained for $b<B^{{\bf T}}$, and the second one for $b\geq B^{{\bf T}}$. The best option will be $\mathfrak{b}^{\star,{\bf T}}(p)$ $\geq$ $B^{{\bf T}}$ if and only if
\begin{eqnarray*}
\frac{K}{\eta^{{\bf I}}+p\eta^{{\bf S}}} &>& \frac{K+\eta^{{\bf T}}B^{{\bf T}}}{\eta^{{\bf I}}+\eta^{{\bf T}} +p\eta^{{\bf S}}},
\end{eqnarray*}
which is equivalent to
\begin{eqnarray*}
p \; < \; \frac{\frac{K}{B^{{\bf T}}}-\eta^{{\bf I}}}{\eta^{{\bf S}}} \; =: \; p_\star.
\end{eqnarray*}
This means that as long as the proportion of informed individuals is below the threshold $p_\star$, one should bid higher than $B^{{\bf T}}$ (and thus display ads), and once the informed proportion exceeds $p_\star$, one should bid lower than $B^{{\bf T}}$ (and thus stop displaying ads).
Assuming that $K$ $\geq$ $\eta^{{\bf I}}B^{{\bf T}}$, we also notice that this threshold $p_\star$ is decreasing in $\eta^{{\bf I}}$ and in $\eta^{{\bf S}}$. This is interpreted as follows:
\begin{itemize}
\item First of all, the fact that there is an informed proportion threshold under which the Agent should display ads and above which she should not display ads necessarily comes from the social interactions.
Indeed, in the no-social-interaction case ($\eta^{{\bf S}}=0$), this threshold is $p_\star$ $=$ $\infty$.
The fact that the presence of social interactions may introduce a finite threshold proportion above which the Agent should stop displaying ads can also be understood as follows: suppose that an individual connects to a website displaying targeted ads. With social interactions, the more people are informed, the sooner this individual will learn the Information anyway, by interacting with an informed individual. Therefore, the Agent's incentive to display the ad to this individual weakens as the proportion of informed individuals increases, which justifies that the bid she is willing to make is smaller; once it is small enough to fall below $B^{{\bf T}}$, the Agent stops displaying ads.
\item The interpretation of the decreasing nature of the threshold proportion in $\eta^{{\bf I}}$ and $\eta^{{\bf S}}$ is the following. For a fixed proportion of informed individuals $p$, increasing the intensity of social interactions $\eta^{{\bf S}}$ will also make more likely a soon interaction with an informed people, thus weakening the Agent's incentive to display an ad, such that this incentive will be fully compensated after a smaller informed proportion. Likewise, increasing the intensity $\eta^{{\bf I}}$ of connections to a website containing the information will make people inform themselves faster, thus catalyzing the increase of the informed proportion, in turn decreasing the Agent's incentive to display an ad.
\end{itemize}
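The threshold behaviour described above can be verified directly. The snippet below (illustrative parameters, chosen so that $0<p_\star<1$) compares the two candidate costs and checks that displaying ads is cheaper exactly when $p<p_\star$:

```python
K, eta_I, eta_T, eta_S, B_T = 5.0, 1.0, 2.0, 6.0, 2.0  # illustrative values
p_star = (K / B_T - eta_I) / eta_S                      # threshold, here 0.25

def cost(show_ads, p):
    # the two candidate costs: bid above B_T (show ads) vs below (do not)
    if show_ads:
        return (K + eta_T * B_T) / (eta_I + eta_T + p * eta_S)
    return K / (eta_I + p * eta_S)

# showing ads is strictly cheaper if and only if p < p_star
switch_ok = all(
    (cost(True, p) < cost(False, p)) == (p < p_star)
    for p in [0.0, 0.1, 0.24, 0.26, 0.5, 0.9]
)
```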
\vspace{2mm}
Finally, we can estimate the optimal value when $M$ $\rightarrow$ $\infty$, by using the mean-field approximation as observed in Remark \ref{rem-meanfield}. First, consider the case when $p_\star$ $\geq$ $1$. In this case, we have $\mathfrak{b}^{\star,{\bf T}}(p)$ $\geq$ $B^{{\bf T}}$, for all $p$ $\in$ $[0,1]$, and so
\begin{eqnarray*}
\frac{1}{M}V^\star & \simeq & \int_0^1 v(p) dp \; = \; \int_0^1 \frac{K+\eta^{{\bf T}}B^{{\bf T}}{\bf 1}_{\mathfrak{b}^{\star,{\bf T}}(p)\geq B^{{\bf T}}}}{\eta^{{\bf I}}+\eta^{{\bf T}}{\bf 1}_{\mathfrak{b}^{\star,{\bf T}}(p)\geq B^{{\bf T}}} +p\eta^{{\bf S}}} dp
\; = \; \int_0^{1}\frac{K+\eta^{{\bf T}}B^{{\bf T}}}{\eta^{{\bf I}}+\eta^{{\bf T}} +p\eta^{{\bf S}}} dp \\
&=&\frac{K+\eta^{{\bf T}}B^{{\bf T}}}{\eta^{{\bf S}}}\ln\Big[\frac{\eta^{{\bf I}}+\eta^{{\bf T}} +\eta^{{\bf S}}}{\eta^{{\bf I}}+\eta^{{\bf T}}}\Big].
\end{eqnarray*}
When $0 \leq p_\star < 1$, we have
\begin{eqnarray*}
\frac{1}{M} V^\star &\simeq & \int_0^1 \frac{K+\eta^{{\bf T}}B^{{\bf T}}{\bf 1}_{\mathfrak{b}^{\star,{\bf T}}(p)\geq B^{{\bf T}}}}{\eta^{{\bf I}}+\eta^{{\bf T}}{\bf 1}_{\mathfrak{b}^{\star,{\bf T}}(p)\geq B^{{\bf T}}} +p\eta^{{\bf S}}} dp
\; = \; \int_0^{p_\star}\frac{K+\eta^{{\bf T}}B^{{\bf T}}}{\eta^{{\bf I}}+\eta^{{\bf T}} +p\eta^{{\bf S}}} dp \; + \; \int_{p_\star}^1\frac{K}{\eta^{{\bf I}}+p\eta^{{\bf S}}}dp\\
&=& \frac{K+\eta^{{\bf T}}B^{{\bf T}}}{\eta^{{\bf S}}}\ln\Big[\frac{\eta^{{\bf I}}+\eta^{{\bf T}} +p_\star\eta^{{\bf S}}}{\eta^{{\bf I}}+\eta^{{\bf T}}}\Big] \; - \; \frac{K}{\eta^{{\bf S}}}\ln\Big[ \frac{\eta^{{\bf I}} +p_\star\eta^{{\bf S}}}{\eta^{{\bf I}}+\eta^{{\bf S}}}\Big] \\
&=& \frac{K+\eta^{{\bf T}}B^{{\bf T}}}{\eta^{{\bf S}}}\ln\Big[ \frac{\eta^{{\bf T}} +\frac{K}{B^{{\bf T}}}}{\eta^{{\bf I}}+\eta^{{\bf T}}}\Big] \; - \; \frac{K}{\eta^{{\bf S}}}\ln\Big[ \frac{\frac{K}{B^{{\bf T}}}}{\eta^{{\bf I}}+\eta^{{\bf S}}}\Big]. \\
\end{eqnarray*}
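As a sanity check on the case $0\leq p_\star<1$, one can compare the closed-form expression with a direct numerical quadrature of $\int_0^1 v(p)dp$ (illustrative parameters; note that $v$ is continuous at $p_\star$, since the two candidate costs coincide there):

```python
import math

K, eta_I, eta_T, eta_S, B_T = 5.0, 1.0, 2.0, 6.0, 2.0  # illustrative values
p_star = (K / B_T - eta_I) / eta_S                      # here 0.25, in (0, 1)

def v(p):
    # optimal per-proportion cost: show ads below p_star, stop above
    if p < p_star:
        return (K + eta_T * B_T) / (eta_I + eta_T + p * eta_S)
    return K / (eta_I + p * eta_S)

# midpoint quadrature of the limit (1/M) V* = int_0^1 v(p) dp
n = 20_000
h = 1.0 / n
numeric = sum(v((k + 0.5) * h) for k in range(n)) * h

# closed-form value derived above
closed = ((K + eta_T * B_T) / eta_S * math.log((eta_T + K / B_T) / (eta_I + eta_T))
          - K / eta_S * math.log((K / B_T) / (eta_I + eta_S)))
```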
\paragraph{Only non-targeted advertising.} If there is only non-targeted advertising, i.e. if $\eta^{{\bf T}}=0$, we have
\begin{eqnarray*}
\mathfrak{b}^{\star,{\bf NT}}(p) &\in & \argmin_{b\in \mathbb{R}_+}\frac{K +\eta^{{\bf NT}}\frac{B^{{\bf NT}}}{1-p}{\bf 1}_{b\geq B^{{\bf NT}}} }{\eta^{{\bf I}}+\eta^{{\bf NT}}{\bf 1}_{b\geq B^{{\bf NT}}
} +p\eta^{{\bf S}}}.
\end{eqnarray*}
Here again, we are reduced to comparing two costs:
\begin{eqnarray*}
\frac{K}{\eta^{{\bf I}}+p\eta^{{\bf S}}} \;\;\; \text{ and } \;\;\; \frac{K +\eta^{{\bf NT}}\frac{B^{{\bf NT}}}{1-p}}{\eta^{{\bf I}}+\eta^{{\bf NT}} +p\eta^{{\bf S}}},
\end{eqnarray*}
the first one being obtained for $b<B^{{\bf NT}}$, and the second one for $b\geq B^{{\bf NT}}$. The best option will be $\mathfrak{b}^{\star,{\bf NT}}(p)$ $\geq$ $B^{{\bf NT}}$ if and only if
\begin{eqnarray*}
\frac{K}{\eta^{{\bf I}}+p\eta^{{\bf S}}} &>& \frac{K +\eta^{{\bf NT}}\frac{B^{{\bf NT}}}{1-p}}{\eta^{{\bf I}}+\eta^{{\bf NT}} +p\eta^{{\bf S}}},
\end{eqnarray*}
which is equivalent to
\begin{eqnarray*}
p &<& \frac{K-\eta^{{\bf I}} B^{{\bf NT}}}{K+\eta^{{\bf S}} B^{{\bf NT}}} \; =: \; \bar p_\star.
\end{eqnarray*}
This means that below the informed proportion threshold $\bar p_\star$, one should bid higher than $B^{{\bf NT}}$ (and thus display ads), and above this threshold, one should bid lower than $B^{{\bf NT}}$ (and thus stop displaying ads).
When $K$ $\geq$ $\eta^{{\bf I}} B^{{\bf NT}}$, we notice, as in the ``only targeted advertising'' case, that the informed proportion $\bar p_\star$ is decreasing in $\eta^{{\bf I}}$ and in $\eta^{{\bf S}}$.
The same interpretations as in the ``only targeted advertising'' case still hold, but there is an additional argument. Indeed, recall that in the ``only targeted advertising'' case, it is argued that the presence of such a threshold comes from the presence of social interactions, and that when they are absent ($\eta^{{\bf S}}=0$), or more generally when $\eta^{{\bf S}}$ is small enough, there is no threshold (the optimal bidding strategy is a constant bid). Here, notice that $\bar p_\star$ $<$ $1$ even if $\eta^{{\bf S}}=0$ (recall that we assumed $\eta^{{\bf I}}>0$). Thus, as opposed to the previous ``only targeted advertising'' case, the existence of such a threshold does not come only from social interactions.
Displaying non-targeted ads always comes with the risk of displaying ads to already informed people, and thus of paying for a useless ad. The more people are informed, the higher the risk. This explains why, beyond some proportion threshold, it is no longer worth paying to display an ad, and thus the Agent has to stop doing so.
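Here again a quick numerical check (illustrative parameters) confirms both the switching behaviour at $\bar p_\star$ and the fact that the threshold stays below $1$ even without social interactions:

```python
K, eta_I, eta_NT, eta_S, B_NT = 5.0, 1.0, 2.0, 6.0, 2.0  # illustrative values
p_bar = (K - eta_I * B_NT) / (K + eta_S * B_NT)          # here 3/17

def cost(show_ads, p):
    # candidate costs: bid above B_NT (show non-targeted ads) vs below
    if show_ads:
        return (K + eta_NT * B_NT / (1.0 - p)) / (eta_I + eta_NT + p * eta_S)
    return K / (eta_I + p * eta_S)

# showing ads is strictly cheaper if and only if p < p_bar
switch_ok = all(
    (cost(True, p) < cost(False, p)) == (p < p_bar)
    for p in [0.0, 0.1, 0.2, 0.5, 0.9]
)

# even with eta_S = 0, the threshold remains strictly below 1 (since eta_I > 0)
p_bar_no_social = (K - eta_I * B_NT) / K
```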
\subsection{Uniform maximal bid from other bidders}
We now consider the case where the other bidders' maximal bid is uniformly distributed.
Regarding the auction payment rule, we shall focus on the {\it first-price auction rule}, i.e., ${\bf c}(b,B)=b$ (the same argument applies to the {\it second-price auction rule}).
We focus on the example of the purchase-based commercial advertising model, but the same argument can be adapted to the other models. From Theorem \ref{theo-purchase}, the gain value associated to a constant bid strategy is equal to
\begin{eqnarray*}
V(\beta^b) &=&
\frac{\eta^{{\bf I}}K+\eta^{{\bf T}}(K-b)\mathbb{P}[b\geq B^{{\bf T}}_1]}{\eta^{{\bf I}}+ \rho + \eta^{{\bf T}}\mathbb{P}[b\geq B^{{\bf T}}_1]}, \quad b \in \mathbb{R}_+.
\end{eqnarray*}
Denoting by $[b^-,b^+]$ with $b^-<b^+$, the support of the uniform distribution for $B^{{\bf T}}_1$, we can restrict the search for the argmax of $b$ $\mapsto$ $V(\beta^b)$ to the interval $[b^-,b^+]$, and so
\begin{eqnarray*}
b^\star & \in & \argmax_{b\in [b^-,b^+]} \frac{\eta^{{\bf I}}K+\eta^{{\bf T}}(K-b)\frac{b-b^-}{b^+-b^-}}{\eta^{{\bf I}}+ \rho + \eta^{{\bf T}}\frac{b-b^-}{b^+-b^-}}.
\end{eqnarray*}
By making the change of variable
\begin{eqnarray*}
b' \; = \; \eta^{{\bf I}}+ \rho + \eta^{{\bf T}}\frac{b-b^-}{b^+-b^-}, & \mbox{ i.e. } & b \; = \; \lambda_1 + \lambda_2 b'
\end{eqnarray*}
with
\begin{eqnarray*}
\lambda_1 \; = \; b^--(b^+-b^-)\frac{\eta^{{\bf I}}+ \rho }{\eta^{{\bf T}}},\quad \lambda_2\; = \; \frac{b^+-b^-}{\eta^{{\bf T}}},
\end{eqnarray*}
we see that $b^\star$ $=$ $\lambda_1 + \lambda_2 b^{',\star}$, with
\begin{align} \label{b'}
b^{',\star}
& \in \; \argmax_{b'\in [b'^-,b'^+]} \frac{a_0+a_1b'+a_2b'^2}{b'} \; = : \; \argmax_{b'\in [b'^-,b'^+]} g(b'),
\end{align}
with
\begin{eqnarray*}
a_0 \; = \; \lambda_1(\eta^{{\bf I}}+\rho)-K\rho, \quad a_1 \; = \; K-\lambda_1+\lambda_2(\eta^{{\bf I}}+\rho), \quad a_2= -\lambda_2<0,
\end{eqnarray*}
and
\begin{eqnarray*}
b'^-=\eta^{{\bf I}}+ \rho, \quad b'^+=\eta^{{\bf I}}+ \rho+\eta^{{\bf T}}.
\end{eqnarray*}
By writing the first-order condition for $g(b')$ in \eqref{b'}, we see that its derivative is equal to $a_2-\frac{a_0}{b'^2}$ which is negative, for $b'\in [b'^-,b'^+]\subset \mathbb{R}_+$, if and only if $b'^2\geq \frac{a_0}{a_2}$, and thus if and only if $b'\geq \sqrt{\left(\frac{a_0}{a_2}\right)_+}$.
The argmax for $g$ in $[b'^-,b'^+]$ is thus given by
\begin{eqnarray*}
b^{',\star} &=& \max\Big[ b'^-, \min\Big(b'^+, \sqrt{\big(\frac{a_0}{a_2}\big)_+}\Big)\Big]
\end{eqnarray*}
and thus
\begin{eqnarray*}
b^\star
&=& \max\big[b^-, \min(b^+, \bar b)\big]
\end{eqnarray*}
where (after some straightforward calculation)
\begin{eqnarray*}
\bar b
&=& b^--(b^+-b^-)\frac{\eta^{{\bf I}}+ \rho }{\eta^{{\bf T}}} + \sqrt{\frac{b^+-b^-}{\eta^{{\bf T}}}\Big(K\rho-b^-(\eta^{{\bf I}}+\rho)+\frac{b^+-b^-}{\eta^{{\bf T}}}(\eta^{{\bf I}}+ \rho)^2\Big)_+}.
\end{eqnarray*}
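The closed-form bid can be checked against a brute-force maximisation of $V(\beta^b)$ over a fine grid (illustrative parameters; first-price rule):

```python
import math

K, eta_I, eta_T, rho = 10.0, 1.0, 2.0, 0.5  # illustrative values
b_minus, b_plus = 1.0, 4.0                   # support of the uniform maximal bid

def V(b):
    # gain of a constant bid b under the first-price rule c(b, B) = b
    F = min(max((b - b_minus) / (b_plus - b_minus), 0.0), 1.0)  # P[b >= B^T_1]
    return (eta_I * K + eta_T * (K - b) * F) / (eta_I + rho + eta_T * F)

# closed-form optimal bid from the formulas above
inner = (K * rho - b_minus * (eta_I + rho)
         + (b_plus - b_minus) / eta_T * (eta_I + rho) ** 2)
b_bar = (b_minus - (b_plus - b_minus) * (eta_I + rho) / eta_T
         + math.sqrt((b_plus - b_minus) / eta_T * max(inner, 0.0)))
b_star = max(b_minus, min(b_plus, b_bar))

# brute-force argmax over a fine grid on [b_minus, b_plus]
grid = [b_minus + i * (b_plus - b_minus) / 100_000 for i in range(100_001)]
b_num = max(grid, key=V)
```

Since $a_0<0$ here, $g$ is strictly concave on $[b'^-,b'^+]$, so the grid argmax is within one grid step of the closed-form bid.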
\section{Proof of main results} \label{sec:proofs}
We first prove the results in the social marketing model with no discount rate, and then show how the other results can be obtained as particular cases of this model.
\subsection{Proof of results in Section \ref{sec:nodiscount}}
\subsubsection{Proof of Theorem \ref{theo-social-no-discount}} \label{sec:socialno}
Fix an arbitrary open-loop bidding map control $\beta$, and denote by
\begin{eqnarray*}
p^{\beta}_t &=& \frac{1}{M}\sum_{m=1}^M X^{m,\beta}_t,\quad t\in\mathbb{R}_+,
\end{eqnarray*}
the proportion of informed individuals at each time $t\in\mathbb{R}_+$. The underlying idea of this proof is a change of variable from the numerous Poisson processes of the problem to the proportion $p^{\beta}_t$ in the cost function.
In other words, the suitable approach is to express the optimisation of the cost $V(\beta)$ over the proportion $p^\beta$ running through $\{0,1/M,\ldots,1\}$ rather than over the jump times of the numerous Poisson processes $N^{{\bf I}}$, $N^{\boldsymbol{D}}$, $N^{{\bf T}}$, $N^{{\bf NT}}$ defined in our model.
Notice that the Poisson processes and the proportion process are piecewise constant, and thus the change of variable has to be done carefully. To deal with this technical issue, we shall rely on
the compensated processes of the Poisson processes, and use martingale arguments in order to express $V(\beta)$ first with $dt$ thanks to the intensity processes,
then make a continuous-time change of variable to obtain another intensity process, and then move back to jump processes with $dp^\beta$.
\vspace{1mm}
\noindent {\it Step 1: Intensity process of $p^\beta$.} From the dynamics of $X^\beta$, we have
\begin{eqnarray*}
dp^{\beta}_t
&=& \frac{1}{M}\sum_{m=1}^M (1-X^{m,\beta}_{t-})\Big(dN^{m,{\bf I}}_t+{\bf 1}_{\beta^m_t\geq B^{m,{\bf T}}_{N^{m,{\bf T}}_t}}dN^{m,{\bf T}}_t \\
& & \hspace{3cm} + \; {\bf 1}_{\beta^0_t\geq B^{{\bf NT}}_{N^{{\bf NT}}_t}} dN^{m,{\bf NT}}_t+\sum_{i=1}^M X^{i,\beta}_{t-}dN^{m,i,{\bf S}}_t\Big) \\
&=& \frac{1}{M}\sum_{m=1}^M (1-X^{m,\beta}_{t-})\Big(dN^{m,{\bf I}}_t+{\bf 1}_{\beta^m_t\geq B^{m,{\bf T}}_{N^{m,{\bf T}}_{t-}+1}}dN^{m,{\bf T}}_t \\
& & \hspace{3cm} + \; {\bf 1}_{\beta^0_t\geq B^{{\bf NT}}_{N^{{\bf NT}}_{t-}+1}} dN^{m,{\bf NT}}_t+\sum_{i =1}^M X^{i,\beta}_{t-}dN^{m,i,{\bf S}}_t\Big).
\end{eqnarray*}
It follows that the process
\begin{align}\label{eq-martingale}
t &\longmapsto\; p^{\beta}_t-\int_0^t\frac{1}{M}\sum_{m \in \llbracket 1,M \rrbracket} (1-X^{m, \beta}_{s-})\Big(\;\eta^{{\bf I}}+{\bf 1}_{\beta^m_s\geq B^{m,{\bf T}}_{N^{m,{\bf T}}_{s-}+1}}\eta^{{\bf T}}\\
& \hspace{3cm} + \; {\bf 1}_{\beta^0_s\geq B^{{\bf NT}}_{N^{{\bf NT}}_{s-}+1}} \eta^{{\bf NT}}+\sum_{i \in \llbracket 1,M \rrbracket} X^{i,\beta}_{s-}\eta^{{\bf S}}\Big)ds,
\end{align}
is a martingale. Indeed, a classical result of martingale theory for point processes is that for any Poisson process $N$ with intensity $\eta$, its {\it compensated process}, defined by $(N_t-\eta t)_{t\in\mathbb{R}_+}$, is a martingale w.r.t.
its natural filtration, and also, clearly, w.r.t. any filtration generated by $N$ and any process $Y$ independent of $N$. This implies that the compensated processes of all the Poisson processes considered in this model are martingales w.r.t. the filtration $\tilde{\mathbb{F}}=(\tilde{{\cal F}}_t)_{t\in\mathbb{R}_+}$ defined by
\begin{eqnarray*}
\tilde{{\cal F}}_t=\sigma((B^{m,{\bf T}}_k)_{m\in\llbracket 1, M\rrbracket, k\in\mathbb{N}_\star},(B^{{\bf NT}}_k)_{k\in\mathbb{N}_\star}, (N^{m,{\bf I}}_s,N^{m,{\bf T}}_s,N^{m,{\bf NT}}_s,N^{m,i,{\bf S}}_s,N^{m,\boldsymbol{D}}_s)_{m,i \in \llbracket 1,M \rrbracket, s\leq t}).
\end{eqnarray*}
Notice that the integrand in \eqref{eq-martingale} is $\tilde{\mathbb{F}}$-predictable, which thus implies that the process in \eqref{eq-martingale} is an $\tilde{\mathbb{F}}$-martingale. Since ${\cal F}_t\subset \tilde{{\cal F}}_t$ for all $t\in\mathbb{R}_+$, this implies that for any bounded positive $\mathbb{F}$-predictable process $H$,
\begin{eqnarray*}
&& \mathbb{E}\Big[\int_0^\infty H_tdp^\beta_t\Big]\\
&=&\mathbb{E}\Big[\int_0^\infty H_t \frac{1}{M}\sum_{m \in \llbracket 1,M \rrbracket} (1-X^{m, \beta}_t)\Big(\eta^{{\bf I}}+{\bf 1}_{\beta^m_t\geq B^{m,{\bf T}}_{N^{m,{\bf T}}_{t-}+1}}\eta^{{\bf T}} \\
& & \hspace{5cm} + \; {\bf 1}_{\beta^0_t\geq B^{{\bf NT}}_{N^{{\bf NT}}_{t-}+1}} \eta^{{\bf NT}}+\sum_{i \in \llbracket 1,M \rrbracket} X^{i,\beta}_{t-}\eta^{{\bf S}}\Big)dt \Big] \\
&=& \mathbb{E}\Big[\int_0^\infty H_t\frac{1}{M}\sum_{m \in \llbracket 1,M \rrbracket} (1-X^{m, \beta}_t)\Big(\eta^{{\bf I}}+\mathbb{P}[b\geq B^{m,{\bf T}}_{1}]_{b:= \beta^m_t}\eta^{{\bf T}} \\
& & \hspace{5cm} + \; \mathbb{P}[b\geq B^{{\bf NT}}_{1}]_{b:=\beta^0_t} \eta^{{\bf NT}}+\sum_{i \in \llbracket 1,M\rrbracket}X^{i,\beta}_t\eta^{{\bf S}}\Big)dt\Big].
\end{eqnarray*}
This expression can be rewritten as
\begin{align}\label{eq-dp}
\mathbb{E}\Big[\int_0^\infty H_tdp^\beta_t\Big] &= \mathbb{E}\Big[\int_0^\infty H_t G_t^\beta dt \Big],
\end{align}
where $\alpha^{\beta}_t:=\frac{\sum_{m=1}^M(1-X^{m, \beta}_{t^-})\mathbb{P}[b\geq B^{{\bf T}}_{1}]_{b:= \beta^m_t}}{M(1-p^\beta_{t^-})}$, and
\begin{align} \label{defG}
G_t^\beta &:= \; (1-p^\beta_t)\big(\eta^{{\bf I}}+ \eta^{{\bf T}} \alpha^\beta_t+ \eta^{{\bf NT}} \mathbb{P}[b\geq B^{{\bf NT}}_{1}]_{b:=\beta^0_t} +\eta^{{\bf S}} p^\beta_t\big), \quad \forall t\in\mathbb{R}_+.
\end{align}
This means that $G^\beta$ is the intensity process of $p^\beta$.
\vspace{1mm}
\noindent {\it Step 2: Lower bound for $V(\beta)$.}
From \eqref{defVsocialno}, and using the intensities of the Poisson processes, we rewrite the cost function as
\begin{eqnarray*}
V(\beta)
&=&\mathbb{E}\Big[\sum_{m=1}^M\int_0^\infty \Big(K(1-X^{m,\beta}_{t^-})+{\bf 1}_{\beta^m_t\geq B^{m,{\bf T}}_{N_t^{m,{\bf T}}}} {\bf c}^{{\bf T}}(\beta^m_t,B^{m,{\bf T}}_{N_t^{m,{\bf T}}}) \eta^{{\bf T}} \\
& & \hspace{5cm} + \; {\bf 1}_{\beta^0_t\geq B^{{\bf NT}}_{N^{{\bf NT}}_t}} {\bf c}^{{\bf NT}}(\beta^0_t, B^{{\bf NT}}_{N^{{\bf NT}}_t}) \eta^{{\bf NT}}\Big)dt\Big] \\
&=&\mathbb{E}\Big[\sum_{m=1}^M\int_0^\infty \Big(K(1-X^{m,\beta}_{t^-}) + \eta^{{\bf T}} \mathbb{E}[{\bf c}^{{\bf T}}(b,B^{1,{\bf T}}_{1}){\bf 1}_{b\geq B^{1,{\bf T}}_{1}}]_{b:=\beta^m_t} \\
& & \hspace{5cm} + \; \eta^{{\bf NT}} \mathbb{E}[{\bf c}^{{\bf NT}}(b, B^{{\bf NT}}_{1}){\bf 1}_{b\geq B^{{\bf NT}}_{1}}]_{b:=\beta^0_t} \Big)dt\Big].
\end{eqnarray*}
Now, we can bound from below the part $\mathbb{E}[{\bf c}^{{\bf T}}(b,B^{1,{\bf T}}_{1}){\bf 1}_{b\geq B^{1,{\bf T}}_{1}}]_{b:=\beta^m_t}$ by $(1-X^{m,\beta}_{t^-})\mathbb{E}\big[{\bf c}^{{\bf T}}(b,B^{1,{\bf T}}_{1}){\bf 1}_{b\geq B^{1,{\bf T}}_{1}}\big]_{b:=\beta^m_t}$ and the part
$\mathbb{E}[{\bf c}^{{\bf NT}}(b, B^{{\bf NT}}_{1}){\bf 1}_{b\geq B^{{\bf NT}}_{1}}]_{b:=\beta^0_t}$ by ${\bf 1}_{p^\beta_{t^-}<1}\mathbb{E}[{\bf c}^{{\bf NT}}(b, B^{{\bf NT}}_{1}){\bf 1}_{b\geq B^{{\bf NT}}_{1}}]_{b:=\beta^0_t}$, so that
\begin{align}
V(\beta)
&\geq \; M\mathbb{E}\Big[\int_0^\infty \Big(K(1-p^{\beta}_{t^-})+\frac{1}{M}\sum_{m=1}^M(1-X^{m,\beta}_{t^-}) \eta^{{\bf T}} \mathbb{E}\big[{\bf c}^{{\bf T}}(b,B^{1,{\bf T}}_{1}){\bf 1}_{b\geq B^{1,{\bf T}}_{1}}\big]_{b:=\beta^m_t} \\
& \hspace{4cm} + \; {\bf 1}_{p^\beta_{t^-}<1} \eta^{{\bf NT}} \mathbb{E}\big[ {\bf c}^{{\bf NT}}(b, B^{{\bf NT}}_{1}) {\bf 1}_{b\geq B^{{\bf NT}}_{1}}\big]_{b:=\beta^0_t} \Big)dt\Big] \\
&= \; M \mathbb{E}\Big[\int_0^\infty H_t G_t^\beta d t\Big], \label{inegV}
\end{align}
where $H_t$ $=$ $\tilde H_t(p_{t^-}^\beta)$ with $\tilde H_t(p)$ defined for $p$ $\in$ $[0,1]$ by
\begin{align}
\tilde H_t(p) &:= \; \frac{K+ \frac{\eta^{{\bf T}}}{M(1-p)}\displaystyle\sum_{m=1}^M(1-X^{m,\beta}_{t-}) \mathbb{E}[{\bf c}^{{\bf T}}(b,B^{1,{\bf T}}_{1}){\bf 1}_{b\geq B^{1,{\bf T}}_{1}}]_{b:=\beta^m_t}
+ {\bf 1}_{p<1} \eta^{{\bf NT}} \mathbb{E}\big[ \frac{{\bf c}^{{\bf NT}}(b, B^{{\bf NT}}_{1})}{1-p} {\bf 1}_{b\geq B^{{\bf NT}}_{1}}\big]_{b:=\beta^0_t} }{\eta^{{\bf I}}+ \eta^{{\bf T}} \alpha^\beta_t+ \eta^{{\bf NT}} \mathbb{P}[b\geq B^{{\bf NT}}_{1}]_{b:=\beta^0_t} + \eta^{{\bf S}} p },
\end{align}
(with the convention that $\frac{0}{0}=0$) by recalling the definition of $G^\beta$ in \eqref{defG}.
For such $H$, which is clearly a positive and $\mathbb{F}$-predictable bounded process, we have from \eqref{inegV} and \eqref{eq-dp}
\begin{eqnarray*}
V(\beta)&\geq&
M \mathbb{E}\Big[\int_0^\infty \tilde H_t (p_{t^-}^\beta) dp^{\beta}_t\Big].
\end{eqnarray*}
The above r.h.s. is turned into a sum over successive values of $p^{\beta}_t$ as
\begin{align}
V(\beta) & \geq \; \mathbb{E}\Big[\sum_{p\in \mathbb{P}_M} \tilde H_{\tau_p^\beta}(p) \Big],
\end{align}
where $\tau^\beta_p$ $=$ $\inf\{t\in\mathbb{R}_+: p^\beta_t=p+1/M\}$ for $p\in \mathbb{P}_M$.
Notice that in $\tilde H_{\tau_p^\beta}(p)$, the term $\sum_{m=1}^M(1-X^{m,\beta}_{\tau^{\beta}_p-})\mathbb{E}[{\bf c}^{{\bf T}}(b,B^{1,{\bf T}}_{1}){\bf 1}_{b\geq B^{1,{\bf T}}_{1}}]_{b:=\beta^m_{\tau^{\beta}_p}}$ and the sum in the definition of $\alpha^\beta_{\tau^{\beta}_p}$ only sum over the $M(1-p)$ indices $m$ such that
$X^{m,\beta}_{\tau^{\beta}_p-}=0$. We can thus clearly bound from below $\tilde H_{\tau_p^\beta}(p)$, for $p$ $\in$ $\mathbb{P}_M$, by
\begin{eqnarray*}
\tilde H_{\tau_p^\beta}(p) & \geq & \inf_{\underset{m\in\llbracket 1, M(1-p)\rrbracket}{b^{m,{\bf T}}, b^{{\bf NT}}\in \mathbb{R}_+}} w(p;(b^{m,{\bf T}})_{m\in\llbracket 1,M(1-p)\rrbracket},b^{{\bf NT}}),
\end{eqnarray*}
with
\begin{eqnarray*}
& & w(p;(b^{m,{\bf T}})_{m\in\llbracket 1,M(1-p)\rrbracket},b^{{\bf NT}}) \\
&:=& \frac{K+\frac{\eta^{{\bf T}}}{M(1-p)}\displaystyle\sum_{m=1}^{M(1-p)}\mathbb{E}\big[{\bf c}^{{\bf T}}(b^{m,{\bf T}},B^{1,{\bf T}}_{1}){\bf 1}_{b^{m,{\bf T}}\geq B^{1,{\bf T}}_{1}}\big]
+ \eta^{{\bf NT}} \mathbb{E}\big[\frac{{\bf c}^{{\bf NT}}(b^{{\bf NT}}, B^{{\bf NT}}_{1})}{1-p}{\bf 1}_{b^{{\bf NT}}\geq B^{{\bf NT}}_{1}}\big] }{\eta^{{\bf I}}+ \eta^{{\bf T}} \frac{\displaystyle\sum_{m=1}^{M(1-p)}\mathbb{P}[b^{m,T}\geq B^{{\bf T}}_{1}]}{M(1-p)} + \eta^{{\bf NT}} \mathbb{P}[b^{{\bf NT}}\geq B^{{\bf NT}}_{1}] +\eta^{{\bf S}}p },
\end{eqnarray*}
so that for any open-loop bidding map control $\beta$ $\in$ $\Pi_{OL}$,
\begin{align} \label{Vlower}
V(\beta) & \geq \; \sum_{p\in \mathbb{P}_M} v(p),
\end{align}
with
\begin{align} \label{eq-infimum}
v(p) &:= \; \inf_{\underset{m\in\llbracket 1, M(1-p)\rrbracket}{b^{m,{\bf T}}, b^{{\bf NT}}\in \mathbb{R}_+}} w(p;(b^{m,{\bf T}})_{m\in\llbracket 1,M(1-p)\rrbracket},b^{{\bf NT}}), \quad p \in \mathbb{P}_M.
\end{align}
\vspace{1mm}
\noindent {\it Step 3: Attaining the lower bound}. By definition of $v(p)$ in \eqref{eq-infimum}, we have for all $b^{m,{\bf T}}\in\mathbb{R}_+$, $m\in\llbracket 1, M(1-p)\rrbracket$, and $b^{{\bf NT}}\in \mathbb{R}_+$,
\begin{eqnarray*}
&&K+\frac{ \eta^{{\bf T}}}{M(1-p)}\sum_{m=1}^{M(1-p)}\mathbb{E}[{\bf c}^{{\bf T}}(b^{m,{\bf T}},B^{1,{\bf T}}_{1}){\bf 1}_{b^{m,{\bf T}}\geq B^{1,{\bf T}}_{1}}] + \eta^{{\bf NT}} \mathbb{E}[\frac{{\bf c}^{{\bf NT}}(b^{{\bf NT}}, B^{{\bf NT}}_{1})}{1-p}{\bf 1}_{b^{{\bf NT}}\geq B^{{\bf NT}}_{1}}] \\
&\geq& v(p)\Big(\eta^{{\bf I}}+ \eta^{{\bf T}} \frac{\sum_{m=1}^{M(1-p)}\mathbb{P}[b^{m,{\bf T}}\geq B^{{\bf T}}_{1}]}{M(1-p)}+ \eta^{{\bf NT}}\mathbb{P}[b^{{\bf NT}}\geq B^{{\bf NT}}_{1}] + \eta^{{\bf S}} p \Big)
\end{eqnarray*}
which is equivalent to
\begin{eqnarray*}
&&K-v(p) (\eta^{{\bf I}}+p\eta^{{\bf S}})+\frac{\eta^{{\bf T}} }{M(1-p)}\sum_{m=1}^{M(1-p)}\mathbb{E}\big[\big({\bf c}^{{\bf T}}(b^{m,{\bf T}},B^{1,{\bf T}}_{1})-v(p)\big){\bf 1}_{b^{m,{\bf T}}\geq B^{1,{\bf T}}_{1}}\big] \\
&& \quad + \; \eta^{{\bf NT}} \mathbb{E}\Big[\Big(\frac{{\bf c}^{{\bf NT}}(b^{{\bf NT}}, B^{{\bf NT}}_{1})}{1-p}-v(p)\Big){\bf 1}_{b^{{\bf NT}} \geq B^{{\bf NT}}_{1}}\Big] \; \geq \; 0.
\end{eqnarray*}
Moreover, we have equality if and only if $b^{m,{\bf T}}\in\mathbb{R}_+$, $m\in\llbracket 1, M(1-p)\rrbracket$, and $b^{{\bf NT}}\in \mathbb{R}_+$ attain the infimum in \eqref{eq-infimum}. This means that
\begin{align}
& \quad \;\argmin_{\underset{m\in\llbracket 1, M(1-p)\rrbracket}{b^{m,{\bf T}}, b^{{\bf NT}}\in \mathbb{R}_+}} w(p;(b^{m,{\bf T}})_{m\in\llbracket 1,M(1-p)\rrbracket},b^{{\bf NT}}) \\
&\;=\argmin_{\underset{m\in\llbracket 1, M(1-p)\rrbracket}{b^{m,{\bf T}}, b^{{\bf NT}}\in \mathbb{R}_+}} \Big\{
\frac{\eta^{{\bf T}}}{M(1-p)}\sum_{m=1}^{M(1-p)}\mathbb{E}[({\bf c}^{{\bf T}}(b^{m,{\bf T}},B^{1,{\bf T}}_{1})-v(p)){\bf 1}_{b^{m,{\bf T}}\geq B^{1,{\bf T}}_{1}}] \\
& \hspace{5cm} + \; \eta^{{\bf NT}} \mathbb{E}\Big[\Big(\frac{{\bf c}^{{\bf NT}}(b^{{\bf NT}} , B^{{\bf NT}}_{1})}{1-p}-v(p)\Big){\bf 1}_{b^{{\bf NT}} \geq B^{{\bf NT}}_{1}}\Big] \Big\} \\
&\;=\Big(\displaystyle\prod_{m\in\llbracket 1, M(1-p)\rrbracket}\argmin_{b^{m,{\bf T}}\in \mathbb{R}_+}\mathbb{E}[({\bf c}^{{\bf T}}(b^{m,{\bf T}},B^{1,{\bf T}}_{1})-v(p)){\bf 1}_{b^{m,{\bf T}}\geq B^{1,{\bf T}}_{1}}]\Big) \\
& \hspace{3cm} \times \argmin_{b^{{\bf NT}}\in \mathbb{R}_+} \mathbb{E}\Big[\Big(\frac{{\bf c}^{{\bf NT}}(b^{{\bf NT}} , B^{{\bf NT}}_{1})}{1-p}-v(p)\Big){\bf 1}_{b^{{\bf NT}} \geq B^{{\bf NT}}_{1}}\Big] \\
&\;= \; \Big(\argmax_{b^{{\bf T}}\in \mathbb{R}_+}\mathbb{E}\big[ \big( v(p) - {\bf c}^{{\bf T}}(b^{{\bf T}},B^{1,{\bf T}}_{1}) \big) {\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}\big]\Big)^{M(1-p)} \\
& \hspace{3cm} \times \argmax_{b^{{\bf NT}}\in \mathbb{R}_+}\mathbb{E}\Big[\Big(v(p)-\frac{{\bf c}^{{\bf NT}}(b^{{\bf NT}} , B^{{\bf NT}}_{1})}{1-p}\Big){\bf 1}_{b^{{\bf NT}} \geq B^{{\bf NT}}_{1}}\Big]. \label{eq-split-static}
\end{align}
Relation \eqref{eq-split-static} shows that the original argmin over $M(1-p)+1$ arguments in $v(p)$ reduces to the search for two argmax of optimal bids in static auctions, one with respect to the maximal bid distribution ${\cal L}(B^{1,{\bf T}}_{1})$ for the targeted auction, and the other with respect to the
maximal bid distribution ${\cal L}(B^{{\bf NT}}_{1})$ of the non-targeted auction.
Let us check that these sets are not empty. We study the set
\begin{align} \label{setmin}
\argmax_{b^{{\bf T}}\in \mathbb{R}_+}\mathbb{E}[(v(p)-{\bf c}^{{\bf T}}(b^{{\bf T}},B^{1,{\bf T}}_{1})){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}],
\end{align}
the other set being treated similarly, and distinguish the two cases of auction payment rules:
\begin{itemize}
\item[1.] {\it First-price auction: ${\bf c}^{{\bf T}}(b,B)=b$}. In this case, the set in \eqref{setmin} is written as
\begin{eqnarray*}
\argmax_{b^{{\bf T}}\in \mathbb{R}_+} \big\{ (v(p)-b^{{\bf T}}) \mathbb{P}[ b^{{\bf T}}\geq B^{1,{\bf T}}_{1} ] \big\}.
\end{eqnarray*}
Notice that for $b^{{\bf T}}>v(p)$, the expression $(v(p)-b^{{\bf T}})\mathbb{P}[ b^{{\bf T}}\geq B^{1,{\bf T}}_{1} ]$ is strictly negative, and thus smaller than its value for $b^{{\bf T}}=0$. The maximisation can thus be restricted to $[0,v(p)]$. Notice that $b^{{\bf T}}\mapsto v(p)-b^{{\bf T}}$ is positive and continuous on $[0,v(p)]$, and thus upper semi-continuous, and $b^{{\bf T}}\mapsto \mathbb{P}[b^{{\bf T}}\geq B^{1,{\bf T}}_{1}]$ is positive, non-decreasing and càdlàg, and thus upper semi-continuous.
It follows that $b^{{\bf T}}\mapsto (v(p)-b^{{\bf T}})\mathbb{P}[b^{{\bf T}}\geq B^{1,{\bf T}}_{1}]$ is positive and upper semi-continuous on $[0,v(p)]$, and thus reaches its maximum.
\item[2.] {\it Second-price auction case: ${\bf c}^{{\bf T}}(b,B)=B$}. In this case, the set \eqref{setmin} is written as
\begin{eqnarray*}
\argmax_{b^{{\bf T}}\in \mathbb{R}_+}\mathbb{E}[(v(p)-B^{1,{\bf T}}_{1}){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}],
\end{eqnarray*}
and it is clear that the maximum is attained by $b^{{\bf T}}=v(p)$.
\end{itemize}
This proves that the set \eqref{eq-split-static} is not empty. Moreover, any $b^{m,{\bf T}}\in\mathbb{R}_+$, $m\in\llbracket 1, M(1-p)\rrbracket$, and $b^{{\bf NT}}\in \mathbb{R}_+$ in the set \eqref{eq-split-static} reaches the infimum in \eqref{eq-infimum}. Given the form of the set \eqref{eq-split-static}, one can clearly take an element of the form
$((b^{{\bf T}})_{m\in\llbracket 1, M(1-p)\rrbracket}, b^{{\bf NT}})$ in this set (i.e. the same bid $b^{{\bf T}}$ for the targeted advertising auctions associated with all individuals who are not informed yet). Thus, the infimum in \eqref{eq-infimum} is written as
\begin{align}
& \quad v(p) \\
& = \;\inf_{b^{{\bf T}}, b^{{\bf NT}}\in \mathbb{R}_+}\frac{K+\frac{ \eta^{{\bf T}}}{M(1-p)}\sum_{i=1}^{M(1-p)}\mathbb{E}[{\bf c}^{{\bf T}}(b^{{\bf T}},B^{1,{\bf T}}_{1}){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}] + \eta^{{\bf NT}} \mathbb{E}[\frac{{\bf c}^{{\bf NT}}(b^{{\bf NT}} , B^{{\bf NT}}_{1})}{1-p}{\bf 1}_{b^{{\bf NT}} \geq B^{{\bf NT}}_{1}}] }
{\eta^{{\bf I}}+ \eta^{{\bf T}} \frac{\sum_{i=1}^{M(1-p)}\mathbb{P}[b^{{\bf T}}\geq B^{{\bf T}}_{1}]}{M(1-p)} + \eta^{{\bf NT}} \mathbb{P}[b^{{\bf NT}}\geq B^{{\bf NT}}_{1}] + \eta^{{\bf S}} p } \\
&\;=\inf_{b^{{\bf T}}, b^{{\bf NT}}\in \mathbb{R}_+} v^{b^{{\bf T}},b^{{\bf NT}}}(p), \label{vbtbnt}
\end{align}
with
\begin{align}
v^{b^{{\bf T}},b^{{\bf NT}}}(p) & \; :=
\frac{K+ \eta^{{\bf T}} \mathbb{E}[{\bf c}^{{\bf T}}(b^{{\bf T}},B^{1,{\bf T}}_{1}){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}] + \eta^{{\bf NT}} \mathbb{E}[\frac{{\bf c}^{{\bf NT}}(b^{{\bf NT}} , B^{{\bf NT}}_{1})}{1-p}{\bf 1}_{b^{{\bf NT}} \geq B^{{\bf NT}}_{1}}] }{\eta^{{\bf I}}+ \eta^{{\bf T}} \mathbb{P}[b^{{\bf T}}\geq B^{{\bf T}}_{1}] + \eta^{{\bf NT}} \mathbb{P}[b^{{\bf NT}}\geq B^{{\bf NT}}_{1}] +\eta^{{\bf S}}p }.
\end{align}
Therefore, we have by \eqref{Vlower}
\begin{align}
V^\star \; = \; \inf_{\beta\in\Pi_{OL}} V(\beta) & \geq \; \sum_{p\in \mathbb{P}_M} v(p) \; = \; \sum_{p\in \mathbb{P}_M} \inf_{b^{{\bf T}}, b^{{\bf NT}}\in \mathbb{R}_+} v^{b^{{\bf T}},b^{{\bf NT}}}(p).
\end{align}
Now, by considering the control $\beta^{\mathfrak{b^\star}}$ associated to the proportion-based bidding policy $\mathfrak{b}^\star$ $=$ $(\mathfrak{b}^{\star,{\bf T}},\mathfrak{b}^{\star,{\bf NT}})$ defined by
\begin{align} \label{boptim}
(\mathfrak{b}^{\star,{\bf T}}(p),\mathfrak{b}^{\star,{\bf NT}}(p)) & \in \; \argmin_{b^{{\bf T}}, b^{{\bf NT}}\in \mathbb{R}_+} v^{b^{{\bf T}},b^{{\bf NT}}}(p),
\end{align}
and retracing the above derivations, we see that all the inequalities turn into equalities. More precisely, the first inequality \eqref{inegV} becomes an equality whenever the bidding control used comes from a proportion-based policy. Indeed, in this case,
\begin{eqnarray*}
\mathbb{E}[{\bf c}^{{\bf T}}(b,B^{1,{\bf T}}_{1}){\bf 1}_{b\geq B^{1,{\bf T}}_{1}}]_{b:=(\beta^{\mathfrak{b}})^m_t}&=&\mathbb{E}[{\bf c}^{{\bf T}}(b,B^{1,{\bf T}}_{1}){\bf 1}_{b\geq B^{1,{\bf T}}_{1}}]_{b:=(1-X^{m,\beta^{\mathfrak{b}}}_{t^-})(\beta^{\mathfrak{b}})^m_t}\\
&=&(1-X^{m,\beta^{\mathfrak{b}}}_{t^-})\mathbb{E}\big[{\bf c}^{{\bf T}}(b,B^{1,{\bf T}}_{1}){\bf 1}_{b\geq B^{1,{\bf T}}_{1}}\big]_{b:=(\beta^{\mathfrak{b}})^m_t}
\end{eqnarray*}
and
\begin{eqnarray*}
\mathbb{E}[{\bf c}^{{\bf NT}}(b, B^{{\bf NT}}_{1}){\bf 1}_{b\geq B^{{\bf NT}}_{1}}]_{b:=(\beta^{\mathfrak{b}})^0_t}&=&\mathbb{E}[{\bf c}^{{\bf NT}}(b, B^{{\bf NT}}_{1}){\bf 1}_{b\geq B^{{\bf NT}}_{1}}]_{b:={\bf 1}_{p^{\beta^{\mathfrak{b}}}_{t^-}<1}(\beta^{\mathfrak{b}})^0_t}\\
&=&{\bf 1}_{p^{\beta^{\mathfrak{b}}}_{t^-}<1}\mathbb{E}[{\bf c}^{{\bf NT}}(b, B^{{\bf NT}}_{1}){\bf 1}_{b\geq B^{{\bf NT}}_{1}}]_{b:=(\beta^{\mathfrak{b}})^0_t}.
\end{eqnarray*}
The next steps of the proof thus lead to the equality
\begin{eqnarray*}
V(\beta^{\mathfrak{b^\star}}) &=& \mathbb{E}\Big[\sum_{p\in \mathbb{P}_M} \tilde H_{\tau_p^{\beta^{\mathfrak{b^\star}}}}(p) \Big],
\end{eqnarray*}
and the only other lower bound used is $\tilde H_{\tau_p^\beta}(p)\geq v(p)$; by definition of $\mathfrak{b}^\star$, this bound is attained by $\beta^{\mathfrak{b}^\star}$, i.e. we have $\tilde H_{\tau_p^{\beta^{\mathfrak{b}^\star}}}(p)= v(p)$, and thus $V(\beta^{\mathfrak{b}^\star})$ $=$ $\sum_{p\in \mathbb{P}_M} v(p)$, which implies that $\beta^{\mathfrak{b}^\star}$ is an optimal bidding map control: $V(\beta^{\mathfrak{b}^\star})$ $=$ $V^\star$.
\vspace{3mm}
\noindent {\it Case of a second-price auction rule, i.e. ${\bf c}^{{\bf T}}(b,B)={\bf c}^{{\bf NT}}(b,B)=B$}.
In this case, relation \eqref{eq-split-static} is written as
\begin{align}
& \quad \argmin_{\underset{m\in\llbracket 1, M(1-p)\rrbracket}{b^{m,{\bf T}}, b^{{\bf NT}}\in \mathbb{R}_+}} w(p;(b^{m,{\bf T}})_{m\in\llbracket 1,M(1-p)\rrbracket},b^{{\bf NT}}) \\
&\;= \; \Big(\argmax_{b^{{\bf T}}\in \mathbb{R}_+}\mathbb{E}\big[ \big( v(p) - B^{1,{\bf T}}_{1} \big) {\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}\big]\Big)^{M(1-p)} \\
& \hspace{3cm} \times \argmax_{b^{{\bf NT}}\in \mathbb{R}_+}\mathbb{E}\Big[\Big(v(p)- \frac{ B^{{\bf NT}}_{1}}{1-p}\Big){\bf 1}_{b^{{\bf NT}} \geq B^{{\bf NT}}_{1}}\Big]. \label{eq-split-static-second}
\end{align}
We then notice that the element $((v(p))_{m\in\llbracket 1, M(1-p)\rrbracket}, (1-p)v(p))$
belongs to the set \eqref{eq-split-static-second}. It follows that the infimum in \eqref{eq-infimum} (or \eqref{vbtbnt}) can be reduced to a single-parameter optimisation problem, namely
\begin{eqnarray*}
v(p) &=& \inf_{b\in\mathbb{R}_+} v^{b,(1-p)b}(p).
\end{eqnarray*}
Moreover, we obtain an optimal bidding map control $\beta^{\mathfrak{b}^\star}$ associated to a proportion-based bidding policy in the form $\mathfrak{b}^\star(p)$ $=$ $(\mrb^\star(p),(1-p)\mrb^\star(p))$, $p$ $\in$ $[0,1]$ with
\begin{eqnarray*}
\mrb^\star(p) & \in & \argmin_{b\in \mathbb{R}_+} v^{b,(1-p)b}(p).
\end{eqnarray*}
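To make the reduced problem concrete, here is a worked special case (purely illustrative: the uniform distribution below is an assumption, not part of the model). Suppose both maximal bids $B^{1,{\bf T}}_{1}$ and $B^{{\bf NT}}_{1}$ are uniform on $[0,1]$, so that for $b\in[0,1]$ we have $\mathbb{E}[B{\bf 1}_{b\geq B}]=b^2/2$ and $\mathbb{P}[b\geq B]=b$. The scalar criterion then becomes

```latex
% Illustrative special case (assumption): B^{1,T}_1 and B^{NT}_1 uniform on [0,1].
% With the split b^T = b, b^NT = (1-p)b, the criterion is a scalar function of b:
\begin{align*}
v^{b,(1-p)b}(p) \;=\; \frac{K+\tfrac{1}{2}\bar\eta\, b^2}{a+\bar\eta\, b},
\qquad b\in[0,1],
\quad a:=\eta^{{\bf I}}+\eta^{{\bf S}}p,
\quad \bar\eta:=\eta^{{\bf T}}+(1-p)\eta^{{\bf NT}}.
\end{align*}
```

Setting the derivative of the right-hand side to zero yields $\tfrac{1}{2}\bar\eta b^2+ab-K=0$, so that in this example $\mrb^\star(p)=\min\big(1,\,(\sqrt{a^2+2\bar\eta K}-a)/\bar\eta\big)$.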
\subsubsection{Proof of the well-posedness of Definition \ref{def-smallest-proportion-based-bid}} \label{sec:proof-def-smallest-proportion-based-bid}
Let us first prove that
\begin{align}
& \quad \argmin_{b^{{\bf T}}, b^{{\bf NT}}\in \mathbb{R}_+}v^{b^{{\bf T}},b^{{\bf NT}}}(p)\\
= & \;\; \argmax_{b^{{\bf T}}\in \mathbb{R}_+}\mathbb{E}\big[(v(p)-{\bf c}^{{\bf T}}(b^{{\bf T}},B^{1,{\bf T}}_{1})){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}\big] \times \argmax_{b^{{\bf NT}}\in \mathbb{R}_+}\mathbb{E}\Big[\Big(v(p)-\frac{{\bf c}^{{\bf NT}}(b^{{\bf NT}} , B^{{\bf NT}}_{1})}{1-p}\Big){\bf 1}_{b^{{\bf NT}} \geq B^{{\bf NT}}_{1}}\Big].
\end{align}
By definition, for any $b^{{\bf T}},b^{{\bf NT}}\in \mathbb{R}_+$, we have $v^{b^{{\bf T}},b^{{\bf NT}}}(p)$ $\geq$ $v(p)$, with equality if and only if $b^{{\bf T}},b^{{\bf NT}}$ reach the infimum in the definition of $v(p)$. This is formulated as
\begin{eqnarray*}
\frac{K+ \eta^{{\bf T}} \mathbb{E}[{\bf c}^{{\bf T}}(b^{{\bf T}},B^{1,{\bf T}}_{1}){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}] + \eta^{{\bf NT}} \mathbb{E}[\frac{{\bf c}^{{\bf NT}}(b^{{\bf NT}} , B^{{\bf NT}}_{1})}{1-p}{\bf 1}_{b^{{\bf NT}} \geq B^{{\bf NT}}_{1}}] }{\eta^{{\bf I}}+ \eta^{{\bf T}} \mathbb{P}[b^{{\bf T}}\geq B^{{\bf T}}_{1}] + \eta^{{\bf NT}} \mathbb{P}[b^{{\bf NT}}\geq B^{{\bf NT}}_{1}] +\eta^{{\bf S}}p }
&\geq& v(p),
\end{eqnarray*}
which is written equivalently as
\begin{align}
K-(\eta^{{\bf I}}+\eta^{{\bf S}}p)v(p) + \eta^{{\bf T}} \mathbb{E}\big[({\bf c}^{{\bf T}}(b^{{\bf T}},B^{1,{\bf T}}_{1})-v(p)){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}\big] & \\
\quad + \; \; \eta^{{\bf NT}} \mathbb{E}\Big[\Big(\frac{{\bf c}^{{\bf NT}}(b^{{\bf NT}} , B^{{\bf NT}}_{1})}{1-p}-v(p)\Big){\bf 1}_{b^{{\bf NT}} \geq B^{{\bf NT}}_{1}}\Big] & \geq \; 0, \label{eq-static-det0}
\end{align}
again with equality if and only if $b^{{\bf T}},b^{{\bf NT}}$ reach the infimum in the definition of $v(p)$. This means that $b^{{\bf T}},b^{{\bf NT}}$ reach the infimum in the definition of $v(p)$ if and only if they minimize \eqref{eq-static-det0} over $b^{{\bf T}},b^{{\bf NT}}\in \mathbb{R}_+$, i.e. if and only if
\begin{align}
(b^{{\bf T}},b^{{\bf NT}}) &\in \; \argmax_{b^{{\bf T}}\in \mathbb{R}_+}\mathbb{E}[(v(p)-{\bf c}^{{\bf T}}(b^{{\bf T}},B^{1,{\bf T}}_{1})){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}] \\
& \quad \times \argmax_{b^{{\bf NT}}\in \mathbb{R}_+}\mathbb{E}\Big[\Big(v(p)-\frac{{\bf c}^{{\bf NT}}(b^{{\bf NT}} , B^{{\bf NT}}_{1})}{1-p}\Big){\bf 1}_{b^{{\bf NT}} \geq B^{{\bf NT}}_{1}}\Big]. \label{eq-split-det}
\end{align}
It is thus clear that there exists a unique proportion-based policy $\mathfrak{b}_{min}^{\star}$, defined by
\begin{align}
\mathfrak{b}_{min}^{\star, {\bf T}}(p)&= \; \min \argmax_{b^{{\bf T}}\in \mathbb{R}_+}\mathbb{E}[(v(p)-{\bf c}^{{\bf T}}(b^{{\bf T}},B^{1,{\bf T}}_{1})){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}] \label{bstarTmin} \\
\mathfrak{b}_{min}^{\star, {\bf NT}}(p)&= \; \min \argmax_{b^{{\bf NT}}\in \mathbb{R}_+}\mathbb{E}\Big[\Big(v(p)-\frac{{\bf c}^{{\bf NT}}(b^{{\bf NT}} , B^{{\bf NT}}_{1})}{1-p}\Big){\bf 1}_{b^{{\bf NT}} \geq B^{{\bf NT}}_{1}}\Big], \label{bstarNTmin}
\end{align}
such that $(\mathfrak{b}_{min}^{\star, {\bf T}}(p),\mathfrak{b}_{min}^{\star, {\bf NT}}(p))$ reaches the infimum in the definition of $v(p)$, and that for any other $({\bf b}^{\star, {\bf T}},{\bf b}^{\star, {\bf NT}})$ reaching this infimum, we have
$\mathfrak{b}_{min}^{\star, {\bf T}}(p)\leq {\bf b}^{\star, {\bf T}}(p)$, $\mathfrak{b}_{min}^{\star, {\bf NT}}(p)\leq {\bf b}^{\star, {\bf NT}}(p)$.
\subsubsection{Proof of Proposition \ref{propsocialno-sensitivity}} \label{sec:prosen}
It is clear from the formula that for all $p\in \mathbb{P}_M$, $v(p)$ is decreasing in $\eta^{{\bf I}}$ and $\eta^{{\bf S}}$. Regarding the sensitivity in $\eta^{{\bf T}}$ and $\eta^{{\bf NT}}$, let us first prove that we have
\begin{align} \label{tildevp}
v(p) &= \; \inf_{{\bf b}^{{\bf T}},{\bf b}^{{\bf NT}}\in L(\Omega, \mathbb{R}_+), \protect\mathpalette{\protect\independenT}{\perp} B^{1,{\bf T}}_1,B^{1,{\bf NT}}_1} v^{{\bf b}^{{\bf T}},{\bf b}^{{\bf NT}}}(p),
\end{align}
i.e. that the $\inf$ can be taken over the set of random variables ${\bf b}^{{\bf T}},{\bf b}^{{\bf NT}}\in L(\Omega, \mathbb{R}_+), \protect\mathpalette{\protect\independenT}{\perp} B^{1,{\bf T}}_1,B^{1,{\bf NT}}_1$ instead of over $b^{{\bf T}},b^{{\bf NT}}\in \mathbb{R}_+$, without changing the infimum. Denote by $\tilde v(p)$ the right hand side of \eqref{tildevp}, and
let us check that $\tilde{v}(p)$ $=$ $v(p)$. By definition, for any ${\bf b}^{{\bf T}},{\bf b}^{{\bf NT}}\in L(\Omega, \mathbb{R}_+), \protect\mathpalette{\protect\independenT}{\perp} B^{1,{\bf T}}_1,B^{1,{\bf NT}}_1$,
we have $v^{{\bf b}^{{\bf T}},{\bf b}^{{\bf NT}}}(p)$ $\geq$ $\tilde v(p)$, with equality if and only if ${\bf b}^{{\bf T}},{\bf b}^{{\bf NT}}$ reach the infimum in the definition of $\tilde v(p)$. This is formulated as
\begin{align}
K-(\eta^{{\bf I}}+\eta^{{\bf S}}p)\tilde v(p) + \eta^{{\bf T}} \mathbb{E}\big[({\bf c}^{{\bf T}}({\bf b}^{{\bf T}},B^{1,{\bf T}}_{1})-\tilde v(p)){\bf 1}_{{\bf b}^{{\bf T}}\geq B^{1,{\bf T}}_{1}}\big] & \\
\quad + \; \; \eta^{{\bf NT}} \mathbb{E}\Big[\Big(\frac{{\bf c}^{{\bf NT}}({\bf b}^{{\bf NT}} , B^{{\bf NT}}_{1})}{1-p}-\tilde v(p)\Big){\bf 1}_{{\bf b}^{{\bf NT}} \geq B^{{\bf NT}}_{1}}\Big] & \geq \; 0, \label{eq-static-rand}
\end{align}
with equality if and only if ${\bf b}^{{\bf T}},{\bf b}^{{\bf NT}}$ reach the infimum in the definition of $\tilde v(p)$. This clearly means that ${\bf b}^{{\bf T}},{\bf b}^{{\bf NT}}$ reach the infimum in the definition of $\tilde v(p)$ if and only if they minimize \eqref{eq-static-rand} over
${\bf b}^{{\bf T}},{\bf b}^{{\bf NT}}\in L(\Omega, \mathbb{R}_+), \protect\mathpalette{\protect\independenT}{\perp} B^{1,{\bf T}}_1,B^{1,{\bf NT}}_1$. By conditioning, it is clear that \eqref{eq-static-rand} will reach the same infimum if minimized over $b^{{\bf T}},b^{{\bf NT}}\in\mathbb{R}_+$. This in particular implies that the infimum in $\tilde{v}(p)$ will be reached if it is only taken over $b^{{\bf T}},b^{{\bf NT}}\in\mathbb{R}_+$,
which means that $\tilde{v}(p)=v(p)$.
In the sequel, we stress the dependence of $v^{{\bf b}^{{\bf T}},{\bf b}^{{\bf NT}}}(p)$ and $v(p)$ in $\eta^{{\bf T}}$ by writing $v_{\eta^{{\bf T}}}^{{\bf b}^{{\bf T}},{\bf b}^{{\bf NT}}}(p)$ and $v_{\eta^{{\bf T}}}(p)$. Let us consider $\tilde{\eta}^{{\bf T}}<\eta^{{\bf T}}$ and let us denote by $Z$ a Bernoulli random variable with parameter $\frac{\tilde{\eta}^{{\bf T}}}{\eta^{{\bf T}}}$, independent of $(B^{1,{\bf T}}_1,B^{1,{\bf NT}}_1)$. For any $b^{{\bf T}},b^{{\bf NT}}\in\mathbb{R}_+$, we define ${\bf b}^{{\bf T}}=Zb^{{\bf T}}$ and ${\bf b}^{{\bf NT}}=b^{{\bf NT}}$. Notice then that
\begin{eqnarray*}
v_{\eta^{{\bf T}}}^{{\bf b}^{{\bf T}},{\bf b}^{{\bf NT}}}(p)=v_{\tilde{\eta}^{{\bf T}}}^{b^{{\bf T}},b^{{\bf NT}}}(p).
\end{eqnarray*}
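This identity can be checked directly from the thinning of the indicator by $Z$; the computation below assumes in addition that the maximal bid $B^{1,{\bf T}}_{1}$ is almost surely positive, so that a zero bid never wins a targeted auction. Using the independence of $Z$ and $\mathbb{P}[Z=1]=\tilde{\eta}^{{\bf T}}/\eta^{{\bf T}}$,

```latex
% Check of v_{eta^T}^{Zb^T,b^NT} = v_{tilde-eta^T}^{b^T,b^NT}
% (assumes B^{1,T}_1 > 0 a.s., so the event {0 >= B^{1,T}_1} is null).
\begin{align*}
\eta^{{\bf T}}\,\mathbb{E}\big[{\bf c}^{{\bf T}}(Zb^{{\bf T}},B^{1,{\bf T}}_{1}){\bf 1}_{Zb^{{\bf T}}\geq B^{1,{\bf T}}_{1}}\big]
&= \eta^{{\bf T}}\,\mathbb{P}[Z=1]\,\mathbb{E}\big[{\bf c}^{{\bf T}}(b^{{\bf T}},B^{1,{\bf T}}_{1}){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}\big]
= \tilde{\eta}^{{\bf T}}\,\mathbb{E}\big[{\bf c}^{{\bf T}}(b^{{\bf T}},B^{1,{\bf T}}_{1}){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}\big],\\
\eta^{{\bf T}}\,\mathbb{P}[Zb^{{\bf T}}\geq B^{1,{\bf T}}_{1}]
&= \tilde{\eta}^{{\bf T}}\,\mathbb{P}[b^{{\bf T}}\geq B^{1,{\bf T}}_{1}],
\end{align*}
```

so that the $\eta^{{\bf T}}$ terms in the numerator and denominator of $v_{\eta^{{\bf T}}}^{{\bf b}^{{\bf T}},{\bf b}^{{\bf NT}}}(p)$ coincide with the $\tilde{\eta}^{{\bf T}}$ terms of $v_{\tilde{\eta}^{{\bf T}}}^{b^{{\bf T}},b^{{\bf NT}}}(p)$.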
Then, we have
\begin{eqnarray*}
v_{\eta^{{\bf T}}}(p)&=&\inf_{{\bf b}^{{\bf T}},{\bf b}^{{\bf NT}}\in L(\Omega, \mathbb{R}_+), \protect\mathpalette{\protect\independenT}{\perp} B^{1,{\bf T}}_1,B^{1,{\bf NT}}_1} v_{\eta^{{\bf T}}}^{{\bf b}^{{\bf T}},{\bf b}^{{\bf NT}}}(p)\leq \inf_{b^{{\bf T}},b^{{\bf NT}}\in \mathbb{R}_+} v_{\eta^{{\bf T}}}^{Zb^{{\bf T}},b^{{\bf NT}}}(p)\\
&=& \inf_{b^{{\bf T}},b^{{\bf NT}}\in \mathbb{R}_+} v_{\tilde{\eta}^{{\bf T}}}^{b^{{\bf T}},b^{{\bf NT}}}(p)= v_{\tilde{\eta}^{{\bf T}}}(p),
\end{eqnarray*}
and thus $v_{\eta^{{\bf T}}}(p)\leq v_{\tilde{\eta}^{{\bf T}}}(p)$, which means that $v_{\eta^{{\bf T}}}(p)$ is decreasing in $\eta^{{\bf T}}$. The same argument shows that $v(p)$ is decreasing in $\eta^{{\bf NT}}$.
To summarize, $v(p)$ (and thus $\inf_{\beta\in \Pi_{OL}}V(\beta)$) is decreasing in all the model's intensity parameters $\theta=(\eta^{{\bf I}},\eta^{{\bf S}},\eta^{{\bf T}},\eta^{{\bf NT}})$.
Let us now study the monotonicity of the smallest optimal bid with respect to $\theta$. We then stress the dependence of $v^{b^{{\bf T}},b^{{\bf NT}}}(p)$, $v(p)$, $\mathfrak{b}_{min}^{\star,{\bf T}}(p)$ and $\mathfrak{b}_{min}^{\star,{\bf NT}}(p)$ in $\theta=(\eta^{{\bf I}},\eta^{{\bf S}},\eta^{{\bf T}},\eta^{{\bf NT}})$ by writing $v_{\theta}^{b^{{\bf T}},b^{{\bf NT}}}(p)$, $v_{\theta}(p)$, $\mathfrak{b}_{min,\theta}^{\star,{\bf T}}(p)$ and $\mathfrak{b}_{min,\theta}^{\star,{\bf NT}}(p)$.
Let us now consider $\tilde{\theta}=(\tilde{\eta}^{{\bf I}},\tilde{\eta}^{{\bf S}},\tilde{\eta}^{{\bf T}},\tilde{\eta}^{{\bf NT}})$ such that $\tilde{\eta}^{{\bf I}}\leq \eta^{{\bf I}}$, $\tilde{\eta}^{{\bf S}}\leq \eta^{{\bf S}}$, $\tilde{\eta}^{{\bf T}}\leq \eta^{{\bf T}}$, $\tilde{\eta}^{{\bf NT}}\leq \eta^{{\bf NT}}$. We then know from above that $v_{\theta}(p)\leq v_{\tilde{\theta}}(p)$.
Let us then prove that $\mathfrak{b}_{min,\theta}^{\star,{\bf T}}(p)\geq \mathfrak{b}_{min,\tilde{\theta}}^{\star,{\bf T}}(p)$. Assume on the contrary that $\mathfrak{b}_{min,\theta}^{\star,{\bf T}}(p)< \mathfrak{b}_{min,\tilde{\theta}}^{\star,{\bf T}}(p)$.
From \eqref{bstarTmin}, we have in particular that
$\mathfrak{b}_{min,\theta}^{\star,{\bf T}}(p)\in \argmax_{b^{{\bf T}}\in \mathbb{R}_+}\mathbb{E}[(v_\theta(p)-{\bf c}^{{\bf T}}(b^{{\bf T}},B^{1,{\bf T}}_{1})){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}]$,
and thus for $b^{{\bf T}}=\mathfrak{b}_{min,\tilde{\theta}}^{\star,{\bf T}}(p)$, we obtain
\begin{align}\label{ineq-1}
& \mathbb{E}[(v_{\theta}(p)-{\bf c}^{{\bf T}}(\mathfrak{b}_{min,\theta}^{\star,{\bf T}}(p),B^{1,{\bf T}}_{1})){\bf 1}_{\mathfrak{b}_{min,\theta}^{\star,{\bf T}}(p)\geq B^{1,{\bf T}}_{1}}] \\
\geq & \; \mathbb{E}[(v_{\theta}(p)-{\bf c}^{{\bf T}}(\mathfrak{b}_{min,\tilde{\theta}}^{\star,{\bf T}}(p),B^{1,{\bf T}}_{1})){\bf 1}_{\mathfrak{b}_{min,\tilde{\theta}}^{\star,{\bf T}}(p)\geq B^{1,{\bf T}}_{1}}].
\end{align}
On the other hand, since
$\mathfrak{b}_{min,\tilde{\theta}}^{\star,{\bf T}}(p)=\min \argmax_{b^{{\bf T}}\in \mathbb{R}_+}\mathbb{E}[(v_{\tilde{\theta}}(p)-{\bf c}^{{\bf T}}(b^{{\bf T}},B^{1,{\bf T}}_{1})){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}]$, by \eqref{bstarTmin},
we have that for all $b'^{{\bf T}}<\mathfrak{b}_{min,\tilde{\theta}}^{\star,{\bf T}}(p)$,
\begin{eqnarray*}
b'^{{\bf T}}\not\in \argmax_{b^{{\bf T}}\in \mathbb{R}_+}\mathbb{E}[(v_{\tilde{\theta}}(p)-{\bf c}^{{\bf T}}(b^{{\bf T}},B^{1,{\bf T}}_{1})){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}].
\end{eqnarray*}
Therefore, by taking $b'^{{\bf T}}=\mathfrak{b}_{min,\theta}^{\star,{\bf T}}(p)$ $<$ $\mathfrak{b}_{min,\tilde{\theta}}^{\star,{\bf T}}(p)$, we get
\begin{align} \label{ineq-2}
& \mathbb{E}[(v_{\tilde{\theta}}(p)-{\bf c}^{{\bf T}}(\mathfrak{b}_{min,\theta}^{\star,{\bf T}}(p),B^{1,{\bf T}}_{1})){\bf 1}_{\mathfrak{b}_{min,\theta}^{\star,{\bf T}}(p)\geq B^{1,{\bf T}}_{1}}] \\
< \; & \; \mathbb{E}[(v_{\tilde{\theta}}(p)-{\bf c}^{{\bf T}}(\mathfrak{b}_{min,\tilde{\theta}}^{\star,{\bf T}}(p),B^{1,{\bf T}}_{1})){\bf 1}_{\mathfrak{b}_{min,\tilde{\theta}}^{\star,{\bf T}}(p)\geq B^{1,{\bf T}}_{1}}].
\end{align}
By subtracting \eqref{ineq-1} from \eqref{ineq-2}, we obtain
\begin{eqnarray*}
\mathbb{E}[(v_{\tilde{\theta}}(p)-v_{\theta}(p)){\bf 1}_{\mathfrak{b}_{min,\theta}^{\star,{\bf T}}(p)\geq B^{1,{\bf T}}_{1}}] &<& \mathbb{E}[(v_{\tilde{\theta}}(p)-v_{\theta}(p)){\bf 1}_{\mathfrak{b}_{min,\tilde{\theta}}^{\star,{\bf T}}(p)\geq B^{1,{\bf T}}_{1}}],
\end{eqnarray*}
and thus $\mathbb{P}[\mathfrak{b}_{min,\theta}^{\star,{\bf T}}(p)\geq B^{1,{\bf T}}_{1}] > \mathbb{P}[\mathfrak{b}_{min,\tilde{\theta}}^{\star,{\bf T}}(p)\geq B^{1,{\bf T}}_{1}]$, which contradicts the inequality $\mathfrak{b}_{min,\theta}^{\star,{\bf T}}(p)< \mathfrak{b}_{min,\tilde{\theta}}^{\star,{\bf T}}(p)$.
This shows that $\mathfrak{b}_{min,\theta}^{\star,{\bf T}}(p)\geq \mathfrak{b}_{min,\tilde{\theta}}^{\star,{\bf T}}(p)$. The same argument applies to prove that $\mathfrak{b}_{min,\theta}^{\star,{\bf NT}}(p)\geq \mathfrak{b}_{min,\tilde{\theta}}^{\star,{\bf NT}}(p)$.
\vspace{1mm}
Let us now study the variations of the smallest optimal bid w.r.t. the proportion of informed people $p$. By definition of $v(p)$, we have the following properties:
\begin{itemize}
\item If there is only targeted advertising ($\eta^{{\bf NT}}=0$), then $v(p)$ is decreasing in $p$, and the above argument applied to two proportions $\tilde{p}<p$ (instead of two model parameters $\theta,\tilde{\theta}$) shows that
$\mathfrak{b}_{min}^{\star,{\bf T}}(\tilde{p})\geq \mathfrak{b}_{min}^{\star,{\bf T}}(p)$.
\item If there are no social interactions ($\eta^{{\bf S}}=0$), then $v(p)$ is increasing in $p$, and the above argument applied to two proportions $\tilde{p}<p$ (instead of two model parameters $\theta,\tilde{\theta}$) shows that
$\mathfrak{b}_{min}^{\star,{\bf T}}(\tilde{p})\leq \mathfrak{b}_{min}^{\star,{\bf T}}(p)$.
\end{itemize}
By \eqref{bstarNTmin}, we have
\begin{eqnarray*}
\mathfrak{b}_{min}^{\star,{\bf NT}}(p)&=&\min\argmax_{b^{{\bf NT}} \in\mathbb{R}_+}\mathbb{E}\Big[\Big(v(p)-\frac{{\bf c}^{{\bf NT}}(b^{{\bf NT}} , B^{{\bf NT}}_{1})}{1-p}\Big){\bf 1}_{b^{{\bf NT}} \geq B^{{\bf NT}}_{1}}\Big]\\
&=&\min\argmax_{b^{{\bf NT}} \in\mathbb{R}_+}\mathbb{E}\Big[\Big((1-p)v(p)-{\bf c}^{{\bf NT}}(b^{{\bf NT}} , B^{{\bf NT}}_{1})\Big){\bf 1}_{b^{{\bf NT}} \geq B^{{\bf NT}}_{1}}\Big].
\end{eqnarray*}
Notice that from the definition of $v(p)$ we see that $(1-p)v(p)$ is always decreasing in $p$. By the same argument as before applied to two proportions $\tilde{p}<p$ (instead of two model parameters $\theta,\tilde{\theta}$), and with $(1-p)v(p)$ playing the role of $v(p)$, we deduce that $\mathfrak{b}_{min}^{\star,{\bf NT}}(\tilde{p})\geq \mathfrak{b}_{min}^{\star,{\bf NT}}(p)$.
\vspace{1mm}
Let us prove that $\mathfrak{b}_{min}^{\star,{\bf T}}(p)\leq v(p)$ and $\mathfrak{b}_{min}^{\star,{\bf NT}}(p)\leq v(p)$. We check this result for $\mathfrak{b}_{min}^{\star,{\bf T}}(p)$, the other case being proved similarly. If the targeted ads are sold with first-price auctions, i.e. if ${\bf c}^{{\bf T}}(b,B)=b$, then notice that for all $b^{{\bf T}}>v(p)$, we have
\begin{eqnarray*}
\mathbb{E}[(v(p)-b^{{\bf T}}){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}] \; < \; 0 \; \leq \; \mathbb{E}[(v(p)-0){\bf 1}_{0\geq B^{1,{\bf T}}_{1}}].
\end{eqnarray*}
This implies that any bid $b^{{\bf T}}>v(p)$ cannot be optimal, and thus that the smallest optimal bid $\mathfrak{b}_{min}^{\star,{\bf T}}(p)$ is smaller than $v(p)$. If the targeted ads are sold with second-price auctions, we clearly have
\begin{eqnarray*}
\mathbb{E}[(v(p)-B^{1,{\bf T}}_{1}){\bf 1}_{v(p)\geq B^{1,{\bf T}}_{1}}] &=& \max_{b^{{\bf T}}\in\mathbb{R}_+}\mathbb{E}[(v(p)-B^{1,{\bf T}}_{1}){\bf 1}_{b^{{\bf T}}\geq B^{1,{\bf T}}_{1}}],
\end{eqnarray*}
and thus $v(p)$ is an optimal bid, which implies that the smallest optimal bid $\mathfrak{b}_{min}^{\star,{\bf T}}(p)$ is, again, smaller than $v(p)$. In particular, given that
\begin{eqnarray*}
v(p)\leq v^{0,0}(p)=\frac{K}{\eta^{{\bf I}}+p\eta^{{\bf S}}},
\end{eqnarray*}
the smallest optimal bids $\mathfrak{b}_{min}^{\star,{\bf T}}(p)$ and $\mathfrak{b}_{min}^{\star,{\bf NT}}(p)$ are bounded from above by $\frac{K}{\eta^{{\bf I}}+p\eta^{{\bf S}}}$.
\subsection{Proof of results in Section \ref{secsocialdiscount}}
\subsubsection{Proof of Theorem \ref{theo-socialdis}
}
Let us fix an open-loop bidding control $\beta$. From \eqref{defVsocialdis}, we have
\begin{align}
V(\beta)&= \; \mathbb{E}\Big[\int_0^\infty e^{-\rho t}(K(1-X^{\beta}_{t^-})dN^{\boldsymbol{D}}_t+{\bf 1}_{\beta_t\geq B^{{\bf T}}_{N_t^{{\bf T}}}} B^{{\bf T}}_{N_t^{{\bf T}}} d N_t^{{\bf T}})\Big]\\
&\geq \; \mathbb{E}\Big[\int_0^\infty e^{-\rho t} \Big(K(1-X^\beta_{t-})+(1-X^\beta_{t-}){\bf 1}_{\beta_t\geq B^{{\bf T}}_{N^{{\bf T}}_t}} B^{{\bf T}}_{N^{{\bf T}}_t} \eta^{{\bf T}}\Big)dt\Big]\\
& = \; \mathbb{E}\Big[\int_0^\infty \mathbb{P}[\tau> t] (1-X^\beta_{t-})\big(K+{\bf 1}_{\beta_t\geq B^{{\bf T}}_{N^{{\bf T}}_t}} B^{{\bf T}}_{N^{{\bf T}}_t} \eta^{{\bf T}}\big)dt\Big]\\
&=\; \mathbb{E}\Big[\int_0^\infty {\bf 1}_{\tau> t} (1-X^\beta_{t-})\big(K+{\bf 1}_{\beta_t\geq B^{{\bf T}}_{N^{{\bf T}}_t}} B^{{\bf T}}_{N^{{\bf T}}_t} \eta^{{\bf T}}\big)dt\Big]\\
&= \; \mathbb{E}\Big[\int_0^{\tau} (1-X^\beta_{t-})\big(K+{\bf 1}_{\beta_t\geq B^{{\bf T}}_{N^{{\bf T}}_t}} B^{{\bf T}}_{N^{{\bf T}}_t} \eta^{{\bf T}}\big)dt\Big],
\end{align}
where we introduced an independent random time $\tau$ with exponential distribution of parameter $\rho$.
Notice also that the first inequality becomes an equality if the bidding control $\beta$ makes null bids once the individual is informed.
Let us next consider a Poisson process $N$ with intensity $\rho$, whose first jump time is given by $\tau$, and which is independent of the other random variables,
and denote by $\tilde X^\beta$ the process satisfying the dynamics
\begin{eqnarray*}
\tilde{X}^{\beta}_{0^-}&=& 0\\
d\tilde{X}^{\beta}_t&=&(1-\tilde{X}^\beta_{t^-})(dN^{{\bf I}}_t+dN_t+{\bf 1}_{\beta_t \geq B^{{\bf T}}_{N_t^{{\bf T}}}}dN^{{\bf T}}_t), \quad t \geq 0.
\end{eqnarray*}
Notice that $\tilde{X}^{\beta}$ has exactly the same dynamics as $X^\beta$, except that there is an additional cause of transition to state $1$, given by the term $dN$. It is then clear that we have
\begin{eqnarray*}
\lefteqn{\mathbb{E}\Big[\int_0^{\tau} (1-X^\beta_{t-})(K+{\bf 1}_{\beta_t\geq B^{{\bf T}}_{N^{{\bf T}}_t}} B^{{\bf T}}_{N^{{\bf T}}_t} \eta^{{\bf T}})dt\Big]} \\
&=& \mathbb{E}\Big[\int_0^\infty (1-\tilde{X}^\beta_{t^-})(K+{\bf 1}_{\beta_t\geq B^{{\bf T}}_{N^{{\bf T}}_t}} B^{{\bf T}}_{N^{{\bf T}}_t} \eta^{{\bf T}})dt\Big]\\
&=& \mathbb{E}\Big[\int_0^\infty (K(1-\tilde{X}^\beta_{t^-})dN^{\boldsymbol{D}}_t+{\bf 1}_{\tilde{\beta}_t\geq B^{{\bf T}}_{N^{{\bf T}}_t}} B^{{\bf T}}_{N^{{\bf T}}_t} dN^{{\bf T}}_t)\Big],
\end{eqnarray*}
where $\tilde{\beta}_t=(1-\tilde{X}^\beta_{t^-})\beta_t$. Setting $\tilde{N}^{{\bf I}}=N^{{\bf I}}+N$, we obtain a Poisson process $\tilde{N}^{{\bf I}}$ with intensity $\eta^{\bf I}+\rho$, and the dynamics of $\tilde X^\beta$ can be rewritten as
\begin{eqnarray*}
\tilde{X}^{\beta}_{0^-} &=& 0\\
d\tilde{X}^{\beta}_t&=&(1-\tilde{X}^\beta_{t^-})(d\tilde{N}^{{\bf I}}_t+{\bf 1}_{\beta_t \geq B^{{\bf T}}_{N^{{\bf T}}_t}}dN^{{\bf T}}_t), \quad t \geq 0.
\end{eqnarray*}
The cost $V(\beta)$ is thus bounded from below by the cost associated with the bidding map control $(0,\tilde{\beta})$ of the problem in Section \ref{sec:socialno},
with a population of $M=1$ individual, with intensity $\eta^{\bf I}+\rho$ for the counting process of connections to the website carrying the information ${\bf I}$,
and where $\eta^{{\bf NT}}=\eta^{{\bf S}}=0$, i.e. the individual never connects to a website displaying non-targeted ads and does not socially interact.
From the result proved in Section \ref{sec:socialno}, we then know that $V(\beta)$ is bounded from below by:
\begin{eqnarray*}
V(\beta)&\geq& \inf_{b\in\mathbb{R}_+}\frac{K+ \eta^{{\bf T}}\mathbb{E}[B^{{\bf T}}_1{\bf 1}_{b\geq B^{{\bf T}}_1}]}{\eta^{{\bf I}}+\rho +\eta^{{\bf T}}
\mathbb{P}[b\geq B^{{\bf T}}_1]}.
\end{eqnarray*}
It is then direct to retrace this derivation with the particular bidding control $\beta^{b_\star}$ associated to the constant bidding policy $b^\star$ such that
\begin{eqnarray*}
b^\star&\in& \argmin_{b\in \mathbb{R}_+}\frac{K+ \eta^{{\bf T}}\mathbb{E}[B^{{\bf T}}_1{\bf 1}_{b\geq B^{{\bf T}}_1}]}{\eta^{{\bf I}}+\rho +\eta^{{\bf T}}\mathbb{P}[b\geq B^{{\bf T}}_1]},
\end{eqnarray*}
and to turn the inequalities into equalities. This concludes the proof in this case.
\subsubsection{Proof of Proposition \ref{propsocial-sensitivity}}
Notice that the optimal value $V^\star$ corresponds to the optimal value of the problem solved in Theorem \ref{theo-social-no-discount} with $M=1$, $\eta^{{\bf NT}}=0$ and $\eta^{{\bf I}}$ replaced by $\eta^{{\bf I}}+\rho$. The sensitivity of the optimal value and smallest optimal bids to the model's parameters directly follows.
\subsection{Proof of results in Section \ref{sec:compur}}
\subsubsection{Proof of Theorem \ref{theo-purchase}
}
The idea is again to reduce to the previous case. Given an open-loop bidding control $\beta$, we have by \eqref{defVpurchase} and denoting by $\tau^\beta$ $=$ $\inf\{ t\geq 0: X_{t}^\beta =1\}$:
\begin{eqnarray*}
V(\beta)&=& \mathbb{E}\Big[ e^{-\rho \tau^\beta}K-\int_0^\infty e^{-\rho t}{\bf 1}_{\beta_t\geq B^{{\bf T}}_{N^{{\bf T}}_t}} B^{{\bf T}}_{N^{{\bf T}}_t} d N^{{\bf T}}_t\Big]\\
&=& \mathbb{E}\Big[\int_{\tau^\beta}^\infty \rho e^{-\rho t}K\,dt-\int_0^\infty e^{-\rho t}{\bf 1}_{\beta_t\geq B^{{\bf T}}_{N^{{\bf T}}_t}} B^{{\bf T}}_{N^{{\bf T}}_t} d N^{{\bf T}}_t\Big]\\
&=& \mathbb{E}\Big[\int_0^\infty e^{-\rho t}\rho KX^\beta_{t-}dt-\int_0^\infty e^{-\rho t}{\bf 1}_{\beta_t\geq B^{{\bf T}}_{N^{{\bf T}}_t}} B^{{\bf T}}_{N^{{\bf T}}_t} d N^{{\bf T}}_t\Big].
\end{eqnarray*}
The problem is thus reduced to a continuous gain problem, with continuous reward $\rho K$ from the time of information. This continuous gain problem is then turned into a continuous cost problem as follows:
\begin{eqnarray*}
V(\beta)
&=& K-\mathbb{E}\Big[\int_0^\infty e^{-\rho t} (\rho K(1-X^\beta_{t-})dt+{\bf 1}_{\beta_t\geq B^{{\bf T}}_{N^{{\bf T}}_t}} B^{{\bf T}}_{N^{{\bf T}}_t}d N^{{\bf T}}_t)\Big]\\
&=& K-\mathbb{E}\Big[\int_0^\infty e^{-\rho t} (\rho K(1-X^\beta_{t-})dN^{\boldsymbol{D}}_t+{\bf 1}_{\beta_t\geq B^{{\bf T}}_{N^{{\bf T}}_t}} B^{{\bf T}}_{N^{{\bf T}}_t}d N^{{\bf T}}_t)\Big].
\end{eqnarray*}
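The last replacement of $dt$ by $dN^{\boldsymbol{D}}_t$ holds in expectation by the compensation formula, assuming (as the notation suggests; $N^{\boldsymbol{D}}$ is defined earlier in the paper) that $N^{\boldsymbol{D}}$ is a Poisson process with unit intensity, independent of the other processes: for any nonnegative predictable process $(f_t)_{t\geq 0}$,

```latex
% Compensation formula used to trade dt for dN^D_t
% (N^D assumed to be an independent Poisson process of unit intensity):
\begin{align*}
\mathbb{E}\Big[\int_0^\infty f_t\, dN^{\boldsymbol{D}}_t\Big]
\;=\; \mathbb{E}\Big[\int_0^\infty f_t\, dt\Big],
\qquad \text{here with } f_t = e^{-\rho t}\,\rho K\,(1-X^\beta_{t^-}).
\end{align*}
```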
We are reduced to the previous case (social marketing with discount factor). This concludes the proof.
\subsubsection{Proof of Proposition \ref{prop-sensitivity}}
The cost dual viewpoint reduces the problem to the social marketing problem with discount factor $\rho$ and continuous cost $\rho K$. This directly yields the sensitivity of the optimal value and of the smallest optimal bid with respect to all the model's parameters except $\rho$, since here $\rho$ also appears in the continuous cost $\rho K$. For the sensitivity in $\rho$, it is convenient to use the gain viewpoint on the optimal value in Theorem \ref{theo-purchase}. In this expression, it is clear that $V^\star$ is decreasing in $\rho$. From this property, an argument similar to the one used several times in the proof of Proposition \ref{propsocialno-sensitivity} allows us to conclude that the smallest optimal bid is increasing in $\rho$.
\subsection{Proof of Theorem \ref{theo-sus}
}
Given an open-loop bidding control $\beta$, we have by \eqref{defVsus}
\begin{eqnarray*}
V(\beta)&=& \mathbb{E}\Big[\sum_{n\in \mathbb{N}} e^{-\rho(\tau^\beta+n)} K - \int_0^\infty e^{-\rho t} {\bf 1}_{\beta_t\geq B^{{\bf T}}_{N^{{\bf T}}_t}} B^{{\bf T}}_{N^{{\bf T}}_t} d N^{{\bf T}}_t\Big]\\
&=& \mathbb{E}\Big[ e^{-\rho \tau^\beta} \frac{K}{1-e^{-\rho}}- \int_0^\infty e^{-\rho t} {\bf 1}_{\beta_t\geq B^{{\bf T}}_{N^{{\bf T}}_t}} B^{{\bf T}}_{N^{{\bf T}}_t} d N^{{\bf T}}_t\Big].
\end{eqnarray*}
This reduces the problem to the purchase-based case studied in the previous section, and concludes the proof.
\section{Conclusion} \label{sec:conclusion}
In this paper, we have developed several targeted advertising models with semi-explicit solutions. An important feature of these models is a very concrete description of the ``modern'' advertising problem.
One or several individuals are modelled through their behaviours, which involve connections to various types of websites at random times as well as social interactions. The advertising auctions are also precisely defined, by considering various auction rules (second-price auctions, first-price auctions). Several variants of our models, which we did not study in this work for the sake of conciseness, can easily be analysed with the techniques developed here. For instance, in the first three models with a single Individual, one could study a model where first-price auctions and second-price auctions coexist, which would lead to more general formulas. There is also room to enrich the models while keeping them tractable with semi-explicit solutions: in the fourth model with an interacting population, it might be possible to add some heterogeneity in the population's connections and social interactions. It would also be interesting and useful in practice to allow individuals
not to be receptive to the information with some probability (hence not purchasing a product, or continuing to behave ``dangerously'').
Another opportune development, regarding the auctions, would be to model the maximal bid of the other bidders more realistically than by an i.i.d. sequence of random variables, for instance as a Markov chain. Yet another approach would be to explicitly model several bidding agents, for instance playing according to the so-called fictitious play principle. In such a game, several bidding Agents have pieces of Information to diffuse to Individuals, and each time they declare their bids, they follow the strategies from the model studied in this paper, modelling the other bidders' maximal bid as a sequence of i.i.d. random variables distributed as the empirical distribution of past maximal bids. Notice that this would require the Agents to constantly recalibrate their model, as new auctions modify the empirical distribution of past maximal bids.
\bibliographystyle{plain}
\label{Intro}
Weinberg's asymptotic safety scenario \cite{Weinberg:1980gg,Weinproc1} strives toward constructing a quantum theory of gravity based on a non-Gaussian fixed point (NGFP) of the gravitational renormalization group (RG) flow (see \cite{Niedermaier:2006wt,robrev,Reuter:2012id} for reviews). This NGFP is supposed to control the behavior of the theory at high energies and render it safe from unphysical divergences. In order to ensure the predictivity of the construction, the NGFP has to come with a finite number of UV-relevant directions, i.e., the space of RG trajectories attracted to the fixed point at high energies must be finite-dimensional. Moreover, there is the phenomenological requirement that at least one of the RG trajectories emanating from the NGFP possesses a ``classical limit'' where the Einstein-Hilbert action constitutes a good approximation of the gravitational interaction. The resulting quantum theory of gravity is called Quantum Einstein Gravity (QEG).
A central element for investigating Asymptotic Safety is the functional renormalization group equation (FRGE) for the gravitational average action $\Gamma_k$ \cite{Reuter:1996cp}, which allows the systematic construction of non-perturbative approximations of the theory's $\b$ functions. Projecting the RG flow onto subspaces of successively increasing complexity, the existence of a NGFP has been established in the Einstein-Hilbert truncation \cite{souma,oliver1,frank1}, the $R^2$ truncation \cite{oliver3,Lauscher:2002sq}, $f(R)$ truncations \cite{Codello:2007bd}, and truncations including a Weyl-squared term \cite{Benedetti:2009rx}. Moreover, refs.\ \cite{frank1,Fischer:2006at} studied the properties of the NGFP in spacetime dimensions greater than four, while the quantum effects in the ghost sector have been investigated in \cite{Eichhorn:2009ah}. Furthermore, boundary terms were considered in \cite{Becker:2012js}. QEG with Lorentzian signature was investigated in \cite{Manrique:2011jc}, and a computer-based algorithm for evaluating the flow equations was proposed in \cite{Benedetti:2010nr}. Besides providing striking evidence for the existence of the NGFP underlying Asymptotic Safety, the works including higher-derivative interactions also provided strong hints that the fixed point possesses a finite number of relevant directions.
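For orientation, the FRGE of \cite{Reuter:1996cp} takes the schematic one-loop form below (conventions for the cutoff and field content vary between the cited works; $\mathcal{R}_k$ denotes the IR regulator and ${\rm STr}$ a supertrace over the metric fluctuations and ghosts):

```latex
% Schematic form of the functional renormalization group equation
% for the gravitational average action (Wetterich/Reuter form):
\begin{equation*}
k\,\partial_k\,\Gamma_k \;=\; \frac{1}{2}\,{\rm STr}\!\Big[\big(\Gamma_k^{(2)}+\mathcal{R}_k\big)^{-1}\,k\,\partial_k\,\mathcal{R}_k\Big].
\end{equation*}
```

The $\b$ functions of a given truncation are obtained by projecting the right-hand side onto the operators retained in the ansatz for $\Gamma_k$.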
While there is solid evidence in favor of a NGFP in quantum gravity, a lot less is known about the IR physics of the RG trajectories emanating from it.
In the case of the Einstein-Hilbert truncation the RG trajectories have been classified in \cite{frank1} (also see \cite{Litim:2012vz} for a recent account) and are shown in Fig.\ \ref{fig:eh}.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.4\textwidth]{rgflow}
\caption{Classification of the RG flow in the Einstein-Hilbert truncation \cite{Reuter:2001ag}.}
\label{fig:eh}
\end{center}
\end{figure}
All trajectories with positive Newton's constant are attracted to the NGFP in the UV and develop a classical regime close to the $\lambda$ axis.
Based on this classification, the RG trajectory realized by Nature has been identified in \cite{h3}.
Since the $\b$ functions of more sophisticated truncations quickly become very involved, very little is known about the dynamics of the RG flow beyond the Einstein-Hilbert case, however. One aim of the present work is to fill this gap by constructing families of trajectories originating from the full flow equations of the $R^2$ truncation in $d=3$ and $d=4$ spacetime dimensions.
Our motivation for this study is two-fold. With respect to the fundamental aspects underlying Asymptotic Safety, it is important to understand to what extent the features of the RG flow shown in Fig.\ \ref{fig:eh} remain robust when the effects of additional running couplings are included. In particular, the existence of a classical regime can only be established by studying the dynamics of the RG flow and the interplay of different fixed points. Surprisingly, our analysis of the $R^2$ truncation also provides further insights into the IR fixed point recently discussed in \cite{Donkin:2012ud,Nagy:2012rn,Litim:2012vz}.
A more phenomenological motivation arises from studying quantum effects in gravitational systems via an ``RG improvement'', taking into account the scale-dependence of the gravitational coupling constants when studying, e.g., black holes \cite{reu:bh,reu:erick1}, cosmology \cite{reu:cosmo1,reu:entropy,reu:cosmo2,reu:elo,reu:wein3,reu:h1,reu:Ward:2008sm,reu:Bonanno:2010bt,Hindmarsh:2011hx}, or galaxy rotation curves \cite{reu:h2}. The RG improvement requires rather explicit knowledge of the RG trajectories underlying the physical system. Thus our results may be used to investigate the robustness of previous studies.
As a first application in this direction, we extend the formalism of \cite{Lauscher:2005qz,Reuter:2011ah} to study the multi-fractal properties of the effective QEG spacetimes captured by the spectral dimension $D_s$. The interesting question from the functional renormalization group point of view is whether the fractal properties found within the Einstein-Hilbert truncation \cite{Reuter:2011ah} remain valid in higher truncations and whether there are new features specific to the higher-derivative ansatz. In this sense, the $R^2$ truncation constitutes a natural choice for computing on-shell corrections to the fractal properties of spacetime observed in the Einstein-Hilbert case.
Notably, the interest in the spectral dimension of spacetime is not limited to the functional renormalization group, since it is also accessible within other approaches to quantum gravity, such as causal dynamical triangulations (CDT) \cite{Ambjorn:2005db}, Euclidean dynamical triangulations (EDT) \cite{Coumbe:2012qr}, loop quantum gravity and spin foam models \cite{Modesto:2008jz}, or in the presence of a minimal length scale \cite{Modesto:2009qc}. Studying the spectral dimension from various different angles may unravel universal features in the quantum structure of spacetime. On more phenomenological grounds, one can build toy models of spacetime which encompass special features of the spectral dimension \cite{Giasemidis:2012qk}. Among further developments, a fractional differential calculus \cite{Calcagni:2011kn} was used in \cite{Calcagni:2009kc,Arzano:2011yt} to assemble spacetimes with the fractal features observed in quantum gravity. Thus $D_s$ constitutes a quite useful probe for quantum gravity effects.
The remaining parts of the paper are organized as follows. The phase diagrams of the $R^2$ truncation are constructed in section \ref{NGFPdata}. Section \ref{sec:diffusion} introduces the formalism for investigating the spectral dimension $D_s$ and we give explicit examples in section \ref{sec:spectral}. We close with a discussion of our results in section \ref{sec:conclusion}.
\section{Fixed points, phase diagrams and Asymptotic Safety}
\label{NGFPdata}
The gravitational part of the $R^2$ truncation takes the form
\begin{equation}\label{ansatz}
\Gamma_k^{\rm grav}\!\!\! = \!\!\! \int \!\! d^dx \sqrt{g} \left[ \tfrac{1}{16 \pi G_k} \left( - R + 2 \bar{\lambda}_k \right) + \tfrac{1}{\bar{b}_k} R^2 \right] \, .
\end{equation}
The $\b$ functions governing the dependence of the dimensionful Newton's constant $G_k$, cosmological constant $\bar{\lambda}_k$ and $R^2$ coupling $\bar{b}_k$ on the RG scale $k$
have been derived in \cite{Lauscher:2002sq} for general dimension $d$ and regulator $\mathcal{R}_k$. In order to facilitate our numerical analysis, we will work with the optimized cutoff \cite{Litim:opt} throughout.
\subsection{$\b$ functions of the $R^2$ truncation}
In order to study the RG flow resulting from the ansatz \eqref{ansatz}, it is convenient to work with the dimensionless couplings
\begin{equation}\label{dimless}
g_k = \tfrac{G_k}{k^{2-d}} \, , \quad \lambda_k = \tfrac{\bar{\lambda}_k}{k^2} \, , \quad b_k = \tfrac{\bar{b}_k}{k^{4-d}} \, .
\end{equation}
Their scale dependence is governed by the $\b$ functions
\begin{equation}
\begin{split} \label{beta}
& \partial_t g_k = \beta_g(g, \lambda, b) \, , \\
& \partial_t \lambda_k = \beta_\lambda(g, \lambda, b) \, , \\
& \partial_t b_k = \beta_b(g, \lambda, b) ,
\end{split}
\end{equation}
with $t \equiv \ln k$ being the RG time.
Their explicit form can be found in \cite{Lauscher:2002sq} to which we refer for further details.
There the $\beta$ functions \eqref{beta} have been derived with the help of the transverse-traceless decomposition
of the fluctuation fields. This decomposition gives rise to ``zero-mode'' contributions proportional to $\delta_{d,4}$.
A detailed investigation shows that these terms give rise to additional singularities of the $\b$ functions, which
also have a significant influence on the properties of the RG flow in the UV (see below for a more detailed discussion).
Since there is no reason to expect that similar singularities arise when the computation is carried out without the
use of the transverse-traceless decomposition, we remove these contributions in four-dimensional calculations.\footnote{Alternatively, this can be seen as analyzing the ``regularized'' system of $\b$ functions \eqref{beta} in $d=4-\epsilon$ dimensions.}
When studying the RG flow arising from \eqref{beta}, we first note
\begin{equation}\label{planes}
\beta_g(g, \lambda, b)|_{g = 0} = 0 \, , \qquad \beta_b(g, \lambda, b)|_{b = 0} = 0 \, ,
\end{equation}
which implies that RG trajectories cannot cross the $g=0$ plane or the $b=0$ plane. As a consequence, an RG trajectory starting
with a positive Newton's constant cannot dynamically pass to a regime with negative Newton's constant. Eq.\ \eqref{planes} does not ensure
the positivity of $b_k$, however, since the coupling $b_k$ can change sign by passing through $1/b_k = 0$, which is a regular point of the $\b$ functions.
In addition, the $\b$ functions \eqref{beta} contain a singular locus where the denominators
of the anomalous dimensions $\eta_N \equiv \partial_t \ln g_k$ and $\eta_b \equiv \partial_t \ln b_k$ vanish.
For $d=4$, the resulting singular locus is depicted in Fig.\ \ref{fig:poles}.
\begin{figure}[b!]
\begin{center}
\includegraphics[width=0.45\textwidth]{pole}
\caption{The singular locus in theory space where the anomalous dimensions $\eta_N$ and $\eta_b$ diverge.}
\label{fig:poles}
\end{center}
\end{figure}
The final ingredients for understanding the RG flow of the theory are the fixed points where $\beta_\alpha(u^*) = 0$ for all $\alpha = 1, 2, \cdots$. In
the vicinity of such a fixed point of the RG equations \eqref{beta}, the linearized flow is governed by the Jacobi matrix
${\bf B}=(B_{\alpha \gamma})$, $B_{\alpha \gamma}\equiv\partial_\gamma \beta_\alpha(u^*)$:
\begin{eqnarray}
\label{reu:H2}
k\,\partial_k\,{u}_\alpha(k)=\sum\limits_\gamma B_{\alpha \gamma}\,\left(u_\gamma(k)
-u_{\gamma}^*\right)\;.
\end{eqnarray}
The general solution to this equation reads
\begin{equation}
\label{linflow}
u_\alpha(k)=u_{\alpha}^*+\sum\limits_I C_I\,V^I_\alpha\,
\left(\frac{k_0}{k}\right)^{\theta_I} \, ,
\end{equation}
where the $V^I$'s are the right-eigenvectors of ${\bf B}$ with eigenvalues
$-\theta_I$, i.e., $\sum_{\gamma} B_{\alpha \gamma}\,V^I_\gamma =-\theta_I\,V^I_\alpha$. Since ${\bf B}$ is, in general, not symmetric, the $\theta_I$ are not guaranteed to be real.
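In practice, the stability coefficients are extracted by diagonalizing ${\bf B}$ numerically. The following sketch illustrates the procedure on a toy system (the beta functions below are illustrative placeholders, not the $R^2$ expressions of this work): $\theta_I$ is computed as minus the eigenvalues of a finite-difference Jacobian.

```python
import numpy as np

def beta(u):
    """Toy beta functions (illustrative placeholders): canonical d=4 scaling
    of (lambda, g) plus a schematic quadratic correction; GFP at the origin."""
    lam, g = u
    return np.array([-2.0 * lam + g, 2.0 * g - g**2])

def stability_coefficients(beta, u_star, eps=1e-6):
    """theta_I = minus the eigenvalues of B_{ab} = d beta_a / d u_b at u_star,
    with the Jacobian built from central finite differences."""
    n = len(u_star)
    B = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        B[:, j] = (beta(u_star + e) - beta(u_star - e)) / (2.0 * eps)
    return -np.linalg.eigvals(B)

theta = np.sort(stability_coefficients(beta, np.zeros(2)).real)
print(theta)  # the canonical exponents {-2, 2} at the Gaussian fixed point
```

The same routine applies verbatim to any truncation once its $\b$ functions are available in closed form.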
\subsection{The phase diagram in $d=4$}
In $d=4$ and for an exponential cutoff, the fixed-point structure of the system \eqref{beta} has already been analyzed in \cite{Lauscher:2002sq}. Besides the Gaussian fixed point (GFP) one encounters a
NGFP at
$\{\lambda^*, g^*, b^*\} = \{ 0.330, 0.292, 183.5\}$
with three UV-relevant eigendirections
\begin{equation}\label{FPexp}
\theta_{1,2} = 2.15 \pm 3.79 i \, , \qquad \theta_3 = 28.8 \, .
\end{equation}
In order to analyze the corresponding RG flow on phase space, we first repeat this analysis using the optimized cutoff. This also gives rise to a GFP and a NGFP.
\noindent
{\bf Gaussian Fixed Point} \\
Firstly, the system \eqref{beta} exhibits a GFP,
\begin{equation}\label{GFP4}
{\rm GFP}: \qquad \{\lambda^*, g^*, b^* \} = \{ 0 , 0 , 0 \}.
\end{equation}
Its stability coefficients are
\begin{equation}
\{\theta_1, \theta_2, \theta_3\} = \{2, 0, -2\} \, ,
\end{equation}
and thus correspond to the mass dimensions of the couplings. The linearized stability analysis shows that we have one IR-attractive, one IR-repulsive and one marginal eigendirection, in agreement with standard power counting. Since the GFP lies on the singular locus shown in Fig.\ \ref{fig:poles}, the corresponding stability matrix and its eigendirections depend on
the precise order in which the limit $\{\lambda^*, g^*, b^* \} \rightarrow \{ 0 , 0 , 0 \}$ is taken. To obtain a rough idea of the behavior close to the GFP, one can consider the following limits, which are independent of the direction of approach
\begin{eqnarray}
\partial_t \lambda_k|_{g = 0, b = 0} &=& -2 \lambda_k + \mathcal{O}(\lambda^2_k) \, ,\nonumber\\
\partial_t g_k|_{\lambda = 0, b = 0} &=& 2 g_k + \mathcal{O}(g^2_k) \, ,\\
\partial_t b_k|_{g = 0, \lambda = 0} &=& - \tfrac{1}{(4\pi)^2} \tfrac{419}{1080} b^2_k \, .\nonumber
\end{eqnarray}
Since $\partial_t b_k$ is quadratic in $b$ it is IR-repulsive for positive $b$ and IR-attractive for negative $b$.
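This can be made explicit: denoting $A \equiv \tfrac{1}{(4\pi)^2} \tfrac{419}{1080}$, the flow equation $\partial_t b_k = -A\, b_k^2$ integrates, to this leading order, to
\begin{equation}
b_k = \frac{b_{k_0}}{1 + A\, b_{k_0} \, \ln (k/k_0)} \, ,
\end{equation}
so that, flowing toward the IR ($k < k_0$), a positive $b_{k_0}$ grows and diverges at a finite scale, while a negative $b_{k_0}$ approaches zero logarithmically.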
\noindent
{\bf Non-Gaussian Fixed Point} \\
In addition, the four-dimensional RG flow gives rise to a unique NGFP, which is located at $\{ \lambda^* \, , \, g^* \, , \, b^* \, \} = \{ 0.170, 0.754, 336.2 \}$. Its stability coefficients are
\begin{equation}\label{NGFPd4}
\theta_1 = 3.95 \, , \qquad \theta_{2,3} = 1.56 \pm 3.31 i \, .
\end{equation}
The positive real part of the critical exponents indicates that
any RG trajectory in the vicinity of the fixed point is
attracted to the NGFP in the UV while the non-vanishing
imaginary part of $\theta_{2,3}$ shows that the trajectories will spiral into the NGFP.
Notably, the critical exponent $\theta_1$ is significantly smaller than the one found in eq.\ \eqref{FPexp}.
This difference can be traced back to the zero-mode contributions in the $\b$ functions. Including these terms,
the optimized cutoff actually gives rise to \emph{two} NGFPs
\begin{equation}
\begin{split}
{\rm NGFP}_1: \; \{ \lambda^* \, , \, g^* \, , \, b^* \, \} = \{ 0.163, 0.744, 345 \} \\
{\rm NGFP}_2: \; \{ \lambda^* \, , \, g^* \, , \, b^* \, \} = \{ 0.173, 0.696, 455 \}
\end{split}
\end{equation}
with
\begin{equation}
\begin{split}
{\rm NGFP}_1: \; & \theta_1 = 3.49 \, , \quad \theta_{2,3} = 1.65 \pm 3.10 i \, , \\
{\rm NGFP}_2: \; & \theta_1 = 25.6 \, , \quad \theta_{2,3} = 1.40 \pm 2.78 i \, .
\end{split}
\end{equation}
The critical exponents of the second fixed point are very close to those found in the analysis of \cite{Lauscher:2002sq}.
When the zero-mode contribution is removed, ${\rm NGFP}_2$ vanishes and ${\rm NGFP}_1$ falls on top of the NGFP \eqref{NGFPd4}. Remarkably, it was found in \cite{Codello:2007bd} that the largest critical exponent at the NGFP is of order $25$ for the $R^2$ truncation and approximately $2$ when the $R^3$ term is taken into account as well. Thus our fixed point, whose largest critical exponent is $\theta_1 = 3.95$, fits well with the higher-order results.
\noindent
{\bf Trajectories} \\
Starting at the NGFP, there are several types of trajectories flowing toward the IR. In Fig.\ \ref{fig:flows4d}
we show a particular sample of trajectories which exhibit a crossover to the GFP and are thus likely
to give rise to a classical regime.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.45\textwidth]{flows4d}
\caption{A sample of possible trajectories including the examples A and B. The fixed points are marked with dots.}
\label{fig:flows4d}
\end{center}
\end{figure}
In analogy to the phase diagram of the Einstein-Hilbert truncation depicted in Fig.\ \ref{fig:eh},
we distinguish RG trajectories of Type Ia and IIIa, according to their IR behavior as follows:
Upon their crossover, trajectories of Type Ia run toward negative values of $\lambda$. For $k \rightarrow 0$, they
approach the point $\{\lambda, g, b\}=\{-\infty, 0, 0\}$ and exhibit a classical regime with a negative value of the dimensionful cosmological
constant $\bar{\lambda}_k$. In contrast, the trajectories of Type IIIa leave the GFP regime in the direction of positive cosmological constant. They may first flow
parallel to the $\lambda$ axis toward increasingly positive values of $\lambda_k$, where they
develop a classical regime with a positive cosmological constant $\bar{\lambda}_k$. Before
hitting the singular locus depicted in Fig.\ \ref{fig:poles}, however, the trajectories turn
essentially perpendicular to the $\lambda$-$g$ plane and flow toward increasing values of $b_k$. This turn
can already occur at rather small values of $\lambda_k$, in which case the flow is along the $b$ axis.
Trading the coupling $b_k$ for $1/b_k$ shows that the $R^2$ coupling changes sign. In the end, all these trajectories tend toward the point $\{\lambda, g, b\}=\{1/2, 0, 0^-\}$.
Note that these observations are in perfect agreement with the recent suggestions
\cite{Donkin:2012ud,Nagy:2012rn,Litim:2012vz} that the corresponding point $\{\lambda, g\}=\{1/2,0\}$
in the Einstein-Hilbert truncation actually constitutes an IR fixed point.
For later purposes, we marked two explicit Type IIIa sample trajectories {\bf A} and {\bf B}.
These arise from the initial conditions
\begin{equation} \label{example4d}
\begin{split}
{\rm \bf A}: \; & \left\{ \lambda_{\rm ini} \, , \, g_{\rm ini} \, , \, b_{\rm ini} \, \right\} = \left\{ 0.4999 \, , \, 10^{-10} \, , \, 10^9 \right\} \, , \\
{\rm \bf B}: \; & \left\{ \lambda_{\rm ini} \, , \, g_{\rm ini} \, , \, b_{\rm ini} \, \right\} = \left\{ 10^{-5} \, , \, 10^{-8} \, , \, 500 \right\} \, .
\end{split}
\end{equation}
They will form the basis for our discussion of the spectral dimension in section \ref{sec:spectral}.
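Numerically, such trajectories are obtained by integrating the system \eqref{beta} toward the IR from initial conditions like \eqref{example4d}. The following sketch illustrates the procedure with schematic placeholder $\b$ functions that only mimic the qualitative structure $\beta_g \propto g$ and $\beta_b \propto b$ near the GFP; the full $R^2$ expressions of \cite{Lauscher:2002sq} are too lengthy to reproduce here.

```python
import numpy as np

# Placeholder beta functions (illustrative only, NOT the R^2 expressions).
# They respect beta_g ~ g and beta_b ~ b, so trajectories cannot cross the
# g = 0 or b = 0 planes, mirroring eq. (planes).
def rg_flow(u):
    lam, g, b = u
    beta_lam = -2.0 * lam + g / (4.0 * np.pi)
    beta_g = 2.0 * g - g**2 / (4.0 * np.pi)
    beta_b = -g * b / (4.0 * np.pi)**2
    return np.array([beta_lam, beta_g, beta_b])

def integrate(u0, t0=0.0, t1=-5.0, n=2000):
    """Fourth-order Runge-Kutta integration of du/dt = rg_flow(u),
    from the UV (t0) toward the IR (t1 < t0), with t = ln k."""
    h = (t1 - t0) / n
    u = np.array(u0, dtype=float)
    traj = [u.copy()]
    for _ in range(n):
        k1 = rg_flow(u)
        k2 = rg_flow(u + 0.5 * h * k1)
        k3 = rg_flow(u + 0.5 * h * k2)
        k4 = rg_flow(u + h * k3)
        u = u + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(u.copy())
    return np.array(traj)

# Initial condition in the spirit of trajectory B
traj = integrate([1e-5, 1e-8, 500.0])
print(traj[:, 1].min() > 0.0)  # True: the dimensionless Newton coupling stays positive
```

The plots of Figs.~\ref{fig:flows4d} and \ref{fig:flows3d} result from exactly this kind of integration, applied to the full $\b$ functions.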
Finally, we remark that we have not been able to construct an RG trajectory that connects
the NGFP with the GFP and would thus constitute the analogue of the separatrix (Type IIa trajectory) shown in Fig.\ \ref{fig:eh}.
The non-existence of a separatrix in the $R^2$ truncation is caused by the singularity structure of the $\b$ functions:
the RG flow away from the NGFP cannot be aligned with the IR-attractive eigendirection of the GFP. We can fine-tune the flow such that a trajectory passes arbitrarily close to the GFP,
but there is no solution connecting the two fixed points.
\subsection{The phase diagram in $d=3$}
In this subsection we repeat the discussion of the previous one for the case of three spacetime dimensions. The three-dimensional case is well suited for comparison with Monte Carlo data, see e.g.\ \cite{Benedetti:2009ge}, since computations in three dimensions are less expensive within the CDT framework.
\noindent
{\bf Gaussian Fixed Point} \\
Again, the GFP is situated at $\{\lambda^*, g^*, b^* \} = \{ 0 , 0 , 0 \}$. Its associated stability coefficients are given by
\begin{equation}
\{\theta_1, \theta_2, \theta_3\} = \{2, 1, -1\} \, .
\end{equation}
These correspond to the mass dimensions of the (dimensionful) coupling constants. Eq.\ \eqref{linflow} then indicates that there are two IR-repulsive and one IR-attractive eigendirection. As in four dimensions, the stability matrix and its eigendirections depend on the direction of the approach to the GFP, which is again situated on the singular locus. The limits
\begin{eqnarray}\label{eq:linear}
\partial_t \lambda_k|_{g = 0, b = 0} &=& -2 \lambda_k + \mathcal{O}(\lambda^2_k) \, ,\nonumber\\
\partial_t g_k|_{\lambda = 0, b = 0} &=& g_k + \mathcal{O}(g^2_k) \, ,\\
\partial_t b_k|_{g = 0, \lambda = 0} &=& -b_k - \tfrac{1}{(4\pi)^2} \tfrac{79}{54} b_k^2 \, ,\nonumber
\end{eqnarray}
again are independent of the direction of the approach and give a rough idea of the behavior close to the GFP. This time the $b$ direction is IR-repulsive for both positive and negative values of $b$.
\noindent
{\bf Non-Gaussian Fixed Points} \\
Investigating the limit $g = 0$, one first encounters a non-trivial FP at $\{\lambda^*, g^*, b^*\} = \{0, 0, - \tfrac{54}{79}(4\pi)^2\}$. This FP has two eigendirections, both of which have no $g$ component. Thus all trajectories connected to this FP cannot leave the $g=0$ plane, and we do not consider it any further.
In addition, the system \eqref{beta} gives rise to a \emph{physical} NGFP, which is located at $\{ \lambda^* , g^* , b^* \} = \{ 0.019 \, , \, 0.188 \, , \, 126.4 \, \}$. Its critical exponents read
\begin{equation}\label{NGFPd3}
\{\theta_1, \theta_2, \theta_3\} = \{8.39, 1.86, 1.35\} \, .
\end{equation}
Thus this NGFP comes with three UV-attractive directions, and all trajectories in its vicinity are dragged into the fixed point in the UV. The reality of the critical exponents indicates that the
spiraling into the NGFP seen in four dimensions is absent. By varying the dimension continuously, one can see that the NGFPs \eqref{NGFPd3} and \eqref{NGFPd4} are continuously connected in $d$.
\begin{figure}[b!]
\begin{center}
\includegraphics[width=0.45\textwidth]{3dto4d}
\caption{Position of the UNGFP$_1$ (blue) and UNGFP$_2$ (red) depending on $d$. The two fixed points annihilate at $d=3.84$.}
\label{fig:3dto4d}
\end{center}
\end{figure}
Besides this NGFP one finds two additional \emph{unphysical} fixed points (UNGFPs). For completeness, their position and stability properties are collected in Table \ref{tab.fp1}.
\begin{table*}[t!]
\begin{center}
\begin{tabular}{|c| c c c | c c c | }
\hline
& \quad $\lambda^*$ \quad & \quad $g^*$ \quad & \quad $b^*$ \quad & \quad $\theta_1$ \quad & \quad $\theta_2$ \quad & \quad $\theta_3$ \quad \\
\hline
UNGFP$_1$ & 0.364 & 0.147 & 18.167 & \multicolumn{2}{|c}{$1.56 \pm 4.84 i $} & - 9.85 \\
UNGFP$_2$ & 0.099 & 0.216 & 18.835 & \multicolumn{2}{c}{$0.19 \pm 0.97 i $} & 1.68 \\
\hline
\hline
\end{tabular}
\end{center}
\caption{\small Additional fixed points of the beta functions \eqref{beta} in $d=3$. These fixed points are labeled ``unphysical'' since the RG flow emanating from them does not give rise to a classical regime where General Relativity constitutes a good approximation.}
\label{tab.fp1}
\end{table*}
By tracing the flow of the fixed points UNGFP$_1$ and UNGFP$_2$ under an increase of the dimension $d$, one finds that they annihilate each other for $d=3.84$ at $\{\lambda^*, g^*, b^*\} = \{0.27, 0.54, 76.9\}$ as shown in Fig.\ \ref{fig:3dto4d}. A similar continuous treatment of the dimension for the $\mathbb Z_2$-effective potential in \cite{Codello:2012sc} showed the appearance of new fixed points when decreasing the dimension $d$ as well.
Compared to the NGFP, the UNGFPs are situated on the other side of the singular locus. In principle, both UNGFPs act as UV attractors for the RG trajectories spanning their UV-critical hypersurfaces. Tracing the RG flow toward the IR reveals, however, that the corresponding RG trajectories do not flow to a region in theory space where General Relativity provides a good approximation. Thus, while still giving rise to RG trajectories which are asymptotically safe in the UV, they do not give rise to a classical regime.
\noindent
{\bf Trajectories} \\
In three dimensions the phase diagram is similar to the one in four dimensions. Again we focus on RG trajectories that exhibit a crossover from the NGFP to the GFP.
Some typical examples are depicted in Fig.\ \ref{fig:flows3d} and for later purposes we marked the trajectories \textbf A and \textbf B. These are obtained with the initial conditions
\begin{equation}\label{example3d}
\begin{split}
{\rm \bf A}: \; & \left\{ \lambda_{\rm init} \, , \, g_{\rm init} \, , \, b_{\rm init} \, \right\} = \left\{ 0.49 \, , \, 10^{-8} \, , \, 5 \times 10^4 \right\} \, , \\
{\rm \bf B}: \; & \left\{ \lambda_{\rm init} \, , \, g_{\rm init} \, , \, b_{\rm init} \, \right\} = \left\{ 10^{-5} \, , \, 10^{-8} \, , \, 500 \right\} \, .
\end{split}
\end{equation}
\begin{figure}[b!]
\begin{center}
\includegraphics[width=0.42\textwidth]{flows3d}
\caption{Phase diagram in three dimensions with the GFP, the NGFP and the two UNGFPs. We depict a set of trajectories which develops a long classical regime and also includes the examples \eqref{example3d} highlighted with bold black lines.}
\label{fig:flows3d}
\end{center}
\end{figure}
Besides the GFP and the NGFP, Fig.\ \ref{fig:flows3d} also depicts the two UNGFPs. These are separated from the NGFP by the singular locus shown
in Fig.\ \ref{fig:poles}. Moreover, we highlighted the prospective IR fixed point $\{\lambda, g, b\} = \{ 0.5, 0, 0^- \} $.
The first obvious difference from the four-dimensional phase diagram in Fig.\ \ref{fig:flows4d} is that in three dimensions the approach toward the NGFP is straight, since the critical exponents \eqref{NGFPd3} are real, compared to the complex ones \eqref{NGFPd4} in four dimensions.
As in four dimensions, we can divide the trajectories into Type Ia and Type IIIa. The former flow toward $\{\lambda, g, b\} = \{-\infty, 0, 0\}$ in the IR. The latter pass the GFP and subsequently head toward infinite values of $b$ at positive $\lambda_k$. Again, using $1/b$ instead of $b$ shows that the coupling changes sign, so that the Type IIIa trajectories flow toward the point $\{\lambda, g, b\}=\{1/2, 0, 0^- \}$ with negative $b$ values.
In three (as well as four) dimensions the $\b$ functions receive large contributions between the GFP and the NGFP, as well as close to $\lambda = 1/2$. These are caused by the singularity structure of the truncated theory space and lead to a strong running in the $b$ direction.
\section{RG-improved diffusion processes}
\label{sec:diffusion}
As a first application, we use the phase-space study of the last section
to investigate the influence of higher-derivative terms on the
fractal-like properties of the effective QEG spacetimes.
To this end, we start by adapting the ``RG improvement''
scheme for the diffusion of a test particle on the QEG spacetime \cite{Lauscher:2005qz,Reuter:2011ah}
to the higher-derivative action \eqref{ansatz}.
The key idea of the RG improvement is to study diffusion processes with a spacetime
metric, which depends on the RG scale $k$
\begin{equation}
\partial_T K(x, x^\prime; T) = - \Delta(k) K(x, x^\prime; T) \, .
\end{equation}
Resorting to the ``flat-space'' approximation, the scale $k$ is identified with the momentum $p$ of
the plane waves used to probe the structure of spacetime and thus provides an (inverse) length scale at which the system is probed.
Taking the trace of $K(x, x^\prime; T)$ gives the average return probability
\begin{equation}\label{retpro}
P(T) = \int \frac{d^dk}{(2\pi)^d} {\rm e}^{-k^2 F(k^2) T} \, ,
\end{equation}
where the function $F(k^2)$ relates the Laplacian at a fixed reference scale $k_0$ (taken in the IR)
and the scale $k$: $\Delta(k) = F(k^2) \Delta(k_0)$. The scale-dependent spectral dimension is defined as the logarithmic $T$ derivative
\begin{equation}\label{spectraldef}
D_s(T) \equiv -2 \frac{d \ln P(T)}{d \ln T} \, .
\end{equation}
For $F(k^2) = 1$ this relation reproduces the classical case $D_s(T) = d$; thus all quantum corrections
to the spectral dimension are encoded in $F(k^2)$.
Assuming that $F(k^2)$ undergoes power-law scaling, $F(k^2) \propto k^\delta$,
the integral \eqref{retpro} can be evaluated analytically, yielding
$P(T) \propto T^{-d/(2+\delta)}$. Substituting this relation into
\eqref{spectraldef} yields the spectral dimension
\begin{equation}\label{fracdim}
D_s(T) = \frac{2d}{2+\delta} \, .
\end{equation}
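For completeness we spell out the intermediate step. Carrying out the angular integration in \eqref{retpro}, substituting the power-law form $F(k^2) \propto k^{\delta}$, and changing the integration variable to $u = k^{2+\delta} T$ (assuming $\delta > -2$, so that the integral converges) gives
\begin{equation}
P(T) \propto \int_0^\infty dk \, k^{d-1} \, {\rm e}^{-k^{2+\delta} T}
= \frac{1}{2+\delta} \, \Gamma\!\left(\tfrac{d}{2+\delta}\right) T^{-d/(2+\delta)} \, ,
\end{equation}
so that the logarithmic derivative \eqref{spectraldef} immediately yields \eqref{fracdim}.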
For the Einstein-Hilbert case \cite{Reuter:2011ah}
\begin{equation}\label{deh}
\delta^{\rm EH}(g, \lambda) = 2 + \lambda^{-1} \beta_\lambda(g, \lambda)
\end{equation}
can be expressed as a function on theory space. The main task of this section is to derive
the analogue of this expression for the ansatz \eqref{ansatz}, taking the $R^2$ contribution
into account.
Following \cite{Reuter:2011ah}, we first write down the equations of motion arising from \eqref{ansatz}. Upon integrating by parts and dropping the surface terms these are
\begin{eqnarray} \label{eom}
&\left(- R + 2 \bar{\lambda}_k + \tfrac{16 \pi G_k R^2}{\bar{b}_k} \right) g^{\m\n}
+ \left( 2 - \tfrac{64 \pi G_k R}{\bar{b}_k} \right) R^{\m\n} \nonumber \\
& + \tfrac{64 \pi G_k}{\bar{b}_k} \left( D^\m D^\n R - (D^2 R) \, g^{\m\n} \right) = 0 \, .
\end{eqnarray}
In contrast to the classical equations of motion, the $k$ dependence of the coupling constants
promotes \eqref{eom} to a one-parameter family of equations of motion, each yielding
an effective description of the physics at the fixed scale $k$. In order to extract
the $k$ dependence of the metric, we first keep the scale $k$ fixed and substitute the ansatz
\begin{equation} \label{eq:eomSol}
R_{\m\n}\left( g|_k \right) = \frac{c_k}{d} g_{\m\n}|_k \, ,
\end{equation}
which implies constant curvature $R\left( g|_k \right)=c_k$. For this ansatz, the second line in \eqref{eom} is zero while the first line yields
\begin{equation}\label{ceq}
2 \bar{\lambda}_k - \tfrac{1}{d} \left(d-2 \right) c_k + \tfrac{1}{d} \, \left(d-4\right) \, \tfrac{16 \pi G_k}{\bar{b}_k} \, c_k^2 = 0 \, .
\end{equation}
At this stage, it is convenient to distinguish the two cases $d=4$ and $d \not = 4$.
For $d=4$ the term quadratic in $c_k$ vanishes and eq.\ \eqref{ceq} is easily solved
\begin{equation}\label{ckd4}
c_k|_{d=4} = 4 \bar{\lambda}_k \, .
\end{equation}
Comparing solutions \eqref{eq:eomSol} at two different scales $k$ and $k_0$, where $k_0$ is a fixed reference scale taken in the classical regime, and using the identity $R^\m_\n(cg)=c^{-1}R^\m_\n(g)$, which holds for constant $c>0$, then yields
\begin{equation}\label{confrela}
R_\m^\n\left( g_k \right) = R_\m^\n\left( \tfrac{c_k}{c_{k_0}} g_{k_0} \right) \, .
\end{equation}
This equation allows us to read off the function $F(k^2)$ relating the metrics at the scales $k$ and $k_0$
\begin{equation}\label{confrel}
g^{\m\n}|_k = F(k) \, g^{\m\n} |_{k_0} \, ,
\end{equation}
with $F(k) = \bar{\lambda}_k/\bar{\lambda}_{k_0}$. In a regime where $F(k^2)$ undergoes power-law scaling for an extended $k$-interval, the function $\delta(k)$ can then be obtained as $\delta(k) = k \partial_k \ln F(k)$ and reads
\begin{equation} \label{eq:delta4d}
\delta(g, \lambda, b)|_{d=4} = 2 + \lambda^{-1} \beta_\lambda \, .
\end{equation}
Formally, this result has the same form as in the Einstein-Hilbert case \eqref{deh}. For the $R^2$ truncation, however, the $\b$ function $\beta_\lambda$ depends on all three couplings $\lambda, g, b$, so that there are non-trivial corrections.
For $d \not = 4$ the situation is slightly more complicated. In this case the
solutions of the quadratic equation are
\begin{equation}\label{csol}
c_k^{\pm} = \tfrac{(d-2) \bar b_k}{32 \pi (d-4) G_k} \, \left( \, 1 \pm \sqrt{ 1 - \tfrac{h_d \, G_k \, \bar{\lambda}_k}{\bar b_k} } \, \right) \, ,
\end{equation}
where
\begin{equation}
h_d = \frac{128 d (d-4) \pi}{(d-2)^2} \, .
\end{equation}
Investigating the conditions implied by eq.\ \eqref{confrela}, one then observes that $c_k^\pm$ need not be positive. In particular, for $d=3$ and positive $G_k$ and $\bar b_k$, $c_k^+$ is negative. We therefore discard this solution and focus on $c_k^-$. Notably, $c_k^-$ also has a well-defined limit $d \rightarrow 4$, where it reduces to \eqref{ckd4}. Thus selecting the $c_k^-$ branch leads to a spectral dimension which is continuous in $d$.
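The limit can also be checked numerically. The following sketch (with arbitrarily chosen sample values for the dimensionful couplings, for illustration only) evaluates \eqref{csol} close to $d=4$ and compares with \eqref{ckd4}:

```python
import numpy as np

def c_minus(d, G, lam_bar, b_bar):
    """Constant-curvature solution c_k^- of eq. (csol)."""
    h_d = 128.0 * d * (d - 4.0) * np.pi / (d - 2.0)**2
    pre = (d - 2.0) * b_bar / (32.0 * np.pi * (d - 4.0) * G)
    return pre * (1.0 - np.sqrt(1.0 - h_d * G * lam_bar / b_bar))

# Arbitrary sample values of G_k, lambda_bar_k, b_bar_k (illustrative only)
G, lam_bar, b_bar = 0.1, 0.2, 300.0

# Approaching d = 4, c_k^- reduces to the d = 4 result 4*lam_bar of eq. (ckd4)
print(c_minus(4.0 + 1e-6, G, lam_bar, b_bar))  # ~ 0.8 = 4*lam_bar
```

Expanding the square root for small $h_d$ shows analytically that $c_k^- \to \tfrac{2d}{d-2}\,\bar\lambda_k$ to leading order, which reduces to $4\bar\lambda_k$ at $d=4$.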
The function $F(k^2)$ originating from $c_k^-$ can then be obtained in complete analogy to the case $d=4$ and reads
\begin{equation}\label{Fgen}
F(k) = \frac{G_0 \bar{b}_k}{G_k \bar{b}_0} \, \frac{1 - \sqrt{1- h_d G_k \bar{\lambda}_k / \bar{b}_k}}{1 - \sqrt{1- h_d G_0 \bar{\lambda}_0 / \bar{b}_0} } \, .
\end{equation}
Again, this leads to a $\delta(k)$ which is given by a function on theory space
\begin{equation}\label{deltamod}
\delta = 2 - \frac{\beta_g}{g} + \frac{\beta_b}{b} + \frac{\tfrac{h_d}{2}\left( \tfrac{g}{b} \beta_\lambda - \tfrac{g \lambda}{b^2} \beta_b + \tfrac{\lambda}{b} \beta_g \right)}{\left( 1 - \sqrt{1- h_d g \lambda / b} \right)\sqrt{1- h_d g \lambda / b }} \, .
\end{equation}
Substituted into the spectral dimension \eqref{fracdim}, this result captures
the $R^2$ corrections to $D_s$. Its properties will be investigated in the next section.
Keep in mind, however, that $D_s$ is ill-defined if $\delta$ changes rapidly \cite{Reuter:2011ah}.
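Eq.\ \eqref{deltamod} can be cross-checked in two limits: at a fixed point all $\b$ functions vanish and $\delta = 2$, giving $D_s = d/2$, while canonical scaling of the dimensionless couplings, $\beta_g = (d-2)g$, $\beta_\lambda = -2\lambda$, $\beta_b = (d-4)b$ (i.e., constant dimensionful couplings), gives $\delta = 0$ and the classical value $D_s = d$. A minimal numerical sketch, using the $d=3$ NGFP values quoted above as a sample point:

```python
import numpy as np

def delta(d, g, lam, b, beta_g, beta_lam, beta_b):
    """delta of eq. (deltamod); valid for d != 4."""
    h_d = 128.0 * d * (d - 4.0) * np.pi / (d - 2.0)**2
    x = h_d * g * lam / b
    num = 0.5 * h_d * (g / b * beta_lam - g * lam / b**2 * beta_b
                       + lam / b * beta_g)
    return (2.0 - beta_g / g + beta_b / b
            + num / ((1.0 - np.sqrt(1.0 - x)) * np.sqrt(1.0 - x)))

def D_s(d, delta_val):
    """Spectral dimension of eq. (fracdim)."""
    return 2.0 * d / (2.0 + delta_val)

d = 3.0
lam, g, b = 0.019, 0.188, 126.4   # NGFP of the d=3 analysis

# At the NGFP all beta functions vanish: delta = 2, D_s = d/2
print(D_s(d, delta(d, g, lam, b, 0.0, 0.0, 0.0)))  # -> 1.5

# Canonical scaling (classical regime): delta = 0, D_s = d
print(D_s(d, delta(d, g, lam, b, (d - 2) * g, -2 * lam, (d - 4) * b)))  # -> 3.0 (up to rounding)
```

The fixed-point value $D_s = d/2$ reproduces the NGFP plateau discussed in the next section.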
\section{Fractal properties of QEG spacetimes}
\label{sec:spectral}
In this section, we analyze the spectral dimension obtained along
the sample trajectories \eqref{example4d} and \eqref{example3d} in $d=4$ and $d=3$, respectively.
For this purpose we evaluate \eqref{deltamod} along the trajectory and substitute
the result into \eqref{fracdim} to obtain the spectral dimension $D_s(t)$ depending
on the logarithmic RG time $t = \ln k$.
\noindent
{\bf Spectral Dimension in $d=4$} \\
For $d=4$ the two exemplary trajectories \textbf A and \textbf B are depicted in Fig.\ \ref{fig:flows4d}.
Both trajectories emanate from the NGFP in the UV and flow toward the GFP.
After passing the GFP, trajectory \textbf A flows toward $\lambda = 1/2$ along the $\lambda$ axis
before running toward large values of $b_k$. Trajectory \textbf B instead leaves the GFP regime close to the $b$ axis.
The spectral dimension along trajectory \textbf A is shown in Fig.\ \ref{fig:DsA4d}.
\begin{figure}[b!]
\begin{center}
\includegraphics[width=0.4\textwidth]{Aminus4d}
\caption{Spectral dimension $D_s$ depending on RG time $t$ for the trajectory \textbf A in $d=4$.}
\label{fig:DsA4d}
\end{center}
\end{figure}
The diagram displays three distinct plateaus, marked by the solid blue lines.
From the UV (large $t$) to the IR (small $t$) these are the NGFP plateau with
$D_s = 2$, the semi-classical plateau with $D_s \simeq 1.5$ and the classical plateau
where $D_s =4$, respectively. They are connected by interpolating regimes (dashed), where $\delta(t)$ changes
rapidly and thus $D_s(t)$ is not well defined. The poles in these regions are caused by
$\delta(t) \rightarrow -2$, which in turn is linked to $\lambda_k$ changing sign when
passing from the NGFP to the GFP.
Surprisingly, the spectral dimension of the semi-classical plateau, $D_s \simeq 1.5$,
does not agree with the one observed in the Einstein-Hilbert truncation, $D_s^{\rm EH} = 4/3$ \cite{Reuter:2011ah}.
This difference can be traced back to the fact that in the $R^2$ truncation the
flow for this intermediate part of the trajectory is dominated by the singular
locus Fig.\ \ref{fig:poles}. This locus induces a scaling behavior, $k^\delta$, $\delta \simeq 3.3$, different from the $k^4$-scaling
associated to the GFP.
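These plateau values follow directly from the relation $D_s = 2d/(2+\delta)$ quoted in the appendix, which also makes the poles at $\delta \rightarrow -2$ explicit. A minimal numerical sketch, taking the $\delta$ values quoted in the text as input:

```python
def spectral_dimension(d, delta):
    """Spectral dimension D_s = 2d / (2 + delta); diverges as delta -> -2."""
    return 2.0 * d / (2.0 + delta)

# plateau values in d = 4 quoted in the text
print(spectral_dimension(4, 2.0))   # NGFP plateau (delta = 2):      D_s = 2
print(spectral_dimension(4, 0.0))   # classical plateau (delta = 0): D_s = 4
print(spectral_dimension(4, 3.3))   # singular locus (delta ~ 3.3):  D_s ~ 1.51
```

The same relation with $d=3$ reproduces the plateau values discussed below.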
The spectral dimension of trajectory {\bf B}, shown in Fig.\ \ref{fig:DsB4d}, also exhibits
three plateaus which are connected by short interpolating pieces where $\delta(t)$ changes
rapidly.
\begin{figure}[b!]
\begin{center}
\includegraphics[width=0.4\textwidth]{Bminus4d}
\caption{Spectral dimension $D_s$ depending on RG time $t$ for the trajectory \textbf B in $d=4$.}
\label{fig:DsB4d}
\end{center}
\end{figure}
The NGFP plateau ($D_s=2$) and the classical plateau ($D_s=4$) have the same spectral dimension as for trajectory {\bf A}.
In this case, however, the semi-classical plateau forms at $D_s=4/3$ and thus has the same value as in the Einstein-Hilbert truncation.
In terms of the underlying RG flow it is created by the trajectory flowing away from the GFP along the $b$ axis (see the Appendix for an analytic investigation).
Notably, there is no plateau corresponding to the scaling caused by the singular locus; this feature is masked by the poles where $\delta \rightarrow -2$.
\noindent
{\bf Spectral Dimension in $d=3$} \\
When investigating the spectral dimension in $d=3$, we first notice that the functional form of
\eqref{deltamod} is quite different from the four-dimensional case. In particular, $\delta(\lambda, g, b)$ now depends
on the $\b$ functions of all three couplings. In order to see whether the new structures still support
a multi-fractal picture of spacetime, we again analyze $D_s(t)$ along the two exemplary trajectories \textbf A and \textbf B given in Fig.\ \ref{fig:flows3d}.
The spectral dimension resulting from trajectory \textbf A is depicted in Fig.\ \ref{fig:DsA3d}.
\begin{figure}[b!]
\begin{center}
\includegraphics[width=0.4\textwidth]{Aminus3d}
\caption{Spectral dimension $D_s$ depending on RG time $t$ for the trajectory \textbf A in $d=3$.}
\label{fig:DsA3d}
\end{center}
\end{figure}
Here, too, we observe three plateaus (solid lines) connected by short interpolating pieces (dashed lines)
where the spectral dimension is not well defined. In the IR and in the UV, $D_s(t)$ again shows the expected behavior:
at small RG times $t$ it equals the spacetime dimension three, while at large RG times the NGFP induces
a spectral dimension equal to half of the spacetime dimension, $D_s=3/2$.
The semi-classical plateau has $D_s \simeq 1.3$ and again develops during the crossover between the NGFP and the GFP.
The poles framing the plateau are caused by $\lambda_k$ changing sign, which again leads to $\delta \rightarrow -2$. As in the four-dimensional case,
the value of the semi-classical plateau does not correspond to the Einstein-Hilbert value $D_s^{\rm EH} = 1$. Rather than reflecting a property of the GFP, this plateau
is created by the scaling induced by the singular locus and thus has a quite different origin.
A special feature occurs when studying the spectral dimension along trajectory {\bf B}, which is shown in Fig.\ \ref{fig:DsB3d}.
\begin{figure}[b!]
\begin{center}
\includegraphics[width=0.4\textwidth]{Bminus3d}
\caption{Spectral dimension $D_s$ depending on RG time $t$ for the trajectory \textbf B in $d=3$.}
\label{fig:DsB3d}
\end{center}
\end{figure}
Besides the classical and NGFP plateaus, this trajectory develops {\it two} semi-classical regimes:
the plateau with $D_s \simeq 1.3$ is caused by the trajectory passing close to the singular locus when flowing
from the NGFP to the GFP, while the second plateau with $D_s \simeq 0.85$ appears during the flow away from the GFP close to the $b$ axis
toward the classical regime. The latter plateau arises from the same mechanism that, in $d=4$, produces the semi-classical plateau found in
the Einstein-Hilbert truncation. Thus it is natural to identify the shift $D_s^{\rm EH} = 1$ $\rightarrow$ $D_s^{R^2} \simeq 0.85$
as the correction induced by the inclusion of the $R^2$ terms in the computation.
The four different scaling regimes underlying Fig.\ \ref{fig:DsB3d} are also visible in the running of the absolute value of the dimensionful cosmological constant $\bar\lambda$ in Fig.\ \ref{fig:dimfullRunning}, whereas the running of the dimensionful Newton constant shows only two distinct scaling regimes. Moreover, the running of $\bar\lambda$ shows two peaks, indicating changes of sign.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.4\textwidth]{LambdaPlot} \hspace{2cm}
\includegraphics[width=0.4\textwidth]{Gplot}
\caption{Absolute value of dimensionful cosmological constant $\bar\lambda$ and dimensionful Newton constant $G$ depending on logarithmic RG scale $t$ along the exemplary trajectory {\bf B}.}
\label{fig:dimfullRunning}
\end{center}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this work, we used the $\b$ functions \cite{Lauscher:2002sq} to construct
the phase diagram of Quantum Einstein Gravity (QEG) in the $R^2$ truncation.
For $d=4$ and $d=3$ the resulting RG trajectories are shown in Figs.\ \ref{fig:flows4d} and \ref{fig:flows3d}, respectively.
Notably, the flow is governed by the interplay of its fixed points. In this sense the phase diagram of the $R^2$ truncation
is quite similar to the one of the Einstein-Hilbert truncation:
the trajectories with a long classical regime result from a crossover from the
non-Gaussian fixed point (NGFP) controlling the UV behavior of the theory to the Gaussian fixed point (GFP)
responsible for its classical properties. Remarkably, the value
of the $R^2$ coupling is easily compatible with experimental bounds \cite{Berry:2011pb,DeFelice:2010aj}.
As an important new feature, our phase diagram is molded not only by its fixed points, but also by singular loci where
the anomalous dimensions diverge. The two regimes where this new mechanism is at work are the crossover between the NGFP and GFP
and the flow close to $\lambda = 1/2$. In both cases the singularities induce a strong running of the $R^2$ coupling $b$. In particular,
close to the $\lambda = 1/2$ locus the RG flow is essentially parallel to the $\lambda = 1/2$ boundary, and the strong running of $b$ allows the flow to avoid
this singularity. The resulting flow pattern closely resembles the one expected from the IR fixed point recently discussed
in \cite{Donkin:2012ud,Nagy:2012rn,Litim:2012vz}.
As a first application of our classification, we studied the spectral dimension $D_s$ of the effective QEG spacetimes.
This is the first instance where higher-derivative corrections are taken into account
in an RG-improvement scheme.
As a central result, we establish that the multi-fractal structure of the effective spacetimes \cite{Reuter:2011ah}, reflected by several
plateau values for $D_s$, is robust upon including $R^2$ effects. All our examples give $D_s = d$ in the classical regime and $D_s =d/2$ in the UV. The latter constitutes
the universal signature of the NGFP in the spectral dimension \cite{Lauscher:2005qz}.
Between the classical and UV regime we obtain the ``semi-classical'' plateau which, in the Einstein-Hilbert case, is closely linked to
universal properties of the RG flow close to the GFP and situated at $D_s = 4/3$ $(d=4)$ and $D_s = 1$ $(d=3)$. The $R^2$ construction
reveals a second mechanism that can lead to a similar ``semi-classical'' plateau: scaling induced by singularities. For our examples
this induces semi-classical plateaus with $D_s \approx 3/2$ $(d=4)$ and $D_s \approx 1.3$ $(d=3)$. Thus the semi-classical regime is sensitive to
the details of the underlying RG trajectory. In this light, it is not surprising that different Monte-Carlo studies of the gravitational
path integral report different plateau values for $D_s$ at intermediate scales \cite{Benedetti:2009ge,Coumbe:2012qr}.
\section*{Acknowledgments}
We thank M.\ Reuter for many helpful and inspiring discussions and A.\ Eichhorn and G.\ Calcagni for comments on the manuscript. This work is supported by the Deutsche Forschungsgemeinschaft (DFG) within the Emmy-Noether program (Grant SA/1975 1-1).
\begin{appendix}
\begin{section}{The GFP in $d=4$}
In this section we investigate the GFP in four spacetime dimensions in more detail. Since we are interested in the behavior of trajectory {\bf B} in Fig.\ \ref{fig:poles}, we evaluate the stability matrix by first taking the limits $g \rightarrow 0$ and $\lambda \rightarrow 0$ and only then the limit $b \rightarrow 0$. This procedure leads to the stability matrix
\begin{equation}
B = \begin{pmatrix}
-2 & \tfrac{1}{2\pi} & 0 \\
0 & 2 & 0 \\
0 & 0 & 0
\end{pmatrix} \, .
\end{equation}
One of the corresponding eigenvectors is the unit vector in the $b$ direction, so the behavior along this direction can be investigated separately. The upper left block of this stability matrix is exactly the stability matrix of the Einstein-Hilbert truncation, and thus the solutions of the linearized flow equations are known to be
\begin{eqnarray}
\lambda_k &=& \left( \lambda_{k_0} - \tfrac{4}{2\pi} g_{k_0} \right) \left( \tfrac{k_0}{k} \right)^2 + \tfrac{4}{2\pi} g_{k_0} \left( \tfrac{k}{k_0} \right)^2 \, ,\nonumber \\
g_k &=& g_{k_0} \left( \tfrac{k}{k_0} \right)^2 \, .
\end{eqnarray}
Thus for the dimensionful cosmological constant $\bar\lambda$ we find
\begin{equation}
\bar\lambda_k = \bar\lambda_{k_0} + \tfrac{4 G_{k_0}}{2\pi}\left( k^4 - k_0^4 \right) \, .
\end{equation}
Since $\bar\lambda$ exhibits a $k^4$ scaling, the exponent $\delta$ of \eqref{eq:delta4d} equals four, as already derived in \cite{Reuter:2011ah}. This yields the spectral dimension
\begin{equation}
D_s = \tfrac{2d}{2 + \delta} = \tfrac{4}{3} \, .
\end{equation}
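This scaling can be cross-checked numerically. The sketch below integrates the linearized flow of the upper-left block with a generic off-diagonal entry $c$ (set to $1$ here; its value affects prefactors only, not the exponent) and extracts the asymptotic slope of $\ln \bar\lambda_k$ versus $t = \ln k$:

```python
import numpy as np

# Euler integration of the linearized GFP flow (upper-left block):
#   d(lambda)/dt = -2*lambda + c*g,   dg/dt = 2*g,   with t = ln k.
def run_lambda(t, lam0, g0, c):
    dt = t[1] - t[0]
    lam, g, out = lam0, g0, []
    for _ in t:
        out.append(lam)
        lam, g = lam + dt * (-2.0 * lam + c * g), g + dt * (2.0 * g)
    return np.array(out)

t = np.linspace(0.0, 6.0, 60001)                               # RG time t = ln k
lam_dimful = run_lambda(t, 0.01, 0.01, 1.0) * np.exp(2.0 * t)  # lambda_k k^2
delta = np.polyfit(t[-5000:], np.log(lam_dimful[-5000:]), 1)[0]
print(round(delta, 2))        # scaling exponent delta = 4, hence D_s = 4/3
```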
\end{section}
\end{appendix}
\section{INTRODUCTION}
\label{sec:1}
The theoretical description of electronic correlations
in solids is still a challenge. Even one of the
simplest models for correlated electrons, the Hubbard
model,\cite{hubbard} exhibits enormous mathematical
complexity. Its analysis is far from being complete.
Recently, a new class of models for tight-binding electrons
with infinitely strong on-site repulsion has been discovered.
For particle numbers of two or more particles per unit cell,
one can construct the ground states of these
models.\cite{brandt-giesekus} These ground states are
spin-singlets and have the structure of
resonating-valence-bond states\cite{tasaki1} for certain
cases.
The class of models which are solvable by this method
has been generalized by several
authors.\cite{tasaki1,mielke,strack,bares-lee,tasaki2}
One-dimensional models of this class\cite{strack} allowed
for the calculation of equal-time correlation
functions\cite{bares-lee} using a transfer-matrix
technique. All equilibrium correlations studied
so far decay exponentially. Dynamical correlations
obtained numerically for the one-dimensional Hubbard
chain indicate dispersionless excitation spectra for
an electron density of two per cell.\cite{giesekus} The
propagation of one hole, however, shows dispersive
delocalized features.
The regime of particle numbers below two per unit cell
has not been accessible by analytical methods so far. The
main result of this paper is an analytical upper bound on
the ground state energy of systems containing $2N-1$ particles
($N$ denotes the number of cells). The trial state and the
corresponding upper bound on the ground-state energy are
discussed for two examples. The first example is a hypercubic
Hubbard model as introduced in Ref.\onlinecite{brandt-giesekus}.
It is shown that the trial state becomes an asymptotically
exact eigenstate of this particular $(2N-1)$ -particle system
in the thermodynamic limit. For the second system, a linear
Hubbard chain, the upper bound on the ground-state energy may
be calculated analytically as well. A comparison to exact
numerical results\cite{giesekus} is presented.
\section{The class of solvable Model Hamiltonians}
\label{sec:2}
In this section we recall the most general description of the
class of solvable models.\cite{tasaki2} This class contains
Hubbard, Anderson and Emery models in the limit of infinitely
strong interaction ($U=\infty$). The solution presented below
requires certain lattice structures. (In the following,
the term {\em graph} is used, because {\em lattice} implies
translational invariance, which need not be assumed.)
The models are defined on graphs, where each of the $N$ vertices
contains a {\em cell} of sites. (The cells need not be identical
either.) The cells are labeled by an index $i=1,\ldots N$. Let
the set of all sites within the cell $i$ be called ${\cal N}_i$.
(It is mentioned in passing that ${\cal N}_i$ is {\em not} a unit
cell if the graph is translationally invariant, because neighboring
cells may share some sites.)
The sites (or electronic orbitals) within a cell $i$ are labeled
by an index $\alpha$. Some of those orbitals (not necessarily all)
may carry an infinitely strong on-site repulsion, e.\,g.\ a cell
may contain ``$d$-sites'' which may be empty or occupied by one
particle only (with spin up or down) and ``$p$-sites'' which can
be occupied by up to two particles. The set of $d$-sites is called
${\cal U}_{i}$. The repulsion is incorporated into the model by a
projection operator
\begin{equation}
\label{eq:1}
P \quad = \quad \prod_{i}
\prod_{\alpha \in {\cal U}_i}
\big(
1 - n_{i,\alpha,\uparrow}^{}n_{i,\alpha,\downarrow}^{}
\big)
\end{equation}
which strictly excludes double occupancy on all sites
$i,\alpha \in {\cal U}_i$, where
$n_{i,\alpha,\sigma}^{}=
c_{i,\alpha,\sigma}^{\dagger}
c_{i,\alpha,\sigma}^{}$. The Fermi operators
$c_{i,\alpha,\sigma}^{\dagger}$
($c_{i,\alpha,\sigma}^{}$) denote creation (annihilation) operators
and are defined as usual. The connection between cells is established
by common sites. We denote the set of common sites by the intersection
${\cal C}_{ij}={\cal N}_i \cap {\cal N}_j$. Examples for some models
are given in Refs.\onlinecite{brandt-giesekus,tasaki1,tasaki2}.
Figure \ref{fig:1} illustrates the two cases discussed below.
In order to construct the ground state of the models defined
above consider the linear combination of Fermi operators
\begin{equation}
\label{eq:2}
{\Psi}_{i,\sigma}^{\dagger} :=
\sum_{\alpha \in {\cal N}_i}
\lambda_{i,\alpha,\sigma}^{}
c_{i,\alpha,\sigma}^{\dagger}\,.
\end{equation}
They obey the following algebra:
\begin{eqnarray}
\label{eq:3}
&
{[{\Psi}_{i,\sigma}^{\dagger},{\Psi}_{j,\sigma '}^{\dagger}]}_+
=
{[{\Psi}_{i,\sigma}^{},{\Psi}_{j,\sigma '}^{}]}_+
=
0
&
\\
\nonumber
&
{[{\Psi}_{i,\sigma}^{},{\Psi}_{j,\sigma '}^{\dagger}]}_+
=\,\delta_{\sigma,\sigma '}^{}
&
\\
\nonumber
&
\times
\left\{
\begin{array}{ll}
\sum\limits_{\alpha\in{\cal N}_i}\,
{|\lambda_{i,\alpha,\sigma}^{}|}_{}^{2}
&\quad {\rm for}\quad i=j\\
\sum\limits_{\alpha,\beta\in{\cal C}_{ij}}
\delta_{(i\alpha),(j\beta)}^{}
\lambda_{i,\alpha,\sigma}^{\star}
\lambda_{j,\beta,\sigma}^{}
&\quad {\rm otherwise}\\
\end{array}
\right. \,.
&
\end{eqnarray}
(The Kronecker symbol $\delta_{(i\alpha),(j\beta)}^{}$
assumes the value one if the indices $i\alpha$ and $j\beta$ label
the same site and equals zero otherwise.) The coefficients
$\lambda_{i,\alpha,\sigma}^{}$ define the parameters of the
Hamiltonian which will be constructed below.
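The algebra (\ref{eq:3}) can be verified explicitly on a small example. The following Python sketch, which is not part of the analytic development, uses a Jordan-Wigner representation for a toy graph of two two-site cells sharing one site, with all $\lambda = 1$ and a single spin species:

```python
import numpy as np

def jw_annihilator(n_modes, j):
    """Jordan-Wigner matrix of the annihilation operator for mode j."""
    I, Z = np.eye(2), np.diag([1.0, -1.0])
    sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # maps |1> -> |0>
    op = np.array([[1.0]])
    for k in range(n_modes):
        op = np.kron(op, Z if k < j else (sm if k == j else I))
    return op

def anticomm(a, b):
    return a @ b + b @ a

# two cells sharing one site: cell 1 = {0, 1}, cell 2 = {1, 2}
c = [jw_annihilator(3, j) for j in range(3)]
Psi1, Psi2 = c[0] + c[1], c[1] + c[2]

print(anticomm(Psi1, Psi1.T)[0, 0])            # sum of |lambda|^2 in cell 1: 2.0
print(anticomm(Psi1, Psi2.T)[0, 0])            # one shared site:             1.0
print(np.abs(anticomm(Psi1.T, Psi2.T)).max())  # creators anticommute:        0.0
```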
The $\Psi$-operators allow one to write down a positive
semidefinite\cite{positive} Hamiltonian in a very concise
way:
\begin{equation}
\label{eq:4}
H
=
\sum_{\sigma}
H_{\sigma}
\quad {\rm with} \quad
H_{\sigma}
=
\sum_{i}
{\Psi}_{i,\sigma}^{}
P
{\Psi}_{i,\sigma}^{\dagger}\,.
\end{equation}
However, the physical meaning of this Hamiltonian becomes
more transparent if it is rewritten in terms of the Fermion
$c$-operators by using the relation
\begin{equation}
\label{eq:5}
{\Psi}_{i,\sigma}^{\dagger} P
=
P {\Psi}_{i,\sigma}^{\dagger}
+
\sum_{\alpha\in{\cal U}_i}
\lambda_{i,\alpha,\sigma}^{}
c_{i,\alpha,\sigma}^{\dagger}
n_{i,\alpha,-\sigma}^{}
P
\end{equation}
and the definition of the $\Psi$ operators in
Eq.\ (\ref{eq:2}). The Hamiltonian (\ref{eq:4}) is transformed
into the following form:
\begin{eqnarray}
\label{eq:6}
H_{\sigma} \quad = \quad
- \,P\,\, \sum_{i} \Big\{
&{\sum\limits_{\alpha \not= \beta \in {\cal N}_i}}&
\lambda_{i,\alpha,\sigma}^{}
\lambda_{i,\beta,\sigma}^{\star}\,
{c_{i,\alpha,\sigma}^{\dagger}}
c_{i,\beta,\sigma}^{}\\
\nonumber
+ \,\, &{\sum\limits_{\alpha\in{\cal N}_i}}&
|{\lambda_{i,\alpha,\sigma}^{}}|^{2}\,
(n_{i,\alpha,\sigma}^{}-1) \\
\nonumber
+ \,\, &{\sum\limits_{\alpha\in{\cal U}_i}}&
|{\lambda_{i,\alpha,\sigma}^{}}|^{2}\,
n_{i,\alpha,-\sigma}^{}
\Big\} \,P\,.
\nonumber
\end{eqnarray}
The first term in $H_{\sigma}$ describes particle
hopping within a cell where each site (orbital) is connected
to all others. (Particle transfer between interacting
orbitals and orbitals without interaction is usually called
a hybridization. In that case, the model would describe
e.\,g.\ an Anderson model.) The remaining terms contain trivial
constants and sums over occupation numbers which add up to the
total particle-number operator for the case of the $D$-dimensional
Hubbard model as discussed below. If the sets ${\cal U}_i$ and
${\cal N}_i$ are not identical or if not every site within a cell
is connected to a neighbor cell, the sums over occupation number
operators may add up to an additional field which shifts the
energy of certain sites. Such a model then contains more than one
parameter and the exact ground state as derived below is only
valid on a parameter surface defined by the quantities
$\lambda$. In the following, all $\lambda$-coefficients are
assumed to be real\cite{real}.
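The equivalence of the two representations (\ref{eq:4}) and (\ref{eq:6}) can be checked explicitly. The following sketch builds both forms for a minimal toy example of our own choosing, a single two-site cell with both sites interacting and all $\lambda = 1$, and confirms that they agree as matrices:

```python
import numpy as np

def jw_annihilator(n_modes, j):
    """Jordan-Wigner matrix of the annihilation operator for mode j."""
    I, Z = np.eye(2), np.diag([1.0, -1.0])
    sm = np.array([[0.0, 1.0], [0.0, 0.0]])
    op = np.array([[1.0]])
    for k in range(n_modes):
        op = np.kron(op, Z if k < j else (sm if k == j else I))
    return op

# mode ordering: 0 = (site 1, up), 1 = (site 1, dn), 2 = (site 2, up), 3 = (site 2, dn)
c = [jw_annihilator(4, j) for j in range(4)]
n = [op.T @ op for op in c]
P = (np.eye(16) - n[0] @ n[1]) @ (np.eye(16) - n[2] @ n[3])   # projector, Eq. (1)

up, dn = [0, 2], [1, 3]                                       # modes per spin
H4 = sum((c[a] + c[b]) @ P @ (c[a] + c[b]).T for a, b in (up, dn))   # Eq. (4)

H6 = np.zeros((16, 16))
for spin, other in ((up, dn), (dn, up)):
    hop = c[spin[0]].T @ c[spin[1]] + c[spin[1]].T @ c[spin[0]]
    occ = sum(n[m] - np.eye(16) for m in spin)                # (n - 1) terms
    opp = sum(n[m] for m in other)                            # n_{-sigma} terms
    H6 += -P @ (hop + occ + opp) @ P                          # Eq. (6)

print(np.abs(H4 - H6).max())   # the two representations coincide
```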
\section{The ground state for $2N$ or more particles}
\label{sec:3}
The models as defined above allow one to write down an
exact ground state if the system contains two or more
particles per cell. The ground state is a Gutzwiller
projected Slater determinant:
\begin{equation}
\label{eq:7}
|\Phi_0\rangle
:=
P
\prod_i
{\Psi}_{i,\uparrow}^{\dagger}
{\Psi}_{i,\downarrow}^{\dagger}
|\chi\rangle \,.
\end{equation}
The expression $H |\Phi_0\rangle$ always contains
a factor ${(P{\Psi}_{i,\sigma}^{\dagger})}^{2}$.
Because of the equation
$
{(P
{\Psi}_{i,\sigma}^{\dagger})}^{2}
=0
$,
the state Eq.\ (\ref{eq:7}) is an eigenstate of $H$
(\ref{eq:4}); the corresponding eigenvalue equals zero.
Further, $H$ is positive semi-definite\cite{positive}.
Therefore the state (\ref{eq:7}) belongs to the ground-state
manifold. In Ref.\onlinecite{tasaki2} Tasaki provides a proof of
the uniqueness of the ground state if the state $|\chi\rangle$
is equal to the vacuum state $|0\rangle$ ($2N$ particles).
Larger fillings can be discussed if $|\chi\rangle$ contains
additional particles; in that case it follows immediately
by construction that the state (\ref{eq:7}) is degenerate.
If all sites carry an infinitely strong on-site repulsion and,
e.\,g.\ the $\lambda$-coefficients are set to unity, the state
(\ref{eq:7}) exhibits a structure often called
resonating-valence-bond state.\cite{pauling} In that case the
ground state consists of a linear combination of local singlet
bonds:\cite{tasaki1}
$
P
{\Psi}_{i,\uparrow}^{\dagger}
{\Psi}_{i,\downarrow}^{\dagger}
=
P
\sum_{\alpha\neq\beta}
\big(
c_{i,\alpha,\uparrow}^{\dagger}
c_{i,\beta,\downarrow}^{\dagger}
+
c_{i,\beta,\uparrow}^{\dagger}
c_{i,\alpha,\downarrow}^{\dagger}
\big)
$.
The projector ensures that no two singlet bonds have a
site in common.
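The zero-energy mechanism can be made concrete on the smallest example, a single cell of two interacting sites with all $\lambda = 1$ (a toy case chosen purely for illustration, not one of the lattices studied below). A short Jordan-Wigner sketch confirms that $H|\Phi_0\rangle = 0$, that $H$ is positive semidefinite, and that $(P\Psi^{\dagger})^2 = 0$:

```python
import numpy as np

def jw_annihilator(n_modes, j):
    """Jordan-Wigner matrix of the annihilation operator for mode j."""
    I, Z = np.eye(2), np.diag([1.0, -1.0])
    sm = np.array([[0.0, 1.0], [0.0, 0.0]])
    op = np.array([[1.0]])
    for k in range(n_modes):
        op = np.kron(op, Z if k < j else (sm if k == j else I))
    return op

# modes: 0 = (site 1, up), 1 = (site 1, dn), 2 = (site 2, up), 3 = (site 2, dn)
c = [jw_annihilator(4, j) for j in range(4)]
n = [op.T @ op for op in c]
P = (np.eye(16) - n[0] @ n[1]) @ (np.eye(16) - n[2] @ n[3])   # Eq. (1)

Psi_up, Psi_dn = c[0] + c[2], c[1] + c[3]                     # Eq. (2), lambda = 1
H = sum(Psi @ P @ Psi.T for Psi in (Psi_up, Psi_dn))          # Eq. (4), one cell

vac = np.zeros(16); vac[0] = 1.0                              # Fock vacuum
phi0 = P @ Psi_up.T @ Psi_dn.T @ vac                          # Eq. (7)

M = P @ Psi_up.T
print(np.abs(H @ phi0).max())        # H |Phi_0> = 0
print(np.linalg.eigvalsh(H).min())   # H is positive semidefinite (up to rounding)
print(np.abs(M @ M).max())           # (P Psi^dagger)^2 = 0
```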
\section{Upper bound on the ground-state energy
for $2N-1$ particles}
\label{sec:4}
Although the ground state is known for two or more particles
per cell, the only analytical result that has been obtained for
particle numbers $N_e$ below the ``magic'' number $N_e=2N$ so
far is the trivial lower bound on the ground state
energy, $E_0(N_e)\ge 0$, which is valid for {\em any} number of particles.
[The Hamiltonian (\ref{eq:4}) is non-negative.] The present section
provides an upper bound on the ground-state energy of $H$ for one
additional hole, i.\,e.\ a particle number of $N_e=2N-1$,
using a variational state.
The quality of variational methods is not very well
controlled: there is no small parameter, only an
inequality for the energies. Therefore the method requires
some intuition in order to ``hit'' the correct physics by
guessing a good state. The first guess of a variational state,
the state
\begin{equation}
\label{eq:8}
|\varphi_i\rangle
:=
P
{\Psi}_{i,\uparrow}^{\dagger}
\prod_{j,\,\, j\neq i}
{\Psi}_{j,\uparrow}^{\dagger}
{\Psi}_{j,\downarrow}^{\dagger}
|0\rangle \,,
\end{equation}
is guided by the idea of leaving as much of the $2N$-particle
physics unchanged as possible. The state is constructed
in a way that most terms of the Hamiltonian generate
zero-states due to the relation
$(P\Psi_{i,\alpha,\sigma}^{\dagger})^2=0$.
{}From
$
H_{\uparrow}
|\varphi_i\rangle
=0
$
and
$
H_{\downarrow}
|\varphi_i\rangle
=
{\Psi}_{i,\downarrow}^{}
|\Phi_0\rangle
$
($|\Phi_0\rangle$ denotes the ground state of the system
with $2N$ particles of the previous section) one obtains the
diagonal matrix
\begin{equation}
\label{eq:9}
\langle \varphi_j|
H
|\varphi_i\rangle
=
\delta_{i,j}^{}
\langle\Phi_0|\Phi_0\rangle\, .
\end{equation}
If the norm of the states $|\varphi_i\rangle$
and $|\Phi_0\rangle$ were known, the expectation
value of $H$ could be evaluated. Unfortunately, this is
not possible in general. However, the following
orthogonal and normalized [see Eq.\ (\ref{eq:9})]
trial states
\begin{equation}
\label{eq:10}
|\chi_i\rangle
:=
\frac
{\sqrt{H}}
{\sqrt{\langle\Phi_0|\Phi_0\rangle}}
|\varphi_i\rangle
\end{equation}
allow us to calculate the desired expectation value.
(The positive square root of $H$ is well defined because
$H \geq 0$.) Surprisingly, the Hamiltonian matrix
elements $\langle\chi_j|H|\chi_i\rangle$ of
the $(2N-1)$-particle system can be expressed in
terms of occupation-number expectation values of
the $2N$-particle system in the following way:
\begin{eqnarray}
\label{eq:11}
&
\langle
\chi_j|
H
|\chi_i\rangle
=
\delta_{i,j}^{}
\Big(
\sum\limits_{\alpha\in {\cal N}_i}
{\lambda_{i,\alpha,\downarrow}^{}}^{2}
&
\\
\nonumber
&
-
\sum\limits_{\alpha\in {\cal U}_i}
{\lambda_{i,\alpha,\downarrow}^{}}^{2}
\langle n_{i,\alpha,\uparrow}^{}\rangle
\Big)
&
\\
\nonumber
&
+
(1-\delta_{i,j}^{})
\Big(
\sum\limits_{\alpha,\beta\in {\cal C}_{ij}}
\delta_{(i\alpha),(j\beta)}^{}
\lambda_{i,\alpha,\downarrow}^{}
\lambda_{j,\beta,\downarrow}^{}
&
\\
\nonumber
&
-
\sum\limits_{\alpha,\beta\in \{{\cal C}_{ij}\cap{\cal U}_i \}}
\delta_{(i\alpha),(j\beta)}^{}
\lambda_{i,\alpha,\downarrow}^{}
\lambda_{j,\beta,\downarrow}^{}
\langle n_{i,\alpha,\uparrow}^{}\rangle
\Big)\,,
&
\end{eqnarray}
where
\begin{equation}
\label{eq:12}
\langle n_{i,\alpha,\sigma}^{}\rangle
=\frac{{\displaystyle \langle\Phi_0|n_{i,\alpha,\sigma}^{}|\Phi_0\rangle}}
{{\displaystyle \langle\Phi_0|\Phi_0\rangle}}
\,.
\end{equation}
In the following section, the upper bound Eq.\ (\ref{eq:11})
on the ground-state energy is discussed for two examples
which allow us to calculate the expectation value
$\langle n_{i,\alpha,\sigma}^{}\rangle$.
\section{Results}
\label{sec:5}
\subsection{$D$-dimensional Hubbard model}
\label{sec:5.1}
The first example where the upper bound on the ground-state
energy of the $(2N-1)$-particle system may be evaluated is
the Hubbard model introduced in Ref.\onlinecite{brandt-giesekus},
where all $\lambda$'s are set to unity and the spatial dimension
is larger than one. For the three-dimensional case, the cells have
the topology of octahedra with $2D=6$ sites. All sites carry
interaction, therefore the sets ${\cal U}_i$ and ${\cal N}_i$ are
identical. Further, every site in ${\cal N}_i$ is shared by exactly
one neighboring cell, forming a $D$ dimensional (hyper)cubic lattice
of cells. Periodic boundary conditions are assumed. The corresponding
lattice is visualized in Fig.\ \ref{fig:1}a. The Hamiltonian of
this system reads
\begin{eqnarray}
\label{eq:13}
H
&=&
\sum_{i,\sigma}
\Psi_{i,\sigma}^{}
P
\Psi_{i,\sigma}^{\dagger}\\
\nonumber
&=&
P
\sum_{i,\sigma}
\Big(
-
\sum_{{\scriptstyle \alpha,\beta=1}
\atop{\scriptstyle \alpha\neq\beta}}^{2D}
c_{i,\alpha,\sigma}^{\dagger}
c_{i,\beta,\sigma}^{}
\Big)
P
+
4\,
P
\big(
ND - \hat{N}_e
\big)\,.
\end{eqnarray}
The last term in (\ref{eq:13}) contains a trivial constant $4ND$
and the total electron-number operator $\hat{N}_e$ (with eigenvalue
$N_e$) which is a conserved quantity.
With the Fourier transform
\begin{equation}
\label{eq:14}
|\chi_k\rangle
=
\frac{1}{\sqrt{N}}
\sum_{j}
e^{ikj}
|\chi_j\rangle
\end{equation}
and $\langle n_{i,\alpha,\sigma}^{}\rangle=D^{-1}$ (the occupation
probabilities for all sites are equal for symmetry reasons) one
obtains an upper bound\cite{dispersion1} on the $(2N-1)$-particle
ground state energy
\begin{equation}
\label{eq:15}
\langle\chi_k|H|\chi_k\rangle
=2\Big(1-\frac{1}{D}\Big)\Big(D+\sum_{\mu=1}^{D}\cos k_{\mu}\Big)\,.
\end{equation}
The minimum
$
E_{>}(2N-1)=\min\,\langle\chi_k| H|\chi_k\rangle
$
with respect to the $k$-vector is obvious: For
$(k_1 , k_2 , \ldots , k_D)=(\pi,\pi,\ldots,\pi)$
the right hand side of (\ref{eq:15}) becomes zero.
However, this $k$-vector has to be {\em excluded} for this
particular model, because it can be shown\cite{brandt-giesekus}
(cf.\ App.\ \ref{app:1}) that the state $|\Phi_0\rangle$
(\ref{eq:7}) vanishes if $\vec{k}=(\pi,\ldots,\pi)$ is allowed.
That particular $k$-vector can be excluded by the restriction to
systems where one of the $D$ system dimensions contains
an odd number of cells. Then the $k$-vector component
corresponding to this particular direction cannot equal
$\pi$.
This result indicates that the state $|\chi_k\rangle$ becomes
an asymptotically exact $(2N-1)$-particle ground state of
$H$ for the thermodynamic limit. The upper bound on the
ground state energy vanishes as
\begin{equation}
\label{eq:16}
E_{>}(2N-1)= O(N^{-\frac{2}{D}})
\end{equation}
and asymptotically approaches the lower bound $E_{<}(2N-1)=0$.
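To make the scaling (\ref{eq:16}) explicit, note that with one odd direction of linear extension $L \sim N^{1/D}$ the allowed momentum closest to $(\pi,\ldots,\pi)$ has $k_1 = \pi - \pi/L$, so that Eq.\ (\ref{eq:15}) gives
\begin{equation}
E_{>}(2N-1)
\simeq
2\Big(1-\frac{1}{D}\Big)\Big(1-\cos\frac{\pi}{L}\Big)
\simeq
\Big(1-\frac{1}{D}\Big)\frac{\pi^2}{L^2}
=O\big(N^{-\frac{2}{D}}\big)\,.
\end{equation}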
This result is even relevant for $D=2$, where the asymptotically
exact state describes one hole in a half filled RVB-background.
The absence of ferromagnetism does not contradict Nagaoka's
theorem\cite{nagaoka} because the present lattice is not
bipartite. (For a bipartite lattice, the state would be a
ferromagnet, i.\,e.\ the total spin would assume its maximum value.)
Unfortunately, no rigorous statements on the uniqueness of this
ground state can be made.
Further, it can be seen that for $D>2$ the chemical potential
of the system for particle numbers of $N_e=2N-1,\ldots,ND$ remains
constant. The non-interacting tight-binding analogue exhibits quite
similar behavior: The band structure of this particular lattice
consists of one cosine band with
$
\epsilon(k)=2-2(D+\sum_{\mu}\cos k_{\mu})
$
and $D-1$ flat bands at the upper band edge of the extended band.
However, the ``bandwidth'' of the interacting system is reduced by
a factor $(1-1/D)$, which may be interpreted as an increased mass.
This result sheds new light on the conductivity in the system. It
might be that the many-particle state describes a dispersive
delocalized hole which may contribute to conduction. Unfortunately,
no results on hole {\em densities} have been obtained and
the dispersive state is not exact except in the vicinity of
$\vec{k}=(\pi,\ldots,\pi)$.
\subsection{Linear Hubbard chain}
\label{sec:5.2}
As a second example, we study a linear chain with $N$
cells and $4$ sites per cell ${\cal N}_i$ ($3$ sites
per {\em unit cell}). The topology of this chain is
illustrated in Fig.\ \ref{fig:1}b; it may be interpreted
as a chain of connected tetrahedra. Again, periodic
boundary conditions and the case of infinite
on-site repulsion on every site are assumed, and thus
the sets ${\cal N}_i$ and ${\cal U}_i$ are identical.
The cells $i$ and $i+1$ are connected by one site
(backbone site): the sites with indices
$(i,\alpha=1)$ and $(i+1,\beta=4)$ are identical.
All $\lambda$'s are set to unity. The Hamiltonian of
this particular chain reads
\begin{eqnarray}
\label{eq:17}
H \quad &=& \sum_{i,\sigma} {\Psi}_{i,\sigma}^{} P
{\Psi}_{i,\sigma}^{\dagger}\\
\nonumber
&=& \quad P \, \Big(
- \,\, \sum_{i,\sigma}
\sum_{{\alpha,\beta=1}
\atop
{\alpha \not= \beta}}^{4}
{c_{i,\alpha,\sigma}^{\dagger}}
c_{i,\beta,\sigma}^{}
- \,\, 2 \, \sum_{i,\sigma} n_{i,1,\sigma}^{}
\Big) \,P\\
\nonumber
&{}&\quad\quad\quad +\,\,P\,\left( 8N - 2 \hat{N}_e \right)\,.
\end{eqnarray}
In addition to the hopping terms, one obtains a local field
which shifts the energy of the backbone sites ($\alpha=1$).
For this model, the upper bound Eq.\ (\ref{eq:11})
reduces to the simple expression\cite{dispersion2}
\begin{equation}
\label{eq:18}
\langle\chi_k|H|\chi_k\rangle
=
3 -
\langle n_{i,\alpha=1,\uparrow}^{}\rangle
+
2
(1-\langle n_{i,\alpha=1,\uparrow}^{}\rangle)
\cos k \,.
\end{equation}
The right hand side of Eq.\ (\ref{eq:18}) assumes a minimum
for $k=\pi$ (which is allowed for this model). This leads
to the following upper bound on the ground state energy:
\begin{equation}
\label{eq:19}
E_{>}(2N-1)=1+\langle n_{i,\alpha=1,\uparrow}^{}\rangle \,.
\end{equation}
The occupation probability on the backbone sites may be
calculated by a transfer-matrix technique for the
thermodynamic limit. This calculation is briefly outlined
in appendix \ref{app:2} and yields
\begin{equation}
\label{eq:20}
\lim_{N\to\infty} E_{>}(2N-1)=
\frac{1}{2}(3-\frac{1}{\sqrt{13}})\approx 1.361325 \,.
\end{equation}
In contrast to the previous example this result indicates a
gap in the $(2N-1)$-particle spectrum. Equation (\ref{eq:20})
estimates this gap from above. Therefore the upper bound is compared
to numerically obtained exact results as described in
Ref.\onlinecite{giesekus}.
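Before turning to the numerics, the consistency of Eqs.\ (\ref{eq:18})--(\ref{eq:20}) can be checked in a few lines of Python, using the backbone occupation $\langle n_{i,1,\uparrow}\rangle = (1 - 1/\sqrt{13})/2$ implied by Eqs.\ (\ref{eq:19}) and (\ref{eq:20}):

```python
import math

n_bb = 0.5 * (1.0 - 1.0 / math.sqrt(13))   # backbone occupation <n_{i,1,up}>

# Eq. (18) evaluated at its minimum k = pi ...
bound = 3.0 - n_bb + 2.0 * (1.0 - n_bb) * math.cos(math.pi)
print(bound)                               # ... reduces to 1 + <n>, Eq. (19)
print(0.5 * (3.0 - 1.0 / math.sqrt(13)))   # Eq. (20): 1.3613...
```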
The numerical results have been obtained by the conjugate-gradient
algorithm.\cite{bradbury,nightingale} The energies are listed in
Tab.\ \ref{tab:1}. For chain lengths up to four unit cells, the
ground state is calculated without applying symmetry restrictions.
In order to check for ground-state degeneracy, the determination of
the lowest eigenstate is repeated in the subspace orthogonal to the
ground state. For an even number of unit cells, the energies of
these orthogonal eigenstates do not coincide with the ground-state
energy, therefore the ground states for even chain length are
non-degenerate. (Chains with an odd number of unit cells are
degenerate because of reflection symmetry which induces degeneracy
with respect to $\pm k$.)
As a second step, the same calculations have been performed
assuming site-interchange symmetry as described in
Ref.\onlinecite{giesekus} and translational invariance with
$k=\pi$. The energies of the symmetrized states coincide
with the previously calculated non-degenerate ground-state
energies, therefore the symmetry of the ground state is uniquely
determined.
The chain with six cells is numerically treated by application
of site interchange symmetry only and the result for $N=8$ is obtained
assuming translational invariance in addition to site-interchange
symmetry.
The numerically obtained exact ground-state energies $E_0 (2N-1)$ are
shown in Tab.\ \ref{tab:1}. The upper bound Eq.\ (\ref{eq:20}) of
the infinite system deviates from the numerical result for $N=8$
by $\approx 13\%$. This agreement is not very close. However, the
symmetries of the variational state and the numerically exact state
coincide. Better quantitative agreement may be obtained by the
expectation value of $H$ with respect to the state
\begin{equation}
\label{eq:21}
|\varphi_k\rangle
=
\frac{1}{\sqrt{N}}
\sum_{j}
e^{ikj}
|\varphi_j\rangle\,,
\end{equation}
[the state $|\varphi_i\rangle$ is defined by Eq.\ (\ref{eq:8})], which
may be evaluated numerically. For a chain with $N=4$ one obtains
$\langle\varphi_{k=\pi}|H|\varphi_{k=\pi}\rangle=1.2555$.
This result deviates from the numerically exact result by
$\approx 4\%$ only. The state $|\chi_k\rangle$ is proportional
to $\sqrt{H}|\varphi_k\rangle$; therefore, a difference between the
expectation values $\langle\varphi_{k=\pi}|H|\varphi_{k=\pi}\rangle$
and $\langle\chi_{k=\pi}|H|\chi_{k=\pi}\rangle$ simply reflects the
fact that $|\chi_{k=\pi}\rangle$ (and hence $|\varphi_{k=\pi}\rangle$)
is not an eigenstate. The operator $\sqrt{H}$ ``amplifies'' the
deviation from the exact ground state.
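This amplification can be made precise. Writing
$|\chi_k\rangle=\sqrt{H}|\varphi_k\rangle\,/\,\|\sqrt{H}|\varphi_k\rangle\|$
(the use of $\sqrt{H}$ presupposes $H\geq 0$), one finds
\[
\langle\chi_k|H|\chi_k\rangle
=\frac{\langle\varphi_k|H^2|\varphi_k\rangle}{\langle\varphi_k|H|\varphi_k\rangle}
\;\geq\;
\langle\varphi_k|H|\varphi_k\rangle
\]
by the Cauchy--Schwarz inequality, with equality if and only if
$|\varphi_k\rangle$ is an eigenstate of $H$.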
The single-particle band structure of this particular chain
is illustrated in Fig.\ \ref{fig:2}. It consists of two extended
bands, a broad band separated from a narrower one by a gap of
one energy unit; the highest-energy band collapses to a single
dispersionless flat band, which is again separated by a small
gap. For $2N$ particles the non-interacting system would have a
completely filled lower valence band, i.e., it would be an
insulator. The interacting system clearly behaves differently
here: in a one-particle picture, one would expect a gap when the
particle number is changed from $N_e=2N$ to $N_e=2N+1$; for the
interacting system, however, the chemical potential remains
constant in that case. Conversely, the chemical potential of the
interacting system does change when the particle number is changed
from $N_e=2N-1$ to $N_e=2N$. This contrast illustrates the breakdown
of one-particle physics in this particular model.
\section{Summary}
\label{sec:6}
For a class of models describing strongly interacting fermions
it is possible to write down the exact ground state if the
particle number equals two per unit cell or more. The particle-number
regime below this threshold has not been accessible
by rigorous methods so far. We present a tractable variational state
for systems with one additional hole. The corresponding expectation
value of the Hamiltonian approaches a lower bound for the case of a
particular $D$-dimensional Hubbard model in the thermodynamic limit.
We conclude that the presented variational state approaches the ground
state, or an eigenstate degenerate with the ground state if the
ground state is not unique.
For a second model, a special Hubbard chain, the analytical
expectation value deviates significantly from the numerically
calculated ground-state energy. Although the numerical ground state
and the variational state are shown to exhibit the same symmetries,
it remains unclear how well the ground-state physics of the chain
is described by our variational ansatz.
\acknowledgments
We gratefully acknowledge numerous helpful discussions with
J.\,Stolze. This work has been supported by the Deutsche
Forschungsgemeinschaft (DFG), Project Br\,434/6-2.
\section{Introduction}
\label{sec:intro}
Most state-of-the-art methods in 3D Multiple Object Tracking (3DMOT) follow the tracking-by-detection paradigm.
They first obtain bounding boxes of objects in each frame with 3D detectors and then associate the detected objects across frames into tracklets.
In the online tracking process, the same object is associated sequentially from frame to frame.
\input{introfig}
Ideally, a tracklet should be maintained until its target leaves the scene.
Previous works \cite{Weng20203DMT,weng2020gnn3dmot,sun2020scalability,chiu2020probabilistic,kim2020eager,yin2021center,benbarka2021score,zaech2021learnable} in 3DMOT usually adopt a simple delay-based mechanism for tracklet life management.
They maintain a tracklet that has lost its target for only a few frames.
If the tracklet fails to be associated with any detection during this period, it is terminated and no longer participates in detection-tracklet association in future frames.
This mechanism assumes that an object not detected for several frames has left the scene.
However, objects go dark for different reasons.
When an object is occluded or temporarily out of the field of view (FOV), it may also be missing for several frames, and its corresponding tracklet will be terminated prematurely.
In such a case, if the object is detected again in the future, a new tracklet will be initialized, causing an identity switch.
As shown in Table~\ref{tab:idswstatic}, 99.9\% of the vehicle identity switches reported by the CenterPoint tracker on the validation set of the Waymo Open Dataset (WOD) are the result of premature tracklet termination.
To overcome premature tracklet termination, we present Immortal Tracker, a simple tracking framework based on trajectory prediction for invisible objects.
Instead of terminating an unassociated tracklet, we maintain it with its predicted trajectory.
Therefore when a temporarily invisible object reappears on the predicted trajectory, it can be assigned to its original tracklet.
We use a vanilla 3D Kalman filter (3DKF)\cite{kalman1960new} for trajectory prediction to highlight the simplicity and effectiveness of our predict-to-track paradigm.
As shown in Figure~\ref{fig:introfig}, our simple 3DKF works surprisingly well for trajectory prediction on WOD.
Experiments show that our Immortal Tracker can significantly reduce identity switches when applied on the top of a wide range of off-the-shelf 3D detectors.
Built upon the detection results from CenterPoint\cite{yin2021center}, we establish new SOTA results on 3DMOT benchmarks like Waymo Open Dataset and nuScenes.
Our method achieves 60.6 level 2 MOTA for the vehicle class on Waymo Open Dataset.
We also bring the mismatch ratio down to the 0.0001 level, tens of times lower than previous SOTA methods.
On nuScenes, our method achieves a 66.1 AMOTA, outperforming all previous published LiDAR-based methods.
\input{idsw_static}
\section{Related works}
\label{sec:related}
{\bf 3D MOT.}
Following the tracking-by-detection paradigm, previous works in 3D multi-object tracking usually solve the tracking problem in the form of a bipartite graph matching process on top of off-the-shelf detectors.
Inspired by early works in 2D MOT\cite{bewley2016simple,wojke2017simple,dicle2013way}, various methods focus on strengthening the association between detections and tracklets by modeling their motions or appearance, or the combination of these both.
\cite{patil2019h3d} defines the state of the Kalman filter in the 2D plane.
AB3DMOT\cite{Weng20203DMT} presents a baseline method through combining 3D Kalman filter and the Hungarian algorithm\cite{kuhn1955hungarian} on top of PointRCNN detector\cite{shi2019pointrcnn}.
AB3DMOT employs 3D Intersection over Union (3D IoU) as its association metric, while Chiu et al.\cite{chiu2021probabilistic} use the Mahalanobis distance\cite{mahalanobis1936generalized} instead. SimpleTrack\cite{pang2021simpletrack} generalizes GIoU\cite{Rezatofighi_2019_CVPR} to 3D for association.
CenterPoint\cite{yin2021center} learns to predict the two-dimensional velocity of each detected box following CenterTrack\cite{zhou2020tracking} and performs simple point-distance matching.
To further reduce confusion during the association process, GNN3DMOT\cite{weng2020gnn3dmot} jointly extracts appearance and motion features with a Graph Neural Network to introduce feature interactions between objects.
\cite{chiu2021probabilistic} proposes a probabilistic multi-modal system that contains trainable modules for 2D-3D appearance feature fusion, distance combination, and trajectory initialization.
\cite{kim2020eager} combines 2D and 3D object evidence obtained from 2D and 3D detectors.
Authors of \cite{zaech2021learnable} merge predictive models and objects detection features in a unified graph representation.
In this paper, we use simple 3D IoU/GIoU metrics for data association.
Following early experience in 2D MOT\cite{bewley2016simple,wojke2017simple,he2021learnable}, previous works\cite{Weng20203DMT,weng2020gnn3dmot,chiu2020probabilistic,kim2020eager,yin2021center,zaech2021learnable} in 3D MOT usually adopt a count-based method for tracklet life-cycle management. In each frame, new tracklets are initialized from detections that are not associated with existing tracklets, and tracklets that lose their targets for several (typically fewer than 5) frames are terminated.
\cite{benbarka2021score,sun2020scalability} propose to initialize and terminate tracks depending on confidence scores estimated from the confidences of their associated detections. However, they still permanently terminate tracklets that fail to be associated with new detections. In contrast, we show that by predicting and preserving the trajectories of objects, tracklets that have lost their targets can be properly maintained for possible association in the future.
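The count-based scheme described above can be sketched in a few lines; the class name and the choice of $A_\text{max}=3$ are illustrative only, not taken from any particular codebase:

```python
# Minimal sketch of count-based tracklet life-cycle management.
# A tracklet is permanently terminated after A_MAX consecutive misses.

A_MAX = 3  # prior works typically use a small value (fewer than 5 frames)

class Tracklet:
    def __init__(self):
        self.misses = 0      # consecutive frames without an associated detection
        self.alive = True

    def mark_matched(self):
        self.misses = 0

    def mark_missed(self):
        self.misses += 1
        if self.misses >= A_MAX:
            self.alive = False   # permanent termination -> possible ID switch later

t = Tracklet()
t.mark_matched()
for _ in range(3):               # e.g. a 3-frame occlusion
    t.mark_missed()
print(t.alive)  # False: the occluded object's tracklet is gone for good
```

A reappearing object would now spawn a fresh tracklet, which is exactly the identity-switch mechanism that preserving tracklets avoids.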
\section{Method}
Figure~\ref{fig:pipeline} shows the overview of our system.
As an online tracking method, our method takes sequential detection results as input.
For each frame, we predict the object locations for all tracklets with 3DKF.
Then we perform the Hungarian algorithm for bipartite matching between detections and predictions.
The bipartite matching process will output the matched pairs and the unmatched tracklets or detections.
The states of matched tracklets are updated by their corresponding detections, and unmatched tracklets are updated with their predicted object states.
The unmatched detections will be initialized as new tracklets.
Each part in the pipeline will be described in detail in the following sections.
{\bf Detection.}
By design, our Immortal Tracker is agnostic to the choice of detector.
For our best-reported results, we employ CenterPoint\cite{yin2021center} as our off-the-shelf detector.
CenterPoint chooses a quite permissive IoU threshold for NMS to ensure a better recall.
Therefore, we follow SimpleTrack~\cite{pang2021simpletrack} and perform a much stricter NMS on detection results to remove unwanted boxes before feeding them to our Immortal Tracker.
{\bf Trajectory Prediction.}
We use a vanilla 3D Kalman filter (3DKF) for trajectory prediction.
Following AB3DMOT\cite{Weng20203DMT}, we define the states of Kalman filter $Z=[x,y,z,\theta,l,w,h,\dot{x},\dot{y},\dot{z}]$ in 3D space, including object position, box size, box orientation and velocity.
Different from AB3DMOT, our object positions are defined in the world frame like \cite{yin2021center}, instead of in the LiDAR frame.
The change of reference frame is crucial to our Immortal Tracker, helping us reduce the total number of tracklets managed.
During the tracking process, we alternate between predicting object state $X$ with 3DKF and updating 3DKF state $Z$ with incoming detection $D$.
Both $X$ and $D$ are 7-dim vectors consisting of $[x, y, z, \theta, l, w, h]$.
Note that we use $X$ to denote both a tracklet and the latest prediction of its associated 3DKF.
We may refer to $X$ as a tracklet or a prediction interchangeably hereafter.
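As a concrete illustration, a minimal constant-velocity Kalman filter over the state $Z$ above can be written as follows; the process- and measurement-noise magnitudes are placeholder assumptions, not values from the paper:

```python
import numpy as np

# Hedged sketch of a vanilla constant-velocity 3D Kalman filter with state
# Z = [x, y, z, theta, l, w, h, vx, vy, vz] and 7-dim observations.

dim_z, dim_d = 10, 7
F = np.eye(dim_z)
F[0, 7] = F[1, 8] = F[2, 9] = 1.0    # x += vx, y += vy, z += vz (dt = 1 frame)
H = np.eye(dim_d, dim_z)             # observe [x, y, z, theta, l, w, h]
Q = np.eye(dim_z) * 0.01             # process noise (assumed magnitude)
R = np.eye(dim_d) * 0.1              # measurement noise (assumed magnitude)

def predict(z, P):
    return F @ z, F @ P @ F.T + Q

def update(z, P, d):
    y = d - H @ z                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    return z + K @ y, (np.eye(dim_z) - K @ H) @ P

z = np.zeros(dim_z); z[7] = 1.0      # object moving at 1 unit/frame along x
P = np.eye(dim_z)
z, P = predict(z, P)                 # predicted x is now 1.0
z, P = update(z, P, np.array([1.0, 0, 0, 0, 4.5, 2.0, 1.6]))
```

The alternation of `predict` and `update` mirrors the tracking loop: unmatched tracklets keep calling `predict`, while matched ones are corrected by `update` with the associated detection.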
{\bf Detection-Prediction Association.}
For association, we compute 3D IoU or GIoU\cite{Rezatofighi_2019_CVPR, pang2021simpletrack} between detected 3D bounding boxes and boxes predicted by 3DKF.
We then perform Hungarian matching based on 3D IoU / GIoU.
We discard low quality matchings if the 3D IoU / GIoU of a matching is lower than $\text{IoU}_\text{thres}$ or $\text{GIoU}_\text{thres}$.
The inputs of the bipartite matching process are defined as:
\begin{align}
\mathcal D&=\{D^1, D^2,\cdots, D^{p}\} \\
\mathcal X&=\{X^1, X^2,\cdots, X^{q}\}
\end{align}
where $p$ and $q$ are the number of detections and tracklets in the frame.
The output of the bipartite matching process is defined as:
\begin{align}
\mathcal D_\text{m}&=\{D_\text{m}^1, D_\text{m}^2, \cdots, D_\text{m}^{k}\}\\
\mathcal D_\text{um}&=\{D_\text{um}^1, D_\text{um}^2, \cdots, D_\text{um}^{p-k}\}\\
\mathcal X_\text{m}&=\{X_\text{m}^1, X_\text{m}^2, \cdots, X_\text{m}^{k}\}\\
\mathcal X_\text{um}&=\{X_\text{um}^1, X_\text{um}^2, \cdots, X_\text{um}^{q-k}\}
\label{updatestep}
\end{align}
where $\mathcal X_\text{m}$ and $\mathcal D_\text{m}$ are the $k$ matched pairs of predictions and detections.
$\mathcal X_\text{um}$ are the unmatched tracklets and $\mathcal D_\text{um}$ are the unmatched detections.
The matched detections $\mathcal D_\text{m}$ will be used to update the 3DKF states of corresponding tracklets $\mathcal X_\text{m}$.
The unmatched tracklets $\mathcal X_\text{um}$ will use their own predictions to update 3DKF.
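The association step can be sketched with an IoU-gated matching. For brevity, this sketch uses axis-aligned boxes (ignoring $\theta$) and a greedy matcher as a simple stand-in for the Hungarian algorithm; all names and the threshold value are illustrative:

```python
# IoU-gated bipartite association (greedy stand-in for Hungarian matching).
# Boxes are axis-aligned (x, y, z, l, w, h); rotation is ignored for brevity.

IOU_THRES = 0.1

def iou3d_axis_aligned(a, b):
    inter = 1.0
    for c, s in ((0, 3), (1, 4), (2, 5)):          # overlap along x, y, z
        lo = max(a[c] - a[s] / 2, b[c] - b[s] / 2)
        hi = min(a[c] + a[s] / 2, b[c] + b[s] / 2)
        if hi <= lo:
            return 0.0
        inter *= hi - lo
    vol_a = a[3] * a[4] * a[5]
    vol_b = b[3] * b[4] * b[5]
    return inter / (vol_a + vol_b - inter)

def associate(dets, preds):
    pairs = sorted(((iou3d_axis_aligned(d, p), i, j)
                    for i, d in enumerate(dets)
                    for j, p in enumerate(preds)), reverse=True)
    used_d, used_p, matches = set(), set(), []
    for iou, i, j in pairs:
        if iou < IOU_THRES:
            break                                   # drop low-quality matchings
        if i not in used_d and j not in used_p:
            matches.append((i, j))
            used_d.add(i); used_p.add(j)
    return matches

dets  = [(0.0, 0.0, 0.0, 4.0, 2.0, 1.5), (10.0, 0.0, 0.0, 4.0, 2.0, 1.5)]
preds = [(0.5, 0.0, 0.0, 4.0, 2.0, 1.5)]
print(associate(dets, preds))  # [(0, 0)]: the distant detection stays unmatched
```

The unmatched detection here would then enter the tracklet-birth path described next.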
{\bf Tracklet Birth and Preservation.}
Following the common tracklet initialization strategy, we initialize a set of tracklets $\mathcal X_\text{new}$ for detections in $\mathcal D_\text{um}$.
For these new tracklets, we mark them as \emph{alive} once they have been successfully associated with detections $M_\text{hits}$ times in subsequent frames.
Otherwise, they stay at the \emph{birth} stage.
Different from previous works\cite{Weng20203DMT,weng2020gnn3dmot,sun2020scalability,chiu2020probabilistic,kim2020eager,yin2021center,benbarka2021score,zaech2021learnable} which terminate a tracklet after several frames since its last successful association, our Immortal Tracker always maintains tracklets for objects even if they are invisible.
The maintained tracklets will repeatedly predict their locations in the future frames with their last estimated states.
In this way, we maintain tracklets for all objects observed at least once.
Our tracklets never die once created and hence are {\bf immortal}.
When a missing object reappears near its predicted trajectory, we can then associate it with its original tracklet again and update the tracklet with the observation instead of the prediction.
In each frame, we output tracklets in $\mathcal O=\{X_i \in \mathcal X_\text{m} \cap \mathcal X_\text{alive}\}$, where $\mathcal X_\text{alive}$ are tracklets in the alive state.
Therefore the predictions of unmatched tracklets will not cause false positives.
Algorithm ~\ref{fig:algorithm} summarizes our simple tracking algorithm.
\input{algorithm}
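The per-frame loop summarized above can be condensed into the following sketch. Here `kf_predict`, `kf_update`, and `associate` are trivial stubs standing in for the 3DKF and the IoU-based matcher, and all names are ours, not from any released implementation:

```python
# Condensed sketch of one frame of the tracking loop: predict, associate,
# update, give birth -- and never terminate a tracklet.

M_HITS = 1

def kf_predict(state):            # stub: constant-position prediction
    return state

def kf_update(state, obs):        # stub: simply adopt the observation
    return obs

def associate(dets, preds):       # stub: match by index order
    return [(i, i) for i in range(min(len(dets), len(preds)))]

def step(tracklets, detections):
    preds = [kf_predict(t["state"]) for t in tracklets]
    matches = associate(detections, preds)
    matched_d = {i for i, _ in matches}
    matched_t = {j for _, j in matches}
    for i, j in matches:                      # update matched tracklets
        t = tracklets[j]
        t["state"] = kf_update(t["state"], detections[i])
        t["hits"] += 1
        if t["hits"] >= M_HITS:
            t["alive"] = True
    for j, t in enumerate(tracklets):         # preserve unmatched tracklets
        if j not in matched_t:
            t["state"] = kf_update(t["state"], preds[j])   # self-prediction
    for i, d in enumerate(detections):        # birth from unmatched detections
        if i not in matched_d:
            tracklets.append({"state": d, "hits": 0, "alive": False})
    # output only tracklets that are matched in this frame AND alive
    return [tracklets[j] for j in sorted(matched_t) if tracklets[j]["alive"]]

tracks = []
step(tracks, [(0.0,)])            # frame 1: birth, nothing output yet
out = step(tracks, [(0.1,)])      # frame 2: matched -> alive -> output
step(tracks, [])                  # frame 3: missed, but preserved, not killed
```

Even after the missed frame, the tracklet survives with its self-predicted state, so a reappearing object can be re-associated instead of triggering an identity switch.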
\section{Experiments}
\subsection{Datasets and Metrics.}
We evaluate our method on Waymo Open Dataset(WOD) and nuScenes.
{\bf Waymo Open Dataset}
\cite{sun2020scalability} contains 1000 driving video sequences, each lasting 20 seconds at 10 FPS.
798 / 202 / 150 sequences are used for training, validation and testing, respectively.
Point clouds and ground truth 3D boxes of objects in vehicle, pedestrian and cyclist classes are provided for each frame.
Following the official evaluation metrics specified in \cite{sun2020scalability}, we report Multiple Object Tracking Accuracy (MOTA)\cite{bernardin2008evaluating}, False Positives (FP), Miss and Mismatch for objects at the L2 difficulty level. Readers may refer to \cite{bernardin2008evaluating} for detailed descriptions of these metrics.
{\bf nuScenes}\cite{caesar2020nuscenes} contains 1000 driving sequences in total with LiDAR scans and ground truth 3D box annotations provided at 20 FPS and 2 FPS, respectively.
We report AMOTA\cite{weng2019baseline}, MOTA, and identity switches (IDS) for nuScenes.
AMOTA is computed by integrating MOTA over different recalls, which is used as the primary metric for evaluating 3DMOT on nuScenes.
\subsection{Baselines}
{\bf CenterPoint}\cite{yin2021center} is both a 3D detection method and a 3DMOT method; here we refer to them as CenterPoint-DET and CenterPoint-3DMOT, respectively.
We compare our proposed method with CenterPoint-3DMOT since we use the same CenterPoint-DET detection results.
Also, CenterPoint-3DMOT serves as a very strong baseline on both WOD and nuScenes.
{\bf CenterPoint++} is our modified tracking implementation of CenterPoint. CenterPoint-3DMOT only performs constant-velocity trajectory prediction and nearest-box-center matching, so it would be unfair to compare it directly with our method, which uses 3DKF for trajectory prediction and 3D IoU / GIoU for matching.
Therefore we set another baseline named CenterPoint++, which uses 3DKF and 3D IoU / GIoU matching like ours. The main difference between CenterPoint++ and Immortal Tracker is that a traditional tracklet termination mechanism is used in CenterPoint++. We terminate a tracklet in CenterPoint++ after $A_\text{max}$ frames since its last successful association following previous works\cite{Weng20203DMT,weng2020gnn3dmot,chiu2020probabilistic,kim2020eager,yin2021center,zaech2021learnable}. All the other hyper-parameters of CenterPoint++ are set the same as our Immortal Tracker for a fair comparison.
\subsection{ Implementation Details}
We take CenterPoint detection boxes as input.
We drop boxes with scores below 0.5 on WOD.
We keep all boxes on nuScenes.
We also perform 3D IoU-based NMS on detected bounding boxes with 0.25 / 0.1 3D IoU thresholds on WOD / nuScenes, respectively.
For detection-prediction association, we use 3D IoU / GIoU between detections and predictions on WOD / nuScenes, respectively.
We then perform the Hungarian algorithm over the resulting cost matrix.
We drop all detection-prediction associations with 3D IoU below 0.1 or 3D GIoU below -0.5.
For tracklet initialization, we set the hits to birth $M_\text{hits}=1$ on both WOD and nuScenes.
For CenterPoint++, we set $A_\text{max}=2$.
\subsection{Main Results}
We compare Immortal Tracker to previous 3DMOT methods on both WOD and nuScenes.
\input{waymotest}
\input{waymo_val}
\input{nuscenes_test}
\input{nuscenes_val}
{\bf Results on WOD.}
Table~\ref{tab:waymo} and ~\ref{tab:waymo_val} show comparisons of tracking results between our method, our baselines and other state-of-the-art methods on the WOD test and validation sets.
For the vehicle class, our method reduces the mismatch ratio to a new level of 0.0001, several times lower than in previous works.
It is noteworthy that our method performs much better than CenterPoint++ in the mismatch metric.
This indicates that the majority of the mismatch cases we eliminate are removed by maintaining tracklets with predicted trajectories rather than by employing 3DKF as the motion model.
Besides the substantial improvement in the mismatch ratio, our method also achieves a 0.4 / 0.9 MOTA improvement over CenterPoint++ for the vehicle / pedestrian class, respectively, outperforming all previously published methods.
{\bf Results on nuScenes.}
Table~\ref{tab:nuscenes} and ~\ref{tab:nuscenes_val} show our results on the nuScenes test and validation sets.
We compare our Immortal Tracker with published LiDAR-only works for a fair comparison.
All methods in Table~\ref{tab:nuscenes} and ~\ref{tab:nuscenes_val} are based on the same CenterPoint detection results, allowing a fair comparison between different tracking approaches.
Our method achieves a 2.3 AMOTA improvement over CenterPoint and a 0.3 improvement over CenterPoint++.
\subsection{Ablation Studies}
\input{ablation_detector}
We analyze design choices and hyper-parameter settings of Immortal Tracker on the WOD validation set for the vehicle class.
Results show that our Immortal Tracker is robust to a wide range of design choices and hyper-parameters.
{\bf The performance of base 3D detectors.} Table~\ref{tab:detector} summarizes the performances of our proposed method on top of different 3D detectors on the WOD validation set.
We take detection results from several 3D detectors, including PointPillars+PPBA\cite{lang2019pointpillars,cheng2020improving}, SECOND\cite{yan2018}, LiDAR R-CNN\cite{li2021lidar} and CenterPoint, which use voxelized or Bird's Eye View (BEV) representations of point clouds for object detection, as well as RangeDet\cite{fan2021rangedet}, which takes a range-view representation of the collected 3D information as input.
The results show that our method is agnostic to the choice of 3D detector and reduces the mismatch ratio to the 0.0001 level on top of various detectors.
{\bf When to terminate a tracklet. }
\input{age}
Immortal Tracker permanently maintains tracklets for objects that have gone dark, which is the main difference between Immortal Tracker and CenterPoint++. For this ablation, we introduce the trajectory prediction and tracklet preservation mechanisms into CenterPoint++, but retain a maximum preservation age $A_\text{max}$ for the tracklets. We then increase $A_\text{max}$ from 2 to study the influence of premature tracklet termination.
As shown in Figure~\ref{fig:age}, the mismatch ratio drops significantly and monotonically as we extend the life of tracklets by increasing $A_\text{max}$.
When tracklets are preserved with predicted trajectories for 50 frames, the mismatch ratio reaches 0.000123, 20 times lower than when tracklets are preserved by predictions for only 2 frames.
When the tracklet lifetime grows beyond 50 frames, up to 17\% of the remaining mismatch cases can be further avoided.
This indicates that preserving tracklets by trajectory prediction can significantly reduce mismatch cases without introducing significant tracklet confusion.
{{\bf The minimum birth count} $M_{\text{hits}}$.}
\input{abl_birth}
We also explore the influence of the minimum birth count $M_\text{hits}$ on the performance of our proposed method. The results are shown in Table~\ref{tab:birth}. Clearly, Immortal Tracker is robust to the choice of $M_\text{hits}$.
{\bf Association Threshold.}
We find that the association threshold $\rm IoU_{thres}$ is a major factor in the performance of Immortal Tracker. As shown in Table~\ref{tab:assothres}, when we increase $\rm IoU_{thres}$, Immortal Tracker reports significantly more mismatch cases and a lower MOTA.
\input{abl_assothres}
{\bf Redundancy of input detections.}
In our method, we perform NMS on detection results with a relatively low IoU threshold compared to CenterPoint.
The motivation is that, since rigid objects cannot overlap in 3D space, overlapping 3D detections are likely to be false positives and should be discarded.
Our experimental results in Table~\ref{tab:nmsthres} show that when we decrease the NMS threshold for stricter suppression, the mismatch ratio drops by a large margin, supporting our assumption.
\input{abl_nmsthres}
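A minimal sketch of this stricter NMS pass follows; an axis-aligned bird's-eye-view IoU serves as a placeholder for a full 3D IoU routine, and only the thresholds come from the text:

```python
# Score-ordered NMS with a strict IoU threshold, applied to detector output
# before tracking. iou_bev is an axis-aligned BEV placeholder; boxes are
# (x, y, l, w).

NMS_THRES = 0.25     # WOD setting from the text; nuScenes uses 0.1

def iou_bev(a, b):
    inter = 1.0
    for c, s in ((0, 2), (1, 3)):                  # overlap along x and y
        lo = max(a[c] - a[s] / 2, b[c] - b[s] / 2)
        hi = min(a[c] + a[s] / 2, b[c] + b[s] / 2)
        if hi <= lo:
            return 0.0
        inter *= hi - lo
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union

def nms(boxes, scores, thres=NMS_THRES):
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou_bev(boxes[i], boxes[k]) < thres for k in keep):
            keep.append(i)
    return keep

boxes  = [(0.0, 0.0, 4.0, 2.0), (0.2, 0.0, 4.0, 2.0), (10.0, 0.0, 4.0, 2.0)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))   # [0, 2]: the near-duplicate lower-scoring box is suppressed
```

Lowering `thres` makes the suppression stricter, which is the direction the ablation above explores.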
\subsection{Qualitative Results.}
Figure~\ref{failcase} shows the BEV visualization of the only remaining wrong-association case for Immortal Tracker.
Figure~\ref{successcase} visualizes one of the premature terminations prevented by Immortal Tracker.
\input{failcase}
\input{successcase}
\section{Conclusion and Future Direction}
In this work, we found that identity switches in 3D MOT are almost solely caused by the premature tracklet termination widely present in modern 3D MOT methods.
We proposed using trajectory prediction to preserve tracklets for invisible objects.
We found using a simple 3D Kalman filter for trajectory prediction could reduce identity switch cases by 96\%.
Our method provides valuable insights for handling long-standing challenges in tracking, such as long-term occlusion.
However, due to the limitations of existing 3DMOT benchmarks, we cannot verify the effectiveness of our predict-to-track paradigm in long-term ($\gg 20$ seconds) and/or more interactive (\textit{e.g.}, the famous Shibuya crossing) scenarios.
In the future, we want to set up a more challenging benchmark for 3DMOT and explore the synergy between trackers and more sophisticated trajectory predictors.
In this paper we continue the study of a class of parabolic free boundary problems initiated in [B]. Our present goal is to establish that Lipschitz free boundaries are $C^{1,\alpha}$ surfaces in a sense that is stated precisely in Theorem~\ref{thrm:main_thrm}.
The problem under consideration in this work is as follows: Let $u$ be a viscosity solution in a domain $\Omega$ to
\begin{equation}\label{FBP_Statement}
\left\lbrace
\begin{aligned}
&\mathcal{L}u -u_t =0 \quad \text{in } \;\{u>0\}\cup\{u\leq 0\}^\circ\\
&G(u^+_\nu,u^-_\nu)=1 \quad \text{along } \;\partial\{u>0\}.
\end{aligned}
\right.
\
\end{equation}
Here $\mathcal{L}$ is an elliptic operator with H\"{o}lder continuous coefficients and $G(\;,\;)$ defines the free boundary condition of the problem.
Typical examples of the boundary condition in~\eqref{FBP_Statement} include $u^+_\nu =1$ and $(u^+_\nu)^2-(u^-_\nu)^2 =2M$, $M$ a positive number. Both arise as the free boundary condition for a singular perturbation problem which models combustion. This problem consists of studying the limit as $\varepsilon\rightarrow 0$ of solutions to
\[
\Delta u^\varepsilon -u^\varepsilon_t=\beta_\varepsilon(u^\varepsilon)
\]
where $\beta(s)$ is a Lipschitz function supported on $[0,1]$ with
\[
\int_0^1 \beta(s) \mathrm{d}s =M \quad \text{and} \quad \beta_\varepsilon(s) =\frac{1}{\varepsilon}\beta(\frac{s}{\varepsilon}).
\]
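Note that, since $\beta$ is supported on $[0,1]$, the substitution $s=\varepsilon\sigma$ shows that the total reaction mass carried by $\beta_\varepsilon$ is independent of $\varepsilon$:
\[
\int_0^\varepsilon \beta_\varepsilon(s)\,\mathrm{d}s
=\int_0^\varepsilon \frac{1}{\varepsilon}\,\beta\!\left(\frac{s}{\varepsilon}\right)\mathrm{d}s
=\int_0^1 \beta(\sigma)\,\mathrm{d}\sigma
=M.
\]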
Under the assumption that $u^\varepsilon \geq 0$, it was shown in \cite{CV} that the boundary condition for the limit function $u$ is $u_\nu^+=1$. In \cite{CLW1}, \cite{CLW2} the two-phase version of this problem was studied and the free boundary condition for the limit solution was shown to be $(u^+_\nu)^2 -(u^-_\nu)^2 =2M$. In both cases this free boundary condition holds in a suitable weak sense.
Stationary (i.e.\ time-independent) versions of~\eqref{FBP_Statement} with $\mathcal{L}=\Delta$ were studied in \cite{C1}, \cite{C2}. In these pioneering papers the notion of a viscosity solution to~\eqref{FBP_Statement} was formulated, and the key concepts of monotonicity cones and sup-convolutions were introduced. The main result of these works is that Lipschitz free boundaries are smooth, as are sufficiently `flat' free boundaries. In this context `flat' means that the free boundary is close to the graph of a Lipschitz function with suitably small Lipschitz constant.
The first extension of these techniques to a parabolic problem was in \cite{ACS1}, \cite{ACS2}, \cite{ACS3}. The problem studied in these works is the Stefan problem, which models melting/solidification and differs from~\eqref{FBP_Statement} in its free boundary condition which, among other differences, involves the time derivative of $u$. Similar, though not quite as strong, results were proved in these papers as for the elliptic problem studied in \cite{C1}, \cite{C2}. Lipschitz free boundaries were proved to be $C^1$ under a non-degeneracy condition on $u$, and sufficiently flat free boundaries were also proved to be $C^1$. In both cases it was also proved that $u\in C^1(\overline{\Omega^+})\cup C^1(\overline{\Omega^-})
$, so that $u$ satisfies the free boundary condition in a classical sense. Finally, \cite{F} adapted these techniques to the study of~\eqref{FBP_Statement} for the heat equation.
All of the above cited works on the regularity of the free boundary involve either the Laplacian in the stationary case or the heat equation in the parabolic case. The proofs in these papers make extensive use of the fact that directional derivatives of solutions to a constant coefficient linear PDE are themselves solutions to the same equation. Indeed, the most difficult aspect of adapting these methods to the variable coefficient case is that this fact is unavailable. The only progress in adapting these methods to problems with variable coefficients is found in \cite{CFS}, where the authors study an elliptic problem, and in \cite{FS1}, \cite{FS2}, where the authors study the Stefan problem with flat free boundaries.
In this work we adapt these methods and use them to study the regularity of the free boundary for solutions of~\eqref{FBP_Statement}. Our main result is that the free boundary is a differentiable surface whose normal varies with a H\"{o}lder modulus of continuity, and that the free boundary condition is taken up with continuity.
The outline of this work is as follows: In Section 2 we precisely define the problem, the concept of a solution, our assumptions and our main result. In Section 3 we have collected the main tools and known results that we will need in our analysis. Section 4 deals with the interior enlargement of the monotonicity cone while Section 5 contains results that propagate a portion of this enlargement to the free boundary. Finally Section 6 contains the iteration used to prove the regularity of the free boundary in space while Section 7 contains a similar iteration used to prove the regularity in space-time.
\section{Definitions and Statement of Results}
We collect in this section the precise statement and hypotheses of our problem along with the statement of our main result.
We will denote the positivity set of $u$ by $\Omega^+$; likewise the negative set is denoted by $\Omega^-$. Occasionally we will write $\Omega^\pm(u)$ to emphasize the dependence of these domains on the function $u$. The set $\partial\{u>0\}$ is the free boundary and will be denoted by $F\!B(u)$ or just $F\!B$. In this work we will assume that the free boundary is the graph of a Lipschitz function $f$, that is, it consists of the set $\{(x',x_n,t)| f(x',t)=x_n\}$ with $f(0,0) =0$. Denote by $L$ and $L_0$ the Lipschitz constant of $f$ in space and time respectively.
The operator in~\eqref{FBP_Statement}
\[
\mathcal{L} =\sum_{i,j} a_{ij}(x,t)D_{ij}
\] has H\"{o}lder continuous coefficients $a_{ij} \in C^{0,\alpha}(\Omega)$ with respect to the parabolic distance, $0<\alpha\leq 1$ and there exists $\lambda, \Lambda >0$ such that
\[
\lambda|\xi|^2 \leq \sum a_{ij}(x,t)\xi_i \xi_j \leq \Lambda |\xi|^2
\] for all $(x,t) \in \Omega$. Denoting by $A(x,t)$ the matrix $[a_{ij}(x,t)]$, we assume $A(0,0) =[\delta_{ij}]$ the identity.
On $G(a,b)$ we will require
\begin{enumerate}
\item $G$ Lipschitz with constant $L_G$ in both variables.
\item $G(a_1,b) -G(a_2,b) >c^*(a_1-a_2)^p$ if $a_1>a_2$ (strictly increasing in first variable)
\item $G(a,b_1)-G(a,b_2) < -c^*(b_1-b_2)^p$ if $b_1>b_2$ (strictly decreasing in second variable)
\end{enumerate}
The $p$ appearing here is some positive power.
\begin{defn}(Classical Subsolution/Supersolution)
We say $v(x,t)$ is a classical subsolution (supersolution) to~\eqref{FBP_Statement} if $v \in C^1(\overline{\Omega^+(v)})\cup C^1(\overline{\Omega^-(v)})$, $\mathcal{L}v-v_t\geq 0$ $(\mathcal{L}v-v_t\leq 0)$ in $\Omega^\pm(v)$ and
\[
G(v_\nu^+,v_\nu^-) \geq 1 \;\;(G(v_\nu^+,v_\nu^-) \leq 1), \quad \text{where }\nu= \frac{\nabla v^+}{|\nabla v^+|}.\]
A strict subsolution (supersolution) satisfies the above with strict inequalities.
\end{defn}
\begin{defn}(Viscosity Subsolutions/Supersolutions)
A continuous function $v(x,t)$ is a viscosity subsolution (supersolution) to~\eqref{FBP_Statement} in $\Omega$ if for every space-time cylinder $Q= B_r' \times (-T,T) \Subset \Omega$ and for every classical supersolution (subsolution) $w$ in $Q$, the inequality $v\leq w$ ($v\geq w)$ on $\partial_pQ$ implies that $v\leq w$ ($v\geq w)$. Additionally, if $w$ is a strict classical supersolution (subsolution), then $v<w$ $(v>w)$ on $\partial_p Q$ implies $v<w$ $(v>w)$ inside $Q$.
\end{defn}
We now turn to the hypotheses on the free boundary of $u$. The main theorem of this work will require that this free boundary is Lipschitz, but we will also require a non-degeneracy condition to hold at regular points. We first define such points.
\begin{defn}(Regular Points) A point $(x_0,t_0)$ on the free boundary of $u$ is a right regular point if there exists a space-time ball $B_R \subset \Omega^+$ such that $B_R \cap \partial \{u>0\} =\{(x_0,t_0)\}$.
A point $(x_0,t_0)$ on the free boundary of $u$ is a left regular point if there exists a space-time ball $B_R \subset \Omega^-$ such that $B_R \cap \partial \{u\leq 0\} =\{(x_0,t_0)\}$.
\end{defn}
We will assume the following non-degeneracy condition on $u$: there exists an $m>0$ such that if $(x_0,t_0)$ is a right regular point for $u$, then
\begin{equation}
\frac{1}{|B'_r(x_0)|}\int_{B'_r(x_0)} u^+ \,\mathrm{d}x \geq mr.
\end{equation}
The main result of this paper is the following theorem.
\begin{thrm}\label{thrm:main_thrm}
Let $u$ be a solution to our free boundary problem in $Q_1$ satisfying the hypotheses of this section. Then at every point $(x,t)$ of the free boundary in $Q_{1/2}$ there exists a normal vector $\eta(x,t)$ to the surface. Furthermore, this normal vector satisfies
\begin{enumerate}
\item $|\eta(x,t) -\eta(y,t)| \leq C|x-y|^\alpha$
\item $|\eta(x,s) -\eta(x,t)| \leq C|s-t|^\beta$
\end{enumerate}
Finally, the free boundary condition is taken up with continuity by the solution $u$ so that $u$ is a classical solution to~\eqref{FBP_Statement}.
\end{thrm}
\section{Main Tools and Collected Results}
Define the domain $\Omega_{2r}$ by
\[\Omega_{2r} =\{(x',x_n,t) : |x'|<2L^{-1}r, |t|<4L_0^{-2}r^2, f(x',t)<x_n<4r\}.
\]
Denote by $P_r =(0,r,0)$, $\overline{P_r}=(0,r,2L_0^{-2}r^2)$, $\underline{P_r}=(0,r,-2L_0^{-2}r^2)$. These are the inward point, forward point and backward point, respectively. Denote by $\delta(X,Y)$ the parabolic distance between $X=(x,t)$ and $Y=(y,s)$ and by $\delta_X$ the parabolic distance from $X$ to the origin.\\
Our tools, valid for $\mathcal{L}$-caloric functions on Lipschitz domains vanishing on a piece of the boundary, are as follows (see \cite{FS2}):
\textit{Interior Harnack Inequality}: There exists a positive constant $c=c(n,\lambda,\Lambda)$ such that for any $r\in(0,1)$
\[
u(\underline{P_r}) \leq cu(\overline{P_r}).
\]
\textit{Carleson Estimate}: There exist $c=c(n,\lambda,\Lambda,L,L_0)$ and $\beta=\beta(n,\lambda,\Lambda,L,L_0)$, $0<\beta\leq 1$, such that for every $X\in \Omega_{r/2}$
\[
u(X) \leq c\left(\frac{\delta_X}{r}\right)^\beta u(\overline{P_r}).
\]
\textit{Boundary Harnack Principle}: There exist $c=c(n,\lambda, \Lambda, L, L_0)$ and $\beta=\beta(n,\lambda, \Lambda, L, L_0)$, $0<\beta \leq 1$, such that if $u$ and $v$ are two such solutions, then for every $(x,t) \in \Omega_{2r}$
\[
\frac{u(x,t)}{v(x,t)} \geq c \frac{u(\underline{P}_r)}{v(\overline{P}_r)}.
\]
\textit{Backward Harnack Inequality}: Let $m=u(\underline{P_{3/2}})$ and $M=\sup_{\Omega_2}u$. Then there exists a positive constant $c=c(n,\lambda,\Lambda,L,L_0,M/m)$ such that if $r\leq 1/2$
\[
u(\overline{P_r}) \leq cu(\underline{P_r}).
\]
Throughout the work we will use $c$ to denote constants which depend on some or all of $n,\lambda,\Lambda,L,L_0,M/m$.
Our starting point in the analysis of the free boundary will be the following result proved in \cite{B}. We denote the cone of directions with axis $\eta$ and opening $\theta$ by $\Gamma(\eta,\theta)$.
\begin{thrm}
Let $u$ be a viscosity solution to~\eqref{FBP_Statement} satisfying the hypotheses of this section. Then $u$ is Lipschitz and possesses a space-time cone of directions with axis $e_n$ and opening angle $\theta$ in which the solution is monotone:
\[
u(x-\tau)\leq u(x) \quad \forall \tau \in \Gamma(e_n,\theta).
\]
\end{thrm}
\subsection{Initial Configurations and Domains}
In what follows it will be necessary to know that the coefficients $a_{ij}(x,t)$ in the operator $\mathcal{L}$ are suitably close to $\delta_{ij}$. To this end we define $a^s_{ij}(x,t) =a_{ij}(sx,s^2t)$ and set
\[
\mathcal{L}^s-\partial_t = \sum_{i,j} a^s_{ij}(x,t) D_{ij} -\partial_t
\] to be the parabolic operator with these dilated coefficients. Set
\[
u_s(x,t) = \frac{u(sx,s^2t)}{s}.
\]
Then we have the equivalence
\[
\mathcal{L}u-u_t=0 \Leftrightarrow \mathcal{L}^su_s-(u_s)_t=0.
\] Note that this parabolic rescaling of $u$ does not alter the free boundary condition in~\eqref{FBP_Statement}. We will assume $a_{ij}(0,0) =\delta_{ij}$ and set $A=\sup_{i,j} [a_{ij}]_\alpha$.
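For completeness, the displayed equivalence is a direct chain-rule computation: since $u_s(x,t)=s^{-1}u(sx,s^2t)$, we have $D_{ij}u_s(x,t)=s\,D_{ij}u(sx,s^2t)$ and $(u_s)_t(x,t)=s\,u_t(sx,s^2t)$, so that
\[
\mathcal{L}^su_s(x,t)-(u_s)_t(x,t) = s\sum_{i,j}a_{ij}(sx,s^2t)D_{ij}u(sx,s^2t)-s\,u_t(sx,s^2t)= s\left(\mathcal{L}u-u_t\right)(sx,s^2t),
\]
and the right-hand side vanishes precisely when $\mathcal{L}u-u_t$ does.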
Let $x_0=\frac{3}{4}e_n$, $P_0=(x_0,0)$, $\overline{P_0}=(x_0,\frac{9}{8L_0^2})$, $\underline{P_0} =(x_0,-\frac{9}{8L_0^2})$. These are inward, forward and backward reference points, respectively.
Next define regions $T= B'_{1/4}(x_0)\times (-\frac{9}{16L_0^2},\frac{9}{16L_0^2})$ and set
\[
\Phi = B'_{1/4-\sigma}(x_0)\times (-\frac{9}{16L_0^2}+\sigma^2,\frac{9}{16L_0^2}-\sigma^2)
\]
with $\sigma$ to be specified later. By construction the parabolic distance from $\partial \Phi$ to $\partial T$ is $\sigma$.
Finally, let
\[
\Psi =B'_{1/8}(x_0)\times (-\frac{9}{32L_0^2},\frac{9}{32L_0^2}).
\]
In what follows we will have $\Psi \Subset \Phi \Subset T$ by our choice of $\sigma$. Additionally, by an initial change of variables $u(rx,rt)$, with $r<1$, we can reduce the Lipschitz constants $L$ and $L_0$ to be less than one, so that the above regions and test points are contained within $\Omega_4$. Finally, by a rescaling we may assume the free boundary of $u$ is contained in $\{|x_n|<1/10\}$. \\
Next define $z$ by
\begin{align*}
\Delta z -z_t &=0 \quad \text{in } T\\
z &=u \quad \text{on } \partial_p T.
\end{align*}
Note that
\begin{align*}
\mathcal{L}^s(u-z)-(u-z)_t &= -\sum (a^s_{ij}-\delta_{ij})D_{ij}z \\
\Delta(u-z)-(u-z)_t &= -\sum (a^s_{ij}-\delta_{ij})D_{ij}u.
\end{align*}
We may assume by this configuration that the conclusion of Lemma 2.1 in \cite{FS1} holds throughout $\Omega_4$. This states that
\[
c_1 \frac{u(X)}{d_{X}} \leq D_nu(X) \leq c_2 \frac{u(X)}{d_X}
\]
where $d_X$ denotes the distance from $X=(x,t)$ to the free boundary at time level $t$.
\section{Interior Enlargement of the Monotonicity Cone}
The results in this section only require the following: $u$ is $\mathcal{L}^s$-caloric, where $\mathcal{L}^s$ is suitably close to $\Delta$ (as controlled by the $a^s_{ij}$), $u$ vanishes on the piece of the boundary $\{f(x',t)=x_n\}$, and $u$ is Lipschitz with a monotonicity cone $\Gamma(e_n,\theta)$. In particular, the free boundary condition $G(u_\nu^+, u_\nu^-)=1$ plays no role in these results. Our method of proof is similar to \cite{CFS} in the elliptic case.
\begin{lm}
Let $u$ be a solution to $\mathcal{L}^su-u_t=0$ in $\Omega_4$, with $z$ as above. Then
\begin{equation}
|u-z|^*_{2+\alpha,\Phi} \leq Ks^\beta u(\underline{P_0})
\end{equation}
where $K=K(A)$ is a constant which depends on $A$ as well as the usual quantities and $\beta =\frac{\alpha^2}{\alpha+2}$.
\end{lm}
\begin{proof}
We apply the Schauder estimates to the difference $u-z$ to obtain
\begin{equation}\label{inq:SchEst}
|u-z|^*_{2+\alpha,\Phi} \leq C(|u-z|_{0,\Phi}+|\sum (a^s_{ij}-\delta_{ij})D_{ij}u|^{(2)}_{0,\alpha, \Phi})
\end{equation}
using the standard (see \cite{L}) notation for these norms and weighted norms. Recall that $|f|^{(2)}_\alpha = |f|^{(2)}_0+[f]^{(2)}_\alpha$. We begin by estimating the H\"{o}lder norm term as follows:
\begin{align*}
&|(a^s_{ij}-\delta_{ij})D_{ij} u|^{(2)}_{0,\Phi} +[(a^s_{ij}-\delta_{ij})D_{ij}u]^{(2)}_{\alpha,\Phi} \\
&\leq As^\alpha|D_{ij}u|^{(2)}_{0,\Phi} + |(a^s_{ij}-\delta_{ij})|_{0,\Phi}[D_{ij}u]^{(2)}_{\alpha,\Phi}+[(a^s_{ij}-\delta_{ij})]_{\alpha,\Phi}|D_{ij}u|^{(2)}_{0,\Phi}\\
&\leq As^\alpha|D_{ij}u|^{(2)}_{0,\Phi} + As^\alpha[D_{ij}u]^{(2)}_{\alpha,\Phi} + As^\alpha |D_{ij}u|^{(2)}_{0,\Phi}\\
&\leq A s^\alpha(|u|^*_{2+\alpha,\Phi}) \leq CAs^\alpha|u|_{0,\Phi} \leq CAs^\alpha u(\overline{P_0})\\
&\leq CAs^\alpha u(\underline{P_0}).
\end{align*}
The Backward Harnack Inequality was used to obtain the last inequality.
Now we estimate the sup norm term in~\eqref{inq:SchEst}. Using the \textit{a priori} estimates we have
\[
|u-z|_{0,\Phi} \leq |u-z|_{0,\partial_p\Phi}+ C'\sup_\Phi |(a^s_{ij}-\delta_{ij})D_{ij}u|.
\]
The first term is estimated as follows: recall that $u=z$ on the boundary of $T$, so their difference vanishes there. Now $\partial_p \Phi$ lies a distance $\sigma$ from $\partial T$, so using the H\"{o}lder continuity up to the boundary we have that $|u-z|_{0,\partial_p \Phi} \leq c\sigma^\alpha|u-z|_{0,T}$.
Using this we have
\begin{align*}
|u-z|_{0,\Phi} &\leq c\sigma^\alpha|u-z|_{0,T} +C's^\alpha A \sigma^{-2} |u|_{0,\Phi}\\
&\leq c\sigma^\alpha|u|_{0,T} +C's^\alpha A \sigma^{-2} |u|_{0,\Phi}.
\end{align*}
Select $\sigma=s^{\frac{\alpha}{\alpha+2}}$ and obtain
\[
|u-z|_{0,\Phi} \leq s^{\frac{\alpha^2}{\alpha+2}}|u|_{0,T}(c+C'A) \leq c's^{\frac{\alpha^2}{\alpha+2}}u(\overline{P_0})(c+C'A).
\]
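The exponent here comes from balancing the two terms in the previous estimate: equating their powers of $s$ and $\sigma$,
\[
\sigma^\alpha = s^\alpha\sigma^{-2} \iff \sigma^{\alpha+2} = s^\alpha \iff \sigma = s^{\frac{\alpha}{\alpha+2}},
\]
at which point both terms are of the common size $\sigma^\alpha = s^{\frac{\alpha^2}{\alpha+2}}$.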
Now we always have $\alpha>\frac{\alpha^2}{\alpha+2}$ for $\alpha >0$ so for $s<1$ we have $s^\alpha< s^{\frac{\alpha^2}{\alpha+2}}$. Combining this with the estimate for the H\"{o}lder norm above we obtain
\[
|u-z|^*_{2+\alpha,\Phi} \leq \left[ CA +c'(c+C'A) \right]s^{\frac{\alpha^2}{\alpha+2}} u(\overline{P_0}) \leq c\left[ CA +c'(c+C'A) \right]s^{\frac{\alpha^2}{\alpha+2}} u(\underline{P_0}),
\]
where the Backward Harnack Inequality was used once more in the last step. This is the conclusion of the lemma with $K =c(CA+c'(c+C'A))$.
\end{proof}
At this point it becomes convenient to begin treating the spacial portion of the cone and the space-time cone separately. We denote these by $\Gamma_x(e_n,\theta_x)$ and $\Gamma_t(\eta,\theta_t)$ respectively, with $\eta$ a vector in the $e_n-e_t$ plane. We now focus on expanding these cones of directions.
\begin{lm}\label{lm:2}
Let $u$ be a solution to $\mathcal{L}^su-u_t=0$ in $\Omega_4$ with a cone of monotonicity $\Gamma(e_n,\theta)$. Let $\nabla =\frac{1}{|\nabla u(\underline{P_0})|}\nabla u(\underline{P_0})$. Then if $s$ is sufficiently small, for any $\tau \in \Gamma_x(e_n,\theta)$, $|\tau| =1$,
\begin{equation}\label{inq:HarnackLike}
D_\tau u(X) \geq (C\langle \nabla,\tau \rangle -cs^\beta)u(\underline{P_0})
\end{equation} for all $X\in \Psi$.
The same statement holds for $\tau \in \Gamma_t(\eta,\theta_t)$, with $\nabla$ the unit vector in the direction $(u_{x_n}(\underline{P_0}),u_t(\underline{P_0}))$ in the $e_n-e_t$ plane.
\end{lm}
\begin{proof}
The proof follows the same lines in both the spacial and space-time cases. We begin with the spacial case.
Let $z$ be as in the previous lemma so that
\[
|u-z|_{2+\alpha,\Phi}^* \leq cs^\beta u(\underline{P_0})
\] where $\beta =\frac{\alpha^2}{\alpha+2}$.\\
Then $D_\tau z +cs^\beta u(\underline{P_0})$ is a non-negative solution to the heat equation in $\Phi$: non-negativity follows from the estimate above together with $D_\tau u\geq 0$. By the Harnack Inequality, for $(x,t)$ in $\Psi$ we have
\begin{equation}
D_\tau z(x,t) +cs^\beta u(\underline{P_0}) \geq c' \left( D_\tau z(\underline{P_0}) +cs^\beta u(\underline{P_0})\right).
\end{equation}
Hence, letting $\nabla' =\frac{1}{|\nabla z(\underline{P_0})|}\nabla z(\underline{P_0})$, and assuming without loss of generality that $c'<1$, we obtain
\begin{align}
D_\tau z &\geq c'D_\tau z(\underline{P_0}) -cs^\beta u(\underline{P_0})\\
&= c'|\nabla z(\underline{P_0})| \langle \nabla ',\tau \rangle -cs^\beta u(\underline{P_0}).
\end{align}
Now using the Schauder estimate, $\frac{u}{d}\sim |\nabla u|$ and the Harnack inequality we have
\[
|\nabla z| \geq |\nabla u| -cs^\beta u(\underline{P_0}) \geq (C-cs^\beta)u(\underline{P_0}).
\]
So if $s$ is small enough then $|\nabla z(\underline{P_0})| \geq cu(\underline{P_0})$, and the inequality above becomes
\begin{equation} \label{eq:11}
D_\tau z \geq (c^*\langle \nabla',\tau \rangle -cs^\beta)u(\underline{P_0}).
\end{equation}
Using the Schauder estimate once more we have
\[
|\nabla' - \nabla| \leq \frac{|\nabla u(\underline{P_0})-\nabla z(\underline{P_0})|}{|\nabla u(\underline{P_0})|}+ \frac{|(|\nabla z(\underline{P_0})|-|\nabla u(\underline{P_0})|)|}{|\nabla u(\underline{P_0})|} \leq cs^\beta.
\]
This is proved as follows: After adding and subtracting the quantity $|\nabla z|\nabla z$, the left-hand side becomes (suppressing the dependence on the point)
\[
\frac{|\nabla z |\nabla u|-\nabla u|\nabla z||}{|\nabla z||\nabla u|} = \frac{||\nabla z|(\nabla z -\nabla u)+\nabla z(|\nabla u|-|\nabla z|)|}{|\nabla z||\nabla u|}.
\]
Applying the triangle inequality and canceling terms we obtain the first inequality. The second one is then a consequence of the Schauder estimate (with a different constant $c$). \\
We have thus established $\langle \nabla',\tau \rangle \geq \langle \nabla,\tau \rangle -cs^\beta$. Replacing this in~\eqref{eq:11} we have
\[
D_\tau z \geq (c_1\langle \nabla,\tau \rangle -cs^\beta)u(\underline{P_0}).
\]
Finally, using the Schauder estimate one last time we obtain (with different constants than in the previous line)
\[
D_\tau u \geq (C\langle \nabla,\tau \rangle -cs^\beta)u(\underline{P_0}).
\]
In the space-time case the same calculation works with $\nabla$ the unit vector in the $e_n-e_t$ plane in the direction $(u_{x_n}(\underline{P_0}),u_t(\underline{P_0}))$ and $\nabla'$ the vector in the direction $(z_{x_n}(\underline{P_0}),z_t(\underline{P_0}))$.
\end{proof}
\textbf{Remark:} In the case of the heat equation the inequality~\eqref{inq:HarnackLike}, without the `error' term $cs^\beta u(\underline{P_0})$, can be obtained easily by simply applying the Harnack Inequalities to the solution. In the variable coefficient case~\eqref{inq:HarnackLike} acts as a substitute.\\
At this point we need to make sure that $D_\tau u$ remains positive, which cannot be guaranteed because of the error term in~\eqref{inq:HarnackLike}. To deal with this we eliminate the portion of the original cone consisting of the vectors which make an angle of more than $\frac{99\pi}{200}$ with $\nabla$. We denote this modified set of directions by $\Gamma'_x(e_n,\theta_x)$ (or $\Gamma_t'(\eta,\theta_t)$, as the case may be). Then for some $c_3$ and any $\tau \in \Gamma'_x(e_n,\theta_x)$
\[
\langle \nabla,\tau \rangle \geq c_3\delta,
\] where $\delta =\frac{\pi}{2}-\theta_x$ is the defect angle of the cone; here $c_3$ depends on how much of the cone was deleted. In the space-time case we use $\mu = \frac{\pi}{2}-\theta_t$ to denote the defect angle; initially this is the same as $\delta$, but this will not hold in the iteration later in the paper.
As is by now standard, this monotonicity can be described in terms of the sup-convolution, in our case over `thin' balls, either purely spacial or in the space-time plane. Precisely,
\[
v_\varepsilon(X) = \sup_{B'_\varepsilon(X)} u(Y-\tau) \leq u(X)
\] for any $\tau \in \Gamma'(e_n,\frac{\theta}{2})$ sufficiently small, with $\varepsilon =|\tau|\sin\frac{\theta}{2}$. The $B'$ denotes a thin ball either purely in space or in space-time, depending on whether $\tau$ is in $\Gamma_x'$ or $\Gamma_t'$.\\
In what follows, the direction $\tau$ is either in $\Gamma'_x$ or $\Gamma_t'$; the proofs are the same. We distinguish between them only because later work will make this distinction, and it is convenient to have the interior enlargement respect it.
\begin{lm}\label{lm:3}
Let $u$ be as in Lemma~\ref{lm:2}. Then there exists $s_0>0$ such that if $s\leq s_0$ we have
\begin{equation}\label{inq:gap}
u(\underline{P_0}) -v_\varepsilon(\underline{P_0}) \geq \sigma\varepsilon u(\underline{P_0}).
\end{equation}
\end{lm}
\begin{proof}
If $Y\in B_\varepsilon(\underline{P_0})$ then, invoking the Mean Value Theorem with $\bar{\tau}=\tau+(\underline{P_0}-Y)$, we obtain
\begin{equation}
u(Y-\tau) =u(\underline{P_0} -\bar{\tau}) =u(\underline{P_0}) -|\bar{\tau}|D_{\bar{\tau}}u(X^*).
\end{equation}
We estimate $D_{\bar{\tau}}u$ from below. If $\tau \in \Gamma(e_n,\frac{\theta}{2})$ then $\bar{\tau}\in \Gamma'(e_n,\theta)$, so using the observation immediately preceding this lemma we have
\begin{align*}
D_{\bar{\tau}} u &\geq (c_1\langle \nabla,\bar{\tau} \rangle -c_2s^\beta)u(\underline{P_0})\\
& \geq c\delta u(\underline{P_0})
\end{align*}
for any $s\leq s_0 =\left(\frac{c_1c_3\delta}{2c_2}\right)^{1/\beta}$.\\
This, together with the fact that $|\bar{\tau}|\geq c\varepsilon$, implies that $|\bar{\tau}|D_{\bar{\tau}}u(X^*) \geq c\varepsilon\delta u(\underline{P_0})$. Using this in the identity above we get
\[
u(Y-\tau) \leq (1-c\varepsilon\delta) u(\underline{P_0}).
\]
Since $Y$ is an arbitrary point of $B_\varepsilon(\underline{P_0})$, we obtain the desired `gap' with $\sigma =c\delta$.
\end{proof}
We now propagate the `gap' at the point $\underline{P_0}$ in the above inequality to a smaller gap in a whole neighborhood.
\begin{lm}\label{lm:4}
Let $u$ be as in Lemma~\ref{lm:3}, monotone increasing in every direction in $\Gamma'(e_n,\theta)$. Suppose for $\varepsilon>0$, $\sigma>0$ small we have
\begin{equation}\label{eq:lm4}
u(\underline{P_0}) -v_\varepsilon(\underline{P_0}) \geq \sigma\varepsilon u(\underline{P_0}).
\end{equation}
Then there exist positive constants $C$ and $h$ such that in $\Psi$ we have
\[
u(X) -v_{(1+h\sigma)\varepsilon}(X) \geq C\sigma \varepsilon u(\underline{P_0}).
\]
\end{lm}
\begin{proof}
Write $v_\varepsilon (X) =\sup_{B'_\varepsilon (X)} u_1$ where $u_1(X)=u(X-\tau)$.
Let $\tau \in \Gamma'(e_n,\theta/2)$, with $\varepsilon=|\tau|\sin \frac{\theta}{2}$. For any unit vector $\nu$ (either in space or in the $e_n-e_t$ plane, depending on whether $\tau \in \Gamma_x$ or $\Gamma_t$) write
\begin{align*}
u(P) -&u_1(P +\varepsilon\nu(1+h\sigma)) \\ &=[u(P)-u_1(P+\varepsilon\nu)]+[u_1(P+\varepsilon\nu) -u_1(P +\varepsilon\nu(1+h\sigma))]\\
&=W(P)+Y(P).
\end{align*}
Set $\bar{\tau} =\tau -\varepsilon\nu$. Then $|\bar{\tau}| \geq |\tau|-\varepsilon \geq c\varepsilon$. We estimate $W(P)$ and $Y(P)$ as follows:
$W(P)$ is non-negative (since $\bar{\tau}\in \Gamma(e_n,\theta)$) and a solution to a parabolic equation, hence we can apply the Harnack Inequality and conclude
\[
W(P)\geq cW(\underline{P_0}) \geq c\sigma\varepsilon u(\underline{P_0})
\] using our initial assumption~\eqref{eq:lm4}.
For the $Y(P)$ term we apply the fact that $\nabla u \sim \frac{u}{d}$ and the Carleson Estimate. Hence
\[
|\nabla u_1(P)| \leq Cu_1(P) \leq Cu_1(\underline{P_0}) \leq Cu(\underline{P_0}).
\] Here we have used a combination of the Carleson Estimate and the Backward Harnack Inequality to obtain the middle inequality. By the Mean Value Theorem it follows that $|Y(P)| \leq h\sigma\varepsilon\,|\nabla u_1| \leq Ch\sigma\varepsilon u(\underline{P_0})$.
Together the estimates for $W(P)$ and $Y(P)$ yield
\[
W(P)+Y(P) \geq c\sigma\varepsilon u(\underline{P_0}) -Ch\sigma\varepsilon u(\underline{P_0}) \geq \bar{C}\sigma\varepsilon u(\underline{P_0})
\] if $h$ is chosen small enough ($h<\frac{c}{2C}$).
\end{proof}
Using the Backward Harnack Inequality we have the following corollary.
\begin{cor}
Let $u$ be as in Lemma~\ref{lm:4}, monotone increasing in every direction in $\Gamma'(e_n,\theta)$. Suppose for $\varepsilon>0$, $\sigma>0$ small we have
\[
u(\underline{P_0}) -v_\varepsilon(\underline{P_0}) \geq \sigma\varepsilon u(\underline{P_0}).
\]
Then there exist positive constants $C$ and $h$ such that in $\Psi$ we have
\[
u(X) -v_{(1+h\sigma)\varepsilon}(X) \geq C\sigma \varepsilon u(P_0).
\]
\end{cor}
An application of the geometric cone enlargement lemma due to Caffarelli (Theorem 4.2 in \cite{CS}) yields an expansion of the monotonicity cone, either $\Gamma_t$ or $\Gamma_x$, as the case may be. This is stated precisely below.
\begin{cor}\label{cor:int_gain}
Let $u$ be a solution to our free boundary problem
\[
\left\lbrace
\begin{aligned}
&\mathcal{L}u -u_t =0 \quad \text{in } \;\{u>0\}\cup\{u<0\}\\
&G(u^+_\nu,u^-_\nu)=1 \quad \text{along } \;\partial\{u>0\}
\end{aligned}
\right.
\]
and set $u_r = \frac{u(rx,r^2t)}{r}$, a parabolic blow-up. Then there exists an $r_0$ such that if $r\leq r_0$ we have the following:
\begin{enumerate}
\item If $u_r$ is monotone in a spacial cone of directions $\Gamma_0^x(e_n, \theta^x_0)$ then in $\Psi$ $u_r$ is monotone in an expanded cone of spacial directions $\Gamma_1^x(\nu_1,\theta_1^x)$ with defect angle decay given by $\delta_1\leq c\delta_0$ and $|\nu_1-e_n|\leq C\delta_0$.
\item If $u_r$ is monotone in a space-time cone of direction $\Gamma_0^t(\eta_0,\theta_0^t)$, $\eta_0 \in \mathrm{span}\{e_n,e_t\}$, then in $\Psi$ it is monotone in an expanded cone of directions $\Gamma_1^t(\eta_1,\theta_1^t)$ with defect angle decay $\mu_1 \leq c\mu_0$, $\eta_1 \in \mathrm{span}\{e_n,e_t\}$, $|\eta_1-\eta_0| \leq c\mu_0$.
\end{enumerate}
\end{cor}
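For orientation, we sketch why this dyadic gain is the engine behind Theorem~\ref{thrm:main_thrm}: iterating the corollary over scales (assuming, as carried out later in the paper, that its hypotheses can be restored at each scale) produces
\[
\delta_{k+1} \leq c\,\delta_k \leq c^{k+1}\delta_0, \qquad |\nu_{k+1}-\nu_k| \leq C\delta_k \leq Cc^k\delta_0,
\]
so the axes $\nu_k$ form a Cauchy sequence converging geometrically to a limiting direction. Geometric convergence of the cone axes along dyadic scales is precisely the mechanism that yields the H\"older continuity of the normal asserted in Theorem~\ref{thrm:main_thrm}.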
\section{Propagation Lemma}
It is not possible to propagate the uniform gain in the monotonicity cone proved in the previous section to the free boundary. Instead only a portion of the gain can be propagated. This is accomplished by using a family of sup-convolutions with a variable radius.
For a positive function $\phi$ and direction $\tau$ define the sup-convolution
\[
v_{\phi,\tau}(p) =\sup_{q\in B_{\phi(p)}(p)}u(q-\tau).
\]
Also, let $\mathcal{C}_{R,T} =B_R^\prime\times (-T,T).$
In the sequel, we will need suitable versions of Lemmas 3.1 and 3.3 in \cite{FS2}. The first describes a condition that $\phi$ needs to satisfy in order to make $v_{\phi,\tau}$ a sub/super-solution to our operator. The second establishes the existence of a family of functions satisfying this condition, among others.
\begin{lm}
Let $u$ be a solution to our free boundary problem for the operator $\mathcal{L}-D_t$. Let $\varepsilon_0$ be small enough and let $\phi \in C^2(\bar{\mathcal{C}}_{R,T})$ be a strictly positive function. Let $\omega \leq \omega(\phi_{\mathrm{MAX}})$. Assume that in a smaller cylinder $\mathcal{C}^\prime \subset \mathcal{C}_{R,T}$ with $\mathrm{dist}(\mathcal{C}^\prime, \partial \mathcal{C}_{R,T}) \geq \rho \gg \varepsilon_0$ we have $D_t\phi\geq 0$ and
\[
\mathcal{L}(\phi) -c_1 D_t\phi \geq C\frac{|\nabla \phi|^2 +\omega^2}{\phi} +c_2(|\nabla \phi |+\omega)
for some positive constants $C$, $c_1$ and $c_2$ depending only on $n,\lambda,\Lambda, \rho$.
Then in both $\Omega^\pm(v_{\phi,\tau}) \cap \mathcal{C}'$, $v_{\phi,\tau}$ is a viscosity subsolution to the operator $\mathcal{L} -D_t$.
\end{lm}
\textbf{Remark:} In \cite{FS2} this lemma is stated for a family of operators and is therefore slightly different, and more complex, than the version we have stated. We do not require this in our case. Additionally, in \cite{FS2} the lemma is stated for a solution $u$ to the Stefan problem, but the free boundary condition does not play a role in the proof, and therefore the same result holds for our problem. Finally, their lemma has the Pucci extremal operator $\mathcal{P}^-$ on the left of the inequality instead of $\mathcal{L}$. Since $\mathcal{P}^-(\phi)-c_1D_t\phi \leq \mathcal{L}(\phi) -c_1D_t\phi$ by the properties of the Pucci operator, we are justified in making this substitution.\\
Next define the region
\[
D=\left[ B_1'\setminus \left( \bar{B}'_{1/8}(x_0)\right) \right]\times (-T,T).
\]
From Lemma 3.3 in \cite{FS2} we have the following:
\begin{lm} Let $T>0$ and $C>1$. There exist positive constants $\bar{C}= \bar{C}(T,C)$, $k=k(T,C)$, and $h'_0 =h'_0(T,C)$ such that for any $0<h'<h'_0$ there is a family of $C^2$ functions $\phi_\eta$, $0\leq \eta \leq 1$, defined in the closure of $D$ such that
\begin{enumerate}
\item $1-\omega\leq \phi_\eta \leq 1+\eta h'$
\item $\mathcal{L}\phi_\eta-c_1D_t\phi_\eta -C\frac{|\nabla \phi_\eta|^2+\omega^2}{\phi_\eta} -c_2(|\nabla \phi_\eta|+\omega) \geq 0 $ in $D$
\item $\phi_\eta\geq 1+k\eta h'$ in $B'_{1/2}\times(\frac{-T}{2},\frac{T}{2})$
\item $\phi_\eta \leq 1$ in $D\setminus (B'_{7/8}\times (-\frac{7T}{8},\frac{7T}{8}))$
\item $D_t\phi_\eta \leq \bar{C}\eta h'$ and $|\nabla \phi_\eta| \leq \bar{C}(\eta h'+\omega)$ in $\bar{D}$
\item $D_t\phi_\eta \geq 0$ in $D$
\end{enumerate}
Here the $\omega$ appearing in (2) is a small positive constant; that is to say that if $c_1$, $c_2$, and $\omega$ are small positive constants depending on $n,\lambda,\Lambda,C$ then it is possible to construct this family.
\end{lm}
\textbf{Remark:} This is essentially Lemma 3.3 in \cite{FS2} with only two small differences. First, as noted above, our domain has only the one hole; this causes only small and obvious alterations to the construction. Second, similar to the previous lemma, Lemma 3.3 in \cite{FS2} has the Pucci extremal operator $\mathcal{P}^-$ instead of $\mathcal{L}$ in item (2). As in the previous remark, we are justified in this substitution by the properties of $\mathcal{P}^-$.
This concludes the essential properties of the variable radii functions $\phi_\eta$. \\
In what follows we will use the family $\varepsilon \phi_{\sigma \eta}$. The $\sigma \eta$ term presents no difficulties, but the derivative inequality which must be satisfied in order for the sup-convolutions to be subsolutions is not homogeneous in $\phi$. Precisely, when we replace $\phi$ with $\varepsilon\phi$ in item (2) of the above lemma we obtain
\[
\mathcal{L}\varepsilon\phi_\eta-c_1D_t\varepsilon\phi_\eta -C\frac{|\nabla \varepsilon\phi_\eta|^2+\omega^2}{\varepsilon\phi_\eta} -c_2(|\nabla \varepsilon\phi_\eta|+\omega).
\]
The presence of the $\omega$ terms prevents us from simply factoring out an $\varepsilon$. Rearranging we have
\[
\left(\mathcal{L}\varepsilon\phi_\eta-c_1D_t\varepsilon\phi_\eta -C\frac{|\nabla \varepsilon\phi_\eta|^2}{\varepsilon\phi_\eta} -c_2|\nabla \varepsilon\phi_\eta|\right) -C\frac{\omega^2}{\varepsilon\phi_\eta}-c_2\omega.
\]
Owing to the condition initially satisfied by $\phi_\eta$, the term in parentheses will be strictly positive provided $\omega$ is strictly positive (which it will be for a variable coefficient problem). So if we impose the additional condition $\omega\leq \varepsilon^2$, then for sufficiently small $\varepsilon$ the family $\varepsilon\phi_{\sigma\eta}$ will satisfy the desired inequality.\\
This condition is what restricts us to using $\varepsilon$-monotonicity. In our later work we take $\varepsilon =|\tau|\sin \delta$, with $\tau$ and $\delta$ coming from the monotonicity cone. Since we are also requiring $\omega \leq \varepsilon^2$, we see that some restriction on the length of $\tau$ is necessary: we cannot take $\tau$ arbitrarily small, since that would in turn force the oscillation of the coefficient matrix, which is measured by $\omega$, to be zero, reducing the problem to the constant coefficient case.\\
We now prove our version of the propagation lemma used in these problems. From now on we will assume that $\omega_0 \leq \varepsilon^2$ with $\varepsilon\leq \varepsilon_0$. We will make use of standard asymptotic development results for both $u$ and $v_\varepsilon$, as in \cite{B}.
\newcommand{v_\eta}{v_\eta}
\newcommand{\bar{v}_\eta}{\bar{v}_\eta}
\begin{lm}\label{lm:propagation}
Let $u_1$ and $u_2$ be two viscosity solutions to our problem in $B'_2\times (-2,2)$, with the free boundary $F(u_2)$ Lipschitz continuous and $(0,0)\in F(u_2)$. Assume
\begin{enumerate}
\item In $B'_1\times(-T,T)$
\[
v_\varepsilon (x,t) = \sup_{B_\varepsilon(x,t)}u_1 \leq u_2(x,t)
\]
\item For some $\sigma$ positive and some $h$ small and $(x,t)\in B_{1/8}(x_0)\!\times\! (-T,T) \subset {\{u_2>0\}}$
\[
u_2(x,t) -v_{(1+h\sigma)\varepsilon}(x,t) \geq C\sigma\varepsilon u_2(x_0,0)
\]
\item $\omega_0$ is sufficiently small (as above).
\end{enumerate}
Then if $\varepsilon>0$ and $h>0$ are small enough, there exists $k\in (0,1)$ such that in $B_{1/2}\times (-\frac{T}{2},\frac{T}{2})$
\[
v_{(1+kh\sigma)\varepsilon}(x,t)\leq u_2(x,t)
\]
\end{lm}
\begin{proof}
Define $w(x,t)$ as follows:
\begin{align*}
\mathcal{L}w-w_t &=0 \quad \text{in } D\cap \{u_2>0\}\\
w &=u_2(P_0) \quad \text{on } \partial B_{1/8}(P_0)\times (-9T/10,9T/10)\\
w &=0 \quad \text{on the rest of } \partial_pD.
\end{align*}
Next, using the family constructed above with $\varepsilon\leq \varepsilon_0$, set
\begin{align*}
v_\eta &= v_{\varepsilon \phi_{\sigma \eta}}\\
\bar{v}_\eta &= v_\eta +c\sigma \varepsilon w(x,t).
\end{align*}
The constant $c$ is chosen to make $\bar{v}_\eta \leq u_2$ on $\partial_p [B'_{1/8}(P_0)\times(-9T/10,9T/10)]$. This is possible by the second hypothesis and the Harnack inequality. This ensures that $\bar{v}_0 \leq u_2$.
We now demonstrate that the set of $\eta$ for which $\bar{v}_\eta \leq u_2$ is all of $[0,1]$. This is accomplished by showing that the set of $\eta$ for which we have $\{\bar{v}_\eta >0\}\cap D \Subset \{u_2>0\}\cap D$ is both open and closed; by construction the set is non-empty. The set is closed since the quantities involved vary continuously. We show it is open by supposing that there is an $\eta$ for which the free boundaries touch, that is $\bar{v}_\eta (x_0,t_0) =u_2(x_0,t_0)=0$.
All points are regular from the right for $\bar{v}_\eta$ by properties of the sup-convolution. Since $\bar{v}_\eta$ touches $u_2$ at $(x_0,t_0)$, this point will be right regular for $u_2$. Additionally, by the assumption that $\bar{v}_\eta (x_0,t_0) =u_2(x_0,t_0)=0$, we have that $v_\eta (x_0,t_0)=0$ as well since $w$ vanishes where $u_2$ does. This means that the corresponding point $(y_0,s_0)$ on the free boundary of $u_1$ is left regular. Therefore, appealing to the asymptotic development results in \cite{B} we have
\begin{align*}
u_1 &\geq a_1^+\langle y-y_0, \nu_1 \rangle^+ -a_1^-\langle y-y_0,\nu_1 \rangle^- +o(|y-y_0|)\\
&\text{with } G(a_1^+,a_1^-) \geq 1 \text{ and equality along } t=-\gamma \langle y-y_0,\nu_1\rangle,\ \gamma>0\\
u_2 &\leq a_2^+\langle x-x_0, \nu_2 \rangle^+ -a_2^-\langle x-x_0,\nu_2 \rangle^- +o(|x-x_0|) \\
&\text{with }G(a_2^+,a_2^-) \leq 1 \text{ and equality along }t=-\gamma \langle x-x_0,\nu_1\rangle,\ \gamma>0\\
v_\eta &\geq a^+\langle x-x_0, \nu^* \rangle^+ -a^-\langle x-x_0,\nu^* \rangle^- +o(|x-x_0|)
\end{align*}
where $\nu_1 =\frac{y_0-x_0}{|y_0-x_0|}$, $a^{\pm} =a_1^{\pm}|\tau|$, $\nu^* =\frac{\tau}{|\tau|}$ with
\[
\tau = \nu_1 +\frac{\varepsilon^2\phi_{\sigma\eta}(x_0,t_0)}{|y_0-x_0|}\nabla_x \phi_{\sigma\eta}(x_0,t_0).
\]
Now by the Boundary Harnack Principle we have $\frac{w}{u_2} \sim c$, so the asymptotic development of $w$ at $(x_0,t_0)$ has leading coefficient $ca_2^+$. Hence for $\bar{v}_\eta$ we have
\[
\bar{v}_\eta \geq \bar{a}^+\langle x-x_0, \nu^* \rangle^+ -a^-\langle x-x_0,\nu^* \rangle^- +o(|x-x_0|),
\] where $\bar{a}^+=a^++c\sigma \varepsilon a_2^+$.
Now recall that $G$ is Lipschitz continuous in both variables with Lipschitz constant $L_G$, increasing in the first, decreasing in the second. Moreover, in (9.14) of \cite{CS} it is shown that
\begin{align*}
|a_1^\pm -a^\pm| &\leq c(D_t\varepsilon\phi_{\sigma\eta}(x_0,t_0) +|\nabla \varepsilon\phi_{\sigma\eta}(x_0,t_0)|)\\ &\leq c(C\sigma\eta h \varepsilon+C\sigma\eta h \varepsilon+C\omega\varepsilon) \\
&\leq\bar{c}\sigma h \varepsilon,
\end{align*}
the last inequality coming from the construction of the $\phi_\eta$. Specifically, we use the fact that $\eta \leq 1$ and $\omega \leq \varepsilon^2$, so the $C\omega\varepsilon$ term can be majorized by the linear term for small $\varepsilon$. Now $\bar{a}^+ \geq a_1^+-\bar{c}\sigma\varepsilon h+c\sigma\varepsilon a_2^+$ and $a^- \leq a_1^-+\bar{c}\sigma\varepsilon h$. Hence we have
\begin{align*}
G(\bar{a}^+,a^-) &\geq G(a_1^+-\bar{c}\sigma\varepsilon h+c\sigma\varepsilon a_2^+,a_1^-+\bar{c}\sigma\varepsilon h)\\
&\geq G(a_1^+,a_1^-) +L_G[(-\bar{c}\sigma\varepsilon h+ c\sigma \varepsilon a_2^+)-\bar{c}\sigma\varepsilon h]\\
&= G(a_1^+,a_1^-)+L_G\sigma\varepsilon(-2\bar{c}h+ca_2^+)\\
&\geq 1+L_G\sigma\varepsilon(-2\bar{c}h+ca_2^+),
\end{align*}
which implies that $G(\bar{a}^+,a^-) > 1$ provided $h \leq\frac{ca_2^+}{4\bar{c}}$. Our non-degeneracy condition forces $a_2^+\geq c>0$, so taking $h =\frac{ca_2^+}{4\bar{c}}$ we will have $-2\bar{c}h+ca_2^+>0$ and thus $G(\bar{a}^+,a^-) > 1$ as desired.
We finish the proof by appealing to the Hopf Principle. The difference $u_2-\bar{v}_\eta$ is a positive $\mathcal{L}$-supersolution in $\{\bar{v}_\eta >0\}$ vanishing at the boundary point $(x_0,t_0)$. This implies that $a_2^- \leq a^-$ and by the Hopf Principle we have $a_2^+>\bar{a}^+$. The properties of $G$ then imply that
\[
1\geq G(a_2^+,a_2^-)> G(\bar{a}^+,a^-),
\]
which contradicts $G(\bar{a}^+,a^-)>1$ above.
Hence no touching can occur, so the set of admissible $\eta$ is open, and therefore $\bar{v}_\eta\leq u_2$ for every $\eta\in[0,1]$. Now recalling the properties of the $\phi_\eta$ above, particularly
\[
\phi_\eta\geq 1+k\eta h \text{ in } B'_{1/2}\times\left(\frac{-T}{2},\frac{T}{2}\right)
\]
we have for $\eta=1$ that $\phi_{\sigma} \geq 1+k\sigma h$ and thus
\[
v_{\varepsilon(1+k\sigma h)} (x,t)\leq u_2(x,t)
\]
in the region $B'_{1/2}\times\left(\frac{-T}{2},\frac{T}{2}\right)$.
\end{proof}
\section{Spacial Regularity}
\subsection{Outline of Proof}
In the constant coefficient case regularity follows from applying the interior gain, then the propagation lemma, then rescaling and repeating.
Preventing us from applying this classical argument in our case is the extra $\omega_0 \leq \varepsilon^2$ hypothesis of our propagation lemma. This restricts the choice of $\tau$ for which we can apply the propagation lemma with $u_1 =u(x-\tau)$ and $u_2=u(x)$. The $\tau$ cannot be `too short', since if it is allowed to be arbitrarily short it forces the oscillation $\omega_0=0$. This means that we can carry full monotonicity using the propagation lemma only in the constant coefficient case, which forces us to use $\varepsilon$-monotonicity in our variable coefficient problem.
The reader will recall that a function $u$ is $\varepsilon_0$-monotone in a unit direction $\tau$ if
\[
u(x)\geq u(x-\varepsilon\tau) \quad \text{for } \varepsilon\geq \varepsilon_0.
\]
Strict $\varepsilon$-monotonicity, which is of importance in this problem, is similar but quantifies the `gap' between the two points:
\[
u(x)- u(x-\varepsilon\tau)\geq c\varepsilon^\beta u(x) \quad \text{for } \varepsilon\geq \varepsilon_0, \text{ some } \beta>0.
\]
Clearly if $u$ is fully monotone in a direction, then it is also $\varepsilon_0$-monotone for any $\varepsilon_0$ we choose.
Finally, it will be convenient to work with an alternate definition of $\varepsilon$-monotonicity, which is essentially equivalent to the one above. We say that $u$ is $\varepsilon$-monotone in the cone of directions $\Gamma(\nu,\theta)$ with defect angle $\delta$ if for any $\tau \in \Gamma(\nu,\theta-\delta)$ with $|\tau| =\varepsilon$ we have
\[
\sup_{B_{\varepsilon\sin \delta}(p)} u(q-\tau) \leq u(p).
\]
In this case, the requirement of the propagation lemma is seen to be $\omega \leq (|\tau|\sin \delta)^2$.
Our method of proof modifies the classical proof by accommodating this $\varepsilon$-monotonicity. An outline of the steps involved is as follows: Interior gain (given by Corollary~\ref{cor:int_gain}) is propagated to the free boundary by Lemma~\ref{lm:propagation}, but only for $\varepsilon$-monotonicity. The solution is then rescaled and by giving up part of the gain from the first two steps we can assert that the rescaled solution is fully monotone in a smaller cone away from the free boundary (see Lemma~\ref{lm:ep_to_full_mono_space}). This is all that is required to repeat the interior gain argument and at this point we can iterate the result. Special attention must be paid to the effect rescaling has on $\varepsilon$-monotonicity, as well as the amount of cone loss that occurs when in passing from $\varepsilon$-monotonicity to full monotonicity.
\subsection{Spacial Cone Enlargement}
In the lemma below, $r$ and $\lambda$ are constants (less than 1) chosen small enough later. In particular, $\lambda$ will be chosen by the calculation in Corollary~\ref{main_cor}. We take $\varepsilon_k =\lambda^k\varepsilon_0$ and $C_{r^k }= B_{r^k/2}\times(\frac{-r^{2k}T}{2},\frac{r^{2k}T}{2})$. $Q_R$ will be the quadratic cylinder $B_R\times(-R^2,R^2)$.
\begin{lm} \label{space_cone_enlarge}
Let $u$ be a solution to our problem in $B'_2\times (-2,2)$, monotone in the directions $\Gamma^x(e_n,\theta_0)\cup\Gamma^t(\eta,\theta_t)$, with $\eta$ in the span of $e_n$ and $e_t$. Then $u$ is $r^k\varepsilon_k$-monotone in $C_{r^k}$ in an expanded spacial cone of directions $\Gamma^x(\nu_1,\theta^x_1) =\Gamma^x_1$ with defect angle $\delta_1 \leq c\delta_0$, $c<1$.
\end{lm}
\textbf{Remark:} Notice that we are asserting improved $\varepsilon$-monotonicity in smaller and smaller regions $C_{r^k}$, in \textit{the same} expanded cone of directions $\Gamma^x_1$. Increasing the cone opening iteratively will come later.
\begin{proof} We rescale $u$:
\[
u_r =\frac{u(rx,r^2t)}{r},
\] the rescaling factor $r$ to be fixed later in the proof. The rescaled function will still possess the same spacial monotonicity cone as the original. Additionally, it solves an equation with the rescaled coefficients $a_{ij}(rx,r^2t)$. The oscillation of these coefficients is controlled by $cr^\alpha$, $\alpha$ being the H\"{o}lder exponent. We will assume $r$ is small enough so that Corollary~\ref{cor:int_gain} and the related results from Section 3 can be applied to $u_r$.
Consider now a spacial vector $\tau \in \Gamma^x(e_n,\theta-\delta_0)$, where $\delta_0$ is the defect angle of the space cone, $|\tau| =\varepsilon \ll \delta_0$, $\bar{\varepsilon}=|\tau|\sin\delta_0$. Set $u_1(x,t) =u_r((x,t)-\tau)$. Additionally, assume that the defect angle of the space-time cone is less than that of the space cone.
From the monotonicity cone we have
\[
\sup_{B_{\bar{\varepsilon}}(x)}u_1(y,t) \leq u_r(x,t) \quad \text{in } B_{1}\times(-1,1).
\]
Note that this sup is performed over a space ball. However, we may assume that the same sort of result holds over a space-time ball
\begin{equation}\label{eq:full_ball_sup}
\sup_{B_{\bar{\varepsilon}}(x,t)}u_1(y,s) \leq u_r(x,t) \quad \text{in } B_{1}\times(-1,1)
\end{equation} since the defect angle in space is larger than that in time.
From Corollary~\ref{cor:int_gain} we have that there exists an enlarged cone of spacial directions $\tilde{\Gamma}_x$ in $\Psi$, the neighborhood of $(x_0,0)$. Let $\bar{\tau}$ be a unit (spacial) direction in this expanded cone $\tilde{\Gamma}_x$; then since this enlarged cone contains the old one we can write this direction as $\bar{\tau}=\alpha\tau-\beta e_n$, $\beta\geq 0$, where $\tau$ is a unit vector in the old cone.
Since $\bar{\tau}$ is a direction in which $u$ is increasing we have $D_{\bar{\tau}} u_r \geq 0$. Using the above, this implies that
\[
D_\tau u_r\geq \frac{\beta}{\alpha}D_n u_r.
\]
Now, if we delete a small neighborhood $\mathcal{N}$ of the contact line $\Gamma\cap\tilde{\Gamma}_x$ between the old and new cones, we can force $\frac{\beta}{\alpha} \geq c\delta_0$, with $c$ depending on the size of the neighborhood $\mathcal{N}$ (see [CS], Section 9.4). We then obtain that for $\tau \in \Gamma_x \setminus \mathcal{N}$ we have
\[
D_\tau u_r\geq c\delta_0 D_n u_r.
\]
We now demonstrate that a similar inequality holds in this region if we allow the direction to have a small time component of order $\delta_0$. \\
Let $\lambda_1$ and $\lambda_2$ be positive constants such that $\lambda_1^2+\lambda_2^2=1$ and $|\lambda_2|\leq \frac{c\delta_0}{2\bar{c}}$ (here $\lambda_1>1/2$), where $\bar{c}$ is such that $|D_tu_r|\leq \bar{c}D_n u_r$ (this inequality is a consequence of the monotonicity cone). We then have
\[
\lambda_1 D_\tau u_r+\lambda_2 D_t u_r \geq \left(\lambda_1c\delta_0-\frac{c\delta_0}{2}\right)D_n u_r \geq c_1\delta_0 D_n u_r.
\]
Now if $\bar{\tau}=\tau+\bar{\varepsilon}\varrho$, where $\varrho$ can be any $(n+1)-$dimensional unit vector, we have
\[
u_r((x,t)-\bar{\tau})-u_r(x,t) =-D_{\bar{\tau}}u_r(\tilde{x},\tilde{t})|\bar{\tau}| \leq -c\bar{\varepsilon}\delta_0 D_nu_r(\tilde{x},\tilde{t}) \leq -c\bar{\varepsilon}\delta_0 u_r(x_0,0).
\]
In the last inequality we have used that $|\bar{\tau}|\geq c\varepsilon\geq c\bar{\varepsilon}$, $D_n u_r \sim \frac{u_r}{d}$ and the Harnack Inequalities. Note that $u_r((x,t)-\bar{\tau}) = u_r((x,t)-\tau -\bar{\varepsilon}\varrho)$ so as $\varrho$ varies we obtain in this region
\[
v_{\bar{\varepsilon}} (x,t) =\sup_{B_{\bar{\varepsilon}} (x,t)} u_1 \leq u_r(x,t) -c\bar{\varepsilon}\delta_0 u_r(x_0,0).
\]
Now by standard arguments, as in Section 4, this gap implies that there exists a small $h$ such that in $\Psi$ (with a different constant $c$)
\[
u_r(x,t) -v_{(1+h\delta_0)\bar{\varepsilon}}(x,t) \geq c\bar{\varepsilon}\delta_0 u_r(x_0,0).
\]
At this point we must restrict ourselves to $\varepsilon$-monotonicity so that the propagation lemma can be applied. Select $\tau$ with $|\tau| = \lambda\varepsilon_0 =\varepsilon_1$ and take $r$ small enough so that $cr^\alpha \leq \varepsilon_1^2$ holds.
Now $cr^\alpha$ is the oscillation of the coefficient matrix $A$. This choice enables us to apply our propagation lemma and obtain that in $B_{1/2}\times (\frac{-T}{2},\frac{T}{2})$ (here $T$ is the `height' of the cylinder $\Psi$)
\[
u_r(x,t) \geq v_{(1+ch\delta_0)\bar{\varepsilon}_1}(x,t).
\]
Therefore $u_r$ is $\varepsilon_1$-monotone in an enlarged cone $\Gamma(\nu_1,\theta_1)$ in the region $B_{1/2}\times(\frac{-T}{2},\frac{T}{2})$ with defect angle $\delta_1=\frac{\pi}{2}-\theta_1 \leq c\delta_0$ with $c<1$. Back-scaling we obtain $r\varepsilon_1$-monotonicity for $u$ in the appropriately rescaled domain $C_r = B_{r/2}\times(\frac{-r^2T}{2},\frac{r^2T}{2})$.
Now we can repeat this argument for $\varepsilon_k =\lambda^k\varepsilon_0$ and $r^k$, $\lambda$ to be chosen later. Precisely, $u$ is fully monotone in the original cone, so $u_{r^k}$ will be fully monotone in the original cone as well. Hence, it is $\varepsilon_k$-monotone no matter what we choose $\lambda$ to be. Additionally, parabolic blowups decrease the defect angle of the space-time cone so we have that~\eqref{eq:full_ball_sup} will hold for any $r^k$.
We can repeat the cone enlargement arguments away from the free boundary to enlarge the spacial monotonicity cone. Finally, we can use the propagation lemma to transfer a portion of this new cone to the free boundary provided that $\omega_k \leq (\varepsilon_k)^2$. Since $\omega_k =cr^{\alpha k} $, we require that at each step $cr^{\alpha k} \leq \lambda^{2k}\varepsilon_0^2$, which can be arranged by coupling the choice of $r$ and $\lambda$.
This proves that in $C_{r^k}$ $u$ is $r^k\varepsilon_k$-monotone in the new, larger, cone of directions $\Gamma^x(\nu_1,\theta_1)$ (we get the same enlarged cone in each case). Alternatively $u_{r^k}$ is $\varepsilon_k$-monotone in $B_{1/2}\times(-\frac{T}{2},\frac{T}{2})$ in this same cone.
\end{proof}
We now turn to the task of iteratively increasing the monotonicity cone in the above result. To do this we will need to know that our solutions are fully monotone away from the free boundary since our interior gain results rely on full monotonicity. This in turn requires a strict $\varepsilon$-monotonicity not present in the result above. A slight modification of the proof, however, will yield the desired result. We explicitly observe that the enlarged cone in Corollary \ref{e_cone_gap} below is not the same as in Lemma~\ref{space_cone_enlarge}.
In the sequel we will let $\gamma, \delta$ be positive constants such that
\[
0<\gamma=\frac{1-\delta}{2}, \quad 0<\beta<\min\left\lbrace\gamma, \frac{\alpha+\delta -1}{2}\right\rbrace.
\]
Notice in particular that this choice requires $\alpha+\delta-1>0$ and $\delta<1$. This $\delta$ is not to be confused with the defect angles $\delta_k$. The $M$ in the corollary below is determined by Lemma~\ref{lm:ep_to_full_mono_space}.
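For concreteness, one admissible choice of these parameters (in the Lipschitz case $\alpha=1$) is
\[
\alpha=1,\qquad \delta=\frac{1}{2},\qquad \gamma=\frac{1-\delta}{2}=\frac{1}{4},\qquad 0<\beta<\min\left\lbrace\frac{1}{4},\,\frac{1+\frac{1}{2}-1}{2}\right\rbrace=\frac{1}{4},
\]
which satisfies both $\alpha+\delta-1=\frac{1}{2}>0$ and $\delta<1$.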
\begin{cor}\label{e_cone_gap}
Let $u$ be a solution to our problem, monotone in the directions $\Gamma^x(e_n,\theta_0)\cup\Gamma^t(\eta,\theta_t)$, with $\eta$ in the span of $e_n$ and $e_t$. Then $u$ is $r^k\varepsilon_k$-monotone in $C_{r^k}$ in an expanded spacial cone of directions $\Gamma^x(\nu_1,\theta^x_1) =\Gamma^x_1$ with defect angle $\delta_1 \leq c\delta_0$, $c<1$.
Alternatively $u_{r^k}$ is $\varepsilon_k$-monotone in $B_{1/2}\times(-\frac{T}{2},\frac{T}{2})$ in this cone. Furthermore, there exists an $M$ such that, $M\bar{\varepsilon}_k^\gamma$ away from the free boundary, we have strict $\varepsilon_k$-monotonicity in these directions in the following sense:
\begin{equation}\label{eq:strict_e_mono}
u_{r^k}(p) -u_{r^k}(p-\tau) \geq c\sigma\bar{\varepsilon}_k^{1-\gamma}u_{r^k}(p).
\end{equation}
\end{cor}
\begin{proof}
We pick up the proof of the above lemma at the point where the propagation lemma is applied, slightly changing notation with $\sigma =c\delta_0$ and $p$ and $q$ space-time points, so that
\[
\sup_{B_{(1+h\sigma)\bar{\varepsilon}}(p)}u(q-\tau) \leq u(p).
\]
Reducing the radius of the ball to $(1+\frac{h\sigma}{2})\bar{\varepsilon}$ we have
\[
\sup_{B_{(1+\frac{h}{2}\sigma)\bar{\varepsilon}}(p)}u(q-\tau) +\min_{B_{(1+h\sigma)\bar{\varepsilon}}(p)}|\nabla u||\frac{h\sigma\bar{\varepsilon}}{2}|\leq \sup_{B_{(1+h\sigma)\bar{\varepsilon}}(p)}u(q-\tau).
\]
Now assume that $p$ is located a distance $M\bar{\varepsilon}^\gamma$ away from the free boundary with $M$ a large constant to be fixed later (see Lemma~\ref{lm:ep_to_full_mono_space}). We have $|\nabla u| \sim \frac{u}{d}$, $d$ being the distance to the free boundary, hence the minimum of $|\nabla u|$ can be compared to the minimum of $\frac{u}{d}$.
By the Harnack inequalities $u$ is comparable to $u(p)$, while for the distance we have
\[
M\bar{\varepsilon}^\gamma -(1+h\sigma)\bar{\varepsilon} \leq d \leq M\bar{\varepsilon}^\gamma+(1+h\sigma)\bar{\varepsilon}.
\]
Since $1+h\sigma$ is bounded, this implies that $d$ is also comparable to $M\bar{\varepsilon}^\gamma$ for $\bar{\varepsilon}$ sufficiently small.
In turn this implies that
\begin{align*}
\sup_{B_{(1+\frac{h}{2}\sigma)\bar{\varepsilon}}(p)}u(q-\tau) &\leq \sup_{B_{(1+h\sigma)\bar{\varepsilon}}(p)}u(q-\tau)-\min_{B_{(1+h\sigma)\bar{\varepsilon}}(p)}|\nabla u||\frac{h\sigma\bar{\varepsilon}}{2}| \\
&\leq \sup_{B_{(1+h\sigma)\bar{\varepsilon}}(p)}u(q-\tau)-C\frac{u(p)}{M\bar{\varepsilon}^\gamma}h\sigma\bar{\varepsilon}\\
&\leq \sup_{B_{(1+h\sigma)\bar{\varepsilon}}(p)}u(q-\tau)-Cu(p)\sigma\bar{\varepsilon}^{1-\gamma}\\
&\leq u(p) -Cu(p)\sigma \bar{\varepsilon}^{1-\gamma}.
\end{align*}
Thus, by reducing slightly the cone of monotonicity, strict monotonicity is obtained away from the free boundary.
\end{proof}
\subsection{Results regarding $\varepsilon$-monotonicity}
As mentioned above, we will need to know that our solutions enjoy full monotonicity away from the free boundary. The above corollary is the first step in this process. The remaining results have been collected in this section.\\
Set
\[
Q_{\sqrt{\varepsilon M}}(x^*,t^*) = B'_{\sqrt{\varepsilon M}}(x^*)\times (-M\varepsilon +t^*,M\varepsilon+t^*) \subset \Omega^+(u)\cap\{d_{x,t}>(M\varepsilon)^\gamma\}
\] and let $\zeta$ be the solution to the Dirichlet Problem
\begin{align*}
\zeta_t &=\mathcal{L}_{p^*} \zeta \quad \text{in } Q_{\sqrt{\varepsilon M}}(p^*)\\
\zeta &= u \quad \text{on } \partial_pQ_{\sqrt{\varepsilon M}}(p^*)\\
\end{align*}
where $\mathcal{L}_{p^*} =\sum a_{ij}(p^*)D_{ij}$ is a constant coefficient operator.
We state the following result from [FS1] regarding $\zeta$. Recall that $\alpha$ is the H\"{o}lder exponent of our coefficients. The value of $M$ in the lemma below is determined later by Lemma~\ref{lm:ep_to_full_mono_space}.\\
\begin{lm} (Lemma 2.5 in [FS1])
Let $u$ be our caloric function and $\zeta$ as above. Let $\alpha,\gamma,\delta >0$ be as in [Lemma 2.4 in FS1]. Then for every point $p^*$ outside a $(M\varepsilon)^\gamma$-neighborhood of the free boundary of $u$, and for every point $p \in Q_{\sqrt{\varepsilon M}}(p^*)$ the following estimates hold:
\begin{align}
|u(p)-\zeta(p)| &\leq C(M\varepsilon)^{\frac{\alpha}{2}+\delta}u(p^*)\\
|D_nu(p) -D_n\zeta(p)| &\leq C(M\varepsilon)^{\frac{\alpha}{2}+\frac{\delta}{2}}D_nu(p^*)\\
|D_tu(p)-D_t\zeta(p)| &\leq C(M\varepsilon)^{\frac{\alpha+\delta-1}{2}}D_nu(p^*).
\end{align}
\end{lm}
This next lemma allows us to transfer $\varepsilon$-monotonicity from $u$ to $\zeta$ provided we have strict monotonicity with the correct power.
\begin{lm}\label{e-mono_transfer}
Let $u$ and $\zeta$ be as above and suppose that $u$ is strictly $\varepsilon$-monotone in a direction $\tau$ in the following sense:
\[
u(p)-u(p-\varepsilon\tau)\geq c\varepsilon^{1-\gamma} u(p).
\] Then if $\varepsilon$ is sufficiently small, $\zeta$ is $\varepsilon$-monotone in the direction $\tau$.
\end{lm}
\begin{proof}
Define
\[
u_1(p) =u(p)-u(p-\varepsilon\tau), \quad \zeta_1(p) = \zeta(p) -\zeta(p-\varepsilon\tau).
\]
Using the above estimate we have
\begin{align}\label{lemma10}
\zeta_1(p) &\geq u_1(p)-C_0(M\varepsilon)^{\delta+\frac{\alpha}{2}}u(p)\\
&\geq c\varepsilon^{1-\gamma} u(p) -C_0(M\varepsilon)^{\delta+\frac{\alpha}{2}}u(p)\\
&\geq 0
\end{align} for $\varepsilon$ small enough, provided $1-\gamma < \frac{\alpha}{2}+\delta$. Since
\[
\delta+\frac{\alpha}{2}=\left(\frac{\delta}{2}+\frac{\alpha}{2}\right)+\frac{\delta}{2}>\frac{1}{2}+\frac{\delta}{2}=1-\gamma,
\]
we have the desired inequality and the proof is complete.
\end{proof}
\textbf{Remark:} The strict $\varepsilon$-monotonicity our solutions will enjoy from the previous section is
\[
u(p)-u(p-\varepsilon\tau)\geq c\sigma\bar{\varepsilon}^{1-\gamma} u(p) \geq c\delta_0^{2-\gamma}\varepsilon^{1-\gamma}u(p),
\] where $\sigma = c\delta_0$ and in future iterations we will have $\sigma_k=c\delta_k =c\bar{c}^k\delta_0$. We will be interested in applying the above lemma to a solution which is $\varepsilon_k$-monotone, in which case the $\bar{\varepsilon}$ appearing in the above will be $\bar{\varepsilon}_k$. Now for a fixed value of $\sigma$, there exists an $\varepsilon_0$ such that $\zeta_1(p)\geq 0$ as in the proof above. This value of $\varepsilon_0$ will depend on the size of $\sigma$, which could be problematic since $\sigma=c\delta_0$, and the defect angle will go to zero in our iteration.
However, in our iteration we will eventually have $\sigma_k =c\delta_k =c\bar{c}^k\delta_0$ and $\bar{\varepsilon}_k$. By choosing $\bar{c}$ close to 1, we can ensure that the calculation~\eqref{lemma10} remains valid when applied with $\delta_k$ and $\varepsilon_k$ since the $C_0(M\varepsilon)^{\delta+\frac{\alpha}{2}}u(p)$ term will also be decreasing.
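To make this quantitative, note that by the estimate above the gap at step $k$ is at least $c\delta_k^{2-\gamma}\varepsilon_k^{1-\gamma}u(p)$, so, up to constants independent of $k$,
\[
\frac{\delta_k^{2-\gamma}\varepsilon_k^{1-\gamma}}{(M\varepsilon_k)^{\delta+\frac{\alpha}{2}}} \sim\left(\bar{c}^{\,2-\gamma}\lambda^{-e}\right)^{k}, \qquad e=\delta+\frac{\alpha}{2}-(1-\gamma)>0.
\]
Since $e>0$ and $\lambda<1$ we have $\lambda^{e}<1$, so choosing $\bar{c}$ close enough to $1$ (namely $\bar{c}^{\,2-\gamma}\geq\lambda^{e}$) keeps this ratio bounded below, and the calculation~\eqref{lemma10} remains valid at every step.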
We have then
\begin{lm}\label{lm:ep_to_full_mono_space}
Let $u$ be as in Corollary~\ref{e_cone_gap}.
Then, if $\varepsilon_k$ is small enough and $M$ is large enough, $(M\bar{\varepsilon}_k)^\gamma$ away from the free boundary, $u_{r^k}$ is fully monotone in the cone $\Gamma^x_1(\nu, \theta_1-c_0\varepsilon_k^\frac{\alpha+\delta}{2})$, $c_0 >1$.
\end{lm}
\begin{proof}From Corollary~\ref{e_cone_gap}
\[
u_{r^k}(p) -u_{r^k}(p-\tau) \geq c\sigma\bar{\varepsilon}_k^{1-\gamma}u_{r^k}(p),
\] for $\tau \in \Gamma^x_1(\nu,\theta_1),|\tau|=\varepsilon_k$, $M\bar{\varepsilon}_k^\gamma$ from the free boundary.
Applying Lemma~\ref{e-mono_transfer} we conclude that $\zeta$ is also $\varepsilon_k$-monotone in the cone $\Gamma^x_1$ away from the free boundary. Now $\zeta$ solves a constant coefficient parabolic equation which we may assume is the heat equation. From the proof of Lemma 13.23 in [CS] we conclude that $\zeta$ is fully monotone in the cone of directions $\Gamma^x_1(\nu,\theta_1-c\varepsilon)$.
Note that this cone of monotonicity implies $|\nabla \zeta|$ is controlled (in this region) by $D_n\zeta$. This in turn implies our estimate for $|D_nu-D_n\zeta|$ extends to an estimate of $|D_eu-D_e\zeta|$, where $e$ is any spacial vector.
Using this, we have for any direction $e \in \Gamma^x_1(\nu,\theta_1-c\varepsilon)$
\[
D_e u(p^*)\geq D_e\zeta(p^*) -c(M\varepsilon_k)^{\frac{\alpha+\delta}{2}}D_nu(p^*).
\]
It follows then that $u$ is fully monotone in the direction $\bar{e}=e+c(M\varepsilon_k)^{\frac{\alpha+\delta}{2}}\nu$.
Hence $u$ is fully monotone in the cone $\Gamma^x_1(\nu,\theta_1-c\varepsilon_k-c(M\varepsilon_k)^{\frac{\alpha+\delta}{2}})$. Now at this point in the proof the size of $M$ has already been determined by invoking Lemma 13.23 in [CS]. We may assume $\varepsilon_k$ is small enough that we can majorize the loss term and have full monotonicity in $\Gamma^x_1(\nu,\theta_1-c_0\varepsilon_k^{\frac{\alpha+\delta}{2}})$, with $c_0>1$.
\end{proof}
\textbf{Remark:} Lemma 13.23 in [CS] is stated in a slightly different context from what we have here. Its proof requires a control $|\zeta_t|\leq c|\nabla \zeta|$ which holds since such an estimate holds for $u$ and since we have the estimates between the derivatives of $u$ and $\zeta$. More precisely, we have
\begin{align*}
\vert D_t\zeta\vert &\leq |D_t\zeta-D_tu|+|D_tu| \\
&\leq c(M\varepsilon)^{\frac{\alpha+\delta-1}{2}}D_nu(p^*)+\bar{c}D_nu(p^*)\\
&\leq CD_nu(p^*).\\
\end{align*}
On the other hand, we have
\begin{align*}
|D_nu(p^*)| &\leq |D_nu(p^*)-D_n\zeta|+|D_n\zeta|\\
&\leq c(M\varepsilon)^\frac{\alpha+\delta}{2}D_nu(p^*)+|D_n\zeta|.\\
\end{align*}
Or,
\[
(1-c(M\varepsilon)^\frac{\alpha+\delta}{2})D_nu(p^*) \leq |D_n\zeta|.
\]
So if we take $\varepsilon$ small enough, we have control of $D_nu(p^*)$ by $D_n\zeta$. Here we have not specified an argument for $D_n\zeta$ since the above estimate holds for any point in the neighborhood.
Taken together, these imply the control of $D_t\zeta$ by $D_n\zeta$ needed in the proof of Lemma 13.23 in [CS] (naturally control by $D_n$ implies control by the full gradient). \\
Lastly, we quote Lemma 2.4 from [FS1] for the space-time cone.
\begin{lm}\label{e-mono-time}(Lemma 2.4 in [FS1]).\\
Let $\alpha\leq 1$ be the H\"{o}lder exponent of the $a_{ij}$ and $\beta,\delta,\gamma$ as indicated above.
\indent Suppose $u\geq 0$ is monotone in the $e_n$ direction and
\[
u(p)-u(p-\varepsilon\tau) \geq c\varepsilon^{1-\gamma+\beta}u(p) =c\varepsilon^{\frac{\alpha+\delta}{2}-1}u(p)
\]
for $d_p <\eta/4$ [this is the distance to the free boundary] where $\tau =\beta_1e_n+\beta_2e_t$ with $\beta_1>0$, $\beta_2 \neq 0$ and $\beta_1^2+\beta_2^2 =1$. Then if $M = M(n,L)$ is large enough and $\varepsilon$ is small enough, outside a $(M\varepsilon)^\gamma$-neighborhood of the free boundary we have
\[
D_{t_\varepsilon}u \geq 0
\]
where $t_\varepsilon =\tau+ c(M\varepsilon)^{(\alpha+\delta-1)/2}e_n$.
\end{lm}
\subsection{Regularity of the Free Boundary in Space}
By combining the results from our cone enlargement lemma and the $\varepsilon$-monotonicity section we have the following corollary suitable for iteration.
\begin{cor}\label{main_cor}
Let $u$ be a solution to our problem, monotone in the directions $\Gamma^x(e_n,\theta_0)\cup\Gamma^t(\eta,\theta_t)$ with $\eta$ in the span of $e_n$ and $e_t$. Then $u$ is $r^k\varepsilon_k$-monotone in $Q_{r^k}$ in an expanded spacial cone of directions $\Gamma^x(\nu_1,\theta^x_1) =\Gamma^x_1$ with defect angle $\delta_1 \leq c\delta_0$, $c<1$.
Alternatively $u_{r^k}$ is $\varepsilon_k$-monotone in $Q_1$. Furthermore, there exists an $M$ such that, $M\bar{\varepsilon}_k^\gamma$ away from the free boundary, we have strict $\varepsilon_k$-monotonicity in these directions in the following sense:
\begin{equation}
u_{r^k}(p) -u_{r^k}(p-\tau) \geq c\sigma\bar{\varepsilon}_k^{1-\gamma}u_{r^k}(p).
\end{equation} Finally, in this region, at a distance greater than $M\bar{\varepsilon}_k^\gamma$ from the free boundary, $u_{r^k}$ is fully monotone in a cone of directions $\bar{\Gamma}_1^x(\nu,\bar{\theta}_1)$ with $\bar{\delta}_1\leq \bar{c}\delta_0$.
\end{cor}
\begin{proof} This is Corollary~\ref{e_cone_gap} except for the last part about full monotonicity.
By Corollary~\ref{e_cone_gap} we have that $u_{r^k}$ is $\varepsilon_k$-monotone in $Q_1$ in the cone of directions $\Gamma^x(\nu_1,\theta_1)$, with~\eqref{eq:strict_e_mono} holding for directions in this cone. From Lemma~\ref{lm:ep_to_full_mono_space} $u_{r^k}$ is therefore fully monotone in the cone
\[
\Gamma^x(\nu_1,\theta-c_0\varepsilon_k^{\frac{\alpha+\delta}{2}}) :=\bar{\Gamma}_1^x.
\] For notational convenience we will write $B$ for the power $\frac{\alpha+\delta}{2}$. In terms of the spacial defect angles, we know that $\delta_1 \leq c\delta_0$ with $c<1$. Let $\bar{\delta}_1$ denote the defect angle of the cone $\bar{\Gamma}_1^x$. It is readily seen that the worst case scenario occurs with $\varepsilon_1$. In this case we have
\[
\bar{\delta}_1 =\delta_1+c_0\varepsilon_1^B \leq c\delta_0+c_0\varepsilon_1^B.
\] We desire to preserve the geometric decay of the defect angles so we want $\bar{\delta}_1 \leq \bar{c}\delta_0$ with $\bar{c}<1$. So what we must prove is that there is an appropriate choice of $\lambda$ in the definition of $\varepsilon_k =\lambda^k\varepsilon_0$ that makes this possible.
Now $\varepsilon_1 =\lambda\varepsilon_0$, so we need
\[
c_0\lambda^B\varepsilon_0^B \leq c'\delta_0,
\]
where $c'$ is chosen so that $c + c' =\bar{c}<1$ and $c>2c'$. Now our starting assumption is that $\varepsilon_0 \ll \delta_0$ (and thus we can also assume $\varepsilon_0^B \leq \delta_0$; note that $B>1/2$) so it suffices to choose $\lambda$ so that
\[
\lambda^B \leq \frac{c'}{c_0}.
\]
This would suffice to give $\bar{\delta}_1\leq \bar{c}\delta_0$.
\end{proof}
The calculation in the above proof will be of interest to us when we iterate. In particular, we need to ensure that the choice of $\lambda$ made in Corollary~\ref{main_cor} will also work in the iteration, where we need $\delta_k =\bar{c}^k\delta_0$. We take care of this now with the following result about cones.
\begin{lm}\label{calculation}
Let $\Gamma_k =\Gamma(\nu_k,\theta_k)$ be a sequence of cones, $\Gamma_k \subset \Gamma_{k+1}$, with defect angles $\delta_k \leq c^k\delta_0$. Let $\bar{\Gamma}_k =\Gamma(\nu_k,\theta_k -c_0\varepsilon_k^B)$, $B=\frac{\alpha+\delta}{2}$, with $\varepsilon_k =\lambda^k\varepsilon_0$, and defect angle $\bar{\delta}_k$. Then there exists a $\bar{c}<1$ such that $\bar{\delta}_k\leq \bar{c}^k\delta_0$.
\end{lm}
\begin{proof}
We have
\begin{align*}
\bar{\delta}_k &=\delta_k+c_0(\varepsilon_k)^B\\
&\leq c\bar{\delta}_{k-1}+ c_0(\lambda^k\varepsilon_0)^B\\
&\leq (c\bar{c}^{k-1} +(c')^k)\delta_0.
\end{align*}
We want this last term to be less than $\bar{c}^k\delta_0$ for a choice of $\bar{c}$ independent of $k$. From $\bar{c}=c+c'$ (referring to the constants in the proof of Corollary~\ref{main_cor} above) and the binomial theorem we have
\begin{align*}
(c\bar{c}^{k-1} +(c')^k) &= c\sum_{n=0}^{k-1} \binom {k-1}{n} c^{k-1-n}(c')^n +(c')^k\\
& =\sum_{n=0}^{k-1} \binom {k-1}{n} c^{k-n}(c')^n +(c')^k.
\end{align*}
Whereas
\[
(c+c')^k=\sum_{n=0}^{k} \binom {k}{n} c^{k-n}(c')^n .
\]
Consider the term $n=1$. We have that the first expression has the term $(k-1)c^{k-1}c'$ while the second has $kc^{k-1}c' $. Provided $c'<c$ we will have
\begin{align*}
(k-1)c^{k-1}c' +(c')^k <(k-1)c^{k-1}c'+c^{k-1}c' =kc^{k-1}c'
\end{align*}
from which it follows that
\[
(c\bar{c}^{k-1} +(c')^k) <(c+c')^k.
\]
Therefore, with $\bar{c}=c+c'<1$, we will have $\bar{\delta}_k \leq \bar{c}^k\delta_0$ for any $k$.
\end{proof}
\textbf{Remark:} The results of this section imply that if a solution $v$ to our problem is strictly $\varepsilon$-monotone in the sense of~\eqref{eq:strict_e_mono} in a cone of directions $\Gamma_1$ with defect angle $\delta_1\leq c\delta_0$, then there exists $\bar{\Gamma}_1$, $\Gamma_0 \subset \bar{\Gamma}_1 \subset \Gamma_1$, with $v$ fully monotone away from the free boundary in $\bar{\Gamma}_1$, still preserving a decay of the defect angle $\bar{\delta}_1\leq \bar{c}\delta_0$.
\subsection{Final Spacial Iteration}
We reach the main result of this section.
\begin{cor}\label{cor: spacial_regularity}
The free boundary is a $C^{1,\alpha}$ surface in space.
\end{cor}
\begin{proof}
Combining Corollary~\ref{e_cone_gap} and the results in the $\varepsilon$-monotonicity section, we have that $u_{r^k}$ is $\varepsilon_k$-monotone in $B_{1/2}\times(-\frac{T}{2},\frac{T}{2})$ in the cone of directions $\bar{\Gamma}_1$ with $\bar{\delta}_1\leq \bar{c}\delta_0$. Furthermore, we have in this region
\[
\sup_{B_{\bar{\varepsilon}_k}(p)}u(q-\tau)\leq u(p)
\] for $\tau \in \bar{\Gamma}_1$, $|\tau|=\varepsilon_k$. Additionally, $u_{r^k}$ will be fully monotone in this cone of directions away from the free boundary.
We may assume that $T<1$. Then the quadratic cylinder $Q_{T/2} \subset B_{1/2}\times (-\frac{T}{2},\frac{T}{2})$.
This implies that $u_{r^2T/2}$ is $\frac{2}{T}\varepsilon_2$-monotone in $\bar{\Gamma}_1$ in the above $\sup$ sense, in all of $Q_1$ and is a solution in the larger region $Q_2$ (a technicality needed for the propagation lemma). Furthermore, $u_{r^kT/2}$ is fully monotone in $\bar{\Gamma}_1$ away from the free boundary in the region $\Psi$ by virtue of the results in Section 6.3.
We can then apply the proof of Lemma~\ref{space_cone_enlarge} and Corollary~\ref{main_cor} to $u_{r^2T/2}$, concluding that in $B_{1/2}\times(-\frac{T}{2},\frac{T}{2})$, $u_{r^2T/2}$ is $\frac{2}{T}\varepsilon_2$-monotone in an enlarged cone of directions $\bar{\Gamma}_2(\nu_2,\bar{\theta}_2)$ with $\bar{\delta}_2\leq \bar{c}^2\delta_0$. As was the case for Lemma~\ref{space_cone_enlarge}, we have the same conclusion for $u_{r^kT/2}$, $k\geq 2$. Note that this region contains $B_{T/2}\times(-\frac{T}{2},\frac{T}{2})$.
Using this observation, after back-scaling $u_{r^2T/2}$ we deduce that $u$ is $r^2\varepsilon_2$-monotone in a cone of directions $\bar{\Gamma}_2$ in the region $B_{r^2T^2/2^2}\times(-\frac{r^4T^3}{2^3},\frac{r^4T^3}{2^3}) \supset Q_{r^2T^2/2^2}$.
In this way we construct a sequence of parabolic neighborhoods of the origin $Q_{r^kT^k/2^k}$ in which $u$ is $r^k\varepsilon_k$-monotone in a cone of directions $\bar{\Gamma}_k$. This implies that the free boundary of $u$ intersected with the time level $\{t=0\}$ is a $C^{1,\alpha}$ surface in space due to the following calculus lemma.
\end{proof}
\begin{lm}\label{calculus lemma}
Let $f$ be a function defined in a region $\mathcal{D}$ monotone in contracting cylinders $C_k =B'_{R^k} \times (-b_k,b_k)$ in cones $\Gamma_k$, with defect angles $\delta_k \leq \lambda^k \delta_0$, $\lambda<1$. Additionally, assume $f(0)=0$. Then $\{f=0\}$ is a $C^{1,\alpha}$ surface.
\end{lm}
\begin{proof} Since we can center the neighborhoods at any point on the free boundary, we have at once that each point on the free boundary possesses a genuine normal vector. It remains then only to show that these normal vectors vary with a modulus of continuity. It suffices for our purposes to assume the origin is one of the points, the other will be denoted by $x$; their corresponding normal vectors will be denoted by $\nu_x$ and $\nu_0$.
Select $k$ such that $R^{k+1}<|x|\leq R^k$ and let $\Gamma_{k+1}$ and $\Gamma_k$ be the corresponding monotonicity cones. Now the crucial observation is that the monotonicity cone is the same for any point in the corresponding region. In particular, both $x$ and $0$ have monotonicity cone $\Gamma_k$ since they are both in the region $C_k$. In turn this implies that the distance between the normal vectors $\nu_0$ and $\nu_x$ is controlled by the defect angle of the monotonicity cone:
\[
|\nu_x-\nu_0| \leq 2\delta_k.
\]
Now select $\alpha \in (0,1)$ such that $R^\alpha =\lambda$. Then we have
\begin{align*}
|\nu_x-\nu_0| \leq 2\delta_k &\leq c\lambda^k \\
&=c(R^\alpha)^k \\
&=c\left(\frac{R^{k+1}}{R}\right)^\alpha \\
&\leq C|x|^\alpha.
\end{align*}
\end{proof}
\section{Regularity of the Free Boundary in Space-Time}
We will now use similar ideas to prove that the free boundary has a space-time normal at every point which varies with a H\"{o}lder modulus of continuity. When taken together with the spacial regularity proved in the previous section this result will complete the regularity of the free boundary.
Having proved spacial regularity in the previous section we can orient our problem so that $e_n$ is the spacial normal at the origin. We will prove that there exists a space-time normal at the origin in the $e_n-e_t$ plane. This will be the full normal to the free boundary at that point. \\
The same technique used to prove the spacial regularity will be used for the space-time regularity. However, a technical difficulty arises early when following this line of argument. Recall that in the spacial case we used the `sup-convolution' concept to describe the monotonicity cone. Precisely, given any $\tau \in \Gamma (e_n,\theta -\delta)$, $\delta$ the spacial defect angle, we have that
\[
\sup_{B'_\varepsilon(x)} u(y-\tau,t) \leq u(x,t).
\]
Here the $B'$ denotes the ball in space only. We have that $\varepsilon =|\tau|\sin \delta$. However, we need to know that the same statement holds over a full space-time ball $B_\varepsilon$. Since parabolic rescalings depress the space-time defect angle $\mu$ we can assume that $\delta \geq \mu$ at every step in the iteration. This guarantees that the full ball $B_\varepsilon$ is contained in the monotonicity cone.
In the present case however, the fact that the space-time defect angle $\mu$ is always smaller than the spacial defect angle $\delta$ poses a difficulty. It is still true that for $\tau \in \Gamma_t(\nu,\theta_t-\mu)$ we can take the $\sup$ statement over the `thin' ball, this time in the $e_n -e_t$ plane, but it is no longer true that we can take the sup over the full ball of radius $|\tau|\sin \mu$.
We have the following technical geometric lemma to address this problem.
\begin{lm}
Let $u$ be monotone in the directions $\Gamma_x(\theta_x)\cup \Gamma_t(\theta_t)$ with defect angles $\delta\geq \mu$ and $\delta\leq \pi/6$. Then there exists a $\kappa>0$ and a $\mu_0$ such that for any $\tau \in \Gamma_t(\theta_t-\kappa\mu)$ with $\mu\leq \mu_0$ the full ball $B_\varepsilon$ centered at the endpoint of $\tau$ is contained in the monotonicity cone, with $\varepsilon =|\tau|\sin \kappa\mu$.
\end{lm}
\begin{proof}
The space-time cone is two dimensional in the $e_n-e_t$ plane, while the spacial cone is a right cone in space. It therefore suffices to prove the result in three dimensions. Additionally, owing to the purely geometric nature of the lemma we may assume that the cones open along the positive $z$-axis. We will assume that the space-time cone opens along the $y$-axis.
Under these assumptions the elliptic monotonicity cone with defect angles $\delta$ and $\mu$ has parametric equations in the variables $s,t$
\[
(\cot(\delta) s \cos t, \cot(\mu) s \sin t, s)
\]
with $0\leq t\leq 2\pi$, $s\geq 0$. The vector
\[
v=(0, \cos \mu, \sin \mu)
\]
is on this cone; it is a unit vector along one edge of the space-time cone.
Define for $\mu^* =(1+\kappa) \mu =c\mu$ the unit vector $\tau$ as
\[
\tau =(0,\cos \mu^*, \sin \mu^*)\, .
\]
Then $\tau$ will be an edge of the smaller space-time cone $\Gamma(\theta_t-\kappa\mu)$.
We want to show that for some choice of $\kappa$ no vector on the edge of the elliptic cone can make an angle with $\tau$ which is smaller than the angle $\tau$ makes with $v$. This would mean that the right cone of directions with axis $\tau$ and opening given by the angle between $v$ and $\tau$, which is $\kappa\mu$, would fit completely inside the elliptic cone. In turn this would imply that the ball with radius $|\tau|\sin \kappa\mu$ centered at the endpoint of $\tau$ is entirely contained in the elliptic cone, and this is the conclusion of the lemma.
Let $\alpha (\;,\;)$ denote the angle between two vectors and let
\[
w = (\cot(\delta) x, \cot(\mu) y, 1)
\]
with $x^2+y^2 =1$; any vector on the outer edge of the cone lies in the same direction as $w$ for some such choice of $x,y$.
We want to show that for any such $w$
\[
\alpha(\tau, v) \leq \alpha(w,\tau)\, .
\]
Or, since the cosine is decreasing in the first quadrant,
\[
\cos(\alpha(\tau, v)) \geq \cos(\alpha(w,\tau))\, .
\]
We then use the characterization of the dot product to obtain ($\tau$ and $v$ are unit vectors)
\[
\tau \cdot v \geq \frac{\tau \cdot w}{|w|}\, .
\]
We compute (using $x^2+y^2=1$)
\[
\cos \mu^* \cos \mu +\sin \mu \sin \mu^* \geq \frac{y\cos \mu^* \cot \mu + \sin \mu^*}{\sqrt{1+(1-y^2) \cot^2\delta +y^2\cot^2\mu}}\, .
\]
Now it can be shown directly from calculus that the right hand side, as a function of $y$ with all other variables fixed, increases to a maximum at
\[
y_M = \frac{\cot\mu\cos\mu^*(1+\cot^2\delta)}{\sin\mu^*(\cot^2\mu-\cot^2\delta)}\, .
\]
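For completeness we record the calculus behind this claim; this is a routine computation which we include for the reader's convenience. Writing the right hand side as
\[
f(y)=\frac{ay+b}{\sqrt{C+Dy^2}},\qquad a=\cos\mu^*\cot\mu,\quad b=\sin\mu^*,\quad C=1+\cot^2\delta,\quad D=\cot^2\mu-\cot^2\delta,
\]
we have
\[
f'(y)=\frac{a(C+Dy^2)-(ay+b)Dy}{(C+Dy^2)^{3/2}}=\frac{aC-bDy}{(C+Dy^2)^{3/2}}\, ,
\]
which is positive for $y<aC/(bD)$ and negative for $y>aC/(bD)$; note that $D>0$ since $\mu<\delta$. The critical point $aC/(bD)$ is precisely $y_M$.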
When $y=1$ the two sides are equal, so to obtain our desired inequality for all $0\leq y\leq 1$ it is necessary and sufficient that $y_M \geq 1$. Recall that $\mu^* =c\mu$, $c>1$. We estimate (we are assuming $\mu <\delta$, so $\cot \mu >\cot \delta$)
\begin{align*}
y_M &= \frac{\cot\mu\cos\mu^*(1+\cot^2\delta)}{\sin\mu^*(\cot^2\mu-\cot^2\delta)}\\
&\geq \frac{\cot\mu\cot\mu^*(1+\cot^2\delta)}{\cot^2\mu}\\
&=(1+\cot^2\delta)\frac{\cot\mu^*}{\cot\mu}\\
&= (1+\cot^2\delta)\frac{\tan\mu}{\tan\mu^*} = (1+\cot^2\delta)\frac{\tan\mu}{\tan c\mu}\, .
\end{align*}
Letting $\mu\to 0$ in this last line we obtain by L'H\^{o}pital's rule
\[
(1+\cot^2\delta) \frac{1}{c}\, .
\]
For $y_M\geq 1$ in the limit we need
\[
\frac{1+\cot^2 \delta}{c}\geq 1\, .
\]
Or, to provide some room, we ask that
\[
\frac{1+\cot^2 \delta}{c}\geq 2\, ,\qquad \text{that is,}\qquad 1+\cot^2 \delta\geq 2c>2.
\]
Now, by assumption, $\delta\leq \pi/6$, so that $1 +\cot^2\delta\geq 4$. Thus, we can fix such a $c>1$ and find a $\mu_0$ such that for $\mu\leq \mu_0$
\[
y_M \geq (1+\cot^2\delta)\frac{\tan\mu}{\tan c\mu} \geq 2\, .
\]
In turn this implies that the ball centered at the tip of $\tau \in \Gamma_t(\theta_t-\kappa\mu)$ of radius $|\tau|\sin \kappa \mu$ will be completely contained in the monotonicity cone.
\end{proof}
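The key inequality $\tau\cdot v\geq \tau\cdot w/|w|$ is also easy to check numerically. The following Python sketch does so for the illustrative values $\delta=\pi/6$, $c=3/2$ and $\mu=10^{-2}$; the parameter values and all function names are our own choices and are not taken from the text.

```python
import math

def lhs(mu, c):
    """tau . v = cos((c - 1) mu), the cosine of the angle between tau and v."""
    mu_star = c * mu
    return math.cos(mu_star) * math.cos(mu) + math.sin(mu_star) * math.sin(mu)

def rhs(y, mu, c, delta):
    """tau . w / |w| for a point w on the outer edge of the elliptic cone."""
    mu_star = c * mu
    num = y * math.cos(mu_star) / math.tan(mu) + math.sin(mu_star)
    den = math.sqrt(1.0 + (1.0 - y * y) / math.tan(delta) ** 2
                    + y * y / math.tan(mu) ** 2)
    return num / den

delta = math.pi / 6   # the largest spacial defect angle allowed by the lemma
c, mu = 1.5, 1e-2     # c = 1 + kappa in (1, 2], mu small
ok = all(lhs(mu, c) + 1e-9 >= rhs(k / 1000.0, mu, c, delta)
         for k in range(1001))
```

With these values one finds equality at $y=1$, in agreement with the computation above, and a strict inequality for $0\leq y<1$.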
Since we have already proved the spacial regularity of the solution and we know that parabolic rescalings depress the space-time defect angle $\mu$, we will assume throughout the rest of this section that the hypotheses of this lemma are satisfied.
Our proof of the space-time regularity can now proceed along the same lines as the spacial regularity. Namely, we prove an enlargement of the monotonicity cone away from the free boundary, transfer a portion of this enlarged cone to the free boundary and iterate via a parabolic rescaling. As in the spacial case care must be taken with the iteration necessary to accommodate working with $\varepsilon$-monotonicity rather than full monotonicity.
A technical complication not present in the spacial case is the fact that parabolic rescalings enlarge the space-time cone $\Gamma_t$. Indeed, if $u$ has monotonicity cone $\Gamma_t$ with defect angle $\mu$ then $u_r$, a parabolic rescaling of $u$, will have a monotonicity cone which has defect angle $r\mu$.
We note that this gain from rescaling is of no use to us in proving the enlargement of the monotonicity cone since any gain from the rescaling must be given back when the solution is back-scaled. We will write $^*{\Gamma}^t_0$ to mean the original cone $\Gamma^t_0$ dilated by the rescaling. The rescaling factor involved in this dilation will be clear from the context so we will suppress it from the already dense notation.\\
We begin with a lemma which is the analogue of Lemma~\ref{space_cone_enlarge} and Corollary~\ref{e_cone_gap} for the space-time case and whose proof largely follows the same lines. The only major difference is the need to keep track of the dilation of the space-time cone due to the rescaling. As before, $M$ and $\varepsilon_k =\lambda^k\varepsilon_0$ for $\lambda<1$ are chosen later. Additionally, although the statement of the lemma is similar to the spacial case we do not necessarily have that the constants, including $r$ and $\lambda$, are the same.
\begin{lm}
Let $u$ be a solution to our problem, monotone in the directions $\Gamma^x(e_n,\theta_0)\cup\Gamma^t(\eta,\theta_t)$ with $\eta$ in the span of $e_n$ and $e_t$. Then $u$ is $r^k\varepsilon_k$-monotone in $C_{r^k}$ in an expanded space-time cone of directions $\Gamma^t(\nu_1,\theta^t_1) =\Gamma^t_1$ with defect angle $\mu_1 \leq c\mu_0$, $c<1$.
Alternatively $u_{r^k}$ is $\varepsilon_k$-monotone in $B_{1/2}\times(-\frac{T}{2},\frac{T}{2})$ in the corresponding $r^k$-dilated cones $^*\Gamma_1^t$. Furthermore, there exists an $M$ such that, $M\bar{\varepsilon}_k^\gamma$ away from the free boundary, we have strict $\varepsilon_k$-monotonicity in these directions in the following sense:
\begin{equation}\label{eq:strict_e_mono_t}
u_{r^k}(x,t) -u_{r^k}((x,t)-\tau) \geq c\sigma\bar{\varepsilon}_k^{1-\gamma}u_{r^k}(x,t)\, .
\end{equation}
\end{lm}
\begin{proof}
As in the spacial case we begin with a rescaling; as in that proof the choice of $r$ will be coupled to $\lambda$ and chosen later.
\[
u_r(x,t) =\frac{u(rx,r^2t)}{r}.
\]
For $u_r$ its space-time cone $^*\Gamma^t_0$ is described as the cone in the $e_n- e_t$ plane with edges $e_t+Be_n$, $-e_t-Ae_n$. From Corollary~\ref{cor:int_gain} we have that the space-time cone enlarges. As in the spacial case we have either (the situation is simpler in this case since the cone is only two-dimensional)
\[
D_t u_r + BD_n u_r \geq cr\mu D_n u_r \quad \forall(x,t) \in \Psi
\]
or
\[
-D_t u_r -AD_n u_r \geq cr\mu D_n u_r \quad \forall(x,t) \in \Psi\, .
\]
Recall that $^*\Gamma^t_0$ has defect angle $r\mu$; that is why an $r\mu$ appears on the right hand side.
We will assume that the first holds; the other case is treated similarly. For convenience we will denote by $\sigma$ the direction $e_t+Be_n$. Let $\tau$ be the direction in the $e_n-e_t$ plane which
lies below $\sigma$ by the angle $\kappa r\mu$.
Then define
\[
u_1(x,t) = u_r((x,t) - \tau).
\]
Then we have that
\[
\sup_{B_\varepsilon(x,t)}u_1(x,t)\leq u_r(x,t)
\]
throughout the whole cylinder, by our previous lemma, with $\varepsilon = |\tau|\sin\kappa r\mu$.
Similar to the spacial case, the inequality
\[
D_\tau u_r \geq cr\mu D_n u_r \quad \forall(x,t) \in \Psi
\]
then holds. We needed the result about $\sigma$ to know which direction the cone was increasing in; once we have this information we only need to work with $\tau$.
Next, we show that a similar inequality holds for small perturbations of this direction $\tau$ by other directions.
Let $\lambda_1 >1/2$ and $\lambda_2$ be such that $\lambda_1^2 +\lambda_2^2 =1$.
Next recall that for any spacial direction $e$ we have $|D_e u_r| \leq c^* D_n u_r$ and a similar inequality holds for time derivatives by the monotonicity cone. Thus for $\varrho \in \mathbb{R}^{n+1}$
\[
\lambda_1 D_\tau u_r +\lambda_2 D_\varrho u_r \geq (\lambda_1cr\mu - |\lambda_2| c^*)D_n u_r \geq cr\mu D_n u_r
\]
provided $|\lambda_2|\leq \frac{cr\mu}{2c^*}$.
Set $\bar{\tau}=\tau
+\varepsilon \varrho$ so that the above implies
\[
u_r((x,t) -\bar{\tau}) -u_r(x,t) = -D_{\bar{\tau}}u_r(\tilde{x},\tilde{t})\,|\bar{\tau}| \leq -c\varepsilon r \mu D_n u_r(\tilde{x},\tilde{t}) \leq -c\varepsilon r\mu\, u_r(x_0,0)\, .
\]
As in the spacial case this inequality implies that
\[
u_r((x,t)-\bar{\tau})-u_r(x,t) = u_r((x,t)-\tau -\varepsilon\varrho)-u_r(x,t) \leq -c\varepsilon r\mu u_r(x_0,0).
\]
As $\varrho$ ranges over all possible directions we deduce that in the region $\Psi$
\[
v_\varepsilon (x,t) := \sup_{B_\varepsilon(x,t)} u_1(x,t) \leq u_r(x,t) - cr\mu u_r(x_0,0).
\]
Enlarging the radius of the ball slightly we obtain that in $\Psi$, for a small $h$, we have
\[
u_r(x,t) -v_{(1+h\mu)\varepsilon}(x,t) \geq c\varepsilon r\mu u_r(x_0,0)\, ,
\]
which is the cone enlargement of the cone $^*\Gamma^t_0$ away from the free boundary.
It is at this point we must once again restrict ourselves to $\varepsilon$-monotonicity so that the propagation lemma can be applied to $u_r$ and $u_1$. The remainder of the proof then proceeds in the same fashion as that of the spacial case.
\end{proof}
At this point the argument follows identical lines to that of the spacial case by using Lemma~\ref{e-mono-time} to produce a slightly smaller cone of direction $^*\bar{\Gamma}^t_1$ in which the solution is fully monotone away from the free boundary; additionally as in the spacial case a careful choice of $r$ and $\lambda$ results in the cones $^*\bar{\Gamma}_1^t$ still preserving the decay of the defect with $\mu_1 \leq \bar{c}\mu_0$. An iteration argument then implies the following corollary.
\begin{cor}\label{cor:space_time_iteration}
The solution $u$ is $r^k\varepsilon_k$-monotone in the parabolic neighborhoods of the origin $Q_{r^kT^k/2^k}$ in the cone of directions $\bar{\Gamma}^t_k$ which have defect angles $\mu_k \leq \bar{c}^k\mu$, $\bar{c}<1$.
\end{cor}
We arrive at the proof of our main theorem:
\begin{proof} \textit{(Theorem~\ref{thrm:main_thrm})}
The existence of a full normal at the origin follows by Corollary~\ref{cor:space_time_iteration} and the spacial regularity proved in Corollary~\ref{cor: spacial_regularity}. By centering this argument at different points we obtain that a normal vector to the free boundary exists at every point of the free boundary in $Q_{1/2}$. Furthermore, the spacial part of this normal vector varies with a H\"{o}lder modulus of continuity and the iteration from Corollary~\ref{cor:space_time_iteration} implies that the space-time part of this normal also varies with a H\"{o}lder modulus of continuity.
Together this implies both the existence of a normal vector $\eta(x,t)$ at each point on the free boundary and also that this normal vector varies with the moduli of continuity stated in Theorem~\ref{thrm:main_thrm}.
As in the proof of the main result in [ACS3], to finish we apply the results in [W] to our solution now that we know the free boundary is $C^{1,\alpha}$ for each time level. This implies that $\nabla_x u$ is continuous up to the boundary at every time level. Hence $u$ attains its boundary condition continuously and is a classical solution to our problem.
\end{proof}
\textbf{Acknowledgment:} The author would like to thank S. Salsa for his advice.
\newpage
\section{Introduction}
Over the last two decades it has become apparent that many complex
systems exhibit a phenomenon which has been termed anomalous diffusion \cite{Shlesinger, Zaslavsky, MetzlerKlafter}.
On account of this, there has been an increasing interest in stochastic processes
deviating basically from standard diffusion processes characterized by a Gaussian
behavior. Systems exhibiting anomalous diffusion deviate from the linear time dependence of the second moment and rather show $\langle x^2\rangle\sim t^\alpha$, where $0\leq\alpha\leq 2$.
In this context processes with $\alpha>1$ dispersing faster than
standard diffusion processes are called superdiffusive while $\alpha<1$ means that
a system displays subdiffusive behavior.
In the realm of anomalous diffusion, the classical diffusion equation has to be replaced by
the so-called generalized diffusion equations \cite{Balescu}. The most prominent
representatives of this class of equations are probably the fractional diffusion
equations \cite{MetzlerKlafter}, where the derivatives with respect to time or
to space or both are replaced by non-integer order derivatives. A more fundamental account
of anomalous diffusion is provided by a stochastic process called Continuous Time
Random Walk (CTRW). This process generalizes the standard Random Walk and allows
for random jump length and random waiting periods between the jumps \cite{Weiss}. It is well-known
that the generalized diffusion equation can be derived from the governing equations
of the CTRW. Another approach to anomalous diffusion has been put forward by
Fogedby who proposed a coupled system of Langevin equations leading to the
generalized diffusion equations \cite{Fogedby}. In a sense, this approach can be considered
as a continuous realization of the CTRW.
In the present paper we consider the effect of external forces onto processes
exhibiting anomalous diffusion. Although the incorporation of external forces
is straightforward in classical diffusion theory, leading to the well-known Fokker-Planck
equations, this task appears to be rather involved when anomalous diffusion is considered.
The arising difficulties are due to the long jumps and the long waiting times that
can occur. Throughout this paper we distinguish between {\it biasing} and
{\it decoupled} external forces. This terminology shall indicate that the force can act in two different ways. When we speak of a biasing field,
we mean that the external field acts as a bias only at the time of the actual
jump. In contrast to this we speak of a decoupled field if the diffusing particle
is affected permanently during the waiting time periods and hence the diffusion process is
decoupled from the effect of the field. Note that this distinction is not necessary
for classical diffusion processes.
While the inclusion of external potentials is relatively well understood on the level
of the generalized diffusion equations and the generalized Fokker-Planck equations
respectively, there are still some open questions as long as the corresponding Langevin
equations are considered. However, an exhaustive comprehension of the Langevin equations
is inevitable to investigate the properties of sample paths of such processes.
The aim of this paper is to clarify the different possibilities
of including an external force into the framework of anomalous diffusion, namely
the difference between biasing and decoupled external forces, by considering
the corresponding Langevin equations. It is organized as follows.
First we state some fundamentals concerning the theory of anomalous diffusion and thereby briefly review Fogedby's continuous formulation of CTRWs and the concept of subordination. After briefly reviewing some well-known and some very recent results on the Langevin formulations of the generalized Fokker-Planck equations for biasing external fields
we establish the Langevin equations for generalized Fokker-Planck equations
for decoupled external potentials which have not been considered so far. We conclude with a discussion
on the role of external forces in anomalous diffusion.
\section{CTRWs and Generalized Diffusion Equations}
A suitable stochastic process to describe discrete sample realizations of many microscopic
processes leading to anomalous diffusion is provided by the Continuous Time Random Walk
(CTRW). This process is an extension of the standard random walk and allows for random waiting times
between jumps of random length. In the decoupled case the CTRW is characterized by a waiting time distribution
$W(t)$ and a jump length distribution $F(\Delta x)$. Depending on the properties
of these distributions, the CTRW can lead to an anomalous behavior of the
mean-squared-displacement \cite{Balescu}. The governing equation for the probability
density function (pdf) of the position of the walker is the Montroll-Shlesinger master equation \cite{MontShle}
\begin{eqnarray}\label{MontShle}
\frac{\partial}{\partial t}f(x, t)&=&\int \,dx' F(x; x')\int_0^t
dt'\Phi(t-t') f(x', t') \nonumber \\
& & -\int_0^t dt'\Phi(t-t') f(x, t')\, ,
\end{eqnarray}
where the time kernel $\Phi(t-t')$ is related to the waiting time distribution \cite{phiexpl}.
The master equation (\ref{MontShle}) has a straightforward interpretation.
It states that the density of particles at position $x$ and time $t$ is
increased by particles that have been at $x'$ at time $t'$ and perform a jump from $x'$ to $x$ at time $t$.
On the other hand the density is decreased by particles that have been
at $x$ and jump away at time $t$ to some other position. The resulting process is non-Markovian
for waiting time distributions with a power-law tail.
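The slowing-down caused by a power-law tail can be seen directly in simulation. The following Python sketch implements a decoupled CTRW with unit jumps and Pareto-tailed waiting times, $P(\tau>u)=u^{-\alpha}$ for $u\geq 1$; this concrete waiting time law, the parameter values and all names are our own illustrative choices, not taken from the text. The mean number of jumps up to time $t$ then grows like $t^\alpha$ instead of linearly.

```python
import random

def ctrw(T, alpha, rng):
    """Decoupled CTRW: +-1 jumps separated by Pareto(alpha) waiting times.
    Returns the final position and the number of jumps up to time T."""
    t, x, jumps = 0.0, 0, 0
    while True:
        # inverse-transform sampling of P(tau > u) = u**(-alpha), u >= 1
        t += (1.0 - rng.random()) ** (-1.0 / alpha)
        if t > T:
            return x, jumps
        x += rng.choice((-1, 1))
        jumps += 1

rng = random.Random(1)
alpha = 0.5
short = sum(ctrw(1e2, alpha, rng)[1] for _ in range(2000)) / 2000.0
long_ = sum(ctrw(1e4, alpha, rng)[1] for _ in range(2000)) / 2000.0
growth = long_ / short   # roughly (1e4 / 1e2)**alpha = 10, far below the factor 100 of a normal walk
```

The ratio of mean jump counts over two decades of time stays close to $10$ for $\alpha=1/2$, in contrast to the factor $100$ expected for exponentially distributed waiting times.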
Another approach to describe the evolution of pdfs in the context of anomalous
diffusion are the generalized diffusion or fractional diffusion equations.
For a subdiffusive process the generalized diffusion equation can be cast into the form
\begin{equation}\label{genDiffeq}
\frac{\partial}{\partial t}f(x, t)=\int_0^t dt' \phi(t-t') D
\frac{\partial^2}{\partial x^2}f(x,t')\, .
\end{equation}
For the special choice $\phi(\tau)=\tau^{-1+\alpha}$, which
corresponds to Mittag-Leffler type waiting time distributions $W(\tau)\sim \tau ^{-1-\alpha}$,
Eq.(\ref{genDiffeq}) yields the (time-) fractional diffusion equation
\begin{equation}
\frac{\partial}{\partial t}f(x,
t)=\mathcal{D}_t^{1-\alpha}\frac{\partial^2}{\partial x^2}f(x,t)\, ,
\end{equation}
where $\mathcal {D}_t^{1-\alpha}$ is the Riemann-Liouville fractional derivative.
It is well-known that generalized diffusion equations can be derived from the
Montroll-Shlesinger equation, see e.g. \cite{MetzlerBarkai}.
In order to describe superdiffusive processes
the so-called space-fractional diffusion equations have to be taken into account. These
equations describe Markovian processes with power-law distributed jump-length and are often
referred to as L\'evy flights.
\section{Fogedby's Approach and Subordination}
A continuous realization of the CTRW has been considered by Fogedby in \cite{Fogedby}. His
formulation is based on a system of coupled Langevin equations for the position $x$ and time $t$
\begin{equation}\label{Fogedbysys}
{\dot x}(s)=\Gamma (s),\qquad {\dot t}(s)=\eta (s)\, ,
\end{equation}
where $\Gamma(s)$ and $\eta(s)$ are random noise sources which are assumed to be independent.
$\eta(s)$ has to be positive in this context due to causality. The system (\ref{Fogedbysys}) can be interpreted as a standard Langevin equation in an internal time $s$ that is
subjected to a random time change. This random time change to the physical
time $t$ is described by the second equation. The combined process
in physical time is then given according to $x(t)=x[s(t)]$, where $s(t)$ is
the inverse process to $t(s)$ defined as
\begin{equation}
s({\tilde t})=\inf\lbrace s:t(s)>{\tilde t}\rbrace\, .
\end{equation}
Closely related to this concept of Fogedby is the mathematical method of {\it subordination}.
Using the (not too formal) notation of Fogedby one calls the process $x(s)$ a parent process and
$s$ its operational time. The random time-transformation function $t(s)$ has to be a
non-decreasing right-continuous function with an inverse function $s(t)$. The resulting
process in physical time $t$ is then obtained by $x(t)=x[s(t)]$ and is referred to as
subordinated to the parent process. Consequently, the processes $t(s)$ and $s(t)$ are named subordinator and inverse subordinator, respectively.
In \cite{Fogedby} it was shown that the Langevin equations (\ref{Fogedbysys}) lead
to a time-fractional diffusion equation if the $\eta (s)$ are governed by a generic one-sided
$\alpha$-stable distribution. Generally the pdf of the subordinated process can be stated
in the form
\begin{equation}\label{subsol}
f(x,t)=\int_0^\infty ds \, p(s,t)\,f_0(x,s)\, ,
\end{equation}
where $p(s,t)$ is the pdf of the inverse subordinator and $f_0(x,s)$ is the solution of
the parent process \cite{Barkai, Meerschaert}.
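For $\alpha=1/2$ the one-sided stable law is the L\'evy distribution, which can be sampled exactly as $1/(2Z^2)$ with $Z$ a standard Gaussian, so Fogedby's construction is straightforward to implement. The following Python sketch, a minimal discretization of our own (all names and step sizes are illustrative assumptions), builds the parent process $x(s)$, the subordinator $t(s)$ and evaluates the subordinated process $x(t)=x[s(t)]$.

```python
import math, random

rng = random.Random(7)

def subordinated_path(n, ds):
    """Discretize Fogedby's system for alpha = 1/2: x(s) is Brownian and t(s)
    is a one-sided 1/2-stable subordinator with Levy increments ds**2/(2 Z**2)."""
    xs, ts = [0.0], [0.0]
    for _ in range(n):
        xs.append(xs[-1] + math.sqrt(ds) * rng.gauss(0.0, 1.0))
        z = rng.gauss(0.0, 1.0)
        ts.append(ts[-1] + ds * ds / (2.0 * z * z))  # strictly positive jump of t(s)
    return xs, ts

def x_at(t, xs, ts):
    """x(t) = x[s(t)] with s(t) = inf{s : t(s) > t}, the inverse subordinator."""
    k = next(i for i, tk in enumerate(ts) if tk > t)
    return xs[k]
```

A large jump of $t(s)$ translates into a long interval on which $s(t)$, and hence $x(t)$, is constant; these are the waiting periods of the CTRW picture.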
\section{Biasing External Force-fields}
Throughout this paper we will restrict to anomalous diffusion processes governed by
waiting time distributions, i.e. we will consider equations of the form (\ref{genDiffeq}).
Hence we consider processes which are governed, e.g., by L\'evy-stable subordinators
and time-fractional equations. The role of external potentials for L\'evy flights is discussed
in \cite{Brockmann}.
Let us first clarify what we mean by {\it biasing} external forces. Therefore, consider
the generic scenario of a subdiffusive CTRW governed by power-law distributed waiting times.
A biasing external potential or force shall not affect the diffusing particle during the waiting periods but
only provide it with a {\it bias} at the instant of a jump. In a sense one might say that the action of the force
can be regarded as anomalous as well.
If the considered force is time-independent it is well-known that anomalous diffusion
in biasing fields can be described by the generalized Fokker-Planck equation
\begin{equation}
\frac{\partial}{\partial t}f(x,
t)=\int_0^t dt'\phi(t-t')\left[-\frac{\partial}{\partial x}F(x)
+\frac{\partial^2}{\partial x^2}\right]f(x,t')\, ,
\end{equation}
where $F(x)$ is the external force \cite{MetzlerKlafter, MetzlerBarkai}. The equivalent description based on
Langevin equations is provided by the coupled system
\begin{equation}
{\dot x}(s)=F(x)+\Gamma (s),\qquad {\dot t}(s)=\eta (s)\, ,
\end{equation}
where $\Gamma (s)$ is a Gaussian and $\eta (s)$ is a fully skewed
$\alpha$-stable L\'evy noise source \cite{Fogedby}.
\begin{figure}
\begin{center}
\includegraphics[width=0.6\linewidth]{NeuCompte.eps}
\caption{Time evolution of the probability density for a constant external biasing force
$F=v$ for Mittag-Leffler type waiting time distributions with $\alpha = 0.5$ for the consecutive
times $t=1$ to 5. One can observe the persistence of the maximum at the origin, indicating that there is no internal dynamics during the waiting periods. The external biasing force acts only at the time of the displacements, resulting in an asymmetry of the pdf. The plume stretches more
and more into the direction of the force.}\label{biasadvec}
\end{center}
\end{figure}
If a time-dependent external force $F(t)$ is considered
it turns out that the situation is far more involved. There exist different alternatives
to include the force. One can for example consider the generalized Fokker-Planck equation
\begin{equation}\label{FFPEwrong}
\frac{\partial}{\partial t}f(x,
t)=\int_0^t dt'\phi(t-t')\left[-\frac{\partial}{\partial x}F(t')
+\frac{\partial^2}{\partial x^2}\right]f(x,t')\, .
\end{equation}
However, such a generalized Fokker-Planck equation turns out to be physically meaningless.
The correct equation has been found recently \cite{SokKlaf, Heinsalu, Shushin}
\begin{equation}\label{timedep}
\frac{\partial}{\partial t}f(x, t)=\left[-\frac{\partial}{\partial x}F(t)
+\frac{\partial^2}{\partial x^2}\right]\int_0^tdt'\phi(t-t')f(x,t')\, .
\end{equation}
Recall that for the fractional time-kernel $\phi(t-t')=(t-t')^{(1-\alpha)}$, Eq.(\ref{timedep})
yields the fractional Fokker-Planck equation
\begin{equation}
\frac{\partial}{\partial t}f(x, t)=\left[-\frac{\partial}{\partial x}F(x, t)
+\frac{\partial^2}{\partial x^2}\right]\mathcal{D}_t^{1-\alpha}f(x,t)\, .
\end{equation}
Notice that the difficulty of time-dependent external forces stems from the fact
that in this case the fractional derivative and the Fokker-Planck drift-term
do not commute anymore. On the basis of the generalized Fokker-Planck equation,
the difficulty is due to the fact that it is not clear whether the external force
has to depend on $t'$ or $t$. For a detailed treatment of this issue,
we refer the reader to the original papers. At this point, we want to confine ourselves to a
simple plausibility argument to account for the correct operator ordering which naturally
does not replace a rigorous derivation.
As we have already mentioned, when time-dependent transition amplitudes $F(x; x')$ are considered,
the question arises whether this amplitude has to depend on $t'$ or $t$, which is equivalent to the operator ordering problem in Eq.(\ref{timedep}). To answer this question, let us consider the
corresponding CTRW governed by the Montroll-Shlesinger equation (\ref{MontShle}).
According to the interpretation of this equation the probability to be at the position $x$
at time $t$ is increased by the particles that jump at time $t$ from some $x'$ to $x$.
Since this jump which occurs at time $t$ is governed by the transition amplitude $F(x;x')$
it is clear that the transition amplitude has to depend on the time of the jump, i.e. $F(x;x',t)$.
Performing the appropriate limit procedure, one obtains Eq.(\ref{timedep}) as
the correct FFPE for time-dependent Fokker-Planck operators.
Consequently, the corresponding Langevin equation for a time-dependent forcing
is not straightforward to derive. In fact, it has even been stated in \cite{Heinsalu}
that it is impossible to find a subordination description for time-dependent external
fields. If the force is assumed to depend on the internal time, i.e.
\begin{equation}\label{wronglangevin}
{\dot x}(s)=F(s)+\Gamma (s),\qquad {\dot t}(s)=\eta (s)\, ,
\end{equation}
which corresponds to a completely subordinated force, the corresponding generalized
Fokker-Planck equation would be Eq.(\ref{FFPEwrong}) and thus Eq.(\ref{wronglangevin})
lacks a physical interpretation.
The appropriate Langevin system has been found recently by Magdziarz and co-workers \cite{Magdziarz}.
They argued that a deterministic force should not be modified by the subordination procedure and
depend on the physical time $t$ and proposed the Langevin equations
\begin{equation}\label{klaftersys}
{\dot x}(s)=F(t(s))+\Gamma (s),\qquad {\dot t}(s)=\eta (s)\, .
\end{equation}
One recognizes that the force depends on the subordination process. Subordination of the process
$x(s)$ then yields for the force term $F[t(s[t])]=F(t)$ since $t(s[t])=t$ and hence the desired dependence
on the physical time. It has been proven in \cite{Magdziarz} that the Langevin equations (\ref{klaftersys}) yield the
same probability distributions as Eq.(\ref{timedep}) and hence that they describe the same process.
\section{Decoupled External Force-fields}
If a particle is assumed to be affected by an external potential throughout
the whole waiting time period and the anomalous diffusion process is independent
of this potential, we speak of a decoupled potential.
It is instructive to consider a simple example where a particle
which is advected by a constant force during the waiting periods and performs jumps
after the waiting periods.
The pdf of such a process has been proven to be governed by
\begin{equation}\label{advec}
\left[\frac{\partial}{\partial t}+v\frac{\partial}{\partial x}\right]f(x, t)=
\int_0^t dt' \phi(t-t') \frac{\partial^2}{\partial x^2}f(x-v(t-t'), t')\, ,
\end{equation}
which can be considered as a generalized advection-diffusion equation, where
the advection is normal while the diffusion is anomalous \cite{Eule}. Observe
the retardation of the pdf on the right-hand side which renders the equation
non-local in space. A solution of this equation can be found after passing into
a co-moving reference frame. The ansatz $f(x, t)=F(\xi, t)$ with the shifted variable $\xi = x- vt$
yields a generalized diffusion equation for $\xi$
\begin{equation}\label{gendiffxi}
\frac{\partial}{\partial t}F(\xi, t)=\int_0^t dt' \phi(t-t') \frac{\partial^2}{\partial \xi ^2}F(\xi, t')\, ,
\end{equation}
whose solution is given by (see Eq.(\ref{subsol}))
\begin{equation}\label{gendiffsol}
F(\xi ,t)=\int_0^\infty ds \, p(s,t)\,F_0(\xi ,s)\, ,
\end{equation}
where $F_0$ is the solution of the standard diffusion equation.
\begin{figure}
\begin{center}
\includegraphics[width=0.6\linewidth]{NeuFADE.eps}
\caption{Time evolution of the probability density for a decoupled external constant force
with the same settings as in Figure \ref{biasadvec}, i.e. Eq.(\ref{advec}) with a fractional
time-kernel. One can see that this evolution can be regarded as a force-free anomalous diffusion
in a co-moving reference frame. Note that the force does not affect the diffusion process
and hence can be considered as decoupled.}\label{0Umsd}
\end{center}
\end{figure}
In order to establish the corresponding set of Langevin equations, we have to be aware
of the decoupled character of the advective field. That means, the advection
has to be completely independent of the internal time $s$. Let us consider the
following set of Langevin equations
\begin{equation}\label{adveclangevin}
{\dot x}(s)=v\,\eta(s)+\Gamma (s),\qquad {\dot t}(s)=\eta (s)\, .
\end{equation}
The solution of the subordinated process $x[s(t)]$ can be found by integration
\begin{equation}
x(t)=x[s(t)]=\int_0^{s(t)}v\, \eta(s') d[s'(t')]+B[s(t)]\, ,
\end{equation}
where $B[s(t)]$ means subordinated Brownian motion, that is the force-free pure subdiffusive
part of the process \cite{Meerschaert, Magdziarz, Gorenflo, Pira}. The integral can be rewritten as
\begin{eqnarray}
\int_0^{s(t)}v\, \eta(s') d[s'(t')]&=&\int_0^{s(t)}v\, \frac{dt'}{ds'} d[s'(t')] \nonumber \\
&=&\int_0^{t}v\,dt' = v t \nonumber
\end{eqnarray}
yielding for the subordinated process
\begin{equation}\label{decoupledsolution}
x(t)= v t+ B[s(t)] \, .
\end{equation}
Introducing the variable $\xi=x-vt$ again, this equation can be written as
\begin{equation}
\xi(t)=B[s(t)] \, .
\end{equation}
Thus the variable $\xi$ performs a force-free subdiffusive process and therefore yields the probability
distributions given by Eq.(\ref{gendiffsol}), which proves that the Langevin equations (\ref{adveclangevin})
actually corresponds to the generalized Fokker-Planck equation (\ref{advec}).
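The identity $x(t)=vt+B[s(t)]$ of Eq.~(\ref{decoupledsolution}) holds step by step in any discretization of Eqs.~(\ref{adveclangevin}), whatever law the subordinator increments follow. The following Python sketch of ours verifies this; the $\alpha=1/2$ increments and all names are illustrative assumptions.

```python
import math, random

rng = random.Random(3)
v, ds, n = 2.0, 0.05, 500

x = t = b = 0.0
max_rel_dev = 0.0
for _ in range(n):
    dw = math.sqrt(ds) * rng.gauss(0.0, 1.0)         # Brownian increment of x(s)
    dt = ds * ds / (2.0 * rng.gauss(0.0, 1.0) ** 2)  # positive subordinator increment
    x += v * dt + dw   # dx = v eta ds + dW, with eta ds = dt
    t += dt            # physical time
    b += dw            # force-free subordinated Brownian part B[s(t)]
    max_rel_dev = max(max_rel_dev, abs(x - (v * t + b)) / (1.0 + abs(v * t)))
```

Up to floating-point rounding, $x-vt$ coincides with the force-free part at every step, so the decoupled drift can be removed exactly by passing to the co-moving variable $\xi=x-vt$.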
The case of a time-dependent external field is only slightly more difficult. Consider
a process, where the particle (of unit mass) performs during the waiting periods an overdamped
motion according to the equation of motion
\begin{equation}\label{eom}
{\dot x}(t)= F(t)
\end{equation}
where $F(t)$ is some time-dependent force-field.
The corresponding generalized Fokker-Planck equation reads \cite{Eule}
\begin{equation}\label{advectime}
\left[\frac{\partial}{\partial t}+F(t)\frac{\partial}{\partial x}\right]f(x, t)=
\int_0^t dt' \phi(t-t') \frac{\partial^2}{\partial x^2} e^{-\int_{t'}^t F(t'') dt'' \frac{\partial}{\partial x}}f(x, t')\, .
\end{equation}
The exponential function on the right-hand side is the so-called Frobenius-Perron operator of the equation of motion
for the deterministic part of $x(t)$. This operator ensures the proper retardation of the probability
distribution during the waiting period \cite{Gaspard}.
Since Eq.(\ref{eom}) describes an invertible conservative system Eq.(\ref{advectime}) can be expressed as (see \cite{Gaspard})
\begin{equation}\label{advectimeretard}
\left[\frac{\partial}{\partial t}+F(t)\frac{\partial}{\partial x}\right]f(x, t)=
\int_0^t dt' \phi(t-t') \frac{\partial^2}{\partial x^2}f(x-\int_{t'}^tF(t'')dt'', t')\, .
\end{equation}
Making the ansatz $f(x, t)=F(\xi, t)$ with $\xi=x-\int^t F(t')\,dt'$, we find that the pdf of $\xi$ is governed
by the generalized diffusion equation (\ref{gendiffxi}).
The corresponding Langevin equation reads
\begin{equation}\label{timedeplangevin}
{\dot x}(s)=F(s)\,\eta(s)+\Gamma (s),\qquad {\dot t}(s)=\eta (s)\, .
\end{equation}
Integration of this equation yields for the subordinated process
\begin{eqnarray}
x(t)=x[s(t)]&=&\int_0^{s(t)}F(s')\, \eta(s') d[s'(t')]+B[s(t)] \nonumber \\
& = & \int_0^t F(t') dt' + B[s(t)] \, .
\end{eqnarray}
Evidently, $\xi=x-\int^t F(t') dt'$ performs in this case a force-free subdiffusive
process, which proves that $x(t)$ is a solution of Eq.(\ref{advectimeretard}).
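This solution structure can be checked with a short numerical sketch. The discretization below is illustrative only: the function and parameter names are ours, and Pareto-distributed waiting increments are used as a stand-in for the one-sided stable noise $\eta(s)$. With the Brownian term switched off, the subordinated trajectory reduces exactly to $\int_0^t F(t')\,dt'$, i.e. $\xi$ vanishes.

```python
import math
import random

def simulate_decoupled(F, n_steps=2000, ds=0.01, noise=1.0, seed=1):
    """Euler scheme for dx/ds = F*eta(s) + Gamma(s), dt/ds = eta(s).

    eta(s) ds is modeled by Pareto increments (an illustrative stand-in for
    one-sided stable noise); Gamma is Gaussian white noise of strength `noise`.
    The decoupled force F is evaluated at the physical time t(s)."""
    rng = random.Random(seed)
    x = t = 0.0
    for _ in range(n_steps):
        dt = rng.paretovariate(0.8) * ds          # positive waiting increment
        x += F(t) * dt + noise * math.sqrt(2.0 * ds) * rng.gauss(0.0, 1.0)
        t += dt
    return x, t

# With the Brownian term switched off, x(t) equals the integral of F over
# physical time exactly, so xi = x - int F dt vanishes (here: constant F).
x, t = simulate_decoupled(lambda t: 0.5, noise=0.0)
assert abs(x - 0.5 * t) < 1e-6
```

The deterministic check mirrors the argument in the text: the noise-free trajectory carries only the decoupled drift, so the shifted variable $\xi$ stays at the origin along every realization.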
Note, however, that the inclusion of space-dependent forces is
straightforward only as long as conservative dynamics is considered, because
only in that case can the Frobenius-Perron operator be expressed by a substitution operator
as in Eq.(\ref{advectimeretard}). Even the simple case of a linearly damped motion
between the random kicks, i.e. ${\dot x}=-\gamma x$, leads to a generalized Fokker-Planck
equation whose solution cannot be expressed in closed form \cite{Eule}. Hence the proof used here
is no longer applicable. Similarly, a closed-form solution of the Langevin equation
cannot be stated for this case.
Comparing the Langevin equation for a biasing time-dependent force, Eq.(\ref{klaftersys}), with the
Langevin equation for the decoupled case, one realizes the difference between these equations.
In the case of a biasing force, the force has to depend on the subordination process in the parent
process. The force term then yields the contribution $\int_0^t F(t')ds(t')$ to the process. Observe
that the force indeed depends on the physical time $t$ but is integrated over the subordinated measure.
In the decoupled case, however, the force is integrated in physical time and is thus completely independent
of the diffusion process.
\begin{figure}
\begin{center}
\includegraphics[width=0.6\linewidth]{fracadfokkerplanck.eps}
\caption{Time evolution of the probability density for a decoupled external constant force
combined with a biasing constant force of the same amplitude but opposite sign. Observe
that the maximum of the distribution moves with the decoupled field, while the
distribution becomes more and more asymmetric due to the biasing field.}\label{advecfokker}
\end{center}
\end{figure}
Of course, it is possible to state the Langevin equation for a process in which a biasing
and a decoupled force act independently. If $F_B$ denotes the biasing and $F_D$ the decoupled
force, the corresponding Langevin equation reads
\begin{equation}\label{langevinboth}
{\dot x}(s)=F_D(s)\,\eta(s)+ F_B(t(s))+\Gamma (s),\qquad {\dot t}(s)=\eta (s)\, .
\end{equation}
The time evolution of the pdf of such a process is displayed in Fig.~\ref{advecfokker} for a constant biasing force
and a constant decoupled force of the same amplitude but opposite sign. Note, however, that in many
settings the two contributions are not independent and may thus exhibit mutual dependences.
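A combined process of this kind can be sketched numerically along the same lines as before (all names are ours; Pareto increments again stand in for the one-sided stable noise, so this is an illustration, not a faithful stable-subordinator simulation): the decoupled force enters multiplied by the waiting increment, while the biasing force enters per unit of operational time.

```python
import math
import random

def simulate_both(F_D, F_B, n_steps=2000, ds=0.01, noise=1.0, seed=2):
    """Euler scheme for dx/ds = F_D*eta(s) + F_B(t(s)) + Gamma(s), dt/ds = eta(s).

    eta(s) ds is modeled by Pareto increments (an illustrative stand-in for
    one-sided stable noise); both forces are evaluated at the physical time t(s)."""
    rng = random.Random(seed)
    x = t = s = 0.0
    for _ in range(n_steps):
        dt = rng.paretovariate(0.8) * ds          # heavy-tailed waiting increment
        x += F_D(t) * dt + F_B(t) * ds + noise * math.sqrt(2.0 * ds) * rng.gauss(0.0, 1.0)
        t += dt
        s += ds
    return x, t, s

# Noise switched off: the decoupled part integrates in physical time t, the
# biasing part in operational time s, so x = F_D*t + F_B*s for constant forces
# of the same amplitude but opposite sign, as in the figure.
x, t, s = simulate_both(lambda t: 1.0, lambda t: -1.0, noise=0.0)
assert abs(x - (t - s)) < 1e-6
```

Since $t(s)\geq s$ grows faster than the operational time, the decoupled drift dominates the motion of the maximum, while the biasing term produces the asymmetry seen in the figure.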
\section{Conclusions}
Concluding, in this paper we have discussed the effect of external forces
on anomalous diffusion processes on the basis of their corresponding Langevin
equations. We have introduced the concepts of a biasing and of a decoupled external
field, which have no classical analogue. In analogy with the recently established
Langevin formulation of biased diffusion in a time-dependent external field \cite{Magdziarz},
we have rigorously derived the Langevin equations for decoupled forces.
To clarify the concepts of biasing and decoupled external forces in systems
exhibiting anomalous diffusion, we have presented the time evolution of the
probability densities for the different cases considered. We have shown
that the established Langevin equation for decoupled force fields can be solved exactly
for conservative space-independent dynamics.
The presented work has aimed at a clarification of the role of external forces in complex
systems characterized by subdiffusion and long waiting times.
The approach based on the Langevin equation has thereby provided deep insight into the
physical nature of the processes.
Finally, we exemplify the concept by two simple applications, each with a constant force.
First, consider the diffusion of tracer particles in an advective flow which
contains frequent obstacles such as, e.g., sediments. In this case the external force, i.e.
the advective flow, is decoupled from the diffusion process. Second, if the diffusion of
groundwater through porous media is examined, the gravity field provides a bias on the
anomalous diffusion process.
\section{Introduction}
The role of Hilbert polynomials in commutative and homological algebra as well as in algebraic geometry and combinatorics is well known. A similar role
in differential algebra is played by differential dimension polynomials, which describe in exact terms the freedom degree of a dynamic system,
as well as the number of arbitrary constants in the general solution of a system of partial algebraic differential equations. The notion of a differential dimension polynomial was introduced by E. Kolchin \cite{K1} who proved the following fundamental result.
\begin{theorem} Let $K$ be a differential field of zero characteristic with basic derivations $\delta_{1},\dots, \delta_{m}$. Let $\Theta$ denote the free commutative semigroup generated by $\delta_{1},\dots, \delta_{m}$, and for any $r\in {\bf N}$, let $\Theta(r) = \{\theta = \delta_{1}^{k_{1}}\dots \delta_{m}^{k_{m}}\in\Theta \,|\,\sum_{i=1}^{m}k_{i}\leq r\}$. Furthermore, let $L = K\langle \eta_{1},\dots,\eta_{n}\rangle$ be a differential field extension of $K$ generated by a finite set $\eta = \{\eta_{1}, \dots , \eta_{n}\}$.
Then there exists a polynomial $\omega_{\eta|K}(t)\in {\bf Q}[t]$ such that $\omega_{\eta|K}(r) = trdeg_{K}K(\{\theta \eta_{j} | \theta \in \Theta(r), \,1\leq j\leq n\})$ for all sufficiently large $r\in {\bf Z}$. This polynomial can be written in the form $\omega_{\eta|K}(t) = \sum_{i=0}^{m}a_{i}{t+i\choose i}$ with integer coefficients $a_{0},\dots, a_{m}$. The degree of $\omega_{\eta|K}$ does not exceed $m$, and the numbers $d = \deg \omega_{\eta|K}$,\, $a_{m}$ and $a_{d}$ do not depend on the choice of the system of differential generators $\eta$ of the extension $L/K$. Moreover, $a_{m}$ is equal to the differential transcendence degree of $L$ over $K$, that is, to the maximal number of elements $\xi_{1},\dots,\xi_{k}\in L$ such that the set $\{\theta \xi_{i} | \theta \in \Theta, 1\leq i\leq k\}$ is algebraically independent over $K$.
\end{theorem}
The polynomial $\omega_{\eta|K}(t)$ is called the {\em differential dimension polynomial} of the extension $L/K$ associated with the set of differential generators $\eta$.
If $P$ is a prime differential ideal of a finitely generated differential algebra $R = K\{\zeta_{1},\dots, \zeta_{n}\}$ over a differential field $K$, then the quotient field of $R/P$ is a differential field extension of $K$ generated by the images of $\zeta_{i}$ ($1\leq i\leq n$) in $R/P$. The corresponding differential dimension polynomial, therefore, characterizes the ideal $P$; it is denoted by $\omega_{P}(t)$. Assigning such polynomials to prime differential ideals has led to a number of new results on the Krull-type dimension of differential algebras and the dimension of differential varieties (see, for example, \cite{Johnson1}, \cite{Johnson2} and \cite{Johnson3}). Furthermore, as was shown by A. Mikhalev and E. Pankratev \cite{MP1}, one can naturally assign a differential dimension polynomial to a system of algebraic differential equations, and this polynomial expresses A. Einstein's strength of the system (see \cite{E}). Methods of computation of (univariate) differential dimension polynomials and of the strength of systems of differential equations via the Ritt-Kolchin technique of characteristic sets can be found, for example, in \cite{MP2} and \cite[Chapters 5, 9]{KLMP}. Note also that there are many works on the computation of dimension polynomials of differential, difference and difference-differential modules with the use of various generalizations of the Gr\"obner basis method (see, for example, \cite[Chapters V - XI]{KLMP}, \cite{Levin1}, \cite{Levin2}, \cite{Levin3}, \cite[Chapter 3]{Levin4}, and \cite{ZW}).
In this paper we develop a method of characteristic sets with respect to several orderings for algebras of difference-differential polynomials over a difference-differential field whose basic set of derivations is partitioned into several disjoint subsets. We apply this method to prove the existence, outline a method of computation, and determine invariants of a multivariate dimension polynomial associated with a finite system of generators of a difference-differential field extension (and a partition of the basic set of derivations). We also show that most of these invariants are not carried by univariate dimension polynomials, and we show how the consideration of the new invariants can be applied to the isomorphism problem for difference-differential field extensions and the equivalence problem for systems of algebraic difference-differential equations.
\section{Preliminaries}
Throughout the paper, ${\bf N}, {\bf Z}$, ${\bf Q}$, and ${\bf R}$ denote the sets
of all non-negative integers, integers, rational numbers, and real numbers, respectively.
${\bf Q}[t]$ will denote the ring of polynomials in one variable $t$
with rational coefficients.
By a {\em difference-differential ring} we mean a commutative ring $R$ together with finite sets $\Delta = \{\delta_{1},\dots, \delta_{m}\}$ and $\sigma = \{\alpha_{1},\dots, \alpha_{n}\}$ of derivations and automorphisms of $R$, respectively, such that any two mappings of the set $\Delta\bigcup\sigma$ commute. The set $\Delta\bigcup\sigma$ is called the {\em basic set\/} of the difference-differential ring $R$, which is also called a $\Delta$-$\sigma$-ring. If $R$ is a field, it is called a {\em difference-differential field} or a $\Delta$-$\sigma$-field. Furthermore, in what follows, we denote the set $\{\alpha_{1},\dots, \alpha_{n}, \alpha^{-1}_{1},\dots, \alpha^{-1}_{n}\}$ by $\sigma^{\ast}$.
If $R$ is a difference-differential ring with a basic set $\Delta\bigcup\sigma$ described above, then
$\Lambda$ will denote the free commutative semigroup of all power products of the form $\lambda = \delta_{1}^{k_{1}}\dots \delta_{m}^{k_{m}}\alpha_{1}^{l_{1}}\dots \alpha_{n}^{l_{n}}$ where $k_{i}\in {\bf N},\, l_{j}\in {\bf Z}$ ($1\leq i\leq m,\, 1\leq j\leq n$). For any such element $\lambda$, we set $\lambda_{\Delta} = \delta_{1}^{k_{1}}\dots \delta_{m}^{k_{m}}$, $\lambda_{\sigma} = \alpha_{1}^{l_{1}}\dots \alpha_{n}^{l_{n}}$, and denote by $\Lambda_{\Delta}$ and $\Lambda_{\sigma}$ the commutative semigroup of power products $\delta_{1}^{k_{1}}\dots \delta_{m}^{k_{m}}$ and the commutative group of elements of the form $\alpha_{1}^{l_{1}}\dots \alpha_{n}^{l_{n}}$, respectively.
The {\em order} of $\lambda$ is defined as $ord\,\lambda = \sum_{i=1}^{m}k_{i} + \sum_{j=1}^{n}|l_{j}|$, and for every $r\in {\bf N}$, we set $\Lambda(r) = \{\lambda\in \Lambda\,|\, ord\,\lambda\leq r\}$.
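For small $m$, $n$ and $r$ the set $\Lambda(r)$ can be enumerated directly. The following brute-force sketch (the function name is ours, for illustration only) counts the exponent vectors $(k_{1},\dots,k_{m},l_{1},\dots,l_{n})$ with $\sum k_{i}+\sum|l_{j}|\leq r$:

```python
from itertools import product

def card_Lambda(m, n, r):
    """Brute-force Card Lambda(r): power products delta^k alpha^l with
    k in N^m, l in Z^n and ord(lambda) = sum(k) + sum(|l|) <= r."""
    return sum(1
               for k in product(range(r + 1), repeat=m)
               for l in product(range(-r, r + 1), repeat=n)
               if sum(k) + sum(abs(v) for v in l) <= r)

# m = n = 1: ord <= 1 gives {1, delta_1, alpha_1, alpha_1^{-1}}; ord <= 2
# adds delta_1^2, delta_1 alpha_1, delta_1 alpha_1^{-1}, alpha_1^2, alpha_1^{-2}.
assert card_Lambda(1, 1, 1) == 4
assert card_Lambda(1, 1, 2) == 9
```

Such an enumeration is useful as a sanity check for the dimension polynomials evaluated later, since for a free extension the transcendence degrees in Theorem 2.2 count exactly these power products applied to the generators.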
\smallskip
A subring (an ideal) $R_{0}$ of a $\Delta$-$\sigma$-ring $R$ is said to be a difference-differential (or $\Delta$-$\sigma$-) subring
of $R$ (respectively, a difference-differential (or $\Delta$-$\sigma$-) ideal of $R$) if $R_{0}$ is closed with respect to the action of any operator of $\Delta\bigcup\sigma^{\ast}$. If a prime ideal $P$ of $R$ is closed with respect to the action of $\Delta\bigcup\sigma^{\ast}$, it is called a {\em prime} difference-differential (or $\Delta$-$\sigma$-) {\em ideal\/} of $R$.
If $R$ is a $\Delta$-$\sigma$-field and $R_{0}$ a subfield of $R$ which is also a $\Delta$-$\sigma$-subring of $R$, then $R_{0}$ is said to be a
$\Delta$-$\sigma$-subfield of $R$; $R$, in turn, is called a difference-differential (or $\Delta$-$\sigma$-) field extension or a $\Delta$-$\sigma$-overfield of $R_{0}$. In this case we also say that we have a $\Delta$-$\sigma$-field extension $R/R_{0}$.
If $R$ is a $\Delta$-$\sigma$-ring and $\Sigma \subseteq R$, then the intersection of all $\Delta$-$\sigma$-ideals of $R$ containing the set $\Sigma$ is, obviously, the smallest $\Delta$-$\sigma$-ideal of $R$ containing $\Sigma$. This ideal is denoted by $[\Sigma]$. (It is clear that $[\Sigma]$ is generated, as an ideal, by the set $\{\lambda \xi \,|\, \xi \in \Sigma,\, \lambda\in \Lambda\}$.) If the set $\Sigma$ is finite, $\Sigma = \{\xi_{1},\dots, \xi_{q}\}$, we say that the $\Delta$-$\sigma$-ideal $I = [\Sigma]$ is finitely generated (we write this as $I = [\xi_{1},\dots, \xi_{q}]$) and call $\xi_{1},\dots, \xi_{q}$ difference-differential (or $\Delta$-$\sigma$-) generators of $I$.
If $K_0$ is a $\Delta$-$\sigma$-subfield of the $\Delta$-$\sigma$-field $K$ and $\Sigma \subseteq K$, then the intersection of all $\Delta$-$\sigma$-subfields
of $K$ containing $K_0$ and $\Sigma$ is the unique $\Delta$-$\sigma$-subfield of $K$ containing $K_0$ and $\Sigma$ and contained in every $\Delta$-$\sigma$-subfield of $K$ containing $K_0$ and $\Sigma$. It is denoted by $K_{0}\langle \Sigma \rangle$. If $K = K_{0}\langle \Sigma \rangle$ and the set $\Sigma$ is finite, $\Sigma = \{\eta_{1},\dots,\eta_{s}\}$, then $K$ is said to be a finitely generated $\Delta$-$\sigma$-extension of $K_{0}$ with the set of $\Delta$-$\sigma$-generators $\{\eta_{1},\dots,\eta_{s}\}$. In this case we write
$K = K_{0}\langle \eta_{1},\dots,\eta_{s}\rangle$. It is easy to see that the field $K_{0}\langle \eta_{1},\dots,\eta_{s}\rangle$ coincides with the
field $K_0(\{\lambda \eta_{i} \,|\, \lambda \in \Lambda,\ 1\leq i\leq s\})$.
Let $R$ and $S$ be two difference-differential rings with the same basic set $\Delta\bigcup\sigma$, so that elements of the sets $\Delta$ and $\sigma$ act on each of the rings as mutually commuting derivations and automorphisms, respectively. A ring homomorphism $\phi: R \longrightarrow S$ is called a {\em difference-differential\/} (or $\Delta$-$\sigma$-) {\em homomorphism\/} if $\phi(\tau a) = \tau \phi(a)$ for any $\tau \in \Delta\bigcup\sigma$, $a\in R$.
\smallskip
If $K$ is a difference-differential ($\Delta$-$\sigma$-) field and $Y =\{y_{1},\dots, y_{s}\}$ is a finite set of symbols, then one can consider the countable set of symbols $\Lambda Y = \{\lambda y_{j}|\lambda \in \Lambda, 1\leq j\leq s\}$ and the polynomial ring $R = K[ \{\lambda y_{j}|\lambda \in \Lambda, 1\leq j\leq s\}]$ in the set of indeterminates $\Lambda Y$ over the field $K$. This polynomial ring is naturally viewed as a $\Delta$-$\sigma$-ring where $\tau(\lambda y_{j}) = (\tau\lambda)y_{j}$ for any $\tau \in \Delta\bigcup \sigma$, $\lambda \in \Lambda$, $1\leq j\leq s$, and the elements of $\Delta\bigcup\sigma$ act on the coefficients of the polynomials of $R$ as they act in the field $K$. The ring $R$ is called a {\em ring of difference-differential\/} (or $\Delta$-$\sigma$-) {\em polynomials\/} in the set of difference-differential ($\Delta$-$\sigma$-) indeterminates $y_{1},\dots, y_{s}$ over $K$. This ring is denoted by $K\{y_{1},\dots, y_{s}\}$ and its elements are called difference-differential (or $\Delta$-$\sigma$-) polynomials.
\smallskip
Let $L = K\langle \eta_{1},\dots,\eta_{s}\rangle$ be a difference-differential field extension of $K$ generated by a finite set $\eta =\{\eta_{1},\dots,\eta_{s}\}$. As a field, $L = K(\{\lambda\eta_{j} | \lambda\in \Lambda, 1\leq j\leq s\})$.
The following is a unified version of E. Kolchin's theorem on differential dimension polynomial and the author's theorem on the dimension polynomial of a difference field extension (see \cite{Levin1} or ~\cite[Theorem 4.2.5]{Levin4}\,).
\begin{theorem} With the above notation, there exists a polynomial $\phi_{\eta|K}(t)\in {\bf Q}[t]$ such that
{\em (i)}\, $\phi_{\eta|K}(r) = trdeg_{K}K(\{\lambda\eta_{j} | \lambda\in \Lambda(r), 1\leq j\leq s\})$ for all sufficiently large $r\in {\bf Z}$;
{\em (ii)}\, $\deg \phi_{\eta|K} \leq m+n$ and $\phi_{\eta|K}(t)$ can be written as \, $\phi_{\eta|K}(t) = \D\sum_{i=0}^{m+n}a_{i}{t+i\choose i}$ where $a_{0},\dots, a_{m+n}\in {\bf Z}$ and $2^{n}|a_{m+n}$\,\,.
{\em (iii)}\, $d = \deg \phi_{\eta|K}$,\, $a_{m+n}$ and $a_{d}$ do not depend on the set of difference-differential generators $\eta$ of $L/K$ ($a_{d}\neq a_{m+n}$ if and only if $d < m+n$). Moreover, $\D\frac{a_{m+n}}{2^{n}}$ is equal to the difference-differential transcendence degree of $L$ over $K$ (denoted by $\Delta$-$\sigma$-$trdeg_{K}L$), that is, to the maximal number of elements $\xi_{1},\dots,\xi_{k}\in L$ such that the family
$\{\lambda \xi_{i} | \lambda \in \Lambda, 1\leq i\leq k\}$ is algebraically independent over $K$.
\end{theorem}
The polynomial whose existence is established by this theorem is called a {\em univariate difference-differential} (or $\Delta$-$\sigma$-) {\em dimension polynomial} of the extension $L/K$ associated with the system of difference-differential generators $\eta$.
\section{Partition of the basic set of derivations and the formulation of the main theorem}
Let $K$ be a difference-differential field of zero characteristic with basic
sets $\Delta = \{\delta_{1},\dots, \delta_{m}\}$ and $\sigma = \{\alpha_{1},\dots, \alpha_{n}\}$
of derivations and automorphisms, respectively. Suppose that the set of derivations is
represented as the union of $p$ disjoint subsets ($p\geq 1$):
\begin{equation}\Delta = \Delta_{1}\bigcup \dots \bigcup \Delta_{p}\end{equation}
$$\text{where}\,\,\, \Delta_{1} = \{\delta_{1},\dots, \delta_{m_{1}}\},\, \Delta_{2} = \{\delta_{m_{1}+1},\dots, \delta_{m_{1}+m_{2}}\}, \,\dots,$$
$$\Delta_{p} = \{\delta_{m_{1}+\dots + m_{p-1}+1},\dots, \delta_{m}\}\, \,\, \, (m_{1}+\dots + m_{p} = m).$$
If \,\,$\lambda = \delta_{1}^{k_{1}}\dots \delta_{m}^{k_{m}}\alpha_{1}^{l_{1}}\dots \alpha_{n}^{l_{n}}\in\Lambda$
($k_{i}\in {\bf N}, \,\, l_{j}\in {\bf Z}$), then the order of $\lambda$ with respect to $\Delta_{i}$ ($1\leq i\leq p$)
is defined as $\D\sum_{\nu = m_{1}+\dots + m_{i-1}+1}^{m_{1}+\dots + m_{i}}k_{\nu}$; it is denoted by $ord_{i}\lambda$.
(If $i=1$, the last sum is replaced by $k_{1}+\dots + k_{m_{1}}$.)
The number $ord_{\sigma}\lambda = \D\sum_{j = 1}^{n}|l_{j}|$ is called the order of $\lambda$ with respect to $\sigma$. Furthermore, for any
$r_{1},\dots, r_{p+1}\in {\bf N}$, we set $$\Lambda(r_{1},\dots, r_{p+1}) = \{\lambda\in\Lambda\,|\,ord_{i}\lambda \leq r_{i} \,
(i=1,\dots, p)\,\,\, \text{and}\,\,\, ord_{\sigma}\lambda \leq r_{p+1}\}.$$
In what follows, for any permutation $(j_{1},\dots, j_{p+1})$ of the set $\{1,\dots, p+1\}$, $<_{j_{1},\dots, j_{p+1}}$ will denote the lexicographic order on ${\bf N}^{p+1}$ such that
\noindent$(r_{1},\dots, r_{p+1})<_{j_{1},\dots, j_{p+1}} (s_{1},\dots, s_{p+1})$ if and only if either $r_{j_{1}} < s_{j_{1}}$ or there exists $k\in {\bf N}$, $1\leq k\leq p$, such that $r_{j_{\nu}} = s_{j_{\nu}}$ for $\nu = 1,\dots, k$ and $r_{j_{k+1}} < s_{j_{k+1}}$.
Furthermore, if $\Sigma \subseteq {\bf N}^{p+1}$, then $\Sigma'$ denotes the set
\noindent$\{e\in \Sigma | e$ is a maximal element of $\Sigma$ with
respect to one of the $(p+1)!$ lexicographic orders
$<_{j_{1},\dots, j_{p+1}}\}$.
For example, if $\Sigma = \{(3, 0, 2), (2, 1, 1), (0, 1, 4), (1, 0, 3), (1, 1, 6), (3, 1, 0), \\(1, 2, 0)\} \subseteq {\bf N}^{3}$, then
$\Sigma' = \{(3, 0, 2), (3, 1, 0), (1, 1, 6), (1, 2, 0)\}$.
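The computation behind this example is easily mechanized. Since each lexicographic order is a total order, the maximal element of a finite set with respect to it is simply its maximum; the sketch below (the function name is ours) collects these maxima over all $(p+1)!$ orders:

```python
from itertools import permutations

def lex_maximal(Sigma, dim):
    """Elements of Sigma that are maximal with respect to at least one of
    the dim! lexicographic orders <_{j_1,...,j_dim}: for each permutation,
    compare tuples after reordering their coordinates accordingly."""
    return {max(Sigma, key=lambda e: tuple(e[j] for j in perm))
            for perm in permutations(range(dim))}

Sigma = {(3, 0, 2), (2, 1, 1), (0, 1, 4), (1, 0, 3), (1, 1, 6), (3, 1, 0), (1, 2, 0)}
assert lex_maximal(Sigma, 3) == {(3, 0, 2), (3, 1, 0), (1, 1, 6), (1, 2, 0)}
```

The assertion reproduces the set $\Sigma'$ stated in the example above.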
\begin{theorem} Let $L = K\langle \eta_{1},\dots,\eta_{s}\rangle$ be a
$\Delta$-$\sigma$-field extension generated by a set $\eta =
\{\eta_{1}, \dots , \eta_{s}\}$. Then there exists a polynomial
$\Phi_{\eta}(t_{1},\dots, t_{p+1})$ in $(p+1)$ variables $t_{1},\dots, t_{p+1}$
with rational coefficients such that
{\em (i)} \,$\Phi_{\eta}(r_{1},\dots, r_{p+1}) = trdeg_{K}K(\D\bigcup_{j=1}^{s} \Lambda(r_{1},\dots, r_{p+1})\eta_{j})$
\noindent for all sufficiently large $(r_{1},\dots,r_{p+1})\in {\bf N}^{p+1}$ (it means that there exist nonnegative integers\, $s_{1},\dots,s_{p+1}$ such that the last equality holds for all $(r_{1},\dots, r_{p+1})\in {\bf N}^{p+1}$ with $r_{1}\geq s_{1}, \dots, r_{p+1}\geq s_{p+1}$);
{\em (ii)} \, $deg_{t_{i}}\Phi_{\eta} \leq m_{i}$ ($1\leq i\leq p$)\,\,\, and $deg_{t_{p+1}}\Phi_{\eta} \leq n$, so that $deg\,\Phi_{\eta}\leq m+n$ and $\Phi_{\eta}(t_{1},\dots, t_{p+1})$ can be represented as
$$\Phi_{\eta}(t_{1},\dots, t_{p+1}) = \D\sum_{i_{1}=0}^{m_{1}}\dots \D\sum_{i_{p}=0}^{m_{p}}\D\sum_{i_{p+1}=0}^{n}a_{i_{1}\dots i_{p+1}}
{t_{1}+i_{1}\choose i_{1}}\dots {t_{p+1}+i_{p+1}\choose i_{p+1}}$$
where $a_{i_{1}\dots i_{p+1}}\in {\bf Z}$ and $2^{n}\,|\,a_{m_{1}\dots m_{p}n}$.
{\em (iii)} \,Let $E_{\eta} = \{(i_{1},\dots, i_{p+1})\in {\bf N}^{p+1}\,|\,0\leq i_{k}\leq m_{k}$ for $k=1,\dots, p$, $0\leq i_{p+1}\leq n$,
and $a_{i_{1}\dots i_{p+1}}\neq 0\}$. Then $d = deg\,\Phi_{\eta}$, $a_{m_{1}\dots m_{p}n}$, the elements $(k_{1},\dots, k_{p+1})\in E_{\eta}'$, the corresponding coefficients $a_{k_{1}\dots k_{p+1}}$, and the coefficients of the terms of total degree $d$ do not depend on the choice of the
system of $\Delta$-$\sigma$-generators $\eta$.
\end{theorem}
\begin{definition}
The polynomial $\Phi_{\eta}(t_{1},\dots, t_{p+1})$ is said to be the difference-differential {\em (or $\Delta$-$\sigma$-)} dimension polynomial of the $\Delta$-$\sigma$-field extension $L/K$ associated with the set of $\Delta$-$\sigma$-generators $\eta$ and partition {\em (3.1)} of the basic set of derivations.
\end{definition}
The $\Delta$-$\sigma$-dimension polynomial associated with partition (3.1) has the following interpretation
as the strength of a system of difference-differential equations.
Let us consider a system of partial difference-differential equations
\begin{equation}
A_{i}(f_{1},\dots, f_{s}) = 0\hspace{0.3in}(i=1,\dots, q)
\end{equation}
over a field of functions of $m$ real variables $x_{1},\dots, x_{m}$ ($f_{1},\dots, f_{s}$ are unknown functions of $x_{1},\dots, x_{m}$). Suppose that $\Delta = \{\delta_{1},\dots, \delta_{m}\}$ where $\delta_{i}$ is the partial differentiation $\partial/\partial x_{i}$ ($i=1,\dots, m$) and the basic set of automorphisms $\sigma = \{\alpha_{1},\dots, \alpha_{m}\}$ consists of $m$ shifts of arguments, $f(x_{1},\dots, x_{m})\mapsto f(x_{1},\dots, x_{i-1}, x_{i}+h_{i}, x_{i+1},\dots, x_{m})$ ($1\leq i\leq m$, $h_{1},\dots, h_{m}\in {\bf R}$). Thus, we assume that the left-hand sides of the equations in (3.2) contain unknown functions $f_{i}$, their partial derivatives, their images under the shifts $\alpha_{j}$, and various compositions of such shifts and partial derivations. Furthermore, we suppose that system (3.2) is algebraic, that is, all $A_{i}(y_{1},\dots, y_{s})$
are elements of a ring of $\Delta$-$\sigma$-polynomials $K\{y_{1},\dots, y_{s}\}$ with coefficients in some functional $\Delta$-$\sigma$-field $K$.
Let us consider a grid with equal cells of dimension $h_{1}\times\dots\times h_{m}$ that fills the whole space ${\bf R}^{m}$. We fix some node $\mathcal{P}$ and say that {\em a node $\mathcal{Q}$ has order $i$} if the shortest path from $\mathcal{P}$ to $\mathcal{Q}$ along the edges of the grid consists of $i$ steps (by a step we mean a path from a node
of the grid to a neighboring node along the edge between them). We also fix partition (3.1) of the set of basic derivations $\Delta$ (such a partition can be, for example, a natural separation of (all or some) derivations with respect to coordinates and the derivation with respect to time).
For any $r_{1},\dots, r_{p+1}\in {\bf N}$, let us consider the values of the unknown functions $f_{1},\dots, f_{s}$ and their partial derivatives, whose order with respect to $\Delta_{i}$ does not exceed $r_{i}$ ($1\leq i\leq p$), at the nodes whose order does not exceed $r_{p+1}$. If $f_{1},\dots, f_{s}$ should not satisfy any system of equations (or any other condition), these values can be chosen arbitrarily. Because of the system (and equations obtained from the equations of the system by partial differentiations and transformations of the form $f_{j}(x_{1},\dots, x_{m})\mapsto f_{j}(x_{1}+k_{1}h_{1},\dots, x_{m}+k_{m}h_{m})$ with $k_{1},\dots, k_{m}\in {\bf Z}$, $1\leq j\leq s$), the number of independent values of the functions $f_{1},\dots, f_{s}$ and their partial derivatives whose $i$th order does not exceed $r_{i}$ ($1\leq i\leq p$) at the nodes of order $\leq r_{p+1}$ decreases. This number, which is a function of $p+1$ variables $r_{1},\dots, r_{p+1}$, is the ``measure of strength'' of the system in the sense of A. Einstein. We denote it by $S_{r_{1},\dots, r_{p+1}}$.
Suppose that the $\Delta$-$\sigma$-ideal $J$ generated in the ring $K\{y_{1},\dots, y_{s}\}$ by the $\Delta$-$\sigma$-polynomials $A_{1},\dots, A_{q}$ is prime (e.g., if these polynomials are linear). Then the field of fractions $L$ of the $\Delta$-$\sigma$-integral domain $K\{y_{1},\dots, y_{s}\}/J$ has a natural structure of a $\Delta$-$\sigma$-field extension of $K$ generated by the finite set $\eta = \{\eta_{1},\dots, \eta_{s}\}$ where $\eta_{i}$ is the canonical image of $y_{i}$ in $K\{y_{1},\dots, y_{s}\}/J$ ($1\leq i\leq s$). It is easy to see that the $\Delta$-$\sigma$-dimension polynomial $\Phi_{\eta}(t_{1},\dots, t_{p+1})$ of the extension $L/K$ associated with the system of $\Delta$-$\sigma$-generators $\eta$ has the property that $\Phi_{\eta}(r_{1},\dots, r_{p+1}) = S_{r_{1},\dots, r_{p+1}}$ for all sufficiently large $(r_{1},\dots, r_{p+1})\in {\bf N}^{p+1}$, so this dimension polynomial is the measure of strength of the system of difference-differential equations (3.2) in the sense of A. Einstein.
\section{Numerical polynomials of subsets of ${\bf N}^{m}\times {\bf Z}^{n}$}
\setcounter{equation}{0}
\begin{definition}
A polynomial $f(t_{1}, \dots,t_{p})$ in $p$ variables $t_{1},\dots, t_{p}$
($p\geq 1$) with rational coefficients is called
{\em numerical\/} if $f(r_{1},\dots, r_{p})\in {\bf Z}$
for all sufficiently large $(r_{1}, \dots, r_{p})\in{\bf Z}^{p}$.
\end{definition}
Of course, every polynomial with integer coefficients is numerical. As an example of a numerical polynomial in $p$ variables with noninteger coefficients ($p\geq 1$) one can consider $\prod_{i=1}^{p}{t_{i}\choose m_{i}}$ \, where $m_{1},\dots, m_{p}\in{\bf N}$. (As usual, ${t\choose k}$ ($k\geq 1$) denotes the polynomial $\frac{t(t-1)\dots (t-k+1)}{k!}$, ${t\choose0} = 1$, and ${t\choose k} = 0$ if $k < 0$.)
\smallskip
The following theorem proved in ~\cite[Chapter 2]{KLMP} gives the ``canonical'' representation of a numerical polynomial in several variables.
\begin{theorem}
Let $f(t_{1},\dots, t_{p})$ be a numerical polynomial in $p$ variables and let $deg_{t_{i}}\, f = m_{i}$ ($m_{1},\dots, m_{p}\in{\bf N}$). Then $f(t_{1},\dots, t_{p})$ can be represented as
\begin{equation}
f(t_{1},\dots t_{p}) =\D\sum_{i_{1}=0}^{m_{1}}\dots \D\sum_{i_{p}=0}^{m_{p}}
{a_{i_{1}\dots i_{p}}}{t_{1}+i_{1}\choose i_{1}}\dots{t_{p}+i_{p}
\choose i_{p}}
\end{equation}
with uniquely defined integer coefficients $a_{i_{1}\dots i_{p}}$.
\end{theorem}
In what follows, we deal with subsets of
${\bf N}^{m}\times {\bf Z}^{n}$ ($m, n\geq 1$) and a fixed partition of the set ${\bf N}_{m} = \{1,\dots , m\}$ into $p$ disjoint subsets ($p\geq 1$):
\begin{equation}
{\bf N}_{m} = N_{1}\bigcup\dots\bigcup N_{p}
\end{equation}
where $N_{1} = \{1,\dots, m_{1}\}$,\dots,
$N_{p} = \{m_{1}+\dots + m_{p-1}+1,\dots, m\}$ \, ($m_{1}+\dots + m_{p} = m$).
If $a = (a_{1},\dots, a_{m+n})\in {\bf N}^{m}\times {\bf Z}^{n}$ we denote the numbers $\sum_{i=1}^{m_{1}}a_{i}$,
$\sum_{i=m_{1}+1}^{m_{1}+m_{2}}a_{i},\dots, \sum_{i=m_{1}+\dots + m_{p-1} + 1}^{m}a_{i}$,
$\sum_{i=m+1}^{m+n}|a_{i}|$ by $ord_{1}a,\dots, ord_{p+1}a$,
\smallskip
\noindent respectively. Furthermore, we consider the set ${\bf Z}^{n}$ as a union
\begin{equation}
{\bf Z}^{n} = \bigcup_{1\leq j\leq 2^{n}}{\bf Z}_{j}^{(n)}
\end{equation}
where
${\bf Z}_{1}^{(n)}, \dots, {\bf Z}_{2^{n}}^{(n)}$ are all different Cartesian products of $n$ sets each of which is either
${\bf N}$ or ${\bf Z_{-}}=\{a\in {\bf Z}|a\leq 0\}$. We assume that ${\bf Z}_{1}^{(n)} = {\bf N}^{n}$ and call ${\bf Z}_{j}^{(n)}$ the {\em $j$th orthant} of the set ${\bf Z}^{n}$ ($1\leq j\leq 2^{n}$). The set ${\bf N}^{m}\times {\bf Z}^{n}$ is considered as a partially
ordered set with the order $\unlhd$ such that $(e_{1},\dots, e_{m}, f_{1},\dots, f_{n})\unlhd (e'_{1},\dots, e'_{m}, f'_{1},\dots, f'_{n})$ if and only if $(f_{1},\dots, f_{n})$ and $(f'_{1},\dots, f'_{n})$ belong to the same orthant ${\bf Z}_{k}^{(n)}$ and the
$(m+n)$-tuple $(e_{1},\dots, e_{m}, |f_{1}|,\dots, |f_{n}|)$ is less than $(e'_{1},\dots, e'_{m}, |f'_{1}|,\dots, |f'_{n}|)$ with respect
to the product order on ${\bf N}^{m+n}$.
In what follows, for any set $A\subseteq {\bf N}^{m}\times {\bf Z}^{n}$, $W_{A}$ will denote the set of all elements of ${\bf N}^{m}\times {\bf Z}^{n}$ that do not exceed any element of $A$ with respect to the order $\unlhd$. (Thus, $w\in W_{A}$ if and only if there is no $a\in A$ such that $a\unlhd w$.) Furthermore, for any $r_{1},\dots r_{p+1}\in {\bf N}$, $A(r_{1},\dots r_{p+1})$ will denote the set of all elements $x = (x_{1},\dots, x_{m}, x'_{1},\dots, x'_{n})\in A$ such that $ord_{i}x\leq r_{i}$ ($i=1,\dots, p+1$).
\smallskip
The above notation can be naturally restricted to subsets of ${\bf N}^{m}$. If $E\subseteq {\bf N}^{m}$ and $s_{1},\dots, s_{p}\in {\bf N}$, then $E(s_{1},\dots, s_{p})$ will denote the set $\{e = (e_{1},\dots, e_{m})\in E\,|\,ord_{i}(e_{1},\dots, e_{m},0,\dots, 0)\leq s_{i}$ for $i=1,\dots, p\}$ (here $(e_{1},\dots, e_{m},0,\dots, 0)$ ends with $n$ zeros; it is treated as a point in ${\bf N}^{m}\times {\bf Z}^{n}$). Furthermore, $V_{E}$ will denote the set of all $m$-tuples $v = (v_{1},\dots , v_{m})\in {\bf N}^{m}$ that are not greater than or equal to any $m$-tuple from $E$ with respect to the product order on ${\bf N}^{m}$. (Recall that the product order on ${\bf N}^{m}$ is the partial order $\leq_P$ on ${\bf N}^{m}$ such that $c =(c_{1}, \dots , c_{m})\leq_{P} c' =(c'_{1}, \dots , c'_{m})$ if and only if $c_{i}\leq c'_{i}$ for all $i=1, \dots , m$. If $c\leq_{P} c'$ and $c\neq c'$, we write $c<_{P} c'$.) Clearly, $v=(v_{1}, \dots , v_{m})\in V_{E}$ if and only if for any element $(e_{1},\dots , e_{m})\in E$ there exists $i\in {\bf N}$, $1\leq i\leq m$, such that $e_{i} > v_{i}$.
The following two theorems proved in ~\cite[Chapter 2]{KLMP} generalize the well-known Kolchin's result on the numerical polynomials associated with subsets of ${\bf N}^{m}$ (see ~\cite[Chapter 0, Lemma 16]{K2}) and give an explicit formula for the numerical polynomials in $p$ variables associated with a finite subset of ${\bf N}^{m}$.
\begin{theorem}
Let $E$ be a subset of ${\bf N}^{m}$ where $m = m_{1} + \dots + m_{p}$ for some nonnegative integers $m_{1},\dots, m_{p}$ ($p\geq 1$). Then there exists a numerical polynomial $\omega_{E}(t_{1},\dots, t_{p})$ with the following properties:
{\em (i)} \, $\omega_{E}(r_{1},\dots, r_{p}) = Card\,V_{E}(r_{1},\dots, r_{p})$ for all sufficiently large $(r_{1},\dots, r_{p})\in {\bf N}^{p}$.
{\em (As usual, $Card \, M$ denotes the number of elements of a finite set $M$).}
{\em (ii)} \, $deg_{t_{i}}\omega_{E}\leq m_{i}$ for all $i = 1,\dots, p$.
{\em (iii)} \, $deg\,\omega_{E} = m$ if and only if $E=\emptyset$, in which case $\omega_{E}(t_{1},\dots, t_{p}) = \D\prod_{i=1}^{p}{t_{i}+m_{i}\choose m_{i}}$.
\end{theorem}
\begin{definition}
The polynomial $\omega_{E}(t_{1},\dots, t_{p})$ is called the dimension polynomial of the set
$E\subseteq {\bf N}^{m}$ associated with the partition
$(m_{1},\dots, m_{p})$ of $m$.
\end{definition}
\begin{theorem}
Let $E = \{e_{1}, \dots, e_{q}\}$ ($q\geq 1$) be a finite subset of ${\bf N}^{m}$
and let a partition {\em (4.2)} of the set ${\bf N}_{m}$ into $p$ disjoint subsets $N_{1},\dots, N_{p}$ be fixed.
Let $e_{i} = (e_{i1}, \dots, e_{im})$ \, ($1\leq i\leq q$)
and for any $l\in {\bf N}$, $0\leq l\leq q$, let $\Gamma (l,q)$
denote the set of all $l$-element subsets of the set
${\bf N}_{q} = \{1,\dots, q\}$. Furthermore, for any
$\sigma \in \Gamma (l,q)$, let $\bar{e}_{\emptyset j} = 0$, $\bar{e}_{\sigma j} = \max \{e_{ij} |
i\in \sigma\}$ if $\sigma\neq \emptyset$ ($1\leq j\leq m$), and
$b_{\sigma k} =\D\sum_{h\in N_{k}}\bar{e}_{\sigma h}$ ($k = 1,\dots, p$).
Then \begin{equation} \omega_{E}(t_{1},\dots, t_{p}) =
\D\sum_{l=0}^{q}(-1)^{l}\D\sum_{\sigma \in \Gamma (l,\, q)}
\D\prod_{j=1}^{p}{t_{j}+m_{j} - b_{\sigma j}\choose m_{j}}
\end{equation}
\end{theorem}
{\bf Remark.} \, It is clear that if $E\subseteq {\bf N}^{m}$ and $E^{\ast}$ is the set of all minimal elements of the set $E$ with respect to the product order on ${\bf N}^{m}$, then the set $E^{\ast}$ is finite and $\omega_{E}(t_{1}, \dots, t_{p}) = \omega_{E^{\ast}}(t_{1}, \dots, t_{p})$. Thus, Theorem 4.5 gives an algorithm that allows one to find a
numerical polynomial associated with any subset of ${\bf N}^{m}$ (and with a given partition of the set $\{1,\dots, m\}$): one should first find the set of all minimal points of the subset and then apply Theorem 4.5.
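The algorithm of the remark (pass to the set of minimal points, then apply Theorem 4.5) can be sketched in a few lines of Python. This is only a sketch under our assumptions: we take $V_{E}(r_{1},\dots, r_{p})$ to be the set of $m$-tuples in ${\bf N}^{m}$ that exceed no element of $E$ with respect to the product order and whose coordinate sums over each block $N_{i}$ are at most $r_{i}$, as in the definition earlier in the paper; the function names are ours.

```python
from itertools import combinations, product
from math import factorial

def binom(x, m):
    """Numerical-polynomial binomial C(x, m) = x(x-1)...(x-m+1)/m! for any integer x."""
    num = 1
    for i in range(m):
        num *= x - i
    return num // factorial(m)

def minimal_points(E):
    """E* of the remark: the elements of E minimal with respect to the product order."""
    return [e for e in E
            if not any(f != e and all(f[j] <= e[j] for j in range(len(e))) for f in E)]

def omega(E, blocks, r):
    """Value at r = (r_1,...,r_p) of the inclusion-exclusion formula of Theorem 4.5.
    blocks = [N_1,...,N_p] as lists of 0-based coordinate indices of N^m."""
    E = minimal_points(E)
    q = len(E)
    m = sum(len(N) for N in blocks)
    total = 0
    for l in range(q + 1):
        for sigma in combinations(range(q), l):
            # bar_e_{sigma j}: coordinatewise max over the chosen points (0 for sigma empty)
            bar_e = [max((E[i][j] for i in sigma), default=0) for j in range(m)]
            term = 1
            for k, N in enumerate(blocks):
                b = sum(bar_e[h] for h in N)       # b_{sigma k}
                term *= binom(r[k] + len(N) - b, len(N))
            total += (-1) ** l * term
    return total

def card_V(E, blocks, r):
    """Brute-force Card V_E(r_1,...,r_p) under the assumed definition of V_E."""
    m = sum(len(N) for N in blocks)
    bound = [0] * m
    for k, N in enumerate(blocks):
        for h in N:
            bound[h] = r[k]
    count = 0
    for v in product(*(range(b + 1) for b in bound)):
        if any(sum(v[h] for h in N) > r[k] for k, N in enumerate(blocks)):
            continue
        if any(all(v[j] >= e[j] for j in range(m)) for e in E):
            continue
        count += 1
    return count
```

For instance, with $m=2$, $p=2$, $N_{1}=\{1\}$, $N_{2}=\{2\}$ and $E=\{(1,1)\}$, both functions return $r_{1}+r_{2}+1$, in agreement with the polynomial $\omega_{E}(t_{1},t_{2})=t_{1}+t_{2}+1$ produced by the formula.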
\medskip
The following result can be obtained precisely in the same way as Theorem 3.4 of ~\cite{Levin2} (the only difference is that the proof in the mentioned paper uses Theorem 3.2 of ~\cite{Levin2} in the case $p=2$, while the proof of the theorem below should refer to Theorem 3.2 of ~\cite{Levin2} with an arbitrary positive integer $p$).
\begin{theorem}
Let $A\subseteq {\bf N}^{m}\times{\bf Z}^{n}$ and let partition {\em (4.2)} of ${\bf N}_{m}$ be fixed. Then there exists a numerical polynomial $\phi_{A}(t_{1},\dots, t_{p+1})$ in $p+1$ variables such that
{\em (i)}\, $\phi_{A}(r_{1},\dots, r_{p+1}) = Card\,W_{A}(r_{1},\dots, r_{p+1})$ for all sufficiently large
\noindent$(r_{1},\dots, r_{p+1})\in {\bf N}^{p+1}$.
\medskip
{\em (ii)}\, $deg_{t_{i}}\phi_{A}\leq m_{i}$ ($1\leq i\leq p$), $deg_{t_{p+1}}\phi_{A}\leq n$ and the coefficient of $t_{1}^{m_{1}}\dots t_{p}^{m_{p}}t_{p+1}^{n}$ in $\phi_{A}$ is of the form $\D\frac{2^{n}a}{m_{1}!\dots m_{p}!n!}$\, with $a\in {\bf Z}$.
\medskip
{\em (iii)}\, Let us consider a mapping $\rho: {\bf N}^{m}\times {\bf Z}^{n}\longrightarrow{\bf N}^{m+2n}$
such that
\smallskip
$\rho((e_{1},\dots, e_{m+n})) =(e_{1},\dots, e_{m}, \max \{e_{m+1}, 0\}, \dots, \max \{e_{m+n}, 0\},$
$\max \{-e_{m+1}, 0\}, \dots, \max \{-e_{m+n}, 0\})$.
\smallskip
Let $B = \rho(A)\bigcup \{\bar{e}_{1},\dots, \bar{e}_{n}\}$ where $\bar{e}_{i}$
($1\leq i\leq n$) is an $(m+2n)$-tuple in ${\bf N}^{m+2n}$ whose
$(m+i)$th and $(m+n+i)$th coordinates are equal to 1 and all other
coordinates are equal to 0.
Then $\phi_{A}(t_{1}, \dots, t_{p+1}) = \omega_{B}(t_{1}, \dots, t_{p+1})$ where
$\omega_{B}(t_{1},\dots, t_{p+1})$ is the dimension polynomial of the set $B$
(see {\em Definition 4.4}) associated with the partition ${\bf N}_{m+2n} =
\{1,\dots , m_{1}\}\bigcup \{m_{1}+1,\dots , m_{1}+m_{2}\}\bigcup\dots
\bigcup \{m_{1}+\dots +m_{p-1}+1,\dots , m\}\bigcup \{m+1,\dots, m+2n\}$.
\smallskip
{\em (iv)}\, If $A = \emptyset$, then
\begin{equation}\phi_{A}(t_{1}, \dots, t_{p+1}) ={{t_{1}+m_{1}}\choose m_{1}}\dots {{t_{p}+m_{p}}\choose m_{p}}
\sum_{i=0}^{n}(-1)^{n-i}2^{i}{n\choose i}{{t_{p+1}+i}\choose i}.\end{equation}
\end{theorem}
The polynomial $\phi_{A}(t_{1},\dots, t_{p+1})$ is called the {\em dimension polynomial} of the
set $A\subseteq {\bf N}^{m}\times{\bf Z}^{n}$ associated with partition (4.2) of ${\bf N}_{m}$.
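A quick numerical check of formula (4.4) is possible: for $A = \emptyset$, $Card\,W_{A}(r_{1},\dots, r_{p+1})$ factors as $\prod_{i=1}^{p}{r_{i}+m_{i}\choose m_{i}}$ times the number of points $z\in {\bf Z}^{n}$ with $\sum_{j}|z_{j}|\leq r_{p+1}$, and the alternating sum in (4.4) is exactly that last count. The following sketch (our function names; we assume the bound on the ${\bf Z}^{n}$-part of $W_{A}$ is $\sum_{j}|z_{j}|\leq r_{p+1}$, as in the definition of $W_{A}$ earlier in the paper) compares the closed form against a brute-force count.

```python
from itertools import product
from math import comb

def l1_ball_count(n, s):
    """Brute-force number of z in Z^n with |z_1| + ... + |z_n| <= s."""
    return sum(1 for z in product(range(-s, s + 1), repeat=n)
               if sum(abs(c) for c in z) <= s)

def l1_ball_poly(n, s):
    """The Z^n factor of formula (4.4): sum_{i=0}^{n} (-1)^(n-i) 2^i C(n,i) C(s+i,i)."""
    return sum((-1) ** (n - i) * 2 ** i * comb(n, i) * comb(s + i, i)
               for i in range(n + 1))
```

For example, $n=1$ gives $2s+1$ and $n=2$, $s=1$ gives $5$, matching the direct counts.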
\section{Proof of the main theorem and computation of difference-differential dimension
polynomials via characteristic sets}
\setcounter{equation}{0}
In this section we prove Theorem 3.1 and give a method of computation of difference-differential
dimension polynomials of $\Delta$-$\sigma$-field extensions based on
constructing a characteristic set of the defining prime $\Delta$-$\sigma$-ideal of the
extension.
In what follows we use the conventions of section 3. In particular, we assume that partition (3.1) of the set of basic derivations $\Delta = \{\delta_{1},\dots, \delta_{m}\}$ is fixed.
\medskip
Let us consider $p+1$ total orderings $<_{1}, \dots, <_{p}, <_{\sigma}$ of the
set of power products $\Lambda$ such that
\medskip
\noindent$\lambda = \delta_{1}^{k_{1}}\dots \delta_{m}^{k_{m}}
\alpha_{1}^{l_{1}}\dots \alpha_{n}^{l_{n}}
<_{i} \lambda' = \delta_{1}^{k'_{1}}\dots
\delta_{m}^{k'_{m}}\alpha_{1}^{l'_{1}}\dots\alpha_{n}^{l'_{n}}$ ($1\leq i\leq p$) if and only if
\medskip
\noindent $(ord_{i}\lambda, ord\,\lambda, ord_{1}\lambda,\dots, ord_{i-1}\lambda, ord_{i+1}\lambda, \dots, ord_{p}\lambda,
ord_{\sigma}\lambda, k_{m_{1}+\dots + m_{i-1}+1},\dots,$
\medskip
\noindent$k_{m_{1}+\dots +m_{i}},\, k_{1},\dots, k_{m_{1}+\dots +
m_{i-1}}, k_{m_{1}+\dots + m_{i}+1},\dots, k_{m}, |l_{1}|,\dots, |l_{n}|, l_{1},\dots, l_{n})$ is
\medskip
\noindent less than
$(ord_{i}\lambda', ord\,\lambda', ord_{1}\lambda',\dots, ord_{i-1}\lambda', ord_{i+1}\lambda', \dots, ord_{p}\lambda', ord_{\sigma}\lambda',$
\medskip
\noindent$ k'_{m_{1}+\dots + m_{i-1}+1},\dots, k'_{m_{1}+\dots +m_{i}},\, k'_{1},\dots, k'_{m_{1}+\dots +
m_{i-1}}, k'_{m_{1}+\dots + m_{i}+1},\dots,$
\medskip
\noindent$k'_{m}, |l'_{1}|,\dots, |l'_{n}|, l'_{1},\dots, l'_{n})$ with respect to the lexicographic order on ${\bf N}^{m+2n+p+2}$.
\medskip
Similarly, $\lambda <_{\sigma} \lambda'$ if and only if $(ord_{\sigma}\lambda, ord\,\lambda, ord_{1}\lambda,\dots, ord_{p}\lambda,
|l_{1}|,\dots, |l_{n}|,$
\medskip
\noindent$l_{1},\dots, l_{n}, k_{1},\dots, k_{m})$ is less than the corresponding $(m+2n+p+2)$-tuple for $\lambda'$
with respect to the lexicographic order on ${\bf N}^{m+2n+p+2}$.
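The comparison tuples defining $<_{i}$ and $<_{\sigma}$ are easy to get wrong by hand, so it may help to see them as explicit sort keys. The sketch below represents a power product by its exponent vectors $k\in {\bf N}^{m}$ and $l\in {\bf Z}^{n}$ and assumes the order functions defined earlier in the paper, namely $ord_{i}\lambda = \sum_{h\in N_{i}}k_{h}$, $ord_{\sigma}\lambda = \sum_{j}|l_{j}|$, and $ord\,\lambda$ their total; the function names are ours.

```python
def orders(k, l, blocks):
    """(list of ord_i, ord_sigma, ord) for exponent vectors k, l;
    blocks = [N_1,...,N_p] as lists of 0-based indices into k."""
    ords = [sum(k[h] for h in N) for N in blocks]
    osig = sum(abs(x) for x in l)
    return ords, osig, sum(ords) + osig

def key_sigma(k, l, blocks):
    """lambda <_sigma lambda'  iff  key_sigma(lambda) < key_sigma(lambda')
    under Python's lexicographic tuple comparison."""
    ords, osig, total = orders(k, l, blocks)
    return (osig, total, *ords, *[abs(x) for x in l], *l, *k)

def key_i(i, k, l, blocks):
    """lambda <_i lambda'  iff  key_i(i, ...) < key_i(i, ...)  (i is 1-based);
    block i's orders and exponents are promoted to the front, as in the text."""
    ords, osig, total = orders(k, l, blocks)
    Ni = blocks[i - 1]
    k_block = [k[h] for h in Ni]
    k_rest = [k[h] for h in range(len(k)) if h not in Ni]
    return (ords[i - 1], total, *ords[:i - 1], *ords[i:], osig,
            *k_block, *k_rest, *[abs(x) for x in l], *l)
```

Both keys are $(m+2n+p+2)$-tuples, matching the description above; comparing keys with `<` realizes the lexicographic comparison.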
\medskip
Two elements $\lambda_{1} = \delta_{1}^{k_{1}}\dots \delta_{m}^{k_{m}}
\alpha_{1}^{l_{1}}\dots \alpha_{n}^{l_{n}}$ and
$\lambda_{2} = \delta_{1}^{r_{1}}\dots \delta_{m}^{r_{m}}\alpha_{1}^{s_{1}}
\dots \alpha_{n}^{s_{n}}$ in $\Lambda$ are called {\em similar\/}, if
the $n$-tuples $(l_{1}, \dots, l_{n})$ and $(s_{1}, \dots, s_{n})$
belong to the same orthant of ${\bf Z}^{n}$ (see (4.3)\,). In this case we
write $\lambda_{1}\sim \lambda_{2}$. We say that $\lambda_{1}$ {\em divides} $\lambda_{2}$
(or $\lambda_{2}$ is a {\em multiple} of $\lambda_{1}$) and write $\lambda_{1}|\lambda_{2}$
if $\lambda_{1}\sim \lambda_{2}$ and there exists
$\lambda \in \Lambda$ such that $\lambda\sim \lambda_{1}, \,\lambda\sim \lambda_{2}$ and $\lambda_{2} = \lambda\lambda_{1}$.
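The similarity and divisibility relations on $\Lambda$ depend only on the exponent vector $(k_{1},\dots, k_{m}, l_{1},\dots, l_{n})\in {\bf N}^{m}\times {\bf Z}^{n}$ and can be tested mechanically. The sketch below encodes our reading of (4.3): two tuples lie in a common closed orthant of ${\bf Z}^{n}$ exactly when no coordinate takes strictly opposite signs; under that reading, $\lambda_{1}|\lambda_{2}$ amounts to the $k$-exponents growing coordinatewise while each $l$-exponent moves away from $0$ within the common orthant.

```python
def similar(l1, l2):
    """lambda_1 ~ lambda_2: the Z^n-parts lie in a common (closed) orthant,
    i.e. no coordinate has strictly opposite signs (our reading of (4.3))."""
    return all(a * b >= 0 for a, b in zip(l1, l2))

def divides(kl1, kl2, m):
    """lambda_1 | lambda_2 for combined exponent tuples (k_1..k_m, l_1..l_n):
    similar, with lambda = lambda_2/lambda_1 similar to both factors, i.e.
    k grows coordinatewise and each |l_j| grows without a sign change."""
    k1, l1 = kl1[:m], kl1[m:]
    k2, l2 = kl2[:m], kl2[m:]
    if not similar(l1, l2):
        return False
    return (all(a <= b for a, b in zip(k1, k2))
            and all(abs(a) <= abs(b) for a, b in zip(l1, l2)))
```

For instance, with $m=2$, $n=1$: $\delta_{1}\alpha^{-2}$ divides $\delta_{1}\delta_{2}\alpha^{-3}$, but not $\delta_{1}\delta_{2}\alpha^{-1}$ (the $\alpha$-exponent shrinks) nor $\delta_{1}\delta_{2}\alpha^{3}$ (opposite orthants).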
\smallskip
Let $K$ be a difference-differential field ($Char\,K = 0$) with the basic sets $\Delta$ and $\sigma$ described above
(and partition (3.1) of the set $\Delta$). Let $K\{y_{1},\dots, y_{s}\}$ be the ring of
$\Delta$-$\sigma$-polynomials over $K$ and let $\Lambda Y$ denote
the set of all elements $\lambda y_{i}$ ($\lambda\in \Lambda$, $1\leq i\leq s$) called {\em terms}.
Note that as a ring, $K\{y_{1},\dots, y_{s}\} = K[\Lambda Y]$. Two terms $u=\lambda y_{i}$ and $v=\lambda' y_{j}$
are called {\em similar} if $\lambda$ and $\lambda'$ are similar; in this case we write $u\sim v$.
If $u = \lambda y_{i}$ is a term and $\lambda'\in \Lambda$, we say that $u$ is similar to $\lambda'$ and write
$u\sim \lambda'$ if $\lambda\sim \lambda'$. Furthermore, if $u, v\in \Lambda Y$, we say that $u$ {\em divides} $v$ or
{\em $v$ is a multiple of $u$}, if $u=\lambda' y_{i}$, $v=\lambda'' y_{i}$ for some $y_{i}$ and $\lambda'|\lambda''$.
(If $\lambda'' = \lambda\lambda'$ for some $\lambda\in \Lambda, \,\lambda\sim \lambda'$, we write $\D\frac{v}{u}$ for $\lambda$.)
\smallskip
Let us consider $p+1$ orders $<_{1},\dots, <_{p}, <_{\sigma}$ on the set $\Lambda Y$ that
correspond to the orders on the semigroup $\Lambda$ (we use the same symbols for the orders on $\Lambda$
and $\Lambda Y$). These orders are defined as follows: $\lambda y_{j} <_{i}$ (or $<_{\sigma}$) $\lambda' y_{k}$ if and only if
$\lambda <_{i}$ (respectively, $<_{\sigma}$) $\lambda'$ in $\Lambda$ or $\lambda = \lambda'$ and $j < k$ ($1\leq i\leq p,\, 1\leq j, k\leq s$).
\smallskip
The order of a term $u = \lambda y_{k}$ and its orders with respect to the sets $\Delta_{i}$ ($1\leq i\leq p$) and $\sigma$ are defined as the corresponding orders of $\lambda$ (we use the same notation $ord\,u$, $ord_{i}u$, and $ord_{\sigma}u$ for the corresponding orders).
\smallskip
If $A\in K\{y_{1},\dots, y_{s}\}\setminus K$ and $1\leq k\leq p$, then the term of $A$ that is highest with respect to $<_{k}$ is called the {\em $k$-leader} of $A$. It is denoted by $u_{A}^{(k)}$. The highest term of $A$ with respect to $<_{\sigma}$ is called the {\em $\sigma$-leader} of $A$; it is denoted by $v_{A}$. If $A$ is written as a polynomial in $v_{A}$, $A = I_{d}{(v_{A})}^{d} + I_{d-1}{(v_{A})}^{d-1} + \dots + I_{0}$, where all terms of $I_{0},\dots, I_{d}$ are less than $v_{A}$ with respect to $<_{\sigma}$, then $I_{d}$ is called the {\em initial\/} of $A$. The partial derivative $\partial A/\partial v_{A} = dI_{d}(v_{A})^{d-1} + (d-1)I_{d-1}{(v_{A})}^{d-2} + \dots + I_{1}$ is called the {\em separant\/} of $A$. The initial and the separant of a $\Delta$-$\sigma$-polynomial $A$ are denoted by $I_{A}$ and $S_{A}$, respectively.
\smallskip
If $A, B\in K\{y_{1},\dots, y_{s}\}$, then $A$ is said to have lower rank than $B$ (we write $rk\,A < rk\,B$) if either $A\in K$, $B\notin K$, or $(v_{A}, deg_{v_{A}}A, ord_{1}u_{A}^{(1)},\dots, ord_{p}u_{A}^{(p)})$ is less than $(v_{B}, deg_{v_{B}}B, ord_{1}u_{B}^{(1)},\dots, ord_{p}u_{B}^{(p)})$ with respect to the lexicographic order ($v_{A}$ and $v_{B}$ are compared with respect to $<_{\sigma}$). If the vectors are equal (or $A, B\in K$) we say that $A$ and $B$ are of the same rank and write $rk\,A = rk\, B$.
\begin{definition}
If $A, B\in K\{y_{1},\dots, y_{s}\}$, then $B$ is said to be
reduced with respect to $A$ if
{\em (i)} $B$ does not contain terms $\lambda v_{A}$ such that $\lambda\sim v_{A}$,
$\lambda_{\Delta}\neq 1$, and $ord_{i}(\lambda u_{A}^{(i)})\leq ord_{i}u_{B}^{(i)}$
for $i=1,\dots, p$.
{\em (ii)} If $B$ contains a term $\lambda v_{A}$, where $\lambda\sim v_{A}$ and
$\lambda_{\Delta} = 1$, then either there exists
$j, \, 1\leq j\leq p$, such that $ord_{j} u_{B}^{(j)} < ord_{j}(\lambda u_{A}^{(j)})$
or $ord_{j}(\lambda u_{A}^{(j)}) \leq ord_{j} u_{B}^{(j)}$
for all $j = 1,\dots, p$ and $deg_{\lambda v_{A}}B < deg_{v_{A}}A$.
\end{definition}
If $B\in K\{y_{1},\dots, y_{s}\}$, then $B$ is said to be {\em reduced with respect to a set\, $\Sigma \subseteq K\{y_{1},\dots, y_{s}\}$} if $B$ is reduced with respect to every element of $\Sigma$.
\medskip
A set $\Sigma \subseteq K\{y_{1},\dots, y_{s}\}$ is called {\em autoreduced} if $\Sigma \bigcap K = \emptyset$ and every element of $\Sigma$ is reduced with respect to any other element of this set.
The proof of the following lemma can be found in ~\cite[Chapter 0, Section 17]{K2}.
\begin{lemma}
Let $A$ be any infinite subset of ${\bf N}^{m}\times{\bf N}_{n}$ ($m,n\in {\bf N}$, $n\geq 1$). Then there exists an infinite sequence of elements of $A$, strictly increasing relative to the product order, in which every element has the same projection on ${\bf N}_{n}$.
\end{lemma}
This lemma implies the following statement that will be used below.
\begin{lemma}
Let $S$ be any infinite set of terms $\lambda y_j$ ($\lambda\in \Lambda, 1\leq j\leq s$) in $K\{y_{1},\dots, y_{s}\}$. Then there exists an index $j$ ($1\leq j\leq s$) and an infinite sequence of terms $\lambda_{1}y_{j}, \lambda_{2}y_{j}, \dots, \lambda_{k}y_{j},\dots $ such that $\lambda_{k}|\lambda_{k+1}$ for every $k=1, 2, \dots $.
\end{lemma}
\begin{proposition}
Every autoreduced set is finite.
\end{proposition}
\begin{proof}
Suppose that $\Sigma$ is an infinite autoreduced subset of $K\{y_{1},\dots, y_{s}\}$. Then $\Sigma$ must contain an infinite set $\Sigma'\subseteq \Sigma$ such that all $\Delta$-$\sigma$-polynomials from $\Sigma'$ have different $\sigma$-leaders similar to each other. Indeed, if it is not so, then there exists an infinite set $\Sigma_{1}\subseteq \Sigma$ such that all $\Delta$-$\sigma$-polynomials from $\Sigma_{1}$ have the same $\sigma$-leader $v$. By Lemma 5.2, the infinite set $\{(ord_{1}u_{A}^{(1)}, \dots, ord_{p}u_{A}^{(p)}) | A\in \Sigma_{1}\}$ contains a nondecreasing infinite sequence $$(ord_{1}u_{A_{1}}^{(1)}, \dots, ord_{p}u_{A_{1}}^{(p)}) \leq_{P}
(ord_{1}u_{A_{2}}^{(1)}, \dots, ord_{p}u_{A_{2}}^{(p)}) \leq_{P} \dots $$ ($A_{1}, A_{2}, \dots \in \Sigma_{1}$ and $\leq_{P}$ denotes the product order on ${\bf N}^{p}$). Since the sequence $\{deg_{v_{A_{i}}}A_{i} | i = 1, 2, \dots \}$ cannot be strictly decreasing, there are two indices $i$ and $j$ such that $i < j$ and $deg_{v_{A_{i}}}A_{i} \leq deg_{v_{A_{j}}}A_{j}$. We see that $A_{j}$ is not reduced with respect to $A_{i}$, which contradicts the fact that $\Sigma$ is an autoreduced set.
Thus, we can assume that all $\Delta$-$\sigma$-polynomials of our infinite autoreduced set $\Sigma$ have distinct $\sigma$-leaders similar to each other. Using Lemma 5.3, we can assume that there exists an infinite sequence $B_{1}, B_{2}, \dots $ \, of elements of $\Sigma$ such that
$v_{B_{i}} | v_{B_{i+1}}$ and $\left(\frac{v_{B_{i+1}}}{v_{B_{i}}}\right)_{\Delta} \neq 1$ for all $i = 1, 2, \dots $.
Let $k_{ij} = ord_{j}v_{B_{i}}$ and $l_{ij} = ord_{j}u_{B_{i}}^{(j)}$ ($1\leq j\leq p$). Obviously, $l_{ij}\geq k_{ij}$ ($i = 1, 2,\dots ; j = 1,\dots, p$),
so that $\{(l_{i1}-k_{i1}, \dots, l_{ip}-k_{ip}) | i =1, 2, \dots \}\subseteq {\bf N}^{p}$. By Lemma 5.2, there exists an infinite sequence of indices $i_{1} < i_{2} < \dots $ such that $(l_{i_{1}1}-k_{i_{1}1},\dots, l_{i_{1}p}-k_{i_{1}p}) \leq_{P} (l_{i_{2}1}-k_{i_{2}1},\dots, l_{i_{2}p}-k_{i_{2}p}) \leq_{P} \dots $.
Then for any $j = 1,\dots, p$, we have $ord_{j}\,(\, \frac{v_{B_{i_{2}}}}{v_{B_{i_{1}}}}u_{B_{i_{1}}}^{(j)}) = k_{i_{2}j} - k_{i_{1}j} + l_{i_{1}j} \leq
k_{i_{2}j} + l_{i_{2}j} - k_{i_{2}j} = l_{i_{2}j} = ord_{j}u_{B_{i_{2}}}^{(j)}$, so that $B_{i_{2}}$ contains a term $\lambda v_{B_{i_{1}}} = v_{B_{i_{2}}}$ such that $\lambda_{\Delta}\neq 1$ and $ord_{j}(\lambda u_{B_{i_{1}}}^{(j)}) \leq ord_{j} u_{B_{i_{2}}}^{(j)}$ for $j = 1, \dots, p$. Thus, the $\Delta$-$\sigma$-polynomial $B_{i_{2}}$ is not reduced with respect to $B_{i_{1}}$, which contradicts the fact that $\Sigma$ is an autoreduced set.
\end{proof}
Throughout the rest of the paper, while considering autoreduced sets in the ring $K\{y_{1},\dots, y_{s}\}$ we always assume that their elements are arranged in order of increasing rank. (Therefore, if we consider an autoreduced set of $\Delta$-$\sigma$-polynomials $\Sigma = \{A_{1},\dots, A_{d}\}$, then $rk\,A_{1}< \dots < rk\,A_{d}$).
\begin{proposition} Let $\Sigma = \{A_{1},\dots, A_{d}\}$ be an autoreduced set in the ring $K\{y_{1},\dots, y_{s}\}$ and let $I_{k}$ and $S_{k}$ ($1\leq k\leq d$) denote the initial and separant of $A_{k}$, respectively. Furthermore, let $I(\Sigma) = \{X\in K\{y_{1},\dots, y_{s}\}\,|\,X=1$ or $X$ is a product of finitely many elements of the form $\gamma(I_{k})$ and $\gamma'(S_{k})$ where $\gamma, \gamma'\in \Lambda_{\sigma}\}$. Then for any $\Delta$-$\sigma$-polynomial $B$, there exist $B_{0}\in K\{y_{1},\dots, y_{s}\}$ and $J\in I(\Sigma)$
such that $B_{0}$ is reduced with respect to $\Sigma$ and $JB\equiv B_{0} \,(mod [\Sigma])$ (that is, $JB-B_{0}\in [\Sigma]$).
\end{proposition}
\begin{proof}
If $B$ is reduced with respect to $\Sigma$, the statement is obvious (one can set $B_{0}=B$). Suppose that $B$ is not reduced with respect to $\Sigma$.
Let $u^{(j)}_i$ and $v_i$ ($1\leq j\leq p,\, 1\leq i\leq d$) be the leaders of the element $A_i$ relative to the orders $<_{j}$ and $<_{\sigma}$, respectively.
In what follows, a term $w_H$ that appears in a $\Delta$-$\sigma$-polynomial $H\in K\{y_{1},\dots, y_{s}\}$ will be called a $\Sigma$-leader of $H$ if $w_H$ is the greatest (with respect to
$<_{\sigma}$) term among all terms $\lambda v_i$ ($\lambda\in\Lambda$, $1\leq i\leq d$) such that $\lambda\sim v_{i}$, $\lambda v_{i}$ appears in $H$ and either $\lambda_{\Delta}\neq 1$ and $ord_{j}(\lambda u^{(j)}_{i})\leq ord_{j}u^{(j)}_H$ for $j=1,\dots, p$, or $\lambda_{\Delta} = 1$, $ord_{j}(\lambda u^{(j)}_{i})\leq ord_{j}u^{(j)}_H$ ($1\leq j\leq p$),
and $deg_{v_{i}}A_{i} \leq deg_{\lambda v_{i}}H$.
Let $w_B$ be the $\Sigma$-leader of $B$. Then $B = B'w_{B}^{r} + B''$ where $B'$ does not contain $w_B$ and $deg_{w_{B}}B'' < r$. Let $w_{B}=\lambda v_i$ for some $i$ ($1\leq i\leq d$) and for some $\lambda \in \Lambda$, $\lambda\sim v_i$, such that $ord_{j}(\lambda u^{(j)}_{i})\leq ord_{j}u^{(j)}_B$ for $j=1,\dots, p$. Without loss of generality we may assume that $i$ corresponds to the maximum (with respect to $<_\sigma$) $\sigma$-leader $v_i$ in the set of all $\sigma$-leaders of elements of $\Sigma$.
Suppose, first, that $\lambda_{\Delta}\neq 1$ (and $ord_{j}(\lambda u^{(j)}_{i})\leq ord_{j}u^{(j)}_B$ for $j=1,\dots, p$).
Then $\lambda_{\Delta}A_{i} - S_{i}\lambda_{\Delta} v_{i}$ has lower rank than $\lambda_{\Delta} v_{i}$, hence
$T = \lambda A_{i} - \lambda_{\sigma}(S_{i})\lambda v_{i}$ has lower rank than $\lambda v_{i} = w_{B}$. Also,
$(\lambda_{\sigma}(S_{i}))^{r}B = (\lambda_{\sigma}(S_{i})\lambda v_{i})^{r}B' + (\lambda_{\sigma}(S_{i}))^{r}B'' =
(\lambda A_{i} - T)^{r}B' + (\lambda_{\sigma}(S_{i}))^{r}B''$. Setting $B^{(1)} = B'(-T)^{r} + (\lambda_{\sigma}(S_{i}))^{r}B''$ we
obtain that $B^{(1)}\equiv B\, (mod [\Sigma])$, $B^{(1)}$
does not contain any $\Sigma$-leader, which is greater than $w_{B}$ with respect to $<_{\sigma}$, and $deg_{w_{B}} B^{(1)} < r$.
Now let $\lambda_{\Delta} = 1$, $ord_{j}(\lambda u^{(j)}_{i})\leq ord_{j}u^{(j)}_B$ ($1\leq j\leq p$), and $r_{i} < r$ where $r_{i} = deg_{v_{i}}A_{i}$. Then the $\Delta$-$\sigma$-polynomial $(\lambda I_{i})B - w_{B}^{r-r_{i}}(\lambda A_{i})B'$ has all the properties of $B^{(1)}$ mentioned above. Repeating the described procedure,
we arrive at a desired $\Delta$-$\sigma$-polynomial $B_{0}$, which is reduced with respect to $\Sigma$ and satisfies the condition
$JB\equiv B_{0} \,(mod [\Sigma])$, where $J=1$ or $J$ is a product of finitely many elements of the form $\gamma(I_{k})$
and $\gamma'(S_{k})$ ($\gamma, \gamma'\in \Lambda_{\sigma}$).
\end{proof}
With the notation of the last proposition, we say that the $\Delta$-$\sigma$-polynomial $B$
{\em reduces to $B_{0}$} modulo $\Sigma$.
\begin{definition}
Let $\Sigma = \{A_{1},\dots,A_{d}\}$ and
$\Sigma' = \{B_{1},\dots,B_{e}\}$ be two autoreduced sets in the ring of
$\Delta$-$\sigma$-polynomials $K\{y_{1},\dots, y_{s}\}$. An autoreduced set
$\Sigma$ is said to have lower rank than $\Sigma'$ if one of the following
two cases holds:
{\em (1)} There exists $k\in {\bf N}$ such that $k\leq \min \{d,e\}$,
$rk\,A_{i}=rk\,B_{i}$ for $i=1,\dots,k-1$ and $rk\,A_{k} < rk\,B_{k}$.
{\em (2)} $d>e$ and $rk\,A_{i}=rk\,B_{i}$ for $i=1,\dots,e$.
If $d=e$ and $rk\,A_{i}=rk\,B_{i}$ for $i=1,\dots,d$, then $\Sigma$ is
said to have the same rank as $\Sigma'$.
\end{definition}
\begin{proposition}
In every nonempty family of autoreduced sets of difference-differential polynomials
there exists an autoreduced set of lowest rank.
\end{proposition}
\begin{proof}
Let $\Phi$ be a nonempty family of autoreduced sets in the ring $K\{y_{1},\dots, y_{s}\}$. Let us inductively define an infinite descending chain of subsets of $\Phi$ as follows:
$\Phi_{0}=\Phi$, $\Phi_{1}=\{\Sigma \in \Phi_{0} | \Sigma$ contains at least one element and the first element of $\Sigma$ is of lowest possible rank\}, \dots , $\Phi_{k}=\{\Sigma \in \Phi_{k-1} | \Sigma$ contains at least $k$ elements and the $k$th element of $\Sigma$ is of lowest possible rank\}, \dots . It is clear that if $A$ and $B$ are any two $\Delta$-$\sigma$-polynomials in the same set $\Phi_{k}$, then $v_{A} = v_{B}$, $deg_{v_{A}}A = deg_{v_{B}}B$, and $ord_{i}u_{A}^{(i)} = ord_{i}u_{B}^{(i)}$ for $i = 1,\dots, p$.
Therefore, if all the sets $\Phi_{k}$ were nonempty, then the set \{$A_{k}|A_{k}$ is the $k$th element of some autoreduced set in $\Phi_{k}$\} would be an infinite autoreduced set, which would contradict Proposition 5.4. Thus, there is a smallest positive integer $k$ such that $\Phi_{k}=\emptyset$. Clearly, every element of $\Phi_{k-1}$ is an autoreduced set of
lowest rank in $\Phi$.
\end{proof}
Let $J$ be any ideal of the ring $K\{y_{1},\dots, y_{s}\}$. Since the set of all autoreduced subsets of $J$ is not empty (if $A\in J$, then $\{A\}$ is an autoreduced subset of $J$), the last statement shows that $J$ contains an autoreduced subset of lowest rank. Such an autoreduced set is called a {\em characteristic set} of the ideal $J$.
\begin{proposition}
Let $\Sigma = \{A_{1}, \dots , A_{d}\}$ be a characteristic set of a $\Delta$-$\sigma$-ideal
$J$ of the ring $R = K\{y_{1},\dots, y_{s}\}$. Then
an element $B\in J$ is reduced with respect to the set $\Sigma$ if and only
if $B = 0$.
\end{proposition}
\begin{proof} First of all, note that if $B\neq 0$ and $rk\,B < rk\,A_{1}$, then $rk\,\{B\} < rk\,\Sigma$, which contradicts the fact that $\Sigma$ is a characteristic set of the ideal
$J$. Let $rk\,B > rk\,A_{1}$ and let $A_{1},\dots, A_{j}$ ($1\leq j\leq d$) be all elements of $\Sigma$ whose rank is lower than the rank of $B$. Then $\Sigma' = \{A_{1},\dots, A_{j}, B\}$ is an autoreduced set of lower rank than $\Sigma$, contrary to the fact that $\Sigma$ is a characteristic set of $J$. Thus, $B = 0$.
\end{proof}
Since for any $\Delta$-$\sigma$-polynomial $A$ and any $\gamma\in \Lambda_{\sigma}$, $ord_{i}(\gamma A) = ord_{i}A$ for $i=1,\dots, p$, one can introduce the concept of a coherent autoreduced set of a linear $\Delta$-$\sigma$-ideal of $K\{y_{1},\dots, y_{s}\}$ (that is, a $\Delta$-$\sigma$-ideal generated by a finite set of linear $\Delta$-$\sigma$-polynomials) in the same way as it is defined in the case of difference polynomials (see ~\cite[Section 6.5]{KLMP}): an autoreduced set $\Sigma = \{A_{1},\dots, A_{d}\}\subseteq K\{y_{1},\dots, y_{s}\}$ consisting of linear $\Delta$-$\sigma$-polynomials is called {\em coherent} if it satisfies the following two conditions:
(i)\, $\lambda A_{i}$ reduces to zero modulo $\Sigma$ for any $\lambda\in \Lambda, \,1\leq i\leq d$.
(ii)\, If $v_{A_{i}}\sim v_{A_{j}}$ and $w = \lambda v_{A_{i}} = \lambda'v_{A_{j}}$, where $\lambda\sim\lambda'\sim v_{A_{i}}\sim v_{A_{j}}$,
then the $\Delta$-$\sigma$-polynomial $(\lambda'I_{A_{j}})(\lambda A_{i}) - (\lambda I_{A_{i}})(\lambda'A_{j})$ reduces to zero modulo $\Sigma$.
The following two propositions can be proved precisely in the same way as the corresponding statements for difference polynomials,
see ~\cite[Theorem 6.5.3 and Corollary 6.5.4]{KLMP}.
\begin{proposition}
Any characteristic set of a linear $\Delta$-$\sigma$-ideal of the ring of $\Delta$-$\sigma$-polynomials $K\{y_{1},\dots, y_{s}\}$
is a coherent autoreduced set. Conversely, if $\Sigma$ is a coherent autoreduced set in $K\{y_{1},\dots, y_{s}\}$ consisting of linear
$\Delta$-$\sigma$-polynomials, then $\Sigma$ is a characteristic set of the linear $\Delta$-$\sigma$-ideal $[\Sigma]$.
\end{proposition}
\begin{proposition}
Let us consider a partial order $\preccurlyeq$ on $K\{y_{1},\dots, y_{s}\}$ such that $A\preccurlyeq B$ if and only if $v_{A}|v_{B}$. Let $A$
be a linear $\Delta$-$\sigma$-polynomial in $K\{y_{1},\dots, y_{s}\}$, $A\notin K$. Then the set of all elements of $\{\lambda A\,|\,\lambda\in\Lambda\}$ that are minimal with respect to $\preccurlyeq$ is a characteristic set of the $\Delta$-$\sigma$-ideal $[A]$.
\end{proposition}
Now we are ready to prove Theorem 3.1.
\begin{proof}
Let $L = K\langle\eta_{1},\dots, \eta_{s}\rangle$ be a $\Delta$-$\sigma$-field extension of $K$ generated by a finite set
$\eta = \{\eta_{1},\dots, \eta_{s}\}$. Then there exists a natural $\Delta$-$\sigma$-homomorphism $\Upsilon_{\eta}$ of the ring of $\Delta$-$\sigma$-polynomials
$K\{y_{1},\dots, y_{s}\}$ onto the $\Delta$-$\sigma$-subring $K\{\eta_{1},\dots,\eta_{s}\}$ of $L$ such that $\Upsilon_{\eta}(a) = a$ for any
$a\in K$ and $\Upsilon_{\eta}(y_{j}) = \eta_{j}$ for $j = 1,\dots, s$. (If $A\in K\{y_{1},\dots, y_{s}\}$, then $\Upsilon_{\eta}(A)$ is called the {\em value\/} of $A$ at $\eta$;
it is denoted by $A(\eta)$.) Obviously, the kernel $P$ of the $\Delta$-$\sigma$-homomorphism $\Upsilon_{\eta}$ is a prime $\Delta$-$\sigma$-ideal of
$K\{y_{1},\dots, y_{s}\}$. This ideal is called the {\em defining\/} ideal of $\eta$ over $K$ or the defining ideal of the extension $L = K\langle \eta_{1},\dots,\eta_{s}\rangle$. It is easy to see that if the quotient field $Q$ of the factor ring $\bar{R} = K\{y_{1},\dots, y_{s}\}/P$ is considered as a $\Delta$-$\sigma$-field (where $\delta(\frac{f}{g}) =
\frac{g\delta(f)-f\delta(g)}{g^2}$ and $\tau(\frac{f}{g}) = \frac{\tau(f)}{\tau(g)}$ for any $f, g\in \bar{R}$,
$\delta\in \Delta,\, \tau\in \sigma^{\ast}$), then $Q$ is naturally $\Delta$-$\sigma$-isomorphic to the field $L$. The corresponding isomorphism is identity on $K$ and maps
the images of the $\Delta$-$\sigma$-indeterminates $y_{1},\dots, y_{s}$ in the factor ring $\bar{R}$ to the elements $\eta_{1},\dots, \eta_{s}$, respectively.
Let $\Sigma = \{A_{1},\dots, A_{d}\}$ be a characteristic set of the defining $\Delta$-$\sigma$-ideal $P$.
For any $r_{1},\dots, r_{p+1}\in {\bf N}$, let us set $U_{r_{1}\dots r_{p+1}} =
\{u\in \Lambda Y| ord_{i}u\leq r_{i}$ for $i = 1,\dots, p$, $ord_{\sigma}u\leq r_{p+1}$, and either $u$ is
not a multiple of any $v_{A_{i}}$ or for every $\lambda\in\Lambda, A\in \Sigma$ such that
$u = \lambda v_{A}$ and $\lambda\sim v_{A}$, there exists $j\in \{1,\dots, p\}$
such that $ord_{j}(\lambda u_{A}^{(j)}) > r_{j}\}$.
We are going to show that the set $\bar{U}_{r_{1}\dots r_{p+1}} =
\{u(\eta)| u\in U_{r_{1}\dots r_{p+1}}\}$ is a transcendence basis of the
field $K(\D\bigcup_{j=1}^{s} \Lambda(r_{1},\dots, r_{p+1})\eta_{j})$ over $K$.
Let us show first that the set $\bar{U}_{r_{1}\dots r_{p+1}}$ is algebraically independent over $K$. Let $g$ be a polynomial in $k$
variables ($k\in {\bf N}, k \geq 1$) such that $g(u_{1}(\eta),\dots, u_{k}(\eta)) = 0$ for some $u_{1},\dots, u_{k}\in U_{r_{1}\dots r_{p+1}}$. Then the $\Delta$-$\sigma$-polynomial $\bar{g} = g(u_{1},\dots, u_{k})$ is reduced with respect to $\Sigma$. (Indeed, if $\bar{g}$ contains a term $u = \lambda v_{A_{i}}$ with $\lambda\in\Lambda,\, \lambda\sim v_{A_{i}}$ ($1\leq i\leq d$), then there exists $h\in \{1,\dots, p\}$ such that $ord_{h}(\lambda u_{A_{i}}^{(h)}) > r_{h}\geq ord_{h}u_{\bar{g}}^{(h)}$.) Since $\bar{g}\in P$, Proposition 5.8 implies that $\bar{g} = 0$. Thus, the set $\bar{U}_{r_{1}\dots r_{p+1}}$ is algebraically independent over $K$.
Now, let us prove that every element $\lambda \eta_{j}$ ($1\leq j\leq s, \lambda \in \Lambda(r_{1},\dots, r_{p+1})$) is algebraic over the field $K(\bar{U}_{r_{1}\dots r_{p+1}})$. Let $\lambda \eta_{j} \notin \bar{U}_{r_{1}\dots r_{p+1}}$ (if $\lambda \eta_{j} \in \bar{U}_{r_{1}\dots r_{p+1}}$, the statement is obvious). Then $\lambda y_{j} \notin U_{r_{1}\dots r_{p+1}}$ whence $\lambda y_{j}$ is equal to some term $\lambda'v_{A_{i}}$ where $\lambda'\in\Lambda$, $\lambda'\sim v_{A_{i}}$ ($1\leq i\leq d$), and $ord_{k}(\lambda'u_{A_{i}}^{(k)})\leq r_{k}$ for $k = 1, \dots, p$. Let us represent $A_{i}$ as a polynomial in $v_{A_{i}}$: $A_{i} = I_{0}{(v_{A_{i}})}^{e} + I_{1}{(v_{A_{i}})}^{e-1} +\dots +
I_{e}$, where $I_{0}, I_{1},\dots, I_{e}$ do not contain $v_{A_{i}}$ (therefore, all terms in these $\Delta$-$\sigma$-polynomials are lower than
$v_{A_{i}}$ with respect to $<_{\sigma}$). Since $A_{i}\in P$,
\begin{equation}
A_{i}(\eta) = I_{0}(\eta)\,(v_{A_{i}}(\eta))^{e} +
I_{1}(\eta)\,(v_{A_{i}}(\eta))^{e-1} +\dots + I_{e}(\eta) = 0.
\end{equation}
It is easy to see that the $\Delta$-$\sigma$-polynomials $I_{0}$ and $S_{A_{i}} = \partial A_{i}/\partial v_{A_{i}}$ are reduced with
respect to any element of the set $\Sigma$. Applying Proposition 5.8 we obtain that $I_{0}\notin P$ and $S_{A_{i}}\notin P$ whence $I_{0}(\eta) \neq 0$ and $S_{A_{i}}(\eta)\neq 0$. Now, if we apply $\lambda'$ to both sides of equation (5.1), the resulting equation will show that the element $\lambda'v_{A_{i}}(\eta) = \lambda\eta_{j}$ is algebraic over the field $K(\{\bar{\lambda}\eta_{l} | ord_{i}\bar{\lambda}\leq r_{i},\, ord_{\sigma}\bar{\lambda}\leq r_{p+1}$, for $i=1,\dots, p, 1\leq l\leq s$, and $\bar{\lambda}y_{l} <_{1} \lambda'u_{A_{i}}^{(1)} = \lambda y_{j}\})$.
Now, the induction on the set of terms $\Lambda Y$ ordered by $<_{\sigma}$ completes the proof of the fact that $\bar{U}_{r_{1}\dots r_{p+1}}$ is a transcendence basis of the field $K(\D\bigcup_{j=1}^{s} \Lambda(r_{1},\dots, r_{p+1})\eta_{j})$ over $K$.
\smallskip
Let $U_{r_{1}\dots r_{p+1}}^{(1)} = \{u\in \Lambda Y | ord_{i}u\leq r_{i}$ for
$i = 1,\dots, p$, $ord_{\sigma}u\leq r_{p+1}$, and $u$ is not a multiple of any $v_{A_{j}}$, $j = 1,\dots, d\}$ and
let $U_{r_{1}\dots r_{p+1}}^{(2)} = \{u\in \Lambda Y | ord_{i}u\leq r_{i}$ for
$i = 1,\dots, p$, $ord_{\sigma}u\leq r_{p+1}$, and there exists at least one pair $i, j$ ($1\leq i\leq p,
1\leq j\leq d$) such that $u = \lambda v_{A_{j}}$, $\lambda\sim v_{A_{j}}$, and
$ord_{i}(\lambda u_{A_{j}}^{(i)}) > r_{i}\}$. Clearly, $U_{r_{1}\dots r_{p+1}} =
U_{r_{1}\dots r_{p+1}}^{(1)}\bigcup U_{r_{1}\dots r_{p+1}}^{(2)}$ and
$U_{r_{1}\dots r_{p+1}}^{(1)}\bigcap U_{r_{1}\dots r_{p+1}}^{(2)} = \emptyset$.
By Theorem 4.6, there exists a numerical polynomial $\phi(t_{1},\dots, t_{p+1})$ in $p+1$ variables $t_{1},\dots, t_{p+1}$ such that $\phi(r_{1},\dots, r_{p+1}) = Card\,U_{r_{1}\dots r_{p+1}}^{(1)}$ for all sufficiently large $(r_{1},\dots, r_{p+1})\in {\bf N}^{p+1}$, $deg_{t_{i}}\phi \leq m_{i}$ ($1\leq i\leq p$), and $deg_{t_{p+1}}\phi\leq n$. Furthermore, repeating the arguments of the proof of Theorem 4.1 of \cite{Levin3}, we obtain that there is a linear combination $\psi(t_{1},\dots, t_{p+1})$ of polynomials of the form (4.5) such that $\psi(r_{1},\dots, r_{p+1}) = Card\,U_{r_{1}\dots r_{p+1}}^{(2)}$ for all sufficiently large $(r_{1},\dots, r_{p+1})\in {\bf N}^{p+1}$. Then the polynomial
$\Phi_{\eta}(t_{1},\dots, t_{p+1}) = \phi(t_{1},\dots, t_{p+1}) + \psi(t_{1},\dots, t_{p+1})$ satisfies conditions (i) and (ii) of Theorem 3.1.
In order to prove the last part of the theorem, suppose that $\zeta = \{\zeta_{1},\dots, \zeta_{q}\}$ is another system of $\Delta$-$\sigma$-generators of $L/K$, that is,
$L = K\langle \eta_{1},\dots,\eta_{s}\rangle = K\langle \zeta_{1},\dots,\zeta_{q}\rangle$. Let $$\Phi_{\zeta}(t_{1},\dots, t_{p+1}) = \D\sum_{i_{1}=0}^{m_{1}}\dots \D\sum_{i_{p}=0}^{m_{p}}\D\sum_{i_{p+1}=0}^{n}b_{i_{1}\dots i_{p+1}} {t_{1}+i_{1}\choose i_{1}}\dots {t_{p+1}+i_{p+1}\choose i_{p+1}}$$
be the dimension polynomial of our $\Delta$-$\sigma$-field extension associated with the system of generators $\zeta$. Then there exist positive integers
$h_{1}, \dots, h_{p+1}$ such that $\eta_{i} \in K(\bigcup_{j=1}^{q}\Lambda(h_{1},\dots, h_{p+1})\zeta_{j})$ and $\zeta_{k} \in K(\bigcup_{j=1}^{s}\Lambda(h_{1},\dots, h_{p+1})\eta_{j})$ for any $i=1,\dots, s$ and $k=1,\dots, q$, whence $\Phi_{\eta}(r_{1},\dots, r_{p+1}) \leq \Phi_{\zeta}(r_{1}+h_{1},\dots, r_{p+1}+h_{p+1})$ and
$\Phi_{\zeta}(r_{1},\dots, r_{p+1}) \leq \Phi_{\eta}(r_{1}+h_{1},\dots, r_{p+1}+h_{p+1})$ for all sufficiently large $(r_{1},\dots, r_{p+1})\in {\bf N}^{p+1}$. Now the statement of the third part of Theorem 3.1 follows from the fact that for any element $(k_{1},\dots, k_{p+1})\in E_{\eta}'$, the term ${t_{1}+k_{1}\choose k_{1}}\dots {t_{p+1}+k_{p+1}\choose k_{p+1}}$ appears in $\Phi_{\eta}(t_{1},\dots, t_{p+1})$ and $\Phi_{\zeta}(t_{1},\dots, t_{p+1})$ with the same coefficient $a_{k_{1}\dots k_{p+1}}$. The equality of the coefficients of the corresponding terms of total degree $d = deg\,\Phi_{\eta}$ in $\Phi_{\eta}$ and $\Phi_{\zeta}$ can be shown in the same way as in the proof of Theorem 3.3.21 of \cite{Levin4}.
\end{proof}
\begin{example}
{\em Let us find the $\Delta$-$\sigma$-dimension polynomial that expresses the strength of the difference-differential equation
\begin{equation}
\frac{\partial^{2} y(x_{1}, x_{2})}{\partial x_{1}^{2}} + \frac{\partial^{2} y(x_{1}, x_{2})}{\partial x_{2}^{2}} + y(x_{1} + h, x_{2}) + a(x_{1}, x_{2}) = 0
\end{equation}
over some $\Delta$-$\sigma$-field of functions of two real variables $K$, where the basic set of derivations
$\Delta = \{\delta_{1} = \frac{\partial}{\partial x_{1}}, \delta_{2} = \frac{\partial}{\partial x_{2}}\}$ has the partition
$\Delta = \{\delta_{1}\}\bigcup \{\delta_{2}\}$ and $\sigma$ consists of the single automorphism $\alpha: f(x_{1}, x_{2})\mapsto f(x_{1}+h, x_{2})$
($h\in {\bf R}$).
In this case, the associated $\Delta$-$\sigma$-extension $K\langle\eta\rangle/K$ is $\Delta$-$\sigma$-isomorphic to the field of fractions of $K\{y\}/[\alpha y + \delta_{1}^{2}y + \delta_{2}^{2}y + a]$ (the element $a\in K$ corresponds to the function $a(x_{1}, x_{2})$).
Applying Proposition 5.10 we obtain that the characteristic set of the defining ideal of the corresponding $\Delta$-$\sigma$-extension
$K\langle\eta\rangle/K$ consists of the $\Delta$-$\sigma$-polynomials $g_{1} = \alpha y + \delta_{1}^{2}y + \delta_{2}^{2}y + a$ and
$g_{2} = \alpha^{-1}g_{1} = \alpha^{-1}\delta_{1}^{2}y + \alpha^{-1}\delta_{2}^{2}y + y + \alpha^{-1}(a)$.
With the notation of the proof of Theorem 3.1, the application of the procedure described in this proof, Theorem 4.6(iii), and formula (4.4)
leads to the following expressions for the numbers of elements of the sets $U_{r_{1}r_{2}r_{3}}^{(1)}$ and $U_{r_{1}r_{2}r_{3}}^{(2)}$:
$Card\,U_{r_{1}r_{2}r_{3}}^{(1)} = r_{1}r_{2} + 2r_{2}r_{3} + r_{1} + r_{2} + 2r_{3} + 1$
and $Card\,U_{r_{1}r_{2}r_{3}}^{(2)} = 4r_{1}r_{3} + 2r_{2}r_{3} - 2r_{3}$ for all sufficiently large $(r_{1}, r_{2}, r_{3})\in {\bf N}^{3}$.
Thus, the strength of equation (5.2) corresponding to the given partition of the basic set of derivations is expressed by the $\Delta$-$\sigma$-dimension polynomial $$\Phi_{\eta}(t_{1}, t_{2}, t_{3}) = t_{1}t_{2} + 4t_{1}t_{3} + 4t_{2}t_{3} + t_{1} + t_{2} + 1.$$}
\end{example}
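As an elementary consistency check (ours, not part of the original argument), the stated polynomial $\Phi_{\eta}$ should equal the sum $Card\,U_{r_{1}r_{2}r_{3}}^{(1)} + Card\,U_{r_{1}r_{2}r_{3}}^{(2)}$. Since each variable occurs to degree at most one in all three expressions, agreement on a small integer grid already proves the identity; a minimal Python sketch:

```python
# Check that Phi_eta = Card U^(1) + Card U^(2) for the example above.
# Each variable occurs to degree <= 1, so agreement on a 4x4x4 integer
# grid establishes the polynomial identity exactly.

def card_u1(r1, r2, r3):
    return r1*r2 + 2*r2*r3 + r1 + r2 + 2*r3 + 1

def card_u2(r1, r2, r3):
    return 4*r1*r3 + 2*r2*r3 - 2*r3

def phi_eta(t1, t2, t3):
    return t1*t2 + 4*t1*t3 + 4*t2*t3 + t1 + t2 + 1

assert all(card_u1(a, b, c) + card_u2(a, b, c) == phi_eta(a, b, c)
           for a in range(4) for b in range(4) for c in range(4))
```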
\begin{example} {\em Let $K$ be a difference-differential ($\Delta$-$\sigma$-) field where the basic set of derivations $\Delta =
\{\delta_{1}, \delta_{2}\}$ is considered together with its partition
\begin{equation}\Delta = \{\delta_{1}\}\bigcup \{\delta_{2}\}\end{equation}
and $\sigma = \{\alpha\}$ for some automorphism $\alpha$ of $K$. Let
$L = K\langle \eta \rangle$ be a $\Delta$-$\sigma$-field extension with the
defining equation
\begin{equation}\delta_{1}^{a}\delta_{2}^{b}\alpha^{c}\eta +
\delta_{1}^{a}\delta_{2}^{b}\alpha^{-c}\eta + \delta_{1}^{a}\delta_{2}^{b+c}\eta + \delta_{1}^{a+c}\delta_{2}^{b}\eta= 0
\end{equation}
where $a$, $b$, and $c$ are positive integers. Let $\Phi_{\eta}(t_{1}, t_{2}, t_{3})$ denote the corresponding
difference-differential dimension polynomial (which expresses the strength of equation (5.4) with respect to the
given partition of the set of basic derivations $\Delta$). In order to compute $\Phi_{\eta}$, notice first that
the defining $\Delta$-$\sigma$-ideal $P$ of the extension $L/K$ is the linear $\Delta$-$\sigma$-ideal of $K\{y\}$
generated by the $\Delta$-$\sigma$-polynomial
$$f = \delta_{1}^{a}\delta_{2}^{b}\alpha^{c} y +
\delta_{1}^{a}\delta_{2}^{b}\alpha^{-c} y + \delta_{1}^{a}\delta_{2}^{b+c} y + \delta_{1}^{a+c}\delta_{2}^{b} y.$$
By Proposition 5.10, the characteristic set of the ideal $P$ consists of $f$ and
$$\alpha^{-1}f = \alpha^{-(c+1)}\delta_{1}^{a}\delta_{2}^{b} y +
\delta_{1}^{a}\delta_{2}^{b}\alpha^{c-1} y + \delta_{1}^{a}\delta_{2}^{b+c}\alpha^{-1}y + \delta_{1}^{a+c}\delta_{2}^{b}\alpha^{-1}y.$$
The procedure described in the proof of Theorem 3.1 shows that $Card\,U_{r_{1}r_{2}r_{3}}^{(1)} = \phi_{A}(r_{1}, r_{2}, r_{3})$ for all
sufficiently large $(r_{1}, r_{2}, r_{3})\in {\bf N}^{3}$, where
$\phi_{A}(t_{1}, t_{2}, t_{3})$ is the dimension polynomial of the set
$A = \{(a, b, c), (a, b, -(c+1)\,)\}\subseteq {\bf N}^{2}\times{\bf Z}$.
Applying Theorem 4.6(iii), and formula (4.4) we obtain that
$\phi_{A}(t_{1}, t_{2}, t_{3}) = 2ct_{1}t_{2} + 2bt_{1}t_{3} + 2at_{2}t_{3} + (b+2c-2bc)t_{1} +(a+2c-2ac)t_{2} + (2a+2b-2ab)t_{3} +
a+b+2c-ab-2ac-2bc+2abc$. The computation of $Card\,U_{r_{1}r_{2}r_{3}}^{(2)}$ with the use of the method of inclusion and exclusion described in the
proof of Theorem 3.1 yields the following: $Card\,U_{r_{1}r_{2}r_{3}}^{(2)} = (2r_{3}-2c+1)[c(r_{2}-b+1) + c(r_{1}-a+1) - c^{2}]$ for all
sufficiently large $(r_{1}, r_{2}, r_{3})\in {\bf N}^{3}$. Therefore, the $\Delta$-$\sigma$-dimension polynomial of the extension $L/K$, which expresses the strength of equation (5.4), is as follows.
$$\Phi_{\eta}(t_{1}, t_{2}, t_{3}) = 2ct_{1}t_{2} + 2(b+c)t_{1}t_{3} + 2(a+c)t_{2}t_{3} + (b+3c-2bc-2c^{2})t_{1}$$
$$+ (a+3c-2ac-2c^{2})t_{2} + (2a+2b+4c -2ab - 2ac - 2bc - 2c^{2})t_{3}$$
\begin{equation}
+\, a+b+4c-ab-3ac-3bc +2abc +2ac^{2} + 2bc^{2} + 2c^{3} - 5c^{2}.\hspace{0.3in}
\end{equation}
The computation of the Kolchin-type univariate $\Delta$-$\sigma$-dimension polynomial (see Theorem 2.1) via the method of K\"ahler differentials described in
~\cite[Section 6.5]{KLMP} (by mimicking Example 6.5.6 of ~\cite{KLMP}) leads to the following result:
\begin{equation}
\phi_{\eta|K}(t) = {\D\frac{D}{2}}t^{2} - {\D\frac{D(D-2)}{2}}t + {\D\frac{D(D-1)(D-2)}{6}}
\end{equation}
where $D = a+b+c$. In this case the polynomial $\phi_{\eta|K}(t)$ carries just one invariant $a+b+c$ of the extension $L/K$ while
$\Phi_{\eta}(t_{1}, t_{2}, t_{3})$ determines three such invariants: $c,\, b+c$, and $a+c$ (see Theorem 3.1(iii)\,),
that is, $\Phi_{\eta}$ determines all three parameters $a, b, c$ of the defining equation while $\phi_{\eta|K}(t)$ gives just the sum of these parameters.
The extension $K\langle \zeta\rangle/K$ with a $\Delta$-$\sigma$-generator $\zeta$, the same basic set $\Delta\bigcup\sigma$ ($\Delta =
\{\delta_{1}, \delta_{2}\}$, $\sigma = \{\alpha\}$), the same partition of $\Delta$ and defining equation
\begin{equation}\delta_{1}^{a+b}\alpha^{c}\zeta + \delta_{2}^{a+b}\alpha^{-c}\zeta = 0
\end{equation}
has the same univariate difference-dimension polynomial (5.6). However, its $\Delta$-$\sigma$-dimension polynomial is not only different, but also has different invariants described in part (iii) of Theorem 3.1:
$$\Phi_{\zeta}(t_{1}, t_{2}, t_{3}) = 2ct_{1}t_{2} + 2(a+b)t_{1}t_{3} + 2(a+b)t_{2}t_{3} + At_{1} + Bt_{2} + Ct_{3} + E$$
where $A = B = (a+b)(1-2c) + 2c$,\, $C = 2[1 - (a+b-1)^{2}]$,\, and \,$E = 1 + 2c(a+b-1)^{2}$.
Two systems of algebraic difference-differential ($\Delta$-$\sigma$-) equations with coefficients from a $\Delta$-$\sigma$-field $K$ are said to be {\em equivalent} if there is a $\Delta$-$\sigma$-isomorphism between the $\Delta$-$\sigma$-field extensions of $K$ with these defining equations, which is identity on $K$. Our example shows that using a partition of the basic set of derivations and the computation of the corresponding multivariate $\Delta$-$\sigma$-dimension polynomials, one can determine that two systems of $\Delta$-$\sigma$-equations (see systems (5.4) and (5.7)\,) are not equivalent, even though they have the same univariate difference-dimension polynomial.}
\end{example}
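The identity $\Phi_{\eta} = \phi_{A} + Card\,U_{r_{1}r_{2}r_{3}}^{(2)}$ asserted in the example above can also be verified mechanically (our check, not part of the text). Every variable occurs to degree at most three, so exact agreement on a $4^{6}$ integer grid in $(a,b,c,t_{1},t_{2},t_{3})$ establishes the polynomial identity:

```python
# Verify Phi_eta = phi_A + Card U^(2) for the second example.  All three
# expressions are polynomials of degree <= 3 in each variable, so exact
# agreement on a grid of 4 integer values per variable proves the identity.

def phi_A(a, b, c, t1, t2, t3):
    return (2*c*t1*t2 + 2*b*t1*t3 + 2*a*t2*t3
            + (b + 2*c - 2*b*c)*t1 + (a + 2*c - 2*a*c)*t2
            + (2*a + 2*b - 2*a*b)*t3
            + a + b + 2*c - a*b - 2*a*c - 2*b*c + 2*a*b*c)

def card_u2(a, b, c, t1, t2, t3):
    return (2*t3 - 2*c + 1) * (c*(t2 - b + 1) + c*(t1 - a + 1) - c**2)

def Phi_eta(a, b, c, t1, t2, t3):
    return (2*c*t1*t2 + 2*(b + c)*t1*t3 + 2*(a + c)*t2*t3
            + (b + 3*c - 2*b*c - 2*c**2)*t1
            + (a + 3*c - 2*a*c - 2*c**2)*t2
            + (2*a + 2*b + 4*c - 2*a*b - 2*a*c - 2*b*c - 2*c**2)*t3
            + a + b + 4*c - a*b - 3*a*c - 3*b*c
            + 2*a*b*c + 2*a*c**2 + 2*b*c**2 + 2*c**3 - 5*c**2)

vals = range(4)
assert all(phi_A(a, b, c, t1, t2, t3) + card_u2(a, b, c, t1, t2, t3)
           == Phi_eta(a, b, c, t1, t2, t3)
           for a in vals for b in vals for c in vals
           for t1 in vals for t2 in vals for t3 in vals)
```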
\bigskip
\noindent\Large {\bf Acknowledgements}
\normalsize
\medskip
\noindent This research was supported by the NSF Grant CCF 1016608
\section{Some log geometry}
In this expository section, we summarize some facts
about log schemes needed later. The details can be found in \cite{kato,
ogus} together with some more specific references given below. For our purposes, the basic
example is given by a pair consisting of a smooth
scheme $X$ and a reduced divisor $E\subset X$ with normal crossings.
We refer to this as a {\em log pair}. We can segue to the more
flexible notion of log structure by noting that a log pair $(X,E)$,
gives rise to the
multiplicative submonoid $M\subset \OO_X$ of functions invertible
outside of $E$.
A {\em pre-log structure} on a scheme $X$ is a sheaf of commutative monoids
$M$ on the \'etale topology $X_{et}$ together with a homomorphism $\alpha:M\to \OO_X$, where the
latter is treated as a monoid with respect to multiplication. It is a
{\em log structure} if $\alpha^{-1}\OO_X^*\cong
\OO_X^*$. For example, the monoid associated to a log pair is a log
structure. In general, a pre-log structure can be completed to a log structure in
a canonical way. A {\em log scheme} is a scheme equipped with a log
structure. We will identify log pairs with the associated log scheme. In
order to avoid confusion below, we reserve the symbols $\X, \Y,
\ldots$ for log schemes, and use the corresponding symbols $X,Y,\ldots$ for the
underlying schemes. There are a number of other examples in addition to
the one given above. For example, any scheme $X$ can be turned into a log scheme with the {\em
trivial} log structure $M=\OO_X^*$.
There is an important connection between log geometry and toric
geometry. Indeed, given a finitely generated saturated submonoid $M\subseteq \Z^n$, the
affine toric variety $\Spec \C[M]$ is a log scheme
with respect to the pre-log structure induced by $M\to \C[M]$.
More generally, recall that a toroidal variety \cite{kkms} is given by a
variety $X$ and an open subset $U$, such that the pair $(X,U)$ is \'etale
locally isomorphic to a toric variety with its embedded torus. The toroidal
variety $X$ carries a log structure given locally by the one
above. The subset $U$ can be understood as the locus where this log
structure is trivial. In the log setting, it is convenient
to relax the condition on $M$ to allow it to be a
finitely generated monoid which is embeddable into an abelian group as a
saturated monoid. If $M$ satisfies these conditions, it is called
{\em fine and saturated} or simply {\em fs}. Note that there is a canonical
choice for the ambient group, namely the group $M^{gp}$ given as a
group of fractions. In general, we
want to restrict our attention to log schemes (called fs log schemes)
for which the {\em characteristic monoids} $\overline{ M}_x=
M_x/\OO^*_x$ are fs for all geometric points $x$.
A morphism of log schemes consists of a
morphism of schemes and a compatible morphism of monoids. Of course,
any morphism of schemes can be regarded as a morphism of log schemes
when equipped with the trivial log structures. Here are some more interesting examples.
\begin{ex}
Let $\X=(X,E)$ and $\Y = (Y,D)$ be log pairs.
A semistable morphism of log schemes $f:\X\to \Y$ is a
morphism $f$ of schemes which is given \'etale locally (or
analytically) by
$$y_1=x_1x_2\ldots x_{r_1}$$
$$y_2=x_{r_1+1}\ldots x_{r_2}$$
$$\ldots$$
$$y_{d+1}=x_{r_d+1}$$
$$\ldots$$
such that $x_1\ldots x_{r_d}$ and $y_1\ldots y_{d}$ are the local
equations for $E$ and $D$ respectively.
\end{ex}
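To make the local form above concrete, here is a toy instance (our own choice of parameters: $d=1$, $r_{1}=2$, one extra smooth coordinate): $f(x_{1},x_{2},x_{3})=(x_{1}x_{2},x_{3})$, for which the divisor $E=\{x_{1}x_{2}=0\}$ is exactly the preimage of $D=\{y_{1}=0\}$. A small numerical sketch:

```python
# Toy semistable map f(x1, x2, x3) = (y1, y2) with y1 = x1*x2, y2 = x3.
# A point lies on E = {x1*x2 = 0} iff its image lies on D = {y1 = 0}.

def f(x1, x2, x3):
    return (x1 * x2, x3)

def on_E(x1, x2, x3):
    return x1 * x2 == 0

def on_D(y1, y2):
    return y1 == 0

pts = [(0, 5, 1), (3, 0, 2), (2, 3, 0), (1, 1, 1)]
for p in pts:
    assert on_E(*p) == on_D(*f(*p))
```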
\begin{ex}\label{ex:chart}
A homomorphism of fs monoids $P\to Q$ induces a
morphism of fs log schemes $\Spec \C[Q]\to \Spec \C[P]$. In
particular, toric morphisms of toric varieties are morphisms of log schemes.
\end{ex}
Log structures give rise to logarithmic differentials in a rather
general way. Given a log pair $\X=(X,E)$, set
$$\Omega_{\X}^k = \Omega_X^k(\log E)$$
which is generated locally by $k$-fold wedge products of
$$\frac{dx_1}{x_1},\ldots \frac{dx_{r_d}}{x_{r_d}}, dx_{r_d+1},\ldots$$
More generally, given a log scheme $\X$ over $\C$, we define the $\OO_X$-module
$\Omega_\X^1=\Omega_{\X/\C}^1$ as the universal sheaf which receives a $\C$-linear derivation $d:\OO_X\to
\Omega_{\X}^1$ and homomorphism $\dlog:M\to \Omega_{\X}^1$ satisfying
$m\dlog m = dm$. Of course, by the universal property of ordinary
K\"ahler differentials, we have a map $\Omega_X^1\to\Omega_\X^1$
taking $df$ to $df$, but this is generally not an isomorphism.
There is a relative version of differentials $\Omega_{\X/\Y}^1$ for a
morphism $f:\X\to \Y$ of log schemes. This fits into an exact sequence
$$f^*\Omega_\Y^1\to \Omega_\X^1\to \Omega_{\X/\Y}^1\to 0$$
There is a notion of smoothness in this setting, called {\em log
smoothness}, which can be defined using a variant of the usual
infinitesimal lifting condition \cite[\S 3.3]{kato}.
However, it is weaker than the
name suggests. For instance,
while smoothness implies flatness, the corresponding statement for log
smoothness is false. Nevertheless, some expected properties do hold.
For a log smooth map
$\Omega_{\X/\Y}^1$ and therefore its exterior powers $\Omega_{\X/\Y}^k$,
are locally free. Kato \cite[thm 3.5]{kato} gives a criterion which
allows us to verify some basic examples:
\begin{ex}
Toric (and more generally toroidal) varieties are log smooth over $\Spec \C$
with its trivial log structure.
\end{ex}
\begin{ex}\label{ex:semi}
A semistable map between log pairs is a log smooth morphism.
\end{ex}
To rectify some of the defects of log smoothness just alluded to, we need a few more
conditions. A morphism of fs monoids $h:P\to Q$ is exact if satisfies $(h^{gp})^{-1}(Q)= P$;
it is integral if $\Z[P]\to \Z[Q]$ is flat; it is saturated if for any
morphism $P\to P'$ to an fs monoid the push out $Q\oplus_P P'$ is fs; and it is vertical if the image of $P$
does not lie in any proper face of $Q$. For example, the diagonal
embedding $\N\subset \N^n$ satisfies all of these conditions.
A morphism of $f:(X,M)\to (Y,N)$ of fs log schemes,
is respectively {\em exact}, {\em
integral}, {\em saturated} or {\em vertical}, if the map of characteristic monoids
$(f^{-1}\overline{N})_x\to \overline{M}_x$ has the same property for each geometric
point $x$.
Integral maps are exact and also flat as a map of
schemes. Saturated morphisms are integral with reduced fibres \cite[\S 3.6]{it}.
Here are the key examples for us.
\begin{ex}\label{ex:semi2}
A semistable map between log pairs is vertical and saturated (and
therefore integral and therefore exact). This is because the maps of characteristic monoids are sums
of diagonal embeddings $\N^m\subset \N^{n_1+\ldots +n_m}$.
\end{ex}
\begin{ex}\label{ex:semi3}
Abramovich and Karu \cite[def 0.1]{ak} define a map from a scheme to a
regular scheme to be weakly semistable if it is toroidal and equidimensional with reduced fibres.
Such a map is log smooth vertical and saturated when the
schemes are endowed with the log structures induced from the toroidal
structures, cf \cite[rmk 3.6.6]{it}.
\end{ex}
We note that there is a parallel theory of log analytic
spaces given by an analytic space $(X,\OO_X)$ and a homomorphism of
monoids $\alpha:M\to \OO_X$ satisfying conditions similar to those
above. Most of the basic definitions and constructions carry over as before.
In addition, there is a new
construction often called the {\em real blow up} \cite{kn}.
Given an fs log analytic space $\X$, we can define a new topological
space $\X^{log}$ and a continuous map $\lambda:\X^{log}\to
X$.
As a set $$\X^{log}=
\left\{ (x,h)\mid x\in X, h\in Hom(M^{gp}_{x}, S^1), \forall f\in \OO_{x},
h(f)= \frac{f(x)}{|f(x)|} \right\}
$$
and $\lambda$ is given by the evident projection.
For example, when $\X$ is given by the log pair $(\C^n,\{z_1\ldots z_n=0\})$,
$\X^{log} = ([0,\infty)\times S^1)^n$ as a topological space, with $\lambda(r_1,
u_1,r_2,u_2,\ldots) = (r_1u_1, r_2u_2,\ldots)$. The construction
is functorial so that a morphism $f:\X\to \Y$ gives a continuous map
$f^{log}:\X^{log}\to \Y^{log}$ compatible with the $\lambda$'s.
We note also that $\X^{log}$ can be made into a ringed space with structure
sheaf $\OO_{\X^{log}}$ such that $\lambda$ becomes a morphism.
One can picture $(X,E)^{log}$ as adding an ideal boundary to $X-E$ which is homeomorphic to the boundary
of a tubular neighbourhood about $E$, and this picture is compatible
with what is happening on $Y$. This leads to the remarkable fact, due
to Usui, that when $f$ is proper and semistable $f^{log}$ is topologically a fibre bundle. A generalization of this due to Nakayama and Ogus \cite{no} will be used below.
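The real blow up is easy to illustrate in the one-dimensional case $(\C,\{z=0\})$. The following Python sketch (purely illustrative; the function names are ours) realizes $\lambda(r,u)=ru$ on $[0,\infty)\times S^1$ and checks that the fibre over a nonzero point is a single point, while every point $(0,u)$ of the circle lies over the origin:

```python
# Real blow up of the log pair (C, {z = 0}): X^log = [0, oo) x S^1,
# lambda(r, u) = r*u with u a unit complex number.
import cmath

def blowup_map(r, u):
    """lambda: [0, oo) x S^1 -> C, (r, u) |-> r*u."""
    return r * u

def preimage(z):
    """The unique preimage of z != 0, namely (|z|, z/|z|)."""
    return (abs(z), z / abs(z))

# Away from the divisor, lambda is a bijection onto C - {0}.
z = complex(3.0, 4.0)
r, u = preimage(z)
assert abs(blowup_map(r, u) - z) < 1e-12 and abs(abs(u) - 1.0) < 1e-12

# Over z = 0 the whole circle S^1 maps to the origin: the fibre is S^1.
for k in range(8):
    w = cmath.exp(2j * cmath.pi * k / 8)
    assert blowup_map(0.0, w) == 0
```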
\section{Main theorems}
\begin{thm}\label{thm:decomp2}
Let $f:\X\to \Y$ be a projective exact vertical log smooth map of fs log schemes and suppose
that $\Y$ is the log scheme associated to a
log pair.
Then
$$\R f_* \Omega_{\X/\Y}^k \cong \bigoplus_i R^i
f_*\Omega_{\X/\Y}^k[-i]$$
for all $k$.
\end{thm}
Readers dismayed by this long string of adjectives should keep in mind that
a projective semistable map satisfies all of the above assumptions, and therefore
the conclusion. The semistable case, however, is not strong enough to
imply Koll\'ar's theorem. We really need the version just stated.
Before giving the proof, we record the following elementary fact,
which can be proved by induction on the length of the filtrations.
\begin{lemma}\label{lemma:lin}
Suppose that $(V_i,F_i^\dt)$ are two filtered finite dimensional
vector spaces such that $\dim Gr^p(V_1)=\dim Gr^p(V_2)$ for all $p$.
Then a linear isomorphism $L:V_1\to V_2$ which is compatible with the
filtrations will induce an isomorphism of associated graded spaces.
\end{lemma}
\begin{proof}[Proof of theorem]
Let $d$ denote the relative dimension of $f$. Let $\Y$ be defined by
the log pair $(Y,D)$. The verticality assumption implies that the log
structure of $\X$ is trivial on $U = X-f^{-1}D$. The restriction $f|_U:U\to
Y-D$ is smooth and projective in the usual sense. So it is
topologically a fibre bundle. In fact a theorem of Nakayama and Ogus \cite{no} shows
that $f|_U$ prolongs naturally to a fibre bundle $f^{log}:\X^{log}\to
\Y^{log}$ over $\Y^{log}$.
Using the work of Illusie, Kato and Nakayama
\cite[cor 7.2]{ikn}, we may conclude that sheaves $R^i f_*\Omega_{\X/\Y}^k$ are
locally free and that the relative Hodge to de Rham spectral sequence
degenerates. Let
$\eta'\in H^1(X,\Omega_X^1)$ denote $c_1$ of a
relatively ample line bundle, and
let $\eta\in H^1(\Omega_{\X/\Y}^1)\cong
Hom_{D(Y)}(\OO_Y,\R f_*\Omega^1_{\X/\Y}[1])$ denote the image of
$\eta'$. Let $V^k=\R f_*\Omega_{\X/\Y}^k$ and $V=\bigoplus_k V^k[-k]$.
We have a Lefschetz operator $L:V^k\to V^{k+1}[1]$
given by cup product with $\eta$.
By adding these, we get a map $L:V\to V[2]$. Our goal is to establish
the hard Lefschetz property that $L^i$ induces an isomorphism
\begin{equation}
\label{eq:HL}
L^i:\mathcal{H}^{d-i}(V)\cong \mathcal{H}^{d+i}(V)
\end{equation}
for all $i$. Then the theorem will follow from Deligne's
theorem \cite{deligne}.
Note that we have canonical isomorphisms
\begin{equation}
\label{eq:cHV}
\cH^i(V)_y \cong \bigoplus_k (R^{i-k}f_*\Omega^k_{\X/\Y} )_y\cong
\bigoplus_k (Gr_F^k\R^i f_*\Omega_{\X/\Y}^\dt)_y
\end{equation}
where $F$ is the Hodge filtration. This isomorphism respects the action by $L$.
We note also that the ranks of $R^{d-i-k}f_*\Omega^k_{\X/\Y}$ and
$R^{d-k}f_*\Omega^{k+i}_{\X/\Y}$ coincide with the ranks of
$R^{d-i-k}f_*\Omega^k_{U/Y-D}$ and $R^{d-k}f_*\Omega^{k+i}_{U/Y-D}$
respectively, and therefore with each other. Therefore
by lemma \ref{lemma:lin} and Nakayama's lemma, it suffices to check
the hard Lefschetz property for $(\R^i f_*\Omega_{\X/\Y}^\dt)_y$ and
all $y\in Y$.
By \cite{fkato}, we have a canonical identification
\begin{equation}
\label{eq:RfOmega}
\lambda^*\R f_*\Omega_{\X/\Y}^\dt\cong \R f^{log}_*\C\otimes \OO_{Y^{log}}
\end{equation}
Note that $\R f^{log}_*\C$ also has an action by $L$, and \eqref{eq:RfOmega} respects these actions.
Since the stalk $(R^if^{log}_*\C)_y$ is just $H^i(X_y,\C)$, when $y\notin D$, and $R^i f^{log}_*\C$ is a local system,
we can conclude that we have a hard Lefschetz theorem everywhere,
i.e. $L^i: R^{d-i} f^{log}_*\C\cong R^{d+i}f^{log}_*\C$. By the
previous remarks, this implies \eqref{eq:HL} and consequently the theorem.
\end{proof}
\begin{remark}
The above argument can be pushed a bit to imply the same
decomposition for a proper holomorphic semistable map
of analytic log pairs provided that there is a $2$-form on $X$ which
restricts to a K\"ahler form on the components of all the fibres.
For this we may appeal to a
theorem of Fujisawa \cite[thm 6.10]{fujisawa} to conclude $R^i f_*\Omega_{\X/\Y}^k$ are
locally free and that the relative Hodge to de Rham spectral sequence
degenerates. The rest of the argument is the same as above.
\end{remark}
Before turning to Koll\'ar's theorem, we need the following:
\begin{lemma}\label{lemma:2}
Suppose that $\Y=(Y,D)$ is a log pair, $X$ is a variety with normal
Gorenstein singularities, and that $f:X\to Y$ is map such that the log
scheme $\X$ defined by $f^{-1}D$ is fs. We suppose furthermore that
$f:\X\to \Y$ is log smooth and
saturated. Then
$\Omega_{\X/\Y}^d\cong \omega_{X/Y}$, where $d=\dim X-\dim Y$ and
$\omega_{X/Y}$ is the relative dualizing sheaf.
\end{lemma}
\begin{proof}
The line bundle
\begin{equation}
\label{eq:OmE}
\Omega_{\X/\Y}^d\otimes
\omega_{X/Y}^{-1}\cong \OO_{X}(E)
\end{equation}
where $E=\sum n_iE_i$ is a Cartier divisor supported on
$f^{-1}D$. In fact, we can make the choice of $E$ canonical. The
divisor $E$ is determined by its restriction to the regular locus of
$U\subseteq X$, since the complement of $U$ has codimension at least
$2$. So we may replace $X$ by $U$. Then $E$ is the divisor of the canonical
map
$$\omega_{X/Y}\cong \Omega^d_{X}\otimes
f^*\omega_{Y}^{-1}\to \Omega^d_{\X/\Y}$$
Our goal is to show that $E$ is in fact trivial as a
divisor. Since this is now a local problem, there is no loss of
generality in assuming that $Y$ is affine.
The log smoothness and saturation assumptions imply that $f$ is flat. Therefore all of the irreducible components of
$f^{-1}D$, and in particular $E$, must map onto components of
$D$. Thus the preimage of a general
curve $C\subset Y$ will meet all the components of $E$. The conditions of log smoothness and
saturation and the isomorphism \eqref{eq:OmE} are stable under base
change to $C$. Therefore we may assume that $Y$ is a curve and that $D$
is a point with local parameter $y$. The fibre $f^{-1}D$ is
reduced because $f$ is saturated. Choose a general point $p$ of an irreducible component $E_i$
of $E$. Then $E_i$ is smooth in neighbourhood of $p$ and $f^*y$ gives
a defining equation for it. Thus we may choose coordinates $x_1=f^*y,
x_2,\ldots x_n$ at $p$. Then a local generator of
$\Omega_{\X/\Y}^d$ at $p$ is given by
$$(d\log x_1\wedge dx_2\wedge \ldots \wedge dx_n)\otimes (d\log
y)^{-1} = (d x_1\wedge dx_2\wedge \ldots \wedge dx_n)\otimes
(dy)^{-1}$$
which coincides with a generator of $\omega_{X/Y}$. Thus $E$ is trivial.
\end{proof}
\begin{thm}[Koll\'ar]
If $X$ is smooth and $f:X\to Y$ is projective, then
$$\R f_* \omega_{X} \cong \bigoplus_i R^i f_*\omega_{X}[-i]$$
\end{thm}
\begin{proof}
The first step is to apply the log version of the weak semistable
reduction theorem of Abramovich and
Karu \cite{ak}. Although Illusie and Temkin
\cite[thm 3.10]{it} have given such a version, it is a bit simpler
to use the original form of the theorem, and then translate the conclusion.
The theorem yields a diagram
$$\xymatrix{
X'\ar[d]^{f'}\ar[r]^{\pi} & X\ar[d]^{f} \\
D\subset Y'\ar[r]^{p} & Y
}$$
where $p$ is generically finite, $Y'$ is smooth, $D$ is a divisor with simple
normal crossings, $X'$ is birational to the fibre product, and $f'$ is
weakly semistable. Then, as we explained in example \ref{ex:semi3}, $f'$ is log smooth exact vertical and
saturated with respect to the log schemes $\X'$ defined by
$f'^{-1}D$ and $\Y'$ defined by $D$. Furthermore,
it is known that $X'$ has rational Gorenstein singularities \cite[lemma 6.1]{ak}.
By lemma \ref{lemma:2} and theorem~\ref{thm:decomp2}, we obtain
$$\R f'_* \omega_{ X'/Y'} =\bigoplus R^if'_*\omega_{ X'/Y'}[-i]$$
and therefore
\begin{equation}
\label{eq:Rf}
\R f'_* \omega_{X'} =\bigoplus R^if'_*\omega_{X'}[-i].
\end{equation}
Fix a resolution of singularities $g:\tilde X\to X'$. Then $\R g_*\omega_{\tilde
X}= g_*\omega_{\tilde X}= \omega_{X'}$, where the first equality follows from the Grauert-Riemenschneider
vanishing theorem, and the second from the fact that $X'$ has rational
singularities. This together with \eqref{eq:Rf} shows that
\begin{equation}
\label{eq:2}
\R (f'\circ g)_* \omega_{\tilde X} =\bigoplus R^i(f'\circ g)_*\omega_{\tilde X}[-i].
\end{equation}
We have an inclusion $(\pi\circ g)^*\omega_X\subset
\omega_{\tilde X}$ which gives an injection
$$\sigma:\omega_{X}\hookrightarrow (\pi\circ g)_*(\pi\circ g)^*\omega_X \hookrightarrow
(\pi\circ g)_*\omega_{\tilde X}$$
The map $\sigma$ splits the normalized Grothendieck trace
$$\tau=\frac{1}{\deg X'/X}tr:\R (\pi\circ g)_*\omega_{\tilde X} \cong (\pi\circ g)_*\omega_{\tilde
X}\to \omega_X $$
It follows that $\omega_X$ is a direct summand of $(\pi\circ g)_*\omega_{\tilde
X}$, and that this relation persists after applying a direct image functor.
Therefore applying $\tau$ to \eqref{eq:2} yields
$$\R f_* \omega_{X} =\bigoplus R^if_*\omega_{X}[-i]. $$
\end{proof}
\section{#1}\let\thesection\oldthesection}
\documentclass[twocolumn,preprintnumbers,amsmath,amsfonts,amssymb,notitlepage,showpacs,pra]{revtex4-1}
\usepackage{geometry}
\date{\today}
\DeclareMathAlphabet{\mathpzc}{OT1}{pzc}{m}{it}
\geometry{verbose,letterpaper,tmargin=2cm,bmargin=2cm,lmargin=2cm,rmargin=2cm}
\usepackage[utf8x]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{nicefrac}
\usepackage{amsbsy}
\usepackage{float}
\usepackage{epstopdf}
\usepackage{dsfont}
\usepackage{siunitx}
\def {\rightarrow}{{\rightarrow}}
\def {\uparrow}{{\uparrow}}
\def {\downarrow}{{\downarrow}}
\def \begin{equation}{\begin{equation}}
\def \end{equation}{\end{equation}}
\def \begin{array}{\begin{array}}
\def \end{array}{\end{array}}
\def \begin{eqnarray}{\begin{eqnarray}}
\def \end{eqnarray}{\end{eqnarray}}
\def \! \perp{\! \perp}
\begin{document}
\title{Transverse collisional instabilities of a Bose-Einstein condensate in a driven one-dimensional lattice}
\author{Sayan Choudhury}
\email{[email protected]}
\author{Erich J Mueller}
\email{[email protected]}
\affiliation{Laboratory of Atomic and Solid State Physics, Cornell University, Ithaca, New York}
\pacs{67.85.Hj, 03.75.-b}
\begin{abstract}
Motivated by recent experiments, we analyze the stability of a three-dimensional Bose-Einstein condensate (BEC) loaded in a periodically driven one-dimensional optical lattice. Such periodically driven systems do not have a thermodynamic ground state, but may have a long-lived steady state which is an eigenstate of a ``Floquet Hamiltonian". We explore collisional instabilities of the Floquet ground state which transfer energy into the transverse modes. We calculate decay rates, finding that the lifetime scales as the inverse square of the scattering length and inverse of the peak three-dimensional density. These rates can be controlled by adding additional transverse potentials.
\end{abstract}
\maketitle
\section{Introduction}
In recent years, rapid progress has been made in quantum simulation, whereby one engineers a quantum system to study important phenomena experimentally \cite{NoriRMP, Ciracnatpcomm,Blochnatpcomm,Blattnatpcomm,Guziknatpcomm}. Periodically driven quantum systems (Floquet systems) are a particularly versatile platform for such simulations \cite{MoessnerFTIReview,HolthausFloquetReview} and have already been used to explore a variety of rich physics. This program has been particularly successful in cold atoms, where periodic driving has been integral to studying models of classical frustrated magnetism, and models of topological matter \cite{FerrariACTunnelingNatPhys2009,SengstockFrustratedScience2011,SengstockIsing2013,SengstockAbelianGaugePRL2012,EsslingerHaldaneFloquet, Blochbandtopology, Ketterlespinorbit, BlochHarper, KetterleHarper,TinoNJP2010,TinoPRL2008,MiyakeThesis,ChinFloquet2013,ChinFloquet2014}. These periodically driven systems have seen extensive theoretical modelling \cite{SengstockNonAbelianGaugePRL2012, ZhaiFTIarxiv, DemlerMajoranaPRL2011,DemlerFloquetTransport, OhMajoranPRB2013, BaurCooperFloquet2014,EckardtACEPL,DemlerFloquetAnamolous, ZhaoPRLAnamolous,MuellerFloquetAnamolous,creffieldsmf,GoldmanDalibardprx,PolkovnikovAnnals2013Floquet, RigolFloqetArxiv2014Floquet, EckardtPRLBoseSelection2013, DasMoessnerPRL,DasMoessner2014-2,SenguptaSensarma2013,ChandranAbaninMBLfloquet,chinz2theoryprl,polkovnikov2014floquet,EckardtFloquetPRL2005,Gomez-LeonFloquetDimensionPRL,NeupertFloquetPRL2014,TorresPRBGraphene2012,TorresPRBGraphene2014,GalitskiFTINatPhys2011,GaliskiFTIPRB2013,PodolskyFTI2013,BarangerFloquetMajorana,KunduSeradjehMajoranaPRL,Cooperarxiv2014,Cooperdalibardarxiv2014,Demlerarxiv2015}. Some of these experiments have experienced unexpected heating \cite{KetterleHarper}. In an earlier paper, we began addressing the sources of this heating by studying collisions within a one-dimensional BEC in a shaken optical lattice \cite{ChoudhuryMuellerPRA2014}. 
We found that in the presence of strong transverse confinement, interactions can drive instabilities but that there were large parameter ranges where the system was stable. Here we extend that work to the regime where there is no transverse confinement. The additional decay channels generally lead to more dissipation and diffusive dynamics. \\
In this paper, we consider two paradigmatic examples of Floquet systems in which a three dimensional BEC is loaded into a modulated one-dimensional lattice. The difference lies in the nature of the drive: We consider (a) amplitude modulation of lattice depth (similar to the setup in Refs. \cite{TinoNJP2010,TinoPRL2008,MiyakeThesis}) and (b) lattice shaking (similar to the setup in Refs. \cite{ChinFloquet2013,ChinFloquet2014}). These two protocols are illustrated schematically in Fig. \ref{fig1}. We solve the Schr\"{o}dinger equation for both systems and treat the inter-atomic interactions perturbatively. Our analysis is along the lines of Ref. \cite{ChoudhuryMuellerPRA2014}, where we used Fermi's golden rule to study the tight confinement limit. This kinetic approach can be contrasted with quantum coherent arguments such as those used by Creffield in Ref. \cite{CreffieldPRA2008}. Creffield used the Bogoliubov equations to look at a dynamical instability of a BEC in a shaken one dimensional optical lattice. These decay channels are important when the interactions are strong. We consider a different limit: for most recent experiments, the interaction strengths are too low for the interaction-driven modification of the dispersion to be relevant; rather, the physics is dominated by the energy and momentum conserving scattering processes which are accounted for through our kinetic equations. In a field-theoretic formulation this corresponds to only keeping the imaginary part of the self-energy.\\
In section II, we analyze the stability of a BEC in an amplitude modulated tilted optical lattice. A similar analysis can be used for Raman-driven lattices, such as those used to realize the Harper Hamiltonian \cite{KetterleHarper, MiyakeThesis}. It also applies to the study of density induced tunnelling \cite{Sengstockdit} and is related to earlier studies of Bloch oscillations \cite{ArimondoBloch2001}. In section III, we study the stability of a BEC loaded in a shaken optical lattice. This system can be mapped onto a classical spin model which exhibits a paramagnetic-ferromagnetic phase transition as well as a roton-maxon excitation spectrum \cite{ChinFloquet2013,ChinFloquet2014}. In both section II and section III, we obtain analytical results for the lifetime of the BEC. Finally, in section IV, we discuss the general form of the dissipation rate in driven systems.\\
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{Fig1.eps}
\caption{(Color Online) The two protocols of lattice driving: (a) an amplitude modulated tilted lattice and (b) a shaken lattice.}
\label{fig1}
\end{center}
\end{figure}
\section{Amplitude Modulated Lattice}
In this section, we consider a BEC in a deep tilted one dimensional optical lattice. Adjacent sites are offset by an energy $\Delta \gg J$, suppressing tunneling ($J$ being the nearest neighbor tunneling matrix element). There is no transverse confinement, yielding a one-dimensional array of pancakes. The lattice depth is then modulated at a frequency $\omega \approx \Delta$ so that tunneling is restored between the pancakes. The Hamiltonian describing this system is:
\begin{eqnarray}\label{model}
H&=&\int d^2 r_{\! \perp} \sum_j -\left(J+2 \Omega \cos(\omega t)\right) \left(a_{j+1}^\dagger a_j+a_j^\dagger a_{j+1}\right)\nonumber\\
&+& \Delta j a_j^\dagger a_j + \frac{\overline{g}}{2} a_j^\dagger a_j^\dagger a_j a_j +\frac{\hbar^2}{2m} \nabla_{\! \perp} a_j^{\dagger} \nabla_{\! \perp} a_j,
\end{eqnarray}
The constant $\Omega$ parameterizes the modulation of the hopping matrix element. The transverse spatial coordinates are suppressed in our notation: $a_j = a_j (r_{\! \perp})$ where $r_{\! \perp} = (x,y)$ and $\nabla_{\! \perp} = \hat{x} \partial_x + \hat{y} \partial_y$. The coupling constant is
\begin{eqnarray}
\overline{g}&=&\frac{4\pi \hbar^2 a_s}{m} \int\! dz\, \phi(z)^4 \nonumber \\
&=&\frac{4\pi \hbar^2 a_s}{m d}
\label{wan}
\end{eqnarray}
where $\phi(z)$ is the Wannier wavefunction in the $z$ direction, normalized so that $\int |\phi|^2\, dz=1$. This equation defines $d$, the size of the Wannier state, and is valid if $d \gg a_s$ \cite{HazzardMuellerPRA2010}.\\
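As a rough numerical illustration of Eq.~(\ref{wan}) — under the simplifying assumption, not made in the text, that $\phi(z)$ is a Gaussian of width $\sigma$ — the effective size defined by $1/d = \int \phi^4\, dz$ works out to $d = \sqrt{2\pi}\,\sigma$:

```python
# Gaussian ansatz for the Wannier function (our simplifying assumption):
# phi(z) = (pi sigma^2)^(-1/4) exp(-z^2 / (2 sigma^2)).
# Then 1/d = \int phi^4 dz gives d = sqrt(2 pi) * sigma.
import math

def phi(z, sigma):
    return (math.pi * sigma**2) ** -0.25 * math.exp(-z**2 / (2 * sigma**2))

def integrate(f, a, b, n=20000):
    """Composite midpoint rule; ample accuracy for a rapidly decaying integrand."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

sigma = 0.3
norm = integrate(lambda z: phi(z, sigma)**2, -5, 5)
inv_d = integrate(lambda z: phi(z, sigma)**4, -5, 5)
d = 1.0 / inv_d
assert abs(norm - 1.0) < 1e-6                          # phi is normalized
assert abs(d - math.sqrt(2 * math.pi) * sigma) < 1e-6  # d = sqrt(2 pi) sigma
```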
Depending on how one sets up the problem, the $\phi(z)$ used in Eq.(\ref{wan}) will be either the Wannier states of the static lattice, some time average of the instantaneous eigenstates, or even some time-dependent function which yields an oscillating $\overline{g}$. The distinction will be important if the drive frequency is resonant with a band-changing collision or if the modulation amplitude is large. Similarly, the relationship between $J$, $\Omega$ and the lattice parameters may be renormalised by large-amplitude driving, and the time dependence of the parameters may not be sinusoidal. For most present experiments, where the amplitude of oscillation is small, these effects can be ignored. \\
As in \cite{KolovskyPRA2009}, we now perform a gauge transformation to replace the tilt with a time-dependent phase:
\begin{equation}
a_j=b_j e^{-i \Delta j t}.
\end{equation}
The operators $b_j$ will evolve with a new Hamiltonian $H^\prime$, chosen so that
\begin{equation}
i\partial_t b_j = [b_j,H^\prime].
\end{equation}
Specializing to the resonant case $\omega=\Delta$, we Fourier transform this equation, yielding
\begin{equation}\label{rf}
H^\prime = \sum_k \epsilon_{\bf k}(t) b_{\bf k}^\dagger b_{\bf k} + \frac{g}{2 V}\sum_{\bf k_1,k_2,k_3}b_{\bf k_1}^\dagger b_{\bf k_2}^\dagger b_{\bf k_3} b_{\bf k_4},
\end{equation}
where ${\bf k_4 = k_1+ k_2 - k_3}$, ${\bf k} = \{k_z, k_{\perp}\}$ and $g=\overline{g}a$, where $a$ is the lattice spacing. The instantaneous single-particle dispersion is given by:
\begin{eqnarray}
\epsilon_{\bf k}(t)&=& -2\Omega \cos(k_z) -2\Omega \cos(k_z-2 \Delta t) \nonumber \\
&-& 2 J \cos(k_z-\Delta t) + \frac{\hbar^2 k_{\! \perp}^2}{2m}
\end{eqnarray}
where $V$ is the system volume and $b_{\bf k} = \sum_j b_j \exp(i {\bf k} j)$. The best interpretation of this dispersion comes from looking at the group velocity of a wave-packet, $\partial \epsilon/\partial k_z$. There is a drift term, $v_d = 2 \Omega \sin(k_z)$, and an oscillating part, $v_m = 2 \Omega \sin(k_z-2 \Delta t) + 2 J \sin(k_z-\Delta t)$, which is analogous to micromotion in ion traps \cite{Winelandjappphys98}.
\\
We wish to explore the behaviour of a condensate at ${\bf k}=0$. To this end, we break our Hamiltonian into three terms $H^\prime=H_0+H_1+H_2$,
\begin{eqnarray}
H_0&=& \sum_{\bf k} \epsilon_{\bf k}(t) b_{\bf k}^\dagger b_{\bf k} +\frac{g}{2 V} b_0^\dagger b_0^\dagger b_0 b_0 + \frac{2 g}{V}\sum_{\bf k\neq 0} b_0^\dagger b_{\bf k}^\dagger b_{\bf k} b_0, \nonumber\\
\\
H_1&=& \alpha \frac{g}{2 V}\sum_{\bf k\neq 0} b_{\bf -k}^\dagger b_{\bf k}^\dagger b_0 b_0+{\rm H. C.},\\
H_2 &=& H-H_1-H_0
\end{eqnarray}
where $\alpha=1$ is a formal parameter we will use for perturbation theory. As $\alpha$ is accompanied by a factor of the interaction strength $g N/V$, this expansion is equivalent to perturbation theory in $g$. Here $H_0$ contains the single-particle physics and the Hartree-Fock terms, $H_1$ contains interaction terms corresponding to atoms scattering from the condensate to finite momentum states and $H_2$ contains terms where a condensed and a non-condensed atom scatter or two non-condensed atoms scatter. $H_2$ does not contribute at lowest order in perturbation theory, as there are initially no non-condensed atoms. \\
We will imagine that at time $t=0$ we are in the state
\begin{equation}
|0\rangle =\frac{\left(b_0^\dagger\right)^N}{\sqrt{N!}} |{\rm vac}\rangle,
\end{equation}
which is an eigenstate of $H_0$. We will perturbatively calculate how $|\psi(t)\rangle$ evolves. To lowest order,
\begin{eqnarray}
|\psi(t)\rangle
&=& e^{-i \frac {E_0 t}{\hbar}}\left[ |0\rangle+\sum_{\bf k} c_{\bf k}(t) |{\bf k}\rangle +\cdots\right],
\end{eqnarray}
where the state $\vert {\bf k} \rangle$ is given by:
\begin{equation}
\vert {\bf k}\rangle = b_{\bf k}^\dagger b_{\bf -k}^\dagger \frac{\left(b_0^\dagger\right)^{N-2}}{\sqrt{(N-2)!}} |{\rm vac}\rangle,
\end{equation}
and the coefficient is
\begin{equation}\label{sol}
c_{\bf k}(t) =\frac{\Lambda_k}{i \hbar}\int_0^t\!d\tau\,\exp\left[
-i \int_\tau^t 2 \frac{E_k(s)}{\hbar} \,ds
\right],
\end{equation}
where the amplitude is
\begin{equation}
\Lambda_k = \langle {\bf k} | H_1 |0\rangle/\alpha = \frac{g n}{2}
\end{equation}
In Eq.(\ref{sol}), the (Hartree-Fock) excitation energy is
\begin{equation}
E_k(t) = \epsilon_{\bf k}(t)+g n-\epsilon_0(t).
\end{equation}
Performing the integral in the exponent yields
\begin{eqnarray}
\int_\tau^t E_k(s)\,ds&=& E^{(0)}_k \times(t-\tau) \nonumber \\
&+& \frac{\Omega}{\Delta} \left(\sin(k_z-2\Delta \tau) - \sin(k_z-2\Delta t) \right) \nonumber \\
&+& \frac{2 J}{\Delta} \left( \sin(k_z-\Delta \tau) - \sin(k_z-\Delta t)\right) \nonumber\\
\end{eqnarray}
where the ``effective dispersion" is
\begin{eqnarray}
E_k^{(0)}&=&2\Omega [1-\cos(k_z)] + g n +\frac{k_{\! \perp}^2}{2 m}.
\label{jeff}
\end{eqnarray}
\\
This energy corresponds to the spectrum one would obtain from Floquet theory. It takes the form of a tight-binding model along $z$ with a nearest-neighbor hopping of strength $\Omega$. The resonant modulation has restored hopping. We now expand Eq.~(\ref{sol}) in powers of
$J/\Delta$ and $\Omega/\Delta$. Neglecting off-resonant terms and making the standard approximation $\sin^2(xt/2)/(x/2)^2 \approx 2 \pi t\, \delta(x)$, we find
\begin{eqnarray}
|c_{\bf k}|^2 &\approx& \frac{|\Lambda_k|^2}{\hbar} \frac{\Omega^2}{\Delta^2} t\, 2\pi \delta(E^{(0)}_k-\Delta) \nonumber \\
&+& \frac{|\Lambda_k|^2}{\hbar} \frac{4 J^2}{\Delta^2} t \,2\pi \delta(E^{(0)}_k-\Delta/2),
\end{eqnarray}
which is analogous to Fermi's golden rule. The result can also be derived using the formulation in Ref. \cite{floquetunitary}. The first term proportional to $\Omega^2$ is naturally interpreted as coming from a pair of particles absorbing a lattice vibration. The second term involves one particle ``hopping downhill" with the potential energy converted to transverse motion.\\
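The delta functions above come from the standard long-time limit $\sin^2(xt/2)/(x/2)^2 \to 2\pi t\,\delta(x)$. A quick numerical check (a sketch, not part of the derivation) confirms that the kernel carries total weight $2\pi t$:

```python
import numpy as np

# Verify the golden-rule kernel sin^2(x t/2)/(x/2)^2 integrates to 2*pi*t,
# i.e. it behaves as 2*pi*t*delta(x) at long times.
def kernel_weight(t, L=2000.0, n=2_000_001):
    x = np.linspace(-L, L, n)
    # sin(x t/2)/(x/2) = t * sinc(x t/(2*pi)) with numpy's normalized sinc
    f = (t * np.sinc(x * t / (2 * np.pi))) ** 2
    return np.sum(f) * (x[1] - x[0])

for t in (1.0, 5.0):
    print(t, kernel_weight(t) / (2 * np.pi * t))  # ratio close to 1
```

The residual deviation from 1 comes from the slowly decaying $1/x^2$ tail cut off at the finite integration range.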
We now calculate the total rate of scattering out of the condensate. The relevant timescale is
\begin{eqnarray}
\frac{1}{\tau}&=&\frac{1}{N_0}\partial_t N_0=\frac{2}{N}\partial_t \sum_k |c_{\bf k}|^2 \nonumber\\
&=& \frac{1}{\tau_2}+\frac{1}{\tau_1} \nonumber \\
\frac{1}{\tau_2}&=& \frac{2 |\Lambda_k|^2}{N \hbar} \frac{\Omega^2}{\Delta^2}\sum_k 2\pi \delta(E^{(0)}_k-\Delta)\\
\frac{1}{\tau_1}&=&\frac{2 |\Lambda_k|^2}{N \hbar} \frac{4 J^2}{\Delta^2}\sum_k 2\pi \delta(E^{(0)}_k-\Delta/2).
\end{eqnarray}
The sums over $k$ are straightforward. We first note that because $\Omega$ is small, the dependence of $E_k^{(0)}$ on $k_z$ is weak and can be neglected. Thus
the sum over $k$ just yields a constant
\begin{eqnarray}
\rho(\nu)&=&\sum_k 2\pi \delta(E^{(0)}_k-\nu) \nonumber \\
&\approx& \frac{V}{a}\int \frac{d^2 k_{\! \perp}}{(2\pi)^2} 2\pi\delta\left(\frac{k_{\! \perp}^2}{2m}+g n-\nu\right) \nonumber \\
&=& \frac{V m}{a}.
\end{eqnarray}
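That this transverse density of states is flat can be checked numerically; the sketch below (with $\hbar = m = 1$ and a Gaussian-broadened delta function, purely illustrative) shows the $k$-space integral equals $m$ independent of $\nu$:

```python
import numpy as np

# Check the 2D transverse density of states: the angular integral reduces
# int d^2k/(2pi)^2 * 2pi*delta(k^2/2m - nu) to int k dk delta(k^2/2m - nu) = m.
m = 1.0

def dos(nu, eta=1e-3):
    k = np.linspace(0.0, 4.0, 2_000_001)
    # Gaussian-broadened delta function of width eta
    delta = np.exp(-((k**2 / (2 * m) - nu) ** 2) / (2 * eta**2)) / (eta * np.sqrt(2 * np.pi))
    return np.sum(k * delta) * (k[1] - k[0])

print([round(dos(nu), 3) for nu in (0.2, 0.5, 1.0)])  # -> [1.0, 1.0, 1.0]
```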
Putting in the factors of $\hbar$ the total rate of scattering out of the condensate is
\begin{eqnarray}
\frac{1}{\tau}&=&\frac{g^2 n m}{2a \hbar^3} \frac{\Omega^2+4 J^2}{\Delta^2} \nonumber \\
&=& gn \frac{2\pi a_s}{\hbar d}\frac{\Omega^2+4 J^2}{\Delta^2}
\label{t1}
\end{eqnarray}
Some typical numbers are $gn/h\sim 300$ Hz, $\Omega\sim 40$ Hz, $J\sim 5$ Hz, $\Delta\sim 1$ kHz and $d\sim 75$ nm. For $^{87} {\rm Rb}$, the scattering length is $a_s\sim 5$ nm. Thus the lifetime of the BEC is about $750$ ms.
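This estimate is easy to reproduce from Eq.(\ref{t1}); the sketch below uses the typical numbers quoted above, reading all drive parameters as frequencies in units of $h$:

```python
import numpy as np

# Estimate the condensate lifetime from Eq. (t1):
# 1/tau = g*n * (2*pi*a_s/(hbar*d)) * (Omega^2 + 4*J^2)/Delta^2.
gn    = 300.0    # g*n/h in Hz
Om    = 40.0     # Omega/h in Hz
J     = 5.0      # J/h in Hz
Delta = 1000.0   # Delta/h in Hz
a_s   = 5e-9     # 87Rb scattering length, m
d     = 75e-9    # Wannier-state size, m

rate = (2 * np.pi * gn) * (2 * np.pi * a_s / d) * (Om**2 + 4 * J**2) / Delta**2
tau = 1.0 / rate
print(f"tau = {tau*1e3:.0f} ms")  # roughly 750 ms
```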
\section{Shaken Lattice}
\begin{figure}
\begin{center}
\includegraphics[scale=0.8]{chin_scheme.eps}
\caption{(Color Online) Schematic showing first (top) and second (bottom) Floquet quasi-energy bands of an optical lattice: $\epsilon$ is the single-particle energy, $k$ is the quasi-momentum and $a$ is the lattice spacing. Since Floquet energies are only defined modulo the shaking quanta $\hbar \omega$, the energy of the second band has been shifted down by $\hbar \omega$. Alternatively, this shift can be interpreted as working in a dressed basis, where the energy includes a contribution from the phonons. The mixing between the bands depends on the shaking amplitude. Dashed curves correspond to weak shaking, where the first band has its minimum at $k = 0$. Solid curves correspond to strong shaking, where there are two minima at $k = \pm k_0 \ne 0$.}
\label{fig2}
\end{center}
\end{figure}
In this section, we look at the stability of a three-dimensional BEC loaded into a shaken one-dimensional optical lattice. We considered the strictly one-dimensional version in Ref. \cite{ChoudhuryMuellerPRA2014}. We are motivated by the set-up in Ref. \cite{ChinFloquet2014} where Ha {\it et al.} load a three-dimensional BEC of $^{133} {\rm Cs}$ atoms in a one-dimensional lattice and then shake the lattice at a frequency resonant with the zero-energy bandgap of the first two bands. This results in a strong mixing of the first two bands (schematically illustrated in Fig. \ref{fig2}). For our analysis, we label the Bloch band connected adiabatically to the first Bloch band in the limit of zero shaking as the ground band. As is evident from Fig. \ref{fig2}, due to level repulsion between the Bloch bands, the ground band exhibits a bifurcation from having one minimum at $\{{\bf k}=0 \}$ to two minima at $\{{\bf k_{\! \perp} } = 0, k = k_0 \ne 0\}$. This is analogous to the paramagnetic-ferromagnetic phase transition in Landau theory for classical spin models. In the paramagnetic regime the bosons always condense at ${\bf k} = 0$, while in the ferromagnetic regime, the bosons condense at some finite momentum $\{{\bf k_{\! \perp}} = 0, k \ne 0\}$. Here, we first perturbatively analyze the stability of a BEC against collisions in the limit of weak forcing amplitude. This gives an intuitive picture of how the scattering rate varies with amplitude. We then numerically calculate collision rates for larger shaking amplitudes spanning the experimentally interesting critical region. We find that the linearised theory overestimates the damping, but gives the correct order of magnitude.\\
\subsection{Model}
In the frame co-moving with the lattice, the tight-binding Hamiltonian describing the system can be written as $H_0 (t) + H_{\rm int}$:
\begin{eqnarray}
H_0(t) &=& \int d^2 r_{\! \perp} \sum_{ij} \left(-t_{ij} ^{(1)} a_{i}^{\dagger}a_{j} + t_{ij} ^{(2)} b_{i}^{\dagger}b_{j} + h.c.\right) \nonumber \\
&+& \sum_{j} F \cos(\omega t) \left(z_j \left(a_{j}^{\dagger} a_{j} + b_{j}^{\dagger} b_{j}\right) + \chi_j a_j^{\dagger} b_j + \chi_j ^{*} b_j^{\dagger} a_j \right) \nonumber\\
&+& \frac{\hbar^2}{2m} \left(\nabla_{\! \perp} a_j^{\dagger} \nabla_{\! \perp} a_j + \nabla_{\! \perp} b_j^{\dagger} \nabla_{\! \perp} b_j \right)\\
H_{\rm int} &=& \int d^2 r_{\! \perp} \sum_i \frac{\overline{g}_1}{2} a_i ^{\dagger} a_i ^{\dagger} a_i a_i + \frac{\overline{g}_2}{2} b_i ^{\dagger} b_i ^{\dagger} b_i b_i \nonumber \\
&+& 2 \overline{g}_{12} a_i ^{\dagger} b_i ^{\dagger} a_i b_i + H^{\prime}
\end{eqnarray}
where,
\begin{eqnarray}
\chi_j &=& \int\!dz \,\ z w_1^{*}(z-z_j) w_2(z-z_j) \nonumber \\
t_{ij} ^{(1)} &=& \int dz\,\ w_1^{*}(z-z_i)\left(\frac{-\hbar^2}{2 m} \frac{d^2}{dz^2}+ V(z)\right)w_1(z-z_j)\nonumber \\
t_{ij} ^{(2)} &=& \int dz\,\ w_2^{*}(z-z_i) \left(\frac{-\hbar^2}{2 m} \frac{d^2}{dz^2}+V(z)\right)w_2(z-z_j) \nonumber
\end{eqnarray}
with $V(z) = V_0 \sin^2\left(\frac{2 \pi z}{\lambda_L}\right)$ and $H^{\prime}$ is off-resonant. It should also be noted that $\chi_j$ is independent of $j$ and so we can call it $\chi$. If necessary more bands can be included. \\
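As a numerical illustration of the overlap $\chi$, using the harmonic-oscillator approximation to the Wannier functions adopted later in this section (an assumption, not the exact Wannier states), one finds $\chi = d_1/\sqrt{2}$:

```python
import numpy as np

# Evaluate chi = int z w1(z) w2(z) dz with harmonic-oscillator
# approximations to the Wannier functions (illustrative assumption;
# exact Wannier states give a slightly different number).
d1 = 0.3  # oscillator length, in lattice units (arbitrary for this check)
z = np.linspace(-10 * d1, 10 * d1, 200_001)
w1 = (np.pi * d1**2) ** -0.25 * np.exp(-z**2 / (2 * d1**2))
w2 = (np.pi * d1**2) ** -0.25 * np.sqrt(2) * (z / d1) * np.exp(-z**2 / (2 * d1**2))
chi = np.sum(z * w1 * w2) * (z[1] - z[0])
print(chi, d1 / np.sqrt(2))  # the two agree: chi = d1/sqrt(2)
```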
We now perform a basis rotation $\vert\psi\rangle \rightarrow U_{c}(t) \vert \psi\rangle$ with:
\begin{equation}
U_{c}(t) = \exp\left(- \frac{i}{\hbar} \int_{0}^{t} dt' \sum_{j}z_j F\cos(\omega t') (a_{j}^{\dagger} a_{j} + b_{j}^{\dagger} b_{j}) \right)
\label{unitary}
\end{equation}
Under this unitary transformation, the Hamiltonian becomes:
\begin{eqnarray}
H_0^{\prime} (t) &=& U_{c}H_0(t)U_{c}^{-1} - i \hbar U_{c}\partial_t U_{c}^{-1} \nonumber \\
&=& \sum_{ij} \left(-J_{ij} ^{(1)} (t) a_{i}^{\dagger}a_{j} + J_{ij} ^{(2)}(t) b_{i}^{\dagger}b_{j} + h.c.\right) \nonumber\\
&+&\sum_j F \cos(\omega t) \left(\chi a_{j}^{\dagger} b_{j} +\chi ^{*} b_{j}^{\dagger} a_{j} \right) + \sum_{k_{\! \perp}} \frac{\hbar^2 k_{\! \perp}^2}{2m}\nonumber \\
&=& \sum_k \sum_m \cos(m k a)\left(-J_{m} ^{(1)} (t) a_{\bf k}^{\dagger}a_{\bf k} -J_{m} ^{(2)} (t) b_{\bf k}^{\dagger}b_{\bf k}\right)\nonumber \\
&+& \sum_k F\cos(\omega t) \left(\chi a_{\bf k}^{\dagger} b_{\bf k} +\chi ^{*} b_{\bf k}^{\dagger} a_{\bf k} \right) + \sum_{k_{\! \perp}} \frac{\hbar^2 k_{\! \perp}^2}{2m}\nonumber \\
\end{eqnarray}
where,
\begin{eqnarray}
J_{ij}^{\sigma} (t) &=& t_{ij}^{\sigma} \exp\left(-i F \frac{\sin(\omega t)}{\hbar \omega} (z_i-z_j)\right) \nonumber\\
&=& t_{ij}^{\sigma} \exp\left(-i F \frac{\sin(\omega t)}{\hbar \omega}\, a (i-j)\right),
\label{rwa1}
\end{eqnarray}
$a = \lambda_L/2$ is the lattice spacing and $\chi=\chi^{*}$ for a suitable choice of phase for $a_k$ and $b_k$.\\
Thus, in the limit of $F/(\hbar \omega) \ll 1$, the Hamiltonian describing the system is $H = H_{\rm sp} + H_{\rm int}$, where
\begin{eqnarray}
H_{\rm sp} &=& \sum_{\bf k} \epsilon^{(1)}_{\bf k} a_{\bf k}^{\dagger} a_{\bf k} + \epsilon^{(2)}_{\bf k} b_{\bf k}^{\dagger} b_{\bf k} +\chi F \cos(\omega t) \left (a_{\bf k}^{\dagger}b_{\bf k} + b_{\bf k}^{\dagger}a_{\bf k} \right)\nonumber\\
\\
H_{\rm int} &=& \int d^2 r_{\! \perp} \sum_i \frac{\overline{g}_1}{2 } a_i ^{\dagger} a_i ^{\dagger} a_i a_i + \frac{\overline{g}_2}{2} b_i ^{\dagger} b_i ^{\dagger} b_i b_i \nonumber \\
&+& 2 \overline{g}_{12} a_i ^{\dagger} b_i ^{\dagger} a_i b_i + H^{\prime}
\end{eqnarray}
Here, $\epsilon^{(1)}_{\bf k}$ ($\epsilon^{(2)}_{\bf k}$) is the dispersion of the first (second) band and $a_{\bf k}$ ($b_{\bf k}$) is the annihilation operator for particles in the first (second) band.\\
We make the transformation $b_{\bf k} \rightarrow \exp(- i \omega t) b_{\bf k}$ and discard far off-resonant terms (making the rotating wave approximation) to simplify the single-particle terms:
\begin{eqnarray}
H_{\rm RWA} ^{(\rm sp)} &=& \sum_{\bf k} \epsilon^{(1)}_{\bf k} a_{\bf k} ^{\dagger} a_{\bf k} + \epsilon^{(2)}_{\bf k} b_{\bf k} ^{\dagger} b_{\bf k} \nonumber \\
&+& \chi F\left(a_{\bf k} ^{\dagger} b_{\bf k} + b_{\bf k} ^{\dagger} a_{\bf k} \right),
\end{eqnarray}
Here ${\bf k} = \{k, {\bf k_{\! \perp}}\}$, $\epsilon^{(1)}_{\bf k} = \epsilon^{(1)}_{k} + (\hbar k_{\! \perp})^2/(2m) $, $\epsilon^{(2)}_{\bf k} = \epsilon^{(2)}_{k} + (\hbar k_{\! \perp})^2/(2m) - \hbar \omega$. We diagonalise this quadratic form writing
\begin{equation}
H_{\rm RWA} ^{(\rm sp)} = \sum_{\bf k} \overline{\epsilon}^{(1)}_{\bf k} \overline{a}_{\bf k} ^{\dagger} \overline{a}_{\bf k} + \overline{\epsilon}^{(2)}_{\bf k} \overline{b}_{\bf k} ^{\dagger} \overline{b}_{\bf k}
\label{hamrwa}
\end{equation}
The dressed dispersions $\overline{\epsilon}^{(1)}_{\bf k}$ and $\overline{\epsilon}^{(2)}_{\bf k}$ are shown as solid lines in Fig.(\ref{fig2}). The bare dispersions $\epsilon^{(1)}_{\bf k}$ and $\epsilon^{(2)}_{\bf k}$ are shown as dashed lines. We treat $H_{\rm RWA} ^{(\rm sp)}$ both perturbatively and non-perturbatively to obtain scattering rates in the next two subsections.
\subsection{Perturbation Theory}
For small forcing amplitudes, we gain insight by a perturbative expansion in $F$. To linear order in $F$, the dressed operators are
\begin{eqnarray}
\overline{a}_{\bf k} ^{\dagger} &=& a_{\bf k} ^{\dagger} - (\chi F)/(\epsilon^{(2)}_{\bf k} - \epsilon^{(1)}_{\bf k} ) b_{\bf k} ^{\dagger} \\
\overline{b}_{\bf k} ^{\dagger} &=& b_{\bf k} ^{\dagger} + (\chi F)/(\epsilon^{(2)}_{\bf k} - \epsilon^{(1)}_{\bf k}) a_{\bf k} ^{\dagger}
\end{eqnarray}
Because we have made the rotating wave approximation, we have a time-independent problem and can simply apply Fermi's Golden Rule. The standard procedure yields a scattering rate:
\begin{eqnarray}
\frac{d N}{dt} &=& \int \frac{dk}{2 \pi} \int \frac{d^2 k_{\! \perp}}{(2 \pi)^2}\vert \langle \psi_f \vert H_{\rm int} \vert \psi_i \rangle\vert ^2 \sigma \\
\sigma &=& \frac{2 \pi}{\hbar} \delta(\overline{\epsilon}^{(1)}_{\bf k} + \overline{\epsilon}^{(2)}_{\bf k} + \frac{(\hbar k_{\! \perp})^2}{m} - 2 \overline{\epsilon}^{(1)}_{0}) \nonumber
\end{eqnarray}
The initial and final states are
\begin{eqnarray} \label{inf}
\vert \psi_i \rangle = \frac{(\overline{a}_{0} ^{\dagger})^N}{\sqrt{N !}} |0\rangle \nonumber \\
\vert \psi_f \rangle = \overline{b}_{\bf k} ^{\dagger} \overline{a}_{\bf -k} ^{\dagger} \frac{(\overline{a}_{0} ^{\dagger})^{(N-2)}}{\sqrt{(N-2)!}} \vert 0\rangle
\end{eqnarray}
$\vert \psi_i \rangle$ represents all particles in the condensate, while $\vert \psi_f \rangle$ has one particle with momentum ${\bf k}$ in the dressed $b$ band and one with momentum ${\bf -k}$ in the ground band. \\
The transverse integrals are elementary and yield
\begin{equation}
\frac{dN}{dt} = \frac{m}{2 \hbar^3} n^2 \int \frac{dk}{2 \pi} (\frac{g_1}{\Delta_k} - 2 \frac{g_{12}}{\Delta_0})^2 (\chi F)^2,
\label{scatter}
\end{equation}
where $\Delta_k = \left(\epsilon^{(2)}_{k} - \epsilon^{(1)}_{k} \right)$, $\Delta_0 =\left( \epsilon^{(2)}_{0} - \epsilon^{(1)}_{ 0} \right)$ and $g = \overline{g} a$. While Eq.(\ref{scatter}) can always be integrated numerically, we have found a sequence of approximations which let us analytically estimate the scattering rate. First, we approximate the Wannier functions as $w_1(x) = (\frac{1}{\pi d_1^2})^{1/4} \exp(- x^2/2 d_1^2) $ and $w_2(x) = (\frac{1}{\pi d_1^2})^{1/4} \frac{\sqrt{2}\, x}{d_1} \exp(- x^2/2 d_1^2) $, where $d_1 =a/(\pi (V_0)^{1/4})$ ($V_0$ being the lattice depth expressed in units of $E_R$). Within this approximation, $g_1 \approx 2 g_{12}$,
where $g_1 = (4 \pi \hbar^2 a_s a)/(m d)$, $d = d_1\sqrt{2 \pi}$ being the size of the Wannier state and $a_s$ is the scattering length. This is a good approximation, as a numerical calculation using the exact Wannier states for the lattice in Refs. \cite{ChinFloquet2013,ChinFloquet2014} yields $g_1 =(1/0.41)\, g_{12}$. \\
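The relation $g_1 \approx 2 g_{12}$ for Gaussian Wannier functions can be checked directly (a sketch; the exact Wannier states give the quoted $1/0.41$ instead):

```python
import numpy as np

# Check g1 ~ 2*g12 for harmonic-oscillator Wannier functions:
# g1 is proportional to int w1^4 dz, g12 to int w1^2 w2^2 dz.
d1 = 1.0
z = np.linspace(-12, 12, 400_001)
dz = z[1] - z[0]
w1 = (np.pi * d1**2) ** -0.25 * np.exp(-z**2 / (2 * d1**2))
w2 = (np.pi * d1**2) ** -0.25 * np.sqrt(2) * (z / d1) * np.exp(-z**2 / (2 * d1**2))
g1  = np.sum(w1**4) * dz
g12 = np.sum(w1**2 * w2**2) * dz
print(g1 / g12)  # -> 2.0
```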
As a second approximation, we note that except for $k$ near 0, $\Delta_k \gg \Delta_0$. The contribution of those parts to the integral in Eq.(\ref{scatter}) is small, allowing us to neglect the $k$ dependence of the integrand. Hence, we see that the rate of scattering is approximately:
\begin{equation}
\frac{dN}{dt} \approx (g_1 n)^2 (\frac{\chi F}{\Delta_0})^2 \frac{V m}{2 a \hbar^3}
\label{chinscat}
\end{equation}
This gives the timescale for the scattering to be:
\begin{equation}
\tau = \frac{N}{\frac{dN}{dt}} \approx \frac{2 \hbar^3 a}{m g_1^2 n} (\frac{\Delta_0}{\chi F})^2 .
\label{t2pert}
\end{equation}
Stronger interactions, higher density and larger forcing amplitudes all increase the scattering rate.
\subsection{Beyond Perturbation Theory}
In this section, we extend our results to larger $F$. This allows us to probe the critical and ferromagnetic region. Generically, we write
\begin{eqnarray}
\overline{a}_{\bf k} ^{\dagger} &=& u_{k} a_{\bf k} ^{\dagger} + v_{k} b_{\bf k} ^{\dagger} \\
\overline{b}_{\bf k} ^{\dagger} &=& -v_{k} a_{\bf k} ^{\dagger} + u_{k} b_{\bf k} ^{\dagger}
\end{eqnarray}
with $\vert u_{k} \vert^2 + \vert v_{k} \vert^2 = 1$. In particular,
\begin{eqnarray}
u_{k} &=& \frac{1}{\sqrt{1+\vert \gamma_k\vert^2}} ; \,\, v_{k} = \frac{\gamma_k}{\sqrt{1+\vert \gamma_k\vert^2}} \nonumber \\
\frac{1}{\gamma_k} &=& \frac{\sqrt{4 F^2 \chi ^2 + \delta \epsilon_k^2}+\delta\epsilon_k}{2 \chi F} \nonumber \\
\delta \epsilon_k &=& \epsilon _{k} ^{(1)}-\epsilon_{k} ^{(2)} \nonumber
\end{eqnarray}
One can invert the above relationships to obtain:\\
\begin{eqnarray}
a_{\bf k} ^{\dagger} &=& u_{k} \overline{a}_{\bf k} ^{\dagger} - v_{k} \overline{b}_{\bf k} ^{\dagger} \\
b_{\bf k} ^{\dagger} &=& v_{k} \overline{a}_{\bf k} ^{\dagger} + u_{k} \overline{b}_{\bf k} ^{\dagger}
\end{eqnarray}
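These coefficients can be sanity-checked numerically: they are normalized, and for small $F$ the mixing reduces to the perturbative result of the previous subsection (the parameter values below are illustrative):

```python
import numpy as np

# Check the band-mixing coefficients: |u|^2 + |v|^2 = 1, and for small
# shaking amplitude F the mixing reduces to v ~ chi*F/(eps1 - eps2).
chi = 1.0

def uv(F, deps):  # deps = eps1 - eps2
    gamma = 2 * chi * F / (np.sqrt(4 * F**2 * chi**2 + deps**2) + deps)
    u = 1.0 / np.sqrt(1 + gamma**2)
    v = gamma / np.sqrt(1 + gamma**2)
    return u, v

u, v = uv(F=1e-3, deps=0.5)
print(u**2 + v**2)          # normalized, equals 1
print(v, chi * 1e-3 / 0.5)  # v matches the perturbative chi*F/deps
```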
For $F < F_c$ ($F_c$ being the critical shaking force), we use Eq. (\ref{inf}) as our initial and final states. For $F>F_c$, we use
\begin{eqnarray}
\vert \psi_i \rangle &=& \frac{(\overline{a}_{{\bf k_0}} ^{\dagger})^N}{\sqrt{N !}} |0\rangle \nonumber \\
\vert \psi_f ^{(1)}\rangle &=& \overline{b}_{\bf k_0+k} ^{\dagger} \overline{a}_{\bf k_0-k} ^{\dagger} \frac{(\overline{a}_{\bf k_0} ^{\dagger})^{(N-2)}}{\sqrt{(N-2)!}} \vert 0\rangle \nonumber \\
\vert \psi_f ^{(2)}\rangle &=& \overline{b}_{\bf k_0+k} ^{\dagger} \overline{b}_{\bf k_0-k} ^{\dagger} \frac{(\overline{a}_{\bf k_0} ^{\dagger})^{(N-2)}}{\sqrt{(N-2)!}} \vert 0\rangle \nonumber \\
\end{eqnarray}
The states are analogous to those in Eq.(\ref{inf}). In particular, $\vert \psi_i \rangle$ has all particles in a finite-momentum condensate (${\bf k_0} = \{k=k_0,{\bf k_{\! \perp}} = 0\}$). \\
The scattering rate is then:
\begin{eqnarray}
\frac{dN}{dt} &=& \int \frac{dk}{2 \pi} \int \frac{d^2 k_{\! \perp}}{(2 \pi)^2}\vert \langle \psi_f^{(1)} \vert H_{\rm int} \vert \psi_i \rangle\vert ^2 \sigma_{12} \nonumber \\
&+& \int \frac{dk}{2 \pi} \int \frac{d^2 k_{\! \perp}}{(2 \pi)^2}\vert \langle \psi_f^{(2)} \vert H_{\rm int} \vert \psi_i \rangle\vert ^2 \sigma_{22}
\end{eqnarray}
where
\begin{eqnarray}
\sigma_{12} &=& \frac{2 \pi}{\hbar} \delta(\overline{\epsilon}^{(1)}_{k_0-k} + \overline{\epsilon}^{(2)}_{k_0+k} + \frac{(\hbar k_{\! \perp})^2}{m} - 2 \overline{\epsilon}^{(1)}_{k_0}) \nonumber \\
\sigma_{22} &=& \frac{2 \pi}{\hbar} \delta(\overline{\epsilon}^{(2)}_{k_0-k} + \overline{\epsilon}^{(2)}_{k_0+k} + \frac{(\hbar k_{\! \perp})^2}{m} - 2 \overline{\epsilon}^{(1)}_{k_0}) \nonumber
\end{eqnarray}
In general $g_{12} = \alpha g_1$ and $g_2 = \beta g_1$. Approximating the Wannier functions with the harmonic oscillator wave functions would yield $\alpha= 1/2$ and $\beta = 3/4$. Rather than using this approximation, we numerically calculate the maximally localised Wannier functions for the experimental lattice depth of $V = 7 E_R$ and find that $\alpha = 0.41$ and $\beta = 0.6$.\\
Extracting the dimensional factors,
\begin{equation}
\tau = \frac{N}{\frac{dN}{dt}} = \frac{2 \hbar^3 a}{ m g_1^2 n \Gamma}
\label{t2}
\end{equation}
where the dimensionless parameter $\Gamma$ depends on the forcing strength and can be expressed as
\begin{widetext}
\begin{eqnarray}
\Gamma &=& \int \frac{dk}{2 \pi} (\vert - u_{k_0-k} v_{k_0+k} u_{k_0} u_{k_0} + \alpha \, u_{k_0+k} v_{k_0-k} v_{k_0} v_{ k_0} + 2\,\beta(u_{ k_0+k} u_{k_0-k} u_{k_0} v_{k_0} - v_{k_0+k} v_{k_0-k} u_{k_0} v_{ k_0}) \vert ^2) \nonumber \\
&+& (\vert v_{ k_0-k} v_{ k_0+k} u_{ k_0} u_{ k_0} + \alpha\, u_{ k_0+k} u_{ k_0-k} v_{ k_0} v_{ k_0} - 2\, \beta(v_{ k_0+k} u_{ k_0-k} u_{ k_0} v_{ k_0} + u_{ k_0+k} v_{ k_0-k} u_{ k_0} v_{ k_0} ) \vert ^2)
\label{scatter1}
\end{eqnarray}
\end{widetext}
\begin{figure}
\begin{center}
\includegraphics[scale=0.55]{compare_rates_1.eps}
\caption{(Color Online) Plot of the dimensionless decay rate $\Gamma$ as a function of the shaking amplitude $F$, for $\omega = 5.5 \,\ E_R/\hbar$ and $V_0 = 7.0 E_R$. The dotted line shows $\Gamma$ calculated using Eq.(\ref{scatter1}), while the thick line shows the function $ (\frac{\chi F}{\Delta_0})^2$ corresponding to the rate in Eq.(\ref{t2pert}). The kink marks the paramagnetic-ferromagnetic phase transition.}
\label{chin}
\end{center}
\end{figure}
The dotted line in Fig.(\ref{chin}) shows $\Gamma$ using $\alpha = 0.41$ and $\beta = 0.6$, corresponding to a lattice depth of $V= 7 E_R$. There is a distinct kink in the $\Gamma$ vs $F$ plot which shows the paramagnetic-ferromagnetic phase transition. For all $F$, the numerical calculation gives a smaller $\Gamma$ than the perturbative estimate in Eq.(\ref{chinscat}). For the experimental lattice depths, $d \sim 100$ nm, $gn/h \sim 150$ Hz, $a_s \sim 1.5$ nm, yielding $\tau \sim 1$ s, which matches experimental observations \cite{ChinFloquet2013}.
\section{General Conclusions}
\subsection{Form of the scattering rate}
Generically, two-particle scattering will give a rate proportional to $g^2 n$. The instabilities studied here relied upon scattering into transverse modes. These rates can be modified by tuning the density of these modes. For example, one could imagine engineering band gaps with transverse optical lattices. Note that such lattices may provide additional confinement and increase the effective $g$, inadvertently increasing the decay rate.
\subsection{Diffusive Dynamics}
The same dissipation which causes the condensate to decay can also lead to diffusive motion. Such diffusion may provide another way to study this physics. We model the kinetics by a Boltzmann equation:
\begin{equation}
\frac{\partial n (z,p) }{\partial t} + v(p) \frac{\partial n (z,p) }{\partial z} = -\frac{n(z,p) - (n (z)/2\pi)}{\tau}
\end{equation}
Here $n(z,p)$ is the coarse-grained number of particles whose position along the lattice direction is $z$ and whose quasi-momentum in that direction is $p$, while $n(z) = \int dp \, n(z,p)$ is the linear density and the group velocity is $v(p) = \partial{\epsilon}/\partial{p}$. We have integrated over the transverse directions. The $\tau$ appearing here is exactly the same as in Eqs.(\ref{t1}), (\ref{t2pert}) and (\ref{t2}). The collision term takes this simple form because atoms are scattered to random values of momentum in the lattice direction after a collision. Taking the zeroth and first moments of the Boltzmann equation yields typical hydrodynamic equations
\begin{eqnarray}\
\frac{\partial n(z)}{\partial t} + \frac{\partial J}{\partial z} &=& 0 \\
\frac{\partial J}{\partial t} + \frac{\partial}{\partial z}(\langle v^2 \rangle n(z)) &=& -\frac{J}{\tau}
\end{eqnarray}
where the current $J$ is defined by $J = \int dp \, v(p) n(z,p)$. In the overdamped limit, these can be rewritten as a diffusion equation with diffusion constant $D= \langle v^2\rangle \tau \propto J_{\rm eff} ^2 \tau$, where $J_{\rm eff}$ is the effective tunnelling coefficient (cf. Eq.(\ref{jeff})). Observing the diffusive motion may be one way of experimentally measuring $\tau$, complementing more direct methods \cite{SchneiderNATP2012, SchneiderPRL2013}.
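For a uniformly filled tight-binding band $\epsilon(p) = -2 J_{\rm eff}\cos(pa)$, the velocity average entering $D$ is $\langle v^2\rangle = 2 J_{\rm eff}^2 a^2/\hbar^2$; a minimal sketch (with $\hbar = a = 1$):

```python
import numpy as np

# Sketch: diffusion constant D = <v^2> tau for a uniformly filled
# tight-binding band eps(p) = -2*Jeff*cos(p*a), with hbar = a = 1.
Jeff, tau = 1.0, 1.0
p = np.linspace(-np.pi, np.pi, 100_001)
v = 2 * Jeff * np.sin(p)   # group velocity d(eps)/dp
v2_avg = np.mean(v**2)     # averages to 2*Jeff^2
D = v2_avg * tau
print(v2_avg, D)           # both close to 2.0
```

The uniform filling reflects the collision term above, which randomizes the quasi-momentum along the lattice direction.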
\section{Summary and Outlook}
In this paper we analysed the stability of a BEC in a driven one-dimensional optical lattice with no transverse confinement. We found that due to the presence of transverse modes, the BEC is always unstable, and we calculated the decay rates. Experimentally, this instability would be manifest in many forms, including heating and diffusive dynamics. In previous work, we found that in the limit of extremely tight transverse confinement the BEC has regimes of stability. \\
Generally, experiments are neither in the tight-binding limit, nor in the limit with no transverse confinement. The results in the present paper are applicable as long as the level spacing of the quantum modes in the transverse direction ($\sim 100$ Hz for the experiment in Ref.\cite{ChinFloquet2014}) is small compared to the drive frequency $\omega$ ($\sim 7.3$ kHz for the experiment in Ref.\cite{ChinFloquet2014}). The results from \cite{ChoudhuryMuellerPRA2014} apply in the opposite limit.\\
\section*{Acknowledgements}
We thank the Ketterle group (Wolfgang Ketterle, Colin J. Kennedy, William Cody Burton and Woo Chang Chung) and the Chin group (Cheng Chin and Logan Clark) for correspondence about their experiments. We are particularly indebted to Wolfgang Ketterle for suggesting we investigate transverse mode instabilities. We acknowledge support from ARO-MURI Non-equilibrium Many-body Dynamics grant (W911NF-14-1-0003).
\section{#1}\setcounter{equation}{0}}
\newcommand{\, {\rm GeV}}{\, {\rm GeV}}
\newcommand{\, {\rm MeV}}{\, {\rm MeV}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\end{equation}}{\end{equation}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\title{$K \to \pi$ vector form factor with $N_f=2+1+1$ Twisted Mass fermions}
\ShortTitle{$K \to \pi$ vector form factor with $N_f=2+1+1$ Twisted Mass fermions}
\author{ N. Carrasco$^{(a)}$, P. Lami$^{(a,b)}$, V. Lubicz$^{(a,b)}$, E. Picca$^{(a,b)}$,
\speaker{L. Riggio}$^{(a)}$, S. Simula$^{(a)}$, C. Tarantino$^{(a,b)}$
\\
\it $^{(a)}$ INFN, Sezione di Roma Tre, Rome, Italy. Email: \email{[email protected]}, \email{[email protected]}, \email{[email protected]}
\it $^{(b)}$ Dipartimento di Matematica e Fisica, Universit\`a Roma Tre, Rome, Italy. Email: \email{[email protected]}, \email{[email protected]}, \email{[email protected]}, \email{[email protected]}
\\
\bf{For the ETM Collaboration}
}
\abstract{ We present a lattice QCD determination of the vector form factor of the kaon semileptonic decay $K\to \pi \ell \nu$ which is relevant for the
extraction of the CKM matrix element $|V_{us}|$ from experimental data. Our result is based on the gauge configurations produced by the
European Twisted Mass Collaboration with $N_f=2+1+1$ dynamical fermions. We simulated at three different values of the lattice
spacing and with pion masses as small as $210$ MeV. Our preliminary estimate for the vector form factor at zero momentum transfer is $f_+(0)=0.9683(65)$, where the uncertainty is both statistical and systematic. By combining our result with the experimental value of $f_+(0)|V_{us}|$ we obtain $|V_{us}|=0.2234(16)$, which satisfies the unitarity constraint of the Standard Model at the permille level.}
\FullConference{The 32nd International Symposium on Lattice Field Theory,\\
23-28 June, 2014\\
Columbia University New York, NY}
\begin{document}
\section{Introduction and simulation details}
\label{sec:intro}
In the Standard Model (SM) the relative strength of the flavor-changing weak currents is parametrized by the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements. An accurate determination of the CKM matrix elements is therefore crucial both for testing the SM and for searching for new physics (NP).
In this letter we present the determination of the matrix element $|V_{us}|$ from the study of semileptonic kaon (Kl3) decays on the lattice.
This determination is obtained by combining lattice results for the $K\to \pi \ell \nu$ form factor $f_+(0)$ with the experimental measurement of $f_+(0)|V_{us}|$ extracted from the decay rate of the process. Another possible approach for the determination of $|V_{us}|$ consists in combining the experimental measurements of pion and kaon leptonic decays with the lattice results for the ratio of decay constants $f_K / f_\pi$. This calculation has also been performed by our collaboration and the results are presented in \cite{DECAYCONSTANTS}.
In this contribution we used the ensembles of gauge configurations produced by the European Twisted Mass (ETM) Collaboration with four flavors of dynamical quarks ($N_f = 2+1+1$), which include in the sea, besides two light mass degenerate quarks, also the strange and the charm quarks.
The simulations were carried out at three different values of the lattice spacing $a$ to allow a controlled extrapolation to the continuum limit, the smallest being approximately $0.06$ fm, and at different lattice volumes. The simulated pion masses used in this analysis range from $210 \, {\rm MeV}$ to approximately $450 \, {\rm MeV}$.
For each ensemble we used a number of gauge configurations corresponding to a separation of 20 trajectories to avoid autocorrelations.
The gauge fields were simulated using the Iwasaki gluon action \cite{Iwasaki:1985we}, while sea quarks were implemented with the Wilson Twisted Mass Action \cite{Frezzotti:2003xj}, which at maximal twist allows for an automatic ${\cal{O}}(a)$-improvement \cite{Frezzotti:2003ni}.
To avoid mixing in the strange and charm sectors we adopted a non-unitary setup in which valence quarks are simulated for each flavor using the Osterwalder-Seiler action \cite{Osterwalder:1977pc}.
At each lattice spacing different values of light and strange quark masses have been considered to study the dependence of the form factor $f_+(0)$ on $m_\ell$ and to allow for a small interpolation in $m_s$. In our final result for the form factor $f_+(0)$ we used for the physical values of $m_\ell$ and $m_s$ the values obtained in \cite{Carrasco:2014cwa}.
Valence quarks were simulated at different values of the spatial momenta using Twisted Boundary conditions \cite{Bedaque:2004kc,deDivitiis:2004kq,Guadagnoli:2005be}, allowing us to cover both the spacelike and the timelike region of the $4-$momentum transfer.
For more details on the simulation the reader can consult \cite{Carrasco:2014cwa}.
In the present work we studied a combination of three-point correlation functions in order to extract the form factors $f_+$ and $f_0$ as functions of the squared $4-$momentum transfer $q^2$, the light quark mass $m_\ell$ and the lattice spacing $a$.
The small interpolation in the strange quark mass, which has been simulated at three different values close to the physical one, is addressed with a simple quadratic spline.
Our result is $f_+(0)=0.9683(65)$, where the uncertainty includes both statistical and systematic effects. This allows us to extract the value of the CKM matrix element $|V_{us}|=0.2234(16)$, which is compatible with the unitarity constraint of the Standard Model.
\section{Extraction of the form factors at $q^2=0$}
The matrix element of the vector current between two pseudoscalar mesons decomposes into two form factors, $f_+$ and $f_-$,
\begin{equation}
\label{eq:matrixelementdecomposition}
\left< \pi(p')|V_{\mu}|K(p) \right>=(p_\mu+p'_\mu)f_+(q^2)+(p_\mu-p'_\mu)f_{-}(q^2),
\end{equation}
which depend on the square of the $4-$momentum transfer $q_\mu=p_\mu-p'_\mu$.
The scalar form factor $f_0$ is defined as
\begin{equation}
\label{eq:f0def}
f_0(q^2)=f_+(q^2)+\frac{q^2}{M_K^2-M_{\pi}^2}f_{-}(q^2),
\end{equation}
and therefore satisfies the relation $f_+(0)=f_0(0)$.
The matrix element in Eq.~(\ref{eq:matrixelementdecomposition}) can be derived from the time dependence of a convenient combination of Euclidean three-point correlation functions in lattice QCD.
As is well known, at large time distances the three-point functions can be written as
\begin{equation}
\label{eq:3pt}
C_\mu ^{K\pi } \left( {t_x ,t_y ,\vec p,\vec p^{~'}} \right)_{ ~ \overrightarrow{
t_x \gg a ~~
\left( {t_y - t_x } \right) \gg a} ~ } Z_V \frac{{\sqrt {Z_K Z_\pi } }} {{4E_K E_\pi }}\left\langle {\pi \left( {p'} \right)} \right|\; V_\mu \;\left| {K\left( p \right)} \right\rangle e^{ - E_K t_x - E_\pi \left( {t_y - t_x } \right)} ,
\end{equation}
and therefore they can be combined in the ratio $R_\mu$
\begin{eqnarray}
& R_\mu &(t,\vec{p},\vec{p'}) =\frac{ C_{\mu}^{K\pi}(t,\frac{T}{2},\vec{p},\vec{p'} ) C_{\mu}^{\pi K}(t,\frac{T}{2},\vec{p'},\vec{p})}{C_{\mu}^{\pi\pi}(t,\frac{T}{2},\vec{p'},\vec{p'})C_{\mu}^{KK}(t,\frac{T}{2},\vec{p},\vec{p})}, \\
&R_\mu &_{ ~ \overrightarrow{
t \gg a } ~ } \frac{\left< \pi(p')|V_{\mu}|K(p) \right> \left< K(p) |V_{\mu}| \pi(p') \right>}{\left< \pi(p')|V_{\mu}| \pi(p') \right>\left< K(p) |V_{\mu}|K(p) \right>},
\end{eqnarray}
which is independent of the vector renormalization constant $Z_V$ and of the matrix elements $Z_\pi = | \langle \pi | \overline{u} \gamma_5 d | 0 \rangle|^2$ and $Z_K = | \langle K | \overline{s} \gamma_5 u | 0 \rangle|^2$.
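The cancellation of $Z_V$ and of the overlap factors in the double ratio can be checked numerically with a toy model of the correlators. The following sketch (all numbers are illustrative, not lattice data) builds the asymptotic form of Eq.~(\ref{eq:3pt}) for the four correlators entering $R_\mu$ and verifies that the double ratio reduces to the combination of matrix elements alone:

```python
import math

# Toy model of the asymptotic three-point function:
# C^{AB} ~ Z_V sqrt(Z_A Z_B)/(4 E_A E_B) <B|V|A> exp(-E_A t - E_B (T/2 - t)).
# All numbers below are hypothetical, chosen only to exercise the algebra.
Z_V, Z_pi, Z_K = 0.7, 1.3, 1.8        # renormalization and overlap factors
E_K, E_pi, T, t = 0.31, 0.14, 48, 12  # lattice units
V = {("K", "pi"): 0.95, ("pi", "K"): 0.95,   # <pi|V|K>, <K|V|pi>
     ("pi", "pi"): 1.10, ("K", "K"): 1.20}   # elastic matrix elements

def C(src, snk, Zsrc, Zsnk, Esrc, Esnk):
    """Asymptotic three-point correlator for the transition src -> snk."""
    return (Z_V * math.sqrt(Zsrc * Zsnk) / (4 * Esrc * Esnk)
            * V[(src, snk)] * math.exp(-Esrc * t - Esnk * (T / 2 - t)))

R = (C("K", "pi", Z_K, Z_pi, E_K, E_pi) * C("pi", "K", Z_pi, Z_K, E_pi, E_K)
     / (C("pi", "pi", Z_pi, Z_pi, E_pi, E_pi) * C("K", "K", Z_K, Z_K, E_K, E_K)))

# All Z factors and exponentials cancel in the double ratio:
expected = (V[("K", "pi")] * V[("pi", "K")]
            / (V[("pi", "pi")] * V[("K", "K")]))
assert abs(R - expected) < 1e-12
```

Any choice of $Z_V$, $Z_\pi$ and $Z_K$ leaves $R$ unchanged, which is the practical motivation for fitting the double ratio rather than the individual correlators.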
So the matrix elements $\left<V_0\right>$ and $\left<V_i\right>$ can be extracted from the $R_\mu(t,\vec{p},\vec{p'})$ plateaux as
\begin{eqnarray}
\left< \pi(p')|V_{0}|K(p) \right> & = &\left< V_0 \right>=2\sqrt{R_0}\sqrt{EE'},\\
\left< \pi(p')|V_{i}|K(p) \right> & = &\left< V_i \right>=2\sqrt{R_i}\sqrt{pp'},
\end{eqnarray}
and used to extract the form factors through the relations
\begin{eqnarray}
\label{eq:f+&f-}
f_{+}(q^{2})& = &\frac{(E-E')\left<V_i\right>-(p_i-p'_i)\left<V_0\right>}{2Ep'_i-2E'p_i},\nonumber \\
f_{-}(q^{2})& = &\frac{(p_i+p'_i)\left<V_0\right>-(E+E')\left<V_i\right>}{2Ep'_i-2E'p_i}.
\end{eqnarray}
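As a consistency check, the relations above invert the decomposition of Eq.~(\ref{eq:matrixelementdecomposition}). The sketch below (with purely illustrative kinematics in lattice units) builds $\left<V_0\right>$ and $\left<V_i\right>$ from assumed values of $f_+$ and $f_-$ and recovers them through Eqs.~(\ref{eq:f+&f-}):

```python
import math

# Hypothetical kinematics (lattice units) and illustrative form factors.
M_K, M_pi = 0.22, 0.10
p_i, pp_i = 0.08, 0.05                      # spatial momenta of the K and pi
E  = math.sqrt(M_K**2 + p_i**2)             # dispersion relation, kaon
Ep = math.sqrt(M_pi**2 + pp_i**2)           # dispersion relation, pion
f_plus, f_minus = 0.97, -0.12

# Matrix elements from the vector-current decomposition:
V0 = (E + Ep) * f_plus + (E - Ep) * f_minus
Vi = (p_i + pp_i) * f_plus + (p_i - pp_i) * f_minus

# Invert with the relations for f_+ and f_- and recover the inputs.
den = 2 * E * pp_i - 2 * Ep * p_i
fp = ((E - Ep) * Vi - (p_i - pp_i) * V0) / den
fm = ((p_i + pp_i) * V0 - (E + Ep) * Vi) / den
assert abs(fp - f_plus) < 1e-12 and abs(fm - f_minus) < 1e-12
```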
The energies appearing in Eqs.~(\ref{eq:f+&f-}) are extracted from the dispersion relation, with the masses obtained by fitting the two-point correlation functions of pseudoscalar mesons at rest.
Finally $f_0(q^2)$ can be calculated from Eq.~(\ref{eq:f0def}).
\begin{figure}
\begin{center}
\scalebox{0.3}{
\includegraphics{matel-eps-converted-to.pdf}
}
\end{center}
\vspace*{-0.8cm}
\caption{Example of the matrix elements $\left< V_0 \right>$ and $\left< V_i \right>$ extracted from the quantity $R_{\mu}$ corresponding to an ensemble with $\beta=1.90$, $L/a=24$, $a\mu_l=0.0080$, $a\mu_s=0.0225$, $|\vec{p}|=|\vec{p'}|\simeq 87~{\rm MeV}$.}
\label{fig:matel}
\end{figure}
An example of the extraction of the matrix elements can be seen in Fig.~\ref{fig:matel}.
The next step was the study of the form factors $f_+$ and $f_0$ as a function of the $4-$momentum transfer to interpolate the data at $q^2=0$. This was done by fitting simultaneously $f_+$ and $f_0$ using the $z-$expansion (Eq.~(\ref{eq:zexpansion})) as parametrized in \cite{Bourrely:2008za}, and by imposing the condition $f_+(0)=f_0(0)$.
\begin{eqnarray}
\label{eq:zexpansion}
f_ + (q^2 ) = \frac{{a_0 + a_1 \left( {z + \frac{1}{2}z^2 } \right)}}{{1 - \frac{{q^2 }}{{M^2 _V }}}}, \nonumber \\
f_0 (q^2 ) = \frac{{b_0 + b_1 \left( {z + \frac{1}{2}z^2 } \right)}}{{1 - \frac{{q^2 }}{{M^2 _S }}}}.
\end{eqnarray}
In Eq.~(\ref{eq:zexpansion}) $M_S$ and $M_V$ are the scalar and the vector pole mass respectively, and $z$ is defined as
\begin{equation}
z = \frac{{\sqrt {t_ + - q^2 } - \sqrt {t_ + - t_0 } }}{{\sqrt {t_ + - q^2 } + \sqrt {t_ + - t_0 } }}
\end{equation}
where $t_+$ and $t_0$ are
\begin{eqnarray}
t_ + & = & \left( {M_K + M_\pi } \right)^2 \nonumber \\
t_0 & = & \left( {M_K + M_\pi } \right)\left( {\sqrt {M_K } - \sqrt {M_\pi } } \right)^2 .
\end{eqnarray}
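A quick numerical look at the conformal variable (using physical meson masses, for illustration only) shows why a two-parameter $z$-expansion is adequate: $z$ vanishes at $q^2=t_0$ and remains below $\sim 0.05$ in magnitude over the whole semileptonic region:

```python
import math

# Physical meson masses in GeV (PDG values, used here only for illustration).
M_K, M_pi = 0.4937, 0.1396
t_plus = (M_K + M_pi) ** 2
t_0 = (M_K + M_pi) * (math.sqrt(M_K) - math.sqrt(M_pi)) ** 2

def z(q2):
    """Conformal variable of the z-expansion."""
    a = math.sqrt(t_plus - q2)
    b = math.sqrt(t_plus - t_0)
    return (a - b) / (a + b)

q2_max = (M_K - M_pi) ** 2
assert abs(z(t_0)) < 1e-12          # z vanishes at q2 = t_0
assert abs(z(0.0)) < 0.05           # ... and stays small at q2 = 0
assert abs(z(q2_max)) < 0.05        # ... and at the endpoint q2_max
```

With $|z| \lesssim 0.05$ everywhere, the quadratic term in Eq.~(\ref{eq:zexpansion}) is already a small correction, so higher orders can safely be neglected.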
We also tried to fit the $q^2$ dependence using other fit ans\"atze (e.g., a polynomial expression in $q^2$), also including in the fit the data corresponding to large negative transferred momenta, and obtained nearly identical results, as can be seen in Fig.~\ref{fig:q2fit}.
\begin{figure}[htb!]
\centering
\scalebox{0.25}{\includegraphics{f0fp_IB1_mul2_val23allq2poleVSquad.pdf}}
\scalebox{0.25}{\includegraphics{f0fp_IB1_mul2_val23allq2.pdf}}
\vspace*{-0.8cm}
\caption{\it Left panel: interpolation of the form factors to $q^2=0$ using the $z$ expansion (continuous line) compared to the one obtained with a polynomial fit (dashed line). Right panel: interpolation of the form factors to $q^2=0$ using only the data around $q^2=0$, for this ensemble $a^2q^2 < 0.01$ (continuous line), or a larger range in $q^2$ (dashed line). Both plots correspond to $\beta=1.90$, $L/a=32$, $a\mu_l=0.0040$, $a\mu_s=0.0225$.
}
\label{fig:q2fit}
\end{figure}
\section{Extrapolation of $f_+(0)$}
In order to compute the physical value of the vector form factor $f_+(0)$, we first performed a small interpolation of our lattice data to the physical value of the strange quark mass $m_s$ determined in \cite{Carrasco:2014cwa}.
Then we analyzed the dependence of $f_+(0)$ on the (renormalized) light-quark mass $m_\ell$ and on the lattice spacing, and extrapolated it to the physical point using both an SU(2) and an SU(3) ChPT prediction. Note, however, that the SU(3) fit was also performed at the fixed physical value of the strange quark mass.\\
The SU(2) ChPT prediction at next-to-leading order (NLO) for $f_+(0)$ \cite{Flynn:2008tg} reads as follows:
\begin{equation}
\label{eq:SU2fit}
f_ + (0) = F^ + _0 \left( {1 - \frac{3}{4}\xi \log \xi + P_2 \xi + P_3 a^2 } \right)
\end{equation}
where $\xi = 2B m_\ell / (16\pi^2f^2)$, with $B$ and $f$ being the SU(2) low-energy constants (LECs) entering the LO chiral Lagrangian, determined in \cite{Carrasco:2014cwa}. $F^+_0$, $P_2$ and $P_3$, on the other hand, are left as free fit parameters.
In SU(3) ChPT the expression for the vector form factor $f_+(0)$ is the following:
\begin{equation}
\label{eq:SU3fit}
f_ + (0) = 1 + f_2 + \Delta f,
\end{equation}
where $f_2$ can be written in full QCD \cite{Gasser:1984ux,Gasser:1984gg} as:
\begin{equation}
f_2^{\,{\rm full\;QCD}} = \frac{3}{2}H_{\pi K} + \frac{3}{2}H_{\eta K},
\end{equation}
with
\begin{equation}
H_{PQ} = - \frac{1}{{64\pi ^2 f^2 _\pi }}\left[ {M^2 _P + M^2 _Q + \frac{{2M^2 _P M^2 _Q }}{{M^2 _P - M^2 _Q }}\log \frac{{M^2 _Q }}{{M^2 _P }}} \right].
\end{equation}
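Two properties of this loop function can be checked numerically with the sketch below, where the decay constant is taken in the normalization $f_\pi \simeq 130$ MeV (an assumption of this sketch, chosen so that the classic estimate $f_2 \simeq -0.023$ is reproduced): $H_{PQ}$ is symmetric under $P \leftrightarrow Q$, and it vanishes in the SU(3) limit $M_P \to M_Q$, as the Ademollo-Gatto theorem requires:

```python
import math

# Decay constant in the normalization f_pi ~ 130 MeV (an assumption of
# this sketch) and physical meson masses in GeV.
f_pi = 0.1304
M_pi, M_K, M_eta = 0.1396, 0.4937, 0.5479

def H(MP, MQ):
    """Loop function H_{PQ} entering f_2 (masses in GeV)."""
    return -(MP**2 + MQ**2
             + 2 * MP**2 * MQ**2 / (MP**2 - MQ**2) * math.log(MQ**2 / MP**2)
             ) / (64 * math.pi**2 * f_pi**2)

f2 = 1.5 * H(M_pi, M_K) + 1.5 * H(M_eta, M_K)

# H_PQ is symmetric under P <-> Q ...
assert abs(H(M_pi, M_K) - H(M_K, M_pi)) < 1e-12
# ... and vanishes in the SU(3) limit M_P -> M_Q, so that f_+(0) -> 1
# there, as the Ademollo-Gatto theorem requires.
assert abs(H(0.4937, 0.4938)) < 1e-6
assert -0.03 < f2 < -0.02   # close to the classic estimate f_2 ~ -0.023
```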
The quantity $\Delta f$ represents next-to-next-to-leading order (NNLO) contributions and beyond, which in our fit is parametrised as:
\begin{equation}
\Delta f = \left( {m_s - m _\ell } \right)^2\left[ {\Delta _0 + \Delta _1 m_\ell} \right] + \Delta _2 a ^2,
\end{equation}
so that Eq.~(\ref{eq:SU3fit}) satisfies the Ademollo-Gatto theorem \cite{Ademollo:1964sr} in the continuum limit, i.e., deviations from unity are proportional to $(m_s-m_\ell)^2$.
\begin{figure}[htb!]
\centering
\scalebox{0.25}{\includegraphics{f0_vs_mlSU2.pdf}}
\scalebox{0.25}{\includegraphics{f0_vs_mlSU3.pdf}}
\vspace*{-0.8cm}
\caption{\it Chiral and continuum extrapolation of $f_+(0)$ based on the NLO SU(2) ChPT fit given in Eq.~(\ref{eq:SU2fit}) (left) and on the NNLO SU(3) ChPT fit given in Eq.~(\ref{eq:SU3fit}) (right).}
\label{fig:SU2SU3fit}
\end{figure}
The chiral and continuum extrapolations of $f_+(0)$ are shown in Fig.~\ref{fig:SU2SU3fit} for both the SU(2) and the SU(3) fit.
Combining the two analyses, we get our final result for the vector form factor $f_+(0)$:
\begin{equation}
\label{eq:fp0res}
f_+(0)=0.9683(50)_{stat+fit}(42)_{Chir}=0.9683(65),
\end{equation}
where $()_{stat+fit}$ indicates the statistical uncertainty which includes the one induced by the fitting procedure and the uncertainties in the determination of all the input parameters needed for the analysis, namely the values of the light quark mass $m_\ell$, the lattice spacing $a$ and the SU(2) ChPT low energy constants $f$ and $B$, which were determined in \cite{Carrasco:2014cwa}.
The systematic uncertainty in the chiral extrapolation, namely $()_{Chir}$, has been evaluated from the difference between the results of the two chiral extrapolations we performed. Note also that the two lattice points calculated at the same lattice spacing and light quark mass but different volumes (see Fig.~\ref{fig:SU2SU3fit}) are compatible within uncertainties, allowing us to conclude that finite-size effects can be neglected in our analysis.
Combining the present result with the experimental value of $|V_{us}|f_+(0)$ from \cite{Antonelli:2010yf}, we can estimate $|V_{us}|$, obtaining:
\begin{equation}
|V_{us}|=0.2234(16).
\end{equation}
This value can also be compared with the determination of $|V_{us}|$ from the ratio of leptonic PS decay constants $f_{K^+} /f_{\pi^+}$ that we obtained in \cite{DECAYCONSTANTS}, which reads $|V_{us}|=0.2271(29)$.
As a phenomenological application, we can use our results to test the unitarity of the first row of the CKM matrix, taking the value of $|V_{ud}|$ from $\beta$-decay \cite{Hardy:2009} and ignoring $|V_{ub}|^2$, which is negligible given the present uncertainties, finding:
\begin{eqnarray}
\label{eq:utest}
|V_{ud}|^2+|V_{us}|^2+|V_{ub}|^2 &=& 0.9991(8) \hspace*{1.2cm} \rm {from } ~~K_{\ell 3} ~~~\rm{[this~ work]}\nonumber \\
|V_{ud}|^2+|V_{us}|^2+|V_{ub}|^2 &=& 1.0008(14) \hspace*{1cm} \rm {from } ~~K_{\ell 2} ~~~\mbox{\cite{DECAYCONSTANTS}}
\end{eqnarray}
As can be seen from Eq.~(\ref{eq:utest}), both determinations confirm first-row unitarity at the permille level.
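The arithmetic of the first line of Eq.~(\ref{eq:utest}) can be reproduced directly. In the sketch below the input $|V_{ud}|=0.97425(22)$ is taken from the cited $\beta$-decay analysis \cite{Hardy:2009}, and $|V_{ub}|^2 \sim 10^{-5}$ is dropped, as in the text:

```python
import math

# |V_ud| from the cited beta-decay analysis (assumed value 0.97425(22))
# and |V_us| from this work; |V_ub|^2 ~ 1.5e-5 is negligible here.
Vud, sVud = 0.97425, 0.00022
Vus, sVus = 0.2234, 0.0016

row = Vud**2 + Vus**2
# Standard error propagation for the sum of squares:
srow = math.sqrt((2 * Vud * sVud)**2 + (2 * Vus * sVus)**2)

assert round(row, 4) == 0.9991   # first-row unitarity holds ...
assert round(srow, 4) == 0.0008  # ... within the quoted uncertainty
```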
\section{An outlook on a possible extension }
As a possible extension of our analysis, we provided an estimate of the form factors $f_+$ and $f_0$ not only at $q^2=0$, but over the entire $q^2$ region accessible to experiments, i.e., from $q^2=0$ to $q^2=q^2_{max}=(M_K-M_\pi)^2$.
\begin{figure}[hbt!]
\begin{center}
\scalebox{0.3}{
\includegraphics{f0fp_overfp0_q2.pdf}
}
\end{center}
\vspace*{-0.8cm}
\caption{Fit results for the quantities $f_+(q^2)/f_+(0)$ and $f_0(q^2)/f_+(0)$ as functions of $q^2$ at the physical point. The red dot (square) corresponds to $q^2_{max}$ ($q^2_{CT}$). The dashed lines represent the uncertainty of the above quantities at one standard deviation.}
\label{fig:f0fpoverfp0}
\end{figure}
To do so, we performed a combined fit of the $q^2$, $m_\ell$ and $a$ dependencies of the form factors, following the strategy presented in \cite{Lubicz:2010bv}.
In particular, the fit formulas were derived by expanding in powers of $x=M_{\pi}^2/M_K^2$ the NLO SU(3) ChPT predictions for the form factors \cite{Gasser:1984ux,Gasser:1984gg}. Moreover, we included in the analysis the constraint from the Callan-Treiman theorem \cite{Callan:1966hu}, which relates the scalar form factor calculated at the unphysical point $q^2_{CT}=M_K^2-M_{\pi}^2$ to the ratio of the decay constants $f_K/f_\pi$.
Preliminary results for the form factors in the physical region of $q^2$ are presented in Fig.~\ref{fig:f0fpoverfp0}.
\section{Introduction\label{sec1}}
\begin{table*}
\begin{center}\caption{Spectroscopic constraints and complementary data of the {\it Kepler} targets \label{tab1}}
\begin{tabular}{lcccccc}
\hline\hline
KIC ID & \teff\ & \feh & $K_s$ & $A_{K_S}$ & P$_{\rm ROT}$ & Ref. \\
& (K) & (dex) & (mag) & (mag) & (days) & \\
\hline
1435467&6326 $\pm$ 77&$+$0.01 $\pm$ 0.10&7.718 $\pm$ 0.009&0.011 $\pm$ 0.004&6.68 $\pm$ 0.89&1,A\\
2837475&6614 $\pm$ 77&$+$0.01 $\pm$ 0.10&7.464 $\pm$ 0.023&0.008 $\pm$ 0.002&3.68 $\pm$ 0.36&1,A\\
3427720&6045 $\pm$ 77&$-$0.06 $\pm$ 0.10&7.826 $\pm$ 0.009&0.020 $\pm$ 0.019&13.94 $\pm$ 2.15&1,B\\
3656476&5668 $\pm$ 77&$+$0.25 $\pm$ 0.10&8.008 $\pm$ 0.014&0.022 $\pm$ 0.050&31.67 $\pm$ 3.53&1,A\\
3735871&6107 $\pm$ 77&$-$0.04 $\pm$ 0.10&8.477 $\pm$ 0.016&0.018 $\pm$ 0.027&11.53 $\pm$ 1.24&1,A\\
4914923&5805 $\pm$ 77&$+$0.08 $\pm$ 0.10&7.935 $\pm$ 0.017&0.017 $\pm$ 0.029&20.49 $\pm$ 2.82&1,A\\
5184732&5846 $\pm$ 77&$+$0.36 $\pm$ 0.10&6.821 $\pm$ 0.005&0.012 $\pm$ 0.007&19.79 $\pm$ 2.43&1,A\\
5950854&5853 $\pm$ 77&$-$0.23 $\pm$ 0.10&9.547 $\pm$ 0.017&0.002 $\pm$ 0.004&&1\\
6106415&6037 $\pm$ 77&$-$0.04 $\pm$ 0.10&5.829 $\pm$ 0.017&0.003 $\pm$ 0.020&&1\\
6116048&6033 $\pm$ 77&$-$0.23 $\pm$ 0.10&7.121 $\pm$ 0.009&0.013 $\pm$ 0.020&17.26 $\pm$ 1.96&1,A\\
6225718&6313 $\pm$ 76&$-$0.07 $\pm$ 0.10&6.283 $\pm$ 0.011&0.003 $\pm$ 0.001&&1\\
6603624&5674 $\pm$ 77&$+$0.28 $\pm$ 0.10&7.566 $\pm$ 0.019&0.008 $\pm$ 0.008&&1\\
6933899&5832 $\pm$ 77&$-$0.01 $\pm$ 0.10&8.171 $\pm$ 0.015&0.023 $\pm$ 0.017&&1\\
7103006&6344 $\pm$ 77&$+$0.02 $\pm$ 0.10&7.702 $\pm$ 0.015&0.007 $\pm$ 0.010&4.62 $\pm$ 0.48&1,A\\
7106245&6068 $\pm$ 102&$-$0.99 $\pm$ 0.19&9.419 $\pm$ 0.006&0.015 $\pm$ 0.029&&4\\
7206837&6305 $\pm$ 77&$+$0.10 $\pm$ 0.10&8.575 $\pm$ 0.011&0.004 $\pm$ 0.005&4.04 $\pm$ 0.28&1,A\\
7296438&5775 $\pm$ 77&$+$0.19 $\pm$ 0.10&8.645 $\pm$ 0.009&0.012 $\pm$ 0.018&25.16 $\pm$ 2.78&1,A\\
7510397&6171 $\pm$ 77&$-$0.21 $\pm$ 0.10&6.544 $\pm$ 0.009&0.018 $\pm$ 0.010&&1\\
7680114&5811 $\pm$ 77&$+$0.05 $\pm$ 0.10&8.673 $\pm$ 0.006&0.011 $\pm$ 0.013&26.31 $\pm$ 1.86&1,A\\
7771282&6248 $\pm$ 77&$-$0.02 $\pm$ 0.10&9.532 $\pm$ 0.010&0.005 $\pm$ 0.001&11.88 $\pm$ 0.91&1,A\\
7871531&5501 $\pm$ 77&$-$0.26 $\pm$ 0.10&7.516 $\pm$ 0.017&0.023 $\pm$ 0.021&33.72 $\pm$ 2.60&1,A\\
7940546&6235 $\pm$ 77&$-$0.20 $\pm$ 0.10&6.174 $\pm$ 0.011&0.023 $\pm$ 0.009&11.36 $\pm$ 0.95&1,A\\
7970740&5309 $\pm$ 77&$-$0.54 $\pm$ 0.10&6.085 $\pm$ 0.011&0.003 $\pm$ 0.013&17.97 $\pm$ 3.09&1,A\\
8006161&5488 $\pm$ 77&$+$0.34 $\pm$ 0.10&5.670 $\pm$ 0.015&0.009 $\pm$ 0.006&29.79 $\pm$ 3.09&1,A\\
8150065&6173 $\pm$ 101&$-$0.13 $\pm$ 0.15&9.457 $\pm$ 0.014&0.010 $\pm$ 0.013&&4\\
8179536&6343 $\pm$ 77&$-$0.03 $\pm$ 0.10&8.278 $\pm$ 0.009&0.005 $\pm$ 0.016&24.55 $\pm$ 1.61&1,A\\
8379927&6067 $\pm$ 120&$-$0.10 $\pm$ 0.15&5.624 $\pm$ 0.011&0.004 $\pm$ 0.012&16.99 $\pm$ 1.35&2,A\\
8394589&6143 $\pm$ 77&$-$0.29 $\pm$ 0.10&8.226 $\pm$ 0.016&0.013 $\pm$ 0.010&&1\\
8424992&5719 $\pm$ 77&$-$0.12 $\pm$ 0.10&8.843 $\pm$ 0.011&0.016 $\pm$ 0.018&&1\\
8694723&6246 $\pm$ 77&$-$0.42 $\pm$ 0.10&7.663 $\pm$ 0.007&0.003 $\pm$ 0.001&&1\\
8760414&5873 $\pm$ 77&$-$0.92 $\pm$ 0.10&8.173 $\pm$ 0.009&0.016 $\pm$ 0.012&&1\\
8938364&5677 $\pm$ 77&$-$0.13 $\pm$ 0.10&8.636 $\pm$ 0.016&0.003 $\pm$ 0.009&&1\\
9025370&5270 $\pm$ 180&$-$0.12 $\pm$ 0.18&7.372 $\pm$ 0.025&0.041 $\pm$ 0.030&&3\\
9098294&5852 $\pm$ 77&$-$0.18 $\pm$ 0.10&8.364 $\pm$ 0.009&0.011 $\pm$ 0.021&19.79 $\pm$ 1.33&1,A\\
9139151&6302 $\pm$ 77&$+$0.10 $\pm$ 0.10&7.952 $\pm$ 0.014&0.002 $\pm$ 0.011&10.96 $\pm$ 2.22&1,B\\
9139163&6400 $\pm$ 84&$+$0.15 $\pm$ 0.09&7.231 $\pm$ 0.007&0.013 $\pm$ 0.007&&6\\
9206432&6538 $\pm$ 77&$+$0.16 $\pm$ 0.10&8.067 $\pm$ 0.013&0.032 $\pm$ 0.037&8.80 $\pm$ 1.06&1,A\\
9353712&6278 $\pm$ 77&$-$0.05 $\pm$ 0.10&9.607 $\pm$ 0.011&0.011 $\pm$ 0.010&11.30 $\pm$ 1.12&1,A\\
9410862&6047 $\pm$ 77&$-$0.31 $\pm$ 0.10&9.375 $\pm$ 0.013&0.011 $\pm$ 0.001&22.77 $\pm$ 2.37&1,A\\
9414417&6253 $\pm$ 75&$-$0.13 $\pm$ 0.10&8.407 $\pm$ 0.009&0.010 $\pm$ 0.010&10.68 $\pm$ 0.66&7,A\\
9955598&5457 $\pm$ 77&$+$0.05 $\pm$ 0.10&7.768 $\pm$ 0.017&0.002 $\pm$ 0.001&34.20 $\pm$ 5.64&1,A\\
9965715&5860 $\pm$ 180&$-$0.44 $\pm$ 0.18&7.873 $\pm$ 0.012&0.005 $\pm$ 0.005&&3\\
10079226&5949 $\pm$ 77&$+$0.11 $\pm$ 0.10&8.714 $\pm$ 0.012&0.015 $\pm$ 0.025&14.81 $\pm$ 1.23&1,A\\
10454113&6177 $\pm$ 77&$-$0.07 $\pm$ 0.10&7.291 $\pm$ 9.995&0.042 $\pm$ 0.019&14.61 $\pm$ 1.09&1,A\\
10516096&5964 $\pm$ 77&$-$0.11 $\pm$ 0.10&8.129 $\pm$ 0.015&0.000 $\pm$ 0.012&&1\\
10644253&6045 $\pm$ 77&$+$0.06 $\pm$ 0.10&7.874 $\pm$ 0.021&0.008 $\pm$ 0.015&10.91 $\pm$ 0.87&1,A\\
10730618&6150 $\pm$ 180&$-$0.11 $\pm$ 0.18&7.874 $\pm$ 0.021&0.008 $\pm$ 0.015&&3\\
10963065&6140 $\pm$ 77&$-$0.19 $\pm$ 0.10&7.486 $\pm$ 0.011&0.003 $\pm$ 0.016&12.58 $\pm$ 1.70&1,A\\
11081729&6548 $\pm$ 82&$+$0.11 $\pm$ 0.10&7.973 $\pm$ 0.011&0.005 $\pm$ 0.001&2.74 $\pm$ 0.31&1,A\\
11253226&6642 $\pm$ 77&$-$0.08 $\pm$ 0.10&7.459 $\pm$ 0.007&0.017 $\pm$ 0.013&3.64 $\pm$ 0.37&1,A\\
11772920&5180 $\pm$ 180&$-$0.09 $\pm$ 0.18&7.981 $\pm$ 0.014&0.008 $\pm$ 0.005&&3\\
12009504&6179 $\pm$ 77&$-$0.08 $\pm$ 0.10&8.069 $\pm$ 0.019&0.005 $\pm$ 0.034&9.39 $\pm$ 0.68&1,A\\
12069127&6276 $\pm$ 77&$+$0.08 $\pm$ 0.10&9.494 $\pm$ 0.012&0.016 $\pm$ 0.005&0.92 $\pm$ 0.05&1,A\\
12069424&5825 $\pm$ 50&$+$0.10 $\pm$ 0.03&4.426 $\pm$ 0.009&0.005 $\pm$ 0.006&23.80 $\pm$ 1.80&5,B\\
12069449&5750 $\pm$ 50&$+$0.05 $\pm$ 0.02&4.651 $\pm$ 0.005&0.005 $\pm$ 0.006&23.20 $\pm$ 6.00&5,B\\
12258514&5964 $\pm$ 77&$+$0.00 $\pm$ 0.10&6.758 $\pm$ 0.011&0.021 $\pm$ 0.021&15.00 $\pm$ 1.84&1,A\\
12317678&6580 $\pm$ 77&$-$0.28 $\pm$ 0.10&7.631 $\pm$ 0.009&0.027 $\pm$ 0.021&&1\\
\hline\hline
\end{tabular}
\end{center}
Spectroscopic references: $^1$\cite{Buchhave2015}, $^2$\cite{Ramirez2009}, $^3$\cite{Pinsonneault2012}, $^4$\cite{Huber2013}, $^5$\cite{Chaplin2014}, $^6$\cite{Pinsonneault2014}, $^7$\cite{Casagrande2014}\\
Rotation period references: $^{\rm A}$\cite{garcia2014}, $^{\rm B}$\cite{cellier2016}
\end{table*}
\begin{figure*}
\center{\includegraphics[width=0.98\textwidth]{exlik}}
\caption{Normalized posterior probability functions for KIC~12069424.
From left to right we show radius, mass, and age. We also show the adopted
parameter $\langle P \rangle$ (dashed line),
the 68\% region (shaded) and $P_{\rm AMP}$ (dotted line).
\label{fig:examplelikelihood}}
\end{figure*}
Solar-like oscillations are stochastically excited and intrinsically
damped by turbulent motions in the near-surface layers of stars with
substantial outer convection zones. The sound waves produced by these
motions travel through the interior of the star, and those with resonant
frequencies drive global oscillations that modulate the integrated
brightness of the star by a few parts per million and change the
{surface} radial velocity by several meters per second. The characteristic
timescale of these variations is determined by the sound
travel time across the stellar diameter, which is around 5 minutes for a
star like the Sun. With sufficient precision, more than a
dozen consecutive overtones can be detected for each set of oscillation
modes with radial, dipole, quadrupole, and sometimes even octupole
geometry {(i.e., for $l=0, 1, 2,$ and 3, respectively, where $l$ is
the angular degree)}. The technique of asteroseismology uses these
oscillation frequencies {combined} with other observational constraints
to measure the stellar radius, mass, age, and other properties of the
stellar interior \citep[for a recent review, see][]{ChaplinMiglio2013}.
The {\it Kepler} space telescope yielded unprecedented data for the
study of solar-like oscillations in other stars. Ground-based radial
velocity data had previously allowed the detection of solar-like
oscillations in some of the brightest stars in the sky
\citep[e.g.,][]{Brown1991, Kjeldsen1995, Bedding2001, Bouchy2002,
Carrier2003}, but {intensive} multi-site campaigns were required to measure
and identify the frequencies unambiguously \cite[e.g.,][]{Arentoft2008}.
The {\it Convection Rotation and planetary Transits} satellite (CoRoT,
\citealt{Baglin2006}) achieved the {photometric} precision
necessary to detect solar-like oscillations in main-sequence stars
\citep[e.g.,][]{Michel2008}, and it obtained continuous photometry for up
to five months. NASA's {\it Kepler} mission \citep{Borucki2010} extended
these initial successes to a larger sample of solar-type stars, with
observations eventually spanning up to several years \citep{Chaplin2010}.
Precise photometry from {\it Kepler} led to the detection of solar-like
oscillations in nearly 600 main-sequence and subgiant stars
\citep{Chaplin2014}, including the measurement of individual frequencies
in more than 150 targets \citep{Appourchaux2012, Davies2016, Lund2016}.
Asteroseismic modeling has become more sophisticated over time, with
better methods gradually developing alongside the {extended} observations
and {improved} data analysis techniques.
Initial efforts attempted to reproduce the observed large and small
frequency separations with models that simultaneously matched constraints
from spectroscopy \citep[e.g.,][]{JCD1995, Thevenin2002, Fernandes2003,
Thoul2003}. As individual oscillation frequencies became available,
modelers started to match the observations in \'echelle diagrams that
{highlighted variations around the average frequency separations}
\citep[e.g.,][]{DiMauro2003, Guenther2004, Eggenberger2004}. This approach
continued until the frequency precision from longer space-based
observations became sufficient to reveal systematic errors in the models
that are known as \textit{surface effects}, which arise from incomplete modeling of
the near-surface layers where the mixing-length treatment of convection
is {approximate}. \cite{Kjeldsen2008} proposed an empirical correction for {the} surface effects based on the
discrepancy for {the} standard solar model,
and applied it to ground-based observations of several stars with different
masses and evolutionary states. The correction was subsequently implemented
using stars observed by {CoRoT} and {\it Kepler} \citep{Kallinger2010,
Metcalfe2010}.
During the {\it Kepler} mission, asteroseismic modeling methods were
adapted as longer data sets became available. The first year of
short-cadence data \citep[{sampled at} 58.85~s,][]{Gilliland2010}
was devoted to an
asteroseismic survey of 2000 solar-type stars observed for one
month each.
The survey initially yielded {frequencies for} 22 stars, {allowing}
detailed modeling \citep{Mathur2012}, and hundreds of
targets were flagged for extended observations during the remainder of
the mission. Longer data sets improved the signal-to-noise ratio (S/N)
{of the power spectrum} for stars with {previously} marginal detections, and
yielded additional oscillation frequencies for the best targets in the sample.
The first coordinated analysis of nine-month data sets yielded individual
frequencies in 61 stars \citep{Appourchaux2012}, though many were
subgiants with complex patterns of dipole mixed-modes. The larger
set of radial orders observed in each star began to reveal the
limitations of the empirical correction for surface effects
\citep{Metcalfe2014}. This situation {motivated the implementation}
of a Bayesian method that marginalized over the unknown systematic error
{for each frequency} \citep{Gruberbauer2012}, as well as a
method for fitting ratios of frequency separations that are insensitive to
surface effects \citep{Roxburgh2003, Bazot2013, SilvaAguirre2013}. It also inspired
the development of a more physically motivated correction {based on
an analysis of frequency shifts induced by the solar magnetic cycle}
{\citep{Gough1990, Ball2014, schmittbasu2015}}.
The {\it Kepler} telescope completed its primary
mission in 2013, but the large samples of multi-year observations posed
an enormous data analysis challenge that has only recently been surmounted
\citep{benomar2014a, benomar2014b, Davies2015, Davies2016, Lund2016}. The first modeling
of these full-length data sets appeared in \cite{SilvaAguirre2015} and
\cite{Metcalfe2015}.
In this paper we apply the latest version of the Asteroseismic Modeling
Portal \citep[hereafter AMP, see][]{Metcalfe2009} to {oscillation
frequencies derived from} the full-length {\it Kepler} {observations}
for 57 stars, {as determined by \cite{Lund2016}}. The new fitting
method relies on ratios of frequency
separations rather than the individual frequencies, so that we can use the
modeling results to investigate the empirical amplitude and character of
{the} surface effects within the sample.
We describe the sources of our adopted
observational constraints in Section~\ref{sec2}. We outline
updates to the AMP input physics and fitting methods in Section~\ref{sec3},
including an overview of how the optimal stellar properties and their
uncertainties are determined. In Section~\ref{sec4} we present
the modeling results, and in Section~\ref{sec:sec5} we use them to establish
the limitations of the \cite{Kjeldsen2008} correction for surface
effects. Finally, after summarizing in Section~\ref{sec6}, we discuss our expectations for
asteroseismic modeling of future observations from the Transiting Exoplanet
Survey Satellite \citep[TESS,][]{tess-ricker2015} and PLAnetary Transits
and Oscillations of stars \citep[PLATO,][]{plato-rauer2014} missions.
\section{Observational constraints\label{sec2}}
To constrain the properties of each star in our sample, we adopted the
solar-like oscillation frequencies determined by \cite{Lund2016} from a
uniform analysis of the full-length {\it Kepler} data sets. {For each target},
the power spectrum of the time-series photometry shows the oscillations
embedded in several background components attributed to granulation,
faculae, and shot noise. The power spectral distributions of
individual modes were modeled as
Lorentzian functions, and the background components were optimized
simultaneously in a Bayesian manner using the procedure described in
\cite{Lund2014}. For the targets presented here, this analysis resulted in
sets of oscillation modes spanning 7 to 20
radial orders. In most cases, the identified
frequencies included only $l=0$, 1, and 2 modes, but {for} 14 stars,
{the mode-fitting procedure} also {identified}
limited sets of $l=3$ modes spanning 2 to 6 radial orders. Complete
tables of the identified frequencies for each star are published in
\cite{Lund2016}.
To complement the {oscillation frequencies}, we also adopted spectroscopic
constraints on the effective temperature, $T_{\rm eff}$, and metallicity,
[M/H], for each star. For 46 of the targets in our sample, we used the
uniform spectroscopic analysis of \cite{Buchhave2015}. In this case, the
values and uncertainties on $T_{\rm eff}$ and [M/H] were determined using
the Stellar Parameters Classification (SPC) method
described in detail by
\cite{Buchhave2012,Buchhave2014}. For the other 11 stars in our sample,
{which were not included in \cite{Buchhave2015}}, we
adopted constraints from a variety of sources, including
\cite{Ramirez2009,Pinsonneault2012,Huber2013,Chaplin2014,Pinsonneault2014},
and from the SAGA survey \citep{Casagrande2014}. The {57 stars in our sample}
span a range of $T_{\rm eff}$ from 5180 to 6642~K and [M/H] from $-$0.99 to 0.36 dex.
These atmospheric constraints are listed in Table~\ref{tab1} along with {the
K-band magnitude from 2MASS, $K_s$ \citep{2MASS}, the derived interstellar
absorption, $A_{Ks}$ (see Section~\ref{sec:asteroseismicdistances}),}
and rotational periods from \citet{garcia2014} and \citet{cellier2016}.
Although independent determinations of the
radius and luminosity are available for a few of the stars in our sample, we
excluded these constraints from the modeling so that we could use them to
assess the accuracy of our results (see Section~\ref{sec4}).
\begin{table}[]
\centering
\caption{Reference solar parameters from AMP using
the updated method and physics}
\begin{tabular}{lcccccc}
\hline\hline
& AMP$_1$ & AMP$_2$ & AMP$_3$ & AMP$_4$ & $\langle P \rangle$ & $\sigma$ \\
\hline
$R$ (\rsol) & 1.002&1.003&1.003&1.010& 1.001 & 0.005 \\
$M$ (\msol) & 1.01 & 1.01 & 1.01 & 1.03& 1.001 & 0.02 \\
Age (Gyr) & 4.59 & 4.38 & 4.41 & 4.69 & 4.38 & 0.22 \\
\zi & 0.019 & 0.021 & 0.020 & 0.024&0.017 & 0.002 \\
\yi & 0.266 & 0.281 & 0.278 & 0.282& 0.265 & 0.023 \\
$\alpha$ & 2.16 & 2.24 & 2.24 & 2.30& 2.12 & 0.12 \\
$L$ (\lsol) & 0.96 &0.99&0.99&1.00& 0.97 & 0.03\\
$\log g$ (dex) &4.441 &4.439&4.439&4.442& 4.438 & 0.003\\
$\chi^2$ & 1.047 & 0.968 & 0.995 & 1.058\\
\hline\hline
\end{tabular}
\label{tab:solar-reference}
\end{table}
\begin{figure}
\includegraphics[width=0.48\textwidth]{hrdiag}
\caption{\label{fig:hrdiag}HR diagram showing the position of the sample of stars used for this work.
Evolutionary tracks for solar-metallicity models with 1.0, 1.2, and 1.4 \msol\ stellar masses are shown.}
\end{figure}
\section{Asteroseismic modeling\label{sec3}}
{Based on} the observational constraints described in Section~\ref{sec2},
{we determined} the properties of each star in our sample using the
latest version of AMP. {The method} {relies on} a parallel genetic algorithm
\citep[hereafter GA, see][]{Metcalfe2003} to optimize the match between the
{properties of a stellar} model and a given set of observations. The
asteroseismic models {are} generated by the Aarhus stellar evolution and
adiabatic pulsation codes \citep{JCD08a,JCD08b}. The
search procedure generates thousands of models that can be
used to evaluate the stellar properties and their uncertainties. Unlike
the usual grid-modeling approach, the GA preferentially samples
combinations of model parameters that provide a better than average match
to the observations. This approach allows us not only to identify the
globally optimal solution, but also to fold the effects of parameter
correlations and non-uniqueness into reliable uncertainties. Below we outline
recent updates to the input physics and {model-fitting} methods, and we
describe improvements to our statistical analysis of the results.
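The search strategy can be illustrated with a minimal genetic algorithm. The sketch below is a generic toy (a two-parameter quadratic fitness surface with invented bounds), not the parallel GA used by AMP, but it shows the selection, crossover, and mutation loop through which good regions of parameter space are preferentially sampled:

```python
import random

random.seed(42)  # fixed seed so the stochastic search is reproducible

def chi2(params):
    """Toy quality-of-fit surface with its minimum at (1.0, 4.6)."""
    m, age = params
    return (m - 1.0) ** 2 + 0.1 * (age - 4.6) ** 2

def evolve(pop_size=40, generations=60, bounds=((0.7, 1.5), (0.5, 13.0))):
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=chi2)
        elite = pop[: pop_size // 2]          # selection: keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            if random.random() < 0.2:                            # mutation
                i = random.randrange(len(child))
                lo, hi = bounds[i]
                child[i] = random.uniform(lo, hi)
            children.append(child)
        pop = elite + children
    return min(pop, key=chi2)

best = evolve()
assert chi2(best) < 0.05
```

Because the toy fitness is deterministic and the random seed is fixed, the run is reproducible; the real GA instead evaluates a full stellar model for every trial parameter set.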
\subsection{Updated physics and methods}
The AMP code has been in development since 2004. Details about previous
versions are outlined in \cite{Metcalfe2015}. For this paper we use version
1.3, which includes input physics that are mostly unchanged from version 1.2
\citep{Metcalfe2014}. It uses the {2005 release of the} OPAL equation of state
\citep{Rogers2002}, with opacities from OPAL \citep{Iglesias1996} supplemented
by \cite{Ferguson2005} at low temperatures. Nuclear reaction rates come from
the NACRE collaboration \citep{Angulo1999}. The prescription of \cite{Michaud1993}
for diffusion and settling is applied to helium, but not to heavier elements, because some models are numerically unstable. Convection is described using the
mixing-length treatment of \cite{BohmVitense1958} with no overshoot.
There have been several minor updates to the model physics for version 1.3 of the
AMP code. First, it incorporates the revised $^{14}{\rm N}+p$ reaction rate from NACRE
\citep{Angulo2005}, which is particularly important for more evolved stars. Second,
it uses the solar mixture of \cite{GS1998} instead of \cite{GN1993}. This requires
different opacity tables and a slight modification to the calculation of metallicity
[$\log(Z_\odot/X_\odot) = -1.64$ instead of $-$1.61]. Finally, following the suggestion
of \cite{SilvaAguirre2015}, diffusion and settling is only applied to models with
$M < 1.2\ M_\odot$, to avoid potential biases that are due to the short diffusion timescales
in the envelopes of more massive stars.
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{P-8006161-fitdata}
\includegraphics[width=0.45\textwidth]{P-6603624-fitdata}
\includegraphics[width=0.45\textwidth]{P-10079226-fitdata}
\includegraphics[width=0.45\textwidth]{P-10454113-fitdata}
\caption{Representative examples of fits to the seismic
frequency ratios.}
\label{fig:fitdata0}
\end{figure*}
{The frequency separation ratios $r_{01}$ and $r_{02}$ were defined by
\cite{Roxburgh2003} as}
\begin{equation}
r_{01}(n) = \frac{\nu_{n-1,0} - 4\nu_{n-1,1} + 6\nu_{n,0} - 4\nu_{n,1} + \nu_{n+1,0}}{8(\nu_{n,1} - \nu_{n-1,1})}
\label{eqn:r01}
\end{equation}
and
\begin{equation}
r_{02}(n) = \frac{\nu_{n,0} - \nu_{n-1,2}}{\nu_{n,1} - \nu_{n-1,1}},
\end{equation}
{where $\nu$ is the mode frequency, $n$ is the radial order, and $l$ is
the angular degree.} These ratios were first included as observational
constraints in AMP~1.2. Version 1.3 uses these ratios exclusively, omitting
the individual oscillation frequencies to avoid potential biases from the
empirical correction for surface effects. AMP~1.3 also {calculates} the
full covariance matrix of $r_{01}$, which is necessary to properly account
for correlations
induced by the five-point smoothing that is
{implicit
in Eq.~(\ref{eqn:r01})}.
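The two ratio definitions above translate directly into code. A minimal sketch follows; the positional indexing of the frequency arrays by radial order is an assumption of this illustration, not the AMP convention:

```python
import numpy as np

def r01(nu0, nu1, n):
    """Five-point ratio r01(n) of Eq. (r01): nu0[i] and nu1[i] hold the
    l=0 and l=1 mode frequencies at (positional) radial order i."""
    dd01 = nu0[n-1] - 4.0*nu1[n-1] + 6.0*nu0[n] - 4.0*nu1[n] + nu0[n+1]
    return dd01 / (8.0*(nu1[n] - nu1[n-1]))

def r02(nu0, nu1, nu2, n):
    """Ratio r02(n): small separation nu_{n,0}-nu_{n-1,2} over the
    local large separation of the l=1 modes."""
    return (nu0[n] - nu2[n-1]) / (nu1[n] - nu1[n-1])
```

For a perfectly regular frequency pattern $\nu_{n,l} = \Delta\nu\,(n + l/2)$ both ratios vanish identically, which provides a simple sanity check.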
{For each stellar model, AMP~1.2 defined the quality of the match to
observations using a combination of metrics from four different sets of
constraints. For AMP~1.3, we} combine all observational constraints into
a single $\chi^2$ metric
\begin{equation}
\chi^2 = (x - x_M)^T C^{-1} (x - x_M),
\label{eqn:chisq}
\end{equation}
where $C$ is the covariance matrix of the observational constraints $x$,
and $x_M$ are the corresponding observables from the model. For the results
presented here, $x$ includes only the ratios $r_{01}$ and $r_{02}$
{augmented by} the atmospheric constraints \teff\ and \mh. {$C$ is
assumed to be diagonal for all observables except $r_{01}$. As in all
previous versions of AMP, the individual frequencies are} used to
calculate the average large separation of the radial modes $\Delta\nu_0$,
allowing us to optimize the {stellar} age {along each model sequence}
and then match the lowest observed radial mode frequency \citep[see][]{Metcalfe2009}.
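Eq.~(\ref{eqn:chisq}) is a standard generalized $\chi^2$. A minimal sketch of its evaluation follows; solving the linear system instead of forming $C^{-1}$ explicitly is an implementation choice of this illustration, not necessarily what the AMP code does:

```python
import numpy as np

def chi2(x, x_model, C):
    """Generalized chi^2 of Eq. (chisq): (x - x_M)^T C^{-1} (x - x_M).
    np.linalg.solve(C, r) computes C^{-1} r without inverting C."""
    r = np.asarray(x, dtype=float) - np.asarray(x_model, dtype=float)
    return float(r @ np.linalg.solve(C, r))
```

For a diagonal $C$ this reduces to the familiar sum of squared residuals divided by the variances.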
\subsection{Statistical analysis}
Versions {1.0 and 1.1} of the AMP code performed a local analysis
near the optimal model to determine the uncertainties on each
parameter \citep{Metcalfe2009, Mathur2012}. This approach failed to capture
the uncertainties due to parameter correlations and non-uniqueness of the
solution, so that it typically {produced error bars that, although
formally correct, were implausibly small}.
To {derive} more realistic uncertainties {in version 1.2},
\cite{Metcalfe2014} began using the thousands of models sampled by the GA {during the optimization procedure. As the GA approaches the optimal model,
each parameter is densely sampled with a uniform spacing in stellar mass
($M$), initial metallicity ($Z_i$), initial helium mass fraction ($Y_i$),
and mixing-length ($\alpha$)}. Each sampled model is assigned a likelihood
\begin{equation}
\mathcal{L}=\exp\left( \frac{-\chi^2}{2} \right),
\label{eqn:likelihooddefn}
\end{equation}
where $\chi^2$ is calculated from Eq.~(\ref{eqn:chisq}).
By assuming flat priors on each of the model parameters, we then
construct posterior probability functions (PPF) for each of the
stellar properties to obtain more reliable estimates of the values and
uncertainties from the dense ensemble of models sampled by the GA. We adopt
the median value of the PPF as the best estimate for the parameter value,
$\langle P \rangle$. We use the 68\% credible interval {of the PPF} to
define the associated uncertainty, $\sigma$. Sample PPFs for the radius,
mass, and age of KIC~12069424 are shown in Fig.~\ref{fig:examplelikelihood}.
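The steps above, weighting each sampled model by Eq.~(\ref{eqn:likelihooddefn}) and reading the median and 68\% interval off the resulting distribution, can be sketched as follows; the equal-tailed interval and the interpolation-based quantiles are assumptions of this illustration:

```python
import numpy as np

def ppf_summary(param, chi2_vals, frac=0.68):
    """Weighted median and 68% credible half-width of a sampled parameter,
    using L = exp(-chi^2/2) as the weight of each model (flat priors)."""
    # Subtracting the minimum chi^2 stabilizes the exponential without
    # changing the normalized weights.
    w = np.exp(-0.5*(chi2_vals - chi2_vals.min()))
    order = np.argsort(param)
    p, w = param[order], w[order]
    cdf = np.cumsum(w)/np.sum(w)
    lo, med, hi = np.interp([0.5 - frac/2, 0.5, 0.5 + frac/2], cdf, p)
    return med, 0.5*(hi - lo)
```

As a check, feeding in a densely sampled parameter with $\chi^2 = p^2$ (a unit Gaussian posterior) recovers a median near zero and a 68\% half-width near one.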
Combining the best estimates for each of the stellar properties generally
will not produce the best stellar model. For many purposes it is useful
to identify a \textit{reference model}, an individual stellar model that is
representative of the PPF. The optimal model identified by
AMP, $P_{\rm AMP}$, is used as the reference model, but it can sometimes fall
near the edge of one or more of the distributions.
A comparison of the masses and ages estimated from $\langle P \rangle$
and $P_{\rm AMP}$ yields differences much smaller than 1$\sigma$ for most cases.
\subsection{Validation with solar data}
To validate our new approach, we used AMP~1.3 to {match a set of solar
oscillation frequencies comparable to the {\it Kepler} observations of
16~Cyg~A and B \citep{Metcalfe2015}. The frequencies were derived from
observations obtained with} the Variability of solar IRradiance and Gravity
Oscillations (VIRGO) instrument \citep{virgo} {using 2.5 years of data \citep{Davies2015}}.
The best models identified by the four independent runs of the GA are listed
in Table~\ref{tab:solar-reference} under the headings AMP$_N$ along with
their individual $\chi^2$ values\footnote{See \url{https://amp.phys.au.dk/browse/simulation/829} for details of the AMP modeling.}.
The model with the lowest value of $\chi^2$ is the optimal solution
identified by AMP, and this is adopted as the reference model. The remaining
models reveal intrinsic parameter correlations, in particular between the
mass and initial composition. The final two columns of Table~\ref{tab:solar-reference}
show the values of $\langle P \rangle$ and $\sigma$ derived from the PPFs,
showing excellent agreement with the known solar properties: {$R, M, L \equiv 1$,
age\,$=4.60\pm0.04$~Gyr \citep{Houdek2011}.}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{lumrad}
\includegraphics[width=0.48\textwidth]{comparedist}
\caption{Comparison of measured radii (top), luminosities (middle),
and parallaxes (lower) with those deduced from the asteroseismic
parameters. The interferometric radii are denoted by the red circles
in the top panel, and the green triangle is the value from \citet{masana06}.}
\label{fig:lumrad}
\end{figure}
\section{Results\label{sec4}}
\begin{table*}
\begin{center}\caption{Reference models of the {\it Kepler} targets and the Sun \label{tab:referencemodels}}
\begin{tabular}{lccccccccccccccccccc}
\hline\hline
KIC ID& $R$ & $M$ & Age & $Z_i$ & $Y_i$ & $\alpha$ & $X_c/X_i$ & $a_0$ & $\chi^2_{N,r01}$ & $\chi^2_{N,r02}$ & $\chi^2_{N,\rm spec}$\\
& (\rsol) & (\msol) & (Gyr) & & \\
\hline
Sun & 1.003 & 1.01 & 4.38 & 0.0210 & 0.281 & 2.24 & 0.50 & -2.54 & 1.03 & 0.78 & 0.71\\
1435467 & 1.704 & 1.41 & 1.87 & 0.0231 & 0.284 & 1.84 & 0.43 & -3.95 & 2.68 & 1.64 & 1.49\\
2837475 & 1.613 & 1.41 & 1.70 & 0.0168 & 0.247 & 1.70 & 0.53 & -4.48 & 1.29 & 2.07 & 0.32\\
3427720 & 1.125 & 1.13 & 2.17 & 0.0168 & 0.259 & 2.10 & 0.64 & -2.41 & 1.10 & 1.26 & 0.15\\
3656476 & 1.326 & 1.10 & 8.48 & 0.0231 & 0.248 & 2.30 & 0.00 & -2.22 & 2.35 & 0.68 & 1.57\\
3735871 & 1.089 & 1.08 & 1.57 & 0.0157 & 0.292 & 2.02 & 0.71 & -3.64 & 1.47 & 0.67 & 0.05\\
4914923 & 1.326 & 1.01 & 7.15 & 0.0121 & 0.260 & 1.68 & 0.02 & -4.51 & 0.56 & 1.50 & 3.35\\
5184732 & 1.365 & 1.27 & 4.70 & 0.0340 & 0.242 & 1.92 & 0.27 & -4.43 & 6.98 & 2.32 & 0.85\\
5950854 & 1.257 & 1.01 & 9.01 & 0.0147 & 0.249 & 2.16 & 0.00 & -1.27 & 0.60 & 4.61 & 1.30\\
6106415 & 1.213 & 1.06 & 4.43 & 0.0184 & 0.295 & 2.04 & 0.18 & -3.48 & 0.93 & 2.81 & 0.54\\
6116048 & 1.239 & 1.06 & 5.84 & 0.0114 & 0.242 & 2.16 & 0.11 & -3.27 & 3.27 & 2.48 & 0.44\\
6225718 & 1.194 & 1.06 & 2.30 & 0.0117 & 0.286 & 2.02 & 0.49 & -5.99 & 3.47 & 0.97 & 0.64\\
6603624 & 1.159 & 1.03 & 8.64 & 0.0455 & 0.313 & 2.12 & 0.01 & -2.34 & 3.42 & 135.14 & 5.90\\
6933899 & 1.535 & 1.03 & 6.58 & 0.0152 & 0.296 & 1.76 & 0.00 & -4.38 & 1.45 & 1.25 & 0.21\\
7103006 & 1.957 & 1.56 & 1.94 & 0.0224 & 0.239 & 1.66 & 0.36 & -7.28 & 1.15 & 0.69 & 1.33\\
7106245 & 1.120 & 0.97 & 6.05 & 0.0070 & 0.242 & 1.98 & 0.22 & -4.02 & 2.96 & 0.73 & 4.41\\
7206837 & 1.579 & 1.41 & 1.72 & 0.0255 & 0.249 & 1.52 & 0.60 & -4.61 & 1.48 & 1.43 & 1.52\\
7296438 & 1.371 & 1.10 & 5.93 & 0.0309 & 0.315 & 2.04 & 0.02 & -2.76 & 0.74 & 0.53 & 0.47\\
7510397 & 1.828 & 1.30 & 3.58 & 0.0129 & 0.248 & 1.84 & 0.08 & -2.37 & 0.75 & 2.23 & 0.55\\
7680114 & 1.395 & 1.07 & 7.04 & 0.0197 & 0.277 & 2.02 & 0.00 & -3.00 & 1.63 & 0.74 & 0.00\\
7771282 & 1.645 & 1.30 & 3.13 & 0.0168 & 0.257 & 1.78 & 0.19 & -4.03 & 2.10 & 0.75 & 0.33\\
7871531 & 0.859 & 0.80 & 9.32 & 0.0125 & 0.296 & 2.02 & 0.34 & -4.15 & 1.06 & 0.65 & 1.25\\
7940546 & 1.917 & 1.39 & 2.58 & 0.0152 & 0.259 & 1.74 & 0.07 & -6.26 & 2.47 & 0.82 & 1.45\\
7970740 & 0.779 & 0.78 & 10.59 & 0.0094 & 0.244 & 2.36 & 0.45 & -2.55 & 4.93 & 5.09 & 3.34\\
8006161 & 0.954 & 1.06 & 4.34 & 0.0485 & 0.288 & 2.66 & 0.61 & -0.63 & 2.33 & 1.21 & 1.26\\
8150065 & 1.394 & 1.20 & 3.33 & 0.0162 & 0.252 & 1.62 & 0.21 & -3.97 & 2.03 & 2.30 & 0.66\\
8179536 & 1.353 & 1.26 & 2.03 & 0.0157 & 0.249 & 1.88 & 0.50 & -3.89 & 1.51 & 0.62 & 0.01\\
8379927 & 1.105 & 1.08 & 1.65 & 0.0162 & 0.287 & 1.82 & 0.71 & -4.98 & 1.87 & 1.63 & 0.33\\
8394589 & 1.169 & 1.06 & 3.82 & 0.0094 & 0.247 & 1.98 & 0.37 & -3.14 & 0.71 & 0.70 & 0.01\\
8424992 & 1.056 & 0.94 & 9.62 & 0.0162 & 0.264 & 2.30 & 0.14 & -1.38 & 0.70 & 0.30 & 0.22\\
8694723 & 1.493 & 1.04 & 4.22 & 0.0085 & 0.309 & 2.36 & 0.00 & -2.23 & 0.70 & 1.46 & 3.18\\
8760414 & 1.028 & 0.82 & 12.09 & 0.0042 & 0.239 & 2.14 & 0.07 & -2.42 & 0.52 & 1.69 & 4.43\\
8938364 & 1.361 & 1.00 & 11.00 & 0.0217 & 0.272 & 2.14 & 0.00 & -2.09 & 1.44 & 3.52 & 3.26\\
9025370 & 1.000 & 0.97 & 5.50 & 0.0184 & 0.253 & 1.60 & 0.54 & -6.01 & 1.45 & 3.78 & 0.27\\
9098294 & 1.151 & 0.99 & 8.22 & 0.0129 & 0.245 & 2.14 & 0.11 & -3.13 & 1.93 & 0.96 & 0.23\\
9139151 & 1.167 & 1.20 & 1.84 & 0.0203 & 0.265 & 2.48 & 0.63 & -1.58 & 1.66 & 1.26 & 0.17\\
9139163 & 1.582 & 1.49 & 1.26 & 0.0330 & 0.245 & 1.64 & 0.71 & -9.60 & 0.95 & 1.89 & 4.25\\
9206432 & 1.499 & 1.37 & 1.32 & 0.0247 & 0.285 & 1.82 & 0.65 & -2.37 & 1.68 & 1.10 & 0.72\\
9353712 & 2.183 & 1.56 & 2.17 & 0.0203 & 0.249 & 1.76 & 0.08 & -1.89 & 2.57 & 0.73 & 1.16\\
9410862 & 1.159 & 0.99 & 6.15 & 0.0091 & 0.247 & 1.90 & 0.20 & -3.11 & 1.28 & 0.75 & 0.74\\
9414417 & 1.896 & 1.40 & 2.67 & 0.0147 & 0.244 & 1.70 & 0.11 & -5.41 & 1.01 & 0.78 & 0.39\\
9955598 & 0.876 & 0.87 & 6.38 & 0.0203 & 0.308 & 2.16 & 0.48 & -2.71 & 1.15 & 2.13 & 0.13\\
9965715 & 1.224 & 0.99 & 3.00 & 0.0080 & 0.310 & 1.58 & 0.33 & -5.57 & 0.78 & 0.65 & 1.76\\
10079226 & 1.135 & 1.09 & 2.35 & 0.0203 & 0.291 & 1.84 & 0.61 & -4.10 & 1.39 & 0.73 & 0.12\\
10454113 & 1.282 & 1.27 & 2.03 & 0.0217 & 0.244 & 2.02 & 0.58 & -0.79 & 2.07 & 4.38 & 1.79\\
10516096 & 1.407 & 1.08 & 6.44 & 0.0168 & 0.270 & 2.04 & 0.00 & -2.81 & 1.29 & 1.14 & 0.65\\
10644253 & 1.073 & 1.04 & 1.14 & 0.0162 & 0.319 & 1.78 & 0.78 & -4.91 & 0.78 & 0.62 & 0.31\\
10730618 & 1.729 & 1.33 & 2.55 & 0.0147 & 0.253 & 1.34 & 0.30 & -2.14 & 2.04 & 3.36 & 0.14\\
10963065 & 1.210 & 1.04 & 4.28 & 0.0114 & 0.277 & 2.04 & 0.22 & -3.53 & 1.41 & 0.98 & 0.00\\
11081729 & 1.393 & 1.25 & 1.88 & 0.0143 & 0.271 & 1.86 & 0.51 & -5.62 & 6.03 & 5.17 & 1.56\\
11253226 & 1.635 & 1.53 & 1.06 & 0.0224 & 0.248 & 1.90 & 0.69 & -4.76 & 2.76 & 1.83 & 2.00\\
11772920 & 0.839 & 0.81 & 11.11 & 0.0143 & 0.254 & 1.82 & 0.43 & -3.90 & 2.28 & 0.35 & 0.33\\
12009504 & 1.379 & 1.13 & 3.44 & 0.0157 & 0.294 & 1.96 & 0.26 & -4.67 & 0.81 & 0.88 & 0.10\\
12069127 & 2.262 & 1.58 & 1.89 & 0.0203 & 0.262 & 1.64 & 0.12 & -4.46 & 3.00 & 0.79 & 0.02\\
12069424 & 1.223 & 1.07 & 7.35 & 0.0179 & 0.241 & 2.12 & 0.09 & -4.41 & 3.78 & 1.02 & 1.39\\
12069449 & 1.105 & 1.01 & 6.88 & 0.0217 & 0.278 & 2.14 & 0.22 & -2.90 & 4.93 & 0.94 & 0.69\\
12258514 & 1.601 & 1.25 & 6.11 & 0.0247 & 0.229 & 1.64 & 0.00 & -4.04 & 2.45 & 0.92 & 9.89\\
12317678 & 1.749 & 1.27 & 2.18 & 0.0107 & 0.302 & 1.74 & 0.13 & -5.26 & 1.22 & 1.09 & 0.65\\
\hline\hline
\end{tabular}
\end{center}
Notes: The parameters are radius, mass, age, initial metallicity $Z_i$ and helium $Y_i$ mass fraction, mixing-length parameter $\alpha$, ratio of current central hydrogen to initial hydrogen mass fraction, $X_c/X_i$, the $a_0$ parameter in Eq.~\ref{eqn:kjeldsen}, and the normalized $\chi^2$ values for the $r_{01}$, $r_{02}$ and spectroscopic data.
\end{table*}
{The sample of stars analyzed in this work spans the
main sequence and early subgiant phase, as illustrated by their positions in the
Hertzsprung-Russell diagram (Fig.~\ref{fig:hrdiag}).
They cover a range in mass of about 0.6 \msol, with about half of the sample
being within 10\% of the solar value.
For a representative set of four stars, Fig.~\ref{fig:fitdata0} compares
the measured frequency separation ratios (crosses) with the
corresponding values from the reference models (red filled dots).
The agreement with the seismic observations is generally
excellent, although some models fail to reproduce
specific features of the observed data. One example is KIC~10454113,
shown in the lower right panel: the observed ratios oscillate as a function of
frequency in a way that the models
fail to reproduce. These discrepancies are reflected in the normalized
\chisq\ value, $\chi^2_N = \chi^2/N = 3.2$, where $N$ is the number of frequency
ratios.
For KIC~8006161, shown in the top left panel, the fit is of higher quality,
with $\chi^2_N = 1.8$.}
The parameters of the reference models that are used to compare with the
observations are listed in Table~\ref{tab:referencemodels} along with
the individual $\chi^2_N$ values for $r_{01}$, $r_{02}$, and combined
\teff\ and \mh.
{For the Sun and each star in our sample, we derived a best estimate and uncertainty
for the stellar radius, mass, age, metallicity, luminosity, and surface gravity using the
method described in Sec.~\ref{sec3}
(see Table~\ref{tab:properties_derived}).
Using the rotation periods given in Table~\ref{tab1} and the derived
radius, we also
computed their rotational velocities.}
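The rotational velocities follow directly from the radius and rotation period via $v = 2\pi R/P$. A minimal sketch, where the IAU nominal solar radius used for the unit conversion is an assumption of this illustration:

```python
import math

R_SUN_KM = 6.957e5  # IAU nominal solar radius in km (assumed)
DAY_S = 86400.0

def v_rot(radius_rsun, period_days):
    """Equatorial rotation velocity in km/s from the radius
    (solar units) and rotation period (days)."""
    return 2.0*math.pi*radius_rsun*R_SUN_KM/(period_days*DAY_S)
```

For solar values ($R = 1\,R_\odot$, $P \approx 25.4$ d) this gives roughly 2 km~s$^{-1}$.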
{Since the AMP~1.3 method uses only one set of physics in the stellar modeling},
the derived uncertainties do not include possible systematic errors
arising from deficiencies in the model physics, such as the equation of
state, heavy-element settling, or convective overshoot.
However, the uncertainties do include the errors arising
from free parameters that are often fixed in {the stellar codes used in other methods}, for example
the mixing-length parameter $\alpha$, the initial chemical composition $(X_i, Y_i, Z_i)$, or a chemical enrichment law.
The {uncertainty on these} parameters contributes substantially to the error budget, in some
cases more than changing
the equation of state or the opacities would.
The effect of such changes in the physics has been studied in detail
for HD\,52265 by \cite{lebreton2014}. A similar detailed {analysis} for each
star in the sample we studied is beyond the scope of this paper.
{We refer to \citet{silvaaguirre2016}, who also analyzed data from \citet{Lund2016}
using seven distinct modeling methods and codes.}
The accuracy, namely the bias rather than {the} precision, of our results
can be {assessed} by an analysis of the solar observations.
As {noted} above, we {derived a best-matched model with} a mass of
1~\msol, a radius of 1~\rsol, and
an age that, within the derived uncertainty, matches the solar value.
A second accuracy test, at least for the age,
can be established based on the independently derived ages for the
binary system 16 Cyg A and B (also known as KIC~12069449 and KIC~12069424).
{The ages that we derive agree to within 1$\sigma$.}
\renewcommand{\tabcolsep}{4.2pt}
\begin{table*}
\caption{Derived stellar properties of the {\it Kepler} targets and the
Sun using VIRGO data \label{tab:properties_derived}}
\begin{tabular}{lcccccccccccccccccccccc}
\hline\hline
KIC ID& $R$ & $M$ & Age & $L$ & $T_{\rm eff}$ & $\log g$ & [M/H] & $\pi$ & $v$\\
& (\rsol) & (\msol) & (Gyr) & (\lsol) & (K) & (dex) & (dex) & (mas) & (km~s$^{-1}$)\\
\hline
Sun & 1.001 $\pm$ 0.005 & 1.001 $\pm$ 0.019 & 4.38 $\pm$ 0.22 & 0.97 $\pm$ 0.03 & 5732 $\pm$ 43 & 4.438 $\pm$ 0.003 & 0.07 $\pm$ 0.04 & & \\
1435467 & 1.728 $\pm$ 0.027 & 1.466 $\pm$ 0.060 & 1.97 $\pm$ 0.17 & 4.29 $\pm$ 0.25 & 6299 $\pm$ 75 & 4.128 $\pm$ 0.004 & 0.09 $\pm$ 0.09 & 6.99 $\pm$ 0.24 & 13.09 $\pm$ 1.76\\
2837475 & 1.629 $\pm$ 0.027 & 1.460 $\pm$ 0.062 & 1.49 $\pm$ 0.22 & 4.54 $\pm$ 0.26 & 6600 $\pm$ 71 & 4.174 $\pm$ 0.007 & 0.05 $\pm$ 0.07 & 8.18 $\pm$ 0.29 & 22.40 $\pm$ 2.22\\
3427720 & 1.089 $\pm$ 0.009 & 1.034 $\pm$ 0.015 & 2.37 $\pm$ 0.23 & 1.37 $\pm$ 0.08 & 5989 $\pm$ 71 & 4.378 $\pm$ 0.003 & -0.05 $\pm$ 0.09 & 11.04 $\pm$ 0.40 & 3.95 $\pm$ 0.61\\
3656476 & 1.322 $\pm$ 0.007 & 1.101 $\pm$ 0.025 & 8.88 $\pm$ 0.41 & 1.63 $\pm$ 0.06 & 5690 $\pm$ 53 & 4.235 $\pm$ 0.004 & 0.17 $\pm$ 0.07 & 8.49 $\pm$ 0.30 & 2.11 $\pm$ 0.24\\
3735871 & 1.080 $\pm$ 0.012 & 1.068 $\pm$ 0.035 & 1.55 $\pm$ 0.18 & 1.45 $\pm$ 0.09 & 6092 $\pm$ 75 & 4.395 $\pm$ 0.005 & -0.05 $\pm$ 0.04 & 8.05 $\pm$ 0.31 & 4.74 $\pm$ 0.51\\
4914923 & 1.339 $\pm$ 0.015 & 1.039 $\pm$ 0.028 & 7.04 $\pm$ 0.50 & 1.79 $\pm$ 0.12 & 5769 $\pm$ 86 & 4.198 $\pm$ 0.004 & -0.06 $\pm$ 0.09 & 8.64 $\pm$ 0.35 & 3.31 $\pm$ 0.46\\
5184732 & 1.354 $\pm$ 0.028 & 1.247 $\pm$ 0.071 & 4.32 $\pm$ 0.85 & 1.79 $\pm$ 0.15 & 5752 $\pm$ 101 & 4.268 $\pm$ 0.009 & 0.31 $\pm$ 0.06 & 14.53 $\pm$ 0.67 & 3.46 $\pm$ 0.43\\
5950854 & 1.254 $\pm$ 0.012 & 1.005 $\pm$ 0.035 & 9.25 $\pm$ 0.68 & 1.58 $\pm$ 0.11 & 5780 $\pm$ 74 & 4.245 $\pm$ 0.006 & -0.11 $\pm$ 0.06 & 4.41 $\pm$ 0.18 & \\
6106415 & 1.205 $\pm$ 0.009 & 1.039 $\pm$ 0.021 & 4.55 $\pm$ 0.28 & 1.61 $\pm$ 0.09 & 5927 $\pm$ 63 & 4.294 $\pm$ 0.003 & -0.00 $\pm$ 0.04 & 25.35 $\pm$ 0.87 & \\
6116048 & 1.233 $\pm$ 0.011 & 1.048 $\pm$ 0.028 & 6.08 $\pm$ 0.40 & 1.77 $\pm$ 0.13 & 5993 $\pm$ 73 & 4.276 $\pm$ 0.003 & -0.20 $\pm$ 0.08 & 13.31 $\pm$ 0.57 & 3.61 $\pm$ 0.41\\
6225718 & 1.234 $\pm$ 0.018 & 1.169 $\pm$ 0.039 & 2.23 $\pm$ 0.20 & 2.08 $\pm$ 0.11 & 6252 $\pm$ 63 & 4.321 $\pm$ 0.005 & -0.09 $\pm$ 0.06 & 19.32 $\pm$ 0.60 & \\
6603624 & 1.164 $\pm$ 0.024 & 1.058 $\pm$ 0.075 & 8.66 $\pm$ 0.68 & 1.23 $\pm$ 0.11 & 5644 $\pm$ 91 & 4.326 $\pm$ 0.008 & 0.24 $\pm$ 0.05 & 11.89 $\pm$ 0.59 & \\
6933899 & 1.597 $\pm$ 0.008 & 1.155 $\pm$ 0.011 & 7.22 $\pm$ 0.53 & 2.63 $\pm$ 0.06 & 5815 $\pm$ 47 & 4.093 $\pm$ 0.002 & 0.11 $\pm$ 0.03 & 6.48 $\pm$ 0.15 & \\
7103006 & 1.958 $\pm$ 0.025 & 1.568 $\pm$ 0.051 & 1.69 $\pm$ 0.12 & 5.58 $\pm$ 0.36 & 6332 $\pm$ 89 & 4.048 $\pm$ 0.006 & 0.09 $\pm$ 0.10 & 6.19 $\pm$ 0.23 & 21.44 $\pm$ 2.25\\
7106245 & 1.125 $\pm$ 0.009 & 0.989 $\pm$ 0.023 & 6.05 $\pm$ 0.39 & 1.56 $\pm$ 0.09 & 6078 $\pm$ 74 & 4.327 $\pm$ 0.003 & -0.44 $\pm$ 0.11 & 4.98 $\pm$ 0.20 & \\
7206837 & 1.556 $\pm$ 0.018 & 1.377 $\pm$ 0.039 & 1.55 $\pm$ 0.50 & 3.37 $\pm$ 0.15 & 6269 $\pm$ 87 & 4.191 $\pm$ 0.008 & 0.07 $\pm$ 0.15 & 5.28 $\pm$ 0.15 & 19.49 $\pm$ 1.37\\
7296438 & 1.370 $\pm$ 0.009 & 1.099 $\pm$ 0.022 & 6.37 $\pm$ 0.60 & 1.85 $\pm$ 0.08 & 5754 $\pm$ 55 & 4.205 $\pm$ 0.003 & 0.21 $\pm$ 0.07 & 6.09 $\pm$ 0.18 & 2.76 $\pm$ 0.30\\
7510397 & 1.823 $\pm$ 0.018 & 1.309 $\pm$ 0.037 & 3.51 $\pm$ 0.24 & 4.19 $\pm$ 0.20 & 6119 $\pm$ 69 & 4.031 $\pm$ 0.004 & -0.14 $\pm$ 0.06 & 11.75 $\pm$ 0.36 & \\
7680114 & 1.402 $\pm$ 0.014 & 1.092 $\pm$ 0.030 & 6.89 $\pm$ 0.46 & 2.07 $\pm$ 0.09 & 5833 $\pm$ 47 & 4.181 $\pm$ 0.004 & 0.08 $\pm$ 0.07 & 5.73 $\pm$ 0.17 & 2.70 $\pm$ 0.19\\
7771282 & 1.629 $\pm$ 0.016 & 1.268 $\pm$ 0.040 & 2.78 $\pm$ 0.47 & 3.61 $\pm$ 0.18 & 6223 $\pm$ 73 & 4.118 $\pm$ 0.004 & -0.03 $\pm$ 0.07 & 3.24 $\pm$ 0.10 & 6.94 $\pm$ 0.54\\
7871531 & 0.871 $\pm$ 0.008 & 0.834 $\pm$ 0.021 & 8.84 $\pm$ 0.46 & 0.60 $\pm$ 0.05 & 5482 $\pm$ 69 & 4.478 $\pm$ 0.006 & -0.16 $\pm$ 0.04 & 16.81 $\pm$ 0.81 & 1.31 $\pm$ 0.10\\
7940546 & 1.974 $\pm$ 0.045 & 1.511 $\pm$ 0.087 & 2.42 $\pm$ 0.17 & 5.69 $\pm$ 0.35 & 6330 $\pm$ 43 & 4.023 $\pm$ 0.005 & 0.00 $\pm$ 0.06 & 12.16 $\pm$ 0.44 & 8.79 $\pm$ 0.76\\
7970740 & 0.776 $\pm$ 0.007 & 0.768 $\pm$ 0.019 & 10.53 $\pm$ 0.43 & 0.42 $\pm$ 0.04 & 5282 $\pm$ 93 & 4.546 $\pm$ 0.003 & -0.37 $\pm$ 0.09 & 36.83 $\pm$ 1.71 & 2.19 $\pm$ 0.38\\
8006161 & 0.930 $\pm$ 0.009 & 1.000 $\pm$ 0.030 & 4.57 $\pm$ 0.36 & 0.64 $\pm$ 0.03 & 5351 $\pm$ 49 & 4.498 $\pm$ 0.003 & 0.41 $\pm$ 0.04 & 37.89 $\pm$ 1.18 & 1.58 $\pm$ 0.16\\
8150065 & 1.402 $\pm$ 0.018 & 1.222 $\pm$ 0.040 & 3.15 $\pm$ 0.49 & 2.52 $\pm$ 0.19 & 6138 $\pm$ 105 & 4.230 $\pm$ 0.005 & -0.04 $\pm$ 0.15 & 3.94 $\pm$ 0.18 & \\
8179536 & 1.350 $\pm$ 0.013 & 1.249 $\pm$ 0.031 & 1.88 $\pm$ 0.25 & 2.63 $\pm$ 0.11 & 6318 $\pm$ 59 & 4.274 $\pm$ 0.005 & -0.04 $\pm$ 0.07 & 6.91 $\pm$ 0.20 & 2.78 $\pm$ 0.18\\
8379927 & 1.102 $\pm$ 0.012 & 1.073 $\pm$ 0.033 & 1.64 $\pm$ 0.12 & 1.39 $\pm$ 0.10 & 5971 $\pm$ 91 & 4.382 $\pm$ 0.005 & -0.04 $\pm$ 0.05 & 30.15 $\pm$ 1.40 & 3.28 $\pm$ 0.26\\
8394589 & 1.155 $\pm$ 0.009 & 1.024 $\pm$ 0.030 & 3.82 $\pm$ 0.25 & 1.68 $\pm$ 0.09 & 6103 $\pm$ 61 & 4.324 $\pm$ 0.003 & -0.28 $\pm$ 0.07 & 8.47 $\pm$ 0.28 & \\
8424992 & 1.048 $\pm$ 0.005 & 0.930 $\pm$ 0.016 & 9.79 $\pm$ 0.76 & 0.99 $\pm$ 0.04 & 5634 $\pm$ 57 & 4.362 $\pm$ 0.002 & -0.12 $\pm$ 0.06 & 7.52 $\pm$ 0.23 & \\
8694723 & 1.463 $\pm$ 0.023 & 1.004 $\pm$ 0.036 & 4.85 $\pm$ 0.22 & 3.15 $\pm$ 0.18 & 6347 $\pm$ 67 & 4.107 $\pm$ 0.004 & -0.38 $\pm$ 0.08 & 8.18 $\pm$ 0.28 & \\
8760414 & 1.027 $\pm$ 0.004 & 0.814 $\pm$ 0.011 & 11.88 $\pm$ 0.34 & 1.15 $\pm$ 0.06 & 5915 $\pm$ 54 & 4.329 $\pm$ 0.002 & -0.66 $\pm$ 0.07 & 9.83 $\pm$ 0.32 & \\
8938364 & 1.362 $\pm$ 0.007 & 1.015 $\pm$ 0.023 & 10.85 $\pm$ 1.22 & 1.65 $\pm$ 0.15 & 5604 $\pm$ 115 & 4.174 $\pm$ 0.004 & 0.06 $\pm$ 0.06 & 6.27 $\pm$ 0.31 & \\
9025370 & 0.997 $\pm$ 0.017 & 0.969 $\pm$ 0.036 & 5.53 $\pm$ 0.43 & 0.71 $\pm$ 0.11 & 5296 $\pm$ 157 & 4.424 $\pm$ 0.006 & 0.01 $\pm$ 0.09 & 15.66 $\pm$ 1.44 & \\
9098294 & 1.150 $\pm$ 0.003 & 0.979 $\pm$ 0.017 & 8.23 $\pm$ 0.53 & 1.34 $\pm$ 0.05 & 5795 $\pm$ 53 & 4.312 $\pm$ 0.002 & -0.17 $\pm$ 0.07 & 8.30 $\pm$ 0.23 & 2.94 $\pm$ 0.20\\
9139151 & 1.137 $\pm$ 0.027 & 1.129 $\pm$ 0.091 & 1.94 $\pm$ 0.31 & 1.81 $\pm$ 0.11 & 6270 $\pm$ 63 & 4.375 $\pm$ 0.008 & 0.05 $\pm$ 0.10 & 9.57 $\pm$ 0.34 & 5.25 $\pm$ 1.07\\
9139163 & 1.569 $\pm$ 0.027 & 1.480 $\pm$ 0.085 & 1.23 $\pm$ 0.15 & 3.51 $\pm$ 0.24 & 6318 $\pm$ 105 & 4.213 $\pm$ 0.004 & 0.11 $\pm$ 0.00 & 9.85 $\pm$ 0.39 & \\
9206432 & 1.460 $\pm$ 0.015 & 1.301 $\pm$ 0.048 & 1.48 $\pm$ 0.31 & 3.47 $\pm$ 0.18 & 6508 $\pm$ 75 & 4.219 $\pm$ 0.009 & 0.06 $\pm$ 0.07 & 7.03 $\pm$ 0.26 & 8.39 $\pm$ 1.01\\
9353712 & 2.240 $\pm$ 0.061 & 1.681 $\pm$ 0.125 & 1.91 $\pm$ 0.14 & 7.27 $\pm$ 1.02 & 6343 $\pm$ 119 & 3.965 $\pm$ 0.008 & 0.12 $\pm$ 0.08 & 2.21 $\pm$ 0.16 & 10.03 $\pm$ 1.03\\
9410862 & 1.149 $\pm$ 0.009 & 0.969 $\pm$ 0.017 & 5.78 $\pm$ 0.82 & 1.56 $\pm$ 0.08 & 6017 $\pm$ 69 & 4.304 $\pm$ 0.003 & -0.34 $\pm$ 0.08 & 5.05 $\pm$ 0.16 & 2.55 $\pm$ 0.27\\
9414417 & 1.891 $\pm$ 0.015 & 1.401 $\pm$ 0.028 & 2.53 $\pm$ 0.17 & 4.98 $\pm$ 0.22 & 6260 $\pm$ 67 & 4.028 $\pm$ 0.004 & -0.07 $\pm$ 0.12 & 4.65 $\pm$ 0.13 & 8.96 $\pm$ 0.56\\
9955598 & 0.881 $\pm$ 0.008 & 0.885 $\pm$ 0.023 & 6.47 $\pm$ 0.45 & 0.58 $\pm$ 0.03 & 5400 $\pm$ 57 & 4.494 $\pm$ 0.003 & 0.06 $\pm$ 0.04 & 14.98 $\pm$ 0.53 & 1.30 $\pm$ 0.22\\
9965715 & 1.234 $\pm$ 0.015 & 1.005 $\pm$ 0.033 & 3.29 $\pm$ 0.33 & 1.85 $\pm$ 0.15 & 6058 $\pm$ 113 & 4.258 $\pm$ 0.004 & -0.27 $\pm$ 0.11 & 8.81 $\pm$ 0.51 & \\
10079226 & 1.129 $\pm$ 0.016 & 1.082 $\pm$ 0.048 & 2.75 $\pm$ 0.42 & 1.41 $\pm$ 0.10 & 5915 $\pm$ 89 & 4.364 $\pm$ 0.005 & 0.07 $\pm$ 0.06 & 7.05 $\pm$ 0.29 & 3.86 $\pm$ 0.33\\
10454113 & 1.272 $\pm$ 0.006 & 1.260 $\pm$ 0.016 & 2.06 $\pm$ 0.16 & 2.07 $\pm$ 0.08 & 6134 $\pm$ 61 & 4.325 $\pm$ 0.003 & 0.04 $\pm$ 0.04 & 11.94 $\pm$ 0.63 & 4.41 $\pm$ 0.33\\
10516096 & 1.398 $\pm$ 0.008 & 1.065 $\pm$ 0.012 & 6.59 $\pm$ 0.37 & 2.11 $\pm$ 0.08 & 5872 $\pm$ 43 & 4.173 $\pm$ 0.003 & -0.06 $\pm$ 0.06 & 7.53 $\pm$ 0.21 & \\
10644253 & 1.090 $\pm$ 0.027 & 1.091 $\pm$ 0.097 & 0.94 $\pm$ 0.26 & 1.45 $\pm$ 0.09 & 6033 $\pm$ 67 & 4.399 $\pm$ 0.007 & 0.01 $\pm$ 0.10 & 10.45 $\pm$ 0.39 & 5.05 $\pm$ 0.42\\
10730618 & 1.763 $\pm$ 0.040 & 1.411 $\pm$ 0.097 & 1.81 $\pm$ 0.41 & 4.04 $\pm$ 0.56 & 6156 $\pm$ 181 & 4.095 $\pm$ 0.011 & 0.05 $\pm$ 0.18 & 3.35 $\pm$ 0.27 & \\
10963065 & 1.204 $\pm$ 0.007 & 1.023 $\pm$ 0.024 & 4.33 $\pm$ 0.30 & 1.80 $\pm$ 0.08 & 6097 $\pm$ 53 & 4.288 $\pm$ 0.003 & -0.24 $\pm$ 0.06 & 11.46 $\pm$ 0.34 & 4.84 $\pm$ 0.65\\
11081729 & 1.423 $\pm$ 0.009 & 1.257 $\pm$ 0.045 & 2.22 $\pm$ 0.10 & 3.29 $\pm$ 0.07 & 6474 $\pm$ 43 & 4.215 $\pm$ 0.026 & 0.07 $\pm$ 0.03 & 7.48 $\pm$ 0.17 & 26.28 $\pm$ 2.98\\
11253226 & 1.606 $\pm$ 0.015 & 1.486 $\pm$ 0.030 & 0.97 $\pm$ 0.21 & 4.80 $\pm$ 0.20 & 6696 $\pm$ 79 & 4.197 $\pm$ 0.007 & 0.10 $\pm$ 0.05 & 8.07 $\pm$ 0.23 & 22.32 $\pm$ 2.28\\
11772920 & 0.845 $\pm$ 0.009 & 0.830 $\pm$ 0.028 & 10.79 $\pm$ 0.96 & 0.42 $\pm$ 0.06 & 5084 $\pm$ 159 & 4.502 $\pm$ 0.004 & -0.06 $\pm$ 0.09 & 14.82 $\pm$ 1.24 & \\
12009504 & 1.382 $\pm$ 0.022 & 1.137 $\pm$ 0.063 & 3.44 $\pm$ 0.44 & 2.46 $\pm$ 0.25 & 6140 $\pm$ 133 & 4.213 $\pm$ 0.006 & -0.04 $\pm$ 0.05 & 7.51 $\pm$ 0.42 & 7.44 $\pm$ 0.55\\
12069127 & 2.283 $\pm$ 0.033 & 1.621 $\pm$ 0.084 & 1.79 $\pm$ 0.14 & 7.26 $\pm$ 0.42 & 6267 $\pm$ 79 & 3.926 $\pm$ 0.010 & 0.15 $\pm$ 0.08 & 2.35 $\pm$ 0.08 & 125.54 $\pm$ 7.07\\
12069424 & 1.223 $\pm$ 0.005 & 1.072 $\pm$ 0.013 & 7.36 $\pm$ 0.31 & 1.52 $\pm$ 0.05 & 5785 $\pm$ 39 & 4.294 $\pm$ 0.001 & -0.04 $\pm$ 0.05 & 47.44 $\pm$ 1.00 & 2.60 $\pm$ 0.20\\
12069449 & 1.113 $\pm$ 0.016 & 1.038 $\pm$ 0.047 & 7.05 $\pm$ 0.63 & 1.21 $\pm$ 0.11 & 5732 $\pm$ 83 & 4.361 $\pm$ 0.007 & 0.15 $\pm$ 0.08 & 46.77 $\pm$ 2.10 & 2.43 $\pm$ 0.63\\
12258514 & 1.593 $\pm$ 0.016 & 1.251 $\pm$ 0.016 & 5.50 $\pm$ 0.40 & 2.63 $\pm$ 0.12 & 5808 $\pm$ 61 & 4.129 $\pm$ 0.002 & 0.10 $\pm$ 0.09 & 12.79 $\pm$ 0.40 & 5.37 $\pm$ 0.66\\
12317678 & 1.788 $\pm$ 0.014 & 1.373 $\pm$ 0.030 & 2.30 $\pm$ 0.20 & 5.49 $\pm$ 0.28 & 6587 $\pm$ 97 & 4.064 $\pm$ 0.005 & -0.26 $\pm$ 0.09 & 6.89 $\pm$ 0.23 & \\
\hline\hline
\end{tabular}
Notes: The mean model parameters are radius, mass, age, luminosity, effective temperature, surface gravity, metallicity, parallax, and rotational velocity.
The latter two are derived using data from this table and Table~\ref{tab1}.
\end{table*}
\subsection{Accuracy of radii and luminosities\label{sec:comparison}}
To test the accuracy of the derived radii and luminosities, we have
compiled measured values of these properties for nine stars (Table~\ref{tab:lumradobs}). {These stars have reliable Hipparcos parallaxes and are not members of
close binary systems.}
Only three stars in this subsample have interferometrically measured
radii \citep{huber12,white13}. {The angular diameters from \citet{masana06} and \citet{huber14} were derived from broadband photometry and from literature atmospheric properties and stellar evolution models, respectively.}
\citet{met1216cyg} and \citet{Metcalfe2014} derived the luminosities using
{extinction
estimates from \citet{ammons2006} and the bolometric corrections from \citeauthor{flower1996} (\citeyear{flower1996}; see \citealt{torres2010}).}
\begin{table}
\begin{center}
\caption{Luminosities, radii, and parallaxes from independent sources}
\label{tab:lumradobs}
\begin{tabular}{lcccccc}
\hline\hline
KIC ID & $L$ & $R$ & $\pi$ \\
& (\lsol) & (\rsol) & (mas) \\
\hline
8006161 & 0.61 $\pm$ 0.02 & 0.950$^1$ $\pm$ 0.020 & 37.47 $\pm$ 0.49 \\
9139151 & 1.63 $\pm$ 0.40 & 1.160$^3$ $\pm$ 0.020 & 9.46 $\pm$ 1.15 \\
9139163 & 3.88 $\pm$ 0.69 & 1.570$^3$ $\pm$ 0.030 & 9.49 $\pm$ 0.83 \\
9206432 & 4.95 $\pm$ 1.48 & 1.520$^3$ $\pm$ 0.030 & 5.85 $\pm$ 0.87 \\
10454113 & 2.60 $\pm$ 0.36 & 1.240$^3$ $\pm$ 0.020 & 9.95 $\pm$ 0.67 \\
11253226 & 4.22 $\pm$ 0.61 & 1.576$^4$ $\pm$ 0.143 & 8.52 $\pm$ 0.60 \\
12069424 & 1.56 $\pm$ 0.05 & 1.220$^1$ $\pm$ 0.020 & 47.44 $\pm$ 0.27 \\
12069449 & 1.27 $\pm$ 0.04 & 1.120$^1$ $\pm$ 0.020 & 47.14 $\pm$ 0.27 \\
12258514 & 2.84 $\pm$ 0.25 & 1.590$^3$ $\pm$ 0.040 & 12.32 $\pm$ 0.51 \\
\hline\hline
\end{tabular}
\end{center}
The luminosities are from \citet{met1216cyg,Metcalfe2014}. The references to
the radii are
$^1$\citet{huber12} $^2$\citet{white13} $^3$\citet{huber14} $^4$\citet{masana06}.
The parallaxes are from \citet{hipparcos07}.
\end{table}
A comparison of these independent measures of stellar radii and luminosities with
those derived using our asteroseismic methodology is shown in the top
two panels of Fig.~\ref{fig:lumrad}.
This comparison, expressed as measurement differences scaled by the
uncertainties listed in the literature, shows no
systematic biases or trends for this subsample of nine stars.
The mean relative difference is --0.40 with a root mean
square (rms) around the
mean of 0.59
for the interferometrically measured radii (references 1 and 2, red filled circles)
and --0.28 $\pm$ 1.03 for the radii derived using photometry
and isochrones (references 3 and 4).
For the luminosity the mean relative difference is --0.35 with an rms around
the mean of 1.1.
\subsection{Asteroseismic parallaxes \label{sec:asteroseismicdistances}}
We used the luminosity $L$ derived from the asteroseismic analysis to compute
the distance to each star, expressed as a parallax.
Using the modeled surface gravity and the observed
\teff\ and \feh,\ we derived the amount of interstellar absorption between the top
of the Earth's atmosphere and the star, $A_{Ks}$, using
the isochrone method described in \citet{schultheis2014}.
Here, the subscript $Ks$ refers to the 2MASS $K_s$ filter \citep{2MASS}.
With the same observed \teff, we computed the corresponding
bolometric correction $BC_{Ks}$ for this band
using $BC_{Ks} = 4.51465 - 0.000524461\,T_{\rm eff}$ \citep{marigo2008},
where the solar bolometric magnitude is 4.72 mag.
The $K_s$-band magnitude and $A_{Ks}$ are listed in Table~\ref{tab1}.
The distance, $d$, or parallax, $\pi$,
is then computed directly from $L$, $K_s$, $BC_{Ks}$, and $A_{Ks}$.
{The parallaxes and uncertainties of the stars in our sample
are listed in Table~\ref{tab:properties_derived}.
They were derived using Monte Carlo simulations, described as follows.
We perturbed each of the input data measures $L$, $A_{Ks}$, $K_s$, and $BC_{Ks}$,
using noise sampled from a Gaussian distribution with zero mean
and standard deviation equivalent to their errors to calculate
a parallax.
By repeating the perturbations 10,000 times, we obtained
a distribution of parallaxes, which is modeled by
a Gaussian function.
The mean and standard deviation are adopted as the parallax
value and its uncertainty.
In most cases, the derived parallax error is dominated by the
luminosity error.}
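The parallax computation and its Monte Carlo error propagation described above can be sketched as follows, assuming the standard distance-modulus relation between the apparent and absolute bolometric magnitudes:

```python
import numpy as np

MBOL_SUN = 4.72  # solar bolometric magnitude adopted in the text

def parallax_mas(L, Ks, BC, A):
    """Parallax in mas from luminosity (Lsun), 2MASS Ks magnitude,
    bolometric correction BC_Ks, and extinction A_Ks."""
    M_bol = MBOL_SUN - 2.5*np.log10(L)          # absolute bolometric magnitude
    m_bol = Ks + BC - A                          # dereddened apparent magnitude
    d_pc = 10.0**((m_bol - M_bol + 5.0)/5.0)     # distance modulus
    return 1000.0/d_pc

def mc_parallax(L, sL, Ks, sKs, BC, sBC, A, sA, n=10000, rng=None):
    """Monte Carlo propagation: perturb each input with Gaussian noise
    and return the mean and standard deviation of the parallax."""
    rng = np.random.default_rng(rng)
    pi = parallax_mas(rng.normal(L, sL, n), rng.normal(Ks, sKs, n),
                      rng.normal(BC, sBC, n), rng.normal(A, sA, n))
    return pi.mean(), pi.std()
```

As a check, a star with $L = 1\,L_\odot$ and dereddened $m_{\rm bol} = 4.72$ lies at 10 pc, so the routine returns 100 mas.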
A comparison between the derived parallaxes and existing literature
values (\citealt{hipparcos07}, Table~\ref{tab:lumradobs})
again validates our results, as shown in {the lower panel of}
Fig.~\ref{fig:lumrad}, where no
significant trend can be seen.
In particular, we note that for the binary KIC~12069424 and KIC~12069449
(16 Cyg A\&B), we obtain almost identical parallaxes of
{47.4 mas and 46.8 mas,} equivalent to
a difference of 0.3 pc at a distance of 21.2 pc.
{This result provides further evidence
of the accuracy of our derived properties.}
\subsection{Trends in stellar properties \label{sec:stellartrends}}
Performing a homogeneous analysis on a {relatively large sample
allows us to check for trends in some stellar parameters
and compare them to trends derived or established by other methods.
We performed this check for two parameters: the mixing-length
parameter and the stellar age.}
\subsubsection{Mixing-length parameter versus \teff\ and \logg}
The mixing-length parameter $\alpha$ is
usually calibrated for a solar model and then applied to all
models over a range of masses and metallicities.
However, several authors have shown that this approach
is not correct \citep[e.g.,][]{yildiz2006,bonaca2012,creevey2012A}.
The values of $\alpha$ resulting from a GA {analysis} offer
an effective way to
test and subsequently constrain this parameter, since {by design}
the GA only restricts $\alpha$ to be between 1.0 and 3.0, {a range large enough
to encompass all plausible values}.
The color-coded distribution of $\alpha$
with \logg\ and \teff\
{is shown}
in the top panel of Fig.~\ref{fig:poster_teffalpha}, using the
results derived from our sample of 57 stars and the Sun.
It is evident from this figure that for a given value of \logg, the
value of $\alpha$ has an upper limit.
This upper limit can be represented by the equation
$\alpha < 1.65 \log g - 4.75$,
and this
is denoted by the dashed line in the figure.
A regression analysis considering the model values of \logg, $\log$\teff\ and
[M/H] yields
\begin{eqnarray}
\alpha &=& 5.972778 + 0.636997 \log g \nonumber \\
&& - 1.799968 \log T_{\rm eff} + 0.040094 [{\rm M}/{\rm H}],
\label{eqn:alpha_regression}
\end{eqnarray}
with a mean and rms of the residuals to the fit of $-0.01 \pm 0.15$
for the 58 stars.
The residuals of this fit scaled by the uncertainties in $\alpha$
are shown in the lower panel of
Fig.~\ref{fig:poster_teffalpha} as a function of \teff.
No trend with this parameter can be seen.
This equation yields a value of $\alpha = 2.03$ for the known solar properties, within 1$\sigma$ of
its mean value (2.12).
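As a quick numerical check of this regression, one can verify that it reproduces $\alpha \simeq 2.03$ for the Sun; the adopted solar inputs ($\log g = 4.44$ dex, $T_{\rm eff} = 5777$ K, [M/H] $= 0$) are canonical values assumed here for illustration:

```python
import math

def alpha_mlt(logg, teff, mh):
    """Mixing-length parameter from the regression derived in the text."""
    return (5.972778 + 0.636997 * logg
            - 1.799968 * math.log10(teff)
            + 0.040094 * mh)

# Canonical solar values (assumed): log g = 4.44, Teff = 5777 K, [M/H] = 0
alpha_sun = alpha_mlt(4.44, 5777.0, 0.0)   # ~2.03
```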
\begin{figure}
\includegraphics[width=0.48\textwidth]{poster_teffalpha}
\includegraphics[width=0.48\textwidth]{residual-alpha}
\caption{{\sl Top:} Distribution of \logg\ and $\alpha$ for the full sample.
The color coding denotes \teff:
red for \teff\ $<$ 5600 K,
yellow for 5600 K $<$ \teff\ $<$ 6000 K,
green for 6000 K $<$ \teff\ $<$ 6300 K,
and blue for \teff\ $>$ 6300 K.
{\sl Bottom:} Residuals of the regression analysis scaled by the uncertainties
in $\alpha$ as a function of \teff.
\label{fig:poster_teffalpha}}
\end{figure}
{These results agree in part with those derived by \citet{magic2015},
who used a full 3D radiative hydrodynamic simulation for modeling convective envelopes.}
These authors found that $\alpha$ increases with \logg\ and decreases with
\teff, {which is qualitatively} in agreement with our results.
{The size of the variation that they inferred}, however, is smaller than the
values we find.
In our sample, $\alpha$ varies between 1.7 and 2.4,
while for the same range in \logg\ and \teff, \citeauthor{magic2015} find variations
in $\alpha$ from 1.9 to 2.3.
We note that the range of metallicity
{in our sample is much smaller than the range in their work.
This may explain the weaker and opposite metallicity dependence of $\alpha$ that we find}.
\subsubsection{Age and $\langle r_{02} \rangle$}
\begin{figure}
\includegraphics[width=0.48\textwidth]{res_Age_r02}
\caption{Derived stellar age as a function of the mean ratio $\langle r_{02} \rangle$.
\label{fig:res_Age_r02}}
\end{figure}
The $r_{02}$ frequency ratios {are constructed from the so-called
{\it small frequency separations},
and} these are effective at probing the gradients near the core of the star \citep{Roxburgh2003}.
As the core is most sensitive to nuclear processing, the $r_{02}$ ratios are a diagnostic
of the evolutionary state of the star.
Using theoretical models, \citet{lebMon2009} showed a relationship between
the mean value of $r_{02}$ and the {stellar} age.
This relationship was recently used by \citet{appourchaux2015} to
estimate the age of the binary KIC~7510397 (HIP~93511).
Figure~\ref{fig:res_Age_r02} shows the distribution
{of the mean of
the $r_{02}$ ratios, that is, $\langle r_{02} \rangle$,}
versus the derived ages for the sample of stars studied here.
A linear fit to these data leads to the following estimate of
the stellar age, $\tau$ in Gyr, based on $ \langle r_{02} \rangle$
\begin{equation}
\tau = 17.910 - 193.918 \langle r_{02} \rangle.
\label{eqn:r02}
\end{equation}
{This relation is, of course, only valid for the range covered by our sample.
The range of radial orders used for calculating $\langle r_{02} \rangle$ has
{almost no impact on
this result (an effect smaller than 1\%)}.
We note that when inserting the value of $\langle r_{02} \rangle = 0.068$ for the Sun,
Eq.~\ref{eqn:r02} yields an age of 4.7 Gyr, in excellent agreement with the Sun's
age as determined by other means.}
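Equation~\ref{eqn:r02}, together with the solar check quoted above, can be transcribed directly (the solar value $\langle r_{02} \rangle = 0.068$ is taken from the text):

```python
def age_from_r02(mean_r02):
    """Stellar age in Gyr from the mean r02 ratio (linear fit in the text)."""
    return 17.910 - 193.918 * mean_r02

age_sun = age_from_r02(0.068)   # ~4.7 Gyr for the Sun
```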
\section{Characterizing surface effects\label{sec:sec5}}
It is known that a direct comparison of observed frequencies
with model frequencies derived from 1D stellar structure models
reveals a systematic discrepancy {that increases with
the mode frequency}; this is commonly referred to as
{\it surface effects}
(\citealt{rosenthal1997}, see Section~\ref{sec1}).
This discrepancy arises because a 1D stellar atmosphere does not represent the
{actual structural and thermal properties of the
stellar atmosphere in the layers close to the surface
and because non-adiabatic effects that are present immediately below the surface are not
included when computing resonant frequencies using an adiabatic code.}
Some recent works {have attempted} to produce more realistic stellar atmospheres
by replacing the outer layers of a 1D stellar envelope by an
averaged 3D surface simulation, by
including the effects of turbulent pressure in the
equation of hydrostatic support and the opacity changes caused by
temperature fluctuations, and by
considering non-adiabatic effects \citep{Trampedach2014,Trampedach2016,Houdek2016}.
{These efforts
reduced the approximately $-$15~$\mu$Hz discrepancy to around
+2~$\mu$Hz near 4000~$\mu$Hz when including
both structural and modal effects.}
While progress is being made, we are still not in a position to
apply these calculations for a large sample of stars.
To sidestep this problem, several authors {have suggested} the use
of combination frequencies that are insensitive to this systematic
offset in frequency (see, e.g., \citealt{Roxburgh2003}),
hence the exclusive use of $r_{01}$ and $r_{02}$ in
the AMP~1.3 method.
However, {since individual frequencies contain more information
than ratios of frequency separations,}
some authors have derived simple prescriptions to
mitigate {the surface effects}.
One such parametrization is that of \citet{Kjeldsen2008},
who suggested a simple correction to the 1D model frequencies
$\delta\nu_{n,l}$ of the
form {of a power law,}
\begin{equation}
\delta\nu_{n,l} = a_0 \left ( \frac{\nu^{\rm obs}_{n,l}}{\nu_{\rm max}} \right )^{b}
\label{eqn:kjeldsen}
,\end{equation}
where $b = 4.82$ is a fixed value {calibrated on a solar model},
{$\nu_{\rm max}$ is the frequency of
the highest-amplitude mode \citep{Lund2016}}, and
$a_0$ is {computed from the differences between the} observed and model
frequencies \citep{Metcalfe2009,Metcalfe2014},
\begin{equation}
a_0 = \frac{\langle \nu_{n,0}^{\rm obs} \rangle - \langle \nu_{n,0}^{\rm mod} \rangle}{N^{-1}_{0} \sum_{i=1}^{N_{0}} [\nu^{\rm obs}_i / \nu_{\rm max} ]^b}
.\end{equation}
{Here, $\nu^{\rm obs}_{n,l}$ and $\nu^{\rm mod}_{n,l}$ are
the observed and model frequency of radial order $n$ and
degree $l$, respectively,
and $N_{0}$ is the number of $l=0$ frequencies.}
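The prescription of Eq.~\ref{eqn:kjeldsen} can be sketched as follows. The function recovers $a_0$ from the $l=0$ offsets and returns the frequency-dependent correction to be added to the model frequencies; the array names and the synthetic inputs used for checking are illustrative assumptions:

```python
import numpy as np

def kjeldsen_correction(nu_obs_l0, nu_mod_l0, nu, nu_max, b=4.82):
    """Return a0 and the surface correction delta_nu = a0 (nu/nu_max)^b,
    with a0 fixed by the mean offset between observed and model l=0 modes."""
    a0 = ((nu_obs_l0.mean() - nu_mod_l0.mean())
          / np.mean((nu_obs_l0 / nu_max) ** b))
    return a0, a0 * (nu / nu_max) ** b
```

Because $a_0$ is the ratio of the mean offset to the mean scaled power law, a model whose frequencies differ from the observations by exactly $a_0 (\nu/\nu_{\rm max})^b$ returns $a_0$ without bias.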
In the absence of perfect 3D simulations,
the interest in using such a surface correction becomes
evident when we consider {not only}
that the individual frequencies contain
a higher information content, but more importantly, {that} the
$r_{01}$ and $r_{02}$ frequency ratios
are only useful if the precision on these derived
quantities is high enough.
{A precision like this on the ratios requires not only a high precision
on the individual frequencies, but also
enough radial orders to constrain
the stellar modeling.} This is not necessarily the case for
some stars, where, for example, ground-based campaigns are limited in time-domain
coverage, such as the case of $\nu$ Ind \citep{nuind},
or even for space-based missions such as the TESS mission, where
{only one month of continuous data will be available for stars at
certain ecliptic latitudes}.
{Similarly, limited precision will also be achieved for the stars observed in the
PLATO {\it \textup{step-and-stare}} phase, since the observation window
will only be two to three months per field.}
The AMP~1.3 method exclusively uses the
$r_{01}$ and $r_{02}$ frequency ratios, and our results
{are therefore expected to be} insensitive
to {surface effects}.
{Hence}, using the resulting models and the observed frequencies,
we can explore the nature of the
surface term for a large sample of stars, and in particular,
we can test to which extent the \citet{Kjeldsen2008} prescription is useful.
\subsection{Surface {effects} as observed in the Sun at low degrees}
{The magnitude of the surface {effect} on the solar frequencies is on the
order of 10--15~$\mu$Hz around 4000~$\mu$Hz for the low-degree modes ($l=0,1,2,$ and 3)}.
Our
analysis using the solar data reveals a similar offset.
In the top panel of Fig.~\ref{fig:best100_surfaceterm} we show the
solar surface term by comparing the input frequencies
with those of the models.
The term of the reference model is
shown by the thick line with filled black dots, and in
gray we show those
for 100 of the best solar models, with the mean of these 100 shown as the thick dashed line.
At \numax, the value of $a_0$ is $-2.5~\mu$Hz for the reference model,
and for the 100 representative models it spans $-2.3$ to $-4.6~\mu$Hz.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{best100_surface9999999}
\caption{{\sl Top:} {Surface term} for the reference solar model
(connected black dots) for the $l=0$ frequencies as a function
of observed frequency scaled by \numax.
The surface term for
a sample of 100 of the best models is also shown for the Sun (gray),
with the mean
value highlighted by the dashed line.
The blue connected squares show the empirical surface correction $\delta\nu_{n,l}$
(Eq.~\ref{eqn:kjeldsen}) based on the reference model.
{\sl Lower:} The differences between the observed and corrected model
frequencies as a function of scaled frequency, with the solar observational
errors overplotted in blue. The shaded gray areas represent the mean and
standard deviation of $q$ for the same 100 models shown in the top panel.
The dotted vertical lines delimit the region used to calculate
the quality metric $Q$. }
\label{fig:best100_surfaceterm}
\end{center}
\end{figure}
When we apply Eq.~\ref{eqn:kjeldsen} to the reference solar model, we calculate
a correction $\delta\nu_{n,l}$ that successfully mitigates the surface {effects}.
This is clearly shown in the top panel of Fig.~\ref{fig:best100_surfaceterm},
where the surface {term} for the reference model (black connected dots) is traced by the
scaled surface correction $\delta\nu$ (blue connected squares) for the
$l=0$ modes alone.
By applying the proposed corrections $\delta\nu_{n,l}$ to the
model frequencies, we can then make a quantitative
comparison between the model and the data.
This agreement is shown in the lower panel for $l=0$, and we denote it as
$q_{n,l} = \nu^{\rm obs}_{n,l} - \nu^{\rm mod}_{n,l} - \delta\nu_{n,l}$.
To quantify the agreement between the corrected model frequencies and the observed ones,
we define the metric $Q$ as the median of the
{absolute value} of the residuals,
\begin{equation}
Q = \mathrm{median}\left | {q_{n,l}} \right |,
\label{eqn:chit}
\end{equation}
for all observed $n$ and $l$ defined in the region of
$0.7 \le \nu_{n,l}^{\rm obs} / \nu_{\rm max} \le 1.3$.
This region is delimited in the lower panel by the vertical dotted lines.
We note that we purposely exclude any reference to an observational
error in the
definition of $q$,
as the surface correction results from an error in the models and is not related to the
precision of the frequency data.
In the ideal case and in the absence of errors in the data, $Q \rightarrow 0~\mu$Hz, which means that the model is perfect.
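The metric of Eq.~\ref{eqn:chit} can be transcribed directly; here $q$ is computed as the residual between the observed frequencies and the surface-corrected model frequencies, and the synthetic inputs in the check below are illustrative assumptions:

```python
import numpy as np

def quality_metric(nu_obs, nu_mod, delta_nu, nu_max):
    """Median |q| over 0.7 <= nu_obs/nu_max <= 1.3, where q is the residual
    between observed and surface-corrected model frequencies; observational
    errors are deliberately excluded from the definition."""
    q = nu_obs - (nu_mod + delta_nu)
    x = nu_obs / nu_max
    sel = (x >= 0.7) & (x <= 1.3)
    return np.median(np.abs(q[sel]))
```

By construction, a model whose frequencies match the observations once the correction is applied yields $Q = 0$, while a constant residual offset is returned unchanged.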
The value of
$Q$ is 0.38 $\mu$Hz for the reference solar model, and the mean
value for the 100 solar models shown in
Fig.~\ref{fig:best100_surfaceterm} is 0.51 $\mu$Hz.
From this figure and the low value of the quality metric, it is expected that the
\citet{Kjeldsen2008} empirical surface correction $\delta\nu_{n,l}$ (Eq.~\ref{eqn:kjeldsen})
is useful for mitigating {the surface effects} for this solar model.
\subsection{{Surface effects} for other stars}
Is the simplified surface correction useful in other stars? And
if so, to what extent?
These are the questions that we aim to answer by inspecting the
reference models (Table~\ref{tab:referencemodels})
of the best-fit stars within our sample.
We define a subset of stars by selecting those with
$\chi^2_N \leq 3.0$\footnote{The limit of 3.0 is rather arbitrary
and was chosen as a compromise between having an adequate sample size
and the best match to the data.
Using a threshold of 2.0 or 4.0 does not change the results significantly.} for both $r_{01}$ and
$r_{02}$.
This {selection results in a subset of} 44 stars.
The differences between the observed frequencies and
the frequencies of the reference models for this subset
are shown in Fig.~\ref{fig:star_surfacecorr},
{and we assume} that these differences are dominated
by {the surface effects.}
For the stars represented by the continuous lines it can be noted
that the {remaining discrepancies are} quite similar in
{magnitude} and shape for the {less evolved stars}.
{For} the more evolved stars {(\logg\ $<$ 4.2, indicated
by dashed lines), the remaining discrepancies are larger and {of a different nature},
and cannot readily be modeled by a simple power law.}
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{Star_surfacecorr}
\caption{Surface terms for the stars in our subsample defined by
the criteria of $\chi_{N}^2 (r_{01},r_{02}) \le 3$.
For clarity, the more evolved
stars are shown by the dashed lines.}
\label{fig:star_surfacecorr}
\end{center}
\end{figure}
For each of the stars, {a value of $a_0$ is derived directly from
the comparison of model and observed frequencies (see Table~\ref{tab:properties_derived}),}
and
Eq.~\ref{eqn:kjeldsen} is used to calculate the surface
correction $\delta\nu_{n,l}$ to apply to the model frequencies.
We then calculate the metric $Q$ for each star in the subsample,
{and these values} are shown as a function of $a_0$ in
Fig.~\ref{fig:dif_cut}.
We see very clearly that as the difference between the
observed and model frequency at \numax\ increases (i.e., $a_0$ becomes more negative), $Q$ also increases,
indicating that the \citet{Kjeldsen2008} correction becomes less {adequate}
to mitigate the surface {effects}.
It seems then quite likely that there is a value of $Q$ (and $a_0$) that defines
a limit where the surface correction is useful.
By inspecting the residuals between observed and corrected model frequencies
for this subset of stars, we
found that when $Q \lesssim 1.0~\mu$Hz, we obtained a very good
match to the observed frequencies when the surface correction was included.
These stars also have values of $a_0$ that are typically less negative than {$-$6.0~$\mu$Hz}, as shown in
Fig.~\ref{fig:dif_cut}, just like the solar case.
As {an illustration}, we present some \'echelle diagrams
in Fig.~\ref{fig:someechelles} with different values of $Q$
to {show} the validity of this criterion.
{A visual inspection of the residuals and the \'echelle diagrams
for this subsample of stars led to the same conclusion.}
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{dif_cut}
\caption{Metric $Q$ versus $a_0$ for the stars in our subsample.
{The dashed lines highlight the approximate limitation in $Q$ and $a_0$
where the surface correction enables a useful comparison
between the observed and corrected model frequencies}.}
\label{fig:dif_cut}
\end{center}
\end{figure}
When we rely on the criteria of $Q \lesssim 1$ $\mu$Hz, we can trace the
ranges of the stellar parameters where the surface correction
mitigates the surface {effects}. This is {presented} in
Fig.~\ref{fig:whichparameterswork0}, {which shows}
the distribution of
observed and inferred stellar properties of stars from this subsample (open circles)
along with the stars that satisfy the criterion of
$Q \lesssim 1.0~\mu$Hz (filled dark blue circles) and
$Q \lesssim 1.2~\mu$Hz (filled light blue circles).
We also delimit the regions (dashed lines) where we infer that
the correction is no longer useful.
More concretely, we find that the limit of the solar-like regime
in terms of observed properties is
approximately at $\log g = 4.2$, \teff\ $= 6250$ K,
\mlsep\ $= 70 \mu$Hz and \numax\ $= 1600 \mu$Hz.
In terms of physical properties of the star, the limit is
around $R = 1.6$ \rsol, $M = 1.35$ \msol, and $L = 3.0$ \lsol,
with no evidence that the absolute age (as opposed to the evolutionary state)
or the metallicity
plays any role.
In Table~\ref{tab:appliedsurface} we summarize these
{limiting regions, but adopt a slightly} more conservative limit.
The limit in \teff\ can probably be attributed to the Kraft break (e.g., \citealt{Kraft1967}):
above approximately $6250$ K, stars rotate much faster
because they lack a deep convective envelope
in which magnetic braking could slow the star down.
The fractional depth of the convective region is shown as a function
of \teff\ in Fig.~\ref{fig:teffdcz}, and stars with convective envelopes deeper than approximately
0.2 stellar radii satisfy this criterion. This limit is also compatible with the proposed mass
limit of approximately 1.3~\msol\ where a transition in envelope convection
takes place.
The slope of the surface correction at \numax, which is negative, is also
found to increase (i.e., become flatter) with increasing mass, again
indicating a change in convective-zone properties with mass and \teff.
The limit in \logg\ points toward a transition
from the main-sequence to the subgiant phase where the
convective envelope begins to deepen.
These limits are imposed by the physical structure of the star itself,
but no quantitative measure of $a_0$ can be deduced from the
observed and/or inferred stellar properties at this stage, except for
a slight linear dependence of $a_0$ with \mlsep, \numax, or \logg\ with
a rather large scatter.
\begin{figure*}
\centering
\includegraphics[width=0.48\textwidth]{whichstars_numaxlogg}
\includegraphics[width=0.48\textwidth]{whichstars_teffdnu}
\includegraphics[width=0.48\textwidth]{whichstars_radmass}
\includegraphics[width=0.48\textwidth]{whichstars_lumage}
\caption{Distribution of observed (top panels) and derived
parameters (lower panels) for the {selected subsample of stars (open circles).
The dark and light blue filled dots represent the stars with $Q \le 1.0$ and $1.2~\mu$Hz, respectively.
The regions are delimited by dashed lines within which we infer
that the
\citet{Kjeldsen2008} surface prescription should be useful.}}
\label{fig:whichparameterswork0}
\end{figure*}
\begin{table}
\begin{center}
\caption{Stellar property regimes where the \citet{Kjeldsen2008} surface correction is useful.
\label{tab:appliedsurface} }
\begin{tabular}{rlllllll}
\hline\hline
Property & \multicolumn{2}{l}{Value} \\
\hline
\logg\ (cgs) & $\ge$ & 4.2 \\
\teff\ (K)& $\le$ & 6200 \\
\mlsep\ (\mhz)&$\ge$& 80 \\
\numax\ (\mhz)& $\ge$& 1700 \\
$a_0$ (\mhz)& $\ge$& $-6$ \\
$R$ (\rsol)& $\le$& 1.5\\
$M$ (\msol)& $\le$& 1.3\\
$L$ (\lsol)& $\le$& 2.5\\
\hline\hline
\end{tabular}
\end{center}
\end{table}
\section{Summary\label{sec6}}
The high-quality and long-term photometric time series
provided by {\it Kepler} has enabled an unprecedented
precision on asteroseismic data
of stars like the Sun.
Thanks to the very high precision, we could {use} the frequency separation
ratios along with spectroscopic temperatures and metallicities
to infer stellar properties of the Sun and 57 {\it Kepler} stars,
comprising solar analogs, active stars,
components of binaries, and
planetary hosts, with a precision comparable to that obtained
when using the individual frequencies.
Median uncertainties on radius and mass are 1\% and 3\%,
while the median age uncertainty is typically 7\% of the estimated
main-sequence lifetime, or 11\% of the absolute age.
These uncertainties are realistic because they {account for
the unbiased determination of the mixing-length
parameter and the initial chemical composition}.
Along with the physical stellar properties, we also derived the
interstellar absorption and distances to each star, and where
the rotation period was available, we derived the rotational velocity.
For nine stars, {our derived radii, luminosities, and distances
are in very good agreement with independently measured values.}
{Our inferred ages are} validated
for the Sun and by comparing the ages of the individual components
of the binary system 16 Cyg A and B.
From an analysis of our derived properties for the full sample,
we investigated the {dependence} of the mixing-length parameter
on stellar properties
and found it to correlate with \logg\ and
\teff, just as proposed by \citet{magic2015} from 3D RHD simulations
of convective envelopes.
We also derived a linear expression relating the mean value of
the $r_{02}$ frequency separation ratios directly to the age of the star, which
yields an age of 4.7 Gyr for the Sun.
By selecting a subsample of the stars using a $\chi^2_N$ {threshold},
we investigated the usefulness of the \citet{Kjeldsen2008}
empirical {correction for the surface effects} across a broad range of stellar parameters,
and we found that it is useful, {but only} in certain regimes,
{as also suggested by the theoretical study of \citet{schmittbasu2015}}.
This is of particular interest for stars with
much shorter time series, where the precision on the
individual frequencies or the number of radial orders
is not high enough
to constrain the stellar modeling.
In particular, this
will be the case for the forthcoming NASA TESS mission,
where some stars with ecliptic latitude $|b| \lesssim 60^{\circ}$ will
be observed continuously for only 27 days,
along with the {\it \textup{step-and-stare}}
phase of the future PLATO mission (launch 2024).
\section{Perspectives\label{sec7}}
In this work we used
\teff\ and \mh\ as the only complementary data to the asteroseismic
data. However, within a year from now, we will have
a homogeneous set of microarcsecond-precision parallaxes that
will give access to the intrinsic luminosity of the star.
{This quantity is sensitive to the interior stellar composition.}
While today we have very high precision radii along with other properties,
degeneracies in model parameters, such as the mass and initial
helium abundance (e.g., \citealt{Metcalfe2009,lebreton2014})
limit the full exploitation of
asteroseismic data for testing stellar interior models and
improving precision on model parameters.
The forthcoming Gaia Data Release 2 promises to
overcome this obstacle
and thus provide
even higher precision radii and ages, along with constraints on
the interior and initial chemical composition, pushing stellar models
to their limits.
We highlight the importance of the precise characterization of exoplanetary
systems using asteroseismic data. In this work, we determined the radius and age
of three planetary hosts (KIC~9414417, KIC~9955598, and KIC~10963065).
Combining our data with
those of \citet{batalha2013} constrains the
planetary and orbital parameters. We illustrate this in
Fig.~\ref{fig:planetages}, where we depict the separation of the
planet and host as a function of stellar age (including the Earth).
The sizes of the symbols {are indicative of} the planetary radius, and
the equilibrium temperature decreases with distance from the host.
The diversity of planetary systems can be easily noted, and
such an analysis of a larger sample of planetary candidates
will yield important constraints on the formation and evolution of planetary
systems.
The future TESS and PLATO missions targeting bright stars with
asteroseismic characterization promise to be a goldmine for not only
exoplanetary physics, but, with access to microarcsecond parallaxes
and homogeneous
multiband photometry, also for stellar and Galactic physics.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{interesting_teffdcz}
\caption{Fractional depth of the convection zone as a function of \teff\ for our
selected subsample of stars. The color-coding is
the same as in Fig.~\ref{fig:whichparameterswork0}.} \label{fig:teffdcz}
\end{figure}
\begin{figure}
\includegraphics[width=0.48\textwidth]{planetages}
\caption{Age of planet and separation from host. Symbol sizes represent planetary radius,
and equilibrium temperature decreases with distance from the host.
Age and radius are taken from this work, while other parameters are taken from \citet{batalha2013}. The Earth is shown at 1 AU.
\label{fig:planetages}}
\end{figure}
\begin{acknowledgements}
This work is based on data collected by the Kepler mission. Funding for the Kepler mission is provided by the NASA Science Mission directorate.
This collaboration was partially supported by funding from the Laboratoire Lagrange 2015 BQR.
This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France. The original description of the VizieR service was published in A\&AS 143, 23.
This work was supported in part by NASA grants NNX13AE91G and NNX16AB97G. Computational time at the Texas Advanced Computing Center was provided through XSEDE allocation TG-AST090107.
DS and RAG acknowledge the financial support from the CNES GOLF grants.
DS acknowledges the Observatoire de la C\^ote d'Azur for support during his stays. Some of these computations have been done on the `Mesocentre SIGAMM' machine, hosted by the Observatoire de la C\^ote d'Azur.
The authors wish to thank Sylvain Korzennik for his very careful reading of the
paper and valuable suggestions for improving the presentation and the scientific arguments.
\end{acknowledgements}
\begin{appendix}
\section{Supplementary material\label{sec:app0}}
\begin{figure*}
\centering
\includegraphics[width=0.48\textwidth]{S_6603624_echelle}
\includegraphics[width=0.48\textwidth]{S_10644253_echelle}
\includegraphics[width=0.48\textwidth]{S_12009504_echelle}
\includegraphics[width=0.48\textwidth]{S_11253226_echelle}
\caption{\'Echelle diagrams for two stars where the
surface correction appears to be useful (top) and for two stars where
the correction is not useful (lower).
Crosses are observed frequencies, and black circles
are corrected frequencies. The value of $\epsilon$ is an
arbitrary shift along the x-axis for display purposes.}
\label{fig:someechelles}
\end{figure*}
\begin{table*}[h!]
\begin{center}\caption{Derived stellar properties of {\it Kepler} targets with $M \sim 1.2$
\msol, with and without diffusion of helium \label{tab:diffusioneffects}}
\begin{tabular}{lccccccccc}
\hline\hline
KIC ID & $R$ & $M$ & Age & $L$ & $\log g$ & [M/H] \\
& (\rsol) & (\msol) & (Gyr) & (\lsol) & (dex) & (dex)\\
\hline
no diffusion\\
9139151 & 1.132 & 1.11 & 1.96 & 1.80 & 4.375& -0.01 \\
12009504 & 1.366 & 1.10 & 3.38 & 2.39 & 4.210& -0.01\\
6225718 & 1.227 & 1.15 & 2.29 & 2.09 & 4.320& -0.10\\
\hline
diffusion\\
1225814 & 1.595 & 1.26 & 5.04 & 2.81 & 4.129 & 0.05\\
5184732 & 1.356 & 1.25 & 4.68 & 1.82 & 4.269& 0.25\\
8150065 & 1.397 & 1.21 & 3.12 & 2.54 & 4.228& -0.05\\
8179536 & 1.348 & 1.25 & 1.93 & 2.64 & 4.274& -0.05\\
7771282 & 1.631 & 1.26 & 3.34 & 3.65 & 4.116& -0.03\\
10454113& 1.250 & 1.20 & 1.98 & 2.04 & 4.320& -0.04\\
\hline\hline
\end{tabular}
\end{center}
\end{table*}
\end{appendix}
\bibliographystyle{aa}
\section{Introduction}
In wireless ad hoc or social networks, a variety of scenarios require
agents to share their individual information or resources with each
other for mutual benefits. A partial list includes file sharing and
rumor spreading \cite{QiuSri04,YanDeV04,Bor87,KarSchSheVoc00}, distributed
computation and parameter estimation \cite{Tsi84,KasSri07,BoydShah06,NedOzd09,JunShaShi10},
and scheduling and control \cite{ModShaZus,EryOzdShaMod10}. Due to
the huge centralization overhead and unpredictable dynamics in large
networks, it is usually more practical to disseminate information
and exchange messages in a decentralized and asynchronous manner to
combat unpredictable topology changes and the lack of global state
information. This motivates the exploration of dissemination strategies
that are inherently simple, distributed and asynchronous while achieving
optimal spreading rates.
\subsection{Motivation and Related Work}
Among distributed asynchronous algorithms, gossip algorithms are a
class of protocols which propagate messages according to rumor-style
rules, initially proposed in \cite{Dem88}. Specifically, suppose
that there are $k\leq n$ distinct pieces of messages that need to
be flooded to all $n$ users in the network. Each agent in each round
attempts to communicate with one of its neighbors in a random fashion
to disseminate a limited number of messages. There are two types of
push-based strategies on selecting which message to be sent in each
round: (a) one-sided protocols that are based only on the disseminator's
own current state; and (b) two-sided protocols based on current states
of both the sender and the receiver. Encouragingly, a simple uncoded
one-sided push-only gossip algorithm with random message selection
and peer selection is sufficient for efficient dissemination in some
cases like a static complete graph, which achieves a spreading time
of $\Theta\left(k\log n\right)$ %
\footnote{The standard notion $f(n)=\omega\left(g(n)\right)$ means $\underset{n\rightarrow\infty}{\lim}g(n)/f(n)=0$;
$f(n)=o\left(g(n)\right)$ means $\underset{n\rightarrow\infty}{\lim}f(n)/g(n)=0$;
$f(n)=\Omega\left(g(n)\right)$ means $\exists$ a constant $c$ such
that $f(n)\geq cg(n)$; $f(n)=O\left(g(n)\right)$ means $\exists$
a constant $c$ such that $f(n)\leq cg(n)$; $f(n)=\Theta\left(g(n)\right)$
means $\exists$ constants $c_{1}$ and $c_{2}$ such that $c_{1}g(n)\leq f(n)\leq c_{2}g(n)$.%
}, within only a logarithmic gap with respect to the optimal lower
limit $\Theta(k)$ \cite{Pittel1987,GossipTutorialShah,SanHajMas07}.
This type of one-sided gossiping has the advantages of being easily
implementable and inherently distributed.
Since each user can receive at most one message in any single slot,
it is desirable for a protocol to achieve close to the fastest possible
spreading time $\Theta\left(k\right)$ (e.g. within a $\text{polylog}(n)$
factor). It has been pointed out, however, that the spreading rate
of one-sided random gossip algorithms is frequently constrained by
the network geometry, e.g. the conductance of the graph \cite{MoskSha08,GossipTutorialShah}.
For instance, for one-sided rumor-style all-to-all spreading (i.e.
$k=n$), the completion time $T$ is much lower in a complete graph
$\left(T=O\left(n\log n\right)\right)$ than in a ring $\left(T=\Omega(n^{2})\right)$.
Intuitively, since each user can only communicate with its nearest
neighbors, the geometric constraints in these graphs limit the location
distribution of all copies of each message during the evolution process,
which largely limits how fast the information can flow across the
network. In fact, for message spreading over static wireless networks,
one-sided uncoded push-based random gossiping can be quite inefficient:
specifically, $\Omega\left(\sqrt{\frac{n}{\text{poly}(\log n)}}\right)$
times slower than the optimal lower limit $\Theta\left(k\right)$
(i.e. a polynomial factor away from the lower bound), as will be shown
in Theorem \ref{thm-Random-Push-Static}.
Although one-sided random gossiping is not efficient for static wireless
networks, it may potentially achieve better performance if each user
has some degree of mobility -- an intrinsic feature of many wireless
and social networks. For instance, full mobility%
\footnote{By full mobility, we mean that the location of the mobile is distributed
independently and uniformly random over the entire network over consecutive
time-steps (i.e., the velocity of the mobile can be {}``arbitrarily
large''). This is sometimes also referred to in the literature as the
i.i.d. mobility model. In this paper, we consider nodes with {}``velocity-limited''
mobility capability.%
} changes the geometric graph with transmission range $O\left(\sqrt{\frac{\log n}{n}}\right)$
to a complete graph in the unicast scenario. Since random gossiping
achieves a spreading time of $\Theta\left(n\log n\right)$
for \emph{all-to-all} spreading over a complete graph \cite{SanHajMas07,GossipTutorialShah},
this allows near-optimal spreading time to be achieved within a logarithmic
factor from the fundamental lower limit $\Theta\left(n\right)$. However,
how much benefit can be obtained from more realistic mobility -- which
may be significantly lower than idealized best-case full mobility
-- is not clear. Most existing results on uncoded random gossiping
center on evolutions associated with a static homogeneous graph structure
or a fixed adjacency matrix, and cannot be readily extended to
dynamic topology changes. To the best of our knowledge, the first
work to analyze gossiping with mobility was \cite{MobilityGossip},
which focused on \textit{energy-efficient} distributed averaging instead
of \textit{time-efficient} message propagation. Another line of work
by Clementi \emph{et al.} investigates the speed limit for information
flooding over Markovian evolving graphs (e.g. \cite{Clementi2011,Baumann2009,Clementi2012}),
but it does not study the spreading rate under multi-message gossip.
Recently, Pettarin \emph{et al.} \cite{Pettarin2011} explored information
spreading over sparse mobile networks with no connected component of
size $\Omega\left(\log n\right)$, which does not account for the dense
(interference-limited) network model we consider in this paper.
For a broad class of graphs that include both static and dynamic graphs,
the lower limit on the spreading time can be achieved through random
linear coding, where a random combination of all messages is transmitted
instead of a specific message \cite{DebMedCho06}, or by employing
a two-sided protocol which always disseminates an innovative message
if possible \cite{SanHajMas07}. Specifically, through a unified framework
based on dual-space analysis, recent work \cite{Hae2011} demonstrated
that the optimal all-to-all spreading time $\Theta(n)$ can be achieved
for a large class of graphs, including complete graphs and geometric
graphs, and that the results hold even when the topology is allowed
to change dynamically over time. However,
performing random network coding incurs very large computation overhead
for each user, and is not always feasible in practice. On the other
hand, two-sided protocols inherently require additional feedback that
increases communication overhead. Also, the state information of the
target may sometimes be unobtainable due to privacy or security concerns.
Furthermore, if there are $k\ll\sqrt{n}$ messages that need to be
disseminated over a static uncoordinated unicast wireless network
or a random geometric graph with transmission radius $\Theta\left(\sqrt{\frac{1}{n}\text{poly}\left(\log n\right)}\right)$,
neither network coding nor two-sided protocols can approach the lower
limit of spreading time $\Theta(k)$. This arises due to the fact
that the diameter of the underlying graph with transmission range
$\Theta\left(\sqrt{\frac{\text{poly}\left(\log n\right)}{n}}\right)$
scales as $\Omega\left(\sqrt{\frac{n}{\text{poly}(\log n)}}\right)$,
and hence each message may need to be relayed through $\Omega\left(\sqrt{\frac{n}{\text{poly}(\log n)}}\right)$
hops in order to reach the node farthest from the source.
Another line of work has studied spreading scaling laws using more
sophisticated non-gossip schemes over static wireless networks, e.g.
\cite{Zhe2006,SubShaAra}. Recently, Resta \textit{et al.} \cite{ResSan2010}
began investigating broadcast schemes for mobile networks with a \textit{single
static} source constantly propagating new data, while we focus on
a different problem with multiple mobile sources each sharing a distinct
message. Moreover, \cite{ResSan2010} analyzed how to combat the adverse
effect of mobility to ensure the same pipelined broadcasting as in
static networks, whereas we are interested in how to take advantage
of mobility to overcome the geometric constraints. In fact, with the
help of mobility, simply performing random gossiping -- which is simpler
than most non-gossip schemes and does not require additional overhead
-- is sufficient to achieve optimality.
Finally, we note that gossip algorithms have also been employed and
analyzed for other scenarios like distributed averaging, where each
node wishes to compute the average of the initial values held
at all nodes in a decentralized manner, e.g. \cite{Tsi84,BoydShah06}.
The objective of such distributed consensus is to minimize the total
number of computations. It turns out that the convergence rates of
both message sharing and distributed averaging are largely dependent
on the eigenvalues or, more specifically, the mixing times of the
graph matrices associated with the network geometry \cite{BoydShah06,GossipTutorialShah}.
\subsection{Problem Definition and Main Modeling Assumptions}
Suppose there are $n$ users randomly located over a square of unit
area. The task is to disseminate $k\leq n$ distinct messages (each
initially held by one user) among all users. The message spreading
can be categorized into two types: (a) \textit{single-message dissemination}:
a single user (or $\Theta(1)$ users) wishes to flood its message
to all other users; (b) \textit{multi-message dissemination}: a large
number $k$ $(k\gg1)$ of users wish to spread individual messages
to all other users. We note that distinct messages may not be injected
into the network simultaneously: they may arrive sequentially (possibly
in batches), and the arrival times are not known to the nodes.
Our objective is to design a gossip-style one-sided algorithm in the
absence of coding, such that it can take advantage of the intrinsic
feature of mobility to accelerate dissemination. Only the {}``push''
operation is considered in this paper, i.e. a sender determines which
message to transmit solely based on its own current state, and in
particular not using the intended receiver's state. We are interested
in identifying the range of the degree of mobility within which our
algorithm achieves near-optimal spreading time $O\left(k\text{ poly}\left(\log n\right)\right)$
for each message regardless of message arrival patterns. Specifically,
our MOBILE PUSH protocol achieves a spreading time $O\left(k\log^{2}n\right)$
as stated in Theorem \ref{thm:DiscreteMP}, for mobility that is
significantly lower than idealized full mobility. As an aside,
it has been shown in \cite{SanHajMas07,DebMedCho06} that with high
probability, the completion time for one-sided uncoded random gossip
protocol over complete graphs is lower bounded by $\Omega\left(k\log n\right)$,
which implies that in general the logarithmic gap from the universal
lower limit $\Theta\left(k\right)$ cannot be closed with uncoded
one-sided random gossiping.
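To make this logarithmic gap concrete, the following is a minimal simulation sketch of one-sided uncoded random push gossip on a complete graph. It is our own illustration, not the wireless model analyzed in this paper: it ignores the single-reception constraint and interference, and all parameter values are arbitrary.

```python
import math
import random

def random_push_complete_graph(n, k, seed=0):
    """One-sided uncoded random push gossip on a complete graph.

    Nodes 0..k-1 each start with one distinct message.  In every round,
    every node holding at least one message pushes a uniformly random one
    of its messages to a uniformly random other node.  Returns the number
    of rounds until all n nodes hold all k messages.
    """
    rng = random.Random(seed)
    have = [set() for _ in range(n)]
    for i in range(k):
        have[i].add(i)
    rounds = 0
    while any(len(s) < k for s in have):
        rounds += 1
        pushes = []
        for u in range(n):
            if have[u]:
                msg = rng.choice(sorted(have[u]))   # random message selection
                v = rng.randrange(n - 1)            # random peer other than u
                if v >= u:
                    v += 1
                pushes.append((v, msg))
        for v, msg in pushes:                       # apply the round in parallel
            have[v].add(msg)
    return rounds
```

Empirically, the completion time for moderate $n$ and $k$ is on the order of $k\log n$ rounds, consistent with the scaling discussed above.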
Our basic network model is as follows. Initially, there are $n$ users
uniformly distributed over a unit square. We ignore edge effects so
that every node can be viewed as homogeneous. Our models and analysis
are mainly based on the context of wireless ad hoc networks, but one
can easily apply them to other network scenarios that can be modeled
as a random geometric graph of transmission radius $\Theta\left(\sqrt{\log n/n}\right)$.
\textbf{\textit{Physical-Layer Transmission Model}}. Each transmitter
employs the same amount of power $P$, and the noise power density
is assumed to be $\eta$. The path-loss model is used such that node
$j$ receives the signal from transmitter $i$ with power $Pr_{ij}^{-\alpha}$,
where $r_{ij}$ denotes the Euclidean distance between $i$ and $j$
with $\alpha>2$ being the path loss exponent. Denote by $\mathcal{T}(t)$
the set of transmitters at time $t$. We assume that a packet from
transmitter $i$ is successfully received by node $j$ at time $t$
if \begin{align}
\text{SINR}_{ij}(t) & :=\frac{Pr_{ij}^{-\alpha}}{\eta+\sum\limits _{k\neq i,k\in\mathcal{T}(t)}Pr_{kj}^{-\alpha}}\geq\beta,\end{align}
where $\text{SINR}_{ij}(t)$ is the signal-to-interference-plus-noise
ratio (SINR) at $j$ at time $t$, and $\beta$ the SINR threshold
required for successful reception. For simplicity, we suppose only
one fixed-size message or packet can be transmitted for each transmission
pair in each time instance.
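The reception rule above can be expressed directly in code. Below is a minimal sketch of the path-loss SINR check; the default values of $P$, $\eta$, $\alpha$ and $\beta$ are placeholders rather than values taken from the paper.

```python
import math

def sinr_ok(tx_positions, i, rx, P=1.0, eta=1e-9, alpha=3.0, beta=2.0):
    """Decide whether receiver rx decodes transmitter i under the path-loss
    model: SINR = P r_i^{-alpha} / (eta + sum_{k != i} P r_k^{-alpha}) >= beta.

    tx_positions maps transmitter id -> (x, y); rx is an (x, y) pair.
    The numeric defaults are placeholders, not values from the paper.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    signal = P * dist(tx_positions[i], rx) ** (-alpha)
    interference = sum(P * dist(p, rx) ** (-alpha)
                       for j, p in tx_positions.items() if j != i)
    return signal / (eta + interference) >= beta
```

For instance, a lone nearby transmitter is decoded, while an interferer at a symmetric distance drives the SINR to roughly one and the reception fails for any $\beta>1$.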
\begin{table}
\caption{Summary of Notation}
\centering{}\begin{tabular}{l>{\raggedright}p{2.1in}}
$v(n)$ & velocity\tabularnewline
$m$ & the number of subsquares; $m=1/v^{2}(n)$\tabularnewline
$n$ & the number of users/nodes\tabularnewline
$k$ & the number of distinct messages\tabularnewline
$M_{i}$ & the message of source $i$\tabularnewline
$A_{k}$ & subsquare $k$\tabularnewline
$N_{i}(t)$,$\mathcal{N}_{i}(t)$ & the number, and the set of nodes containing $M_{i}$ at time $t$\tabularnewline
$N_{i,A_{k}}(t)$,$\mathcal{N}_{i,A_{k}}(t)$ & the number, and the set of nodes containing $M_{i}$ at subsquare
$A_{k}$ at time $t$\tabularnewline
$S_{i}(t),\mathcal{S}_{i}(t)$ & the number, and the set of messages node $i$ has at time $t$\tabularnewline
$\alpha$ & path loss exponent\tabularnewline
$\beta$ & SINR requirement for single-hop success\tabularnewline
\end{tabular}%
\end{table}
Suppose that each node can move with velocity $v(n)$ in this mobile
network. We provide a precise description of the mobility pattern
as follows.
\textbf{\textit{Mobility Model}}. We use a mobility pattern similar
to \cite[Section VIII]{NeeMod05}, which ensures that at steady state,
each user lies in each subsquare with equal probability. Specifically,
we divide the entire square into $m:=1/v^{2}(n)$ subsquares each
of area $v^{2}(n)$ (where $v(n)$ denotes the velocity of the mobile
nodes), which forms a $\sqrt{m}\times\sqrt{m}$ discrete torus. At
each time instance, every node moves according to a \textit{random
walk} on the $\sqrt{m}\times\sqrt{m}$ discrete torus. More precisely,
if a node resides in a subsquare $(i,j)\in\left\{ 1,\cdots,\sqrt{m}\right\} ^{2}$
at time $t$, it may choose to stay in $(i,j)$ or move to any of
the eight adjacent subsquares each with probability $1/9$ at time
$t+1$. If a node is on the edge and is selected to move in an infeasible
direction, then it stays in its current subsquare. The position inside
the new subsquare is selected \textit{uniformly at random}. See Fig.
\ref{fig:UnitSquareGrid} for an illustration.
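A single slot of this mobility pattern can be sketched as follows (our illustration; subsquares are indexed by $(i,j)$, and an infeasible move off the edge leaves the node in place, as in the model):

```python
import random

def walk_step(cell, sqrt_m, rng):
    """One slot of the random-walk mobility model on the sqrt_m x sqrt_m grid.

    Drawing the row and column offsets independently from {-1, 0, 1} makes
    each of the 9 moves (stay + 8 neighbors) equally likely with probability
    1/9; an infeasible move off the edge leaves the node where it is.
    """
    di = rng.choice([-1, 0, 1])
    dj = rng.choice([-1, 0, 1])
    ni, nj = cell[0] + di, cell[1] + dj
    if 0 <= ni < sqrt_m and 0 <= nj < sqrt_m:
        return (ni, nj)
    return cell
```

The uniform position inside the newly entered subsquare can be drawn separately and does not affect which subsquare is occupied.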
We note that when $v(n)=1/3=\Theta(1)$, the pattern reverts to the
full mobility model. In this random-walk model, each node moves independently
according to an ergodic Markov chain with a uniform stationary distribution. In fact, a variety of
mobility patterns have been proposed to model mobile networks, including
i.i.d. (full) mobility \cite{MobilityCapacityTse}, random walk (discrete-time)
model \cite{GamMamProSha06,YinYanSri08}, and Brownian motion (continuous-time)
pattern \cite{LinMazShr06}. For simplicity, we model it as a discrete-time
random walk pattern, since it already captures intrinsic features
of mobile networks like uncontrolled placement and movement of nodes.
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[scale=0.45]{Grid.pdf}}
\par\end{centering}
\caption{\label{fig:UnitSquareGrid}The unit square is equally divided into
$m=1/v^{2}(n)$ subsquares. Each node can jump to one of its $8$
neighboring subsquares or stay in its current subsquare with equal
probability $1/9$ at the beginning of each slot.}
\end{figure}
\subsection{Contributions and Organization}
The main contributions of this paper include the following.
\begin{enumerate}
\item \textbf{Single-message dissemination over mobile networks.} We derive
an upper bound on the single-message ($k=\Theta\left(1\right)$) spreading
time using push-only random gossiping (called RANDOM PUSH) in mobile
networks. A gain of $\Omega\left(v(n)\sqrt{n}/\log^{2}n\right)$
in the spreading rate can be obtained compared with static networks,
which is, however, still limited by the underlying geometry unless
there is full mobility.
\item \textbf{Multi-message dissemination over static networks.} We develop
a lower bound on the multi-message spreading time under RANDOM PUSH
protocol over static networks. It turns out that there may exist a
gap as large as $\Omega\left(\frac{\sqrt{n}}{\text{poly}\left(\log n\right)}\right)$
between its spreading time and the optimal lower limit $\Theta\left(k\right)$.
The key intuition is that the copies of each message $M_{i}$ tend
to cluster around the source $i$ at all time instances, which results
in capacity loss. This inherently constrains how fast the information
can flow across the network.
\item \textbf{Multi-message dissemination over mobile networks.} We design
a \textit{one-sided uncoded} message-selection strategy called MOBILE
PUSH that accelerates multi-message spreading ($k=\omega\left(\log n\right)$)
with mobility. An upper bound on the spreading time is derived, which
is the main result of this paper. Once $v(n)=\omega\left(\sqrt{\frac{\log n}{k}}\right)$
(which is still significantly smaller than full mobility), the near-optimal
spreading time $O\left(k\log^{2}n\right)$ can be achieved with high
probability. The underlying intuition is that if the mixing time arising
from the mobility model is smaller than the optimal spreading time,
the mixing property approximately \textit{uniformizes} the location
of all copies of each message, which allows the evolution to mimic
the propagation over a complete graph.
\end{enumerate}
The remainder of this paper is organized as follows. In Section \ref{sec:Strategies-and-Main-Results},
we describe our unicast physical-layer transmission strategy and two
types of message selection strategies, including RANDOM PUSH and MOBILE
PUSH. Our main theorems are stated in Section \ref{sec:Strategies-and-Main-Results}
as well, with proof ideas illustrated in Section \ref{sec:Discrete-Jump-Model}.
Detailed derivations of auxiliary lemmas are deferred to the Appendix.
\section{Strategies and Main Results\label{sec:Strategies-and-Main-Results}}
The strategies and main results of this work are outlined in this
section, where only the unicast scenario is considered. The dissemination
protocols for wireless networks are a class of scheduling algorithms
that can be decoupled into (a) physical-layer \textit{transmission}
strategies (link scheduling) and (b) message selection strategies
(message scheduling).
One physical-layer transmission strategy and two message selection
strategies are described separately, along with the order-wise performance
bounds.
\subsection{Strategies}
\subsubsection{Physical-Layer Transmission Strategy}
In order to achieve efficient spreading, it is natural to resort to
a decentralized transmission strategy that supports the order-wise
largest number (i.e. $\Theta(n)$) of concurrent successful transmissions
per time instance. The following strategy is a candidate that achieves
this objective with local communication. \vspace{6pt}
\framebox{%
\begin{minipage}[t]{3.2in}%
UNICAST Physical-Layer Transmission Strategy:
\begin{itemize}
\item At each time slot, each node $i$ is designated as a sender independently
with constant probability $\theta$, and a potential receiver otherwise.
Here, $\theta<0.5$ is independent of $n$ and $k$.
\item Every sender $i$ attempts to transmit one message to its \textit{nearest}
potential receiver $j(i)$.
\end{itemize}
\end{minipage}}
\vspace{8pt}
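A single scheduling slot of this strategy can be sketched as follows (our illustration; node positions are assumed given, and `theta` is the constant sender probability):

```python
import math
import random

def unicast_pairs(positions, theta=0.4, seed=0):
    """One slot of the UNICAST link schedule: each node is independently a
    sender with probability theta, otherwise a potential receiver; every
    sender is matched to its nearest potential receiver.  Returns a list of
    (sender, receiver) index pairs.  theta and seed are arbitrary here.
    """
    rng = random.Random(seed)
    senders = {i for i in range(len(positions)) if rng.random() < theta}
    receivers = [i for i in range(len(positions)) if i not in senders]
    pairs = []
    for s in sorted(senders):
        if receivers:
            nearest = min(receivers,
                          key=lambda j: math.dist(positions[s], positions[j]))
            pairs.append((s, nearest))
    return pairs
```

Whether each scheduled pair actually succeeds is then governed by the SINR condition of the physical-layer model.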
This simple {}``link'' scheduling strategy, when combined with
appropriate push-based message selection strategies, leads to the
near-optimal performance in this paper. We note that the authors in
\cite{MobilityCapacityTse}, by adopting a slightly different strategy
in which $\theta n$ nodes are randomly designated as senders (as
opposed to link-by-link random selection as in our paper), have shown
that the success probability for each unicast pair is a constant.
Using the same proof as for \cite[Theorem III-5]{MobilityCapacityTse},
we can see (which we omit here) that there exists a constant $c$
such that\begin{equation}
\mathbb{P}\left(\text{SINR}_{i,j(i)}(t)>\beta\right)\geq c\end{equation}
holds for our strategy. Here, $c$ is a constant irrespective of $n$,
but may depend on other salient parameters $P$, $\alpha$ and $\eta$.
Consequently, $\Theta\left(n\right)$ concurrent transmissions can be
successful, which is order-optimal. For ease of analysis and exposition,
we further assume that physical-layer success events are \textit{temporally
independent}. Indeed, even accounting for the correlation yields the
same scaling results, as detailed in Remark 1.
\begin{remark}In fact, the physical-layer success events are correlated
across different time slots due to our mobility model and transmission
strategy. However, we observe that our analysis framework would only
require that the transmission success probability at time $t+1$ is
always a constant irrespective of $n$ given the node locations at
time $t$. To address this concern, we show in Lemma \ref{lemmaConcentration-3}
that for any $m<n/\left(32\log n\right)$, the number of nodes $N_{A_{i}}$
residing in each subsquare $A_{i}$ is bounded within $\left[\frac{n}{6m},\frac{7n}{3m}\right]$
with probability at least $1-2n^{-3}$. Conditional on this high-probability
event that $N_{A_{i}}\in\left[\frac{n}{6m},\frac{7n}{3m}\right]$
with all nodes in each subsquare uniformly located, we can use the
same proof as \cite[Theorem III-5]{MobilityCapacityTse} to show that
$\mathbb{P}\left(\text{SINR}_{i,j(i)}(t)>\beta\right)\geq c$ holds
for some constant $c$. \end{remark}
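The occupancy bound in this remark can be checked empirically with a short sketch that drops $n$ uniformly placed nodes into $m$ equal subsquares (our illustration; any $n$ and $m$ with $m<n/\left(32\log n\right)$ may be tried):

```python
import random

def subsquare_counts(n, m, seed=0):
    """Place n nodes uniformly at random into m equal subsquares and return
    the occupancy of each subsquare.  (The stationary distribution of the
    random-walk mobility model is uniform over subsquares.)"""
    rng = random.Random(seed)
    counts = [0] * m
    for _ in range(n):
        counts[rng.randrange(m)] += 1
    return counts
```

For such parameters, all counts concentrate tightly around $n/m$ and fall well inside $\left[\frac{n}{6m},\frac{7n}{3m}\right]$.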
Although this physical-layer transmission strategy supports $\Theta\left(n\right)$
concurrent local transmissions, it does not tell us how to take advantage
of these resources to allow efficient propagation. This will be specified
by the message-selection strategy, which potentially determines how
each message is propagated and forwarded over the entire network.
\subsubsection{Message Selection Strategy}
We now turn to the objective of designing a one-sided message-selection
strategy (only based on the transmitter's current state) that is efficient
in the absence of network coding. We are interested in a decentralized
strategy in which no user has prior information on the number of distinct
messages existing in the network. One common strategy within this
class is:
\vspace{6pt}
\framebox{%
\begin{minipage}[t]{3.2in}%
\textbf{RANDOM PUSH} Message Selection Strategy:
\begin{itemize}
\item In every time slot: each sender $i$ randomly selects one of the messages
it possesses for transmission.
\end{itemize}
\end{minipage}}
\vspace{8pt}
This is a simple gossip algorithm solely based on random message selection,
which is surprisingly efficient in many cases like a complete graph.
It will be shown later, however, that this simple strategy is inefficient
in a static unicast wireless network or a random geometric graph
with transmission range $\Theta\left(\sqrt{\frac{\log n}{n}}\right)$.
In order to take advantage of the mobility, we propose the following
alternating strategy within this class:
\vspace{6pt}
\framebox{%
\begin{minipage}[t]{3.2in}%
\textbf{MOBILE PUSH} Message Selection Strategy:
\begin{itemize}
\item Denote by $M_{i}$ the message that source $i$ wants to spread, i.e.
its own message.
\item In every odd time slot: for each sender $i$, if it has an individual
message $M_{i}$, then $i$ selects $M_{i}$ for transmission; otherwise
$i$ randomly selects one of the messages it possesses for transmission.
\item In every even time slot: each sender $i$ randomly selects one of
the messages it has received for transmission.
\end{itemize}
\end{minipage}}
\vspace{8pt}
In the above strategy, each sender alternates between random gossiping
and self promotion. This alternating operation is crucial if we do
not know \textit{a priori} the number of distinct messages. Basically,
random gossiping enables rapid spreading by taking advantage of all
available throughput, and provides a non-degenerate approach that
ensures an approximately {}``uniform'' evolution for all distinct
messages. On the other hand, the individual message flooding step plays
the role of self-promotion, which guarantees that a sufficiently
large number of copies of each message can be forwarded with the assistance
of mobility (which is not true in static networks). This is critical
at the initial stage of the evolution.
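The alternating rule of MOBILE PUSH can be summarized as a one-sided selection function (our sketch; `own_message` is `None` for non-sources, and slots are indexed so that odd slots perform self promotion):

```python
import random

def mobile_push_select(own_message, inbox, t, rng):
    """One-sided MOBILE PUSH message selection for a sender at slot t.

    own_message is the sender's individual message (None for non-sources);
    inbox is the set of messages it currently holds.  Odd slots flood the
    individual message when available; otherwise (and on even slots) a
    uniformly random held message is pushed.  Returns None if nothing held.
    """
    if t % 2 == 1 and own_message is not None and own_message in inbox:
        return own_message                    # odd slot: self promotion
    if inbox:
        return rng.choice(sorted(inbox))      # random gossip among held messages
    return None
```

Note that the decision depends only on the sender's own state, never on the intended receiver's, so the protocol remains one-sided.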
\subsection{Main Results (without proof)}
Now we proceed to state our main theorems, each of which characterizes
the performance for one distinct scenario. Detailed analysis is deferred
to Section \ref{sec:Discrete-Jump-Model}.
\subsubsection{Single-Message Dissemination in Mobile Networks with RANDOM PUSH}
The first theorem states the limited benefits of mobility on the spreading
rate for single-message spreading when RANDOM PUSH is employed. We
note that MOBILE PUSH reverts to RANDOM PUSH for single-message dissemination,
and hence has the same spreading time.
\begin{theorem}\label{thm:DiscreteSP}Assume that the velocity obeys
$v(n)>\sqrt{\frac{32\log n}{n}}$, and that the number of distinct
messages obeys $k=\Theta\left(1\right)$. RANDOM PUSH message selection
strategy is assumed to be employed in the unicast scenario. Denote
by $T_{\mathrm{sp}}^{\mathrm{uc}}(i)$ the time taken for all users
to receive message $M_{i}$ after $M_{i}$ is injected into the network,
then with probability at least $1-n^{-2}$ we have\begin{equation}
\forall i,\quad T_{\mathrm{sp}}^{\mathrm{uc}}(i)=O\left(\frac{\log n}{v(n)}\right)\quad\text{and}\quad T_{\mathrm{sp}}^{\mathrm{uc}}(i)=\Omega\left(\frac{1}{v(n)}\right).\end{equation}
\end{theorem}
Since the single-message flooding time is $\Omega(\sqrt{n}/\log n)$
under RANDOM PUSH over static wireless networks or random geometric
graphs of radius $\Theta\left(\sqrt{\log n/n}\right)$ \cite{MoskSha08},
the gain in dissemination rate due to mobility is $\Omega\left(v(n)\sqrt{n}/\log^{2}n\right)$.
When the mobility is large enough (e.g. $v(n)=\omega\left(\sqrt{\frac{\log n}{n}}\right)$),
it plays the role of increasing the transmission radius, thus resulting
in the speedup. It can be easily verified, however, that the universal
lower bound on the spreading time is $\Theta\left(\log n\right)$,
which can only be achieved in the presence of full mobility. To summarize,
while the speedup $\Omega\left(v(n)\sqrt{n}/\log^{2}n\right)$ can
be achieved in the regime $\sqrt{\frac{32\log n}{n}}\leq v(n)\leq\Theta(1)$,
RANDOM PUSH cannot achieve near-optimal spreading time $O\left(\text{poly}\left(\log n\right)\right)$
for single-message dissemination unless full mobility is present.
\subsubsection{Multi-Message Dissemination in Static Networks with RANDOM PUSH}
Now we turn to multi-message spreading over static networks with uncoded
random gossiping. Our analysis is developed for the regime where there
are $k$ distinct messages satisfying $k=\omega\left(\text{polylog}\left(n\right)\right)$,
which subsumes most multi-message spreading cases of interest. In the complementary
regime where $k=O(\text{polylog}\left(n\right))$, an apparent lower
bound $\Omega\left(\sqrt{n}/\text{polylog}(n)\right)$ on the spreading
time can be obtained by observing that the diameter of the underlying
graph with transmission radius $\Theta\left(\sqrt{\frac{\log n}{n}}\right)$
is at least $\Omega\left(\sqrt{n}/\text{polylog}(n)\right)$. This
immediately indicates a gap $\Omega\left(\sqrt{n}/\text{polylog}(n)\right)$
between the spreading time and the lower limit $k=O\left(\text{polylog}(n)\right)$.
The spreading time in the regime $k=\omega\left(\text{poly}\left(\log n\right)\right)$
is formally stated in Theorem \ref{thm-Random-Push-Static}, which
implies that simple RANDOM PUSH is inefficient in static wireless
networks under a message injection scenario where users start message
dissemination sequentially. The setting is as follows: $(k-1)$ of
the sources inject their messages into the network at some time prior
to the $k$-th source. At a future time when each user in the network
has at least $w=\omega\left(\text{poly}\left(\log n\right)\right)$
messages, the $k$-th message (denoted by $M^{*}$) is injected into
the network. This pattern occurs, for example, when a new message
is injected into the network much later than other messages, and hence
all other messages have been spread to a large number of users. We
will show that without mobility, the spreading time under MOBILE PUSH
in these scenarios is at least of the same order as that under RANDOM
PUSH %
\footnote{We note that this section is devoted to showing the spreading inefficiency
under two uncoded one-sided push-only protocols. It has recently been
shown in \cite{Hae2011} that a network coding approach can allow
the optimal spreading time $\Theta(k)$ to be achieved over static
wireless networks or random geometric graphs.%
}, which is a polynomial factor away from the universal lower limit
$\Theta(k)$. In fact, the individual message flooding operation of
MOBILE PUSH does not accelerate spreading since each source has only
$O(\text{poly}\left(\log n\right))$ potential neighbors to communicate with.
The main objective of analyzing the above scenario is to uncover the
fact that uncoded one-sided random gossiping fails to achieve near-optimal
spreading for a large number of message injection scenarios over static
networks. This is in contrast to mobile networks, where protocols
like MOBILE PUSH, with the assistance of mobility, are robust to all
initial message injection patterns and can always achieve near-optimal
spreading, as will be shown later.
\begin{theorem}\label{thm-Random-Push-Static}
Assume that a new message $M^{*}$ arrives in a static network later
than other $k-1$ messages, and suppose that $M^{*}$ is first injected
into the network from a state such that each node has received at
least $w=\omega\left(\text{poly}\left(\log n\right)\right)$ distinct
messages. Denote by $T^{*}$ the time until every user receives $M^{*}$
using RANDOM PUSH, then for any constant $\epsilon>0$ we have\begin{equation}
T^{*}>w^{1-\epsilon}\sqrt{\frac{n}{128\log n}}\end{equation}
with probability exceeding $1-n^{-2}$.
\end{theorem}
\begin{remark}Our main goal is to characterize the spreading inefficiency
once each node has already received a substantial number of messages, which becomes most
significant when each has received $\Theta\left(k\right)$ messages.
In contrast, when only a constant number of messages are available
at each user, the evolution can be fairly fast since the piece selection
has not yet become a bottleneck. Hence, we consider $w=\omega(\text{polylog}(n))$,
which captures most of the spreading-inefficient regime $\left(\omega(\text{polylog}(n))\leq w\leq\Theta(k)\right)$.
The spreading can be quite slow for various message-injection processes
over static networks, but can always be completed within $O\left(k\log^{2}n\right)$
with the assistance of mobility $v(n)=\omega\left(\sqrt{\log n/k}\right)$
regardless of the message-injection process, as will be shown in Theorem
\ref{thm:DiscreteMP}.
\end{remark}
Theorem \ref{thm-Random-Push-Static} implies that if $M^{*}$ is
injected into the network when each user contains $\omega\left(\frac{k^{1+2\epsilon}}{\sqrt{n}\text{poly}\left(\log n\right)}\right)$
messages for any $\epsilon>0$, then RANDOM PUSH is unable to approach
the fastest possible spreading time $\Theta\left(k\right)$. In particular,
if the message is first transmitted when each user contains $\Omega\left(k/\text{poly}(\log n)\right)$
messages, then at least $\Omega\left(k^{1-\epsilon}\sqrt{n}/\text{poly}(\log n)\right)$
time slots are required to complete spreading. Since $\epsilon$ can
be arbitrarily small, there may exist a gap as large as $\Omega\left(\frac{\sqrt{n}}{\text{poly}(\log n)}\right)$
between the lower limit $\Theta(k)$ and the spreading time of
RANDOM PUSH. The reason is that once each user has received many distinct
messages, the spreading rate is bottlenecked by the low piece-selection
probability assigned to each message. Many transmissions are
wasted due to the blindness of the one-sided message selection, which
results in capacity loss and hence largely constrains how efficiently
information can flow across the network. The copies of each message
tend to cluster around the source: the density of the copies decays
rapidly with the distance to the source. This inefficiency becomes
more severe as the evolution proceeds, because each user assigns
an increasingly smaller piece-selection probability to each message.
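To see why holding many messages throttles RANDOM PUSH, consider the piece-selection step in isolation. The following sketch is a simplification that ignores peer selection and geometry, and the function names and parameter values are ours, not the paper's: a node holding $w$ distinct messages picks one uniformly per slot, so the delay until it first pushes a particular message $M^{*}$ is geometric with mean $w$.

```python
import random

def slots_until_selected(w, rng):
    """Slots until a node holding w distinct messages first selects M*
    (uniform one-sided piece selection: geometric with success rate 1/w)."""
    t = 0
    while True:
        t += 1
        if rng.randrange(w) == 0:   # M* happens to be the chosen piece
            return t

def average_delay(w, trials=20000, seed=0):
    """Monte Carlo estimate of the mean selection delay (should be close to w)."""
    rng = random.Random(seed)
    return sum(slots_until_selected(w, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    for w in (10, 100):
        print(w, round(average_delay(w), 1))
```

The estimated delay tracks $w$, which is exactly the per-hop latency that accumulates into the $\Omega\left(w^{1-\epsilon}\right)$ block-crossing time used in the proof sketch below.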
\subsubsection{Multi-Message Dissemination in Mobile Networks with MOBILE PUSH}
Although the near-optimal spreading time $O\left(\text{polylog}(n)\right)$
for single message dissemination can only be achieved when there is
near-full mobility $v(n)=\Omega\left(1/\text{polylog}(n)\right)$,
a limited degree of velocity turns out to be remarkably helpful in
the multi-message case as stated in the following theorem.
\begin{theorem}\label{thm:DiscreteMP}Assume that the velocity obeys
$v(n)=\omega\left(\sqrt{\frac{\log n}{k}}\right)$, where the number
of distinct messages obeys $k=\omega\left(\text{poly}\left(\log n\right)\right)$,
and that the MOBILE PUSH message selection strategy is employed along
with the unicast transmission strategy. Let $T_{\mathrm{mp}}^{\mathrm{uc}}(i)$ be
the time taken for all users to receive message $M_{i}$ after $M_{i}$
is first injected into the network. Then with probability at least
$1-n^{-2}$ we have\begin{equation}
\forall i,\quad T_{\mathrm{mp}}^{\mathrm{uc}}(i)=O\left(k\log^{2}n\right).\end{equation}
\end{theorem}
Since each node can receive at most one message in each time slot,
the spreading time is lower bounded by $\Omega\left(k\right)$ on
any graph. Thus, our strategy with limited velocity spreads the information
essentially as fast as possible. Intuitively, this is because
mobility (even of restricted magnitude) helps \textit{uniformize}
the locations of all copies of each message, which significantly increases
the conductance of the underlying graph in each slot. Although the
velocity is significantly smaller than full mobility (which simply
results in a complete graph), the relatively low mixing time helps
to approximately achieve the same uniformization. On
the other hand, the low spreading rates in static networks arise from
the fact that the copies of each message tend to cluster around the
source at any time instant, which limits the number of flows going
towards users that have not yet received the message.
\begin{remark}Note that there is an $O\left(\log^{2}n\right)$ gap
between this spreading time and the lower limit $\Theta(k)$. We conjecture
that $\Theta\left(k\log n\right)$ is the exact order of the spreading
time, where the logarithmic gap arises from the blindness of peer
and piece selection. A gap of $\Theta\left(\log n\right)$ was shown
to be indispensable for complete graphs when one-sided random push
is used \cite{DebMedCho06}. Since the mobility model simply mimics
the evolution in complete graphs, a logarithmic gap appears to be
unavoidable when using our algorithm. Nevertheless, we conjecture
that with a finer tuning of the concentration of measure techniques,
the current gap $O\left(\log^{2}n\right)$ can be narrowed to $\Theta\left(\log n\right)$.
See Remark \ref{remark-Gap}.
\end{remark}
\section{Proofs and Discussions of Main Results\label{sec:Discrete-Jump-Model}}
The proofs of Theorems \ref{thm:DiscreteSP}--\ref{thm:DiscreteMP}
are provided in this section. Before continuing, we state some
preliminaries regarding the mixing time of a random walk
on a two-dimensional grid, some related concentration results, and a
formal definition of conductance.
\subsection{Preliminaries \label{sub:Preliminaries}}
\subsubsection{Mixing Time}
Let $\pi_{i}(t)$ denote the probability that a typical node, starting
from an arbitrary subsquare, resides in subsquare $A_{i}$ at time $t$,
and denote by $\pi_{i}$ the steady-state probability of a node residing
in subsquare $A_{i}$. Define the \textit{mixing time} of our random
walk mobility model as $T_{\text{mix}}\left(\epsilon\right):=\min\left\{ t:\left|\pi_{i}(t)-\pi_{i}\right|\leq\epsilon,\forall i\right\} $,
which characterizes the time until the underlying Markov chain is
close to its stationary distribution. It is well known that the mixing
time of a random walk on a grid satisfies (see, e.g., \cite[Corollary 2.3]{ChenGunes07}
and \cite[Appendix C]{ChenGunes07}) \begin{equation}
T_{\text{mix}}\left(\epsilon\right)\leq\hat{c}m\left(\log\frac{1}{\epsilon}+\log n\right)\end{equation}
for some constant $\hat{c}$. We take $\epsilon=n^{-10}$ throughout
this paper, so $T_{\text{mix}}\left(\epsilon\right)\leq c_{0}m\log n$
holds with $c_{0}=10\hat{c}$. After $c_{0}m\log n$ time
slots, every node resides in each subsquare \textit{almost uniformly
likely}. In fact, $n^{-10}$ is very conservative and a much larger
$\epsilon$ suffices for our purpose, but this choice gives a good sense
of the sharpness of the mixing time order. See \cite[Section 6]{RandomGraphDynamics}
for a detailed characterization of the mixing time of random walks on
graphs.
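The linear-in-$m$ scaling of the mixing time can be observed numerically. The following sketch is illustrative and departs from the paper's model in two ways we flag explicitly: it uses a torus (so that the stationary distribution is exactly uniform) and a lazy walk (staying put with probability $1/2$) to avoid periodicity. It iterates the exact distribution from one fixed subsquare and returns the first $t$ with $\max_{i}\left|\pi_{i}(t)-\pi_{i}\right|\leq\epsilon$.

```python
def mixing_time(side, eps):
    """First t with max_i |pi_i(t) - 1/m| <= eps for a lazy nearest-neighbor
    random walk on a side x side torus of m = side^2 subsquares."""
    m = side * side
    pi_t = [0.0] * m
    pi_t[0] = 1.0            # start from one fixed subsquare
    target = 1.0 / m         # uniform stationary distribution on the torus
    t = 0
    while max(abs(p - target) for p in pi_t) > eps:
        nxt = [0.0] * m
        for i, p in enumerate(pi_t):
            if p == 0.0:
                continue
            r, c = divmod(i, side)
            nxt[i] += p / 2.0                     # lazy: stay put
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                j = ((r + dr) % side) * side + (c + dc) % side
                nxt[j] += p / 8.0                 # move to a uniform neighbor
        pi_t = nxt
        t += 1
    return t

if __name__ == "__main__":
    for side in (4, 8, 16):
        print(side * side, mixing_time(side, 1e-3))
```

Doubling the side (quadrupling $m$) increases the empirical mixing time by a factor between $2$ and $4$ for a fixed $\epsilon$, consistent with $T_{\text{mix}}(\epsilon)=O\left(m\log\frac{1}{\epsilon}\right)$.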
\subsubsection{Concentration Results}
The following concentration result is also useful for our analysis.
\begin{lem}\label{lemmaConcentration-3} Assume that $b$ nodes are
thrown independently into $m$ subsquares. Suppose that for any subsquare
$A_{i}$, the probability $q_{A_{i}}$ of each node being thrown into
$A_{i}$ is bounded as\begin{equation}
\left|q_{A_{i}}-\frac{1}{m}\right|\leq\frac{1}{3m}.\end{equation}
Then for any constant $\epsilon>0$, the number of nodes $N_{A_{i}}(t)$
falling in any subsquare $A_{i}$ $(1\leq i\leq m<n)$ at any time
$t\in\left[1,n^{2}\right]$ satisfies
a) if $b=\Theta\left(m\log n\right)$ and $b>32m\log n$, then\[
\mathbb{P}\left(\forall(i,t):\frac{b}{6m}\leq N_{A_{i}}(t)\leq\frac{7b}{3m}\right)\geq1-\frac{2}{n^{3}};\]
b) if $b=\omega\left(m\log n\right)$, then\[
\mathbb{P}\left(\forall(i,t):\frac{\left(\frac{2}{3}-\epsilon\right)b}{m}\leq N_{A_{i}}(t)\leq\frac{\left(\frac{4}{3}+\epsilon\right)b}{m}\right)\geq1-\frac{2}{n^{3}}.\]
\begin{IEEEproof}See Appendix \ref{sub:Proof-of-Lemma-Concentration}.\end{IEEEproof}
This implies that the number of nodes residing in each subsquare at
each time of interest is, with high probability, reasonably close to
its mean. The concentration result follows from standard Chernoff
bounds \cite[Appendix A]{Alon2008} and forms the basis for our analysis.
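Part a) of Lemma \ref{lemmaConcentration-3} can be sanity-checked empirically. The sketch below is illustrative only: it takes $q_{A_{i}}=1/m$ exactly, a single time snapshot rather than all $t\in[1,n^{2}]$, and parameter values of our own choosing satisfying $b>32m\log n$.

```python
import random

def throw(b, m, rng):
    """Throw b nodes independently into m subsquares with q_{A_i} = 1/m;
    return the occupancy count of each subsquare."""
    counts = [0] * m
    for _ in range(b):
        counts[rng.randrange(m)] += 1
    return counts

if __name__ == "__main__":
    m, b = 16, 5760              # b > 32*m*log(n) for n = 4096 (32*16*log 4096 ~ 4260)
    counts = throw(b, m, random.Random(1))
    lo, hi = b / (6 * m), 7 * b / (3 * m)
    print(min(counts), max(counts), "bounds:", (lo, hi))
```

In this regime each count stays well inside $\left[\frac{b}{6m},\frac{7b}{3m}\right]$: the mean occupancy is $b/m=360$ with standard deviation around $19$, far from either bound.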
\subsubsection{Conductance}
Conductance is an isoperimetric measure that characterizes the expansion
property of the underlying graph. Consider an irreducible reversible
transition matrix $P$ with its states represented by $V\text{ }\left(\left|V\right|=n\right)$,
and assume that the stationary distribution is uniform over all states.
In spectral graph theory, the conductance associated with $P$ is
\cite{RandomGraphDynamics} \begin{equation}
\Phi\left(P\right)=\inf_{B\subset V,\left|B\right|\leq\frac{n}{2}}\frac{\sum_{i\in B,j\in B^{c}}P_{ij}}{\left|B\right|},\end{equation}
which characterizes how easily probability flow can cross from a
subset of nodes to its complement. If the transition matrix $P$ is
chosen such that\begin{equation}
P_{ij}=\begin{cases}
\frac{1}{d_{i}}, & \quad\text{if }j\in\text{neighbor}(i),\\
0, & \quad\text{else},\end{cases}\label{eq:TransitionMatrix}\end{equation}
where $d_{i}$ denotes the degree of vertex $i$, then the conductance
associated with a random geometric graph with transmission radius $r(n)$
obeys $\Phi(P)=\Theta\left(r(n)\right)$ \cite{ChenGunes07}.
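For intuition, $\Phi(P)$ can be evaluated by brute force on small graphs. The sketch below is illustrative only: it uses an $n$-cycle and the complete graph rather than a random geometric graph, and exhaustive search over subsets is exponential in $n$.

```python
from itertools import combinations

def conductance(adj):
    """Brute-force Phi(P) for P_ij = 1/d_i over neighbors (small graphs only)."""
    n = len(adj)
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for B in combinations(range(n), size):
            Bset = set(B)
            # probability flow out of B, divided by |B|
            flow = sum(1.0 / len(adj[i]) for i in B for j in adj[i] if j not in Bset)
            best = min(best, flow / size)
    return best

def cycle(n):
    """Adjacency lists of the n-cycle."""
    return [[(i - 1) % n, (i + 1) % n] for i in range(n)]

if __name__ == "__main__":
    # minimized by a contiguous arc of n/2 nodes: flow = 1, so Phi = 2/n
    print(conductance(cycle(8)))  # → 0.25
```

For the $n$-cycle the infimum is attained by a contiguous arc of $n/2$ nodes, giving $\Phi=2/n$, a low-conductance bottleneck analogous to the clustered copies in static networks.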
\subsection{Single-message Dissemination in Mobile Networks}
We only briefly sketch the proof for Theorem \ref{thm:DiscreteSP}
in this paper, since the approach is somewhat standard (see \cite{GossipTutorialShah}).
Lemma \ref{lemmaConcentration-3} implies that with high probability,
the number of nodes residing in each subsquare will exhibit sharp
concentration around the mean $n/m$ once $n>32m\log n$. For each
message $M_{i}$, denote by $N_{i}(t)$ the number of users containing
$M_{i}$ at time $t$. The spreading process is divided into $2$
phases: $1\leq N_{i}(t)\leq n/2$ and $n/2<N_{i}(t)\leq n$.
Now denote by $p_{lj}$ the probability that $l$ successfully
transmits to $j$ in the next time slot, conditional on the event that there are $\Theta(n/m)$ users residing in each subsquare. If $m<\frac{n}{32\log n}$,
one has \[
p_{lj}=\begin{cases}
\Theta\left(\frac{m}{n}\right),\quad & \text{if }l\text{ and }j\text{ can move to the same}\\
& \text{subsquare in the next time slot},\\
0,\quad & \text{else}.\end{cases}\]
Concentration results imply that for any given time $t$ and any user
$l$, there are $\Theta\left(\frac{n}{m}\right)$ users that can lie
within the same subsquare as $l$ with high probability. On the other
hand, for a geometric random graph with $r(n)=\sqrt{1/m}=v(n)$, the
transition matrix defined in (\ref{eq:TransitionMatrix}) satisfies
$P_{lj}=\Theta\left(\frac{m}{n}\right)$ for all $j$ inside the transmission
range of $l$ (where there are with high probability $\Theta\left(\frac{n}{m}\right)$
users inside the transmission range). Therefore, if we define the
\emph{conductance related to this mobility model} as $\Phi\left(n\right)=\inf_{B\subset V,\left|B\right|\leq\frac{n}{2}}\frac{\sum_{l\in B,j\in B^{c}}p_{lj}}{\left|B\right|}$,
then it is order-wise equivalent to the conductance of the geometric
random graph with $r(n)=v(n)$, and hence $\Phi(n)=\Theta(r(n))=\Theta(v(n))$.
\subsubsection{Phase 1}
At the beginning of each slot, every sender containing $M_{i}$
may transmit it, with constant probability, to any node in the $9$
surrounding subsquares equally likely by the end of this slot. Using the same
argument as \cite{GossipTutorialShah}, the expected
increment of $N_{i}(t)$ by the end of this slot can be lower bounded
by $N_{i}(t)$ times the conductance related to
the mobility model, $\Phi(n)=\Theta\left(v(n)\right)$, defined above.
We can thus conclude that before $N_{i}(t)=n/2$, \[
\mathbb{E}\left(N_{i}(t+1)-N_{i}(t)\mid N_{i}(t)\right)\geq b_{1}N_{i}(t)\Phi\left(n\right)=\tilde{b}_{1}N_{i}(t)v(n)\]
holds for some constants $b_{1}$ and $\tilde{b}_{1}$. Following
the same martingale-based proof technique used for single-message
dissemination in \cite[Theorem 3.1]{GossipTutorialShah}, we can prove
that for any $\epsilon>0$, the time $T_{i1}(\epsilon)$ by which
$N_{i}(t)\geq n/2$ holds with probability at least $1-\epsilon$
can be bounded by \begin{equation}
T_{i1}(\epsilon)=O\left(\frac{\log n+\log\epsilon^{-1}}{\Phi(n)}\right)=O\left(\frac{\log n+\log\epsilon^{-1}}{v(n)}\right).\end{equation}
Take $\epsilon=n^{-3}$, then $T_{i1}(\epsilon)$ is bounded by $O\left(\frac{\log n}{v(n)}\right)$
with probability at least $1-n^{-3}$.
\subsubsection{Phase 2}
This phase starts at $T_{i1}$ and ends when $N_{i}(t)=n$. Since
the roles of $j$ and $l$ are symmetric, the probability of $j$
having $l$ as the nearest neighbor is equal to the probability of
$l$ having $j$ as the nearest neighbor. This further yields $p_{lj}=p_{jl}$
by observing that the transmission success probability is the same
for each designated pair. Therefore, we can see: \begin{align}
& \mathbb{E}\left(N_{i}(t+1)-N_{i}(t)\mid N_{i}(t)\right)\nonumber \\
\geq & \enskip b_{2}\sum_{l\in\mathcal{N}_{i}(t),j\notin\mathcal{N}_{i}(t)}p_{lj}\label{eq:Phase2SinglePieceStayMove}\\
= & \enskip b_{2}\left(n-N_{i}(t)\right)\frac{\sum_{j\notin\mathcal{N}_{i}(t),l\in\mathcal{N}_{i}(t)}p_{jl}}{n-N_{i}(t)}\nonumber \\
\geq & \enskip b_{2}\left(n-N_{i}(t)\right)\Phi(n).\end{align}
Denote by $T_{i2}$ the duration of Phase 2. Following the same
machinery as in \cite{MoskSha08}, we obtain\begin{equation}
T_{i2}=O\left(\frac{\log n}{\Phi(n)}\right)=O\left(\frac{\log n}{v(n)}\right)\end{equation}
with probability exceeding $1-n^{-3}$.
By combining the durations of Phase 1 and Phase 2 and applying the
union bound over all distinct messages, we conclude that $T_{\text{sp}}^{\text{uc}}(i)=O\left(\frac{\log n}{v(n)}\right)$
holds for all distinct messages with high probability. When $v(n)>\sqrt{\frac{32\log n}{n}}$,
at any time instant each node can only transmit a message to nodes
at distance at most $O\left(v(n)\right)$, and hence it takes
at least $\Omega\left(1/v(n)\right)$ time slots for $M_{i}$
to be relayed to the node farthest from $i$ at time $0$; this is
a universal lower bound on the spreading time. Therefore, $T_{\text{sp}}^{\text{uc}}(i)$
is only a logarithmic factor away from the fundamental lower bound
$\Omega\left(\frac{1}{v(n)}\right)$. The bottleneck
of this upper bound lies in the conductance of the underlying random
network. When $v(n)=\omega\left(\sqrt{\frac{\log n}{n}}\right)$,
mobility accelerates spreading by increasing the conductance. The
mixing time is much larger than the spreading time, which
implies that the copies of each message are still spatially confined
to a fixed region (typically clustering around the source) rather than
being spread out over the entire square. We note that with full mobility,
the spreading time in the single-message dissemination case achieves the
universal lower bound $\Theta(\log n)$, which is much smaller than
that under a limited degree of velocity.
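The role of conductance in the two-phase argument can be visualized with a toy simulation of the jump mobility model. The sketch below is a simplification, not the paper's protocol: every infected node's push succeeds, interference is ignored, a push may land on the sender itself, and `spread_time` together with all parameter values is illustrative. Larger subsquare counts $m$ correspond to smaller velocity $v(n)=1/\sqrt{m}$, hence smaller conductance and slower spreading.

```python
import random

def spread_time(n, m, rng, max_t=100000):
    """Single-message push under jump mobility: each slot every node moves to a
    uniform one of the 9 surrounding subsquares (torus of m cells), then each
    infected node pushes the message to a uniform node in its own subsquare."""
    side = int(m ** 0.5)
    pos = [rng.randrange(m) for _ in range(n)]
    infected = [False] * n
    infected[0] = True                      # single source
    for t in range(1, max_t + 1):
        for i in range(n):                  # mobility step
            r, c = divmod(pos[i], side)
            dr = rng.choice((-1, 0, 1))
            dc = rng.choice((-1, 0, 1))
            pos[i] = ((r + dr) % side) * side + (c + dc) % side
        cell = {}                           # group nodes by subsquare
        for i in range(n):
            cell.setdefault(pos[i], []).append(i)
        newly = []
        for members in cell.values():       # communication step
            for i in members:
                if infected[i]:
                    newly.append(rng.choice(members))
        for j in newly:
            infected[j] = True
        if all(infected):
            return t
    return max_t

if __name__ == "__main__":
    for m in (16, 256):
        t = spread_time(400, m, random.Random(0))
        print(f"m={m}, v=1/{int(m ** 0.5)}: spread in {t} slots")
```

With $n=400$ nodes, the coarser partition ($m=16$, larger $v$) completes noticeably faster than the finer one ($m=256$), in line with the $O\left(\frac{\log n}{v(n)}\right)$ bound.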
\subsection{Multi-message Spreading in Static Networks with RANDOM PUSH}
The proof idea of Theorem \ref{thm-Random-Push-Static} is sketched
in this subsection.
\subsubsection{The Lower Bound on the Spreading Time}
To begin our analysis, we partition the entire unit square as follows:
\begin{itemize}
\item The unit square is divided into a set of nonoverlapping \textbf{\textit{tiles}}
$\left\{ B_{j}\right\} $ each of side length $\sqrt{32\log n/n}$
as illustrated in Fig. \ref{fig:DecayVert} (Note that this is a different
partition from subsquares $\left\{ A_{j}\right\} $ resulting from
the mobility model).
\item The above partition also allows us to slice the network area into
\textbf{\textit{vertical strips}} each of width $\sqrt{32\log n/n}$
and length $1$. Label the vertical strips as $\left\{ V_{l}\right\} \left(1\leq l\leq\sqrt{n/\left(32\log n\right)}\right)$
in increasing order from left to right, and denote by $N_{V_{l}}(t)$
and $\mathcal{N}_{V_{l}}(t)$ the number and the set of nodes in $V_{l}$
that contain $M^{*}$ by time $t$.
\item The vertical strips are further grouped into \textbf{\textit{vertical
blocks}} $\left\{ V_{j}^{\text{b}}\right\} $ each containing $\log n$
strips, i.e. $V_{j}^{\text{b}}=\left\{ V_{l}:(j-1)\log n+1\leq l\leq j\log n\right\} $.
\end{itemize}
\begin{remark}Since each tile has an area of $32\log n/n$, concentration
results (Lemma \ref{lemmaConcentration-3}) imply that there are $\Theta\left(\log n\right)$
nodes residing in each tile with high probability. Since each sender
only attempts to transmit to its nearest receiver, with high
probability communication occurs only between nodes within
the same tile or in adjacent tiles. \end{remark}
Without loss of generality, we assume that the source of $M^{*}$
resides in the \textit{leftmost} vertical strip $V_{1}$. We aim at
counting the time taken for $M^{*}$ to cross each vertical block
horizontally. In order to decouple the counting for different vertical
blocks, we construct a new spreading process $\mathcal{G}^{*}$ as
follows.
\vspace{6pt}
\framebox{%
\begin{minipage}[t]{3.2in}%
Spreading in Process $\mathcal{G}^{*}$:
\begin{enumerate}
\item At $t=0$, distribute $M^{*}$ to all nodes residing in vertical strip
$V_{1}$.
\item Each node adopts RANDOM PUSH as the message selection strategy.
\item Define $T_{l}^{\text{b}}=\min\left\{ t:N_{V_{l}^{\text{b}}}(t)>0\right\} $
as the first time that $M^{*}$ reaches vertical block $V_{l}^{\text{b}}$.
For all $l\geq2$, distribute $M^{*}$ to all nodes residing in either
$V_{l-1}^{\text{b}}$ or the leftmost strip of $V_{l}^{\text{b}}$
at time $t=T_{l}^{\text{b}}$ .
\end{enumerate}
\end{minipage}}
\vspace{6pt}
It can be verified using a coupling approach that $\mathcal{G}^{*}$
evolves stochastically faster than the true process. By enforcing
mandatory dissemination at every $t=T_{l}^{\text{b}}$, we enable
separate counting for spreading time in different blocks -- the spreading
in $V_{l+1}^{\text{b}}$ after $T_{l+1}^{\text{b}}$ is independent
of what has happened in $V_{l}^{\text{b}}$. Roughly speaking, since
there are $\sqrt{\frac{n}{32\log^{3}n}}$ blocks, the spreading time
over the entire region is $\Theta\left(\sqrt{\frac{n}{\text{poly}\left(\log n\right)}}\right)$
times the spreading time over a typical block.
We perform a single-block analysis in the following lemma, and characterize
the rate of propagation across different strips over a typical block
in $\mathcal{G}^{*}$. By Property 3) of $\mathcal{G}^{*}$, the time
taken to cross each typical block is equivalent to the crossing time
in $V_{1}^{\text{b}}$. Specifically, we demonstrate that the time
taken for a message to cross a single block is at least $\Omega\left(w^{1-\epsilon}\right)$
for any positive $\epsilon$. Since the crossing time for each block
in $\mathcal{G}^{*}$ is statistically equivalent, this single-block
analysis further allows us to lower bound the crossing time for the
entire region.
\begin{lem}\label{lemma-cross-strip}Consider the spreading of $M^{*}$
over $V_{1}^{\text{b}}$ in the original process $\mathcal{G}$. Suppose
each node contains at least $w=\omega\left(\text{poly}\left(\log n\right)\right)$
messages initially. Define $t_{X}:=w^{1-\epsilon}$, and define $l^{*}=\min\left\{ l:N_{V_{l}}(t_{X})=O\left(w^{\epsilon}\log n\right)\right\} $.
Then with probability at least $1-3n^{-3}$, we have
(a) $l^{*}\leq\frac{1}{2}\log n;$
(b) for all $s$ $\left(1\leq s<l^{*}\right)$, there exists
a constant $c_{31}$ such that \begin{align}
& N_{V_{s}}(t_{X})\leq\left(\frac{\log n}{w^{\epsilon}}\right)^{s-1}\left(c_{31}\sqrt{n}\log n\right);\label{eq:cross-strip-decay}\end{align}
(c) $N_{V_{\frac{1}{2}\log n}}(t_{X})\leq\log^{2}n$.
\end{lem}
\begin{IEEEproof}[{\bf Sketch of Proof of Lemma \ref{lemma-cross-strip}}]The
proof relies on a fixed-point type of argument; the detailed
derivation is deferred to Appendix \ref{sec:Proof-of-Lemma-Cross-Strip}.
\end{IEEEproof}
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[scale=0.4]{DecayVertical.jpg}}
\par\end{centering}
\caption{\label{fig:DecayVert}The number $N_{V_{l}}(t_{X})$ of nodes
containing $M^{*}$ in vertical strip $V_{l}$ by time $t_{X}$ decays
rapidly at a geometric rate. }
\end{figure}
The key observation from the above lemma is that the number of nodes
in $V_{s}$ containing $M^{*}$ is decaying rapidly as $s$ increases,
which is illustrated in Fig. \ref{fig:DecayVert}. We also observe
that $N_{V_{l}}(t_{X})$ decreases to $O\left(\log^{2}n\right)$ by
strip $V_{\frac{1}{2}\log n}$.
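The decay bound (\ref{eq:cross-strip-decay}) can be tabulated directly. The sketch below uses illustrative values ($c_{31}=1$, $n=10^{6}$, $w=10^{4}$, $\epsilon=0.5$, none taken from the paper) and reports the first strip index at which the bound falls below $\log^{2}n$; for these values this happens well before strip $\frac{1}{2}\log n\approx6.9$, consistent with parts (a) and (c).

```python
import math

def first_small_strip(n, w, eps, c31=1.0):
    """Smallest s with (log n / w^eps)^(s-1) * c31*sqrt(n)*log n <= (log n)^2."""
    ratio = math.log(n) / w ** eps       # per-strip decay factor (< 1 when w^eps > log n)
    bound = c31 * math.sqrt(n) * math.log(n)
    s = 1
    while bound > math.log(n) ** 2:
        bound *= ratio
        s += 1
    return s

if __name__ == "__main__":
    print(first_small_strip(10 ** 6, 10 ** 4, eps=0.5))  # → 4
```

Each extra strip multiplies the bound by $\log n/w^{\epsilon}\ll1$, which is why only $O(\log n)$ strips are needed before the copy count becomes polylogarithmic.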
While Lemma \ref{lemma-cross-strip} determines the number of copies
of $M^{*}$ inside $V_{1}\sim V_{\frac{1}{2}\log n}$ by time $t_{X}$,
it does not indicate whether $M^{*}$ has crossed the block $V_{1}^{\text{b}}$
by $t_{X}$. In order to characterize the crossing time, we
still need to examine the evolution in strips $V_{\frac{1}{2}\log n+1}\sim V_{\log n}$.
Since communication occurs only between adjacent strips or within
the same strip, all copies lying to the right of $V_{\frac{1}{2}\log n}$
must be relayed via a path that starts from $V_{1}$ and passes through
$V_{\frac{1}{2}\log n}$. That said, all copies in $V_{\frac{1}{2}\log n+1}\sim V_{\log n}$
by time $t_{X}$ must have been forwarded (possibly in a multi-hop
manner) via some nodes having received $M^{*}$ by $t_{X}$. If we
denote by $\mathcal{N}_{V_{\log n/2}}^{*}\left(t_{X}\right)$ the
set of nodes in $V_{\log n/2}$ having received $M^{*}$ by $t_{X}$
in $\mathcal{G}^{*}$, then we can construct a process $\overline{\mathcal{G}}$
in which all nodes in $\mathcal{N}_{V_{\log n/2}}^{*}\left(t_{X}\right)$
receive $M^{*}$ from the very beginning ($t=0$), so that the evolution
in $\overline{\mathcal{G}}$ is stochastically faster than in $\mathcal{G}^{*}$
by time $t_{X}$.
\vspace{6pt}
\framebox{%
\begin{minipage}[t]{3.2in}%
Spreading in Process $\overline{\mathcal{G}}$:
\begin{enumerate}
\item Initialize (a): at $t=0$, for all $v\in\mathcal{N}_{V_{\log n/2}}^{*}(t_{X})$,
distribute $M^{*}$ to all nodes residing \textit{in the same tile}
as $v$.
\item Initialize (b): at $t=0$, if $v_{1}$ and $v_{2}$ are two nodes
in $\mathcal{N}_{V_{\log n/2}}^{*}(t_{X})$ that are less than $\log n$
tiles away from each other, then distribute $M^{*}$ to all nodes in
all tiles between $v_{1}$ and $v_{2}$ in $V_{\log n/2}$. After this
step, the tiles that contain $M^{*}$ form a set of \textit{nonoverlapping}
substrips.
\item Up to time $t_{X}=w^{1-\epsilon}$, the evolution to the left of $V_{\log n/2}$
proceeds exactly as in $\mathcal{G}^{*}$.
\item At the first time slot in which any node in the above substrips selects
$M^{*}$ for transmission, distribute $M^{*}$ to all nodes in all
tiles adjacent to any of these substrips. In other words, we expand
all substrips outwards by one tile.
\item Repeat from 4) but consider the new set of substrips after expansion.
\end{enumerate}
\end{minipage}}
\vspace{6pt}
By our construction of $\overline{\mathcal{G}}$, the evolution to
the left of $V_{\frac{1}{2}\log n}$ stays completely the same as
that in $\mathcal{G}^{*}$, and hence there is no \emph{successful}
transmission of $M^{*}$ between nodes in $V_{1}\sim V_{\frac{1}{2}\log n-1}$
and those in $V_{\log n/2}$ but not contained in $\mathcal{N}_{V_{\log n/2}}^{*}(t_{X})$.
Therefore, in our new process $\overline{\mathcal{G}}$, the evolution
to the left of $V_{\frac{1}{2}\log n}$ by time $t_{X}$ is decoupled
from that to the right of $V_{\frac{1}{2}\log n}$ by time $t_{X}$.
Our objective is to examine how likely it is that $T_{2}^{*}=\min\left\{ t:M^{*}\text{ reaches }V_{\log n}\text{ in }\overline{\mathcal{G}}\right\} $
is smaller than $t_{X}$. It can be observed that no two substrips
can merge before $T_{2}^{*}$, since they are initially spaced
at least $\log n$ tiles from each other. This allows us to treat
them separately. Specifically, the following lemma provides a lower
bound on $T_{2}^{*}$ by studying the process $\overline{\mathcal{G}}$.
\begin{lem}\label{Lemma-Cross-Strip-Remaining}Suppose $t_{X}=w^{1-\epsilon}$
and each node contains at least $w$ distinct messages since $t=0$.
Then we have\begin{equation}
\mathbb{P}\left(T_{2}^{*}\leq t_{X}\right)\leq\frac{4}{n^{3}}.\end{equation}
\end{lem}
\begin{IEEEproof}See Appendix \ref{sec:Proof-of-Lemma-Cross-Strip-Remaining}.
\end{IEEEproof}
This lemma indicates that $M^{*}$ is unable to cross $V_{1}^{\text{b}}$
by time $t_{X}=w^{1-\epsilon}$ in $\overline{\mathcal{G}}$. Since
$\overline{\mathcal{G}}$ is stochastically faster than the original
process, the time taken for $M^{*}$ to cross a vertical block in
the original process exceeds $t_{X}$ with high probability. In other
words, the set of nodes having received $M^{*}$ by $t_{X}$ extends
over no more than $O\left(\log n\right)$ further strips.
Since there are $\Theta\left(\sqrt{n/\text{poly}(\log n)}\right)$
vertical blocks in total, and crossing each block takes at least $\Omega\left(w^{1-\epsilon}\right)$
time slots, the time taken for $M^{*}$ to cross all blocks can thus
be bounded below as \begin{equation}
T^{*}=\Omega\left(w^{1-\epsilon}\sqrt{\frac{n}{\text{poly}\left(\log n\right)}}\right)\end{equation}
with high probability.
\subsubsection{Discussion}
Theorem \ref{thm-Random-Push-Static} implies that if a message $M^{*}$
is injected into the network when each user contains $\Omega\left(k/\text{poly}(\log n)\right)$
messages, the spreading time for $M^{*}$ is $\Omega\left(k^{1-\epsilon}\sqrt{n/\text{poly}(\log n)}\right)$
for arbitrarily small $\epsilon$. In other words, there exists a gap as
large as $\Omega\left(\sqrt{n/\text{poly}(\log n)}\right)$ from optimality.
The tightness of this lower bound can be verified by deriving an \textit{upper
bound} using the conductance-based approach as follows.
We observe that the message selection probability for $M^{*}$ is
always lower bounded by $1/k$. Hence, we can couple a new process
adopting a different message-selection strategy such that a transmitter
containing $M^{*}$ selects it for transmission with \textit{state-independent}
probability $1/k$ at each time. It can be verified that this process
evolves stochastically slower than the original one. The conductance
associated with the new evolution for $M^{*}$ is $\Phi(n)=\frac{1}{k}\Theta\left(r(n)\right)=O\left(\frac{1}{k}\sqrt{\frac{\log n}{n}}\right)$.
Applying similar analysis as in \cite{MoskSha08} yields\begin{equation}
T_{i}=O\left(\frac{\text{poly}\left(\log n\right)}{\Phi(n)}\right)=O\left(k\sqrt{n}\text{poly}(\log n)\right)\end{equation}
with probability exceeding $1-n^{-2}$, which is only a poly-logarithmic
gap from the lower bound we derived.
The tightness of this upper bound implies that the propagation bottleneck
is captured by the conductance-based measure -- the copies of each
message tend to cluster around the source at any time instead of spreading
out (see Fig. \ref{fig:SpreadOut}). As a result, only the nodes lying
near the boundary of the cluster are likely to forward the message to new users.
Capacity is lost inside the cluster, since many transmissions are
directed to receivers that have already obtained the message and are thus
wasted. This graph-expansion bottleneck can be overcome with the assistance
of mobility.
\subsection{Multi-message Spreading in Mobile Networks with MOBILE PUSH}
The proof of Theorem \ref{thm:DiscreteMP} is sketched in this subsection.
We divide the entire evolution process into three phases. The duration
of Phase 1 is chosen to allow each message to be forwarded to a sufficiently
large number of users. After this initial phase (which acts to {}``seed''
the network with a sufficient number of all the messages), random
gossiping ensures the spread of all messages to all nodes.
\subsubsection{Phase 1}
This phase accounts for the first $c_{6}\left(c_{0}m\log n+\frac{m}{c_{\text{h}}\log n}\right)\log^{2}n=\Theta\left(m\log^{3}n\right)$
time slots, where $c_{0}$, $c_{6}$ and $c_{\text{h}}$ are constants
independent of $m$ and $n$. At the end of this phase, each message
will be contained in at least $32m\log n=\Theta\left(m\log n\right)$
nodes. The duration of this phase far exceeds the mixing
time of the random walk mobility model, which enables these copies
to spread out {}``uniformly'' over space.
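This total duration can be sanity-checked against the subphase lengths used later in this subsection; a small numeric sketch with all constants set to $1$ (an illustrative choice):

```python
import math

def phase1_duration(n, m, c0=1.0, c6=1.0, ch=1.0):
    """Total Phase-1 length: c6 * log^2(n) pairs of subphases, each
    pair consisting of a spreading subphase of m/(ch*log n) slots and
    a relaxation subphase of c0*m*log n slots."""
    log_n = math.log(n)
    return c6 * log_n ** 2 * (c0 * m * log_n + m / (ch * log_n))
```

The relaxation subphases dominate, so the total is $(1+o(1))\,c_{6}c_{0}\,m\log^{3}n$.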
We are interested in counting how many nodes will contain a particular
message $M_{i}$ by the end of Phase 1. Instead of counting all potential
multi-hop relaying of $M_{i}$, we only look at the set of nodes that
receive $M_{i}$ \textit{directly} from source $i$ in \textit{odd}
slots. This approach provides a crude lower bound on $N_{i}(t)$ at
the end of Phase 1, but it suffices for our purpose.
Consider the following scenario: at time $t_{1}$, node $i$ attempts
to transmit its message $M_{i}$ to receiver $j$. Denote by $Z_{i}(t)$
$(1\leq i\leq n)$ the subsquare position of node $i$, and define
the relative coordinate $Z_{ij}(t):=Z_{i}(t)-Z_{j}(t)$. Clearly,
$Z_{ij}(t)$ forms another two-dimensional random walk on a discrete
torus. For notational convenience, we introduce the notation $\mathbb{P}_{0}\left(\cdot\right)\overset{\Delta}{=}\mathbb{P}\left(\cdot\mid Z_{ij}(0)=\left(0,0\right)\right)$
to denote the \emph{conditional measure} given $Z_{ij}\left(0\right)=\left(0,0\right)$.
The following lemma characterizes the hitting time of this random
walk to the boundary.
\begin{lem}\label{lemHittingTime}Consider the symmetric random walk
$Z_{ij}(t)$ defined above. Denote the set $\mathcal{A}_{\mathrm{bd}}$
of subsquares on the boundary as\[
\mathcal{A}_{\mathrm{bd}}=\left\{ A_{i}\left|A_{i}=\left(\pm\frac{\sqrt{m}}{2},j\right)\right.\text{ or }A_{i}=\left(j,\pm\frac{\sqrt{m}}{2}\right),\forall j\right\} ,\]
and define the first hitting time to the boundary as $T_{\mathrm{hit}}=\min\left\{ t:Z_{ij}(t)\in\mathcal{A}_{\mathrm{bd}}\right\} $.
Then there is a constant $c_{\text{h}}$ such that\begin{equation}
\mathbb{P}_{0}\left(T_{\mathrm{hit}}<\frac{m}{c_{\text{h}}\log n}\right)\leq\frac{1}{n^{4}}.\end{equation}
\end{lem}
\begin{IEEEproof}See Appendix \ref{sec:Proof-of-Lemma-Hitting-Time}.\end{IEEEproof}
In addition, the following lemma provides an upper bound on the expected
number of time slots by time $t$ during which the walk returns to
$(0,0)$.
\begin{lem}\label{lemSingleWalk}For the random walk $Z_{ij}(t)$
defined above, there exist constants $c_{3}$ and $c_{\text{h}}$
such that for any $t<\frac{m}{c_{\text{h}}\log n}$: \begin{align}
\mathbb{E}\left(\left.\sum_{k=1}^{t}\mathds{1}\left(Z_{ij}(k)=(0,0)\right)\right|Z_{ij}(0)=(0,0)\right)\leq c_{3}\log t.\label{eq:RandomWalkReturnTimes}\end{align}
Here, $\mathds{1}\left(\cdot\right)$ denotes the indicator function.
\end{lem}
\begin{IEEEproof}[{\bf Sketch of Proof of Lemma \ref{lemSingleWalk}}]Denote
by $\mathcal{H}_{\text{bd}}$ the event that $Z_{ij}(t)$ hits the
boundary $\mathcal{A}_{\text{bd}}$ (as defined in Lemma \ref{lemHittingTime})
before $t=m/\left(c_{\text{h}}\log n\right)$. Conditional on $Z_{ij}(0)=\left(0,0\right)$,
the probability $q_{ij}^{0}(t)$ of $Z_{ij}(t)$ returning to $(0,0)$
at time $t$ can then be bounded as\begin{equation}
q_{ij}^{0}(t)\leq\mathbb{P}_{0}\left(\mathcal{H}_{\text{bd}}\right)+\mathbb{P}_{0}\left(Z_{ij}(t)=\left(0,0\right)\wedge\overline{\mathcal{H}}_{\text{bd}}\right).\end{equation}
Now, observe that when restricted to the set of sample paths where
$Z_{ij}(t)$ does not reach the boundary by $t$, we can couple the
sample paths of $Z_{ij}(t)$ to the sample paths of a random walk
$\tilde{Z}_{ij}(t)$ over an infinite plane before the corresponding
hitting time to the boundary. Denote by $\tilde{\mathcal{H}}_{\text{bd}}$
the event that $\tilde{Z}_{ij}(t)$ hits $\mathcal{A}_{\text{bd}}$
by $t=m/\left(c_{\text{h}}\log n\right)$, then \begin{align*}
\mathbb{P}_{0}\left(Z_{ij}(t)=\left(0,0\right)\wedge\overline{\mathcal{H}}_{\text{bd}}\right) & =\mathbb{P}_{0}\left(\tilde{Z}_{ij}(t)=\left(0,0\right)\wedge\overline{\tilde{\mathcal{H}}}_{\text{bd}}\right)\\
& \leq\mathbb{P}_{0}\left(\tilde{Z}_{ij}(t)=\left(0,0\right)\right).\end{align*}
The return probability obeys $\mathbb{P}_{0}\left(\tilde{Z}_{ij}(t)=\left(0,0\right)\right)\sim t^{-1}$
for a random walk over an infinite plane \cite{FosterGood53}, and
$\mathbb{P}_{0}\left(\mathcal{H}_{\text{bd}}\right)$ will be bounded
in Lemma \ref{lemHittingTime}. Summing up all $q_{ij}^{0}(t)$ yields
(\ref{eq:RandomWalkReturnTimes}). See Appendix \ref{sec:Proof-of-Lemma-SingleWalk}
for detailed derivation.\end{IEEEproof}
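The $t^{-1}$ decay of the return probability, and hence the logarithmic growth in (\ref{eq:RandomWalkReturnTimes}), can be checked by a seeded Monte Carlo experiment; the sketch below uses a simple symmetric walk on $\mathbb{Z}^{2}$ as a stand-in for $Z_{ij}$ and ignores the torus boundary (parameters are illustrative):

```python
import random

def mean_returns_to_origin(t, trials, seed=0):
    """Monte Carlo estimate of E[ #{1 <= k <= t : S_k = (0,0)} ] for a
    simple symmetric random walk on Z^2 started at the origin."""
    rng = random.Random(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    total = 0
    for _ in range(trials):
        x = y = 0
        for _ in range(t):
            dx, dy = rng.choice(steps)
            x, y = x + dx, y + dy
            if x == 0 and y == 0:
                total += 1
    return total / trials
```

Since $\mathbb{P}_{0}\left(S_{2k}=(0,0)\right)\sim 1/(\pi k)$, the estimate grows roughly like $\frac{1}{\pi}\log t$, consistent with the $c_{3}\log t$ bound.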
In order to derive an estimate on the number of distinct nodes receiving
$M_{i}$ directly from source $i$, we need to calculate the number
of slots where $i$ fails to forward $M_{i}$ to a new user. In addition
to physical-layer outage events, some transmissions occur to users
already possessing $M_{i}$, and hence are not successful. Recall
that we are using a one-sided, push-only strategy, and hence we cannot
always send an innovative message. Denote by $F_{i}\left(t\right)$
the number of \textit{wasted transmissions} from $i$ to some users
already containing $M_{i}$ by time $t$. This can be estimated as
in the following lemma.
\begin{lem}\label{lemma-NumOfFailureBcRetransmissions} For $t_{0}=\frac{m}{c_{\text{h}}\log n}$,
the number of \textit{wasted transmissions} $F_{i}(t)$ defined above
obeys \begin{equation}
\mathbb{E}\left(F_{i}(t_{0})\right)\leq c_{5}\frac{m\log n}{n}t_{0}\end{equation}
for some fixed constant $c_{5}$ with probability exceeding $1-3n^{-3}$.
\end{lem}
\begin{IEEEproof}[{\bf Sketch of Proof of Lemma \ref{lemma-NumOfFailureBcRetransmissions}}]Consider
a particular pair of nodes $i$ and $j$, where $i$ is the source
and $j$ contains $M_{i}$. A wasted transmission occurs when (a)
$i$ and $j$ meet in the same subsquare again, and (b) $i$ is designated
as a sender with $j$ being the intended receiver. The probability
of event (a) can be calculated using Lemma \ref{lemSingleWalk}, and
the probability of (b) is $\Theta\left(m/n\right)$ due to sharp concentration
on $N_{A_{i}}$. See Appendix \ref{sec:Proof-of-Lemma-Number-Of-Fairlure}.
\end{IEEEproof}
The above result is helpful in estimating the \textit{expected number}
of distinct users containing $M_{i}$. However, it is not obvious
whether $F_{i}(t)$ exhibits the desired sharp concentration. The difficulty
is partly due to the dependence among $\left\{ Z_{ij}(t)\right\} $
across different $t$ arising from the Markov property. Moreover, since
both depend on the location of $i$, $Z_{ij_{1}}(t)$ and $Z_{ij_{2}}(t)$
are not independent for $j_{1}\neq j_{2}$. However, this difficulty
can be circumvented by constructing different processes that exhibit
approximate mutual independence as follows.
The time duration $\left[1,c_{6}\left(c_{0}m\log n+m/\left(c_{\text{h}}\log n\right)\right)\log^{2}n\right]$
of Phase 1 is divided into $2c_{6}\log^{2}n$ non-overlapping subphases
$P_{1,j}$ $\left(1\leq j\leq2c_{6}\log^{2}n\right)$ for some constant
$c_{6}$. Each odd subphase accounts for $m/\left(c_{\text{h}}\log n\right)$
time slots, whereas each even subphase contains $c_{0}m\log n$ slots.
See Fig. \ref{fig:Phase1Subphase} for an illustration. Instead of
studying the true evolution, we consider different evolutions for
each subphase. In each odd subphase, source $i$ attempts to transmit
message $M_{i}$ to its intended receiver as in the original process.
But in every even subphase, all newly received messages are immediately
deleted. The purpose of constructing these \textit{clearance} or
\textit{relaxation} processes in even subphases is to allow for approximately
independent counting in the odd subphases. The duration $c_{0}m\log n$
of each even subphase, which is larger than the typical mixing time
of the random walk, is sufficient to allow each user to reach any
subsquare with nearly uniform probability.
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[scale=0.8]{Phase1Plot.jpg}}
\par\end{centering}
\caption{\label{fig:Phase1Subphase}Phase 1 is divided into $2c_{6}\log^{2}n$
subphases. Each odd subphase accounts for $m/\left(c_{\text{h}}\log n\right)$
slots, during which all nodes perform message spreading. Each even
subphase contains $c_{0}m\log n$ slots, during which no transmissions
occur; it allows all nodes containing a typical message to be uniformly
spread out. }
\end{figure}
\begin{lem}\label{lemma-Phase1-Ni-Bound} Set $t$ to be $c_{6}\left(c_{0}m\log n+\frac{m}{c_{\text{h}}\log n}\right)\log^{2}n$,
which is the end time slot of Phase 1. The number of users containing
each message $M_{i}$ can be bounded below as
\begin{equation}
\forall i,\quad N_{i}\left(t\right)>32m\log n\end{equation}
with probability at least $1-c_{7}n^{-2}$.
\end{lem}
\begin{IEEEproof}See Appendix \ref{sec:Proof-of-Lemma-Phase1-Ni-Bound}.\end{IEEEproof}
In fact, if $m\log^{2}n\ll n$ holds, the above lemma can be further
refined to $N_{i}\left(t\right)=\Theta\left(m\log^{2}n\right)$. This
implies that, by the end of Phase 1, each message has been flooded
to $\Omega\left(m\log n\right)$ users. They are able to \textit{cover}
all subsquares (i.e., the messages' locations are roughly uniformly
distributed over the unit square) after a further mixing time duration.
\begin{figure}[htbp]
\begin{centering}
\textsf{\includegraphics[scale=0.43]{Spreadout.pdf}}
\par\end{centering}
\caption{\label{fig:SpreadOut}The left plot illustrates the clustering phenomenon
of $\mathcal{N}_{i}(t)$ in the evolution over a static network. However,
even restricted mobility may allow these nodes to spread out within
the mixing time duration as illustrated in the right plot. }
\end{figure}
\subsubsection{Phase 2}
This phase starts from the end of Phase 1 and ends when $N_{i}(t)>n/8$
for all $i$. We use $t=0$ to denote the starting slot of Phase 2
for convenience of presentation. Instead of directly looking at the
original process, we generate a new process $\tilde{\mathcal{G}}$
which evolves slower than the original process $\mathcal{G}$. Define
$\mathcal{S}_{i}(t)$ and $\tilde{\mathcal{S}}_{i}(t)$ as the set
of messages that node $i$ contains at time $t$ in $\mathcal{G}$
and $\tilde{\mathcal{G}}$, with $S_{i}(t)$ and $\tilde{S}_{i}(t)$
denoting their cardinality, respectively. For clearer exposition,
we divide the entire phase into several time blocks each of length
$k+c_{0}\log n/v^{2}(n)$, and use $t_{B}$ to label different time
blocks. We define $\tilde{\mathcal{N}}_{i}^{B}(t_{B})$ to denote
$\tilde{\mathcal{N}}_{i}(t)$ with $t$ being the starting time of
time block $t_{B}$. $\tilde{\mathcal{G}}$ is generated from $\mathcal{G}$:
everything in these two processes remains the same (including locations,
movements, physical-layer outage events, etc.) except the message selection
strategy, which is detailed below:
\vspace{6pt}
\framebox{%
\begin{minipage}[t]{3.2in}%
Message Selection Strategy in the Coupled Process $\tilde{\mathcal{G}}$:
\begin{enumerate}
\item ${\bf \text{Initialize}}$: At $t=0$, for all $i$, copy the set
$\mathcal{S}_{i}(t)$ of all messages that $i$ contains to $\tilde{\mathcal{S}}_{i}(t)$.
Set $t_{B}=0$.
\item In the next $c_{0}\log n/v^{2}(n)$ time slots, all new messages received
in this subphase are immediately deleted, i.e., no successful forwarding
occurs in this subphase regardless of the locations and physical-layer
conditions.
\item In the next $k$ slots, for every sender $i$, each message it contains
is randomly selected with probability $1/k$ for transmission.
\item For all $i$, if the number of nodes containing $M_{i}$ is larger
than $2\tilde{N}_{i}^{B}(t_{B})$, delete $M_{i}$ from some of these
nodes so that $\tilde{N}_{i}(t)=2\tilde{N}_{i}^{B}(t_{B})$ by the
end of this time block.
\item Set $t_{B}\leftarrow t_{B}+1$. Repeat from (2) until $\tilde{N}_{i}>n/8$
for all $i$.
\end{enumerate}
\end{minipage}}
\vspace{6pt}
Thus, each time block consists of a relaxation period and a spreading
period. The key idea is to simulate an \textit{approximately spatially-uniform
evolution}, which is summarized as follows:
\begin{itemize}
\item After each spreading subphase, we give the process a \textit{relaxation}
period so that each node can reach any subsquare with nearly uniform probability.
This is similar to the relaxation period introduced in Phase 1.
\item Trimming the messages alone does \textit{not} necessarily generate
a slower process, because it potentially increases the selection probability
for each message. Therefore, we fix the message selection probability
at the \textit{state-independent} lower bound $1/k$. Surprisingly,
this conservative bound suffices for our purpose because it is exactly
one of the \textit{bottlenecks} for the evolution.
\end{itemize}
The following lemma makes a formal comparison of $\mathcal{G}$ and
$\tilde{\mathcal{G}}$.
\begin{lem}\label{lemma-StochasticOrderPhase2}$\tilde{\mathcal{G}}$
evolves stochastically slower than $\mathcal{G}$, i.e.\begin{equation}
\mathbb{P}\left(T_{2}>x\right)<\mathbb{P}\left(\tilde{T}_{2}>x\right),\quad\forall x>0\end{equation}
where $T_{2}=\min\left\{ t:N_{i}(t)>n/8,\forall i\right\} $ and
$\tilde{T}_{2}=\min\left\{ t:\tilde{N}_{i}(t)>n/8,\forall i\right\} $
are the stopping time of Phase 2 for $\mathcal{G}$ and $\tilde{\mathcal{G}}$,
respectively. \end{lem}
\begin{IEEEproof}Whenever a node $i$ sends a message $M_{k}$ to
$j$ in $\mathcal{G}$: (a) if $M_{k}\in\tilde{\mathcal{S}}_{i}$,
then $i$ selects $M_{k}$ with probability $S_{i}/k$, and a random
useless message otherwise; (b) if $M_{k}\notin\tilde{\mathcal{S}}_{i}$,
$i$ always sends a random noise message. The initial condition $\tilde{\mathcal{S}}_{i}=\mathcal{S}_{i}$
guarantees that $\tilde{\mathcal{S}}_{i}\subseteq\mathcal{S}_{i}$
always holds with this coupling method. Hence, the claimed stochastic
order holds. \end{IEEEproof}
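The coupling above can be checked numerically: thinning the uniform pick in $\mathcal{G}$ by the factor $S_{i}/k$ makes every message in $\tilde{\mathcal{S}}_{i}$ be selected with state-independent probability $1/k$. A toy sketch (function names are ours):

```python
import random

def select_in_coupled_process(S_i, S_tilde, k, rng):
    """One transmission under the coupling: the sender picks a message
    uniformly from its list S_i (as in G); in G-tilde the pick survives
    only if it lies in S_tilde, and then only with probability |S_i|/k.
    Returns the message sent in G-tilde, or None (a 'noise' message)."""
    m = rng.choice(S_i)
    if m in S_tilde and rng.random() < len(S_i) / k:
        return m
    return None

def empirical_rate(msg, S_i, S_tilde, k, trials, seed=0):
    """Empirical selection frequency of `msg` in G-tilde."""
    rng = random.Random(seed)
    hits = sum(select_in_coupled_process(S_i, S_tilde, k, rng) == msg
               for _ in range(trials))
    return hits / trials
```

Each message in $\tilde{\mathcal{S}}_{i}$ is selected with probability $\frac{1}{S_{i}}\cdot\frac{S_{i}}{k}=\frac{1}{k}$, independently of the state, which is exactly the selection rule in step (3) of the coupled process.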
\begin{lem}\label{lemmaPhase2}Denote by $\tilde{T}_{2}^{B}:=\min\left\{ t_{B}:\tilde{N}_{i}^{B}(t_{B})>n/8,\forall i\right\} $
the stopping time block of Phase 2 in $\tilde{\mathcal{G}}$. Then
there exists a constant $c_{14}$ independent of $n$ such that\[
\mathbb{P}\left(\tilde{T}_{2}^{B}\leq4\log_{c_{14}}n\right)\geq1-n^{-2}.\]
\end{lem}
\begin{IEEEproof}[{\bf Sketch of Proof of Lemma \ref{lemmaPhase2}}]We
first look at a particular message $M_{i}$, and use union bound later
after we derive the concentration results on the stopping time associated
with this message. We observe the following facts: after a mixing
time duration, the number of users $N_{i,A_{k}}(t)$ containing $M_{i}$
at each subsquare $A_{k}$ is approximately \textit{uniform}. Since
$\tilde{N}_{i}^{B}(t_{B})$ is a lower bound on the number of copies of
$M_{i}$ across this time block, concentration results suggest that
$N_{i,A_{k}}(t)=\Omega\left(\tilde{N}_{i}^{B}(t_{B})/m\right)$. Observing
from the mobility model that the position of any node inside a subsquare
is \textit{i.i.d.} chosen, we can derive\begin{equation}
\mathbb{E}\left(\tilde{N}_{i}^{B}(t_{B}+1)-\tilde{N}_{i}^{B}(t_{B})\mid\tilde{\mathcal{N}}_{i}^{B}(t_{B})\right)\geq\frac{\tilde{c}_{9}}{2}\tilde{N}_{i}^{B}(t_{B})\end{equation}
for some constant $\tilde{c}_{9}$. A standard \textit{martingale}
argument then yields an upper bound on the stopping time. See Appendix
\ref{sec:Proof-of-Lemma-Phase2} for detailed derivation. \end{IEEEproof}
This lemma implies that after at most $4\log_{c_{14}}n$ time blocks,
the number of nodes containing all messages will exceed $n/8$ with
high probability. Therefore, the duration $\tilde{T}_{2}$ of Phase
2 of $\tilde{\mathcal{G}}$ satisfies $\tilde{T}_{2}=O\left(k\log n\right)$
with high probability. This gives us an upper bound on $T_{2}$ of
the original evolution $\mathcal{G}$.
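Per block the trimmed count at most doubles (step (4) of the construction) and grows by a constant factor in expectation, so the number of blocks needed is logarithmic. A toy count for the exact-doubling skeleton (constants illustrative):

```python
import math

def blocks_until(n, N0):
    """Number of doubling blocks for the recursion N(t_B+1) = 2*N(t_B)
    to carry N0 initial copies past the Phase-2 target n/8."""
    blocks, N = 0, N0
    while N <= n / 8:
        N *= 2
        blocks += 1
    return blocks
```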
\subsubsection{Phase 3}
This phase ends when $N_{i}(t)=n$ for all $i$ with $t=0$ denoting
the end of Phase 2. Assume that $N_{i,A_{j}}(0)>\frac{n}{16m}$ for
all $i$ and all $j$, otherwise we can let the process further evolve
for another mixing time duration $\Theta\left(\log n/v^{2}(n)\right)$.
\begin{lem}\label{lemmaPhase3Unicast}Denote by $T_{3}$ the duration
of Phase 3, i.e. $T_{3}=\min\left\{ t:N_{i}(t)=n\mid N_{i}(0)\geq n/8,\forall i\right\} $.
Then there exists a constant $c_{18}$ such that\begin{equation}
\mathbb{P}\left(T_{3}\leq\frac{64}{c_{18}}k\log n\right)\geq1-\frac{15}{16n^{2}}.\end{equation}
\end{lem}
\begin{IEEEproof}[{\bf Sketch of Proof of Lemma \ref{lemmaPhase3Unicast}}]The
random push strategies are efficient near the start (exponential growth),
but the evolution will begin to slow down after Phase 2. The concentration
effect allows us to obtain a different evolution bound as\begin{align*}
& \mathbb{E}\left(N_{i}(t+1)-N_{i}(t)\,\middle|\,N_{i}(t)\right)\\
= & \mathbb{E}\left(\left(n-N_{i}(t)\right)-\left(n-N_{i}(t+1)\right)\,\middle|\,N_{i}(t)\right)\\
\geq & \frac{c_{18}}{16k}\left(n-N_{i}(t)\right).\end{align*}
Constructing a different submartingale based on $n-N_{i}(t)$ yields
the above results. See Appendix \ref{sec:Proof-of-Lemma-Phase3-Unicast}.
\end{IEEEproof}
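The displayed drift bound says the deficit $n-N_{i}(t)$ contracts by a factor $1-\frac{c_{18}}{16k}$ per slot in expectation, which is what produces the $\frac{64}{c_{18}}k\log n$ bound. A deterministic skeleton of this contraction (constants illustrative):

```python
import math

def phase3_steps(n, k, c18=1.0):
    """Slots for the deterministic recursion deficit *= (1 - c18/(16k)),
    started from n - n/8 = 7n/8, to drop below one missing node."""
    deficit, steps = 7 * n / 8, 0
    while deficit >= 1:
        deficit *= 1 - c18 / (16 * k)
        steps += 1
    return steps
```

Solving the recursion gives roughly $\frac{16k}{c_{18}}\ln\frac{7n}{8}$ slots, within the $\frac{64}{c_{18}}k\log n$ bound of the lemma.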
\subsubsection{Discussion}
Combining the stopping time in all three phases, we can see that:
the spreading time $T_{\text{mp}}^{\text{d}}=\min\left\{ t:\forall i,N_{i}(t)=n\right\} $
satisfies\[
T_{\text{mp}}^{\text{d}}\leq O\left(\frac{\log^{3}n}{v^{2}(n)}\right)+O\left(k\log n\right)+O\left(k\log n\right)=O\left(k\log^{2}n\right).\]
It can be observed that the mixing time bottleneck is not critical
in multi-message dissemination. The reason is that the mixing time
in the regime $v(n)=\omega\left(\sqrt{\frac{\log n}{k}}\right)$ is
much smaller than the optimal spreading time. Hence, the nodes have
sufficient time to spread out over the entire network. The key step is to
seed the network with a sufficiently large number of copies at the
initial stage of the spreading process, which is accomplished by the
self-promotion phase of MOBILE PUSH.
\begin{remark}\label{remark-Gap}It can be observed that the upper
bounds on spreading time within Phase 2 and Phase 3 are order-wise
tight, since a gap of $\Omega(\log n)$ exists even for complete graphs
\cite{SanHajMas07}. The upper bound for Phase 1, however, is not
necessarily tight. We note that the $O\left(\log^{2}n\right)$
factor arises in the analysis stated in Lemma \ref{lemma-Phase1-Ni-Bound},
where we assume that each relaxation subphase is of duration $\Theta\left(m\log n\right)$
for ease of analysis. Since we consider $\Theta\left(\log^{2}n\right)$
subphases in total, we do not necessarily need $\Theta\left(m\log n\right)$
slots for each relaxation subphase in order to allow spreading of
all copies. We conjecture that with a finer tuning of the concentration
of measures and coupling techniques, it may be possible to obtain
a spreading time of $\Theta(k\log n)$.
\end{remark}
\section{Concluding Remarks}
In this paper, we design a simple distributed gossip-style protocol
that achieves near-optimal spreading rate for multi-message dissemination,
with the assistance of mobility. The key observation is that random
gossiping over static geometric graphs is inherently constrained by
the expansion property of the underlying graph -- capacity loss occurs
since the copies are spatially constrained instead of being spread
out. Encouragingly, this bottleneck can indeed be overcome in mobile
networks, even with a fairly limited degree of velocity. In fact, the
velocity-constrained mobility assists in achieving a large expansion
property from a long-term perspective, which simulates a spatially-uniform
evolution.
\section*{Acknowledgment}
\addcontentsline{toc}{section}{Acknowledgment} The authors would
like to thank Yudong Chen and Constantine Caramanis for helpful discussions.
\section{Introduction}
In this paper we introduce a new coercivity condition through which one can obtain estimates for higher order moments for stochastic partial differential equations (SPDEs) of the form
\begin{equation}\label{eq:variationalintro}
\mathrm{d} u(t) = A(t, u(t)) \mathrm{d} t + B(t, u(t)) \mathrm{d} W(t), \qquad u(0) = u_0.
\end{equation}
Here $W$ is a $U$-cylindrical Brownian motion. We will be concerned with the so-called {\em variational} or {\em monotone operator approach} to SPDEs in Hilbert spaces. In particular, we assume that $(V, H, V^*)$ is a Gelfand triple, where $H$ is a separable Hilbert space and $V$ a reflexive Banach space.
The variational approach for SPDEs was introduced in 1972 by Bensoussan and Temam using time discretizations methods \cite{bensoussan_equations_1972}. Pardoux improved the latter via Lions' approach for PDEs in \cite{pardoux_equations_1975}. In this approach, Galerkin approximations are used together with a priori energy estimates to obtain existence and uniqueness. Since then, both Krylov and Rozovskii \cite{krylov_1979} and Liu and R\"{o}ckner \cite{rockner_2010, Rockner_SPDE_2015} have extended this approach even further by allowing monotone and locally monotone operators, respectively, as the driving part of the equation.
An advantage of the variational approach is that it directly applies to nonlinear equations. Another key property is that it typically gives global existence and uniqueness at once, and there is often no need to check any further blow-up criteria for the solution. When combined with other approaches this can be very effective (see e.g.\ \cite{AV20_NS} for the stochastic Navier-Stokes equations).
Each of the above papers assumes a coercivity condition on $(A,B)$ of the form (see Section~\ref{sec:main} for an explanation of the notation):
\begin{equation}\label{eq:coercivityintro}
2\langle A(t, v), v \rangle + \|B(t, v)\|_{{\mathcal L}_2(U, H)}^2 \leq -\theta\|v\|_V^2 + K\|v\|_H^2 + f(t).
\end{equation}
Note that $B(t, \cdot)$ is allowed to be defined on the smallest space $V$. In the above mentioned results for the variational approach to \eqref{eq:variationalintro} one obtains estimates for
\begin{equation}\label{eq:toestpintro}
{\mathbb E}\sup_{t\in [0,T]} \|u(t)\|^p_H \ \ \text{and} \ \ {\mathbb E}\|u\|_{L^2(0,T;V)}^p,
\end{equation}
but only for $p=2$. Estimates for $p>2$ are not available unless $B$ is assumed to be defined on $H$ instead of $V$ (see \cite[Section 5]{Rockner_SPDE_2015}).
An attempt to treat more general $p\geq 2$ (and even $p<2$) was made in \cite{veraar_2012} by Brze\'{z}niak and the third author. Here it also turned out that the classical coercivity condition is not strong enough to obtain finite $L^p$-moments. The paper \cite{veraar_2012} only considers a simplified setting. Therefore, it was enlightening to see that in \cite{neelima_2020} by Neelima and \v{S}i\v{s}ka, some results can be proved in a general monotone setting. However, the $L^p$-bounds proved there are only sub-optimal (see Remark~\ref{rem:comparison} for details), and the coercivity condition they used seems too restrictive in some cases, which becomes clear further below and in the presented applications.
In the current paper we obtain a complete generalization of the classical monotone operator framework leading to estimates for \eqref{eq:toestpintro} for $p>2$. From \cite{veraar_2012} it follows that the terms in \eqref{eq:toestpintro} are infinite for $p>2$. Therefore, a restriction is necessary. The key ingredient turns out to be the following $p$-dependent coercivity condition:
\begin{equation}\label{eq:coercivityintrop}
\begin{aligned}
2\langle A(t, v), v \rangle + \|B(t, v)\|^2_{{\mathcal L}_2(U, H)} + (p-2)&\frac{\|B(t, v)^*v\|_U^2}{\|v\|_H^2} \\ & \leq -\theta\|v\|_V^\alpha + K_c\|v\|_H^2 + f(t).
\end{aligned}
\end{equation}
Our main result (Theorem~\ref{Main_theorem}) states that under \eqref{eq:coercivityintrop} and the usual conditions in the monotone operator framework, one can estimate the norms in \eqref{eq:toestpintro}. Note that \eqref{eq:coercivityintrop} reduces to \eqref{eq:coercivityintro} if $p=2$. In Example~\ref{ex:optimal} we use a specific choice suggested in \cite{veraar_2012} to show that \eqref{eq:coercivityintrop} is optimal. The proof of the main result is elementary, but quite tedious. In some cases we give explicit constants in the obtained estimates for the moments.
An interesting special case occurs if $B(t, v)^*v = 0$, since then the $p$-dependent term in \eqref{eq:coercivityintrop} vanishes and we get estimates for all $p\geq 2$. This typically occurs for differential operators of odd order with suitable boundary conditions. In some cases we can even let $p\to \infty$ to obtain uniform estimates in $\Omega$.
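A concrete instance of this cancellation (our own illustrative choice, separate from the applications in Section~\ref{sec:appl}) is a first-order gradient noise term:

```latex
Take $U=\mathbb{R}$, $H=L^{2}(0,1)$, $V=H^{1}_{0}(0,1)$, and
$B(t,v)u = u\,\partial_{x}v$ for $u\in\mathbb{R}$. Then
$B(t,v)^{*}h=(h,\partial_{x}v)_{L^{2}(0,1)}$, so for $v\in V$,
\[
  B(t,v)^{*}v=(v,\partial_{x}v)_{L^{2}(0,1)}
  =\tfrac{1}{2}\int_{0}^{1}\partial_{x}\bigl(v^{2}\bigr)\,\mathrm{d}x
  =\tfrac{1}{2}\bigl[v^{2}\bigr]_{0}^{1}=0
\]
by the Dirichlet boundary conditions, and the $p$-dependent term in
\condref{it:coerc} vanishes.
```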
In Section~\ref{sec:appl} we consider applications to the stochastic heat equation with Dirichlet and Neumann boundary conditions, Burgers' equation, the stochastic Navier-Stokes equations in dimension two, systems, higher order equations, and the $p$-Laplace equation.
\section{Setting and main result}\label{sec:main}
Before we state our main result we fix our notation and terminology. For further details on Gelfand triples and stochastic integration theory we refer to \cite{Rockner_SPDE_2015}.
Throughout this paper $(U, (\cdot, \cdot)_U)$ and $(H, (\cdot, \cdot)_H)$ denote real separable Hilbert spaces and $(V, \|\cdot\|_V)$ is a reflexive Banach space embedded continuously and densely in $H$. The dual of $V$ (relative to $H$) is denoted by $V^*$ and the duality pairing between $V$ and $V^*$ by $\langle \cdot, \cdot \rangle$. The probability space $(\Omega, \mathcal{A},\P)$ and filtration $({\mathscr F}_t)_{t\geq 0}$ will be fixed. The progressive $\sigma$-algebra is denoted by $\mathcal{P}$. Furthermore, suppose that $(W(t))_{t\geq 0}$ is a $U$-cylindrical Brownian motion with respect to $({\mathscr F}_t)_{t\geq 0}$.
\subsection{Assumptions}
The main assumptions on the nonlinearities are as follows:
\begin{assumptions}\label{main_assumptions}
Let
\[A:[0, T] \times \Omega \times V \to V^*, \ \ \text{and} \ \ B: [0, T] \times \Omega \times V \to {\mathcal L}_2(U, H) \]
both be $\mathcal{P}\otimes \mathcal{B}(V)$-measurable. Suppose that there exist finite constants
\[\alpha > 1, \ \beta \geq 0, \ p \geq \beta + 2, \ \theta > 0, \ K, K_c, K_A, K_B, K_\alpha \geq 0\]
and $f \in L^\frac{p}{2}(\Omega; L^1([0, T]))$ such that for all $t\in[0, T]$ a.s.
\begin{enumerate}[(H1)]
\item\label{it:hem} (Hemicontinuity) For all $u, v, w \in V$, $\omega \in \Omega$, the following map is continuous:
\[\lambda \mapsto \langle A(t, u+\lambda v, \omega), w \rangle.\]
\item\label{it:weak_mon} (Local weak monotonicity) For all $u, v \in V$,
\begin{align*}
&2\langle A(t, u)-A(t, v), u-v\rangle + \|B(t, u) - B(t, v) \|^2_{{\mathcal L}_2(U, H)}\\
&\quad \leq K (1+\|v\|_V^\alpha)(1+\|v\|_H^\beta)\|u-v\|_H^2 .
\end{align*}
\item\label{it:coerc} (Coercivity) For all $v \in V$, $v\neq 0$, $$2\langle A(t, v), v \rangle + \|B(t, v)\|^2_{{\mathcal L}_2(U, H)} +(p-2)\frac{\|B(t, v)^*v\|_U^2}{\|v\|_H^2} \leq -\theta\|v\|_V^\alpha + f(t) + K_c\|v\|_H^2.$$
\item\label{it:bound1} (Boundedness 1) For all $v\in V$, $$\|A(t, v)\|_{V^*}^{\frac{\alpha}{\alpha-1}}\leq K_A(f(t)+\|v\|_V^{\alpha})(1+\|v\|_H^\beta).$$
\item\label{it:bound2} (Boundedness 2) For all $v \in V$,
$$\|B(t, v)\|_{{\mathcal L}_2(U, H)}^2 \leq f(t)+K_B\|v\|_H^2 + K_\alpha\|v\|_V^\alpha.$$
\end{enumerate}
\end{assumptions}
Most conditions are standard and appear in previous works that treat the variational approach to SPDEs (see \cite{pardoux_equations_1975,krylov_stochastic_1981,Rockner_SPDE_2015}).
Usually, in these works $\beta = 0$. The case $\beta\geq 0$ is considered in \cite{brzezniak_strong_2014}, where L\'{e}vy noise is treated as well. The condition $p\geq \beta+2$ is needed for a priori bounds on the $L^{\frac{\alpha}{\alpha-1}}(\Omega\times[0,T])$-norm of $\|A(t,u(t))\|_{V^*}$ for $u\in L^{p}(\Omega;C([0,T];H))\cap L^{\frac{p \alpha}{2}}(\Omega;L^{\alpha}([0,T];V))$, which is needed in the existence proof. Often it can be avoided by a localization argument.
Our hypothesis \condref{it:coerc} is new and will allow us to obtain estimates for $L^p$-moments. It reduces to the classical coercivity assumption if $p=2$. The function $f$ can be used to include inhomogeneous terms in $A$ and $B$.
After these preparations we can define solutions to \eqref{eq:variationalintro}.
\begin{definition}\label{solution_definition}
Suppose that Assumptions~\ref{main_assumptions} hold and let $u(0):\Omega\to H$ be ${\mathscr F}_0$-measurable. An adapted, continuous $H$-valued process $u$ is called a {\em solution} to \eqref{eq:variationalintro} if $u\in L^{\alpha}(0,T;V)$ a.s.\ and
for every $t\in[0, T]$, a.s., $$u(t) = u(0) + \int_0^t A(s, u(s)) \mathrm{d} s + \int_0^t B(s, u(s)) \mathrm{d} W(s).$$
\end{definition}
Note that due to \condref{it:bound1}, $t\mapsto A(t, u(t))\in L^{\frac{\alpha}{\alpha-1}}(0,T;V^*)$ a.s.\ and thus the above Bochner integral is well-defined. Due to \condref{it:bound2}, $t\mapsto B(t, u(t))\in L^2(0,T;{\mathcal L}_2(U,H))$ a.s.\ and thus the stochastic integral is also well-defined.
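To see why \condref{it:bound1} yields this integrability, note the following elementary estimate (a sketch using only the stated assumptions and Definition~\ref{solution_definition}):
\begin{align*}
\int_0^T \|A(t, u(t))\|_{V^*}^{\frac{\alpha}{\alpha-1}} \mathrm{d} t \leq K_A\Big(1+\sup_{t\in [0,T]}\|u(t)\|_H^{\beta}\Big)\int_0^T \big(f(t)+\|u(t)\|_V^{\alpha}\big) \mathrm{d} t < \infty \quad \text{a.s.},
\end{align*}
since $u$ has continuous paths in $H$, $u\in L^{\alpha}(0,T;V)$ a.s., and $f\in L^1(0,T)$ a.s.; the integrability of $t\mapsto B(t,u(t))$ follows in the same way from \condref{it:bound2}.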
The following can be checked by elementary arguments involving Young's inequality and inequalities for convex functions:
\begin{remark}\label{rem:additive}
Let $\phi\in L^{\frac{p\alpha}{2(\alpha-1)}}(\Omega;L^{\frac{\alpha}{\alpha-1}}(0,T;V^*))$ and $\psi\in L^{p}(\Omega;L^2(0,T;{\mathcal L}_2(U,H)))$.
If $(A,B)$ satisfies Assumptions~\ref{main_assumptions}, then $(A+\phi, B+\psi)$ satisfies Assumptions~\ref{main_assumptions} with the same $\alpha$, $\beta$, $p$ (and possibly enlarged constants), and $f$ replaced by
\[\tilde{f} = f + \|\phi\|^{\frac{\alpha}{\alpha-1}}_{V^*} + \|\psi\|_{{\mathcal L}_2(U,H)}^2.\]
\end{remark}
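For instance, for \condref{it:bound1} the verification is a short convexity estimate (a sketch; here $c_\alpha := 2^{\frac{1}{\alpha-1}}$ and we use $\tilde f\geq f + \|\phi\|_{V^*}^{\frac{\alpha}{\alpha-1}}$):
\begin{align*}
\|A(t,v)+\phi(t)\|_{V^*}^{\frac{\alpha}{\alpha-1}} &\leq c_\alpha \|A(t,v)\|_{V^*}^{\frac{\alpha}{\alpha-1}} + c_\alpha\|\phi(t)\|_{V^*}^{\frac{\alpha}{\alpha-1}}\\
&\leq c_\alpha \max(K_A, 1)\big(\tilde f(t)+\|v\|_V^{\alpha}\big)\big(1+\|v\|_H^{\beta}\big),
\end{align*}
so \condref{it:bound1} holds for $A+\phi$ with $K_A$ replaced by $c_\alpha \max(K_A,1)$. The conditions \condref{it:weak_mon}, \condref{it:coerc} and \condref{it:bound2} are checked similarly, using Young's inequality to absorb the cross terms produced by $\phi$ and $\psi$.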
\subsection{Main result}
The main result of this paper is the following well-posedness result with higher order moments:
\begin{theorem}\label{Main_theorem}
Suppose that Assumptions~\ref{main_assumptions} hold and let $u(0) \in L^{p}(\Omega,{\mathscr F}_0;H)$. Then \eqref{eq:variationalintro} has a unique solution $u$, and there exists a constant $C$ depending on $\alpha$, $\beta$, $\theta$, $p$, $K_c$, $K_A$, $K_B$, $K_{\alpha}$ such that
\begin{equation}\label{eq:aprioripnew}
\begin{split}
{\mathbb E}\sup\limits_{t\in[0, T]}\|u(t)\|_H^p + {\mathbb E}\Big(\int_0^T \|u(t)\|_V^\alpha \mathrm{d} t\Big)^{\frac{p}{2}}\leq Ce^{CT}\Big[{\mathbb E}\|u(0)\|_H^p + {\mathbb E}\Big(\int_0^T f(t) \mathrm{d} t\Big)^{\frac{p}{2}}\Big].
\end{split}
\end{equation}
\end{theorem}
The proof is given in Section~\ref{sec:proofMain}. The main novelty is the a priori estimate \eqref{eq:aprioripnew}. The existence and uniqueness can be obtained by standard Galerkin approximation techniques.
In Corollary~\ref{cor:a_priori_remark}, the $p$-dependence in the estimate \eqref{eq:aprioripnew} will be made explicit in the case $K_B = K_c = 0$.
The following example is taken from \cite{veraar_2012} and implies optimality of Theorem~\ref{Main_theorem} with respect to $p$ in the sense that if $p$ is replaced by some number $q>p$, then it can happen that ${\mathbb E}\|u(t)\|_H^{q}=\infty$.
\begin{example}[Optimality]\label{ex:optimal}
On the torus $\mathbb{T}$ consider the equation
\begin{equation}\label{veraar_example}
\mathrm{d} u(t) = \Delta u(t) \mathrm{d} t + 2 \gamma (-\Delta)^{\frac{1}{2}} u(t) \mathrm{d} W(t), \qquad u(0) = u_0.
\end{equation}
Here $\gamma \in \mathbb{R}$, $u_0\in L^{p}(\Omega,{\mathscr F}_0;L^2(\mathbb{T}))$ and $W$ is a real-valued Wiener process (thus $U = {\mathbb R}$). In \cite{veraar_2012} it is proved that \eqref{veraar_example} has a unique solution in $L^{p}(\Omega;L^2(0,T; H^1(\mathbb{T})))$ if $2\gamma^2(p-1) < 1$. Indeed, setting $V = H^1(\mathbb{T})$, $H = L^2(\mathbb{T})$, $A = \Delta$, and $B = 2\gamma(-\Delta)^{1/2}$, Assumptions \ref{main_assumptions} \condref{it:hem}, \condref{it:weak_mon}, \condref{it:bound1}, \condref{it:bound2} hold with $\alpha = 2$, $\beta = 0$ and $f=0$ and suitable constants $K$, $K_A$ and $K_B$. To check \condref{it:coerc} note that
\begin{align*}
2\langle \Delta v, v\rangle+ \|B(v)\|_{L^2(\mathbb{T})}^2 + (p-2) \frac{|B(v)^*v|^2}{\|v\|_{L^2(\mathbb{T})}^2}
& \leq 2\langle \Delta v, v\rangle+ (p-1)\|B(v)\|_{L^2(\mathbb{T})}^2
\\ & \leq -2 \|\nabla v\|_{L^2(\mathbb{T})}^2 + 4\gamma^2(p-1)\|v\|^2_{H^1(\mathbb{T})}\\ & \leq -\theta\|v\|^2_{H^1(\mathbb{T})} + 2 \|v\|_{L^2(\mathbb{T})}^2,
\end{align*}
where $\theta:=2-4\gamma^2(p-1)>0$. This proves \condref{it:coerc} and thus the well-posedness follows from Theorem~\ref{Main_theorem}.
On the other hand, it follows from \cite[Theorem~4.1(ii)]{veraar_2012} that there exists an initial datum $u_0\in C^\infty(\mathbb{T})$ such that if $q>p$ and $\gamma>0$ is such that $2\gamma^2(p-1) < 1$ and $2\gamma^2(q-1) > 1$, then there is a $t>0$ such that ${\mathbb E}\|u(t)\|_{L^2(\mathbb{T})}^{q} = \infty$. Moreover, even ${\mathbb E}\|u(t)\|_{H^{s}(\mathbb{T})}^{q} = \infty$ for all $s\in {\mathbb R}$.
\end{example}
\begin{remark}\label{rem:comparison}
In \cite{neelima_2020} the following coercivity condition was proposed:
\begin{align}\label{eq:coercivitypmin1}
2\langle A(t, v), v \rangle + (p-1)\|B(t, v)\|^2_{{\mathcal L}_2(U, H)}\leq -\theta\|v\|_V^\alpha + f(t) + K_c\|v\|_H^2, \ \ v\in V.
\end{align}
The latter is more restrictive than \condref{it:coerc}, since $\frac{\|B(t, v)^*v\|_U^2}{\|v\|_H^2}\leq \|B(t, v)\|^2_{{\mathcal L}_2(U, H)}$. Replacing our condition \condref{it:coerc} by \eqref{eq:coercivitypmin1}, the main result in \cite{neelima_2020} states that
\begin{align*}
\sup_{t\in [0,T]} {\mathbb E}\|u(t)\|_H^{p} & \leq C \Big[{\mathbb E}\|u(0)\|_H^{p} + {\mathbb E}\Big(\int_0^T f(t) \mathrm{d} t\Big)^{\frac{p}{2}}\Big],
\\ {\mathbb E}\sup_{t\in [0,T]} \|u(t)\|_H^{r p} &\leq C_r \Big[{\mathbb E}\|u(0)\|_H^{p} + {\mathbb E}\Big(\int_0^T f(t) \mathrm{d} t\Big)^{\frac{p}{2}}\Big],
\end{align*}
where $r\in (0,1)$.
Both estimates are sub-optimal. The result \eqref{eq:aprioripnew} shows that the supremum can actually be inside the expectation and thus one can take $r=1$. In \cite{neelima_2020} the growth condition \condref{it:bound2} on $B$ is not explicitly assumed, but as far as we can see \condref{it:bound2} is used in their estimate (13).
Similar results were obtained in \cite{brzezniak_strong_2014}, under a different coercivity condition. A detailed comparison with \eqref{eq:coercivitypmin1} can be found in \cite[Remark 6.1]{neelima_2020}.
\end{remark}
\section{Proof of the main result}\label{sec:proofMain}
In \cite[Theorem~4.2.5, p. 91]{Rockner_SPDE_2015} the following version of It\^o's formula is obtained for $p=2$. The $p>2$ version can be obtained from the $p=2$ version combined with
the real case by considering $(\|X_t\|^2+\varepsilon)^{p/2}$ and letting $\varepsilon\downarrow 0$ or by applying \cite[Theorem~3.2, p.~73]{rozovskii_1990}.
\begin{lemma}[It\^{o}'s formula for $\|\cdot\|_H^p$]\label{ito_p}
Let $p\in [2, \infty)$, $\alpha \in (1, \infty)$, $X_0 \in L^p(\Omega, {\mathscr F}_0; H)$ and $Y\in L^{\frac{\alpha}{\alpha-1}}([0, T]\times \Omega; \mathrm{d} t \otimes \mathbb{P};V^*)$, $Z\in L^2([0, T]\times \Omega; \mathrm{d} t \otimes \mathbb{P}; {\mathcal L}_2(U, H))$ both progressively measurable. If $X \in L^\alpha([0, T]\times \Omega; \mathrm{d} t \otimes \mathbb{P}; V)$, ${\mathbb E}\|X_t\|_H^2 < \infty$ for a.e.\ $t\in[0, T]$, and a.s.
$$X_t = X_0 + \int_0^t Y_s \mathrm{d} s + \int_0^t Z_s \mathrm{d} W_s, \quad t\in[0, T]$$
is satisfied in $V^*$, then $X$ is a continuous $H$-valued $({\mathscr F}_t)_{t\geq 0}$-adapted process and the following holds a.s.:
\begin{align*}
\|X_t\|_H^p & = \|X_0\|_H^p + p\int_0^t \|X_s\|_H^{p-2} Z_s^* X_s \mathrm{d} W_s \\
& \quad + \frac{p(p-2)}{2}\int_0^t \|X_s\|_H^{p-4} \|Z_s^* X_s\|_U^2 \mathrm{d} s \\
& \quad + \frac{p}{2}\int_0^t \|X_s\|_H^{p-2} \left(2\langle Y_s, X_s\rangle + \|Z_s\|_{{\mathcal L}_2(U, H)}^2 \right)\mathrm{d} s, \quad t\in[0, T],
\end{align*}
where $\|X_s\|_H^{p-4}$ is defined as zero if $X_s=0$.
\end{lemma}
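To sketch the reduction to the $p=2$ case mentioned above: the $p=2$ formula gives the semimartingale decomposition
\[
\mathrm{d}\|X_t\|_H^2 = \big(2\langle Y_t, X_t\rangle + \|Z_t\|_{{\mathcal L}_2(U, H)}^2\big)\,\mathrm{d} t + 2 Z_t^* X_t \,\mathrm{d} W_t,
\]
whose quadratic variation is $4\int_0^{\cdot} \|Z_s^* X_s\|_U^2 \,\mathrm{d} s$. Applying the real-valued It\^o formula to $\varphi_\varepsilon(x) = (x+\varepsilon)^{p/2}$ yields
\begin{align*}
\mathrm{d} (\|X_t\|_H^2+\varepsilon)^{\frac{p}{2}} & = \tfrac{p}{2}(\|X_t\|_H^2+\varepsilon)^{\frac{p}{2}-1}\big(2\langle Y_t, X_t\rangle + \|Z_t\|_{{\mathcal L}_2(U, H)}^2\big)\,\mathrm{d} t \\
& \quad + p(\|X_t\|_H^2+\varepsilon)^{\frac{p}{2}-1} Z_t^* X_t\, \mathrm{d} W_t + \tfrac{p(p-2)}{2}(\|X_t\|_H^2+\varepsilon)^{\frac{p}{2}-2}\|Z_t^* X_t\|_U^2\,\mathrm{d} t,
\end{align*}
and letting $\varepsilon \downarrow 0$ recovers the formula of Lemma~\ref{ito_p}.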
The main step in the proof of Theorem~\ref{Main_theorem} is the following new a priori estimate, where we note that the condition $p\geq \beta+2$ in Assumptions~\ref{main_assumptions} is not needed.
\begin{theorem}\label{a_priori_theorem}
Suppose $u$ is a solution of equation \eqref{eq:variationalintro} with initial condition $u(0)\in L^{p}(\Omega; H)$ and \condref{it:coerc}, \condref{it:bound1} and \condref{it:bound2} from Assumptions~\ref{main_assumptions} hold with $f\in L^\frac{p}{2}(\Omega; L^1([0, T]))$. Then, there exists a constant $C$ depending on $\alpha$, $\beta$, $\theta$, $p$, $K_c$, $K_A$, $K_B$, $K_{\alpha}$ such that
\begin{equation}\label{eq:estapriorimain}
\begin{split}
{\mathbb E}\sup\limits_{t\in[0, T]}\|u(t)\|_H^p + {\mathbb E}\Big(\int_0^T \|u(t)\|_V^\alpha \mathrm{d} t\Big)^{\frac{p}{2}}
&\leq Ce^{CT}\Big[{\mathbb E}\|u(0)\|_H^p + {\mathbb E}\Big(\int_0^T f(t) \mathrm{d} t\Big)^{\frac{p}{2}}\Big].
\end{split}
\end{equation}
\end{theorem}
\begin{proof}
\textit{Step 0: Stopping time argument}.
For $n\geq 1$ consider the following sequence of stopping times:
\begin{equation*}
\tau_n = \inf\{t\in[0, T] : \|u(t)\|_H \geq n\} \wedge \inf\{t\in[0, T]: \int_0^t \|u(s)\|_V^\alpha \mathrm{d} s \geq n \},
\end{equation*}
where we set $\inf \emptyset = T$. Then $\tau_n \to T$ a.s.\ as $n\to \infty$, since by Definition~\ref{solution_definition} $u$ has continuous (hence bounded) paths in $H$ and $u\in L^{\alpha}(0,T;V)$ a.s. Since $u$ solves \eqref{eq:variationalintro} in the sense of Definition~\ref{solution_definition}, Lemma~\ref{ito_p} implies the following:
\begin{align*}
& \|u(t\wedge \tau_n)\|_H^{p} = \|u(0)\|_H^{p} + p\int_0^{t\wedge\tau_n} \|u(s)\|_H^{p-2}B(s, u(s))^* u(s) \mathrm{d} W(s) \\
& + \frac{p}{2}\int_0^{t\wedge\tau_n} \|u(s)\|_H^{p-2} \Big(2\langle A(s, u(s)), u(s)\rangle + \|B(s, u(s))\|_{{\mathcal L}_2(U, H)}^2\\
&\quad + (p-2)\frac{\|B(s, u(s))^*u(s)\|_U^2}{\|u(s)\|_H^2}\Big) \mathrm{d} s.
\end{align*}
Using the coercivity assumption \condref{it:coerc}, the latter implies
\begin{equation}\label{ItoIneq}
\begin{split}
\|u(t\wedge \tau_n)\|_H^{p} &+\frac{\theta p}{2}\int_0^{t\wedge\tau_n}\|u(s)\|_H^{p-2} \|u(s)\|_V^\alpha \mathrm{d} s \\
& \leq \|u(0)\|_H^{p} + p\int_0^{t\wedge\tau_n} \|u(s)\|_H^{p-2} B(s, u(s))^*u(s) \mathrm{d} W(s) \\
& \phantom{\leq} + \frac{p}{2}\int_0^{t\wedge\tau_n} \|u(s)\|_H^{p-2} \left(f(s)+K_c\|u(s)\|_H^2 \right) \mathrm{d} s.\\
\end{split}
\end{equation}
Taking expectations in \eqref{ItoIneq}, the stochastic integral vanishes (thanks to the stopping time it is a martingale with zero mean) and we find
\begin{equation}\label{apriori}
\begin{split}
{\mathbb E}\|u(t\wedge\tau_n)&\|_H^p +\frac{\theta p}{2}{\mathbb E}\int_0^{t\wedge\tau_n}\|u(s)\|_H^{p-2}\|u(s)\|_V^\alpha \mathrm{d} s\\
& \leq {\mathbb E}\|u(0)\|_H^p +\frac{p}{2}{\mathbb E}\int_0^{t\wedge\tau_n}\|u(s)\|_H^{p-2}f(s) \mathrm{d} s
+\frac{p}{2}K_c{\mathbb E}\int_0^{t\wedge\tau_n}\|u(s)\|_H^p \mathrm{d} s.
\end{split}
\end{equation}
Estimates~\eqref{ItoIneq} and \eqref{apriori} will be used several times to derive new estimates which ultimately lead to \eqref{eq:estapriorimain}.
\smallskip
\textit{Step 1: Estimating the supremum term} ${\mathbb E}\sup\limits_{t\in[0, T]}\|u(t)\|_H^p$.
Taking suprema and expectations in \eqref{ItoIneq}, we obtain the following estimate
\begin{equation}\label{SPDEest1}
\begin{split}
{\mathbb E}\sup\limits_{r\in[0, t]}\|u(r\wedge\tau_n)\|_H^p &\leq {\mathbb E}\|u(0)\|_H^p + p{\mathbb E}\sup\limits_{r\in[0, t]}\int_0^{r\wedge\tau_n} \|u(s)\|_H^{p-2} B(s, u(s))^* u(s)\mathrm{d} W(s)\\
&\quad + \frac{p}{2}{\mathbb E}\int_0^{t\wedge\tau_n}\|u(s)\|_H^{p-2} f(s) \mathrm{d} s+\frac{pK_c}{2}{\mathbb E}\int_0^{t\wedge\tau_n}\|u(s)\|_H^{p}\mathrm{d} s.
\end{split}
\end{equation}
Let $\varepsilon_1 > 0$. Then
\begin{align*}
& {\mathbb E}\sup\limits_{r\in[0, t]}\int_0^{r\wedge\tau_n}
\|u(s)\|_H^{p-2}B(s, u(s))^*u(s)\mathrm{d} W(s) \\
& \stackrel{\mathrm{(i)}}{\leq}2\sqrt{2}{\mathbb E}\Big(\int_0^{t\wedge\tau_n} \|u(s)\|_H^{2p-2}\|B(s, u(s))\|_{{\mathcal L}_2(U, H)}^2 \mathrm{d} s\Big)^{\frac{1}{2}} \\
& \stackrel{\mathrm{(ii)}}{\leq} 2\sqrt{2}\Big({\mathbb E}\sup\limits_{r\in[0, t]}\|u(r\wedge\tau_n)\|_H^p\Big)^{\frac{1}{2}}\Big({\mathbb E}\int_0^{t\wedge \tau_n}\|u(s)\|_H^{p-2}\|B(s, u(s))\|_{{\mathcal L}_2(U, H)}^2\mathrm{d} s \Big)^{\frac{1}{2}} \\
& \stackrel{\mathrm{(iii)}}{\leq} \sqrt{2}\varepsilon_1{\mathbb E}\sup\limits_{r\in[0, t]}\|u(r\wedge\tau_n)\|_H^p + \frac{\sqrt{2}}{\varepsilon_1}{\mathbb E}\int_0^{t\wedge \tau_n}\|u(s)\|_H^{p-2}\|B(s, u(s))\|_{{\mathcal L}_2(U, H)}^2 \mathrm{d} s \\
& \stackrel{\mathrm{(iv)}}{\leq}\sqrt{2}\varepsilon_1{\mathbb E}\sup\limits_{r\in[0, t]}\|u(r\wedge\tau_n)\|_H^p\\
&\quad +\frac{\sqrt{2}}{\varepsilon_1}{\mathbb E}\int_0^{t\wedge \tau_n} \|u(s)\|_H^{p-2}(f(s)+K_B\|u(s)\|_H^2+K_\alpha\|u(s)\|_V^\alpha)\mathrm{d} s,
\end{align*}
where in (i) we have applied the Burkholder-Davis-Gundy inequality with constant $2\sqrt{2}$ (see \cite[Theorem~1]{Ren08}), in (ii) H\"older's inequality, in (iii) Young's inequality and in (iv) hypothesis \condref{it:bound2}. Using the latter estimate in \eqref{SPDEest1}, we find
\begin{align}\label{inter_1}
& (1-p\sqrt{2}\varepsilon_1){\mathbb E}\sup\limits_{r\in[0, t]}\|u(r\wedge\tau_n)\|_H^p \nonumber \\
& \leq {\mathbb E}\|u(0)\|_H^p + p\Big(\tfrac{\sqrt{2}}{\varepsilon_1}+\tfrac{1}{2}\Big){\mathbb E}\int_0^{t\wedge \tau_n}\|u(s)\|_H^{p-2}f(s) \mathrm{d} s \\ \nonumber & \quad +p\big(\tfrac{\sqrt{2}K_B}{\varepsilon_1}+\tfrac{K_c}{2}\big){\mathbb E}\int_0^{t\wedge \tau_n}\|u(s)\|_H^p \mathrm{d} s + \tfrac{pK_\alpha\sqrt{2}}{\varepsilon_1}{\mathbb E}\int_0^{t\wedge \tau_n}\|u(s)\|_H^{p-2}\|u(s)\|_V^\alpha \mathrm{d} s.
\end{align}
Using estimate~\eqref{apriori} for the last term of \eqref{inter_1} leads to
\begin{align}\nonumber
& (1-p\sqrt{2}\varepsilon_1){\mathbb E}\sup\limits_{r\in[0, t]}\|u(r\wedge\tau_n)\|_H^p \\
& \leq \big(1+K_\alpha\tfrac{2\sqrt{2}}{\varepsilon_1\theta}\big){\mathbb E}\|u(0)\|_H^p + p\big(\tfrac{\sqrt{2}}{\varepsilon_1}+\tfrac{1}{2}+K_\alpha\tfrac{\sqrt{2}}{\varepsilon_1\theta}\big){\mathbb E}\int_0^{t\wedge \tau_n}\|u(s)\|_H^{p-2}f(s) \mathrm{d} s \label{inter_2} \\
& \quad+ p\big(\tfrac{\sqrt{2}K_B}{\varepsilon_1}+\tfrac{K_c}{2}+\tfrac{\sqrt{2}K_\alpha K_c}{\varepsilon_1\theta}\big){\mathbb E}\int_0^{t\wedge \tau_n}\|u(s)\|_H^p \mathrm{d} s.\nonumber
\end{align}
It remains to absorb the integrals of $u$ on the right-hand side of \eqref{inter_2}.
To this end, let $\varepsilon_2 > 0$. By H\"{o}lder's inequality and Young's inequality we obtain
\begin{equation}\label{f_s_estimate}
\begin{split}
{\mathbb E}\int_0^{t\wedge \tau_n} \|u(s)\|_H^{p-2}f(s) \mathrm{d} s &\leq {\mathbb E}\sup\limits_{r\in[0, t]}\|u(r\wedge\tau_n)\|_H^{p-2}\int_0^{t}f(s) \mathrm{d} s\\
&\leq\Big(\varepsilon_2{\mathbb E}\sup\limits_{r\in[0,t]}\|u(r\wedge\tau_n)\|_H^p\Big)^{\frac{p-2}{p}} \Big(\varepsilon_2^{\frac{2-p}{2}}{\mathbb E}\Big(\int_0^{t} f(s) \mathrm{d} s\Big)^{\frac{p}{2}}\Big)^{\frac{2}{p}}\\
&\leq\tfrac{p-2}{p}\varepsilon_2{\mathbb E}\sup\limits_{r\in[0,t]}\|u(r\wedge\tau_n)\|_H^p+\tfrac{2}{p} \varepsilon_2^{\frac{2-p}{2}}{\mathbb E}\Big(\int_0^{t}f(s) \mathrm{d} s\Big)^{\frac{p}{2}}.\\
\end{split}
\end{equation}
Setting
$\phi(\varepsilon_1, \varepsilon_2) = p\sqrt{2}\varepsilon_1 +(p-2)\varepsilon_2\big(\tfrac{\sqrt{2}}{\varepsilon_1}+\tfrac{1}{2}+K_\alpha\tfrac{\sqrt{2}}{\varepsilon_1\theta}\big)$
and using \eqref{f_s_estimate} in \eqref{inter_2}
we obtain:
\begin{equation}\label{final_inter_estimate}
\begin{split}
(1-\phi(\varepsilon_1,\varepsilon_2)){\mathbb E}\sup\limits_{r\in[0, t]}\|u(r\wedge\tau_n)&\|_H^p \leq \big(1+K_\alpha\tfrac{2\sqrt{2}}{\varepsilon_1\theta}\big){\mathbb E}\|u(0)\|_H^p \\
& + 2 \varepsilon_2^{\frac{2-p}{p}}\big(\tfrac{\sqrt{2}}{\varepsilon_1}+\tfrac{1}{2}+K_\alpha\tfrac{\sqrt{2}}{\varepsilon_1\theta}\big){\mathbb E}\Big(\int_0^{t}f(s) \mathrm{d} s\Big)^{\frac{p}{2}} \\
& + p\big(\tfrac{\sqrt{2}K_B}{\varepsilon_1}+\tfrac{K_c}{2}+\tfrac{\sqrt{2}K_\alpha K_c}{\varepsilon_1\theta}\big){\mathbb E}\int_0^{t\wedge \tau_n}\|u(s)\|_H^p \mathrm{d} s
\end{split}
\end{equation}
First choosing $\varepsilon_1$ small enough, and then $\varepsilon_2$ such that $\phi(\varepsilon_1,\varepsilon_2)=\frac12$, it follows that there is a constant $C>0$ (only depending on $\alpha$, $\beta$, $\theta$, $p$, $K_c$, $K_A$, $K_B$, $K_{\alpha}$) such that
\begin{equation}\label{intermediateresult}
{\mathbb E}\sup\limits_{r\in[0, t]}\|u(r\wedge\tau_n)\|_H^p \leq C\Big({\mathbb E}\|u(0)\|_H^p +{\mathbb E}\Big(\int_0^{t}f(s) \mathrm{d} s\Big)^{\frac{p}{2}}+{\mathbb E}\int_0^{t}{{\bf 1}}_{[0,\tau_n]}(s)\|u(s)\|_H^p \mathrm{d} s\Big).
\end{equation}
Applying Gronwall's inequality to $v(t):= {\mathbb E}\sup_{r\in[0, t]}\|u(r\wedge \tau_n)\|_H^p$ (note that the last term in \eqref{intermediateresult} is at most $\int_0^t v(s)\,\mathrm{d} s$) we find
\begin{equation*}
{\mathbb E}\sup\limits_{t\in[0, T]}\|u(t\wedge\tau_n)\|_H^p \leq Ce^{CT}\Big({\mathbb E}\|u(0)\|_H^p + {\mathbb E}\Big(\int_0^{T} f(s) \mathrm{d} s\Big)^{\frac{p}{2}}\Big)
\end{equation*}
By Fatou's lemma this leads to
\begin{equation}\label{sup_estimate}
{\mathbb E}\sup\limits_{t\in[0, T]}\|u(t)\|_H^p \leq Ce^{CT}\Big({\mathbb E}\|u(0)\|_H^p + {\mathbb E}\biggl(\int_0^{T}f(s) \mathrm{d} s\biggr)^{\frac{p}{2}}\Big)
\end{equation}
which completes the proof of the supremum estimate.
\smallskip
\textit{Step 2: Estimating the $V$-norm} ${\mathbb E}\Big(\int_0^T \|u(s)\|_V^\alpha \mathrm{d} s\Big)^{\frac{p}{2}}$.
In order to estimate this quantity, we apply Lemma~\ref{ito_p} with $p=2$ to find
\begin{equation*}
\begin{split}
\|u(t)\|_H^2 = \|u(0)\|_H^2 &+ \int_0^t \Big( 2\langle A(s, u(s)), u(s)\rangle + \|B(s, u(s))\|_{{\mathcal L}_2(U, H)}^2 \Big) \mathrm{d} s\\
& \quad + 2\int_0^t B(s, u(s))^*u(s) \mathrm{d} W(s)
\end{split}
\end{equation*}
By the coercivity condition \condref{it:coerc} we find that
\begin{align*}
\|u(t)\|_H^2 & + \int_0^t \Big( (p-2)\frac{\|B(s, u(s))^*u(s)\|_U^2}{\|u(s)\|_H^2} + \theta\|u(s)\|_V^\alpha \Big) \mathrm{d} s \\
& \leq \|u(0)\|_H^2 + \int_0^t \big(f(s) + K_c \|u(s)\|_H^2\big) \mathrm{d} s + 2\int_0^t B(s, u(s))^* u(s) \mathrm{d} W(s)
\end{align*}
Selecting just the term $\theta \|u(s)\|_V^\alpha$ and evaluating at $t= \tau_n$ gives
\begin{equation}\label{eq:Vtermstep}
\theta\int_0^{\tau_n} \|u(s)\|_V^\alpha \mathrm{d} s \leq \|u(0)\|_H^2 + \int_0^{\tau_n} \big(f(s) + K_c \|u(s)\|_H^2\big) \mathrm{d} s + 2\int_0^{\tau_n} B(s, u(s))^* u(s)\mathrm{d} W(s)
\end{equation}
Applying the function $|\cdot |^{\frac{p}{2}}$ to both sides of \eqref{eq:Vtermstep} and taking expectations, we obtain
\begin{equation}\label{V_norm}
\begin{split}
&\frac{\theta^{\frac{p}{2}}}{a_p}{\mathbb E}\Big(\int_0^{\tau_n}\|u(s)\|_V^\alpha \mathrm{d} s\Big)^{\frac{p}{2}} \leq {\mathbb E}\|u(0)\|_H^p + \E\Big(\int_0^T f(s) \dint s \Big)^{\frac{p}{2}}
\\ & \qquad
+ K_c^{\frac p 2} {\mathbb E}\Big(\int_0^{T} \|u(s)\|_H^2 \mathrm{d} s\Big)^{\frac{p}{2}}
+ 2^{\frac{p}{2}} {\mathbb E}\left|\int_0^{\tau_n} B(s, u(s))^* u(s)\mathrm{d} W(s)\right|^{\frac{p}{2}},
\end{split}
\end{equation}
where $a_p = 2^{p-2}$.
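The constant $a_p = 2^{p-2} = 4^{\frac{p}{2}-1}$ arises from the convexity of $x\mapsto x^{p/2}$ applied to the four terms on the right-hand side of \eqref{eq:Vtermstep}: for $a_1, \ldots, a_4\geq 0$,
\[
\Big(\sum_{i=1}^4 a_i\Big)^{\frac{p}{2}} = 4^{\frac{p}{2}}\Big(\frac{1}{4}\sum_{i=1}^4 a_i\Big)^{\frac{p}{2}} \leq 4^{\frac{p}{2}-1} \sum_{i=1}^4 a_i^{\frac{p}{2}}.
\]
The constant $b_p = 3^{\frac{p-2}{2}}$ used below arises in the same way from three terms.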
The $\|u(s)\|_H^2$-terms can be estimated with help of \eqref{sup_estimate} by
\begin{equation}\label{est_ush_term}
\begin{split}
{\mathbb E}\Big(\int_0^{T} \|u(s)\|_H^2 \mathrm{d} s\Big)^{\frac{p}{2}} &\le T^{\frac p 2} {\mathbb E}\sup\limits_{t\in[0, T]}\|u(t)\|_H^p\\
&\leq C T^{\frac p 2} e^{CT}\Big({\mathbb E}\|u(0)\|_H^p + {\mathbb E}\biggl(\int_0^{T}f(s) \mathrm{d} s\biggr)^{\frac{p}{2}}\Big).
\end{split}
\end{equation}
Thus it remains to estimate the $B$-term. We obtain:
\begin{equation}\label{BDG_V_norm}
\begin{split}
{\mathbb E}\Big|\int_0^{\tau_n} &B(s, u(s))^* u(s)\mathrm{d} W(s)\Big|^{\frac{p}{2}}
\stackrel{\mathrm{(i)}}{\leq} C_p{\mathbb E}\Big(\int_0^T \|u(s)\|_H^2 {{\bf 1}}_{[0,\tau_n]}(s) \|B(s, u(s))\|_{{\mathcal L}_2(U, H)}^2 \mathrm{d} s\Big)^{\frac{p}{4}}\\
&\stackrel{\mathrm{(ii)}}{\leq} C_p{\mathbb E}\Big(\sup\limits_{t\in[0, T]}\|u(t)\|_H^2 \int_0^{\tau_n} \|B(s, u(s))\|_{{\mathcal L}_2(U, H)}^2 \mathrm{d} s\Big)^{\frac{p}{4}}\\
&\stackrel{\mathrm{(iii)}}{\leq} C_p\Big({\mathbb E}\sup\limits_{t\in[0, T]}\|u(t)\|_H^p\Big)^{\frac{1}{2}}\Big({\mathbb E}\Big(\int_0^{\tau_n} \|B(s, u(s))\|_{{\mathcal L}_2(U, H)}^2 \mathrm{d} s\Big)^{\frac{p}{2}}\Big)^{\frac{1}{2}}\\
&\stackrel{\mathrm{(iv)}}{\leq} C_p\frac{1}{2\varepsilon}{\mathbb E}\sup\limits_{t\in[0, T]}\|u(t)\|_H^p + C_p\frac{\varepsilon}{2}{\mathbb E}\Big(\int_0^{\tau_n} \|B(s, u(s))\|_{{\mathcal L}_2(U, H)}^2 \mathrm{d} s\Big)^{\frac{p}{2}},
\end{split}
\end{equation}
where in (i) we have applied the Burkholder-Davis-Gundy inequality, in (ii) we have bounded $\|u(s)\|_H^2$ by its supremum, in (iii) we have used H\"older's inequality, and in (iv) Young's inequality. Applying \condref{it:bound2}, the $B$-term can be estimated as
\begin{align*}
& {\mathbb E}\Big(\int_0^{\tau_n} \|B(s, u(s))\|_{{\mathcal L}_2(U, H)}^2 \mathrm{d} s\Big)^{\frac{p}{2}} \leq {\mathbb E}\Big(\int_0^{\tau_n} \big( f(s) + K_B \|u(s)\|_H^2 + K_{\alpha} \|u(s)\|_V^\alpha \big) \mathrm{d} s\Big)^{\frac{p}{2}} \\
& \leq b_p{\mathbb E}\Big(\int_0^T f(s) \mathrm{d} s \Big)^{\frac{p}{2}} + b_p K_B^{\frac p 2} {\mathbb E}\Big(\int_0^T \|u(s)\|_H^2 \mathrm{d} s\Big)^{\frac{p}{2}}+ b_pK_\alpha^{\frac{p}{2}}{\mathbb E}\Big(\int_0^{\tau_n} \|u(s)\|_V^\alpha \mathrm{d} s\Big)^{\frac{p}{2}} \\
& \leq b_p\E\Big(\int_0^T f(s) \dint s \Big)^{\frac{p}{2}} + b_p K_B^{\frac p 2} T^{\frac{p}{2}}{\mathbb E}\sup\limits_{t\in[0, T]}\|u(t)\|_H^p + b_p K_\alpha^{\frac{p}{2}}{\mathbb E}\Big(\int_0^{\tau_n} \|u(s)\|_V^\alpha \mathrm{d} s \Big)^{\frac{p}{2}},
\end{align*}
where $b_p = 3^{\frac{p-2}{2}}$.
Recombining this estimate with \eqref{sup_estimate} and \eqref{BDG_V_norm}, we obtain:
\begin{align*}
{\mathbb E} & \Big|\int_0^{\tau_n} B(s, u(s))^* u(s) \mathrm{d} W(s)\Big|^{\frac{p}{2}} \leq \Big(C_p \frac{1}{2\varepsilon} + b_pK_B^{\frac p 2} C_pT^{\frac{p}{2}}\frac{\varepsilon}{2} \Big) {\mathbb E}\sup\limits_{t\in[0, T]}\|u(t)\|_H^p \\ &\qquad \qquad + b_pC_p\frac{\varepsilon}{2}\E\Big(\int_0^T f(s) \dint s \Big)^{\frac{p}{2}} + b_pK_\alpha^{\frac{p}{2}} C_p\frac{\varepsilon}{2}{\mathbb E}\Big(\int_0^{\tau_n} \|u(s)\|_V^\alpha \mathrm{d} s\Big)^{\frac{p}{2}}\\
& \leq C_\varepsilon(1+T^{\frac{p}{2}}) e^{CT}\Big[{\mathbb E}\|u(0)\|_H^p + \E\Big(\int_0^T f(s) \dint s \Big)^{\frac{p}{2}}\Big] + b_p K_\alpha^{\frac{p}{2}} C_p\frac{\varepsilon}{2}{\mathbb E}\Big(\int_0^{\tau_n} \|u(s)\|_V^\alpha \mathrm{d} s\Big)^{\frac{p}{2}}
\end{align*}
Using this and \eqref{est_ush_term} in \eqref{V_norm}, it follows that
\begin{align*}
\theta^{\frac{p}{2}}{\mathbb E}\Big(\int_0^{\tau_n}\|u(s)\|_V^\alpha \mathrm{d} s\Big)^{\frac{p}{2}} & \leq C_\varepsilon'(1+T^{\frac{p}{2}}) e^{C T}\Big[{\mathbb E}\|u(0)\|_H^p + \E\Big(\int_0^T f(s) \dint s \Big)^{\frac{p}{2}}\Big] \\
& \phantom{\leq} + a_p b_p 2^{\frac{p-2}{2}}K_\alpha^{\frac{p}{2}} C_p\varepsilon{\mathbb E}\Big(\int_0^{\tau_n} \|u(s)\|_V^\alpha \mathrm{d} s\Big)^{\frac{p}{2}}.
\end{align*}
Therefore, choosing $\varepsilon > 0$ small enough and then letting $n\to\infty$ using Fatou's lemma, we obtain
\begin{equation*}
{\mathbb E}\Big(\int_0^T \|u(s)\|_V^\alpha \mathrm{d} s\Big)^{\frac{p}{2}} \leq C''e^{C''T}\Big({\mathbb E}\|u(0)\|_H^p + \E\Big(\int_0^T f(s) \dint s \Big)^{\frac{p}{2}}\Big).
\end{equation*}
Combining this with the supremum estimate \eqref{sup_estimate} of Step 1 finishes the proof.
\end{proof}
\begin{remark}
One can also prove an estimate for the integral of $\|u(s)\|_H^{p-2}\|u(s)\|_V^\alpha$. Indeed, by H\"older's and Young's inequalities,
\begin{align*}
{\mathbb E}\int_0^{T} \|u(s)\|_H^{p-2} \|u(s)\|_V^\alpha \mathrm{d} s & \leq {\mathbb E} \sup_{s\in [0,T]}\|u(s)\|_H^{p-2} \int_0^{T} \|u(s)\|_V^\alpha \mathrm{d} s
\\
& \le \tfrac{p-2}{p} {\mathbb E} \sup_{t \in [0,T]} \|u(t)\|_H^p + \tfrac 2 p {\mathbb E} \Big( \int_0^T \|u(t)\|_V^\alpha \mathrm{d} t\Big)^{\frac p 2},
\end{align*}
where the last line is bounded by the left-hand side of \eqref{eq:estapriorimain}.
\end{remark}
If $K_B = K_c = 0$ in Assumptions~\ref{main_assumptions} \condref{it:coerc} and \condref{it:bound2}, it is possible to improve the dependency on $p$ in estimate~\eqref{eq:estapriorimain}. Here the condition $p\geq \beta+2$ is not needed.
\begin{corollary}\label{cor:a_priori_remark}
Suppose $u$ is a solution of equation \eqref{eq:variationalintro} with initial condition $u(0)\in L^{p}(\Omega; H)$ and \condref{it:coerc}, \condref{it:bound1}, \condref{it:bound2} from Assumptions~\ref{main_assumptions} hold with $K_B = K_c=0$ and $f\in L^\frac{p}{2}(\Omega; L^1([0, T]))$. Then there exists a constant $C$ only depending on $\alpha, \beta, \theta, K_A, K_{\alpha}$ such that
\begin{equation}\label{eq:estimateforallpsup}
\begin{aligned}
\|u\|_{L^p(\Omega; C([0, T]; H))} + & p^{-1/2}\|u\|^{\frac{\alpha}{2}}_{L^{\frac{p\alpha}{2}}(\Omega;L^\alpha([0,T];V))} \\ & \leq C\big[\|u(0)\|_{L^p(\Omega; H)} + \|f\|^{\frac{1}{2}}_{L^p(\Omega; L^1(0, T))}\big].
\end{aligned}
\end{equation}
Moreover, if $B(v)^*v = 0$ for all $v\in V$, then
the above estimates hold for all $p\in [2, \infty]$, and $p^{-1/2}$ can be omitted.
\end{corollary}
The main point is that $C$ does not depend on $p$ and $T$. In particular, we can let $T\to\infty$ in \eqref{eq:estimateforallpsup} if $f$ is integrable over ${\mathbb R}_+$.
\begin{proof}
Estimate~\eqref{final_inter_estimate} gives for every $\varepsilon_1, \varepsilon_2 > 0$:
\begin{equation}\label{p_indep_start}
\begin{split}
& \big(1-\phi(\varepsilon_1,\varepsilon_2)\big){\mathbb E}\sup\limits_{t\in[0, T]}\|u(t\wedge\tau_n)\|_H^p \\
& \qquad \leq \big(1+K_\alpha\tfrac{2\sqrt{2}}{\varepsilon_1\theta}\big){\mathbb E}\|u(0)\|_H^p + 2\varepsilon_2^{\frac{2-p}{p}}\big(\tfrac{\sqrt{2}}{\varepsilon_1}+\tfrac{1}{2}+K_\alpha\tfrac{\sqrt{2}}{\varepsilon_1\theta}\big){\mathbb E}\Big(\int_0^{\tau_n}f(s) \mathrm{d} s\Big)^{\frac{p}{2}},\\
\end{split}
\end{equation}
where $\phi(\varepsilon_1, \varepsilon_2) = p\sqrt{2}\varepsilon_1 +(p-2)\varepsilon_2\big(\tfrac{\sqrt{2}}{\varepsilon_1}+\tfrac{1}{2}+K_\alpha\tfrac{\sqrt{2}}{\varepsilon_1\theta}\big)$. Choosing
$$\varepsilon_1 = \frac{1}{2\sqrt{2}p}, \qquad \varepsilon_2 = \frac{1}{2(p-2)(8p+1+K_\alpha\frac{8p}{\theta})}$$
gives $\phi(\varepsilon_1,\varepsilon_2) = \frac 3 4$, so that $1-\phi(\varepsilon_1,\varepsilon_2) = \frac 1 4$. Moreover,
$$\frac{1}{\varepsilon_2} \leq 16p^2(1+K_\alpha \tfrac{1}{\theta}) + p^2 + 1 \leq A p^2,$$
where $A$ is a constant depending on $K_\alpha$ and $\theta$. Therefore, we get:
\begin{align*}
\tfrac{1}{4}{\mathbb E}\sup\limits_{t\in[0, T]}\|u(t\wedge\tau_n)\|_H^p & \leq (1+K_\alpha\tfrac{8p}{\theta}){\mathbb E}\|u(0)\|_H^p + (Ap^2+1)^{\frac{p-2}{p}}{\mathbb E}\Big(\int_0^{\tau_n} f(s) \mathrm{d} s\Big)^{\frac{p}{2}}.
\end{align*}
Taking $1/p$-th powers and letting $n\to\infty$, the supremum estimate in \eqref{eq:estimateforallpsup} follows since for every $\gamma>0$,
\[
\sup_{p\in [2, \infty)}p^{\gamma/p} = \sup_{p \in [2,\infty)} \big(1+(p-1)\big)^{\gamma/p} \le \sup_{p \in [2,\infty)} e^{\gamma (p-1)/p} = e^\gamma.
\]
Under the additional assumption $B(v)^*v=0$, it follows that condition~\condref{it:coerc} holds for all $p\in [2, \infty)$. Therefore, we can let $p\to \infty$ in \eqref{eq:estimateforallpsup}.
In order to derive the estimate~\eqref{eq:estimateforallpsup} for the $V$-term, we use \eqref{V_norm} and the assumption $K_c = 0$ to find that
\begin{equation}\label{V_norm_ineq}
\begin{split}
\frac{\theta^{\frac{p}{2}}}{a_p}{\mathbb E}\Big(\int_0^{T} \|u(s)\|_V^\alpha &\mathrm{d} s\Big)^{\frac{p}{2}}
\leq {\mathbb E}\|u(0)\|_H^p + \E\Big(\int_0^T f(s) \dint s \Big)^{\frac{p}{2}} \\
& + 2^{\frac{p}{2}}{\mathbb E}\Big|\int_0^{T} B(s, u(s))^*u(s)\mathrm{d} W(s)\Big|^{\frac{p}{2}} ,
\end{split}
\end{equation}
where $a_p = 2^{p-2}$. If $B(v)^*v = 0$ for all $v \in V$, then the stochastic integral vanishes and thus \eqref{V_norm_ineq} already implies the required result.
It remains to prove estimate~\eqref{eq:estimateforallpsup} for the $V$-norm in the case the stochastic integral in \eqref{V_norm_ineq} does not vanish. For this we use the Burkholder-Davis-Gundy inequality with $\gamma_p = \frac{(2p)^{p/4}}{2}$ as in \eqref{BDG_V_norm} (see \cite[Theorem~A]{carlen_1991}), giving for all $\varepsilon>0$
\begin{align*}
&{\mathbb E}\Big|\int_0^{T} B(s, u(s))^*u(s)\mathrm{d} W(s)\Big|^{\frac{p}{2}}\\
&\leq \frac{\gamma_p }{\varepsilon}{\mathbb E}\sup\limits_{t\in[0, T]}\|u(t)\|_H^p + \gamma_p \varepsilon {\mathbb E}\Big(\int_0^{T} \|B(s, u(s))\|_{{\mathcal L}_2(U, H)}^2 \mathrm{d} s \Big)^{\frac{p}{2}},
\end{align*}
Using assumption \condref{it:bound2}, we additionally obtain
\begin{equation*}
\begin{split}
&{\mathbb E}\Big(\int_0^{T} \|B(s, u(s))\|_{{\mathcal L}_2(U, H)}^2 \mathrm{d} s \Big)^{\frac{p}{2}}\\
&\leq 2^{\frac{p-2}{2}}\E\Big(\int_0^T f(s) \dint s \Big)^{\frac{p}{2}} + 2^{\frac{p-2}{2}}K_\alpha^{\frac{p}{2}}{\mathbb E}\Big(\int_0^{T} \|u(s)\|_V^\alpha \mathrm{d} s\Big)^{\frac{p}{2}}.
\end{split}
\end{equation*}
Recombining all terms with inequality \eqref{V_norm_ineq} we find
\begin{align*}
\frac{\theta^{\frac{p}{2}}}{a_p} {\mathbb E}\Big(\int_0^{T} & \|u(s)\|_V^\alpha \mathrm{d} s\Big)^{\frac{p}{2}} \leq {\mathbb E}\|u(0)\|_H^p + (1+ \gamma_p \varepsilon 2^{p-1}) \E\Big(\int_0^T f(s) \dint s \Big)^{\frac{p}{2}}
\\ &+
\tfrac{2^{\frac{p}{2}}\gamma_p}{\varepsilon}{\mathbb E}\sup\limits_{t\in[0, T]} \|u(t)\|_H^p
+ 2^{p-1} K_\alpha^{\frac{p}{2}}\gamma_p \varepsilon {\mathbb E}\Big(\int_0^{T}\|u(s)\|_V^\alpha \mathrm{d} s\Big)^{\frac{p}{2}}.
\end{align*}
Therefore, setting $\varepsilon = \frac{\theta^{\frac{p}{2}}}{a_p 2^{p+1} K_\alpha^{\frac{p}{2}}\gamma_p}$
we obtain
\begin{align*}
\frac{\theta^{\frac{p}{2}}}{2 a_p} {\mathbb E}\Big(\int_0^{T} \|u(s)\|_V^\alpha \mathrm{d} s\Big)^{\frac{p}{2}} & \leq {\mathbb E}\|u(0)\|_H^p + (1+ \gamma_p \varepsilon 2^{p-1}) \E\Big(\int_0^T f(s) \dint s \Big)^{\frac{p}{2}}
\\ & \qquad +
\tfrac{2^{\frac{p}{2}}\gamma_p}{\varepsilon}{\mathbb E}\sup\limits_{t\in[0, T]} \|u(t)\|_H^p.
\end{align*}
Taking $1/p$-th powers and observing that the resulting leading factor satisfies $\gamma_p^{2/p}\leq C \sqrt{p}$, we arrive at the desired inequality.
\end{proof}
Given the a priori estimates of Theorem~\ref{a_priori_theorem}, one can now complete the proof of Theorem~\ref{Main_theorem} by showing existence and uniqueness as in the classical case $p=2$. Details are standard and can be found in \cite{rockner_2010}. As our assumptions differ from the latter, some changes are required; in particular, we require $p\geq \beta+2$, which is needed for technical reasons in the existence proof but can often be avoided by a localization argument. Note that it was not used in Theorem~\ref{a_priori_theorem}. For details we refer to the existence and uniqueness proofs in \cite{brzezniak_strong_2014, neelima_2020}.
\section{Applications}\label{sec:appl}
In this section, we apply our framework to
\begin{itemize}
\item linear scalar second-order parabolic equations, namely the stochastic heat equation with both Dirichlet (section~\ref{stoch_heat_dir}) and Neumann boundary conditions (section~\ref{stoch_heat_neu}), in which the $p$-dependent term in the coercivity condition \condref{it:coerc} reduces to the classical setting in certain cases,
\item semilinear second-order parabolic equations, namely the stochastic Burgers' equation (section~\ref{stoch_burg}) and the stochastic Navier-Stokes equations in two dimensions (section~\ref{stoch_nav_sto}),
\item systems of SPDEs (section~\ref{systems_SPDE}) and higher-order SPDEs (section~\ref{higher_SPDE}) as treated in \cite{du_2020,wang_2020},
\item the fully nonlinear evolution induced by the $p$-Laplacian influenced by noise (section~\ref{stoch_p_lapl}).
\end{itemize}
The treated examples demonstrate the wide range of applicability of our unifying abstract framework. In several cases the regularity estimates in $L^p(\Omega)$ for $p> 2$ seem new. In all cases, the approach to prove them via our Theorem \ref{Main_theorem} also seems new. The variety of the examples will hopefully suffice to show the reader how to apply our framework to concrete SPDEs.
\subsection{Stochastic heat equation with Dirichlet boundary conditions}\label{stoch_heat_dir}
We consider a stochastic heat equation with multiplicative gradient noise, additive noise, and Dirichlet boundary conditions:
\begin{equation}\label{eq: stoch_heat_equation_dir}
\mathrm{d} u(t) = \Big(\sum\limits_{i, j =1}^d \partial_i (a^{ij}\partial_{j} u(t)) + \phi(t) \Big)\mathrm{d} t + \sum\limits_{k=1}^\infty\Big(\sum\limits_{i=1}^d b_k^i\partial_i u(t) + \psi_{k, t} \Big) \mathrm{d} W_k(t).
\end{equation}
Here the $W_k(t)$ are real-valued Wiener processes. In what follows, we use:
\begin{assumptions}\label{ass: stoch_heat_equation_dir}
Let $\mathcal{D} \subseteq {\mathbb R}^d$ be an open set. Let
$$(V, H, V^*) = (H_0^1(\mathcal{D}), L^2(\Distr), H^{-1}(\Distr))$$ and $U = \ell^2$.
Suppose that $a^{ij} \in L^\infty(\Omega \times [0,T] \times \mathcal D)$ for $1 \le i,j \le d$ and $(b_k^i)_{k = 1}^\infty \in L^\infty(\Omega \times [0,T]; W^{1,\infty}(\mathcal D;\ell^2))$ for $1\leq i \leq d$. Furthermore, we assume that the coefficients are progressively measurable. Define
\begin{equation}\label{eq: sigma_ij_dir}
\sigma^{ij} = \sum\limits_{k=1}^\infty b_k^{i} b_k^{j}, \qquad 1 \le i, j \le d,
\end{equation}
and suppose that the uniform ellipticity condition on $a^{ij}$ and $b_k^{i}$:
\begin{equation}\label{cond:ellipticity_condition_dir}
\sum\limits_{i, j = 1}^d \left(2a^{ij}-\sigma^{ij}\right)\xi^i\xi^j \geq \theta |\xi|^2 \qquad \text{for all } \xi \in \mathbb{R}^d
\end{equation}
holds true where $\theta > 0$. Furthermore, assume $\phi \in L^{p}(\Omega; L^2([0, T]; H^{-1}(\Distr)))$,\\ $\psi \in L^{p}(\Omega; L^2([0, T]; L^2(\Distr;\ell^2)))$, and $u_0 \in L^{p}(\Omega; L^2(\mathcal{D}))$, where $p \geq 2$.
\end{assumptions}
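The ellipticity condition \eqref{cond:ellipticity_condition_dir} is easy to check numerically for concrete coefficients. The following sketch is purely illustrative and not part of the analysis: the constant coefficients and the truncation to two noise channels are hypothetical choices of ours. It computes $\sigma^{ij}$ and the largest admissible $\theta$ (the smallest eigenvalue of $2a - \sigma$) for $d = 2$.

```python
import math

# Hypothetical constant coefficients in d = 2 with two noise channels (k = 1, 2).
a = [[1.0, 0.2],
     [0.2, 1.0]]                      # diffusion matrix a^{ij}
b = [[0.5, 0.3],                      # b_1 = (b_1^1, b_1^2)
     [0.2, -0.4]]                     # b_2 = (b_2^1, b_2^2)

d = 2
# sigma^{ij} = sum_k b_k^i b_k^j
sigma = [[sum(bk[i] * bk[j] for bk in b) for j in range(d)] for i in range(d)]

# M = 2a - sigma must be uniformly positive definite: min eigenvalue = theta > 0.
M = [[2 * a[i][j] - sigma[i][j] for j in range(d)] for i in range(d)]
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
theta = tr / 2 - math.sqrt((tr / 2) ** 2 - det)  # smaller eigenvalue of symmetric 2x2
print(theta)   # > 0: the parabolicity condition holds for this choice
```

For this choice the condition holds with a comfortable margin; replacing the entries of `b` with larger values eventually makes `theta` negative, which is exactly the regime excluded by the assumption.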
Equation~\eqref{eq: stoch_heat_equation_dir} can be reformulated as a stochastic evolution equation of the form $$\mathrm{d} u(t) = A(t, u(t)) \ \mathrm{d} t + \sum\limits_{k=1}^\infty B_k(t, u(t)) \ \mathrm{d} W_k(t),$$
with the deterministic linear operator $A(t): H^1_0(\mathcal{D}) \to H^{-1}(\mathcal{D})$ defined by
\begin{equation*}
\langle A(t, u), v\rangle = -\sum_{i,j=1}^{d}\int_\mathcal{D} a^{ij}\, \partial_i u \, \partial_j v \, \mathrm{d} x +\langle \phi(t), v\rangle \qquad \text{for } u, v\in H_0^1(\mathcal{D}),
\end{equation*}
and stochastic operators $B_k(t): H_0^1(\Distr) \to L^2(\Distr)$ given by
\begin{equation*}
B_k(t, v) = \sum\limits_{i=1}^d b_k^{i} \partial_i v + \psi_{k, t} \qquad \text{for } v\in H_0^1(\mathcal{D}).
\end{equation*}
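For intuition, the reformulated equation can be integrated by an explicit finite-difference Euler-Maruyama scheme. The sketch below is an illustration of ours, not part of the paper: it uses hypothetical constant coefficients, $d = 1$, a single noise channel, and $\phi = \psi = 0$, and reports the final $L^2$ norm, which stays bounded in line with the energy estimate.

```python
import math, random

random.seed(0)
N, steps, T = 64, 4000, 0.1
h, dt = 1.0 / N, T / steps
a, b1 = 1.0, 0.5                     # 2*a - b1**2 = 1.75 > 0: parabolicity holds
u = [math.sin(math.pi * i * h) for i in range(N + 1)]  # initial datum, u[0] = u[N] = 0

for _ in range(steps):
    dW = random.gauss(0.0, math.sqrt(dt))              # one real Wiener increment
    new = [0.0] * (N + 1)                              # Dirichlet: boundary stays 0
    for i in range(1, N):
        lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / h ** 2
        grad = (u[i + 1] - u[i - 1]) / (2 * h)
        new[i] = u[i] + a * lap * dt + b1 * grad * dW  # drift + gradient noise
    u = new

l2 = math.sqrt(h * sum(v * v for v in u))
print(l2)   # finite, and well below the initial L^2 norm (about 0.707)
```

Note that for constant $b$ the noise is a pure transport term, so at the continuum level $\mathrm{d}\|u\|_{L^2}^2 = (b_1^2 - 2a)\|\partial u\|_{L^2}^2\,\mathrm{d} t$ pathwise, which explains the monotone decay observed here.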
It turns out that the $p$-dependent term in the coercivity condition \condref{it:coerc} vanishes. Therefore, the solution admits moment estimates of all orders $p \geq 2$, only limited by the integrability of the additive noise and the initial condition:
\begin{proposition}\label{prop: stoch_heat_equation_dir}
Suppose that Assumptions~\ref{ass: stoch_heat_equation_dir} are satisfied. Then, a unique variational solution $u$ of equation~\eqref{eq: stoch_heat_equation_dir} in the sense of Definition~\ref{solution_definition} exists and the following estimates hold:
\begin{align*}
{\mathbb E}&\sup\limits_{t\in[0, T]}\|u(t)\|_{L^2(\mathcal{D})}^p + {\mathbb E}\Big(\int_0^T \|u(t)\|_{H_0^1(\Distr)}^2 \mathrm{d} t\Big)^{\frac{p}{2}}
\\ & \leq Ce^{CT}\bigg({\mathbb E}\|u(0)\|_{L^2(\mathcal{D})}^p + {\mathbb E} \Big(\int_0^T \|\phi(t)\|_{H^{-1}(\mathcal{D})}^2 \mathrm{d} t\Big)^{\frac{p}{2}} + {\mathbb E} \Big(\int_0^T \|\psi(t)\|_{L^2(\mathcal{D};\ell^2)}^2 \mathrm{d} t\Big)^{\frac{p}{2}}\bigg)
\end{align*}
where $C$ depends on $\theta$, $p$, $a^{ij}$ and $b_k^i$ for $1 \le i, j \le d$ and $k \in \mathbb{N}$.
\end{proposition}
\begin{remark}
Assuming that $\Distr$ is bounded and the $b_k$ do not depend on the space variable, we can use Corollary~\ref{cor:a_priori_remark} to obtain $p$-independent constants, and even take $p= \infty$. That is, there exists a constant $C$ such that for all $p\in [2, \infty]$
\begin{align*}
&\|u\|_{L^p(\Omega; C([0, T]; L^2(\Distr)))} + \|u\|_{L^p(\Omega;L^2(0,T;H^1_0(\Distr)))} \\ & \leq C \Big[\|u(0)\|_{L^p(\Omega; L^2(\Distr))} + \|\phi\|_{L^p(\Omega;L^2(0,T;H^{-1}(\mathcal D)))} + \|\psi\|_{L^p(\Omega;L^2(0,T;L^2(\mathcal{D};\ell^2)))}\Big]
\end{align*}
where $C$ only depends on $\theta$, $a^{ij}$ and $b_k^{i}$ for $1 \le i, j \le d$ and $k \in \mathbb{N}$.
\end{remark}
\begin{remark}\label{rem:Dirichletnoreg}
A version of Proposition~\ref{prop: stoch_heat_equation_dir} holds if we only assume $(b_k^i)_{k = 1}^\infty \in L^\infty(\Omega \times [0,T]\times \mathcal D;\ell^2)$. However, in this case we can only use $\frac{\|B(t, v)^*v\|_U^2}{\|v\|_H^2}\leq \|B(t, v)\|^2_{{\mathcal L}_2(U, H)}$ which leads to the $p$-dependent coercivity condition
\begin{equation*}
\sum\limits_{i, j = 1}^d \left(2a^{ij}-(p-1)\sigma^{ij}\right)\xi^i\xi^j \geq \theta |\xi|^2 \qquad \text{for all } \xi \in \mathbb{R}^d.
\end{equation*}
\end{remark}
\begin{proof}[Proof of Proposition~\ref{prop: stoch_heat_equation_dir}]
By Remark~\ref{rem:additive} and Theorem~\ref{Main_theorem}, it suffices to show Assumptions~\ref{main_assumptions}, \condref{it:hem}-\condref{it:bound2}, for $(A,B)$ with $\alpha = 2$, $\phi = 0$, $\psi = 0$, and $f = 0$.
Hemicontinuity \condref{it:hem} is immediate from the definition of $A$. For local weak monotonicity \condref{it:weak_mon}, observe that it suffices to prove the inequality for $v \in H_0^1(\mathcal{D})$ and $u = 0$ by linearity.
Using uniform ellipticity \eqref{cond:ellipticity_condition_dir}, it follows:
\begin{align}\label{eq: stoch_heat_equation_dir_H2}
\begin{split}
2\langle A(t, v), v\rangle + &
\|B(t, v)\|^2_{{\mathcal L}_2(\ell^2, L^2(\Distr))}\\
&= -\sum\limits_{i, j=1}^d\int_\mathcal{D}2a^{ij} \partial_i v \, \partial_j v \, \mathrm{d} x + \sum\limits_{k=1}^\infty \int_\mathcal{D} \sum\limits_{i, j = 1}^d b_k^{i} b_k^{j} \partial_i v \, \partial_j v \, \mathrm{d} x\\
&\ = \sum\limits_{i, j = 1}^d \int_{\mathcal{D}} (-2a^{ij}+\sigma^{ij}) \, \partial_i v \, \partial_j v \, \mathrm{d} x\\
&\leq-\theta\|v\|_{H_0^1(\mathcal{D})}^2 + \theta\|v\|_{L^2(\Distr)}^2,
\end{split}
\end{align}
that is, \condref{it:weak_mon} is satisfied with $K =\theta$ (if $\Distr$ is bounded one can take $K=0$ by Poincar\'e's inequality). For coercivity \condref{it:coerc}, observe that the first two terms in \condref{it:coerc} form the first line of \eqref{eq: stoch_heat_equation_dir_H2}. Therefore, it remains to derive an expression for $\|B(t, v)^*v\|_{\ell^2}^2/\|v\|_{L^2(\mathcal{D})}^2$, where $v\in H_0^1(\mathcal{D})$. Integration by parts gives
\begin{equation*}
(B(t, v)^*v)_k = \sum\limits_{i=1}^d \int_\mathcal{D} b^{i}_k (\partial_i v)\, v \,\mathrm{d} x = -\frac{1}{2}\sum\limits_{i=1}^d \int_\mathcal{D} (\partial_i b^{i}_k)\, v^2 \,\mathrm{d} x.
\end{equation*}
Using the spatial regularity of $b_k^{i}$, we obtain:
\begin{equation*}
\begin{split}
\Big \|k\mapsto \sum\limits_{i=1}^d \int_\mathcal{D} (b^i_k \partial_i v) v \mathrm{d} x\Big \|_{\ell^2} &= \Big \|k\mapsto \sum\limits_{i=1}^d \int_\mathcal{D} \frac{1}{2}(\partial_i b^i_k) v^2 \mathrm{d} x\Big \|_{\ell^2}\\
&\leq \frac{1}{2} \int_\mathcal{D} \|\DIV(b)\|_{\ell^2} v^2 \mathrm{d} x\\
&\leq \|\DIV(b)\|_{L^\infty(\mathcal{D}; \ell^2)} \|v\|_{L^2(\mathcal{D})}^2.
\end{split}
\end{equation*}
Therefore,
\begin{equation*}
\begin{split}
2\langle A(t, v), v\rangle + \sum\limits_{k=1}^\infty \Big\|\sum\limits_{i=1}^d b_k^{i}\partial_i v\Big\|_{L^2(\mathcal{D})}^2 &+ (p-2)\frac{\|B(t, v)^*v\|^2_{\ell^2}}{\|v\|_{L^2(\mathcal{D})}^2}\\
&\leq -\theta\|v\|_{H_0^1(\mathcal{D})}^2 + \big(\theta + C(p-2)\big) \|v\|_{L^2(\mathcal{D})}^2,
\end{split}
\end{equation*}
that is, \condref{it:coerc} is satisfied with $f = 0$ and $K_c = \theta + C(p-2)$.
For the boundedness condition \condref{it:bound1}, let $u, v\in H_0^1(\mathcal{D})$. Then,
\begin{equation*}
\begin{split}
|\langle A(t, u), v\rangle | \leq \sum\limits_{i,j = 1}^d \|a^{ij}\|_{L^\infty(\Distr)} \|u\|_{H_0^1(\mathcal{D})}\|v\|_{H_0^1(\mathcal{D})},
\end{split}
\end{equation*}
that is, $\|A(t, u)\|_{H^{-1}(\mathcal{D})}^2 \leq \left(\sum\|a^{ij}\|_{L^\infty(\Omega \times [0,T] \times \mathcal D)}\right)^2 \|u\|_{H_0^1(\mathcal{D})}^2$, implying \condref{it:bound1} for $\alpha = 2$, $\beta = 0$, and $K_A = \left(\sum\|a^{ij}\|_{L^\infty(\Omega \times [0,T] \times \mathcal D)}\right)^2/2$. Similarly, because of \eqref{eq: sigma_ij_dir},
\begin{equation*}
\|B(t, v)\|_{L^2(\mathcal{D};\ell^2)}^2 \leq \Big\|\sum\limits_{i, j = 1}^d \sigma^{ij}\Big\|_{L^\infty(\Distr)} \|v\|_{H_0^1(\mathcal{D})}^2.
\end{equation*}
Hence condition \condref{it:bound2} holds with $K_\alpha = \Big\|\sum\limits_{i, j = 1}^d \sigma^{ij}\Big\|_{L^\infty(\Omega \times [0,T] \times \mathcal D)}$ and $K_B = 0$.
\end{proof}
From the above proof it follows that the regularity condition on $b$ in Assumptions~\ref{ass: stoch_heat_equation_dir} can actually be weakened to $b\in L^\infty(\Omega \times [0,T]\times \mathcal D;\ell^2)$ and $\DIV(b)\in L^\infty(\mathcal{D}; \ell^2)$, where the divergence only needs to exist in the distributional sense.
\subsection{Stochastic heat equation with Neumann boundary conditions}\label{stoch_heat_neu}
The second equation we consider is the same stochastic heat equation as before, but now with Neumann boundary conditions on a domain $\mathcal{D} \subseteq \mathbb{R}^d$. For completeness, this equation is:
\begin{equation}\label{eq:stoch_heat_eq_neumann}
\mathrm{d} u(t) = \Big(\sum\limits_{i, j =1}^d \partial_i (a^{ij}\partial_{j} u(t)) + \phi(t) \Big)\mathrm{d} t + \sum\limits_{k=1}^\infty\Big(\sum\limits_{i=1}^d b_k^{i}\partial_i u(t) + \psi_{k, t} \Big) \mathrm{d} W_k(t).
\end{equation}
Most assumptions and computations are similar to the Dirichlet case, though some special care is needed to derive the coercivity condition in the Neumann setting.
\begin{assumptions}\label{ass: stoch_heat_eq_neumann}
Let $p\in [2, \infty)$. Let $\mathcal{D} \subseteq {\mathbb R}^d$ be a bounded $C^1$-domain, and consider
$$(V, H, V^*) = (H^1(\mathcal{D}), L^2(\Distr), H^1(\Distr)^*).$$
Suppose that $a^{ij} \in L^\infty(\Omega \times [0,T] \times \mathcal D)$ for $1 \le i,j \le d$ and $(b_k^{i})_{k=1}^\infty \in L^\infty(\Omega \times [0,T]; W^{1,\infty}(\mathcal D;\ell^2))$ for $1\leq i \leq d$. Furthermore, we assume that the coefficients are progressively measurable. Define
\begin{equation}\label{sigma_ij}
\sigma^{ij} = \sum\limits_{k=1}^\infty b_k^{i} b_k^{j}, \qquad 1 \le i, j \le d,
\end{equation}
and suppose that the uniform ellipticity condition on $a^{ij}$ and $b_k^{i}$:
\begin{equation}\label{eq: stoch_heat_eq_neumann_ellipticity_condition}
\sum\limits_{i, j = 1}^d \left(2a^{ij}-\sigma^{ij} - (p-2)C_b^2\delta^{ij}\right)\xi^i\xi^j \geq \theta |\xi|^2 \qquad \text{for all } \xi \in \mathbb{R}^d
\end{equation}
holds true, where $\theta > 0$ and $C_b = \|b\cdot n\|_{L^\infty(\partial \mathcal{D}; \ell^2)}$ with $n$ the outer normal of $\mathcal{D}$. Furthermore, assume $u_0 \in L^{p}(\Omega; L^2(\mathcal{D}))$,
\[\phi \in L^{p}(\Omega; L^2([0, T]; H^1(\Distr)^*)) \ \ \text{and} \ \ \psi \in L^{p}(\Omega; L^2([0, T]; H^1(\Distr;\ell^2))).
\]
\end{assumptions}
Equation~\eqref{eq:stoch_heat_eq_neumann} can be reformulated as a stochastic evolution equation of the form $$\mathrm{d} u(t) = A(t, u(t)) \ \mathrm{d} t + \sum\limits_{k=1}^\infty B_k(t, u(t)) \ \mathrm{d} W_k(t),$$
with the deterministic linear operator $A(t): H^1(\mathcal{D}) \to H^1(\mathcal{D})^*$ defined by
\begin{equation}\label{eq: stoch_heat_equation_neumann_A}
\langle A(t, u), v\rangle = -\sum_{i,j=1}^{d}\int_\mathcal{D} a^{ij}(t, x) \partial_i u \, \partial_j v \, \mathrm{d} x +\langle \phi(t), v\rangle \qquad \text{for } u, v\in H^1(\mathcal{D}),
\end{equation}
and stochastic operators $B_k(t): H^1(\Distr) \to L^2(\Distr)$ given by
\begin{equation}\label{eq: stoch_heat_equation_neumann_B}
B_k(t, v) = \sum\limits_{i=1}^d b_k^{i} \partial_i v + \psi_{k, t} \qquad \text{for } v\in H^1(\mathcal{D}).
\end{equation}
Unlike section~\ref{stoch_heat_dir}, the $p$-dependent term in the coercivity condition \condref{it:coerc} does not vanish completely: it enters through the term $b\cdot n|_{\partial \Distr}$. If $b\cdot n$ vanishes on the boundary of $\Distr$, then the $p$-dependent term vanishes and the solution admits moment estimates of all orders $p \geq 2$, only limited by the integrability of the additive noise and the initial condition. The main result for the Neumann case is:
\begin{proposition}\label{prop:Neumann}
Suppose Assumptions~\ref{ass: stoch_heat_eq_neumann} hold. Then, a unique solution $u$ of equation \eqref{eq:stoch_heat_eq_neumann} exists and the following estimate holds:
\begin{align*}
& {\mathbb E}\sup\limits_{t\in[0, T]}\|u(t)\|_{L^2(\mathcal{D})}^p + {\mathbb E}\Big(\int_0^T \|u(t)\|_{H^1(\Distr)}^2 \mathrm{d} t\Big)^{\frac{p}{2}} \\ & \leq Ce^{CT}\bigg({\mathbb E}\|u(0)\|_{L^2(\mathcal{D})}^p + {\mathbb E} \Big(\int_0^T \|\phi(t)\|_{H^{1}(\mathcal{D})^*}^2 \mathrm{d} t\Big)^{\frac{p}{2}} + {\mathbb E} \Big(\int_0^T \|\psi(t)\|_{L^2(\mathcal{D};\ell^2)}^2 \mathrm{d} t\Big)^{\frac{p}{2}}\bigg)
\end{align*}
where $C$ depends on $\theta$, $p$, $a^{ij}$ and $b_k^{i}$ for $1 \le i, j \le d$ and $k \in \mathbb{N}$.
\end{proposition}
Remark~\ref{rem:Dirichletnoreg} applies in the Neumann case as well, and thus this gives an alternative to \eqref{eq: stoch_heat_eq_neumann_ellipticity_condition} which additionally works without smoothness of $b$.
Before starting the proof of the proposition, we state a lemma that is needed to show the coercivity condition \condref{it:coerc}.
\begin{lemma}\label{lemma: neumann_coercivity}
Suppose Assumptions~\ref{ass: stoch_heat_eq_neumann} hold and let $B$ be as defined in \eqref{eq: stoch_heat_equation_neumann_B}. For every $\varepsilon \in (0, 1)$ there exists a constant $C_\varepsilon > 0$ such that for every nonzero $v \in H^1({\mathcal D})$ one has
\begin{equation}
\frac{\|B(t, v)^*v\|_{\ell^2}^2}{\|v\|_{L^2({\mathcal D})}^2} \leq (1+\varepsilon)C_b^2\|\nabla v\|_{L^2({\mathcal D})}^2 + C_\varepsilon(C_b^2 + D_b^2)\|v\|_{L^2({\mathcal D})}^2,
\end{equation}
where $C_b^2 = \|b \cdot n\|_{L^\infty(\partial \mathcal{D}; \ell^2)}^2$ and $D_b^2 = \|\DIV(b)\|_{L^\infty(\mathcal{D}; \ell^2)}^2$, with $n$ the outer normal and $\DIV$ the divergence.
\end{lemma}
\begin{proof}
Observe that ${\rm Tr}(\phi u) = \phi {\rm Tr}(u)$ for $\phi \in C^1(\overline{\mathcal{D}})$ and $u\in W^{1,1}(\mathcal{D})$. Indeed, for $u\in C^1(\overline{\mathcal{D}})$ this is clear, and the general case follows by approximation and boundedness of ${\rm Tr}:W^{1,1}(\mathcal{D})\to L^1(\partial \mathcal{D})$.
Thus, by integration by parts
\begin{align*}
\int_{\mathcal{D}} b^{ik} (\partial_i v) v dx = \frac12 \int_{\mathcal{D}} b^{ik} (\partial_i v^2) dx = \frac12 \int_{\partial\mathcal{D}} b^{ik} {\rm Tr}(v^2) n_i dS + \frac12 \int_{\mathcal{D}} (\partial_i b^{ik}) v^2 dx,
\end{align*}
where $n$ denotes the outer normal of $\mathcal{D}$.
Taking sums over $i$ and $\ell^2$-norms in $k$ for the last term we can write
\begin{align*}
\Big\|k\mapsto \sum_{i=1}^d \int_{\mathcal{D}} (\partial_i b^{ik}) v^2 dx\Big\|_{\ell^2} \leq \int_{\mathcal{D}} \|{\rm div}(b)\|_{\ell^2} v^2 dx\leq D_b \|v\|_{L^2(\mathcal{D})}^2.
\end{align*}
For the boundary term we obtain
\begin{align*}
\Big\|k\mapsto \sum_{i=1}^d \int_{\partial\mathcal{D}} b^{ik} {\rm Tr}(v^2) n_i dS\Big\|_{\ell^2} \leq \int_{\partial\mathcal{D}} \|b\cdot n\|_{\ell^2} {\rm Tr}(v^2) dS
\leq C_b \|{\rm Tr}(v^2)\|_{L^1(\partial \mathcal{D})}.
\end{align*}
By \cite[Theorem 2.7]{Motron02} for every $\varepsilon\in (0,1)$ there exists a constant $C_{\varepsilon}>0$ such that
\begin{align*}
\|{\rm Tr}(v^2)\|_{L^1(\partial \mathcal{D})} &\leq (1+\varepsilon) \|\nabla (v^2)\|_{L^1(\mathcal{D})} + C_{\varepsilon} \|v^2\|_{L^1(\mathcal{D})}
\\ & \leq 2(1+\varepsilon) \|v \nabla v\|_{L^1(\mathcal{D})} + C_{\varepsilon} \|v\|_{L^2(\mathcal{D})}^2
\\ & \leq 2(1+\varepsilon) \|\nabla v\|_{L^2(\mathcal{D})}\|v\|_{L^2(\mathcal{D})} + C_{\varepsilon} \|v\|_{L^2(\mathcal{D})}^2.
\end{align*}
Therefore, for $v\neq 0$
\begin{align*}
\frac{\Big\|k\mapsto\sum_{i=1}^d \int_{\mathcal{D}} b^{ik} (\partial_i v) v dx\Big\|_{\ell^2}}{\|v\|_{L^2(\mathcal{D})}} &\leq (1+\varepsilon) C_{b} \|\nabla v\|_{L^2(\mathcal{D})} + (C_{\varepsilon} C_{b}+D_b) \|v\|_{L^2(\mathcal{D})}.
\end{align*}
Taking squares we obtain the desired estimate by using $(x+y)^2\leq (1+\varepsilon) x^2+ C_{\varepsilon}'y^2$, and by redefining $\varepsilon$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:Neumann}]
We verify conditions \condref{it:hem}-\condref{it:bound2} of Assumptions~\ref{main_assumptions}, where we set $\phi = 0$ and $\psi = 0$; an application of Theorem~\ref{Main_theorem} then gives the result. Conditions \condref{it:hem}, \condref{it:bound1} and \condref{it:bound2} are verified as in the proof of Proposition~\ref{prop: stoch_heat_equation_dir}. To prove \condref{it:weak_mon}, note that the sequence of inequalities in \eqref{eq: stoch_heat_equation_dir_H2} carries over, since it only uses an ellipticity condition that is implied by \eqref{eq: stoch_heat_eq_neumann_ellipticity_condition}. By linearity of the operators, it suffices to consider $u = 0$ and $v \in H^1(\mathcal{D})$. Using \eqref{eq: stoch_heat_equation_dir_H2}, this results in:
\begin{align*}
2\langle A(t, v), v\rangle + \sum\limits_{k=1}^\infty \Big\|\sum\limits_{i=1}^d b_k^{i}\partial_i v\Big\|_{L^2(\mathcal{D})}^2
&\leq -\theta\sum\limits_{i=1}^d \int_\mathcal{D} |\partial_i v|^2 \mathrm{d} x \\ & \leq -\theta\|v\|_{H^1(\mathcal{D})}^2 + \theta \|v\|_{L^2(\mathcal{D})}^2.
\end{align*}
We are left to prove \condref{it:coerc}. It remains to bound the term $\|B(t, v)^*v\|_{\ell^2}^2/\|v\|_{L^2(\mathcal{D})}^2$ for nonzero $v\in H^1(\mathcal{D})$. Let $\varepsilon \in (0, 1)$. Combining the ellipticity condition \eqref{eq: stoch_heat_eq_neumann_ellipticity_condition} (applied to the first two terms as in \eqref{eq: stoch_heat_equation_dir_H2}) with Lemma~\ref{lemma: neumann_coercivity}, we obtain
\begin{equation*}
\begin{split}
&2\langle A(t, v), v\rangle + \sum\limits_{k=1}^\infty \Big\|\sum\limits_{i=1}^d b_k^{i}\partial_i v\Big\|_{L^2(\mathcal{D})}^2 + (p-2)\frac{\|B(t, v)^*v\|_{\ell^2}^2}{\|v\|_{L^2({\mathcal D})}^2}\\
&\leq \big(-\theta - (p-2)C_b^2 + (p-2)(1+\varepsilon)C_b^2\big)\|\nabla v\|_{L^2({\mathcal D})}^2 + (p-2)C_\varepsilon(C_b^2+D_b^2) \|v\|_{L^2({\mathcal D})}^2\\
&\leq \big(-\theta + (p-2)\varepsilon C_b^2\big)\|v\|_{H^1({\mathcal D})}^2 + \big(\theta - (p-2)\varepsilon C_b^2 + (p-2)C_\varepsilon(C_b^2+D_b^2)\big) \|v\|_{L^2({\mathcal D})}^2.
\end{split}
\end{equation*}
Since $C_b$ is finite, we can choose $\varepsilon>0$ such that $(p-2)\varepsilon C_b^2\leq \theta/2$, and this gives \condref{it:coerc}. Applying Theorem~\ref{Main_theorem}, the required statement follows.
\end{proof}
\subsection{Stochastic Burgers' equation with Dirichlet boundary conditions}\label{stoch_burg}
We consider Burgers' equation with multiplicative gradient noise, that is,
\begin{equation}\label{eq:Burgers_equation}
\mathrm{d} u(t) = \left(\partial^2 u(t) + u(t) \partial u(t) \right) \mathrm{d} t + \gamma \partial u(t) \mathrm{d} W(t), \quad x \in (0,1),
\end{equation}
where $W(t)$ is a real-valued Wiener process. Equation~\eqref{eq:Burgers_equation} was first studied in \cite{brzezniak_1991} and subsequently in \cite{DaPrato_1994} with space-time white noise. We consider the same setting treated in \cite[Example 6.3]{neelima_2020} of a one-dimensional gradient noise term. The novelty is that our main abstract theorem allows us to treat arbitrary moments in $\Omega$ under the classical parabolicity condition.
\begin{assumptions}\label{ass:burgers_assumptions}
Let $\gamma \in (-\sqrt 2, \sqrt 2)$ and $T > 0$. Set
\[
(V, H, V^*) = (H_0^1(0, 1), L^2(0, 1), H^{-1}(0, 1))
\]
and take $U = {\mathbb R}$.
\end{assumptions}
Now \eqref{eq:Burgers_equation} can be reformulated as a stochastic evolution equation
\begin{equation*}
\mathrm{d} u(t) = A(u(t)) \mathrm{d} t + B(u(t)) \mathrm{d} W(t),
\end{equation*}
where $A:H_0^1(\Distr) \to H^{-1}(\Distr)$ is given by
\begin{equation}\label{A_burgers}
\langle A(u), v\rangle = -\int_0^1 \partial u \, \partial v \, \mathrm{d} x + \int_0^1 u \, \partial u \, v \, \mathrm{d} x \qquad \text{for } u, v \in H_0^1(0, 1),
\end{equation}
and $B \colon H_0^1(0, 1) \to L^2(0, 1)$ is defined by
\begin{equation}\label{B_burgers}
B(v) = \gamma \partial v \qquad \text{for } v \in H_0^1(0, 1).
\end{equation}
Note that in order to align with our abstract framework we should take $B \colon H_0^1(0, 1) \to \mathcal L_2({\mathbb R},L^2(0, 1))$; however, we do not distinguish between the spaces $\mathcal L_2({\mathbb R},L^2(0, 1))$ and $L^2(0, 1)$, which are trivially isomorphic via the multiplication operation.
Since we can allow $p=\infty$ in the above, it will turn out that we obtain estimates that are uniform in $\Omega$ for this particular example. This is in correspondence with \cite{brzezniak_1991}, where such estimates were obtained in a different way.
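Before stating the result, one can get a feel for the energy estimate numerically. The sketch below is our own illustration, not part of the analysis: grid, time step, and initial datum are hypothetical choices. It integrates \eqref{eq:Burgers_equation} by an explicit Euler-Maruyama finite-difference scheme for a $\gamma$ inside the admissible range and reports initial and final $L^2$ norms.

```python
import math, random

random.seed(1)
N, steps, T = 64, 4000, 0.1
h, dt = 1.0 / N, T / steps
gamma = 1.0                        # gamma^2 = 1 < 2: within the admissible range
u = [math.sin(2 * math.pi * i * h) for i in range(N + 1)]  # Dirichlet data

e0 = math.sqrt(h * sum(v * v for v in u))                  # initial L^2 norm
for _ in range(steps):
    dW = random.gauss(0.0, math.sqrt(dt))
    new = [0.0] * (N + 1)
    for i in range(1, N):
        lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / h ** 2
        grad = (u[i + 1] - u[i - 1]) / (2 * h)
        new[i] = u[i] + (lap + u[i] * grad) * dt + gamma * grad * dW
    u = new

eT = math.sqrt(h * sum(v * v for v in u))
print(e0, eT)   # the energy does not exceed its initial value
```

The observed decay matches the pathwise identity $\mathrm{d}\|u\|_{L^2}^2 = (\gamma^2 - 2)\|\partial u\|_{L^2}^2\,\mathrm{d} t$: both the nonlinear term and the gradient noise drop out of the energy balance, exactly as in the coercivity computation below.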
\begin{proposition}
Suppose that Assumptions~\ref{ass:burgers_assumptions} are satisfied. Let $p\in [4, \infty]$. Then, for any $u_0 \in L^{p}(\Omega; L^2(0, 1))$ the equation \eqref{eq:Burgers_equation} has a unique solution $u$, and the following energy estimate holds
\begin{equation*}
\|u\|_{L^p(\Omega;C([0,T];L^2(0,1)))} + \|u\|_{L^p(\Omega;L^2(0,T;H^1_0(0,1)))} \leq C\|u_0\|_{L^p(\Omega; L^2(0,1))},
\end{equation*}
where $C$ only depends on $\gamma$.
\end{proposition}
\begin{proof}
As in previous instances, it suffices to verify Assumptions~\ref{main_assumptions}, \condref{it:hem}-\condref{it:bound2}, with $f = 0$ and $K_B = K_c = 0$, so that the proposition follows from Theorem~\ref{Main_theorem} and Corollary~\ref{cor:a_priori_remark}. Hemicontinuity \condref{it:hem} is obvious. In order to prove local weak monotonicity \condref{it:weak_mon}, note that for $u, v \in H_0^1(0, 1)$ we have
\begin{align*}
\langle A(u)-A(v), u-v\rangle & \stackrel{\eqref{A_burgers}}{=} -\int_0^1 \partial(u-v) \, \partial(u-v) \, \mathrm{d} x + \int_0^1 (u \, \partial u - v \, \partial v) \, (u-v) \, \mathrm{d} x \\
& \, = - \|u-v\|_{H_0^1(0, 1)}^2 + \frac 1 2 \int_0^1 (u-v) \, \partial (u^2-v^2) \, \mathrm{d} x.
\end{align*}
Integration by parts then entails
\begin{align*}
\frac 1 2 \int_0^1 (u-v) \, \partial (u^2-v^2) \, \mathrm{d} x & = -\frac{1}{2}\int_0^1 (u^2-v^2) \, \partial(u-v) \, \mathrm{d} x \\ & = -\frac{1}{6} \int_0^1 \partial(u-v)^3 \, \mathrm{d} x - \int_0^1 v \, (u-v) \, \partial(u-v) \, \mathrm{d} x \\
& = -\int_0^1 v (u-v) \, \partial(u-v) \, \mathrm{d} x,
\end{align*}
so that
\begin{align*}
\langle A(u)-A(v), u-v\rangle & = -\|u-v\|_{H_0^1(0, 1)}^2 - \int_0^1 v (u-v) \, \partial(u-v) \, \mathrm{d} x \\
& \leq -\|u-v\|_{H_0^1(0, 1)}^2 + \|v\|_{L^4 (0, 1)} \|u-v\|_{L^4(0,1)} \|u-v\|_{H_0^1(0, 1)}.
\end{align*}
We employ the Sobolev-Gagliardo-Nirenberg and Poincar\'{e} inequality to obtain
\begin{equation*}
\|v\|_{L^4(0, 1)} \leq C \|v\|_{L^2(0, 1)}^{\frac{3}{4}} \|v\|_{H_0^1(0, 1)}^{\frac{1}{4}}\leq C'\|v\|_{L^2(0, 1)}^{\frac{1}{2}} \|v\|_{H_0^1(0, 1)}^{\frac{1}{2}},
\end{equation*}
so that
\begin{align*}
\langle A(u)-A(v), u-v\rangle
& \leq -\|u-v\|_{H_0^1(0, 1)}^2 + \|v\|_{L^4(0, 1)} \|u-v\|_{L^2(0, 1)}^{\frac{1}{2}}\|u-v\|_{H_0^1(0, 1)}^{\frac{3}{2}} \\
& \stackrel{\mathrm{(i)}}{\leq} (\varepsilon-1)\|u-v\|_{H_0^1(0, 1)}^2 + C_{\varepsilon} \|v\|_{L^4(0, 1)}^4 \|u-v\|_{L^2(0, 1)}^2 \\
& \stackrel{\mathrm{(ii)}}{\leq} (\varepsilon-1)\|u-v\|_{H_0^1(0, 1)}^2 + C_{\varepsilon}\|v\|_{L^2(0, 1)}^2 \|v\|_{H_0^1(0, 1)}^2 \|u-v\|^2_{L^2(0, 1)},
\end{align*}
where (i) follows from Young's inequality for some $\varepsilon \in (0, 1)$ and (ii) is a consequence of the Sobolev-Gagliardo-Nirenberg inequality. Now we combine with \eqref{B_burgers} to get
\begin{equation*}
\begin{split}
&2\langle A(u)-A(v), u-v\rangle + \|B(u)-B(v)\|_{L^2(0, 1)}^2 \\ & \leq (\gamma^2 + 2\varepsilon-2)\|u-v\|_{H_0^1(0, 1)}^2
+ C_\varepsilon\big(1+\|v\|_{L^2(0, 1)}^2\big)\big(1+\|v\|_{H_0^1(0, 1)}^2\big)\|u-v\|_{L^2(0, 1)}^2,
\end{split}
\end{equation*}
where $u, v\in H_0^1(0, 1)$. Noting that $\gamma \in (-\sqrt 2, \sqrt 2)$ and taking $\varepsilon = \frac{2-\gamma^2}{2}$, \condref{it:weak_mon} holds with $K = C_\varepsilon$, $\alpha = 2$, and $\beta = 2$.
For coercivity \condref{it:coerc}, we first inspect the quantity $\frac{\|B(v)^*v\|_U^2}{\|v\|_H^2}$ with $v\in H_0^1(0, 1)$ and $v\neq 0$. Since $U = {\mathbb R}$, the quantity $B(v)^*v$ is the real number
\begin{equation*}
B(v)^*v = \gamma \int_0^1 v \, \partial v \, \mathrm{d} x = \frac{\gamma}{2} \int_0^1 \partial (v^2) \, \mathrm{d} x = 0,
\end{equation*}
where the last equality uses $v(0) = v(1) = 0$.
By \eqref{A_burgers} and \eqref{B_burgers}, this leads to
\begin{equation*}
\begin{split}
2\langle A(v), v\rangle + \|B(v)\|_{L^2(0,1)}^2 + & (p-2)\frac{\|B(v)^*v\|_U^2}{\|v\|_H^2} \\ & = -2\|v\|_{H_0^1(0, 1)}^2 + \int_0^1 v^2 \, \partial v \, \mathrm{d} x + \gamma^2\|v\|_{H_0^1(0, 1)}^2.\\
\end{split}
\end{equation*}
Since $\int_0^1 v^2 \, \partial v \, \mathrm{d} x = \frac 1 3 \int_0^1 \partial (v^3) \, \mathrm{d} x = 0$,
we get
\begin{equation*}
2\langle A(v), v\rangle + \|B(v)\|_{L^2(0, 1)}^2 + (p-2)\frac{\|B(v)^*v\|_U^2}{\|v\|_H^2} = (-2+\gamma^2)\|v\|_{H_0^1(0, 1)}^2.
\end{equation*}
Therefore, \condref{it:coerc} holds with $\theta = 2 - \gamma^2 > 0$, $\alpha = 2$, $f = 0$, and $K_c = 0$.
For the boundedness condition \condref{it:bound1}, let $u, v\in H_0^1(0, 1)$ and observe
\begin{equation*}
|\langle A(u), v \rangle| \leq \int_0^1 |\partial u| \, |\partial v| \, \mathrm{d} x + \Big|\int_0^1 u \, \partial u \, v \, \mathrm{d} x\Big|,
\end{equation*}
where
\begin{equation*}
\int_0^1 |\partial u| \, |\partial v| \, \mathrm{d} x \leq \|u\|_{H_0^1(0, 1)} \|v\|_{H_0^1(0, 1)}
\end{equation*}
by the Cauchy-Schwarz inequality and
\begin{equation*}
\begin{split}
\Big|\int_0^1 u \, \partial u \, v \, \mathrm{d} x \Big| & \ \stackrel{\mathrm{(i)}}{=} \Big|\int_0^1 \frac{1}{2} (u^2) \, \partial v \, \mathrm{d} x\Big| \stackrel{\mathrm{(ii)}}{\le} \frac{1}{2} \|u\|_{L^4(0, 1)}^2 \|v\|_{H_0^1(0, 1)} \\
&\stackrel{\mathrm{(iii)}}{\le} C \|u\|_{L^2(0, 1)}\|u\|_{H_0^1(0, 1)}\|v\|_{H_0^1(0, 1)},
\end{split}
\end{equation*}
where we have applied integration by parts in (i), H\"{o}lder's inequality in (ii), and the Sobolev-Gagliardo-Nirenberg inequality in (iii). This results in
\begin{equation*}
\left|\langle A(u), v\rangle\right| \leq \big(\|u\|_{H_0^1(0, 1)} + C\|u\|_{L^2(0, 1)}\|u\|_{H_0^1(0, 1)}\big)\|v\|_{H_0^1(0, 1)}.
\end{equation*}
Using $\alpha = 2$ as in \condref{it:weak_mon} and \condref{it:coerc}, we obtain
\begin{equation*}
\|A(u)\|_{H^{-1}(0, 1)}^2 \leq C' \|u\|_{H_0^1(0, 1)}^2 \big(1 + \|u\|_{L^2(0, 1)}^2\big),
\end{equation*}
proving \condref{it:bound1} with $K_A = C'$ and $\beta = 2$. Finally, for $v \in H_0^1(0, 1)$, $\|B(v)\|_{L^2(0, 1)}^2 = \gamma^2 \|v\|_{H_0^1(0, 1)}^2$, so that \condref{it:bound2} is satisfied with $K_B = 0$ and $K_\alpha = \gamma^2$.
\end{proof}
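The two cancellations that drove the proof, $\int_0^1 v\,\partial v\,\mathrm{d} x = 0$ and $\int_0^1 v^2\,\partial v\,\mathrm{d} x = 0$ for $v \in H_0^1(0,1)$, can also be checked on a grid. The sketch below is our own illustration with a hypothetical test function: it discretizes both integrals by a midpoint rule and confirms that they vanish up to discretization error.

```python
import math

N = 1000
h = 1.0 / N
# hypothetical test function with v(0) = v(1) = 0
v = [math.sin(math.pi * i * h) * math.exp(-i * h) for i in range(N + 1)]

def integral(m):
    # midpoint-rule discretization of int_0^1 v^m (dv/dx) dx
    return sum(((v[i] + v[i + 1]) / 2) ** m * (v[i + 1] - v[i]) for i in range(N))

print(abs(integral(1)), abs(integral(2)))   # both vanish up to discretization error
```

For $m = 1$ the midpoint rule even telescopes exactly to $\tfrac12(v(1)^2 - v(0)^2) = 0$, so the first value vanishes up to floating-point rounding alone.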
\subsection{Stochastic Navier-Stokes equations in 2D}\label{stoch_nav_sto}
Consider the stochastic Navier-Stokes equations in two space dimensions with multiplicative gradient noise
\begin{equation}\label{stoch_navier}
\mathrm{d} u(t) = (\nu \Delta u(t) - (u(t), \nabla )u(t))\mathrm{d} t + \sum\limits_{k=1}^\infty [(b_k, \nabla)u(t)] \mathrm{d} W_k(t) - (\nabla p)\, \mathrm{d} t.
\end{equation}
Here, $(W_k(t))_{t\geq 0}$ is a collection of independent real Wiener processes indexed by $k\in\mathbb{N}$. The coefficients $b_k$ are matrix-valued fields whose columns $(b_k^{i\gamma})_{i=1}^2$ are divergence-free vector fields (see Assumptions~\ref{navier_stokes_assumptions} below). Equation~\eqref{stoch_navier} was considered in \cite{brzezniak_1991} using semigroup methods, and later on in many other papers (see \cite{AV20_NS} and references therein). For simplicity we do not consider additional forcing terms, but they can be included without difficulty (see Remark \ref{rem:additive}).
In what follows, we use:
\begin{assumptions}\label{navier_stokes_assumptions}
Suppose $\mathcal{D} \subseteq \mathbb{R}^2$ is a bounded domain. Furthermore, assume $\nu > 0$, $T > 0$,
$(b_k)_{k \in {\mathbb N}} \in L^\infty((0,T)\times \Omega\times \mathcal D;\ell^2({\mathbb N};{\mathbb R}^{2\times2}))$ which is progressively measurable, and satisfies $\DIV b_k = (\sum_{i = 1}^2 \partial_i b_k^{i\gamma})_{\gamma = 1}^2 = 0$ in the sense of distributions for all $k \in {\mathbb N}$. We impose the coercivity condition that there exists $\kappa > 0$ such that
\begin{equation}\label{coercivity_assumption_navier}
\Big(2\nu \sum\limits_{i, \gamma = 1}^2 (\xi^{i, \gamma})^2 - \sum\limits_{k=1}^\infty\sum\limits_{\gamma, \gamma' = 1}^2 \sum\limits_{i, j = 1}^2 b_k^{i\gamma} b_k^{j\gamma'} \xi^{i, \gamma} \xi^{j, \gamma'} \Big) \geq \kappa \sum\limits_{i, \gamma = 1}^2 (\xi^{i, \gamma})^2,
\end{equation}
for all $\xi \in \mathbb{R}^{2\times 2}$.
Set $U := \ell^2$ and define $(V,H,V^*)$ by
\begin{equation*}
V = \{v \in W_0^{1, 2}(\mathcal{D}; \mathbb{R}^2) : \nabla \cdot v = 0 \ \text{a.e. on } \mathcal{D}\}, \quad \|v\|_V := \Big(\int_{\mathcal{D}}|\nabla v|^2 \mathrm{d} x\Big)^{\frac{1}{2}},
\end{equation*}
and where $H$ is the closure of $V$ with respect to the norm
\begin{equation*}
\|v\|_H := \Big(\int_{\mathcal{D}}|v|^2 \mathrm{d} x\Big)^{\frac{1}{2}}.
\end{equation*}
\end{assumptions}
Defining the Helmholtz-Leray projection $\mathbb{P}_\mathrm{HL}$ as the orthogonal projection
\begin{equation*}
\mathbb{P}_\mathrm{HL}: L^2(\mathcal{D}; \mathbb{R}^2) \to H,
\end{equation*}
equation~\eqref{stoch_navier} turns into a stochastic evolution equation
\begin{equation}\label{reduced_navier}
\mathrm{d} u(t) = (Lu(t)+ F(u(t)))\mathrm{d} t + \sum\limits_{k=1}^\infty B_k(u(t)) \mathrm{d} W_k(t),
\end{equation}
where $L: H^{2, 2}(\Distr; {\mathbb R}^2) \cap V \to H$ is given by
$$Lu = \nu \mathbb{P}_\mathrm{HL} (\Delta u), \quad u \in H^{2, 2}(\Distr; {\mathbb R}^2) \cap V$$
and can be extended to a map $L: V\to V^*$ such that $\|Lu\|_{V^*} \leq \nu \|u\|_V$ for $u \in V$. Furthermore, let $F: V \to V^*$ be the nonlinear operator given by
$$F(u) = -\mathbb{P}_\mathrm{HL}[(u, \nabla)u] = - \mathbb P_\mathrm{HL}[\DIV (u \otimes u)], \quad u \in V.$$
Finally, define $B \colon V \to \mathcal L_2(U,H)$ by
\[
B(u)e_k = B_k(u) = \mathbb{P}_\mathrm{HL}[(b_k, \nabla) u], \quad u \in V.
\]
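Condition \eqref{coercivity_assumption_navier} states that the quadratic form $\xi \mapsto 2\nu|\xi|^2 - \sum_k \big(\sum_{i,\gamma} b_k^{i\gamma}\xi^{i,\gamma}\big)^2$ on ${\mathbb R}^{2\times 2} \cong {\mathbb R}^4$ is bounded below by $\kappa|\xi|^2$, i.e. that $\lambda_{\max}\big(\sum_k \mathrm{vec}(b_k)\,\mathrm{vec}(b_k)^T\big) < 2\nu$. The sketch below (an illustration of ours; the constant matrices $b_k$ are hypothetical) checks this via power iteration on the rank-two Gram operator.

```python
import random

nu = 1.0
# hypothetical constant coefficients: vec(b_k) lists the four entries b_k^{i gamma}
b = [[0.5, 0.1, -0.2, 0.3],     # vec(b_1)
     [0.0, 0.4, 0.4, -0.1]]     # vec(b_2)

def matvec(x):
    # x -> (sum_k vec(b_k) vec(b_k)^T) x
    return [sum(sum(vk[i] * x[i] for i in range(4)) * vk[j] for vk in b)
            for j in range(4)]

random.seed(2)
x = [random.random() for _ in range(4)]
for _ in range(200):            # power iteration for the top eigenvalue
    y = matvec(x)
    n = sum(v * v for v in y) ** 0.5
    x = [v / n for v in y]
lam_max = sum(matvec(x)[j] * x[j] for j in range(4))

kappa = 2 * nu - lam_max        # coercivity margin for this choice of b_k
print(kappa)                    # > 0: condition \eqref{coercivity_assumption_navier} holds
```

Since the nonzero eigenvalues of $\sum_k \mathrm{vec}(b_k)\mathrm{vec}(b_k)^T$ coincide with those of the $2\times 2$ Gram matrix of the vectors $\mathrm{vec}(b_k)$, the result can also be cross-checked by hand for two noise channels.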
\begin{theorem}\label{thm:SNS}
Suppose Assumption~\ref{navier_stokes_assumptions} holds and let $p \in[2, \infty]$. Then, for any $u_0 \in L^{p}(\Omega; H)$, there exists a unique solution $u$ to equation \eqref{reduced_navier} and there exists a constant $C$ only depending on $\kappa$ such that
\begin{equation*}
\|u\|_{L^p(\Omega;C([0,T];H))} + \|u\|_{L^p(\Omega;L^2(0,T;V))} \leq C\|u_0\|_{L^p(\Omega; H)}.
\end{equation*}
\end{theorem}
\begin{remark}
The special case of periodic boundary conditions in case of rough initial data was recently considered in \cite{agresti2021stochastic}, where high order regularity was proved. There the monotone operator setting (in $L^2(\Omega)$) was combined with a new approach to SPDEs based on maximal regularity techniques (see \cite{AV19_QSEE_1, AV19_QSEE_2}). The main difficulty to prove high order regularity for the solution to \eqref{stoch_navier} is that the nonlinearity is {\em critical} for the space $L^2(0,T;V)$. Therefore, classical bootstrapping arguments do not give any regularity.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{thm:SNS}]
It suffices to use Theorem~\ref{Main_theorem} and Corollary~\ref{cor:a_priori_remark} with $A(u) := L u + F(u)$, for which \condref{it:hem}-\condref{it:bound2} have to be shown with $K_B = K_c = 0$. We will only show \condref{it:weak_mon}, \condref{it:coerc} and \condref{it:bound2}. For the other assumptions we refer to \cite{brzezniak_strong_2014,Rockner_SPDE_2015}. In order to show local weak monotonicity \condref{it:weak_mon}, let $u, v \in V$. The quantity $\langle Lu-Lv, u-v\rangle$ can be computed from the definition
\begin{equation}\label{eq:lin_navier}
\langle Lu-Lv, u-v\rangle = - \nu \|u-v\|_V^2.
\end{equation}
Next, we compute
\[
\langle F(u)-F(v), u-v\rangle = - \langle \DIV((u-v) \otimes v), u-v\rangle - \langle \DIV(u \otimes (u-v)), u-v\rangle,
\]
where
\begin{equation}\label{eq:nse_non_0}
\langle \DIV(u \otimes (u-v)), u-v\rangle = - \frac 1 2 \langle \nabla |u-v|^2, u \rangle = 0
\end{equation}
where the equalities in \eqref{eq:nse_non_0} follow from integration by parts and $\DIV u = 0$, and the Sobolev-Gagliardo-Nirenberg inequality entails
\[
- \langle \DIV((u-v) \otimes v), u-v\rangle \le C \|u-v\|_V^{\frac 3 2} \|u-v\|_H^{\frac 1 2} \|v\|_{L^4(\mathcal{D};\mathbb{R}^2)}
\]
for a constant $C$, so that by Young's inequality
\begin{equation}\label{eq:nonlin_navier}
\langle F(u)-F(v), u-v\rangle \leq \kappa \|u-v\|_V^2+\frac{C'}{\kappa^3}\|v\|_{L^4(\mathcal{D};\mathbb{R}^2)}^4\|u-v\|_H^2
\end{equation}
for a constant $C'$. Finally, the contribution \condref{it:weak_mon} coming from the stochastic integral is
\begin{equation}\label{eq:stoch_navier}
\begin{split}
\sum\limits_{k=1}^\infty \|B_k(u)-B_k(v)&\|^2_{H} \stackrel{(\mathrm{i})}{\leq} \sum\limits_{k=1}^\infty \|[(b_k, \nabla)(u-v)]\|^2_H\\
&\stackrel{(\mathrm{ii})}{=} \sum\limits_{k=1}^\infty \sum\limits_{\gamma,\gamma' =1}^2\sum\limits_{i, j = 1}^2 \int_{\mathcal{D}} b_k^{i\gamma}b_k^{j\gamma'}\partial_i (u-v)^{\gamma} \partial_j (u-v)^{\gamma'} \mathrm{d} x,
\end{split}
\end{equation}
where (i) follows since the projection $\mathbb{P}_\mathrm{HL}$ is contractive and (ii) follows by writing out the square in the first line. By \eqref{eq:lin_navier}, \eqref{eq:nonlin_navier}, \eqref{eq:stoch_navier} and the coercivity condition \eqref{coercivity_assumption_navier} we obtain
\begin{align*}
& 2\langle L u + F(u) - (L v +F(v)), u-v\rangle + \sum\limits_{k=1}^\infty \|B_k(u)-B_k(v)\|_H^2 \\
& \leq \frac{C'}{\kappa^3}\|v\|_{L^4(\mathcal{D}; \mathbb{R}^2)}^4 \|u-v\|_H^2 \\
& \stackrel{\mathrm{(i)}}{\leq} \frac{C''}{\kappa^3} \|v\|_V^2 \|v\|_H^2 \|u-v\|_H^2 \\
& \leq \frac{C''}{\kappa^3}(1+\|v\|_V^2)(1+\|v\|_H^2)\|u-v\|_H^2,
\end{align*}
where $C''$ is a constant and (i) follows from the Sobolev-Gagliardo-Nirenberg inequality. The above implies that \condref{it:weak_mon} holds with $\alpha = \beta = 2$ and $K = \frac{C''}{\kappa^3}$.
In order to show \condref{it:coerc}, note that
\begin{equation*}
\langle Lv, v \rangle \stackrel{\eqref{eq:lin_navier}}{=} -\nu \|v\|_V^2, \qquad v \in V.
\end{equation*}
We also note that $\langle F(v), v\rangle \stackrel{\eqref{eq:nse_non_0}}{=} 0$ for $v \in V$. Therefore, the only term that remains to be estimated is
$\frac{\|B^*(u)u\|_{U}^2}{\|u\|_H^2}$.
This will also turn out to be 0, by using that the components of $b_k$ are divergence-free vector fields. Indeed, we obtain for $k \in {\mathbb N}$,
\begin{equation*}
\begin{split}
&(B^*(v)v)_k= \int_{\mathcal{D}}\left[(b_k, \nabla) v\right]\cdot v \ \mathrm{d} x\\
&= \underbrace{\int_{\mathcal{D}} \big((b_k^{11}\partial_1 v^1) v^1 + (b_k^{12} \partial_2 v^1) v^1\big) \mathrm{d} x}_{\boxed{\text{A}}} + \int_{\mathcal{D}} \big((b_k^{21} \partial_1 v^2)v^2 + (b_k^{22} \partial_2 v^2)v^2\big) \mathrm{d} x.
\end{split}
\end{equation*}
By symmetry (renumbering the components), it suffices to treat $\boxed{\text{A}}$. Using integration by parts, we see:
\begin{equation*}
\begin{split}
\boxed{\text{A}} = \frac 1 2 \int_{\mathcal{D}} \big(b_k^{11} \partial_1 (v^1)^2 + b_k^{12} \partial_ 2 (v^1)^2\big) \mathrm{d} x = -\frac 1 2 \int_{\mathcal D} (\partial_1 b_k^{11} + \partial_2 b_k^{12}) (v^1)^2 \mathrm{d} x = 0
\end{split}
\end{equation*}
and thus $(B^*(v)v)_k = 0$ for all $k\in \mathbb{N}$. Combining the above with the coercivity condition \eqref{coercivity_assumption_navier}, we conclude that \condref{it:coerc} holds in the form
\begin{equation*}
2\langle Lv + F(v), v\rangle + \sum\limits_{k=1}^\infty \|B_k(v)\|_{H}^2 \leq -\kappa \|v\|_V^2, \qquad v \in V,
\end{equation*}
that is, we can choose $\theta = \kappa$, $f(t) = 0$, and $K_c = 0$.
In order to show \condref{it:bound2}, we use \eqref{coercivity_assumption_navier} and \eqref{eq:stoch_navier} once more and arrive at
\begin{equation*}
\sum\limits_{k=1}^\infty \|B_k(v)\|_{H}^2 \leq (2\nu -\kappa)\|v\|_V^2,
\end{equation*}
showing that \condref{it:bound2} also holds with $K_B = 0$ and $K_\alpha = 2\nu-\kappa$.
\end{proof}
\subsection{Systems of second order SPDEs}\label{systems_SPDE}
The authors of \cite{du_2020} develop
a $C^{2+\delta}$ theory for systems of SPDEs.
This relies on integral estimates for a model system of SPDEs (see \cite[Theorem~3.1]{du_2020}). We will show that one of the underlying assumptions, which the authors of \cite{du_2020} call the \textit{modified stochastic parabolicity condition}, fits naturally in our framework.
Sharpness follows from \cite[Example 1.1]{du_2020} which is based on \cite[Section 3]{KimLee}.
Consider a random field $$\mathbf{u} = (u^1, ..., u^N)': \mathbb{R}^d \times [0, \infty) \times \Omega \to \mathbb{R}^N$$ described by the following linear system of SPDEs:
\begin{equation}\label{eq: KaiDu_SPDE}
\mathrm{d} u^\alpha = \left(a^{ij}_{\alpha\beta} \partial_{ij} u^\beta + \phi_\alpha\right)\mathrm{d} t + \left(\sigma^{i}_{k, \alpha\beta}\partial_i u^\beta + \psi_{k, \alpha}\right) \mathrm{d} W_k(t),
\end{equation}
where the collection $\{W_k\}_{k\geq 1}$ are countably many independent Wiener processes.
In this section we use Einstein's summation convention with $$i, j = 1, 2, ..., d; \quad \alpha, \beta = 1, 2, ..., N; \quad k = 1, 2, ...$$ The assumptions are:
\begin{assumptions}\label{system_assumptions}
Let $p\in [2,\infty)$, $d\geq 1$ and $N\geq 1$. Let
\begin{equation*}
(V, H, V^*) = (H^{m+1}({\mathbb R}^d; {\mathbb R}^N), H^m({\mathbb R}^d; {\mathbb R}^N), H^{m-1}({\mathbb R}^d; {\mathbb R}^N))
\end{equation*}
and $U = \ell^2$.
Further assume that $a^{ij}_{\alpha\beta} \in L^\infty(\Omega \times [0, T])$ for all $1 \leq i, j \leq d$, $1 \leq \alpha, \beta \leq N$ and $(\sigma^{i}_{k, \alpha\beta})_{k=1}^\infty \in L^\infty(\Omega \times [0, T]; \ell^2)$ for all $1 \leq i \leq d$, $1 \leq \alpha, \beta \leq N$.
Moreover, suppose that the following modified stochastic parabolicity condition is satisfied:
\begin{itemize}
\item[(MSP)] The coefficients $a = (a^{ij}_{\alpha\beta})$ and $\sigma = (\sigma^{i}_{k, \alpha\beta})$ are said to satisfy the modified stochastic parabolicity (MSP) condition if there are measurable functions $\lambda^{i}_{k, \alpha\beta}: \mathbb{R}^d \times [0, \infty)\times \Omega \to \mathbb{R}$ with $\lambda^{i}_{k, \alpha\beta} = \lambda^{i}_{k, \beta\alpha}$ such that for
\begin{equation*}
\mathcal{A}^{ij}_{\alpha\beta} = 2a^{ij}_{\alpha\beta} -\sigma^{i}_{k, \gamma\alpha}\sigma^{j}_{k, \gamma\beta} - (p-2)(\sigma^{i}_{k, \gamma\alpha}-\lambda^{i}_{k, \gamma\alpha})(\sigma^{j}_{k, \gamma\beta}-\lambda^{j}_{k, \gamma\beta})
\end{equation*}
there exists a constant $\kappa > 0$ with
\begin{equation*}
\mathcal{A}^{ij}_{\alpha\beta} \xi_i \xi_j \eta^\alpha\eta^\beta \geq \kappa |\xi|^2 |\eta|^2 \quad \forall \xi\in\mathbb{R}^d, \eta\in\mathbb{R}^N
\end{equation*}
everywhere on $\mathbb{R}^d \times [0, \infty) \times \Omega$.
\end{itemize}
Suppose that $u_0\in L^{p}(\Omega,{\mathscr F}_0;H)$ and
\[
\phi \in L^p(\Omega; L^2([0, T]; H^{m-1}(\mathbb{R}^d; \mathbb{R}^N))), \quad
\psi\in L^p(\Omega; L^2([0, T]; H^m(\mathbb{R}^d; \ell^2(\mathbb{N}; \mathbb{R}^N)))).
\]
\end{assumptions}
\begin{remark}
The above ellipticity condition $\mathcal{A}^{ij}_{\alpha\beta} \xi_i \xi_j \eta^\alpha\eta^\beta \geq \kappa |\xi|^2 |\eta|^2$ is known as the Legendre-Hadamard condition. In case the coefficients depend on the space variable some smoothness is required if one wishes to assume this type of ellipticity. Alternatively, one can consider measurable coefficients with a more restrictive ellipticity condition. For details on these matters we refer to \cite{AHMT}.
In the MSP condition one typically takes
$\lambda^{i}_{k, \alpha\beta} = (\sigma^{i}_{k, \alpha\beta} + \sigma^{i}_{k, \beta\alpha})/2$ or $\lambda^{i}_{k, \alpha\beta} = 0$.
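To illustrate the role of the functions $\lambda^{i}_{k, \alpha\beta}$, consider for instance the scalar case $d = N = 1$ with coefficients $a$ and $(\sigma_k)_{k\geq 1}$. The choice $\lambda_k = \sigma_k$ is then always admissible (the symmetry requirement is vacuous for $N = 1$), and (MSP) reduces to the classical $p$-independent stochastic parabolicity condition
\begin{equation*}
2a - \sum\limits_{k=1}^\infty \sigma_k^2 \geq \kappa,
\end{equation*}
whereas the choice $\lambda_k = 0$ yields the more restrictive $p$-dependent condition $2a - (p-1)\sum_{k=1}^\infty \sigma_k^2 \geq \kappa$.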
\end{remark}
We can reformulate \eqref{eq: KaiDu_SPDE} as a stochastic evolution equation
\begin{equation}\label{reduced_kaidu}
\mathrm{d} u(t) = A(u(t)) \mathrm{d} t + \sum\limits_{k=1}^\infty B_k(u(t)) \mathrm{d} W_k(t).
\end{equation}
For this, define the deterministic part of the equation as an operator
\[
A\colon H^{m+1}(\mathbb{R}^d; \mathbb{R}^N) \to H^{m-1}(\mathbb{R}^d; \mathbb{R}^N)
\]
such that for any $u, v \in H^{m+1}(\mathbb{R}^d; \mathbb{R}^N)$
\begin{equation}\label{System_A}
\langle A(u), v \rangle = -\int_{\mathbb{R}^d} a^{ij}_{\alpha\beta} \partial_i u^\beta \partial_j u^\alpha \mathrm{d} x.
\end{equation}
The stochastic part of the equation is defined as an operator
\[
B\colon H^{m+1}(\mathbb{R}^d; \mathbb{R}^N) \to \mathcal L_2(\ell^2; H^m(\mathbb{R}^d; \mathbb{R}^N))
\]
such that for any $u \in H^{m+1}(\mathbb{R}^d; \mathbb{R}^N)$ and
\begin{equation}\label{System_B}
B(u)e_k = B_k(u) \quad \text{with} \quad B_{k, \alpha} (u) = \sigma^{i}_{k, \alpha\beta}\partial_i u^\beta.
\end{equation}
We are now in a position to recover \cite[Theorem~3.1]{du_2020}:
\begin{proposition}\label{prop:system}
Let $m\geq 0$, and suppose that Assumptions~\ref{system_assumptions} are satisfied.
Then, \eqref{eq: KaiDu_SPDE} has a unique solution
\[
u\in L^p(\Omega; C([0, T]; H^m(\mathbb{R}^d; \mathbb{R}^N)))\cap L^p(\Omega; L^2([0, T]; H^{m+1}(\mathbb{R}^d; \mathbb{R}^N))).
\]
Moreover, for any multi-index $\mathfrak{s}$ with $|\mathfrak{s}| \leq m$, there exists a constant $C$ depending on $d$, $\kappa$ and $K$ such that
\begin{align*}
& {\mathbb E} \sup\limits_{t\in[0, T]} \|\partial^{\mathfrak{s}} u(t)\|^p_{L^2(\mathbb{R}^d; \mathbb{R}^N)} + p^{-1/2} {\mathbb E} \Big(\int_0^T \|\partial^{\mathfrak{s}} \partial_x u(t)\|_{L^2(\mathbb{R}^d; \mathbb{R}^N)}^2 \mathrm{d} t\Big)^\frac{p}{2} \\
& \quad \leq C \Big({\mathbb E}\|\partial^{\mathfrak{s}} u_0\|_{L^2({\mathbb R}^d; {\mathbb R}^N)}^p + {\mathbb E}\Big(\int_0^T \|\partial^{\mathfrak{s}}\phi(t)\|_{H^{-1}(\mathbb{R}^d; \mathbb{R}^N)}^2 \mathrm{d} t \Big)^{\frac{p}{2}}\\
& \qquad + {\mathbb E} \Big(\int_0^T \|\partial^{\mathfrak{s}}\psi(t) \|_{L^2(\mathbb{R}^d; \ell^2(\mathbb{N}; \mathbb{R}^N))}^2 \mathrm{d} t\Big)^{\frac{p}{2}}\Big).
\end{align*}
\end{proposition}
\begin{proof}[Proof of Proposition~\ref{prop:system}]
Without loss of generality, one can restrict to the case $m = 0$, since the other cases can be obtained by differentiation. We will check the conditions of Theorem~\ref{Main_theorem} and Corollary~\ref{cor:a_priori_remark}. We proceed by showing that Assumptions~\ref{main_assumptions} \condref{it:hem}-\condref{it:bound2} hold. By Remark~\ref{rem:additive} we may assume $\phi = \psi = 0$. We only verify coercivity \condref{it:coerc}, since \condref{it:hem}, \condref{it:weak_mon}, \condref{it:bound1} and \condref{it:bound2} are very similar to the stochastic heat equation, to which equation~\eqref{eq: KaiDu_SPDE} reduces on setting $N=1$, and which was treated in Subsection~\ref{stoch_heat_dir} on arbitrary domains. To this end, let $v \in H^1(\mathbb{R}^d; \mathbb{R}^N)$ and consider the following (using the summation convention):
\begin{align*}
2\langle A(v), v \rangle & = -2\int_{\mathbb{R}^d}a^{ij}_{\alpha\beta} \partial_i v^\beta \partial_j v^\alpha \mathrm{d} x.
\end{align*}
Next, we use definition \eqref{System_B} to consider the term $\|B_t(v)\|^2$:
\begin{equation*}
\begin{split}
\|B_t(v)\|_{L^2(\mathbb{R}^d; \ell^2(\mathbb{N}; \mathbb{R}^N))}^2
&= \int_{\mathbb{R}^d} \sigma^{i}_{k, \gamma\alpha}\sigma^{j}_{k, \gamma\beta} \partial_i v^{\beta} \partial_j v^{\alpha} \mathrm{d} x
\end{split}
\end{equation*}
Considering $\|B_t(v)^*v\|_{\ell^2}^2/\|v\|_{L^2({\mathbb R}^d;{\mathbb R}^N)}^2$,
for $v\in H^1(\mathbb{R}^d; \mathbb{R}^N)$, $v\neq 0$, we have:
\begin{equation}\label{adjoint_term}
\begin{split}
\|B_t(v)^*v\|_{\ell^2}^2 &= \sum\limits_{k=1}^\infty |(B_t(v)^*v)_k|^2=\sum\limits_{k=1}^\infty\Big(\int_{\mathbb{R}^d} \sigma^{i}_{k, \gamma\beta} (\partial_ i v ^\beta) v^\gamma \mathrm{d} x\Big)^2.
\end{split}
\end{equation}
Note that the following identity holds:
\begin{equation*}
\sigma^{i}_{k, \gamma\beta}\partial_i v^\beta v^\gamma = (\sigma^{i}_{k, \gamma\beta}-\lambda^{i}_{k, \gamma\beta})v^\gamma \partial_i v^\beta + \frac{1}{2} \lambda_{k, \gamma\beta}^{i} \partial_i(v^\gamma v^\beta).
\end{equation*}
Integrating both sides of the above expression over $\mathbb{R}^d$, by equation \eqref{adjoint_term} we find:
\begin{align*}
\|B_t(v)^*v\|_{\ell^2}^2 &= \sum\limits_{k=1}^\infty\Big(\int_{\mathbb{R}^d} (\sigma^{i}_{k,\gamma\beta} - \lambda^{i}_{k,\gamma\beta}) (\partial_i v^\beta) v^\gamma \mathrm{d} x\Big)^2 \\ & \stackrel{\mathrm{(i)}}{\leq} \sum\limits_{k=1}^\infty \Big(\int_{\mathbb{R}^d}\Big(\sum\limits_{\gamma=1}^N (v^\gamma)^2\Big)^{\frac{1}{2}} \Big( \sum\limits_{\gamma=1}^N((\sigma^{i}_{k,\gamma\beta} - \lambda^{i}_{k,\gamma\beta})\partial_i v^\beta)^2\Big)^{\frac{1}{2}} \mathrm{d} x\Big)^2 \\ & \stackrel{\mathrm{(ii)}}{\leq} \sum\limits_{k=1}^\infty \|v\|^2_{L^2(\mathbb{R}^d; \mathbb{R}^N)} \Big(\int_{\mathbb{R}^d}((\sigma^{i}_{k,\gamma\beta}-\lambda^{i}_{k,\gamma\beta})\partial_i v^\beta)^2 \mathrm{d} x \Big) \\
& = \|v\|^2_{L^2(\mathbb{R}^d; \mathbb{R}^N)} \int_{\mathbb{R}^d} (\sigma^{i}_{k,\gamma\beta}-\lambda^{i}_{k,\gamma\beta})(\sigma^{j}_{k,\gamma\alpha}-\lambda^{j}_{k,\gamma\alpha})\partial_i v^\beta \partial_j v^\alpha \mathrm{d} x,
\end{align*}
where the Cauchy-Schwarz inequality in $\mathbb{R}^N$ is applied at (i) and in $L^2(\mathbb{R}^d)$ at (ii). This leads to
\begin{equation*}
\frac{\|B_t(v)^*v\|_{\ell^2}^2}{\|v\|^2_{L^2(\mathbb{R}^d; \mathbb{R}^N)}} \leq \sum\limits_{k=1}^\infty\Big(\int_{\mathbb{R}^d} (\sigma^{i}_{k,\gamma\beta}-\lambda^{i}_{k,\gamma\beta})(\sigma^{j}_{k,\gamma\alpha}-\lambda^{j}_{k,\gamma\alpha})\partial_i v^\beta \partial_j v^\alpha \mathrm{d} x \Big).
\end{equation*}
Therefore, the coercivity condition \condref{it:coerc} can be derived from (MSP) as:
\begin{equation*}
\begin{split}
&2\langle A(v), v\rangle + \|(B_t(v))\|_{L^2(\mathbb{R}^d; \ell^2(\mathbb{N};\mathbb{R}^N))}^2 + (p-2)\frac{\|B_t(v)^*v\|_{\ell^2}^2}{\|v\|^2_{L^2(\mathbb{R}^d; \mathbb{R}^N)}}\\
&\leq \int_{\mathbb{R}^d} \left(-2a^{ij}_{\alpha\beta} + \sigma^{i}_{k,\gamma\alpha}\sigma^{j}_{k,\gamma\beta} + (p-2)(\sigma^{i}_{k,\gamma\beta}-\lambda^{i}_{k,\gamma\beta})(\sigma^{j}_{k,\gamma\alpha}-\lambda^{j}_{k,\gamma\alpha})\right)\partial_i v^\beta \partial_j v^\alpha \mathrm{d} x\\
&\leq -\kappa \|v\|_{H^1(\mathbb{R}^d; \mathbb{R}^N)}^2 + \kappa\|v\|_{L^2({\mathbb R}^d;{\mathbb R}^N)}^2,
\end{split}
\end{equation*}
which shows that \condref{it:coerc} holds with $\theta = \kappa$, $f = 0$, and $K_c = \kappa$.
\end{proof}
\subsection{Higher order SPDEs}\label{higher_SPDE}
In this section we consider the following on ${\mathbb R}^d$:
\begin{equation}\label{higher_order_spde}
\mathrm{d} u(t) = \Big[(-1)^{m+1}\sum\limits_{|\alpha|,|\beta|=m}\partial^\beta \big(A^{\alpha\beta}\partial^{\alpha}u(t)\big) + \phi(t)\Big]\mathrm{d} t + \sum\limits_{k=1}^\infty \Big[\sum\limits_{|\alpha|=m}B_{k, \alpha}\partial^\alpha u(t) + \psi_{k, t}\Big] \mathrm{d} W_k(t),
\end{equation}
where $(W_k(t))_{t\geq 0}$ are countably many independent Wiener processes.
The above equation was considered in \cite{wang_2020}, and below we will show that the $p$-dependent well-posedness results can be obtained within our abstract framework. Additionally, we allow the coefficients to be space dependent. The assumptions are:
\begin{assumptions}\label{higher_order_assumptions}
Let $p \in [2, \infty)$, $d\geq 1$, $m \geq 1$ and let
\begin{equation*}
(V, H, V^*) = (H^m(\mathbb{R}^d), L^2(\mathbb{R}^d), H^{-m}(\mathbb{R}^d)).
\end{equation*}
and take $U = \ell^2$. Further assume that the coefficients $A^{\alpha\beta} \in L^\infty(\Omega\times[0, T]\times {\mathcal D})$ for all $1\leq\alpha,\beta \leq d$. Suppose that
\[(B_{k, \alpha})_{k=1}^\infty \in \left\{
\begin{array}{ll}
L^\infty(\Omega\times [0, T] \times {\mathbb R}^d; \ell^2), & \hbox{if $m$ is even;} \\
W^{1, \infty}({\mathbb R}^d;\ell^2), & \hbox{if $m$ is odd.}
\end{array}
\right.
\]
Assume that the coefficients satisfy the following coercivity condition:
\begin{equation}\label{eq:wang_coercivity}
2\sum\limits_{|\alpha|, |\beta| = m} A^{\alpha\beta}\xi_\alpha\xi_\beta - \frac{p+(-1)^m(p-2)}{2}\sum\limits_{k=1}^\infty\Big|\sum\limits_{|\alpha|=m} B_{k, \alpha} \xi_\alpha \Big|^2 \geq \lambda \sum\limits_{|\alpha|=m} |\xi_\alpha|^2,
\end{equation}
where $\lambda >0$. Furthermore, suppose $u_0\in L^{p}(\Omega, {\mathscr F}_0; H)$,
\[
\phi\in L^p(\Omega; L^2([0, T]; H^{-m}(\mathbb{R}^d))) \quad \text{and} \quad \psi\in L^p(\Omega; L^2([0, T]; L^2(\mathbb{R}^d; \ell^2))).
\]
\end{assumptions}
Next, we reformulate SPDE \eqref{higher_order_spde} into a stochastic evolution equation $$\mathrm{d} u(t) = A(t, u(t)) \mathrm{d} t + \sum\limits_{k=1}^\infty B_k(t, u(t)) \mathrm{d} W_k(t).$$
The drift part of the equation is defined as a time-dependent linear operator
\[
A(t)\colon H^{m}(\mathbb{R}^d) \to H^{-m}(\mathbb{R}^d),
\]
where for all $u, v \in H^{m}(\mathbb{R}^d)$:
\begin{equation}\label{higher_order_A}
\langle A(t, u), v\rangle = - \sum\limits_{|\alpha|, |\beta| = m} \langle A^{\alpha\beta}\partial^\alpha u, \partial^\beta v\rangle = -\sum\limits_{|\alpha|, |\beta| = m} \int_{\mathbb{R}^d} A^{\alpha\beta} (\partial^\alpha u) (\partial^\beta v) \mathrm{d} x.
\end{equation}
Similarly, the stochastic part is defined as a time-dependent linear operator
\[
B\colon H^m({\mathbb R}^d) \to \mathcal L_2(\ell^2, L^2(\mathbb{R}^d)),
\]
where for all $u \in H^{m}(\mathbb{R}^d)$
\begin{equation*}
B(u)e_k = B_k(u)= \sum\limits_{|\alpha|=m} B_{k, \alpha}(t) \partial^\alpha u.
\end{equation*}
\begin{proposition}\label{prop:higherorder}
Suppose that Assumption~\ref{higher_order_assumptions} is satisfied. Then, \eqref{higher_order_spde} has a unique solution
\[
u \in L^p(\Omega; C([0, T]; L^2(\mathbb{R}^d))) \cap L^p(\Omega; L^2([0, T]; H^{m}(\mathbb{R}^d))).
\]
Furthermore, there exists a constant $C$ only depending on $\lambda$, $d$ and $p$ such that
\begin{align*}
&{\mathbb E} \sup\limits_{t\in[0, T]}\|u(t)\|_{L^2(\mathbb{R}^d)}^p + {\mathbb E} \Big(\int_0^T \|u(t)\|_{H^m(\mathbb{R}^d)}^2 \mathrm{d} t\Big)^{\frac{p}{2}} \\ &\leq Ce^{CT} \bigg({\mathbb E}\|u_0\|_{L^2({\mathbb R}^d)}^p + {\mathbb E} \Big(\int_0^T \|\phi(t)\|^2_{H^{-m}(\mathbb{R}^d)}\mathrm{d} t\Big)^{\frac{p}{2}} + {\mathbb E} \Big(\int_0^T \|\psi(t)\|_{L^2(\mathbb{R}^d;\ell^2)}^2 \mathrm{d} t\Big)^{\frac{p}{2}}\bigg).
\end{align*}
\end{proposition}
\begin{remark}
If the coefficients are not space-dependent, one can shift the regularity as in subsection \ref{systems_SPDE}. Moreover, in that case the estimate can be obtained with more explicit constants independent of $p$ and $T$ as in Corollary \ref{cor:a_priori_remark}.
\end{remark}
\begin{remark}
For $m$ even, no smoothness assumptions on $B$ have been made. In case $m$ is odd one can also deal with the non-smooth case, but this will require a $p$-dependent coercivity condition as in the even case.
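This dichotomy can be read off from the prefactor in \eqref{eq:wang_coercivity}: for $m$ odd one has $\frac{p+(-1)^m(p-2)}{2} = 1$, so the assumed coercivity condition is independent of $p$, whereas for $m$ even the prefactor equals $p-1$.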
\end{remark}
Before starting the proof, we state a lemma needed for the coercivity condition.
\begin{lemma}\label{lemma: higher_order_coercivity}
Suppose that $m = 2n+1$ with $n\in {\mathbb N}_0$, and that Assumption~\ref{higher_order_assumptions} is satisfied.
Let $\zeta \in W^{1, \infty}({\mathbb R}^d;\ell^2)$ and let $\alpha\in {\mathbb N}_0^d$ be such that $|\alpha|\leq m$. Then for every $\varepsilon>0$ there exists a $C_{\varepsilon}>0$, depending on $m$ and $\zeta$, such that for all $v\in H^m({\mathbb R}^d)$
\[
\frac{\Big\|\int_{{\mathbb R}^d} \zeta v \partial^{\alpha} v dx\Big\|_{\ell^2}}{\|v\|_{L^2({\mathbb R}^d)}}\leq \varepsilon \|v\|_{H^{m}({\mathbb R}^d)} + C_{\varepsilon} \|v\|_{L^2({\mathbb R}^d)}.
\]
\end{lemma}
\begin{proof}
By density it suffices to consider $v\in C^\infty_c({\mathbb R}^d)$.
If $|\alpha|=m$, then $|\alpha|$ is odd and we first reduce the number of derivatives by one.
Integrating by parts $|\alpha|$ times we obtain
\begin{align*}
\int_{{\mathbb R}^d} \zeta_k v \partial^{\alpha} v dx = -\int_{{\mathbb R}^d} \zeta_k v\partial^{\alpha}v dx + R_k,
\end{align*}
where $R_k$ is a linear combination of terms of the form
$\int_{{\mathbb R}^d} \partial^{\widetilde{\alpha}} \zeta_k \partial^{\beta} v \partial^{\gamma} v dx$ with $|\widetilde{\alpha}| + |\beta| + |\gamma| = |\alpha|$ and $|\widetilde{\alpha}| = 1$. Therefore,
$\int_{{\mathbb R}^d} \zeta_k v \partial^{\alpha} v dx =\frac12R_k$
is of lower order in $v$. Moreover, note that
\[\Big\|\int_{{\mathbb R}^d} \partial^{\widetilde{\alpha}} \zeta \partial^{\beta} v \partial^{\gamma} v dx\Big\|_{\ell^2} \leq \|\zeta\|_{W^{1, \infty}({\mathbb R}^d;\ell^2)}\int_{{\mathbb R}^d} |\partial^{\beta} v| \, |\partial^{\gamma} v| dx.\]
From the above it follows that it remains to show that for all multi-indices $\beta, \gamma$ with $|\beta|+|\gamma|\leq m-1$,
\begin{align}\label{eq:oddexpress}
\frac{\int_{{\mathbb R}^d} |\partial^{\beta} v| \, |\partial^{\gamma} v| dx}{\|v\|_{L^2({\mathbb R}^d)}} \leq \varepsilon \|v\|_{H^{m}({\mathbb R}^d)} + C_{\varepsilon} \|v\|_{L^2({\mathbb R}^d)}
\end{align}
By Cauchy--Schwarz' inequality and standard interpolation estimates we find that
\begin{align*}
\int_{{\mathbb R}^d} |\partial^{\beta} v| \, |\partial^{\gamma} v| dx& \leq \|v\|_{H^{|\beta|}({\mathbb R}^d)} \|v\|_{H^{|\gamma|}({\mathbb R}^d)}
\\ & \leq C\|v\|_{H^m({\mathbb R}^d)}^{\frac{|\beta|}{m}} \|v\|_{L^2({\mathbb R}^d)}^{1-\frac{|\beta|}{m}} \|v\|_{H^m({\mathbb R}^d)}^{\frac{|\gamma|}{m}} \|v\|_{L^2({\mathbb R}^d)}^{1-\frac{|\gamma|}{m}}
\\ & = C\|v\|_{H^m({\mathbb R}^d)}^{\ell/m} \|v\|_{L^2({\mathbb R}^d)}^{2-\frac{\ell}{m}}
\end{align*}
where we have set $\ell:=|\beta|+|\gamma|\leq m-1$.
Therefore, by Young's inequality we obtain that for every $\varepsilon>0$ there exists a $C_{\varepsilon}>0$ such that
\begin{align*}
\frac{\int_{{\mathbb R}^d} |\partial^{\beta} v| \, |\partial^{\gamma} v| dx}{\|v\|_{L^2({\mathbb R}^d)}} \leq C \|v\|_{H^m({\mathbb R}^d)}^{\ell/m} \|v\|_{L^2({\mathbb R}^d)}^{1-\frac{\ell}{m}} \leq \varepsilon \|v\|_{H^m({\mathbb R}^d)} +C_{\varepsilon}\|v\|_{L^2({\mathbb R}^d)}
\end{align*}
which is \eqref{eq:oddexpress}.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:higherorder}]
By Remark~\ref{rem:additive} we may assume $\phi = \psi = 0$. We only check coercivity \condref{it:coerc}, since the other conditions are similar to the stochastic heat equation treated in Subsections~\ref{stoch_heat_dir} and \ref{stoch_heat_neu} in the case of bounded domains. From now on, consider an arbitrary $v\in H^m(\mathbb{R}^d)$. From \eqref{higher_order_A}, we see that
\begin{equation*}
2\langle A(t, v), v \rangle = -2 \sum\limits_{|\alpha|, |\beta| = m} \int_{\mathbb{R}^d} A^{\alpha\beta} (\partial^\alpha v) (\partial^\beta v) \mathrm{d} x.
\end{equation*}
For $\|B(t, v)\|_{L^2(\mathbb{R}^d;\ell^2)}^2$ we obtain
\begin{align*}
\|B(t, v)\|_{L^2(\mathbb{R}^d;\ell^2)}^2 & = \sum\limits_{k=1}^\infty \Big\|\sum\limits_{|\alpha|=m} B_{k, \alpha} \partial^\alpha v\Big\|_{L^2(\mathbb{R}^d)}^2 \\ & = \sum\limits_{k=1}^\infty \int_{\mathbb{R}^d} \sum\limits_{|\alpha|, |\beta|=m} B_{k, \alpha} B_{k, \beta} (\partial^\alpha v) (\partial^\beta v) \mathrm{d} x.
\end{align*}
The last term that needs to be inspected is $\|B(t, v)^*v\|_{\ell^2}^2$; we treat the cases $m$ odd and $m$ even separately. If $m$ is odd, write $m = 2n + 1$ for $n \in {\mathbb N}_0$.
By Lemma~\ref{lemma: higher_order_coercivity} we obtain
\begin{equation}
\frac{\|B(t, v)^*v\|_{\ell^2}^2}{\|v\|_{L^2(\mathbb{R}^d)}^{2}} \leq \varepsilon \|v\|_{H^m({\mathbb R}^d)}^2 + C_\varepsilon\|v\|_{L^2({\mathbb R}^d)}^2,
\end{equation}
where we are free to choose $\varepsilon > 0$, and $C_\varepsilon$ depends on $B$. Therefore, if $m$ is odd, the following inequalities for the coercivity condition \condref{it:coerc} hold:
\begin{equation*}
\begin{split}
&2\langle A(t, v), v\rangle + \|B(t, v)\|_{L^2(\mathbb{R}^d;\ell^2)}^2 + (p-2)\frac{\|B(t, v)^*v\|_{\ell^2}^2}{\|v\|_{L^2(\mathbb{R}^d)}^2}\\
&\leq \sum\limits_{|\alpha|, |\beta| = m} \int_{\mathbb{R}^d} \Big(-2A^{\alpha\beta} + \sum\limits_{k=1}^\infty B_{k, \alpha} B_{k, \beta} \Big) (\partial^\alpha v) (\partial^\beta v) \mathrm{d} x\\
&\quad + \varepsilon (p-2)\|v\|_{H^m({\mathbb R}^d)}^2 + C_\varepsilon (p-2) \|v\|_{L^2({\mathbb R}^d)}^2\\
&\leq (-\lambda +\varepsilon(p-2)) \|v\|_{H^m(\mathbb{R}^d)}^2 + C_\varepsilon(p-2) \|v\|_{L^2({\mathbb R}^d)}^2.
\end{split}
\end{equation*}
Choosing $\varepsilon$ small enough, the coercivity condition \condref{it:coerc} holds with $\theta = \lambda-\varepsilon(p-2)$, $f = 0$ and $K_c = C_\varepsilon(p-2)$.
If $m$ is even, we use the Cauchy-Schwarz inequality to show
\begin{equation*}
\frac{\|B(t, v)^*v\|_{\ell^2}^2}{\|v\|_{L^2(\mathbb{R}^d)}^{2}}\leq \|B(t, v)\|_{L^2(\mathbb{R}^d;\ell^2)}^2.
\end{equation*}
Using the condition \eqref{eq:wang_coercivity} on the coefficients of Assumptions~\ref{higher_order_assumptions}, we can combine all terms to get the following inequalities for the coercivity condition \condref{it:coerc}:
\begin{equation*}
\begin{split}
&2\langle A(t, v), v\rangle + \|B(t, v)\|_{L^2(\mathbb{R}^d;\ell^2)}^2 + (p-2)\frac{\|B(t, v)^*v\|_{\ell^2}^2}{\|v\|_{L^2(\mathbb{R}^d)}^2}\\
&\leq \sum\limits_{|\alpha|, |\beta| = m} \int_{\mathbb{R}^d} \Big(-2A^{\alpha\beta} + (p-1)\sum\limits_{k=1}^\infty B_{k, \alpha} B_{k, \beta} \Big) (\partial^\alpha v) (\partial^\beta v) \mathrm{d} x\\
&\leq -\lambda \|v\|_{H^m(\mathbb{R}^d)}^2.
\end{split}
\end{equation*}
In this case, the coercivity condition \condref{it:coerc} holds with $\theta = \lambda$, $f = 0$, and $K_c = 0$.
\end{proof}
\subsection{Stochastic p-Laplacian with Dirichlet boundary conditions}\label{stoch_p_lapl}
We consider the following stochastic version of the $p$-Laplace equation:
\begin{equation}\label{stoch_p_laplacian}
\mathrm{d} u(t) = \nabla \cdot (|\nabla u(t)|^{\alpha-2} \nabla u(t)) \mathrm{d} t+ \sum\limits_{k=1}^\infty B_k(u(t)) \mathrm{d} W_k(t),
\end{equation}
where $(W_k(t))_{t\geq 0}$ are countably many independent Wiener processes. Since we reserve $p$ for the moment in probability, we use $\alpha > 2$ instead of $p$ in the $p$-Laplacian.
We will prove existence, uniqueness and an energy estimate. The arguments are similar to those in \cite{neelima_2020}, where a slightly different leading order operator in \eqref{stoch_p_laplacian} is considered, together with an additional nonlinear term $f(u)\mathrm{d} t$, which can also be included in our setting.
\begin{assumption}\label{assumptions_p_laplacian}
Let $\Distr \subset {\mathbb R}^d$ be a bounded domain, $\alpha > 2$, $\gamma^2 \leq 8 \frac{\alpha-1}{\alpha^2}$ and $p \in \big[2, \frac{2}{\gamma^2} + 1\big)$ and $u_0 \in L^{p}(\Omega; L^2(\mathcal{D}))$. Consider
\[
(V, H, V^*) = (W_0^{1, \alpha}(\Distr), L^2(\Distr), W^{1, \alpha}_0(\Distr)^*),
\]
and set $U = \ell^2$. Let $B \colon W_0^{1,\alpha}(\mathcal D) \to \mathcal L_2(\ell^2,L^2(\mathcal D))$, where for $u \in W^{1,\alpha}_0(\mathcal D)$ we have $B(u) e_k = B_k(u)$, and assume that each $B_k\colon W_0^{1, \alpha}(\Distr) \to L^2(\Distr)$ satisfies $B_k(0) = 0$ and, for all $u, v \in W_0^{1, \alpha}(\mathcal{D})$:
\begin{equation}\label{p-laplace-cond-bk}
\|B_k(u)-B_k(v)\|^2_{L^2(\mathcal{D})} \leq \gamma_k^2 \| |\nabla u|^{\frac{\alpha}{2}}-|\nabla v|^{\frac{\alpha}{2}}\|_{L^2(\mathcal{D})}^2 + C_k^2\|u-v\|_{L^2(\mathcal{D})}^2,
\end{equation}
where we assume $\sum_{k = 1}^\infty \gamma_k^2 \leq \gamma^2$ and $\sum_{k = 1}^\infty C_k^2 < \infty$.
\end{assumption}
Next, we turn SPDE \eqref{stoch_p_laplacian} into a stochastic evolution equation of the form
\[
\mathrm{d} u(t) = A(u(t)) \mathrm{d} t + \sum\limits_{k=1}^\infty B_k(u(t)) \mathrm{d} W_k(t),
\]
where $A\colon W_0^{1, \alpha}(\mathcal{D}) \to W_0^{1, \alpha}(\mathcal{D})^*$ is given by
\[
\langle A(u), v\rangle = -\int_{\mathcal{D}}|\nabla u|^{\alpha-2} \nabla u \cdot \nabla v \mathrm{d} x \qquad \text{for all } u, v \in W_0^{1, \alpha}(\mathcal{D}).
\]
\begin{proposition}
Given Assumption~\ref{assumptions_p_laplacian}, there exists a unique solution to equation \eqref{stoch_p_laplacian}. Furthermore, there exists a constant $C$ depending on $\gamma$, $\alpha$ and $p$ such that the following estimate holds:
\begin{equation*}
{\mathbb E}\sup\limits_{t\in[0, T]} \|u(t)\|_{L^2(\mathcal D)}^p + {\mathbb E}\Big(\int_0^T \|u(t)\|_{W_0^{1, \alpha}(\mathcal{D})}^\alpha \mathrm{d} t\Big)^{\frac{p}{2}}
\leq Ce^{CT}{\mathbb E}\|u_0\|_{L^2(\mathcal{D})}^p.
\end{equation*}
\end{proposition}
\begin{remark}
An admissible choice for $B_k$ is $B_k(u) = \gamma_k |\nabla u|^{\frac{\alpha}{2}}$.
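Indeed, for this choice $\|B_k(u)-B_k(v)\|_{L^2(\mathcal{D})}^2 = \gamma_k^2 \| |\nabla u|^{\frac{\alpha}{2}}-|\nabla v|^{\frac{\alpha}{2}}\|_{L^2(\mathcal{D})}^2$, so that \eqref{p-laplace-cond-bk} holds with $C_k = 0$, and clearly $B_k(0) = 0$.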
\end{remark}
\begin{proof}
We show that \condref{it:hem}-\condref{it:bound2} hold for equation \eqref{stoch_p_laplacian} and can therefore apply Theorem~\ref{Main_theorem}. Hemicontinuity \condref{it:hem} can be found in \cite[p. 82]{Rockner_SPDE_2015}. For local weak monotonicity \condref{it:weak_mon}, take $u, v \in W_0^{1, \alpha}(\mathcal{D})$ and consider the following inequality which follows from \cite[p. 82]{Rockner_SPDE_2015}:
\begin{equation}\label{H2_term1}
2\langle A(u)-A(v), u-v\rangle \leq - 2\int_{\mathcal{D}} \left(|\nabla u|^{\alpha-1}-|\nabla v|^{\alpha-1}\right) (|\nabla u| - |\nabla v|) \mathrm{d} x
\end{equation}
We now consider the other term for \condref{it:weak_mon}. By \eqref{p-laplace-cond-bk} we obtain
\begin{equation}\label{H2_term2}
\begin{aligned}
\|B(u)-&B(v)\|_{\mathcal L_2(\ell^2,L^2(\mathcal D))}^2 \leq \sum\limits_{k=1}^\infty \|B_k(u)-B_k(v)\|^2_{L^2(\mathcal{D})}\\
&\leq \sum\limits_{k=1}^\infty\gamma_k^2 \| |\nabla u|^{\frac{\alpha}{2}}-|\nabla v|^{\frac{\alpha}{2}}\|_{L^2(\mathcal{D})}^2 + \sum\limits_{k=1}^\infty C_k^2\|u-v\|_{L^2(\mathcal{D})}^2 \\
&\leq \gamma^2 \| |\nabla u|^{\frac{\alpha}{2}}-|\nabla v|^{\frac{\alpha}{2}}\|_{L^2(\mathcal{D})}^2 + C \|u-v\|_{L^2(\mathcal{D})}^2.
\end{aligned}
\end{equation}
The bounds \eqref{H2_term1} and \eqref{H2_term2} combine to
\begin{equation*}
\begin{split}
&2\langle A(u)-A(v), u-v\rangle + \|B(u)-B(v)\|^2_{\mathcal L_2(\ell^2,L^2(\mathcal D))} \\
&\leq -\int_{\mathcal{D}} \big(2(|\nabla u|^{\alpha-1}-|\nabla v|^{\alpha-1}) (|\nabla u| - |\nabla v|) - \gamma^2 (|\nabla u |^{\frac{\alpha}{2}}- |\nabla v|^{\frac{\alpha}{2}})^2\big) \mathrm{d} x\\
& \quad + C\|u-v\|_{L^2(\mathcal{D})}^2.
\end{split}
\end{equation*}
Now \condref{it:weak_mon} follows from the inequality (which holds since $\gamma^2 \leq 8 \frac{\alpha-1}{\alpha^2}$):
\begin{equation*}
2\left(x^{\alpha-1}-y^{\alpha-1}\right)(x-y) - \gamma^2 (x^{\frac{\alpha}{2}}-y^{\frac{\alpha}{2}})^2 \geq 0 \quad \text{for all $x,y \geq 0$}.
\end{equation*}
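This elementary inequality can be verified, for instance, as follows. Writing $x^{\frac{\alpha}{2}} - y^{\frac{\alpha}{2}} = \frac{\alpha}{2}\int_y^x t^{\frac{\alpha}{2}-1} \mathrm{d} t$ for $x \geq y \geq 0$, the Cauchy-Schwarz inequality gives
\begin{equation*}
(x^{\frac{\alpha}{2}}-y^{\frac{\alpha}{2}})^2 \leq \frac{\alpha^2}{4}(x-y)\int_y^x t^{\alpha-2} \mathrm{d} t = \frac{\alpha^2}{4(\alpha-1)}(x^{\alpha-1}-y^{\alpha-1})(x-y),
\end{equation*}
so the claim follows from the assumption $\gamma^2 \leq 8\frac{\alpha-1}{\alpha^2}$.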
In order to show coercivity \condref{it:coerc} note that for $v \in W^{1,\alpha}_0(\mathcal D)$ we have
\begin{equation*}
2\langle A(v), v\rangle = -2\int_{\mathcal{D}}|\nabla v |^{\alpha} \mathrm{d} x = -2\|v\|_{W_0^{1, \alpha}(\mathcal{D})}^\alpha
\end{equation*}
and using the Cauchy-Schwarz inequality, we obtain
\begin{equation*}
\frac{\|B_t(v)^*v\|_{\ell^2}^2}{\|v\|_{L^2(\mathcal{D})}^2} \leq \|B_t(v)^*\|_{\mathcal L_2(L^2(\mathcal D),\ell^2)}^2 = \|B_t(v)\|^2_{\mathcal L_2(\ell^2, L^2(\mathcal{D}))}.
\end{equation*}
Therefore, we conclude with the following $p$-dependent condition for \condref{it:coerc}:
\begin{equation*}
\begin{split}
&2\langle A(v), v\rangle + \|B_t(v)\|_{\mathcal L_2(\ell^2,L^2(\mathcal D))}^2 + (p-2)\frac{\|B_t(v)^*v\|_{\ell^2}^2}{\|v\|_{L^2(\mathcal{D})}^2}\\
&\leq \left((p-1)\gamma^2-2\right)\|v\|_{W_0^{1, \alpha}(\mathcal{D})}^\alpha + C\|v\|_{L^2(\mathcal{D})}^2.
\end{split}
\end{equation*}
The first term on the right-hand side is negative by assumption. Therefore, \condref{it:coerc} holds with $\theta = 2-(p-1)\gamma^2$, $f = 0$ and $K_c = C$.
It remains to show the boundedness conditions \condref{it:bound1} and \condref{it:bound2}. For $u, v \in W^{1,\alpha}_0(\mathcal{D})$, we use H\"{o}lder's inequality to obtain:
\begin{align*}
|\langle A(u), v\rangle| &\leq \Big|\int_{\mathcal{D}} |\nabla u|^{\alpha-2} \nabla u\cdot \nabla v\mathrm{d} x\Big|
\\ & \leq \Big(\int_{\mathcal{D}}|\nabla u|^\alpha \mathrm{d} x\Big)^{\frac{\alpha-1}{\alpha}}\Big(\int_{\mathcal{D}} |\nabla v|^\alpha \mathrm{d} x\Big)^{\frac{1}{\alpha}}
\leq \|u\|_{W_0^{1, \alpha}(\mathcal{D})}^{\alpha-1}\|v\|_{W_0^{1, \alpha}(\mathcal{D})}.
\end{align*}
Therefore, it follows for all $v \in W^{1, \alpha}_0(\mathcal{D})$ that $\|A(v)\|_{W^{-1, \alpha}(\mathcal{D})}^{\frac{\alpha}{\alpha-1}} \leq \|v\|_{W_0^{1, \alpha}(\mathcal{D})}^\alpha$,
which entails \condref{it:bound1} with $K_A = \frac 1 2$ and $\beta = 0$. We omit the verification of \condref{it:bound2}, since it holds directly by assumption.
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
Building meaningful similarity models that incorporate prior knowledge about the data and the task is an important area of machine learning and information retrieval \cite{DBLP:journals/bioinformatics/ZienRMSLM00,DBLP:books/daglib/0021593}. Good similarity models are needed to find relevant items in databases \cite{DBLP:conf/webdb/NiermanJ02,DBLP:conf/ismir/PampalkFW05,DBLP:journals/jcisd/WillettBD98}. Similarities (or kernels) are also the starting point of a large number of machine learning models including discriminative learning \cite{bishop06,DBLP:books/lib/ScholkopfS02}, unsupervised learning \cite{macqueen1967, DBLP:journals/csur/JainMF99, DBLP:journals/pami/ShiM00,DBLP:journals/neco/ScholkopfPSSW01}, and data embedding/visualization \cite{DBLP:journals/neco/ScholkopfSM98,DBLP:conf/nips/MikolovSCCD13,DBLP:journals/ml/MaatenH12}.
An important practical question is how to select the similarity model appropriately. Assembling a labeled dataset of similarities for validation can be difficult: The labeler would need to meticulously inspect many pairs of data points and assign exact real-valued similarity scores to each of them. As an alternative, selecting a similarity model based on its performance on some proxy task can be convenient (e.g.\ \cite{DBLP:conf/icml/BachLJ04,DBLP:journals/jmlr/SonnenburgRSS06,DBLP:journals/jmlr/WeinbergerS09,DBLP:journals/jmlr/BergstraB12}). In both cases, however, the selection procedure is exposed to a potential lack of representativeness of the training data (cf.\ the `Clever Hans' effect \cite{lapuschkin-ncomm19}).---In this paper, we aim for a more direct way to assess similarity models, and make use of explainable ML for that purpose.
\smallskip
Explainable ML \cite{DBLP:series/lncs/11700,DBLP:journals/cacm/Lipton18,DBLP:journals/dsp/MontavonSM18} is a subfield of machine learning that focuses on making predictions interpretable to humans. By highlighting the input features (e.g.\ pixels or words) that are used for predicting, explainable ML makes it possible to gain systematic insight into the model's decision structure. Numerous approaches have been proposed in the context of ML classifiers \cite{DBLP:journals/jmlr/BaehrensSHKHM10,lrp,DBLP:conf/kdd/Ribeiro0G16,DBLP:conf/iccv/SelvarajuCDVPB17}.
\smallskip
In this paper, we bring explainable ML to similarity. We contribute a new method that systematically explains similarity models of the type:
$$
y(\boldsymbol{x},\boldsymbol{x}') = \big\langle \phi_L \circ \dots \circ \phi_1(\boldsymbol{x})\,,\,\phi_L \circ \dots \circ \phi_1(\boldsymbol{x}') \big\rangle,
$$
e.g.\ dot products built on some hidden layer of a deep neural network. Our method is based on the insight that similarity models can be naturally decomposed on {\em pairs} of input features. Furthermore, this decomposition can be computed as a combination of multiple LRP explanations \cite{lrp} (and potentially other successful explanation techniques). As a result, it inherits qualities such as broad applicability and scaling to highly nonlinear models. Our method, which we call `BiLRP{}', is depicted at a high level in Fig.\ \ref{fig:intro}.
\begin{figure}[h]
\centering
\includegraphics[width=.97\linewidth]{expsum.pdf}
\caption{Proposed BiLRP{} method for explaining similarity. Produced explanations are in terms of pairs of input features.}
\label{fig:intro}
\end{figure}
Conceptually, BiLRP{} performs a {\em second}-order `deep Taylor decomposition' \cite{DBLP:journals/pr/MontavonLBSM17} of the similarity score, which lets us retrace, layer after layer, features that have jointly contributed to the similarity. Our method reduces for specific choices of parameters to a `Hessian$\kern 0.08em \times \kern 0.08em $Product{}' baseline. With appropriate choices of parameters BiLRP{} significantly improves over this baseline and produces explanations that robustly extend to complex deep neural network models.
We showcase BiLRP{} on similarity models built at various layers of the well-established VGG-16 image classification network \cite{DBLP:journals/corr/SimonyanZ14a}. Our explanation method brings useful insights into the strengths and limitations of each similarity model. We then move to an open problem in the digital humanities, where similarity between scanned astronomical tables needs to be assessed \cite{mva19}. We build a highly engineered similarity model that is specialized for this task. Again BiLRP{} proves useful by being able to inspect the similarity model and validate it from limited data.
Altogether, the method we propose brings transparency into a key ingredient of machine learning: similarity. Our contribution paves the way for designing and validating similarity-based ML models in an efficient, fully informed, and human-interpretable manner.
\subsection{Related Work}
Methods such as LLE \cite{Roweis2000}, diffusion maps \cite{Coifman2006}, or t-SNE \cite{DBLP:journals/ml/MaatenH12} give insight into the similarity structure of large datasets by embedding data points in a low-dimensional subspace where relevant similarities are preserved. While these methods provide useful visualization, their purpose is more to find {\em global} coordinates to comprehend a whole dataset than to explain why two {\em individual} data points are predicted to be similar.
The question of explaining individual predictions has been extensively studied in the context of ML classifiers. Methods based on occlusions \cite{DBLP:conf/eccv/ZeilerF14,DBLP:conf/iclr/ZintgrafCAW17}, surrogate functions \cite{DBLP:conf/kdd/Ribeiro0G16,DBLP:conf/nips/LundbergL17}, gradients \cite{DBLP:journals/jmlr/BaehrensSHKHM10, DBLP:journals/corr/SimonyanVZ13,DBLP:journals/corr/SmilkovTKVW17, DBLP:conf/icml/SundararajanTY17}, or reverse propagation \cite{lrp,DBLP:conf/eccv/ZeilerF14}, have been proposed, and are capable of highlighting the most relevant features. Some approaches have been extended to unsupervised models, e.g.\ anomaly detection \cite{Kauffmann20,DBLP:conf/icdm/MicenkovaNDA13} and clustering \cite{Kauffmann19}. Our work goes further along this direction and explains {\em similarity} by identifying relevant {\em pairs} of input features.
Several methods for joint features explanations have been proposed. Some of them extract feature interactions globally \cite{DBLP:conf/iclr/TsangC018,kaski-pairwise}. Other methods produce individual explanations for simple pairwise matching models \cite{leupold2017second}, or models with explicit multivariate structures \cite{DBLP:conf/kdd/CaruanaLGKSE15}. Another method extracts joint feature explanations in nonlinear models by estimating the integral of the Hessian \cite{DBLP:journals/corr/abs-2002-04138}. In comparison, our BiLRP{} method leverages the layered structure of the model to robustly explain complex similarities, e.g.\ built on deep neural networks.
A number of works improve similarity models by leveraging prior knowledge or ground truth labels. Proposed approaches include structured kernels \cite{Watkins99dynamicalignment,DBLP:journals/bioinformatics/ZienRMSLM00,DBLP:journals/neco/TsudaKRSM02,DBLP:journals/sigkdd/Gartner03}, or siamese/triplet networks \cite{DBLP:conf/nips/BromleyGLSS93,DBLP:conf/cvpr/ChopraHL05,DBLP:conf/cvpr/WangSLRWPCW14,DBLP:conf/simbad/HofferA15,DBLP:conf/eccv/SeguinSdK16}. Beyond similarity, applications such as collaborative filtering \cite{DBLP:conf/www/HeLZNHC17}, transformation modeling \cite{DBLP:journals/neco/MemisevicH10}, and information retrieval \cite{Tzompanaki2012}, also rely on building high-quality matching models between pairs of data.---Our work has an orthogonal objective: It assumes an already trained well-performing similarity model, and makes it explainable to enhance its verifiability and to extract novel insights from it.
\section{Towards Explaining Similarity}
\label{section:towards}
In this section, we present basic approaches to explain similarity models in terms of input features. We first discuss the case of a simple linear model, and then extend the concept to more general nonlinear cases.
\subsection{From Linear to Nonlinear Models}
Let us begin with a simple scenario where $\boldsymbol{x},\boldsymbol{x}' \in \mathbb{R}^d$ and the similarity score is given by some dot product $y(\boldsymbol{x},\boldsymbol{x}') = \langle W \boldsymbol{x}, W \boldsymbol{x}' \rangle$, with $W$ a projection matrix of size $h \times d$. The similarity score can be easily decomposed on input features by rewriting the dot product as:
\begin{align}
\textstyle y(\boldsymbol{x},\boldsymbol{x}') =& \textstyle \sum_{ii'} \langle W_{:,i}, W_{:,i'} \rangle \cdot x_i x'_{i'}.
\label{eq:linear}
\end{align}
We observe from Eq.\ \eqref{eq:linear} that the similarity is decomposable on {\em pairs} of features $(i,i')$ of the two examples. In other words, input features interact to produce a high/low similarity score.
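As a quick numerical check of Eq.\ \eqref{eq:linear}, the pairwise contributions $\langle W_{:,i}, W_{:,i'} \rangle \, x_i x'_{i'}$ can be computed in a few lines (a hypothetical NumPy sketch with toy dimensions, not part of the model itself):

```python
import numpy as np

rng = np.random.default_rng(0)
h, d = 5, 8                                  # toy dimensions (hypothetical)
W = rng.normal(size=(h, d))                  # projection matrix of the similarity model
x, xp = rng.normal(size=d), rng.normal(size=d)

# Pairwise decomposition of Eq. (1): R[i, i'] = <W[:, i], W[:, i']> * x_i * x'_{i'}
R = (W.T @ W) * np.outer(x, xp)

# The contributions sum back to the similarity score y(x, x') = <Wx, Wx'>
y = (W @ x) @ (W @ xp)
assert np.isclose(R.sum(), y)
```

The matrix $R$ is exactly the feature-pair explanation of the linear model: entry $(i,i')$ tells how much the joint expression of features $i$ and $i'$ contributed to the score.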
In practice, more accurate models of similarity can be obtained by relaxing the linearity constraint. Consider some similarity model $y(\boldsymbol{x},\boldsymbol{x}') = \langle \phi(\boldsymbol{x}), \phi(\boldsymbol{x}') \rangle$ built on some abstract feature map $\phi \colon \mathbb{R}^d \to \mathbb{R}^h$ which we assume to be differentiable. A simple and general way of attributing the similarity score to the input features is to compute a Taylor expansion \cite{lrp} at some reference point $(\widetilde{\bx},\widetilde{\bx}')$:
\begin{align*}
y(\boldsymbol{x},\boldsymbol{x}')
&= y(\widetilde{\bx},\widetilde{\bx}')\\
&\textstyle \quad + \sum_i \, [\nabla y(\widetilde{\bx},\widetilde{\bx}')]_{i} \, (x_i - \widetilde{x}_i)\\[1mm]
&\textstyle \quad\quad + \sum_{i'} \, [\nabla y(\widetilde{\bx},\widetilde{\bx}')]_{i'} \, (x'_{i'} - \widetilde{x}'_{i'})\\[1mm]
&\textstyle \quad\quad\quad + \sum_{ii'} \, [\nabla^2 y(\widetilde{\bx},\widetilde{\bx}')]_{ii'} \, (x_i - \widetilde{x}_i) \,(x'_{i'} - \widetilde{x}'_{i'})\\
&\textstyle \quad\quad\quad\quad + \dots
\end{align*}
The explanation is then obtained by identifying the multiple terms of the expansion. Here again, like for the linear case, some of these terms can be attributed to pairs of features $(i,i')$. For general nonlinear models, it is difficult to systematically find reference points $(\widetilde{\bx},\widetilde{\bx}')$ at which a Taylor expansion represents well the similarity score. To address this, we will need to apply some restrictions to the analyzed model.
\subsection{The `Hessian$\kern 0.08em \times \kern 0.08em $Product{}' Baseline}
Consider the family of similarity models that can be represented as dot products on positively homogeneous feature maps, i.e.\
\begin{align*}
y(\boldsymbol{x},\boldsymbol{x}') &= \langle \phi(\boldsymbol{x}) , \phi(\boldsymbol{x}') \rangle,\\
\phi &\colon \mathbb{R}^d \to \mathbb{R}^h \quad \text{with} \quad \forall_{\boldsymbol{x}}\forall_{t>0} : \phi(t\boldsymbol{x}) = t\phi(\boldsymbol{x}).
\end{align*}
The class of functions $\phi$ is broad enough to include (with minor restrictions) interesting models such as the mapping on some layer of a deep rectifier network \cite{DBLP:journals/jmlr/GlorotBB11,DBLP:journals/corr/SimonyanZ14a,DBLP:conf/cvpr/HeZRS16}.
\smallskip
We perform a Taylor expansion of the similarity function at the reference point $(\widetilde{\boldsymbol{x}},\widetilde{\boldsymbol{x}}') = (\varepsilon\kern 0.04em \boldsymbol{x},\varepsilon\kern 0.04em \boldsymbol{x}')$ with $\varepsilon$ almost zero. Zero- and first-order terms of the expansion vanish, leaving us with a decomposition on the interaction terms:
\begin{align}
\textstyle y(\boldsymbol{x},\boldsymbol{x}') &= \textstyle \sum_{ii'} [\nabla^2 y(\boldsymbol{x},\boldsymbol{x}')]_{ii'} \, x_i x'_{i'}
\label{eq:hessprod}
\end{align}
(cf.\ Appendix A of the Supplement). Inspection of these interaction terms reveals that a pair of features $(i,i')$ is found to be relevant if:
\begin{enumerate}[label=(\roman*)]
\item the features are jointly expressed in the data, and
\item the similarity model jointly reacts to these features.
\end{enumerate}
\noindent We call this method `Hessian$\kern 0.08em \times \kern 0.08em $Product{}' (HP) and use it as a baseline in Section \ref{section:baselines}. This baseline can also be seen as a reduction of `Integrated Hessians' \cite{DBLP:journals/corr/abs-2002-04138} for the considered family of similarity models.
\smallskip
HP is closely connected to a common baseline method for explaining ML classifiers: Gradient$\kern 0.08em \times \kern 0.08em $Input{} \cite{DBLP:journals/corr/ShrikumarGSK16,DBLP:conf/iclr/AnconaCO018,axioms}. The matrix of joint feature contributions found by HP can be obtained by performing $2\times h$ `Gradient$\kern 0.08em \times \kern 0.08em $Input{}' (GI) computations:
\begin{align*}
\mathrm{HP}(y,\boldsymbol{x},\boldsymbol{x}') &= \sum_{m=1}^h \mathrm{GI}(\phi_m,\boldsymbol{x}) \otimes \mathrm{GI}(\phi_m,\boldsymbol{x}')
\end{align*}
(cf.\ Appendix A.3 of the Supplement). This gradient-based formulation makes it easy to implement HP using neural network libraries with automatic differentiation. However, because of this close relation, `Hessian$\kern 0.08em \times \kern 0.08em $Product{}' also inherits some weaknesses of `Gradient$\kern 0.08em \times \kern 0.08em $Input{}', in particular, its high exposure to gradient noise \cite{axioms}. In deep architectures, the gradient is subject to a shattering effect \cite{DBLP:conf/icml/BalduzziFLLMM17} making it increasingly large, high-varying, and uninformative with every added layer.
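For a one-layer positively homogeneous map $\phi(\boldsymbol{x}) = \max(0, W\boldsymbol{x})$, the gradients are available in closed form, and the composition above can be sketched without automatic differentiation (a hypothetical NumPy illustration; the single ReLU layer stands in for the general homogeneous feature map):

```python
import numpy as np

rng = np.random.default_rng(1)
h, d = 6, 10                                    # toy dimensions (hypothetical)
W = rng.normal(size=(h, d))                     # phi(x) = ReLU(W x): positively homogeneous
x, xp = rng.normal(size=d), rng.normal(size=d)

def grad_times_input(W, x):
    """GI(phi_m, x) for all m: row m is x * grad phi_m(x)."""
    mask = (W @ x > 0).astype(float)            # ReLU gating per output neuron m
    return (W * mask[:, None]) * x[None, :]     # shape (h, d)

G, Gp = grad_times_input(W, x), grad_times_input(W, xp)

# HP(y, x, x') = sum_m GI(phi_m, x) (outer) GI(phi_m, x'):  shape (d, d)
HP = G.T @ Gp

# By Euler's theorem for homogeneous functions, contributions sum to y(x, x')
y = np.maximum(W @ x, 0) @ np.maximum(W @ xp, 0)
assert np.isclose(HP.sum(), y)
```

The final assertion checks the conservation property of Eq.\ \eqref{eq:hessprod}: the pairwise contributions of HP redistribute the full similarity score.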
\section{Better Explanations with BiLRP{}}\label{section:bilrp}
Motivated by the limitations of the simple techniques presented in Section \ref{section:towards}, we introduce our new BiLRP{} method for explaining similarities. The method is inspired by the `layer-wise relevance propagation' (LRP) \cite{lrp} method, which was first introduced for explaining deep neural network classifiers. LRP leverages the layered structure of the model to produce robust explanations.
\smallskip
BiLRP{} brings the robustness of LRP to the task of explaining dot product similarities. Our method assumes as a starting point a layered similarity model:
$$
y(\boldsymbol{x},\boldsymbol{x}') = \big\langle \phi_L \circ \dots \circ \phi_1(\boldsymbol{x})\,,\,\phi_L \circ \dots \circ \phi_1(\boldsymbol{x}') \big\rangle,
$$
typically, a dot product built on some hidden layer of a deep neural network. Similarly to LRP, the model output is propagated backward in the network, layer after layer, until the input features are reached. BiLRP{} operates by sending messages $R_{jj'\leftarrow kk'}$ from pairs of neurons $(k,k')$ at a given layer to pairs of neurons $(j,j')$ in the layer below.
\subsection{Extracting BiLRP{} Propagation Rules}
\label{section:bilrp-derivation}
To build meaningful propagation rules, we make use of the `deep Taylor decomposition' (DTD) \cite{DBLP:journals/pr/MontavonLBSM17} framework. DTD expresses the relevance $R_{kk'}$ available for redistribution as a function of activations $\boldsymbol{a}$ in the layer below. The relation between these two quantities is depicted in Fig.\ \ref{fig:map}.
\begin{figure}[t]
\centering
\includegraphics[width=0.98\linewidth]{neurons.pdf}
\caption{Diagram of the map used by DTD to derive BiLRP{} propagation rules. The map connects activations at some layer to relevance in the layer above.}
\label{fig:map}
\end{figure}
Specifically, DTD seeks to perform a Taylor expansion of the function $R_{kk'}(\boldsymbol{a})$ at some reference point $\widetilde{\ba}$:
\begin{align*}
R_{kk'}(\boldsymbol{a})
&= \textstyle R_{kk'}(\widetilde{\ba})\\
& \quad + \textstyle \sum_j [\nabla R_{kk'}(\widetilde{\ba})]_j \cdot (a_j - \widetilde{a}_j) \\
&\quad\quad + \textstyle \sum_{j'} [\nabla R_{kk'}(\widetilde{\ba})]_{j'} \cdot (a_{j'} - \widetilde{a}_{j'})\\
&\quad\quad\quad+ \textstyle \sum_{jj'} [\nabla^2 R_{kk'}(\widetilde{\ba})]_{jj'} \cdot (a_j - \widetilde{a}_j) \,
(a_{j'} - \widetilde{a}_{j'})\\
&\quad\quad\quad\quad+ \dots
\end{align*}
so that messages $R_{jj'\leftarrow kk'}$ can be identified. In practice, the function $R_{kk'}(\boldsymbol{a})$ is difficult to analyze, because it subsumes a potentially large number of forward and backward computations. DTD introduces the concept of a `relevance model' $\widehat{R}_{kk'}(\boldsymbol{a})$ which locally approximates the true relevance score, but only depends on corresponding activations \cite{DBLP:journals/pr/MontavonLBSM17}. For linear/ReLU layers \cite{DBLP:journals/jmlr/GlorotBB11}, we define the relevance model:
\begin{align*}
\widehat{R}_{kk'}(\boldsymbol{a}) &=\textstyle
\underbrace{ \textstyle
\big(\sum_{j} a_j w_{jk} \big)^+
}_{a_k}
\,
\underbrace{ \textstyle
\big(\sum_{j'} a_{j'} w_{j'k'} \big)^+
}_{a_{k'}}
\,
c_{kk'}
\end{align*}
\vskip -1mm
\noindent with $c_{kk'}$ a constant chosen such that $\widehat{R}_{kk'}(\boldsymbol{a}) = R_{kk'}$. This relevance model is justified later in Proposition \ref{prop:model}. We now have an easily analyzable model, more specifically, a model that is bilinear on the joint activated domain and zero elsewhere. We search for a root point $\widetilde{\boldsymbol{a}}$ at the intersection between the two ReLU hinges and the plane $\{\widetilde{\ba}(t,t') \mid t,t' \in \mathbb{R}\}$ where:
\begin{align*}
[\,\widetilde{\ba}(t,t')\,]_{j} &= a_{j} - t a_{j} \cdot (1 + \gamma \cdot 1_{w_{jk} > 0}),\\
[\,\widetilde{\ba}(t,t')\,]_{j'} &= a_{j'} - t' a_{j'} \cdot (1 + \gamma \cdot 1_{w_{j'k'} > 0})
\end{align*}
with $\gamma \geq 0$ a hyperparameter. This search strategy can be understood as starting with the activations $\boldsymbol{a}$, and jointly decreasing them (especially the ones with positive contributions) until $\widehat{R}_{kk'}(\widetilde{\ba})$ becomes zero. Zero- and first-order terms of the Taylor expansion vanish, leaving us with the interaction terms $R_{jj' \leftarrow kk'}$. The total relevance received by $(j,j')$ from neurons in the layer above is given by:
\begin{align}
R_{jj'}
&= \sum_{kk'}\frac{a_j a_{j'} \rho(w_{jk}) \rho(w_{j'k'})}{\sum_{jj'} a_j a_{j'} \rho(w_{jk}) \rho(w_{j'k'})}
R_{kk'}
\label{eq:lrpgamma2}
\end{align}
with $\rho(w_{jk}) = w_{jk} + \gamma w_{jk}^+$. A derivation is given in Appendix B.1 of the Supplement. This propagation rule can be seen as a second-order variant of the LRP-$\gamma$ rule \cite{lrpoverview} used for explaining DNN classifiers. It has the following interpretation: A pair of neurons $(j,j')$ is assigned relevance if the following three conditions are met:
\smallskip
\begin{enumerate}[label=(\roman*)]
\item it jointly activates,
\item some pairs of neurons in the layer above jointly react,
\item these reacting pairs are themselves relevant.
\end{enumerate}
\smallskip
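A direct (unfactored) application of rule \eqref{eq:lrpgamma2} for a single linear/ReLU layer can be sketched in a few lines (a hypothetical NumPy illustration with separate activation vectors for the two branches; positive weights and activations keep the normalizers well away from zero):

```python
import numpy as np

rng = np.random.default_rng(2)
J, K, gamma = 8, 5, 0.25                    # toy layer sizes (hypothetical)
a  = rng.uniform(0.1, 1.0, size=J)          # activations a_j  (branch x)
ap = rng.uniform(0.1, 1.0, size=J)          # activations a_j' (branch x')
W  = rng.uniform(0.01, 1.0, size=(J, K))    # layer weights w_jk
R_up = rng.uniform(size=(K, K))             # relevance R_kk' from the layer above

rho = W + gamma * np.clip(W, 0, None)       # rho(w) = w + gamma * w^+
z, zp = a[:, None] * rho, ap[:, None] * rho # contributions a_j rho(w_jk)
s, sp = z.sum(axis=0), zp.sum(axis=0)       # the per-(k,k') normalizer factorizes

# R_jj' = sum_kk'  z_jk * zp_j'k' / (s_k * sp_k') * R_kk'
R_low = z @ (R_up / np.outer(s, sp)) @ zp.T

# The rule is conservative: total relevance is preserved across the layer
assert np.isclose(R_low.sum(), R_up.sum())
```

Note the quadratic data structures $R_{kk'}$ and $R_{jj'}$: this is exactly the cost issue that the factored computation of Section 3.2 avoids.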
In addition to linear/ReLU layers, we would like BiLRP{} to handle other common layers such as max-pooling and min-pooling. These two layer types can be seen as special cases of the broader class of {\em positively homogeneous} layers (i.e.\ satisfying $\forall_{\boldsymbol{a}} \forall_{t>0}:~a_k(t\kern 0.04em \boldsymbol{a}) = t\kern 0.04em a_k(\boldsymbol{a})$). For these layers, the following propagation rule can be derived from DTD:
\begin{align}
R_{jj'} = \sum_{kk'} \frac{a_j a_{j'} [\nabla^2 a_k a_{k'}]_{jj'}}{\sum_{jj'} a_j a_{j'} [\nabla^2 a_k a_{k'}]_{jj'}} R_{kk'}
\label{eq:lrpother2}
\end{align}
(cf.\ Appendix B.2 of the Supplement). This propagation rule has a similar interpretation to the one above, in particular, it also requires for $(j,j')$ to be relevant that the corresponding neurons activate, that some neurons $(k,k')$ in the layer above jointly react, and that the latter neurons are themselves relevant.
\subsection{BiLRP{} as a Composition of LRP Computations}
\label{section:bilrp-composition}
A limitation of a plain application of the propagation rules of Section \ref{section:bilrp-derivation} is that we need to handle at each layer a data structure $(R_{kk'})_{kk'}$ which grows quadratically with the number of neurons. Consequently, for large neural networks, a direct computation of these propagation rules is infeasible. However, it can be shown that relevance scores at each layer can also be written in the factored form:
\begin{align*}
R_{kk'} &= \textstyle \sum_{m=1}^h R_{km} R_{k'm}\\
R_{jj'} &= \textstyle \sum_{m=1}^h R_{jm} R_{j'm}
\end{align*}
where $h$ is the dimension of the top-layer feature map, and where the factors can be computed iteratively as:
\begin{align}
R_{jm} &= \sum_k \frac{a_j \rho(w_{jk})}{\sum_j a_j \rho(w_{jk})} R_{km}
\label{eq:lrpgamma}
\end{align}
for linear/ReLU layers, and
\begin{align}
R_{jm} &= \sum_k \frac{a_j [\nabla a_k]_j}{\sum_j a_j [\nabla a_k]_j} R_{km}
\label{eq:lrpother}
\end{align}
for positively homogeneous layers. The relevance scores that result from applying these factored computations are strictly equivalent to those one would get if using the original propagation rules of Section \ref{section:bilrp-derivation}. A proof is given in Appendix C of the Supplement.
\smallskip
Furthermore, in comparison to the $(\#\,\text{neurons})^2$ computations required at each layer by the original propagation rules, the factored formulation only requires $(\#\,\text{neurons} \times 2h)$ computations. The factored form is therefore especially advantageous when $h$ is low. In the experiments of Section \ref{section:vgg}, we will improve the explanation runtime of our similarity models by adding an extra layer projecting output activations to a smaller number of dimensions.
\smallskip
Lastly, we observe that Equations \eqref{eq:lrpgamma} and \eqref{eq:lrpother} correspond to common rules used by standard LRP. The first one is equivalent to the LRP-$\gamma$ rule \cite{lrpoverview} used in convolution/ReLU layers of DNN classifiers. The second one corresponds to the way LRP commonly handles pooling layers \cite{lrp}. These propagation rules apply independently on each branch and factor of the similarity model. This implies that BiLRP{} can be implemented as a combination of multiple LRP procedures that are then recombined once the input layer has been reached:
\begin{align*}
\text{BiLRP}(y,\boldsymbol{x},\boldsymbol{x}') &= \sum_{m=1}^h \text{LRP}([\phi_{L} \circ \dots \circ \phi_1]_m,\boldsymbol{x})\\[-2mm]
&\qquad \qquad \quad \otimes \text{LRP}([\phi_{L} \circ \dots \circ \phi_1]_m,\boldsymbol{x}')
\end{align*}
This modular approach to compute BiLRP{} explanations is shown graphically in Fig.\ \ref{fig:expflow}.
\begin{figure}[h]
\centering
\includegraphics[width=.98\linewidth]{expflow.pdf}
\caption{Illustration of our approach to compute BiLRP{} explanations: \figletter{A} Input examples are mapped by the neural network up to the layer at which the similarity model is built. \figletter{B} LRP is applied to all individual activations in this layer, and the resulting array of explanations is recombined into a single explanation of predicted similarity.}
\label{fig:expflow}
\end{figure}
With this modular structure, BiLRP{} can be easily and efficiently implemented based on existing explanation software. We note that the modular approach described here is not restricted to LRP. Other explanation techniques could in principle be used in the composition. Doing so would, however, lose the interpretation of the explanation procedure as a deep Taylor decomposition.
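The modular composition can be sketched end-to-end for a small bias-free ReLU network (a hypothetical NumPy illustration, not the paper's implementation; positive weights and inputs keep the LRP denominators well away from zero):

```python
import numpy as np

rng = np.random.default_rng(3)
d, h1, h, gamma = 6, 7, 4, 0.25                  # toy sizes (hypothetical)
Ws = [rng.uniform(0.01, 1.0, size=(d, h1)),
      rng.uniform(0.01, 1.0, size=(h1, h))]      # bias-free linear/ReLU layers

def forward(x):
    acts = [x]
    for W in Ws:
        acts.append(np.maximum(acts[-1] @ W, 0))
    return acts                                   # activations at every layer

def lrp_factors(x):
    """Column m = LRP([phi_L o ... o phi_1]_m, x); returns shape (d, h)."""
    acts = forward(x)
    R = np.diag(acts[-1])                         # init: R_km = a_k if k == m else 0
    for W, a in zip(reversed(Ws), reversed(acts[:-1])):
        rho = W + gamma * np.clip(W, 0, None)     # LRP-gamma, Eq. (7)
        z = a[:, None] * rho                      # shape (J, K)
        R = z @ (R / z.sum(axis=0, keepdims=True).T)
    return R

x, xp = rng.uniform(0.1, 1.0, size=d), rng.uniform(0.1, 1.0, size=d)
B = lrp_factors(x) @ lrp_factors(xp).T            # BiLRP matrix, shape (d, d)

# Conservation: joint contributions sum to the modeled similarity y(x, x')
y = forward(x)[-1] @ forward(xp)[-1]
assert np.isclose(B.sum(), y)
```

Each branch requires only $h$ standard LRP passes, and the quadratic pairwise structure appears once, at the very end, as the outer product of the two factor matrices.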
\subsection{Theoretical Properties of BiLRP{}}
A number of results can be shown about BiLRP{}. A first result relates the produced explanation to the predicted similarity. Another result lets us view the Hessian$\kern 0.08em \times \kern 0.08em $Product{} method as a special case of BiLRP{}. A last result provides a justification for the relevance models used in Section \ref{section:bilrp-derivation}.
\input{1.tex}
\noindent (See Appendix D.1 of the Supplement for a proof.) Conservation ensures that relevance scores are in proportion to the output of the similarity model.
\input{2.tex}
\noindent (See Appendix D.2 of the Supplement for a proof.) We will find in Section \ref{section:baselines} that choosing non-zero values of $\gamma$ gives better explanations.
\input{3.tex}
\noindent (Cf.\ Appendix D.3 of the Supplement.) This property supports the modeling of $c_{jj'}, c_{kk'}, \dots$ as constant, leading to easily analyzable relevance models from which the BiLRP{} propagation rules of Section \ref{section:bilrp-derivation} can be derived.
\section{BiLRP{} vs.\ Baselines}
\label{section:baselines}
This section tests the ability of the proposed BiLRP{} method to produce faithful explanations. In general, ground-truth explanations of ML predictions, especially nonlinear ones, are hard to acquire \cite{DBLP:journals/dsp/MontavonSM18,DBLP:journals/corr/abs-1911-09017}. Thus, we consider an {\em artificial} scenario consisting of:
\smallskip
\begin{enumerate}[label=(\roman*)]
\item a hardcoded similarity model from which it is easy to extract ground-truth explanations,
\item a neural network trained to reproduce the hardcoded model exactly on the whole input domain.
\end{enumerate}
\smallskip
Because the hardcoded model and the neural network become exact functional copies after training, explanations for their predictions should be the same. Hence, this gives us ground-truth explanations to evaluate BiLRP{} against baseline methods.
The hardcoded similarity model takes two random sequences of $6$ digits as input and counts the number of matches between them. The matches between the two sequences form the ground-truth explanation. The neural network is constructed and trained as follows: Each digit in a sequence is represented as a vector in $\mathbb{R}_+^{10}$. To avoid making the task too simple, we set these vectors to be correlated. The vectors associated with the digits of a sequence are then concatenated to form an input $\boldsymbol{x} \in \mathbb{R}_+^{6 \times 10}$. The input goes through two hidden layers of size $100$ and one top layer of size $50$ corresponding to the feature map. We train the network for $10000$ iterations of stochastic gradient descent to minimize the mean squared error between predictions and ground-truth similarities, and reach an error of $10^{-3}$, indicating that the neural network solves the problem almost perfectly.
Because there is currently no well-established method for explaining similarity, we consider three simple baselines and use them as a benchmark for evaluating BiLRP{}:
\smallskip
\begin{enumerate}[label=--]
\item `Saliency': $R_{ii'} = (x_i x'_{i'})^2$
\vskip 0.5mm
\item `Curvature': $R_{ii'} = ([\nabla^2 y(\boldsymbol{x},\boldsymbol{x}')]_{ii'} )^2$
\vskip 0.5mm
\item `Hessian$\kern 0.08em \times \kern 0.08em $Product{}': $R_{ii'} = x_i x'_{i'}\, [\nabla^2 y(\boldsymbol{x},\boldsymbol{x}')]_{ii'}$
\end{enumerate}
\smallskip
Each explanation method produces a scoring over all pairs of input features, i.e.\ a $(6 \times 10) \times (6 \times 10)$-dimensional explanation. The latter can be pooled over embedding dimensions (cf.\ Appendix E of the Supplement) to form a $6 \times 6$ matrix connecting the digits from the two sequences. Results are shown in Fig.\ \ref{fig:toy}. The closer the produced connectivity pattern to the ground truth, the better the explanation method. High scores are shown in red, low scores in light red or white, and negative scores in blue.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth,clip=True,trim=10 0 10 0]{toy.pdf}
\caption{Benchmark comparison on a toy example where we have ground-truth explanation of similarity. BiLRP{} performs better than all baselines, as measured by the average cosine similarity to the ground truth.}
\label{fig:toy}
\end{figure}
We observe that the `Saliency' baseline does not differentiate between matching and non-matching digits. This is explained by the fact that this baseline is not output-dependent and thus does not know the task. The `Curvature' baseline, although sensitive to the output, does not improve over saliency. The `Hessian$\kern 0.08em \times \kern 0.08em $Product{}' baseline, which can be seen as a special case of BiLRP{} with $\gamma=0$, matches the ground truth more accurately but introduces some spurious negative contributions. BiLRP{}, through a proper choice of the parameter $\gamma$ (here set to $0.09$), considerably reduces these negative contributions.
This visual inspection is validated quantitatively by considering a large number of examples and computing the average cosine similarity (ACS) between the produced explanations and the ground truth. An ACS of 1.0 indicates perfect matching with the ground truth. The `Saliency' and `Curvature' baselines have low ACS. The accuracy is strongly improved by `Hessian$\kern 0.08em \times \kern 0.08em $Product{}' and further improved by BiLRP{}. The effect of the parameter $\gamma$ of BiLRP{} on the ACS score is shown in Fig.\ \ref{figure:gamma}.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{gamma.pdf}
\caption{Effect of the BiLRP{} parameter $\gamma$ on the average cosine similarity between the explanations and the ground truth.}
\label{figure:gamma}
\end{figure}
We observe that the best parameter $\gamma$ is small but non-zero. Like for standard LRP, the explanation can be further fine-tuned, e.g.\ by setting the parameter $\gamma$ different at each layer or by considering a broader set of LRP propagation rules \cite{lapuschkin2017faces,lrpoverview}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\linewidth]{pascal.pdf}
\caption{Application of BiLRP{} to a dot-product similarity model built on VGG-16 features at layer $31$. BiLRP{} identifies patterns in the data (e.g.\ ears, eyes) that contribute to the modeled similarity.}
\label{figure:pascal}
\end{figure*}
\section{Interpreting Deep Similarity Models}
\label{section:vgg}
Our next step will be to use BiLRP{} to gain insight into practical similarity models built on the well-established \mbox{VGG-16} convolutional neural network \cite{DBLP:journals/corr/SimonyanZ14a}. We take a pretrained version of this network and build the similarity model
\begin{align*}
y(\boldsymbol{x},\boldsymbol{x}') = \big\langle \text{VGG}_{:31}(\boldsymbol{x}) , \text{VGG}_{:31}(\boldsymbol{x}') \big\rangle,
\end{align*}
i.e.\ a dot product on the neural network activations at layer $31$. This layer corresponds to the last layer of features before the classifier. The mapping from input to layer $31$ is a sequence of convolution/ReLU layers and max-pooling layers. It is therefore explainable by BiLRP{}. However, the large number of dimensions entering the dot product computation ($512$ feature maps of size $\frac{w}{32} \times \frac{h}{32}$, where $w$ and $h$ are the dimensions of the input image) makes a direct application of BiLRP{} computationally expensive. To reduce the computation time, we append to the last layer a random projection layer that maps activations to a lower-dimensional subspace. In our experiments, we find that projecting to $100$ dimensions provides sufficiently detailed explanations and achieves the desired computational speedup. We set the BiLRP{} parameter $\gamma$ to $0.5, 0.25, 0.1, 0.0$ for layers 2--10, 11--17, 18--24, 25--31, respectively. For layer 1, we use the $z^\mathcal{B}$-rule, which specifically handles the pixel domain \cite{DBLP:journals/pr/MontavonLBSM17}. Finally, we apply an $8 \times 8$ pooling on the output of BiLRP{} to reduce the size of the explanations. Details of the rendering procedure are given in Appendix F of the Supplement.
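The rationale behind the random projection step can be sketched numerically (a hypothetical NumPy illustration of the underlying Johnson--Lindenstrauss-type argument; the vectors stand in for flattened layer-$31$ features):

```python
import numpy as np

rng = np.random.default_rng(4)
h, k = 512, 100                               # feature dim -> projected dim

u = rng.normal(size=h)
v = u + 0.3 * rng.normal(size=h)              # two correlated feature vectors

# A random Gaussian projection P (scaled by 1/sqrt(k)) preserves dot products
# in expectation: E[<Pu, Pv>] = <u, v>.
estimates = []
for _ in range(500):
    P = rng.normal(size=(k, h)) / np.sqrt(k)
    estimates.append((P @ u) @ (P @ v))

rel_err = abs(np.mean(estimates) - u @ v) / abs(u @ v)
assert rel_err < 0.1                          # dot product preserved on average
```

In other words, the dot-product similarity computed after the projection is an unbiased, low-variance estimate of the original one, while the number of LRP passes per branch drops from the full feature dimension to $k=100$.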
\smallskip
Figure \ref{figure:pascal} shows our BiLRP{} explanations on a selection of image pairs taken from the Pascal VOC 2007 dataset \cite{pascal-voc-2007} and resized to $128 \times 128$ pixels. Positive relevance scores are shown in red, negative scores in blue, and score magnitude is represented by opacity. Example A shows two identical images being compared. BiLRP{} finds that eyes, nose, and ears are the most relevant features to explain similarity. Example B shows two different images of birds. Here, the eyes again contribute to the high similarity. In Example C, the front parts of the two planes are matched.
Examples D and E show cases where the similarity is not attributed to what the user may expect. In Example D, the horse's muzzle is matched to the head of a sheep. In Example E, while we expect the matching to occur between the two large animals in the image, the true reason for similarity is a small white calf in the right part of the first image. In Example F, the scene is cluttered and does not reveal any meaningful similarity structure; in particular, the two cats are not matched. We also see in this last example that a substantial amount of negative relevance appears, indicating that several joint patterns contradict the similarity score.
Overall, the BiLRP{} method gives insight into the strengths and weaknesses of a similarity model, by revealing the features and their relative poses/locations that the model is or is not able to match.
\subsection{How {\em Transferable} is the Similarity Model?}
Deep neural networks, through their multiple layers of representation, provide a natural framework for multitask/transfer learning \cite{DBLP:journals/ml/Caruana97,DBLP:conf/cvpr/OquabBLS14}. DNN-based transfer learning has seen many successful applications \cite{DBLP:conf/kdd/ZhangLZSKYJ15,DBLP:journals/mia/LitjensKBSCGLGS17,DBLP:journals/cacie/GaoM18}. In this section, we consider the problem of transferring a {\em similarity} model to some task of interest. We will use BiLRP{} to compare different similarity models, and show how their transferability can be assessed visually from the explanations.
We take the pretrained VGG-16 model and build dot product similarity models at layers $5, 10, 17, 24, 31$ (i.e.\ after each max-pooling layer):
\begin{align*}
y^{(5)}(\boldsymbol{x},\boldsymbol{x}') &= \big\langle \text{VGG}_{:5}(\boldsymbol{x}) , \text{VGG}_{:5}(\boldsymbol{x}') \big\rangle,\\[-2mm]
&~~\vdots\\[-2mm]
y^{(31)}(\boldsymbol{x},\boldsymbol{x}') &= \big\langle \text{VGG}_{:31}(\boldsymbol{x}) , \text{VGG}_{:31}(\boldsymbol{x}') \big\rangle
\end{align*}
As in the previous experiment, we add to each feature representation a random projection onto $100$ dimensions in order to make explanations faster to compute. In the following experiments, we consider transfer of similarity to the following three datasets:
\smallskip
\begin{enumerate}[label=--]
\item `Unconstrained Facial Images' (UFI) \cite{DBLP:conf/micai/LencK15},
\item `Labeled Faces in the Wild' (LFW) \cite{LFWTech},
\item `The Sphaera Corpus' \cite{mva19,mva20}.
\end{enumerate}
\smallskip
\noindent The first two datasets are face identification tasks. In identification tasks, a good similarity model is needed in order to reliably extract the closest matches in the training data \cite{DBLP:conf/cvpr/ChopraHL05,DBLP:conf/cvpr/SunWT14}. The third dataset is composed of 358 scanned academic textbooks from the 15th to the 17th century containing texts, illustrations and tables related to astronomical studies. Again, similarity between these entities is important, as it can serve to consolidate historical networks \cite{DBLP:conf/eccv/SeguinSdK16,DBLP:journals/lalc/KrautliV18,Lang18}.
\smallskip
Faces and illustrations are fed to the neural network as images of size $64 \times 64$ pixels and $96 \times 96$ pixels respectively. We choose for each dataset a pair composed of a test example and the most similar training example. For each pair, we compute the BiLRP{} explanations. Results for the similarity models at layers $17$ and $31$ are shown in Fig.\ \ref{fig:faces}.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{transfer.pdf}
\caption{Application of BiLRP{} to study how VGG-16 similarity transfers to various datasets.}
\label{fig:faces}
\end{figure}
We observe that the explanation of similarity at layer $31$ is focused on a limited set of features: the eyes or the nose on face images, and a reduced set of lines on the Sphaera illustrations. In comparison, explanations of similarity at layer $17$ cover a broader set of features. These observations suggest that similarity in the highest layers, although potentially capable of resolving very fine variations (e.g.\ for the eyes), might not have retained enough features in other regions to match images accurately.
To verify this hypothesis, we train a collection of linear SVMs on each dataset, where each SVM takes as input the activations at a particular layer. On the UFI dataset, we use the original training and test sets. On LFW and Sphaera, data points are assigned randomly with equal probability to the training and test set. The hyperparameter $C$ of the SVM is selected by grid search over the set of values $\{0.001, 0.01, 0.1, 1, 10,100,1000\}$ using $4$ folds on the training set. Test set accuracies for each dataset and layer are shown in Table \ref{table:transfer}.
\begin{table}[h]
\caption{Accuracy of an SVM built on different layers of the VGG-16 network and for different datasets.}
\label{table:transfer}
\centering
\small
\begin{tabular}{lc|ccccc}\toprule
& & \multicolumn{5}{c}{layer}\\[1mm]
dataset & \# classes & 5 & 10 & 17 & 24 & 31 \\\midrule
UFI & 605 & 0.45 & 0.57 & \bf 0.62 & 0.54 & 0.19 \\
LFW & 61 & 0.78 & 0.86 &\bf 0.92 & 0.89 & 0.75\\
Sphaera & 111 & 0.93 & 0.96 & \bf 0.98 & 0.97 & 0.96 \\
\bottomrule
\end{tabular}
\end{table}
These results corroborate the hypothesis initially formed from the BiLRP{} explanations: overspecialization of the top layers on the original task leads to a sharp drop of accuracy on the target task. The best accuracies are instead obtained in the intermediate layers.
\subsection{How {\em Invariant} is the Similarity Model?}
To further demonstrate the potential of BiLRP{} for characterizing a similarity model, we consider the problem of assessing its invariance properties. Representations that incorporate meaningful invariance are particularly desirable as they enable learning and generalizing from fewer data points \cite{DBLP:journals/pami/BrunaM13,Chmiela2018}.
Invariance can however be difficult to measure in practice: On one hand, the model should respond equally to the input and its transformed version. On the other hand, the response should be selective \cite{Anselmi2016,DBLP:conf/nips/GoodfellowLSLN09}, i.e.\ not the same for every input. In the context of neural networks, a proposed measure of invariance that implements this joint requirement is the local/global firing ratio \cite{DBLP:conf/nips/GoodfellowLSLN09}. In a similar way, we consider an invariance measure for similarity models based on the local/global similarity ratio:
\begin{align}
\textsc{Inv} =
\frac{\big\langle y(\boldsymbol{x},\boldsymbol{x}') \big\rangle_{\text{local}}}{\big\langle y(\boldsymbol{x},\boldsymbol{x}') \big\rangle_{\text{global}}}
\label{eq:invariance}
\end{align}
The expression $\langle \cdot \rangle_{\text{local}}$ denotes an average over pairs of transformed points (which our model should predict to be similar), and $\langle \cdot \rangle_{\text{global}}$ denotes an average over all pairs of points.
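Given a precomputed matrix of pairwise similarity scores, Eq.\ \eqref{eq:invariance} can be sketched as follows (numpy sketch; excluding self-pairs from both averages is our reading of the averages and an assumption of the example):

```python
import numpy as np

def invariance_score(Y, video_ids, frame_ids, dt=5):
    """Local/global similarity ratio of Eq. (invariance).

    Y         : (n, n) matrix of pairwise similarities y(x_i, x_j)
    video_ids : video index of each frame
    frame_ids : temporal index of each frame within its video
    """
    v = np.asarray(video_ids)
    t = np.asarray(frame_ids)
    off_diag = ~np.eye(len(Y), dtype=bool)
    # local pairs: same video, nearby in time, excluding identical frames
    local = (v[:, None] == v[None, :]) \
        & (np.abs(t[:, None] - t[None, :]) <= dt) & off_diag
    return Y[local].mean() / Y[off_diag].mean()
```

A model that assigns higher similarity to nearby frames of the same video than to arbitrary pairs yields a score above one.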
\smallskip
We study the layer-wise formation of invariance in the VGG-16 network. We use for this the `UCF Sports Action' video dataset \cite{Rodriguez2008ActionMA, ucfsports2014}, where consecutive video frames readily provide a wealth of transformations (translation, rotation, rescaling, etc.) to which we would like our model to be invariant, i.e.\ produce a high similarity score. Videos are cropped to square shape and resized to $128 \times 128$ pixels. We define $\langle \cdot \rangle_{\text{local}}$ to be the average over pairs of nearby frames in the same video ($\Delta t \leq 5$), and $\langle \cdot \rangle_{\text{global}}$ to be the average over all pairs, also from different videos. Invariance scores obtained for similarity models built at various layers are shown in Table \ref{table:invariance}.
\begin{table}[h]
\caption{Invariance measured by Eq.\ \eqref{eq:invariance} at various layers of the VGG-16 network on the UCF Sports Action dataset.}
\label{table:invariance}
\centering
\small
\begin{tabular}{r|ccccc}\toprule
& \multicolumn{5}{c}{layer}\\[1mm]
& 5 & 10 & 17 & 24 & 31 \\\midrule
\textsc{Inv} & 2.30 & 2.31 & 2.43 & 2.87 & \bf 4.00\\
\bottomrule
\end{tabular}
\end{table}
Invariance increases steadily from the lower to the top layers of the neural network and reaches a maximum score at layer $31$. We now take a closer look at the invariance score in this last layer, by applying the following two steps:
\smallskip
\begin{enumerate}[label=(\roman*)]
\item The invariance score is decomposed on the pairs of video frames that directly contribute to it, i.e.\ through the term $\langle \cdot \rangle_\text{local}$ of Eq.\ \eqref{eq:invariance}.
\item BiLRP{} is applied to these pairs of contributing video frames in order to produce a finer pixel-wise explanation of invariance.
\end{enumerate}
\smallskip
This two-step analysis is shown in Fig.\ \ref{figure:invariance} for a selection of videos and pairs of video frames.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sports.pdf}
\caption{Explanation of measured invariance at layer $31$. {\em Left:} Similarity matrix associated with a selection of video clips. The diagonal band outlined in black contains the pairs of examples in $\langle \cdot \rangle_\text{local}$. {\em Right:} BiLRP{} explanations for selected pairs from the diagonal band.}
\label{figure:invariance}
\end{figure}
The first example shows a diver rotating counterclockwise as she leaves the platform. Here, the contribution to invariance is meaningfully attributed to the different parts of the rotating body. The second example shows a soccer player performing a corner kick. Part of the invariance is attributed to the player moving from right to left, however, a sizable amount of it is also attributed in an unexpected manner to the static corner flag behind the soccer player. The last example shows a golf player as he strikes the ball. Again, invariance is unexpectedly attributed to a small red object in the grass. This small object would have likely been overlooked, even after a preliminary inspection of the input images.
The reliance of the invariance measure on unexpected objects in the image (corner flag, small red object) can be viewed as a `Clever Hans' effect \cite{lapuschkin-ncomm19}: the observer assesses how `intelligent' (or invariant) the model is, based on looking at the outcome of a given experiment (the computed invariance score), instead of investigating the decision structure that leads to the high invariance score. This effect may lead to an overestimation of the invariance properties of the model.
Similar `Clever Hans' effects can also be observed beyond video data, e.g.\ when applying the similarity model to illustrations in the Sphaera corpus. Figure \ref{figure:chans2} shows two pairs of illustrations whose content is equivalent up to a rotation, and for which our model predicts a high similarity.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{cleverhans.pdf}
\caption{Pairs of illustrations from the Sphaera corpus, explained with BiLRP{}. The high similarity originates mainly from matching fixed features in the image rather than capturing the rotating elements.}
\label{figure:chans2}
\end{figure}
Once more, BiLRP{} reveals in both cases that the high similarity is not due to matching the rotated patterns, but mainly to fixed elements at the center and at the border of the image, respectively.
\smallskip
Overall, we have demonstrated that BiLRP{} can be useful to identify unsuspected and potentially undesirable reasons for high measured invariance. Practically, applying this method can help to avoid deploying a model with false expectations in real-world applications. Our analysis also suggests that better {\em explanation-based} invariance measures could be designed in the future, potentially in combination with optical flows \cite{DBLP:conf/iccv/DosovitskiyFIHH15}, in order to better distinguish between the matching structures that should and should not contribute to the invariance score.
\section{Engineering Explainable Similarities}
\label{section:engineering}
\begin{figure*}[h]
\centering
\includegraphics[width=.95\textwidth]{table.pdf}
\caption{\figletter{A} Collection of tables from the Sphaera Corpus \cite{mva19} from which we extract two tables with identical content. \figletter{B}~Proposed `bigram network' supporting the table similarity model. \figletter{C} BiLRP{} explanations of predicted similarities between the two input tables.}
\label{figure:sphaera}
\end{figure*}
In this section, we turn to an open and significant problem in the digital humanities: assessing similarity between numeric tables in historical textbooks. We consider scanned numeric tables from the Sphaera Corpus \cite{mva19}. Tables contained in the corpus typically report astronomical measurements or calculations of the positions of celestial objects in the sky. Examples of such tables are given in Fig.\ \ref{figure:sphaera}\,A. Producing an accurate model of similarity between astronomical tables would make it possible to further consolidate historical networks, which would in turn allow for better inferences.
\smallskip
The similarity prediction task has so far proved challenging: First, it is difficult to acquire ground-truth similarity. Getting similarity labels would require a meticulous inspection of potentially large tables by a human expert, and the process would need to be repeated for many pairs of tables. Also, unlike natural images, faces, or illustrations, which are all well represented by existing pretrained convolutional neural networks, table data usually requires ad-hoc approaches \cite{husson14,DBLP:conf/icdar/SchreiberAWDA17}. In particular, we need to specify which aspects of the tables (e.g.\ numbers, style, or layout) should support the similarity.
\subsection{The `Bigram Network'}
\label{section:bigramnet}
We propose a novel `bigram network' to predict table similarity. Our network can be learned from very few human annotations and is designed to encourage the prediction to be based on relevant numerical features. The network consists of two parts:
\smallskip
The first part is a standard stack of convolution/ReLU layers taking a scanned table $\boldsymbol{x}$ as input and producing $10$ activation maps $\{\boldsymbol{a}_j(\boldsymbol{x})\}_{j=1}^{10}$ detecting the digits $0$--$9$. The map $\boldsymbol{a}_j(\boldsymbol{x})$ is trained to produce small Gaussian blobs at locations where digits of class $j$ are present. The convolutional network is trained on a few hundred single-digit labels along with their respective image patches. We also incorporate a comparable number of negative examples (from non-table pages) to correctly handle the absence of digits.
\smallskip
The second part of the network is a hard-coded sequence of layers that extracts task-relevant information from the single-digit activation maps. The first layer in the sequence performs an element-wise `min' operation:
\begin{align*}
\boldsymbol{a}_{jk}^{(\tau)}(\boldsymbol{x}) &= \min\big\{\boldsymbol{a}_j(\boldsymbol{x}) ,\tau(\boldsymbol{a}_k(\boldsymbol{x}))\big\}
\end{align*}
The `min' operation can be interpreted as a continuous `\textsc{and}' \cite{Kauffmann20}, and tests at each location for the presence of bigrams $jk \in 00$--$99$. The function $\tau$ represents some translation operation, and we apply several of them to produce candidate alignments between the digits forming the bigrams (e.g.\ horizontal shifts of 8, 10, and 12 pixels). We then apply the max-pooling layer:
\begin{align*}
\boldsymbol{a}_{jk}(\boldsymbol{x}) &= \max_\tau \big\{ \boldsymbol{a}_{jk}^{(\tau)}(\boldsymbol{x}) \big\}.
\end{align*}
The `max' operation can be interpreted as a continuous `\textsc{or}', and determines at each location whether a bigram has been found for at least one candidate alignment. Finally, a global sum-pooling layer is applied spatially:
$$
\phi_{jk}(\boldsymbol{x}) = \big\|\boldsymbol{a}_{jk}(\boldsymbol{x})\big\|_1
$$
It introduces global translation invariance into the model and produces a $100$-dimensional output vector representing the sum of activations for each bigram. The bigram network is depicted in Fig.\ \ref{figure:sphaera}\,B.
\smallskip
From the output of the bigram network, the similarity score can be obtained by applying the dot product $y(\boldsymbol{x},\boldsymbol{x}') = \langle \phi(\boldsymbol{x}),\phi(\boldsymbol{x}') \rangle$. Furthermore, because the bigram network is exclusively composed of convolution/ReLU layers and standard pooling operations, similarities built at the output of this network remain fully explainable by BiLRP{}.
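The hard-coded second part of the network can be sketched as follows (numpy sketch; the shift direction and the single-pixel blob encoding in the test are illustrative assumptions, not the exact detector output):

```python
import numpy as np

def bigram_features(maps, shifts=(8, 10, 12)):
    """Hard-coded part of the bigram network (sketch).

    maps : array of shape (10, H, W) with the digit activation maps
           a_0(x), ..., a_9(x) produced by the convolutional detector.
    Returns the 100-dimensional vector phi(x), one entry per bigram jk.
    """
    n, H, W = maps.shape
    phi = np.zeros((n, n))
    for j in range(n):
        for k in range(n):
            candidates = []
            for s in shifts:
                # tau: shift a_k left by s pixels, so that a digit k lying
                # s pixels to the right of digit j is brought into alignment.
                shifted = np.zeros_like(maps[k])
                shifted[:, :-s] = maps[k][:, s:]
                # element-wise 'min' = continuous AND: both digits present
                candidates.append(np.minimum(maps[j], shifted))
            # 'max' over candidate alignments = continuous OR
            a_jk = np.max(candidates, axis=0)
            # global sum-pooling (L1 norm of the non-negative map)
            phi[j, k] = a_jk.sum()
    return phi.reshape(-1)
```

The similarity between two tables is then the dot product `bigram_features(maps_x) @ bigram_features(maps_xp)`.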
\subsection{Validating the `Bigram Network' with BiLRP{}}
We come to the final step which is to validate the `bigram network' approach on the task of predicting table similarity. Examples of common validation procedures include precision-recall curves, or the ability to solve a proxy task (e.g.\ table classification) from the predicted similarities. These validation procedures require label information, which is however difficult to obtain for this type of data. Furthermore, when the labeled data is not sufficiently representative, these procedures are potentially affected by the `Clever Hans' effect \cite{lapuschkin-ncomm19}.
\smallskip
In the following, we will show that BiLRP{}, through the explanatory feedback it provides, offers a much more data efficient way of performing model validation.
We take a pair of tables $(\boldsymbol{x},\boldsymbol{x}')$, which a preliminary manual inspection has verified to be similar. We then apply BiLRP{} to explain:
\smallskip
\begin{enumerate}[label=(\roman*)]
\item the similarity score at the output of our engineered task-specific `bigram network',
\item the similarity score at layer 17 of a generic pretrained VGG-16 network.
\end{enumerate}
\smallskip
For the bigram network, the BiLRP{} parameter $\gamma$ is set to $0.5$ at each convolution layer. For the VGG-16 network, we use the same BiLRP{} parameters as in Section \ref{section:vgg}. The result of our analysis is shown in Fig.\ \ref{figure:sphaera}\,C.
The bigram network similarity model correctly matches pairs of digits in the two tables. Furthermore, matches are produced between sequences occurring at different locations, thereby verifying the structural translation invariance of the model. Pixel-level explanations further validate the approach by showing that individual digits are matched in a meaningful manner. In contrast, the similarity model built on VGG-16 does not distinguish between the different pairs of digits. Furthermore, part of the similarity score is supported by aspects that are not task-relevant, such as table borders. Hence, for this particular table similarity task, BiLRP{} can clearly establish the superiority of the bigram network over VGG-16.
We stress that this assessment could be readily obtained from a {\em single} pair of tables. If instead we had applied a validation technique that relies only on similarity scores, significantly more data would have been needed in order to reach the same conclusion with confidence. This sample efficiency of BiLRP{} (and by extension any successful explanation technique) for the purpose of model validation is especially important in the digital humanities and other scientific domains, where ground-truth labels are typically scarce or expensive to obtain.
\section{Conclusion}
Similarity is a central concept in machine learning that is a precursor to a number of supervised and unsupervised machine learning methods. In this paper, we have shown that it can be crucial to obtain a human-interpretable explanation of the predicted similarity before using it to train a practical machine learning model.
We have contributed a theoretically well-founded method to explain similarity in terms of pairs of input features. Our method, called BiLRP{}, can be expressed as a composition of LRP computations. It therefore inherits the robustness and broad applicability of LRP, while extending it to the novel scenario of similarity explanation.
The usefulness of BiLRP{} was showcased on the task of understanding similarities as implemented by the VGG-16 neural network, where it could predict transfer learning capabilities and highlight clear cases of `Clever Hans' \cite{lapuschkin-ncomm19} predictions. Furthermore, for a practically relevant problem in the digital humanities, BiLRP{} was able to demonstrate with very limited data the superiority of a task-specific similarity model over a generic VGG-16 solution.
Future work will extend the presented techniques from binary towards $n$-ary similarity structures, especially aiming at incorporating the different levels of reliability of the input features. Furthermore, we will use the proposed research tool to gain insight into large data collections, in particular, grounding historical networks to interpretable domain-specific concepts.
\section*{Acknowledgements}
This work was funded by the German Ministry for Education and Research as BIFOLD -- Berlin Institute for the Foundations of Learning and Data (ref.\ 01IS18025A and ref.\ 01IS18037A), and the German Research Foundation (DFG) as Math+: Berlin Mathematics Research Center (EXC 2046/1, project-ID: 390685689). This work was partly supported by the Institute for Information \& Communications Technology Planning \& Evaluation (IITP) grant funded by the Korea government (No. 2017-0-00451, No. 2017-0-01779).
\bibliographystyle{IEEEtran}
\section{`Hessian$\kern 0.08em \times \kern 0.08em $Product{}' Baseline}
The `Hessian$\kern 0.08em \times \kern 0.08em $Product{}' (HP) baseline we consider in this paper applies to similarity models of the type:
$$
y(\boldsymbol{x},\boldsymbol{x}') = \langle \phi(\boldsymbol{x}),\phi(\boldsymbol{x}') \rangle
$$
i.e.\ a dot product on a feature map $\phi\colon\mathbb{R}^d \to \mathbb{R}^h$ satisfying first-order positive homogeneity: $\forall_{\boldsymbol{x}},\forall_{t>0}:~\phi(t\boldsymbol{x}) = t\phi (\boldsymbol{x})$.
\subsection{Derivation of HP}
We derive Hessian$\kern 0.08em \times \kern 0.08em $Product{} as the result of a Taylor expansion of the similarity model at the root point $(\widetilde{\bx},\widetilde{\bx}') = (\varepsilon \kern 0.01em \boldsymbol{x},\varepsilon \kern 0.01em \boldsymbol{x}')$ with $\varepsilon$ almost zero. For the zero-order term, we get:
\begin{align*}
y(\widetilde{\bx},\widetilde{\bx}') &= 0
\end{align*}
Let $\nabla$ and $\nabla'$ be the gradient operators with respect to the features forming $\boldsymbol{x}$ and $\boldsymbol{x}'$ respectively. First-order terms associated with the features of $\boldsymbol{x}$ are given by:
\begin{align*}
R_i &= [\nabla y(\widetilde{\bx},\widetilde{\bx}')]_i \cdot (x_i - \widetilde{x}_i)\\
&= \textstyle \big[\nabla \sum_m \phi_m(\widetilde{\bx})\phi_m(\widetilde{\bx}')\big]_i \cdot x_i\\
&= \textstyle \big[\sum_m (\nabla \phi_m(\widetilde{\bx}))\cdot \phi_m(\widetilde{\bx}')\big]_i \cdot x_i\\
&= \textstyle \big[\sum_m (\nabla \phi_m(\widetilde{\bx}))\cdot \phi_m(\boldsymbol{0})\big]_i \cdot x_i\\
&= 0
\end{align*}
In a similar way, for the features of $\boldsymbol{x}'$, we get $R_{i'} = 0$. To extract the interaction terms $R_{ii'}$, we first show that $\forall_{t>0}:~\nabla \phi_m(t\boldsymbol{x}) = \nabla \phi_m(\boldsymbol{x})$:
\begin{align*}
\nabla \phi_m(t\boldsymbol{x}) = t^{-1} \frac{\partial}{\partial \boldsymbol{x}} \phi_m(t\boldsymbol{x}) = t^{-1}\frac{\partial}{\partial \boldsymbol{x}} t\phi_m(\boldsymbol{x}) = \nabla\phi_m(\boldsymbol{x})
\end{align*}
Then, we develop the interaction terms of the Taylor expansion as:
\begin{align}
R_{ii'}
&= [\nabla^2 y(\widetilde{\bx},\widetilde{\bx}')]_{ii'} \cdot (x_i - \widetilde{x}_i) \cdot (x'_{i'} - \widetilde{x}'_{i'})\nonumber\\
&= \textstyle \big[ \sum_m \nabla^2 \phi_m(\widetilde{\bx})\phi_m(\widetilde{\bx}')\big]_{ii'} \cdot x_i x'_{i'}\nonumber\\
&= \textstyle \big[ \sum_m \nabla \nabla' \phi_m(\widetilde{\bx})\phi_m(\widetilde{\bx}')\big]_{ii'} \cdot x_i x'_{i'}\nonumber\\
&= \textstyle \big[ \sum_m (\nabla \phi_m(\widetilde{\bx})) \otimes (\nabla' \phi_m(\widetilde{\bx}'))\big]_{ii'} \cdot x_i x'_{i'}\nonumber
\intertext{where $\otimes$ denotes the outer product. Applying the property shown above ($\nabla \phi_m(t\boldsymbol{x}) = \nabla \phi_m(\boldsymbol{x})$), we get}
&= \textstyle \big[ \sum_m (\nabla \phi_m(\boldsymbol{x})) \otimes (\nabla' \phi_m(\boldsymbol{x}'))\big]_{ii'} \cdot x_i x'_{i'}\label{eq:outer}
\intertext{and finally, we apply the steps in reverse order}
&= \textstyle \big[\sum_m \nabla \nabla' \phi_m(\boldsymbol{x})\phi_m(\boldsymbol{x}')\big]_{ii'} \cdot x_i x'_{i'}\nonumber\\
&= \textstyle \big[\sum_m \nabla^2 \phi_m(\boldsymbol{x})\phi_m(\boldsymbol{x}')\big]_{ii'} \cdot x_i x'_{i'}\nonumber\\
&= [\nabla^2 y(\boldsymbol{x},\boldsymbol{x}')]_{ii'} \cdot x_i x'_{i'}.\nonumber
\end{align}
The last line corresponds to the HP baseline.
\subsection{Conservation of HP}
We show that `Hessian$\kern 0.08em \times \kern 0.08em $Product{}' sums to the similarity score, and thus constitutes a conservative explanation. For this, we first show that $\boldsymbol{x}^\top \nabla \phi_m(\boldsymbol{x}) = \phi_m(\boldsymbol{x})$:
\begin{align*}
\boldsymbol{x}^\top \nabla \phi_m(t\boldsymbol{x}) = \frac{\partial}{\partial t} \phi_m(t\boldsymbol{x}) = \frac{\partial}{\partial t} t\phi_m(\boldsymbol{x}) = \phi_m(\boldsymbol{x})
\end{align*}
Choosing $t=1$ completes the proof. (This result is known as Euler's homogeneous function theorem.)
\smallskip
\noindent Starting from Eq.\ \eqref{eq:outer}, we then write:
\begin{align*}
\textstyle \sum_{ii'} R_{ii'}
&= \textstyle \sum_{ii'} \big[ \sum_m (\nabla \phi_m(\boldsymbol{x})) \otimes (\nabla' \phi_m(\boldsymbol{x}'))\big]_{ii'} \cdot x_i x'_{i'}\\
&= \textstyle \sum_m \sum_{i} x_i [\nabla \phi_m(\boldsymbol{x})]_i \cdot \sum_{i'} x'_{i'} [\nabla' \phi_m(\boldsymbol{x}')]_{i'} \\
&= \textstyle \sum_m \boldsymbol{x}^\top \nabla \phi_m(\boldsymbol{x}) \cdot \boldsymbol{x}'^\top \nabla' \phi_m(\boldsymbol{x}')\\
&= \textstyle \sum_m \phi_m(\boldsymbol{x}) \cdot \phi_m(\boldsymbol{x}')\\
&= y(\boldsymbol{x},\boldsymbol{x}')
\end{align*}
which shows that the explanation is conservative.
\subsection{`Gradient$\kern 0.08em \times \kern 0.08em $Input{}' Formulation of HP}
We show that `Hessian$\kern 0.08em \times \kern 0.08em $Product{}' can be rewritten as $2 \times h$ `Gradient$\kern 0.08em \times \kern 0.08em $Input{}' (GI) computations. Starting from Eq.\ \eqref{eq:outer}, we get:
\begin{align*}
R_{ii'} &= \textstyle \big[ \sum_m (\nabla \phi_m(\boldsymbol{x})) \otimes (\nabla' \phi_m(\boldsymbol{x}'))\big]_{ii'} \cdot x_i x'_{i'}\\
&= \textstyle \big[ \sum_m (\nabla \phi_m(\boldsymbol{x}) \odot \boldsymbol{x}) \otimes (\nabla' \phi_m(\boldsymbol{x}') \odot \boldsymbol{x}')]_{ii'}\\
&=\textstyle \big[ \sum_m \text{GI}(\phi_m,\boldsymbol{x}) \otimes \text{GI} (\phi_m,\boldsymbol{x}') \big]_{ii'}
\end{align*}
Therefore, scores $R_{ii'}$ produced by HP are the elements of a sum of outer products of GI computations.
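These identities are easy to verify numerically. The sketch below uses a toy positively homogeneous feature map (a bias-free ReLU network, our choice for illustration): HP is computed as a sum of outer products of GI terms, and conservation holds exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy positively homogeneous feature map: phi(x) = W2 @ relu(W1 @ x).
# Without biases, phi(t x) = t phi(x) for t > 0, as required.
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(4, 16))
phi = lambda x: W2 @ np.maximum(W1 @ x, 0.0)

def grad_phi(x):
    """Jacobian of phi at x (row m holds the gradient of phi_m)."""
    mask = (W1 @ x > 0).astype(float)
    return W2 @ (mask[:, None] * W1)

def hp_explanation(x, xp):
    """HP scores R_{ii'} as a sum of outer products of GI terms."""
    Gx  = grad_phi(x)  * x    # GI for each output dimension m, branch x
    Gxp = grad_phi(xp) * xp   # GI for each output dimension m, branch x'
    return Gx.T @ Gxp         # sum over m of outer products

x, xp = rng.normal(size=8), rng.normal(size=8)
R = hp_explanation(x, xp)
# Conservation: the scores sum to the similarity y(x, x').
assert np.isclose(R.sum(), phi(x) @ phi(xp))
```

The conservation check relies on Euler's homogeneous function theorem, exactly as in the derivation above.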
\section{Derivation of BiLRP{}}
The deep Taylor decomposition \cite{DBLP:journals/pr/MontavonLBSM17} (DTD) framework we use to derive BiLRP{} propagation rules assumes that relevance propagated up to a certain layer can be modeled as
$$
\widehat{R}_{kk'}(\boldsymbol{a}) = a_k a_{k'} c_{kk'}
$$
i.e.\ a product of activations in the two branches of the similarity computation, multiplied by a term $c_{kk'}$ assumed to be constant and set such that $\widehat{R}_{kk'}(\boldsymbol{a}) = R_{kk'}$. DTD seeks to propagate the modeled relevance to the layer below by identifying the terms of a Taylor expansion. In the following, we distinguish between (1) linear/ReLU layers, and (2) positively homogeneous layers (e.g.\ min- or max-pooling).
\subsection{Linear/ReLU Layers}
These layers produce output activations of the type
\begin{align*}
a_k &= \textstyle \big(\sum_{j} a_j w_{jk}\big)^+\\
a_{k'} &= \textstyle \big(\sum_{j'} a_{j'} w_{j'k'}\big)^+
\end{align*}
where the weighted sum can be either a dense layer, or a convolution. The relevance model can be written as:
\begin{align*}
\widehat{R}_{kk'}(\boldsymbol{a}) &= a_k a_{k'} c_{kk'}\\
&= \textstyle \big(\sum_j a_j w_{jk}\big)^+ \big( \sum_{j'} a_{j'} w_{j'k'}\big)^+ c_{kk'}
\end{align*}
When neurons $a_k$ and $a_{k'}$ are jointly activated (i.e.\ $a_k,a_{k'} > 0$), a second-order Taylor expansion of $R_{kk'}$ at some reference point $\widetilde{\ba}$ is given by:
\begin{align*}
\widehat{R}_{kk'}(\boldsymbol{a}) &= \textstyle \big(\sum_{j} \widetilde{a}_{j} w_{jk}\big) \big(\sum_{j'} \widetilde{a}_{j'} w_{j'k'}\big) c_{kk'}\\
& \quad \textstyle + \sum_{j} (a_j - \widetilde{a}_j) w_{jk} \big(\sum_{j'} \widetilde{a}_{j'} w_{j'k'}\big) c_{kk'}\\
& \quad \quad \textstyle + \sum_{j'} \big(\sum_{j} \widetilde{a}_{j} w_{jk}\big) (a_{j'} - \widetilde{a}_{j'}) w_{j'k'} c_{kk'}\\
& \quad \quad \quad \textstyle + \sum_{jj'} \textstyle (a_j - \widetilde{a}_j) w_{jk} (a_{j'} - \widetilde{a}_{j'}) w_{j'k'} c_{kk'}
\end{align*}
BiLRP{} chooses the reference point $\widetilde{\ba}$ to be subject to the following two constraints:
\begin{enumerate}
\item very close to the ReLU hinges of neurons $k$ and $k'$ (but still on the activated domain)
\item on the plane $\{\widetilde{\ba}(t,t') |~ t,t' \in \mathbb{R}\}$
where
\begin{align*}
[\widetilde{\ba}(t,t')]_j &= a_j - t a_j \cdot (1 + \gamma \cdot 1_{w_{jk} > 0})\\
[\widetilde{\ba}(t,t')]_{j'} &= a_{j'} - t' a_{j'} \cdot (1 + \gamma \cdot 1_{w_{j'k'} > 0})
\end{align*}
with $\gamma$ a hyperparameter.
\end{enumerate}
We now analyze the different terms of the expansion at this reference point.
\begin{itemize}
\item The zero-order term is zero.
\item The first-order terms are also zero because the reference point is chosen at the {\em intersection} of the ReLU hinges of neurons $k$ and $k'$, hence the non-differentiated factor in each term is zero.
\item The interaction terms are given by:
\begin{align*}
R_{jj' \leftarrow kk'} &= t a_j (1 + \gamma 1_{w_{jk}>0}) \\
& \qquad \cdot~t' a_{j'} (1 + \gamma 1_{w_{j'k'}>0})\\
& \qquad \qquad \cdot~w_{jk} w_{j'k'} c_{kk'}\\
&= tt' a_j a_{j'} \rho(w_{jk}) \rho(w_{j'k'}) c_{kk'}
\end{align*}
where $\rho(w_{jk}) = w_{jk} + \gamma w_{jk}^+$ and where the product of parameters $tt'$ must still be resolved.
\end{itemize}
Because we expand a bilinear form, and because zero-order and first-order terms are zero, the constraint $\sum_{jj'} R_{jj' \leftarrow kk'} = R_{kk'}$ must be satisfied. This constraint allows us to resolve the product $tt'$, leading to the following closed-form expression for the interaction terms:
\begin{align*}
R_{jj' \leftarrow kk'} &= \frac{a_j a_{j'} \rho(w_{jk}) \rho(w_{j'k'})}{\sum_{jj'} a_j a_{j'} \rho(w_{jk}) \rho(w_{j'k'})} R_{kk'}
\end{align*}
This propagation rule is also consistent with the case where $a_k$ or $a_{k'}$ is zero and where no relevance needs to be redistributed. Aggregate relevance scores for the layer below are obtained by summing over neurons in the higher layer:
\begin{align}
\textstyle
R_{jj'} &= \textstyle \sum_{kk'} R_{jj' \leftarrow kk'}\nonumber\\
&=\sum_{kk'} \frac{a_j a_{j'} \rho(w_{jk}) \rho(w_{j'k'})}{\sum_{jj'} a_j a_{j'} \rho(w_{jk}) \rho(w_{j'k'})} R_{kk'}
\label{eq:Rjj}
\end{align}
This last equation is the propagation rule used by BiLRP{} to propagate relevance in linear/ReLU layers.
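Because the denominator of Eq.\ \eqref{eq:Rjj} factorizes over the two branches, the rule can be implemented with two independent redistribution matrices, one per branch (numpy sketch; the $\epsilon$ stabilizer for inactive neurons is an implementation detail we add here, not part of the rule above):

```python
import numpy as np

def rho(W, gamma):
    """LRP-gamma weight modification: rho(w) = w + gamma * w^+."""
    return W + gamma * np.maximum(W, 0.0)

def bilrp_linear_relu(a, ap, W, Wp, R, gamma=0.25, eps=1e-9):
    """Propagate bilinear relevance R_{kk'} through a linear/ReLU
    layer in each branch, following Eq. (Rjj).

    a, ap : input activations of the two branches
    W, Wp : weight matrices of shape (inputs, outputs)
    R     : (k x k') relevance matrix at the layer output
    """
    def branch_coeff(a, W):
        Z = a[:, None] * rho(W, gamma)   # a_j * rho(w_jk)
        z = Z.sum(axis=0)                # sum_j a_j rho(w_jk)
        return Z / (z + eps)             # contribution fractions per neuron k
    C, Cp = branch_coeff(a, W), branch_coeff(ap, Wp)
    return C @ R @ Cp.T                  # R_{jj'} = sum_kk' C_jk R_kk' C_j'k'
```

By construction, the propagated scores satisfy $\sum_{jj'} R_{jj'} = \sum_{kk'} R_{kk'}$ whenever all denominators are nonzero.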
\subsection{Positively Homogeneous Layers}
When $a_k$ and $a_{k'}$ are positively homogeneous functions of their input activations (e.g.\ min- and max-pooling layers), the relevance model can be expressed in terms of the Hessian:
\begin{align*}
\widehat{R}_{kk'}(\boldsymbol{a}) &= a_k a_{k'} c_{kk'}\\
&= \textstyle
\big(\sum_j a_j [\nabla a_k]_j\big)
\big(\sum_{j'} a_{j'} [\nabla a_{k'}]_{j'}\big)
c_{kk'}\\
&= \textstyle \sum_{jj'} a_j a_{j'} [\nabla^2 a_k a_{k'}]_{jj'} c_{kk'}
\end{align*}
The last form can also be interpreted as the interaction terms of a Taylor expansion of $\widehat{R}_{kk'}$ at $\widetilde{\ba} = \varepsilon \kern 0.01em \boldsymbol{a}$ with $\varepsilon$ almost zero. Zero-order and first-order terms of the expansion vanish, and interaction terms can be rewritten in a propagation-like manner as:
$$
R_{jj' \leftarrow kk'} = \frac{a_j a_{j'} [\nabla^2 a_k a_{k'}]_{jj'}}{\sum_{jj'} a_j a_{j'} [\nabla^2 a_k a_{k'}]_{jj'}} R_{kk'},
$$
and finally,
\begin{align}
R_{jj'} &= \sum_{kk'} \frac{a_j a_{j'} [\nabla^2 a_k a_{k'}]_{jj'}}{\sum_{jj'} a_j a_{j'} [\nabla^2 a_k a_{k'}]_{jj'}} R_{kk'},
\label{eq:Rjjother}
\end{align}
which is the BiLRP{} propagation rule we use in these layers.
\section{Factorization of BiLRP{}}
\label{appendix:bilrp-factorization}
In this appendix, we show how the propagation rules in Equations \eqref{eq:Rjj} and \eqref{eq:Rjjother} can be factorized to be expressed as compositions of standard LRP \cite{lrp} propagation rules. In the top layer, the dot product similarity can be written as:
\begin{align*}
\textstyle y = \sum_{kk'} a_k a_{k'} 1_{\text{id}(k) = \text{id}(k')}
\end{align*}
where `id' is a function returning the neuron index in its respective branch (a number from $1$ to $h$). Relevance scores can be identified and developed as:
\begin{align*}
R_{kk'} &= a_k a_{k'} 1_{\text{id}(k) = \text{id}(k')}\\
&= \textstyle a_k a_{k'} \sum_{m=1}^h 1_{\text{id}(k) = m} 1_{\text{id}(k') = m}\\
&= \textstyle \sum_{m=1}^h {\underbrace{a_k 1_{\text{id}(k)=m}}_{R_{km}}} \cdot {\underbrace{ a_{k'} 1_{\text{id}(k')=m}}_{R_{k'm}}}
\end{align*}
where we have extracted the desired factor structure. We now apply an inductive argument: Assume that at some layer, $R_{kk'}$ factorizes as $R_{kk'} =\sum_{m=1}^h R_{km} R_{k'm}$. We can show that the same holds in the layer below, in particular, Eq. \eqref{eq:Rjj} can be rewritten as:
\begin{align*}
R_{jj'} &= \sum_{kk'}\frac{a_j a_{j'} \rho(w_{jk}) \rho(w_{j'k'})}{\sum_{jj'} a_j a_{j'} \rho(w_{jk}) \rho(w_{j'k'})} \sum_{m=1}^h R_{km} R_{k'm}\\
&= \sum_{m=1}^h \sum_{kk'}\frac{a_j \rho(w_{jk}) a_{j'} \rho(w_{j'k'})}{\sum_{j} a_j \rho(w_{jk}) \sum_{j'} a_{j'} \rho(w_{j'k'})} R_{km} R_{k'm}\\
&= \footnotesize \sum_{m=1}^h \Big(\underbrace{\sum_{k}\frac{a_j \rho(w_{jk})}{\sum_{j} a_j \rho(w_{jk})} R_{km}\!}_{R_{jm}}\Big)\!\cdot\! \Big( \underbrace{\sum_{k'}\frac{a_{j'} \rho(w_{j'k'})}{\sum_{j'} a_{j'} \rho(w_{j'k'})} R_{k'm}\!}_{R_{j'm}}\Big)
\end{align*}
where we identify a similar factorization. Furthermore, terms of the factorization can be computed using standard LRP rules, here, LRP-$\gamma$ \cite{lrpoverview}. Similarly, Eq.\ \eqref{eq:Rjjother} can be rewritten as:
\begin{align*}
R_{jj'} &= \sum_{kk'}\frac{a_j a_{j'} [\nabla^2 a_k a_{k'}]_{jj'}}{\sum_{jj'} a_j a_{j'} [\nabla^2 a_k a_{k'}]_{jj'}} \sum_{m=1}^h R_{km} R_{k'm}\\
&= \sum_{m=1}^h \sum_{kk'}\frac{a_j [\nabla a_k]_{j} a_{j'} [\nabla a_{k'}]_{j'}}{\sum_{j} a_j [\nabla a_k]_{j} \sum_{j'}a_{j'} [\nabla a_{k'}]_{j'}} R_{km} R_{k'm}\\
&= \footnotesize \sum_{m=1}^h \Big(\underbrace{\sum_{k}\frac{a_j [\nabla a_k]_{j}}{\sum_{j} a_j [\nabla a_k]_{j}} R_{km}\!}_{R_{jm}}\Big)\!\cdot\! \Big( \underbrace{\sum_{k'}\frac{a_{j'} [\nabla a_{k'}]_{j'}}{\sum_{j'} a_{j'} [\nabla a_{k'}]_{j'}} R_{k'm}\!}_{R_{j'm}}\Big)
\end{align*}
which again factorizes into a composition of LRP-type propagation rules.
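This factorization can be verified numerically: propagating the $h$ factor columns $R_{km}$ with a standard (first-order) LRP-$\gamma$ step and recombining them reproduces the full second-order rule of Eq.~\eqref{eq:Rjj}. The toy layer below, with positive activations and weights so that all denominators are nonzero, is our own illustration:

```python
import numpy as np

gamma = 0.25
rng = np.random.default_rng(1)
a = rng.random(6) + 0.1                        # input activations a_j
w = rng.random((6, 4)) + 0.1                   # weights w_jk
F = rng.random((4, 3))                         # factors R_km (h = 3)
R_upper = F @ F.T                              # R_kk' = sum_m R_km R_k'm

rho = w + gamma * np.clip(w, 0, None)
M = (a[:, None] * rho) / (a @ rho)[None, :]    # standard LRP-gamma matrix M_jk

R_full = M @ R_upper @ M.T                     # full second-order rule, Eq. (Rjj)
R_fact = (M @ F) @ (M @ F).T                   # composition of first-order LRP steps
```

The identity $M (F F^\top) M^\top = (M F)(M F)^\top$ makes the agreement exact.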
\section{Theoretical Properties of BiLRP{}}
In this appendix, we give proofs for the theoretical properties stated in Section 3.3 of the paper.
\subsection{Conservation of BiLRP{}}
An important property of LRP \cite{lrp} is conservation, i.e.\ the relevance scores assigned to the input features sum to the prediction output\footnote{In LRP, exact conservation requires using non-dissipative propagation rules (e.g.\ LRP-0 and LRP-$\gamma$), as well as avoiding contribution of biases (e.g.\ by training a model with biases set to zero).}. Similar results can be obtained for BiLRP{}.
\input{1.tex}
\noindent We first show conservation when propagating with Eq.\ \eqref{eq:Rjj} in a linear/ReLU layer:
\begin{align*}
{\textstyle \sum_{jj'} R_{jj'}} &= \sum_{jj'}\sum_{kk'}\frac{a_j a_{j'} \rho(w_{jk}) \rho(w_{j'k'})}{\sum_{jj'} a_j a_{j'} \rho(w_{jk}) \rho(w_{j'k'})} R_{kk'}\\
&= \sum_{kk'}\frac{\sum_{jj'} a_j a_{j'} \rho(w_{jk}) \rho(w_{j'k'})}{\sum_{jj'} a_j a_{j'} \rho(w_{jk}) \rho(w_{j'k'})} R_{kk'} = \textstyle \sum_{kk'}R_{kk'}
\end{align*}
The same conservation property can be shown for the propagation rule in Eq.\ \eqref{eq:Rjjother}. Because these rules are applied repeatedly at each layer, we get the chain of equalities
\begin{align*}
\textstyle
\sum_{ii'} R_{ii'} = \dots =
\sum_{jj'} R_{jj'} =
\sum_{kk'} R_{kk'} = \dots = y(\boldsymbol{x},\boldsymbol{x}')
\end{align*}
where we observe that conservation also holds globally.
\subsection{HP as a Special Case of BiLRP{}}
A result due to \cite{DBLP:journals/corr/ShrikumarGSK16} is that application of a special case of LRP (referred to by \cite{lrpoverview} as LRP-0, or LRP-$\gamma$ with $\gamma=0$) at each layer of the network produces an explanation that is equivalent to Gradient$\kern 0.08em \times \kern 0.08em $Input{}. A similar result can be shown for BiLRP{}.
\input{2.tex}
\noindent Rewriting relevance scores as $R_{jj'} = a_j a_{j'} c_{jj'}$ and $R_{kk'} = a_k a_{k'} c_{kk'}$ and observing that for $\gamma=0$, we have $\rho(w_{jk}) = w_{jk}$, the propagation from one layer to another can be written for Eq.\ \eqref{eq:Rjj} as:
\begin{align*}
c_{jj'} &= \sum_{kk'} w_{jk} w_{j'k'} \frac{a_k}{\sum_j a_j w_{jk}}\frac{a_{k'}}{\sum_{j'}a_{j'} w_{j'k'}} c_{kk'}\\
&= \sum_{kk'} w_{jk} w_{j'k'} 1_{a_k>0}1_{a_{k'}>0} c_{kk'}\\
&= \sum_{kk'} [\nabla a_k]_j [\nabla a_{k'}]_{j'} c_{kk'}
\intertext{and similarly for Eq.\ \eqref{eq:Rjjother} as:}
c_{jj'} &= \sum_{kk'} [\nabla^2 a_k a_{k'}]_{jj'} c_{kk'}\\
&= \sum_{kk'} [\nabla a_k]_j [\nabla a_{k'}]_{j'} c_{kk'}
\end{align*}
For the considered class of functions, this relation is equivalent to the formula for propagating second-order derivatives (cf.\ \cite{DBLP:series/lncs/LeCunBOM12}), where $c_{jj'}$ and $c_{kk'}$ denote $[\nabla^2 y]_{jj'}$ and $[\nabla^2 y]_{kk'}$ respectively. Hence, we get at the end of the LRP procedure the quantity $c_{ii'} = [\nabla^2 y]_{ii'}$ and therefore $R_{ii'} = x_i x'_{i'} c_{ii'}$ is equivalent to `Hessian$\kern 0.08em \times \kern 0.08em $Product{}'.
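This equivalence is easy to check numerically. In the sketch below (a toy bias-free two-layer ReLU feature map of our own choosing), the factorized LRP-0 propagation yields per-branch factors equal to Gradient$\kern 0.08em \times \kern 0.08em $Input{}, and the resulting BiLRP scores match $x_i x'_{i'} [\nabla^2 y]_{ii'}$ exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
d, K, h = 5, 7, 3
W1 = rng.standard_normal((d, K))
W2 = rng.standard_normal((K, h))

def lrp0_factors(x):
    """LRP-0 factors R_{im} for one branch with phi(x) = relu(x W1) W2."""
    z1 = x @ W1
    a1 = np.maximum(z1, 0)
    F = a1[:, None] * W2                        # factors below the top linear layer
    return (x[:, None] * W1 / z1[None, :]) @ F  # LRP-0 step through the ReLU layer

x, xp = rng.standard_normal(d), rng.standard_normal(d)
R = lrp0_factors(x) @ lrp0_factors(xp).T        # BiLRP with gamma = 0

# Hessian x Product: R_ii' = x_i x'_i' [grad phi(x) grad phi(x')^T]_ii'
jac = lambda v: W1 @ (((v @ W1) > 0)[:, None] * W2)
R_hxp = np.outer(x, xp) * (jac(x) @ jac(xp).T)
```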
\subsection{Product Approximation in BiLRP{}}
We highlight in the following the product structure of relevance scores produced by BiLRP{} at each layer. This product structure supports the relevance model used by DTD, from which BiLRP{} propagation rules can be derived.
\input{3.tex}
\noindent In the top layer, we have $c_{kk'} = 1_{\text{id}(k) = \text{id}(k')}$ (cf.\ Appendix \ref{appendix:bilrp-factorization}), which is constant. We apply an inductive argument: assume that at some layer, $c_{kk'}$ is locally approximately constant; we would like to show that the same holds for $c_{jj'}$ in the layer below.
Relevance scores in Eq.\ \eqref{eq:Rjj} can be rewritten as $R_{jj'} = a_j a_{j'} c_{jj'}$ with:
\begin{align*}
c_{jj'} = \sum_{kk'} \rho(w_{jk}) \rho(w_{j'k'}) \frac{\big(\sum_{j} a_j w_{jk}\big)^{\!+} \big(\sum_{j'} a_{j'} w_{j'k'}\big)^{\!+}\!\!}{\sum_{jj'} a_j a_{j'} \rho(w_{jk}) \rho(w_{j'k'})}\, c_{kk'}.
\end{align*}
\noindent The term $c_{jj'}$ depends on $a_j$ and $a_{j'}$ only through (1) nested sums, which can be seen as diluting the effect of these activations, and (2) the term $c_{kk'}$ which we have assumed as a starting point to be locally approximately constant.
Similarly, for Eq.\ \eqref{eq:Rjjother}, the redistributed relevance can be written in product form, with $\textstyle c_{jj'} = \sum_{kk'} \,[\nabla^2 a_k a_{k'}]_{jj'} c_{kk'}$. This time, $c_{jj'}$ depends on local activations through (1) a combination of a nested sum and a second-order differentiation, with the same diluting effect as above, and (2) the term $c_{kk'}$ which is locally approximately constant.
\smallskip
Overall, in both cases, the weak dependency of $c_{jj'}$ on local activations provides support for treating this term as constant in the relevance model used by DTD.
\section{Coarse-Grained Explanations}
\label{section:bilrp-pooling}
When the input has $d$ dimensions, BiLRP{} explanations have size $d^2$, which can be very large. In practice, similarity does not necessarily need to be attributed to every single pair of pixels or input dimensions. A coarse-grained explanation in terms of groups of features jointly representing a super-pixel, a character, or a word, is often sufficient. Let $(\mathcal{I}_1,\mathcal{I}_2,\dots)$ and $(\mathcal{I}'_1,\mathcal{I}'_2,\dots)$ be two partitions of features for the two input examples $\boldsymbol{x}$ and $\boldsymbol{x}'$. These partitions form the coarse-grained structure in terms of which we would like to produce an explanation. Coarse-grained relevance scores are then given by:
$$
\textstyle
R_{\mathcal{I}\mathcal{I}'} = \sum_{i \in \mathcal{I}}\sum_{i' \in \mathcal{I}'} R_{ii'}.
$$
When the original explanation is conservative, it can be verified that the same holds for the coarse-grained explanation ($\sum_{\mathcal{I}\mathcal{I}'}R_{\mathcal{I}\mathcal{I}'} = \sum_{\mathcal{I}\mathcal{I}'} \sum_{ii' \in \mathcal{I}\mathcal{I}'} R_{ii'} = \sum_{ii'} R_{ii'}$).
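A minimal implementation of this pooling, assuming each partition element is a contiguous block of consecutive (flattened) features, could look as follows; block sizes are illustrative:

```python
import numpy as np

def coarse_grain(R, bs, bs_p=None):
    """Sum pairwise relevance R_{ii'} over blocks of bs (resp. bs_p) features."""
    bs_p = bs if bs_p is None else bs_p
    d, dp = R.shape
    return R.reshape(d // bs, bs, dp // bs_p, bs_p).sum(axis=(1, 3))
```

By construction the pooled scores sum to the same total as the original ones, so conservation is preserved.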
\section{Rendering of BiLRP{} Explanations}
BiLRP{} explanations of images are composed of $(\# \text{pixels} \times \#\text{pixels})$ scores connecting pairs of pixels in the two input images. Visually rendering these high-dimensional explanations requires compressing them while retaining the relevant information they contain. The rendering procedure we use in this paper is given in Algorithm \ref{alg:rendering}.
\begin{algorithm}
\caption{Rendering of BiLRP{} explanations}
\label{alg:rendering}
\begin{spacing}{1.1}
\begin{algorithmic}
\STATE $R_{\mathcal{I}\mathcal{I'}} \leftarrow \textstyle \sum_{i \in \mathcal{I}}\sum_{i' \in \mathcal{I'}} R_{ii'}$ \hfill {\small (coarse-graining)}
\STATE $R_{\mathcal{I}\mathcal{I'}} \leftarrow R_{\mathcal{I}\mathcal{I'}} / \sqrt[4]{\mathbb{E}[ R_{\mathcal{I}\mathcal{I'}}^4]}$ \hfill {\small (normalization)}
\STATE $ \displaystyle R_{\mathcal{I}\mathcal{I'}} \leftarrow R_{\mathcal{I}\mathcal{I'}} - \text{clip}(R_{\mathcal{I}\mathcal{I'}},[-l,l])$ \hfill {\small (sparsification)}
\STATE $\Delta = h - l$
\STATE $R_{\mathcal{I}\mathcal{I'}} \leftarrow \text{clip}(R_{\mathcal{I}\mathcal{I'}},[-\Delta,\Delta])/ \Delta$ \hfill {\small (thresholding)}
\FORALL{$R_{\mathcal{I}\mathcal{I'}} \neq 0$}
\STATE $\alpha= |R_{\mathcal{I}\mathcal{I'}}|^p$ \hfill {\small (set opacity)}
\IF{$R_{\mathcal{I}\mathcal{I'}} > 0$}
\STATE connect$(\mathcal{I},\mathcal{I}',\text{red},\alpha)$
\ELSE
\STATE connect$(\mathcal{I},\mathcal{I}',\text{blue},\alpha)$
\ENDIF
\ENDFOR
\end{algorithmic}
\end{spacing}
\end{algorithm}
\noindent The procedure pools relevance scores on super-pixels, normalizes them, shrinks them so that only a limited number of connections need to be plotted, thresholds them so that they fit into a finite color space, and raises them to some power $p$. The parameter $l$ controls the level of sparsification and we tune it mostly for computational reasons. The parameter $h$ forces all scores beyond a certain range to be plotted at the maximum color value. The parameter $p$ lets the explanation focus on all or only the highest relevance scores. A large value of $p$ makes the explanation more easily interpretable; however, contributions to similarity that are spread over a larger group of input features can become visually imperceptible. Examples of heatmaps with different values of $p$ are shown in Fig.\ \ref{figure:rendering}. The parameters retained for each dataset, as well as the pooling and input sizes, are given in Table \ref{table:parameters}.
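For concreteness, the array-processing steps of Algorithm~\ref{alg:rendering} (everything before drawing the connections) can be sketched as below; the actual \texttt{connect} drawing is left to the plotting backend:

```python
import numpy as np

def prepare_scores(R, l=0.25, h=13.0, p=2.0):
    """Normalize, sparsify and threshold coarse-grained BiLRP scores."""
    R = R / np.mean(R ** 4) ** 0.25        # normalization
    R = R - np.clip(R, -l, l)              # sparsification (soft-threshold at l)
    delta = h - l
    R = np.clip(R, -delta, delta) / delta  # thresholding into [-1, 1]
    alpha = np.abs(R) ** p                 # opacity of each connection
    return R, alpha
```

Connections with positive scores are then drawn in red and negative ones in blue, with opacity $\alpha$.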
\begin{figure}[h]
\centering
\vskip -2.5mm
\includegraphics[width=1.0\linewidth]{pstudy.pdf}
\vskip -2.5mm
\caption{Effect of the parameter $p$ on the rendering of the explanation. The higher the parameter $p$, the sparser the explanation.}
\label{figure:rendering}
\end{figure}
\begin{table}[h]
\caption{Parameters used on each dataset for rendering BiLRP{} explanations.}
\label{table:parameters}
\small
\centering
\begin{tabular}{lccccc}\toprule
Dataset & input size & pool & $l$ & $h$ & $p$\\\midrule
Pascal VOC 2007 & $128\times 128$ & $8 \times 8$ & 0.25 & 13 & 2 \\
Faces (UFI \& LFW) & $64\times 64$ & $4 \times 4$ & 0.3 & 60 & 1\\
UCF Sport & $128\times 128$ & $8 \times 8$ & 0.25 & 20 & 1\\
Sphaera (illustrations)\!\!\!\!\! & $96\times 96$ & $6 \times 6$ & 0.25 & 15 & 2\\
Sphaera (tables) & $140\times 140$ & $20 \times 20$ & 0.01 & 4 & 2\\
\bottomrule
\end{tabular}
\end{table}
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
The process by which information escapes from the black hole interior is a pivotal question in the study of quantum gravity. Recent work has brought to light two important points on this front. The first is that at least in some gravitational models the Euclidean gravitational path integral (GPI) exhibits traces of unitarity: a GPI calculation of the entropy of the Hawking radiation reproduces the unitary Page curve~\cite{Pen19, AEMM, BouTom19, PenShe19, AlmHar19, MarMax20, GidTur20, HarSha20, GauFri20}. This hinges crucially on the contribution from Euclidean replica wormhole saddles that connect disconnected boundaries. The inclusion of such wormholes implies that absent some further UV effects, the GPI would not factorize across disconnected boundaries~\cite{Col88, GidStr88,GidStr88a, MalMao04, ArkOrg07}. In this case, the GPI cannot be interpreted as computing the partition function of a standard quantum mechanical theory. One possible explanation is that the GPI should instead be interpreted as computing the ensemble average of many different quantum theories (see also~\cite{HarJaf18,StaWit19, Ill19, KapMah19} for related discussions). Another possibility is that additional contributions should be included, which would lead to the expected factorization of the Euclidean partition function on disconnected surfaces.
Our goal in this paper is to understand more systematically how Euclidean wormholes influence the physics of the GPI. We will put aside for the time being any further potential UV effects (such as certain doubly non-perturbative effects in JT gravity) which might be necessary to describe the dual of an individual quantum theory. We will investigate the contribution of Euclidean wormholes to a more general -- and in a sense simpler -- class of observables than the entropies described above. We find that, completely independently of any considerations of black hole physics, these wormholes make important (and apparently indispensable) contributions to the dynamics of the theory.
To understand how Euclidean wormholes contribute, let us imagine computing a Euclidean GPI where we sum over geometries with a particular choice of boundary~$B$:
\be
\Pcal(B) \equiv \int_{\partial M=B} Dg \, e^{-S}.
\ee
We will take $B$ to be a connected surface, so this is usually interpreted as giving the gravitational computation of a partition function $Z(B)$. One can also consider the integral over geometries with boundary $B^m = B \cup \dots \cup B$:
\be
\Pcal(B^m) \equiv \int_{\partial M=B^m} Dg \, e^{-S}.
\ee
If Euclidean wormholes contribute, then $\Pcal(B^m) \ne \Pcal(B)^m$ and the resulting partition function (or, more generally, correlation functions) do not factorize.
One potential interpretation is that the GPI computes ensemble averages:
\be
\label{eq:GPIaverage}
\Pcal(B) = \overline{Z(B)}, \qquad \Pcal(B^m) = \overline{Z(B)^m},
\ee
where the overline denotes the average over a family of unitary quantum theories, and $Z(B)$ the partition function of a member of this family\footnote{The details of the ensemble and how it is computed will depend on the gravitational theory.
In a specific case like JT gravity, we interpret~$Z(B)$ as~$\Tr e^{-\beta H}$ where~$H$ is a random Hermitian matrix over which we average to get~$\overline{Z(B)}$~\cite{SSS}; see~\cite{MalWit20,PerTro20,CotJen20} for somewhat similar examples in one higher dimension.}. Here we will remain agnostic on whether this ensemble average is genuinely a feature of the gravitational theory, or whether it merely appears as an approximate contribution to the low-energy effective description of some UV-complete theory. Nevertheless, we will continue to interpret the GPI as an ensemble average, bearing in mind that this interpretation may only be valid in some effective description. We will revisit these issues in more detail in Section~\ref{sec:disc}.
Our first observation is that if Euclidean wormholes contribute to the GPI, then they should contribute to even the most basic observable of the theory: the free energy~$F = - T \ln Z$ evaluated at a particular temperature $T$. In particular, let us imagine computing the free energy via a GPI, where~$T$ enters through the choice of $B$ (for example, in a two-dimensional theory of gravity,~$B$ is a circle of length $\beta \equiv 1/T$). Na\"ively, of course, one might try to compute it by simply taking
\be
F = F_\mathrm{ann} \equiv -T \ln \Pcal(B) = -T \ln \overline{Z(B)}.
\label{annealed}
\ee
This, however, is in tension with the ensemble interpretation: since~$\overline{Z(B)}$ involves an integration over the random variables defining a particular instance of the ensemble, we may interpret~$\overline{Z(B)}$ as the partition function of a theory in which the random variables themselves are permitted to fluctuate and come into equilibrium. In condensed matter systems, the free energy~$F_\mathrm{ann}$ defined above is therefore interpreted as an \textit{annealed} free energy. Instead, what one is really interested in is the \textit{quenched} free energy, in which the random variables defining a particular instance of the ensemble are not allowed to equilibrate. In other words, the free energy~$F = -T \ln Z(B)$ is computed in a particular instance of the ensemble, and \textit{then} the average is taken:
\be
\overline{F} = -T \, \overline{\ln Z(B)}.
\label{quenched}
\ee
In general the annealed and quenched free energies will be different. Indeed, from the gravitational point of view one might expect that $\overline{\ln Z(B)} \ne \ln {\overline{Z(B)}}$ whenever Euclidean wormholes are present in the theory, for the same reason that $\overline{Z}^m \ne \overline{Z^m}$.
In order to understand exactly how Euclidean wormholes contribute to (\ref{quenched}), one needs to compute~$\overline{F}$ from the GPI using a replica trick that involves considering the GPI on~$m$ copies of the boundary~$B$ and then analytically continuing to~$m = 0$. This replica trick is distinct from the one that is employed to compute the von Neumann entropy (which instead considers the GPI defined by an~$n$-sheeted boundary manifold and then continues to near~$n = 1$), and a completely consistent calculation of entanglement entropy must implement \textit{both} replica tricks. This version of the replica trick will be reviewed in section \ref{sec:replicatrick}, and is common in the condensed matter literature, especially in the study of spin glasses.
In fact, although we have focused on the free energy, this new replica trick will apply to the computation of any extensive observable. For example, in the calculation of the Renyi entropy $S_{n}$ of a pure state from the GPI in \cite{AlmHar19}, the result vanishes only to leading order if this additional replica trick is not implemented: the Renyi entropy vanishes \textit{identically} only when the calculation correctly implements both replica tricks.
This additional replica trick means that~$\overline{F}$ becomes sensitive to the contribution of wormholes connecting the replicas, and leads to the conclusion that it is \textit{not consistent} to simultaneously interpret $\Pcal(B)$ as computing an ensemble average and to compute the free energy (or more generally, any extensive observable) without including contributions from Euclidean wormholes. If the free energy computation is dominated by the disconnected topology, then the ensemble averaging leaves no visible footprint, and the quenched free energy coincides with the annealed free energy:~$\overline{F} \approx -T \ln \Pcal(B)$. However, if in some regime replica wormholes contribute nontrivially to~$\overline{F}$, then ensemble averaging is important for the computation of \textit{any} observable in that regime. Failure to properly compute the free energy via the replica trick above will erase subtle signatures of the ensemble.
Of course, the skeptical reader may be concerned that replica wormholes might \textit{never} actually make an appreciable contribution to~$\overline{F}$, at least in those regimes in which we have some control over the gravitational theory. Indeed, although it has now been verified that replica wormholes are important in the study of black hole entropy, it need not follow that such wormholes will be important in the computation of~$\overline{F}$.
To address this potential concern, in Sections~\ref{sec:CGHS} and~\ref{sec:JT} we compute the free energy in two different models of~2D gravity. We find that the na\"ive calculation of the annealed free energy~$F_\mathrm{ann}$ exhibits pathological behavior at sufficiently low temperature. Specifically, it is non-monotonic with temperature, implying a negative thermodynamic entropy~$S = - \partial F/\partial T$. We then use the replica trick to investigate the contribution of replica wormholes to~$\overline{F}$, finding that this contribution becomes larger than that of the disconnected topology when the annealed free energy exhibits its unphysical behavior. The inclusion of wormholes ameliorates the pathological behavior of the free energy at low temperature, at least with a certain implementation of the replica trick.
The gravitational systems that we consider are~$\widehat{\mathrm{CGHS}}$~\cite{CGHS, CGHShat} and JT gravity~\cite{Tei83, Jackiw}, and importantly we compute the free energy using the full GPI (computed for~$\widehat{\mathrm{CGHS}}$ in~\cite{GodMar20} and JT gravity in~\cite{SSS}), rather than a saddle-point approximation. In both models, we find that replica wormholes substantially change the behavior of the free energy at sufficiently low temperature. Interestingly, in JT gravity, we find that the temperature at which the pathological behavior of the disconnected free energy manifests, and the temperature at which contributions from replica wormholes dominate, both scale like~$e^{-2S_0/3}$ (where~$e^{-S_0}$ controls the JT gravity genus expansion). Since the gravitational theory is only under control for large~$S_0$, one might be concerned that the contribution of the replica wormholes happens in a regime of the theory in which we have no perturbative control. In fact, working at large~$S_0$ but with~$T e^{2S_0/3}$ of order unity puts us in the so-called Airy limit, where the system is controlled by the universal behavior of the edge of the classical density of eigenvalues~$\rho_0(E)$\footnote{We will discuss subtleties involved in this limit in Section~\ref{subsec:Airy}.}. In this limit, the genus expansion can be summed, providing a handle on doubly-nonperturbative corrections (in~$S_0$). We find that these corrections are unimportant in part of the regime where replica wormholes dominate, so we can conclude that they genuinely do contribute even when doubly-nonperturbative corrections do not. This story is entirely analogous to the replica wormholes narrative in the context of black hole evaporation: some parameter~$k$ parametrizing the entropy of matter must become nonperturbatively large in~$S_0$ in order for replica wormholes to dominate, and this transition happens right at the edge of validity of the semiclassical approximation. 
In our context, the parameter that must become large for wormholes to dominate is instead the inverse temperature~$\beta$.
In an intriguing turn of events, while the replica wormholes do mitigate the pathologies in the free energy, we cannot show that they remove them entirely. We argue that this is due to the inherent ambiguity in the analytic continuation that defines~$\overline{F}$. To gain more insight into this ambiguity, in Section~\ref{sec:RSB} we point out that an extremely similar phenomenon happens in spin glass systems, where a quenched disorder can allow for the spontaneous coupling of replicas used to calculate~$\overline{F}$. In that context, we review the Sherrington-Kirkpatrick (SK) model of spin glasses, and note that similar to our gravity calculations, at high temperature the free energy is dominated by a paramagnetic phase in which the replicas are uncorrelated, while at sufficiently low temperatures the system enters a spin glass phase in which the replicas correlate\footnote{We should be quick to note that our gravitational results also exhibit some important qualitative differences from spin glasses, notably the fact that we need to go to nonperturbatively low temperature to see an exchange of dominance, while the spin glass phase transition happens at a temperature of order unity and can be seen in a strictly thermodynamic limit.}. A replica-symmetric analysis of the spin glass phase exhibits the same sorts of pathologies that we see in the quenched free energy of~$\widehat{\mathrm{CGHS}}$ and JT gravity; it turns out that in the SK model, replica symmetry breaking (RSB) is the key structure that ``fixes'' the analytic continuation in the replica trick and gives the correct free energy down to zero temperature. Motivated by the parallels between spin glasses and our gravitational results, we conjecture that the same sort of RSB is needed in the gravitational case to fully capture the correct behavior of~$\overline{F}$ at low temperature. 
Importantly, the RSB that we discuss is notably different from the sort of RSB ordinarily discussed in the context of gravitational calculations of Renyi entropies. We make more exploratory comments about possible parallels between gravity and spin glasses in Section~\ref{sec:disc}, but also note that our results should not necessarily be interpreted as indicative of a literal gravitational spin glass phase.
\paragraph{Relation to prior work:} In the context of JT gravity, preludes of the transition in which we are interested can be found in analyses of the two-point correlator~$\overline{Z(\beta_1) Z(\beta_2)}$, which is relevant for studies of the spectral form factor. For instance,~\cite{OkuSak19,OkuSak20} find that at temperatures lower than~$\Ocal(e^{-2S_0/3})$, the contribution of the cylinder topology to this correlator can become larger than that of the disk; see also~\cite{Joh20a,Joh20} for the same behavior in nonperturbative completions of JT gravity, without needing to work at large~$S_0$. See also~\cite{Oku19} for an analogous transition in a Gaussian matrix model. Our purpose here is specifically to investigate the contributions of connected topologies to the quenched free energy via the replica trick for~$\overline{\ln Z}$.
While we emphasize that we do not claim a bona fide spin glass phase in JT gravity, the behavior is sufficiently similar that further comment is warranted given recent studies on SYK. These investigations show that SYK does not exhibit a spin glass phase; that is, a saddle-point analysis of the replica trick in the large-$N$ limit (see e.g.~\cite{BagAlt16,KitSuh17}) indicates that no saddles correlating different replicas dominate the correlators~$\overline{Z^m}$ at any temperature~\cite{MalSta16,GarVer16,BagAlt16,CotGur16,GurMah18,AreKhr18,CarCar18,Ye18,GeoPar01,FuSac16}. Here we point out that (i) we do not work in a saddle-point approximation, and in fact we expect that the behavior we study would be invisible in such a limit; and (ii) JT gravity is only dual to a low-energy regime of SYK, and as shown in~\cite{AreKhr18} an appropriate IR limit of SYK can exhibit a different phase structure than the full SYK system. Hence there is no tension with our results.
More generally, attempts to model spin glasses holographically, such as e.g.~\cite{FujHik08, AhaKom15}, typically manually turn on a correlation between the different replica boundaries in order to induce a spin glass phase transition; this is analogous to the correlation between replica boundaries that occurs in computations of the entropy of Hawking radiation (due to tracing out a subsystem), or to the coupling of two boundaries in the traversable wormhole setup of~\cite{GaoJaf16, MalQi18}. Here we are specifically interested in the contribution of replica wormholes to the GPI~$\Pcal(B^m)$ defined by~$m$ \textit{completely uncoupled} boundaries: the coupling happens entirely spontaneously and is an inevitable consequence of replica wormholes.
On a more tangential note, let us finally point out that there has been an ongoing discussion of the relevance of spin glasses to the physics of eternal inflation as well as to the landscape of string vacua. See e.g.~\cite{AnnDen11} as well as~\cite{DenTASI} for an excellent review, and also~\cite{JaiVan15} for more recent work. In a similar vein,~\cite{AnnAno13b} discussed these topics in the context of AdS$_{2}$, and \cite{AnnAno11} and \cite{AnnAno13a} studied a spin glass phase of black hole microstates (without external coupling). It would be interesting to explore connections to our present work.
\section{The Replica Trick for $\overline{\ln Z}$}
\label{sec:replicatrick}
The purpose of this section is to discuss in more detail the replica trick necessary for the computation of the free energy~$\overline{F}$, and more generally the ensemble average of the generating functional~$\overline{\ln Z}$ considered as an arbitrary function of sources. Since such an average appears in the computation of Renyi entropies~$S_n$, and hence also of the von Neumann entropy, we will also discuss the relation to the replica trick used in the computation of von Neumann entropy.
The key point is that if the GPI is interpreted as the ensemble average of a partition function as per~\eqref{eq:GPIaverage}, then it cannot directly compute the ensemble average of any extensive quantity, such as~$\overline{\ln Z}$. The replica trick relates such extensive observables to non-extensive objects via
\be
\label{eq:replicatrick}
\overline{\ln Z(B)} = \lim_{m \to 0} \frac{1}{m} \left(\overline{Z(B)^m}-1\right)=\lim\limits_{m\rightarrow 0} \frac{1}{m} (\Pcal(B^{m})-1),
\ee
where~$B^m$ denotes~$m$ copies of the boundary~$B$, and we have assumed that the pre-average partition function obeys~$Z(B)^m = Z(B^m)$; that is, that~$m$ copies of the (non-averaged) partition function on the boundary~$B$ can equivalently be expressed as the partition function of~$m$ copies of~$B$ (this is certainly the case if~$Z(B)$ is the partition function of an ordinary QFT living on~$B$).
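As a sanity check on the continuation, note that if the replicas fully decouple, so that $\overline{Z^m} = \overline{Z}^m$, the limit in Eq.~\eqref{eq:replicatrick} simply returns the annealed answer $\ln \overline{Z}$; this can be confirmed symbolically:

```python
import sympy as sp

Z, m = sp.symbols('Z m', positive=True)
quenched = sp.limit((Z**m - 1) / m, m, 0)   # lim_{m->0} (Z^m - 1)/m
```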
The implementation of this replica trick clearly yields different behaviors of~$\overline{\ln Z(B)}$ depending on whether connected topologies contribute nontrivially to~$\Pcal(B^m)$. In general, we have
\be
\Pcal(B^m) = \Pcal(B)^m + \sum_{\substack{\mathrm{connected} \\ \mathrm{topologies}}},
\ee
where the first term comes from summing over geometries that leave all the replica copies of~$B$ disconnected from one another, while the sum represents integrals over geometries that connect two or more copies of the boundary (i.e.~replica wormholes)\footnote{It is sometimes suggested that the factorization problem of the GPI can be avoided if either the sum over connected topologies is supposed to be excluded, or if somehow it conspires to give a vanishing contribution to~$\Pcal(B^m)$. Here we adopt the perspective of~\cite{MarMax20} that excluding the connected topologies requires a non-local constraint, while having their collective contribution vanish would require fine-tuning.}. We therefore generically have~$\Pcal(B^m) \neq \Pcal(B)^m$. However, in certain cases one topological sector may dominate over others. If the dominant contribution is disconnected, then
we have
\be
\Pcal(B^m) \approx \Pcal(B)^m.
\ee
In this case, using~\eqref{eq:replicatrick} we see that~$\overline{\ln Z} \approx \ln \overline{Z}$, so the replica trick has no appreciable effect; in the condensed matter language used in Section~\ref{sec:intro}, the quenched free energy and the annealed free energy approximately coincide. In particular, we may compute the gravitational free energy by just taking~$\overline{F} \approx -T \ln \Pcal(B)$, as usual. On the other hand, if a topology connecting multiple copies of~$B$ dominates, then we should expect that
\be
\Pcal(B^m) \not\approx \Pcal(B)^m,
\ee
so the quenched and annealed free energies should not even approximately coincide, and a proper computation of the gravitational free energy will not coincide with the annealed free energy:~$\overline{F} \not\approx -T \ln \Pcal(B)$.
Let us now exhibit how the replica trick~\eqref{eq:replicatrick} relates to the one used to compute the von Neumann entropy. This latter replica trick defines the von Neumann entropy of a subsystem (say a region~$R \subset B$) as a limit of Renyi entropies:
\be
S = \lim_{n \to 1} S_n,
\ee
where the Renyi entropies~$S_n$ are given by
\be
S_n \equiv \frac{1}{1-n} \left(\ln Z(B_n) - n \ln Z(B) \right),
\ee
with~$B_n$ an~$n$-sheeted geometry consisting of~$n$ copies of~$B$ cut along the region~$R$ and then cyclically identified along this cut; see Figure~\ref{fig:ManyReplicas}. If~$R$ is empty,~$B_n$ is just~$B^n$, consisting of~$n$ copies of~$B$.
Suppose we now wish to evaluate the Renyi entropies via a gravitational path integral, under the interpretation that it computes an \textit{ensemble average} of~$S_n$ (and hence also of the von Neumann entropy). Such a computation requires the ensemble averages~$\overline{\ln Z(B_n)}$ and~$\overline{\ln Z(B)}$, which in turn requires use of the ``extra'' replica trick~\eqref{eq:replicatrick}:
\be
\label{eq:Renyiaverage}
\overline{S_n} = \frac{1}{1-n} \left(\lim_{m \to 0} \frac{1}{m} \left(\Pcal(B_n^m) - 1\right) - n \lim_{m \to 0} \frac{1}{m} \left(\Pcal(B^m) - 1\right)\right),
\ee
where~$B_n^m$ consists of~$m$ separate copies of the~$n$-sheeted geometry~$B_n$, as shown in Figure~\ref{fig:ManyReplicas}. A correct calculation of the von Neumann entropy therefore \textit{requires} taking the double limit~$m \to 0$,~$n \to 1$.
\begin{figure}
\centering
\includegraphics[page=1,width=0.8\textwidth]{Figures-pics}
\caption{A computation of the Renyi entropy~\eqref{eq:Renyiaverage} from the GPI requires an additional replica trick, involving computing the GPI with the boundary~$B_n^m$ shown here. Each of the columns is an~$n$-sheeted geometry~$B_n$ constructed by slicing~$n$ copies of~$B$ along the region~$R$ and then identifying these copies cyclically along the cut. $B_n^m$ consists of~$m$ copies of this multi-sheeted geometry. The disorder-averaged von Neumann entropy is computed in the double limit~$m \to 0$,~$n \to 1$.}
\label{fig:ManyReplicas}
\end{figure}
A key distinction to note here is that the~$m$ replicated boundaries~$B_n^m$ are \textit{completely disconnected}; any geometric connection between them must come spontaneously from the GPI. On the other hand, when~$R$ is non-empty, the geometry~$B_n$ is a single connected geometry, due to the identification of the~$n$ sheets along the cut~$R$. In fact, when~$R$ is the empty set, the Renyi entropies must vanish exactly (since we are computing the entropy of a pure state); it is precisely the auxiliary replica trick over~$m$ that guarantees this. To see this, note that if~$R$ is empty,~$B_n^m = B^{nm}$, and hence
\be
\lim_{m \to 0} \frac{1}{m} \left(\Pcal(B_n^m) - 1\right) = \left.\frac{\partial \Pcal(B^{nm})}{\partial m} \right|_{m = 0} = n \left. \frac{\partial \Pcal(B^{\widetilde{m}})}{\partial \widetilde{m}} \right|_{\widetilde{m} = 0},
\ee
so the two terms in~\eqref{eq:Renyiaverage} cancel identically, giving~$\overline{S_n} = 0$ for all~$n$. Importantly, the vanishing of the Renyi entropy is independent of the dominant topology contributing to the path integral. This should be contrasted with, for example, the computation of Renyi entropy performed in~\cite{AlmHar19}, which (working in a semiclassical regime) claimed that the entropy of a pure state vanishes because in that case the GPI is dominated only by disconnected topologies. The trouble with that interpretation is that even when the disconnected topology dominates, the path integral will still receive subdominant corrections from connected topologies which would lead to a nonvanishing (but small) Renyi entropy. The double replica trick makes clear that the Renyi entropy of a pure state vanishes exactly, even when the dominant geometry is a replica wormhole.
Of course, the claim of~\cite{AlmHar19} that (at least in their JT gravity model) the disconnected topology dominates the gravitational path integral in a semiclassical limit when~$R$ is the empty set might lead to a concern: even if replica wormholes make subdominant contributions to the free energy, they might never be dominant in a regime in which the gravitational theory is under control. If so, the extra replica trick~\eqref{eq:replicatrick} will in practice never be necessary for computing leading-order effects. To address this concern, we will now explore explicit examples of gravitational models in which connected saddles do make dominant contributions when the theory is at least somewhat under control, focusing specifically on computations of the free energy~$\overline{F}$.
\section{Free Energy in $\widehat{\mathrm{CGHS}}$}
\label{sec:CGHS}
We begin the investigation in gravity with a variant of standard CGHS dilaton gravity~\cite{CGHS}, introduced as the $\widehat{\mathrm{CGHS}}$ model in~\cite{AfsGon19} (following~\cite{CanJac92}). This model is given by the Euclidean action
\be
S = \frac{\kappa}{2} \int d^2 x \, \sqrt{g} \left(\Phi R - 2\Psi + 2 \Psi \varepsilon^{\mu\nu} \partial_\mu A_\nu\right) + S_\partial,
\ee
where~$S_\partial$ is a boundary term. The equation of motion for~$A_\mu$ fixes~$\Psi$ to be constant, and it is this constant value that sets the temperature of black hole solutions, while the equation of motion for~$\Phi$ sets~$R = 0$. In fact, even in the path integral the integration over~$\Phi$ means that only strictly flat geometries contribute. Hence the only contributions can come from the disk or the cylindrical topology, corresponding to one and two boundaries, respectively; see Figure~\ref{fig:CGHS}. It is this simplification that will allow us to make definitive statements about the structure of the replicas and free energy in this model, without needing to worry about nonperturbative effects arising from higher-genus contributions. This section is therefore a warmup for the JT gravity calculation in Section~\ref{sec:JT}, which is complicated by contributions from all topologies.
\begin{figure}[t]
\centering
\subfloat[][]{
\includegraphics[width=0.2\textwidth,page=2]{Figures-pics}
\label{subfig:CGHSdisk}
}%
\hspace{3cm}
\subfloat[][]{
\includegraphics[width=0.25\textwidth,page=3]{Figures-pics}
\label{subfig:CGHScyl}
}
\caption{The only topologies that can appear in the~$\widehat{\mathrm{CGHS}}$ path integral are the disk and the cylinder.}
\label{fig:CGHS}
\end{figure}
\subsection{Path integrals in $\widehat{\mathrm{CGHS}}$}
\label{subsec:CGHSpathintegral}
The path integrals of the disk and cylinder in~$\widehat{\mathrm{CGHS}}$ were computed in~\cite{GodMar20}.
For the disk with boundary length~$\beta$, the result is\footnote{In the notation of \cite{GodMar20} we have chosen units where the coupling $\gamma$ (which is related to the boundary value of the dilaton) has been set equal to one, and where the normalization factor $\alpha$ which appears in the symplectic form is also equal to $1$.}
\be
\Pcal_\mathrm{disk}(\beta) = \frac{2\pi}{\beta^2}.
\ee
Already we can deduce the need for a phase transition. If the disk were to dominate the free energy, we would have~$\overline{F} = -T \ln \Pcal_\mathrm{disk}$, which is clearly a non-monotonic function of temperature: it has a local maximum at~$T_\mathrm{max} = 1/(e\sqrt{2\pi})$, corresponding to a negative thermodynamic entropy~$-\partial F/\partial T$ when~$T < T_\mathrm{max}$ (in fact, the entropy diverges logarithmically as~$T \to 0$). We might hope that the contribution of the cylinder will rectify this low-temperature behavior.
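Explicitly, the annealed entropy implied by the disk is
\be
-\frac{\partial F_\mathrm{ann}}{\partial T} = \frac{\partial}{\partial T}\left[T \ln\left(2\pi T^2\right)\right] = \ln\left(2\pi T^2\right) + 2,
\ee
which vanishes at~$T = 1/(e\sqrt{2\pi})$ and is negative at lower temperatures.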
To that end, the path integral on the cylinder (each of whose boundaries has length~$\beta$) is
\be
\Pcal_\mathrm{cyl}(\beta) = \frac{2\pi^2}{\beta}.
\ee
Let us use~$\Pcal_m(\beta)$ to denote the GPI defined by~$m$ boundaries of length~$\beta$. This path integral receives competing contributions from the disk and the cylinder; the completely disconnected topology gives a contribution of
\be
\Pcal_m(\beta) \supset \Pcal_\mathrm{disk}(\beta)^m = \left(\frac{2\pi}{\beta^2}\right)^m,
\ee
while the topology that connects~$m/2$ pairs of boundaries with cylinders (temporarily taking~$m$ to be even) gives a contribution
\be
\Pcal_m(\beta) \supset \Pcal_\mathrm{cyl}(\beta)^{m/2} = \left(\frac{2\pi^2}{\beta} \right)^{m/2}.
\ee
At temperatures larger than~$T_c \equiv 2^{-1/3}$, the contribution from the disk topology is larger, while for temperatures smaller than~$T_c$, the contribution from the cylinder topology is larger. So already at the level of this rough analysis we see a transition: the high-temperature behavior is controlled by the disconnected topology, while the low-temperature behavior is controlled by a connected one\footnote{Because this computation is done using the full path integral, there is no sense in which we can interpret these as saddles, with one ``dominating'' over the other. The point is that both topologies contribute nontrivially, and for sufficiently large or small temperatures one contributes substantially more than the other. The transition between these two behaviors cannot be expected to be sharp, of course.}. Importantly,~$T_c > T_\mathrm{max}$, so the contribution from the cylinder modifies the free energy in the temperature regime in which the annealed free energy~$F_\mathrm{ann} \equiv -T\ln \Pcal_\mathrm{disk}$ was pathological.
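To locate the crossover, equate the disk and cylinder contributions per pair of boundaries:
\be
\Pcal_\mathrm{disk}(\beta)^2 = \Pcal_\mathrm{cyl}(\beta) \quad\Longleftrightarrow\quad \frac{(2\pi)^2}{\beta^4} = \frac{2\pi^2}{\beta} \quad\Longleftrightarrow\quad \beta^3 = 2,
\ee
which gives~$T_c = 2^{-1/3}$ as claimed.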
Now let us be more thorough and compute~$\Pcal_m(\beta)$ exactly, therefore attempting to obtain the free energy via the~$m \to 0$ limit~\eqref{eq:replicatrick}. Defining~$r \equiv \Pcal_\mathrm{cyl}/\Pcal_\mathrm{disk}^2$, we have
\be
\label{eq:PCGHSsum}
\Pcal_m(\beta) = \Pcal_\mathrm{disk}^m \sum_{m' = 0}^{\lfloor m/2 \rfloor} \begin{pmatrix} m \\ 2m' \end{pmatrix} (2m'-1)!! \, r^{m'},
\ee
where the sum counts contributions from all ways of connecting an even number~$2m'$ of boundaries together via cylinders, the binomial coefficient counts the ways of choosing~$2m'$ boundaries from the full set of~$m$, and the double factorial counts how many distinct ways there are of connecting those~$2m'$ boundaries pairwise with cylinder topologies. Expressing the double factorial as
\be
(2m'-1)!! = \frac{2^{m'}}{\sqrt{\pi}} \, \Gamma\left(m'+\frac{1}{2}\right) = \int_0^\infty \frac{dt}{\sqrt{\pi t}} \, (2t)^{m'} e^{-t},
\ee
we find
\be
\Pcal_m(\beta) = \Pcal_\mathrm{disk}^m \int_0^\infty \frac{dt}{\sqrt{\pi t}} \, e^{-t} \sum_{m' = 0}^{\lfloor m/2 \rfloor} \begin{pmatrix} m \\ 2m' \end{pmatrix} (2tr)^{m'}.
\ee
The sum can be evaluated using the identity\footnote{\eqref{eq:identity} can be shown by expanding the binomials on the right-hand side and then using the identity for sums of roots of unity:
\be
\sum_{j = 0}^{M-1} \left(e^{2\pi j i/M}\right)^k = \begin{cases} 0, & k \in \mathbb{Z} \text{ and } k \neq 0 \text{ (mod }M) \\
M, & k \in \mathbb{Z} \text{ and } k = 0 \text{ (mod }M) \end{cases}. \nonumber
\ee}
\be
\label{eq:identity}
\sum_{m' = 0}^{\lfloor m/M \rfloor} \begin{pmatrix} m \\ M m' \end{pmatrix} y^{M m'} = \frac{1}{M} \sum_{j = 0}^{M-1} \left(1 + e^{2j\pi i/M} y\right)^m
\ee
for any positive integers~$m$ and~$M$, resulting in
\be
\label{eq:PbetamCGHS}
\Pcal_m(\beta) = \Pcal_\mathrm{disk}^m \int_0^\infty \frac{dt}{2\sqrt{\pi t}} \, e^{-t} \left(\left(1+\sqrt{2tr}\right)^m + \left(1-\sqrt{2tr}\right)^m\right).
\ee
To compute $\overline{\ln Z}$, we want to now continue to $m\rightarrow 0$.
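As a numerical sanity check, the closed form~\eqref{eq:PbetamCGHS} can be compared directly against the pairing sum~\eqref{eq:PCGHSsum} at small integer~$m$. The sketch below (with the overall factor~$\Pcal_\mathrm{disk}^m$ stripped off, and an arbitrary illustrative value of~$r$) does this with off-the-shelf numerical quadrature:

```python
# Check eq. (PCGHSsum) against eq. (PbetamCGHS), both divided by P_disk^m.
import numpy as np
from math import comb
from scipy.integrate import quad

def P_sum(m, r):
    # Pairing sum: sum over m' of C(m, 2m') (2m'-1)!! r^{m'}
    total = 0.0
    for mp in range(m // 2 + 1):
        dfact = 1.0
        for k in range(1, 2 * mp, 2):   # (2 mp - 1)!!, empty product = 1
            dfact *= k
        total += comb(m, 2 * mp) * dfact * r**mp
    return total

def P_int(m, r):
    # Integral representation obtained via the roots-of-unity identity
    f = lambda t: np.exp(-t) / (2.0 * np.sqrt(np.pi * t)) * (
        (1.0 + np.sqrt(2.0 * t * r))**m + (1.0 - np.sqrt(2.0 * t * r))**m)
    val, _ = quad(f, 0.0, np.inf, limit=200)
    return val

for m in range(1, 7):
    print(m, P_sum(m, 0.3), P_int(m, 0.3))
```

The two expressions agree for all positive integer~$m$; the subtleties discussed next arise only upon continuing away from the integers.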
\subsection{Continuing to non-integer $m$}
\label{subsec:CGHScontinuation}
The result~\eqref{eq:PbetamCGHS} can be naturally continued to non-integer~$m$, but it exhibits a curious feature: because the second term~$1-\sqrt{2tr}$ will always become negative somewhere in the region of integration, for non-integer~$m$ this term need not be (and is not) real. Invoking the replica trick~\eqref{eq:replicatrick} at this stage would then yield a complex free energy, which is manifestly unphysical. Evidently, the obvious analytic continuation of~\eqref{eq:PbetamCGHS} to non-integer~$m$ cannot be the correct one for the replica trick. A more well-behaved alternative can be obtained by noting the following. For any analytic function~$f(z)$ of a complex variable~$z$, let~$f^*(z)$ be the function obtained by complex-conjugating the Taylor series coefficients of~$f(z)$; then by construction the function~$f_r(z) \equiv (f(z) + f^*(z))/2$ is also analytic, and is real whenever~$z$ is. If~$f(z)$ is real when~$z$ is a positive integer, then~$f_r(z) = f(z)$ when~$z$ is a positive integer, and both~$f(z)$ and~$f_r(z)$ therefore give admissible analytic continuations from the positive integers to general complex~$z$. For this reason, for the purposes of computing~$\overline{F}$ via the replica trick we are free to simply use the real part of~\eqref{eq:PbetamCGHS} when~$m$ is real, which gives
\begin{multline}
\label{eq:PbetamCGHSreal}
\Pcal_m(\beta) = \Pcal_\mathrm{disk}^m \left\{\int_0^\infty \frac{dt}{2\sqrt{\pi t}} \, e^{-t} \left(\left|1+\sqrt{2tr}\right|^m + \left|1-\sqrt{2tr}\right|^m\right) \right. \\ \left. - 2 \sin^2\left(\frac{\pi m}{2}\right) \int_{1/2r}^\infty \frac{dt}{2\sqrt{\pi t}} \, e^{-t} \left|1-\sqrt{2tr}\right|^m \right\}.
\end{multline}
It may seem that we have pushed the replica trick to a breaking point. Of course there was always an infinite amount of freedom in how to continue the path integral~$\Pcal_m(\beta)$ from positive integer~$m$ to non-integer~$m$ near zero, but the implied hope was that a ``natural'' analytic continuation should present itself, and that this continuation should be the correct one for getting the physically correct free energy. But the natural continuation of~\eqref{eq:PbetamCGHS} gives a complex free energy, and we had to introduce a rather ad hoc procedure for modifying the continuation to obtain~\eqref{eq:PbetamCGHSreal}. What prevents us from, say, adding~$g(T) \sin(\pi m)$ to~$\Pcal_m(\beta)$ with~$g(T)$ an arbitrary function of temperature, and therefore getting whatever free energy we want?
This discomfort is well-justified, for there is an even more serious problem with the continuation of either~\eqref{eq:PbetamCGHS} or~\eqref{eq:PbetamCGHSreal} to general \textit{complex}~$m$. In order to consistently interpret~$\Pcal_m(\beta)$ as giving the disorder average~$\overline{Z^m}$ of some power of the partition function, its behavior for purely imaginary~$m = i\alpha$ must be bounded since
\be
\left|\Pcal_{i\alpha} (\beta)\right| = \left| \overline{Z^{i\alpha}} \right| \leq \overline{\left|Z^{i\alpha} \right|} = 1,
\ee
where we have assumed that the disorder average is defined by a proper probability distribution (i.e.~one that is positive and normalized). But while the terms on the first line of~\eqref{eq:PbetamCGHSreal} are bounded when~$m$ is imaginary, the term on the second line is not, and indeed it grows arbitrarily large for large imaginary~$m$. So~\eqref{eq:PbetamCGHSreal} cannot be interpreted as the analytic continuation to complex~$m$ of an ensemble average~$\overline{Z^m}$ with respect to a positive and normalized probability distribution.
In principle we should therefore look for a different analytic continuation that is well-behaved for imaginary~$m$ and hope that, say, Carlson's theorem is sufficient to ensure uniqueness of this continuation\footnote{Carlson's theorem says that if a function~$f(z)$ is analytic in the right half-plane~$\mathrm{Re}(z) > 0$, grows more slowly than~$\sin(\pi z)$ on the imaginary axis and no faster than exponentially elsewhere in the right half-plane, and vanishes on the non-negative integers, then~$f(z)$ vanishes identically.}. However, the growth of~\eqref{eq:PbetamCGHSreal} at large \textit{real}~$m$ excludes this possibility. To see why, note that~\eqref{eq:PbetamCGHSreal} grows faster than exponentially in~$m$ at large real integer~$m$, which can be seen easily by, say, keeping only the~$m' = \lfloor m/2 \rfloor$ term in the sum~\eqref{eq:PCGHSsum}. To try to prove that the analytic continuation to non-integer~$m$ must be unique (once we impose boundedness for imaginary~$m$), suppose we had two different analytic continuations~$\Pcal_m^{(1)}$ and~$\Pcal_m^{(2)}$, and let us try to show that their difference~$\Delta \Pcal_m$ must vanish. This difference of course vanishes on the positive integers, and must also be bounded on the imaginary axis if both~$\Pcal_m^{(1)}$ and~$\Pcal_m^{(2)}$ are. To invoke Carlson's theorem to conclude that~$\Delta \Pcal_m$ must vanish identically, we therefore only need to guarantee that~$\Delta \Pcal_m$ grows no faster than exponentially in the right half-plane; but this is not a condition we can enforce via any constraint on~$\Pcal_m^{(1)}$ and~$\Pcal_m^{(2)}$ due to their superexponential growth for integer~$m$, and hence Carlson's theorem cannot be invoked.
The ambiguity in finding the ``correct'' analytic continuation is a substantial obstacle that we will address in much more detail in Section~\ref{sec:RSB}; it will be interpreted as a signature of replica symmetry breaking. For the time being, we will forge ahead by just using~\eqref{eq:PbetamCGHSreal}, assuming that the temperatures at which the true quenched free energy is sensitive to contributions from the cylinder coincide with those at which~$\Pcal_m(\beta)$, and therefore the free energy obtained from~\eqref{eq:PbetamCGHSreal}, are sensitive to them. In proceeding in this way, we will be unable to determine what the correct form of the quenched free energy~$\overline{F}$ actually should be, but we can still investigate when contributions from the cylinder cause the quenched and annealed free energies to differ.
With this important caveat in mind, the free energy obtained from~\eqref{eq:PbetamCGHSreal} is
\be
\label{eq:CGHSfreenergy}
\overline{F} = -T\left(\ln \Pcal_\mathrm{disk} + \int_0^\infty \frac{dt}{2\sqrt{\pi t}} \, e^{-t} \, \ln\left|1-2rt\right| \right).
\ee
At high temperature~$T \gg T_c$,~$r$ is small, so the second term is suppressed like~$\Ocal(r)$ and the free energy is controlled by the disconnected topology. On the other hand, at low temperature~$T \ll T_c$,~$r$ is large and the integral can formally be expanded in powers of~$1/r$, with the leading contribution given by~$(1/2)\ln r$. Hence the behavior of the quenched free energy is
\be
\label{eq:CGHSfreeenergyasymptotics}
\overline{F} = -T \begin{cases} 2 \ln (T/T_c) + \cdots, & T \gg T_c \\ \frac{1}{2} \ln (T/T_c) + \cdots, & T \ll T_c \end{cases}
\ee
where the ellipses denote subleading terms of order unity. At high temperatures, the free energy is the annealed free energy~$-T \ln \Pcal_\mathrm{disk}$, sensitive only to the disk topology, while at low temperatures the leading-order behavior is modified thanks to the cylinders.
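Both asymptotic regimes of~\eqref{eq:CGHSfreeenergyasymptotics} can be verified numerically from the integral in~\eqref{eq:CGHSfreenergy}. The sketch below evaluates~$I(r) \equiv \int_0^\infty \frac{dt}{2\sqrt{\pi t}}\, e^{-t} \ln|1-2rt|$ (with~$r = (T_c/T)^3$), splitting the integration at the integrable logarithmic singularity; the tolerances are loose, as this is only an illustrative check:

```python
# Check the small-r and large-r asymptotics of the correction term in
# eq. (CGHSfreenergy): I(r) ~ -r/2 for r << 1, I(r) ~ (1/2) ln r + const for r >> 1.
import numpy as np
from scipy.integrate import quad

def I(r):
    f = lambda t: np.exp(-t) / (2.0 * np.sqrt(np.pi * t)) * np.log(abs(1.0 - 2.0 * r * t))
    tc = 1.0 / (2.0 * r)             # location of the integrable log singularity
    v1, _ = quad(f, 0.0, tc, limit=400)
    v2, _ = quad(f, tc, tc + 60.0, limit=400)   # e^{-t} kills the tail beyond this
    return v1 + v2

# Small r: disk (annealed) behavior, correction O(r)
print(I(1e-3))
# Large r: cylinder-dominated slope 1/2 in ln r
slope = (I(1e6) - I(1e4)) / np.log(1e2)
print(slope)
```

The fitted slope of~$1/2$ in~$\ln r$ reproduces the coefficient~$\frac{1}{2}\ln(T/T_c)$ quoted above.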
Note that~$\overline{F}$ is still not monotonic in temperature, even with the cylinder contribution. In particular, while the cylinder contribution decreases the severity of the logarithmic divergence (in reducing the prefactor of~2 to a~$1/2$), it does not eliminate it entirely. As discussed above, since the calculation of~$\Pcal_m(\beta)$ for integer~$m$ was exact and involved no approximation, the culprit for this unphysical behavior is the analytic continuation away from integer~$m$\footnote{Another option, of course, is that~$\widehat{\mathrm{CGHS}}$ gravity is itself pathological. But since we are merely using it as a toy model to foreshadow the same sort of behavior that occurs in JT gravity, our main discussion is not enhanced by considering this possibility.}. This should come as no surprise, as we have already established that the analytic continuation given by~\eqref{eq:PbetamCGHSreal} does not behave correctly for imaginary~$m$; clearly it needs to be modified to remove the pathological behavior entirely.
Nevertheless, the key point is that the replica trick is \textit{required} to see that~$\overline{F}$ receives large corrections from the cylinder topology right around the temperature where the annealed free energy is badly-behaved. Without properly understanding how the analytic continuation to non-integer~$m$ is to be performed, we cannot know in precisely what way these additional corrections modify the free energy; the analytic continuation given in~\eqref{eq:PbetamCGHSreal} is insufficient to remove the low-temperature pathology entirely, but we expect that the correct continuation should give a monotonic free energy that yields a vanishing entropy~$-\partial \overline{F}/\partial T$ at zero temperature. We will revisit this issue in Section~\ref{sec:RSB}.
\section{Free Energy in JT Gravity}
\label{sec:JT}
We have seen that the inclusion of connected topologies in the~$\widehat{\mathrm{CGHS}}$ path integral is of paramount importance for the low-temperature behavior of the free energy. In that model, the calculation was substantially simplified by the paucity of two-dimensional flat geometries. We now turn our attention to a more complex gravitational system: JT gravity.
\subsection{Euclidean wormholes can dominate the free energy}
We will first do a preliminary analysis of the role of Euclidean wormholes in the replica computation of the free energy, beginning with a brief review of the salient features of the JT gravity path integral (using specifically the results of Saad, Shenker, and Stanford~\cite{SSS}). The (Euclidean) JT gravity action is
\be
\label{eq:JTaction}
S_{JT} = -\frac{S_0}{2\pi} \left(\frac{1}{2} \int_M R + \int_{\partial M} K\right) - \left(\frac{1}{2} \int_M \phi(R + 2) + \int_{\partial M} \phi K \right),
\ee
where volume elements are left implied and~$K$ is the extrinsic curvature of~$\partial M$. When $\partial M$ consists of a single circle, the boundary conditions take the length of~$\partial M$ to be~$\beta/\eps$ and set the dilaton~$\phi|_{\partial M} = \gamma/\eps$ there; after the introduction of an appropriate counterterm, the limit~$\eps \to 0$ is understood. For simplicity, we will work in units where~$\gamma = 1$; this amounts to working with the dimensionless rescaled inverse temperature and free energy~$\beta/\gamma$,~$\gamma \overline{F}$ respectively. When~$\partial M$ consists of several circles we may specify boundary conditions separately on each, but for our purposes it will suffice to take all boundary components to have the same length~$\beta/\eps$.
The path integral over the dilaton fixes the path integral over geometries to only include those with constant negative curvature; this space of topologies is of significantly richer structure than its flat counterpart and leads to the organization of the path integral in a genus expansion. For example, if~$\Pcal_{\mathrm{conn},2}(\beta)$ is the path integral over geometries that connect two boundary components (both of which have length~$\beta/\eps$), pictorially we have
\be
\Pcal_{\mathrm{conn},2}(\beta) = \vcenter{\hbox{\includegraphics[width=0.7\textwidth,page=4]{Figures-pics}}}
\ee
Explicitly, the path integral~$\Pcal_{\mathrm{conn},m}(\beta)$ over geometries that connect~$m$ boundary components is given by
\be
\label{eq:JTgenusexpansion}
\Pcal_{\mathrm{conn},m}(\beta) = \sum_{g = 0}^\infty e^{-S_0(2g+m-2)} Z_{g,m}(\beta),
\ee
where the objects~$Z_{g,m}(\beta)$ are
\begin{subequations}
\label{eqs:Zgm}
\begin{align}
Z_{0,1}(\beta) &= Z_\mathrm{disk}(\beta) \equiv \frac{e^{2\pi^2/\beta}}{\sqrt{2\pi} \beta^{3/2}}, \\
Z_{0,2}(\beta) &= \int_0^\infty b \, db \, Z_\mathrm{trumpet}(b,\beta)^2 = \frac{1}{4\pi}, \\
Z_{g,m}(\beta) &= \int_0^\infty \left(\prod_{i = 1}^m db_i \, b_i \, Z_\mathrm{trumpet}(b_i,\beta) \right) V_{g,m}(b_1, \ldots, b_m) \mbox{ if } (g,m) \neq (0,1) \mbox{ or } (0,2); \label{subeq:ZgmVgm}
\end{align}
\end{subequations}
here
\be
Z_\mathrm{trumpet}(b,\beta) \equiv \frac{e^{-b^2/(2\beta)}}{\sqrt{2\pi\beta}}
\ee
and~$V_{g,m}(b_1, \ldots, b_m)$ are the volumes of the moduli spaces of Riemann surfaces with~$m$ geodesic boundaries of lengths~$b_1, \ldots, b_m$ (we work in the convention where the normalization~$\alpha$ of these volume forms is one, corresponding to~$V_{0,3} = 1$). The~$V_{g,m}$ can be computed algorithmically using, for example, Mirzakhani's recursion relation~\cite{Mir06}; a table summarizing the data for small $g$ and $m$ can be found in~\cite{Do11}.
The genus expansion, as well as the contribution of topologies that connect arbitrarily many boundary components, makes the story for JT gravity substantially more involved than for~$\widehat{\mathrm{CGHS}}$. Nevertheless, even at this heuristic level we can now see that connected topologies must be included in, and will upon inclusion significantly affect, the low-temperature behavior of the free energy: for example, if we were to only consider the contributions from the disk topology~$Z_{0,1}$ and the ``double trumpet''~$Z_{0,2}$, the analysis would proceed just as in the~$\widehat{\mathrm{CGHS}}$ case, and we would expect the double trumpet contribution to the free energy to compete with that of the disk whenever~$Z_{0,2}/(e^{S_0} Z_{0,1})^2$ is order unity or larger. For large~$S_0$, this will occur at temperature~$T \lesssim e^{-2S_0/3}$, so that at sufficiently small temperatures failure to include the connected topologies yields a result that is manifestly wrong, as those topologies contribute at least as much as the disconnected ones.
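Quantitatively, using~\eqref{eqs:Zgm} the ratio of the two competing contributions is
\be
\frac{Z_{0,2}(\beta)}{\left(e^{S_0} Z_{0,1}(\beta)\right)^2} = \frac{1/(4\pi)}{e^{2S_0}\, e^{4\pi^2/\beta}/\left(2\pi \beta^3\right)} = \frac{\beta^3}{2}\, e^{-4\pi^2/\beta}\, e^{-2S_0},
\ee
which becomes order unity when~$\beta \sim e^{2S_0/3}$, reproducing the temperature scale quoted above.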
This observation raises a potential concern. The parameter~$e^{-S_0}$ is supposed to suppress the contributions from higher genus, as well as from topologies that connect more boundary components. But at low temperature~$\beta \gg 1$, the leading-order behavior of the~$Z_{g,m}$ scales like~$\beta^{(3/2)(2g+m-2)}$, so contributions from higher genus and more-connected topologies are controlled by~$\beta^{3/2} e^{-S_0}$. The regime in which Euclidean wormholes contribute to the free energy therefore corresponds to the parametric regime in which we lose perturbative control of the genus expansion. What do we make of this?
From the perspective of the Euclidean wormholes, the story is completely analogous to that of quantum extremal islands in the computation of the entropy of Hawking radiation~\cite{PenShe19,AlmHar19}. In that case, there is an auxiliary parameter~$k$ parametrizing the entropy of matter fields\footnote{In the end-of-the-world brane model of~\cite{PenShe19},~$k$ is just the number of internal states of the brane.}, and replica wormholes lead to the presence of a quantum extremal island when~$k$ is nonperturbatively large: the Page transition happens at~$k \sim e^{S_0}$. In the present context, the inverse temperature~$\beta$ plays the role of~$k$. On the other hand, from the perspective of the genus expansion we are justified in being concerned, because without control of the connected path integral~$\Pcal_{\mathrm{conn},m}$ we cannot expect to make any substantive claim regarding the contribution of Euclidean wormholes. Fortunately, the regime we are discussing -- that is, taking~$S_0$ large but keeping~$\beta^{3/2} e^{-S_0}$ of order unity -- recovers the so-called Airy case of random matrix integrals, in which the partition function~$Z(\beta)$ is governed by the behavior at the edge of the spectral density~$\rho(E)$. This simplification makes it possible to resum the genus expansion to include doubly-nonperturbative (in~$S_0$) effects, which we can use to assess how well-behaved the genus expansion is. Before proceeding, it will therefore be useful to discuss this regime in more detail.
\subsection{The Airy limit}
\label{subsec:Airy}
Before diving into the details of the Airy case\footnote{We are grateful to Douglas Stanford for comments that led to the development of this section.}, let us first do a rough analysis of the behavior of the genus expansion in the regime~$\beta \sim e^{2S_0/3}$ where we expect contributions from Euclidean wormholes to become important. Recall that the genus expansion~\eqref{eq:JTgenusexpansion} is asymptotic, meaning that it does not converge even when~$\beta^{3/2} e^{-S_0}$ is small.
Nevertheless, as with any asymptotic series, the partial sums in the genus expansion can be used to bound the free energy.
When $\beta^{3/2} e^{-S_0}$ is not too small, the genus expansion can still be ``under control'' in the sense that the first few
terms in the series~\eqref{eq:JTgenusexpansion} decrease,
so that the partial sums provide a tight bound on the free energy.
To that end, using~\eqref{eqs:Zgm} and the explicit forms of~$V_{g,m}$ found in e.g.~Appendix~B of~\cite{Do11}, in Figure~\ref{subfig:JTgenusconverge} we plot the annealed free energy~$F_\mathrm{ann} \equiv -T \ln \Pcal_{\mathrm{conn},1}$ (corresponding to the disconnected topology free energy~$-T \ln \overline{Z}$) for~$S_0 = 7$, where we include topologies only up to genus~$g = 5$. The first few partial sums of the genus expansion do indeed provide accurate approximations to the free energy for~$T e^{2S_0/3} \gtrsim 0.3$, a range which crucially includes a local maximum. This suggests that the maximum should also be present in a full nonperturbative computation of~$F_\mathrm{ann}$ -- but as discussed above, such a maximum is an unphysical feature of the free energy, which we expect to be resolved by the inclusion of connected topologies, indicating that their inclusion is indeed necessary.
\begin{figure}[t]
\centering
\subfloat[][]{
\includegraphics[width=0.49\textwidth]{JT_Free_Energy_S7_varyg}
\label{subfig:JTgenusconverge}
}%
\subfloat[][]{
\includegraphics[width=0.49\textwidth]{JT_Free_Energy_S7_varyell}
\label{subfig:JTellconverge}
}
\caption{The annealed free energy~$F_\mathrm{ann}$ for~$S_0 = 7$. \protect\subref{subfig:JTgenusconverge}: From top to bottom, the solid blue curves show the result after including up to genus~$g = 0,1,2,3,4$, and~5 in the genus expansion~\eqref{eq:JTgenusexpansion}; the dashed red curve shows the result obtained from the low-temperature expansion~\eqref{eq:JTlowtemp} truncated to~$\ell \leq 2$. \protect\subref{subfig:JTellconverge}: From top to bottom, the dashed red curves show the result after including up to~$\ell = 0,1$, and~2 in the low-temperature expansion~\eqref{eq:JTlowtemp}; the solid blue curve shows the result obtained from keeping up to~$g \leq 5$ in the genus expansion~\eqref{eq:JTgenusexpansion}. The local maximum at~$e^{2S_0/3} T \approx 0.7$ is robust against the inclusion of higher order perturbative as well as doubly non-perturbative effects.}
\label{fig:JTgenusconverge}
\end{figure}
To proceed more carefully, we can in fact exchange the asymptotic genus expansion for an asymptotic low-temperature expansion with~$T e^{2S_0/3}$ fixed, verifying that it reproduces the behavior exhibited in Figure~\ref{fig:JTgenusconverge}. To do so, note that the Weil-Petersson volume forms~$V_{g,m}$ appearing in~\eqref{eqs:Zgm} are polynomials in the~$b_i$, and therefore the~$Z_{g,m}$ are polynomials in~$\beta$ of order~$(3/2)(2g+m-2)$, as mentioned above:
\be
Z_{g,m}(\beta) = \left(\beta^{3/2}\right)^{2g+m-2} \sum_{\ell = 0}^{\infty} \beta^{-\ell} P_{\ell,g,m},
\ee
where (up to various constants) the leading-order terms~$P_{0,g,m}$ are the intersection numbers of Chern classes (more generally, the~$P_{\ell,g,m}$ are intersection numbers of the first Miller-Morita-Mumford class with Chern classes~\cite{DijWit18,OkuSak19}; more explicit expressions can be found in Appendix~\ref{app:Airy}). Inserting this expression into~\eqref{eq:JTgenusexpansion}, for certain~$m$ the sum over genus can be performed as described in~\cite{OkuSak19,OkuSak20} to produce a low-temperature asymptotic expansion; for example, for~$m = 1$ we have
\be
\label{eq:JTlowtemp}
\Pcal_{\mathrm{conn},1}(\beta) = \frac{\exp\left(e^{-2S_0} \beta^3/24\right)}{\sqrt{2\pi} \beta^{3/2}} \, e^{S_0} \sum_{\ell = 0}^\infty \frac{1}{\ell!} \left(\frac{\beta}{2\pi^2}\right)^{-\ell} \tilde{z}_\ell\left(\frac{\beta^{3/2} e^{-S_0} }{\sqrt{2}}\right),
\ee
where the first few~$\tilde{z}_\ell(h)$ are given explicitly in~\cite{OkuSak19}. For~$\beta e^{-2S_0/3}$ of order unity, this asymptotic expansion is under control for large~$\beta$. In Figure~\ref{subfig:JTellconverge} we show the annealed free energy computed using~\eqref{eq:JTlowtemp} for~$S_0 = 7$, and find that as expected, the low-temperature expansion agrees with the first few partial sums of the genus expansion in the region~$T e^{2S_0/3} \gtrsim 0.3$. This allows us to conclude that the unphysical peak in the free energy at~$T e^{2S_0/3} \approx 0.7$ cannot be eliminated by either higher order terms in the genus expansion or by doubly non-perturbative effects.
In fact, there is more we can say in this low-temperature limit. Since~$\tilde{z}_0(h) = 1$, the leading-order term in~\eqref{eq:JTlowtemp} is given by
\be
\Pcal_{\mathrm{conn},1}(\beta) = \frac{\exp\left(e^{-2S_0} \beta^3/24\right)}{\sqrt{2\pi} \beta^{3/2}} \, e^{S_0} + \cdots.
\ee
This is precisely the partition function in the Airy case of random matrix theory and topological gravity,
\be
\overline{Z(\beta)} = \int dE \, \overline{\rho}_\mathrm{Airy}(E) e^{-\beta E} = \frac{\exp\left(e^{-2S_0} \beta^3/24\right)}{\sqrt{2\pi} \beta^{3/2}} \, e^{S_0},
\ee
where the Airy density of eigenvalues is given by~\cite{Wit90,Kon92}
\be
\overline{\rho}_\mathrm{Airy}(E) = 2^{1/3} e^{2S_0/3} \left[\mathrm{Ai}'\!\left(-2^{1/3} e^{2S_0/3} E\right)^2 + 2^{1/3} e^{2S_0/3} E \, \mathrm{Ai}\left(-2^{1/3} e^{2S_0/3} E\right)^2 \right].
\ee
The leading-order behavior (in~$e^{-S_0}$) of~$\overline{\rho}_\mathrm{Airy}(E)$ is just
\be
\label{eq:spectraledge}
\rho_0(E) = \frac{e^{S_0}}{\pi} \, \sqrt{2E} \mbox{ with } E > 0,
\ee
which is the universal behavior of the leading-order density of eigenvalues near the edge of the spectrum in the double-scaled matrix models of~\cite{SSS}. Hence the low-temperature expansion~\eqref{eq:JTlowtemp} can be thought of as an expansion about the low-energy edge of the spectrum, with the subleading terms capturing deviations from the exact form~\eqref{eq:spectraledge}. Concretely, it corresponds to taking~$S_0 \to \infty$ while keeping~$\beta e^{-2S_0/3}$ fixed. The contribution to~$\Pcal_{\mathrm{conn},m}$ from this leading-order low-temperature behavior can be summed over genus for any~$m$ using the results of~\cite{Oko01}; we summarize the relevant results in Appendix~\ref{app:Airy}, and the relevant expression for~$\Pcal_{\mathrm{conn},m}$ is given by~\eqref{eq:PmAiry}.
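As a numerical cross-check of these statements, the sketch below works in rescaled units with the~$e^{S_0}$ and~$\sqrt{2}$ factors stripped off, i.e.~with density~$\rho(E) = \mathrm{Ai}'(-E)^2 + E\,\mathrm{Ai}(-E)^2$, edge law~$\sqrt{E}/\pi$, and Laplace transform~$e^{\beta^3/12}/(2\sqrt{\pi}\,\beta^{3/2})$; SciPy is assumed to be available, and the tolerances are illustrative:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import airy

def rho_airy(E):
    # Airy density in rescaled units: Ai'(-E)^2 + E Ai(-E)^2
    ai, aip, _, _ = airy(-E)
    return aip**2 + E * ai**2

# (1) at large E the density approaches the universal edge law sqrt(E)/pi
E = 25.0
assert abs(rho_airy(E) / (np.sqrt(E) / np.pi) - 1) < 0.01

# (2) its Laplace transform reproduces the genus-summed one-boundary result
beta = 1.0
Z, _ = quad(lambda E: rho_airy(E) * np.exp(-beta * E), -30, 80, limit=400)
assert abs(Z - np.exp(beta**3 / 12) / (2 * np.sqrt(np.pi) * beta**1.5)) < 1e-6
```

The second check is exponentially insensitive to the cutoffs, since the density dies superexponentially below the edge and the Boltzmann factor suppresses the tail above it.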
The fact that the low-temperature limit in which we are interested is dominated by the universal behavior~\eqref{eq:spectraledge} means that we may gain some qualitative insights into the competition between connected and disconnected topologies by considering particularly simple matrix models. For example, the Gaussian matrix integral has a leading-order density of eigenvalues given by the Wigner semicircle
\be
\rho_0(E) = \frac{e^{S_0}}{\pi} \sqrt{\frac{a^2 - E^2}{a}}, \mbox{ with } -a < E < a,
\ee
which recovers~\eqref{eq:spectraledge} in the double-scaling limit~$E \to E - a$ followed by~$a \to \infty$~\cite{GinMoo93}. The exchange of dominance between connected and disconnected topologies in the Gaussian matrix integral was studied in~\cite{Oku19}, where it was found that the connected correlator~$\overline{Z(\beta)^2}_\mathrm{conn}$ becomes larger than the disconnected correlator~$\overline{Z(\beta)}^2$ at temperatures lower than~$\sim N^{-2/3}$ (or~$\sim e^{-2S_0/3}$ using JT terminology). So the behavior we are exploring is a general feature of random matrix models.
The upshot is that the low-temperature regime in which we are interested is quite well-understood; importantly, the contributions of higher genera (and their associated doubly-nonperturbative corrections) are insufficient to eliminate the pathological behavior of the annealed free energy. Therefore, we now turn to a computation of the quenched free energy via an analytic continuation to near~$m = 0$.
\subsection{The continuation in $m$}
To compute the contribution of Euclidean wormholes to the quenched free energy via the replica trick, we need the JT gravitational path integral~$\Pcal_m(\beta)$ defined by~$m$ disconnected boundary circles, each of length~$\beta/\eps$. These are related to the connected path integrals~\eqref{eq:JTgenusexpansion} by the usual relation
\be
\label{eq:connectedexpansion}
\sum_{m = 0}^\infty \frac{t^m}{m!} \Pcal_m(\beta) = \exp\left(\sum_{m = 1}^\infty \frac{t^m}{m!} \Pcal_{\mathrm{conn},m}(\beta)\right).
\ee
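The relation~\eqref{eq:connectedexpansion} is the standard exponential relation between full and connected correlators, and it is equivalent to the moment--cumulant recursion~$\Pcal_m = \sum_{k=1}^m \binom{m-1}{k-1}\Pcal_{\mathrm{conn},k}\,\Pcal_{m-k}$ with~$\Pcal_0 = 1$. The following sketch checks this recursion against a direct expansion of the exponential; the numerical values for the connected path integrals are arbitrary illustrative stand-ins, not actual JT data:

```python
from math import comb, factorial

# illustrative stand-in values for Pcal_conn,k
Pc = {1: 0.8, 2: 0.5, 3: 0.2, 4: 0.1}

# recursion implied by the exponential relation:
# P_m = sum_{k=1}^m C(m-1, k-1) Pc_k P_{m-k},  with P_0 = 1
P = {0: 1.0}
for m in range(1, 5):
    P[m] = sum(comb(m - 1, k - 1) * Pc[k] * P[m - k] for k in range(1, m + 1))

# direct check: m! times the coefficient of t^m in exp(sum_k Pc_k t^k / k!)
f = [0.0] + [Pc[k] / factorial(k) for k in range(1, 5)]   # series of the exponent
g = [1.0, 0.0, 0.0, 0.0, 0.0]                             # series of exp(f)
term = [1.0, 0.0, 0.0, 0.0, 0.0]                          # running f^n / n!
for n in range(1, 5):
    new = [0.0] * 5
    for i in range(5):
        for j in range(5 - i):
            new[i + j] += term[i] * f[j] / n
    term = new
    g = [a + b for a, b in zip(g, term)]

for m in range(1, 5):
    assert abs(P[m] - factorial(m) * g[m]) < 1e-12
```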
In order to continue to near~$m = 0$, we need to express~$\Pcal_m(\beta)$ in a form analytic in~$m$; this is difficult because the Weil-Petersson volumes~$V_{g,m}$, and consequently the coefficients~$Z_{g,m}(\beta)$ in the genus expansion, are not known analytically in~$m$. This is true also in the Airy limit discussed in Section~\ref{subsec:Airy}, where, although explicit formulas are known (see Appendix~\ref{app:Airy} for a review), they are not written as analytic functions of~$m$. We will therefore proceed in an alternative fashion: we define a ``truncated'' path integral~$\Pcal_{m,M}$ to be the JT gravity path integral including only topologies that connect up to~$M$ boundaries, with~$M$ some fixed integer (this amounts to truncating the sum on the right-hand side of~\eqref{eq:connectedexpansion} to~$m \leq M$). We then analytically continue~$\Pcal_{m,M}$ to non-integer~$m$ with~$M$ held fixed, defining a truncated free energy
\be
\overline{F}_M = - T \lim_{m \to 0} \frac{1}{m} \left(\Pcal_{m,M}(\beta) - 1\right).
\ee
Now, for integer~$m \leq M$,~$\Pcal_{m,M}(\beta)$ will of course coincide with the exact result~$\Pcal_m(\beta)$, and hence for all integer~$m$ we have
\be
\Pcal_m(\beta) = \lim_{M \to \infty} \Pcal_{m,M}(\beta).
\ee
If as~$M \to \infty$ the analytic continuation of~$\Pcal_{m,M}(\beta)$ to non-integer~$m$ converges to a function~$\Pcal_{m,\infty}(\beta)$ which is also analytic in~$m$, we may take~$\Pcal_{m,\infty}(\beta)$ to define the analytic continuation of~$\Pcal_m(\beta)$ to non-integer~$m$. We can then express the free energy as\footnote{Assuming the limits~$M \to \infty$,~$m \to 0$ commute.}
\be
\overline{F} = \lim_{M \to \infty} \overline{F}_M = -T \lim_{M \to \infty} \lim_{m \to 0} \frac{1}{m} \left(\Pcal_{m,M}(\beta) - 1\right).
\ee
In practice, we will compute the truncated free energies~$\overline{F}_M$ for some relatively small values of~$M$, which by the argument above we might expect to give us an approximation to the exact free energy~$\overline{F}$. In particular,~$\overline{F}_1$ is just the annealed free energy shown in Figure~\ref{fig:JTgenusconverge}, so we are interested in modifications to the behavior of~$\overline{F}_M$ as~$M$ is increased, specifically in the regime~$T e^{2S_0/3} \gtrsim 0.3$.
To obtain the aforementioned continuation of~$\Pcal_{m,M}(\beta)$ to non-integer~$m$, we proceed inductively: noting that for~$M = 1$ we have~$\Pcal_{m,1}(\beta) = \Pcal_{\mathrm{conn},1}(\beta)^m$, we will suppose that for arbitrary~$M$ we may write
\be
\label{eq:PmMinductive}
\Pcal_{m,M}(\beta) = \, ^{(M)} \! \sum_I \left(A_I^{(M)}\right)^m
\ee
for some~$m$-independent object~$A_I^{(M)}$, where the sum~$^{(M)} \! \sum_I$ (and the corresponding index~$I$) is very schematic and can include both discrete sums and integrals. We then show that if~$\Pcal_{m,M-1}$ can be written in the form~\eqref{eq:PmMinductive}, then so can~$\Pcal_{m,M}$; since~\eqref{eq:PmMinductive} is true for~$M = 1$, we conclude it holds for all~$M$. Explicit forms for~$^{(M)} \sum_I$ and~$A_I^{(M)}$ can then be generated by iterating the inductive step. The continuation of~\eqref{eq:PmMinductive} to non-integer~$m$ is immediate, and the free energy can then easily be obtained.
To perform the inductive step, we wish to express~$\Pcal_{m,M}(\beta)$ as a sum over all possible ways of connecting~$m$ boundaries using topologies that connect no more than~$M$ of them. To do so, we first choose precisely~$M m'$ of the boundaries to be filled in by wormholes that connect exactly~$M$ boundaries (there will be~$m'$ such wormholes), while the remaining~$m - Mm'$ boundaries will be filled in by topologies connecting no more than~$M-1$ boundaries. The~$m'$ wormholes connecting the~$Mm'$ boundaries will make a contribution of~$\Pcal_{\mathrm{conn},M}^{m'}$ to the path integral, while the remaining boundaries contribute~$\Pcal_{m-Mm',M-1}(\beta)$. The full path integral~$\Pcal_{m,M}(\beta)$ is then obtained by summing over all possible~$m'$. For example, we would pictorially express~$\Pcal_{12,4}$ as
\be
\vcenter{\hbox{\includegraphics[width=0.85\textwidth,page=5]{Figures-pics}}} \, ,
\ee
where dotted lines denote boundaries that contribute to the indicated path integral~$\Pcal_{m,M}$, and each term in the sum should come with a factor that counts how many distinct ways there are of arranging the twelve boundaries into the corresponding configuration. For general~$m$,~$M$, we have
\be
\Pcal_{m,M}(\beta) = \sum_{m' = 0}^{\lfloor m/M \rfloor} \text{(counting factor)} \Pcal_{\mathrm{conn},M}(\beta)^{m'} \Pcal_{m-Mm',M-1}(\beta),
\ee
where the counting factor is given by
\be
\text{(counting factor)} = \begin{pmatrix} m \\ M m' \end{pmatrix} \times \frac{1}{m'!} \prod_{j = 1}^{m'} \begin{pmatrix} j M \\ M \end{pmatrix} = \begin{pmatrix} m \\ M m' \end{pmatrix} \frac{(M m')!}{(M!)^{m'} m'!}.
\ee
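As a quick sanity check of this counting factor, the short script below (purely combinatorial bookkeeping, not part of the gravity computation) compares the closed-form expression against a brute-force count of the groupings, obtained by fixing one boundary, choosing its~$M-1$ partners, and recursing:

```python
from math import comb, factorial

def counting_factor(m, M, mp):
    # (m choose M*mp) * (M*mp)! / ((M!)^mp * mp!)
    return comb(m, M * mp) * factorial(M * mp) // (factorial(M) ** mp * factorial(mp))

def groupings(n, M):
    # ways to partition n labeled boundaries into unordered blocks of size M:
    # fix the first boundary, choose its M-1 partners, recurse on the rest
    if n == 0:
        return 1
    return comb(n - 1, M - 1) * groupings(n - M, M)

for (m, M, mp) in [(12, 4, 2), (12, 4, 3), (6, 2, 3), (10, 5, 1)]:
    assert counting_factor(m, M, mp) == comb(m, M * mp) * groupings(M * mp, M)
```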
The first term in this expression simply counts how many distinct ways there are of choosing~$Mm'$ boundaries from the full set of~$m$. The second term counts how many distinct ways there are of grouping the~$M m'$ boundaries into groups of~$M$; the product over binomial coefficients can be interpreted as the number of ways of choosing~$M$ boundaries to connect out of the total~$m' M$, multiplied by the number of ways of choosing~$M$ boundaries out of the remaining~$(m'-1)M$, and so on, with the~$m'!$ cancelling out the overcounting of the same groupings in different orders. Invoking the inductive hypothesis~\eqref{eq:PmMinductive}, we therefore have
\be
\label{eq:PmMhalfstep}
\Pcal_{m,M}(\beta) = \, ^{(M-1)} \! \sum_I \left(A_I^{(M-1)}\right)^m \sum_{m' = 0}^{\lfloor m/M \rfloor} \begin{pmatrix} m \\ M m' \end{pmatrix} \frac{(M m')!}{m'!} \left(\frac{\Pcal_{\mathrm{conn},M}(\beta)}{M! \left(A_I^{(M-1)}\right)^M}\right)^{m'}.
\ee
We now write
\be
\label{eq:factorialintegrals}
(Mm')! = \int_0^\infty dt \, e^{-t} t^{Mm'}, \qquad \frac{1}{m'!} = \frac{1}{2\pi i} \int_C dz\, e^z z^{-(m'+1)},
\ee
where~$C$ is any contour that encloses~$z = 0$. Both of these equations are correct for integer~$m'$; for~$m'$ not an integer, the first expression is of course just the definition of the gamma function~$\Gamma(Mm'+1)$ (for~$\mathrm{Re}(Mm') > -1$), but due to the branch cut of~$z^{-(m'+1)}$ along the negative real axis, the second only coincides with~$1/\Gamma(m'+1)$ if~$C$ is chosen to be a Hankel contour\footnote{That is, if~$C$ runs from~$z = -\infty$ to~$z = 0$ and back to~$z = -\infty$, looping in the positive direction around the branch cut.}. But since~\eqref{eq:factorialintegrals} are only required to hold when~$m'$ is a positive integer, there is no need to require~$C$ to be a Hankel contour, and in the freedom in choosing~$C$ we already see a foreshadowing of the freedom that will manifest in the analytic continuation to near~$m = 0$.
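For instance, with~$C$ taken to be the unit circle, the contour representation can be checked numerically for integer~$m'$; a periodic trapezoid rule on the circle converges essentially to machine precision here (the script is illustrative only):

```python
import numpy as np
from math import factorial, gamma

theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
z = np.exp(1j * theta)

for mp in range(6):
    # (1/2πi) ∮ e^z z^{-(mp+1)} dz with dz = iz dθ reduces to a θ-average:
    # (1/2π) ∫ e^{z(θ)} z(θ)^{-mp} dθ, the z^mp Taylor coefficient of e^z
    val = np.mean(np.exp(z) * z ** (-mp))
    assert abs(val - 1.0 / factorial(mp)) < 1e-12

# the t-integral is just the Gamma function, e.g. for M = 2, mp = 3:
assert abs(gamma(2 * 3 + 1) - factorial(6)) < 1e-6
```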
Using the identity~\eqref{eq:identity}, we may evaluate the sum over~$m'$ to obtain
\be
\Pcal_{m,M}(\beta) = \frac{1}{M} \int d\mu(t,z) \sum_{j = 0}^{M-1} \, ^{(M-1)}\sum_I \left(A_I^{(M-1)} + e^{2j\pi i/M} \left(\frac{\Pcal_{\mathrm{conn},M}(\beta)}{M! \, z}\right)^{1/M} t \right)^m,
\ee
where
\be
d\mu(t,z) \equiv \frac{dt \, dz}{2\pi i z} \, e^{-t+z}
\ee
and the appropriate contours of integration for~$t$ and~$z$ are understood. This expression for~$\Pcal_{m,M}(\beta)$ is of the form~\eqref{eq:PmMinductive} assumed in our inductive argument, which completes the inductive step: \eqref{eq:PmMinductive} therefore holds for all~$M$, with~$A^{(M)}_I$ and the schematic sum~$^{(M)} \! \sum_I$ obeying
\bea
^{(M)} \sum_I &= \int d\mu(t,z) \frac{1}{M} \sum_{j = 0}^{M-1} \, ^{(M-1)} \! \sum_J, \\
A_I^{(M)} &= A_I^{(M-1)} + e^{2j\pi i/M} \left(\frac{\Pcal_{\mathrm{conn},M}(\beta)}{M! \, z}\right)^{1/M} t.
\eea
Iterating these from the base case~$M = 1$ (for which the sum~$^{(1)} \! \sum_I$ is trivial and~$A^{(1)} = \Pcal_{\mathrm{conn},1}$), we therefore find
\begin{subequations}
\begin{multline}
\label{eq:PgeneralNJT}
\Pcal_{m,M}(\beta) = \int \left(\prod_{k = 1}^{M-1} d\mu(t_k,z_k)\right) \\ \times \frac{1}{M!} \sum_{j_1 = 0}^1 \sum_{j_2 = 0}^2 \cdots \sum_{j_{M-1} = 0}^{M-1} A^{(M)}_{j_1, \ldots, j_{M-1}}(z_1,t_1, \ldots, z_{M-1},t_{M-1})^m,
\end{multline}
\be
\label{eq:ANclosedform}
A^{(M)}_{j_1, \ldots, j_{M-1}}(\{t_k,z_k\}) = \Pcal_{\mathrm{conn},1}(\beta) + \sum_{k = 2}^M e^{2j_{k-1} \pi i/k} \left(\frac{\Pcal_{\mathrm{conn},k}(\beta)}{k! \, z_{k-1}}\right)^{1/k}t_{k-1}.
\ee
\end{subequations}
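As a check of this machinery, for~$M = 2$ and integer~$m$ the representation~\eqref{eq:PgeneralNJT} can be evaluated numerically (Gauss--Laguerre quadrature for the~$t$ integral, the unit circle for~$z$) and compared against the direct sum over wormhole pairings; the values used below for~$\Pcal_{\mathrm{conn},1}$ and~$\Pcal_{\mathrm{conn},2}$ are arbitrary illustrative numbers, not actual JT data:

```python
import numpy as np
from math import comb, factorial

P1, P2 = 1.0, 0.3        # illustrative stand-ins for Pcal_conn,1 and Pcal_conn,2
m, M = 4, 2

# direct sum over pairings: sum_mp C(m,2mp) (2mp)!/(2^mp mp!) P2^mp P1^(m-2mp)
direct = sum(comb(m, M * mp) * factorial(M * mp) / (factorial(M) ** mp * factorial(mp))
             * P2 ** mp * P1 ** (m - M * mp) for mp in range(m // M + 1))

# contour form: Pcal_{m,2} = ∫ dμ(t,z) (1/2) Σ_± (P1 ± sqrt(P2/(2z)) t)^m
t_nodes, t_wts = np.polynomial.laguerre.laggauss(40)       # ∫_0^∞ e^{-t} f(t) dt
theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
z = np.exp(1j * theta)
sqrt_z = np.exp(1j * theta / 2)                            # branch of z^{1/2} on the circle

total = 0.0j
for sign in (+1.0, -1.0):
    A = P1 + sign * np.sqrt(P2 / 2.0) / sqrt_z[:, None] * t_nodes[None, :]
    inner = (t_wts[None, :] * A ** m).sum(axis=1)          # t integral at fixed z
    total += 0.5 * np.mean(np.exp(z) * inner)              # ∮ dz/(2πiz) e^z (...)

assert abs(total - direct) < 1e-8
```

Note that the branch choice for~$z^{1/2}$ drops out for integer~$m$, since the sum over the two signs cancels all odd powers of the square root pointwise.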
The analytic continuation to near~$m = 0$ is now straightforward; bearing in mind that as in the~$\widehat{\mathrm{CGHS}}$ case we must take the real part, we find
\be
\label{eq:JTfreeenergy}
\overline{F}_M = -T \, \mathrm{Re} \int \left(\prod_{k = 1}^{M-1} d\mu(t_k,z_k)\right) \frac{1}{M!} \, \sum_{j_1 = 0}^1 \cdots \sum_{j_{M-1} = 0}^{M-1} \ln A^{(M)}_{j_1, \ldots, j_{M-1}}(\{t_k,z_k\}).
\ee
As already noted, this free energy depends on the choice of contours~$C_k$ for the integrals over~$z_k$ introduced in the analytic continuation~\eqref{eq:factorialintegrals}. Specifically, the integrand of~\eqref{eq:JTfreeenergy} exhibits branch cuts in the complex~$z_k$ planes, and will therefore be sensitive to where the contour~$C$ intersects these cuts. This is not surprising: as discussed in Section~\ref{subsec:CGHScontinuation}, inferring the ``correct'' analytic continuation to near~$m = 0$ is rather subtle.
\begin{figure}[t]
\centering
\subfloat[][Just $g = 0$]{
\includegraphics[width=0.49\textwidth]{JT_Free_Energy_S7_g0_varyM}
\label{subfig:JTfreeenergyS0g0second}
}%
\subfloat[][Up to $g = 1$]{
\includegraphics[width=0.49\textwidth]{JT_Free_Energy_S7_g1_varyM}
\label{subfig:JTfreeenergyS0g1second}
}
\\
\subfloat[][Up to $g = 2$]{
\includegraphics[width=0.5\textwidth]{JT_Free_Energy_S7_g2_varyM}
\label{subfig:JTfreeenergyS0g2second}
}
\caption{The low-temperature behavior of the JT gravity free energy~$\overline{F}_M$ for various~$M$; here we take~$S_0 = 7$, and the contour~$C$ in~\eqref{eq:factorialintegrals} is the unit circle. From top left to bottom, the free energy is computed using topologies with genus up to zero, one, or two. The blue, orange, green, red, and purple curves correspond to~$M = 1,2,3,4,5$, respectively.}
\label{fig:JTfreeenergyS7}
\end{figure}
\begin{figure}[t]
\centering
\subfloat[][Using~\eqref{eq:factorialintegrals} with~$C$ the unit circle.]{
\includegraphics[width=0.49\textwidth]{JT_Free_Energy_S7_g0_varyM}
\label{subfig:JTfreeenergyS0g0secondcompare}
}%
\subfloat[][Using~\eqref{eq:gammamultiplication}.]{
\includegraphics[width=0.49\textwidth]{JT_Free_Energy_S7_g0_varyM_othercontinuation}
\label{subfig:JTfreeenergyS0g0firstcompare}
}
\caption{The low-temperature behavior of the JT gravity free energy~$\overline{F}_M$ obtained using two different analytic continuations to non-integer~$m$: on the left we used the continuation~\eqref{eq:factorialintegrals} with the contour~$C$ taken to be the unit circle (this is the same as Figure~\ref{subfig:JTfreeenergyS0g0second}), while on the right we used~\eqref{eq:gammamultiplication}. The qualitative features agree, but quantitative details do not. The blue, orange, green, red, and purple curves correspond to~$M = 1,2,3,4,5$, respectively, and we take~$S_0 = 7$.}
\label{fig:othercontinuation}
\end{figure}
We would now like to verify that the corrections from replica wormholes significantly alter and even dominate the behavior of the free energy in the regime~$e^{2S_0/3} T \gtrsim 0.3$ with~$S_0$ large, in which we have shown that the genus expansion is under perturbative control. To that end, we again use~\eqref{eqs:Zgm} (along with the explicit forms of the~$V_{g,m}$) to compute~$\overline{F}_M$, incorporating contributions up to~$g = 2$ and~$M = 5$; the results are shown in Figure~\ref{fig:JTfreeenergyS7}. Note that in Figure~\ref{fig:JTfreeenergyS7} we take the contour~$C$ in~\eqref{eq:factorialintegrals} to be the unit circle for simplicity. It is clear that in the regime~$e^{2S_0/3} T \gtrsim 0.3$, the inclusion of replica wormholes can substantially modify the behavior of the free energy. The unphysical local maximum appears to be ``softened'' by the replica wormhole contributions, though we should be careful not to draw any firm conclusions about the quantitative features of~$\overline{F}_M$ due to the ambiguity in the continuation to near~$m = 0$ (including, for instance, whether the~$M \to \infty$ limit even exists). In short, we can ascribe meaning to the fact that the free energy changes when replica wormholes are included, but we cannot know its quantitative behavior until we know how to pick the ``right'' continuation. To highlight this point, in Figure~\ref{fig:othercontinuation} we compare the~$g = 0$ free energies obtained from the analytic continuation~\eqref{eq:factorialintegrals} with~$C$ the unit circle to another analytic continuation in which we instead used the gamma function multiplication theorem to write
\be
\label{eq:gammamultiplication}
\frac{(Mm')!}{m'!} = \frac{M^{Mm'+1/2}}{(2\pi)^{(M-1)/2}} \prod_{k = 1}^{M-1} \Gamma\left(m' + \frac{k}{M}\right),
\ee
and then expressed the gamma functions in the product in their integral form. The qualitative features of the free energy computed with these two different analytic continuations agree well, but of course they differ quantitatively. At this point we do not know how to specify the correct prescription, but for reasons that we will describe in the next section, we expect the answer will involve replica symmetry breaking in the~$m \to 0$ limit.
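The multiplication theorem~\eqref{eq:gammamultiplication} itself is straightforward to verify numerically for integer arguments, e.g.:

```python
from math import factorial, gamma, pi

# check (M m')!/m'! = M^(M m' + 1/2) / (2π)^((M-1)/2) * Π_k Γ(m' + k/M)
for M, mp in [(2, 3), (3, 4), (5, 2)]:
    lhs = factorial(M * mp) / factorial(mp)
    rhs = M ** (M * mp + 0.5) / (2 * pi) ** ((M - 1) / 2)
    for k in range(1, M):
        rhs *= gamma(mp + k / M)
    assert abs(lhs / rhs - 1) < 1e-10
```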
As a final note, it is interesting to examine the behavior of~$\overline{F}_M$ using the leading-order low-temperature behavior of~$\Pcal_{\mathrm{conn},m}$ discussed in Section~\ref{subsec:Airy} and Appendix~\ref{app:Airy}. Specifically, using equation~\eqref{eq:PmAiry} for the path integral in the Airy limit, we obtain the behavior of~$\overline{F}_M$ shown in Figure~\ref{fig:Airyfree}. While again we may not draw any definitive quantitative conclusions due to the ambiguity in the analytic continuation, we see that connected topologies affect the behavior of the free energy even when all terms in the genus expansion are included.
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{Airy_Free_Energy_varyM}
\caption{The behavior of the quenched free energy~$\overline{F}_M$ for the Airy case, including contributions from all genera using the result~\eqref{eq:PmAiry}. This amounts to taking~$S_0 \to \infty$ with~$T e^{2S_0/3}$ held fixed in the JT path integral. As in Figure~\ref{fig:JTfreeenergyS7}, here we take the contour~$C$ in~\eqref{eq:factorialintegrals} to be the unit circle, and the blue, orange, green, red, and purple curves correspond to~$M = 1,2,3,4,5$, respectively.}
\label{fig:Airyfree}
\end{figure}
\section{Replica Symmetry Breaking and a Spin Glass Analogy}
\label{sec:RSB}
We have shown that in computing extensive quantities like the free energy in gravitational systems, the interpretation of the GPI as an ensemble average -- requiring a replica trick for the computation of the quenched free energy -- can lead to a contribution from replica wormholes that exceeds that of disconnected topologies. The necessity of these corrections can already be inferred from the pathological properties of the low-temperature behavior of the annealed free energy, computed just from disconnected topologies without resorting to a replica trick. We have also seen that the inclusion of replica wormholes remedies some of these pathologies but is not sufficient to remove them entirely; we interpret this as necessitating a clearer understanding of the correct analytic continuation to~$m = 0$. Indeed, let us emphasize that in the simpler case of $\widehat{\mathrm{CGHS}}$, the (nonperturbative) calculation that includes all of the allowed geometries \textit{still} exhibits a pathological annealed free energy at low temperatures. This calculation had only one potential pitfall: the $m\rightarrow 0$ analytic continuation. This immediately implies that it is the choice of the straightforward analytic continuation that is directly responsible for the incorrect result.
All of these features -- an annealed free energy with pathological low-temperature behavior, an improvement in this behavior under the inclusion of connected replicas in computing the quenched free energy, and the need for a careful analytic continuation to near~$m = 0$ to eliminate the pathological behavior entirely -- are exhibited in the well-studied context of spin glasses. In order to draw an analogy with these systems, we will now review one particularly well-known example: the Sherrington-Kirkpatrick (SK) model~\cite{SheKir75}. In this system, we will see that the non-uniqueness of the analytic continuation to~$m = 0$ is due to a replica symmetry-breaking transition that occurs at~$m < 1$, suggesting that a similar transition likely occurs in the gravitational systems we have examined, and that it is unlike the usual~$\mathbb{Z}_{n}$-replica symmetry breaking that is discussed in the context of gravitational calculations of the R\'enyi entropies. We will keep the review of the SK model limited to the bare essentials, but would recommend~\cite{SherringtonReview,CasCavReview} and especially~\cite{SpinGlassBook} for more comprehensive treatments.
\subsection{Review of the SK model}
The SK model is an infinite-ranged classical Ising model of~$N$ interacting spins~$\sigma_i$, with Hamiltonian
\be
H_{\{J_{ij}\}}[\sigma] = -\sum\limits_{(ij)} J_{ij}\sigma_{i}\sigma_{j},
\ee
where the sum runs over all distinct pairs of spins~$(ij)$. Each of the random couplings~$J_{ij}$ is drawn from a Gaussian\footnote{We could consider a more general distribution, but the important physics is captured by just the second moment of~$P(J_{ij})$.} distribution~$P(J_{ij})$ with mean~$J_0/N$ and variance~$J^2/N$. As above, we will denote averages over the distribution~$P(J_{ij})$ via an overline, so that, for instance, the ensemble average of the logarithm of the partition function is
\be
\overline{\ln Z} = \int \left(\prod_{(ij)} dJ_{ij} P(J_{ij}) \right)
\ln \Tr e^{-\beta H_{\{J_{ij}\}}[\sigma]}.
\ee
Note that~$\overline{\ln Z}$ is quite difficult to compute directly, but using the replica trick~\eqref{eq:replicatrick} requires us to simply compute the ensemble average of the $m$-replicated partition function
\be
\overline{Z^m} = \overline{
\left(\Tr e^{-\beta H_{\{J_{ij}\}}[\sigma]}\right)^m} = \overline{\Tr_m \exp\left(-\beta \sum_{\alpha = 1}^m H_{\{J_{ij}\}}[\sigma^\alpha]\right)},
\ee
where~$\alpha$ is a replica index that labels~$m$ copies of the spins~$\sigma^\alpha$, and the last trace is over all~$m$ replica systems. The last average is quite easy to express in terms of the moments~$J_0$ and~$J$ of the distribution~$P(J_{ij})$:
\be
\label{eq:ZmSKdisorderaverage}
\overline{Z^m} = \Tr_m \exp \left\{\frac{1}{N} \sum_{(ij)} \left(J_0 \beta \sum_{\alpha = 1}^m \sigma_i^\alpha \sigma_j^\alpha + \frac{(\beta J)^2}{2} \left(\sum_{\alpha = 1}^m \sigma_i^\alpha \sigma_j^\alpha\right)^2 \right) \right\}.
\ee
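The step from the Gaussian disorder average to~\eqref{eq:ZmSKdisorderaverage} uses only the moment generating function of each coupling,~$\overline{e^{aJ_{ij}}} = e^{aJ_0/N + a^2J^2/(2N)}$. For a toy system this can be verified end to end by exact enumeration over the spins and quadrature over the couplings; the sketch below uses arbitrary illustrative parameter values:

```python
import numpy as np
from math import exp
from itertools import product

N, m, beta, J0, J = 3, 2, 0.7, 0.4, 0.9
pairs = [(0, 1), (0, 2), (1, 2)]
configs = list(product([-1, 1], repeat=N))

# LHS: E_J[(Tr e^{-beta H})^m], each J_ij ~ N(J0/N, J^2/N) handled by
# 24-point Gauss-Hermite quadrature (effectively exact for this integrand)
x, wx = np.polynomial.hermite.hermgauss(24)
Jvals = (J0 / N + J * np.sqrt(2.0 / N) * x).tolist()
wts = (wx / np.sqrt(np.pi)).tolist()
lhs = 0.0
for idx in product(range(24), repeat=len(pairs)):
    w = wts[idx[0]] * wts[idx[1]] * wts[idx[2]]
    Z = sum(exp(beta * sum(Jvals[idx[p]] * s[i] * s[j]
                           for p, (i, j) in enumerate(pairs))) for s in configs)
    lhs += w * Z ** m

# RHS: eq. (ZmSKdisorderaverage), a trace over m replicas of the spins
rhs = 0.0
for reps in product(configs, repeat=m):
    expo = 0.0
    for (i, j) in pairs:
        ss = sum(s[i] * s[j] for s in reps)
        expo += (J0 * beta * ss + (beta * J) ** 2 / 2 * ss ** 2) / N
    rhs += exp(expo)

assert abs(lhs / rhs - 1) < 1e-8
```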
The fact that the couplings~$J_{ij}$ are correlated between the different replicas has led to the introduction of an effective coupling between replicas via the ensemble average. Moreover, by completing the squares in the sums over spin sites and introducing auxiliary variables~$s_\alpha$,~$q_{(\alpha,\gamma)}$ with~$\alpha \neq \gamma$ (sometimes called Hubbard-Stratonovich variables, collective fields, or mean fields), we may decouple the spin sites:
\be
\label{eq:ZmSKexact}
\overline{Z^m} = B \int \left(\prod_\alpha ds_\alpha\right) \left(\prod_{(\alpha,\gamma)} dq_{(\alpha,\gamma)}\right) e^{N H_\mathrm{eff}},
\ee
where~$B$ is a prefactor that is sub-exponential in~$N$ (and therefore will be irrelevant in the thermodynamic limit~$N \to \infty$), the variables~$s_\alpha$ and~$q_{(\alpha,\gamma)}$ are all integrated over the real axis, and the notation~$(\alpha,\gamma)$ denotes all distinct pairs of replicas. Here the effective Hamiltonian~$H_\mathrm{eff}$ is independent of~$N$ and given by
\begin{subequations}
\be
H_\mathrm{eff} = \ln \underset{\{\sigma^\alpha\}}{\Tr} e^{\Lcal[\sigma^\alpha]} - \Kcal,
\ee
where the trace is now over all~$m$ replicas of a \textit{single spin site} and
\begin{align}
\Kcal &\equiv \frac{\beta J_0}{2} \sum_\alpha s_\alpha^2 + \frac{(\beta J)^2}{2} \sum_{(\alpha,\gamma)} q_{(\alpha,\gamma)}^2 - \frac{m}{4} (\beta J)^2, \\
\Lcal[\sigma^\alpha] &\equiv \beta J_0 \sum_{\alpha} s_{\alpha} \sigma^{\alpha} + (\beta J)^2 \sum_{(\alpha,\gamma)} q_{(\alpha,\gamma)} \sigma^{\alpha} \sigma^{\gamma}.
\end{align}
\end{subequations}
At this point~\eqref{eq:ZmSKexact} is still an exact equation, whose existence is made possible thanks to the all-to-all coupling of the SK model: the fact that the couplings between all pairs of sites are drawn from the same distribution allows for the factorization of different spin sites in~\eqref{eq:ZmSKdisorderaverage} via the introduction of the variables~$s_\alpha$ and~$q_{(\alpha,\gamma)}$. We may now take the thermodynamic limit~$N \to \infty$, finding via a saddle point approximation that
\be
\label{eq:ZmSKsaddle}
\overline{Z^m} \sim \exp\left(N H_\mathrm{eff}\left(s_\alpha,q_{(\alpha,\gamma)}\right)\right),
\ee
where now~$s_\alpha$ and~$q_{(\alpha,\gamma)}$ are solutions to the saddle point equations~$\partial H_\mathrm{eff}/\partial s_\alpha = 0 = \partial H_\mathrm{eff}/\partial q_{(\alpha,\gamma)}$. It is easy to see that these conditions reduce to
\be
\label{eq:meanfields}
s_\alpha = \ev{\sigma^\alpha}_\Lcal, \qquad q_{(\alpha,\gamma)} = \ev{\sigma^\alpha \sigma^\gamma}_\Lcal, \quad \mbox{where} \quad \ev{X}_\Lcal \equiv \frac{\Tr_{\{\sigma^\alpha\}}(X e^{\Lcal[\sigma^\alpha]})}{\Tr_{\{\sigma^\alpha\}}e^{\Lcal[\sigma^\alpha]}},
\ee
giving~$s_\alpha$ and~$q_{(\alpha,\gamma)}$ the interpretation of mean fields fixed by the self-consistency conditions~\eqref{eq:meanfields}. Importantly, the field~$q_{(\alpha,\gamma)}$ is interpreted as a coupling between replicas; a saddle with nonzero~$q_{(\alpha,\gamma)}$ indicates the spontaneous ``turning on'' of this coupling. Because this coupling is our main focus, from now on we will set~$J_0 = 0$ so that~$\overline{Z^m}$ becomes independent of the mean field~$s_\alpha$ (this excludes the possibility of a ferromagnetic phase, in which we are not currently interested).
In order to now compute~$\overline{\ln Z}$ (and therefore~$\overline{F}$) in the thermodynamic limit, we must analytically continue~\eqref{eq:ZmSKsaddle} to non-integer~$m$ near zero. Because the sums in~$H_\mathrm{eff}$ are only well-defined for integer~$m$, this procedure requires positing some ansatz for the matrix~$q_{(\alpha,\gamma)}$ that is amenable to the analytic continuation to~$m = 0$. Given the replica symmetry of the problem (corresponding to the permutation group~$\mathbb{S}_m$), it is natural to take the replica-symmetric ansatz
\be
\label{eq:replicasymmetricq}
q_{(\alpha,\gamma)} = q.
\ee
Indeed, for positive integer~$m$, the dominant saddles do exhibit this symmetry~\cite{HemmenPalmer79}. The analytic continuation to near~$m = 0$ is then straightforward, and the free energy becomes
\bea
\label{subeq:SKfreeenergy}
-\beta N^{-1} \overline{F} &= \frac{(\beta J)^2}{4} (1-q)^2 + \int_{-\infty}^\infty \frac{dy}{\sqrt{2\pi}} \, e^{-y^2/2} \ln \left(2 \cosh(\beta J \sqrt{q} \, y)\right), \\
& \mbox{where } q = \int_{-\infty}^\infty \frac{dy}{\sqrt{2\pi}} \, e^{-y^2/2} \tanh^2(\beta J \sqrt{q} \, y).
\eea
When~$\beta J < 1$ (i.e.~at sufficiently high temperature), the only solution is~$q = 0$, and hence the replicas are uncorrelated; this is the paramagnetic phase. The free energy obtained in this phase therefore satisfies $\overline{\ln Z} = \ln \overline{Z}$, i.e. we may average $Z$ before taking the logarithm with no loss of information. Hence the replica trick does not introduce any novel behavior. As the temperature is lowered, however, a solution with nonzero~$q$ begins to exist once~$\beta J > 1$. This new solution dominates the free energy\footnote{The number of off-diagonal components of~$q_{(\alpha,\gamma)}$ is~$m(m-1)/2$, which is \textit{negative} for~$0 < m < 1$; this implies that the saddle that maximizes~$\overline{Z^m}$ with respect to the components~$q_{(\alpha,\gamma)}$ actually \textit{minimizes}~$\overline{Z^m}$ with respect to~$q$ when~$m < 1$. Hence the saddle that dominates the free energy is in fact the one that \textit{maximizes} it with respect to~$q$. \label{foot:maximize}}, corresponding to the spin-glass phase in which the replicas spontaneously couple.
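These statements are easy to reproduce numerically; a minimal sketch (Gauss--Hermite quadrature for the Gaussian average, fixed-point iteration for the self-consistency condition in~\eqref{subeq:SKfreeenergy}) is:

```python
import numpy as np

x, wx = np.polynomial.hermite.hermgauss(80)   # ∫ e^{-x^2} f(x) dx ≈ Σ wx f(x)
y = np.sqrt(2.0) * x                          # Σ w g(y) ≈ ∫ dy e^{-y²/2}/√(2π) g(y)
w = wx / np.sqrt(np.pi)

def solve_q(betaJ, q=0.5, iters=400):
    # fixed-point iteration of q = ∫ dy e^{-y²/2}/√(2π) tanh²(βJ √q y)
    for _ in range(iters):
        q = float(np.sum(w * np.tanh(betaJ * np.sqrt(q) * y) ** 2))
    return q

def minus_beta_F_per_spin(betaJ):
    # -β F / N on the replica-symmetric saddle
    q = solve_q(betaJ)
    return (betaJ ** 2 / 4) * (1 - q) ** 2 \
        + float(np.sum(w * np.log(2.0 * np.cosh(betaJ * np.sqrt(q) * y))))

assert solve_q(0.5) < 1e-8          # paramagnet: only q = 0 for βJ < 1
assert 0.3 < solve_q(2.0) < 0.95    # spin glass: q ≠ 0 for βJ > 1
```

On the paramagnetic branch the free energy reduces to~$-\beta F/N = (\beta J)^2/4 + \ln 2$, whose growth at large~$\beta$ is the~$-1/T$ pathology of the annealed answer.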
While the field~$q$ was introduced in the context of the replica formalism, it has an interpretation in the~$m \to 0$ limit: it computes the so-called Edwards-Anderson order parameter~$q_\mathrm{EA}$ defined by the disorder-averaged square magnetization~\cite{EdwAnd75}:
\be
\lim_{m \to 0} q = q_\mathrm{EA} \equiv \overline{\ev{\sigma_i}^2}.
\ee
Here independence of the choice of lattice site~$i$ follows from translational invariance (after the disorder average), and the expectation value is a standard thermodynamic average taken with respect to a particular sampling of couplings:
\be
\ev{\sigma_i} \equiv \frac{\Tr \sigma_i e^{-\beta H_{\{J_{ij}\}}}}{\Tr e^{-\beta H_{\{J_{ij}\}}}}.
\ee
The non-vanishing of~$q$ in the spin glass phase therefore corresponds to magnetic order for any particular sampling of the couplings~$J_{ij}$. However, for~$J_0 = 0$ the disorder-averaged magnetization vanishes:~$\overline{\ev{\sigma_i}} = 0$. Since this disorder-averaged magnetization measures the ferromagnetic order of the system, we see that the spin-glass phase corresponds to a cooperatively frozen magnetic state but with no ferromagnetic order.
\subsection{Replica symmetry breaking in the SK model}
As can be seen directly from~\eqref{subeq:SKfreeenergy}, the free energy of the paramagnetic phase~$q = 0$ is pathological if we extend it to arbitrarily low temperature: at high temperatures it scales like~$-T$, while at low temperatures it exhibits a~$-1/T$ divergence. These behaviors imply that it is non-monotonic, with the thermodynamic entropy becoming negative at sufficiently low temperatures (and in fact diverging at zero temperature). As shown in Figure~\ref{fig:SKfreeenergy}, the turning on of the spin glass phase when~$T/J < 1$ is necessary to alleviate these pathologies, rendering the free energy finite. However, it is still non-monotonic: the zero-temperature entropy is~$\overline{S}_{T = 0} = -N/2\pi$. Clearly the calculation remains incomplete; from our earlier discussion, we expect that this missing ingredient involves some nontrivial behavior of the analytic continuation from~$\overline{Z^m}$ at positive integer~$m$ to~$m = 0$\footnote{Though we note that unlike the~$\widehat{\mathrm{CGHS}}$ case discussed in Section~\ref{subsec:CGHScontinuation}, the analytic continuation of the replica-symmetric ansatz~\eqref{eq:replicasymmetricq} in~\eqref{eq:ZmSKsaddle} to imaginary~$m$ does indeed obey the boundedness condition~$\left|\overline{Z^{i\alpha}}\right| \leq 1$. However,~$\overline{Z^m}$ still exhibits superexponential growth for real~$m$, so Carlson's theorem is still inapplicable~\cite{HemmenPalmer79}.}. How do we understand what the correct analytic continuation is?
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{SK_Free_Energy_replica_symmetric}
\caption{The free energy of the SK model, computed using the replica-symmetric ansatz~\eqref{subeq:SKfreeenergy}. For~$T/J > 1$, there is only the paramagnetic phase~$q = 0$; continuing this phase to~$T = 0$ (dashed red line) gives a free energy that is non-monotonic and divergent at~$T = 0$. The appearance of the spin glass phase~$q \neq 0$ when~$T/J < 1$ (solid blue line) removes the divergence, but the free energy is still non-monotonic. (As mentioned in footnote~\ref{foot:maximize}, a feature of the analytic continuation to~$m = 0$ is that the dominant phase is in fact the one that \textit{maximizes} the free energy.)}
\label{fig:SKfreeenergy}
\end{figure}
The answer can be gleaned by performing a stability analysis of the replica-symmetric ansatz~\eqref{eq:replicasymmetricq}. Indeed, though~\eqref{eq:replicasymmetricq} does give the correct form of the saddles for computing~$\overline{Z^m}$ when~$m$ is a positive integer, it becomes unstable for sufficiently small~$m < 1$: an eigenvalue of the Hessian~$\partial^2 H_\mathrm{eff}/\partial q_{(\alpha,\gamma)} \partial q_{(\beta,\delta)}$ evaluated on the ansatz~$q_{(\alpha,\gamma)} = q$ becomes positive in the limit~$m \to 0$~\cite{AlmTho78}. We must therefore invoke an alternative ansatz for~$q_{(\alpha,\gamma)}$ that avoids this instability as~$m \to 0$. The correct analytic continuation to~$m = 0$ will then be determined by the behavior of the ansatz for~$q_{(\alpha,\gamma)}$ which remains stable down to~$m = 0$; this behavior will undergo a phase transition at some critical~$m_c(T) < 1$~\cite{Kon83} that was missed by just considering the replica-symmetric ansatz~\eqref{eq:replicasymmetricq}. The presence of this phase transition means that it is crucial to analytically continue the saddle-point equations~$\partial H_\mathrm{eff}/\partial q_{(\alpha,\beta)} = 0$ themselves down to~$m = 0$, rather than first evaluating their on-shell value at integer~$m$ and then analytically continuing the results.
Because the number of components of~$q_{(\alpha,\gamma)}$ is~$m(m-1)/2 < 0$ when~$m < 1$, it is far from obvious how to construct a replica symmetry-breaking (RSB) ansatz that is amenable to analytic continuation. The answer is the well-established Parisi ansatz~\cite{Par79a,Par79b,Par80a,Par80b}. To get an idea of how this procedure works, consider splitting up the~$m$ replicas that define~$\overline{Z^m}$ into groups of~$m_1$, with~$m_1$ an integer that divides~$m$. We then write~$q_{(\alpha,\gamma)}$ in a block-diagonal form according to this grouping:
\be
\label{eq:1RSB}
q_{(\alpha,\gamma)} = \begin{pmatrix} Q_2 & Q_1 & Q_1 & Q_1 \\ Q_1 & Q_2 & Q_1 & Q_1 \\ Q_1 & Q_1 & Q_2 & Q_1 \\ Q_1 & Q_1 & Q_1 & Q_2 \end{pmatrix},
\ee
where~$Q_1$ and~$Q_2$ are~$m_1 \times m_1$ matrices all of whose entries are~$q_1$ and~$q_2$, respectively (in this example, we have~$m/m_1 = 4$). This ansatz for~$q_{(\alpha,\gamma)}$ can be analytically continued to~$m = 0$ while leaving~$m_1$,~$q_1$, and~$q_2$ free as variational parameters to be fixed by extremizing the free energy with respect to them (since~$1 \leq m_1 \leq m$, the analytic continuation of~$m$ also continues~$m_1$ to be between zero and one). This procedure, called one-step RSB (or 1RSB), substantially ameliorates the pathologies in the free energy shown in Figure~\ref{fig:SKfreeenergy}, but the zero-temperature entropy is still negative (though substantially closer to zero)\footnote{There are other models of spin glasses in which~1RSB is in fact sufficient to obtain a stable ansatz, e.g.~the~$p$-spin spherical model~\cite{Der80,Der81,CriSom92}.}.
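For concreteness, the grouping rule behind~\eqref{eq:1RSB} can be encoded in a few lines of code (a sketch of our own; the values of~$m$,~$m_1$,~$q_1$,~$q_2$ are arbitrary and purely illustrative):

```python
def one_step_rsb_matrix(m, m1, q1, q2):
    """Overlap matrix of the one-step RSB ansatz: two replicas i, j get
    overlap q2 if they lie in the same group of m1 replicas (diagonal
    blocks Q2), and q1 otherwise (off-diagonal blocks Q1)."""
    assert m % m1 == 0, "m1 must divide m"
    return [[q2 if i // m1 == j // m1 else q1 for j in range(m)]
            for i in range(m)]

# The example displayed in the text has m/m1 = 4 groups.
Q = one_step_rsb_matrix(m=8, m1=2, q1=0.3, q2=0.7)
```

Continuing to~$m = 0$ then amounts to treating~$m_1$,~$q_1$, and~$q_2$ in this construction as free variational parameters rather than as a fixed integer and fixed block entries.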
To proceed further, we iterate this procedure: we introduce a new integer~$m_2$ that divides~$m_1$ and partition~$Q_2$ into the same block-diagonal structure as~\eqref{eq:1RSB},
\be
Q_2 = \begin{pmatrix} Q_3 & \widetilde{Q}_2 & \widetilde{Q}_2 \\ \widetilde{Q}_2 & Q_3 & \widetilde{Q}_2 \\ \widetilde{Q}_2 & \widetilde{Q}_2 & Q_3 \end{pmatrix},
\ee
where~$\widetilde{Q}_2$ and~$Q_3$ are~$m_2 \times m_2$ matrices all of whose entries are~$q_2$ and~$q_3$, respectively (in this example~$m_1/m_2 = 3$). Repeating this process~$p$ times, we may then continue to~$m = 0$, obtaining an expression for the free energy that depends on~$2p+1$ variational parameters,~$m_i$ for~$i = 1, \ldots, p$ and~$q_i$ for~$i = 1, \ldots, p+1$. After the continuation to~$m = 0$ has been made, we may in fact take the limit~$p \to \infty$, which turns the~$(q_i,m_i)$ into a continuous function~$q(x)$. The free energy is then a functional of~$q(x)$, and is obtained by a functional extremization with respect to~$q(x)$.
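The bookkeeping of the iterated construction can be made explicit with a short sketch (again our own, with arbitrary illustrative values): the overlap of two replicas is set by the deepest level of the block hierarchy at which they still share a block, with the diagonal filled by the innermost value, as in the displayed blocks:

```python
def parisi_matrix(ms, qs):
    """p-step Parisi matrix.  ms = [m, m_1, ..., m_p], with each block
    size dividing the previous one; qs = [q_1, ..., q_{p+1}].  Entry
    (i, j) equals q_{k+1}, where k is the number of nested block levels
    shared by replicas i and j."""
    assert all(a % b == 0 for a, b in zip(ms, ms[1:])), "sizes must nest"
    assert len(qs) == len(ms)

    def shared_depth(i, j):
        k = 0
        for size in ms[1:]:
            if i // size != j // size:
                break
            k += 1
        return k

    m = ms[0]
    return [[qs[shared_depth(i, j)] for j in range(m)] for i in range(m)]

# Two-step example: m = 12 replicas, groups of m_1 = 6, subgroups of m_2 = 2.
Q = parisi_matrix([12, 6, 2], [0.2, 0.5, 0.9])
```

In the~$p \to \infty$ limit (taken after the continuation to~$m = 0$), the growing list of pairs~$(m_i, q_i)$ in this construction is what becomes the continuous Parisi function~$q(x)$.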
One way of understanding what the~$p \to \infty$ limit means is as follows. For positive integer~$m$, the ansatz~\eqref{eq:1RSB} breaks the full replica symmetry group~$\mathbb{S}_m$ into the subgroup
\be
\mathbb{S}_m \xrightarrow[\mathrm{break}]{} \left(\mathbb{S}_{m_1}\right)^{\otimes m/m_1} \otimes \mathbb{S}_{m/m_1},
\ee
with the first factor corresponding to the permutation symmetry of each of the groups of~$m_1$ rows and columns, and the second corresponding to the permutation symmetry of the~$m/m_1$ groups amongst themselves. The iterative procedure outlined above amounts to breaking the subgroup further, into
\be
\mathbb{S}_m \xrightarrow[\mathrm{break}]{} \mathbb{S}_{m/m_1} \otimes \bigotimes_{i = 1}^{p} (\mathbb{S}_{m_i/m_{i+1}})^{\otimes m/m_i}
\ee
(with~$m_{p+1} \equiv 1$), but of course we cannot take~$p$ arbitrarily large if the~$m_i$ must all be divisors of~$m$. However, if we analytically continue this group structure to~$m = 0$, we obtain
\be
\mathbb{S}_0 \xrightarrow[\mathrm{break}]{} \mathbb{S}_0 \otimes \bigotimes_{i = 1}^{p} (\mathbb{S}_{m_i/m_{i+1}})^{\otimes 0}.
\ee
So we find that~$\mathbb{S}_0$ contains itself as a subgroup, which means we may continue to break the symmetry as much as desired by breaking the~$\mathbb{S}_0$ factor on the right-hand side. This is the feature that allows us to take~$p \to \infty$ in the Parisi ansatz after the continuation to~$m = 0$ has been performed.
The point is that RSB is contained in the structure of the Parisi function~$q(x)$: in the replica-symmetric ansatz~\eqref{eq:replicasymmetricq},~$q(x)$ is just a constant~$q$, so nontrivial structure in~$q(x)$ is indicative of RSB. Because the Parisi ansatz changes the na\"ive analytic continuation to~$m = 0$, we see that RSB is the mechanism responsible for the phase transition at~$m < m_c(T)$, and it answers the question posed above of how to correctly continue to~$m = 0$.
\subsection{RSB in Gravity \`a la Spin Glass}
\label{subsec:RSBgravity}
In Sections~\ref{sec:CGHS} and~\ref{sec:JT} we saw that in simple gravitational models, the introduction of replica wormholes alleviated some of the low-temperature pathologies of the disconnected free energy, but it did not remove them entirely; we interpreted this result as the statement that our analytic continuation to~$m = 0$ (which in the JT gravity case exhibited considerable freedom) was not correct. Having now reviewed spin glasses, there is quite an obvious analogy: since the paramagnetic and spin glass phases are characterized by uncorrelated and correlated replicas, respectively, we would like to interpret the ``turning on'' of replica wormholes in the gravitational free energy as the onset of spin glass-like behavior. It is important to note that the analogy will not be literal: perhaps the most important distinction is that the spin glass transition is a bona fide sharp phase transition, visible in the thermodynamic limit~$N \to \infty$, whereas we did not work in any saddle point approximation in our gravitational models (indeed, the fact that the temperature at which connected topologies begin to contribute is nonperturbatively small in~$S_0$ suggests that the transition should be invisible to a semiclassical~$S_0 \to \infty$ analysis). The most relevant parallel we would like to highlight has to do with the all-important analytic continuation: in the spin glass model, the replica-symmetric ansatz remedies some low-temperature pathologies of the free energy, but it gives the incorrect analytic continuation, and RSB must be invoked due to a phase transition at small~$m$. What does this analogy suggest for how to obtain the correct analytic continuation to~$m = 0$ in the gravitational case?
One of the key lessons to draw from the spin glass example is that a na\"ive analytic continuation from the \textit{values} of~$\overline{Z^m}$ for positive integer~$m$ to near~$m = 0$ gives the wrong answer: we must first analytically continue the \textit{saddle point equations} to near~$m = 0$ with an appropriate ansatz, and only then do we solve them for the small-$m$ behavior of~$\overline{Z^m}$. In Sections~\ref{sec:CGHS} and~\ref{sec:JT}, this is not what we did: we instead computed the gravitational path integrals~$\Pcal_m(\beta)$ for integer~$m$, and then looked for an analytic continuation to~$m = 0$. For the same reason as in the spin glass, we might expect that in a gravitational theory we must look for RSB saddle points in order to perform the analytic continuation correctly.
Let us first be clear on what we mean by ``replica symmetry breaking''. There is a sense in which we could say that any replica wormhole breaks replica symmetry, since the symmetry group of~$m$ disconnected boundaries is~$\mathbb{S}_m$, which is broken by any gravitational saddle that connects two or more of these boundaries. But the sort of RSB that appears in the spin glass example, and which we expect to determine the correct analytic continuation to near~$m = 0$, is something more subtle: it is the breaking at~$m < 1$ of a symmetry that is exhibited by the dominant saddles when~$m$ is a positive integer. For example, if the~$m$-boundary gravitational path integral is dominated by disconnected saddles whenever~$m$ is a positive integer, the symmetry group is indeed~$\mathbb{S}_m$, and we would say that RSB occurs if this group is broken for~$m < 1$. But if the path integral for positive integer~$m$ is dominated by, say, a connected wormhole with~$\mathbb{Z}_m$ symmetry, we would not say that RSB occurs as~$m \to 0$ unless the~$\mathbb{Z}_m$ is broken for some~$m < 1$.
Now, since in Sections~\ref{sec:CGHS} and~\ref{sec:JT} we did not work in a saddle point approximation, no equations of motion were involved in our calculation. Hence it is not immediately clear what the analogue of the Parisi procedure might be in these models. It may instead be easier to consider working in the semiclassical limit of some more general gravitational theory, in which case probing the role of RSB, and computing the correct analytic continuation to near~$m = 0$, requires us to look for an RSB ansatz for a gravitational solution that allows for the continuation of the gravitational equations of motion to near~$m = 0$. This is still a difficult task, which is a natural starting point for future work. Instead, let us compare the approach we have in mind in this context with that of the Lewkowycz-Maldacena (LM) replica trick used to compute holographic von Neumann entropies~\cite{LewMal13}. In the latter case, we are required to compute the gravitational path integral defined by an~$n$-sheeted connected boundary manifold~$B_n$ with~$\mathbb{Z}_n$ symmetry. Assuming the dominant bulk saddle also exhibits this symmetry, we may quotient the bulk geometry by~$\mathbb{Z}_n$, after which the analytically-continued bulk equations of motion are just those on a manifold with boundary~$B_1$ consisting of a single sheet, except with a conical defect proportional to~$(n-1)$ at the fixed point of the~$\mathbb{Z}_n$ isometry. For~$n$ near one, the bulk equations of motion can be expanded perturbatively around the smooth geometry with boundary~$B_1$ and no conical defect, and the condition that the equations of motion hold near the (perturbative) conical defect reproduces the Ryu-Takayanagi formula for holographic entanglement entropy~\cite{RyuTak06}. In this context, the ``usual'' notion of RSB is the breaking of the~$\mathbb{Z}_n$ for~$n \neq 1$ -- but of course there is no breaking of replica symmetry at~$n = 1$, since~$\mathbb{Z}_1$ is trivial.
According to the alternative definition of RSB that occurs in spin glasses, RSB would require the dominant saddles at positive integer~$n$ to exhibit~$\mathbb{Z}_n$ symmetry, but the dominant saddles at small~$n$, including~$n = 0$, to break it.
Clearly the LM approach is along the lines we have in mind, as it continues the gravitational equations of motion to non-integer~$n$. However, this continuation relies crucially on two properties. The first is the assumption of~$\mathbb{Z}_n$ symmetry, without which it would be unclear how to express the equations of motion on a manifold with a single boundary (just as in the SK model it was unclear how to generalize the replica-symmetric ansatz~\eqref{eq:replicasymmetricq} until Parisi's breakthrough). The second is that there is a known~$n = 1$ saddle around which the equations of motion can be perturbed to study the behavior near~$n = 1$; there is no such saddle with~$n = 0$. These are the two primary challenges that need to be overcome in order to properly understand the role of RSB in computing gravitational free energies, and more generally any extensive quantity.
\section{Discussion}
\label{sec:disc}
We have argued that the computation of extensive quantities via a gravitational path integral should be done using a replica trick which includes contributions from connected geometries. The inclusion of these connected saddle points dramatically changes the behavior of the theory at very low temperatures, and naturally accommodates the interpretation of semiclassical gravity as dual to an ensemble average rather than to a particular quantum theory. Let us now discuss open questions and natural directions for future work.
\paragraph{Ensemble Averaging in Higher Dimensions} As alluded to in Section~\ref{sec:intro}, UV corrections to the GPI may remedy the apparent lack of factorization that motivated the ensemble averaging interpretation in the first place, as discussed in the context of random matrix models and JT gravity in~\cite{SSS}. Such a picture becomes especially crisp in higher dimensions: for example,~${\cal N}=4$ SYM is a single theory, and AdS/CFT provides numerous other examples of unitary quantum theories of gravity without the need to ensemble average. If, however, one wishes to apply the techniques of~\cite{PenShe19, AlmHar19} in higher dimensions, then replica wormholes must be included, and their most obvious interpretation is that of an ensemble average. One possibility is that averaging is only genuinely necessary in certain low-dimensional theories (as was argued in e.g.~\cite{McNamara:2020uza}). For example, the low-temperature spectrum of higher dimensional gravity (and CFTs) is perfectly well-behaved, has a unique ground state, and does not resemble a spin glass. We therefore do not expect replica wormholes or RSB to dominate the free energy calculation at low temperature. Nevertheless, it is natural to speculate that replica wormholes will contribute to $\overline{\ln Z}$ whenever we are in a regime where non-perturbative quantum gravitational corrections are important: for example, after the Page time~\cite{Pag93} or at the Hawking-Page phase transition~\cite{HawPag83}.
Another interesting possibility arises from the phenomenon of self-averaging: in a chaotic theory, the average over an ensemble of theories is often essentially harmless, as each individual instance of the ensemble is representative of the ensemble as a whole, at least for relatively coarse-grained observables. The ensemble average in this case is interpreted as a useful calculational trick to construct a universal effective theory which governs the dynamics at low energy, but of course the UV dynamics of each individual instance of the ensemble is that of a unitary quantum theory. Perhaps any gravitational theory which includes Euclidean wormholes should be understood as a low-energy effective theory in this sense; in this interpretation, the GPI plays the role of a convenient calculational trick for computing observables in a semiclassical limit. Such a possibility was discussed in various forms in~\cite{PenShe19,BeldeB20,PolRoz20}.
\paragraph{Nonperturbative Completions} At the end of Section~\ref{sec:intro}, we briefly mentioned that although a large-$N$ analysis of SYK does not exhibit a spin glass phase,~\cite{AreKhr18} showed that in a large-coupling (or low-temperature) limit that reduces to an EFT of the low-energy dynamics of SYK, saddles that correlate replicas in the computation of~$\overline{Z^m}$ become \textit{dominant} both at positive integer~$m$ and in the~$m \to 0$ limit, and therefore lead to a spin-glass-like phase transition in this low-energy EFT. This observation may raise a concern: if a spin glass phase can only be obtained from the SYK model by excluding the UV, is the phase transition that we have found in JT gravity eliminated by a good UV completion? Our study of the Airy limit in Section~\ref{subsec:Airy} shows that a nonperturbative completion of JT gravity cannot eliminate the effect we have studied, since it is dominated by the universal behavior of the edge of the spectral density~$\rho(E)$. Indeed, the recent discussion of such completions in~\cite{Joh20} explicitly finds that the two-point correlator~$\overline{Z(\beta)^{2}}$ is controlled by the contribution of connected topologies at sufficiently low temperatures, even in a nonperturbative completion.
More generally, the results of~\cite{SSS} suggest that a good nonperturbative description of JT gravity should be available in the form of a matrix model (though this completion is not unique). Because the behavior we have studied in this paper is governed by universal features of the spectral edge (at least at sufficiently low temperatures), we might investigate it more thoroughly by working in a toy matrix model like the Gaussian matrix integral investigated in~\cite{Oku19}. To this end, it would be interesting to compute~$\overline{\ln Z}$ in such a model by expressing~$\ln Z = \ln \Tr e^{-\beta H}$ and then explicitly computing an average over the random matrix~$H$, without resorting to a replica trick. We should expect to find a free energy that is monotonic all the way to zero temperature and that agrees with the annealed free energy of the Airy case once the temperature becomes sufficiently (but not too) large.
\paragraph{The Emergence of Semiclassical Gravity} A longstanding question in quantum gravity is how the (semi)classical metric~$g_{ab}$ emerges from an underlying quantum theory. In the SK model, the partition functions~$\overline{Z^m}$ can be expressed \textit{exactly} via the introduction of the mean fields~$s_\alpha$ and~$q_{(\alpha,\gamma)}$ in~\eqref{eq:ZmSKexact}. In a large-$N$ limit, the phase structure of the system is determined by the saddle point equations for these fields. Importantly, they appear purely as a consequence of the disorder average; they are not fundamental in the pre-disorder theory. (In the SYK case, the analogous fields are the auxiliary fields~$G_{\alpha\beta}(\tau_1,\tau_2)$ and~$\Sigma_{\alpha\beta}(\tau_1,\tau_2)$.)
If we are to interpret the GPI as computing a disorder average (either genuinely or in an effective description for the purpose of probing appropriately coarse-grained observables), is there a sense in which the metric should then be thought of as a mean field, with the GPI analogous to the right-hand side of~\eqref{eq:ZmSKexact}? That is, rather than being a fundamental field of the underlying theory, is the metric a field whose existence relies fundamentally on the ensemble average? In such a case we would interpret the ``turning on'' of connected geometries between disconnected boundaries as analogous to the ``turning on'' of the matrix~$q_{(\alpha,\gamma)}$ in the SK model. This would give a clear meaning to the sum over topologies in the path integral, but even in the 2D models we have studied here it is unclear how this interpretation would incorporate a UV completion.
\paragraph{RSB and the Parisi Ansatz in Gravity} In the 2D models studied in this paper, the need for replica wormholes in the free energy (or more generally, any extensive quantity) is clear, and we have discovered hints of RSB. These suggest that gravity has some features analogous to a glassy phase just at the edge of semiclassicality. Since the gravitational path integral is in general -- and in this regime in particular -- of clear interest, one important extension of our analysis would be the construction of a gravitational analogue of the Parisi ansatz for RSB. Of course, because we did not work in any saddle point approximation, we did not consider classical equations of motion. The resulting lack of any saddles to analyze for stability or to continue to~$m = 0$ makes it difficult to explore the structure of RSB in any detail. In particular, the fact that (pure) JT gravity replica wormholes do not exist as solutions to any classical equations of motion suggests that there may be no way to study RSB in JT gravity in a way analogous to conventional spin glass systems (though admittedly the possibility of a phase transition at~$m < 1$ means that the lack of on-shell wormholes for integer~$m$ does not necessarily exclude on-shell analytically continued wormholes for~$m$ near zero). A natural question, then, is whether there exist models of gravity that are sufficiently simple to allow for the continuation of classical equations of motion to~$m = 0$, but sufficiently complex to still exhibit a phase transition. In other words, it would be valuable to find a gravitational model in which the effects of Euclidean wormholes can be disentangled from those of disconnected geometries with higher genus (analogous to the case of~$\widehat{\mathrm{CGHS}}$, in which higher genera do not appear at all).
In such a model, we might imagine that the correct ``gravitational'' Parisi ansatz is a multi-branched wormhole connecting the various disconnected boundaries with wormholes of different sizes, with these sizes left as variational parameters with respect to which the free energy should be extremized. In the case of a near-extremal black hole (and consequently low temperature), the picture might be reminiscent of AdS fragmentation~\cite{MalMic98}, in which the AdS$_2$ throat can fragment into many throats or disconnected universes. Understanding how this story works in gravity would be especially illuminating because the Parisi function~$q(x)$, which plays the role of an order parameter for the spin glass phase transition in the SK model, also probes the structure of microstates of the model. An analogous function in gravity could shed light onto the details of the underlying (that is, pre-disorder-average) theory.
\section*{Acknowledgements}
It is a pleasure to thank D.~Harlow and J. Sully for helpful discussions and D.~Anninos and D.~Stanford for useful comments on an early version of this paper. The work of NE is supported by the Office of High Energy Physics of U.S. Department of Energy under grant Contract Number DE-SC0012567 and by the MIT department of physics.
Research of SF and AM is supported in part by the Simons Foundation Grant No.~385602 and the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference number SAPIN/00032-2015.
\section{Introduction}
The inflationary scenario \cite{inflation} allows us to consider how the
primordial seeds of macroscopic structures were generated in the Universe,
since due to its quantum nature, the field that drives Inflation can be
decomposed into a mean field and fluctuations around it. The former gives an
homogeneous background of matter and the latter induce the production of
local inhomogeneities. These fluctuations evolve and are amplified during
the inflationary era. At the end of this epoch, the inflaton field decays
into relativistic ordinary matter. Heuristic arguments show that, as a first
approximation, a scale-invariant spectrum of density fluctuations results,
in rough agreement with observations \cite{observ}.
Despite this success, the conventional method of identifying the structure
creating fluctuations with the quantum fluctuations of the inflaton field
during slow roll is conceptually unsatisfactory, and eventually leads to an
overestimation of the produced density contrast (\cite{Mat}, \cite{CH95}).
The basic point is that when we talk of the 'metric', or the density profile
of the Universe (as in 'a Friedmann - Robertson - Walker metric', or even
today, when we say 'space - time is flat' when talking about local physics)
we are referring to a macroscopic construct whereby microscopic (quantum)
fluctuations of geometry and matter fields are skipped over or coarse
grained away \cite{Hu95}. The difference between microscopic and macroscopic
fluctuations is not merely one of wavelength: the real difference is that
macroscopic fluctuations, when left to unfold over the relevant space and
time scales, effectively decohere from each other and thus acquire
individual reality. Indeed, this is the process by which a quantum
homogeneous state (such as the De Sitter invariant vacuum during slow roll)
may evolve into an inhomogeneous Universe: decoherence gives a formal
device, such as the harmonic analysis of quantum fluctuations, its physical
content. Now, given that macroscopic and microscopic fluctuations are to be
distinguished (and structure formation definitively belongs to the physics
of the former, as cosmic structures are 'classical', individually existing
objects), the relationship between them is not obvious and requires
elucidation. In the same way that the usual Brill-Hartle waves of general
relativity \cite{MTW}, being a first order effect, only react on the
background metric at second order, we should not expect microscopic
fluctuations by themselves to be lifted into the macroscopic level, but
rather that they will act on the macroscopic level as some higher order
effect. The goal of this paper is to present a detailed analysis of the
action of microscopic fluctuations on the macro level, obtaining from it an
improved estimate of the produced density contrast.
This issue cannot even be posed correctly unless an open-system view of the
inflaton dynamics is adopted. In this approach, the ''decoherence'', that
is, the conversion into c-number, of the q-number fluctuations is due to the
interaction of the inflaton field with a partially unknown and uncontrolled
environment. There are several proposals as to how the exact separation of
system and environment should be carried out (\cite{Mat}, \cite{PolStar},
\cite{MazLom}).
In this paper we shall present an improved discussion of to what extent
primordial fluctuations are ''quantum'' or ''classical'', from the viewpoint
of the ''consistent histories'' approach to quantum mechanics \cite{conhis}.
As it turns out, a detailed analysis of the conceptual difficulties of
Inflation points the way to the solution of the quantitative problems as
well (\cite{Mat}, \cite{CH95}, \cite{morikawa}).
The consistent histories approach views quantum evolution as the coherent
unfolding of individual histories for a given system, the main physical
input being the specification of the particular histories relevant to the
description of a concrete observer's experiences. For example, we could
choose our histories as containing an exhaustive description of the values
of all the fields in the theory at every space time location. A description
in terms of these "fine grained" histories is equivalent to a full quantum
field theoretic account of the dynamics. We shall rather assume that the
relevant histories for cosmological modeling are "coarse grained".
Concretely, we shall assume that close enough fine grained histories are
physically indistinguishable and should be bundled together as a single
coarse grained history. Each coarse grained history is thus labelled by the
value of a typical or representative history within the bundle. The actual
histories in a given bundle will differ from this representative by amounts
of the order of the quantum fluctuations of the corresponding fields (we
could consider also tighter or looser coarse graining, but as a matter of
fact these histories dominate the actual evolution of the system \cite{dch}).
Given a pair of coarse grained histories, we can compute the so-called
decoherence functional (df) between them. The df measures the quantum
overlap between these two histories. If the df between any two histories of
a given set is strongly suppressed, then quantum interference effects will
be unobservable, and it will be possible to treat each history classically,
that is, to assign individual probabilities to each of them. Moreover, the
most likely histories will be those for which the phase of the df is
stationary, which yields the ''equations of motion'' for the representative
history \cite{GelHar2}.
Going back to the problem of generation of fluctuations in Inflation, our
starting point is to assume that the evolution of the model is described in
terms of coarse grained histories as said, and to compute the df between two
generic coarse grained histories. We shall show that, for a variety of
models involving coupling the inflaton to massless fields of different spin,
coarse grained histories are indeed mutually consistent, and that the
equations of motion, as derived from the decoherence functional, are
stochastic. Thus, the representative fields naturally evolve fluctuations,
and these are responsible for the creation of primordial density
inhomogeneities at reheating.
It should be stressed that we are not assuming that the representative
fields are ''classical''; on the contrary, their classical nature is a
consequence of the theory itself, and follows from the suppression of the df
between generic coarse grained histories. Physically, the representative
field is decohered by its progressive entanglement with the microscopic
quantum fluctuations which surround it. This entanglement is a necessary
consequence of the nonlinear interaction between the two (for generic
initial conditions), and at the level of the equations of motion for the
representative field it appears as damping and noise. Thus, decoherence,
damping and noise are just different manifestations of the same process, a
point further elaborated elsewhere (\cite{CH95}, \cite{dch}).
In what follows, we shall consider inflationary models where the inflaton
field is nonlinearly coupled to itself, and to spin 1/2 and 1 massless
fields, respectively (the spin 2 case has been dealt with in ref. \cite{CH95}%
).
The paper is organized as follows. In next section, we consider in some
detail a simple model of Inflation, where the inflaton interacts with itself
through a cubic coupling. Treating the fluctuations around the
representative or physical value of the inflaton as a massless, minimally
coupled field, we shall derive the density contrast generated and discuss
both the amplitude of the scale invariant spectrum and the corrections to
it. In the following two sections, we briefly present the necessary
adaptations when the inflaton is coupled to massless, conformally invariant
spin 1/2 and 1 fields, respectively, and discuss the corresponding changes
in the predictions of the theory. We summarize our results in Section 5.
\section{Fluctuation generation from inflaton self - coupling}
\subsection{The model}
The production of the primordial seeds for structure generation began soon
after the set-up of Inflation and ended in the radiation dominated era.
Although realistic description of the phenomena that took place during this
epoch requires a detailed knowledge of the inflationary potential, it is
common to consider toy models that simplify the mathematical aspects of the
problem but are still accurate enough to give a qualitative description of
the related physics. We first consider a cubic field theory as a model for
the inflationary Universe
\begin{equation}
V\left( \phi \right) =V(0)-\frac 16g\phi ^3 \label{pot}
\end{equation}
where $\phi $ is a c-number, homogeneous field, whose precise meaning shall
be discussed below. The dynamics of geometry is governed by the Friedmann
equation
\begin{equation}
H^2=\frac{V(\phi )}{m_p^2}
\end{equation}
where $H$ is the Hubble constant (we assume a spatially flat Friedmann -
Robertson - Walker (FRW) Universe and work, in this subsection, in the
cosmological time frame) and $m_p$ is Planck's mass. This equation assumes
vacuum dominance, namely
\begin{equation}
V(\phi )\gg \dot\phi^2
\end{equation}
We shall also assume potential flatness, that is
\begin{equation}
V(\phi )\sim V(0)\gg g\phi^3
\end{equation}
The field begins Inflation at some small positive value and then ``rolls
down'' the slope of the potential (at some point the potential must bend
upwards again, but that concerns the physics of reheating and shall not be
discussed here \cite{reheat,reh}). The dynamics of the homogeneous
field is described by the Klein - Gordon equation
\begin{equation}
\ddot \phi +3H\dot \phi -(1/2)g\phi ^2=0
\end{equation}
(quantum corrections to this equation shall be discussed below). Under slow
roll over conditions ($\ddot \phi \ll 3H\dot \phi $) we find the solution
\begin{equation}
\phi (t)=\phi _0\left\{ 1-{\frac{g\phi _0t}{6H}}\right\} ^{-1}
\end{equation}
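The accuracy of this slow roll profile is easy to check numerically. The sketch below (plain Python) integrates the homogeneous Klein - Gordon equation with a fourth-order Runge-Kutta step and compares against the analytic solution above; the fiducial values $H=1$, $g=10^{-3}$, $\phi_0=1$ are illustrative choices only, and the sign of the force term is fixed so that $\phi$ grows, as the quoted solution requires.

```python
# Numerical check of the slow-roll solution phi(t) = phi0 / (1 - g*phi0*t/(6H)).
# H, g, phi0 are fiducial values chosen for illustration, not taken from the paper.
H, g, phi0 = 1.0, 1.0e-3, 1.0

def rhs(phi, v):
    # phi'' = -3 H phi' + (1/2) g phi^2, with the sign chosen so that
    # phi rolls toward larger values, matching the quoted solution.
    return v, -3.0 * H * v + 0.5 * g * phi**2

def rk4(phi, v, dt, steps):
    # classic fourth-order Runge-Kutta for the system (phi, phi')
    for _ in range(steps):
        k1 = rhs(phi, v)
        k2 = rhs(phi + 0.5 * dt * k1[0], v + 0.5 * dt * k1[1])
        k3 = rhs(phi + 0.5 * dt * k2[0], v + 0.5 * dt * k2[1])
        k4 = rhs(phi + dt * k3[0], v + dt * k3[1])
        phi += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        v   += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return phi, v

t_end, dt = 100.0, 0.01
# start on the slow-roll attractor: phi' = g phi^2 / (6 H)
phi_num, _ = rk4(phi0, g * phi0**2 / (6.0 * H), dt, int(t_end / dt))
phi_slow = phi0 / (1.0 - g * phi0 * t_end / (6.0 * H))
rel_err = abs(phi_num - phi_slow) / phi_slow
```

For these parameters the relative deviation between the integrated trajectory and the slow roll formula stays far below one percent, as expected since $g\phi /9H^2\ll 1$.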
Slow roll over breaks down when
\begin{equation}
1-\frac{g\phi _0t}{6H}\sim \frac{g\phi _0}{9H^2}
\end{equation}
Vacuum dominance applies to the whole slow rolling period under the mild
bound $\phi _0\leq m_p$. Potential flatness requires $H^4/g^2m_p^2\leq
10^{-3}$. Since $H$ is essentially constant during slow roll, the condition
for enough Inflation $Ht\geq 60$ implies $H^2\geq 10g\phi _0$. Current
bounds on $\Omega $ suggest that this bound is probably saturated; in this
regime the flatness condition is already satisfied given the other ones. The
final requirement on the model is enough reheating, namely $m_p^2H^2\leq
(T_{GUT})^4$.
The density contrast in the Universe is given in terms of the fluctuations
in $\phi $ by the formula \cite{GalForInf}.
\begin{equation}
\left. \left( \frac{\delta \rho }\rho \right) _k\right| _{\text{in}}=\left. H%
\frac{\delta \phi _k}{\dot \phi }\right| _{\text{out}} \label{hyperfam}
\end{equation}
which relates the density contrast at horizon entry to the amplitude of
fluctuations at horizon exit. Conventional accounts of the fluctuation
generation process estimate $\delta \phi _k$ from the value of the free
quantum fluctuations of a scalar field in a De Sitter Universe ($Hk^{-3/2}$
at horizon crossing) and thus find a Harrison - Zel'dovich (HZ) scale
invariant spectrum with amplitude
\begin{equation}
{\frac{H^2}{\dot \phi }}\sim {\frac gH}\sim \sqrt{\frac g{\phi _0}}
\end{equation}
Thus, the observational bound of $10^{-6}$ on the density contrast implies $%
g\leq 10^{-12}\phi _0$.
One of the main aims of this paper is to present a different estimate. In
the approach to be presented below, the actual fluctuations in $\phi $ are
much smaller than expected (of order $gk^{-3/2}$), which leads to the revised
estimate $\delta \rho /\rho \sim (g/\phi _0)$ (no square root), thus
relaxing the bound on the self-coupling by six orders of magnitude. This is
consistent with recent findings by Matacz and by Calzetta and Hu (\cite{Mat}%
, \cite{CH95}).
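The six orders of magnitude follow from simple arithmetic with the two estimates quoted above; the short sketch below only restates that arithmetic, with $10^{-6}$ the observational bound on the density contrast used in the text.

```python
# Order-of-magnitude comparison of the two estimates for the density contrast.
bound = 1.0e-6   # observational bound on delta rho / rho, as quoted in the text

# conventional estimate: delta rho / rho ~ sqrt(g/phi0)  =>  g/phi0 <= bound^2
g_over_phi0_conventional = bound**2

# revised estimate: delta rho / rho ~ g/phi0  =>  g/phi0 <= bound
g_over_phi0_revised = bound

relaxation = g_over_phi0_revised / g_over_phi0_conventional
print(relaxation)   # six orders of magnitude
```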
We proceed now to show how the revised estimate is found.
\subsection{Consistent histories account of fluctuation generation}
Let us now upgrade the inflaton field $\phi $ to a full fledged quantum
field $\Phi $ with a potential
\begin{equation}
V\left( \Phi \right) =V(0)+c\Phi -\frac 16g\Phi ^3 \label{potq}
\end{equation}
(we have added the linear term for renormalization purposes). The massless
quantum field $\Phi $ obeys the Heisenberg equation of motion
\begin{equation}
-\Box \Phi +\frac{dV}{d\Phi }=0 \label{ecmov}
\end{equation}
As described in the introduction, we shall assume that the fine details of
the evolution of the inflaton are inaccessible to cosmological observations.
Thus, we shall split the field as in
\[
\Phi =\phi +\varphi
\]
where $\phi $ represents a typical field history within a bundle of
indistinguishable configurations, and $\varphi $ describes the unobserved
microscopic fluctuations. We identify $\phi $ with the classical inflaton
field of the previous subsection. $\varphi $ obeys linearized equations
\begin{equation}
-\Box \varphi -g\phi \varphi =0 \label{ecfluc}
\end{equation}
The equation for $\phi $ is obtained by subtracting eqn. (\ref{ecfluc})
from eqn. (\ref{ecmov})
\[
-\Box \phi +c-\frac 12g\phi ^2-\frac 12g\left\langle \varphi ^2\right\rangle
_\phi =\frac 12g(\varphi ^2-\left\langle \varphi ^2\right\rangle _\phi )
\]
where $\left\langle ...\right\rangle _\phi $ means the expectation value of
the quantity between brackets, evaluated around a particular configuration
$\phi $ of the physical field. If the constant $c$ takes the value
$c=\frac 12g\left\langle \varphi ^2\right\rangle _0$ (which corresponds to
evaluating $\left\langle \varphi ^2\right\rangle $ at the false-vacuum
configuration $\phi =0$), and the right hand side is neglected, this
equation admits the
false vacuum solution $\phi =0$ in a de Sitter geometry $g_{\mu \nu }=\frac 1%
{\left( H\tau \right) ^2}\eta _{\mu \nu }.$
We can also linearize this last expression to get the wave equation for
small fluctuations in $\phi $. The additional hypothesis that the phases of
the microscopic field $\varphi $ are random ensures that the right hand
side of this equation is always small. Indeed, if we were to identify $\phi $
with the expectation value of $\Phi $, we would drop this term altogether.
Since we are not making such an identification, we shall retain it a little
longer, simply observing that we can evaluate this term at the false vacuum
$\phi =0$ configuration:
\begin{equation}
-\Box \phi +\frac 12g\left( \left\langle \varphi ^2\right\rangle _\phi
-\left\langle \varphi ^2\right\rangle _0\right) =gj(\,x) \label{ec26}
\end{equation}
where
\begin{equation}
j(x)\equiv \frac 12\left[ \varphi ^2(x)-\left\langle \varphi ^2\right\rangle
_\phi (x)\right] \label{fuente2}
\end{equation}
is seen as a noise source. The self correlation of this source is given by
the so called noise kernel (\cite{CH95}, \cite{CH94}).
\begin{equation}
N(x_1,x_2)\equiv \frac 12\left\langle \{j(x_1),j(x_2)\}\right\rangle _\phi
\approx \frac 12\left\langle \{j(x_1),j(x_2)\}\right\rangle _0
\label{noiseker}
\end{equation}
The last term is a valid approximation provided the physical field $\phi $
remains close to its false vacuum configuration.
It is common to write eqn. (\ref{ec26}) as
\begin{equation}
-\Box _x\phi \left( \,x\right) +g^2\int d^4x^{\prime }\sqrt{_{-}^{(4)}g}%
D\left( x,x^{\prime }\right) \phi \left( \,x^{\prime }\right) =gj(\,x)
\label{ec28}
\end{equation}
where
\[
D\left( x,x^{\prime }\right) \equiv -\frac 1{2g}\frac{\delta \left\langle
\varphi ^2\right\rangle \left( \,x\right) }{\delta \phi \left( \,x^{\prime
}\right) }\mid _{\phi =0}
\]
is the dissipation kernel (\cite{CH95}, \cite{CH94}). The physical meaning
of the noise and dissipation kernels is borne out by the decoherence
functional (df) between two histories described by different typical fields
\[
{\cal D}\left[ \phi ,\phi ^{\prime }\right] =\int D\varphi D\varphi ^{\prime
}\;e^{i\left( S\left[ \phi +\varphi \right] -S\left[ \phi ^{\prime }+\varphi
^{\prime }\right] \right) }
\]
where the integral is over fluctuation fields matched on a constant time
surface in the far future. Actual evaluation yields (\cite{CH95}, \cite{CH94}%
)
\[
{\cal D}\left[ \phi ,\phi ^{\prime }\right] \sim \;e^{\left\{ iI-R\right\} }
\]
\[
I=S\left[ \phi \right] -S\left[ \phi ^{\prime }\right] +(g^2/2)\int d^4x%
\sqrt{_{-}^{(4)}g}d^4x^{\prime }\sqrt{_{-}^{(4)}g^{\prime }}\left[ \phi
-\phi ^{\prime }\right] (x)D\left( x,x^{\prime }\right) \left[ \phi +\phi
^{\prime }\right] \left( \,x^{\prime }\right)
\]
\[
R=(g^2/2)\int d^4x\sqrt{_{-}^{(4)}g}d^4x^{\prime }\sqrt{_{-}^{(4)}g^{\prime }%
}\left[ \phi -\phi ^{\prime }\right] (x)N\left( x,x^{\prime }\right) \left[
\phi -\phi ^{\prime }\right] \left( \,x^{\prime }\right)
\]
We see that the dissipation kernel contributes to the phase of the df close
to the diagonal, and thus to the equations of motion for the most likely
histories, while the noise kernel directly determines whether interference
effects are suppressed or not, and thus the consistency of the chosen coarse
grained histories.
\subsection{Actual estimates of fluctuation generation}
The above treatment of fluctuation generation implies that there are
essentially two sources of fluctuations in $\phi $, namely, uncertainties in
the initial value data of $\phi $ at the beginning of Inflation, and
fluctuations induced by stochastic sources during the slow roll period (as
we shall see below, noise generation cuts off naturally after horizon
crossing).
Let us assume that decoherence is efficient (see below), and thus that we
can deal with each history individually. Then we must conclude that only
those histories where the initial value of $\phi $ is exceptionally smooth
may lead to Inflation (see Appendix). This limitation on initial data for
Inflation has been discussed by several authors, most notably from numerical
simulations by Goldwirth and Piran \cite{GoldPir}, and from general
arguments by Calzetta and Sakellariadou, Deruelle and Goldwirth \cite{infi}
and others. Discarding the fluctuations in the initial conditions, we find
the solution
\[
\phi \left( \,x\right) =g\int d^4x_1\sqrt{_{-}^{(4)}g}G_{ret}(x,x_1)\,j(x_1)
\]
where $G_{ret}$ is the scalar field retarded propagator, and the two-point
correlation function
\[
\frac 12\left\langle \left\{ \phi \left( \vec x,\tau \right) ,\phi \left(
0,\tau \right) \right\} \right\rangle \approx g^2\int d^4x_1\sqrt{_{-}^{(4)}g%
}\int d^4x_2\sqrt{_{-}^{(4)}g}G_{ret}((\vec x,\tau ),x_1)G_{ret}((0,\tau
),x_2)N(x_1,x_2)
\]
The noise and dissipation kernels can be written as
\begin{equation}
N(x_1,x_2)\approx \text{Re}\left[ \left\langle j\left( x_1\right) j\left(
x_2\right) \right\rangle _0\right] =\text{Re}\left[ \left\langle \varphi
_1\varphi _2\right\rangle _0^{\,2}\right] \label{njot}
\end{equation}
\begin{equation}
D(x_1,x_2)\approx \text{Im}\left[ \left\langle \varphi _1\varphi
_2\right\rangle _0^{\,2}\right] \,\theta \left( \tau _1-\tau _2\right)
\label{disip1}
\end{equation}
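The Gaussian factorization behind eqn. (\ref{njot}) can be checked with a small Monte Carlo: for zero-mean Gaussian variables, $\left\langle (x_1^2-1)(x_2^2-1)\right\rangle =2\left\langle x_1x_2\right\rangle ^2$ by Wick's theorem, so the noise kernel is governed by the square of the two-point function, up to an $O(1)$ combinatorial factor. The correlation $\rho =0.5$ and sample size below are arbitrary illustrative choices.

```python
import random

# Monte Carlo check of Wick factorization for Gaussian variables:
# Cov(x1^2, x2^2) = 2 * Cov(x1, x2)^2.
random.seed(0)
rho = 0.5            # assumed cross-correlation (illustrative)
n = 200_000
acc = 0.0
for _ in range(n):
    u, w = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    x1 = u
    x2 = rho * u + (1.0 - rho**2) ** 0.5 * w   # Corr(x1, x2) = rho
    acc += (x1**2 - 1.0) * (x2**2 - 1.0)       # both unit-variance, mean zero
estimate = acc / n
exact = 2.0 * rho**2
```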
Returning to the fluctuation field associated to the $g\Phi ^3$ coupling, we
can write
\begin{equation}
\left\langle \varphi \left( x_1\right) \varphi \left( x_2\right)
\right\rangle _0=H^2\Lambda \left( r,\tau _1,\tau _2\right) \label{capi2220}
\end{equation}
where $\Lambda $ is the dimensionless function:
\begin{equation}
\Lambda \left( r,\tau _1,\tau _2\right) =\frac 1{2\pi ^2}\int_0^\infty \frac{%
dk}{k^2r}\sin \left( kr\right) f_k\left( \tau _1\right) f_k^{*}\left( \tau
_2\right) \label{opapo}
\end{equation}
The $f_k$ are the positive frequency modes for the free field in a de Sitter
geometry and are solutions of eqn. (\ref{ecfluc}) to lowest order. These
$f_k$ are functions of the single variable $k\tau _i$:
\[
f_k\left( \tau _i\right) =e^{ik\tau _i}\left( 1-ik\tau _i\right)
\]
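One can verify directly that these modes satisfy the free equation. Writing eqn. (\ref{ecfluc}) at $\phi =0$ in conformal time for a de Sitter background gives $f_k''-(2/\tau )f_k'+k^2f_k=0$, and the sketch below checks this numerically with central differences; the sample values of $k$ and $\tau$ are arbitrary.

```python
import cmath

# Check that f_k(tau) = e^{i k tau} (1 - i k tau) solves
# f'' - (2/tau) f' + k^2 f = 0  (free massless minimally coupled
# mode equation in de Sitter, conformal time tau < 0).
def f(k, tau):
    return cmath.exp(1j * k * tau) * (1.0 - 1j * k * tau)

def residual(k, tau, h=1e-4):
    # second-order central differences for f' and f''
    f0, fp, fm = f(k, tau), f(k, tau + h), f(k, tau - h)
    d1 = (fp - fm) / (2.0 * h)
    d2 = (fp - 2.0 * f0 + fm) / h**2
    return d2 - (2.0 / tau) * d1 + k**2 * f0

res = max(abs(residual(k, tau))
          for k in (0.5, 1.0, 3.0)
          for tau in (-5.0, -1.0, -0.1))
```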
To the same order of approximation, $G_{ret}$ is well described by the free
field retarded propagator:
\[
G_{ret}(x,x_1)=-i\frac{H^2}{\left( 2\pi \right) ^3}\theta \left( \tau -\tau
_1\right) \int \frac{d^3k}{2k^3}e^{i\vec k\cdot (\vec x-\vec x_1)}\left\{
f_k(\tau )f_k^{*}(\tau _1)-f_k^{*}(\tau )f_k(\tau _1)\right\}
\]
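Note that this propagator is real, as a retarded Green function must be: the mode combination $f_k(\tau )f_k^{*}(\tau _1)-f_k^{*}(\tau )f_k(\tau _1)$ is purely imaginary, and the overall factor $-i$ makes the integrand real. A quick numerical check, with arbitrary sample values:

```python
import cmath

# The combination f_k(tau) f_k*(tau1) - f_k*(tau) f_k(tau1) is z - conj(z),
# hence purely imaginary, so -i times it is real.
def f(k, tau):
    return cmath.exp(1j * k * tau) * (1.0 - 1j * k * tau)

k, tau, tau1 = 1.3, -0.7, -2.4   # arbitrary sample values
combo = f(k, tau) * f(k, tau1).conjugate() - f(k, tau).conjugate() * f(k, tau1)
print(combo.real)   # vanishes up to roundoff
```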
It is easy to see that the spatial Fourier transform of this quantity can be
written as $H^2k^{-3}{\cal G}\left( k\tau ,\beta _i\right) ,$ where we have
defined the dimensionless variable $\beta _i=k\tau _i$ and the wave-number
dependence has been factorized out of ${\cal G}$. Moreover, if we look at
eqns. (\ref{noiseker}), (\ref{capi2220}) and (\ref{opapo}) we conclude that
the spatial Fourier transform of the noise kernel can be written in
principle as $H^4k^{-3}{\cal N}\left( \beta _1,\beta _2\right) ,$ i.e. it
depends on $k$ only through the $k^{-3}$ factor. The Fourier transform of $%
\left\langle \phi \left( x_1\right) \phi \left( x_2\right) \right\rangle $
becomes
\begin{equation}
\Delta _k\left( k\tau \right) =\frac 14\frac{g^2}{k^3}\frac 1{\left( 2\pi
\right) ^6}\int_{-\infty }^{k\tau }\frac{d\beta _1}{\,\beta _1^4}%
\int_{-\infty }^{k\tau }\frac{d\beta _2}{\,\beta _2^4}{\cal G}\left( k\tau
,\beta _1\right) {\cal G}\left( k\tau ,\beta _2\right) {\cal N}(k,\beta
_1,\beta _2) \label{ec18.1}
\end{equation}
The double-integral in eqn. (\ref{ec18.1}) represents a function of the
comoving wave number $k$ and the conformal time $\tau $ which appear in the
one-variable combination $k\tau .$
As we shall show below, fluctuation generation is effective only until
horizon crossing. Since the $k$ mode of the field exits the horizon when
$k\tau =-1,$ the last consideration suggests that the integrals in eqn. (\ref
{ec18.1}) can be truncated at this value, and will therefore take the form
\begin{equation}
\Delta _k\left( k\tau =-1\right) =\frac 14\frac{g^2}{k^3}\frac 1{\left( 2\pi
\right) ^6}\int_{-\infty }^{-1}\frac{d\beta _1}{\,\beta _1^4}\int_{-\infty
}^{-1}\frac{d\beta _2}{\,\beta _2^4}{\cal G}\left( -1,\beta _1\right) {\cal G%
}\left( -1,\beta _2\right) {\cal N}(k,\beta _1,\beta _2) \label{newver}
\end{equation}
If we take the above equations at face value, we find no explicit $k$
dependence within the integrand, and therefore the spectrum of field
fluctuations can be written as:
\begin{equation}
\Delta _k^{\text{sca}}\left( \tau \right) \propto g^2\frac 1{k^3}
\label{pesca5}
\end{equation}
where the superscript indicates that this prediction corresponds to a scalar
field theory. This is, of course, the well-established prediction of a
scale-invariant spectrum of density fluctuations.
However, in a de Sitter geometry a minimally coupled massless scalar field
is not well defined in the infrared limit, and the propagators associated
with it are divergent \cite{allen}. We can handle this problem by introducing an
infrared cut-off and studying the way in which this new parameter modifies
the Harrison-Zel'dovich spectrum (a small inflaton mass would have the same
physical effects). Our new propagator is:
\begin{equation}
\Lambda _{\text{cut}}\left( r,\tau _1,\tau _2\right) =\frac 1{2\pi ^2}%
\int_{k_{\text{infra}}}^\infty \frac{dk}{k^2r}\sin \left( kr\right)
f_k\left( \tau _1\right) f_k^{*}\left( \tau _2\right) \label{trunc}
\end{equation}
We want to find the $k$-dependence of $\Delta _k$ for the noise kernel
associated to the cubic coupling between two scalar fields, $\phi $ and $%
\varphi $. $N$ is obtained immediately from eqn. (\ref{njot}) as $N\left(
x_1,x_2\right) =H^4$ Re$\left[ \Lambda _{\text{cut}}^2\left( r,\tau _1,\tau
_2\right) \right] .$ As already noted, if we consider the retarded
propagators for the free field, then only ${\cal N}$ will have a non-trivial
$k$-dependence. Of course, $k$ will always appear in the dimensionless
combination $k_{\text{infra}}/k$.
In order to analyze the emergence of corrective terms to a HZ spectrum, it
is convenient to note that ${\cal N}$ can be written as
\[
{\cal N}={\cal N}_{\text{HZ}}+{\cal N}_{\text{infra}}
\]
where ${\cal N}_{\text{HZ}}$ is independent of $k_{\text{infra}}$, and $%
{\cal N}_{\text{infra}}$ contains $k_{\text{infra}}$ only as $\ln \left(
k/k_{\text{infra}}\right) .$ It is now evident that, after performing the
double integration for the ${\cal N}_{\text{HZ}}$ term in eqn. (\ref{newver}%
), one will arrive at the usual $\Delta _k\propto k^{-3}$
Harrison-Zel'dovich spectrum. Furthermore, the logarithmic terms can be
factored out of the integral, i.e. the corrective terms to the spectrum will
have the form $\ln \left( k/k_{\text{infra}}\right) ,$ and eqn. (\ref{newver}%
) will just give the amplitude of these corrections. This means that,
provided the phenomena that induce the generation of fluctuations are
effective only until horizon-crossing and that it is a good approximation to
consider free-retarded propagators for the field, the spectrum of
fluctuations takes the form:
\begin{equation}
\Delta _k^{\text{sca}}\left( k\tau =-1\right) =\frac C{k^3}\left[ 1+B\ln
\left( \frac k{k_{\text{infra}}}\right) \right] \label{scapow}
\end{equation}
where $C$ is the Harrison-Zel'dovich amplitude and $B$ is the amplitude of
the corrections. An actual evaluation shows that $B$ is positive and that
its numerical value is about $5\times 10^{-3}$. This result tells us that
the logarithmic corrections increase the spectral power, especially at small
scales (large $k$), so the spectrum is tilted slightly to the blue.
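The size of the blue tilt can be read off eqn. (\ref{scapow}): since $k^3\Delta _k=C[1+B\ln (k/k_{\text{infra}})]$, the effective tilt is $d\ln (k^3\Delta _k)/d\ln k=B/[1+B\ln (k/k_{\text{infra}})]$, which stays close to $B$ over all relevant scales. A small numerical illustration, using the value of $B$ quoted above:

```python
import math

# Effective tilt of the corrected spectrum:
# k^3 Delta_k = C [1 + B ln(k/k_infra)]  =>
# d ln(k^3 Delta_k)/d ln k = B / (1 + B ln(k/k_infra)) > 0  (blue tilt).
B = 5.0e-3   # amplitude of the logarithmic corrections quoted in the text

def tilt(k_over_kinfra):
    return B / (1.0 + B * math.log(k_over_kinfra))

for r in (1e2, 1e10, 1e20):
    print(r, tilt(r))   # remains of order B ~ 5e-3 throughout
```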
As for the amplitude of the scale invariant part of the spectrum, we may
adopt the simple estimate eqn. (\ref{pesca5}). This leads to the revised
bound $g\leq 10^{-6}\phi _0$ discussed at the beginning of this section.
\subsection{Loose ends}
The method developed in this section to describe the generation of
primordial fluctuations can be applied with only trivial modifications to
other nonlinear theories involving the inflaton, as shall be demonstrated
below by considering couplings to spin 1/2 and 1 fields. However, before we
proceed, it is convenient to discuss in full two essential elements of our
argument, namely, that coarse grained histories described by generic values
of $\phi $ are truly consistent, and that super horizon fluctuations are
dynamically decoupled from the noise sources (more concretely, we must show
that on super horizon scales $\delta \phi _k\sim \delta \tau _k\dot \phi (t)$%
, since this formula enters the derivation of eqn. (\ref{hyperfam})).
Let us first consider the issue of consistency. We wonder if the history we
have considered, starting from vanishing initial conditions at the beginning
of Inflation, is truly decohered from any other history differing from it by
amounts of the order of the quantum fluctuations of a scalar field in De
Sitter space. If this is the case, then we are justified to treat this
history classically.
The answer to this question lies in whether the df between any such two
histories is strongly suppressed or not. In other terms, we must compute
\begin{equation}
-2\ln \{|{\cal D}[\phi ,\phi ^{\prime }]|\}\equiv g^2\int ~d^4x~\sqrt{-g(x)}%
~d^4x^{\prime }~\sqrt{-g(x^{\prime })}(\phi -\phi ^{\prime
})(x)N(x,x^{\prime })(\phi -\phi ^{\prime })(x^{\prime })
\end{equation}
Or, Fourier transforming on the space variables
\begin{equation}
g^2\int ~\frac{{d^3k}}{{(2\pi )^3}}~{\frac{d\tau }{(H\tau )^4}}~{\frac{d\tau
^{\prime }}{(H\tau ^{\prime })^4}}(\phi _k-\phi _k^{\prime })(t)~N_k(\tau
,\tau ^{\prime })(\phi _k-\phi _k^{\prime })(\tau ^{\prime })
\end{equation}
For each mode, the integral extends from the beginning of Inflation up to
horizon crossing. Due to the $\tau ^4$ suppression factor, the integral is
actually dominated by the upper limit. In this regime
\begin{equation}
N_k(\tau ,\tau ^{\prime })\sim H{^4/k^3}
\end{equation}
By choice, the value of the product of the fields is close to the
expectation value of quantum fluctuations, namely
\begin{equation}
(\phi _k-\phi _k^{\prime })(\tau )(\phi _k-\phi _k^{\prime })(\tau ^{\prime
})\sim (H{^2/k^3)}\delta (0)\sim (H{^2/k^3)k_{{\rm infra}}^{-3}}
\end{equation}
(as follows from conventional quantization in the De Sitter background). The
time integrals are dominated by their upper limits, where
\begin{equation}
\int^{k^{-1}}~{\frac{d\tau }{(H\tau )^4}}\sim k{^3/}H{^4}
\end{equation}
Finally
\begin{equation}
-\ln \{|{\cal D}[\phi ,\phi ^{\prime }]|\}\equiv \int \frac{{d^3k}}{{(2\pi
k)^3}}~\left( \frac gH\right) ^2({\frac k{k_{{\rm infra}}}})^3
\end{equation}
As we have seen, $g^2/H^2\sim g/\phi _0\sim 10^{-6}$, and so decoherence
obtains for all modes $k\gg 10^2k_{{\rm infra}}$. For example, if we take $%
k_{{\rm infra}}$ as corresponding to the horizon length at the beginning of
Inflation, and fine tune the model so that this will also correspond to the
horizon today, all modes entering the horizon prior to recombination would
be classical in this sense. Of course, in a realistic model $k_{{\rm infra}}$
would be much larger than today's horizon, and all physically meaningful
modes will be decohered. In this case, moreover, we would obtain decoherence
even between histories much closer to each other than the quantum limit.
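The threshold quoted above follows from the per-mode contribution to $-\ln |{\cal D}|$: interference is suppressed once $(g/H)^2(k/k_{{\rm infra}})^3\gg 1$. The sketch below simply evaluates this crossover, with $g^2/H^2\sim g/\phi _0\sim 10^{-6}$ as in the text:

```python
# Decoherence threshold: each mode contributes (g/H)^2 (k/k_infra)^3
# to -ln|D|, so interference is suppressed once that product >> 1.
g2_over_H2 = 1.0e-6   # g^2/H^2 ~ g/phi0, as estimated in the text

k_threshold = g2_over_H2 ** (-1.0 / 3.0)   # in units of k_infra
print(k_threshold)   # ~ 1e2, i.e. decoherence for k >> 10^2 k_infra
```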
Let us consider now the issue of noise on super horizon scales. In order to
arrive at the previous results, we have considered the integration of our
expression for the power spectrum of the fluctuations of the field (eqn. \ref
{ec18.1}) from the beginning of Inflation up to the moment in which each
mode $k$ crossed the horizon. The full expression can be rewritten as
\begin{eqnarray}
\Delta _k\left( \tau \right) =\frac 14\frac{g^2}{k^3}\frac 1{\left( 2\pi \right) ^6}%
\left\{ \int_{-\infty }^{-1}\frac{d\beta _1}{\beta _1^4}\int_{-\infty }^{-1}%
\frac{d\beta _2}{\beta _2^4}{\cal G}_1{\cal G}_2{\cal N}+\int_{-1}^{k\tau }%
\frac{d\beta _1}{\beta _1^4}\int_{-1}^{k\tau }\frac{d\beta _2}{\beta _2^4}%
{\cal G}_1{\cal G}_2{\cal N}+2\int_{-1}^{k\tau }\frac{d\beta _1}{\beta _1^4}%
\int_{-\infty }^{-1}\frac{d\beta _2}{\beta _2^4}{\cal G}_1{\cal G}_2{\cal N}%
\right\} \label{gertru3}
\end{eqnarray}
where the second and third terms represent the contribution of a given mode
when it is outside the horizon. In the previous sections, we have ignored
these terms. If we consider the behavior of the noise kernel ${\cal N}$ far
away from the horizon, it is easy to verify that the last term may be
effectively ignored. The second term requires some additional
considerations. First we observe that the noise kernel is not oscillatory
outside the horizon, so the sources at different times are strongly
correlated. We can write $j_k\left( \tau \right) \sim j_k\sqrt{{\cal N}%
\left( k\tau \right) }$, where the $j_k$ are time-independent Gaussian
variables. The wave equation that governs the evolution of each mode may be
written as
\begin{equation}
-\ddot \phi _k+\left( H\tau \right) ^2k^2\phi _k\left( \tau \right)
+g^2\int_{-\infty }^\tau \frac{d\tau ^{\prime }}{\left( H\tau ^{\prime
}\right) ^4}{\cal D}\left( \tau -\tau ^{\prime }\right) \phi _k\left( \tau
^{\prime }\right) =gj_k\left( \tau \right) \simeq gj_k\sqrt{{\cal N}\left(
k\tau \right) } \label{elena}
\end{equation}
where ${\cal D}$, $j_k$ and ${\cal N}$ indicate the spatial Fourier
transforms of the dissipation kernel, the source and the noise kernel,
respectively. When the mode is outside the horizon ($\left| k\tau \right|
\ll 1$) we can write the last equation as
\begin{equation}
-\ddot \phi _k+g^2\int_{-\infty }^\tau \frac{d\tau ^{\prime }}{\left( H\tau
^{\prime }\right) ^4}{\cal D}\left( \tau -\tau ^{\prime }\right) \phi
_k\left( \tau ^{\prime }\right) \simeq gj_k\sqrt{{\cal N}\left( k\tau
\right) } \label{elena217}
\end{equation}
The dissipative term is dominated by the contribution close to the upper
limit, and it can be written as:
\begin{equation}
\ \frac{g^2}{H^4}\delta \phi _k\left( \tau \right) \int_{-\infty }^\tau
\frac{d\tau ^{\prime }}{\tau ^{\prime \;4}}{\cal D}\left( \tau -\tau
^{\prime }\right) \label{oop}
\end{equation}
The dissipation kernel can be obtained from eqn. (\ref{disip1}). The
asymptotic expressions near and far away from the coincidence limit $\tau
\simeq \tau ^{\prime }$ are, respectively:
\begin{eqnarray*}
\sqrt{-^{\left( 4\right) }g}{\cal D}_{\text{near}}\left( \tau -\tau ^{\prime
}\right) &\approx &\frac \pi {\tau ^{\prime \;4}}\left[ \frac 53\ln \left(
k\left( \tau -\tau ^{\prime }\right) \right) -1.50\right] \left( \tau -\tau
^{\prime }\right) ^3 \\
\sqrt{-^{\left( 4\right) }g}{\cal D}_{\text{away}}\left( \tau -\tau ^{\prime
}\right) &\approx &-\frac{\pi \tau }{2k}\frac 1{\tau ^{\prime \;3}}\sin
\left( k\left( \tau -\tau ^{\prime }\right) \right) \ln \left( k\left( \tau
-\tau ^{\prime }\right) \right)
\end{eqnarray*}
The upper formula holds when $\tau -\tau ^{\prime }\lesssim k^{-1}$. ${\cal D}%
_{\text{near}}$ goes to zero rapidly as $\tau -\tau ^{\prime }\rightarrow 0$
and its contribution to the integral in eqn. (\ref{oop}) will be completely
negligible. Moreover, the oscillatory part of ${\cal D}_{\text{away}}$
cancels the contribution of the dissipative term far away from the
coincidence limit. From these observations, we conclude that dissipation is
not effective for modes that are outside the horizon, i.e. those modes
behave as a free field.
Since ${\cal N}\left( k\tau \right) $ grows at most logarithmically, we find
that the particular solution to eqn. (\ref{elena}) vanishes faster than $%
O\left( \tau \right) ,$ while the homogeneous (growing) solution is ``frozen''
into a constant value. Thus the value of $\phi _k$ obeys the usual
(classical) Klein - Gordon equation while beyond the horizon, and the
conventional derivation of eqn. (\ref{hyperfam}) holds \cite{GalForInf}.
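The freezing can also be read off the free modes directly: $|f_k(\tau )|=\sqrt{1+(k\tau )^2}\rightarrow 1$ and $f_k'=k^2\tau e^{ik\tau }\rightarrow 0$ as $k\tau \rightarrow 0^{-}$, so the amplitude locks onto a constant soon after horizon exit at $k\tau =-1$. A short numerical illustration:

```python
import cmath

# Freezing of super horizon modes: for f_k = e^{i k tau}(1 - i k tau),
# |f_k| -> 1 and f_k' -> 0 as k tau -> 0^-, so the mode amplitude
# locks to a constant after horizon exit (k tau = -1).
for ktau in (-3.0, -1.0, -0.3, -0.01):
    f  = cmath.exp(1j * ktau) * (1.0 - 1j * ktau)
    fp = ktau * cmath.exp(1j * ktau)     # f' in units of k (f' = k^2 tau e^{i k tau})
    print(ktau, abs(f), abs(fp))
```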
\section{Yukawa coupling}
Now we consider the interaction between the inflaton field and a massless
Dirac field. The Lagrangian density for a theory in which a Dirac field is
coupled to a massless scalar field is
\[
{\cal L}=\frac 12\partial _\mu \Phi \partial ^\mu \Phi +\frac i2\left[ \bar \Psi
\gamma ^\mu \partial _\mu \Psi -\partial _\mu \bar \Psi \gamma ^\mu \Psi
\right] +f\bar \Psi \Psi \Phi
\]
where $f$ is an arbitrary coupling constant. The equation of motion for the
inflaton $\Phi $ is
\[
-\Box \Phi -f\bar \Psi \Psi =0
\]
If we consider the separation of $\Phi $ in a mean field and fluctuations $%
\Phi =\phi +\varphi $ , the linearized equation of motion for the physical
field is
\[
-\Box \phi \left( \,x\right) =f\,j_{\text{Yuk}%
}\left( x\right)
\]
where
\begin{eqnarray*}
j_{\text{Yuk}}\left( x\right) =\bar \Psi \left( x\right) \Psi \left( x\right)
\end{eqnarray*}
The noise kernel, defined as the mean value of the anticommutator of the
sources (see eqn. \ref{noiseker}), takes the form
\begin{eqnarray}
N_{\text{Yuk}}\left( x_1,x_2\right) &\approx &\frac 12\left\langle \{j_{%
\text{Yuk}}(x_1),j_{\text{Yuk}}(x_2)\}\right\rangle _0 \label{yuk6} \\
&& \nonumber \\
\ &\approx &\frac 12\left[ \left\langle \bar \Psi \left( x_1\right) \Psi
\left( x_1\right) \bar \Psi \left( x_2\right) \Psi \left( x_2\right)
\right\rangle _0+\left( 1\leftrightarrow 2\right) \right] \nonumber
\end{eqnarray}
The four-point function can be reduced to a product of two-point functions
which correspond to the fermionic propagators
\[
\left\langle \bar \Psi \left( x_1\right) \Psi \left( x_1\right) \bar \Psi
\left( x_2\right) \Psi \left( x_2\right) \right\rangle =\left\langle \bar
\Psi \left( x_1\right) \Psi \left( x_2\right) \right\rangle \left\langle
\Psi \left( x_1\right) \bar \Psi \left( x_2\right) \right\rangle
\]
where
\begin{eqnarray*}
\left\langle \bar \Psi \left( x_1\right) \Psi \left( x_2\right)
\right\rangle &\equiv &-iS^{+}\left( x_2-x_1\right) \\
&& \\
\left\langle \Psi \left( x_1\right) \bar \Psi \left( x_2\right)
\right\rangle &\equiv &-iS^{-}\left( x_1-x_2\right)
\end{eqnarray*}
These expressions allow us to write the noise kernel as
\begin{eqnarray}
N_{\text{Yuk}}\left( x_1,x_2\right) \approx -\frac 12S^{-}\left(
x_1-x_2\right) S^{+}\left( x_2-x_1\right) \label{nyuc321}
\end{eqnarray}
This expression is valid provided the scalar field remains near its false
vacuum configuration. As the spinor field is conformally invariant, the
propagators corresponding to a curved space-time can be written in terms of
those associated with a minkowskian geometry \cite{birrel}. For a de Sitter
background geometry, we have
\[
S_{\text{dS}}^{\pm }\left( x_1,x_2\right) =H^3\left( \tau _1\tau _2\right)
^{3/2}S_{\text{Mink}}^{\pm }\left( x_1,x_2\right)
\]
The minkowskian propagators for the spinor field can be written as
derivatives of the scalar field propagators
\[
S^{\pm }=-i\gamma ^\mu \partial _\mu D^{\pm }=\pm \frac 1{\left( 2\pi
\right) ^3}\gamma ^\mu \partial _\mu \int \frac{d^3k}{2k}e^{i\left( \pm kx_0-%
\vec k\cdot \vec x\right) }
\]
The noise kernel takes the form:
\[
N_{\text{Yuk}}=-H^4\left( \tau _1\tau _2\right) ^3\partial _\mu D_{\text{%
Mink}}^{-}\left( x_1-x_2\right) \partial ^\mu D_{\text{Mink}}^{-}\left(
x_1-x_2\right)
\]
To arrive at a specific integral for the power spectrum generated by the
Yukawa coupling, we can proceed in close analogy with the scalar field case
(eqn. \ref{newver}):
\begin{eqnarray*}
\Delta _k^{\text{Yuk}}\left( k\tau =-1\right) &=&-\frac 14\frac{f^2}{k^3}%
\frac{H^4}{\left( 2\pi \right) ^6}\int_{-\infty }^{-1}\frac{d\beta _1}{%
\,\beta _1}\int_{-\infty }^{-1}\frac{d\beta _2}{\,\beta _2}{\cal G}\left(
-1,\beta _1\right) {\cal G}\left( -1,\beta _2\right) {\cal N}^{\text{Yuk}} \\
{\cal N}^{\text{Yuk}} &=&\frac 12%
{\cal F}\left[ \partial _\mu D_{\text{Mink}}^{-}\partial ^\mu D_{\text{Mink}%
}^{-}+c.c.\right]
\end{eqnarray*}
where ${\cal F}\left[ ...\right] $ represents the three-dimensional Fourier
transform of $\left[ ...\right] $.
As we are now considering conformally invariant fields, the propagators are
well defined and the last expression produces a ``pure'' HZ spectrum. As in
the scalar case, there are no relevant corrections coming from the
ultraviolet limit. The spectrum produced by this coupling will be of the
scale invariant form
\[
\Delta _k^{\text{Yuk}}\left( k\tau =-1\right) =\frac{C^{\prime }}{k^3}
\]
As a rough approximation, we may take ${C^{\prime }}\approx f^2H^2$, leading
to
\[
\frac{\delta \rho }\rho \sim \frac{H\;\delta \phi }{\dot \phi }\sim \frac{%
H^3\;f}{g\;\phi ^2}
\]
As we can write $H\sim \sqrt{g\phi }$ we obtain
\[
\frac{\delta \rho }\rho \sim \sqrt{\frac g\phi }f
\]
Given our previous estimate for the self-coupling, agreement between this
expression and the observational data requires that the coupling constant $%
f\sim 10^{-3}.$
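This estimate is again elementary arithmetic: with $g/\phi \sim 10^{-6}$ from the self-coupling bound and the same observational bound $10^{-6}$ on the density contrast, $\delta \rho /\rho \sim \sqrt{g/\phi }\,f$ fixes $f$. The sketch restates the computation:

```python
import math

# Yukawa coupling estimate: delta rho / rho ~ sqrt(g/phi) * f,
# with g/phi ~ 1e-6 (self-coupling bound) and the observational
# bound 1e-6 on the density contrast, both taken from the text.
g_over_phi = 1.0e-6
bound = 1.0e-6

f_required = bound / math.sqrt(g_over_phi)
print(f_required)   # ~ 1e-3
```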
\section{Electromagnetic coupling}
As a last example, let us consider the coupling between the inflaton field
and a massless vectorial field. The Lagrangian density for a theory with
massless scalar and electromagnetic fields is
\[
{\cal L}=\left( \partial _\mu +ieA_\mu \right) \Phi \left( \partial ^\mu
-ieA^\mu \right) \Phi ^{*}-\frac 14F^{\mu \nu }F_{\mu \nu }
\]
from which we can deduce the equation of motion
\[
-\left( \Box -e^2A^\mu A_\mu -ie\partial _\mu A^\mu \right) \Phi -2ieA^\mu
\partial _\mu \Phi =0
\]
If we decompose the inflaton field in its physical and virtual components $%
\Phi =\phi +\varphi $ and write linearized equations, we obtain
\[
-\Box \varphi =0
\]
where we are assuming that $A^\mu $ is always small and can be thought of as
a fluctuation around the background potential $V^\mu =0.$ The more general
case $A^\mu =V^\mu +\delta A^\mu $ would give essentially the same results
for the small deviations $\delta A^\mu .$ The equation for the physical
field is:
\[
-\left( \Box -e^2A^\mu A_\mu \right) \phi =2ieA^\mu \partial _\mu \varphi
+ie\left( \partial _\mu A^\mu \right) \varphi
\]
The right hand side of this equation, up to a factor $2ie$, defines the source
\[
j\left( x\right) =\left[ A^\mu \partial _\mu \varphi +\frac 12\left(
\partial _\mu A^\mu \right) \varphi \right]
\]
As usual, the noise kernel associated with this source is $N\left(
x_1,x_2\right) \simeq \frac 12\left\langle \left\{ j\left( x_1\right) ,j\left(
x_2\right) \right\} \right\rangle _0$ where:
\begin{eqnarray}
\left\langle j\left( x_1\right) j\left( x_2\right) \right\rangle _0
&=&\left\langle A^\mu \left( x_1\right) A^\nu \left( x_2\right) \right\rangle
_0\partial _{\mu ,1}\partial _{\nu ,2}\left\langle \varphi \left( x_1\right)
\varphi \left( x_2\right) \right\rangle _0+\frac 14\partial _{\mu
,1}\partial _{\nu ,2}\left\langle A^\mu \left( x_1\right) A^\nu \left(
x_2\right) \right\rangle _0\left\langle \varphi \left( x_1\right) \varphi
\left( x_2\right) \right\rangle _0 \label{aa} \\
&&+\frac 12\partial _{\nu ,2}\left\langle A^\mu \left( x_1\right) A^\nu
\left( x_2\right) \right\rangle _0\partial _{\mu ,1}\left\langle \varphi
\left( x_1\right) \varphi \left( x_2\right) \right\rangle _0+\frac 12%
\partial _{\mu ,1}\left\langle A^\mu \left( x_1\right) A^\nu \left(
x_2\right) \right\rangle _0\partial _{\nu ,2}\left\langle \varphi \left(
x_1\right) \varphi \left( x_2\right) \right\rangle _0 \nonumber
\end{eqnarray}
Before we proceed, it will be convenient to write this expression in terms of
the propagators of the interacting fields. The scalar propagator has been
considered in a previous section. It can be shown that a massless vectorial
field couples to the space-time curvature conformally. This result implies
that the covariant electromagnetic propagators for a de Sitter geometry are
identical to the minkowskian ones \cite{birrel}:
\[
\left\langle A_\alpha \left( x_1\right) A_\beta \left( x_2\right)
\right\rangle _{\text{dS}}=\left\langle A_\alpha \left( x_1\right) A_\beta
\left( x_2\right) \right\rangle _{\text{Mink}}\equiv \left\langle A_\alpha
\left( x_1\right) A_\beta \left( x_2\right) \right\rangle
\]
Raising indices with $g^{\mu \nu }\left( x_i\right) =\left( H\tau _i\right)
^2\eta ^{\mu \nu }$ and adopting the Feynman gauge, in which the Minkowskian
electromagnetic and scalar propagators are related by
\[
\left\langle A_\alpha \left( x_1\right) A_\beta \left( x_2\right)
\right\rangle =-i\eta _{\alpha \beta }D^{+}\left( x_1-x_2\right)
\]
we obtain
\[
\left\langle A_\alpha \left( x_1\right) A_\beta \left( x_2\right)
\right\rangle _{\text{dS}}=iH^4\left( \tau _1\tau _2\right) ^2\eta ^{\mu \nu
}D_{\text{Mink}}^{+}\left( x_1-x_2\right)
\]
As this is a propagator for a conformal field, it is well defined and will
not produce any correction to the power spectrum. Only the factors which
correspond to the inflaton in eqn. (\ref{aa}) will produce corrections. In
order to get these corrections we must consider the truncated scalar
propagators defined in a previous section (see eqns. \ref{capi2220} and \ref
{trunc}). As we have already seen, the cut-off dependence can be isolated as
$\log \left( \frac k{k_{\text{infra}}}\right) $, where $k$ is a parameter
that will be associated with the Fourier transform of the noise kernel. Thus
we can say that the corrections we seek will be logarithmic:
\[
\Delta _k^{\text{Em}}\left( k\tau =-1\right) =\frac{C^{\prime \prime }}{k^3}%
\left[ 1+B^{\prime \prime }\ln \left( \frac k{k_{\text{infra}}}\right)
\right]
\]
As in the previous examples we considered, $C^{\prime \prime }$ is
undetermined because it includes the square of the coupling constant.
Roughly, $C^{\prime \prime }\sim H^4e^2,$ leading to $e\lesssim 10^{-3}$ in
order to match the bounds on density perturbation production. The
ultraviolet contribution is always irrelevant. $B^{\prime \prime }$ measures
the relative importance of the logarithmic corrections compared with the HZ
background.
\section{Conclusions}
We considered fluctuation generation in the context of three elementary
regularizable field theories that represent the interaction of the inflaton
with itself and other massless fields of different spin. In each case, we
obtained the power spectrum for the field fluctuations $\Delta _k,$ which
can be easily related to the primordial density inhomogeneities that
constituted the seeds for structure generation. These fluctuations are
produced by a random noise source. We found that the predicted spectrum is
scale invariant when only conformal fields contribute to the noise term; in
a more general situation, such as when the source includes the virtual
scalar field, there appear logarithmic corrections.
Two features of our results stand out, namely, that we satisfy current
observational bounds on the amplitude of the primordial spectrum for values
of the inflaton self coupling much larger than previously reported, and that
the corrections to the HZ spectrum depend not only on the shape of the
inflaton potential, but also on what exactly the inflaton is coupled to.
Concerning the first issue, it should be clear that the drastic relaxation
on the bounds for the inflaton self coupling we have obtained is related to
much tighter bounds on the initial conditions for the inflaton field than
previously used. Of course, this is not the only factor that determines this
relaxation, for which we would also have to consider, at least, the {\it %
r\^ole} of the dissipation terms, which have been ignored so far. In this
sense, it might seem that we have just traded one fine tuning for another.
However, it should be remembered that the fine tuning of initial conditions
is not added ad hoc to match the COBE observations, but it is independently
necessary to obtain Inflation at all. As a matter of fact, this fine tuning
is necessary even if we accept the usual estimate of $g/\phi _0\sim 10^{-12}$%
. So, even if not yet totally satisfactory, it may be said that the model
has improved in regard to fine tuning. As we mentioned previously, a similar
result concerning fine tuning has been obtained by Matacz \cite{Mat} and by
Calzetta and Hu \cite{CH95}. Matacz considered a phenomenological model of
Inflation consisting of a system surrounded by an environment of time
dependent harmonic oscillators that back-react on the former acting as a
stochastic source of white noise. The approach by Calzetta and Hu consisted
in coarse-graining the graviton degrees of freedom associated with the
geometry of space-time. The latter methodology is followed closely in the
present work. We complement its results in several aspects, such as carrying
out the explicit calculation of the most relevant physical quantities,
generalizing the possible interactions of the scalar field, and computing
the main corrections to the scale invariant spectrum.
In the long run, it may well be that the second aspect of our conclusions,
namely, the much wider scope to seek corrections to the fundamental Harrison
- Zel'dovich spectrum, will prove to be more relevant. Indeed, it is well
known that for any observed spectrum it is possible to ``taylor'' an
inflationary potential that will reproduce it \cite{taylor}. But these ad
hoc potentials have no other motivation than matching this result, and more
often than not are unmotivated or even pathological from the standpoint of
current high energy physics. The extra freedom afforded by the possibility
that the primordial spectrum of fluctuations could depend on the coupling of
the inflaton to other fields (which must exist if we are to have reheating)
could be the key to building simpler and yet more realistic theories of the
generation of primordial fluctuations.
Of course, the massless theories considered in this paper are too simplistic
to live up to this promise. Couplings to massive fields, and even the
possibility that the inflaton could be part of a larger, maybe grand
unified, theory, ought to be considered before actual predictions may be
extracted. We continue our research on this key issue in Early Universe
cosmology.
\section{Acknowledgments}
It is a pleasure to thank Antonio Campos, Jaume Garriga, Salman Habib,
Bei-lok Hu, Alejandra Kandus, Andrew Matacz, Diego Mazzitelli, Emil Mottola,
Juan Pablo Paz and Enric Verdaguer for multiple exchanges concerning this
project. We also wish to thank the hospitality of the Universidad de
Barcelona and the Workshop on Non Equilibrium Phase Transitions (Saint
John's College, Santa Fe, New Mexico, July 1996), where parts of it were
completed. This work has been partially supported by Universidad de Buenos
Aires, CONICET and Fundaci\'on Antorchas, and by the Commission of the
European Communities under Contract CI1*-CJ94-0004.
\section{Introduction}
\subsection{Setup and Problem}
For a domain $\Omega$ in $\mathbb{C}^n$, we denote the space of square integrable functions and the space of square integrable holomorphic functions on $\Omega$ by $L^{2}(\Omega)$ and $A^{2}(\Omega)$ (the Bergman space of $\Omega$), respectively. The Bergman projection operator, $P$, is the orthogonal projection from $L^2(\Omega)$ onto $A^{2}(\Omega)$. It is an integral operator with the kernel called the Bergman kernel, which is denoted by $B_{\Omega}(z,w)$. Moreover, if $\{e_n(z)\}_{n=0}^{\infty}$ is an orthonormal basis for $A^2(\Omega)$ then the Bergman kernel can be represented as
$$B_{\Omega}(z,w)=\sum\limits_{n=0}^{\infty}e_n(z)\overline{e_n(w)}.$$
On complete Reinhardt domains the monomials $\left\{z^{\gamma}\right\}_{\gamma\in \mathbb{N}^{n}}$ (or a subset of them) constitute an orthogonal basis for $A^2(\Omega)$.
For $f\in A^2(\Omega)$, the Hankel operator with the anti-holomorphic symbol $\overline{f}$ is formally defined on $A^2(\Omega)$ by
\[ H_{\overline{f}}(g)=(I-P)(\overline{f}g).\]
Note that this (possibly unbounded) operator is densely defined on $A^2(\Omega)$.
For a multi-index $\gamma=(\gamma_1,\ldots,\gamma_n)\in \mathbb{N}^{n}$, we set
\begin{align}
c_{\gamma}^{2}=\int\limits_{\Omega}\abs{z^{\gamma}}^{2}dV(z).
\end{align}
Then on complete Reinhardt domains the set $\left\{\frac{z^{\gamma}}{c_{\gamma}}\right\}_{\gamma\in \mathbb{N}^{n}}$ gives a complete orthonormal basis for $A^{2}(\Omega)$. Each $f\in A^{2}(\Omega)$ can be written in the form $f(z)=\sum\limits_{\gamma\in \mathbb{N}^{n}}f_{\gamma}\frac{z^{\gamma}}{c_{\gamma}}$, where the sum converges in $A^{2}(\Omega)$, but also uniformly on compact subsets of $\Omega$. For the coefficients $f_{\gamma}$, we have $f_{\gamma}= \langle f(z),\frac{z^{\gamma}}{c_{\gamma}}\rangle_{\Omega}$.
\begin{definition}
A linear bounded operator $T$ on a Hilbert space $H$ is called a \textit{Hilbert-Schmidt operator} if there is an orthonormal basis $\{\xi_{j}\}$ for $H$ such that the sum $\sum\limits_{j=1}^{\infty}\norm{T(\xi_{j})}^2$ is finite.
\end{definition}
The sum does not depend on the choice of orthonormal basis $\{\xi_{j}\}$. For more on Hilbert-Schmidt operators see \cite[Section X]{Retherford93}.
In this paper, we investigate the following problem. On a given Reinhardt domain in $\mathbb{C}^n$, characterize the symbols for which the corresponding Hankel operators are Hilbert-Schmidt. This question was first studied in $\mathbb{C}$ on the unit disc in \cite{ArazyFisherPeetre88}. The problem was studied on higher dimensional domains in \cite[Theorem at pg. 2]{KeheZhu90} where the author showed that when $n\geq 2$, on an $n$-dimensional complex ball there are no nonzero Hilbert-Schmidt Hankel operators (with anti-holomorphic symbols) on the Bergman space. The result was revisited in \cite{Schneider07} with a more robust approach. On more general domains in higher dimensions, the problem was explored in \cite[Theorem 1.1]{KrantzLiRochberg97} where the authors extended the result \cite[Theorem at pg. 2]{KeheZhu90} to bounded pseudoconvex domains of finite type in $\mathbb{C}^2$ with smooth boundary. Moreover, the authors of the current article studied the same problem on complex ellipsoids \cite{CelikZeytuncu2013}, in $\mathbb{C}^{2}$ with not necessarily smooth boundary.
The same question was investigated on Cartan domains of tube type in \cite[Section 2]{Arazy1996} and on strongly pseudoconvex domains in \cite{Li93, Peloso94}. Arazy studied the natural generalization of Hankel operators on Cartan domains (circular, convex, irreducible bounded symmetric domains in $\mathbb{C}^n$) of tube type and rank $r>1$ in $\mathbb{C}^n$ for which $n/r$ is an integer. He showed that there are no non-trivial Hilbert-Schmidt Hankel operators with anti-holomorphic symbols on those types of domains. Li and Peloso, independently, obtained the same result on strongly pseudoconvex domains with smooth boundary.
\subsection{Results} Let
\[\Omega=\{(z_1,z_2)\in \mathbb{C}^{2}\ |\ z_1\in\mathbb{D}\ \text{ and }\ |z_2|<e^{-\varphi(z_1)}\}\]
($\varphi(z_1)=\varphi(|z_1|)$) be a complete pseudoconvex Reinhardt domain where monomials $\{z^{\alpha}\}$ (or a subset of monomials) form a complete system for $A^{2}(\Omega)$. In this paper, we show that on complete pseudoconvex Reinhardt domains in $\mathbb{C}^{2}$, there are no nonzero Hilbert-Schmidt Hankel operators with anti-holomorphic symbols. Moreover, we also present examples of unbounded non-pseudoconvex domains on which there are nonzero Hilbert-Schmidt Hankel operators with anti-holomorphic symbols.
\begin{theorem}\label{Main}
Let $\Omega$ be as above and $f \in A^2 (\Omega)$.
If the Hankel operator $H_{\overline{f}}$ is Hilbert-Schmidt on $A^2 (\Omega)$ then $f$ is constant.
\end{theorem}
\begin{remark}
Theorem \ref{Main} generalizes Zhu's result on the unit ball in $\mathbb{C}^{n}$ \cite{KeheZhu90} and Schneider's result on the unit ball in $\mathbb{C}^{n}$ and its variations \cite{Schneider07}. Theorem \ref{Main} also generalizes the result in \cite[Theorem 1.1]{KrantzLiRochberg97} by dropping the finite type condition on complete pseudoconvex Reinhardt domains.
\end{remark}
\begin{remark}\label{RemarkNew}
The new ingredient in the proof of Theorem \ref{Main} is the explicit use of the pseudoconvexity property of the domain $\Omega$, see the assumption made at \eqref{assumption 2} and how it is used at \eqref{use of pseudoconvexity}. Additionally, we employ the key estimate \eqref{first step on estimating S_{alpha}} proven in \cite{CelikZeytuncu2013}.
\end{remark}
\begin{remark}
After completing this note, the authors have learned that by using the estimate \eqref{first step on estimating S_{alpha}}, Le obtained the same result on bounded complete Reinhardt domains without the pseudoconvexity assumption, see \cite{TrieuLe}. Although our statement requires pseudoconvexity, it also works on unbounded domains. The study of complex function theory on unbounded domains (and its relation to pseudoconvexity) has been investigated recently in \cite{HarringtonRaich2013,HarringtonRaich2014}, and new phenomena have been observed.
\end{remark}
Wiegerinck \cite{Wiegerinck84} constructed Reinhardt domains (unbounded but with finite volume) in $\mathbb{C}^2$ whose Bergman spaces are finite dimensional; for these domains the Bergman spaces are spanned by monomials of the form $\{(z_1z_2)^j\}_{j=0}^{k}$. Therefore, Hankel operators with non-trivial anti-holomorphic symbols are Hilbert-Schmidt. We revisit these and similar domains in the last section to present examples of domains that admit nonzero Hilbert-Schmidt Hankel operators with anti-holomorphic symbols.
\section{An Identity and an Estimate on Reinhardt Domains}
The set $\left\{\frac{z^{\gamma}}{c_{\gamma}}\right\}_{\gamma\in \mathbb{N}^{n}}$ is an orthonormal basis for $A^{2}(\Omega)$. In order to prove Theorem \ref{Main}, we will look at the sum
\begin{align}\label{norm of Hankel - last step}
\sum\limits_{\gamma}\left\|H_{\overline{f}}\left(\frac{z^{\gamma}}{c_{\gamma}}\right)\right\|^2=\sum\limits_{\alpha}\left| f_{\alpha}\right|^{2}\sum\limits_{\gamma}\left(\frac{c_{\alpha+\gamma}^{2}}{c_{\gamma}^{2}}
-\frac{c_{\gamma}^{2}}{c_{\gamma-\alpha}^{2}}\right)
\end{align}
for $f\in A^{2}(\Omega)$. For detailed computation of \eqref{norm of Hankel - last step} and of the later estimate \eqref{first step on estimating S_{alpha}} we refer to \cite{CelikZeytuncu2013}.
The term $\sum\limits_{\gamma}\left(\frac{c_{\gamma+\alpha}^{2}}{c_{\gamma}^2}-\frac{c_{\gamma}^{2}}{c_{\gamma-\alpha}^2}\right)$ in the identity \eqref{norm of Hankel - last step} plays an essential role in the rest of the proof, and we label it as,
\begin{align}\label{S-alpha}
S_{\alpha}:=\sum\limits_{\gamma}\left(\frac{c_{\gamma+\alpha}^{2}}{c_{\gamma}^2}-\frac{c_{\gamma}^{2}}{c_{\gamma-\alpha}^2}\right).
\end{align}
Note that, the Cauchy-Schwarz inequality guarantees that $\frac{c_{\gamma+\alpha}^{2}}{c_{\gamma}^2}-\frac{c_{\gamma}^{2}}{c_{\gamma-\alpha}^2}\geq 0$ for all $\alpha$ and $\gamma$.
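For completeness, this inequality follows by writing $\abs{z^{\gamma}}^{2}=\abs{z^{\gamma+\alpha}}\,\abs{z^{\gamma-\alpha}}$ (for $\gamma-\alpha\in\mathbb{N}^{n}$) and applying the Cauchy-Schwarz inequality:
\[
c_{\gamma}^{2}=\int\limits_{\Omega}\abs{z^{\gamma+\alpha}}\abs{z^{\gamma-\alpha}}\,dV(z)\leq \left(\int\limits_{\Omega}\abs{z^{\gamma+\alpha}}^{2}dV(z)\right)^{1/2}\left(\int\limits_{\Omega}\abs{z^{\gamma-\alpha}}^{2}dV(z)\right)^{1/2}=c_{\gamma+\alpha}\,c_{\gamma-\alpha},
\]
so that $c_{\gamma}^{4}\leq c_{\gamma+\alpha}^{2}\,c_{\gamma-\alpha}^{2}$, which is equivalent to $\frac{c_{\gamma+\alpha}^{2}}{c_{\gamma}^{2}}\geq\frac{c_{\gamma}^{2}}{c_{\gamma-\alpha}^{2}}$.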
The computations above hold on any domain where the monomials (or a subset of the monomials) form an orthogonal basis for the Bergman space.
Now, we estimate the term $S_{\alpha}$ on complete pseudoconvex Reinhardt domains. Our goal is to show that $S_{\alpha}$ diverges for all nonzero $\alpha$ on these domains. By \eqref{norm of Hankel - last step}, this will be sufficient to conclude Theorem \ref{Main}.
In earlier results, $S_{\alpha}$'s were computed explicitly to obtain the divergence. Here we obtain the divergence by using the estimate \eqref{first step on estimating S_{alpha}}.
For any sufficiently large $N$, we have
\begin{align}\label{first step on estimating S_{alpha}}
S_{\alpha}\geq \sum\limits_{|\gamma|=N} \frac{c_{\gamma+\alpha}^{2}}{c_{\gamma}^2}
\end{align}
for any nonzero $\alpha$, see \cite{CelikZeytuncu2013}.
\section{Computations on Complete Pseudoconvex Reinhardt Domains: Proof of Theorem \ref{Main}}
Let $\phi(r)\in C^{2}([0,1))$ and define the following complete Reinhardt domain
\[\Omega=\{(z_1,z_2)\in \mathbb{C}^{2}\ |\ z_1\in\mathbb{D}\ \text{ and }\ |z_2|<e^{-\phi(z_1)}\}.\]
Note that $\phi(z_1)=\phi(|z_1|)$.
If $\limsup\limits_{r\rightarrow 1^{-}}\phi(r)$ is finite, then there exists $c>0$ such that for any $z_{1}\in\mathbb{D}$ the fiber in the $z_{2}$ direction contains the disc of radius $c$. Hence, $\Omega$ contains the polydisc $\mathbb{D}\times c\mathbb{D}$. This indicates that there are no nonzero Hilbert-Schmidt Hankel operators with anti-holomorphic symbols on $\Omega$; it also indicates that there are no compact Hankel operators with anti-holomorphic symbols.
Therefore, from this point on we assume
\[\limsup\limits_{r\rightarrow 1^{-}}\phi(r)=+\infty.\]
In fact, assumption \eqref{assumption 2} below forces $\phi(r)$ not to oscillate, so we can assume
\begin{align}\label{assumption 1}
\lim\limits_{r\rightarrow 1^{-}}\phi(r)=+\infty.
\end{align}
On the other hand, $\Omega$ is pseudoconvex if and only if $z_1\longmapsto\phi(|z_1|)$ is a subharmonic function on $\mathbb{D}$. A simple calculation gives $\Delta\phi(z_1)=\phi^{\prime\prime}(r)+\frac{1}{r}\phi^{\prime}(r)$. We assume $\Omega$ is pseudoconvex; therefore we have
\begin{align}\label{assumption 2}
\phi^{\prime\prime}(r)+\frac{1}{r}\phi^{\prime}(r)\geq 0\ \ \text{ on }\ \ (0,1).
\end{align}
Our goal is to show that the sum $\sum\limits_{|\gamma|=N} \frac{c_{\gamma+\alpha}^{2}}{c_{\gamma}^2}$ diverges for any nonzero $\alpha$ on a complete pseudoconvex Reinhardt domain $\Omega$. We start with computing $c_{\gamma}$'s.
We have,
\begin{align*}
c_{\gamma}^2&=\int\limits_{\Omega}\abs{z^{\gamma}}^{2}dV(z)
=\int\limits_{\mathbb{D}}\abs{z_1}^{2\gamma_1}\int\limits_{|z_2|<e^{-\phi(|z_1|)}}\abs{z_{2}}^{2\gamma_2}dA(z_{2})dA(z_{1}) \\
&=\int\limits_{\mathbb{D}}\left\{\abs{z_1}^{2\gamma_1}\frac{2\pi}{2\gamma_2+2} e^{-(2\gamma_{2}+2)\phi(|z_1|)}\right\}dA(z_1)
=\frac{2\pi^{2}}{\gamma_2+1}\int\limits_{0}^{1} r^{2\gamma_1+1}e^{-(2\gamma_{2}+2)\phi(r)}dr.
\end{align*}
For sufficiently large $x$ and $y$, consider the following ratio
\begin{align}\label{the ratio}
R_{x,y}:=
\frac{\int\limits_{0}^{1}r^{x+2\alpha_1} e^{-(y+2\alpha_2)\phi(r)}dr}
{\int\limits_{0}^{1}r^{x}e^{-y\phi(r)}dr},
\end{align}
and define
\[\Phi_{x,y}(r):=
\frac{r^{x}e^{-y\phi(r)}}
{\int\limits_{0}^{1}r^{x}e^{-y\phi(r)}dr}.\]
Note that $\Phi_{x,y}(0)=0$, $\lim\limits_{r\rightarrow 1^{-}}\Phi_{x,y}(r)=0$ (by \eqref{assumption 1}), and $\int\limits_{0}^{1}\Phi_{x,y}(r)dr=1$.
Also, define
\begin{align}\label{g-alpha}
g_{\alpha}(r)=r^{2\alpha_1}e^{-2\alpha_2\phi(r)}.
\end{align}
Note that $g_{\alpha}(r)$ does not vanish inside the interval $(0,1)$, but may vanish at $r=0$ and $r=1$ depending on $\alpha$. Now, we can rewrite the ratio $R_{x,y}$ as
\begin{align}
R_{x,y}=
\int\limits_{0}^{1}\Phi_{x,y}(r)r^{2\alpha_1}e^{-2\alpha_2\phi(r)}dr
=\int\limits_{0}^{1}\Phi_{x,y}(r)g_{\alpha}(r)dr.
\end{align}
Our goal is to find a sub-interval $(a,b)\subset\subset(0,1)$ such that for sufficiently large $x$ and $y$
\[\int\limits_{a}^{b}\Phi_{x,y}(r)dr\geq \frac{1}{2}.\]
For this purpose, we analyze $\Phi_{x,y}(r)$ further on $(0,1)$ and locate the local maximum of $\Phi_{x,y}(r)$. We have
\[\frac{d}{dr}\Phi_{x,y}(r)=\left(x-y\phi^{\prime}(r)r\right)\left(r^{x-1}e^{-y\phi(r)}\right)\frac{1}
{\left(\int\limits_{0}^{1}r^{x}e^{-y\phi(r)}dr\right)}.\]
Therefore,
\[\frac{d}{dr}\Phi_{x,y}(r)=0\ \text{ on }\ (0,1)\ \text{ when }\ x-y\phi^{\prime}(r)r=0.\]
We label $f_{x,y}(r):=x-y\phi^{\prime}(r)r$. Note that $f_{x,y}(r)$ controls the sign of $\frac{d}{dr}\Phi_{x,y}(r)$, since the rest of the terms in $\frac{d}{dr}\Phi_{x,y}(r)$ are positive. Furthermore,
\[f_{x,y}(0)=x\ >0\] and
\begin{align}\label{use of pseudoconvexity}
\frac{d}{dr}f_{x,y}(r)=-y\left(\phi^{\prime}(r)+r\phi^{\prime\prime}(r)\right)\ <0\ \text{( by the assumption \eqref{assumption 2}}).
\end{align}
Hence, $f_{x,y}(r)$ decreases on $(0,1)$ and can vanish at most at one point. We will show that by choosing $x,y$ appropriately we can guarantee that $f_{x,y}(r)$ vanishes on $(0,1)$.\\
All we need is a point $s\in(0,1)$ such that
\[s\phi^{\prime}(s)>0.\]
This is indeed possible by assumption \eqref{assumption 1}: if there were no such point $s\in(0,1)$, then $\phi(r)$ would not grow to infinity. Moreover, if there exists $s\in(0,1)$ such that $s\phi^{\prime}(s)>0$, then, since $r\phi^{\prime}(r)$ is an increasing function by \eqref{assumption 2},
\[r\phi^{\prime}(r)>0\ \text{ for all }\ r\in[s,1).\]
Therefore, there exists a relatively compact subinterval $(a,b)$ of $(0,1)$ such that
\[a\phi^{\prime}(a)>0\]
and hence $r\phi^{\prime}(r)>0$ on $(a,b)$. Moreover, by choosing $x$ and $y$ appropriately we can make
\begin{align*}
f_{x,y}\left(a\right)>0 ~\text{ and }~ f_{x,y}\left(b\right)<0.
\end{align*}
That is,
\[x-ya\phi^{\prime}(a)>0 ~\text{ and }~ x-yb\phi^{\prime}(b)<0.\]
Equivalently,
\[a\phi^{\prime}(a)<\frac{x}{y} ~\text{ and }~ \frac{x}{y}<b\phi^{\prime}(b).\]
Therefore, as long as we keep
\begin{align}\label{the interval}
a\phi^{\prime}(a)<\frac{x}{y}<b\phi^{\prime}(b)
\end{align}
there exists a solution to $x-yr\phi^{\prime}(r)=0$ in the interval $(a,b)\subset\subset(0,1)$, and so we guarantee that the function $\Phi_{x,y}(r)$ attains its maximum somewhere inside $(a,b)$. \\
Let $\rho_{xy}\in(a,b)$ be the point where $\Phi_{x,y}(r)$ attains its maximum value. We have
\[\int\limits_{0}^{\frac{a}{2}}\Phi_{x,y}(r)dr\leq \int\limits_{\frac{a}{2}}^{\rho_{xy}}\Phi_{x,y}(r)dr\ \ \text{ and }\ \ \int\limits_{\frac{1+b}{2}}^{1}\Phi_{x,y}(r)dr\leq \int\limits_{\rho_{xy}}^{\frac{1+b}{2}}\Phi_{x,y}(r)dr\]
Hence, we deduce that
\begin{align}\label{key inequality}
\int\limits_{\frac{a}{2}}^{\frac{1+b}{2}}\Phi_{x,y}(r)dr\geq \frac{1}{2}\int\limits_{0}^{1}\Phi_{x,y}(r)dr= \frac{1}{2}
\end{align}
as long as $a\phi^{\prime}(a)<\frac{x}{y}<b\phi^{\prime}(b)$.
The inequality \eqref{key inequality} is the crucial step for the rest of the proof. It guarantees that the mass of $\Phi_{x,y}(r)$ is concentrated in the middle of the interval, i.e.\ it does not lean towards either of the end points.
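This concentration can also be observed numerically. The sketch below uses our own illustrative choices (none taken from the text): the pseudoconvex weight $\phi(r)=1/(1-r)$, which satisfies $\phi^{\prime\prime}+\phi^{\prime}/r>0$ and $\phi(r)\to\infty$ as $r\to 1^{-}$, the window $(a,b)=(0.3,0.6)$, and a rescaled midpoint rule for the integrals.

```python
import math

# Illustrative pseudoconvex weight: phi'' + phi'/r > 0 and phi(r) -> infinity as r -> 1^-.
phi = lambda r: 1.0 / (1.0 - r)
dphi = lambda r: 1.0 / (1.0 - r) ** 2

def mid_mass(x, y, a, b, n=20000):
    """Fraction of the unit mass of Phi_{x,y} lying in (a/2, (1+b)/2)."""
    rs = [(i + 0.5) / n for i in range(n)]           # midpoint rule on (0, 1)
    logs = [x * math.log(r) - y * phi(r) for r in rs]
    m = max(logs)
    w = [math.exp(L - m) for L in logs]              # rescaled to avoid underflow
    total = sum(w)
    mid = sum(wi for r, wi in zip(rs, w) if a / 2 < r < (1 + b) / 2)
    return mid / total

a, b = 0.3, 0.6
x, y = 300.0, 200.0                                  # a*phi'(a) < x/y = 1.5 < b*phi'(b)
assert a * dphi(a) < x / y < b * dphi(b)
mm = mid_mass(x, y, a, b)
print(mm)
```

For admissible ratios $x/y$ the computed fraction is well above $1/2$, in line with \eqref{key inequality}.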
For a multi-index $\gamma=(\gamma_1,\gamma_2)$, let us write $\Phi_{\gamma}(r)=\Phi_{\gamma_1,\gamma_2}(r)$. Then
\begin{align}
\frac{c_{\gamma+\alpha}^2}{c_{\gamma}^2}&=\frac{\gamma_2+1}{\gamma_2+\alpha_2+1}
\cdot\frac{\int\limits_{0}^{1}r^{2\gamma_1+2\alpha_1+1} e^{-(2\gamma_2+2+2\alpha_2)\phi(r)}dr}
{\int\limits_{0}^{1}r^{2\gamma_1+1}e^{-(2\gamma_2+2)\phi(r)}dr}\\
\nonumber&=\frac{\gamma_2+1}{\gamma_2+\alpha_2+1}\int\limits_{0}^{1}\Phi_{2\gamma_1+1,2\gamma_2+2}(r)g_{\alpha}(r)dr
\end{align}
Then,
\begin{align}\label{First step on the estimate}
S_{\alpha}&\geq \sum\limits_{|\gamma|=N} \frac{c_{\gamma+\alpha}^{2}}{c_{\gamma}^2}
=\sum\limits_{k=0}^{N} \frac{c_{\alpha+(k,N-k)}^{2}}{c_{(k,N-k)}^2}\\
\nonumber &=\sum\limits_{k=0}^{N} \frac{c_{(k+\alpha_1,N-k+\alpha_2)}^{2}}{c_{(k,N-k)}^2}=\sum\limits_{k=0}^{N}\frac{N-k+1}{N-k+\alpha_2+1}\int\limits_{0}^{1}\Phi_{2k+1,2(N-k)+2}(r)g_{\alpha}(r)dr.
\end{align}
We want to keep \[\frac{2k+1}{2N-2k+2}\in\left(a\phi^{\prime}(a),b\phi^{\prime}(b)\right),\] see \eqref{the interval}. This is equivalent to requiring $k$ to lie in the interval
\begin{align*}
\frac{2a\phi^{\prime}(a)}{2a\phi^{\prime}(a)+2}N+\frac{2a\phi^{\prime}(a)-1}{2a\phi^{\prime}(a)+2}<k<\frac{2b\phi^{\prime}(b)}{2b\phi^{\prime}(b)+2}N+\frac{2b\phi^{\prime}(b)-1}{2b\phi^{\prime}(b)+2}.
\end{align*}
We further restrict $k$ to the interval
\begin{align*}
I_{N}:= \left(\frac{2a\phi^{\prime}(a)}{2a\phi^{\prime}(a)+2}N+\frac{2a\phi^{\prime}(a)-1}{2a\phi^{\prime}(a)+2}\ ,\ \frac{2b\phi^{\prime}(b)}{2b\phi^{\prime}(b)+2}N+\frac{2b\phi^{\prime}(b)-1}{2b\phi^{\prime}(b)+2}\right)\cap (0,N).
\end{align*}
Therefore, the estimate \eqref{First step on the estimate} can be rewritten as
\begin{align}
S_{\alpha}\geq \sum\limits_{k\in I_{N}}\frac{N-k+1}{N-k+\alpha_2+1}\int\limits_{0}^{1}\Phi_{2k+1,2(N-k)+2}(r)g_{\alpha}(r)dr.
\end{align}
When $k\in I_{N}$ we have
\begin{align*}
\frac{N-k+1}{N-k+\alpha_2+1}\int\limits_{0}^{1}\Phi_{2k+1,2(N-k)+2}(r)g_{\alpha}(r)dr&\geq \frac{1}{1+\alpha_2}\int\limits_{\frac{a}{2}}^{\frac{1+b}{2}}\Phi_{2k+1,2(N-k)+2}(r)g_{\alpha}(r)dr\\
&\geq\frac{1}{1+\alpha_2}\left(\min\limits_{\frac{a}{2}\leq r\leq\frac{1+b}{2}}\{g_{\alpha}(r)\}\right)\int\limits_{\frac{a}{2}}^{\frac{1+b}{2}}\Phi_{2k+1,2(N-k)+2}(r)dr\\
\text{by \eqref{key inequality} }\ \ \ &\geq\frac{1}{1+\alpha_2}\left(\min\limits_{\frac{a}{2}\leq r\leq\frac{1+b}{2}}\{g_{\alpha}(r)\}\right)\frac{1}{2}.
\end{align*}
Let $\lambda_{\alpha}:=\frac{1}{2(1+\alpha_2)}\left(\min\limits_{\frac{a}{2}\leq r\leq\frac{1+b}{2}}\{g_{\alpha}(r)\}\right)$. Note that $\lambda_{\alpha}>0$ since $g_{\alpha}(r)$ is strictly positive on $\left(\frac{a}{2},\frac{1+b}{2}\right)$, see \eqref{g-alpha}. This gives us
\begin{align*}
S_{\alpha}&\geq \sum\limits_{k\in I_N} \frac{c_{\gamma+\alpha}^{2}}{c_{\gamma}^2}\geq\sum\limits_{k\in I_{N}}\frac{N-k+1}{N-k+\alpha_2+1}\int\limits_{0}^{1}\Phi_{2k+1,2(N-k)+2}(r)g_{\alpha}(r)dr
\geq \sum\limits_{k\in I_N} \lambda_{\alpha}=|I_{N}|\lambda_{\alpha}.
\end{align*}
Note that the number of integers in $I_{N}$ is comparable to $N$. Therefore, $S_{\alpha}\gtrsim N$ for every sufficiently large $N$, and this suffices to conclude that $S_{\alpha}$ diverges for nonzero $\alpha$.
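The linear growth of these partial sums can be illustrated numerically. In the sketch below the weight $\phi(r)=1/(1-r)$, the multi-index $\alpha=(1,1)$ and the quadrature size are our own illustrative choices (not taken from the text); the integrals $I(x,y)=\int_0^1 r^{x}e^{-y\phi(r)}dr$ entering $c_{\gamma}^2$ are evaluated with a rescaled midpoint rule.

```python
import math

# Illustrative pseudoconvex weight: phi'' + phi'/r > 0 and phi(r) -> infinity as r -> 1^-.
phi = lambda r: 1.0 / (1.0 - r)

def log_I(x, y, n=4000):
    """log of I(x, y) = int_0^1 r^x exp(-y*phi(r)) dr, midpoint rule with rescaling."""
    rs = [(i + 0.5) / n for i in range(n)]
    logs = [x * math.log(r) - y * phi(r) for r in rs]
    m = max(logs)
    return m + math.log(sum(math.exp(L - m) for L in logs) / n)

def ratio(g1, g2, a1, a2):
    """c_{gamma+alpha}^2 / c_gamma^2 for gamma = (g1, g2), alpha = (a1, a2)."""
    pref = (g2 + 1) / (g2 + a2 + 1)
    return pref * math.exp(log_I(2 * (g1 + a1) + 1, 2 * (g2 + a2) + 2)
                           - log_I(2 * g1 + 1, 2 * g2 + 2))

def T(N, a1=1, a2=1):
    """Partial sum over |gamma| = N of c_{gamma+alpha}^2 / c_gamma^2."""
    return sum(ratio(k, N - k, a1, a2) for k in range(N + 1))

vals = [T(N) for N in (20, 40, 80)]
print(vals)  # grows roughly linearly in N
```

The computed partial sums grow roughly proportionally to $N$, matching the lower bound $|I_N|\lambda_{\alpha}$.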
\section{Examples of Unbounded Non-Pseudoconvex Domains with Nonzero Hilbert-Schmidt Hankel Operators}
In this section, we present two examples of domains that admit nonzero Hilbert-Schmidt Hankel operators with anti-holomorphic symbols. In the first example, the Bergman space is finite dimensional and the claim holds for trivial reasons. In the second example, the Bergman space is infinite dimensional; however, some of the terms $S_{\alpha}$'s are bounded.
We start with defining the following domains from \cite{Wiegerinck84}.
\begin{align*}
X_{1}&=\left\{(z_1,z_2)\in\mathbb{C}^2\ : |z_1|>e, |z_2|<\frac{1}{|z_1|\log|z_1|}\right\}\\
X_{2}&=\left\{(z_1,z_2)\in\mathbb{C}^2\ : |z_2|>e, |z_1|<\frac{1}{|z_2|\log|z_2|}\right\}\\
X_{3}&=\left\{(z_1,z_2)\in\mathbb{C}^2\ : |z_1|\leq e, |z_2|\leq e\right\}\\
\Omega_0&=X_1\cup X_2\cup X_3\\
B_m&=\left\{(z_1,z_2)\in\mathbb{C}^2\ :|z_1|, |z_2|>1,\Bigl\lvert |z_1|- |z_2|\Bigr\rvert<\frac{1}{(|z_1|+|z_2|)^m}\right\}\\
\Omega_k&=\Omega_0\cup B_{4k}
\end{align*}
Note that $\Omega_0$ and $\Omega_k$ are unbounded non-pseudoconvex complete Reinhardt domains with finite volume. The following proposition is also from \cite{Wiegerinck84}.
\begin{proposition}\label{Proposition 1} Let $k$ be a positive integer.
\begin{itemize}
\item[(i.)] The Bergman space, $A^2(\Omega_k)$, is spanned by the monomials $\left\{(z_1z_2)^j\right\}_{j=0}^k$.
\item[(ii.)] The Bergman space, $A^2(\Omega_0)$, is spanned by the monomials $\left\{(z_1z_2)^j\right\}_{j=0}^{\infty}$.
\end{itemize}
\end{proposition}
Next, we look at the Hankel operators on the Bergman spaces of $\Omega_0$ and $\Omega_k$.
\subsection{Example 1} We start with $\Omega_k$. Since $A^2(\Omega_k)$ is finite dimensional, for any multi-index of the form $(j,j)$ with $j=1,\ldots,k$, the term $S_{(j,j)}$ is a finite sum, and hence finite, when restricted to the subspace of $A^2(\Omega_k)$ on which the multiplication operator with symbol $\overline{f}$ is bounded. Hence, for any $f\in A^2(\Omega_k)$, the Hankel operator with symbol $\overline{f}$ is Hilbert-Schmidt on the subspace of $A^2(\Omega_k)$ on which the operator is bounded.
\subsection{Example 2} Next, we look at $\Omega_0$ and we observe that the terms $S_{\alpha}$ take a simpler form. Namely, for a multi-index $(j,j)$,
$$S_{(j,j)}=\sum_{k=0}^{\infty}\left(\frac{c_{(k+j,k+j)}^2}{c_{(k,k)}^2}-\frac{c_{(k,k)}^2}{c_{(k-j,k-j)}^2}\right),$$
where
$$c_{(k,k)}^2=\int_{\Omega_0}|z_1z_2|^{2k}dV(z_1,z_2).$$
We will particularly compute $S_{(1,1)}$. A simple integration indicates,
$$c_{(k,k)}^2=4\pi^2\left(\frac{2}{2k+1}+\frac{e^{4k+4}}{(2k+2)^2}\right)$$
and with simple algebra we obtain,
\begin{align*}
\frac{c_{(k+1,k+1)}^2}{c_{(k,k)}^2}-\frac{c_{(k,k)}^2}{c_{(k-1,k-1)}^2}&=\frac{e^{8k+8}\frac{(2k+2)^4-(2k+4)^2(2k)^2}{(2k+4)^2(2k)^2(2k+2)^4}+e^{4k}\frac{p_1(k)}{p_2(k)}+\frac{p_3(k)}{p_4(k)}}{e^{8k+4}\frac{1}{(2k)^2(2k+2)^2}+e^{4k}\frac{p_5(k)}{p_6(k)}+\frac{p_7(k)}{p_8(k)}}
\end{align*}
where $p_1(k),\ldots,p_8(k)$ are polynomials in $k$. For large values of $k$, the first terms in the numerator and the denominator dominate and we obtain,
$$\frac{c_{(k+1,k+1)}^2}{c_{(k,k)}^2}-\frac{c_{(k,k)}^2}{c_{(k-1,k-1)}^2}\approx e^{4}\,\frac{(2k+2)^4-(2k+4)^2(2k)^2}{(2k+4)^2(2k+2)^2}\approx \frac{2e^{4}}{k^2}.
$$
Therefore, $S_{(1,1)}$ is finite and the Hankel operator $H_{\overline{z_1z_2}}$ is Hilbert-Schmidt on $A^2(\Omega_0)$.
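A quick numerical check confirms that the difference decays like a constant times $1/k^2$, so that the partial sums of $S_{(1,1)}$ stay bounded. The sketch below assumes the formula for $c_{(k,k)}^2$ above; the common factor $4\pi^2$ is dropped since it cancels in every ratio.

```python
import math

# Squared monomial norms on Omega_0, with the overall 4*pi^2 factor dropped
# (it cancels in every ratio below).
def c2(k):
    return 2.0 / (2 * k + 1) + math.exp(4 * k + 4) / (2 * k + 2) ** 2

def diff(k):
    return c2(k + 1) / c2(k) - c2(k) / c2(k - 1)

# k^2 * diff(k) should level off at a constant, and the partial sums of
# S_{(1,1)} should stay bounded.
scaled = [k * k * diff(k) for k in (20, 40, 80, 160)]
partial = sum(diff(k) for k in range(1, 160))
print(scaled, partial)
```

The rescaled differences settle near a constant while the partial sums remain bounded, consistent with the convergence of $S_{(1,1)}$.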
\section{Remarks}
\subsection{Canonical solution operator for the $\overline{\partial}$-problem}
The canonical solution operator for the $\overline{\partial}$-problem, restricted to $(0,1)$-forms with holomorphic coefficients, is not a Hilbert-Schmidt operator on complete pseudoconvex Reinhardt domains. Indeed, it is a sum of Hankel operators with the symbols $\{\overline{z}_{j}\}_{j=1}^{n}$,
$$\overline{\partial}^*N_1(g)=\overline{\partial}^*N_1 \left(\sum\limits_{j=1}^{n}g_{j}d\overline{z}_{j}\right)=\sum\limits_{j=1}^{n}H_{\overline{z}_j}(g_{j})$$
for any $(0,1)$-form $g$ with holomorphic coefficients, and by Theorem \ref{Main} such Hankel operators are not Hilbert-Schmidt.
\section{Acknowledgement}
We would like to thank Trieu Le for valuable comments on an earlier version of this manuscript.
\bibliographystyle{amsplain}
\section{Introduction}
Detection of single electron transfers through
quantum systems such as quantum dots
has become experimentally
feasible.\cite{Lu03422,Fuj042343,Gus06076605,Fuj061634}
Theoretical investigations of the underlying statistics
were mostly carried out in terms of higher cumulants,
e.g.\ noise and skewness, by using generating function
techniques.\cite{Lev964845,Lev04115305,Wab05165347,Ram04115327,%
She03485,Fli05475,Uts06086803, Ped05195330,Bac011317,Kie06033312}
Expansion of the higher cumulants
in non-Markovian corrections has revealed significant memory
effects in quantum dots \cite{Eng04136602} when
strong Coulomb interaction,\cite{Bra06026805}
phonon bath \cite{Bra081745} or initial
correlations \cite{Fli08150601} are present.
Statistics based on the waiting time distribution (WTD) provides additional
information about the system. Higher cumulants can be
derived from the WTD \cite{Bra08477}, but not vice versa.
Waiting times were recently utilized to analyze single electron transfers
in the Markovian regime, for example, in double quantum
dots,\cite{Wel08195315} single molecules,\cite{Koc05056801,Wel081137}
single particle transport \cite{Bra08477} and Aharonov-Bohm
interferometers.\cite{Wel0957008}
Non-Markovian treatment of WTD has shown significant
features in photon counting statistics.\cite{Zai873897}
Non-Markovian effects are induced by a small bias voltage
or by a finite bandwidth of the system-electrode coupling.
While the former can be eliminated
easily, the latter is fixed by the experimental setup.
In order to explore both regimes, a non-Markovian Pauli rate equation
based on a microscopic description of the electrode-system coupling using a
Lorentzian spectral density is derived. It can be utilized
for a variety of systems, such as single molecules and quantum dots.
A formal connection of the WTD
with the shot noise spectrum of electron
transport through quantum junctions has been established in the
Markovian regime.\cite{Bra08477} The non-Markovian shot noise
spectrum \cite{Jin0806,Eng04136602} provides a more accurate
description of the signal
and its relation to the physics of the
junction than a Markovian version,
since it reveals several distinct intrinsic system frequencies.
In this paper we derive a non-Markovian theory for the WTD
of single particle transfer trajectories based on the derivation
of a non-Markovian microscopic Pauli rate equation.
It provides a general framework to study non-Markovian electron transport through many-body systems
and allows us to distinguish between non-Markovian effects due to intrinsic properties of the
system, a finite electrode-system coupling bandwidth, and a small bias voltage.
The WTD is evaluated in time domain by
perturbation theory leading to non-Markovian corrections.\cite{Bra06026805}
We shall analyze the effect of memory on consecutive electron
transfers through a double quantum dot (DQD), see \Fig{fig:0},
and demonstrate the influence of many-body Coulomb coupling
on memory landscapes, which display the memory preserved in the system over several consecutive
electron transfers. The non-Markovian spectrum is obtained from a Laplace transformation.
The results reveal that the non-Markovian spectrum of the WTD
provides similar information content, which qualifies it as an alternative
to the non-Markovian shot noise spectrum.
\begin{figure}
\includegraphics[width=7.0cm,clip]{fig0.eps}
\caption{Illustrated set-up of the DQD in series and notation
of the important parameters. The charge state of the DQD is measured
by the quantum point contact (QPC) which provides an electron transfer trajectory.
\label{fig:0}}
\end{figure}
The paper is organized as follows.
In section \ref{thrate}, we present the derivation of the
non-Markovian rate equation. The expressions for the
non-Markovian WTD are shown in
section \ref{secWTD}. The formalism is applied to the DQD system
and the results are given in section \ref{demo}.
We conclude with a summary and outlook.
\section{Non-Markovian rate theory of quantum transport}\label{thrate}
\subsection{Hamiltonian}
Consider a junction consisting of a DQD in
series as the system, two electron reservoirs,
and the respective system--reservoir couplings, as shown
in \Fig{fig:0}. The total Hamiltonian is
$H_T=H_S+H_R+H_{SR}$.
The system part describes the DQD which is modeled by
\begin{equation}\label{eq1}
H_S=\sum_{s=1}^2 E_{s} \hat n_s +U {\hat n}_1{\hat n}_{2} - \Delta (c_1^\dagger c_{2} + c_2^\dagger c_{1}).
\end{equation}
Here, $\hat n_s=c_s^\dagger c_{s}$ is the
electron number operator of quantum dot $s=1$ or $2$ with orbital energy $E_s$,
and $U$ specifies the Coulomb interaction between two
dots.
The reservoirs of the left and right ($\alpha=l$ and $r$)
electrodes are described by
\begin{equation}\label{eq2}
H_R = \sum_{\alpha=l,r} H_{R_\alpha}
= \sum_{\alpha=l,r}\sum_q \epsilon_{\alpha q} c_{\alpha q}^\dag c_{\alpha q} .
\end{equation}
The system--reservoirs coupling responsible for
electron transfer between the system and the electrodes is
\begin{equation} \label{eq3}
H_{SR}=\sum_{\alpha=l,r} \sum_{q}
\left[T_{1q}^{(l)} c^\dag_1 c_{l q} + T_{2q}^{(r)} c^\dag_2 c_{r q} + {\rm H.c.} \right].
\end{equation}
The electron creation (annihilation) operators
$c^\dag_s$ $(c_s)$ and $c^\dag_{\alpha q}$ $(c_{\alpha q})$
involved in \Eqs{eq1}--(\ref{eq3})
satisfy the usual fermionic anti-commutation relations.
In this system, single electron transfer trajectories can be obtained
from the charge state of the DQD, which is constantly measured by a
quantum point contact (QPC). Such a configuration was employed in an
experiment operated at a small bias voltage.\cite{Fuj061634}
\subsection{Generalized non-Markovian rate equation}
We now turn to the non--Markovian rate equation.
Let $\rho(t)\equiv \mathrm{tr}_R\,\rho_T(t)$ be the reduced
system density operator. The total density operator
is assumed to be initially factorisable into a system and a
reservoir part, $\rho_T(t_0)=\rho(t_0)\rho_R(t_0)$,
and the system--electrode couplings are assumed to be weak.
Using the standard approach,
one can readily derive a non-Markovian quantum master
equation.\cite{Wei99,Yan05187,Wel06044712}
For the present study, we adopt
$T^{(\alpha)}_{sq}=T^{(\alpha)}_s T^{(\alpha)}_q$ for simplicity.
A rotating wave approximation to the cross
coupling terms between the system orbitals is not required here since each electrode
is coupled to one orbital site only.\cite{Wel08195315}
We denote the system Liouville operator $\mathcal L_{S}\cdot\equiv[H_{S},\cdot]$
and set $\hbar =1$.
The resulting quantum master equation
in the time-nonlocal form
reads \cite{Wel06044712,Wel08195315}
\begin{align}\label{equ:master2local}
\dot{\rho}(t)
= & -i \mathcal L_S(t) \rho(t)
- \sum_{\alpha s} \int_0^t \mathrm d \tau \,\vert T^{(\alpha)}_s \vert^2 \nonumber \\
& \times \Bigl\{C^{(+)}_{\alpha}(t-\tau)
\big[c_s, e^{-iH_S(t-\tau)} c_{s}^\dagger \rho(\tau) e^{iH_S(t-\tau)}\big]
\nonumber \\
&\ \ - C^{(-)}_{\alpha}(t-\tau) \big[
c_s, e^{-iH_S(t-\tau)} \rho(\tau) c_{s}^\dagger e^{iH_S(t-\tau)} \big]
\nonumber \\
& + \mathrm{H.c.} \Bigr\}.
\end{align}
The reservoir correlation functions,
\begin{equation}\label{equ:correl+}
C^{(+)}_{\alpha}(t)=\sum_{q} \vert T_{q}^{(\alpha)} \vert^2
\langle c^{\dag}_{\alpha q}(t)c_{\alpha q}(0) \rangle_{R_\alpha},
\end{equation}
and
\begin{equation}\label{equ:correl-}
C^{(-)}_{\alpha}(t)=\sum_{q} \vert T_{q}^{(\alpha)}\vert^2
\langle c_{\alpha q}(0)c^{\dag}_{\alpha q}(t) \rangle_{R_\alpha},
\end{equation}
contain the properties of the electrodes.
Here, $c^{\dag}_{\alpha q}(t)\equiv
e^{i H_{R_\alpha} t} c_{\alpha q}^\dagger e^{-i H_{R_\alpha} t}$
and $\langle O \rangle_{R_\alpha}\equiv \mathrm{tr}_{R_\alpha}
\lbrace O \rho_{R_\alpha} \rbrace$,
with $\rho_{R_\alpha}$ being the
density operator of the bare electrode $\alpha$
under a constant chemical potential $\mu_{\alpha}$.
Physically, $C^{(+)}_{\alpha}(t)$
describes the process of electron transfer
from the $\alpha$--electrode to the system,
while $C^{(-)}_{\alpha}(t)$ describes the reverse process.
These two correlation functions are not independent; they
are related via the fluctuation--dissipation
theorem.
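This relation can be made concrete with a small numerical sketch. We assume here the standard frequency-domain representation $C^{(+)}_{\alpha}(\omega)=J_\alpha(\omega)f_\alpha(\omega)$ and $C^{(-)}_{\alpha}(\omega)=J_\alpha(\omega)[1-f_\alpha(\omega)]$, with $f_\alpha$ the Fermi function of electrode $\alpha$; the function names below are illustrative:

```python
import numpy as np

def lorentzian_J(w, gamma, Omega):
    """Lorentzian spectral density J(w) = gamma^2 / ((w - Omega)^2 + gamma^2)."""
    return gamma**2 / ((w - Omega)**2 + gamma**2)

def fermi(w, mu, beta):
    """Fermi function f(w) = 1 / (exp(beta (w - mu)) + 1)."""
    return 1.0 / (np.exp(beta * (w - mu)) + 1.0)

def C_plus(w, mu, beta, gamma, Omega):
    # electron transfer INTO the system: weighted by the occupation f
    return lorentzian_J(w, gamma, Omega) * fermi(w, mu, beta)

def C_minus(w, mu, beta, gamma, Omega):
    # reverse process: weighted by the hole occupation 1 - f
    return lorentzian_J(w, gamma, Omega) * (1.0 - fermi(w, mu, beta))

w = np.linspace(-10.0, 10.0, 201)
# sum rule: the two processes exhaust the spectral density, C+ + C- = J
s = C_plus(w, 0.5, 2.0, 1.0, 0.0) + C_minus(w, 0.5, 2.0, 1.0, 0.0)
```

The detailed-balance ratio $C^{(+)}(\omega)/C^{(-)}(\omega)=e^{-\beta(\omega-\mu)}$ then expresses the fluctuation--dissipation relation between the two processes.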
Electron counting experiments are operated either
in the large--bias limit in order to achieve a {\em directional}
trajectory of single transfer events\cite{Lu03422,Fuj042343,Gus06076605}
or at small bias in order to realize transfer against
the direction of the bias.\cite{Fuj061634}
Non-Markovian effects are either due to small
bias or finite band-width.
In order to study both regimes, we derive a rate equation by
projecting the master equation (\ref{equ:master2local})
onto the Fock states of the system and by considering
only the population part $p_{m}\equiv \rho_{mm}$.
Some simple algebra leads from Eq.\,(\ref{equ:master2local}) to
the non-Markovian Pauli rate equation
\begin{align}\label{ratecom}
\dot{p}_{m}(t) &= \sum_{\alpha n}\!
\int_0^t \! \mathrm d \tau \big[
\Gamma_{mn}^{(\alpha)} C^{(+)}_\alpha(t-\tau)e^{-i \omega_{mn} (t-\tau)}
p_{n}(\tau)
\nonumber \\&\quad
+ \Gamma_{nm}^{(\alpha)} C^{(-)}_\alpha (t-\tau)e^{-i \omega_{nm} (t-\tau)}
p_{n}(\tau)
\nonumber \\&\quad
-\Gamma_{nm}^{(\alpha)} C^{(+)}_\alpha(t-\tau)e^{-i \omega_{nm} (t-\tau)}
p_{m}(\tau)
\nonumber \\&\quad
-\Gamma_{mn}^{(\alpha)} C^{(-)}_\alpha (t-\tau)e^{-i \omega_{mn} (t-\tau)}
p_{m}(\tau) \big] + {\rm c.c.}
\nonumber \\ &\equiv
\sum_n \int_0^t \! \mathrm d \tau K_{mn}(t-\tau) p_n(\tau).
\end{align}
Here, $\omega_{mn}\equiv E_m-E_n$ is
the transition frequency between two Fock states;
\begin{equation}\label{eq:Gamn}
\Gamma_{mn}^{(\alpha)} = \vert T_s^{(\alpha)}
\vert^2 \vert \bra{m} c^{\dagger}_s \ket{n}\vert^2,
\end{equation}
with $s=1$ or $2$ for $\alpha=l$ or $r$, respectively,
is the state--dependent non-Markovian system--reservoir coupling strength.
As inferred from \Eq{eq:Gamn}, $\Gamma_{mn}^{(\alpha)}\neq 0$ only if
$\vert m\rangle$ has one more electron than $\vert n\rangle$.
We can therefore identify the {\it rate kernel}
elements involved in \Eq{ratecom} with
three physically distinct contributions
\begin{equation}\label{eq13}
K(t) \equiv \sum_{\alpha}[K^{(\alpha +)}(t)+ K^{(\alpha -)}(t)] + K_0(t).
\end{equation}
$K^{(\alpha +)}(t)$ and $K^{(\alpha -)}(t)$ realize an electron
transfer in and out of the system through the
$\alpha$--electrode, respectively.
They summarize the off--diagonal matrix
elements of the transfer rate kernel $K(t)$
in \Eq{ratecom},
\begin{equation} \label{eq9}
K^{(\alpha +)}_{mn}(t)= \Gamma_{mn}^{(\alpha)}
C^{(+)}_\alpha(t)e^{-i \omega_{mn} t} +{\rm c.c.},
\end{equation}
\begin{equation}\label{eq10}
K^{(\alpha -)}_{mn}(t)=\Gamma_{nm}^{(\alpha)}
C^{(-)}_\alpha (t)e^{-i \omega_{nm} t} + {\rm c.c.}
\end{equation}
$K_0(t)$ summarizes the diagonal matrix elements of $K(t)$
and leaves the number of electrons in the system unchanged.
These diagonal elements satisfy
\begin{equation}\label{K0}
(K_0)_{nn}(t)=-\sum_{\alpha,m} \, \big[K^{(\alpha +)}_{mn}(t)+ K^{(\alpha -)}_{mn}(t)\big].
\end{equation}
For the Lorentzian spectral density model
[\Eq{Lorent_J}], where the reservoir
spectral density assumes the form
$J_{\alpha}(\omega)=
\gamma^2_{\alpha}/[(\omega-\Omega_{\alpha})^2 +\gamma_{\alpha}^2]$,
we obtain for the off--diagonal rate kernel
elements the following expressions,
\begin{align} \label{KmnLoren}
K^{(\alpha \pm)}_{mn}(t)
&= 2 \Gamma_{mn}^{(\alpha)} \Big\{ e^{- \gamma_\alpha t}
[a_\alpha^\pm \cos( \Omega^\alpha_{mn} t)
- b_\alpha^\pm \sin( \Omega^\alpha_{mn} t)]
\nonumber \\ &\quad
+ \sum_{k=1}^{\infty} e^{ -\varpi_k t}
[c_{\alpha k}^\pm \cos( \mu^\alpha_{mn} t)
- d_{\alpha k}^\pm \sin( \mu^\alpha_{mn} t) ]\Big\}.
\end{align}
Here, $\varpi_k = (2k-1)\pi/\beta$
is the fermionic Matsubara frequency,
while $\Omega^\alpha_{mn}\equiv \Omega_\alpha-\omega_{mn}$ and
$\mu^\alpha_{mn}\equiv \mu_\alpha-\omega_{mn}$.
The coefficients $a_\alpha^\pm$,
$b_\alpha^\pm$, $c_{\alpha k}^\pm$, and $d_{\alpha k}^\pm$
are all real and are given explicitly in Appendix\,\ref{Appendix1}
by \Eq{coef_all}.
The first term in the curly brackets of \Eq{KmnLoren} reflects
the spectral properties of the electrode-system coupling, while
the second term arises from the
decomposition into Matsubara frequencies, which induces memory
effects due to small bias voltages. From the expressions one can infer
that a large $\gamma_\alpha$ (wide bands)
and large $\varpi_k$ (high bias)
cause a fast decay of the transfer rates in time.
This decay is responsible
for the memory loss in the system.
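The decay scales of the Matsubara branch can be read off directly. A small sketch (units $\hbar=k_B=1$; the helper name is ours) lists the fermionic Matsubara frequencies and shows that the slowest mode $e^{-\varpi_1 t}$ sets a memory time of order $\beta/\pi$, i.e.\ this contribution to the memory dies out faster at higher temperature:

```python
import numpy as np

def matsubara(k, beta):
    """Fermionic Matsubara frequency  varpi_k = (2k - 1) pi / beta  (hbar = k_B = 1)."""
    return (2 * k - 1) * np.pi / beta

beta = 4.0
freqs = np.array([matsubara(k, beta) for k in range(1, 6)])
# the slowest Matsubara mode exp(-varpi_1 t) dominates the memory of this
# branch; its decay time is beta / pi
memory_time = 1.0 / matsubara(1, beta)
```

Higher $k$ modes decay progressively faster, so the Matsubara sum in \Eq{KmnLoren} is usually truncated after a few terms at moderate temperatures.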
The non-Markovian Pauli rate equation (\ref{ratecom}), in terms of
the population vector ${\bm p}(t)=\{p_m(t)\}$
and the involved transfer matrices, is
\begin{equation}\label{rateeq}
\dot{\bm p}(t)=\int_{t_0}^{t} \mathrm d\tau \,
K(t-\tau) {\bm p}(\tau).
\end{equation}
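Equation (\ref{rateeq}) is a Volterra integro-differential equation and can be propagated numerically by combining a quadrature of the memory integral with a time stepper. The following sketch is our own illustrative solver (not the implementation used in this work): an explicit Euler step with trapezoidal memory quadrature, checked on a scalar toy kernel $K(t)=-a\,e^{-\gamma t}$ whose exact solution for $a=1$, $\gamma=2$ is $p(t)=p_0\,e^{-t}(1+t)$:

```python
import numpy as np

def solve_volterra(kernel, p0, dt, n_steps):
    """Propagate  dp/dt = int_0^t K(t - tau) p(tau) dtau  with an explicit
    Euler step and a trapezoidal quadrature of the memory integral."""
    p = np.empty(n_steps + 1)
    p[0] = p0
    for n in range(n_steps):
        if n == 0:
            mem = 0.0
        else:
            tau = np.arange(n + 1) * dt
            vals = kernel(n * dt - tau) * p[:n + 1]
            mem = dt * (0.5 * vals[0] + vals[1:-1].sum() + 0.5 * vals[-1])
        p[n + 1] = p[n] + dt * mem
    return p

# toy scalar kernel K(t) = -a exp(-gamma t); for a = 1, gamma = 2 the exact
# solution of the Volterra equation is p(t) = p0 exp(-t) (1 + t)
a, gamma, p0, dt, n_steps = 1.0, 2.0, 1.0, 1e-3, 2000
p_num = solve_volterra(lambda t: -a * np.exp(-gamma * t), p0, dt, n_steps)
t_grid = np.arange(n_steps + 1) * dt
p_exact = p0 * np.exp(-t_grid) * (1.0 + t_grid)
err = np.max(np.abs(p_num - p_exact))
```

The quadratic cost in the number of time steps is the price of the memory integral; for the matrix-valued kernel of \Eq{ratecom} the same loop applies with matrix-vector products.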
In the Laplace frequency domain it reads
\begin{equation}\label{rateeq-laplace}
s \tilde {\bm p}(s)- {\bm p}_0= \tilde K(s) \tilde{\bm p}(s).
\end{equation}
The corresponding electron transfer rates are
\begin{align}
\tilde K^{(\alpha \pm)}_{mn}(s) &
= 2 \Gamma_{mn}^{(\alpha)} \Bigl\{
\frac{a_\alpha^\pm ( s+ \gamma_\alpha)- b_\alpha^\pm \Omega^\alpha_{mn}}
{(s+ \gamma_\alpha)^2 + (\Omega^\alpha_{mn})^2 }
\nonumber \\ &\qquad\quad
+ \sum_{k=1}^{\infty}
\frac{c_{\alpha k}^\pm (s+ \varpi_k)
-d_{\alpha k}^\pm \mu^\alpha_{mn}}
{ (s+ \varpi_k)^2 + (\mu^\alpha_{mn})^2 }
\Bigr\}.
\end{align}
The derived non-Markovian rate equation formalism
is based
on a microscopic description of the electrode-system coupling,
and is valid for arbitrary bias and temperature.
Compared to the quantum master equation
in the same regime,\cite{Yan05187,Wel06044712}
the exclusion of coherences makes it numerically
feasible to treat multilevel
systems such as large molecules.\cite{Wel081137}
This allows one to include non--Markovian effects
in large many--body systems,
e.g.\ in quantum-chemistry calculations, since the properties
of the molecular junction enter only through the couplings
$\Gamma_{nm}^{(\alpha)}$ and the fitting
parameters of the Lorentzian spectrum.
The Born-Markov approximation can be applied to rate equation (\ref{rateeq})
by separating the integration variables and
extending the upper integration limit to infinity in \Eq{rateeq}.
The resulting integration over time,
\begin{equation}
W^{(\alpha \pm)}_{mn} = \int_{0}^{\infty}\!{\mathrm d}t\,
K^{(\alpha \pm)}_{mn}(t)
=\tilde K^{(\alpha \pm)}_{mn}(s) \vert_{s=0} ,
\end{equation}
gives the Markovian electron transfer rates. The second identity
follows from the Laplace-domain rate equation (\ref{rateeq-laplace}),
by which the Born-Markov approximation amounts to keeping only the zero-frequency
contribution.
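This identity is easy to verify numerically. The sketch below (with arbitrary illustrative coefficients) takes one Lorentzian branch of the kernel of \Eq{KmnLoren}, $K(t)=2\Gamma e^{-\gamma t}[a\cos(\Omega t)-b\sin(\Omega t)]$, and checks that its time integral reproduces the closed-form Laplace transform evaluated at $s=0$:

```python
import numpy as np

def kernel(t, Gam, a, b, gamma, Omega):
    # one Lorentzian branch of the rate kernel
    return 2 * Gam * np.exp(-gamma * t) * (a * np.cos(Omega * t) - b * np.sin(Omega * t))

def laplace_kernel(s, Gam, a, b, gamma, Omega):
    # closed-form Laplace transform of the same branch
    return 2 * Gam * (a * (s + gamma) - b * Omega) / ((s + gamma) ** 2 + Omega ** 2)

pars = dict(Gam=1.0, a=0.3, b=0.7, gamma=2.0, Omega=5.0)
t = np.linspace(0.0, 40.0, 400001)       # exp(-gamma t) is negligible beyond t = 40
vals = kernel(t, **pars)
dt = t[1] - t[0]
# trapezoidal estimate of the Markovian rate  W = int_0^inf K(t) dt
W_markov = dt * (0.5 * vals[0] + vals[1:-1].sum() + 0.5 * vals[-1])
```

For the chosen parameters both expressions give $W = 2(a\gamma-b\Omega)/(\gamma^2+\Omega^2) = -0.2$.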
\section{Non-Markovian waiting time distribution}
\label{secWTD}
\subsection{Statistics analysis}
We consider two consecutive electron transfers contained in a time
series as illustrated in \Fig{fig:0}. An electron that entered the system
from the left electrode at an earlier time $t_0$ is detected at
time $t$ leaving the system through the right electrode. No other
electron transfers are detected in between.
The joint-probability for the consecutive electron transfer events is
\begin{equation}\label{joint1}
P(t)=\langle \langle W^{(r-)} G(t,t_0) W^{(l+)} {\bm p}(t_0) \rangle \rangle.
\end{equation}
Here, $\langle \langle\cdots\rangle \rangle$ denotes the sum over the final system states.
We assume that the transfer events are instantaneous compared
to the time-scale of the system propagation
in between, as shown in \Fig{fig:0}.
Therefore, we have used the Markovian forms of the rate matrices
for the two consecutive events.
This assumption is reasonable in accordance with electron
counting experiments,
where typical waiting times are long
compared to the fast transfer
events.\cite{Lu03422,Fuj042343,Gus06076605,Fuj061634}
The memory of the system is contained in
$G(t,t_0)$, the non-Markovian propagator of the system from $t_0$
to $t$ in absence of transfers.
It is therefore associated with the diagonal
rate matrix $K_0$ of \Eq{K0}, satisfying
\begin{equation}
\frac{d}{dt} G(t,t_0) = \int_{t_0}^{t} \mathrm d\tau'\,
K_{0}(t-\tau') G(\tau',t_0) .
\end{equation}
For the given two--event case, the joint--probability
is equivalent to a waiting time distribution.\cite{Wel08195315,Wel081137}
Now consider the event of an electron transferred into the system and
the subsequent
waiting time before any other transfer takes place,
also referred to as survival probability.
In the present notation it is given by
$
\langle \langle G(t,t_0) W^{(\alpha\pm)} {\bm p}(t_0) \rangle \rangle.
$
While the joint probability is subject to the nature of the second transfer,
the specific form of the second event is
irrelevant to the survival probability.
Consequently, we introduce the survival time operator
\begin{equation}
Z^{(\alpha\pm)}(t,t_0)= G(t,t_0) W^{(\alpha\pm)}.
\end{equation}
If memory is absent, the
survival probability is independent of the previous
waiting times. To study the
memory of a previous survival time that carries on into the following
survival time, we introduce two--time joint survival probabilities
of the form
\begin{equation}\label{equ:twotimes}
Q(\tau_2,\tau_1)= \langle \langle Z^{(r-)}(\tau_2 , \tau_1)
Z^{(l+)}( \tau_1, t_0) {\bm p}(t_0) \rangle \rangle.
\end{equation}
\subsection{Non-Markovian corrections}
The formal solution to the propagator in Laplace domain is given by
\begin{equation} \label{prop1}
\tilde G(s)=\frac{1}{s-\tilde K_{0}(s)}.
\end{equation}
The complex Laplace frequency $s= \gamma + i \omega$ is
associated with the residence of the system in its state.
The bilateral Laplace transformation reduces to a Fourier transformation
by setting $\gamma=0$.
Since $\tilde K_{0}(s)$ is strictly diagonal in the many-body eigenspace
of the system, the matrix inversion required in \Eq{prop1}
can be efficiently carried out for large systems.
The technique of expanding the propagation into
non-Markovian corrections
has been applied to electron transport
recently.\cite{Bra06026805,Bra081745,Fli08150601}
Here we apply it to the WTD.
Let us first express \Eq{prop1} by its series
\begin{equation}
\tilde G(s) =\sum_{n=0}^\infty \frac{ [\tilde K_{0}(s) ]^n} {s^{n+1}}.
\end{equation}
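In the scalar case this series can be checked directly. A trivial numerical sketch with a constant $\tilde K_0$ and $\vert \tilde K_0/s\vert<1$ confirms that the geometric series converges to the resolvent $1/(s-\tilde K_0)$; the matrix case is analogous, since $\tilde K_0$ is diagonal:

```python
def resolvent(s, K0):
    # closed form  G(s) = 1 / (s - K0)  for a constant scalar kernel
    return 1.0 / (s - K0)

def resolvent_series(s, K0, n_terms):
    # partial sum of the series  sum_n K0^n / s^(n+1)
    return sum(K0 ** n / s ** (n + 1) for n in range(n_terms))

s, K0 = 2.0, -0.5          # |K0 / s| = 0.25 < 1: the series converges
approx = resolvent_series(s, K0, 40)
exact = resolvent(s, K0)
```

The truncation error decays geometrically as $\vert K_0/s\vert^{n}$, so forty terms are far more than sufficient here.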
Assuming the derivative
$\partial^m_s [\tilde K_{0}(s) ]$ exists for all $m$,
the kernel can then be expanded into a Taylor series
$[\tilde K_{0}(s)]^n=\sum_{m=0}^\infty \partial^m_s
[\tilde K_{0}(s)]^n\vert_{s=0} \frac{s^m}{m!}$.
Thus,
\begin{equation}
\tilde G(s)= \sum_{n=0}^\infty \sum_{m=0}^\infty \frac{\partial^m_s
[\tilde K_{0}(s) ]^n\vert_{s=0}}{m!\, s^{n+1-m}}.
\end{equation}
Now we apply the inverse Laplace transform
$
x(t)=\frac{1}{2 \pi i} \int_{\gamma - i\infty}^{\gamma+i \infty}
\mathrm ds \, e^{st} \tilde x(s)
$
to switch back into time domain.
The poles can be simplified by keeping only the terms with $m=n$,
which neglects the transient terms.\cite{Bra06026805,Bra081745,Fli08150601}
We obtain
\begin{equation}\label{eq18}
G(t) = \!\sum_{n=0}^{\infty}\! \frac{1}{n!}
\left[
\frac{\partial^n}{\partial s^n}\!\!
\left([ \tilde K_{0}(s) ]^n
e^{\tilde K_{0}(s) t}
\right)\right]_{s=0} \!\!
\equiv \sum_{n=0}^{\infty} G^{(n)}(t),
\end{equation}
with $G^{(n)}(t)$ denoting
the individual term involved,
where
$G^{(0)}(t)=e^{\tilde K_{0}(s) t} \vert_{s=0}$
describes the Markovian dynamics.
The first identity of expression (\ref{eq18})
is asymptotically exact since the dynamics
is reduced to the poles $m=n$.
The WTD
can also be expressed in terms
of $P(t)=\sum_n P^{(n)}(t)$,
with $P^{(0)}(t)$ denoting the Markovian contribution;
so can the survival probabilities.
\section{Demonstration and discussion}\label{demo}
We employ the non-Markovian rate equation to calculate the two-electron
system illustrated in
\Fig{fig:0}. This system resembles the counting experiment
conducted in Ref.\,\onlinecite{Fuj061634}.
Here, the DQD provides a total of four eigenstates:
the unoccupied state ($\vert 0 \rangle$),
two singly occupied states ($\vert 1 \rangle$ and $\vert 2 \rangle$),
and one doubly occupied state ($\vert 3 \rangle$),
with energies $\epsilon_0=0$,
$\epsilon_{1/2} = \frac{1}{2}(E_1 + E_2)
\mp \sqrt{\frac{1}{4}(E_1-E_2)^2 + \Delta^2}
$, and $\epsilon_3=E_1+E_2+U$, respectively.
The equilibrium chemical potential of the electrodes
is set to $\mu^{\rm eq}=(E_1+E_2)/2$.
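These eigenenergies follow from diagonalizing $H_S$ of \Eq{eq1} in the four-state Fock space. A quick numerical cross-check (the basis ordering $\{\vert 00\rangle,\vert 10\rangle,\vert 01\rangle,\vert 11\rangle\}$ and the parameter values are ours, chosen only for illustration):

```python
import numpy as np

E1, E2, U, Delta = 1.2, 0.8, 2.0, 0.5

# H_S of Eq. (1) in the Fock basis {|00>, |10>, |01>, |11>}: the hopping
# -Delta only couples the two singly occupied states
H = np.array([
    [0.0,    0.0,    0.0,    0.0],
    [0.0,    E1,    -Delta,  0.0],
    [0.0,   -Delta,  E2,     0.0],
    [0.0,    0.0,    0.0,    E1 + E2 + U],
])

num = np.sort(np.linalg.eigvalsh(H))
mean = 0.5 * (E1 + E2)
split = np.sqrt(0.25 * (E1 - E2) ** 2 + Delta ** 2)
analytic = np.sort([0.0, mean - split, mean + split, E1 + E2 + U])
```

The numerically obtained spectrum coincides with the analytic expressions $\epsilon_0$, $\epsilon_{1/2}$, and $\epsilon_3$ quoted above.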
For numerical demonstrations, we use parameter values in
accordance with recent electron
counting experiments on electron transfers through
quantum dot systems at low temperatures.\cite{Gus06076605}
A coupling strength of $\Gamma= 10^4$\,Hz serves as the
unit for all values. This is equivalent to an
energy unit of $[E]=10^4 h= 6.63 \times 10^{-30}$\,J,
and a time unit of $[t]= 0.1$\,ms,
which is the typical time scale of waiting times in quantum dot counting
experiments.\cite{Fuj061634} We also use a low temperature of
$T=2 \times 10^{4} [E] = 10$\,mK.
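These unit conversions can be verified directly from the SI values of $h$ and $k_B$; the following bookkeeping sketch does so, with a 5\% tolerance reflecting the rounded figures quoted above:

```python
h = 6.62607e-34    # Planck constant, J s
kB = 1.38065e-23   # Boltzmann constant, J / K

E_unit = 1.0e4 * h                   # [E] = 10^4 h  ->  about 6.63e-30 J
t_unit = 1.0 / 1.0e4                 # [t] = 1 / (10^4 Hz) = 0.1 ms
T_in_units = kB * 10.0e-3 / E_unit   # k_B T at T = 10 mK, expressed in units of [E]
```

The last line indeed evaluates to roughly $2\times 10^{4}\,[E]$, consistent with the quoted $T=10$\,mK.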
Where mentioned, we set a small energy
detuning $\Delta E=E_1-E_2$ in order
to deduce specific frequencies of the system.
The bandwidth $\gamma$ is set sufficiently large in order
to neglect finite bandwidth effects; thus
the non--Markovian effects are studied
in the wide-band regime.
In addition, the Lorentzian spectral densities are aligned
to the orbitals of the system.
\subsection{Transients and Fourier spectrum of WTD}
\begin{figure}
\includegraphics[width=8.5cm,clip]{fig2.eps}
\caption{The relative non-Markovian spectrum $F(\omega)$
of the WTD. The parameters used for
the four panels are as follows.
Upper left panel (a): $U=0$, $\Delta E=0$, $V=1.0$, $\Delta = 1.0, 5.0, 8.0$.
Upper right panel (b): $U=0$, $\Delta E=0$, $\Delta = 5.0$, $V=1.0, 2.0, 5.0$.
Bottom left panel (c): $\Delta E=0$, $V=1.0$, $\Delta = 5.0$, $U=1.0, 2.0, 4.0$.
Bottom right panel (d): $V=1.0$, $\Delta = 5.0$, $U=0.0$, $\Delta E=0.1, 1.0, 2.0$.
\label{fig:2}}
\end{figure}
Figure \ref{fig:2} shows the relative non-Markovian spectrum of the WTD represented by
\begin{equation}
F(\omega)=\frac{1}{\Gamma}
\frac{\vert P(\omega)-P^{(0)}(\omega)\vert }
{ P^{(0)}(\omega)}.
\end{equation}
It is noteworthy that $F(\omega)$ is independent of
the system--reservoir coupling strength parameter $\Gamma$.
The WTD spectrum reveals several
frequencies that are present in
the transient oscillations.
These depend only on the
internal transfer rate $\Delta$,
the Coulomb coupling $U$,
and the bias voltage $V$.
The specific values of the parameters
are given in the caption of the figure.
Figure \ref{fig:2}(a) shows the main characteristics
of $F(\omega)$, consisting of
two overlapping sub-peaks centered around the value of $\Delta$.
Changing the value of $\Delta$
leads to the shift of both sub-peaks equally by
$\Delta$. In \Fig{fig:2}(b), $\Delta$ is kept constant
and the bias voltage is varied. We find that
the splitting of the two sub-peaks
is determined by the applied voltage.
Labeling the two peaks with $\pm$, respectively, we can
deduce the following relation for the
corresponding characteristic frequencies:
$\omega_{\pm}= \vert \Delta \pm V/2 \vert$.
In the presence of Coulomb interaction,
we observe an additional double-peak
feature at $\vert U -\Delta \pm V/2 \vert$,
as demonstrated in \Fig{fig:2}(c).
This is similar to the non-Markovian shot noise
spectrum,\cite{Jin0806}
where a
finite Coulomb interaction $U$ also induces additional peaks
due to the
energy gap between the two-particle occupation state
and the lower states.
On the other hand,
the orbital detuning does not induce additional
peaks in the double quantum dot in series
as shown in \Fig{fig:2}(d).
Oscillations at the Rabi frequency, which
were observed in parallel DQD
systems,\cite{Wel08195315,Wel0957008,Bra08477}
are however absent in the present serial DQD system.
In the parallel case, the transport proceeds
via two channels, and the Rabi oscillations
in the WTD can be observed
as a consequence of quantum
mechanical interference.\cite{Wel08195315,Wel0957008,Bra08477}
It is also noted that the information
contained in the spectrum of the non-Markovian
WTD is mostly equivalent
to a measurement of the
non-Markovian shot noise spectrum.
In this sense, the WTD can be
considered an alternative approach
to the shot noise spectrum measurement.
\subsection{Memory landscape of consecutive waiting times}
The expansion in non-Markovian corrections, \Eq{eq18},
can be readily employed to
calculate the two propagators involved in
the two-time joint probabilities
defined by \Eq{equ:twotimes}.
Denote
\begin{align}
Q^{(n)} (\tau_2,\tau_1) &=
\sum_{k=0}^{n} \sum_{j=0}^{n} \langle \langle
G^{(k)}(\tau_2-\tau_1) W^{(r-)}
\nonumber \\ &\qquad\ \ \times
G^{(j)}(\tau_1-t_0) W^{(l+)}
\bm p(t_0) \rangle \rangle.
\end{align}
A memory landscape of the system
can be calculated as the relative difference
between the non-Markovian and Markovian
two-time joint probabilities
\begin{equation}\label{mem-land}
L^{(n)} (\tau_2,\tau_1)=
\frac{ Q^{(n)} (\tau_2,\tau_1)
- Q^{(0)} (\tau_2,\tau_1) }
{Q^{(0)} (\tau_2,\tau_1)}.
\end{equation}
The order $n$ of the perturbative expansion in non-Markovian
corrections has to be chosen in accordance with the parameters
in order to ensure satisfactory convergence.
We find that summation up to the fourth non-Markovian correction already
converges sufficiently for the given parameters.
As the memory in \Eq{mem-land} decays,
the relative non-Markovian landscape
$L^{(n)} (\tau_2,\tau_1)$ converges to zero.
Figure\,\ref{fig:3} shows the memory landscape of two survival times
related to two consecutive electron transfers: one into the system through the left
electrode, followed by one out through the right electrode.
It visualizes how memory of the waiting time $\tau_1$
after the first transfer
is carried over into the waiting time $\tau_2$
following the second transfer.
We find that the non-Markovian effects
are small for the given parameters in the case where the DQD is coupled symmetrically
to the electrodes. This is due to the relatively weak coupling of the DQD to the electrodes,
which is required in present counting experiments in order to
resolve single electron transfers on measurable timescales.
It is observed that a stronger coupling to only one electrode
induces significantly larger non-Markovian
effects, as shown in the right panels of \Fig{fig:3}.
In general, fast electron transfers
are necessary in order to observe significant non-Markovian effects.
The deviations from the Markovian
value for $\tau_1$, $\tau_2$ approaching zero are due to the truncation of the transients
in the derivation of the expansion. The expansion follows the general trend
of a numerically exact solution, and both solutions
overlap after the transients have decayed. However, the truncation causes
relatively large inaccuracies for $\tau_1$ and $\tau_2$ close to zero.
There is an interesting dependency of the non-Markovian effects in
the memory landscape on the Coulomb repulsion $U$.
By comparing the upper panels of \Fig{fig:3}, where Coulomb repulsion is absent,
with the bottom ones, where a large $U$ induces a Coulomb blockade regime,
we observe that memory decays faster with $\tau_2$
in the Coulomb blockade regime. This can be explained as follows.
In the Coulomb blockade regime, only a single electron can
occupy the DQD, and the double-occupancy state does not provide memory
for the second survival time, leading to an overall smaller non-Markovian
contribution. In this case, only one trajectory in the left-to-right direction is
possible: an electron enters the unoccupied DQD at time $\tau_1=0$ and leaves it at time $\tau_2=0$.
The memory is preserved during $\tau_1$ by the single electron inside the DQD.
However, after the electron has left the junction, the memory of its trajectory is lost rapidly,
since no other electron can serve as a messenger inside the DQD,
thus leading to comparatively short survival times $\tau_2$ over which memory is present.
In the regime where the Coulomb repulsion is negligible, a second electron can occupy the
junction along the described trajectory, which is represented in the model by an occupied
double-occupancy state. The presence of the second electron preserves
the memory during $\tau_2$ after the other electron has left the junction.
\begin{figure}
\begin{center}
\includegraphics[width=8.0cm,clip]{fig3.eps}
\end{center}
\caption{Memory landscape $L^{(4)}_{l,r}$ of
consecutive survival times.
The bandwidth is large and a finite bias of $V=0.1\,k_B T$ is applied.
The left coupling strength is $\Gamma^{(l)}=10^{4}$\,Hz.
The upper panels are calculated in the absence of Coulomb coupling, $U=0$; the bottom
panels display the Coulomb blockade regime, $U= \infty$.
The left panels are calculated for
a symmetric system, $\Gamma^{(l)}=\Gamma^{(r)}$. In the right panels,
a stronger coupling strength, $\Gamma^{(r)} = 10^{6}$\,Hz, is applied to the right electrode.
\label{fig:3}}
\label{fig:3}}
\end{figure}
\section{Conclusion}
We find that non-Markovian effects are small in the regimes
of recent single electron counting experiments.
The sampling rate of current experiments is slow, a requirement
imposed by the detection process of single electron transfers with currently available
technology. This confirms that the Markovian approximation made in
previous studies of full counting statistics (FCS) or the WTD
is reasonable for the previously investigated systems.
Non-Markovian effects in the electron transfer statistics
have to be taken into consideration for stronger
electrode-DQD couplings, which then also require faster sampling
rates, or for strongly asymmetric systems.
For example, they affect the decay rates of the WTD,
which are directly related to the electronic
structure of the system in the junction.\cite{Wel081137}
Non-Markovian effects also induce several oscillations with
characteristic system frequencies.
Note that the Pauli rate equation
itself remains valid in the strong-coupling limit. In the present paper
we employ a perturbative approach to the rate
equation and observe that the non-Markovian
effects increase with the coupling strength. This observation
is expected to remain true based on general Pauli rate equation
dynamics. In other words, the non-Markovian effects
are mainly visible for stronger couplings.
The employed microscopic non-Markovian rate equation provides
a general framework to study similar systems and allows us to distinguish
between non-Markovian effects due to intrinsic properties of the
system, a finite electrode-system coupling bandwidth, and small bias voltages.
It can be combined with quantum chemistry calculations that can
provide the employed parameters for molecules and their binding to
the electronic bands of the metal electrodes.
The approaches derived for the non-Markovian WTD are general
and can be applied to a variety of processes in physics,
chemistry and biology that are described by rate equations.
\acknowledgments
Support from the RGC (604007 \& 604508) of Hong Kong
is acknowledged.
\section{Introduction} In many practical engineering situations, the dynamical behaviour of the system at stake can be modeled as a switching system like the one represented in Equation (\ref{eq-switching}):
\begin{equation}\label{eq-switching}
x_{k+1}=A_{\sigma\left(k\right) }x_{k},
\end{equation}
where
$\setmat\mathrel{\mathop:}=\left\{ A_{1},...,A_{m}\right\} $ is a set of matrices, and the function $$\sigma(\cdot): \n\rightarrow \{1,\dots, m\} $$ is called the \emph{switching signal}.
As a few examples, applications ranging from Viral Disease Treatment optimization (\cite{hmcb10}) to Multi-hop networks control (\cite{pappas-multihop}), or trackability of autonomous agents in sensor networks (\cite{cresp}) have been modeled with switching systems.
One of the central problems in the study of switching systems is their stability: do all the trajectories $x(t)$ tend to zero when $t\rightarrow \infty,$ whatever switching law $\sigma\left(k\right)$ occurs? The answer is given by the \emph{joint spectral radius} of the set $\setmat$ which is defined
as
\begin{equation} \rho\left(\setmat\right)
=\lim_{k\rightarrow\infty}\max_{\sigma
\in\left\{ 1,...,m\right\} ^{k}}\left\Vert A_{\sigma_{k}}...A_{\sigma_{2}}A_{\sigma_{1}}\right\Vert ^{1/k}.\label{eq-def.jsr}
\end{equation}
This quantity is independent of the norm
used in (\ref{eq-def.jsr}), and is smaller than one if and only if the system is stable. See \cite{jungers_lncis} for a recent survey on the topic. Even though it is known to be very hard to compute, in recent years much effort has been devoted to approximating this quantity, because of its importance in applications. One of the most successful families of techniques to approximate it makes use of convex optimization methods, like Sum-Of-Squares, or Semidefinite Programming (\cite{JohRan_PWQ,multiple_lyap_Branicky,AAA_MS_Thesis,Roozbehani2008,daafouzbernussou,LeeD06,convex_conjugate_Lyap,Pablo_Jadbabaie_JSR_journal,protasov-jungers-blondel09}). Other methods have been proposed to tackle the stability problem (e.g. variational methods (\cite{MM11}), or iterative methods (\cite{GZalgorithm})), but a great advantage of the former methods is that they offer a simple criterion that can be checked with the help of the powerful tools available for solving convex programs, and they often come with a guaranteed accuracy.
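The limit in (\ref{eq-def.jsr}) can be probed by brute force: every product $P$ of length $k$ yields the lower bound $\rho(P)^{1/k}\leq \rho(\setmat)$, while $\max_\sigma\Vert P\Vert^{1/k}$ is an upper bound. The sketch below (purely illustrative — the enumeration is exponential in $k$ and only practical for tiny instances) computes both bounds; for diagonal matrices the joint spectral radius is simply the largest diagonal entry, which the bounds recover exactly:

```python
import itertools
import numpy as np

def jsr_bounds(mats, k_max):
    """Brute-force bounds on the joint spectral radius from products of
    length <= k_max:  max_k rho(P)^(1/k) <= rho(M) <= min_k max ||P||^(1/k)."""
    lower, upper = 0.0, np.inf
    for k in range(1, k_max + 1):
        best_rho, best_norm = 0.0, 0.0
        for word in itertools.product(mats, repeat=k):
            P = np.linalg.multi_dot(word) if k > 1 else word[0]
            best_rho = max(best_rho, np.max(np.abs(np.linalg.eigvals(P))) ** (1.0 / k))
            best_norm = max(best_norm, np.linalg.norm(P, 2) ** (1.0 / k))
        lower = max(lower, best_rho)   # lower bounds only improve with more words
        upper = min(upper, best_norm)  # upper bounds only improve with longer words
    return lower, upper

# diagonal matrices: the JSR is the largest diagonal entry, here 0.6
A1 = np.diag([0.5, 0.2])
A2 = np.diag([0.3, 0.6])
lo, hi = jsr_bounds([A1, A2], 6)
```

For generic (non-commuting) matrices the two bounds close only slowly in $k$, which is precisely why the convex-optimization methods discussed next are attractive.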
\\As an example, the following simple set of LMIs is probably the first one that has been proposed in the literature in order to solve the problem:
\begin{equation}\label{eq-Lyap.CQ.SDP}
\begin{array}{rll}
A_i^TPA_i&\prec&P \quad i=1,\ldots,m.\\
P&\succ&0.
\end{array}
\end{equation}
It appears that if these equations have a solution $P,$ then the function $x^TPx$ is a common quadratic Lyapunov function, meaning that this function decreases along every trajectory, whatever switching signal occurs. This proves the following folklore theorem:
\begin{thm}
If a set of matrices $\setmat\mathrel{\mathop:}=\left\{ A_{1},...,A_{m}\right\} $ is such that the Equations (\ref{eq-Lyap.CQ.SDP}) have a solution $P,$ then this set is stable.
\end{thm}
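In practice one searches for $P$ with an SDP solver; but given a candidate $P$, the LMIs (\ref{eq-Lyap.CQ.SDP}) can be verified directly from eigenvalues. A minimal sketch (assuming \texttt{numpy}; the matrices and the helper name are ours, chosen for illustration):

```python
import numpy as np

def is_common_quadratic_lyapunov(P, mats, eps=1e-9):
    """Check the LMIs A_i^T P A_i < P and P > 0 for a candidate symmetric P
    by inspecting eigenvalues (strict inequalities up to tolerance eps)."""
    if np.min(np.linalg.eigvalsh(P)) <= eps:
        return False
    return all(np.min(np.linalg.eigvalsh(P - A.T @ P @ A)) > eps for A in mats)

# Both matrices have spectral norm below one, so P = I already certifies
# stability of the switching system.
A1 = np.array([[0.5, 0.2], [0.0, 0.4]])
A2 = np.array([[0.3, 0.0], [0.1, 0.6]])
P = np.eye(2)
```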
Starting with the LMIs (\ref{eq-Lyap.CQ.SDP}), many researchers have provided other methods, based on semidefinite programming, for proving the stability of a switching system.
In all these methods, the stability criterion consists in verifying a set of \emph{Lyapunov Inequalities,} which we now describe. The different methods amount to writing a set of equations, parameterized by the values of the entries of the matrices in $\setmat.$ If these equations have a solution, then the set $\setmat$ is stable.
\begin{defi}
We call a \emph{Lyapunov function} any continuous, positive, and homogeneous function
$V(x):\mathbb{R}^n\rightarrow\mathbb{R}.$\end{defi}
\begin{defi}
Given a switching system of the shape (\ref{eq-switching}), a \emph{Lyapunov Inequality} is a quantified inequality of the shape:
\begin{equation} \forall x\in \re^n, V_i(Ax)\leq V_j(x), \end{equation}
where the functions $V_i,V_j$ are Lyapunov functions, and $A$ is a particular product of matrices in $\setmat.$
\end{defi}
For instance, the relations (\ref{eq-Lyap.CQ.SDP}) represent a Lyapunov inequality, because of the well-known property $$P\succeq 0\quad \Leftrightarrow \quad \forall x,\ x^TPx \geq 0. $$ They express the fact that the ellipsoid corresponding to the matrix $P$ is mapped into itself. We call Lyapunov inequalities of this type, expressed with positive semidefinite matrices and semidefinite inequalities, \emph{ellipsoidal Lyapunov inequalities.}
\begin{remark}\label{remark-approx} It is important to note that the utility of such LMIs goes further than the simple stability criterion: by applying them to the \emph{scaled} set of matrices \begin{equation}\label{eq-scaled-set} \setmat/\gamma =\{ A/\gamma: A\in \setmat\}, \end{equation} one can actually derive an upper bound $\gamma^*$ on the joint spectral radius, thanks to the homogeneity of Definition (\ref{eq-def.jsr}): take $\gamma^*$ to be the minimum $\gamma$ such that the scaled set (\ref{eq-scaled-set}) is stable.\\
This allows one to measure the quality of a particular set of LMIs as the maximal real number $r$ such that, for any set of matrices, $$r\gamma^*\leq \rho \leq \gamma^*. $$
In particular, it is
known (\cite{ando-shih}) that the estimate $\gamma^*$ obtained
with the set of Equations (\ref{eq-Lyap.CQ.SDP}) satisfies
\begin{equation}\label{eq-CQ.bound}
\frac{1}{\sqrt{n}}\gamma^* \leq\rho(\setmat)\leq\gamma^* ,
\end{equation}
where $n$ is the dimension of the matrices. The reason for which more LMI criteria have been introduced in the literature cited above is that for some other sets of LMIs, one can prove that the value $r$ is larger than $\frac{1}{\sqrt{n}}.$
\end{remark}
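To illustrate the scaling idea of the Remark with the simplest conceivable choice of ellipsoid (the Euclidean ball, i.e.\ $P=I$, instead of an optimized $P$): the resulting $\gamma^*$ is just the largest spectral norm, while $\max_i\rho(A_i)$ gives a trivial lower bracket on $\rho(\setmat)$. A sketch assuming \texttt{numpy} (function names are ours):

```python
import numpy as np

def gamma_star_identity(mats):
    """Smallest gamma such that the Euclidean unit ball is invariant for the
    scaled set {A/gamma}: this is just the largest spectral norm.  (A crude
    stand-in for optimizing over all ellipsoids P in (eq-Lyap.CQ.SDP).)"""
    return max(float(np.linalg.norm(A, 2)) for A in mats)

def rho_lower(mats):
    """Trivial lower bound: rho(set) >= max_i rho(A_i)."""
    return max(float(np.max(np.abs(np.linalg.eigvals(A)))) for A in mats)

A1 = np.array([[0.6, 0.5], [0.0, 0.6]])
A2 = np.array([[0.6, 0.0], [0.5, 0.6]])
g = gamma_star_identity([A1, A2])   # upper bound gamma* = 0.9
r = rho_lower([A1, A2])             # lower bound 0.6
```

Here the gap between $0.6$ and $0.9$ shows why optimizing over the ellipsoid $P$ (and, further, over richer LMI families) pays off.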
In the recent paper \cite{ajprhscc11}, the authors have presented a framework in which all these methods find a common natural generalization.
The idea is that a set of Lyapunov inequalities describes a set of switching signals for which the trajectory remains stable. Thus, a valid set of LMIs must cover all the possible switching signals, and provide a valid stability proof for all of these signals. One contribution of \cite{ajprhscc11} is to provide a way to represent a set of LMIs with a directed labeled graph which represents all the stable switching signals (as implied by the LMIs). Thus, one just has to check that all the possible switching signals are represented in the graph, in order to decide whether the corresponding set of LMIs is a sufficient condition for stability. We now formally describe the construction and the result:
In the following, $\setmat$ may denote either a set of matrices or the alphabet corresponding to this set. Also, for any alphabet $\setmat,$ we denote by $\setmat^*$ (resp. $\setmat^t$) the set of all words on this alphabet (resp. the set of words of length $t$). Finally, for a word $w\in \setmat^t,$ we denote by $A_w$ the product corresponding to $w:$ $A_{w_1}\dots A_{w_t}$.\\
We represent a set of Lyapunov inequalities on a directed
labeled graph $G(N, E)$. Each node of this graph corresponds to a Lyapunov function
$V_i$, and each edge is
labeled by a finite product of matrices, i.e., by a word from the
set $\setmat^*.$
As illustrated in Figure~\ref{fig-node.arc}, for any word $A_w\in \setmat^*,$ and any Lyapunov inequality of the shape
\begin{equation}\label{eq-lyap.inequality.rule}
V_j(A_wx)\leq V_i(x) \quad \forall x\in\mathbb{R}^n,
\end{equation}
we add an arc going from node $i$ to node $j$ labeled with the word
$\bar w$ (the \emph{mirror} $\bar w$ of a word $w $ is the word obtained by reading $w$ starting from the end). So, there are as many nodes in the graph as there are different functions $V_i,$ and as many arcs as there are inequalities.
\begin{figure}[ht]
\centering \scalebox{.3} {\includegraphics{node_arc2.eps}}
\caption{Graphical representation of Lyapunov inequalities. The
graph above corresponds to the Lyapunov inequality $V_j(A_wx)\leq
V_i(x)$. Here, $A_w$ can be a single matrix from $\setmat$ or
a finite product of matrices from $\setmat$.}
\label{fig-node.arc}
\end{figure}
The reason for this construction is that there is a direct way of checking on $G$ whether the set of Lyapunov inequalities implies stability. Before presenting it, we need one last definition:
\begin{defi}\label{def-path-complete}
Given a directed graph $G(N,E)$ whose arcs are labeled with words
from the set $\setmat^*$, we say that the graph is
\emph{path-complete}, if for any finite word $w_1\dots w_k$ of any length $k$ (i.e., for all words in
$\setmat^*$), there is a directed path in $G$ such that the word obtained by concatenating the labels of the edges on this path contains the word $w_1\dots w_k$ as a subword.
\end{defi}
We are now able to state the criterion for validity of a set of LMIs:
\begin{thm}\label{thm-path.complete.implies.stability}(\cite{ajprhscc11})
Consider a set of Lyapunov inequalities with $m$ different labels, and its corresponding graph $G(N,E).$
If $G$ is path-complete, then, the Lyapunov inequalities are a valid criterion for stability, i.e., for any finite set of matrices
$\setmat=\{A_1,\ldots,A_m\}$ which satisfies these inequalities, the corresponding switching system (\ref{eq-switching}) is stable. \end{thm}
\begin{example}
The graph represented in Figure \ref{fig-hscc} is path-complete: one can check that every word can be read on this graph. As a consequence, the set of Equations (\ref{eq-hscc}) is a valid condition for stability.
\end{example}
\begin{figure}[ht]
\centering \scalebox{.4} {\includegraphics{graphe-hscc.eps}}
\caption{A graph corresponding to the LMIs in Equation (\ref{eq-hscc}). The graph is path-complete, and as a consequence any switching system that satisfies these LMIs is stable.}
\label{fig-hscc}
\end{figure}
\begin{equation}\label{eq-hscc}
\begin{array}{rll}
A_1^TP_1A_1&\prec&P_1 \\
A_1^TP_1A_1&\prec&P_2 \\
A_2^TP_2A_2&\prec&P_1 \\
A_2^TP_2A_2&\prec&P_2 \\
P_1,P_2&\succ&0.
\end{array}
\end{equation}
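For graphs whose arcs carry single-character labels (as in this example), path-completeness can be decided mechanically by a subset construction: starting from the set of all nodes, follow each letter; the graph is path-complete iff the empty set is never reached. (With single-letter labels, reading a word exactly and reading it as a subword along some path are equivalent.) A sketch in Python (the edge triples below are read off (\ref{eq-hscc}): the inequality $V_j(A_wx)\leq V_i(x)$ gives an arc from $i$ to $j$):

```python
from collections import deque

def is_path_complete(nodes, edges, alphabet):
    """Subset construction: the labeled graph can read every word over the
    alphabet iff, starting from the full node set, no sequence of letters
    leads to the empty set.  Edges are (source, target, label) triples
    with single-character labels."""
    start = frozenset(nodes)
    seen, queue = {start}, deque([start])
    while queue:
        S = queue.popleft()
        for a in alphabet:
            T = frozenset(j for (i, j, lab) in edges if lab == a and i in S)
            if not T:
                return False          # some word is unreadable
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return True

# Arcs read off the LMIs (eq-hscc).
hscc_edges = [(1, 1, '1'), (2, 1, '1'), (1, 2, '2'), (2, 2, '2')]
```

Dropping the self-loop $(1,1,'1')$ from this edge set makes the word $11$ unreadable, and the checker reports non-path-completeness accordingly.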
In this paper, we investigate the converse direction of Theorem \ref{thm-path.complete.implies.stability}, and answer the question ``Are there other sets of LMIs, which do not correspond to path-complete graphs, but are sufficient conditions for stability?'' We provide a negative answer to this question. Thus, we characterize the sets of LMIs that are a sufficient condition for stability. Of course, by Remark \ref{remark-approx} above, we not only characterize the Lyapunov inequalities that allow one to prove stability, but we also characterize the valid LMIs which allow one to approximate the joint spectral radius. Another motivation for studying LMI criteria for stability is that it appeared recently that many more quantities relevant to the asymptotic behaviour of switching systems can also be approximated thanks to LMIs. This is the case, for instance, of the Lyapunov exponent (\cite{protasov-jungers-lyap}) and the p-radius (\cite{jungers-protasov-pradius}).
Thus, we need to show that for any non-path-complete graph, there exists a set of matrices which is not stable, but yet satisfies the corresponding equations. This is not an easy task a priori, because we need to construct a counterexample without knowing the graph explicitly, but only with the information that it is not path-complete. Moreover, we not only have to construct the unstable set of matrices which serves as a counterexample, but we also need to exhibit a solution $\{P_i\}$ to the Lyapunov inequalities, in order to show that the set satisfies them.
We split the proof in two steps: we first study a particular case of non-path-complete graphs: For these graphs there are only two different characters (i.e. two different matrices in the set); there is only one node; and $2^l-1$ self loops, each one with a different word of length $l$ ($l$ is arbitrary). The proof is simpler for this particular case, and we feel it gives a fair intuition on the reasoning.\\
The rest of the paper is as follows: in Section \ref{section-preliminaries} we first present the basic construction which lies at the core of our proofs, and then we present the proof for the particular case. In Section \ref{section-main} we prove our result in its full generality. We then show that it implies that recognizing if a set of LMIs is a valid criterion for stability is PSPACE-complete. In Section \ref{section-conclusion} we conclude and point out some possible further work.
\section{Proof of a particular case}\label{section-preliminaries}
\subsection{The construction}
We restrict ourselves to sets of two matrices for the sake of clarity and conciseness. Recall that we want to prove that if a graph is not path-complete, it is not a valid criterion for stability, meaning that there must exist a set of matrices that satisfies the corresponding Lyapunov inequalities, and yet is not stable. Our goal in this subsection is to describe a simple construction that will allow us to build such a set of matrices in our main theorems.
If a graph is not path-complete, there is a certain word $w$ which cannot be read on the graph. Let us fix $n=|w|+1,$ i.e., $n$ is the length of this word plus one ($n$ will be the dimension of our matrices). We propose a simple construction of a set of matrices such that all long products which are not zero must contain the product $A_w.$
\begin{defi}
Given a word $w\in \{1,2\}^*,$ we call $\setmat_w$ the set of $\{0,1\}$-matrices $\{A_1,A_2\}$ such that the $(i,j)$ entry of $A_{l}$ is equal to one if and only if
\begin{itemize}
\item $j=i+1 \mod n, \mbox{ and } w_i=l,$ for $1\leq i\leq n-1,$
\item { or } $(i,j)=(n,1)$ and $l=1.$
\end{itemize}\end{defi}
\begin{figure}
\centering \scalebox{0.4} {\includegraphics{graphe_sigmaomega.eps}}
\caption{Graphical representation of the construction of the set of matrices $\setmat_{2212111}:$ the edges with label $1$ represent the matrix $A_1$ (i.e. $A_1$ is the adjacency matrix of the subgraph with edges labeled with a ``$1$''), and the edges with label $2$ represent the matrix $A_2.$}
\label{fig-sigmaomega}
\end{figure}
In other words, $\setmat_w$ is the unique set of binary matrices whose sum is the adjacency matrix of the cycle on $n$ nodes, and such that for all $i\in[1:n-1],$ the $i$th edge of this cycle is in the graph corresponding to $A_{w_i},$ the last edge being in the graph corresponding to $A_1.$ Figure \ref{fig-sigmaomega} provides a visual representation of the set $\setmat_w.$
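The construction of $\setmat_w$ is straightforward to implement; a sketch assuming \texttt{numpy} (the builder name is ours), instantiated on the word $w=2212111$ of Figure \ref{fig-sigmaomega}:

```python
import numpy as np

def build_sigma_w(w):
    """Binary matrices {A_1, A_2} of the definition above: their sum is the
    adjacency matrix of the n-cycle (n = len(w)+1), the i-th cycle edge
    i -> i+1 carries label w_i, and the closing edge n -> 1 carries label 1."""
    n = len(w) + 1
    A = {1: np.zeros((n, n), dtype=int), 2: np.zeros((n, n), dtype=int)}
    for i, c in enumerate(w):      # i = 0..n-2 encodes the 1-indexed edge i+1
        A[int(c)][i, i + 1] = 1
    A[1][n - 1, 0] = 1             # closing edge (n, 1) with label 1
    return A[1], A[2]

A1, A2 = build_sigma_w("2212111")
```

One can check numerically that $A_1+A_2$ is the cycle adjacency matrix and that $\rho(A_wA_1)=1$, as used in the proof below.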
\subsection{Proof of the particular case}\label{subsection-particular}
We now prove a particular case of our main result: we restrict our attention to ellipsoidal Lyapunov inequalities, and to graphs with a single node with $2^{l}-1$ self-loops labeled with words of length $l.$ That is, the Lyapunov inequalities express the constraints that all but one of the products of length $l$ leave a particular ellipsoid invariant. The corresponding graph is depicted in Figure \ref{fig-particular}, and the Lyapunov inequalities are of the shape (here we have taken $w=22\dots 2$ as the missing word):
\begin{equation}\label{eq-particular}
\begin{array}{rll}
(A_1\dots A_1)^TP(A_1\dots A_1)&\prec&P \\
(A_1\dots A_1A_2)^TP(A_1\dots A_1A_2)&\prec&P \\
\dots & \dots &\dots\\
(A_2\dots A_2A_1)^TP(A_2\dots A_2A_1)&\prec&P \\
P&\succ&0. \\
\end{array}
\end{equation}
\begin{figure}
\centering \scalebox{0.3} {\includegraphics{graphe_particular.eps}}
\caption{The graph corresponding to the particular case at stake in Subsection \ref{subsection-particular}: a single node with $2^l-1$ self loops labeled with words of length $l.$ This graph is not path-complete because the self loop $22\dots 2$ is missing.}
\label{fig-particular}
\end{figure}
We will make use of a well-known result from the seminal paper of \cite{ando-shih}:
\begin{thm}\label{thm-ando-shih}(\cite{ando-shih})
Let $\setmat\subset \re^{n\times n}$ be a set of matrices. If $\rho(\setmat)<1/\sqrt{n},$ then the matrices in $\setmat$ leave invariant a common ellipsoid; that is, Equations (\ref{eq-Lyap.CQ.SDP}) have a solution. \end{thm}
We also need the easy lemma characterizing the main property of our construction $A_w.$
\begin{lem}\label{lem-subproduct}
Any nonzero product in $\setmat_w^{2n}$ contains $A_w$ as a subproduct.
\end{lem}
{\it Proof.}
Since the matrices have binary entries, a nonzero product corresponds to a path in the corresponding graph. A path of length more than $n$ must contain a cycle, and there is only one cycle. Finally, a path of length $2n$ must contain a cycle starting at node $1,$ which corresponds to $A_w.${\flushright $\Box$}
We are now in position to prove the particular case:
\begin{thm}
Let $w$ be a word of length $l$ on $2$ letters, and $\Sigma=\{1,2\}^{l}\setminus \{w\}$ be the set of all words of length $l$ except $w.$ The graph with one node and $2^l-1$ self-loops, whose labels are all the words in $\Sigma$ is not a sufficient condition for stability. Indeed, the set $\setmat_w$ described above satisfies the corresponding LMIs, but is not stable.
\end{thm}
{\it Proof.}
We consider the above construction $\setmat_w.$ It is obvious that $\rho(\setmat_w)\geq 1$ (because $\rho(A_wA_1)=1;$ in fact $\rho(\setmat_w)= 1$ but this is not relevant for the discussion here).\\
Thus, we have to show that all products in the set $$\setmat'=\{A_{x_1}\dots A_{x_l}:x\in \{1,\dots,m\}^l,x\neq w\}$$ share a common invariant ellipsoid. In order to do that, we will show that $\rho(\setmat') =0.$ This fact together with Theorem \ref{thm-ando-shih} implies that the system (\ref{eq-particular}) has a solution.
We {\bf claim} that any nonzero product $A$ of length $l=n-1$ only has nonzero entries of the shape $(i,i-1 \ (\mbox{mod}\, n)):$ $$A_{i,j}=1\,\rightarrow j=i-1\ \mbox{mod}\ n.$$
This is because any edge in the graph is of the shape $(v_i\rightarrow v_{i+1\ (\mbox{mod}\ n)}),$ so a path of length $n-1$ must be of the shape $v_i\rightarrow v_{i+n-1\ (\mbox{mod}\ n)}.$ Also, by construction of $\setmat_w,$ the only product $A$ of length $l$ such that $A_{1,n}=1$ is $A_w.$
Now, suppose by contradiction that there exists a long nonzero product of matrices in $\setmat':$ $A_{y_1}\dots A_{y_T}\neq 0,\ A_{y_i}\in \setmat'.$ Any nonzero entry in this matrix corresponds to a path of length $lT$ of the shape $v_{i_1}\rightarrow v_{i_1-1}\rightarrow\dots \rightarrow v_{i_1-T}$ (where an arrow represents a jump of length $l$ corresponding to a multiplication by a matrix in $\setmat'$). Since we suppose that there are arbitrarily long products, it means that for some $j,$ $v_{i_j}=v_1$ and $v_{i_{j+1}}=v_n,$ so that $A_w$ must be in $\setmat',$ a contradiction.~{\flushright{$\Box$}}
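A numerical sanity check of this argument on the smallest instance $w=22$ (so $l=2$, $n=3$), assuming \texttt{numpy}: since the matrices are nonnegative, $\rho(\setmat')=0$ exactly when the sum of all elements of $\setmat'$ is nilpotent, which is cheap to verify.

```python
import numpy as np
from itertools import product

# Sigma_w for w = "22", n = 3: edges 1->2 and 2->3 carry label 2,
# the closing edge 3->1 carries label 1.
A1 = np.array([[0, 0, 0], [0, 0, 0], [1, 0, 0]])
A2 = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
mats = {1: A1, 2: A2}

# Sigma' = all products of length l = 2 except A_w = A_2 A_2.
Sp = [mats[a] @ mats[b] for a, b in product((1, 2), repeat=2) if (a, b) != (2, 2)]
S = sum(Sp)   # nonnegative, so rho(Sigma') = 0  iff  S is nilpotent
```

Here $S^3=0$, confirming $\rho(\setmat')=0$, while $\rho(A_wA_1)=1$ confirms that $\setmat_w$ itself is not stable.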
\section{The main result}\label{section-main}
\subsection{The proof}
Let us now consider a general non-path-complete graph, and prove that some sets of matrices satisfy the corresponding equations, but fail to have JSR smaller than one. We start by introducing another family of Lyapunov functions. These functions are only defined on the positive orthant but, as we will see, this is sufficient when dealing with nonnegative matrices. We denote these functions by $V_p,$ where $p$ is a positive vector which defines them entirely:
\begin{equation}\label{eq-linear-norms}V_p(x)=\inf{\{\lambda: x /\lambda \leq p\}},\end{equation} where the inequality is entrywise. This quantity is a valid norm for nonnegative vectors, and geometrically, its unit ball is simply the set $\{x=p-y:y\geq 0\}$. We call this family of Lyapunov functions \emph{\conicnorm{} Lyapunov functions.} The following lemma provides an easy way to express the stability equations for this family of homogeneous functions when dealing with nonnegative matrices: it allows one to write the Lyapunov inequalities as a linear program.
\begin{lem}\label{lem-linear-norms}
Let $p,p'\in \re^n_{++}$ be positive vectors, and $A\in \re^{n\times n}_+$. Then, we have
\begin{equation}\label{eq-conicnorm-lp} \forall x\in \re^n_{+}, \, V_{p}(Ax)\leq V_{p'}(x)\quad \iff \quad Ap' \leq p ,\end{equation} where the vector inequalities are to be understood componentwise.
\end{lem}
{\it Proof.}
$\Rightarrow:$ Taking $x=p'$ in the left-hand side of (\ref{eq-conicnorm-lp}), and taking into account that $V_{p'}(p')=1,$ we obtain that $V_{p}(Ap')\leq 1,$ and then $$Ap'\leq p.$$
$\Leftarrow:$ First, remark that for any pair of nonnegative vectors $y,z\in\re_+^n$ and any nonnegative matrix $A\in \re^{n\times n}_+,$ $y\leq z$ implies $Ay\leq Az.$ Now, take any vector $x\in\re^n_{+},$ and denote $\gamma=V_{p'}(x).$ We have \begin{eqnarray} x/\gamma &\leq & p'\\ Ax/\gamma &\leq &Ap'\\&\leq &p. \end{eqnarray} Thus, $$V_p(Ax)=\inf\{\lambda: Ax/\lambda \leq p\}\leq \gamma. $$
{\flushright $\Box$}
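For nonnegative data, $V_p(x)=\max_i x_i/p_i$, and the equivalence of Lemma \ref{lem-linear-norms} can be checked numerically. A sketch assuming \texttt{numpy} (the construction of $A$, $p$, $p'$ below is ours, rescaled so that $Ap'\leq p$ holds):

```python
import numpy as np

def V(p, x):
    """V_p(x) = inf{lam : x/lam <= p} = max_i x_i / p_i for x >= 0, p > 0."""
    return float(np.max(x / p))

rng = np.random.default_rng(0)
n = 4
p = rng.uniform(1.0, 2.0, n)
pp = rng.uniform(1.0, 2.0, n)            # plays the role of p'
A = rng.uniform(0.0, 1.0, (n, n))
A *= 0.9 / max(1e-12, float(np.max(A @ pp / p)))   # rescale so A p' <= p
```

With $Ap'\leq p$ enforced, the inequality $V_p(Ax)\leq V_{p'}(x)$ should hold for every nonnegative $x$, which is exactly the "$\Leftarrow$" direction of the lemma.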
We can now present our main result.
\begin{thm}\label{theo-stab-implies-pc} A set of \conicnorm{} Lyapunov inequalities (like defined in (\ref{eq-linear-norms})) is a sufficient condition for stability if and only if the corresponding graph is path-complete.
\end{thm}
{\it Proof.}
The if part is exactly Theorem \ref{thm-path.complete.implies.stability}. We now prove the converse: for any non-path-complete graph, we constructively provide a set of matrices that satisfies the corresponding Lyapunov inequalities (with \conicnorm{} Lyapunov functions), but which is not stable.
{\bf The counterexample}
For a given graph $G$ which is not path-complete, there is a word $w$ that cannot be read as a subword of a sequence of labels on a path in
this graph. We reiterate the construction $\setmat_{ \bar w}$ above with the particular word $\bar w.$ We show below that the set of equations corresponding to $G$ admits
a solution for $\setmat_{ \bar w}$ within the family of \conicnorm{} Lyapunov functions.
{\bf Explicit solution of the Lyapunov inequalities}
We have to construct a vector $p_i$ defining a norm for each node of the graph $G.$ To do this, we construct an \emph{auxiliary graph} $G'$ from the graph $G.$ The set of nodes of $G'$ is the set of couples $$V'=\{(i,l): 1\leq i\leq N, 1\leq l\leq n\}, $$ where $N$ is the number of nodes in $G$ and $n$ is the dimension of the matrices in $\setmat_{ \bar w}$ (that is, each node of $G'$ represents a particular entry of a particular vector $p_i$).
There is an edge in $E'$ from $(i,l)$ to $(j,l')$ if and only if \begin{enumerate} \item there is a matrix $A_k\in \setmat_{ \bar w}$ such that
\begin{equation}\label{eq-auxiliary-graph} (A_k)_{l',l}=1,\end{equation} \item \label{item-label}there is an edge from $i$ to $j$ in $G$ with label $A_k.$ \end{enumerate} We give the label $A_k$ to this edge in $G'.$
We {\bf claim} that if $G$ is not path-complete, $G'$ is acyclic.\\
Indeed, on the one hand, by (\ref{eq-auxiliary-graph}), a cycle $(i,l)\rightarrow \dots \rightarrow (i,l)$ in $G'$ implies the existence of a product of matrices in $\setmat_{ \bar w}$ such that $A_{l,l}=1.$ We can then build, from the right to the left, a nonzero product of length $2n$ by following this cycle (several times, if needed). By Lemma \ref{lem-subproduct}, this implies that one can follow a path in $G'$ of the shape $$(i_1,l_1),\dots,(i_{n-1},l_{n-1}), $$ such that the sequence of labels is $\bar{\bar{w}}=w.$ \\
On the other hand, by item (\ref{item-label}) in our construction of $G',$ any such path in $G'$ corresponds to a path in $G$ with the same sequence of labels, a contradiction.
Let us construct $\setmat_{\bar w}$ and $G'=(V',E')$ as above. It is well known that the nodes of an acyclic graph admit a renumbering $$s: \, V'\rightarrow \mathbb{N}:\quad v\rightarrow s(v) $$ such that there can be a path from $v$ to $v'$ only if $s(v)<s(v')$ (see \cite{kahn62}). We are now in position to define our positive vectors $p_i:$ we assign the $l$th entry of $p_i$ to be equal to $s((i,l))$.
{\bf Proof that the construction is a valid solution.}
Let us now prove that for every edge $i\rightarrow j$ of $G$ with label $A,$ we have $Ap_i\leq p_j$ (where the inequality is entrywise). This, together with Lemma \ref{lem-linear-norms}, proves that for all $x,$ $V_{p_j}(Ax)\leq V_{p_i}(x).$
Take any edge $(i,j)$ in $E$ with label $A,$ and take an arbitrary index $l'$ such that $(Ap_i)_{l'}\neq 0.$ Since each row of $A$ has at most one nonzero entry (by construction of $\setmat_{\bar w}$), there is a particular index $l$ with $A_{l',l}=1$ and $(Ap_i)_{l'}=(p_i)_{l}.$ Moreover, $A_{l',l}=1$ together with $(i,j)\in E$ implies that there is an edge $((i,l)\rightarrow (j,l'))\in E',$ and hence $s((i,l))<s((j,l')),$ that is, $(p_i)_{l}< (p_j)_{l'}.$ The proof is complete.
{\flushright $\Box$}
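The whole machinery of the proof can be exercised on the smallest instance: $G$ has one node with a single self-loop labeled $A_1$, so the word $w=2$ cannot be read, $\bar w=2$ and $n=2$. The sketch below (assuming \texttt{numpy}; variable names are ours) builds the auxiliary graph $G'$, numbers it with Kahn's algorithm as in the proof, and verifies $Ap_i\leq p_j$ along every arc, while the set is unstable since $\rho(A_2A_1)=1$.

```python
import numpy as np

# Sigma_{w-bar} for w-bar = "2" (n = 2): edge 1->2 labeled 2, edge 2->1 labeled 1.
A1 = np.array([[0, 0], [1, 0]])
A2 = np.array([[0, 1], [0, 0]])
mats = {1: A1, 2: A2}
G_nodes = [1]
G_edges = [(1, 1, 1)]          # single self-loop on node 1, labeled A_1

# Auxiliary graph G': nodes (i,l); arc (i,l) -> (j,l') iff (A_k)_{l',l} = 1
# for the label k of some arc i -> j of G.
n = 2
Vp = [(i, l) for i in G_nodes for l in range(n)]
Ep = [((i, l), (j, lp))
      for (i, j, k) in G_edges
      for l in range(n) for lp in range(n)
      if mats[k][lp, l] == 1]

# Kahn's algorithm: topological numbering s of the acyclic graph G'.
indeg = {v: 0 for v in Vp}
for (_, v) in Ep:
    indeg[v] += 1
order, s, ready = 1, {}, [v for v in Vp if indeg[v] == 0]
while ready:
    v = ready.pop()
    s[v] = order
    order += 1
    for (u, t) in Ep:
        if u == v:
            indeg[t] -= 1
            if indeg[t] == 0:
                ready.append(t)

# Conic-norm Lyapunov vectors: (p_i)_l = s((i, l)).
p = {i: np.array([s[(i, l)] for l in range(n)], dtype=float) for i in G_nodes}
```

Here $p_1=(1,2)$ and indeed $A_1p_1=(0,1)\leq(1,2)$, so the (non-path-complete) single-loop criterion is satisfied by an unstable set.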
We now provide an analogue of this result for Ellipsoidal Lyapunov functions.
\begin{thm}\label{thm-ellipsoid-contrex} A set of ellipsoidal Lyapunov inequalities is a sufficient condition for stability if and only if the corresponding graph is path-complete.
\end{thm}
{\it Proof.}
The proof is to be found in the expanded version of this paper.
{\flushright $\Box$}
\begin{example}
The graph represented in Figure \ref{fig-hscc-wrong} is \emph{not path-complete}: one can easily check, for instance, that the word $A_1A_2A_1A_2\dots$ cannot be read as a subword of a path in the graph. As a consequence, the set of Equations (\ref{eq-hscc-wrong}) is not a valid condition for stability, even though it is very similar to (\ref{eq-hscc}).\\
As an example, one can check that the set of matrices $$\setmat =\left \{ \begin{pmatrix}
-0.7 & 0.3 & 0.4\\
0.4 & 0 & 0.8\\
-0.7 & 0.5 & 0.7
\end{pmatrix},
\begin{pmatrix} -0.3 & -0.95 & 0\\
0.4 & 0.5 & 0.8\\
-0.6 & 0 & 0.2
\end{pmatrix}\right \} $$ make (\ref{eq-hscc-wrong}) feasible, even though this set is unstable. Indeed, $\rho(\setmat)\geq \rho(A_1A_2A_1)^{1/3}=1.01\dots.$
\end{example}
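The claimed instability of this explicit set can be verified numerically (assuming \texttt{numpy}):

```python
import numpy as np

# The two matrices of the example above.
A1 = np.array([[-0.7, 0.3, 0.4],
               [ 0.4, 0.0, 0.8],
               [-0.7, 0.5, 0.7]])
A2 = np.array([[-0.3, -0.95, 0.0],
               [ 0.4,  0.5,  0.8],
               [-0.6,  0.0,  0.2]])

# Lower bound on the JSR from the periodic product A_1 A_2 A_1.
rho3 = float(np.max(np.abs(np.linalg.eigvals(A1 @ A2 @ A1)))) ** (1.0 / 3.0)
```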
\begin{figure}[ht]
\centering \scalebox{.4} {\includegraphics{graphe-hscc-wrong.eps}}
\caption{A graph corresponding to the LMIs in Equation (\ref{eq-hscc-wrong}). The graph is not path-complete: one can easily check for instance that the word $A_1A_2A_1A_2\dots$ cannot be read as a path in the graph.}
\label{fig-hscc-wrong}
\end{figure}
\begin{equation}\label{eq-hscc-wrong}
\begin{array}{rll}
A_1^TP_1A_1&\prec&P_1 \\
A_2^TP_1A_2&\prec&P_2 \\
A_2^TP_2A_2&\prec&P_1 \\
A_2^TP_2A_2&\prec&P_2 \\
P_1,P_2&\succ&0.
\end{array}
\end{equation}
\subsection{PSPACE-completeness of the recognizability problem}
Our results imply that it is PSPACE-complete to recognize sets of LMIs that are valid stability criteria, as we now show.
\begin{thm}\label{thm-pspace}
Given a set of ellipsoidal Lyapunov inequalities, or a set of \conicnorm{} Lyapunov inequalities, it is PSPACE-complete to decide whether they constitute a valid stability criterion.
\end{thm}
{\it Proof.}
The proof is to be found in the expanded version of this paper.
{\flushright $\Box$}
\section{Conclusion}\label{section-conclusion}
We proved that the only sets of Lyapunov inequalities that imply stability are the ones that correspond to path-complete graphs \emph{in the case of ellipsoidal, or \conicnorm{} Lyapunov inequalities.}
As explained above, our results are not only important for proving stability of switching systems, but also for the more general goal of approximating the joint spectral radius.
Our work leads to several interesting open questions:
It is natural to wonder whether there are other families of Lyapunov functions for which Theorem \ref{theo-stab-implies-pc} fails to hold. That is, are there families of Lyapunov functions such that the class of valid sets of Lyapunov inequalities is larger than the path-complete ones?\\ As an example, one might look at the \emph{Complex Polytope Lyapunov functions} (see \cite{GZalgorithm,jungersprotasov09} for a study of these Lyapunov functions). We have not found such examples yet. Also, the techniques analyzed here seem to generalize to other hybrid systems, and to the analysis of other joint spectral characteristics, like the joint spectral subradius or the Lyapunov exponent. Finally, our PSPACE-completeness proof does not work for graphs with two different labels, i.e., for sets of two matrices, and the complexity of recognizing path-completeness is left open in that case.
\section{Acknowledgements}
We would like to thank Marie-Pierre B\'eal and Julien Cassaigne for helpful discussions on the recognizability problem.
\section{Introduction}
Zero-knowledge protocols \cite{STOC:GolMicRac85} are a fundamental tool in modern cryptography in which a prover convinces a verifier that some statement is true without revealing any additional information. This security property is formalized via \emph{simulation}: the view of any (even malicious) verifier $V^*$ can be simulated in polynomial time (without access to, e.g., an $\mathsf{NP}$ witness for the statement).
Although the zero-knowledge property sounds almost paradoxical, it is achieved by designing a simulator $S^{V^*}$ that makes use of $V^*$ in ways that the honest protocol execution cannot, thereby resolving the apparent paradox. In the simplest and most common setting, the key simulation technique is \emph{rewinding}. Given an interactive adversary $A$, an oracle algorithm $S^{A}$ is said to rewind the adversary if it saves the state of $A$ midway through an execution in order to run $A$ \emph{multiple times} on different inputs. Rewinding is ubiquitous in the analysis of interactive proof systems, establishing properties such as zero-knowledge \cite{STOC:GolMicRac85,FOCS:GolMicWig86}, soundness \cite{STOC:Kilian92}, and knowledge-soundness \cite{STOC:GolMicRac85,C:BelGol92}.
However, since the foundational techniques of interactive proof systems were established, our conception of what constitutes efficient computation has fundamentally changed. Both in theory~\cite{FOCS:Shor94} and in practice~\cite{Google}, quantum computers appear to have capabilities beyond that of any efficient classical computer. Thus, it is imperative to analyze security against quantum adversaries. In this work, we consider this question for zero-knowledge protocols.
\begin{center}
\emph {When do classical zero-knowledge protocols remain secure against quantum adversaries?}
\end{center}
At a minimum, such protocols must be based on post-quantum cryptographic assumptions. However, since zero-knowledge is typically proved via rewinding, resolving this question also entails understanding \emph{to what extent we can rewind quantum adversaries}. Unfortunately, rewinding quantum adversaries is notoriously difficult because an adversary's internal state may be disturbed if any classical information is recorded about its response, potentially rendering it useless for subsequent executions~\cite{vandeGraaf97,STOC:Watrous06,EC:Unruh12,FOCS:AmbRosUnr14}.
By now, a few techniques exist to rewind quantum adversaries~\cite{STOC:Watrous06,EC:Unruh12,EC:Unruh16,C:ChiChuYam21,C:AnaChuLap21,FOCS:CMSZ21}, but the range of protocols to which these techniques apply remains quite limited. As a basic example, Watrous's zero-knowledge simulation technique \cite{STOC:Watrous06} applies to the standard \cite{FOCS:GolMicWig86} zero-knowledge proof system for graph isomorphism but (as noted in \cite{EC:Unruh12,FOCS:AmbRosUnr14}) does \emph{not} apply to the related~\cite{FOCS:GolMicWig86} zero-knowledge proof system for graph \emph{non}-isomorphism (GNI). Recall that in the GNI protocol, the prover $P$ wants to convince the verifier $V$ that two graphs $G_0, G_1$ are not isomorphic. To do so, the verifier sends a random isomorphic copy $H$ of $G_b$ for a uniformly random bit $b$, to which the prover returns $b$.\footnote{For this overview, we focus on the soundness $1/2$ case, but appropriate parallel repetition of this step reduces the soundness error.} However, to ensure zero-knowledge, the verifier first gives a proof of knowledge (PoK) that $H$ is isomorphic to either $G_0$ or $G_1$ via a variant of the parallel-repeated graph \emph{isomorphism} $\Sigma$-protocol. Intuitively, this ensures that a malicious verifier $V^*$ already \emph{knows} $b$ and hence does not learn anything new from the interaction.
The classical zero-knowledge simulator for the GNI protocol has two steps:
\begin{enumerate}
\item \textbf{Extract} an isomorphism $\pi$ satisfying $\pi(H) = G_b$ for some $b$ using \emph{multiple} valid PoK responses from the malicious verifier $V^*$.
\item \textbf{Simulate} the view of $V^*$ in a real interaction by returning $b$ (computed efficiently from $\pi$).
\end{enumerate}
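The honest GNI round described above can be sketched concretely for tiny graphs (an illustrative toy assuming \texttt{numpy}; the brute-force isomorphism test stands in for the unbounded prover, and all names are ours):

```python
import numpy as np
from itertools import permutations

def isomorphic(G, H):
    """Brute-force graph isomorphism on adjacency matrices (tiny graphs only)."""
    n = len(G)
    for perm in permutations(range(n)):
        P = np.eye(n, dtype=int)[list(perm)]
        if np.array_equal(P @ G @ P.T, H):
            return True
    return False

def gni_round(G0, G1, rng):
    """One soundness-1/2 round: the verifier sends a random isomorphic copy H
    of G_b; the (unbounded) honest prover answers with the bit it recovers."""
    b = int(rng.integers(2))
    perm = rng.permutation(len(G0))
    P = np.eye(len(G0), dtype=int)[perm]
    H = P @ (G0 if b == 0 else G1) @ P.T
    b_prime = 0 if isomorphic(H, G0) else 1
    return b, b_prime

# Triangle vs. path on 3 vertices: non-isomorphic, so the prover always wins.
G0 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
G1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
rng = np.random.default_rng(1)
```

When $G_0\cong G_1$, the prover's answer is independent of $b$ and it succeeds with probability $1/2$ per round, which is exactly the soundness argument.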
It turns out that this kind of extract-and-simulate approach is beyond the reach of existing quantum rewinding techniques, because all known techniques for extracting information from \emph{multiple executions} of the adversary~\cite{EC:Unruh12,EC:Unruh16,FOCS:CMSZ21} fundamentally disturb the state. While this particular example just concerns the GNI protocol, this extract-and-simulate approach is very widely applicable, especially in the context of \emph{composition}, including protocols that follow the ``FLS trapdoor paradigm'' \cite{STOC:FeiSha90} or make use of extractable commitments \cite{EC:RicKil99,FOCS:PraRosSah02,TCC:Rosen04}. Given this state of affairs, we ask:
\begin{center}
\em When is it possible to \emph{undetectably} extract information from a quantum adversary?
\end{center}
As per the above discussion, if it is possible to \emph{undetectably} extract from the proof-of-knowledge subroutine in the \cite{FOCS:GolMicWig86} GNI protocol, then the full protocol is zero-knowledge against quantum adversaries.
\subsection{This Work}
In this work, we develop new techniques for quantum rewinding in the context of extraction and zero-knowledge simulation:
\begin{enumerate}
\item[(1)] Our first contribution is to give a quantum analogue of the \emph{extract-and-simulate} paradigm used in many classical zero-knowledge protocols, in which a simulator uses information extracted from \emph{multiple protocol transcripts} to simulate the verifier's view. The key difficulty in the quantum setting is to extract this information without causing any noticeable disturbance to the verifier's quantum state --- beyond the disturbance caused by a single protocol execution.
While the recent techniques of~\cite{FOCS:CMSZ21} allow extracting from multiple protocol transcripts, a major problem is that their extractor noticeably disturbs the adversary's state. We revisit the~\cite{FOCS:CMSZ21} approach for extraction and, using several additional ideas, construct an \emph{undetectable} extractor for a broad class of protocols. Using this extraction technique, we prove that the original~\cite{FOCS:GolMicWig86} protocol for graph non-isomorphism and some instantiations of the \cite{STOC:FeiSha90} protocol for $\mathsf{NP}$ are zero-knowledge against quantum adversaries.
\item[(2)] We next turn our attention to the Goldreich-Kahan \cite{JC:GolKah96} zero-knowledge proof system for $\mathsf{NP}$. Informally, analyzing the \cite{JC:GolKah96} proof system presents different challenges as compared to \cite{FOCS:GolMicWig86,STOC:FeiSha90} because in the latter protocols, rewinding is used for \emph{extraction} (after which simulation is straight-line), while in the \cite{JC:GolKah96} protocol, rewinding is used for the \emph{simulation} step (while extraction is trivial/straight-line).
Nevertheless, we show that some of our techniques are also applicable in this setting. We prove that the \cite{JC:GolKah96} protocol is zero-knowledge against quantum adversaries. Our simulator can be viewed as a natural quantum extension of the classical simulator.
Previously,~\cite{C:ChiChuYam21} used different techniques to show that the \cite{JC:GolKah96} protocol is $\varepsilon$-zero-knowledge against quantum adversaries, but their simulation strategy cannot achieve negligible accuracy.
\end{enumerate}
\paragraph{Isn't this impossible?}
As stated above, our results (both (1) and (2)) achieve constant-round black-box zero-knowledge with negligible simulation accuracy. Recently,~\cite{FOCS:CCLY21} showed that there do not exist black-box expected quantum polynomial time (EQPT) simulators for constant-round protocols for any language $L \not\in \mathsf{BQP}$. The source of the disconnect between our results and \cite{FOCS:CCLY21} is an ambiguity in the definition of EQPT. This brings us to our final contribution.
\begin{enumerate}
\item[(3)] We formally study the notion of expected runtime for quantum machines and formulate a model of expected quantum polynomial time simulation that avoids the~\cite{FOCS:CCLY21} impossibility result.
\end{enumerate}
We now discuss these contributions in more detail. To avoid confusion about the formal statements of (1) and (2), we begin by describing (3).
\subsection{Coherent-Runtime Expected Quantum Polynomial Time}
\label{sec:intro-creqpt}
While~\cite{FOCS:CCLY21} do not formally define EQPT,\footnote{When defining quantum zero-knowledge simulation, \cite[Page~12]{FOCS:CCLY21} requires that the simulator is a quantum Turing machine with \emph{expected} polynomial runtime, and refers to \cite{BBBV97} (which uses the~\cite{BV97} definition of a quantum Turing machine) for the quantum Turing machine model. However, as we discuss in \cref{sec:tech-eqpt},~\cite{BV97} restricts quantum Turing machines to have a \emph{fixed} running time (see~\cite[Def~3.11]{BV97}) in order to avoid difficult-to-resolve subtleties about quantum Turing machines with variable running time~\cite{Myers97,Ozawa98a,LindenP98,Ozawa98b}.} implicit in their result is the following computational model, which we call \emph{measured-runtime} EQPT ($\mathsf{EQPT}_{m}$). In this model, a computation on input $\ket{\psi}_\mathcal{A}$ is the following process, for a fixed ``transition'' unitary $U_{\delta}$ (corresponding to a quantum Turing machine transition function $\delta$) and time bound $T = \exp(\lambda)$:\footnote{The exponential time bound is simply for convenience; by Markov's inequality, for any expected polynomial time computation truncating the computation after an exponential number of steps has only a negligible effect on the output state.}
\begin{enumerate}[noitemsep]
\item Initialize a fresh memory/work register $\mathcal{W}$ to $\ket{0^{T}}_{\mathcal{W}}$ and a designated ``halt'' qubit $\mathcal{Q}$ to $\ket{0}$;
\item Repeat for at most $T$ steps:
\begin{enumerate}[nolistsep]
\item measure $\mathcal{Q}$ and halt if it is $1$;
\item
\label[step]{step:apply-unitary} apply $U_{\delta}$ to $\mathcal{A} \otimes \mathcal{Q} \otimes \mathcal{W}$.
\end{enumerate}
\end{enumerate}
The result of the computation is the residual state on $\mathcal{A}$ once the computation has halted. We say that a computation is $\mathsf{EQPT}_{m}$ if for \emph{all} states $\ket{\psi}_{\mathcal{A}}$, the expected running time of this computation is ${\rm poly}(\lambda)$. Using this model, we can give a more precise formulation of the \cite{FOCS:CCLY21} theorem: black-box $\mathsf{EQPT}_{m}$ zero-knowledge simulators for constant-round protocols do not exist. The key feature of the $\mathsf{EQPT}_{m}$ model that enables the \cite{FOCS:CCLY21} result is that the runtime is \emph{measured}.
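To build intuition for the $\mathsf{EQPT}_{m}$ model, the halting loop above can be mimicked classically. The following toy Python sketch (our illustration only; the single-qubit transition rotation and the angle $\theta$ are hypothetical choices, and the registers $\mathcal{A}, \mathcal{W}$ are omitted) tracks the amplitude of the halt qubit $\mathcal{Q}$ and samples the measured runtime, which in this toy case is geometric:

```python
import math
import random

def run_measured(theta, rng, T=10_000):
    """Simulate the EQPT_m loop for a single halt qubit Q.

    Each iteration: measure Q and halt on outcome 1; otherwise collapse
    Q back to |0> and apply the transition rotation U_delta (angle theta).
    Returns the measured runtime, i.e., the number of loop iterations.
    """
    amp1 = 0.0                           # amplitude of |1> on Q; Q starts in |0>
    for t in range(1, T + 1):
        if rng.random() < amp1 * amp1:   # Pr[measure Q = 1]
            return t                     # halted
        amp1 = math.sin(theta)           # collapse to |0>, then rotate by theta
    return T                             # runtime bound T reached

theta = 0.4
rng = random.Random(0)
runs = [run_measured(theta, rng) for _ in range(50_000)]
# After the first (always-0) measurement, halting is geometric with
# success probability sin^2(theta), so E[runtime] = 1 + 1/sin^2(theta).
print(sum(runs) / len(runs), 1 + 1 / math.sin(theta) ** 2)
```

The "for all inputs" quantifier in the $\mathsf{EQPT}_{m}$ definition corresponds here to requiring a polynomial expected runtime for every initial amplitude on $\mathcal{A}$, not just this toy's fixed start state.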
In this work, we consider a different computational model for EQPT simulation called \emph{coherent-runtime EQPT} ($\mathsf{EQPT}_{c}$). In our model, simulators have the ability to run $\mathsf{EQPT}_{m}$ procedures \emph{coherently} --- which yields a superposition over computations with different runtimes --- and then later \emph{uncompute} the runtime by running the same computation in reverse.
Our notion of $\mathsf{EQPT}_{c}$ (see \cref{def:cr-eqpt}) captures (as a special case) computations of the form depicted in \cref{fig:cr-eqpt-simple} on an input $\ket{\phi}_{\mathcal{X}}$, where the result of the computation is the residual state on $\mathcal{X}$. In~\cref{fig:cr-eqpt-simple}, $C_1, C_2, C_3$ are arbitrary polynomial-size quantum circuits and $U$ is a unitary that \emph{coherently implements} an $\mathsf{EQPT}_{m}$ computation.\footnote{Our actual definition is for \emph{uniform} computation, so $(U, C_1, C_2, C_3)$ will have a uniform description.} In slightly more detail, any $\mathsf{EQPT}_{m}$ computation with transition unitary $U_{\delta}$ and runtime bound $T$ can be expressed as a unitary circuit $U = V_T \cdots V_2 V_1$ where each $V_i$ consists of two steps: (1) $\mathsf{CNOT}$ the halt qubit onto a register $\mathcal{B}_i$, and then (2) apply $U_{\delta}$ controlled on $\mathcal{B}_i = 0$. The unitary $U$ acts on $\mathcal{A} \otimes \mathcal{Q} \otimes \mathcal{W} \otimes \mathcal{B}$ where $\mathcal{B} \coloneqq \mathcal{B}_1 \otimes \cdots \otimes \mathcal{B}_T$. While our full definition of $\mathsf{EQPT}_{c}$ is more general, all the simulators we give can be written in the form of~\cref{fig:cr-eqpt-simple}. In~\cref{fig:cr-eqpt-simple}, the input register is of the form $\mathcal{X} = \mathcal{X}_1 \otimes \mathcal{X}_2$ where $\mathcal{X}_2$ is isomorphic to $\mathcal{A}$.
\begin{figure}[h!]
\centering
\begin{quantikz}
\lstick[wires=2]{$\ket{\phi}_{\mathcal{X}}$} & \gate[wires=2][0.8cm]{C_1}\qwbundle[alternate]{} & \qwbundle[alternate]{} & \gate[wires=2][0.8cm]{C_2}\qwbundle[alternate]{} & \qwbundle[alternate]{} & \gate[wires=2][0.8cm]{C_3}\qwbundle[alternate]{} & \qwbundle[alternate]{} \\
\lstick[wires=1]{} & \qwbundle[alternate]{} & \gate[wires=3][0.8cm]{U}\qwbundle[alternate]{} & \qwbundle[alternate]{} & \gate[wires=3][0.8cm]{U^\dagger}\qwbundle[alternate]{} & \qwbundle[alternate]{} & \qwbundle[alternate]{} \\
\lstick[wires=1]{$\ket{0}_{\mathcal{Q}} \otimes \ket{0^T}_{\mathcal{W}}$} & \qwbundle[alternate]{} & \qwbundle[alternate]{} & \qwbundle[alternate]{} & \qwbundle[alternate]{} & \qwbundle[alternate]{} & \qwbundle[alternate]{} \\
\lstick[wires=1]{$\ket{0^T}_{\mathcal{B}}$} \qwbundle[alternate]{} & \qwbundle[alternate]{} & \qwbundle[alternate]{} & \qwbundle[alternate]{} & \qwbundle[alternate]{} & \qwbundle[alternate]{} & \qwbundle[alternate]{}
\end{quantikz}
\caption{An example of an $\mathsf{EQPT}_{c}$ circuit.}\label{fig:cr-eqpt-simple}
\end{figure}
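The decomposition $U = V_T \cdots V_2 V_1$ can be made concrete in a few lines. The toy Python state-vector sketch below (our illustration, not the paper's formalism; the "machine" is a single halt qubit $\mathcal{Q}$ with a rotation for $U_{\delta}$, and $\mathcal{A}, \mathcal{W}$ are omitted) builds each $V_i$ as described above (CNOT the halt qubit onto $\mathcal{B}_i$, then apply $U_{\delta}$ controlled on $\mathcal{B}_i = 0$), applies $U$ followed by $U^{\dagger}$, and checks that the runtime record in $\mathcal{B}$ is uncomputed exactly:

```python
import math

def _add(state, key, amp):
    state[key] = state.get(key, 0.0) + amp

def cnot(state, i):
    """CNOT the halt qubit Q onto the runtime-record qubit B_i (self-inverse)."""
    out = {}
    for (q, b), a in state.items():
        b2 = list(b)
        b2[i] ^= q
        _add(out, (q, tuple(b2)), a)
    return out

def ctrl_rot(state, theta, i):
    """Apply the rotation U_delta (angle theta) to Q, controlled on B_i = 0."""
    c, s = math.cos(theta), math.sin(theta)
    out = {}
    for (q, b), a in state.items():
        if b[i] == 1:                    # control off: this branch already halted
            _add(out, (q, b), a)
        elif q == 0:                     # |0> -> c|0> + s|1>
            _add(out, (0, b), c * a)
            _add(out, (1, b), s * a)
        else:                            # |1> -> -s|0> + c|1>
            _add(out, (0, b), -s * a)
            _add(out, (1, b), c * a)
    return out

def U(state, theta, T):
    for i in range(T):                   # V_i = (ctrl-U_delta) after CNOT
        state = ctrl_rot(cnot(state, i), theta, i)
    return state

def U_dagger(state, theta, T):
    for i in reversed(range(T)):         # V_i^dag = CNOT after (ctrl-U_delta)^dag
        state = cnot(ctrl_rot(state, -theta, i), i)
    return state

T, theta = 4, 0.7
init = {(0, (0,) * T): 1.0}              # Q = |0>, B = |0^T>
roundtrip = U_dagger(U(dict(init), theta, T), theta, T)
print({k: a for k, a in roundtrip.items() if abs(a) > 1e-12})
```

Of course, $U^{\dagger}U = I$ holds by unitarity alone; the point of the $\mathsf{EQPT}_{c}$ model is that the intermediate circuit $C_2$ may measure something in between, and undetectability of that measurement is exactly what the rest of the paper must establish.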
\noindent
We discuss and motivate the definition of $\mathsf{EQPT}_{c}$ in detail in \cref{sec:tech-eqpt}. For now, we note two key properties of the model. First, the ability to apply $U^{\dagger}$ is what enables us to circumvent the \cite{FOCS:CCLY21} impossibility for $\mathsf{EQPT}_{m}$. Second, we show a result analogous to the statement that any expected \emph{classical} polynomial time computation can be truncated to \emph{fixed} polynomial-time with small error.
\begin{lemma}[informal, see \cref{lemma:truncation}]\label{lemma:intro-truncation}
Any $\mathsf{EQPT}_{c}$ computation can be approximated with $\varepsilon$ accuracy by a quantum circuit of size ${\rm poly}(\lambda,1/\varepsilon)$.
\end{lemma}
Importantly, this lemma ensures that $\mathsf{EQPT}_{c}$ computations cannot break post-quantum polynomial hardness assumptions (unless the assumptions are false). We also note that this lemma implies that black-box zero-knowledge with $\mathsf{EQPT}_{c}$ simulation implies $\varepsilon$-zero-knowledge with strict quantum polynomial time simulation.\footnote{We also note that all of our simulators can be truncated to run in time $\mathsf{poly}(\lambda) \cdot 1/\varepsilon$ and achieve $\varepsilon$-ZK, matching the $\varepsilon$-dependence of the classical simulator.}
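For intuition, the classical analogue of this truncation statement is just Markov's inequality: truncating a randomized procedure with expected runtime $t$ after $t/\varepsilon$ steps changes its output with probability at most $\varepsilon$. A minimal Python sketch (our illustration; the success probability and cutoff are arbitrary toy choices, and the bound is typically very loose):

```python
import random

def las_vegas(rng, p=0.25, cutoff=None):
    """Toy variable-runtime procedure: retry until success (E[steps] = 1/p).

    With a cutoff, the run is truncated and returns None ("output changed").
    """
    t = 0
    while cutoff is None or t < cutoff:
        t += 1
        if rng.random() < p:
            return t
    return None

eps, p = 0.05, 0.25
cutoff = int((1 / p) / eps)     # truncate at E[steps]/eps
rng = random.Random(0)
trials = 20_000
changed = sum(las_vegas(rng, p, cutoff) is None for _ in range(trials))
# Markov: Pr[runtime > E[runtime]/eps] <= eps, so the truncated output
# distribution is within eps of the untruncated one.
print(changed / trials, "<=", eps)
```

The quantum statement is what requires work, since an $\mathsf{EQPT}_{c}$ computation interleaves $U$ and $U^{\dagger}$ rather than simply halting early.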
\paragraph{What does this mean for post-quantum zero knowledge?} Since the introduction of zero-knowledge protocols~\cite{STOC:GolMicRac85}, expected polynomial-time simulation has been the \emph{de facto} model for classical zero knowledge. Although expected polynomial-time simulators cannot actually be run ``in real life'' (to negligible accuracy), the security notion captures negligible accuracy simulation in a computational model that \emph{cannot break polynomial hardness assumptions}. Moreover, since~\cite{STOC:BarLin02} rules out \emph{strict} polynomial-time simulation for constant-round protocols, expected polynomial-time simulation captures the strongest\footnote{Other models of efficient simulation \cite{Levin,eprint:Klooss20} have been proposed (in the classical setting), but they are \emph{relaxations} of expected polynomial time simulation. It should be possible to define similar relaxations in the quantum setting, but we focus on obtaining an analogue to ``standard'' expected polynomial time simulation.} provable zero-knowledge properties of many fundamental protocols such as quadratic non-residuosity~\cite{STOC:GolMicRac85}, graph non-isomorphism~\cite{FOCS:GolMicWig86}, Goldreich-Kahan~\cite{JC:GolKah96}, and Feige-Shamir~\cite{STOC:FeiSha90}.
The combination of our positive results and the~\cite{FOCS:CCLY21} negative results transports this state of affairs entirely to the post-quantum setting. In particular, the conclusion we draw from the~\cite{FOCS:CCLY21} negative result is that one must go beyond the $\mathsf{EQPT}_{m}$ model in order to find the right quantum analogue of classical expected polynomial-time zero knowledge simulation. We propose $\mathsf{EQPT}_{c}$ to be that quantum analogue.
\vspace{10pt}
With this discussion in mind, we proceed to describe our results on post-quantum zero-knowledge and extraction in more detail.
\subsection{Results on Zero Knowledge}
Our main results regarding post-quantum zero knowledge are as follows. First, we show that the \cite{FOCS:GolMicWig86} graph non-isomorphism protocol is zero knowledge against quantum verifiers.
\begin{theorem}\label{thm:szk}
The \cite{FOCS:GolMicWig86} $4$-message proof system for graph non-isomorphism is a post-quantum statistical zero-knowledge proof system. The zero-knowledge simulator is black-box and runs in $\mathsf{EQPT}_{c}$.
\end{theorem}
The \cite{FOCS:GolMicWig86} GNI protocol follows a somewhat general template using instance-dependent commitments \cite{BelMicOst90,JC:ItoOhtShi97,C:MicVad03}; we believe \cref{thm:szk} should extend to other instantiations of this paradigm (e.g. for lattice problems).
With some additional work, we use similar techniques to show how to instantiate the ``extract-and-simulate'' paradigm of Feige-Shamir \cite{STOC:FeiSha90} in the post-quantum setting.
\begin{theorem}\label{thm:feige-shamir}
Assuming super-polynomially secure non-interactive commitments, a particular instantiation of the \cite{STOC:FeiSha90} $4$-message argument system for $\mathsf{NP}$ is (sound and) zero-knowledge against quantum verifiers. The zero-knowledge simulator is black-box and runs in $\mathsf{EQPT}_{c}$.
\end{theorem}
Finally, using a different approach, we show that the Goldreich-Kahan \cite{JC:GolKah96} zero-knowledge proof system remains ZK against quantum adversaries.
\begin{theorem}\label{thm:gk}
When instantiated using a collapse-binding and statistically-hiding commitment scheme, the \cite{JC:GolKah96} protocol is zero-knowledge with a black-box $\mathsf{EQPT}_{c}$ simulator.
\end{theorem}
As a bonus, the simulator we construct in \cref{thm:gk} bears a strong resemblance to the \emph{classical} Goldreich-Kahan simulator, giving a clean conceptual understanding of constant-round zero knowledge in the quantum setting.
\subsection{Results on Extraction}
As alluded to in the introduction, \cref{thm:szk} and \cref{thm:feige-shamir} are proved using new results on post-quantum \emph{extraction}. We achieve ``undetectable extraction'' under the following definition of a \emph{state-preserving proof of knowledge}.\footnote{This is a quantum analogue of \emph{witness-extended emulation}~\cite{CCC:BarGol02}. Our definition is also similar to a definition appearing in~\cite{C:AnaChuLap21}, although they only consider the setting of \emph{statistical} state preservation.}
\begin{definition}\label{def:state-preserving-extraction}
An interactive protocol $\Pi$ is defined to be a \textdef{state-preserving argument (resp. proof) of knowledge} if there exists an extractor $\mathsf{Ext}^{(\cdot)}$ with the following properties:
\begin{itemize}
\item \textbf{Syntax}: For any quantum algorithm $P^*$ and auxiliary state $\ket{\psi}$, $\mathsf{Ext}^{P^*, \ket{\psi}}$ outputs a protocol transcript $\tau$, prover state $\ket{\psi'}$, and witness $w$.
\item \textbf{Extraction Efficiency}: If $P^*$ is a QPT algorithm, $\mathsf{Ext}^{P^*, \ket{\psi}}$ runs in expected quantum polynomial time ($\mathsf{EQPT}_{c}$).
\item \textbf{Extraction Correctness}: the probability that $\tau$ is an accepting transcript but $w$ is an invalid $\mathsf{NP}$ witness is negligible.
\item \textbf{State-Preserving}: the pair $(\tau, \ket{\psi'})$ is computationally (resp. statistically) indistinguishable from a transcript-state pair $(\tau^*, \ket{\psi^*})$ obtained through an honest one-time interaction with $P^*(\cdot, \ket{\psi})$ (where $\ket{\psi^*}$ is the prover's residual state).
\end{itemize}
\end{definition}
Proofs/arguments of knowledge are typically used (rather than just sending an $\mathsf{NP}$ witness) to achieve either \emph{succinctness} \cite{STOC:Kilian92} or \emph{security against the verifier} (e.g., witness indistinguishability) \cite{FOCS:GolMicWig86,STOC:FeiSha90}. We show that standard $3$- and $4$-message protocols in both of these settings are state-preserving proofs/arguments of knowledge.
\begin{theorem}[State-preserving succinct arguments] \label{thm:succinct-state-preserving}
Assuming collapsing hash functions exist, there exists a 4-message public-coin state-preserving succinct argument of knowledge for $\mathsf{NP}$.
\end{theorem}
For witness indistinguishability (WI), we have three related constructions achieving slightly different properties under different computational assumptions.
\begin{theorem}[State-preserving WI arguments] \label{thm:state-preserving-wi}
Assuming collapsing hash functions or \emph{super-polynomially secure} one-way functions, there exists a 4-message public-coin state-preserving witness-indistinguishable argument of knowledge (under collapsing) or proof of knowledge (under one-way functions). Assuming \emph{super-polynomially secure} non-interactive commitments, there exists a 3-message proof of knowledge achieving the same properties.
\end{theorem}
In fact, as we will explain in the technical overview, we give explicit conditions under which any proof/argument of knowledge is also state-preserving.
One special case of \cref{thm:state-preserving-wi} that we would like to highlight is that of \emph{extractable commitments} \cite{FOCS:PraRosSah02,TCC:PasWee09}. An extractable commitment scheme $\mathsf{ExtCom}$ is a commitment scheme with the property that a committed message $m$ can be extracted given black-box access to an adversarial sender (provided that the adversary is sufficiently convincing). Analogously to the setting of proofs-of-knowledge, we consider ``state-preserving'' extractable commitments (see, e.g., \cite{STOC:BitShm20,EC:GLSV21,C:BCKM21b}), in which the extractor must simulate the entire view of the adversarial committer in addition to extracting the message. This variant of extractable commitments is quite natural; for example, it is exactly the property necessary to prove the post-quantum security of the \cite{TCC:Rosen04} zero-knowledge proof system for $\mathsf{NP}$. An immediate corollary of \cref{thm:state-preserving-wi} is a new construction of state-preserving extractable commitments.
\begin{corollary}[Extractable commitments]
Assuming super-polynomially secure non-interactive commitments, there exists a $3$-message public-coin post-quantum statistically-binding \emph{extractable commitment scheme}. Assuming super-polynomially secure one-way functions, there exists a $4$-message scheme with the same properties. Finally, assuming (polynomially secure) collapsing hash functions, there exists a $4$-message public-coin collapse-binding extractable commitment scheme.
\end{corollary}
We leave open the problem of using these techniques to achieve a statistically-binding extractable commitment scheme from polynomial assumptions.
More generally, we expect our state-preserving extraction results to be useful for future applications, both in the context of zero-knowledge and beyond.
\section{State-Preserving Extraction}\label{sec:state-preserving}
So far, we have constructed $\mathsf{EQPT}_{m}$ \emph{guaranteed extractors} for various protocols of interest (\cref{sec:high-probability-extractor}) and established the $\mathsf{EQPT}_{c}$ model that allows for \emph{state-preserving} extraction (\cref{sec:eqpt}). In this section, we prove a generalization of \cref{lemma:tech-overview-state-preserving-high-probability}, showing how to convert an $\mathsf{EQPT}_{m}$ guaranteed extractor into a state-preserving $\mathsf{EQPT}_{c}$ extractor.
In \cref{sec:hp-to-sp}, we write down an explicit reduction from state-preserving extraction to guaranteed extraction and prove \cref{lemma:state-preserving-high-probability}, which gives a condition (\cref{def:witness-binding}) under which the reduction is valid (intuitively capturing ``computational uniqueness'' of the witness given the first message of the protocol). Then, in \cref{sec:state-preserving-examples}, we show examples to which \cref{lemma:state-preserving-high-probability} applies; namely, protocols for languages with unique (partial) witnesses and general commit-and-prove protocols. Finally, in \cref{sec:state-preserving-main-theorems}, we conclude \cref{thm:succinct-state-preserving,thm:state-preserving-wi}.
\subsection{From Guaranteed Extraction to State-Preserving Extraction}
\label{sec:hp-to-sp}
We first recall our definition of state-preserving proofs of knowledge (\cref{def:state-preserving-extraction}).
\begin{definition}
An interactive protocol $\Pi$ is defined to be a \textdef{state-preserving argument (resp. proof) of knowledge} if there exists an extractor $\mathsf{Ext}^{(\cdot)}$ with the following properties:
\begin{itemize}
\item \textbf{Syntax}: For any quantum algorithm $P^*$ and auxiliary state $\ket{\psi}$, $\mathsf{Ext}^{P^*, \ket{\psi}}$ outputs a protocol transcript $\tau$, prover state $\ket{\psi'}$, and witness $w$.
\item \textbf{Extraction Efficiency}: If $P^*$ is a QPT algorithm, $\mathsf{Ext}^{P^*, \ket{\psi}}$ runs in expected quantum polynomial time ($\mathsf{EQPT}_{c}$).
\item \textbf{Extraction Correctness}: the probability that $\tau$ is an accepting transcript but $w$ is an invalid $\mathsf{NP}$ witness is negligible.
\item \textbf{State-Preserving}: the pair $(\tau, \ket{\psi'})$ is computationally (resp. statistically) indistinguishable from a transcript-state pair $(\tau^*, \ket{\psi^*})$ obtained through an honest one-time interaction with $P^*(\cdot, \ket{\psi})$ (where $\ket{\psi^*}$ is the prover's residual state).
\end{itemize}
\end{definition}
We now introduce the notion of ``witness-binding'' protocols, i.e., protocols that are collapse-binding with respect to functions of the witness $w$. For an adversary $\mathsf{Adv}$ and an interactive protocol $(P,V)$, we define a witness-binding experiment $\mathsf{Exp}^{\mathsf{Adv}}_{\mathsf{wb}}(b,\mathsf{Pred},f,\lambda)$ parameterized by a challenge bit $b$, a predicate $\mathsf{Pred}$, and a function $f$.
\begin{enumerate}
\item The challenger generates the first verifier message $\mathsf{vk}$ and sends it to $\mathsf{Adv}$; skip this step if the protocol is a 3-message protocol.
\item $\mathsf{Adv}$ replies with a classical instance $x$, classical first prover message $a$, and a quantum state on registers $\mathcal{W}_{\mathrm{witness}} \otimes \mathcal{Y}_{\mathrm{aux}}$.
\item The challenger performs a binary-outcome projective measurement to learn the output of $\mathsf{Pred}(x,\mathsf{vk},a,\cdot,\cdot)$ on $\mathcal{W}_{\mathrm{witness}}\otimes \mathcal{Y}_{\mathrm{aux}}$. If the output is $0$, the experiment aborts.
\item If $b = 0$, the challenger does nothing. If $b = 1$, the challenger initializes a fresh ancilla $\mathcal{K}$ to $\ket{0}_{\mathcal{K}}$, applies the unitary $U_f$ (acting on $\mathcal{W}_{\mathrm{witness}} \otimes \mathcal{K}$) that computes $f(\cdot)$ on $\mathcal{W}_{\mathrm{witness}}$ and XORs the output onto $\mathcal{K}$, measures $\mathcal{K}$, and then applies $U_f^\dagger$.
\item The challenger returns the $\mathcal{W}_{\mathrm{witness}} \otimes \mathcal{Y}_{\mathrm{aux}}$ registers to $\mathsf{Adv}$. Finally, $\mathsf{Adv}$ outputs a bit $b'$, which is the output of the experiment (if the experiment has not aborted).
\end{enumerate}
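The heart of the experiment is the $b = 1$ branch of step 4: compute $f$ into a fresh ancilla, measure it, and uncompute. The following toy state-vector sketch (our illustration only; the function \texttt{measure\_f}, the string labels, and the deterministic choice of the most likely outcome are hypothetical simplifications, and the actual definition requires undetectability only against \emph{computationally bounded} adversaries) shows why this is undetectable exactly when $f$ of the witness register is determined: a register supported on a single $f$-value is unchanged, while a superposition over distinct $f$-values collapses.

```python
import math

def measure_f(state, f):
    """Coherently compute f into an ancilla, measure it, and uncompute.

    state: dict mapping basis labels w -> real amplitude. For a
    deterministic demo, return the post-measurement state for the
    most likely outcome of the ancilla measurement.
    """
    branches = {}
    for w, amp in state.items():
        branches.setdefault(f(w), {})[w] = amp   # group branches by f(w)
    probs = {v: sum(a * a for a in br.values()) for v, br in branches.items()}
    v = max(probs, key=probs.get)                # most likely outcome
    norm = math.sqrt(probs[v])
    return {w: a / norm for w, a in branches[v].items()}

last_bit = lambda w: w[-1]

# witness register with a determined f-value: the measurement changes nothing
classical = {"w0": 1.0}
assert measure_f(classical, last_bit) == classical

# superposition over witnesses with distinct f-values: the measurement collapses it
superposed = {"w0": 1 / math.sqrt(2), "w1": 1 / math.sqrt(2)}
collapsed = measure_f(superposed, last_bit)
print(sorted(collapsed))                         # only one branch survives
```

Witness binding asserts that no efficient adversary can put the $\mathcal{W}_{\mathrm{witness}}$ register into a detectable superposition of this second kind while passing the $\mathsf{Pred}$ check.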
\begin{definition}[$(\mathsf{Pred},f)$-binding to the witness]\label{def:witness-binding}
A 3- or 4-message protocol is witness-binding with respect to predicate $\mathsf{Pred}$ and function $f$ if for any computationally bounded quantum adversary $\mathsf{Adv}$,
\[
\Big| \Pr[\mathsf{Exp}^{\mathsf{Adv}}_{\mathsf{wb}}(0,\mathsf{Pred},f,\lambda) = 1]
- \Pr[\mathsf{Exp}^{\mathsf{Adv}}_{\mathsf{wb}}(1,\mathsf{Pred},f,\lambda) = 1] \Big|
\leq {\rm negl}(\lambda).\]
\end{definition}
Next, we write down a general-purpose reduction from state-preserving extraction to guaranteed extraction and show (\cref{lemma:state-preserving-high-probability}) that the reduction is valid under an appropriate witness-binding assumption.
\begin{lemma} \label{lemma:state-preserving-high-probability}
Suppose that $(P_{\Sigma}, V_{\Sigma})$ is a post-quantum proof/argument of knowledge with guaranteed extraction. We optionally assume that the extractor $\mathsf{Extract}^{P^*}$ outputs some auxiliary information $y$ in addition to the witness $w$. We then make the following additional assumptions with respect to a predicate $\mathsf{Pred}$:
\begin{itemize}
\item The protocol $(P_{\Sigma}, V_{\Sigma})$ is $(\mathsf{Pred}, f = \mathsf{Id})$-witness binding, and
\item The tuple $(w,y)$ output by the guaranteed extractor $\mathsf{Extract}^{P^*}$ satisfies $\mathsf{Pred}(\mathsf{vk}, x, a, w, y) = 1$ with $1-{\rm negl}$ probability.
\end{itemize}
Then, $(P_{\Sigma}, V_{\Sigma})$ is a state-preserving proof/argument of knowledge with $\mathsf{EQPT}_{c}$ extraction.
\end{lemma}
\begin{remark}
This lemma is stated with respect to $f = \mathsf{Id}$ to match the state-preserving proof of knowledge abstraction; however, we also consider (\cref{cor:state-preserving-partial-witness}) versions of this reduction where $f\neq \mathsf{Id}$.
\end{remark}
\begin{proof}
We want to show that $(P_{\Sigma},V_{\Sigma})$ is a state-preserving proof/argument of knowledge. We begin by describing our candidate state-preserving extractor $\overline{\mathsf{Extract}}^{P^*}$.
\begin{construction}\label{construction:state-preserving-reduction}
Let $\mathsf{Extract}^{P^*}$ be a post-quantum guaranteed extractor (\cref{def:high-probability-extraction-body}). We present an $\mathsf{EQPT}_{c}$ extractor $\overline{\mathsf{Extract}}^{P^*}$ that has the form of an $\mathsf{EQPT}_{c}$ computation (see~\cref{fig:cr-eqpt-simple}) where the unitary $U$ is a coherent implementation of the following $\mathsf{EQPT}_{m}$ computation on input register $\mathcal{H} \otimes \mathcal{R} \otimes \mathcal{S}$:
\begin{enumerate}
\item Measure $\mathcal{R} \otimes \mathcal{S}$ with the projective measurement
\[\BMeas{\ketbra{+_R}_{\mathcal{R}} \otimes \ketbra{0}_{\mathcal{S}}}.\]
If the output is $0$, abort.
\item If the output is $1$, we are guaranteed that $\mathcal{R} \otimes \mathcal{S}$ is $\ket{+_R}_{\mathcal{R}} \otimes \ket{0}_{\mathcal{S}}$. Run $\mathsf{Extract}^{P^*}$ on prover state $\mathcal{H}$ using $\mathcal{R}$ as the superposition of challenges (in~\cref{step:ge-run-coherently} of~\cref{def:high-probability-extraction-body}). We assume that the randomness $\mathsf{Extract}^{P^*}$ uses to sample a classical random $\mathsf{vk}$ is generated by applying a Hadamard to a subregister of $\mathcal{S}$.
Write everything that is measured/obtained during the execution of $\mathsf{Extract}^{P^*}$ onto subregisters of $\mathcal{S}$. This includes the instance $x$, the first two messages of the 4-message protocol $(\mathsf{vk},a)$, the bit $b$ indicating the verifier's decision (i.e., whether the prover succeeds when run on the uniform superposition of challenges), and the extracted output $(w,y)$ (if $b = 1$, $w$ is a valid witness for $x$ and $\mathsf{Pred}(x, \mathsf{vk}, a, w, y)=1$ with $1-{\rm negl}(\lambda)$ probability).
\end{enumerate}
The fact that the above computation is in $\mathsf{EQPT}_{m}$ follows from the fact that $\mathsf{Extract}^{P^*}$ is $\mathsf{EQPT}_{m}$. Let $U$ denote its coherent implementation (as in~\cref{sec:eqpt}, $U$ is a unitary on $\mathcal{H} \otimes \mathcal{R} \otimes \mathcal{S}$ and an exponential-size ancilla register).
Our state-preserving $\mathsf{EQPT}_{c}$ extractor $\overline{\mathsf{Extract}}^{P^*}$ takes as input a prover state on $\mathcal{H}$ and does the following.
$\overline{\mathsf{Extract}}^{P^*}:$
\begin{enumerate}[noitemsep]
\item Initialize additional registers $\mathcal{R} \otimes \mathcal{S}$ to $\ket{+_R}_{\mathcal{R}} \ket{0}_{\mathcal{S}}$.
\item Apply $U$.
\item\label[step]{step:measure-extracted-w} Measure the subregister of $\mathcal{S}$ containing $(x,\mathsf{vk},a,b,w)$ where $w = 0$ is interpreted as $\bot$. Note that $\mathcal{S}$ contains a subregister corresponding to $y$, but $y$ is not measured here.
\item\label[step]{step:apply-U-dagger} Apply $U^\dagger$.
\item\label[step]{step:run-P-again} Run the prover $P^*$ on first message $\mathsf{vk}$ to obtain $x,a$ (again). Then run $P^*$ on challenge $\mathcal{R}$. Measure $\mathcal{R}$ to obtain $r$, and measure the register of $\mathcal{H}$ corresponding to its output to obtain $z$. Output $(x,\mathsf{vk},a,r,z,w)$ and $\mathcal{H}$.
\end{enumerate}
\end{construction}
First, we note that the above procedure is $\mathsf{EQPT}_{c}$ by construction. To prove the extraction correctness guarantee, it suffices to show that when $b = 1$, the witness $w$ is valid with $1-{\rm negl}(\lambda)$ probability, and that when $b = 0$, the extractor outputs a rejecting transcript. The former statement follows immediately from the assumption that $\mathsf{Extract}^{P^*}$ is a guaranteed extractor. For the latter, observe (using the definition of $\mathsf{Extract}^{P^*}$ and the fact that $U$ is a coherent implementation of $\mathsf{Extract}^{P^*}$) that when $b = 0$, the state on $\mathcal{H} \otimes \mathcal{R}$ after running $P^*$ to obtain $a$ in~\cref{step:run-P-again} corresponds to a rejecting execution, so the transcript measured in~\cref{step:run-P-again} will be rejecting.
It remains to argue that the state-preserving extractor satisfies the indistinguishability property. Observe that $\overline{\mathsf{Extract}}^{P^*}$ can be rewritten so that $\mathsf{vk},x,a,b$ are no longer obtained by running $\mathsf{Extract}^{P^*}$ coherently as $U$ and then measuring those values afterwards, but instead by running those steps according to the standard $\mathsf{EQPT}_{m}$ implementation of $\mathsf{Extract}^{P^*}$. Thus the only part of $\overline{\mathsf{Extract}}^{P^*}$ that is written as a coherent implementation of a variable-runtime procedure is the $\mathsf{FindWitness}^{P^*}$ subroutine; let $U_{\mathsf{FW}}$ denote the coherent implementation of $\mathsf{FindWitness}^{P^*}$. Note that while $\mathsf{FindWitness}^{P^*}$ is technically not $\mathsf{EQPT}_{m}$ on its own (i.e., there exist inputs that could make it run for too long), the fact that $\mathsf{Extract}^{P^*}$ is $\mathsf{EQPT}_{m}$ ensures that $U_{\mathsf{FW}}$ is only applied on inputs where it runs in expected polynomial time.
Given the above definitions, the output of $\overline{\mathsf{Extract}}^{P^*}$ is perfectly equivalent to the following:
\begin{enumerate}[noitemsep]
\item Sample a random $\mathsf{vk}$, and run the prover $P^*$ to obtain $x,a$.
\item Initialize $\mathcal{R}$ to $\ket{+_R}_{\mathcal{R}}$ and measure $\mathsf{C}$ (this is the binary projective measurement on $\mathcal{H} \otimes \mathcal{R}$ defined in \cref{sec:ge-notation} that measures whether the verifier accepts when the prover with state $\mathcal{H}$ is run on the challenge $\mathcal{R}$).
\item If $\mathsf{C} =1$, apply $U_{\mathsf{FW}}$. Otherwise if $\mathsf{C} = 0$, set $w = \bot$ and skip to~\cref{step:measure-r-z}.
\item Measure the subregister corresponding to the part of the output of $U_{\mathsf{FW}}$ containing $w$. Note that there is also a subregister corresponding to $y$, but $y$ is not measured.
\item Apply $U_{\mathsf{FW}}^\dagger$.
\item\label[step]{step:measure-r-z} Measure $\mathcal{R}$ to obtain $r$ and run the prover $P^*$ on $r$ to obtain its response $z$.
\item\label[step]{step:extractor-output-w} Output $(x,\mathsf{vk},a,r,z,w)$ and $\mathcal{H}$.
\end{enumerate}
Let $\mathsf{Hybrid}_0$ be identical to $\overline{\mathsf{Extract}}^{P^*}$ except that~\cref{step:extractor-output-w} is modified to output $(x,\mathsf{vk},a,r,z)$ and $\mathcal{H}$ (i.e., omitting $w$). To show computational indistinguishability, it suffices to show that the output of $\mathsf{Hybrid}_0$ is computationally indistinguishable from $\mathsf{Hybrid}_1$, defined as follows:
\begin{enumerate}[noitemsep]
\item Sample a random $\mathsf{vk}$, and run the prover $P^*$ to obtain $x,a$.
\item Initialize $\mathcal{R}$ to $\ket{+_R}_{\mathcal{R}}$ and measure $\mathsf{C}$ (this is the binary projective measurement on $\mathcal{H} \otimes \mathcal{R}$ defined in \cref{sec:ge-notation} that measures whether the verifier accepts when the prover with state $\mathcal{H}$ is run on the challenge $\mathcal{R}$).
\item Measure $\mathcal{R}$ to obtain $r$ and run the prover $P^*$ on $r$ to obtain its response $z$.
\item Output $(x,\mathsf{vk},a,r,z)$ and $\mathcal{H}$.
\end{enumerate}
$\mathsf{Hybrid}_1$ corresponds to an honest execution of $P^*$ since the measurement of $\mathsf{C}$ commutes with the measurement of $\mathcal{R}$.
By assumption, in $\mathsf{Hybrid}_0$, the reduced density matrix $\DMatrix_{\mathcal{S}}$ of $\mathcal{S}$ satisfies $\Tr(\Pi_{\mathrm{Valid}} \DMatrix_{\mathcal{S}}) = 1-{\rm negl}(\lambda)$, where $\Pi_{\mathrm{Valid}}$ checks that either $b=0$ or (1) $w$ is a valid witness for $x$ and (2) $\mathsf{Pred}(\mathsf{vk}, x, a, w,y) = 1$. Therefore, the indistinguishability of $\mathsf{Hybrid}_0$ and $\mathsf{Hybrid}_1$ should intuitively follow from the witness-binding property, since if the measurement of $w$ is skipped, then $U_{\mathsf{FW}}$ cancels out with $U_{\mathsf{FW}}^\dagger$. However, to appeal to the guarantee that measuring $w$ is undetectable, we need to ensure that $U_{\mathsf{FW}}$ corresponds to an efficient operation.
We handle this by considering a fixed polynomial-time truncation of $U_{\mathsf{FW}}$. Suppose that a distinguisher can distinguish $\mathsf{Hybrid}_0$ from $\mathsf{Hybrid}_1$ with non-negligible advantage $\varepsilon(\lambda)$. Then we can modify $\mathsf{Hybrid}_0$ to use $U_{\mathsf{FW},\varepsilon}$, a coherent implementation of a strict ${\rm poly}(\lambda,1/\varepsilon)$-runtime algorithm that approximates $\mathsf{FindWitness}^{P^*}$ to precision $\varepsilon/2$. Now the same distinguisher must distinguish between $\mathsf{Hybrid}_{0,\varepsilon}$ and $\mathsf{Hybrid}_{1}$ with advantage $\varepsilon/2$, where $\mathsf{Hybrid}_{0,\varepsilon}$ is the following:
\begin{enumerate}[noitemsep]
\item Sample a random $\mathsf{vk}$, and run the prover $P^*$ to obtain $x,a$.
\item Initialize $\mathcal{R}$ to $\ket{+_R}_{\mathcal{R}}$ and measure $\mathsf{C}$.
\item If $\mathsf{C} =1$, apply $U_{\mathsf{FW},\varepsilon}$. Otherwise if $\mathsf{C} = 0$, set $w = \bot$ and skip to~\cref{step:measure-r-z-eps}.
\item Measure a subregister of the output register of $U_{\mathsf{FW},\varepsilon}$ to obtain $w$.
\item Apply $U_{\mathsf{FW},\varepsilon}^\dagger$.
\item\label[step]{step:measure-r-z-eps} Measure $\mathcal{R}$ to obtain $r$ and run the prover $P^*$ on $r$ to obtain its response $z$.
\item Output $(x,\mathsf{vk},a,r,z)$ and $\mathcal{H}$.
\end{enumerate}
Since $\varepsilon(\lambda)$ is at least $1/\lambda^c$ for some constant $c$ for infinitely many $\lambda$, it follows that $U_{\mathsf{FW},\varepsilon}$ and $U_{\mathsf{FW},\varepsilon}^\dagger$ are ${\rm poly}(\lambda)$-runtime algorithms for infinitely many $\lambda$. Then a distinguisher that distinguishes between $\mathsf{Hybrid}_{0,\varepsilon}$ and $\mathsf{Hybrid}_1$ contradicts the witness-binding property of $(P, V)$. \qedhere
\end{proof}
\subsection{Applying \cref{lemma:state-preserving-high-probability}}\label{sec:state-preserving-examples}
We now show that the witness-binding hypotheses in \cref{lemma:state-preserving-high-probability} are satisfied in two cases of interest: protocols for unique-witness (or partial witness) languages (\cref{cor:state-preserving-unique-witness}), and commit-and-prove protocols (\cref{cor:commit-and-prove}).
\begin{corollary}\label{cor:state-preserving-unique-witness}
Let $L\in \mathsf{UP}$ be a language with unique $\mathsf{NP}$ witnesses. Then, if $L$ has a post-quantum proof of knowledge with guaranteed extraction, it also has a post-quantum state-preserving proof of knowledge.
\end{corollary}
\begin{proof}
This follows immediately from the fact that any protocol for a $\mathsf{UP}$ language is $(\mathsf{Pred}, f)$-witness binding for $\mathsf{Pred} = 1$ (the trivial predicate) and $f = \mathsf{Id}$ (because there is a unique valid witness). Since $\mathsf{Pred} = 1$, any guaranteed extractor also satisfies the $\mathsf{Pred}$-hypothesis of \cref{lemma:state-preserving-high-probability}, so we are done. \qedhere
\end{proof}
We briefly state how \cref{cor:state-preserving-unique-witness} can be extended to languages $L$ with unique \emph{partial} witnesses, provided that the extractor only measures a value $f(x,w)$ that is determined by the instance $x$ alone.
\begin{corollary}\label{cor:state-preserving-partial-witness}
Let $L\in \mathsf{NP}$, and let $f$ be an efficient function such that for all instances $x\in L$ and all witnesses $w\in R_x$, $f(x,w) = g(x)$ is equal to some fixed (possibly inefficient) function of $x$.
Suppose that $L$ has a proof/argument of knowledge $(P_{\Sigma}, V_{\Sigma})$ with guaranteed extraction. Then, a modified variant of $\overline{\mathsf{Extract}}$ (\cref{construction:state-preserving-reduction}), in which only $f(x,w)$ is measured instead of $w$, is a state-preserving proof/argument of knowledge extractor for $(P_{\Sigma}, V_{\Sigma})$ that outputs $g(x)$.
\end{corollary}
This holds by the same reasoning as \cref{cor:state-preserving-unique-witness}: the hypothesis of \cref{cor:state-preserving-partial-witness} implies that any protocol for $L$ is $(\mathsf{Pred} = 1, f)$-witness binding, and so the reduction from \cref{lemma:state-preserving-high-probability} applies (when $f(x,w)$ is measured rather than $w$).
\subsubsection{Commit-and-Prove Protocols}
Let $(P_\Sigma, V_\Sigma)$ denote a post-quantum proof/argument of knowledge with guaranteed extraction (\cref{def:high-probability-extraction-body}). Recall that \cref{def:high-probability-extraction-body} has been designed to capture (first-message) adaptive soundness, in which the prover $P^*$ can adaptively choose the instance $x$ as it sends its first message.
Then, we consider a \emph{commit-and-prove} compiled protocol $(P_{\mathsf{Com}}, V_{\mathsf{Com}})$ using $(P_\Sigma, V_\Sigma)$ and a commitment scheme $\mathsf{Com}$. $(P_{\mathsf{Com}}, V_{\mathsf{Com}})$ is executed as follows:
\begin{itemize}
\item $V_{\mathsf{Com}}$ sends a first message for $(P_{\Sigma}, V_{\Sigma})$ (if the protocol has four messages). Moreover, if $\mathsf{Com}$ is a two-message commitment scheme, $V_{\mathsf{Com}}$ sends a commitment key $\mathsf{ck}$.
\item $P_{\mathsf{Com}}$ then sends:
\begin{itemize}
\item A commitment $\mathsf{com} = \mathsf{Com}(\mathsf{ck}, w)$ to a witness $w$ for the underlying language $L$, and
\item A first prover message for an execution of $(P_{\Sigma}, V_{\Sigma})$ for the statement ``$\exists w, r$ such that $\mathsf{com} = \mathsf{Com}(\mathsf{ck}, w; r)$ and $w$ is an $\mathsf{NP}$-witness for $x\in L$.''
\end{itemize}
\item $P_{\mathsf{Com}}$ and $V_{\mathsf{Com}}$ then complete the execution of $(P_{\Sigma}, V_{\Sigma})$.
\end{itemize}
\begin{corollary}\label{cor:commit-and-prove}
If $(P_{\Sigma}, V_{\Sigma})$ is a post-quantum proof/argument of knowledge with guaranteed extraction for all $\mathsf{NP}$ languages and $\mathsf{Com}$ is a collapse-binding commitment scheme, then the commit-and-prove compiled protocol is a state-preserving proof/argument of knowledge.
\end{corollary}
\begin{proof}
We first remark that since $(P_{\Sigma}, V_{\Sigma})$ is a post-quantum proof/argument of knowledge with guaranteed extraction, the commit-and-prove composed protocol is also immediately a post-quantum proof/argument of knowledge with guaranteed extraction. Namely, $\mathsf{Extract}^{P^*}$ interprets the cheating prover as an adaptive-input cheating prover for $(P_{\Sigma}, V_{\Sigma})$ with respect to the language
\[ L_{\mathsf{ck}, \mathsf{com}} = \big\{ x : \exists (w, \omega) \text{ such that } w \in R_x \text{ and } \mathsf{Com}(\mathsf{ck}, w; \omega) = \mathsf{com}\big\}
\]
and runs the guaranteed extractor for $(P_{\Sigma}, V_{\Sigma})$. Moreover, this extraction procedure outputs \emph{both} an $\mathsf{NP}$-witness $w$ \emph{and} commitment randomness $\omega$ such that $\mathsf{com} = \mathsf{Com}(\mathsf{ck}, w; \omega)$; we treat $\omega$ as auxiliary information $y$.
We then define $\mathsf{Pred}(x,(\mathsf{ck}, \mathsf{vk}), (\mathsf{com}, a), w, \omega)$ to output $1$ if and only if $\mathsf{Com}(\mathsf{ck}, w; \omega) = \mathsf{com}$. Then, we observe that the commit-and-prove protocol is $(\mathsf{Pred}, \mathsf{Id})$-witness binding (for the language $L$) by the collapse-binding of the commitment scheme $\mathsf{Com}$. Moreover, the correctness property of $\mathsf{Extract}^{P^*}$ further guarantees that $\mathsf{Pred}(x, (\mathsf{ck}, \mathsf{vk}), (\mathsf{com}, a), w, \omega) = 1$ with probability $1-{\rm negl}(\lambda)$.
Thus, we conclude that \cref{lemma:state-preserving-high-probability} applies, and so the commit-and-prove protocol has a state-preserving extractor. \qedhere
\end{proof}
\subsection{Concluding \cref{thm:succinct-state-preserving,thm:state-preserving-wi}}\label{sec:state-preserving-main-theorems}
Finally, we describe how to conclude the results of \cref{thm:succinct-state-preserving,thm:state-preserving-wi}. We begin with \cref{thm:succinct-state-preserving}, re-stated below.
\begin{theorem}[\cref{thm:succinct-state-preserving}]
Assuming collapsing hash functions exist, there exists a 4-message public-coin state-preserving succinct argument of knowledge for $\mathsf{NP}$.
\end{theorem}
\begin{proof}
Given a collapsing hash function family $\mathsf{H}$, we construct a state-preserving succinct argument of knowledge for $\mathsf{NP}$ as follows:
\begin{itemize}
\item First, we define Kilian's succinct argument system (see \cref{sec:kilian}) with respect to $\mathsf H$. By \cref{corollary:kilian-guaranteed}, this argument system is a post-quantum argument of knowledge with guaranteed extraction.
\item Next, we apply the commit-and-prove compiler (\cref{cor:commit-and-prove}) using the collapse-binding commitment scheme $\mathsf{Com}(\mathsf{ck} = h, m) = h(m)$. This commitment scheme does not formally satisfy any hiding property, but it is \emph{succinct}, which is what is relevant for \cref{thm:succinct-state-preserving}.
\end{itemize}
\cref{cor:commit-and-prove} tells us that the resulting composed protocol is a state-preserving argument of knowledge for $\mathsf{NP}$. Moreover, it satisfies all of the properties (4-message, public-coin, succinct) claimed in the theorem statement. \qedhere
\end{proof}
Next, we prove \cref{thm:state-preserving-wi}, re-stated below.
\begin{theorem}\label{thm:state-preserving-wi-body}
Assuming collapsing hash functions or \emph{super-polynomially secure} one-way functions, there exists a 4-message public-coin state-preserving witness-indistinguishable argument (in the case of collapsing) or proof (in the case of OWFs) of knowledge. Assuming \emph{super-polynomially secure} non-interactive commitments, there exists a 3-message PoK achieving the same properties.
\end{theorem}
\begin{proof}
All three variants of this theorem are proved via the same approach: combining commit-and-prove with a (strong) witness-indistinguishable $\Sigma$-protocol.
Formally, let $\mathsf{Com}$ denote a (possibly keyed) non-interactive commitment scheme. We use $\mathsf{Com}$ to instantiate a commit-and-open $\Sigma$-protocol (\cref{def:commit-and-open}) such as the \cite{C:GolMicWig86} protocol for graph $3$-coloring or the (potentially modified) \cite{Blum86} protocol for Hamiltonicity. We apply sufficient parallel repetition of the commit-and-open protocol so that its challenge space satisfies $|R| = 2^t$ for some $t\leq {\rm poly}(\lambda)$\footnote{Using \cite{Blum86}, one can set $t = {\rm poly}(\log \lambda)$.} and the protocol achieves ${\rm negl}(\lambda)$ soundness error. Then, \cref{cor:commit-and-open} tells us that this protocol is a post-quantum proof/argument of knowledge (depending on whether $\mathsf{Com}$ is statistically or collapse-binding) with guaranteed extraction.
Next, we additionally assume (as is the case for \cite{C:GolMicWig86,Blum86}) that the $\Sigma$-protocol satisfies special honest-verifier zero knowledge (\cref{def:shvzk}). In fact, we assume that it satisfies SHVZK against quantum adversaries that run in time $2^t \cdot {\rm poly}(\lambda)$, which holds (for these examples) provided that $\mathsf{Com}$ is computationally hiding against $2^t\cdot {\rm poly}(\lambda)$-time adversaries.
Under this assumption, Watrous' rewinding lemma \cite{STOC:Watrous06} implies that the $\Sigma$-protocol has a time $2^t\cdot{\rm poly}(\lambda)$ malicious verifier post-quantum simulator.
We now plug this $\Sigma$-protocol into the commit-and-prove compiler (\cref{cor:commit-and-prove}), again making use of the commitment scheme $\mathsf{Com}$ (for simplicity of the proof, we assume here that a different commitment key is used, although this is not necessary). \cref{cor:commit-and-prove} tells us that the resulting protocol is a state-preserving proof/argument of knowledge (again depending on whether $\mathsf{Com}$ is statistically binding).
It remains to show WI of the commit-and-prove protocol. That is, we want to show that for every malicious verifier $V^*$ (and maliciously chosen commitment key $\mathsf{ck}$), the joint distribution of a commitment $\mathsf{com} = \mathsf{Com}(\mathsf{ck}, w_1)$ and the view of $V^*$ in an execution of the $\Sigma$-protocol is computationally indistinguishable from the analogous distribution when a second witness $w_2$ is instead used. This is argued via the usual hybrid argument:
\begin{itemize}
\item Define $\mathsf{Hybrid}_{0,b}$ to be $\mathsf{Com}(\mathsf{ck}, w_b)$ along with the actual $\Sigma$-protocol view of $V^*$.
\item Define $\mathsf{Hybrid}_{1,b}$ to consist of $\mathsf{com} = \mathsf{Com}(\mathsf{ck}, w_b)$ along with a $2^t\cdot {\rm poly}(\lambda)$-time \emph{simulated} view of $V^*$ on input $(\mathsf{ck}, \mathsf{com})$. We have that $\mathsf{Hybrid}_{1,b}\approx_c \mathsf{Hybrid}_{0,b}$ by the super-polynomial time simulatability of the $\Sigma$-protocol (as discussed above).
\item Finally, we have that $\mathsf{Hybrid}_{1,0}\approx_c \mathsf{Hybrid}_{1,1}$ by the (already assumed) $2^t\cdot {\rm poly}(\lambda)$-hiding of $\mathsf{Com}$.
\end{itemize}
\noindent To conclude the theorem statement, it suffices to instantiate $\mathsf{Com}$ in three ways:
\begin{itemize}
\item Assuming $2^t\cdot {\rm poly}(\lambda)$-secure non-interactive commitments (e.g. \cite{C:BarOngVad03,TCC:GHKW17,ePrint:LomSch19}), one obtains the claimed $3$-message protocol.
\item Assuming $2^t\cdot {\rm poly}(\lambda)$-secure one-way functions, one obtains the OWF-based $4$-message protocol.
\item Assuming polynomially-secure collapsing hash functions, one obtains the collapsing-based $4$-message protocol by defining $\mathsf{Com}(h, m; r, s) = (h(r), s, \langle r, s\rangle \oplus m)$. This commitment scheme is \emph{statistically} hiding (i.e. hiding against unbounded adversaries), and so WI of the commit-and-prove protocol holds unconditionally, while the AoK property relies on collapsing.
\end{itemize}
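The collapsing-based commitment $\mathsf{Com}(h, m; r, s) = (h(r), s, \langle r, s\rangle \oplus m)$ can be sketched directly for single-bit messages. In the sketch below, SHA-256 is only a placeholder for the collapsing hash $h$, and the length $N$ is an illustrative parameter choice.

```python
import hashlib
import secrets

N = 256  # illustrative length for the bit strings r, s

def h(bits):
    # Placeholder for the collapsing hash function.
    return hashlib.sha256(bytes(bits)).digest()

def inner(r, s):
    # Inner product of bit strings modulo 2.
    return sum(ri & si for ri, si in zip(r, s)) % 2

def commit(m):
    # Com(h, m; r, s) = (h(r), s, <r, s> XOR m) for a single bit m.
    r = [secrets.randbelow(2) for _ in range(N)]
    s = [secrets.randbelow(2) for _ in range(N)]
    return (h(r), s, inner(r, s) ^ m), r

def verify(com, m, r):
    hr, s, c = com
    return h(r) == hr and c == inner(r, s) ^ m

com, r = commit(1)
assert verify(com, 1, r)
assert not verify(com, 0, r)
```

As stated above, this scheme is statistically hiding: since $h$ compresses $r$, the masking bit $\langle r, s\rangle$ remains statistically close to uniform given $h(r)$ and $s$.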
\noindent This completes the proof of \cref{thm:state-preserving-wi}.\qedhere
\end{proof}
\section{The~\cite{FOCS:GolMicWig86} GNI Protocol is Post-Quantum Zero Knowledge}
In this section, we show that our state-preserving extraction results imply the post-quantum ZK of the graph non-isomorphism protocol, proving \cref{thm:szk}. We begin by giving a description of the GNI protocol in \cref{fig:gni}. Our description achieves soundness error $1/2$ (as does the original~\cite{FOCS:GolMicWig86}), but can be extended to the negligible soundness case (without increasing the number of rounds) with essentially the same proof of (post-quantum) ZK.
\begin{figure}[!ht]
\centering
\procedure[]{}{
P (G_0,G_1) \> \> V(G_0,G_1) \\
\> \sendmessageleft*[3cm]{H,\{G_{i,0},G_{i,1}\}_{i \in [n]}} \> \begin{subprocedure}
\pseudocode[mode=text]{$\pi \gets S_n$, $b \gets \{0,1\}$, $H = \pi(G_b)$ \\ $b_1, \hdots, b_n \gets \{0,1\}$ \\
$\forall i \in [n], \beta \in \{0,1\}$:\\ $\pi_{i,\beta} \gets S_n, G_{i,\beta} \coloneqq \pi_{i,\beta}(G_{\beta \oplus b_i})$}
\end{subprocedure}\\
\begin{subprocedure}
\pseudocode[mode=text]{$v \gets \{0,1\}^n$}
\end{subprocedure}
\> \sendmessageright*[3cm]{v} \> \\
\> \sendmessageleft*[3cm]{\{m_i\}_{i \in [n]}} \> \begin{subprocedure}
\pseudocode[mode=text]{If $v_i = 0$, set $m_i = b_i, \pi_{i,0},\pi_{i,1}$.\\
If $v_i = 1$, set $m_i = b\oplus b_i, \pi_{i,b \oplus b_i} \circ \pi^{-1}$}
\end{subprocedure} \\
\begin{subprocedure}
\pseudocode[mode=text]{If $v_i = 0$, for $m_i = b_i, \pi_{i,0},\pi_{i,1}$, check: \\ $(\pi_{i, 0}^{-1}(G_{i,0}), \pi_{i, 1}^{-1}(G_{i,1})) = ( G_{b_i}, G_{1-b_i})$.\\
If $v_i = 1$, for $m_i = c_i, \sigma$, check:\\
$\sigma(H) = G_{i, c_i}$.
\\
\\ If the check fails, abort.
\\ Otherwise, find $b'$ such that $G_{b'} \simeq H$.
}
\end{subprocedure}
\> \sendmessageright*[3cm]{b'} \> \begin{subprocedure}
\pseudocode[mode=text]{\\ \\ Accept if $b' = b$.}
\end{subprocedure}
}
\caption{The Zero Knowledge Proof System for Graph Non-Isomorphism.}
\label{fig:gni}
\end{figure}
Next, we give a slightly more abstract description of the protocol using instance-dependent commitments \cite{BelMicOst90,JC:ItoOhtShi97,C:MicVad03}.
\begin{construction}
Fix a language $L$, let $\mathsf{IDC}$ be a non-interactive instance-dependent commitment\footnote{That is, when $x\in L$, a commitment $\mathsf{Com}(x, m)$ statistically hides the message $m$. When $x\not\in L$, a commitment $\mathsf{Com}(x, m)$ statistically binds the committer to $m$.} \cite{BelMicOst90,JC:ItoOhtShi97,C:MicVad03} for $L$, and let $\mathsf{PoK}$ be a statistically witness-indistinguishable proof of knowledge of the committed bit for $\mathsf{IDC}$. Then, we define the following interactive proof system for the complement language $\overline{L}$.
\begin{enumerate}
\item The verifier commits to a bit $b \in \{0,1\}$ using $\mathsf{IDC}$ and sends it to the prover.
\item The prover and verifier engage in $\mathsf{PoK}$ where the verifier proves knowledge of $b$.
\item If the prover accepts in $\mathsf{PoK}$, then it sends $b'$ as determined by the verifier's commitment.
\item The verifier accepts if $b' = b$.
\end{enumerate}
\end{construction}
\cite{FOCS:GolMicWig86} instantiates this framework for the language $L$ consisting of pairs of isomorphic graphs (and so $\overline{L}$ consists of pairs of non-isomorphic graphs, up to well-formedness of the string $x$).
Let $\mathsf{GIComm}$ be the following instance-dependent commitment scheme: $\mathsf{GIComm}((G_0,G_1),b;\pi) = \pi(G_b):= H$. Observe that if $G_0,G_1$ are isomorphic then this commitment is perfectly hiding, and if they are not then it is perfectly binding. Moreover, this commitment scheme admits a proof of knowledge of the committed bit as follows.
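As a sanity check, $\mathsf{GIComm}$ is simple enough to run directly. The toy instance below (a 4-path versus a 4-cycle, which are non-isomorphic) illustrates the perfectly binding case: the edge count of $H$ alone already determines $b$.

```python
import random

def apply_perm(pi, graph):
    # pi: dict mapping vertices to vertices; graph: set of 2-element
    # frozensets (undirected edges).
    return frozenset(frozenset(pi[v] for v in e) for e in graph)

def gi_comm(G0, G1, b, pi):
    # GIComm((G0, G1), b; pi) = pi(G_b)
    return apply_perm(pi, (G0, G1)[b])

# Toy non-isomorphic pair: G0 a 4-path (3 edges), G1 a 4-cycle (4 edges),
# so the commitment is perfectly binding: H alone reveals b.
G0 = frozenset(map(frozenset, [(0, 1), (1, 2), (2, 3)]))
G1 = frozenset(map(frozenset, [(0, 1), (1, 2), (2, 3), (3, 0)]))
vals = [0, 1, 2, 3]
random.shuffle(vals)
pi = dict(zip([0, 1, 2, 3], vals))
assert len(gi_comm(G0, G1, 0, pi)) == 3
assert len(gi_comm(G0, G1, 1, pi)) == 4
```

When $G_0 \simeq G_1$, by contrast, $\pi(G_0)$ and $\pi'(G_1)$ are identically distributed over a random permutation, which is exactly the perfect hiding case.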
\begin{enumerate}
\item The prover chooses $b_1,\ldots,b_{\lambda} \in \{0,1\}$ uniformly at random and sends commitments $C_{i,0} \coloneqq \mathsf{GIComm}((G_0,G_1),b_i;\sigma_{i,0})$ and $C_{i,1} \coloneqq \mathsf{GIComm}((G_0,G_1),b_i \oplus 1;\sigma_{i,1})$.
\item The verifier sends a random string $v \in \{0,1\}^{\lambda}$.
\item The prover sends $b_i$ and opens $C_{i,0},C_{i,1}$ for all $i$ such that $v_i = 0$. The prover sends $c_i \coloneqq b \oplus b_i$, $\tau_i \coloneqq \sigma_{i,c_i} \circ \pi^{-1}$ for all $i$ such that $v_i = 1$.
\item The verifier accepts if the received openings are valid when $v_i = 0$, and $C_{i,c_i} = \tau_i(H)$ when $v_i = 1$, where $H$ is the commitment graph.
\end{enumerate}
Classically, we obtain $b$ by rewinding to find two accepting transcripts with $v_i \neq v_i'$; then $b = c_i \oplus b_i$.
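This classical extraction step can be sketched as follows; the transcript encoding is an illustrative choice, not the protocol's literal wire format.

```python
def extract_bit(resp0, resp1):
    # resp0: responses {i: (b_i, sigma_i0, sigma_i1)} from an accepting
    # transcript at indices where v_i = 0; resp1: responses {i: (c_i, tau_i)}
    # from a second accepting transcript at indices where v'_i = 1.
    # Any common index i yields the committed bit b = b_i XOR c_i.
    for i in resp0:
        if i in resp1:
            return resp0[i][0] ^ resp1[i][0]
    return None  # no index with v_i = 0 and v'_i = 1

# Toy check: committed bit b = 1 with mask b_3 = 0, so c_3 = b XOR b_3 = 1.
resp0 = {3: (0, "sigma_{3,0}", "sigma_{3,1}")}
resp1 = {3: (1, "tau_3")}
assert extract_bit(resp0, resp1) == 1
```

For random $v, v' \gets \{0,1\}^\lambda$, a common index in this sense exists except with probability $(3/4)^\lambda$, so the classical extractor succeeds with overwhelming probability given two accepting transcripts.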
\begin{lemma}\label{lemma:gi-state-preserving}
If $(G_0,G_1)$ are non-isomorphic then the above protocol is a statistically state-preserving proof of knowledge of the committed message for $\mathsf{GIComm}$.
\end{lemma}
\begin{proof}
We have already shown (\cref{cor:gni-guaranteed-extractor}) that this protocol has a guaranteed extractor, because when $G_0$ and $G_1$ are not isomorphic, this protocol is collapsing onto the $b_i$ part of $0$-challenge responses (as $b_i$ is fixed by the commitments $C_{i,0}, C_{i,1}$) and the protocol is $(2, g)$-probabilistically special sound (where $g$ checks (for the first challenge-partial response pair) the correctness of the $0$ challenge response bits $b_i$ for $v_i = 0$ and (for the second challenge-partial response pair) the correctness of all $b_i$ ($v_i = 0$) and $c_i$ ($v_i = 1$)).
Moreover, the language $L_{G_0, G_1} = \{ H: \exists (b, \pi) \text{ such that } \pi G_b \simeq H\}$ has partial unique witnesses: for any $H\in L_{G_0, G_1}$, the bit $b$ is uniquely determined (given that $G_0$ and $G_1$ are not isomorphic). Thus, the state-preserving reduction of \cref{lemma:state-preserving-high-probability} applies (see \cref{cor:state-preserving-partial-witness}), so this protocol has a state-preserving extractor.
\end{proof}
Finally, we note that \cref{lemma:gi-state-preserving} immediately implies that the GNI protocol is post-quantum (statistical) zero knowledge. We assume without loss of generality that the cheating verifier $V^*$ has a ``classical'' first message by replacing $V^*$ (with auxiliary state $\ket{\psi}$) with $(V^*, \DMatrix)$ for the mixed state $\DMatrix$ obtained by running $U_{V^*}$ on $\ket{\psi}$ to generate a first message, measuring it, and running $U_{V^*}^\dagger$.
The simulator is then described as follows:
\begin{itemize}
\item Given cheating verifier $V^*$ with classical first message $(\mathsf{com}, \mathsf{pok}_1)$, run the state-preserving $\mathsf{PoK}$ extractor on $V^*$ (which now acts as a $\mathsf{PoK}$ cheating prover).
\item If the transcript generated by the state-preserving extractor is accepting, then output the bit $b$ in the ``partial witness'' slot of the extractor's output. Otherwise, send an aborting message.
\end{itemize}
The (statistical) zero knowledge property of this simulator follows immediately from the state-preserving property of the extractor. Moreover, the simulator inherits the $\mathsf{EQPT}_{c}$ structure directly from the extractor (with additional fixed polynomial-time pre- and post-processing). This completes the proof of \cref{thm:szk}.
\section{The~\cite{STOC:FeiSha90} Protocol is Post-Quantum Zero Knowledge}\label{sec:feige-shamir}
We recall the Feige-Shamir 4-message zero knowledge argument system for $\mathsf{NP}$. This protocol uses three primitives as building blocks:
\begin{itemize}
\item A non-interactive commitment scheme $\mathsf{Com}$.
\item The 3-message WI argument of knowledge $\mathsf{AoK}$ constructed in \cref{sec:state-preserving-main-theorems}. We note that $\mathsf{AoK}$ is public-coin.
\item A 3-message delayed-witness WI argument of knowledge $\mathsf{dAoK}$.
\end{itemize}
We will argue security with respect to these particular instantiations of $\mathsf{AoK}$ and $\mathsf{dAoK}$ due to subtleties in the security proof arising from their concurrent composition. Unlike $\mathsf{AoK}$, we do \emph{not} require that $\mathsf{dAoK}$ is state-preserving.
\noindent The protocol is executed as follows.
\begin{itemize}
\item The verifier sends the following strings as its first message:
\begin{itemize}
\item Two commitments $c_0, c_1$ generated as $c_i = \mathsf{Com}(0; r_i)$ for i.i.d. random strings $r_i$. For the post-quantum variant, following \cite{EC:Unruh12,EC:Unruh16}, we additionally include commitments $c'_i = \mathsf{Com}(r_i; \rho_i)$ to the two random strings $r_0, r_1$.
\item A first (prover) message of $\mathsf{AoK}$ corresponding to the statement ``$\exists i, r_i, \rho_i$ such that $c_i = \mathsf{Com}(0; r_i)$ and $c'_i = \mathsf{Com}(r_i; \rho_i)$.'' By default, the verifier uses $(b, r_b, \rho_b)$ as its witness for a randomly chosen bit $b$.
\end{itemize}
\item The prover sends two strings as its first message:
\begin{itemize}
\item A second (verifier) message of $\mathsf{AoK}$ (which is a uniformly random string).
\item A first (prover) message of $\mathsf{dAoK}$ corresponding to the statement ``$x\in L$ or $\exists i, r_i, \rho_i$ such that $c_i = \mathsf{Com}(0; r_i)$ and $c'_i = \mathsf{Com}(r_i; \rho_i)$.'' No witness is required.
\end{itemize}
\item The verifier sends two strings as its second message:
\begin{itemize}
\item A third (prover) message of $\mathsf{AoK}$, computed using $(b, r_b, \rho_b)$.
\item A second (verifier) message of $\mathsf{dAoK}$ (which is a uniformly random string).
\end{itemize}
\item Finally, the prover sends the third message of $\mathsf{dAoK}$. The prover uses a witness $w$ for $x\in L$ to generate this message.
\end{itemize}
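The verifier's first message, and the trapdoor statement it proves in $\mathsf{AoK}$, can be sketched concretely. The hash-based \texttt{com} below is only an illustrative stand-in for the non-interactive commitment $\mathsf{Com}$.

```python
import hashlib
import os

def com(m: bytes, r: bytes) -> bytes:
    # Illustrative stand-in for the non-interactive commitment Com(m; r).
    return hashlib.sha256(m + b"|" + r).digest()

# Verifier's first message: c_i = Com(0; r_i) plus, following the
# Unruh-style variant, c'_i = Com(r_i; rho_i).
r0, r1 = os.urandom(16), os.urandom(16)
rho0, rho1 = os.urandom(16), os.urandom(16)
c0, c1 = com(b"\x00", r0), com(b"\x00", r1)
cp0, cp1 = com(r0, rho0), com(r1, rho1)

def aok_relation(i, ri, rhoi):
    # The AoK statement: "exists i, r_i, rho_i with c_i = Com(0; r_i) and
    # c'_i = Com(r_i; rho_i)"; the honest witness is (b, r_b, rho_b).
    c, cp = ((c0, cp0), (c1, cp1))[i]
    return com(b"\x00", ri) == c and com(ri, rhoi) == cp

assert aok_relation(0, r0, rho0) and aok_relation(1, r1, rho1)
assert not aok_relation(0, r1, rho1)
```

The zero-knowledge simulator's strategy is to extract such a witness $(i, r_i, \rho_i)$ from the verifier's $\mathsf{AoK}$ and use it as the trapdoor witness in $\mathsf{dAoK}$, which is why state-preserving extraction of $\mathsf{AoK}$ is the crucial ingredient.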
\subsection{Building Block: Delayed-Witness Proofs of Knowledge}
In order to instantiate the Feige-Shamir protocol, we need a post-quantum instantiation of $\mathsf{dAoK}$. In particular, we need:
\begin{lemma}\label{lemma:delayed-witness-wi}
Assume that post-quantum non-interactive commitments exist. Then, there exists a delayed-witness $\Sigma$-protocol for $\mathsf{NP}$ that is witness indistinguishable against quantum verifiers and is a post-quantum proof of knowledge with negligible knowledge error.
\end{lemma}
\cref{lemma:delayed-witness-wi} does not immediately follow from extraction techniques such as~\cite[Lemma 7]{EC:Unruh12} or \cite{FOCS:CMSZ21} because the canonical delayed-witness $\Sigma$-protocol \cite{C:LapSha90} is not collapsing, and these works only give results for collapsing protocols. Nonetheless, we show that (similar to the one-out-of-two graph isomorphism subprotocol of \cite{FOCS:GolMicWig86}), by making use of a variant of $(2, g)$-PSS (\cref{def:k-g-pss}), a simple modification of Unruh's rewinding technique~\cite{EC:Unruh12} suffices to prove \cref{lemma:delayed-witness-wi}.
\subsubsection{The \cite{C:LapSha90} Protocol}
We begin by recalling the \cite{C:LapSha90} $\Sigma$-protocol for graph Hamiltonicity. The protocol uses a non-interactive commitment scheme $\mathsf{Com}$ as a building block, and is executed as follows.
\begin{itemize}
\item The prover, given as input the security parameter $1^\lambda$ and an input length $1^n$,\footnote{Note that the prover does not even need to know the instance $x$ to compute this message; however, we consider an a priori fixed statement $x$ to make sense of the proof-of-knowledge property.} sends $\lambda$ commitments $\mathsf{com}_i$ to adjacency matrices of i.i.d. random cycle graphs on $n$ vertices (i.e., graphs $H_i = \sigma_i C_n$ that are random permutations of a fixed cycle graph on $n$ vertices).
\item The verifier sends a uniformly random string $r \gets \{0,1\}^\lambda$.
\item For the third round, the prover is given a graph $G$ and a fixed $n$-cycle represented by a permutation $\pi$ mapping $C_n$ to $G$. The prover then sends the following messages.
\begin{itemize}
\item For each $i$ such that $r_i = 0$, the prover sends a full opening of the $i$th commitment $\mathsf{com}_i$.
\item For each $i$ such that $r_i = 1$, the prover sends $\sigma_i \pi^{-1}$ and opens the substring of $\mathsf{com}_i$ consisting of commitments to each non-edge of $\sigma_i \pi^{-1}(G)$.
\end{itemize}
\item For each $i$ such that $r_i = 0$, the verifier checks that $\mathsf{com}_i$ was correctly opened to the adjacency matrix of a cycle graph. For each $i$ such that $r_i = 1$, the verifier checks that every matrix entry opened is a valid decommitment to $0$.
\end{itemize}
By the perfect binding of $\mathsf{Com}$, we know that this protocol satisfies $2$-special soundness. In fact, it is the parallel repetition of a protocol satisfying $2$-special soundness: for any index $i$, a commitment string $a_i$ along with a valid response $z_0$ to $r_i = 0$ and a valid response $z_1$ to $r_i = 1$ can be used to compute a Hamiltonian cycle in $G$. Indeed, it satisfies a variant special soundness (implicitly related to $(2, g')$-PSS) described here:
\begin{claim}
There exists an extractor $\mathsf{SSExtract}(a, r_1, z_{1,i}^{(1)}, r_2, z_{2,i})$ for the \cite{C:LapSha90} protocol such that $\mathsf{SSExtract}$ outputs a valid $\mathsf{NP}$ witness under the following conditions:
\begin{itemize}
\item $r_{1,i} = 0, r_{2,i} = 1$.
\item $(a_i, r_{2,i}, z_{2,i})$ is an accepting transcript.
\item There \emph{exists} a response $z_{1,i}$ with prefix $z_{1, i}^{(1)}$ such that $(a_i, r_{1,i}, z_{1,i})$ is an accepting transcript.
\end{itemize}
Here, $z^{(1)}$ denotes the part of a response $z$ consisting of the \emph{messages} opened (but not the commitment randomness).
\end{claim}
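The extraction step underlying $\mathsf{SSExtract}$ is classical and easy to sketch: from the cycle graph $H_i = \sigma_i(C_n)$ revealed by the $0$-response prefix (the committed adjacency bits) and the permutation $\tau = \sigma_i \pi^{-1}$ contained in the $1$-response, the edge set $\tau^{-1}(H_i) = \pi(C_n)$ is a Hamiltonian cycle in $G$. A toy run, with the commitments themselves elided:

```python
import random

def ss_extract(H_edges, tau):
    # H_edges: the cycle sigma(C_n) revealed by the 0-response prefix;
    # tau = sigma o pi^{-1} from the 1-response.
    # Then tau^{-1}(H) = pi(sigma^{-1}(sigma(C_n))) = pi(C_n), a
    # Hamiltonian cycle in G.
    tau_inv = {v: k for k, v in tau.items()}
    return {frozenset(tau_inv[v] for v in e) for e in H_edges}

n = 5
C = {frozenset({i, (i + 1) % n}) for i in range(n)}
pi_vals = list(range(n)); random.shuffle(pi_vals)
pi = dict(enumerate(pi_vals))
sigma_vals = list(range(n)); random.shuffle(sigma_vals)
sigma = dict(enumerate(sigma_vals))

G = {frozenset(pi[v] for v in e) for e in C} | {frozenset({0, 2})}
H = {frozenset(sigma[v] for v in e) for e in C}
pi_inv = {v: k for k, v in pi.items()}
tau = {v: sigma[pi_inv[v]] for v in range(n)}

cycle = ss_extract(H, tau)
assert cycle <= G and len(cycle) == n   # a Hamiltonian cycle inside G
```

Crucially, only the committed bits of the $0$-response are needed (not the commitment randomness), which is what lets the extractor measure just the prefix $z^{(1)}$.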
Moreover, we note that the protocol is partially collapsing on $0$-challenges: given a tuple $(x, a, r)$ and a state $\ket{\phi} = \sum_z \alpha_z \ket{z}$, any accepting response $z_i$ such that $r_i = 0$ can be \emph{partially} measured --- namely, the committed bits (but not the openings) can be measured --- without disturbing $\ket{\phi}$. This is sufficient to prove \cref{lemma:delayed-witness-wi}.
\subsubsection{Proof of \cref{lemma:delayed-witness-wi}}
The fact that this protocol is witness indistinguishable follows from the fact that it is a parallel repetition of a post-quantum ZK protocol \cite{STOC:Watrous06}. What remains is to establish the proof-of-knowledge property.
We consider the following variant of Unruh's approach to knowledge extraction~\cite{EC:Unruh12}:
\begin{enumerate}
\item Given a cheating prover $P^*$, first generate a (classical) first message $a$ from $P^*$. Let $\ket{\psi}$ denote the internal state of $P^*$ at this point.
\item Sample a uniformly random challenge $r$ and apply the $P^*$ unitary $U_r$ to $\ket{\psi}$, which writes the prover's response onto some register $\mathcal{Z}$. Apply the one-bit measurement $(\Pi_{V, r}, \mathbf{I} - \Pi_{V, r})$ that checks whether $V(x, a, r, z) = 1$.
\item If the measurement returns $1$, additionally measure every register $\mathcal{Z}^{(1)}_i$ (the opened messages, but not the commitment randomness) corresponding to $r_i = 0$.
\item Apply $U_r^\dagger$ to the prover state.
\item Sample an independent random challenge $r'$ and apply $U_{r'}$. Apply the one-bit measurement $(\Pi_{V, r'}, \mathbf{I} - \Pi_{V, r'})$.
\item If the measurement returns $1$, additionally measure the \emph{entire} response $\mathcal{Z}$.
\item If both measurements returned $1$, and there exists an index $i$ such that $r_i = 0$ and $r'_i = 1$, compute $\mathsf{SSExtract}(x, \mathsf{com}_i, 0, z_i^{(1)}, 1, z'_i)$ where $z_i^{(1)}$ is the first partially measured response in location $i$ and $z'_i$ is the second measured response in location $i$. Otherwise, abort.
\end{enumerate}
To show that this extraction procedure works, we first consider the variant in which no response measurements are applied (Step 3 and Step 6 are omitted). Then, by Unruh's rewinding lemma \cite[Lemma 7]{EC:Unruh12}, if $U_r\ket{\psi}$ produces an accepting response with probability at least $\epsilon$ (over the randomness of $r$), then the two binary measurements applied above will \emph{both} return $1$ with probability at least $\epsilon^3$. Then, by the fact that the protocol is partially collapsing on $0$-challenges, this continues to hold even if the measurement in Step 3 is applied.
Finally, since the probability that i.i.d. uniform strings $r, r'$ do not have an index $i$ such that $r_i = 0$ and $r'_i = 1$ is $(3/4)^\lambda = {\rm negl}(\lambda)$, we conclude that with probability $\epsilon^3 - {\rm negl}(\lambda)$, the above extractor produces a partial accepting response $z_i^{(1)}$ and an accepting response $z'_i$ for some $i$ such that $r_i = 0$ and $r'_i = 1$, and so $\mathsf{SSExtract}$ successfully outputs a witness. If $P^*$ is convincing with initial non-negligible probability $\gamma$, then with probability at least $\frac \gamma 2$, $\ket{\psi}$ is at least $\frac \gamma 2$-convincing, and so $\mathsf{SSExtract}$ outputs a valid witness with probability at least $\Omega(\gamma^3)$. This completes the proof of \cref{lemma:delayed-witness-wi}.
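The $(3/4)^\lambda$ bound is elementary: each index independently fails to satisfy $r_i = 0, r'_i = 1$ with probability $3/4$. For small $\lambda$ this can be confirmed by exhaustive enumeration:

```python
from fractions import Fraction
from itertools import product

def p_no_good_index(lam):
    # Exact probability, over i.i.d. uniform r, r' in {0,1}^lam, that no
    # index i has r_i = 0 and r'_i = 1 (brute force over all pairs).
    bad = sum(
        1
        for r, rp in product(range(2 ** lam), repeat=2)
        if not any(((r >> i) & 1) == 0 and ((rp >> i) & 1) == 1
                   for i in range(lam))
    )
    return Fraction(bad, 4 ** lam)

# Each index independently fails to be "good" with probability 3/4.
for lam in range(1, 8):
    assert p_no_good_index(lam) == Fraction(3, 4) ** lam
```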
\subsection{Proof of Security for the \cite{STOC:FeiSha90} protocol}
We now prove the security of the Feige-Shamir protocol using suitable building blocks $(\mathsf{Com}, \mathsf{AoK}, \mathsf{dAoK})$.
\begin{theorem}\label{thm:fs-body}
Suppose that:
\begin{itemize}
\item $\mathsf{Com}$ is a post-quantum non-interactive commitment scheme,
\item $\mathsf{AoK}$ is the $3$-message state-preserving WI proof of knowledge for $\mathsf{NP}$ (with $\mathsf{EQPT}_{c}$ extraction) from \cref{sec:state-preserving-main-theorems}.
\item $\mathsf{dAoK}$ is the argument system from \cref{lemma:delayed-witness-wi}.
\end{itemize}
Then, the Feige-Shamir protocol is both sound and zero-knowledge against QPT adversaries. The zero-knowledge simulator is $\mathsf{EQPT}_{c}$.
\end{theorem}
\noindent Combining \cref{thm:fs-body} with the results of \cref{sec:state-preserving} implies \cref{thm:feige-shamir}.
We remark that the theorem is non-generic with respect to $\mathsf{AoK},\mathsf{dAoK}$ due to complications in the security proof coming from the fact that $\mathsf{AoK}$ and $\mathsf{dAoK}$ are executed simultaneously.
\begin{proof} We first prove soundness, followed by ZK.
\vspace{10pt} \noindent \textbf{Proof of Soundness.}
Suppose that $x\not\in L$ and $P^*$ is a QPT prover that convinces $V$ with non-negligible probability. Given such a $P^*$, we define a cheating prover $ P^*_{\mathsf{dAoK}}$ for the underlying $\mathsf{dAoK}$ that is given as additional auxiliary input strings $(c_0, c'_0, c_1, c'_1, b, r_b, \rho_b)$ such that $c_b = \mathsf{Com}(0; r_b)$ and $c'_b = \mathsf{Com}(r_b; \rho_b)$. $ P^*_{\mathsf{dAoK}}$ simply emulates $P^*$ while generating $\mathsf{AoK}$ messages using its auxiliary input. That is:
\begin{itemize}
\item $ P^*_{\mathsf{dAoK}}$ generates a message $\mathsf{aok}_1$ using its auxiliary input and calls $P^*$ on $(c_0, c'_0, c_1, c'_1, \mathsf{aok}_1)$. This results in a $P^*$-message $(\mathsf{aok}_2, \mathsf{daok}_1)$. $ P^*_{\mathsf{dAoK}}$ returns $\mathsf{daok}_1$.
\item Upon receiving a verifier challenge $r$, $ P^*_{\mathsf{dAoK}}$ computes an honestly generated message $\mathsf{aok}_3$ (deterministic\footnote{If randomness is required to generate this message, let it be fixed in advance in $P^*_{\mathsf{dAoK}}$'s internal state.} and independent of $r$) using its auxiliary input and calls $P^*$ on $(\mathsf{aok}_3, r)$. This results in a $P^*$-message $\mathsf{daok}_3$, which $ P^*_{\mathsf{dAoK}}$ outputs.
\end{itemize}
If the auxiliary input $(c_0, c'_0, c_1, c'_1, b, r_b, \rho_b)$ is sampled from the correct distribution, $P^*_{\mathsf{dAoK}}$ perfectly emulates the interaction of $P^*$ and the honest Feige-Shamir verifier, so $P^*_{\mathsf{dAoK}}$ is convincing with non-negligible probability $\varepsilon$ by assumption. Thus, the $\mathsf{dAoK}$ knowledge extractor from \cref{lemma:delayed-witness-wi} outputs a valid witness for the statement ``$\exists i, r_i, \rho_i$ such that $c_i = \mathsf{Com}(0; r_i)$ and $c'_i = \mathsf{Com}(r_i; \rho_i)$'' with probability at least $\Omega(\varepsilon^3)$.
\begin{claim}
The probability that the $\mathsf{dAoK}$ extractor succeeds and $i \neq b$ is also $\Omega(\varepsilon^3)$.
\end{claim}
\begin{proof}
If this is not the case, then we obtain an algorithm breaking the WI property of $\mathsf{AoK}$. For a fixed statement $(c_0, c_0', c_1, c_1')$, the algorithm $V^*_{\mathsf{dAoK}}$, given an honestly generated message $\mathsf{aok}_1$, calls $(\mathsf{aok}_2, \mathsf{daok}_1) \gets P^*(c_0, c_0', c_1, c_1', \mathsf{aok}_1)$ and returns the message $\mathsf{aok}_2$. Given a \emph{fixed} response $\mathsf{aok}_3$, $V^*_{\mathsf{dAoK}}$ emulates the $\mathsf{dAoK}$ extractor from \cref{lemma:delayed-witness-wi} by sampling i.i.d. strings $r, r'$ for $\mathsf{dAoK}$ and re-using the message $\mathsf{aok}_3$. Then, if the extractor returns a valid witness $(i, r_i, \rho_i)$, $V^*_{\mathsf{dAoK}}$ returns the bit $i$. If not, $V^*_{\mathsf{dAoK}}$ guesses at random.
Since this faithfully emulates the execution of the $\mathsf{dAoK}$ extractor on $P^*_{\mathsf{dAoK}}$ and we assumed that it succeeds with probability $\Omega(\varepsilon^3)$, we conclude that the WI property of $\mathsf{AoK}$ with respect to $V^*_{\mathsf{dAoK}}$ implies the claim.
\end{proof}
However, this implies that the $\mathsf{dAoK}$ extractor breaks the computational hiding property of $\mathsf{Com}$. This is because if $c_{1-b}$ were instead sampled as $\mathsf{Com}(1; r_{1-b})$ and $c_{1-b}'$ sampled as $\mathsf{Com}(r_{1-b}; \rho_{1-b})$, it is information theoretically impossible for the $\mathsf{dAoK}$ extractor to output a witness such that $i \neq b$. This concludes the proof of soundness.
\vspace{10pt} \noindent \textbf{Proof of ZK.} We assume without loss of generality that the cheating verifier $V^*$ has a ``classical'' first message $(c_0, c_0', c_1, c_1', \mathsf{aok}_1)$ by replacing $V^*$ (with auxiliary state $\ket{\psi}$) with $(V^*, \DMatrix)$ for the mixed state $\DMatrix$ obtained by running $U_{V^*}$ on $\ket{\psi}$ to generate a first message, measuring it, and running $U_{V^*}^\dagger$.
By the construction of $\mathsf{AoK}$ (see \cref{cor:commit-and-prove,thm:state-preserving-wi-body}) we know that the tuple $(c_0, c_0', c_1, c_1', \allowbreak \mathsf{aok}_1)$ \emph{uniquely} determines a witness $\mathsf{td} = (b, r_b, \rho_b)$ that the $\mathsf{AoK}$ extractor can ever output (if such a witness exists; otherwise, we define $\mathsf{td}$ to be $\bot$). We non-uniformly include $\mathsf{td}$ in the description of the $V^*$ state $\DMatrix$ without loss of generality (this does not affect the simulator, only the analysis).
Our black-box zero-knowledge simulator is defined in \cref{fig:feige-shamir-simulator}:
\begin{figure}
\begin{mdframed}
\begin{itemize}
\item Construct a first message $\mathsf{daok}_1$ using the honest $\mathsf{dAoK}$ prover algorithm.
\item For fixed classical strings $(c_0, c_0', c_1, c_1', \mathsf{aok}_1, \mathsf{daok}_1)$, define an $\mathsf{AoK}$ cheating prover $P^*_{\mathsf{AoK}}$ with the following description:
\begin{itemize}
\item Send $\mathsf{aok}_1$.
\item On challenge $s$, call $V^*$ on $(s, \mathsf{daok}_1)$. Upon receiving $(\mathsf{aok}_3, r)$, return $\mathsf{aok}_3$.
\end{itemize}
\item Run the state-preserving extractor $\mathsf{Extract}^{P^*_{\mathsf{AoK}}, \mathsf{daok}_1, \DMatrix}$, outputting the (unique possible) witness $\mathsf{td}$ along with a $P^*_{\mathsf{AoK}}$-view (which includes a $V^*$-view in it).
\item If the output witness is $\bot$, send an aborting final message. Otherwise, compute $\mathsf{daok}_3$ using $\mathsf{td}$.
\end{itemize}
\end{mdframed}
\caption{The Feige-Shamir protocol simulator}
\label{fig:feige-shamir-simulator}
\end{figure}
We claim that this achieves negligible simulation accuracy. We prove this via a hybrid argument:
\begin{itemize}
\item $\mathsf{Hyb}_0$: This is the simulated view of $V^*$.
\item $\mathsf{Hyb}_1$: This is the same as $\mathsf{Hyb}_0$, \emph{except} that $\mathsf{daok}_3$ is computed using an $\mathsf{NP}$-witness $w$ for $x$.
\item $\mathsf{Hyb}_2$: This is the real view of $V^*$.
\end{itemize}
The indistinguishability of $\mathsf{Hyb}_2$ and $\mathsf{Hyb}_1$ follows immediately from the state-preserving property of $\mathsf{AoK}$, as the view of $P^*_{\mathsf{AoK}}$ contains an entire correctly emulated view of $V^*$.
The indistinguishability of $\mathsf{Hyb}_1$ and $\mathsf{Hyb}_0$ follows from the witness indistinguishability of $\mathsf{dAoK}$. To prove this, we assume for the sake of contradiction that $\mathsf{Hyb}_1$ and $\mathsf{Hyb}_0$ are distinguishable by a polynomial-time distinguisher $D$ with non-negligible advantage $\varepsilon$. Then, we construct the following two additional hybrids:
\begin{itemize}
\item $\mathsf{Hyb}'_0$: This is the simulated view of $V^*$, \emph{except} that $\mathsf{Extract}$ is replaced by a ${\rm poly}(\lambda, 1/\varepsilon)$-size oracle algorithm that achieves accuracy $\frac{\varepsilon}{4}$.
\item $\mathsf{Hyb}'_1$: This is the same as $\mathsf{Hyb}_0'$ \emph{except} that $\mathsf{daok}_3$ is computed using an $\mathsf{NP}$-witness $w$ for $x$.
\end{itemize}
By a hybrid argument, we conclude that $D$ also distinguishes $\mathsf{Hyb}'_0$ and $\mathsf{Hyb}'_1$ with advantage $\varepsilon/2$. We claim that this breaks the witness indistinguishability of $\mathsf{dAoK}$. Define a $\mathsf{dAoK}$ verifier $V^*_{\mathsf{dAoK}}$ operating as follows:
\begin{itemize}
\item $V^*_{\mathsf{dAoK}}$ has the state $\DMatrix$ as auxiliary input (including $c_0, c_0', c_1, c_1', \mathsf{aok}_1, \mathsf{td}$). $V^*_{\mathsf{dAoK}}$ wants to distinguish between proofs using witness $w$ and proofs using witness $\mathsf{td}$.
\item $V^*_{\mathsf{dAoK}}$ receives $\mathsf{daok}_1$ from the prover. It then calls (the $\varepsilon/4$-truncated) $\mathsf{Extract}^{P^*_{\mathsf{AoK}}, \mathsf{daok}_1, \DMatrix}$, which returns a $P^*_{\mathsf{AoK}}$-view. $V^*_{\mathsf{dAoK}}$ sends the challenge $r$ from the $P^*_{\mathsf{AoK}}$-view to the prover.
\item Finally, upon receiving $\mathsf{daok}_3$ from the prover, $V^*_{\mathsf{dAoK}}$ outputs the emulated $V^*$ view.
\end{itemize}
$V^*_{\mathsf{dAoK}}$ has been constructed to be (aux-input) QPT, and (along with the distinguisher $D$) violates the WI property of $\mathsf{dAoK}$, giving the claimed contradiction.
We conclude that the Feige-Shamir protocol is ZK, as desired. We note that the zero-knowledge simulator inherits the $\mathsf{EQPT}_{c}$ structure of the $\mathsf{AoK}$ state-preserving extractor (with some additional fixed poly-time pre- and post-processing). \qedhere
\end{proof}
\section{The~\cite{JC:GolKah96} Protocol is Post-Quantum Zero Knowledge}
\label{sec:gk}
In this section we show that the Goldreich--Kahan constant-round proof system for $\mathsf{NP}$ is post-quantum zero knowledge by giving an $\mathsf{EQPT}_{c}$ simulator. In \cref{sec:distinguishability} we give a technical lemma about the distinguishability of certain purifications that will be of central importance in the proof. In \cref{sec:gk-simulator} we describe our quantum simulator.
\subsection{Indistinguishability of Projections onto Indistinguishable States}
\label{sec:distinguishability}
Consider the states $\ket{D_b} \coloneqq \sum_{x} \ket{x}_{\mathcal{X}} \ket{D_b(x)}_{\mathcal{Y}}$ where $D_0,D_1$ are computationally indistinguishable (w.r.t. quantum adversaries) efficiently sampleable \emph{classical} distributions with random coins $x$ (in a slight abuse of notation, $D_b$ denotes both the distribution and the sample). If we are only given access to $\mathcal{Y}$, then distinguishing $\ket{D_0}$ from $\ket{D_1}$ is clearly hard, since $\Tr_{\mathcal{X}}(\ketbra{D_b})$ is equivalent to a random classical sample from $D_b$.
In this subsection, we show that this indistinguishability generically extends to the setting where \emph{we additionally give the distinguisher access to the projection $\ketbra{D_b}$ on $\mathcal{X} \otimes \mathcal{Y}$}. This is formalized by giving the distinguisher an additional one-qubit register $\mathcal{B}$ and black-box access (see \cref{subsec:blackbox}) to the unitary $U_{b}$ and its inverse acting on $\mathcal{X} \otimes \mathcal{Y} \otimes \mathcal{B}$ defined as
\[ U_b \coloneqq \ketbra{D_b}_{\mathcal{X},\mathcal{Y}} \otimes \mathbf{X}_{\mathcal{B}} + (\mathbf{I}_{\mathcal{X},\mathcal{Y}} - \ketbra{D_b}_{\mathcal{X},\mathcal{Y}}) \otimes \mathbf{I}_{\mathcal{B}},\]
where $\mathbf{X}_{\mathcal{B}}$ denotes the bit-flip operator on $\mathcal{B}$. In particular, it is no longer the case that access to $\ket{D_b}$ is equivalent to a random classical sample from $D_b$, since the distinguisher's access to $U_b$ means that $\mathcal{X}$ is no longer independent of its view. Nevertheless, we prove the following.
\begin{lemma}
\label{lemma:proj-indist}
If there exists a polynomial-time quantum oracle distinguisher $S^{U_b}$ without direct access to $\mathcal{X}$ achieving
\begin{equation*}
\left|\Pr[S^{U_0}(\ket{D_0}_{\mathcal{X},\mathcal{Y}}) = 1] - \Pr[S^{U_1}(\ket{D_1}_{\mathcal{X},\mathcal{Y}}) =1]\right| \geq 1/{\rm poly}(\lambda),
\end{equation*}
then there exists a polynomial-time quantum algorithm $S$ that distinguishes classical samples from the distributions $D_0$ and $D_1$.
\end{lemma}
Our proof will make use of two results by Zhandry~\cite{C:Zhandry12,Zhandry15}, which we restate here for convenience. In the following, quantum oracle access to a function $f: X \rightarrow Y$ refers to black-box access to the unitary that maps $\ket{x}\ket{y} \rightarrow \ket{x} \ket{f(x) \oplus y}$ for all $x,y$.
\begin{theorem}[Theorem 1.1 of \cite{C:Zhandry12}]
Let $D_0$ and $D_1$ be efficiently sampleable distributions on a set $Y$, and let $X$ be some other set. Let $O_0$ and $O_1$ be the distributions of functions from $X$ to $Y$ where for each $x \in X$, $O_b(x)$ is chosen independently according to $D_b$. Then if $A$ is an efficient quantum algorithm that can distinguish between quantum access to the oracle $O_0$ from quantum access to the oracle $O_1$, we can construct an efficient quantum algorithm $B$ that distinguishes classical samples from $D_0$ and $D_1$.
\end{theorem}
\begin{theorem}[\cite{Zhandry15}]
An efficient quantum algorithm cannot distinguish between quantum access to an oracle $f$ implementing a random function $X \rightarrow X$ and an oracle $\pi$ implementing a random permutation $X \rightarrow X$.
\end{theorem}
\begin{proof}
By \cite[Theorem 1.1]{C:Zhandry12}, it suffices for us to show that if there exists a distinguisher $S^{U_b}$ that distinguishes $\ket{D_0}_{\mathcal{X},\mathcal{Y}}$ from $\ket{D_1}_{\mathcal{X},\mathcal{Y}}$ without directly accessing the $\mathcal{X}$ register, then there is an algorithm to distinguish between quantum oracle access to $D_0 \circ f$ and $D_1 \circ f$ (where $D_b \circ f$ is the composed function $D_b(f(\cdot))$) where $f: X \rightarrow X$ is a random function.
By~\cite{Zhandry15}, we observe that it suffices to show that $S^{U_b}$ implies an algorithm to distinguish between quantum oracle access to $D_0 \circ \pi$ and $D_1 \circ \pi$ for a random \emph{permutation} $\pi: X \rightarrow X$.
Given quantum oracle access to $D_b \circ \pi$, we can implement a unitary $V_{b,\pi}$ that maps $\ket{0}_{\mathcal{X},\mathcal{Y}}$ to the state $\ket{D_{b,\pi}} \coloneqq \sum_x \ket{x} \ket{D_b(\pi(x))}$ as follows: apply a Hadamard to $\mathcal{X}$, then apply the $D_b \circ \pi$ oracle to $\mathcal{X} \otimes \mathcal{Y}$.
We can hence use $S$ to distinguish $b = 0$ from $b = 1$ as follows. We prepare the state $\ket{D_{b,\pi}}_{\mathcal{X},\mathcal{Y}}$ using $V_{b,\pi}$. Using $V_{b,\pi}$ we can also implement the operation
\[
U_{b,\pi} \coloneqq \ketbra{D_{b,\pi}} \otimes \mathbf{X}_{\mathcal{B}} + (\mathbf{I} - \ketbra{D_{b,\pi}}) \otimes \mathbf{I}_{\mathcal{B}}
\]
as follows: apply $V_{b,\pi}^{\dagger}$ to $\mathcal{X} \otimes \mathcal{Y}$, apply $\ketbra{0}_{\mathcal{X},\mathcal{Y}} \otimes \mathbf{X}_{\mathcal{B}} + (\mathbf{I}-\ketbra{0}_{\mathcal{X},\mathcal{Y}}) \otimes \mathbf{I}_{\mathcal{B}}$, then apply $V_{b,\pi}$. We can therefore run $S^{U_{b,\pi}}$ on $\ket{D_{b,\pi}}$.
Since $S^{U_b}$ does not act on $\mathcal{X}$ except via its oracle, and $\ket{D_b}$ is related to $\ket{D_{b,\pi}}$ by a unitary acting on $\mathcal{X}$ only, it holds that
\[
\Tr_{\mathcal{X}}(S^{U_{b,\pi}}(\ketbra{D_{b,\pi}})) = \Tr_{\mathcal{X}}(S^{U_{b}}(\ketbra{D_{b}})),
\]
which completes the proof.
\end{proof}
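As a toy numerical illustration of the construction in the proof (ours, not part of the argument; the register sizes and the map $D$ below are arbitrary choices), one can check that conjugating the reflection about $\ket{0}_{\mathcal{X},\mathcal{Y}}$ by the state-preparation unitary $V$ yields exactly $U = \ketbra{D} \otimes \mathbf{X}_{\mathcal{B}} + (\mathbf{I} - \ketbra{D}) \otimes \mathbf{I}_{\mathcal{B}}$:

```python
import numpy as np

n, m = 2, 2                                 # toy sizes for the X and Y registers
dimX, dimY = 2**n, 2**m
D = {0: 0b01, 1: 0b11, 2: 0b01, 3: 0b10}    # hypothetical classical map D(x)

H1 = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)

def kron_all(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Oracle for D on X (x) Y: |x>|y> -> |x>|y xor D(x)>
O = np.zeros((dimX * dimY, dimX * dimY))
for x in range(dimX):
    for y in range(dimY):
        O[x * dimY + (y ^ D[x]), x * dimY + y] = 1.

# V prepares |D> = 2^{-n/2} sum_x |x>|D(x)> from |0>_{XY}  (here n = 2)
V = O @ kron_all(H1, H1, np.eye(dimY))
ket0 = np.zeros(dimX * dimY); ket0[0] = 1.
ketD = V @ ket0

# Direct definition: U = |D><D| (x) X_B + (I - |D><D|) (x) I_B
XB, IB = np.array([[0., 1.], [1., 0.]]), np.eye(2)
P = np.outer(ketD, ketD)
U_direct = np.kron(P, XB) + np.kron(np.eye(dimX * dimY) - P, IB)

# Implementation from the proof: conjugate the reflection about |0>_{XY} by V
R0 = (np.kron(np.outer(ket0, ket0), XB)
      + np.kron(np.eye(dimX * dimY) - np.outer(ket0, ket0), IB))
U_conj = np.kron(V, IB) @ R0 @ np.kron(V.conj().T, IB)

assert np.allclose(U_direct, U_conj)        # the two constructions coincide
```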
\subsection{Quantum Simulator}
\label{sec:gk-simulator}
We begin by describing a variable-runtime $\mathsf{EQPT}_{m}$ estimation procedure that will be a useful subroutine in our quantum zero-knowledge simulator for~\cite{JC:GolKah96}. Following~\cref{thm:vrsvt}, let $\mathsf{VarEstimate}$ and $\mathsf{Transform}$ be the first and second stages of the variable-runtime singular vector transform (vrSVT). For binary projective measurements $\mathsf{A},\mathsf{B},\mathsf{C}$ on $\mathcal{A}$, we define an ``estimate-disturb-transform'' procedure $\mathsf{EDT}[\mathsf{A},\mathsf{B},\mathsf{C}]$. Intuitively, this procedure first uses $\mathsf{VarEstimate}$ to compute an upper bound on the running time of $\mathsf{Transform}[\mathsf{A} \rightarrow \mathsf{B}]$, but then disturbs the state with the measurement $\mathsf{C}$ before running $\mathsf{Transform}[\mathsf{A} \rightarrow \mathsf{B}]$. However, to ensure that $\mathsf{VarEstimate}$ does not run for unbounded time, the input is first ``conditioned'' by applying $\mathsf{B}$ followed by $\mathsf{A}$, proceeding only if both measurements return $1$.
Formally, the procedure takes an input state on $\mathcal{A}$ and does the following:
\medskip \begin{minipage}{0.9\textwidth}
\noindent $\mathsf{EDT}[\mathsf{A},\mathsf{B},\mathsf{C}]$:
\begin{enumerate}[nolistsep]
\item Apply $\mathsf{B}$ to $\mathcal{A}$, obtaining outcome $b_1$.
\item Apply $\mathsf{A}$ to $\mathcal{A}$, obtaining outcome $b_2$.
\item If $b_1 = 0$ or $b_2 = 0$, stop and output $(0,\bot)$.
\item Otherwise, run $\mathsf{VarEstimate}_{\delta}[\mathsf{A} \rightleftarrows \mathsf{B}]$ on $\mathcal{A}$, obtaining classical output $y$.
\item Apply $\mathsf{C}$ to $\mathcal{A}$, obtaining outcome $c$.
\item Run $\mathsf{Transform}_y[\mathsf{A} \rightarrow \mathsf{B}]$ on $\mathcal{A}$.
\item Output $\mathcal{A}$ and $(1,c)$.
\end{enumerate}
\end{minipage}\medskip
\noindent Let $\widehat{\mathsf{EDT}}[\mathsf{A},\mathsf{B},\mathsf{C}]$ denote a coherent implementation of this procedure.
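The classical control flow of $\mathsf{EDT}$ can be sketched as follows (a plain-Python skeleton of ours, with stub functions standing in for the projective measurements and the vrSVT stages; the real procedure of course acts on a quantum register):

```python
def edt(state, meas_A, meas_B, meas_C, var_estimate, transform):
    """Control-flow skeleton of EDT[A, B, C].

    Each stub takes and returns the (stand-in) state; measurement stubs
    also return a classical outcome bit.
    """
    b1, state = meas_B(state)          # step 1: condition on B
    b2, state = meas_A(state)          # step 2: condition on A
    if b1 == 0 or b2 == 0:
        return state, (0, None)        # step 3: early abort
    y, state = var_estimate(state)     # step 4: runtime estimate
    c, state = meas_C(state)           # step 5: disturbing measurement C
    state = transform(y, state)        # step 6: Transform_y[A -> B]
    return state, (1, c)               # step 7: output

# Deterministic stubs exercising both branches.
ok = lambda s: (1, s)
fail = lambda s: (0, s)
est = lambda s: (7, s)
tr = lambda y, s: s + [("transform", y)]

# Aborts when the A measurement rejects (outcome (0, None)):
assert edt([], fail, ok, ok, est, tr)[1] == (0, None)
# Success path runs the estimate, C, and the transform:
assert edt([], ok, ok, ok, est, tr) == ([("transform", 7)], (1, 1))
```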
\begin{claim}
\label{claim:vrsvt-mreqpt}
For any efficient measurements $\mathsf{A},\mathsf{B},\mathsf{C}$, $\mathsf{EDT}[\mathsf{A},\mathsf{B},\mathsf{C}]$ is $\mathsf{EQPT}_{m}$.
\end{claim}
\begin{proof}
Since $\mathsf{EDT}[\mathsf{A},\mathsf{B},\mathsf{C}]$ commutes with $\Meas{\Jor}[\mathsf{A},\mathsf{B}]$, it suffices to analyze its running time for states contained within a single Jordan subspace. Let $\ket{\psi_j} \coloneqq \alpha \JorKetB{j}{1} + \beta \JorKetB{j}{0}$. Then
\begin{equation*}
\Pr[b_1 = b_2 = 1] = |\alpha|^2 \Pr[\mathsf{A}(\JorKetB{j}{1}) \to 1] \leq p_j.
\end{equation*}
Note that $\mathsf{C}$ does not affect the running time of $\mathsf{Transform}_y$. Hence the expected running time of this procedure on $\ket{\psi_j}$ is
\begin{equation*}
O((p_j \cdot \log(1/\delta)/\sqrt{p_j} + 1) \cdot (t_{\mathsf{A}} + t_{\mathsf{B}})) = O(\log (1/\delta) \cdot (t_{\mathsf{A}} + t_{\mathsf{B}})),
\end{equation*}
where the equality uses $p_j/\sqrt{p_j} = \sqrt{p_j} \leq 1$.
It follows that this procedure is $\mathsf{EQPT}_{m}$.
\end{proof}
We define the states and measurements used in the simulator.
\begin{itemize}[noitemsep]
\item For $r \in R$, let $\ket{\mathsf{Sim}_r} \coloneqq \frac{1}{\sqrt{2^\lambda}} \sum_{\mu} \ket{\mu} \ket{\mathsf{SHVZK}.\mathsf{Sim}(r;\mu)}$.
\item Let $\Meas{\mathsf{Sim}} \coloneqq \BMeas{\BProj{\mathsf{Sim}}}$, where $\BProj{\mathsf{Sim}} \coloneqq \sum_{r} \ketbra{r} \otimes \ketbra{\mathsf{Sim}_r} \otimes \mathbf{I}$.
\item Let $\Meas{\mathcal{R}} \coloneqq (\BProj{r})_{r \in R}$, where $\BProj{r} \coloneqq U_{V^*}^{\dagger} \ketbra{r}_{\mathcal{R}} U_{V^*}$.
\item Let $\Meas{\mathsf{com}} \coloneqq \BMeas{\BProj{\mathsf{com}}}$, where \[\BProj{\mathsf{com}} \coloneqq \sum_{\substack{r,\omega \\ \mathsf{Commit}(\mathsf{ck},r,\omega) = \mathsf{com}}} \ketbra{r,\omega}~.\]
\end{itemize}
\begin{minipage}{0.9\textwidth}
\noindent $\mathsf{Sim}^{V^*}$:
\begin{enumerate}[nolistsep]
\item Run $V^*(\mathsf{ck})$ for $\mathsf{ck} \gets \mathsf{Gen}(1^\lambda)$ to obtain a commitment $\mathsf{com}$.
\item Generate the state $\ket{0}_{\mathcal{R}'} \ket{\mathsf{Sim}_0}_{\mathcal{M},\mathcal{A},\mathcal{Z}}$.
\item \label[step]{step:gk-svt-forward} Apply $\widehat{\mathsf{EDT}}[\Meas{\mathsf{com}},\Meas{\mathsf{Sim}},\Meas{\mathcal{R}}]$, obtaining outcome $(b,r)$ (in superposition). Measure $b$.
\item \label[step]{step:gk-replace-state} If $b = 1$, measure $r$ and replace the state on $\mathcal{R}',\mathcal{M},\mathcal{A},\mathcal{Z}$ with $\ket{r}_{\mathcal{R}'}\ket{\mathsf{Sim}_r}_{\mathcal{M},\mathcal{A},\mathcal{Z}}$.
\item \label[step]{step:gk-svt-reverse} Apply $\widehat{\mathsf{EDT}}[\Meas{\mathsf{com}},\Meas{\mathsf{Sim}},\Meas{\mathcal{R}}]^{\dagger}$.
\item \label[step]{step:gk-final-meas} Measure register $\mathcal{A}$, obtaining outcome $a$. Apply $U_{V^*}$ and measure $\mathcal{R},\mathcal{W}$ to obtain $(r',\omega)$; if $\mathsf{Commit}(\mathsf{ck},r',\omega) \neq \mathsf{com}$, stop and output the view of $V^*$. Otherwise, measure $\mathcal{Z}$, obtaining outcome $z$. Send $z$ to $V^*$ and output the view of $V^*$.
\end{enumerate}
\end{minipage}
\begin{lemma}
If $\mathsf{Com}$ is a collapse-binding commitment then $\mathsf{Sim}^{V^*}(\bm{\rho})$ is computationally indistinguishable from $\mathsf{out}_{V^*} \langle P,V^* \rangle$. $\mathsf{Sim}^{V^*}$ is an $\mathsf{EQPT}_{c}$ algorithm.
\end{lemma}
\begin{proof}
By \cref{claim:vrsvt-mreqpt}, $\mathsf{EDT}[\Meas{\mathsf{com}},\Meas{\mathsf{Sim}},\Meas{\mathcal{R}}]$ is $\mathsf{EQPT}_{m}$, and so $\mathsf{Sim}^{V^*}$ is $\mathsf{EQPT}_{c}$.
We consider three hybrid simulators $H_1,H_2,H_3$, as follows. All three are provided with some witness $w$ such that $(x,w) \in \mathfrak{R}$. We first define $H_1$.
\medskip \begin{minipage}{0.9\textwidth}
$H_1^{V^*}(x,w)$:
\begin{enumerate}[nolistsep]
\item Run $V^*(\mathsf{ck})$ for $\mathsf{ck} \gets \mathsf{Gen}(1^\lambda)$ to obtain a commitment $\mathsf{com}$.
\item Generate the state $\ket{P} \coloneqq \sum_{\mu} \ket{\mu}_{\mathcal{M}} \ket{P_{\Sigma}(x,w;\mu)}_{\mathcal{A}}$.
\item \label[step]{step:gk-hybrid-1-svt-forward} Let $\Meas{P} \coloneqq \BMeas{\ketbra{P}}$. Apply $\widehat{\mathsf{EDT}}[\Meas{\mathsf{com}},\Meas{P},\Meas{\mathcal{R}}]$, obtaining outcome $(b,r)$ (in superposition). Measure $b$.
\item[4-6.] As in $\mathsf{Sim}$.
\end{enumerate}
\end{minipage} \medskip
$H_1$ is indistinguishable from $\mathsf{Sim}$ by \cref{lemma:proj-indist}: $H_1$ is obtained from $\mathsf{Sim}$ by replacing $\ket{\mathsf{Sim}_0}$ and $\Meas{\mathsf{Sim}}$ with $\ket{P}$ and $\Meas{P}$; the simulator interacts only with the $\mathcal{A}$ register, and the distributions on $a$ induced by $(a,z) \gets \mathsf{SHVZK}.\mathsf{Sim}(0;\mu)$ and $a \gets P_{\Sigma}(x,w;\mu')$ are computationally indistinguishable.
\medskip \begin{minipage}{0.9\textwidth}
$H_2^{V^*}(x,w)$:
\begin{enumerate}[nolistsep]
\item[1-3.] As in $H_1$.
\setcounter{enumi}{3}
\item \label[step]{step:gk-hybrid-2-replace} If $b = 1$, measure $r$ and replace the state on $\mathcal{M},\mathcal{A},\mathcal{Z}$ with
\[ \ket{P_r} = \sum_{\mu} \ket{\mu}_{\mathcal{M}} \ket{P_{\Sigma}(x,w;\mu)}_{\mathcal{A}} \ket{P_{\Sigma}(x,w,r;\mu)}_{\mathcal{Z}}. \]
\item Let $\Meas{P,r} \coloneqq \BMeas{\ketbra{P_r}}$. Apply $\widehat{\mathsf{EDT}}[\Meas{\mathsf{com}},\Meas{P,r},\Meas{\mathcal{R}}]^{\dagger}$.
\item As in $\mathsf{Sim}$.
\end{enumerate}
\end{minipage} \medskip
By the SHVZK guarantee, the distributions on $(a,z)$ given by $a \gets P_{\Sigma}(x,w;\mu), z \gets P_{\Sigma}(x,w,r;\mu)$ and $(a,z) \gets \mathsf{SHVZK}.\mathsf{Sim}(r;\mu')$ are computationally indistinguishable.
Hence by \cref{lemma:proj-indist}, $H_1$ and $H_2$ are computationally indistinguishable.
By the correctness guarantee of $\mathsf{Transform}$, if $b = 1$ then the state $\bm{\rho}$ at the beginning of \cref{step:gk-hybrid-2-replace} satisfies $\Tr(\ketbra{P} \bm{\rho}) \geq 1 - \delta$. Note that $\ket{P}$ and $\ket{P_r}$ are related by an efficient local isometry $T_r \colon \mathcal{M} \to \mathcal{M} \otimes \mathcal{Z}$. Hence \cref{step:gk-hybrid-2-replace} is $\sqrt{\delta}$-close in trace distance to an application of this isometry. Switching to this state, we can commute the isometry through $\widehat{\mathsf{EDT}}[\Meas{\mathsf{com}},\Meas{P,r},\Meas{\mathcal{R}}]^{\dagger}$, which conjugates it to $\widehat{\mathsf{EDT}}[\Meas{\mathsf{com}},\Meas{P},\Meas{\mathcal{R}}]^{\dagger}$. This leads to the third hybrid, below.
\medskip \begin{minipage}{0.9\textwidth}
$H_3^{V^*}(x,w)$:
\begin{enumerate}[nolistsep]
\item[1-3.] As in $H_2$.
\setcounter{enumi}{3}
\item \label[step]{step:gk-hybrid-3-collapse} If $b = 1$, measure $r$.
\item Apply $\widehat{\mathsf{EDT}}[\Meas{\mathsf{com}},\Meas{P},\Meas{\mathcal{R}}]^{\dagger}$.
\item Apply $U_{V^*}$ and measure $\mathcal{R},\mathcal{W}$ to obtain $(r',\omega)$. If $\mathsf{Commit}(\mathsf{ck},r',\omega) \neq \mathsf{com}$, stop and output the view of $V^*$. Otherwise, apply $T_{r'}$ to $\mathcal{M}$ and measure $\mathcal{Z}$, obtaining outcome $z$. Send $z$ to $V^*$ and output the view of $V^*$.
\end{enumerate}
\end{minipage} \medskip
$H_3$ is statistically close to $H_2$ provided that $\Pr[r = r'] = 1 - {\rm negl}(\lambda)$. Moreover, the collapsing property of the commitment implies that \cref{step:gk-hybrid-3-collapse} is computationally undetectable. If this step is removed then the effect of \cref{step:gk-svt-forward,step:gk-svt-reverse} is simply to apply $\Meas{\mathsf{com}}$; the output is then precisely the view of $V^*$ in a real execution.
Finally, $r = r'$ holds with all but negligible probability by the unique message-binding of the commitment scheme~(\cref{lemma:collapse-binding-unique-message}). \qedhere
\end{proof}
\section*{Acknowledgments}
We thank Zvika Brakerski, Ran Canetti, Yael Kalai, Vinod Vaikuntanathan, and Mark Zhandry for helpful discussions. NS is supported by DARPA under Agreement No. HR00112020023.
\section{Technical Overview}
In this section, we describe our techniques for proving our results on state-preserving extraction (\cref{thm:succinct-state-preserving,thm:state-preserving-wi}) and post-quantum zero knowledge (\cref{thm:szk}, \cref{thm:feige-shamir}, and \cref{thm:gk}). Finally, we discuss related work in \cref{sec:related-work}.
\subsection{Defining Expected Quantum Polynomial Time Simulation}
\label{sec:tech-eqpt}
In order to clearly present our results on zero knowledge, we begin with a detailed discussion of our model of expected quantum polynomial time simulation and how it relates to the \cite{FOCS:CCLY21} impossibility result.
\paragraph{Why is EQPT simulation hard to define?}
Recall from \cref{sec:intro-creqpt} that \cite{FOCS:CCLY21} rules out zero-knowledge simulators in a class of computations that we formalize as measured-runtime expected quantum polynomial-time ($\mathsf{EQPT}_{m}$). An $\mathsf{EQPT}_{m}$ computation takes as input a state $\ket{\psi}_{\mathcal{A}}$, initializes a large ancilla/workspace register $\ket{0}_{\mathcal{W}, \mathcal{B}}$ and state register $\ket{q_0}_{\mathcal{Q}}$ (where $\ket{q_0}$ denotes the initial state of a quantum Turing machine), then repeatedly applies some fixed transition unitary $U_{\delta}$ to $\mathcal{A} \otimes \mathcal{W} \otimes \mathcal{B} \otimes \mathcal{Q}$. After each application of $U_{\delta}$, $\mathcal{Q}$ is measured (applying some $(\Pi_f, \mathbf{I} - \Pi_f)$) to determine if the computation is in the ``halt state'' $\ket{q_f}$; the computation halts if the outcome of this measurement is $1$. A computation is $\mathsf{EQPT}_{m}$ if the expected number of steps before halting is polynomial for all inputs $\ket{\psi}_{\mathcal{A}}$.
Our $\mathsf{EQPT}_{m}$ definition is based on the definition of a quantum Turing machine (QTM) given in the seminal work of Deutsch~\cite{Deutsch85} (though we use a halt state \cite{Ozawa98a} in place of Deutsch's halt qubit). Note that the operation of a QTM is unitary \emph{except} for the measurement of whether the machine has halted. The validity of this ``halting scheme'' was the subject of some debate in a sequence of later works~\cite{Myers97,Ozawa98a,LindenP98,Ozawa98b}.
While the particulars of this debate are not so important here, there was a clear message: the reversibility of a QTM implies that the runtime of any QTM computation is \emph{always} effectively measured, even if there is no explicit monitoring of the halt state. Intuitively, this is because a QTM that has halted must, when reversed, know when to ``un-halt''; this requires counting the number of computation steps since the machine halted.
It was observed by \cite{LindenP98} that this prevents ``useful interference'' between branches of a QTM computation with different runtimes. That is, each branch of the computation is entangled with a description of its runtime, which prevents the branches from interfering with one another. Because interference is crucial in the design of efficient quantum algorithms, this is considered a major drawback of the QTM model. The now-standard definitions of \emph{efficient} quantum computation \cite{BV97,BBBV97} deliberately avoid this problem by restricting quantum Turing machines to have a \emph{fixed} runtime; these QTMs are effectively uniform quantum circuit families.
This phenomenon underpins the \cite{FOCS:CCLY21} impossibility result. Both in the classical \cite{STOC:BarLin02} and quantum \cite{FOCS:CCLY21} settings, there do not exist \emph{strict} polynomial time black-box simulators for constant-round protocols. It follows that such a simulator must have a variable runtime. By the observation of \cite{LindenP98}, simulation branches with different runtimes do not interfere. \cite{FOCS:CCLY21} leverage this by designing an adversary which can \emph{detect} this absence of interference.
\paragraph{Can we avoid measuring the runtime?}
The above discussion suggests that the $\mathsf{EQPT}_{m}$ model (i.e., quantum Turing machines in Deutsch's model~\cite{Deutsch85} with expected polynomial runtime) may not capture arbitrary efficient quantum computation. In particular, we ask whether it is possible to formalize a model in which the runtime is \emph{not} measured. Such a model could potentially avoid the \cite{FOCS:CCLY21} impossibility result.
Our solution is to formalize computations in which the runtime of an $\mathsf{EQPT}_{m}$ subcomputation is left \emph{in superposition} and can later be \emph{uncomputed}. To describe our formalism in more detail, we first briefly discuss coherent computation.
\paragraph{Coherent computation.}
It is well known that any quantum operation $\Phi$ on a state $\ket{\psi}$ can be realized in three steps:
\begin{inparaenum}[(1)]
\item prepare some ancilla qubits in a fixed state $\ket{0}$;
\item apply a unitary operation $U_{\Phi}$ to both $\ket{\psi}$ and the ancilla;
\item discard (trace out) the ancilla.
\end{inparaenum}
We refer to $U_{\Phi}$ as a \emph{unitary dilation} of $\Phi$. $U_{\Phi}$ is not uniquely determined by $\Phi$, but all such dilations are related by an isometry acting only on the ancilla system.
Since an $\mathsf{EQPT}_{m}$ computation is a quantum operation, it has a unitary dilation. In fact, we can choose a unitary dilation with a natural explicit form that we call a ``coherent implementation'', as shown in \cref{fig:coherent-qtm}.
\begin{figure}[tbhp]
\centering
\begin{quantikz}
\lstick[wires=1]{$\ket{0}_{\mathcal{B}_T}$} & \qw & \qw & \qw & \qw & \qw\ \cdots\ & \targ{}\vqw{4} & \octrl{4} & \qw \\
& \vdots & & \vdots \\
\lstick[wires=1]{$\ket{0}_{\mathcal{B}_2}$} & \qw & \qw & \targ{}\vqw{2} & \octrl{2} & \qw\ \cdots\ & \qw & \qw & \qw \\
\lstick[wires=1]{$\ket{0}_{\mathcal{B}_1}$} & \targ{}\vqw{1} & \octrl{1} & \qw & \qw & \qw\ \cdots\ & \qw & \qw & \qw \\
\lstick[wires=1]{$\ket{q_0}_{\mathcal{Q}}$} & \gate{\Pi_f} & \gate[wires=3][0.8cm]{U_{\delta}} & \gate{\Pi_f} & \gate[wires=3][0.8cm]{U_{\delta}} & \qw \ \cdots\ & \gate{\Pi_f} & \gate[wires=3][0.8cm]{U_{\delta}} & \qw \\
\lstick[wires=1]{$\ket{0^T}_{\mathcal{W}}$} \qwbundle[alternate]{} & \qwbundle[alternate]{} & \qwbundle[alternate]{} & \qwbundle[alternate]{} & \qwbundle[alternate]{} & \qwbundle[alternate]{} \ \cdots\ & \qwbundle[alternate]{} & \qwbundle[alternate]{} & \qwbundle[alternate]{} \\
\lstick{$\ket{\psi}_{\mathcal{A}}$} \qw & \qw & \qw & \qw & \qw & \qw\ \cdots\ & \qw & \qw & \qw &
\end{quantikz}
\caption{A coherent implementation $U$ (unitary dilation) of a quantum Turing machine with transition function $\delta$. The open circles indicate that the $i$th $U_{\delta}$ is applied when $\mathcal{B}_i$ contains $\ket{0}$. See \cref{sec:qtms} for more details.}\label{fig:coherent-qtm}
\end{figure}
Note that since we think of $T$ as being exponentially large, the unitary $U$ (corresponding to~\cref{fig:coherent-qtm}) is of exponential size. However, as long as the ancilla $\mathcal{W} \otimes \mathcal{B}$ (where $\mathcal{B} \coloneqq \mathcal{B}_1 \otimes \cdots \otimes \mathcal{B}_T$) is initialized to zero and $\mathcal{Q}$ is initialized to $\ket{q_0}$, the effect of $U$ on $\mathcal{A}$ is identical to the original $\mathsf{EQPT}_{m}$ computation. Indeed, the only difference from the original computation is that the runtime is written (in unary) on $\mathcal{B}$ and left in superposition. This means, in particular, that circuits making a single black-box query to a coherent implementation $U$ of an $\mathsf{EQPT}_{m}$ computation (and that cannot otherwise access $\mathcal{B}$) can only perform $\mathsf{EQPT}_{m}$ computations.
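To make \cref{fig:coherent-qtm} concrete, the following toy numpy simulation (ours; the workspace $\mathcal{W}$ is omitted, $\mathcal{Q}$ is a single qubit with halt state $\ket{1}$, and $U_\delta$ is an arbitrary unitary) checks the two properties just described: tracing out $\mathcal{B}$ after applying $U$ reproduces exactly the measured-runtime computation, while $U$ itself remains invertible.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 3                                     # truncate to T QTM steps
I2 = np.eye(2); X = np.array([[0., 1.], [1., 0.]])
P1 = np.diag([0., 1.])                    # Pi_f = |q_f><q_f| on Q (halt = |1>)

# Arbitrary transition unitary U_delta on A (x) Q (workspace W omitted).
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Ud, _ = np.linalg.qr(M)

def kron_all(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# One coherent step i (registers ordered A, Q, B_1, ..., B_T):
# flip B_i if Q is halted, then apply U_delta only if B_i = 0.
def step(i):
    F = (kron_all(I2, P1, *[X if j == i else I2 for j in range(T)])
         + kron_all(I2, I2 - P1, *[I2] * T))
    ctrl0 = kron_all(*[np.diag([1., 0.]) if j == i else I2 for j in range(T)])
    ctrl1 = kron_all(*[np.diag([0., 1.]) if j == i else I2 for j in range(T)])
    CU = np.kron(Ud, ctrl0) + np.kron(np.eye(4), ctrl1)
    return CU @ F

U = np.eye(4 * 2**T, dtype=complex)
for i in range(T):
    U = step(i) @ U

# Coherent run on |psi>_A |q_0>_Q |0...0>_B, then trace out B.
psiA = rng.normal(size=2) + 1j * rng.normal(size=2)
psiA /= np.linalg.norm(psiA)
ket0 = np.array([[1.], [0.]])
ket = kron_all(psiA.reshape(2, 1), ket0, *[ket0] * T).ravel()
out = U @ ket
rho_AQ = np.einsum('ibjb->ij', np.outer(out, out.conj()).reshape(4, 2**T, 4, 2**T))

# Measured-runtime version: measure halt each step; apply U_delta if not halted.
Pi = np.kron(I2, P1)
v = np.kron(psiA, [1., 0.])
sigma = np.outer(v, v.conj())
for _ in range(T):
    sigma = Pi @ sigma @ Pi + Ud @ (np.eye(4) - Pi) @ sigma @ (np.eye(4) - Pi) @ Ud.conj().T

assert np.allclose(rho_AQ, sigma)                        # same channel on A (x) Q
assert np.allclose(U.conj().T @ U, np.eye(4 * 2**T))     # and U is reversible
```

The check works because each halting time leaves a distinct unary pattern on $\mathcal{B}$, so tracing out $\mathcal{B}$ decoheres branches with different runtimes exactly as the halt measurements do.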
\paragraph{Our formalism: coherent-runtime EQPT.}
The advantage of moving to coherent implementations is that, unlike the original computation, $U$ has an \emph{inverse} $U^{\dagger}$. A coherent-runtime EQPT computation is allowed to invoke both $U$ and $U^{\dagger}$ in a restricted way, as we specify next.
\begin{definition}[Coherent-runtime EQPT (informal)]
\label{def:cr-eqpt-informal}
A computation on a register $\mathcal{X}$ is coherent-runtime EQPT ($\mathsf{EQPT}_{c}$) if it can be implemented by a procedure that can perform the following operations any polynomial number of times:
\begin{enumerate}[nolistsep,label=(\arabic*)]
\item \label{cr-eqpt-polyckt} apply any polynomial-size quantum circuit to $\mathcal{X}$;
\item \label{cr-eqpt-forward} initialize fresh ancillas $\mathcal{W}^{(i)} \otimes \mathcal{B}^{(i)}$ to zero and $\mathcal{Q}^{(i)}$ to $\ket{q_0^{(i)}}$ and apply a coherent implementation $U_i$ of an $\mathsf{EQPT}_{m}$ computation to $\mathcal{W}^{(i)} \otimes \mathcal{B}^{(i)}\otimes \mathcal{Q}^{(i)}$ and a subregister of $\mathcal{X}$;
\item \label{cr-eqpt-inverse} apply $U_i^{\dagger}$ to ancillas $\mathcal{W}^{(i)} \otimes \mathcal{B}^{(i)} \otimes \mathcal{Q}^{(i)}$ and a subregister of $\mathcal{X}$, then discard $(\mathcal{W}^{(i)}, \mathcal{B}^{(i)}, \mathcal{Q}^{(i)})$. For each $U_i$, this operation may occur at any time after performing \ref{cr-eqpt-forward} with respect to $U_i$.
\end{enumerate}
The output of the computation is the residual state on $\mathcal{X}$.
\end{definition}
Note that, because all unitary dilations are equivalent up to local isometry, the map computed by an $\mathsf{EQPT}_{c}$ computation is independent of the particular implementation of $U_i$.
\paragraph{What is the runtime of an $\mathsf{EQPT}_{c}$ computation?}
While procedures performing only \ref{cr-eqpt-polyckt} and \ref{cr-eqpt-forward} are clearly efficient (they are $\mathsf{EQPT}_{m}$), the efficiency of performing $U_i^\dagger$ in \ref{cr-eqpt-inverse} is less immediate. We analyze this in two ways:
\begin{itemize}
\item We prove (\cref{lemma:truncation}) that any $\mathsf{EQPT}_{c}$ computation has strict polynomial-time approximations (obtained by simultaneously truncating each $U_i$ and $U_i^\dagger$ to the same fixed runtime). This tells us that $\mathsf{EQPT}_{c}$ algorithms do not implement ``inefficient'' computations.
\item We give a natural interpretation of ``expected runtime'' under which the expected runtime of $U^\dagger$ (as applied in \ref{cr-eqpt-inverse}) is equal to the expected runtime of $U$.
\end{itemize}
Together, these give us a motivated definition of the expected runtime of an $\mathsf{EQPT}_{c}$ computation.
\cref{lemma:truncation} is proved in \cref{sec:eqpt}. In this overview, we focus on the expected runtime interpretation. For simplicity, we consider the basic form of $\mathsf{EQPT}_{c}$ computation depicted in \cref{fig:cr-eqpt-simple}; we assume in addition that $C_1 = C_3 = \mathbf{I}$ and that the computation always halts in at most $T$ steps.
Let $U^{(t)}$ be the truncation of $U$ to just after the $t$-th controlled application of $U_{\delta}$. Observe that after applying $U$ followed by $C_2$ on input state $\ket{\phi}$, the state of the system can be described as a superposition over $t$ of the applications of the $U^{(t)}$:
\[ \ket{\Psi}
= \sum_{t=0}^{T} \alpha_t \ket{0^t 1^{T-t}}_{\mathcal{B}} \ket{q_f}_{\mathcal{Q}} \ket{\phi_t}_{\mathcal{W},\mathcal{X}}
= \sum_{t=0}^{T} \alpha_t (\mathbf{I} \otimes C_2) U^{(t)} \ket{0^t 1^{T-t}}_{\mathcal{B}} \ket{q_0}_{\mathcal{Q}} \ket{0}_{\mathcal{W}} \ket{\phi}_{\mathcal{X}}
\]
for some states $\ket{\phi_t}$ and $\alpha_t \in \mathbb{C}$ where $|\alpha_t|^2$ is the probability that the $\mathsf{EQPT}_{m}$ computation halts in $t$ steps. The latter equality holds because if the computation halts at step $t$, the effect of the last $T-t$ steps of $U$ is only to flip $\mathcal{B}_{t+1},\ldots,\mathcal{B}_T$ from $\ket{0}$ to $\ket{1}$ (and $C_2$ does not act on $\mathcal{B}$).
We emphasize that since $U$ is a coherent implementation of an $\mathsf{EQPT}_{m}$ computation, we know that
\[ \sum_t |\alpha_t|^2 \cdot t = {\rm poly}(\lambda),
\]
as the left-hand side is equal to the expected runtime of $U$ as an $\mathsf{EQPT}_{m}$ computation.
Now, for any state $\ket{\psi}$ on $\mathcal{W} \otimes \mathcal{X}$, we can also express an application of $U^\dagger$ to $\mathcal{B} \otimes \mathcal{Q} \otimes \mathcal{W} \otimes \mathcal{X}$ entirely in terms of the unitary $U^{(t)}$, where $t$ is the contents of the $\mathcal{B}$ register. Specifically, we have
\[ U^{\dagger} \ket{0^t 1^{T-t}}_{\mathcal{B}} \ket{q_f}_{\mathcal{Q}} \ket{\psi} = (U^{(t)})^{\dagger} \ket{0^T}_{\mathcal{B}} \ket{q_f}_{\mathcal{Q}} \ket{\psi} \]
because the effect of the first $T-t$ steps of $U^{\dagger}$ on this state is only to flip $\mathcal{B}_{t+1},\ldots,\mathcal{B}_T$ from $\ket{1}$ to $\ket{0}$. As a result, the final state of the system (after the entire $\mathsf{EQPT}_{c}$ computation) is
\[ U^{\dagger} \ket{\Psi} = \sum_{t=0}^{T} \alpha_t U^{\dagger} (\mathbf{I} \otimes C_2) U^{(t)} \ket{0^t 1^{T-t}}_{\mathcal{B}} \ket{\phi}
= \sum_{t=0}^{T} \alpha_t (U^{(t)})^{\dagger} (\mathbf{I} \otimes C_2) U^{(t)} \ket{0^T}_{\mathcal{B}} \ket{\phi}. \]
We can interpret this to mean that within the ``branch'' of the superposition where $U$ ran in time $t$, the running time of $U^{\dagger}$ is also $t$, even if an arbitrary computation $C_2$ has been applied to $\mathcal{X}$ in between the applications of $U$ and $U^\dagger$. This gives an intuitive explanation for how $\mathsf{EQPT}_{c}$ computations are efficient: they simply compute an $(\alpha_t)_t$-superposition over branches in which $U$ and $U^\dagger$ together ran for $2t$ steps, such that the expectation $\sum_t |\alpha_t|^2 \cdot t$ is polynomial! Curiously, the \cite{LindenP98} reversibility issue indicates that such a computation cannot be implemented by an $\mathsf{EQPT}_{m}$ quantum Turing machine, which is what necessitates our new $\mathsf{EQPT}_{c}$ definition.
With all of this as motivation, we define the expected running time of an $\mathsf{EQPT}_{c}$ computation of the form $(U, C_1 = \mathbf{I}, C_2, C_3 = \mathbf{I})$ to be the appropriate linear combination of the branch runtimes, which is
\[ \sum_t |\alpha_t|^2 \cdot (2t + \mathsf{time}(C_2)) = 2\cdot \mathsf{time}(U) + \mathsf{time}(C_2),
\]
where $\mathsf{time}(U)$ is the expected running time of $U$ as an $\mathsf{EQPT}_{m}$ computation and $\mathsf{time}(C_2)$ is the (strict) running time of $C_2$. \cref{lemma:truncation} (whose proof makes use of this analysis) provides additional justification for this definition.
\paragraph{Extension to multiple $U_i$.} Everything we have discussed so far extends to the general case of \cref{def:cr-eqpt-informal}. However, we emphasize that the above analysis crucially relies on the ancilla being well-formed. This is the reason that $\mathsf{EQPT}_{c}$ algorithms have restricted access to $U_i,U_i^{\dagger}$: removing any of these restrictions could lead to applying these operations on malformed ancillas. Indeed, one can show that allowing an algorithm to apply $U_i,U_i^{\dagger},U_i$ to the same ancilla register would enable it to perform exponential-time computations.
\vspace{10pt}
Having established our computational model for simulation/extraction, we now give a detailed overview of our simulation and extraction techniques.
\subsection{Post-Quantum ZK for \cite{FOCS:GolMicWig86} and~\cite{STOC:FeiSha90} from Guaranteed Extraction}
The central idea behind our proofs of post-quantum ZK for the \cite{FOCS:GolMicWig86} GNI protocol (\cref{thm:szk}) and a variant of the~\cite{STOC:FeiSha90} protocol for $\mathsf{NP}$ (\cref{thm:feige-shamir}) is \emph{state-preserving extraction} (\cref{def:state-preserving-extraction}). Given a state-preserving extractor of the appropriate ``one-out-of-two graph isomorphism'' subroutine, proving the post-quantum ZK for the \cite{FOCS:GolMicWig86} GNI protocol (\cref{thm:szk}) follows easily, as simulating a cheating verifier immediately reduces to performing a state-preserving extraction of the verifier's (uniquely determined) bit $b$ such that $H\simeq G_b$. Proving post-quantum ZK for the~\cite{STOC:FeiSha90} protocol (\cref{thm:feige-shamir}) is more complicated because the Feige--Shamir protocol is a concurrent composition of two different protocols; we refer the reader to \cref{sec:feige-shamir} for details on its analysis.
In this subsection, we show that state-preserving extraction reduces to a related task that we call \emph{guaranteed extraction}; achieving the latter will be the focus of \cref{sec:tech-overview-hpe}.
Consider a $3$-message\footnote{Throughout our discussion of proofs of knowledge, we focus on the case of $3$- and $4$-message protocols. We sometimes ignore the first verifier message $\mathsf{vk}$ in a 4-message protocol for notational convenience.} public coin classical proof of knowledge $(P_{\Sigma}, V_{\Sigma})$ satisfying \emph{special soundness}:\footnote{This particular special soundness assumption is also for convenience; we later describe generalizations of special soundness for which we have results.} for any prover first message $a$ and any \emph{pair} of accepting transcripts $(a, r, z), (a, r', z')$ on different challenges $r \neq r'$, it is possible to extract a witness from $(a, r, z, r', z')$. For any such protocol, in the classical setting, it is possible to extract a witness from a cheating prover $P^*$ as follows:
\begin{itemize}
\item Given a cheating prover $P^*$, the extractor first generates a single transcript $(a,r,z)$ by running $P^*$ to obtain $a$, and then running it on a random $r$ to get $z$. If the transcript is rejecting, the extractor gives up.
\item If the transcript is accepting, the extractor rewinds $P^*$ to the point after $a$ was sent, and then repeatedly sends i.i.d. challenges $r_1, r_2, \hdots$ until $P^*$ produces \emph{another} accepting transcript.
\end{itemize}
As long as the prover has significantly greater than $2^{-\lambda}$ probability of convincing the verifier, the second accepting transcript $(a, r', z')$ produced will satisfy $r \neq r'$ with all but negligible probability, and thus a witness can be computed. In other words, this extractor \emph{guarantees} (with all but negligible probability) that a witness is extracted conditioned on an initial accepting execution. Moreover, for \emph{any} efficient $P^*$, the expected runtime of this procedure is ${\rm poly}(\lambda)$, since if $P^*$ (with some fixed random coins) is convincing with probability $p$, the expected number of rewinds in this procedure is $\frac 1 p$ and thus the overall expected number of rewinds is $p \cdot \frac 1 p = 1$.
In the quantum setting, one might hope for a similar ``guaranteed'' extractor, but prior works~\cite{EC:Unruh12,EC:Unruh16,FOCS:CMSZ21} fail to achieve this. Indeed, \cite[Page 32]{EC:Unruh12} explicitly asks whether something of this nature is possible.
Our first idea is to abstractly define a quantum analogue of this ``guaranteed'' extraction property and show that under certain conditions, it generically implies state-preserving extraction. Since the classical problem can only be solved in \emph{expected} polynomial time, there is again an ambiguity in what the quantum efficiency notion should be. However, it turns out that there is no \cite{FOCS:CCLY21}-type impossibility result for the problem of guaranteed extraction, so we demand the stronger $\mathsf{EQPT}_{m}$ extraction efficiency notion.
\begin{definition}\label{def:high-probability-extraction}
$(P_{\Sigma}, V_{\Sigma})$ is a post-quantum proof of knowledge with \emph{guaranteed extraction} if it has an extractor $\mathsf{Extract}^{P^*}$ of the following form.
\begin{itemize}[noitemsep]
\item $\mathsf{Extract}^{P^*}$ first runs the cheating prover $P^*$ to generate a (classical) first message $a$.
\item $\mathsf{Extract}^{P^*}$ runs $P^*$ coherently on the superposition $\sum_{r \in R} \ket{r}$ of all challenges to obtain a superposition $\sum_{r,z} \alpha_{r, z} \ket{r, z}$ over challenge-response pairs.\footnote{In general, the response $z$ will be entangled with the prover's state; here we suppress this dependence.}
\item $\mathsf{Extract}^{P^*}$ then computes (in superposition) the verifier's decision $V(x, a, r, z)$ and measures it. If the measurement outcome is $0$, the extractor gives up.
\item If the measurement outcome is $1$, run some quantum procedure $\mathsf{FindWitness}^{P^*}$ that outputs a string $w$.
\end{itemize}
We require that the following two properties hold.
\begin{itemize}[noitemsep]
\item \textbf{Correctness (guaranteed extraction):} The probability that the initial measurement returns $1$ but the output witness $w$ is invalid is ${\rm negl}(\lambda)$.
\item \textbf{Efficiency:} For any QPT $P^*$, the procedure $\mathsf{Extract}^{P^*}$ is in $\mathsf{EQPT}_{m}$.
\end{itemize}
\end{definition}
\noindent We claim that under suitable conditions, this kind of guaranteed extraction \emph{generically} implies state-preserving extraction, where the extractor will be $\mathsf{EQPT}_{c}$ rather than $\mathsf{EQPT}_{m}$. We describe the simplest example of these conditions: when the $\mathsf{NP}$ language itself is in $\mathsf{UP}$ (i.e. witnesses are unique).
\begin{lemma}[see \cref{lemma:state-preserving-high-probability}]\label{lemma:tech-overview-state-preserving-high-probability}
If $(P_{\Sigma}, V_{\Sigma})$ is a post-quantum proof of knowledge with guaranteed extraction for a language with unique witnesses, then $(P_{\Sigma}, V_{\Sigma})$ is a state-preserving proof of knowledge with $\mathsf{EQPT}_{c}$ extraction.
\end{lemma}
\noindent \cref{lemma:tech-overview-state-preserving-high-probability} can be extended to higher generality. For example, informally:
\begin{enumerate}
\item We can also extract ``partial witnesses'' that are uniquely determined by the instance $x$.
\item We can extract undetectably when the first message $a$ ``binds'' the prover to a single witness in the sense that the guaranteed extractor will only output this one witness (even if many others exist).
\item This can also be extended to certain protocols whose first messages are informally ``collapse-binding'' \cite{EC:Unruh16} to the witness.
\end{enumerate}
These generalizations are formalized in \cref{sec:state-preserving} using the notion of a ``witness-binding protocol'' (\cref{def:witness-binding}). In this overview, we give a proof for the ``unique witness'' setting.
\begin{proof}[Proof sketch]
Let $\mathsf{Extract}^{P^*}$ be a post-quantum guaranteed extractor with associated subroutine $\mathsf{FindWitness}^{P^*}$. We will present an $\mathsf{EQPT}_{c}$ extractor $\overline{\mathsf{Extract}}^{P^*}$ that has the form of an $\mathsf{EQPT}_{c}$ computation (see~\cref{fig:cr-eqpt-simple}) where the unitary $U$ is a coherent implementation of $\mathsf{FindWitness}^{P^*}$.
\begin{remark} This is an oversimplification of our real state-preserving extractor. In particular, $\mathsf{Extract}^{P^*}$ as described in this overview does not fit the $\mathsf{EQPT}_{c}$ model because $\mathsf{FindWitness}^{P^*}$ is not necessarily an $\mathsf{EQPT}_{m}$ computation --- its running time is only expected polynomial when viewed as a subroutine of $\mathsf{Extract}^{P^*}$, which runs $\mathsf{FindWitness}^{P^*}$ with some probability (which may be negligible) and moreover, only runs it on inputs consistent with the verifier decision $V(x,a,r,z) = 1$. In \cref{sec:state-preserving}, we formally demonstrate that our state-preserving extractor is $\mathsf{EQPT}_{c}$ by showing that it can be written in the form of~\cref{fig:cr-eqpt-simple} where the unitary $U$ is a coherent implementation of the $\mathsf{EQPT}_{m}$ procedure
$\mathsf{Extract}^{P^*}$.
\end{remark}
\noindent Our (simplified) $\mathsf{EQPT}_{c}$ extractor $\overline{\mathsf{Extract}}^{P^*}$ is defined as follows.
\begin{itemize}
\item Given $P^*$, generate a first message $a$ and superposition $\sum_{r, z} \alpha_{r, z} \ket{r, z}$ as in $\mathsf{Extract}^{P^*}$.
\item Compute the verifier's decision bit $V(x, a, r, z)$ in superposition and then measure it. If the measurement outcome is $0$, measure $r, z$ and terminate, outputting $(a, r, z, w=\bot)$ along with the current prover state.
\item If the measurement outcome is $1$, let $\ket{\psi}_{\mathcal{H}}$ denote the current prover state. For simplicity, assume that $\ket{\psi}_{\mathcal{H}}$ includes the superposition over $(r,z)$ and space to write the extracted witness. The next steps are:
\begin{itemize}
\item Run $U$ on input $\ket{\psi}_{\mathcal{H}} \otimes \ket{0}_{\mathcal{B}, \mathcal{W}}$.
\item Measure the sub-register of $\mathcal{H}$ containing the witness $w$.
\item Run $U^\dagger$.
\item Measure the sub-register of $\mathcal{H}$ containing the current transcript $r, z$.
\item Return $(a, r, z, w)$ and the residual prover state (i.e., the rest of $\mathcal{H}$).
\end{itemize}
\end{itemize}
Extraction correctness follows from the correctness of $\mathsf{FindWitness}^{P^*}$. Moreover, one can see that $\overline{\mathsf{Extract}}^{P^*}$ is state-preserving by considering two cases:
\begin{itemize}
\item \textbf{Case 1:} The initial measurement returns $0$. In this case, the transcript $(r, z)$ is immediately measured, and the resulting (sub-normalized) state exactly matches the component of the post-interaction $P^*$ view corresponding to when the verifier rejects.
\item \textbf{Case 2:} The initial measurement returns $1$. In this case, the procedure $\mathsf{FindWitness}^{P^*}$ would output a valid witness with probability $1-{\rm negl}$, so the output register of $U (\ket{\psi}_{\mathcal{A}} \otimes \ket{0}_{\mathcal{B}, \mathcal{W}})$ contains a valid witness with probability $1-{\rm negl}$. Since we assumed that the language $L$ is in $\mathsf{UP}$, this witness register is actually \emph{deterministic}, so measuring it is computationally (even statistically!) undetectable, and hence after applying $U^\dagger$ the resulting state $\ket{\psi'}$ is computationally indistinguishable from $\ket{\psi}$. Thus, the output of the extractor the measured witness $w$ along with a view that is computationally indistinguishable from the view of $P^*$ corresponding to when the verifier accepts.
\end{itemize}
This completes the proof sketch. \qedhere
\end{proof}
\paragraph{How do we apply \cref{lemma:tech-overview-state-preserving-high-probability}?} We now describe how to instantiate $\Sigma$-protocols so that the reduction in \cref{lemma:tech-overview-state-preserving-high-probability} applies (see \cref{sec:state-preserving-examples}).
First, we note that the \emph{un-repeated} variants of standard proofs of knowledge \cite{FOCS:GolMicWig86,Blum86} are ``witness-binding'' in the informal sense of the generalization (2); an extractor run on such protocols will only output a witness consistent with the commitment string $a$. However, since the un-repeated protocols only have constant (or worse) soundness error, there is no guaranteed extraction procedure for them (even in the classical setting).
In order to obtain negligible soundness error, these protocols are typically repeated in parallel; in this case, we \emph{do} show guaranteed extraction procedures, but the protocols \emph{lose} the witness-binding property (2). This is because each ``slot'' of the parallel repetition may be consistent with a different witness, and the extractor has no clear way of outputting a canonical one. In this case, measuring the witness potentially disturbs the prover's state by collapsing it to be consistent with the measured witness, which would not happen in the honest execution.
We resolve this issue using \emph{commit-and-prove}. Given a generic $\Sigma$-protocol for which we have a guaranteed extractor, we consider a modified protocol in which the prover sends a (collapsing or statistically binding) commitment $\mathsf{com} = \mathsf{Com}(w)$ to its $\mathsf{NP}$-witness along with a $\Sigma$-protocol proof of knowledge of an opening of $\mathsf{com}$ to a valid $\mathsf{NP}$-witness. When the extractor $\overline{\mathsf{Extract}}^{P^*}$ of \cref{lemma:tech-overview-state-preserving-high-probability} is applied to this protocol composition, the procedure $\mathsf{FindWitness}^{P^*}$ (which is run coherently as $U$) actually obtains \emph{both} an $\mathsf{NP}$ witness $w$ \emph{and} an opening of $\mathsf{com}$ to $w$. Therefore, the collapsing property of $\mathsf{Com}$ says that $w$ can be measured undetectably. In other words, the commit-and-prove compiler enforces a computational uniqueness property sufficient for \cref{lemma:tech-overview-state-preserving-high-probability} to apply. It also turns out that the (original, unmodified) \cite{FOCS:GolMicWig86} graph-nonisomorphism protocol can be viewed as using this commit-and-prove paradigm,\footnote{The verifier sends an instance-dependent commitment \cite{BelMicOst90,JC:ItoOhtShi97,C:MicVad03} of a bit to the prover (which is perfectly binding in the proof of ZK) and demonstrates knowledge of the bit and its opening.} which is one way to understand the proof of \cref{thm:szk}.
Finally, we remark that this commit-and-prove compiler is the cause of the super-polynomial assumptions in \cref{thm:state-preserving-wi,thm:feige-shamir}. This is because in order to show that a commit-and-prove protocol remains witness-indistinguishable, it must be argued that the proof of knowledge does not compromise the hiding of $\mathsf{Com}$, which we only know how to argue by simulating the proof of knowledge in superpolynomial time (and assuming that $\mathsf{Com}$ is superpolynomially secure). This issue does not arise when $\mathsf{Com}$ is statistically hiding and the $\Sigma$-protocol is statistically witness-indistinguishable.
\subsection{Achieving Guaranteed Extraction}
\label{sec:tech-overview-hpe}
So far, we have reduced from state-preserving extraction to the problem of guaranteed extraction. We now describe how we achieve guaranteed extraction for a wide class of $\Sigma$-protocols. Informally, we require that the protocol satisfies two important properties in order to perform guaranteed extraction:
\begin{itemize}
\item \textbf{Collapsing:} Prover responses can be measured undetectably provided that they are valid.
\item \textbf{$k$-special soundness:} It is possible to obtain a witness given $k$ accepting protocol transcripts $(a, r_1, z_1, \hdots, r_k, z_k)$ with distinct $r_i$ (for the same first prover message $a$).
\end{itemize}
Both of these restrictions can be relaxed substantially\footnote{We highlight that the PoK subroutine in the \cite{FOCS:GolMicWig86} graph non-isomorphism protocol is \emph{not} collapsing; it is only collapsing onto its responses of $0$ challenge bits; however, it turns out that this property is still sufficient to obtain guaranteed extraction for the subroutine (see \cref{sec:examples,sec:obtaining-guaranteed-extraction}).} (see \cref{subsec:protocol-prelims,sec:gss,sec:obtaining-guaranteed-extraction} for more details), but we focus on this case for the technical overview.
\begin{theorem}[See \cref{thm:high-probability-extraction}]\label{thm:tech-overview-high-probability}
Any public-coin interactive argument satisfying collapsing and $k$-special soundness is a post-quantum proof of knowledge with guaranteed extraction (in $\mathsf{EQPT}_{m}$).
\end{theorem}
We consider \cref{thm:tech-overview-high-probability} to be an interesting result in its own right and expect it to be useful in future work. We now describe our proof of~\cref{thm:tech-overview-high-probability} over the course of several steps:
\begin{itemize}
\item We begin by describing an abstract template that generalizes the~\cite{FOCS:CMSZ21} extraction procedure in~\cref{sec:overview-cmsz-template}. In this template, the extractor repeatedly (1) queries the adversary on i.i.d. random challenges and then (2) applies a ``repair procedure'' to restore the adversary's success probability.
\item In~\cref{sec:tech-overview-first-attempt}, we describe a natural ``first attempt'' at guaranteed extraction based on the~\cite{FOCS:CMSZ21} template.
\item We then observe in~\cref{sec:tech-overview-not-eqpt} that the entire template is unlikely to achieve guaranteed extraction in expected polynomial time. Perhaps surprisingly (and unlike the classical setting), querying the adversary on i.i.d. challenges appears \emph{too slow} for this extraction task.
\item In~\cref{sec:tech-overview-new-template}, we introduce a new extraction template in which the adversary is \emph{entangled} with a superposition of challenges, and the challenge is only measured once the adversary is guaranteed to give an accepting response.
\item While this new template is a promising idea, we are still far from achieving guaranteed extraction. For the rest of the overview (\cref{sec:tech-overview-no-speedup,sec:tech-overview-speedup,subsubsec:gk-issue}), we outline several technical challenges in instantiating this approach, eventually leading to our final extraction procedure and analysis.
\end{itemize}
\subsubsection{An Abstract \cite{FOCS:CMSZ21} Extraction Template}\label{sec:overview-cmsz-template}
\cite{FOCS:CMSZ21} recently showed that protocols satisfying collapsing and $k$-special soundness are post-quantum proofs of knowledge. Unlike our setting of guaranteed extraction, the \cite{FOCS:CMSZ21} extractor $\mathsf{Extract}^{P^*}(x, \gamma)$ is given \emph{as advice} an error parameter $\gamma$ and and extracts from cheating provers $P^*$ (that may have some initial quantum state) that are convincing with probability $\gamma^* \geq \gamma$. The extractor's success probability is roughly $\frac \gamma 2$.
At a high level, our abstract template makes use of two core subroutines that we call $\mathsf{Estimate}$ and $\mathsf{Transform}$. We describe the correctness properties required of $\mathsf{Estimate}$ and $\mathsf{Transform}$ below, and also describe their particular instantiations in \cite{FOCS:CMSZ21}.
\paragraph{Jordan's lemma and singular vector algorithms.} Let $\BProj{\mathsf{A}},\BProj{\mathsf{B}}$ be projectors on a Hilbert space $\mathcal{H}$ with corresponding binary projective measurements $\mathsf{A} = \BMeas{\BProj{\mathsf{A}}}$ and $\mathsf{B} = \BMeas{\BProj{\mathsf{B}}}$. Recall that Jordan's lemma~\cite{Jordan75} states that $\mathcal{H}$ can be decomposed as a direct sum $\mathcal{H} = \bigoplus \mathcal{S}_j$ of two-dimensional invariant subspaces $\mathcal{S}_j$, where in each $\mathcal{S}_j$, the projectors $\BProj{\mathsf{A}}$ and $\BProj{\mathsf{B}}$ act as rank-one projectors $\JorKetBraA{j}{1}$ and $\JorKetBraB{j}{1}$.\footnote{There will also be one-dimensional subspaces, which we ignore in this overview since they can be viewed as ``degenerate'' two-dimensional subspaces.} The vectors $\JorKetA{j}{1}$ and $\JorKetB{j}{1}$ are also left and right singular vectors of $\BProj{\mathsf{A}}\BProj{\mathsf{B}}$ with singular value $\sqrt{p_j}$, where $p_j \coloneqq \abs{\JorBraKetAB{j}{1}}^2$. This decomposition allows us to define on $\mathcal{H}$ the projective measurement $\mathsf{Jor} = (\Pi^{\mathsf{Jor}}_j)$ onto the Jordan subspaces $\mathcal{S}_j$ (i.e., $\image(\Pi^{\mathsf{Jor}}_j) = \mathcal{S}_j$). For an arbitrary state $\ket{\psi}$, we define the \textdef{Jordan spectrum} of $\ket{\psi}$ to be the distribution of $p_j$ induced by $\mathsf{Jor}$.
We will make use of procedures $\mathsf{Estimate}, \mathsf{Transform}$ satisfying the following properties.
\begin{itemize}
\item The Jordan subspaces $\mathcal{S}_j$ are invariant\footnote{We allow for decoherence, so we ask that every element of $\mathcal{S}_j$ is mapped to a \emph{mixed state} where every component is in $\mathcal{S}_j$.} under $\mathsf{Estimate}^{\mathsf{A},\mathsf{B}}$ and $\mathsf{Transform}^{\mathsf{A},\mathsf{B}}$. Equivalently, $\mathsf{Estimate}^{\mathsf{A},\mathsf{B}}$ and $\mathsf{Transform}^{\mathsf{A},\mathsf{B}}$ should \emph{commute} with $\mathsf{Jor}$. This property is important for arguing about the output behavior of $\mathsf{Estimate}$ and $\mathsf{Transform}$ on arbitrary states.
\item $\mathsf{Estimate}^{\mathsf{A},\mathsf{B}}$: on input $\ket{S_j} \in \mathcal{S}_j$, output $p \approx p_j$; the residual state remains in $\mathcal{S}_j$.
\item $\mathsf{Transform}^{\mathsf{A},\mathsf{B}}$ maps each $\JorKetA{j}{1}$ to $\JorKetB{j}{1}$. We have no requirements on any other state in $\mathcal{S}_j$ except that it remains in $\mathcal{S}_j$.
\end{itemize}
\cite{FOCS:CMSZ21} implement a version of $\mathsf{Estimate}^{\mathsf{A},\mathsf{B}}$ (following~\cite{MarriottW05}) with $\varepsilon$ accuracy by alternating $\mathsf{A}$ and $\mathsf{B}$ for $t = {\rm poly}(\lambda)/\varepsilon^2$ steps. The output is $p = d/(t-1)$ where $d$ is the number of occurrences of $b_i = b_{i+1}$ among the outcomes $b_1,b_2,\dots,b_t$. With probability $1-2^\lambda$, we have $\abs{p - p_j} \leq \varepsilon$. They (implicitly) implement $\mathsf{Transform}^{\mathsf{A},\mathsf{B}}$ by alternating measurements $\mathsf{A}$ and $\mathsf{B}$ back and forth until $\mathsf{B} \rightarrow 1$, with an expected running time of $O(1/p_j)$ on $\mathcal{S}_j$.
\paragraph{The~\cite{FOCS:CMSZ21} Extractor.} We now use the abstract procedures $(\mathsf{Estimate}, \mathsf{Transform})$ to describe (a slightly simplified version of) the~\cite{FOCS:CMSZ21} extractor. Let $\ket{+_R}_{\mathcal{R}}$ denote the uniform superposition over challenges on register $\mathcal{R}$ and let $\mathcal{H}$ denote the register containing the prover's state. Let $\mathsf{V}_r = \BMeas{\BProj{V, r}}$ denote a binary projective measurement on $\mathcal{H}$ that measures whether $P^*$ returns a valid response on $r$.
The extraction technique makes crucial use of two measurements: the first is $\mathsf{U} = \BMeas{\BProj{\mathsf{U}}}$, where $\BProj{\mathsf{U}} \coloneqq \mathbf{I}_{\mathcal{H}} \otimes \ketbra{+_R}_{\mathcal{R}}$ is the projective measurement of whether the challenge register $\mathcal{R}$ is \emph{uniform}. The second is $\mathsf{C} = \BMeas{\BProj{\mathsf{C}}}$, where $\BProj{\mathsf{C}} \coloneqq (\BProj{V,r_i})_{\mathcal{H}} \otimes \sum_{r \in R} \ketbra{r}_{\mathcal{R}}$ is the projective measurement that runs the prover on the challenge on $\mathcal{R}$ and \emph{checks} whether the prover wins. The extraction procedure is described in \cref{fig:tech-overview-cmsz} below.
\begin{mdframed}
\captionsetup{type=figure}
\captionof{figure}{The \cite{FOCS:CMSZ21} extractor with generic procedures $\mathsf{Estimate}, \mathsf{Transform}$}\label{fig:tech-overview-cmsz}
\begin{enumerate}
\item Generate a first verifier message $\mathsf{vk}$ and run $P^*(\mathsf{vk}) \rightarrow a$ to obtain a classical first prover message $a$ once and for all. Let $\ket{\psi}$ denote the state of $P^*$ after it returns $a$.
\item\label[step]{step:tech-overview-initial-estimate} Run $\mathsf{Estimate}^{\mathsf{U},\mathsf{C}}$ to accuracy $\gamma/4$ on $\ket{\psi}\ket{+_R}$, which outputs an estimate $p$ of the adversary's success probability and then discard $\mathcal{R}$;\footnote{We assume here that when we run $\mathsf{Estimate}^{\mathsf{U},\mathsf{C}}$ on a state $\ket{\phi} \in \image(\BProj{\mathsf{U}})$, the residual state is in $\image(\BProj{\mathsf{U}})$. Then we are guaranteed that $\mathcal{R}$ is unentangled from $\mathcal{H}$, which allows us to discard it. In our eventual construction, this assumption is enforced in \cref{thm:svdisc}.}~abort if $p < \gamma/2$ (this occurs with probability at most $1-\gamma/2$). Subtract $\gamma/4$ from $p$ so that $p$ represents a reasonable lower bound on the success probability. Set an error parameter $\varepsilon = \frac{\gamma^2}{2\lambda k}$ for the rest of the procedure and fix $N = \lambda k /p$.
\item We now want to generate $k$ accepting transcripts. For $i$ from $1$ to $N$:
\begin{enumerate}
\item Sample a uniformly random challenge $r_i$ and apply $\mathsf{V}_{r_i}$ to the current state $\ket{\psi_i}$.
\item If the output is $b_i = 1$, measure the response $z$. This is (computationally) undetectable by the protocol's collapsing property, so we ignore this step \textbf{for now}.
\item Let $E$ be a unitary such that applying $E$ to $\mathcal{H} \otimes \mathcal{W}$ (where $\mathcal{W}$ is an appropriate-size ancilla initialized to $\ket{0}_{\mathcal{W}}$) and then discarding $\mathcal{W}$ is equivalent to running $\mathsf{Estimate}^{\mathsf{U},\mathsf{C}}$ for $\lambda p/\varepsilon^2$ steps on $\mathcal{H} \otimes \mathcal{R}$ (where $\mathcal{R}$ is initialized to $\ket{+_R}_{\mathcal{R}}$) and then discarding $\mathcal{R}$.
We \emph{repair} the success probability by initializing $\mathcal{W} = \ket{0}_{\mathcal{W}}$ and then running $\mathsf{Transform}^{\mathsf{D},\mathsf{G}}$ on $\mathcal{H} \otimes \mathcal{W}$ where, roughly speaking, $\mathsf{D}$ is a projective measurement corresponding to the \emph{disturbance} caused by step (a), and $\mathsf{G}$ is a projective measurement that determines whether the adversary's success probability is \emph{good}, meaning at least $p-\varepsilon$. More precisely:
\begin{itemize}
\item $\mathsf{G} = \BMeas{\BProj{p,\varepsilon}}$ returns $1$ if, after applying $E$, the estimate is at least $p- \varepsilon$.\footnote{In our actual construction/proof, we replace this call to $\mathsf{Estimate}$ (and the additional call at the end of Step 3c) with a weaker primitive that \emph{only} computes the threshold instead of fully estimating $p$. This change makes it easier to instantiate the primitive.}
\item $\mathsf{D} = \BMeas{\BProj{r_i,b_i}}$ returns $1$ if $\mathcal{W} = \ket{0}_{\mathcal{W}}$ \emph{and} applying $\mathsf{V}_{r_i}$ returns $b_i$.
\end{itemize}
If $\mathsf{Transform}^{\mathsf{D},\mathsf{G}}$ has not terminated within $T$ calls to $\mathsf{D}$ and $\mathsf{G}$, abort (this occurs with probability at most $O(1/T)$). Otherwise, apply $E$, trace out $\mathcal{W}$, re-initialize $\mathcal{R}$ to $\ket{+_R}$ and then run $\mathsf{Estimate}^{\mathsf{U},\mathsf{C}}$ for $\lambda p/\varepsilon^2$ steps to obtain a new probability estimate $p'$. If $p' < p-2\varepsilon$, abort. Finally, discard $\mathcal{R}$ and re-define $p := p'$.
\end{enumerate}
\end{enumerate}
\end{mdframed}
\subsubsection{Guaranteed extraction, first attempt}
\label{sec:tech-overview-first-attempt}
The \cite{FOCS:CMSZ21} algorithm, interpreted in terms of the abstract procedures $\mathsf{Estimate}, \mathsf{Transform}$, will serve as our initial template for extraction. We now consider whether it can be modified to achieve \emph{guaranteed} extraction.
\paragraph{Syntactic Changes.}
The first issues with the \cite{FOCS:CMSZ21} extraction procedure are syntactic in nature. Namely, we want an extraction procedure that works for \emph{any} $P^*$, with no a priori lower bound $\gamma$ on the success probability of $P^*$. Of course, an extractor $\mathsf{Extract}^{P^*}$ that extracts with probability close to $1$ given an arbitrary $P^*$ is impossible to achieve (imagine a $P^*$ with negligible success probability), so the game is also changed as described in \cref{def:high-probability-extraction}. In terms of the \cite{FOCS:CMSZ21} template, the change is as follows:
\begin{itemize}
\item After obtaining $(a,\ket{\psi})$, measure $\mathsf{C}$ on $\ket{\psi} \ket{+_R}$ and terminate if the outcome is $0$.
\item Otherwise, the state is (re-normalized) $\BProj{\mathsf{C}} (\ket{\psi} \ket{+_R})$, and the goal is to extract with probability $1-{\rm negl}$.
\end{itemize}
\paragraph{Variable-Runtime Estimation.}
Since we are given no \emph{a priori} lower bound $\gamma$ on the success probability of $P^*$, there is no fixed additive precision $\varepsilon$ for which the initial $\mathsf{Estimate}$ in \cref{step:tech-overview-initial-estimate} guarantees successful extraction --- the initial state $\ket{\psi}\ket{+_R}$ could be concentrated on subspaces $\mathcal S_j$ such that $p_j \ll \varepsilon$, in which case the estimation procedure almost certainly returns $0$.
To remedy this issue, we define a \emph{variable-length} variant of $\mathsf{Estimate}^{\mathsf{A},\mathsf{B}}$ with the guarantee that for every $j$ and every state in $\mathcal S_j$, $\mathsf{Estimate}^{\mathsf{A},\mathsf{B}}$ returns $p_j$ to within constant (factor $2$) \emph{multiplicative} accuracy with probability $1-2^{-\lambda}$. With regard to instantiation, we note that the \cite{MarriottW05,FOCS:CMSZ21} implementation of $\mathsf{Estimate}^{\mathsf{A},\mathsf{B}}$ can be modified to be variable-length: simply continue alternating $\Pi_A, \Pi_B$ until sufficiently many ($d = {\rm poly}(\lambda)$) $b_i = b_{i+1}$ occur, so that the estimate $\frac d {t-1}$ (where $t$ is the number of measurements performed) is reasonably concentrated around its expectation.
Thus, we begin with the natural idea that \cref{step:tech-overview-initial-estimate} should be modified to use this variable-length $\mathsf{Estimate}$. We remark that variable-length $\mathsf{Estimate}$ is \emph{not} required in later steps: the output $p$ of \cref{step:tech-overview-initial-estimate} can be used to set the parameters $(\varepsilon, N)$ for the rest of the procedure.
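As a sanity check on these statistics, the following classical toy model (not part of the construction) simulates the variable-length stopping rule under the simplifying assumption that, within a single Jordan subspace with eigenvalue $p_j$, consecutive alternating-measurement outcomes agree independently with probability $p_j$; the function name and parameters are illustrative only.

```python
import random

def variable_length_estimate(p, d=5000, rng=None):
    """Classical toy model of the variable-length Estimate: keep
    "measuring" until d agreements b_i = b_{i+1} occur, then report
    d/(t-1), where t is the number of measurements performed.
    Assumes consecutive outcomes agree independently with prob. p."""
    rng = rng or random.Random(0)
    agreements, t, b_prev = 0, 1, 1
    while agreements < d:
        b = b_prev if rng.random() < p else 1 - b_prev  # agree w.p. p
        agreements += (b == b_prev)
        b_prev, t = b, t + 1
    return agreements / (t - 1)
```

In this toy model the expected number of measurements is $\approx d/p_j$ (matching the $\frac{1}{p_j}$ expected-runtime claim up to the ${\rm poly}(\lambda)$ factor $d$), and the returned estimate concentrates around $p_j$, well within the factor-$2$ multiplicative accuracy required.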
With this modification, our extractor \emph{never} aborts in \cref{step:tech-overview-initial-estimate}, but it also no longer runs in strict polynomial time. How do we analyze its runtime? First, one can compute that when run on a state in $\mathcal S_j$, the expected running time of this procedure is (up to factors of ${\rm poly}(\lambda)$) roughly $\frac 1 {p_j}$. This might seem concerning, because this expectation could be large (even superpolynomial) if $p_j$ is very small. However, what we care about is the runtime of $\mathsf{Estimate}^{\mathsf{U}, \mathsf{C}}$ on the (re-normalized) state $\BProj{\mathsf{C}} (\ket{\psi} \ket{+_R}).$ Writing $\ket{\psi} \ket{+_R} = \sum_j \alpha_j \ket{v_{j,1}}$, we see that $\BProj{\mathsf{C}} (\ket{\psi} \ket{+_R}) = \sum_j \alpha_j \sqrt{p_j} \ket{w_{j,1}}$.
To calculate the overall expected runtime, we use the fact that $\mathsf{Estimate}^{\mathsf{U},\mathsf{C}}$ commutes with the projective measurement $\mathsf{Jor}$ that outputs $j$ on each subspace $\mathcal{S}_j$. This implies that the expected runtime of $\mathsf{Estimate}$ on our state is the weighted linear combination of its expected runtime on the eigenstates $\ket{v_{j,1}}$, namely
\[ \frac 1 {\gamma^*} \sum_j |\alpha_j|^2 p_j \cdot \frac 1 {p_j} = \frac 1 {\gamma^*},
\]
where $\gamma^* = || \BProj{\mathsf{C}} (\ket{\psi} \ket{+_R}) ||^2$ is the probability that $\mathsf{C} \rightarrow 1$ in the initial execution.\footnote{One way to see this is to notice that applying $\mathsf{Jor}$ after running $\mathsf{Estimate}^{\mathsf{U},\mathsf{C}}$ clearly cannot affect the runtime of $\mathsf{Estimate}^{\mathsf{U},\mathsf{C}}$. Then $\mathsf{Jor}$ can be commuted to occur before $\mathsf{Estimate}^{\mathsf{U},\mathsf{C}}$.} Thus, the overall expected runtime equals $\gamma^* \cdot \frac 1 {\gamma^*} = 1$, so Step 2 of the procedure is efficient!
\paragraph{Our first attempt.}
With the changes above, Step 2 of the extraction procedure now has zero error and runs in expected polynomial time ($\mathsf{EQPT}_{m}$).
The other source of non-negligible extraction error from \cite{FOCS:CMSZ21} is in the cutoff $T$ imposed on $\mathsf{Transform}^{\mathsf{D},\mathsf{G}}$. By removing this cutoff, we obtain a procedure that is somewhat closer to the goal of guaranteed extraction in expected polynomial time, described in~\cref{fig:tech-overview-first-attempt} below.
\begin{mdframed}
\captionsetup{type=figure}
\captionof{figure}{Guaranteed extraction (Attempt 1)}\label{fig:tech-overview-first-attempt}
\begin{enumerate}
\item After obtaining $(a, \ket{\psi})$, apply $\mathsf{C}$ to $\ket{\psi} \ket{+_R}$ and terminate if the measurement returns $0$. Otherwise, let $\ket{\phi}$ denote the resulting state on $\mathcal{H} \otimes \mathcal{R}$.
\item Run the variable-length $\mathsf{Estimate}^{\mathsf{U}, \mathsf{C}}$ on $\ket{\phi}$, obtaining output $p$. Divide $p$ by $2$ to obtain a lower bound on the resulting success probability. Set $\varepsilon = \frac {p^2} {2\lambda k }$ and $N = \lambda k /p$.
\item Run Step 3 of the original \cite{FOCS:CMSZ21} extractor as in \cref{fig:tech-overview-cmsz}, with the parameters $p, \varepsilon, N$. Instead of imposing a time limit $T$, the procedure $\mathsf{Transform}^{\mathsf{D}, \mathsf{G}}$ is allowed to run until completion\footnote{To avoid a computation that runs for infinite time, one should at the very least impose an exponential $2^\lambda$ time cutoff, which can be shown to incur only a $2^{-\lambda}$ correctness error.} $( \mathsf{G}\rightarrow 1)$.
\end{enumerate}
\end{mdframed}
\subsubsection{Problem: Step 3 is not expected poly-time.}
\label{sec:tech-overview-not-eqpt}
Unfortunately, the ``first attempt'' above does \emph{not} satisfy \cref{def:high-probability-extraction}. The issue lies in its runtime: we argued before that over the randomness of $\mathsf{Extract}^{P^*}$, Step 2 runs in expected polynomial time. However, we did not analyze Step 3, which is the main loop for generating transcripts. Here is a rough estimate for its runtime.
Recall that Step 3 loops the following steps for each $i = 1,\dots,\lambda k/p$:
\begin{itemize}
\item Run the prover $P^*$ on a random challenge $r_i$. This takes a fixed ${\rm poly}(\lambda)$ amount of time.
\item Then, \emph{regardless} of whether $P^*$ was successful, the residual prover state $\ket{\phi_i}$ must be \emph{repaired} to have success probability $\approx p$.
\end{itemize}
It turns out that as currently written, the expected runtime of the repair step is (up to ${\rm poly}(\lambda)$ factors) equal to the runtime of a fixed-length $\mathsf{Estimate}$ procedure with precision $\approx p^2$ (this ensures that after $1/p$ repair steps, the total success probability loss must be at most $p$). Moreover, this runtime is intuitively necessary for \emph{any} possible repair procedure, since repairing the success probability should be at least as hard as computing whether it is above the acceptable threshold.
In our setting, the \cite{MarriottW05,FOCS:CMSZ21} estimation procedure requires $1/p^3$ time to obtain a $p^2$-accurate estimate in the relevant parameter regime.\footnote{As written in \cite{FOCS:CMSZ21}, the estimation procedure runs in $1/p^4$ time, but a factor of $p$ can be saved because (roughly speaking) the estimate only needs to achieve $p^2$ accuracy when $p_j$ is close to $p$.} Since Step 3 performs this loop $\frac{1}{p}$ times (omitting the $\lambda k$ factor), the total runtime will be at least $\frac 1 {p^4}$. This is too long for the ``conditioning'' of Step 1 to save us: if the initial state at the beginning of Step 1 is $\ket{\psi} \ket{+_R} \in \mathcal S_j$, the expected runtime of Step 3 is $p_j \cdot \frac 1 {p_j^4} = \frac 1 {p_j^3}$, which can be arbitrarily large (when $p_j$ is small).
\paragraph{Idea: Use a faster $\mathsf{Estimate}$?} Given how we have phrased the extractor in terms of abstract $(\mathsf{Estimate}, \mathsf{Transform})$ algorithms, a natural idea for improving the runtime is to use an implementation of the abstract $\mathsf{Estimate}$ algorithm that is faster than the \cite{MarriottW05}-based one used in \cite{FOCS:CMSZ21}. Indeed, if we use the procedure described in \cite{NagajWZ11} to implement $\mathsf{Estimate}^{\mathsf{U},\mathsf{C}}$, we obtain a quadratic speedup: the runtime of $\mathsf{Estimate}^{\mathsf{U},\mathsf{C}}$ in Step 3c can be improved from $\frac{1}{p^3}$ to $\frac{1}{p^{3/2}}$.
This speedup will be relevant to our eventual solution, but it does not resolve the problem. The back-of-the-envelope calculation now just says that the expected runtime of Step 3 on a state $\ket{\psi}\ket{+_R} \in \mathcal{S}_j$ is $p_j \cdot p_j^{-5/2} = p_j^{-3/2}$, which is still unbounded.
\paragraph{So are we doomed?} Indeed, this runtime calculation seems problematic for the \emph{entire} \cite{FOCS:CMSZ21} template that we abstracted, by the following reasoning:
\begin{itemize}
\item On a state with initial estimate $p$, each choice of $r_i$ will only produce an accepting transcript with probability $\approx p$, so we must try $\approx k/p$ choices of i.i.d. $r_i$ to obtain $k$ accepting transcripts.
\item Therefore, as long as the repair step takes \textbf{super-constant time} (as a function of $1/p$), the overall extraction procedure will take too long.
\end{itemize}
This seems to indicate a dead end for extractors that follow the standard rewinding template of repeatedly running $P^*$ on random $r$ to obtain accepting transcripts.
\subsubsection{Solution: A New Rewinding Template}
\label{sec:tech-overview-new-template}
We solve our unbounded runtime issue by abandoning ``classical'' rewinding, in the following sense: unlike prior extraction procedures \cite{EC:Unruh12,FOCS:CMSZ21}, our extractor will \emph{not} follow the standard approach of obtaining transcripts by feeding uniformly random $r_i$ to $P^*$. Instead, we will \emph{generate} accepting transcripts $(r_i, z_i)$ via an inherently quantum procedure so that \emph{every} generated transcript is accepting (as opposed to only a $p$ fraction of them).
We accomplish this by using the procedure $\mathsf{Transform}$, which was previously only used for state repair, to \emph{generate} the transcripts. Consider a prover state $\ket{\psi_i}$ at the beginning of Step 3. By definition, $\ket{\psi_i} \ket{+_R} \in \image(\BProj{\mathsf{U}})$, so applying $\mathsf{Transform}^{\mathsf{U}, \mathsf{C}}$ to $\ket{\psi_i} \ket{+_R}$ produces a state in $\image(\BProj{\mathsf{C}})$. Now if the challenge register $\mathcal{R}$ is \emph{measured} (obtaining a string $r_i$), the residual prover state is \emph{guaranteed} to produce an accepting response on $r_i$!
Moreover, the extraction procedure can afford to run $\mathsf{Transform}^{\mathsf{U}, \mathsf{C}}$: since $\ket{\psi_i} \ket{+_R}$ has been constructed to lie almost entirely in subspaces $\mathcal S_j$ such that $p_j < p - \varepsilon$, the expected running time of $\mathsf{Transform}^{\mathsf{U}, \mathsf{C}}$ can be shown to be roughly\footnote{For technical reasons, we cut off $\mathsf{Transform}$ after an exponential number of steps so that the component of $\ket{\psi_i} \ket{+_R}$ lying in ``bad'' $\mathcal S_j$ (i.e., where $p_j$ is tiny) does not ruin the expected running time.} $\frac 1 p$.
This gives us a potential \emph{new} template for extraction: we modify the main loop (Step 3) as in \cref{fig:tech-overview-new-template}.
\begin{mdframed}
\captionsetup{type=figure}
\captionof{figure}{Our new extraction template}\label{fig:tech-overview-new-template}
\begin{enumerate}
\item After obtaining $(a, \ket{\psi})$, apply $\mathsf{C}$ to $\ket{\psi} \ket{+_R}$ and terminate if the measurement returns $0$. Otherwise, let $\ket{\phi}$ denote the resulting state on $\mathcal{H} \otimes \mathcal{R}$.
\item Run the variable-length $\mathsf{Estimate}^{\mathsf{U}, \mathsf{C}}$ on $\ket{\phi}$, obtaining output $p$. Divide $p$ by $2$ to obtain a lower bound on the resulting success probability. Set $\varepsilon = \frac {p} {4 k }$.
\item For $i$ from $1$ to \textcolor{red}{$k$}:
\begin{enumerate}
\item Given current prover state $\ket{\psi_i}$, apply $\mathsf{Transform}^{\mathsf{U},\mathsf{C}}$ to $\ket{\psi_i} \ket{+_R}$. Call the resulting state $\ket{\phi_{\mathsf{C}}}$.
\item Obtain a \emph{guaranteed accepting} transcript $(r_i, z_i)$ by measuring the $\mathcal{R}$ register of $\ket{\phi_{\mathsf{C}}}$ and then running $P^*$ on $r_i$. As before, measuring $z_i$ is computationally undetectable.
\item Run the Repair Step (3c) as in \cref{fig:tech-overview-cmsz} by calling $\mathsf{Transform}^{\mathsf{D},\mathsf{G}}$ and re-estimating $p$.
\end{enumerate}
\end{enumerate}
\end{mdframed}
We emphasize two crucial efficiency gains from this new extraction template:
\begin{itemize}
\item As already mentioned, the main loop now has $k$ steps instead of $k/p$, since each transcript is now guaranteed to be accepting.
\item Since only $k$ repair operations are now required, the \emph{error} parameter $\varepsilon$ for $\Pi_{p, \varepsilon}$ can be set to $\approx p$ instead of $\approx p^2$.
\end{itemize}
\paragraph{Correctness Analysis.}
We remark that even the correctness of this new extraction procedure is unclear. In the case of $k$-special sound protocols, we need the extraction procedure to produce $k$ accepting transcripts with distinct $r_i$; previously, this was guaranteed because each $r_i$ was sampled i.i.d., so (w.h.p.) no pair of them coincide. Here, $r_i$ is \emph{not} uniformly random --- it has been sampled by measuring the $\mathcal{R}$ register of some state in $\BProj{\mathsf{C}}$.
In order to analyze the behavior of this extractor, it is important to understand the state $\ket{\phi_{\mathsf{C}}}$ obtained after applying $\mathsf{Transform}^{\mathsf{U}, \mathsf{C}}$. Of course, we have an explicit representation $\sum_j \alpha_j \sqrt{p_j} \ket{w_{j,1}}$ for it, but it is not clear a priori how this helps.
To prove correctness, we analyze the state $\ket{\phi_{\mathsf{C}}}$ using what we call the Pseudoinverse Lemma (\cref{lemma:pseudoinverse}), which states that $\ket{\phi_{\mathsf{C}}}$ can be viewed as a \emph{conditional} state obtained by starting with a state $\ket{\phi_{\mathsf{U}}} = \ket{\psi_{\mathsf{U}}} \ket{+_R} \in \image(\BProj{\mathsf{U}})$ and \emph{post-selecting} (i.e., conditioning) on a $\mathsf{C}$-measurement of $\ket{\phi_{\mathsf{U}}}$ outputting $1$. Crucially, this pseudoinverse state has a precisely characterized $(\mathsf{U}, \mathsf{C})$-Jordan spectrum related to the Jordan spectrum of $\ket{\phi_{\mathsf{C}}}$. We emphasize that the state $\ket{\phi_{\mathsf{U}}}$ does not actually exist in the extraction procedure; it is just a tool for the analysis.
Using the pseudoinverse lemma, one can show that the probability a $\mathsf{C}$-measurement of $\ket{\phi_{\mathsf{U}}}$ returns $1$ is $\approx p$, which implies that the joint distribution of $(r_1, \hdots, r_k)$ comes from a ``random enough'' distribution that we formalize as ``admissible'' (\cref{def:admissible-dist}). This is shown by the following reasoning: since measuring $\mathcal{R}$ commutes with $\mathsf{C}$, it is as if we have an initially uniformly random $r_i$ (obtained from measuring $\mathcal{R}$ of $\ket{\phi_{\mathsf{U}}}$) that is ``output'' with probability $\approx p$ (when $\mathsf{C}$ returns $1$). This is sufficient to argue about correctness properties of the extractor.
\paragraph{Runtime Analysis Idea.} Analyzing the runtime of $\mathsf{Transform}^{\mathsf{D}, \mathsf{G}}$ also turns out to be significantly more subtle than in the \cite{FOCS:CMSZ21} setting. The basic idea is to show that (within a reasonable amount of time) $\mathsf{Transform}^{\mathsf{D}, \mathsf{G}}$ \emph{returns} a state on $\mathcal{H} \otimes \mathcal{W}$ to $\image(\Pi_{p, \varepsilon})$ after it was ``initially'' disturbed by the binary measurement $\mathsf{D}$. In \cite{FOCS:CMSZ21}, this is literally true: the disturbance is measuring $(\Pi_{V, r}, \mathbf{I} - \Pi_{V, r})$ for randomly sampled $r$ on the prover state $\ket{\psi_i}$. One can then show that an expected constant number of $(\mathsf{D}, \mathsf{G})$-measurements returns the state to $\mathsf{G}$ by appealing to the statistics of the $(\mathsf{D}, \mathsf{G})$ Marriott-Watrous distribution.
However, in our setting, the ``disturbance'' is quite different: the amplified state $\ket{\phi_{\mathsf{C}}} \in \image(\BProj{\mathsf{C}})$ consists of a prover state \emph{entangled with} the challenge register $\mathcal{R}$ in a way that is \emph{guaranteed} to produce an accepting transcript. $\ket{\phi_{\mathsf{C}}}$ is then disturbed by measuring its $\mathcal{R}$ register, and the measurement $\mathsf{D}$ being applied in $\mathsf{Transform}^{\mathsf{D}, \mathsf{G}}$ depends on this $\mathcal{R}$ measurement outcome. Since the $\mathcal{R}$ measurement can disturb $\ket{\phi_{\mathsf{C}}}$ by a large amount (unlike $\mathsf{D}$), it is not a priori clear why $\mathsf{Transform}^{\mathsf{D}, \mathsf{G}}$ should return the state to $\image(\Pi_{p, \varepsilon})$.
At a high level, we show how to bound the runtime of this new procedure by appealing to the pseudoinverse state $\ket{\phi_{\mathsf{U}}}$, again! In more detail, using the pseudoinverse lemma, the state on $\mathcal{H} \otimes \mathcal{W}$ obtained after measuring $\mathcal{R}$ on $\ket{\phi_{\mathsf{C}}}$ (along with initializing $\mathcal{W}$ to $\ket{0}$) can be alternatively thought of as the state obtained by:
\begin{itemize}
\item Sampling $r_i$ proportional to the probability $\zeta_{r_i}$ of $\ket{\psi_{\mathsf{U}}}$ successfully answering $r_i$, and
\item Outputting (normalized) $\Pi_{r_i}(\ket{\psi_{\mathsf{U}}} \otimes \ket{0}_{\mathcal{W}})$, where $\Pi_{r_i} := \Pi_{r_i, 1}$.
\end{itemize}
This conditioning argument allows us to appeal to the same ``return to $\Pi_{p, \varepsilon}$'' principle to show that $\mathsf{Transform}^{\mathsf{D}, \mathsf{G}}$ indeed ``returns'' the state to $\image(\Pi_{p, \varepsilon})$, as if it had ``started out'' as the state $\ket{\phi_{\mathsf{U}}}\ket{0}_{\mathcal{W}}$, which only exists in the analysis!
\subsubsection{Problem: Step 3 is \emph{still} not expected poly-time.}
\label{sec:tech-overview-no-speedup}
The premise of our new extraction template was to speed up the extraction process by getting rid of excess work from running state repair in situations where no accepting transcript was obtained. Previously, we computed the expected runtime to perform $N \approx k/p$ repair steps in~\cref{fig:tech-overview-cmsz} (conditioned on a successful initial execution and initial estimate $p$) to be $pN/\varepsilon^2 \approx 1/p^4$, since the runtime of each repair step was equivalent (up to a constant factor) to the runtime of $\mathsf{G}$, which was $p/\varepsilon^2$, and $\varepsilon \approx p^2$. As noted above, with our new template we now only have to perform $N = k$ repair steps, and the error parameter $\varepsilon$ can now be $\approx p$. With these improvements alone, one might hope to perform $N$ repair steps in $pN/\varepsilon^2 = p \cdot k \cdot (1/p^2) \approx 1/p$ time. This would result in expected polynomial runtime for the overall extractor when factoring in the conditioning.
Perhaps surprisingly, the above reasoning is incorrect! This new extraction procedure is \emph{still} not expected QPT: the expected runtime of $N$ repair steps will be $\approx \frac 1 {p^{2}}$, not $\frac 1 {p}$.
Why does this happen? It turns out that in this new extraction template, each repair step (which previously made expected $O(1)$ calls to $\mathsf{G}$) must now make an expected $O(1/p)$ calls to $\mathsf{G}$, cancelling out the factor-$1/p$ savings in $N$ obtained by using $\mathsf{Transform}^{\mathsf{U}, \mathsf{C}}$ to generate transcripts.
Indeed, the pseudoinverse-based runtime analysis above for $\mathsf{Transform}^{\mathsf{D}, \mathsf{G}}$ implies that each repair step must now make
\[ \frac 1 {\zeta_R} \sum_r \zeta_r \cdot \frac 1 {\zeta_r} = \frac 1 {\zeta_R} \approx 1/p
\]
calls to $\mathsf{G}$ (where $\zeta_R = \sum_r \zeta_r \approx p$ is the normalization factor for the $r_i$-distribution). This results in an overall expected running time of $\frac 1 p$ calls to $\mathsf{G}$ if $p$ was initially measured. Essentially, this is saying that while obtaining an accepting transcript $(r_i, z_i)$ causes \emph{limited enough} disturbance that repair can work, it causes more disturbance than a binary measurement, resulting in a factor of $1/p$ increase in the repair time.
\subsubsection{Solution: Use faster \texorpdfstring{$\mathsf{Estimate}$}{Estimate} and \texorpdfstring{$\mathsf{Transform}$}{Transform}} \label{sec:tech-overview-speedup}
Despite the less-than-expected speedup observed in \cref{sec:tech-overview-no-speedup}, it turns out that we nevertheless made significant progress. The reason is that the bottleneck to obtaining a faster extraction procedure is now in the running times of $\mathsf{Estimate}$ and $\mathsf{Transform}$, so we can hope to obtain an expected polynomial time procedure by using faster algorithms for $\mathsf{Estimate}^{\mathsf{U}, \mathsf{C}}$ and $\mathsf{Transform}^{\mathsf{D}, \mathsf{G}}$.
As discussed above, speeding up the fixed-length $\mathsf{Estimate}^{\mathsf{U}, \mathsf{C}}$ in $\mathsf{G}$ is relatively straightforward by appealing to \allowbreak \cite{NagajWZ11};\footnote{For technical reasons, we use a different algorithm due to \cite{STOC:GSLW19}, but a variant of \cite{NagajWZ11} would also suffice.} this results in an expected running time of $\frac 1 {\sqrt {p}}$ for $\mathsf{G}$.
However, implementing a \emph{fast} version of $\mathsf{Transform}^{\mathsf{D},\mathsf{G}}$ achieving $1-{\rm negl}(\lambda)$ correctness (which is required for our extraction procedure to have negligible error) is less straightforward. Some implementations in the literature (e.g., \cite{STOC:GSLW19}) achieve this correctness guarantee, but only given a known (inverse polynomial) lower bound on the eigenvalue $q_j$ (associated with $(\mathsf{D}, \mathsf{G})$-Jordan subspace $\mathcal T_j$). We have no such lower bound for our state $\frac 1 {\zeta_r} \Pi_r (\ket{\phi_{\mathsf{U}}} \ket{0}_{\mathcal{W}})$. Our resolution is to first apply a variable-length fast phase estimation algorithm (implemented by repeatedly running~\cite{NagajWZ11} to increasing precision, or singular value discrimination \cite{STOC:GSLW19} with decreasing thresholds, until we obtain a multiplicative estimate of the phase) and then run a fixed-length fast $\mathsf{Transform}^{\mathsf{D},\mathsf{G}}$ using the estimated phase to lower bound the eigenvalue. The fixed-length fast $\mathsf{Transform}^{\mathsf{D},\mathsf{G}}$ can be done using \cite{STOC:GSLW19}; it is also possible to use a more elementary algorithm combining fast amplitude amplification \cite{BHMT02} with ideas from \cite{STOC:Watrous06} for achieving $1-{\rm negl}(\lambda)$ correctness.
To summarize, we obtain a final $1/p$ speedup by combining a $1/\sqrt{p}$ speedup from using a faster $\mathsf{Estimate}^{\mathsf{U},\mathsf{C}}$ with a $1/\sqrt{p}$ speedup from using a faster $\mathsf{Transform}^{\mathsf{D},\mathsf{G}}$. The fact that the latter speedup is actually realized turns out to be subtle to argue.
\subsubsection{Last Problem: Measuring $z$ ruins the runtime guarantee}
\label{subsubsec:gk-issue}
Unfortunately, we are \emph{still} not done! There is one subtle issue with our extractor that we have ignored so far: our runtime analysis was only valid ignoring the effect of measuring the prover response $z$. Since all transcripts after running $\mathsf{Transform}^{\mathsf{U}, \mathsf{C}}$ are accepting by construction, the collapsing property of the protocol implies that measuring $z$ is computationally undetectable, so one might assume that the runtime analysis extends immediately.
However, the \emph{expected running time} of an algorithm is not an efficiently testable property of the input state. This is not just an issue with our proof strategy: the version of the above extractor where $z$ is measured does not run in expected polynomial time.
In a nutshell, the issue is that a computationally undetectable measurement can still cause a state's eigenvalues (either $\{p_j\}$, in $\mathsf{Jor}^{\mathsf{U}, \mathsf{C}}$, or $\{q_j\}$, in $\mathsf{Jor}^{\mathsf{D}, \mathsf{G}}$) to change by a negligible but nonzero amount, affecting the subsequent runtime of $\mathsf{Transform}^{\mathsf{D},\mathsf{G}}$. This negligible change can have an enormous effect on the expected runtime of the extractor, because if the runtime of a procedure is inversely proportional to the disturbed eigenvalue $\tilde p = p-{\rm negl}$, an overall expected runtime expression can now contain terms of the form $\frac p {p - {\rm negl}}$, which can be unbounded when $p$ is also negligible. Interestingly, such issues have long been known to exist in the \emph{classical} setting: these $\frac p {p-{\rm negl}}$ terms are the major technical difficulty in obtaining a classical simulator for the \cite{JC:GolKah96} protocol. This classical analogy inspires our resolution.
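A toy numeric illustration of the $\frac{p}{p - {\rm negl}}$ blow-up (the shift $2^{-80}$ and the sample values of $p$ are arbitrary): the same negligible eigenvalue shift is harmless when $p$ is noticeable but dominates when $p$ itself is negligible.

```python
# Contribution p * (1/(p - delta)) to the expected runtime, if the
# state is reached with probability p and the disturbed repair step
# costs 1/(p - delta) for a negligible shift delta.
def conditional_blowup(p, delta=2.0 ** -80):
    return p / (p - delta)

ratios = [conditional_blowup(p)
          for p in (1e-2,            # noticeable p: ratio ~ 1
                    2.0 ** -79,      # p = 2*delta: ratio = 2
                    2.0 ** -80 * 1.01)]  # p barely above delta: ~101
```

As $p$ approaches the negligible shift $\delta$ from above, the ratio grows without bound, which is exactly why the expected-runtime analysis breaks if $z$ is measured before the repair time is fixed.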
\paragraph{Solution: Estimate repair time before measuring $z$.}
\label{subsubsec:gk-fix}
We modify our extractor so that in each loop iteration, all procedures occurring after the $z$-measurement have a \emph{pre-determined} runtime. Previously, after $z$ was measured, we ran a fast variable-length $\mathsf{Transform}$ by running the variable-length $\mathsf{Estimate}^{\mathsf{D}, \mathsf{G}}$ to determine a time bound $t$, and then running a $t$-time $\mathsf{Transform}^{\mathsf{D}, \mathsf{G}}$. Instead of this, we will run $\mathsf{Estimate}^{\mathsf{D}, \mathsf{G}}$ \emph{before} $z$ is measured. This allows us to compute a runtime bound for $\mathsf{Transform}^{\mathsf{D}, \mathsf{G}}$ before the $z$ measurement disturbs the state, preserving the expected running time of the entire procedure. This results in the final extraction procedure described in \cref{fig:final-extractor-intro} below.
\begin{mdframed}
\captionsetup{type=figure}
\captionof{figure}{Our final extraction procedure}\label{fig:final-extractor-intro}
\begin{enumerate}
\item After obtaining $(a, \ket{\psi})$, apply $\mathsf{C}$ to $\ket{\psi} \ket{+_R}$ and terminate if the measurement returns $0$. Otherwise, let $\ket{\phi}$ denote the resulting state on $\mathcal{H} \otimes \mathcal{R}$.
\item Run the variable-length $\mathsf{Estimate}^{\mathsf{U}, \mathsf{C}}$ on $\ket{\phi}$, obtaining output $p$. Divide $p$ by $2$ to obtain a lower bound on the resulting success probability. Set $\varepsilon = \frac {p} {4 k }$ and $N = k$.
\item For $i$ from $1$ to $N$:
\begin{enumerate}
\item Given prover state $\ket{\psi_i}$, apply $\mathsf{Transform}^{\mathsf{U},\mathsf{C}}$ to $\ket{\psi_i} \ket{+_R}$. Call the resulting state $\ket{\phi_{\mathsf{C}}}$.
\item Measure (and discard) the $\mathcal{R}$ register of $\ket{\phi_{\mathsf{C}}}$ to obtain a classical challenge $r_i$.
\item Initialize $\mathcal{W}$ to $\ket{0}_{\mathcal{W}}$ and call the variable-length $\mathsf{Estimate}^{\mathsf{D}, \mathsf{G}}$, which outputs a value $q$. We require that the output state is in the image of $\Pi_{r_i}$.
\item Measure the response $z_i$.
\item We repair the success probability by running $\mathsf{Transform}^{\mathsf{D},\mathsf{G}}$ on $\mathcal{H} \otimes \mathcal{W}$ for $\frac{\lambda}{\sqrt q}$ oracle steps. If the resulting state is not in the image of $\Pi_{p, \varepsilon}$, abort.
Trace out $\mathcal{W}$ and run $\mathsf{Estimate}^{\mathsf{U},\mathsf{C}}$ for $\lambda \sqrt{p}/\varepsilon$ steps to obtain a new probability estimate $p'$. If $p' < p- 2\varepsilon $, abort. Finally, discard $\mathcal{R}$ and re-define $p := p'$.
\end{enumerate}
\end{enumerate}
\end{mdframed}
By making this change, we incur an additional \emph{correctness} error for the extractor, because the collapsing measurement may decrease the probability that $\mathsf{Transform}^{\mathsf{D}, \mathsf{G}}$ successfully maps the state to $\Pi_{p, \varepsilon}$. However, this error is negligible because this correctness property is efficiently checkable (unlike the expected runtime). Thus, this procedure achieves both expected polynomial runtime\footnote{It remains to be argued that measuring $z_i$ does not affect the running time of \emph{subsequent} variable-runtime steps. This turns out to hold because the runtime of future loop iterations can be guaranteed by the correctness properties of the re-estimation step, which hold for an \emph{arbitrary} re-estimation input state.} and the desired correctness guarantees.
\subsubsection{Putting everything together} To summarize, we gave a new extraction template along with a particular instantiation that achieves expected polynomial runtime, by leveraging four different algorithmic improvements:
\begin{enumerate}[noitemsep]
\item By generating accepting transcripts with $\mathsf{Transform}^{\mathsf{U}, \mathsf{C}}$, we now only have to generate $k$ transcripts and repair $k$ prover states (instead of $k/p$).
\item (1) allows us to relax the error parameter $\varepsilon$ by a factor of $1/p$ (speeding up $\mathsf{G}$).
\item Using a fast algorithm for $\mathsf{Estimate}$ from the literature \cite{NagajWZ11,STOC:GSLW19} saves a factor of $1/\sqrt{p}$ runtime.
\item Using a new fast, variable-runtime algorithm for $\mathsf{Transform}$ saves another factor of $1/\sqrt{p}$.
\end{enumerate}
Finally, we implement the variable-length $\mathsf{Transform}$ in two phases (variable-length phase estimation followed by fixed-length $\mathsf{Transform}$) and interleave the measurement of the response $z$ between them, so that this $z$-measurement has no effect on the runtime.
We remark that the overall analysis of our extractor is rather involved (as we have omitted additional details in this overview); we refer the reader to \cref{sec:high-probability-extractor} for a full analysis.
\subsection{Post-Quantum ZK for \cite{JC:GolKah96}}
\label{sec:tech-overview-pqzk-gk}
In this section we give an overview of our proof that the Goldreich--Kahan (GK) protocol is post-quantum zero-knowledge (\cref{thm:gk}). Our simulator makes use of some of the techniques described in \cref{sec:tech-overview-hpe}, but the simulation strategy is quite different from those of our other results. In particular, our simulator does \emph{not} make use of state-preserving extraction.
We first recall the Goldreich--Kahan construction of a constant-round zero-knowledge proof system for $\mathsf{NP}$. Let $(P_{\Sigma},V_{\Sigma})$ be a $\Sigma$-protocol for $\mathsf{NP}$ satisfying special honest verifier zero knowledge (SHVZK)\footnote{Recall that the special honest-verifier zero-knowledge property guarantees the existence of a randomized simulation algorithm $\mathsf{SHVZK}.\mathsf{Sim}(r)$ that takes any $\Sigma$-protocol challenge $r \in R$ as input and outputs a tuple $(a,z)$ such that the distribution of $(a,r,z)$ is indistinguishable from the distribution of transcripts arising from an honest prover interaction on challenge $r$.} and let $\mathsf{Com}$ be a statistically hiding, computationally binding commitment. \cite{JC:GolKah96} construct a zero knowledge protocol $(P,V)$ as described in \cref{fig:gk-intro}.
\begin{figure}[!ht]
\centering
\procedure[]{}{
P (x, w) \> \> V(x) \\
\begin{subprocedure}
\pseudocode[mode=text]{Sample commitment key $\mathsf{ck}$.}
\end{subprocedure}
\> \sendmessageright*[3cm]{\mathsf{ck}} \>
\\
\> \sendmessageleft*[3cm]{\mathsf{com}} \> \begin{subprocedure}
\pseudocode[mode=text]{Sample $\Sigma$-protocol challenge $r \gets R$.\\
Commit to $r$: \\
$ \mathsf{com} = \mathsf{Com}(\mathsf{ck}, r; \omega)$ for $\omega \gets \{0,1\}^\lambda$.}
\end{subprocedure}\\
\begin{subprocedure}
\pseudocode[mode=text]{Compute $(a, \mathsf{st}) \gets P_\Sigma(x, w)$}
\end{subprocedure}
\> \sendmessageright*[3cm]{a} \> \\
\> \sendmessageleft*[3cm]{r, \omega} \> \begin{subprocedure}
\pseudocode[mode=text]{}
\end{subprocedure} \\
\begin{subprocedure}
\pseudocode[mode=text]{If $\mathsf{Com}(\mathsf{ck}, r; \omega) \neq \mathsf{com}$, abort. \\ Compute $z \gets P_\Sigma(\mathsf{st}, r)$
}
\end{subprocedure}
\> \sendmessageright*[3cm]{z} \> \begin{subprocedure}
\pseudocode[mode=text]{Accept if $(a, r, z)$ is an \\ accepting $\Sigma$-protocol transcript for $x$.}
\end{subprocedure}
}
\caption{The \cite{JC:GolKah96} Zero Knowledge Proof System for $\mathsf{NP}$.}
\label{fig:gk-intro}
\end{figure}
Soundness of the~\cite{JC:GolKah96} protocol holds against \emph{unbounded} $P^*$ and therefore extends immediately to the quantum setting.
\paragraph{Recap: the na\"ive classical simulator.}
As observed by~\cite{JC:GolKah96}, there is a natural \emph{na\"ive simulator} for their protocol that, for reasons analogous to~\cref{subsubsec:gk-issue}, turns out to have an unbounded expected runtime. To build intuition for our quantum simulation strategy, we will first recall the na\"ive classical simulator and show how to extend it to a na\"ive quantum simulator (while temporarily ignoring the runtime issue). Then, by using the technique described in~\cref{subsubsec:gk-fix}, we will improve this to a full $\mathsf{EQPT}_{c}$ quantum simulator.
The na\"ive classical simulator does the following:
\begin{enumerate}[noitemsep]
\item Call $V^*$ on a random commitment key $\mathsf{ck}$ to obtain a commitment $\mathsf{com}$.
\item\label[step]{gk-shvzk} Sample $(a',z') \gets \mathsf{SHVZK}.\mathsf{Sim}(0)$.
\item\label[step]{gk-first-run} Run $V^*$ on $a'$ to obtain a challenge-opening pair $(r',\omega')$. If $\omega'$ is not a valid opening of $\mathsf{com}$ to $r'$, terminate the simulation and output the current view of $V^*$.
\item\label[step]{gk-rewinding-step} \textbf{Rewinding step.} Sample $(a,z) \gets \mathsf{SHVZK}.\mathsf{Sim}(r')$ and run $V^*$ on $a$. If the output $(r,\omega)$ is not a valid message-opening pair, repeat this step from the beginning.
\item Respond with $z$ and output $V^*$'s view.
\end{enumerate}
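In code, the na\"ive simulator's control flow is the following Python sketch (all of \texttt{verifier}, \texttt{shvzk\_sim}, and \texttt{valid\_opening} are illustration-only stubs of ours, not objects defined in this paper):

```python
# Hypothetical stand-ins: shvzk_sim(r) returns a simulated (a, z) pair
# for challenge r, verifier(a) models V*'s response (r, omega), and
# valid_opening checks whether omega opens com to r.

def naive_simulator(verifier, shvzk_sim, valid_opening):
    # Step 2: sample a transcript for the dummy challenge 0.
    a_prime, _z_prime = shvzk_sim(0)
    # Step 3: run V* once; if it refuses to open, output the aborting view.
    r_prime, omega_prime = verifier(a_prime)
    if not valid_opening(r_prime, omega_prime):
        return (a_prime, (r_prime, omega_prime), None)
    # Step 4 (rewinding): resample with the *real* challenge r' until
    # V* opens again.  This loop is what makes the expected runtime
    # unbounded when V* opens only with tiny probability.
    while True:
        a, z = shvzk_sim(r_prime)
        r, omega = verifier(a)
        if valid_opening(r, omega):
            # Step 5: respond with z and output the view.
            return (a, (r, omega), z)
```

The runtime issue discussed below is visible in the final loop: its expected number of iterations is inversely proportional to the probability that the stub verifier opens.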
\noindent To see that this simulator outputs the correct view for $V^*$, consider two hybrid steps:
\begin{itemize}
\item First, switch to a hybrid simulator in which the sample $(a',z') \gets \mathsf{SHVZK}.\mathsf{Sim}(0)$ is instead computed by running the honest prover $P(x,w)$. The indistinguishability between this hybrid simulator and the na\"ive simulator follows from the fact that $a'$ sampled as $(a',z') \gets \mathsf{SHVZK}.\mathsf{Sim}(0)$ is computationally indistinguishable from an honestly generated first message.
\item Next, switch to a second hybrid simulator in which the honest prover is also used in the rewinding step to generate the $(a,z)$ samples rather than $\mathsf{SHVZK}.\mathsf{Sim}(r')$ (where $z$ is generated by running the honest prover on $(a,r')$). This is indistinguishable from the previous hybrid simulator by the $\mathsf{SHVZK}$ property. Moreover, by the computational binding of the commitment, the $r$ obtained in \cref{gk-rewinding-step} must equal $r'$ except with ${\rm negl}(\lambda)$ probability, and conditioned on $r = r'$, the second hybrid produces the same distribution as the honest interaction.
\end{itemize}
We now show how to extend this simulator to the quantum setting.
\paragraph{Our ``na\"ive'' quantum simulator.} Step 1 of the na\"ive classical simulator will be unchanged in the quantum setting, so we focus on devising quantum verisons of Steps 2,3, and 4 while assuming $\mathsf{ck},\mathsf{com}$ are fixed throughout.
Let $\ket{\psi}_{\mathcal{V}}$ be the state of the malicious verifier immediately after it sends $\mathsf{com}$. We let registers $\mathcal{A},\mathcal{Z}$ denote registers containing the messages $a,z$ in the $\Sigma$-protocol and let $\mathcal{M}$ be a register that will contain the random coins for $\mathsf{SHVZK}.\mathsf{Sim}$ (or the honest prover later on). Let $\ket{\mathsf{Sim}_r}$ for any $r \in R$ be the state
$\ket{\mathsf{Sim}_r}_{\mathcal{A},\mathcal{Z},\mathcal{M}} = \sum_{\mu} \alpha_{\mu} \ket{\mathsf{SHVZK}.\mathsf{Sim}(r;\mu),\mu}$ obtained by running $\mathsf{SHVZK}.\mathsf{Sim}$ on a uniform superposition of its random coins $\mu$.
We define binary projective measurements analogous to the $\mathsf{U}$ and $\mathsf{C}$ measurements used in our state-preserving extractor. However, instead of a single $\mathsf{U}$ measurement, we will have for each $r \in R$ a measurement $\mathsf{S}_r = \BMeas{\BProj{\mathsf{S},r}}$ on $\mathcal{V} \otimes \mathcal{A} \otimes \mathcal{Z} \otimes \mathcal{M}$ where $\BProj{\mathsf{S},r} \coloneqq \mathbf{I}_{\mathcal{V}} \otimes \ketbra{\mathsf{Sim}_r}_{\mathcal{A},\mathcal{Z},\mathcal{M}}$. The idea behind the $\mathsf{C} = \BMeas{\BProj{\mathsf{C}}}$ measurement is the same as before: it measures whether the malicious verifier $V^*$ returns a valid opening when run on the challenge $\mathcal{A}$. Note that $\mathsf{C}$ acts as identity on $\mathcal{Z},\mathcal{M}$.
The next steps of the quantum simulator are a direct analogue of the corresponding steps in the classical simulator:
\begin{enumerate}
\item[$2^*$.] Initialize $\mathcal{A} \otimes \mathcal{Z} \otimes \mathcal{M}$ to $\ket{\mathsf{Sim}_0}$.
\item[$3^*$.] Measure $\ket{\psi}_{\mathcal{V}} \otimes \ket{\mathsf{Sim}_0}_{\mathcal{A},\mathcal{Z},\mathcal{M}}$ with $\mathsf{C}$. If the outcome of $\mathsf{C}$ is $0$ (the opening is invalid), terminate the simulation at this step: measure $\mathcal{A}$ to obtain $a'$, compute and measure the verifier's response $(r',\omega')$ and return $(\mathsf{ck},\mathsf{com},a',(r',\omega'),z = \bot)$ along with $\mathcal{V}$. If the outcome of $\mathsf{C}$ is $1$, we will have to rewind. First, compute the verifier's response and measure it to obtain $r'$.
\end{enumerate}
When the opening is invalid ($\mathsf{C}$ outputs $0$), the $\mathsf{SHVZK}$ guarantee informally implies that these steps computationally simulate the view of $V^*$.
The hard case is when the opening is valid ($\mathsf{C}$ outputs $1$). At this stage of the simulation, the state on $\mathcal{V} \otimes \mathcal{A} \otimes \mathcal{Z} \otimes \mathcal{M}$ is $\BProj{\mathsf{C}}(\ket{\psi}_{\mathcal{V}}\ket{\mathsf{Sim}_0}_{\mathcal{A},\mathcal{Z},\mathcal{M}})$ (up to normalization). Intuitively, we want to ``swap'' $\ket{\mathsf{Sim}_0}_{\mathcal{A},\mathcal{Z},\mathcal{M}}$ for $\ket{\mathsf{Sim}_{r'}}_{\mathcal{A},\mathcal{Z},\mathcal{M}}$, but the application of $\BProj{\mathsf{C}}$ has entangled the $\mathcal{A}$ register with $\mathcal{V}$. We will therefore apply an operation to disentangle these registers, then swap $\ket{\mathsf{Sim}_0}$ for $\ket{\mathsf{Sim}_{r'}}$, and then ``undo'' the disentangling operation. We do this by defining a unitary $U$ that is the coherent implementation of the following variable-length computation on $\mathcal{V} \otimes \mathcal{A} \otimes \mathcal{Z} \otimes \mathcal{M} \otimes \mathcal{R}$: measure $\mathcal{R}$ to obtain $r$, and then run a variable-length $\mathsf{Transform}^{\mathsf{C},\mathsf{S}_{r}}$ on $\mathcal{V} \otimes \mathcal{A} \otimes \mathcal{Z} \otimes \mathcal{M}$.\footnote{The register $\mathcal{R}$ is required for the definition of $U$ and should not be confused with the sub-register of $\mathcal{V}$ that we measure to obtain the verifier's response.} Recall that implementing a variable-length computation coherently requires additional ancilla registers $\mathcal{W}, \mathcal{B}, \mathcal{Q}$ (see~\cref{sec:intro-creqpt}); we will suppress these registers for this overview, but we emphasize that they must all be initialized to $\ket{0}$.
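The geometry behind such a $\mathsf{Transform}$ can be visualized inside a single two-dimensional Jordan block of the projector pair, where products of reflections act as rotations. The following NumPy sketch is our own illustration in the style of fixed-length amplitude amplification (not the paper's variable-length $\mathsf{Transform}$): it steers a state from the image of a ``start'' projector into the image of a ``target'' projector in $O(1/\sqrt{p})$ reflections, where $p$ is the squared overlap between the two images.

```python
import numpy as np

# Toy 2-dimensional Jordan block.  The start projector is |s><s| and the
# target projector is |t><t|; their squared overlap p = sin(theta)^2 plays
# the role of the success probability governing the Transform runtime.
p = 0.01
theta = np.arcsin(np.sqrt(p))
t = np.array([0.0, 1.0])                      # image of the target projector
s = np.array([np.cos(theta), np.sin(theta)])  # image of the start projector

O = np.eye(2) - 2.0 * np.outer(t, t)   # reflection about the complement of t
D = 2.0 * np.outer(s, s) - np.eye(2)   # reflection about s
G = D @ O                              # one amplitude-amplification step

state = s.copy()
k = int(np.pi / (4.0 * theta))         # ~ 1/sqrt(p) steps suffice
for _ in range(k):
    state = G @ state

overlap = (t @ state) ** 2             # squared overlap with the target image
```

After $k \approx \pi/(4\theta)$ reflections the state lies almost entirely in the target image, mirroring the $1/\sqrt{p}$ step counts quoted for $\mathsf{Transform}$ above.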
The simulator then continues as follows.
\begin{enumerate}
\item[$4^*$.] Run the following steps:
\begin{enumerate}
\item Initialize $\mathcal{R}$ to $\ket{0}$ and apply $U$ to $\BProj{\mathsf{C}}(\ket{\psi}_{\mathcal{V}} \ket{\mathsf{Sim}_0}_{\mathcal{A},\mathcal{Z},\mathcal{M}}) \otimes \ket{0}_{\mathcal{R}}$. On $\mathcal{V} \otimes \mathcal{A} \otimes \mathcal{Z} \otimes \mathcal{M}$, this maps $\image(\BProj{\mathsf{C}})$ to $\image(\BProj{\mathsf{S},0})$, which yields a state of the form $\ket{\psi'}_{\mathcal{V}} \ket{\mathsf{Sim}_0}_{\mathcal{A},\mathcal{Z},\mathcal{M}}$. Importantly, this (carefully!) breaks the entanglement between $\mathcal{V}$ and $\mathcal{A}$.
\item Now the simulator can easily swap $\ket{\mathsf{Sim}_0}$ out for $\ket{\mathsf{Sim}_{r'}}$.
\item Finally, the simulator changes the $\mathcal{R}$ register from $\ket{0}_{\mathcal{R}}$ to $\ket{r'}_{\mathcal{R}}$, and then applies $U^\dagger$ and traces out $\mathcal{R}$. This step maps the state \emph{back} from $\image(\BProj{\mathsf{S},r'})$ to $\image(\BProj{\mathsf{C}})$.
\end{enumerate}
\item[$5^*$.] Measure $\mathcal{A}$ to obtain $a$, compute and measure the verifier's response $(r,\omega)$, measure $\mathcal{Z}$ to obtain $z$, and output $(\mathsf{ck},\mathsf{com},a,(r,\omega),z)$ along with $\mathcal{V}$.
\end{enumerate}
This simulator can be written as an $\mathsf{EQPT}_{c}$ computation, but we defer the details of this to our full proof (\cref{sec:gk}). For this overview, we will focus on proving the simulation guarantee.
Inspired by the classical proof, we prove that our simulator produces the correct view for $V^*$ by considering two hybrid simulators. To describe the hybrid simulators, we define states $\ket{P}$ and $\ket{P_{r}}$ for any $r \in R$ corresponding to responses of the honest prover:
\begin{itemize}
\item Let $\ket{P}_{\mathcal{A},\mathcal{Z},\mathcal{M}}$ be the state obtained by running the honest prover $P_{x,w}$ on a uniform superposition of random coins $\mu$ to generate a first message $P_{x,w}(\mu)$, i.e., $\ket{P}_{\mathcal{A},\mathcal{Z},\mathcal{M}} = \sum_{\mu} \ket{P_{x,w}(\mu)}_{\mathcal{A}}\ket{0}_{\mathcal{Z}} \ket{\mu}_{\mathcal{M}}$.
\item For any $r \in R$, let $\ket{P_{r}}$ be the same as $\ket{P}_{\mathcal{A},\mathcal{Z},\mathcal{M}}$, except that $\mathcal{Z}$ additionally contains the honest prover's response to $r$, i.e., $\ket{P_{r}}_{\mathcal{A},\mathcal{Z},\mathcal{M}} = \sum_{\mu} \ket{P_{x,w}(\mu)}_{\mathcal{A}}\ket{P_{x,w}(r;\mu)}_{\mathcal{Z}} \ket{\mu}_{\mathcal{M}}$.
\end{itemize}
\noindent The hybrid simulators are essentially quantum versions of the classical ones:
\begin{itemize}
\item The first hybrid simulator behaves the same as the original simulator except that everywhere the simulator uses $\ket{\mathsf{Sim}_0}$, the hybrid simulator uses $\ket{P}$ instead. The amplification in Step~$4^* (a)$ is now onto $\image(\ketbra{P})$ rather than $\image(\BProj{\mathsf{S},0})$. Moreover, in Step~$4^* (b)$, the simulator swaps $\ket{P}$ out for $\ket{\mathsf{Sim}_{r'}}$.
\item The second hybrid simulator is the same as the first, except every appearance of $\ket{\mathsf{Sim}_{r'}}$ is replaced with $\ket{P_{r'}}$. In particular, in Step~$4^* (b)$, the simulator swaps $\ket{P}$ out for $\ket{P_{r'}}$. The (inverse) amplification in Step~$4^* (c)$ is now from $\image(\ketbra{P_{r'}})$ onto $\image(\BProj{\mathsf{C}})$.
\end{itemize}
We remark that defining these hybrid simulators also requires extending the definition of the unitary $U$ that performs $\mathsf{Transform}$. In particular, $U$ must now support $\mathsf{Transform}^{\mathsf{C},\mathsf{P}}$ where $\mathsf{P} = \BMeas{\mathbf{I}_{\mathcal{V}} \otimes \ketbra{P}}$ and $\mathsf{Transform}^{\mathsf{C},\mathsf{P}_r}$ where $\mathsf{P}_r = \BMeas{\mathbf{I}_{\mathcal{V}} \otimes \ketbra{P_r}}$.
Proving indistinguishability of these hybrids requires some care. Intuitively, we want to invoke the SHVZK property to claim that $\ket{\mathsf{Sim}_0}$ and $\ket{P}$ are indistinguishable given just the reduced density matrices on the $\mathcal{A}$ register (for the first hybrid) and that $\ket{\mathsf{Sim}_{r'}}$ and $\ket{P_{r'}}$ are indistinguishable given just the reduced density matrices on $\mathcal{A} \otimes \mathcal{Z}$ (for the second hybrid). However, we have to ensure that the application of $\mathsf{Transform}$ --- which makes use of projections onto these states --- does not make this distinguishing task any easier.
We resolve this by proving a general lemma (\cref{lemma:proj-indist}) about quantum computational indistinguishability that may be of independent interest, which we briefly elaborate on here. Consider the states $\ket{\tau_b} \coloneqq \sum_{\mu} \ket{\mu}_{\mathcal{X}} \ket{D_b(\mu)}_{\mathcal{Y}}$ where $D_0,D_1$ are computationally indistinguishable classical distributions with randomness $\mu$. If we are only given access to $\mathcal{Y}$, then distinguishing $\ket{\tau_0}$ from $\ket{\tau_1}$ is clearly hard (since $\Tr_{\mathcal{X}}(\ketbra{\tau_b})$ is a random classical sample from $D_b$). \cref{lemma:proj-indist} strengthens this claim: it states that guessing $b$ remains hard even given an oracle implementing the corresponding binary-outcome measurement $\BMeas{\ketbra{\tau_b}_{\mathcal{X},\mathcal{Y}}}$.
By combining this lemma with the fact that our $\mathsf{Transform}$ procedure can always be truncated (in a further hybrid argument) to have strict ${\rm poly}(\lambda,1/\varepsilon)$-runtime with $\varepsilon$-accuracy, we can prove the desired indistinguishability claims.
\paragraph{From the na\"ive simulator to the full simulator.} The problem with both the classical and quantum na\"ive simulators presented above is that their expected runtime is not polynomial. The issue is conceptually the same as in~\cref{subsubsec:gk-issue}. Consider a malicious verifier $V^*$ that gives a valid response with negligible probability $p$ when run on $a$ sampled as $(a,z) \gets \mathsf{SHVZK}.\mathsf{Sim}(0)$, and succeeds with probability $p-{\rm negl}$ when run on $a$ sampled as $(a,z) \gets \mathsf{SHVZK}.\mathsf{Sim}(r)$. Then the expected running time is $\frac{p}{p-{\rm negl}}$, which can be unbounded for small $p$.
The solution described in~\cite{JC:GolKah96} is therefore to \emph{estimate} the running time of the rewinding step before making the computational switch. That is, if the simulator obtains a valid response before the rewinding step, then it keeps running the $V^*$ on samples from $\mathsf{SHVZK}.\mathsf{Sim}(0)$ until it obtains $\lambda$ additional valid responses. This gives the simulator an accurate estimate of the success probability of $V^*$, which it uses to bound the running time of the subsequent rewinding step.
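A classical caricature of this estimation trick is sketched below (the function \texttt{opens} is a hypothetical stub of ours modeling one run of $V^*$; the constants are arbitrary illustration choices):

```python
# Classical sketch of the GK runtime-estimation trick, not the paper's
# quantum procedure.  opens() models one run of V*, returning True when
# the verifier produces a valid opening.

def estimate_then_rewind(opens, lam=100):
    if not opens():                 # first run: V* aborts, no rewinding needed
        return None
    # Estimate p: keep running V* until lam additional valid openings.
    trials, successes = 0, 0
    while successes < lam:
        trials += 1
        successes += opens()
    p_hat = lam / trials            # estimate of V*'s success probability
    # Bound the rewinding step by ~ lam / p_hat iterations, so the
    # expected total work is p * (1/p) * poly(lam) = poly(lam).
    budget = int(lam / p_hat)
    for _ in range(budget):
        if opens():                 # stands in for the Sim(r') rewinding run
            return "success"
    return "timeout"
```

Since the estimation phase is only entered with probability $p$ and takes expected time $O(\lambda/p)$, the $p \cdot 1/p$ cancellation yields expected polynomial runtime, at the cost of a small correctness error from the (unlikely) timeout branch.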
We give a quantum simulator in $\mathsf{EQPT}_{c}$ for the~\cite{JC:GolKah96} protocol that implements the analogous quantum version of this estimation trick. As in~\cref{subsubsec:gk-fix}, the idea is to first compute an upper bound on the runtime of the $\mathsf{Transform}$ step (equivalently, a lower bound on the singular values) after measuring $\mathsf{C}$ in Step $3^*$ \emph{before} measuring $r$. This estimate is computed using a variable-length $\mathsf{Estimate}^{\mathsf{S}_0,\mathsf{C}}$ procedure, and since the $\mathsf{Transform}$ step has now been restricted to run in fixed polynomial time, we achieve the desired $p \cdot 1/p = 1$ cancellation in the expected running time.
Implementing this properly requires several tweaks to our simulator. In particular, the simulator no longer measures the verifier's challenge $r'$ directly in Step $3^*$; recording $r'$ is now delegated to $U$, since this step must be performed ``in between'' $\mathsf{Estimate}$ and $\mathsf{Transform}$. That is, we must modify $U$ so that instead of just performing (a coherent implementation of) $\mathsf{Transform}$, it runs the following steps coherently: (1) perform a variable-length $\mathsf{Estimate}$, where $\mathsf{Estimate}$ is parameterized by the same projectors as $\mathsf{Transform}$; (2) compute and measure the verifier's response; (3) run $\mathsf{Transform}$ using the time bound computed from $\mathsf{Estimate}$. We defer further details to the full proof (\cref{sec:gk}). We remark that just as in~\cref{subsubsec:gk-fix}, the ${\rm negl}(\lambda)$ error incurred by the collapsing measurement moves into the \emph{correctness} error of the simulation.
\subsection{Related Work}\label{sec:related-work}
\paragraph{Post-Quantum Zero-Knowledge.} The first construction of a zero-knowledge protocol secure against quantum adversaries is due to Watrous \cite{STOC:Watrous06}. Roughly speaking, \cite{STOC:Watrous06} shows that ``partial simulators'' that succeed with an inverse polynomial probability that is \emph{independent} of the verifier state can be extended to full post-quantum zero-knowledge simulators. This technique handles sequential repetitions of classical $\Sigma$-protocols and has been used as a subroutine in other contexts (e.g., \cite{STOC:BitShm20,C:BCKM21b,C:ChiChuYam21,C:AnaChuLap21}), but its applicability is limited to somewhat special situations. Nevertheless, most prior post-quantum zero-knowledge results have relied crucially on the \cite{STOC:Watrous06} technique.
\cite{STOC:BitShm20,TCC:AnaLap20} recently introduced a beautiful \emph{non-black-box} technique that, in particular, achieves constant-round zero knowledge arguments for $\mathsf{NP}$ with \emph{strict} polynomial time simulation \cite{STOC:BitShm20}. As discussed above, the use of non-black-box techniques is necessary to achieve strict polynomial time simulation in the classical \cite{STOC:BarLin02} and quantum \cite{FOCS:CCLY21} settings (and in the quantum setting this extends to $\mathsf{EQPT}_{m}$ simulation).
Finally, recent work \cite{C:ChiChuYam21} showed that the Goldreich--Kahan protocol achieves post-quantum $\epsilon$-zero knowledge. This is closely related to our \cref{thm:gk}, and so we present a detailed comparison below.
\paragraph{Comparison with \cite{C:ChiChuYam21}.} The post-quantum security of the Goldreich--Kahan protocol was analyzed previously in \cite{C:ChiChuYam21}. Our simulation strategy for \cref{thm:gk} is related to that of \cite{C:ChiChuYam21} in that the two simulators both consider the Jordan decomposition for essentially the same pair of projectors, but the two simulators are otherwise quite different.
At a high level, \cite{C:ChiChuYam21} constructs a (non-trivial) quantum analogue of the following classical simulator: given error parameter $\epsilon$, repeat the following ${\rm poly}(1/\epsilon)$ times: sample $(a,z) \gets \mathsf{Sim}(0)$ and run $V^*$ on $a$; if $V^*$ ever opens correctly, record its response $r$. Then, run a single execution of the protocol using $(a, z) \gets \mathsf{Sim}(r)$ and output the result.
More concretely, the \cite{C:ChiChuYam21} simulator first attempts to extract the verifier's challenge $r$ in ${\rm poly}(1/\varepsilon)$ time, and then attempts to generate an accepting transcript in a single final interaction with the verifier. However, if the verifier \emph{aborts} in this final interaction, the simulation fails; this is roughly because successfully extracting $r$ skews the verifier's state towards not aborting. To obtain a full simulator, they use an idea from \cite{STOC:BitShm20}: (1) design a ``partial simulator'' that randomly guesses whether the verifier will abort in its final invocation, then achieves $\varepsilon$-simulation conditioned on a correct guess; (2) apply \cite{STOC:Watrous06}-rewinding to ``amplify'' onto executions where the guess is correct.
It is natural to ask whether the above simulation strategy would have sufficed to prove \cref{thm:gk} (instead of writing down a new simulator). We remark that this is unlikely; their simulator seems to be tailored to $\varepsilon$-ZK and, moreover, does not address what \cite{JC:GolKah96} describe as the main technical challenge in the classical setting: handling verifiers that abort with all but negligible probability. In more detail:
\begin{itemize}
\item Their non-aborting simulator (like the classical analogue above) \emph{always} tries to extract $r$. To achieve negligible simulation error, this extraction must succeed with all but negligible probability for any adversary that, with inverse polynomial probability, does not abort. This would require the simulator to run in superpolynomial time.
Our simulator, as well as essentially all classical black-box ZK simulators, address this issue by first measuring whether the verifier aborts, and then only proceeding with the simulation in the non-aborting case.
\item By Markov's inequality, expected polynomial time simulation implies $\varepsilon$-simulation in time $O(1/\varepsilon)$. As a function of $\varepsilon$, the \cite{C:ChiChuYam21} simulator runs in some large polynomial time (as currently written, they appear to achieve runtime $1/\varepsilon^6$, although it is likely unoptimized). Thus, even a hypothetical variable-runtime version of their simulator would not run in expected polynomial time. In particular, the \cite{STOC:Watrous06,STOC:BitShm20} ``guessing'' compiler appears to cause a quadratic blowup in the runtime of their non-aborting simulator (due to a required smaller accuracy parameter).
\item The \cite{STOC:Watrous06,STOC:BitShm20} ``guessing'' compiler adds an additional layer of complexity onto the \cite{C:ChiChuYam21} simulator that is incompatible with the $\mathsf{EQPT}_{c}$ definition: given an $\mathsf{EQPT}_{c}$ partial simulator, the compiler would not produce a procedure in $\mathsf{EQPT}_{c}$.
\end{itemize}
\noindent We also achieve some improvements over~\cite{C:ChiChuYam21} unrelated to the simulation accuracy:
\begin{itemize}
\item \cite{C:ChiChuYam21} require that the underlying sigma protocol satisfies a \emph{delayed witness} property, which is not required in the classical setting. Our ``projector indistinguishability'' lemma (\cref{lemma:proj-indist}; see also~\cref{sec:tech-overview-pqzk-gk}) enables us to handle arbitrary sigma protocols.
\item \cite{C:ChiChuYam21} require that the verifier commit to the sigma protocol challenge $r$ using a \emph{strong collapse-binding} commitment. Using a new proof technique (see~\cref{sec:unique-message-collapsing}), we show that standard collapse-binding suffices.
\end{itemize}
\paragraph{Post-Quantum Extraction.}
As previously discussed, there is a line of prior work \cite{EC:Unruh12,EC:Unruh16,FOCS:CMSZ21} that achieves forms of post-quantum extraction that do \emph{not} preserve the prover state. Below we briefly discuss prior work on state-preserving post-quantum extraction.
\cite{STOC:BitShm20} directly constructs a state-preserving extractable commitment with \emph{non-black-box} extraction in order to achieve their zero-knowledge result. Their construction makes use of post-quantum fully homomorphic encryption (for quantum circuits). Their extractor homomorphically evaluates the adversarial sender.
\cite{STOC:BitShm20} also shows that constant-round zero-knowledge arguments and post-quantum secure function evaluation generically imply constant-round state-preserving extractable commitments. Combining this with \cite{STOC:Watrous06} yields a polynomial-round state-preserving extractable commitment scheme. Since this result also holds in the ``$\epsilon$ setting,'' plugging in \cite{C:ChiChuYam21} implies a constant-round $\epsilon$ state-preserving extractable commitment, although this protocol would have a large (constant) number of rounds and is only privately verifiable.
All of the above results achieve computationally state-preserving extraction. \cite{C:AnaChuLap21} constructs a polynomial-round state-preserving extractable commitment scheme with \emph{statistical} state preservation. They use the \cite{STOC:Watrous06} simulation technique as the core of their extraction procedure, applied to a new construction where statistical state preservation is possible.
\section{Preliminaries}
\label{sec:preliminaries}
The security parameter is denoted by $\lambda$. A function $f \colon \mathbb{N} \rightarrow [0,1]$ is \emph{negligible}, denoted $f(\lambda) = {\rm negl}(\lambda)$, if it decreases faster than the inverse of any polynomial. A probability is \emph{overwhelming} if it is at least $1-{\rm negl}(\lambda)$ for a negligible function ${\rm negl}(\lambda)$. For any positive integer $n$, let $[n] \coloneqq \{1,2,\dots,n\}$. For a set $R$, we write $r \gets R$ to denote a uniformly random sample $r$ drawn from $R$.
\subsection{Quantum Preliminaries and Notation}
\label{sec:quantum-prelims}
\paragraph{Quantum information.}
A (pure) \emph{quantum state} is a vector $\ket{\psi}$ in a complex Hilbert space $\mathcal{H}$ with $\norm{\ket{\psi}} = 1$; in this work, $\mathcal{H}$ is finite-dimensional. We denote by $\Hermitians{\mathcal{H}}$ the space of Hermitian operators on $\mathcal{H}$. A \emph{density matrix} is a positive semi-definite operator $\bm{\rho} \in \Hermitians{\mathcal{H}}$ with $\Tr(\bm{\rho}) = 1$. A density matrix represents a probabilistic mixture of pure states (a mixed state); the density matrix corresponding to the pure state $\ket{\psi}$ is $\ketbra{\psi}$. Typically we divide a Hilbert space into \emph{registers}, e.g. $\mathcal{H} = \mathcal{H}_1 \otimes \mathcal{H}_2$. We sometimes write, e.g., $\bm{\rho}^{\mathcal{H}_1}$ to specify that $\bm{\rho} \in \Hermitians{\mathcal{H}_1}$.
A unitary operation is a complex square matrix $U$ such that $U \contra{U} = \mathbf{I}$. The operation $U$ transforms the pure state $\ket{\psi}$ to the pure state $U \ket{\psi}$, and the density matrix $\bm{\rho}$ to the density matrix $U \bm{\rho} \contra{U}$.
A \emph{projector} $\Pi$ is a Hermitian operator ($\contra{\Pi} = \Pi$) such that $\Pi^2 = \Pi$. A \emph{projective measurement} is a collection of projectors $\mathsf{P} = (\Pi_i)_{i \in S}$ such that $\sum_{i \in S} \Pi_i = \mathbf{I}$. This implies that $\Pi_i \Pi_j = 0$ for distinct $i$ and $j$ in $S$. The application of $\mathsf{P}$ to a pure state $\ket{\psi}$ yields outcome $i \in S$ with probability $p_i = \norm{\Pi_i \ket{\psi}}^2$; in this case the post-measurement state is $\ket{\psi_i} = \Pi_i \ket{\psi}/\sqrt{p_i}$. We refer to the post-measurement state $\Pi_i \ket{\psi}/\sqrt{p_i}$ as the result of applying $\mathsf{P}$ to $\ket{\psi}$ and \emph{post-selecting} (conditioning) on outcome $i$. A state $\ket{\psi}$ is an \emph{eigenstate} of $\mathsf{P}$ if it is an eigenstate of every $\Pi_i$.
A two-outcome projective measurement is called a \emph{binary projective measurement}, and is written as $\mathsf{P} = \BMeas{\Pi}$, where $\Pi$ is associated with the outcome $1$, and $\mathbf{I} - \Pi$ with the outcome $0$.
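As a quick NumPy illustration of these conventions (a toy qubit example of ours, not tied to any protocol in this paper), applying a binary projective measurement and post-selecting on outcome $1$ looks as follows:

```python
import numpy as np

# Binary projective measurement P = (Pi, I - Pi) applied to |psi>:
# outcome 1 has probability p1 = ||Pi |psi>||^2, with post-measurement
# state Pi|psi>/sqrt(p1).
Pi = np.array([[1.0, 0.0], [0.0, 0.0]])          # projector onto |0>
psi = np.array([np.sqrt(0.36), np.sqrt(0.64)])   # |psi> = 0.6|0> + 0.8|1>

p1 = np.linalg.norm(Pi @ psi) ** 2               # Pr[outcome = 1]
post = Pi @ psi / np.sqrt(p1)                    # post-selected state
```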
General (non-unitary) evolution of a quantum state can be represented via a \emph{completely-positive trace-preserving (CPTP)} map $T \colon \Hermitians{\mathcal{H}} \to \Hermitians{\mathcal{H}'}$. We omit the precise definition of these maps in this work; we only use the facts that they are trace-preserving (for every $\bm{\rho} \in \Hermitians{\mathcal{H}}$ it holds that $\Tr(T(\bm{\rho})) = \Tr(\bm{\rho})$) and linear.
For every CPTP map $T \colon \Hermitians{\mathcal{H}} \to \Hermitians{\mathcal{H}}$ there exists a \emph{unitary dilation} $U$ that operates on an expanded Hilbert space $\mathcal{H} \otimes \mathcal{K}$, so that $T(\bm{\rho}) = \Tr_{\mathcal{K}}(U (\bm{\rho} \otimes \ketbra{0}^{\mathcal{K}}) U^{\dagger})$. The dilation is not necessarily unique; however, if $T$ is described as a circuit then there is a dilation $U_{T}$ represented by a circuit of size $O(|T|)$.
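For a concrete (illustrative) example of a dilation: the single-qubit dephasing channel $T(\bm{\rho}) = \frac{1}{2}(\bm{\rho} + Z\bm{\rho}Z)$ is dilated by a CNOT from the system into a fresh $\ket{0}$ ancilla, as the following NumPy check confirms.

```python
import numpy as np

# Dilation of the dephasing channel T(rho) = (rho + Z rho Z)/2 by a CNOT
# into a |0> ancilla, followed by tracing out the ancilla.
Z = np.diag([1.0, -1.0])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)   # system qubit is the control

def dephase(rho):
    return 0.5 * (rho + Z @ rho @ Z)

def dilated(rho):
    ancilla = np.array([[1.0, 0.0], [0.0, 0.0]])        # |0><0|
    big = CNOT @ np.kron(rho, ancilla) @ CNOT.conj().T  # U (rho ⊗ |0><0|) U†
    big = big.reshape(2, 2, 2, 2)                       # indices: i,k,j,l
    return np.einsum('ikjk->ij', big)                   # partial trace over ancilla
```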
For Hilbert spaces $\mathcal{A},\mathcal{B}$ the \emph{partial trace} over $\mathcal{B}$ is the unique CPTP map $\Tr_{\mathcal{B}} \colon \Hermitians{\mathcal{A} \otimes \mathcal{B}} \to \Hermitians{\mathcal{A}}$ such that $\Tr_{\mathcal{B}}(\bm{\rho}_A \otimes \bm{\rho}_B) = \Tr(\bm{\rho}_B) \bm{\rho}_A$ for every $\bm{\rho}_A \in \Hermitians{\mathcal{A}}$ and $\bm{\rho}_B \in \Hermitians{\mathcal{B}}$.
A \emph{general measurement} is a CPTP map $\mathsf{M} \colon \Hermitians{\mathcal{H}} \to \Hermitians{\mathcal{H} \otimes \mathcal{O}}$, where $\mathcal{O}$ is an ancilla register holding a classical outcome. Specifically, given measurement operators $\{ M_{i} \}_{i=1}^{N}$ such that $\sum_{i=1}^{N} M_{i}^{\dagger} M_{i} = \mathbf{I}$ and a basis $\{ \ket{i} \}_{i=1}^{N}$ for $\mathcal{O}$, $\mathsf{M}(\bm{\rho}) \coloneqq \sum_{i=1}^{N} (M_{i} \bm{\rho} M_{i}^{\dagger} \otimes \ketbra{i}^{\mathcal{O}})$. We sometimes implicitly discard the outcome register. A projective measurement is a general measurement where the $M_{i}$ are projectors. A measurement induces a probability distribution over its outcomes given by $\Pr[i] = \Tr(\ketbra{i}^{\mathcal{O}} \mathsf{M}(\bm{\rho}))$; we denote sampling from this distribution by $i \gets \mathsf{M}(\bm{\rho})$.
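A small sketch of a non-projective general measurement on one qubit (our toy example): the two measurement operators below satisfy the completeness condition $\sum_i M_i^{\dagger} M_i = \mathbf{I}$, and the induced outcome probabilities $\Pr[i] = \Tr(M_i \bm{\rho} M_i^{\dagger})$ sum to one.

```python
import numpy as np

# Two measurement operators for a non-projective qubit measurement.
# Completeness: M0†M0 + M1†M1 = diag(1, 0.5) + diag(0, 0.5) = I.
M0 = np.array([[1.0, 0.0], [0.0, np.sqrt(0.5)]])
M1 = np.array([[0.0, 0.0], [0.0, np.sqrt(0.5)]])

def outcome_probs(rho):
    # Pr[i] = Tr(M_i rho M_i†)
    return [np.trace(M @ rho @ M.conj().T).real for M in (M0, M1)]
```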
The \emph{trace distance} between states $\bm{\rho},\bm{\sigma}$, denoted $d(\bm{\rho},\bm{\sigma})$, is defined as $\frac{1}{2}\Tr( \sqrt{(\bm{\rho} - \bm{\sigma})^2})$. The trace distance is contractive under CPTP maps (for any CPTP map $T$, $d(T(\bm{\rho}),T(\bm{\sigma})) \leq d(\bm{\rho},\bm{\sigma})$). It follows that for any measurement $\mathsf{M}$, the statistical distance between the distributions $\mathsf{M}(\bm{\rho})$ and $\mathsf{M}(\bm{\sigma})$ is bounded by $d(\bm{\rho},\bm{\sigma})$. We have the following \emph{gentle measurement lemma}, which bounds how much a state is disturbed by applying a measurement whose outcome is almost certain.
\begin{lemma}[Gentle Measurement~\cite{Winter99}]
\label{lemma:gentle-measurement}
Let $\bm{\rho} \in \Hermitians{\mathcal{H}}$ and $\mathsf{P} = \BMeas{\Pi}$ be a binary projective measurement on $\mathcal{H}$ such that $\Tr(\Pi \bm{\rho}) \geq 1-\delta$. Let
\[\bm{\rho}' \coloneqq \frac{\Pi \bm{\rho} \Pi}{\Tr(\Pi \bm{\rho})} \]
be the state after applying $\mathsf{P}$ to $\bm{\rho}$ and post-selecting on obtaining outcome $1$. Then
\begin{equation*}
d(\bm{\rho},\bm{\rho}') \leq 2\sqrt{\delta}.
\end{equation*}
\end{lemma}
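As a quick sanity check of the bound, consider a single qubit whose amplitude on the rejecting outcome is $\sqrt{\delta}$. The following numpy sketch (the state and projector are illustrative choices, not part of the formal development) computes the trace distance between $\bm{\rho}$ and the post-selected state:

```python
import numpy as np

def trace_distance(rho, sigma):
    # d(rho, sigma) = (1/2) Tr sqrt((rho - sigma)^2) = half the sum of |eigenvalues|
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(eigs))

delta = 0.04
psi = np.array([np.sqrt(1 - delta), np.sqrt(delta)])
rho = np.outer(psi, psi)          # pure state with <0|rho|0> = 1 - delta
Pi = np.diag([1.0, 0.0])          # projector onto |0>

p = np.trace(Pi @ rho)            # acceptance probability, equals 1 - delta
rho_post = Pi @ rho @ Pi / p      # state after post-selecting on outcome 1

d = trace_distance(rho, rho_post)
print(d, 2 * np.sqrt(delta))      # d stays well within the 2*sqrt(delta) bound
```

For this pure-state example the trace distance works out to exactly $\sqrt{\delta}$, comfortably below the lemma's bound of $2\sqrt{\delta}$.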
\begin{definition}
\label{def:almost-proj}
A real-valued measurement $\mathsf{M}$ on $\mathcal{H}$ is \defemph{$(\varepsilon,\delta)$-almost-projective} if applying $\mathsf{M}$ twice in a row to any state $\bm{\rho} \in \Hermitians{\mathcal{H}}$ produces measurement outcomes $p,p'$ where
\begin{equation*}
\Pr[\abs{p-p'} \leq \varepsilon] \geq 1-\delta.
\end{equation*}
\end{definition}
\paragraph{Quantum algorithms.}
In this work, a \emph{quantum adversary} is a family of quantum circuits $\{ \mathsf{Adv}_{\lambda} \}_{\lambda \in \mathbb{N}}$ represented classically using some standard universal gate set. A quantum adversary is \emph{polynomial-size} if there exists a polynomial $p$ and $\lambda_0 \in \mathbb{N}$ such that for all $\lambda > \lambda_0$ it holds that $|\mathsf{Adv}_{\lambda}| \leq p(\lambda)$ (i.e., quantum adversaries have classical non-uniform advice).
\subsection{Black-Box Access to Quantum Algorithms}
\label{subsec:blackbox}
Let $A$ be a polynomial-time quantum algorithm with internal state $\bm{\rho} \in \mathrm{D}(\mathcal{H})$ whose behavior is specified by a unitary $U$ on $\mathcal{X} \otimes \mathcal{H}$. A quantum oracle algorithm $S^A$ with \emph{black-box access} to $(A,\bm{\rho})$ is restricted to acting on $\mathcal{H}$ (which is initially set to $\bm{\rho}$) by applying the unitary $U$ or $U^\dagger$, but can freely manipulate $\mathcal{X}$ and an arbitrary external register $\mathcal{Y}$.
Black-box access models sometimes permit the $U$ and $U^\dagger$ gates to be controlled on any external registers (i.e., any registers other than the registers $\mathcal{X} \otimes \mathcal{H}$ to which $U$ is applied). We note that none of the black-box algorithms in this work require controlled access to $U,U^{\dagger}$. This is because our black-box use of $U,U^\dagger$ takes the form $U^{\dagger} (\mathbf{I}_{\mathcal{H}} \otimes V_{\mathcal{X},\mathcal{Y}_1}) U$ where $V$ is a unitary acting only on $\mathcal{X} \otimes \mathcal{Y}_1$, so applying $U,U^\dagger$ controlled on $\mathcal{Y}_2$ can be replaced by applying $V$ controlled on $\mathcal{Y}_2$.
\paragraph{Algorithms with classical input and output.} We also consider the special case of quantum algorithms that take a classical ``challenge'' $r$ and produce a classical ``response'' $z$. Writing $\mathcal{X} = \mathcal{R} \otimes \mathcal{Z}$, an algorithm of this form is specified by a unitary $U$ on $\mathcal{R} \otimes \mathcal{Z} \otimes \mathcal{H}$ of the form $\sum_r \ketbra{r}_{\mathcal{R}} \otimes U^{(r)}_{\mathcal{Z},\mathcal{H}}$. For example, $S^A$ can run $A$ on a superposition of inputs by initializing $\mathcal{R} \otimes \mathcal{Z}$ to the state $\frac{1}{\sqrt{|R|}} \sum_{r} \ket{r}_{\mathcal{R}} \otimes \ket{0}_{\mathcal{Z}}$ and then applying $U$.
We note that this definition is consistent with the notions of interactive quantum machines and oracle access to an interactive quantum machine used in e.g.~\cite{EC:Unruh12} and other works on post-quantum zero-knowledge.
We remark that our formalism is tailored to the two-message challenge-response setting. While the protocols we analyze in this paper will have more than two messages of interaction, our analysis will typically center around two particular messages in the middle of a longer execution, and $\bm{\rho}$ will be the intermediate state of the interactive algorithm right before the next challenge is sent. We also point out that the unitary $U$ can be treated as independent of the (classical) protocol transcript before challenge $r$ is sent, since we can assume this transcript is saved in $\bm{\rho}$.
\subsection{Jordan's Lemma}
\label{subsec:jordan}
We state Jordan's lemma and its relation to the singular value decomposition.
\begin{lemma}[\cite{Jordan75}]
\label{lemma:jordan}
For any two Hermitian projectors $\BProj{\mathsf{A}}$ and $\BProj{\mathsf{B}}$ on a Hilbert space $\mathcal{H}$, there exists an orthogonal decomposition of $\mathcal{H} = \bigoplus_j \RegS_{j}$ into one-dimensional and two-dimensional subspaces $\{\RegS_{j}\}_{j}$ (the \emph{Jordan subspaces}), where each $\RegS_{j}$ is invariant under both $\BProj{\mathsf{A}}$ and $\BProj{\mathsf{B}}$. Moreover:
\begin{itemize}[noitemsep]
\item in each one-dimensional space, $\BProj{\mathsf{A}}$ and $\BProj{\mathsf{B}}$ act as identity or rank-zero projectors; and
\item in each two-dimensional subspace $\RegS_{j}$, $\BProj{\mathsf{A}}$ and $\BProj{\mathsf{B}}$ are rank-one projectors. In particular, there exist distinct orthogonal bases $\{\JorKetA{j}{1},\JorKetA{j}{0}\}$ and $\{\JorKetB{j}{1},\JorKetB{j}{0}\}$ for $\RegS_{j}$ such that $\BProj{\mathsf{A}}$ projects onto $\JorKetA{j}{1}$ and $\BProj{\mathsf{B}}$ projects onto $\JorKetB{j}{1}$.
\end{itemize}
\end{lemma}
A simple proof of Jordan's lemma can be found in~\cite{Regev06-XXX}.
For each $j$, the vectors $\JorKetA{j}{1}$ and $\JorKetB{j}{1}$ are corresponding left and right singular vectors of the matrix $\BProj{\mathsf{A}} \BProj{\mathsf{B}}$ with singular value $s_j = |\JorBraKetAB{j}{1}|$. The same is true for $\JorKetA{j}{0}$ and $\JorKetB{j}{0}$ with respect to $(\mathbf{I}-\BProj{\MeasA})(\mathbf{I}-\BProj{\MeasB})$.
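The correspondence between Jordan subspaces and singular vectors can be checked numerically. The following numpy sketch draws two random projectors (the dimension and ranks are arbitrary illustrative choices) and verifies that each left/right singular-vector pair of $\BProj{\mathsf{A}} \BProj{\mathsf{B}}$ with nonzero singular value lies in the ranges of the respective projectors, with $s_j$ equal to the overlap:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_projector(dim, rank):
    # random rank-`rank` Hermitian projector built from a unitary's first columns
    q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
    basis = q[:, :rank]
    return basis @ basis.conj().T

dim = 4
PA = rand_projector(dim, 2)
PB = rand_projector(dim, 2)

# Singular vectors of PA @ PB pair up the Jordan subspaces:
# left singular vectors with s_j > 0 lie in range(PA), right ones in range(PB),
# and s_j = |<a_j|b_j>|.
U, s, Vh = np.linalg.svd(PA @ PB)
for j in range(dim):
    if s[j] > 1e-9:
        a, b = U[:, j], Vh[j].conj()
        assert np.allclose(PA @ a, a)               # a_j is fixed by PA
        assert np.allclose(PB @ b, b)               # b_j is fixed by PB
        assert np.isclose(abs(a.conj() @ b), s[j])  # overlap equals singular value
print(np.round(s, 4))
```

The nonzero singular values are the cosines of the Jordan angles between the two projectors' ranges.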
\subsection{Commitment Schemes}
A \emph{commitment scheme} consists of a pair of PPT algorithms $\mathsf{Gen},\mathsf{Commit}$ with the following properties.
\newcommand{\mathsf{Exp}^{\mathsf{Adv}}_{\mathsf{hide}}}{\mathsf{Exp}^{\mathsf{Adv}}_{\mathsf{hide}}}
\paragraph{Statistical/computational hiding.}
For an adversary $\mathsf{Adv}$, define the experiment $\mathsf{Exp}^{\mathsf{Adv}}_{\mathsf{hide}}(\lambda)$ as follows.
\begin{enumerate}[noitemsep]
\item $\mathsf{Adv}(1^\lambda)$ sends $(\mathsf{ck},m_0,m_1)$ to the challenger.
\item The challenger flips a coin $b \in \{0,1\}$ and returns $\mathsf{com} \coloneqq \mathsf{Commit}(\mathsf{ck},m_b)$ to the adversary.
\item The adversary outputs a bit $b'$. The experiment outputs $1$ if $b = b'$.
\end{enumerate}
We say that $(\mathsf{Gen},\mathsf{Commit})$ is statistically (resp. computationally) hiding if for all unbounded (resp. non-uniform QPT) adversaries $\mathsf{Adv}$,
\begin{equation*}
| \Pr[\mathsf{Exp}^{\mathsf{Adv}}_{\mathsf{hide}}(\lambda) = 1] - 1/2 | = {\rm negl}(\lambda) ~.
\end{equation*}
\newcommand{\mathsf{Exp}^{\mathsf{Adv}}_{\mathsf{bind}}}{\mathsf{Exp}^{\mathsf{Adv}}_{\mathsf{bind}}}
\paragraph{Statistical/computational binding.}
For an adversary $\mathsf{Adv}$, define the experiment $\mathsf{Exp}^{\mathsf{Adv}}_{\mathsf{bind}}(\lambda)$ as follows.
\begin{enumerate}[noitemsep]
\item The challenger generates $\mathsf{ck} \gets \mathsf{Gen}(1^\lambda)$.
\item $\mathsf{Adv}(\mathsf{ck})$ sends $(m_0,\omega_0,m_1,\omega_1)$ to the challenger.
\item The experiment outputs $1$ if $m_0 \neq m_1$ and $\mathsf{Commit}(\mathsf{ck},m_0,\omega_0) = \mathsf{Commit}(\mathsf{ck},m_1,\omega_1)$.
\end{enumerate}
We say that $(\mathsf{Gen},\mathsf{Commit})$ is statistically (resp. computationally) binding if for all unbounded (resp. non-uniform QPT) adversaries $\mathsf{Adv}$,
\begin{equation*}
\Pr[\mathsf{Exp}^{\mathsf{Adv}}_{\mathsf{bind}}(\lambda) = 1] = {\rm negl}(\lambda) ~.
\end{equation*}
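For intuition, the syntax above can be illustrated with a folklore hash-based construction. This is a heuristic sketch only: SHA-256 stands in for a binding commitment, the names are ours, and this is not a scheme analyzed in this work.

```python
import hashlib
import secrets

def gen(security_bits=128):
    # toy key generation: the "commitment key" is just a public random salt
    return secrets.token_bytes(security_bits // 8)

def commit(ck, m, omega):
    # hash-based commitment: binding rests on collision resistance of the hash,
    # hiding (heuristically) on the random opening omega
    return hashlib.sha256(ck + m + omega).hexdigest()

ck = gen()
m = b"message"
omega = secrets.token_bytes(16)
com = commit(ck, m, omega)

# the receiver verifies an opening (m, omega) by recomputing the commitment
assert commit(ck, m, omega) == com
# a different message/opening pair matches only with negligible probability
assert commit(ck, b"other", secrets.token_bytes(16)) != com
```

In the binding experiment above, an adversary against this toy scheme would have to exhibit a SHA-256 collision; hiding is only heuristic here, which is why the body of the paper treats commitments abstractly via the experiments.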
\newcommand{\mathsf{Exp}^{\mathsf{Adv}}_{\mathsf{cl}}}{\mathsf{Exp}^{\mathsf{Adv}}_{\mathsf{cl}}}
\newcommand{\Meas{\mathsf{valid}}}{\Meas{\mathsf{valid}}}
\paragraph{Collapse binding.}
For an adversary $\mathsf{Adv}$, define the experiment $\mathsf{Exp}^{\mathsf{Adv}}_{\mathsf{cl}}(\lambda)$ as follows.
\begin{enumerate}[noitemsep]
\item The challenger generates $\mathsf{ck} \gets \mathsf{Gen}(1^\lambda)$.
\item $\mathsf{Adv}(\mathsf{ck})$ sends a commitment $\mathsf{com}$ and a quantum state $\bm{\rho}$ on registers $\mathcal{M} \otimes \mathcal{W}$.
\item The challenger flips a coin $b \in \{0,1\}$. If $b = 0$, the challenger does nothing. Otherwise, the challenger measures $\mathcal{M}$ in the computational basis.
\item The challenger returns registers $\mathcal{M} \otimes \mathcal{W}$ to the adversary, who outputs a bit $b'$. The experiment outputs $1$ if $b = b'$.
\end{enumerate}
We say that $\mathsf{Adv}$ is valid if measuring the output of $\mathsf{Adv}(\mathsf{ck})$ in the computational basis yields, with probability $1$, $(\mathsf{com},m,\omega)$ such that $\mathsf{Commit}(\mathsf{ck},m,\omega) = \mathsf{com}$.
We say that $(\mathsf{Gen},\mathsf{Commit})$ is collapse-binding if for all \emph{valid} non-uniform QPT adversaries $\mathsf{Adv}$,
\begin{equation*}
| \Pr[\mathsf{Exp}^{\mathsf{Adv}}_{\mathsf{cl}}(\lambda) = 1] - 1/2 | = {\rm negl}(\lambda) ~.
\end{equation*}
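To see what collapse-binding rules out, suppose a commitment admitted two valid message-opening pairs: an adversary holding their uniform superposition wins the collapsing experiment. A minimal numpy sketch (the two basis states are stand-ins for the two hypothetical valid pairs):

```python
import numpy as np

# basis states |0>, |1> of a single "message qubit" stand in for two
# hypothetical valid (message, opening) pairs for the same commitment
psi = np.array([1.0, 1.0]) / np.sqrt(2)

rho_b0 = np.outer(psi, psi)           # b = 0: challenger returns the state untouched
rho_b1 = np.diag(np.abs(psi) ** 2)    # b = 1: computational-basis measurement collapses it

# the adversary projects back onto |psi> and guesses b = 0 iff the projection succeeds
p0 = float(psi @ rho_b0 @ psi)        # success probability when b = 0 (equals 1)
p1 = float(psi @ rho_b1 @ psi)        # success probability when b = 1 (equals 1/2)

advantage = abs((p0 + (1 - p1)) / 2 - 0.5)   # distinguishing advantage over random guessing
print(p0, p1, advantage)
```

The adversary guesses $b$ correctly with probability $3/4$, i.e., advantage $1/4$, so any commitment admitting such superpositions of openings fails collapse-binding.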
\subsection{Preliminaries on Interactive Arguments}\label{subsec:protocol-prelims}
An interactive argument for an $\mathsf{NP}$-language $L$ consists of a pair of interactive algorithms $P, V$:
\begin{itemize}
\item The prover algorithm $P$ is given as input an $\mathsf{NP}$ statement $x$ and an $\mathsf{NP}$ witness $w$ for $x$.
\item The verifier algorithm $V$ is given as input an $\mathsf{NP}$ statement $x$; at the end of the interaction, it outputs a bit $b$ (interpreted as ``accept''/``reject'').
\end{itemize}
The minimal requirement we ask of such a protocol is \emph{completeness}, which states that when the honest $P, V$ algorithms are executed on a valid instance-witness pair $(x, w)$, the verifier should accept with probability $1-{\rm negl}(\lambda)$.
We typically consider interactive arguments consisting of either $3$ or $4$ messages. In many (but not all) settings we assume that the argument system is \emph{public-coin} (in the second-to-last round), meaning that the second-to-last message (or \emph{challenge}) is a uniformly random string $r$ from some domain. We will use the following notation to denote messages in any such protocol:
\begin{itemize}[noitemsep]
\item For $4$-message public-coin protocols, we use $\mathsf{vk}$ to denote the first verifier message.
\item We denote the first prover message by $a$.
\item We denote the verifier challenge by $r$.
\item We denote the prover response by $z$.
\item We denote the verification predicate by $V(\mathsf{vk}, a, r, z)$.
\end{itemize}
We consider $3$-message protocols as a special case of $4$-message protocols in which $\mathsf{vk} = \bot$.
A key property of interactive protocols considered in this work is \emph{collapsing} (and relaxations thereof), defined below.
\begin{definition}[Collapsing Protocol \cite{EC:Unruh16,LiuZ19,DonFMS19}]
\label{def:collapsing-protocol}
An interactive protocol $(P,V)$ is \emph{collapsing} if for every polynomial-size interactive quantum adversary $A$ (where $A$ may have an arbitrary polynomial-size auxiliary input quantum advice state),
\begin{equation*}
\Big| \Pr[\mathsf{CollapseExpt}(0,A) = 1]
- \Pr[\mathsf{CollapseExpt}(1,A) = 1] \Big|
\leq {\rm negl}(\lambda).
\end{equation*}
For $b \in \{0,1\}$, the experiment $\mathsf{CollapseExpt}(b,A)$ is defined as follows:
\begin{enumerate}[noitemsep]
\item The challenger runs the interaction $\langle A,V\rangle$ between $A$ (acting as a malicious prover) and the honest verifier $V$, stopping just before the measurement of the register $\mathcal{Z}$ containing the malicious prover's final message. Let $\tau'$ be the transcript up to this point, excluding the final prover message.
\item The challenger applies a unitary $U$ to compute the verifier's decision bit $V(\tau',\mathcal{Z})$ onto a fresh ancilla, measures the ancilla, and then applies $U^\dagger$. If the measurement outcome is $0$, the experiment aborts.
\item If $b = 0$, the challenger does nothing. If $b = 1$, the challenger measures the $\mathcal{Z}$ register in the computational basis and discards the result.
\item The challenger returns the $\mathcal{Z}$ register to $A$. Finally $A$ outputs a bit $b'$, which is the output of the experiment.
\end{enumerate}
\end{definition}
\cref{def:collapsing-protocol} captures the collapsing property of Kilian's interactive argument system \cite{STOC:Kilian92} (as well as other $\Sigma$-protocols that make use of ``strongly collapsing commitments'' \cite{C:ChiChuYam21}), but does not accurately capture protocols that make use of commitments satisfying statistical binding but not ``strict binding'' \cite{EC:Unruh12}. To capture these protocols, we introduce a partial-collapsing definition.
For a 3 or 4-message interactive protocol $(P,V)$, let $T$ denote the set of transcript prefixes $\tau_{\mathrm{pre}}$ (i.e., the first message $a$ in a 3-message protocol or the first two messages $(\mathsf{vk},a)$ in a 4-message protocol), let $R$ denote the set of challenges $r$ (the second-to-last message) and let $Z$ denote the set of possible responses $z$ (the final message). Informally, such a protocol is \emph{partially collapsing} with respect to a function $f: T \times R \times Z \rightarrow \{0,1\}^*$ if the prover cannot detect a measurement of $f$.
\begin{definition}[Partially Collapsing Protocol]
\label{def:partial-collapsing-protocol}
Let $f: T \times R \times Z \rightarrow \{0,1\}^*$ be a public efficiently computable function. A 3 or 4-message interactive protocol $(P, V)$ is partially collapsing with respect to $f$ if for every polynomial-size interactive quantum adversary $A$ (where $A$ may have an arbitrary polynomial-size auxiliary input quantum advice state),
\begin{equation*}
\Big| \Pr[\mathsf{PCollapseExpt}(0,f,A) = 1]
- \Pr[\mathsf{PCollapseExpt}(1,f,A) = 1] \Big|
\leq {\rm negl}(\lambda).
\end{equation*}
For $b \in \{0,1\}$, the experiment $\mathsf{PCollapseExpt}(b,f,A)$ is defined as follows:
\begin{enumerate}[noitemsep]
\item The challenger runs the interaction $\langle A,V\rangle$ between $A$ (acting as a malicious prover) and the honest verifier $V$, stopping just before the measurement of the register $\mathcal{Z}$ containing the malicious prover's final message. Let $(\tau_{\mathrm{pre}},r)$ be the transcript up to this point (i.e., excluding the final prover message).
\item The challenger applies a unitary $U$ to compute the verifier's decision bit $V(\tau_{\mathrm{pre}},r,\mathcal{Z})$ onto a fresh ancilla, measures the ancilla, and then applies $U^\dagger$. If the measurement outcome is $0$, the experiment aborts.
\item If $b = 0$, the challenger does nothing. If $b = 1$, the challenger initializes a fresh ancilla $\mathcal{Y}$ to $\ket{0}_{\mathcal{Y}}$, applies the unitary $U_f$ (acting on $\mathcal{Z} \otimes \mathcal{Y}$) that computes $f(\tau_{\mathrm{pre}},r,\cdot)$ on $\mathcal{Z}$ and XORs the output onto $\mathcal{Y}$, measures $\mathcal{Y}$ and discards the result, and then applies $U_f^\dagger$.
\item The challenger returns the $\mathcal{Z}$ register to $A$. Finally $A$ outputs a bit $b'$, which is the output of the experiment.
\end{enumerate}
\end{definition}
\cref{def:partial-collapsing-protocol} captures the collapsing property of standard commit-and-open $\Sigma$-protocols \cite{FOCS:GolMicWig86,Blum86} that make use of statistically binding (or, more generally, standard collapse-binding \cite{EC:Unruh16,C:ChiChuYam21}) commitments by setting $f$ to output the part of $z$ corresponding to the committed message (but not the opening). In some other cases (a subroutine of the \cite{FOCS:GolMicWig86} graph non-isomorphism protocol, as well as the \cite{C:LapSha90} ``reverse Hamiltonicity'' $\Sigma$-protocol) we will use more complicated definitions of $f$ that measure different pieces of information depending on the challenge $r$.
Finally, we recall the definition of special honest-verifier zero knowledge.
\begin{definition}[Special honest-verifier zero knowledge]\label{def:shvzk}
A 3-message sigma protocol $(P_{\Sigma},V_{\Sigma})$ is \emph{special honest verifier zero knowledge} (SHVZK) if there exists an algorithm $\mathsf{SHVZK}.\mathsf{Sim}$ such that for all $(x,w) \in \mathfrak{R}$ and challenges $r \in R$, the distributions
\begin{equation*}
\mathsf{SHVZK}.\mathsf{Sim}(x,r) \quad\text{and}\quad (a,z) \gets P_{\Sigma}(x,w,r)
\end{equation*}
are computationally indistinguishable.
\end{definition}
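As a concrete (toy) instance, a Schnorr-style protocol is SHVZK: given the challenge in advance, the simulator samples the response first and then solves for a first message that makes the transcript accept. A sketch with deliberately insecure toy parameters (the modulus, base, and witness below are arbitrary illustrative choices, not secure parameters):

```python
import secrets

p = 2**31 - 1            # a Mersenne prime, serving as a toy modulus (NOT secure parameters)
g = 5                    # toy base; the statement is pk = g^w mod p for witness w
w = 123456
pk = pow(g, w, p)

def verify(a, r, z):
    # Schnorr-style check: g^z == a * pk^r (mod p)
    return pow(g, z, p) == (a * pow(pk, r, p)) % p

def prove(r):
    # honest prover: first message a = g^s, response z = s + r*w mod (p-1)
    s = secrets.randbelow(p - 1)
    return pow(g, s, p), (s + r * w) % (p - 1)

def shvzk_sim(r):
    # simulator: sample z uniformly, then solve for a = g^z * pk^(-r) mod p
    z = secrets.randbelow(p - 1)
    return (pow(g, z, p) * pow(pk, -r, p)) % p, z

r = 99
a, z = prove(r)
assert verify(a, r, z)       # honest transcripts accept
a, z = shvzk_sim(r)
assert verify(a, r, z)       # simulated transcripts accept, without using w
```

The simulator never touches the witness $w$, yet its transcripts pass verification with the same distribution of accepting $(a,z)$ pairs, which is exactly the SHVZK requirement.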
\section{Standard Collapse-Binding Implies Unique Messages}
\label{sec:unique-message-collapsing}
Recall that the standard collapse-binding security property ensures that if an efficient adversary produces a superposition of valid message-opening pairs $(m,\omega)$ to a commitment $c$, then it cannot detect whether a measurement of $m$ is performed. There is an \emph{apparent} deficiency with this definition as compared to the classical binding definition, which Unruh (implicitly) observes in~\cite{EC:Unruh12,EC:Unruh16}: collapse-binding does not seem to imply that an adversary cannot give valid openings to two different messages \emph{if the openings themselves are not measured}.
This issue has received relatively little attention, in part because circumventing it turns out to be fairly easy in many cases by either modifying the underlying protocol, or by simply assuming ``strong'' collapse-binding \cite{C:ChiChuYam21} where the measurement of the message \emph{and opening} is undetectable. For example:
\begin{itemize}
\item In~\cite{EC:Unruh12}, Unruh introduces the notion of a \emph{strict-binding} commitment, defined so that for any commitment $c$, there is a unique valid message-opening pair $(m,\omega)$. Unruh shows that standard $\Sigma$-protocols (such as GMW 3-coloring and Blum Hamiltonicity) are sound when instantiated with strict-binding commitments, but due to the issue described above, is unable to prove that these protocols are sound when instantiated with a statistically-binding commitment.
\item In~\cite{EC:Unruh16}, Unruh gives a generic transformation which converts a classically secure $\Sigma$-protocol into a quantum proof of knowledge by committing to the responses to each challenge in advance. However, in many $\Sigma$-protocols (e.g. \cite{FOCS:GolMicWig86,Blum86}) the response already consists of an opening to a commitment; are these protocols secure if the commitment is collapse-binding?
\item This issue also arises in~\cite{C:ChiChuYam21}, which explicitly asks for a strong collapse-binding commitment to instantiate their $\Sigma$-protocols. (They do note that a statistically binding commitment also suffices via a different argument.)
\end{itemize}
We believe this is an unsatisfying state of affairs. Collapse-binding is widely accepted as the quantum analogue of classical computational binding, but as the above examples illustrate, there are many natural settings where it is unclear whether it can be used as a drop-in replacement for classically binding commitments. Given this issue, a natural suggestion would be to treat strong collapse-binding as the quantum analogue of classical binding. However, we suggest that any definition of quantum computationally binding should at least capture statistically binding commitments. Statistically binding commitments do not generically satisfy strong collapse-binding, but are (standard) collapse-binding. Worse, strong collapse-binding is not a ``robust'' notion: we can make any commitment scheme lose its strong collapse-binding property by adding a single bit to the opening that the receiver ignores.
In this section, we resolve this difficulty and show that standard collapse-binding generically implies that an adversary cannot give two valid openings for two different messages, even when the openings are left unmeasured. This simplifies some of the proofs in this work, and also implies that strong collapse-binding and strict binding are unnecessary in the above examples.
Towards proving this, we first formalize a natural security property that captures the fact that a quantum adversary should only be able to open to a unique message.
Let $\mathsf{Com} = (\mathsf{Gen},\mathsf{Commit})$ be a non-interactive commitment scheme. Define the following challenger-adversary interaction $\mathsf{Exp}_{uniq}^{\mathsf{Adv}}(\lambda)$ where $\mathsf{Adv} = (\mathsf{Adv}_1,\mathsf{Adv}_2)$ is a two-phase adversary.
\begin{enumerate}
\item\label[step]{step:unique-1} The challenger generates $\mathsf{ck} \gets \mathsf{Gen}(1^\lambda)$.
\item\label[step]{step:unique-2} Run $\mathsf{Adv}_1(\mathsf{ck})$, which outputs a classical commitment string $\mathsf{com}$, a classical message $m_1$, and a superposition of openings on register $\mathcal{W}$. It also returns its internal state $\mathcal{H}$, which is passed on to $\mathsf{Adv}_2$.
\item\label[step]{step:unique-3} The challenger measures whether $\mathcal{W}$ contains a valid opening for $m_1$ with respect to $\mathsf{com}$ and aborts (and outputs $0$) if not.
\item\label[step]{step:unique-4} Run $\mathsf{Adv}_2(\mathsf{ck})$ on $(\mathcal{H},\mathcal{W})$. It outputs another message $m_2$ and a superposition of openings on register $\mathcal{W}$. If $m_2 = m_1$ then the experiment aborts and outputs $0$.
\item\label[step]{step:unique-5} The challenger measures whether $\mathcal{W}$ contains a valid opening for $m_2$ with respect to $\mathsf{com}$. If so, the experiment outputs $1$, otherwise $0$.
\end{enumerate}
\begin{definition}
We say that a commitment scheme is \emph{unique-message binding} (i.e., any commitment can be opened to at most one message) if for all QPT adversaries $\mathsf{Adv}$,
\[ \Pr[\mathsf{Exp}_{uniq}^{\mathsf{Adv}}(\lambda) =1] = {\rm negl}(\lambda).\]
\end{definition}
\begin{lemma}\label{lemma:collapse-binding-unique-message}
Any collapse-binding commitment $\mathsf{Com}$ satisfies unique-message binding.
\end{lemma}
We remark that the unique-message binding definition and this lemma easily extend to interactive collapse-binding commitments. However, we will focus on the non-interactive case for simplicity. Our proof is reminiscent of the ``control qubit'' trick used by Unruh in~\cite{Unruh16-asiacrypt} to prove that collapse-binding implies a notion called sum-binding.
\begin{proof}
Suppose that $\mathsf{Adv} = (\mathsf{Adv}_1,\mathsf{Adv}_2)$ satisfies $\Pr[\mathsf{Exp}_{uniq}^{\mathsf{Adv}}(\lambda) =1] = \varepsilon(\lambda) = \varepsilon$. Then we construct an adversary $\mathsf{Adv}'$ that obtains advantage $\varepsilon/8$ in the collapsing game for $\mathsf{Com}$ as follows:
\begin{enumerate}
\item Upon receiving $\mathsf{ck}$ from the challenger, $\mathsf{Adv}'$ does the following:
\begin{enumerate}
\item Run $\mathsf{Adv}_1(\mathsf{ck})$ to obtain a classical commitment $\mathsf{com}$, a classical message $m_1$ (on register $\mathcal{M}$), and registers $\mathcal{W}, \mathcal{H}$.
\item\label[step]{step:collapsing-first-measurement} Measure whether $\mathcal{W}$ contains a valid opening for $m_1$ with respect to $\mathsf{com}$; if the opening is invalid, abort and output a random $b'$.\footnote{To match the syntax of the collapsing game, the ``abort'' works as follows: $\mathsf{Adv}'(\mathsf{ck})$ initializes $\mathcal{M} \otimes \mathcal{W}$ to some valid commitment, sends it to the challenger, ignores the registers it gets back, and then outputs a random $b'$.}
\item\label[step]{step:apply-U} Next, prepare an ancilla qubit $\mathcal{B}$ in the state $\ket{+}_{\mathcal{B}}$ and then apply the unitary $U$ defined as
\[ U = \ketbra{1}_{\mathcal{B}} \otimes U^{\mathsf{Adv}_2}_{\mathcal{H},\mathcal{M},\mathcal{W}} + \ketbra{0}_{\mathcal{B}} \otimes \mathbf{I}_{\mathcal{H},\mathcal{M},\mathcal{W}}.\]
where $U^{\mathsf{Adv}_2}$ is a unitary description of $\mathsf{Adv}_2$ (the action of $\mathsf{Adv}_2$ on $\mathcal{H} \otimes \mathcal{M} \otimes \mathcal{W}$ is unitary without loss of generality). That is, the unitary $U$ has two branches of computation: it does nothing when $\mathcal{B} = 0$, and it runs $\mathsf{Adv}_2$ when $\mathcal{B} = 1$.
\item\label[step]{step:measure-valid} Next apply the binary projective measurement $\BMeas{\BProj{\mathsf{ck},\mathsf{com},m_1}}$ where
\[\BProj{\mathsf{ck},\mathsf{com},m_1} \coloneqq \ketbra{0}_{\mathcal{B}} \otimes \mathbf{I}_{\mathcal{H},\mathcal{M},\mathcal{W}} + \ketbra{1}_{\mathcal{B}} \otimes \mathbf{I}_{\mathcal{H}} \otimes \sum_{\substack{m,\omega \ : \ m \neq m_1 \wedge \\ \mathsf{Commit}(\mathsf{ck},m,\omega) = \mathsf{com}}} \ketbra{m,\omega}_{\mathcal{M},\mathcal{W}}. \]
This measurement checks that after applying $U$, the output of $\mathsf{Adv}_2$ (when $\mathcal{B} = 1$) is a valid message and opening $(m,\omega)$ where $m \neq m_1$. If this measurement rejects, abort and output a random $b'$.
\item Finally, send $\mathcal{M} \otimes \mathcal{W}$ to the collapsing challenger.
\end{enumerate}
\item\label[step]{step:uncompute-U} When the collapsing challenger returns $\mathcal{M} \otimes \mathcal{W}$, apply $U^\dagger$.
\item\label[step]{step:distinguish} Perform the binary projective measurement $\BMeas{\BProj{+}}$ where $\BProj{+} \coloneqq \ketbra{+}_{\mathcal{B}} \otimes \mathbf{I}_{\mathcal{H},\mathcal{M},\mathcal{W}}$. If the measurement outcome is $1$ (corresponding to $\ket{+}$), then $\mathsf{Adv}'$ outputs $b' = 0$ (i.e., guesses that the collapsing challenger did not measure the message). Otherwise, it outputs $b' = 1$.
\end{enumerate}
We now compute the probability that $\mathsf{Adv}'$ outputs $b' = b$ for each choice of the collapsing challenge bit $b$.
If $b = 1$, then $\mathsf{Adv}'$ guesses correctly (outputs $b' = 1$) with probability exactly $1/2$. This is because if $\mathsf{Adv}'$ aborts, it outputs $1$ with probability $1/2$ by definition, and if it does not abort, then it sends the collapsing challenger $\mathcal{M} \otimes \mathcal{W}$ where a measurement of the $\mathcal{M}$ register will completely determine the $\mathcal{B}$ register. In particular, if the outcome of the $\mathcal{M}$ measurement is $m_1$, $\mathcal{B}$ collapses to $\ket{0}$; otherwise, $\mathcal{B}$ collapses to $\ket{1}$. In either case, the probability that the measurement of $\BMeas{\BProj{+}}$ returns $0$ (making $\mathsf{Adv}'$ output $b' = 1$) is exactly $1/2$.
We now consider the case $b = 0$. Let $\bm{\rho} = \sum_{\mathsf{ck},\mathsf{com},m_1} \bm{\rho}_{\mathsf{ck},\mathsf{com},m_1} + \bm{\rho}_{\bot}$ be the state on $\mathcal{M} \otimes \mathcal{W} \otimes \mathcal{H}$ after \cref{step:collapsing-first-measurement}, where $\bm{\rho}_{\mathsf{ck},\mathsf{com},m_1}$ is the (subnormalized) state corresponding to outcomes $\mathsf{ck},\mathsf{com},m_1$ and the outcome ``valid'' in \cref{step:collapsing-first-measurement}, and $\bm{\rho}_{\bot}$ is the (subnormalized) state corresponding to the outcome ``invalid'' in \cref{step:collapsing-first-measurement}.
Recall that in the case $b = 0$, the collapsing challenger does nothing to $\mathcal{M} \otimes \mathcal{W}$. Thus the effect of \cref{step:apply-U,step:measure-valid,step:uncompute-U} is to apply a binary projective measurement $\BMeas{\BProj{\mathsf{ck},\mathsf{com},m_1}'}$ where $\BProj{\mathsf{ck},\mathsf{com},m_1}' \coloneqq U^\dagger \BProj{\mathsf{ck},\mathsf{com},m_1} U$. From the description of the experiment, it holds that
\begin{align*}
\Pr[b' = 0] &= \frac{1}{2} \Tr(\bm{\rho}_{\bot}) + \frac{1}{2} \sum_{\mathsf{ck},\mathsf{com},m_1} \Tr((\mathbf{I}-\BProj{\mathsf{ck},\mathsf{com},m_1}') (\ketbra{+} \otimes \bm{\rho}_{\mathsf{ck},\mathsf{com},m_1})) \\
&\quad + \sum_{\mathsf{ck},\mathsf{com},m_1} \Tr(\BProj{+} \BProj{\mathsf{ck},\mathsf{com},m_1}' (\ketbra{+} \otimes \bm{\rho}_{\mathsf{ck},\mathsf{com},m_1}) \BProj{\mathsf{ck},\mathsf{com},m_1}') \\
& \geq \frac{1}{2} - \frac{1}{2} \sum_{\mathsf{ck},\mathsf{com},m_1} \Tr(\BProj{\mathsf{ck},\mathsf{com},m_1}' (\ketbra{+} \otimes \bm{\rho}_{\mathsf{ck},\mathsf{com},m_1})) \\
&\quad + \sum_{\mathsf{ck},\mathsf{com},m_1} \Tr(\bm{\rho}_{\mathsf{ck},\mathsf{com},m_1}) \left(\frac{\Tr(\BProj{\mathsf{ck},\mathsf{com},m_1}' (\ketbra{+} \otimes \bm{\rho}_{\mathsf{ck},\mathsf{com},m_1}))}{\Tr(\bm{\rho}_{\mathsf{ck},\mathsf{com},m_1})}\right)^2 \\
&\geq \frac{1}{2} - \frac{1}{2} \sum_{\mathsf{ck},\mathsf{com},m_1} \Tr(\BProj{\mathsf{ck},\mathsf{com},m_1}' (\ketbra{+} \otimes \bm{\rho}_{\mathsf{ck},\mathsf{com},m_1})) \\
&\quad + \frac{\left(\sum_{\mathsf{ck},\mathsf{com},m_1} \Tr(\BProj{\mathsf{ck},\mathsf{com},m_1}' (\ketbra{+} \otimes \bm{\rho}_{\mathsf{ck},\mathsf{com},m_1})) \right)^2}{\sum_{\mathsf{ck},\mathsf{com},m_1} \Tr(\bm{\rho}_{\mathsf{ck},\mathsf{com},m_1})}
\end{align*}
where the second inequality follows from Jensen's inequality, and the first from the following claim:
\begin{claim}
If $\BProj{\MeasA} \bm{\rho} = \bm{\rho}$ then $\Tr(\BProj{\MeasA} \BProj{\MeasB} \bm{\rho} \BProj{\MeasB}) \geq \Tr(\BProj{\MeasB} \bm{\rho})^2/\Tr(\bm{\rho})$.
\end{claim}
\begin{proof}
$\Tr(\BProj{\MeasB} \bm{\rho}) = \Tr(\BProj{\MeasB} \bm{\rho} \BProj{\MeasA}) = \Tr(\BProj{\MeasA} \BProj{\MeasB} \bm{\rho}) \leq \sqrt{\Tr(\BProj{\MeasA} \BProj{\MeasB} \bm{\rho} \BProj{\MeasB} \BProj{\MeasA}) \Tr(\bm{\rho})}$, where the inequality is by Cauchy-Schwarz.
\end{proof}
Let $\gamma \coloneqq \sum_{\mathsf{ck},\mathsf{com},m_1} \Tr(\bm{\rho}_{\mathsf{ck},\mathsf{com},m_1})$. Observe that $\sum_{\mathsf{ck},\mathsf{com},m_1} \Tr(\BProj{\mathsf{ck},\mathsf{com},m_1}' (\ketbra{+} \otimes \bm{\rho}_{\mathsf{ck},\mathsf{com},m_1})) = (\gamma + \varepsilon)/2$. It follows that
\begin{equation*}
\Pr[b' = 0] \geq \frac{1}{2} - \frac{1}{4} (\gamma + \varepsilon) + \frac{1}{4} \cdot \frac{(\gamma + \varepsilon)^2}{\gamma} \geq \frac{1}{2} + \frac{\varepsilon}{4} \enspace.
\end{equation*}
Thus the overall probability that $\mathsf{Adv}'$ guesses a random $b$ correctly in the collapsing experiment is at least $1/2 + \varepsilon(\lambda)/8$.
\end{proof}
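As an aside, the auxiliary claim used in the proof above can be spot-checked numerically. The following numpy sketch draws random projectors and a subnormalized state supported inside the range of the first projector (the dimension, ranks, and normalization are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4

def rand_projector(rank):
    # random rank-`rank` orthogonal projector from a QR decomposition
    qmat, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
    b = qmat[:, :rank]
    return b @ b.T

PA, PB = rand_projector(2), rand_projector(2)

# subnormalized state rho supported inside range(PA), so PA @ rho == rho
m = rng.normal(size=(dim, dim))
rho = PA @ (m @ m.T) @ PA
rho /= 2 * np.trace(rho)             # subnormalize to Tr(rho) = 1/2

lhs = np.trace(PA @ PB @ rho @ PB)   # Tr(PA PB rho PB)
rhs = np.trace(PB @ rho) ** 2 / np.trace(rho)
assert lhs >= rhs - 1e-12            # the claim: lhs >= Tr(PB rho)^2 / Tr(rho)
print(lhs, rhs)
```

The inequality is tight exactly when $\BProj{\MeasB} \bm{\rho} \BProj{\MeasB}$ is proportional to a state inside the range of $\BProj{\MeasA}$, mirroring the Cauchy-Schwarz equality condition in the proof.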
\section{Generalized Notions of Special Soundness}
\label{sec:gss}
Let $(P,V)$ denote a 3 or 4-message public-coin interactive proof or argument system. Let $T$ denote the set of transcript prefixes $\tau_{\mathrm{pre}}$ (i.e., the first message in a 3-message protocol or the first two messages in a 4-message protocol), let $R$ denote the set of challenges $r$ (the second-to-last message), and let $Z$ denote the set of possible responses $z$ (the final message). The instance $x$ is assumed to be part of $\tau_{\mathrm{pre}}$, which allows us to capture protocols in which the instance is adaptively chosen by the prover in its first message.
We introduce generalizations of the special soundness property to capture situations where
\begin{enumerate}
\item the special soundness extractor is able to produce a witness given only a function $f(z)$ of the response $z$, and/or
\item the extractor is only required to succeed (with some $1-{\rm negl}(\lambda)$ probability) when the challenges are sampled from an ``admissible distribution.''
\end{enumerate}
The second property is related to the notion of probabilistic special soundness due to~\cite{FOCS:CMSZ21}.\footnote{A similar (but not identical) definition appears in an older version of~\cite{FOCS:CMSZ21}: \url{https://arxiv.org/pdf/2103.08140v1.pdf}.}
Throughout this section, $k$ will be a parameter specifying the number of (partial) transcripts required to extract.
\subsection{Generalized Special Soundness Definitions}
We first recall the standard definition of $k$ special soundness.
\begin{definition}[$k$-special soundness]\label{def:k-ss}
An interactive protocol $(P, V)$ is $k$-special-sound if there exists an efficient extractor $\mathsf{SSExtract}: T \times (R \times Z)^k \rightarrow \{0,1\}^*$ such that given $\tau_{\mathrm{pre}},(r_i,z_i)_{i \in [k]}$ where each $r_i$ is distinct and each $(\tau_{\mathrm{pre}},r_i,z_i)$ is an accepting transcript, $\mathsf{SSExtract}(\tau_{\mathrm{pre}},(r_i,z_i)_{i \in [k]})$ outputs a valid witness $w$ for the instance $x$ with probability $1$.
\end{definition}
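For example, the classical Schnorr identification protocol is $2$-special-sound: two accepting transcripts $(a,r_1,z_1),(a,r_2,z_2)$ with $r_1 \neq r_2$ satisfy $z_i = s + r_i w \bmod q$, so the witness is $w = (z_1-z_2)(r_1-r_2)^{-1} \bmod q$. A sketch of the corresponding $\mathsf{SSExtract}$ (the prime modulus and all values are arbitrary illustrative choices):

```python
# 2-special-soundness extractor for a Schnorr-style protocol; the prime
# modulus below is an arbitrary illustrative choice
q = 2**255 - 19

def ss_extract(tau_pre, transcripts):
    # given two accepting transcripts sharing the prefix tau_pre, recover the witness
    (r1, z1), (r2, z2) = transcripts
    assert r1 != r2
    return ((z1 - z2) * pow(r1 - r2, -1, q)) % q

# sanity check against honestly generated responses z_i = s + r_i * w (mod q)
w, s = 1234567, 7654321
r1, r2 = 11, 42
z1, z2 = (s + r1 * w) % q, (s + r2 * w) % q
assert ss_extract(None, [(r1, z1), (r2, z2)]) == w
```

Note that the extractor here needs the full responses $z_i$; the generalized definitions below relax this so that a function of the response suffices.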
In order to generalize this definition, we consider interactive protocols $(P,V)$ with a ``consistency'' predicate $g: T \times (R \times \{0,1\}^*)^* \rightarrow \{0,1\}$. The argument $\{0,1\}^*$ corresponds to some partial information $y$ about a response $z$. The consistency predicate should have the property that if $g(\tau_{\mathrm{pre}},(r_i,y_i)_{i \in [k]}) =1$, then $g(\tau_{\mathrm{pre}},(r_i,y_i)_{i \in G}) = 1$ for all subsets $G \subseteq [k]$. For any positive integer $k$, we define the set $\mathsf{Consistent}_k$ to be the subset of $T \times (R \times \{0,1\}^*)^k$ on which $g$ outputs $1$. We can extend $k$-special soundness to allow the $\mathsf{SSExtract}$ algorithm to produce a witness given only partial information $y_i$ of the responses $z_i$ provided that the ``partial transcripts'' satisfy consistency.
\begin{definition}[$(k,g)$-special soundness]\label{def:k-g-ss}
An interactive protocol $(P, V)$ is $(k,g)$-special-sound if there exists an efficient extractor $\mathsf{SSExtract}_g: T \times (R \times \{0,1\}^*)^* \rightarrow \{0,1\}^*$ such that given any $(\tau_{\mathrm{pre}},(r_i,y_i)_{i \in [k]}) \in \mathsf{Consistent}_k$ where each $r_i$ is distinct, $\mathsf{SSExtract}_g(\tau_{\mathrm{pre}},(r_i,y_i)_{i \in [k]})$ outputs a valid witness $w$ with probability $1$.
\end{definition}
Notice that every $k$-special-sound protocol is $(k,g)$-special-sound for the ``trivial'' consistency predicate $g$ that simply checks (interpreting $y_i = z_i$ as a full response) whether all the transcripts are accepting.
\begin{claim}
\label{claim:kss-to-kfss}
For any $k$-special-sound protocol $(P,V)$, there exists a consistency predicate $g$ such that $(P,V)$ is $(k,g)$-special-sound.
\end{claim}
\begin{proof}
Define $g$ to output $1$ on input $\tau_{\mathrm{pre}},(r_i,y_i)_{i \in [k]}$ if and only if each $(\tau_{\mathrm{pre}},r_i,y_i)$ is an accepting transcript. It follows that the original $\mathsf{SSExtract}$ in the special soundness definition satisfies the requirements of the $(k, g)$-special soundness definition.
\end{proof}
When the challenge space $R$ is super-polynomial-size, we can generalize this definition even further so that the extractor need not succeed on worst-case $k$-tuples of distinct challenges, but only on $k$-tuples sampled from an ``admissible distribution.''
\begin{definition}[$Q$-admissible distribution]
\label{def:q-admissible-dist}
A distribution $D_k$ over $R^k$ is \emph{$Q$-admissible} if there exists a negligible function ${\rm negl}(\lambda)$ and a sampling procedure $\mathsf{Samp}$ such that $D_k$ is ${\rm negl}(\lambda)$-close to the output distribution of the following process:
\begin{itemize}[nolistsep]
\item $\mathsf{Samp}$ makes, in expectation, $Q(\lambda)$ classical queries to an oracle $O_R$ that outputs a uniformly random challenge $r \gets R$ each time it is queried.
\item $\mathsf{Samp}$ must produce its outputs as follows. Let $Q_{\mathrm{total}}$ be the total number of queries it makes to $O_R$. $\mathsf{Samp}$ specifies a set $\{i_1,\dots,i_k\} \subseteq [Q_{\mathrm{total}}]$, and its output is defined to be $r_{i_1},\dots,r_{i_k}$ where $r_i$ is the $i$th output of the uniform sampling oracle $O_R$.
We stress that $\mathsf{Samp}$ may use an arbitrary (e.g., even inefficient) process to select the set $\{i_1,\dots,i_k\}$. Moreover, the output challenges $r_{i_1},\dots,r_{i_k}$ do not necessarily have distinct values (this can occur if the sampling oracle $O_R$ outputs the same challenge more than once).
\end{itemize}
\end{definition}
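To make the sampling model concrete, the following Python sketch simulates $\mathsf{Samp}$: it queries a uniform oracle $O_R$ some number of times, then outputs $k$ of the responses, chosen purely by index. The names and the example selection rule are hypothetical; the definition permits an arbitrary (even inefficient) selection rule, and the selected challenges need not be distinct.

```python
import random

def sample_admissible(R_size, k, select_indices, num_queries=1000):
    """Simulate Samp: query a uniform oracle O_R, then output k of the
    responses, chosen purely by index (not by value substitution)."""
    # Queries to the uniform challenge oracle O_R.
    responses = [random.randrange(R_size) for _ in range(num_queries)]
    # select_indices may be an arbitrary rule picking a size-k subset
    # of [Q_total]; repeated values can occur if O_R collides.
    idx = select_indices(responses, k)
    assert len(idx) == k and all(0 <= i < len(responses) for i in idx)
    return [responses[i] for i in idx]

# Example (hypothetical) selection rule: indices of the k smallest responses.
def smallest_k(responses, k):
    order = sorted(range(len(responses)), key=lambda i: responses[i])
    return order[:k]

out = sample_admissible(2**32, 3, smallest_k)
```

Because the rule selects indices rather than fabricating values, the marginal distribution of each output challenge is supported on genuine oracle responses, which is the property the extraction arguments below exploit.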
\begin{definition}[admissible distribution]
\label{def:admissible-dist}
A distribution $D_k$ over $R^k$ is \emph{admissible} if there exists $Q = {\rm poly}(\lambda)$ such that $D_k$ is a $Q$-admissible distribution.
\end{definition}
\begin{definition}[$(k,g)$-probabilistic special soundness]\label{def:k-g-pss}
An interactive protocol $(P, V)$ with consistency predicate $g$ is $(k,g)$-probabilistic-special-sound if there exists an efficient extractor $\mathsf{PSSExtract}_g: T \times (R \times \{0,1\}^*)^k \rightarrow \{0,1\}^*$ such that for any distribution $D$ supported on $\mathsf{Consistent}_k$ whose marginal distribution on $R^k$ is admissible,
\[ \Pr_{(\tau_{\mathrm{pre}},(r_i,y_i)_{i \in [k]}) \gets D }[\mathsf{PSSExtract}_g(\tau_{\mathrm{pre}},(r_i,y_i)_{i \in [k]}) \rightarrow w \wedge w\text{ is a valid witness for }x] = 1-{\rm negl}(\lambda).\]
\end{definition}
Note that $(k,g)$-probabilistic special soundness (PSS) is only meaningful when the challenge space $R$ has super-polynomial size. When $|R|$ is polynomial, an admissible distribution $D_k$ can simply output $(r,\dots,r)$ (the same challenge repeated $k$ times), since there exists a $\mathsf{Samp}$ that simply queries $O_R$ until it has received the same challenge $k$ times.
However, when $|R|$ is superpolynomial, $(k,g)$-PSS is a relaxation of $(k,g)$-special soundness.
\begin{claim}
\label{k-g-ss-implies-k-g-pss}
When $|R| = 2^{\omega(\log \lambda)}$, any $(k,g)$-special-sound protocol is also $(k,g)$-probabilistic-special-sound.
\end{claim}
\begin{proof}
It suffices to prove that the probability any admissible distribution outputs the same challenge $r$ more than once is ${\rm negl}(\lambda)$. By the definition of an admissible distribution, its output is ${\rm negl}(\lambda)$-close to the output of an arbitrary sampling algorithm that makes an expected ${\rm poly}(\lambda)$ number of queries to a uniform sampling oracle $O_R$ over $R$, and then outputs a size-$k$ subset of the oracle responses.
Suppose towards a contradiction that there exists a constant $c$ such that for infinitely many $\lambda \in \mathbb{N}$, the sampler outputs a repeated challenge with probability at least $1/\lambda^c$. Let $d$ be a constant such that the expected number of queries to the uniform sampling oracle $O_R$ is $O(\lambda^d)$. If $0 \leq q \leq \lambda^{d+c+1}$ oracle queries have already been made, the probability that the next oracle query produces a collision is at most $\lambda^{d+c+1}/|R|$. This implies that the probability of finding a collision within $\lambda^{d+c+1}$ queries is at most $\lambda^{2d+2c+2}/|R|$. Thus, to find a collision with probability at least $1/\lambda^c$, the number of oracle queries must exceed $\lambda^{d+c+1}$ with probability at least $1/\lambda^c - \lambda^{2d+2c+2}/|R|$, which implies that the expected number of oracle queries is at least $\lambda^{d+c+1}(1/\lambda^c - \lambda^{2d+2c+2}/|R|) = \lambda^{d+1} - \lambda^{3d+3c+3}/|R|$. Since $|R| = 2^{\omega(\log \lambda)}$, there exists a constant $\lambda_0$ such that for all $\lambda > \lambda_0$, this expectation is $\lambda^{d+1} - \lambda^{3d+3c+3}/|R| > \lambda^{d+1} - 1$. This contradicts our assumption that the expected number of queries to the sampling oracle is $O(\lambda^d)$.
\end{proof}
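The collision argument above is a birthday-style bound: among $q$ uniform draws from $R$, a union bound over pairs gives a repeat probability of at most $q^2/|R|$, which is negligible when $q = {\rm poly}(\lambda)$ and $|R| = 2^{\omega(\log \lambda)}$. A numeric sanity check (illustrative parameters only):

```python
def collision_upper_bound(q, R_size):
    """Union bound over pairs: Pr[some repeat among q uniform draws] <= q^2/|R|."""
    return q * q / R_size

def collision_exact(q, R_size):
    """Exact probability of at least one repeat among q uniform draws."""
    p_distinct = 1.0
    for i in range(q):
        p_distinct *= (R_size - i) / R_size
    return 1.0 - p_distinct

# With |R| super-polynomial (here 2^64) and polynomially many queries,
# the collision probability is tiny.
q, R_size = 10**6, 2**64
assert collision_exact(q, R_size) <= collision_upper_bound(q, R_size) < 1e-6
```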
\subsection{A Special Soundness Parallel Repetition Theorem}
Although it is well-known that $2$-special soundness is preserved under parallel repetition, the situation is more complicated for generalized special soundness notions (and even $k$-special soundness for larger values of $k$). We state and prove a useful theorem about the parallel repetition of special sound protocols.
\begin{lemma}
\label{lemma:parallel-repetition}
If $\Sigma = (P,V)$ is a $(k,g)$-special-sound protocol, then the $t=\Omega(k^2\log^2(\lambda))$-fold parallel repetition $\Sigma^t$ is $(k^2,g^t)$-probabilistic special sound, where $g^t$ outputs $1$ if and only if (1) the arguments $y_i$ consist of $t$ formally separated components, and (2) $g$ outputs $1$ on each of the $t$ components.
\end{lemma}
\begin{proof}
Let $\mathsf{Consistent}_{k^2}$ be the set of $k^2$-tuples of shared-prefix partial transcripts of $\Sigma^t$ on which $g^t$ outputs $1$. Let $D$ be a distribution supported on $\mathsf{Consistent}_{k^2}$ whose marginal distribution on $(R^t)^{k^2}$ is admissible.
We construct $\mathsf{PSSExtract}_{g^t}$ for $\Sigma^t$ that takes as input
\[(\tau_{\mathrm{pre},j})_{j \in [t]},((r_{j,i})_{j \in [t]},(y_{j,i})_{j \in [t]})_{i \in [k^2]} \gets D\]
and does the following:
\begin{enumerate}
\item Look for $j \in [t]$ such that $\{r_{j,i}\}_{i \in [k^2]}$ consists of $k$ distinct challenges. If no such $j$ exists, abort and output $\bot$.
\item If such a $j$ exists, let $H$ be a size-$k$ subset of $[k^2]$ such that $\{r_{j,i}\}_{i \in H}$ consists of $k$ distinct challenges, and let $\mathsf{SSExtract}_g$ be the $(k,g)$-special-soundness extractor for $\Sigma$. Run $\mathsf{SSExtract}_g( \tau_{\mathrm{pre},j},(r_{j,i},y_{j,i})_{i \in H}) \rightarrow w$ and output $w$.
\end{enumerate}
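The challenge-selection step of this extractor is purely combinatorial and can be sketched directly (hypothetical names; challenges are indexed as $r_{j,i}$ for slot $j \in [t]$ and transcript $i \in [k^2]$):

```python
def find_distinct_slot(challenges, k):
    """Given challenges[j][i] for j in [t], i in [k^2], find a slot j whose
    challenges contain at least k distinct values; return (j, indices of k
    pairwise-distinct challenges), or None (the extractor aborts)."""
    for j, row in enumerate(challenges):
        seen = {}  # value -> first index where it appears
        for i, r in enumerate(row):
            if r not in seen:
                seen[r] = i
            if len(seen) == k:
                return j, sorted(seen.values())
    return None

# Slot 0 has only 2 distinct values; slot 1 reaches 3 distinct values (k = 3).
challenges = [
    [5, 5, 7, 7, 5, 7, 5, 7, 5],
    [1, 1, 2, 2, 3, 1, 2, 3, 1],
]
assert find_distinct_slot(challenges, 3) == (1, [0, 2, 4])
```

Once a suitable slot $j$ and index set $H$ are found, the $(k,g)$-special-soundness extractor for the base protocol is run on that slot's partial transcripts.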
First, we note that \[g(\tau_{\mathrm{pre},j},(r_{j,i},y_{j,i})_{i \in H}) = 1\]
follows from
\[g^t((\tau_{\mathrm{pre},j})_{j \in [t]},((r_{j,i})_{j \in [t]},(y_{j,i})_{j \in [t]})_{i \in [k^2]}) =1.\]
Thus, it suffices to prove that this extractor aborts with probability ${\rm negl}(\lambda)$. Define $\mathsf{BAD} \subset R^{tk^2}$ to be the set of all $tk^2$-tuples $(r_{j,i})_{j \in [t], i \in [k^2]}$ such that for all $j \in [t]$, the $k^2$-tuple $(r_{j,i})_{i \in [k^2]}$ does not contain $k$ distinct challenges.
Suppose $(r_{j,i})_{j \in [t], i \in [k^2]}$ is sampled uniformly at random from $R^{tk^2}$. Then we have
\[ \Pr_{(r_{j,i})_{j \in [t], i \in [k^2]} \gets R^{tk^2}}[(r_{j,i})_{j \in [t], i \in [k^2]} \in \mathsf{BAD}] \leq \left(\frac{k}{e^k}\right)^t .\]
This follows from the fact that for any fixed $j$, the probability that $(r_{j,i})_{i \in [k^2]}$ does \emph{not} contain $k$ distinct challenges is at most $k((k-1)/k)^{k^2} \leq k/e^k$.
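The inequality $k\left(\frac{k-1}{k}\right)^{k^2} \leq k/e^k$ used here follows from $(1-1/k)^k \leq e^{-1}$; a numeric check over small $k$ (illustrative only):

```python
import math

# Verify k * ((k-1)/k)^(k^2) <= k * e^(-k) for a range of k,
# since ((k-1)/k)^(k^2) = ((1 - 1/k)^k)^k <= (1/e)^k.
for k in range(2, 50):
    lhs = k * ((k - 1) / k) ** (k * k)
    assert lhs <= k * math.exp(-k) + 1e-12
```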
By the definition of an admissible distribution (\cref{def:admissible-dist}), the marginal distribution of $D$ on $(R^t)^{k^2}$ is the result of the following process (up to ${\rm negl}(\lambda)$ statistical distance): make an expected ${\rm poly}(\lambda)$ number of classical queries to a uniform sampling oracle $O_{R^t}$ over $R^t$, receiving a set of challenges $A$, and then (using an arbitrary procedure) output a size-$k^2$ subset $\{(r_{1,1},\dots,r_{t,1}),\dots,(r_{1,k^2},\dots,r_{t,k^2})\}$ of $A$. The extractor aborts if $(r_{j,i})_{j \in [t], i \in [k^2]} \in \mathsf{BAD}$.
Let $d$ be a constant such that the expected number of queries to the uniform sampling oracle $O_{R^t}$ is $O(\lambda^d)$. Suppose towards a contradiction that the extractor aborts with non-negligible probability, i.e., there exists a constant $c$ such that for infinitely many $\lambda \in \mathbb{N}$, the extractor aborts with probability at least $1/\lambda^c$. If $0 \leq q \leq \lambda^{d+c+1}$ oracle queries have already been made, the probability that the next oracle query allows finding a size-$k^2$ subset of outputs in $\mathsf{BAD}$ is at most
\begin{align*}
(\lambda^{d+c+1})^{k^2}\left(\frac{k}{e^k}\right)^{k^2\log^2(\lambda)}.
\end{align*}
Moreover, there exists a constant $\lambda_0$ such that for all $\lambda > \lambda_0$, this can be upper bounded as
\begin{align*}
(\lambda^{d+c+1})^{k^2}\left(\frac{k}{e^k}\right)^{k^2\log^2(\lambda)} < \left(\frac{2^{k^2 + k^2 \log (k)} }{e^{k^3}}\right)^{\log^2(\lambda)} < (1/2)^{\log^2(\lambda)} = 1/\lambda^{\log (\lambda)}.
\end{align*}
Thus for all $\lambda > \lambda_0$, the probability of finding a size-$k^2$ subset of oracle outputs in $\mathsf{BAD}$ within $\lambda^{d+c+1}$ oracle queries is at most $\lambda^{d+c+1} /\lambda^{\log(\lambda)}$; this implies that finding a size-$k^2$ subset of oracle outputs in $\mathsf{BAD}$ with probability $1/\lambda^c$ requires making at least $\lambda^{d+c+1}$ oracle queries with probability at least $1/\lambda^c - \lambda^{d+c+1}/\lambda^{\log(\lambda)}$. Then for $\lambda > \lambda_0$, the expected number of queries is at least $(\lambda^{d+c+1})(1/\lambda^c - \lambda^{d+c+1}/\lambda^{\log(\lambda)}) = \lambda^{d+1} - \lambda^{2d+2c+2}/\lambda^{\log(\lambda)}$. Since $c$ and $d$ are constants, there exists $\lambda_0'$ such that for $\lambda > \lambda_0'$, the expected number of queries is at least $\lambda^{d+1}-1$. This contradicts our assumption that the expected number of queries to the uniform sampling oracle is $O(\lambda^d)$. \qedhere
\end{proof}
\subsection{Examples of Probabilistic Special Sound Protocols}\label{sec:examples}
We now show that many classical interactive proofs-of-knowledge (or arguments-of-knowledge) satisfy probabilistic special soundness. It was already noted above that (parallel repetitions of) standard special sound protocols satisfy the notion. Here, we highlight three other cases: commit-and-open protocols (where $g$ is only given partial transcripts), Kilian's protocol, and a subroutine of the \cite{FOCS:GolMicWig86} graph non-isomorphism protocol.
\subsubsection{The ``one-out-of-two'' graph isomorphism subroutine}
In order to prove \cref{thm:szk}, we consider the following proof-of-knowledge subroutine of the \cite{FOCS:GolMicWig86} graph non-isomorphism protocol:
\begin{itemize}
\item The subroutine instance is three graphs $G_0, G_1, H$. The prover\footnote{The \cite{FOCS:GolMicWig86} verifier acts as the prover in this subroutine.} wants to prove that there exists a bit $b$ such that $G_b$ is isomorphic to $H$. To do so, they execute a parallel repetition of the following protocol.
\item The prover picks random permutations $\sigma_0, \sigma_1$ and a random bit $c$, and sends $(H_0 = \sigma_0 (G_c),H_1= \sigma_1(G_{1-c}))$ to the verifier.
\item The verifier sends a random bit $r$.
\item If $r=0$, the prover sends $(c, \sigma_0, \sigma_1)$ and the verifier checks that $(H_0 = \sigma_0 (G_c),H_1= \sigma_1(G_{1-c}))$ was computed correctly.
\item If $r=1$, the prover sends $(c\oplus b, \sigma_{c\oplus b}\pi)$, where $\pi$ is an isomorphism mapping $H$ to $G_b$. The verifier then checks that $(\sigma_{c\oplus b}\pi) H = H_{c\oplus b}$.
\end{itemize}
In the classical setting, this is generally viewed as a proof of knowledge of $(b, \pi)$. However, we consider it as a proof of knowledge of the bit $b$, in the situation where $G_0$ and $G_1$ are not isomorphic. We will formalize this in \emph{two} different ways: first by showing that the protocol is $(2, g)$-special sound for a natural consistency predicate $g$, and then by showing that it is $(2, g')$-PSS for a more complicated predicate $g'$, which we need in order to be compatible with the protocol's limited partial collapsing property.
First, we define an (inefficient) consistency predicate $g$, which is given as input $\tau_{\mathrm{pre}}$ and an arbitrary number of pairs $(\mathbf{r}, \mathbf{c}) \in \{0,1\}^{\lambda}\times \{0,1\}^\lambda$ (rejecting if the input is not of this form). $g$ outputs $1$ if the following conditions hold for every input pair and all $\ell \in [\lambda]$:
\begin{itemize}
\item If $r_\ell = 0$, the graphs $(H_{0,\ell}, H_{1,\ell})$ are isomorphic to $(G_{c_\ell}, G_{1-c_\ell})$.
\item If $r_\ell = 1$, the graph $H_{{c_\ell}, \ell}$ is isomorphic to $H$.
\end{itemize}
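The predicate $g$ can be sketched as follows (a minimal Python rendering; the helper \texttt{is\_isomorphic} is a hypothetical graph-isomorphism test, which is fine since $g$ need not be efficient, and graph representations are left abstract):

```python
def consistency_g(G0, G1, H, H0, H1, pairs, is_isomorphic):
    """Sketch of the (inefficient) predicate g. tau_pre is modeled by the
    committed graph lists H0[l], H1[l]; `pairs` is any number of (r, c)
    challenge/response bit-vector pairs."""
    for (r, c) in pairs:
        for l in range(len(r)):
            if r[l] == 0:
                # (H0[l], H1[l]) must be isomorphic to (G_{c_l}, G_{1-c_l}).
                G_c, G_nc = (G0, G1) if c[l] == 0 else (G1, G0)
                if not (is_isomorphic(H0[l], G_c) and is_isomorphic(H1[l], G_nc)):
                    return 0
            else:
                # H_{c_l, l} must be isomorphic to H.
                if not is_isomorphic(H0[l] if c[l] == 0 else H1[l], H):
                    return 0
    return 1

# Toy stand-in: "graphs" are labels and "isomorphism" is equality.
iso = lambda a, b: a == b
assert consistency_g("g0", "g1", "g0", ["g0"], ["g1"], [([0], [0])], iso) == 1
assert consistency_g("g0", "g1", "g0", ["g1"], ["g0"], [([0], [0])], iso) == 0
```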
The following claim then holds immediately by transitivity of graph isomorphism. The extractor, given $(\mathbf{r}, \mathbf{c})$ and $(\mathbf{r}', \mathbf{c}')$, simply chooses an $\ell$ such that $r_\ell \neq r'_\ell$ and outputs $b = c_\ell \oplus c'_\ell$.
\begin{claim}
If $G_0$ and $G_1$ are not isomorphic, then the \cite{FOCS:GolMicWig86} subroutine satisfies $(2, g)$-special soundness, where the extractor outputs the bit $b$.
\end{claim}
Finally, we define the predicate $g'$ to be a slight modification of $g$: for the \emph{first} pair $(r^{(1)}, c^{(1)})$, $g'$ ignores\footnote{Alternatively, we could define $g'$ to require inputs with these $c^{(1)}_\ell$ omitted.} the bits $c^{(1)}_\ell$ for $\ell$ such that $r^{(1)}_\ell = 1$. The protocol is then not $(2, g')$-special sound (e.g., a first transcript with $r^{(1)} = 1^\lambda$ would provide no information), but it \emph{is} $(2, g')$-PSS.
\begin{claim}
\label{claim:gni-pss}
If $G_0$ and $G_1$ are not isomorphic, then the \cite{FOCS:GolMicWig86} subroutine satisfies $(2, g')$-PSS, where the extractor outputs the bit $b$.
\end{claim}
\begin{proof}
This follows from the claim that if $(r^{(1)}, r^{(2)})\in \{0,1\}^\lambda \times \{0,1\}^{\lambda}$ is sampled according to an admissible distribution, then with all but ${\rm negl}(\lambda)$ probability, there exists an index $\ell$ such that $r^{(1)}_\ell = 0$ and $r^{(2)}_\ell = 1$. This can be argued using the same reasoning as in the proof of~\cref{k-g-ss-implies-k-g-pss}, since the probability that two uniformly random $\lambda$-bit strings $r^{(1)}$ and $r^{(2)}$ do not have an index $\ell \in [\lambda]$ such that $r^{(1)}_\ell = 0$ and $r^{(2)}_\ell = 1$ is ${\rm negl}(\lambda)$.
\end{proof}
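For two uniform $\lambda$-bit strings, a fixed index $\ell$ fails to satisfy $(r^{(1)}_\ell, r^{(2)}_\ell) = (0,1)$ with probability $3/4$, and the indices are independent, so all $\lambda$ indices fail with probability exactly $(3/4)^\lambda = {\rm negl}(\lambda)$. A brute-force check for small $\lambda$ (illustrative only):

```python
from itertools import product

def fail_prob_exact(lam):
    """Exact probability (by enumeration over all pairs of lambda-bit
    strings) that no index l has r1[l] = 0 and r2[l] = 1."""
    total = bad = 0
    for r1 in product([0, 1], repeat=lam):
        for r2 in product([0, 1], repeat=lam):
            total += 1
            if not any(a == 0 and b == 1 for a, b in zip(r1, r2)):
                bad += 1
    return bad / total

for lam in range(1, 8):
    assert abs(fail_prob_exact(lam) - 0.75 ** lam) < 1e-12
```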
\subsubsection{Commit-and-Open Protocols}
The next class of examples we discuss is that of \emph{commit-and-open protocols}. In particular, we are interested in characterizing a special soundness property where the extractor is only given the opened \emph{messages} in the prover's response (and not their openings).
\begin{definition}\label{def:commit-and-open}
Let $\mathsf{Com}$ denote a (possibly keyed) non-interactive commitment scheme. A commit-and-open protocol is a (3 or 4 message) protocol for an $\mathsf{NP}$ language $L$ of the following form:
\begin{itemize}
\item (Optional first verifier message) If $\mathsf{Com}$ is keyed, the verifier samples and sends the commitment key $\mathsf{ck}$ for $\mathsf{Com}$.
\item The prover, given a witness $w$ for some statement $x\in L$, computes a string $y \in \{0,1\}^N$ and sends a bitwise commitment $a = \mathsf{Com}(\mathsf{ck}, y)$ to the verifier.
\item The verifier samples a string $r$ that encodes a subset $S\subset [N]$ and sends $r$ to the prover.
\item The prover sends openings to $\{y_i\}_{i\in S}$.
\item The verifier checks that each opening to $y_i$ (for $i\in S$) is valid and then computes some function $\Check(y_S)$ on the opened bits.
\end{itemize}
We say that such a protocol satisfies ``commit-and-open $k$-special soundness'' if there exists an extractor $\mathsf{Extract}(x, y)$ satisfying the following property. For every instance $x$, every collection of $k$ \emph{distinct} sets $S_1, \hdots, S_k$ (represented by strings $(r_1, \hdots, r_k)$), and \emph{any} string $y$ such that $\Check(y_{S_i}) = 1$ for all $i$, $w= \mathsf{Extract}(x, y)$ is a valid $\mathsf{NP}$-witness for $x$.
\end{definition}
It is not hard to see that the ``commit-and-open'' $k$-special soundness property, combined with the (computational/statistical) binding of the commitment scheme, implies a standard (computational/statistical) $k$-special soundness property of the $\Sigma$-protocol. However, we consider ``commit-and-open $k$-special soundness'' explicitly in order to satisfy (probabilistic) special soundness with respect to \emph{partial} transcripts.
This definition captures extremely common $\Sigma$-protocols, such as:
\begin{itemize}
\item The \cite{FOCS:GolMicWig86} $\Sigma$-protocol for $3$-coloring.
\item A slight variant of the \cite{Blum86} $\Sigma$-protocol for Hamiltonicity\footnote{In this variant, in addition to committing to a permuted graph $\pi(G)$, the prover commits to the permutation $\pi$ and the permuted cycle $\pi \circ \sigma$. On the $0$ challenge, the prover additionally opens the commitment to $\pi$, and on the $1$ challenge, the prover additionally opens the commitment to $\pi \circ \sigma$.}
\item Protocols following the ``MPC-in-the-head'' paradigm \cite{STOC:IKOS07}.
\end{itemize}
To view this in terms of generalized $k$-special soundness, define a consistency predicate $g$ as follows: on input $(\tau_{\mathrm{pre}},(r_i,\{m_{i,\ell}\}_{\ell \in S_i})_{i \in [k]})$, output $1$ if and only if
\begin{itemize}
\item For any pair of sets $S_i, S_j$ (corresponding to challenges $r_i, r_j$), for any $\ell \in S_i \cap S_j$, we have $m_{i,\ell} = m_{j,\ell}$. That is, the ``opened" message subsets are mutually consistent.
\item For all $i\in [k]$, $\Check(\{m_{i, \ell}\}_{\ell \in S_i}) = 1$.
\end{itemize}
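This predicate can be sketched concretely (hypothetical Python names; \texttt{check} stands for the protocol's $\Check$ function on an opened subset):

```python
def commit_and_open_g(openings, check):
    """Sketch of the consistency predicate for commit-and-open protocols.
    `openings` is a list of dicts {position l: opened message m_l}, one per
    challenge. Output 1 iff overlapping openings agree and every opened
    subset passes the verifier's check."""
    for i in range(len(openings)):
        for j in range(i + 1, len(openings)):
            # Opened message subsets must be mutually consistent.
            for l in openings[i].keys() & openings[j].keys():
                if openings[i][l] != openings[j][l]:
                    return 0
    return int(all(check(op) for op in openings))

# Toy Check: all opened bits in the queried subset are equal.
check = lambda op: len(set(op.values())) <= 1
assert commit_and_open_g([{0: 1, 1: 1}, {1: 1, 2: 1}], check) == 1
assert commit_and_open_g([{0: 1, 1: 1}, {1: 0, 2: 0}], check) == 0
```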
With this formalism in place, the following claim is immediate.
\begin{claim}
Any protocol satisfying commit-and-open $k$-special soundness (as described in \cref{def:commit-and-open}) is $(k, g)$-special sound.
\end{claim}
\subsubsection{Kilian's Protocol}\label{sec:kilian}
We briefly recall Kilian's protocol~\cite{STOC:Kilian92} instantiated with a collapsing hash function:
\begin{enumerate}
\item The verifier samples a collapsing hash function $h \gets H_{\lambda}$ and sends $h$ to the prover.
\item Let $h_{\mathrm{Merkle}}$ be the Merkle hash function corresponding to $h$. The prover uses $w$ to compute a PCP $\pi$, and then sends $\mathsf{rt} = h_{\mathrm{Merkle}}(\pi)$ to the verifier.
\item The verifier samples random coins $r$ and sends them to the prover.
\item The prover computes the set of the PCP indices $q_r$ that the PCP verifier with randomness $r$ would check. It sends the corresponding values $\pi[q_r]$ along with the Merkle openings of $\mathsf{rt}$ on the positions $q_r$.
\item Finally, the verifier accepts if all the Merkle openings are valid and $V_{\mathrm{PCP},x}(r,\pi[q_r]) =1$, i.e., the PCP verifier with randomness $r$ accepts $\pi[q_r]$.
\end{enumerate}
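The Merkle structure underlying steps 2 and 4 can be sketched as follows (a minimal binary Merkle tree over SHA-256, for illustration; the protocol itself uses a collapsing hash family $H_\lambda$, and names here are hypothetical):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Merkle-hash a power-of-two list of byte-string leaves."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_open(leaves, idx):
    """Return the authentication path (sibling hashes) for leaves[idx]."""
    level, path = [h(x) for x in leaves], []
    while len(level) > 1:
        path.append(level[idx ^ 1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def merkle_verify(root, idx, leaf, path):
    """Recompute the root from a claimed leaf and its authentication path."""
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if idx % 2 == 0 else h(sib + node)
        idx //= 2
    return node == root

pi = [b"a", b"b", b"c", b"d"]
rt = merkle_root(pi)
assert merkle_verify(rt, 2, b"c", merkle_open(pi, 2))
assert not merkle_verify(rt, 2, b"x", merkle_open(pi, 2))
```

In Kilian's protocol, $\mathsf{rt}$ plays the role of \texttt{rt} above, and the prover's third-round message consists of the values $\pi[q_r]$ together with their authentication paths.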
We will instantiate Kilian's protocol with a PCP of knowledge, defined as follows. Let $\mathsf{WIN}_{\mathrm{PCP},x}(\pi)$ denote the probability that $\pi$ is accepted by the PCP verifier.
\begin{definition}[PCP of Knowledge]
A PCP has knowledge error $\kappa_{\mathrm{PCP}}(\lambda)$ if there is an extractor $\mathsf{E}_{\mathrm{PCP}}$ such that given any PCP $\pi$ where $\mathsf{WIN}_{\mathrm{PCP},x}(\pi) > \kappa_{\mathrm{PCP}}$, the extractor $\mathsf{E}_{\mathrm{PCP}}(\pi) \rightarrow w$ outputs a valid witness $w$ for $x$ with probability $1$.
\end{definition}
The following claim is due to~\cite{FOCS:CMSZ21}, though we have slightly rewritten it to match our definition of $(k,g)$-PSS.
\begin{claim}
\label{claim:kilian-pss}
Kilian's protocol instantiated with a PCP with knowledge error $\kappa_{\mathrm{PCP}}(\lambda) = {\rm negl}(\lambda)$, proof length $\ell(\lambda)$, and alphabet $\Sigma(\lambda)$ is $(k,g)$-PSS, where $k = \ell \log(|\Sigma|)$ and the consistency function $g$ outputs $1$ on $(\tau_{\mathrm{pre}},(r_i,z_i)_{i \in [k]})$ if (1) for each $i$, the response $z_i$ contains PCP answers $\pi[q_{r_i}]$ such that $V_{\mathrm{PCP}}(x,r_i,\pi[q_{r_i}]) = 1$, and (2) for every $i \neq i'$ the answers $\pi[q_{r_i}]$ and $\pi[q_{r_{i'}}]$ agree on all indices in $q_{r_i} \cap q_{r_{i'}}$.
\end{claim}
\begin{proof}
Our extractor $\mathsf{PSSExtract}_g$ takes as input $(\tau_{\mathrm{pre}},(r_i,z_i)_{i \in [k]})$ and generates a witness as follows:
\begin{enumerate}
\item\label[step]{step:pcp} Generate a PCP string $\pi \in \Sigma^{\ell}$ as follows. For each $t \in [\ell]$, check if $t \in q_{r_i}$ for any $i$. If so, pick such an $i$ arbitrarily and set $\pi[t]$ according to the value specified in $z_i$ (the choice of $i$ does not matter since the input satisfies consistency with respect to $g$). If there is no such $i$, set $\pi[t]$ arbitrarily.
\item Run $\mathsf{E}_{\mathrm{PCP}}(\pi) \rightarrow w$ and output $w$.
\end{enumerate}
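The PCP-assembly step of this extractor can be sketched directly (hypothetical names; \texttt{answers[i]} maps each queried index in $q_{r_i}$ to its opened symbol, and consistency with respect to $g$ guarantees overlapping answers agree):

```python
def assemble_pcp(ell, answers, default="0"):
    """Reconstruct a PCP string of length ell from mutually consistent
    partial answers. answers is a list of dicts {pcp index: symbol};
    unqueried positions are filled arbitrarily (here with `default`)."""
    pi = [default] * ell
    for t in range(ell):
        for ans in answers:
            if t in ans:
                # The choice of answer set does not matter: consistency
                # w.r.t. g forces agreement on overlapping indices.
                pi[t] = ans[t]
                break
    return "".join(pi)

answers = [{0: "1", 2: "0"}, {2: "0", 3: "1"}]
assert assemble_pcp(5, answers) == "10010"
```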
We prove that~\cref{step:pcp} constructs a PCP $\pi$ where $\mathsf{WIN}_{\mathrm{PCP},x}(\pi) > \kappa_{\mathrm{PCP}}$ with $1-{\rm negl}(\lambda)$ probability whenever $(\tau_{\mathrm{pre}},(r_i,z_i)_{i \in [k]})$ is sampled from a distribution supported on $\mathsf{Consistent}_k$ (i.e., the subset of $T \times (R \times Z)^k$ where $g$ outputs $1$) whose marginal distribution on $R^k$ is admissible.
It suffices to prove that if $(r_1,\dots,r_k)$ are output by $\mathsf{Samp}$ (where $\mathsf{Samp}$ makes an expected ${\rm poly}(\lambda)$ number of queries to a uniform sampling oracle $O_R$ and then outputs a size-$k$ subset of the outputs of $O_R$), then the probability that there \emph{exists} $\pi \in \Sigma^\ell$ such that (1) $\mathsf{WIN}_{\mathrm{PCP},x}(\pi) \leq \kappa_{\mathrm{PCP}}$ and (2) $V_{\mathrm{PCP},x}(r_i,\pi[q_{r_i}]) = 1$ for all $i \in [k]$ is ${\rm negl}(\lambda)$. This follows by invoking the definition of an admissible distribution, and observing that any $\pi$ resulting from~\cref{step:pcp} satisfies (2) by construction, which means that $\mathsf{WIN}_{\mathrm{PCP},x}(\pi) > \kappa_{\mathrm{PCP}}$ with probability $1-{\rm negl}(\lambda)$.
Let $d$ be a constant such that the expected number of queries that $\mathsf{Samp}$ makes to the sampling oracle $O_R$ is at most $\lambda^d$. Suppose towards contradiction that there exists a constant $c$ such that for infinitely many $\lambda$, with probability at least $1/\lambda^c$, $\mathsf{Samp}$ outputs $(r_1,\dots,r_k)$ such that there \emph{exists} $\pi \in \Sigma^\ell$ satisfying conditions (1) and (2) above. By Markov's inequality, the probability that $\mathsf{Samp}$ makes $2\lambda^{c+d}$ or more queries to its sampling oracle $O_R$ is at most $1/(2\lambda^c)$. This means that even if $\mathsf{Samp}$ is restricted to making at most $2\lambda^{c+d}$ queries to its sampling oracle, it still succeeds with probability at least $1/(2\lambda^c)$ for infinitely many $\lambda$.
Consider any fixed PCP $\pi$ such that $\mathsf{WIN}_{\mathrm{PCP},x}(\pi) \leq \kappa_{\mathrm{PCP}}$. The probability that the PCP is accepting on at least $k$ challenges out of $2\lambda^{c+d}$ uniformly random challenges is at most
\[\kappa_{\mathrm{PCP}}^k \cdot \binom{2\lambda^{c+d}}{k} \leq \kappa_{\mathrm{PCP}}^k (2\lambda^{c+d})^k.\]
By taking a union bound over all $\pi \in \Sigma^\ell$ we conclude that given $2\lambda^{c+d}$ uniformly random challenges, the probability there \emph{exists} a PCP $\pi$ such that $\mathsf{WIN}_{\mathrm{PCP},x}(\pi) \leq \kappa_{\mathrm{PCP}}$ and $\pi$ is accepting on at least $k$ of the $2\lambda^{c+d}$ challenges is at most
\[ |\Sigma|^\ell \kappa_{\mathrm{PCP}}^k (2\lambda^{c+d})^k = (|\Sigma|\cdot(2\lambda^{c+d} \kappa_{\mathrm{PCP}})^{\log(|\Sigma|)})^\ell, \]
where we have plugged in $k = \ell \log(|\Sigma|)$. Since $\kappa_{\mathrm{PCP}} = {\rm negl}(\lambda)$, there exists $\lambda_0$ such that $2\lambda^{c+d} \kappa_{\mathrm{PCP}} < \frac{1}{4}$ for all $\lambda > \lambda_0$. Then for all $\lambda > \lambda_0$, we have
\[ (|\Sigma|\cdot(2\lambda^{c+d} \kappa_{\mathrm{PCP}})^{\log(|\Sigma|)})^\ell < \frac{1}{|\Sigma|^\ell} \]
Since the PCP alphabet size is at least $|\Sigma| \geq 2$ and the PCP length is at least $\ell \geq \lambda$, the probability that $\mathsf{Samp}$ succeeds when restricted to making at most $2\lambda^{c+d}$ queries to $O_R$ is at most $O(1/2^\lambda)$, which is a contradiction.
\end{proof}
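The rewriting $|\Sigma|^\ell \kappa_{\mathrm{PCP}}^k (2\lambda^{c+d})^k = (|\Sigma|\cdot(2\lambda^{c+d} \kappa_{\mathrm{PCP}})^{\log(|\Sigma|)})^\ell$ with $k = \ell\log(|\Sigma|)$ is a direct exponent manipulation; a numeric spot check of both sides in log scale (illustrative parameters only):

```python
import math

# Compare log2 of both sides of
#   |Sigma|^ell * kappa^k * q^k == (|Sigma| * (q * kappa)^{log2 |Sigma|})^ell
# where k = ell * log2(|Sigma|) and q plays the role of 2*lambda^{c+d}.
def log_lhs(sigma, ell, kappa, q):
    k = ell * math.log2(sigma)
    return ell * math.log2(sigma) + k * math.log2(kappa) + k * math.log2(q)

def log_rhs(sigma, ell, kappa, q):
    return ell * (math.log2(sigma) + math.log2(sigma) * math.log2(q * kappa))

sigma, ell, kappa, q = 16, 100, 2.0 ** -40, 2.0 ** 20
assert abs(log_lhs(sigma, ell, kappa, q) - log_rhs(sigma, ell, kappa, q)) < 1e-6
# With q * kappa < 1/4, the bound indeed falls below 1/|Sigma|^ell:
assert log_rhs(sigma, ell, kappa, q) < -ell * math.log2(sigma)
```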
\section{Singular Vector Algorithms}
In this section we give algorithms for working with states that are singular vectors of a matrix $\BProj{\MeasA} \BProj{\MeasB}$, where $\BProj{\MeasA},\BProj{\MeasB}$ are projectors. In \cref{sec:vrsvt} we give an algorithm that transforms left singular vectors to right singular vectors with negligible error. The runtime of the algorithm depends on the corresponding singular value.
\paragraph{Notation.}
Throughout this section we will consider the interaction between two binary projective measurements $\mathsf{A} = \BMeas{\BProj{\MeasA}}, \mathsf{B} = \BMeas{\BProj{\MeasB}}$.
We consider the matrix $\BProj{\MeasA} \BProj{\MeasB}$ and its singular value decomposition $V \Sigma W^{\dagger}$. Recall that $V,W$ are unitary and $\Sigma$ is a diagonal matrix. The columns of $V$ (resp. $W$) are the left (resp. right) singular vectors of $\BProj{\MeasA} \BProj{\MeasB}$, and the entries on the diagonal of $\Sigma$ are the singular values $s_j$. Note that the singular value decomposition is not in general unique; for the purposes of this section we fix one arbitrarily.
We denote left (resp. right) singular vectors of $\BProj{\MeasA} \BProj{\MeasB}$ with $s_j > 0$ by $\JorKetA{j}{1}$ (resp. $\JorKetB{j}{1}$). Define $\RegS_j \coloneqq \spanset(\JorKetA{j}{1}, \JorKetB{j}{1})$. If $s_j < 1$, then $\RegS_j$ is two-dimensional. The $\RegS_j$ correspond to the Jordan subspaces of $(\BProj{\MeasA},\BProj{\MeasB})$. As such, we also have $\JorKetA{j}{0},\JorKetB{j}{0} \in \RegS_j$. A straightforward calculation shows that these are left and right singular vectors of $(\mathbf{I}-\BProj{\MeasA})(\mathbf{I}-\BProj{\MeasB})$ with singular value $s_j$. The Jordan subspace values $p_j$ are the squares of the corresponding singular values. In our setting it is more natural to use the squares (since they correspond to probabilities), and so the guarantees in this section are stated with respect to the squared singular values.
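The claim that $\JorKetA{j}{0},\JorKetB{j}{0}$ are singular vectors of $(\mathbf{I}-\BProj{\MeasA})(\mathbf{I}-\BProj{\MeasB})$ with the same singular value can be checked concretely in a single two-dimensional Jordan block. The sketch below (plain-Python $2\times 2$ arithmetic, angle $\theta$ chosen arbitrarily) verifies that $\BProj{\MeasA}\BProj{\MeasB}$ and $(\mathbf{I}-\BProj{\MeasA})(\mathbf{I}-\BProj{\MeasB})$ share the singular value $s_j = |\cos\theta|$:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def top_singular_value(M):
    # Singular values of a 2x2 real M via eigenvalues of M^T M.
    T = matmul([[M[0][0], M[1][0]], [M[0][1], M[1][1]]], M)
    tr = T[0][0] + T[1][1]
    det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return math.sqrt((tr + disc) / 2)

theta = 0.7
c, s = math.cos(theta), math.sin(theta)
PA = [[1.0, 0.0], [0.0, 0.0]]          # projector onto |0>
PB = [[c * c, c * s], [c * s, s * s]]  # projector onto cos|0> + sin|1>
comp = lambda P: [[(1.0 if i == j else 0.0) - P[i][j] for j in range(2)]
                  for i in range(2)]

s_AB = top_singular_value(matmul(PA, PB))
s_compl = top_singular_value(matmul(comp(PA), comp(PB)))
assert abs(s_AB - abs(c)) < 1e-9
assert abs(s_AB - s_compl) < 1e-9
```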
\subsection{Fixed-Runtime Algorithms}
In this section we recall a selection of algorithms for manipulating singular vectors of $\BProj{\MeasA} \BProj{\MeasB}$. All of these algorithms make black-box use of $U_{\mathsf{A}},U_{\mathsf{B}}$; we consider their complexity as circuits with $U_{\mathsf{A}},U_{\mathsf{B}}$ gates. All of these algorithms take as input some threshold $a \in (0,1]$, such that their correctness guarantee will hold for singular vectors of value at least $a$, and their running time is linear in $1/a$.
The first algorithm $\mathsf{Transform}$ implements a fixed-runtime singular vector transformation, taking left singular vectors to their corresponding right singular vectors.
\begin{theorem}[Singular vector transformation \cite{STOC:GSLW19}]
\label{thm:svt}
There is a uniform family of circuits $\{ \mathsf{Transform}_{a,\delta} \}_{a,\delta \in (0,1]}$ with $U_{\mathsf{A}},U_{\mathsf{B}}$ gates, of size $O(\log (1/\delta)/\sqrt{a})$, such that the following holds. Let $\JorKetA{j}{1}$ be a left singular vector of $\BProj{\MeasA} \BProj{\MeasB}$ with singular value $s_j$. If $a \leq s_j^2$, $\mathsf{Transform}_{a,\delta}[\mathsf{A} \to \mathsf{B}](\JorKetA{j}{1})$ outputs the state $\JorKetB{j}{1}$ with probability at least $1 - \delta$. Moreover, for all $a$, $\RegS_j$ is invariant under $\mathsf{Transform}_{a,\delta}$.
\end{theorem}
The second algorithm $\mathsf{Threshold}$ implements a measurement determining, given a threshold $b$ and a singular vector with singular value $s_j$, whether $s_j^2 \geq b$ or $s_j^2 \leq b - \varepsilon$ (and otherwise has no guarantee).
\begin{theorem}[Singular value threshold \cite{STOC:GSLW19}]
\label{thm:svdisc}
There is an algorithm $\mathsf{Threshold}$ which, for all binary projective measurements $\mathsf{A},\mathsf{B}$, given black-box access to operators $U_{\mathsf{A}},U_{\mathsf{B}}$, achieves the following guarantee. Given $\delta > 0, b \geq \varepsilon > 0$ and a state $\JorKetA{j}{1}$ which is a left singular vector of $\BProj{\MeasA} \BProj{\MeasB}$ with singular value $s_j$:
\begin{itemize}
\item if $s_j^2 \geq b$, then $\Pr[\mathsf{Threshold}^{\mathsf{A},\mathsf{B}}_{b,\varepsilon,\delta}(\JorKetA{j}{1}) \to 1] \geq 1 - \delta$, and
\item if $s_j^2 \leq b-\varepsilon$, then $\Pr[\mathsf{Threshold}^{\mathsf{A},\mathsf{B}}_{b,\varepsilon,\delta}(\JorKetA{j}{1}) \to 1] \leq \delta$.
\end{itemize}
Moreover, $\RegS_j$ is invariant under $\mathsf{Threshold}$, and if the outcome is $1$ the post-measurement state is $\JorKetA{j}{1}$. $\mathsf{Threshold}$ runs in time $O(\log (1/\delta)\sqrt{b}/\varepsilon)$.
\end{theorem}
Next, we describe an algorithm which, with access to $U_{\mathsf{A}},U_{\mathsf{B}}$, can ``flip'' a singular vector state from $\image(\mathbf{I} - \BProj{\MeasA})$ to $\image(\BProj{\MeasA})$ using $\BProj{\MeasB}$, provided that the singular value is sufficiently far from both $0$ and $1$.
\begin{lemma}
Let $\BProj{\MeasA},\BProj{\MeasB}$ be projectors. For every $\delta > 0$ there is an algorithm $\mathsf{Flip}_{\varepsilon}[\BProj{\MeasA}, \BProj{\MeasB}]$ which, on input a state $\JorKetA{j}{0}$ that is a left singular vector of $\BProj{\MeasA} \BProj{\MeasB}$ with $\varepsilon \leq s_j^2 \leq 3/4$, outputs the state $\JorKetA{j}{1}$ with probability $1-\delta$ in time $O(\log(1/\delta)/\sqrt{\varepsilon})$. $\mathsf{Flip}$ is invariant on the subspace spanned by $\{ \JorKetA{j}{1},\JorKetA{j}{0} \}$.
\end{lemma}
\begin{proof}
The algorithm operates as follows:
\begin{enumerate}[nolistsep]
\item \label[step]{step:alternate-to-1} Apply $\mathsf{A},\mathsf{B}$ in an alternating fashion until either $\mathsf{A} \to 1$, $\mathsf{B} \to 1$ or $3\log(1/\delta)$ measurements have been applied.
\item If $\mathsf{A} \to 1$, stop.
\item If $\mathsf{B} \to 1$, apply $\mathsf{Transform}_{\varepsilon,\delta}[\BProj{\MeasB}, \BProj{\MeasA}]$.
\end{enumerate}
The lemma follows since the probability that \cref{step:alternate-to-1} takes more than $k$ steps is at most $(3/4)^k$, and then by the guarantee of $\mathsf{Transform}$.
\end{proof}
\subsection{Variable-Runtime Singular Vector Transformation (vrSVT)}
\label{sec:vrsvt}
In this section we describe our variable-runtime SVT algorithm. In fact, for technical reasons our algorithm consists of two parts: a variable-runtime \emph{singular value estimation} procedure which \emph{preserves} singular vectors, and a \emph{singular vector transformation} procedure which transforms left singular vectors to right singular vectors, whose running time is fixed given a classical input from the estimation procedure.
Below we give a proof of \cref{thm:vrsvt} that makes use of the singular value discrimination and singular vector transformation algorithms of \cite{STOC:GSLW19}. We note that it is possible to prove \cref{thm:vrsvt} via more ``elementary'' means using high-probability phase estimation \cite{NagajWZ11} and amplitude amplification. Indeed, phase estimation for $(2\mathbf{I} - \BProj{\MeasA})(2\mathbf{I} - \BProj{\MeasB})$ is equivalent to singular value estimation for $\BProj{\MeasA} \BProj{\MeasB}$ and amplitude amplification can be viewed as a (non-coherent) singular vector transformation.
\begin{theorem}[Two-stage variable-runtime singular vector transformation]
\label{thm:vrsvt}
Let $\mathsf{A} = \BMeas{\BProj{\MeasA}}$, $\mathsf{B} = \BMeas{\BProj{\MeasB}}$ be projective measurements. There is a pair of algorithms $\mathsf{VarEstimate}[\BProj{\MeasA} \rightleftarrows \BProj{\MeasB}]$ and $\mathsf{Transform}[\BProj{\MeasB} \to \BProj{\MeasA}]$, using $U_{\mathsf{A}}$ and $U_{\mathsf{B}}$ gates, with the following properties. Let $\JorKetA{j}{1}$ be a left singular vector of $\BProj{\MeasA} \BProj{\MeasB}$ with singular value $s_j > 0$, and let $\JorKetB{j}{1}$ be the corresponding right singular vector. Then
\begin{enumerate}
\item The subspace $\RegS_j$ is invariant under both $\mathsf{VarEstimate}$ and $\mathsf{Transform}$.
\item The running time of $\mathsf{VarEstimate}(\JorKetA{j}{1})$ is $O(\log(1/\delta)/s_j)$ with probability $1-\delta$ and $O(\log(1/\delta)/\delta)$ with probability $1$.
\item The output $(q,\ket{\psi}) \gets \mathsf{VarEstimate}(\JorKetA{j}{1})$ is such that $\ket{\psi} = \JorKetB{j}{1}$ with probability $1-\delta$.
\item The running time of $\mathsf{Transform}(q,\ket{\psi'})$, where $(q,\ket{\psi}) \gets \mathsf{VarEstimate}(\JorKetA{j}{1})$ and $\ket{\psi'}$ is any state, is $O(\log (1/\delta)/s_j)$ with probability $1-\delta$ and at most $1/\delta$ with probability $1$.
\item The output state of $\mathsf{Transform}(\mathsf{VarEstimate}(\JorKetA{j}{1}))$ is $\JorKetB{j}{1}$ with probability $1-\delta$.
\end{enumerate}
\end{theorem}
The $\mathsf{Transform}$ procedure above can be instantiated directly via the singular vector transformation algorithm of \cite{STOC:GSLW19}, see \cref{thm:svt}.
We describe an implementation of $\mathsf{VarEstimate}$ using the singular value discrimination algorithm (\cref{thm:svdisc}). For a binary projective measurement $\mathsf{A}$, let $\bar{\mathsf{A}}$ denote the same measurement with the outcome labels reversed. For $k$ in the procedure below, define $\gamma \coloneqq 2^{-k}$ and $\varepsilon \coloneqq 2^{-k-1}$.
\begin{enumerate}[noitemsep]
\item \label[step]{step:est-loop} Set $b \coloneqq 0$, $k \coloneqq 0$. Repeat the following steps until $b = 1$ or $k \geq \lceil \log (1/\delta) \rceil$:
\begin{enumerate}[noitemsep]
\item Set $k \gets k+1$.
\item Apply $\mathsf{B}$, obtaining outcome $c$.
\item If $c = 1$, apply $\mathsf{Threshold}^{\mathsf{A},\mathsf{B}}(\gamma,\varepsilon,\delta/\log (1/\delta))$ obtaining outcome $b \in \{0,1\}$.
\item If $c = 0$, apply $\mathsf{Threshold}^{\bar{\mathsf{A}},\bar{\mathsf{B}}}(\gamma,\varepsilon,\delta/\log (1/\delta))$ obtaining outcome $b \in \{0,1\}$.
\end{enumerate}
\item \label[step]{step:fix-state} Apply $\mathsf{B}$, obtaining outcome $c$. If $c = 0$, apply $\mathsf{Flip}_{2^{-k-1}}[\mathsf{A},\mathsf{B}]$.
\item Output $2^{-k-1}$.
\end{enumerate}
\begin{lemma}[Variable-runtime singular value estimation]
\label{lemma:varest}
Let $\JorKetA{j}{1}$ be a left singular vector with singular value $s_j$. Let $\delta > 0$. $\mathsf{VarEstimate}_{\delta}[\mathsf{A} \rightleftarrows \mathsf{B}](\JorKetA{j}{1})$ runs in time $O(\log(1/\delta)/s_j)$ with probability $1-\delta$ and $O(\log(1/\delta)/\delta)$ with probability $1$. Moreover, $\mathsf{VarEstimate}$ outputs $a$ in the range $\max(\delta,s_j^2)/4 \leq a \leq \max(\delta,s_j^2)$ with probability $1 - \delta$.
\end{lemma}
\begin{proof}
First, observe that $k$ iterations of \cref{step:est-loop} take time $O(\log(1/\delta) \cdot 2^k)$. Since $\mathsf{VarEstimate}$ terminates within $\lceil \log(1/\delta) \rceil$ iterations of \cref{step:est-loop} with probability $1$, $\mathsf{VarEstimate}$ runs in time $O(\log(1/\delta)/\delta)$ with probability $1$.
The probability that the singular value discrimination algorithm outputs $1$ when $2^{-k} > 2s_j$ is at most $\delta/(\log (1/\delta))$. Similarly, the probability that it outputs $1$ when $2^{-k} \leq s_j$ is at least $1-\delta/(\log (1/\delta))$. By a union bound, with probability at least $1 - \delta$ the algorithm either stops in the first iteration where $2^{-k} \leq 2 s_j$ (so $s_j < 2^{-k} \leq 2 s_j$) or in the following iteration ($s_j/2 < 2^{-k} \leq s_j$). Thus $2^{-k} \in [s_j/2,2s_j]$, so $2^{-k-1} \in [s_j/4,s_j]$ as required. The running time in this case is $O(\log(1/\delta)/s_j)$.
If $s_j \geq 1/2$ then the algorithm stops after one iteration in state $\JorKetB{j}{1}$ with probability $1-\delta$. Otherwise the probability that $\log(1/\delta)$ alternating measurements $\mathsf{A},\mathsf{B}$ are applied with only $0$ outcomes is at most $\delta$. If \cref{step:fix-state} terminates with $\mathsf{B} \to 1$, then the resulting state is $\JorKetB{j}{1}$. Otherwise, the resulting state is $\JorKetA{j}{1}$. In this case the $\mathsf{Transform}$ algorithm rotates the state to $\JorKetB{j}{1}$ with probability $1-\delta$.
\end{proof}
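As a classical sanity check on the doubling search in $\mathsf{VarEstimate}$, the following mock replaces the quantum singular value discrimination subroutine with an exact comparator (a deliberate idealization; the real subroutine errs with probability $\delta/\log(1/\delta)$ and needs the margin $\varepsilon$). Under this idealization the output lands in $[s_j/4, s_j]$, matching the range derived in the proof.

```python
import math

# Idealized classical mock of the VarEstimate doubling search: the quantum
# Threshold/discrimination call is replaced by an exact comparator that
# outputs 1 iff the (unknown) singular value s_j satisfies s_j >= 2^-k.
# This only illustrates the k-loop and the [s_j/4, s_j] output guarantee.

def var_estimate_mock(s_j, delta):
    k_max = math.ceil(math.log2(1 / delta))  # cap matching the k-loop bound
    k, b = 0, 0
    while b != 1 and k < k_max:
        k += 1
        b = 1 if s_j >= 2 ** -k else 0       # idealized discriminator
    return 2 ** -(k + 1)                     # the output 2^{-k-1}

for s in [0.9, 0.5, 0.3, 0.1, 0.01]:
    a = var_estimate_mock(s, delta=1e-6)
    assert s / 4 <= a <= s                   # stops at the first 2^-k <= s_j
```

Since the loop stops at the first $k$ with $2^{-k} \leq s_j$, we get $2^{-k} \in (s_j/2, s_j]$ and hence $2^{-(k+1)} \in (s_j/4, s_j/2]$, inside the claimed range.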
The next two claims follow directly from the correctness and subspace invariance guarantees of $\mathsf{Threshold}$ and $\mathsf{VarEstimate}$.
\begin{corollary}
\label{cor:threshold-est-almost}
For any state $\bm{\rho}$, $\delta > 0$, $\varepsilon \colon [0,1] \to [\delta,1]$:
\[
\Pr[\mathsf{Threshold}_{p,\varepsilon(p),\delta}(\mathsf{VarEstimate}(\bm{\rho})) = 1] \geq 1 - 2\delta,
\]
where $p$ is the classical output from $\mathsf{VarEstimate}$.
\end{corollary}
\begin{corollary}
\label{cor:threshold-almost}
For any state $\bm{\rho}$, $\delta > 0$, $\varepsilon \in [\delta,1]$:
\[
\Pr\left[ b_1 = 1 \,\wedge\, b_2 = 0 \, \middle\vert \begin{array}{r}
(b_1,\bm{\rho}_1) \gets \mathsf{Threshold}_{p,\varepsilon,\delta}(\bm{\rho}) \\
(b_2,\bm{\rho}_2) \gets \mathsf{Threshold}_{p-\varepsilon,\varepsilon,\delta}(\bm{\rho}_1) \\
\end{array}
\right] \leq 2\delta ~.
\]
Moreover,
\[
\Pr\left[ b_1 = 1 \,\wedge\, p_j < p - \varepsilon \, \middle\vert \begin{array}{r}
(b_1,\bm{\rho}_1) \gets \mathsf{Threshold}_{p,\varepsilon,\delta}(\bm{\rho}) \\
j \gets \Meas{\mathsf{Jor}}[\mathsf{A},\mathsf{B}](\bm{\rho}_1) \\
\end{array}
\right] \leq \delta ~.
\]
\end{corollary}
\section{Pseudoinverse Lemma}
In this section we show that for binary projective measurements $\mathsf{A},\mathsf{B}$ and any state $\ket{\psi_\mathsf{A}}$ in the image of $\BProj{\MeasA}$, there is a state $\ket{\psi_\mathsf{B}}$ in the image of $\BProj{\MeasB}$ such that $\ket{\psi_\mathsf{A}}$ is (approximately) obtained by applying $\mathsf{A}$ to $\ket{\psi_\mathsf{B}}$ and conditioning on obtaining a $1$. Moreover, if $\ket{\psi_\mathsf{A}}$ has Jordan spectrum that is concentrated around eigenvalue $p$, then $\ket{\psi_\mathsf{B}}$ has the same property. We refer to this as the ``pseudoinverse lemma'' because $\ket{\psi_\mathsf{B}}$ is obtained from $\ket{\psi_\mathsf{A}}$ by applying the pseudoinverse of the matrix $\BProj{\MeasA}\BProj{\MeasB}$.
\begin{lemma}[Pseudoinverse Lemma]
\label{lemma:pseudoinverse}
Let $\mathsf{A},\mathsf{B}$ be binary projective measurements, and let $\{ \RegS_j \}_{j}$ be the induced Jordan decomposition. Let $\SProj[\mathsf{Jor}]{j}$ be the projection on to $\RegS_j$ and let $p_j$ be the eigenvalue of $\RegS_j$. Let $\bm{\rho}$ be a state such that $\Tr(\BProj{\MeasA} \bm{\rho}) = 1$ and let $\SProj{0} \coloneqq \sum_{j, p_j = 0} \SProj[\mathsf{Jor}]{j}$. Let $E \coloneqq \sum_{j, p_j > 0} \frac{1}{p_j} \SProj[\mathsf{Jor}]{j}$. There exists a ``pseudoinverse'' state $\bm{\rho}'$ with $\Tr(\BProj{\MeasB} \bm{\rho}') = 1$ such that all of the following are true:
\begin{enumerate}[noitemsep]
\item $\Tr(\BProj{\MeasA} \bm{\rho}') = \frac{1-\Tr(\SProj{0} \bm{\rho})}{\Tr(E \bm{\rho})} $,
\item $d\left(\bm{\rho},\frac{\BProj{\MeasA} \bm{\rho}' \BProj{\MeasA}}{\Tr(\BProj{\MeasA} \bm{\rho}')}\right) \leq 2\sqrt{\Tr(\SProj{0} \bm{\rho})}$,
\item for all $j$ such that $p_j > 0$ it holds that $\Tr(\SProj[\mathsf{Jor}]{j} \bm{\rho}') = \frac{\Tr(\SProj[\mathsf{Jor}]{j} \bm{\rho})}{p_j \cdot \Tr(E \bm{\rho})}$, and
\item for all $j$ such that $p_j = 0$ it holds that $\Tr(\SProj[\mathsf{Jor}]{j} \bm{\rho}') = 0$.
\end{enumerate}
\end{lemma}
An important consequence of (3) and (4) is that for all $j$, if $\Tr(\SProj[\mathsf{Jor}]{j} \bm{\rho}) = 0$ then $\Tr(\SProj[\mathsf{Jor}]{j} \bm{\rho}') = 0$.
\begin{proof}
Let $C \coloneqq \BProj{\MeasA} \BProj{\MeasB}$, and note that $\JorKetA{j}{1},\JorKetB{j}{1}$ are corresponding left and right singular vectors of $C$ with singular value $\sqrt{p_j}$. Hence $C = \sum_{p_j > 0} \sqrt{p_j} \JorKetA{j}{1} \JorBraB{j}{1}$. Let $C^+$ be the pseudoinverse of $C$, i.e., $C^+ = \sum_{p_j>0} \frac{1}{\sqrt{p_j}} \JorKetB{j}{1} \JorBraA{j}{1}$. Define \[\bm{\rho}' \coloneqq \frac{C^+ \bm{\rho} (C^+)^{\dagger}}{\Tr(C^+ \bm{\rho} (C^{+})^{\dagger})}.\]
Since $\Tr(\BProj{\MeasA}\bm{\rho})=1$, we have $\Tr(\BProj{\MeasB} \bm{\rho}') = 1$. We also have
\begin{align}
\Tr(C^+ \bm{\rho} (C^+)^{\dagger}) = \Tr((C C^{\dagger})^{+}\bm{\rho}) = \sum_j \frac{1}{p_j} \JorBraA{j}{1} \bm{\rho} \JorKetA{j}{1} = \Tr(E \bm{\rho}). \label{eq:pseudoinverse-trace}
\end{align}
Next, observe that since $C C^+ = \sum_{p_j > 0} \JorKetBraA{j}{1} = \mathbf{I} - \Pi_0$, we have
\begin{align}
\BProj{\MeasA} \bm{\rho}' \BProj{\MeasA} &= \BProj{\MeasA} \left(\frac{ C^+ \bm{\rho} (C^+)^{\dagger} }{\Tr(C^+ \bm{\rho} (C^{+})^{\dagger})}\right) \BProj{\MeasA} \notag \\
&= \BProj{\MeasA} \left(\frac{ C^+ \bm{\rho} (C^+)^{\dagger} }{\Tr(E \bm{\rho})}\right) \BProj{\MeasA} \notag \\
&= \BProj{\MeasA} \BProj{\MeasB} \left(\frac{ C^+ \bm{\rho} (C^+)^{\dagger} }{\Tr(E \bm{\rho})}\right) \BProj{\MeasB} \BProj{\MeasA} \notag \\
&= \frac{1}{\Tr(E \bm{\rho})} C C^+ \bm{\rho} (C C^+)^{\dagger} \notag \\
&= \frac{1}{\Tr(E \bm{\rho})} (\mathbf{I}-\Pi_0) \bm{\rho} (\mathbf{I}-\Pi_0). \label{eq:pseudoinverse-condition}
\end{align}
\noindent Given these calculations, we can prove the claimed properties (1)--(4) in the lemma statement:
\begin{itemize}
\item \textbf{Proof of (1).} Taking the trace of both sides of \cref{eq:pseudoinverse-condition}, we see that
\[ \Tr(\BProj{\MeasA} \bm{\rho}') = \Tr(\BProj{\MeasA} \bm{\rho}' \BProj{\MeasA}) = \frac 1 {\Tr(E \bm{\rho})} \Tr\left((\mathbf{I} - \Pi_0) \bm{\rho} (\mathbf{I} - \Pi_0) \right) = \frac {\Tr\left((\mathbf{I} - \Pi_0) \bm{\rho} \right)} {\Tr(E \bm{\rho})} = \frac {1 - \Tr\left(\Pi_0 \bm{\rho} \right)} {\Tr(E \bm{\rho})}.
\]
\item \textbf{Proof of (2).}
Given \cref{eq:pseudoinverse-condition} and the trace calculation above, we have that
\[ \frac{\BProj{\MeasA} \bm{\rho}' \BProj{\MeasA}}{\Tr(\BProj{\MeasA} \bm{\rho}')} = \frac{1}{1-\Tr(\Pi_0 \bm{\rho})} (\mathbf{I}-\Pi_0) \bm{\rho} (\mathbf{I}-\Pi_0)
\]
The inequality $d\left(\bm{\rho},\frac{\BProj{\MeasA} \bm{\rho}' \BProj{\MeasA}}{\Tr(\BProj{\MeasA} \bm{\rho}')}\right) \leq 2\sqrt{\Tr(\SProj{0} \bm{\rho})}$ now follows from \cref{lemma:gentle-measurement} (gentle measurement).
\item \textbf{Proof of (3).} For all $j$ such that $p_j > 0$, making use of the same calculation as \cref{eq:pseudoinverse-trace}, we have
\[ \Tr(\SProj[\mathsf{Jor}]{j} \bm{\rho}') = \frac{ \Tr(\SProj[\mathsf{Jor}]{j} C^+ \bm{\rho} (C^+)^{\dagger})}{\Tr( C^+ \bm{\rho} (C^+)^{\dagger})}= \frac{\Tr( C^+ \SProj[\mathsf{Jor}]{j} \bm{\rho} (C^+)^{\dagger})}{\Tr(E \bm{\rho})} = \frac{\Tr(E \hspace{.1cm} \SProj[\mathsf{Jor}]{j} \bm{\rho})}{\Tr(E \bm{\rho})} = \frac{\Tr(\frac 1 {p_j} \SProj[\mathsf{Jor}]{j} \bm{\rho})}{\Tr(E \bm{\rho})}.
\]
\item \textbf{Proof of (4).} This follows immediately from the fact that $\Pi_0 C^+ = C^+ \Pi_0 = 0$.
\end{itemize}
This completes the proof of \cref{lemma:pseudoinverse}.
\end{proof}
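The lemma can also be checked numerically. The sketch below draws a random pair of projectors (the dimension $6$ and ranks $3$ are arbitrary illustrative choices), builds the pseudoinverse state $\bm\rho'$, and verifies $\Tr(\Pi_B \bm\rho') = 1$ and property (1), with $\Pi_0$ and $E$ assembled from the singular value decomposition of $C = \Pi_A \Pi_B$.

```python
import numpy as np

# Numerical sanity check of the Pseudoinverse Lemma on random projectors.
# The dimension (6) and ranks (3) are arbitrary illustrative choices.

rng = np.random.default_rng(1)

def random_projector(d, r):
    """Orthogonal projector onto a random r-dimensional subspace of R^d."""
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return q[:, :r] @ q[:, :r].T

d = 6
PA = random_projector(d, 3)
PB = random_projector(d, 3)

# A pure state rho supported on image(PA), i.e. Tr(PA rho) = 1.
v = PA @ rng.standard_normal(d)
v /= np.linalg.norm(v)
rho = np.outer(v, v)

C = PA @ PB
Cp = np.linalg.pinv(C, rcond=1e-10)       # pseudoinverse of PA PB
rho_p = Cp @ rho @ Cp.T
rho_p /= np.trace(rho_p)                  # normalize by Tr(C+ rho C+^dag)

# The pseudoinverse state lives in image(PB).
assert np.isclose(np.trace(PB @ rho_p), 1.0)

# Property (1): Tr(PA rho') = (1 - Tr(Pi0 rho)) / Tr(E rho). Here Pi0 and E
# are built from the left singular vectors u_i of C, with p_j = s_j^2; for
# states supported on image(PA) only the part of Pi0 inside image(PA) matters.
U, svals, Vh = np.linalg.svd(C)
pos = svals > 1e-10
Pi0_on_A = np.eye(d) - U[:, pos] @ U[:, pos].T
E = sum(np.outer(U[:, i], U[:, i]) / svals[i] ** 2 for i in range(d) if pos[i])
lhs = np.trace(PA @ rho_p)
rhs = (1 - np.trace(Pi0_on_A @ rho)) / np.trace(E @ rho)
assert np.isclose(lhs, rhs, rtol=1e-6)
```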
We conclude this section by showing that any state $\bm{\rho}$ that is close to $\image(\BProj{\MeasA})$ has a nearby state with the same Jordan decomposition whose support, apart from the ``stuck'' component, lies in $\image(\BProj{\MeasA})$.
\begin{claim}
\label{claim:jordan-rotate}
Let $\bm{\rho}$ be any state. Let $\SProj[\mathsf{Jor}]{\mathsf{stuck}}$ project onto the one-dimensional subspaces $\RegS_j$ in the image of $\mathbf{I} - \BProj{\MeasA}$. There exists a state $\bm{\sigma}$ such that for all $j$, $\Tr(\SProj[\mathsf{Jor}]{j} \bm{\sigma}) = \Tr(\SProj[\mathsf{Jor}]{j} \bm{\rho})$, $\Tr(\BProj{\MeasA} \bm{\sigma}) = 1 - \Tr(\SProj[\mathsf{Jor}]{\mathsf{stuck}} \cdot \bm{\rho})$, and $d(\bm{\rho},\bm{\sigma}) \leq \sqrt{1 - \Tr(\BProj{\MeasA} \bm{\rho})}$.
\end{claim}
\begin{proof}
Define a unitary $U$ which is invariant on the $\RegS_j$ and, in each two-dimensional $\RegS_j$, rotates $\JorKetA{j}{0}$ to $\JorKetA{j}{1}$. Formally,
\begin{equation*}
U \coloneqq \sum_{j,p_j \notin \{0,1\}} (\JorKetA{j}{1}\JorBraA{j}{0} + \JorKetA{j}{0}\JorBraA{j}{1}) + \mathbf{I}_{\RegS^{(1)}},
\end{equation*}
where $\RegS^{(1)}$ is the direct sum of the 1D subspaces. Set
\[
\bm{\sigma} \coloneqq \BProj{\MeasA} \bm{\rho} \BProj{\MeasA} + U (I - \BProj{\MeasA}) \bm{\rho} (I - \BProj{\MeasA}) U^{\dagger}. \qedhere
\]
\end{proof}
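The construction in the proof can be instantiated numerically. The sketch below works in $\mathbb{R}^4$ with $\mathrm{rank}(\Pi_A) = 2$ and $\mathrm{rank}(\Pi_B) = 1$ (arbitrary choices, picked so that generically there is one two-dimensional Jordan block and one stuck one-dimensional block), builds the rotation $U$ from the singular vectors of $\Pi_A \Pi_B$, and verifies the trace and Jordan-weight identities; it does not check the distance bound.

```python
import numpy as np

# Numerical check of the rotation construction in the claim above, in R^4
# with rank(PA) = 2, rank(PB) = 1: generically one 2-D Jordan block, one
# p_j = 0 block inside image(PA), and one "stuck" block inside image(I - PA).

rng = np.random.default_rng(3)

def projector(cols):
    q, _ = np.linalg.qr(cols)
    return q @ q.T

PA = projector(rng.standard_normal((4, 2)))
PB = projector(rng.standard_normal((4, 1)))

# The 2-D Jordan block from the SVD of C = PA PB.
U_, svals, Vh = np.linalg.svd(PA @ PB)
s = svals[0]                            # the unique singular value in (0, 1)
u = U_[:, 0]                            # |A_j,1>, in image(PA)
w = Vh[0, :]                            # |B_j,1>, in image(PB)
a0 = (w - s * u) / np.sqrt(1 - s**2)    # |A_j,0>, in ker(PA)

# The rotation unitary: swap |A_j,0> <-> |A_j,1>, identity elsewhere.
U = (np.eye(4) - np.outer(u, u) - np.outer(a0, a0)
     + np.outer(u, a0) + np.outer(a0, u))

psi = rng.standard_normal(4)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi)
I = np.eye(4)
sigma = PA @ rho @ PA + U @ (I - PA) @ rho @ (I - PA) @ U.T

# Pi_stuck projects onto ker(PA) ∩ ker(PB), the eigenvalue-0 space of PA + PB.
evals, vecs = np.linalg.eigh(PA + PB)
stuck = vecs[:, evals < 1e-10]
Pi_stuck = stuck @ stuck.T

assert np.isclose(np.trace(sigma), 1.0)
assert np.isclose(np.trace(PA @ sigma), 1 - np.trace(Pi_stuck @ rho))
# The Jordan weight on the 2-D block is preserved.
Pj = np.outer(u, u) + np.outer(a0, a0)
assert np.isclose(np.trace(Pj @ sigma), np.trace(Pj @ rho))
```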
\section{Post-Quantum Guaranteed Extraction}\label{sec:high-probability-extractor}
\def \otimes {\otimes}
\def \mathsf{vk} {\mathsf{vk}}
In this section, we give a post-quantum extraction procedure for various 3- and 4-message public-coin interactive protocols. In particular, we will consider interactive protocols satisfying \emph{partial collapsing} (\cref{def:partial-collapsing-protocol}) with respect to some class of efficiently computable functions $F = \{f: T\times R \times Z \rightarrow \{0,1\}^* \}$. Our goal is to establish \emph{guaranteed extraction}, defined below (essentially matching \cref{def:high-probability-extraction}).
\begin{definition}\label{def:high-probability-extraction-body}
$(P_{\Sigma}, V_{\Sigma})$ is a post-quantum proof of knowledge with \emph{guaranteed extraction} if it has an extractor $\mathsf{Extract}^{P^*}$ of the following form.
\begin{enumerate}[noitemsep]
\item $\mathsf{Extract}^{P^*}$ first runs the cheating prover $P^*$ to generate a (classical) first message $a$ along with an instance $x$ (in a 4-message protocol, this requires first sampling a random $\mathsf{vk}$ and running $P^*(\mathsf{vk})$ to obtain $x,a$).
\item\label[step]{step:ge-run-coherently} $\mathsf{Extract}^{P^*}$ runs $P^*$ coherently on the superposition $\sum_{r \in R} \ket{r}$ of all challenges to obtain a superposition $\sum_{r,z} \alpha_{r, z} \ket{r, z}$ over challenge-response pairs.\footnote{In general, the response $z$ will be entangled with the prover's state; here we suppress this dependence.}
\item $\mathsf{Extract}^{P^*}$ then computes (in superposition) the verifier's decision $V(x, a, r, z)$ and measures it. If the measurement outcome is $0$, the extractor gives up.
\item If the measurement outcome is $1$, run some quantum procedure $\mathsf{FindWitness}^{P^*}$ that outputs a string $w$.
\end{enumerate}
We require that the following two properties hold.
\begin{itemize}[noitemsep]
\item \textbf{Correctness (guaranteed extraction):} The probability that the initial measurement returns $1$ but the output witness $w$ is not a valid witness for $x$ is ${\rm negl}(\lambda)$.
\item \textbf{Efficiency:} For any QPT $P^*$, the procedure $\mathsf{Extract}^{P^*}$ is in $\mathsf{EQPT}_{m}$.
\end{itemize}
\end{definition}
We remark that this definition is written to capture (first-message) \emph{adaptive soundness}, where the prover $P^*$ is allowed to choose the instance $x$ when it sends its first message. One could alternatively define a non-adaptive variant of this definition in which the instance $x$ is fixed in advance (and this section's results would hold in this setting as well). \cref{def:high-probability-extraction-body} suffices for our purposes since none of the 4-message protocols we consider have the first verifier message $\mathsf{vk}$ depend on $x$ (in all cases we consider, $\mathsf{vk}$ is just a commitment key or hash function key), and the protocols all satisfy adaptive soundness.
\subsubsection{Notation}
\label{sec:ge-notation}
Let $\mathcal{R}$ denote a register with the basis $\{\ket{r}\}_{r \in R}$ and let $\ket{+_R}_{\mathcal{R}} \coloneqq \frac{1}{\sqrt{\abs{R}}} \sum_{r \in R} \ket{r}$. Let $\mathcal{H}$ denote the prover's state (including its workspace), and let $U_r$ denote the unitary on $\mathcal{H}$ that the prover applies on challenge $r$. Let $\mathcal{Z}$ denote the subregister of $\mathcal{H}$ that the prover measures to obtain its response $z$ after applying $U_r$.
Define the projector
\[\Pi_{V,r} = U_{r}^{\dagger} \left(\sum_{z : V(r,z) = 1} \ketbra{z}_{\mathcal{Z}} \otimes \mathbf{I}\right) U_{r}\]
which intuitively projects onto the subspace of $\mathcal{H}$ where the prover gives an accepting response on challenge $r$.
Define the binary projective measurement $\mathsf{C} = \BMeas{\BProj{\mathsf{C}}}$ where
\[ \BProj{\mathsf{C}} = \sum_r \ketbra{r}_{\mathcal{R}} \otimes \Pi_{V,r} ,\]
and $\mathsf{U} = \BMeas{\BProj{\mathsf{U}}}$ where
\[ \BProj{\mathsf{U}} = \ketbra{+_R}_{\mathcal{R}} \otimes \mathbf{I}_{\mathcal{H}}.\]
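To make these definitions concrete, here is a minimal numerical instantiation with $|R| = 2$, a one-qubit prover register $\mathcal{H} = \mathcal{Z}$, a hypothetical verifier predicate $V(r,z) = 1$ iff $z = r$, and trivial prover unitaries $U_r = \mathbf{I}$. None of these choices come from the protocol; they only exhibit the structure of $\Pi_{\mathsf{C}}$ and $\Pi_{\mathsf{U}}$.

```python
import numpy as np

# Minimal instantiation of Pi_C and Pi_U: |R| = 2, a one-qubit prover,
# hypothetical predicate V(r, z) = 1 iff z = r, and prover unitaries U_r = I.

ket = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
plus = (ket[0] + ket[1]) / np.sqrt(2)

# Pi_{V,r} = sum_{z : V(r,z) = 1} |z><z| = |r><r| on H (since U_r = I here).
Pi_V = [np.outer(ket[r], ket[r]) for r in (0, 1)]
# Pi_C = sum_r |r><r|_R (x) Pi_{V,r},   Pi_U = |+_R><+_R| (x) I_H.
Pi_C = sum(np.kron(np.outer(ket[r], ket[r]), Pi_V[r]) for r in (0, 1))
Pi_U = np.kron(np.outer(plus, plus), np.eye(2))

# A prover that answers correctly on r = 0 w.p. 0.8 and on r = 1 w.p. 0.2.
psi = np.array([np.sqrt(0.8), np.sqrt(0.2)])
state = np.kron(plus, psi)              # |+_R> (x) |psi>_H

# <state| Pi_C |state> is the average success probability over challenges.
p_success = float(state @ Pi_C @ state)
assert np.isclose(p_success, 0.5)       # (0.8 + 0.2) / 2

# The nonzero singular values of Pi_C Pi_U are the sqrt(p_j) of the Jordan
# blocks; here every block has p_j = 1/2.
svals = np.linalg.svd(Pi_C @ Pi_U, compute_uv=False)
assert np.isclose(svals[0], np.sqrt(0.5))
```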
\subsection{Description of the Extractor}
\label{subsec:description-ext}
We first give a full description of an extraction procedure $\mathsf{Extract}$, defined for any partially collapsing protocol.
\paragraph{The threshold unitary.}
Consider the following measurement procedure $\mathsf{T}_{p,\varepsilon,\delta}$ on $\mathcal{H}$, parameterized by threshold $p$, accuracy $\varepsilon$ and error $\delta$.
\begin{itemize}[noitemsep]
\item Initialize a fresh register $\mathcal{R}$ to $\ket{+_R}_{\mathcal{R}}$.
\item Run $\mathsf{Threshold}_{p,\varepsilon,\delta}^{\mathsf{U},\mathsf{C}}$ on $\mathcal{H} \otimes \mathcal{R}$ , obtaining outcome $b$.
\item Trace out $\mathcal{R}$ and output $b$.
\end{itemize}
We define $U_{p,\varepsilon,\delta}$ to be a \emph{coherent} implementation of $\mathsf{T}_{p,\varepsilon,\delta}$. $U_{p,\varepsilon,\delta}$ acts on $\mathcal{H} \otimes \mathcal{W} \otimes \mathcal{B}$ where $\mathcal{W} \otimes \mathcal{B}$ is an ancilla register: $\mathcal{W}$ contains the algorithm's workspace and $\mathcal{B}$ is a single qubit containing the measurement outcome. In particular, applying $U_{p,\varepsilon,\delta}$ to $\ket{\psi}_{\mathcal{H}} \ket{0}_{\mathcal{W},\mathcal{B}}$, measuring $\mathcal{B}$, and then tracing out $\mathcal{W}$ implements the above measurement.
\paragraph{The repair measurements.} We define two projective measurements $\mathsf{D}_r = \BMeas{\BProj{r}}$ and $\mathsf{G}_{p,\varepsilon,\delta} = \BMeas{\BProj{p,\varepsilon,\delta}}$ for our repair step.
For any $p,\varepsilon,\delta > 0$, define the projector $\Pi_{p,\varepsilon,\delta}$ on $\mathcal{H} \otimes \mathcal{W} \otimes \mathcal{B}$ as follows:
\[ \Pi_{p,\varepsilon,\delta} \coloneqq U_{p,\varepsilon,\delta}^\dagger (\mathbf{I}_{\mathcal{H}, \mathcal{W}} \otimes \ketbra{1}_{\mathcal{B}}) U_{p,\varepsilon,\delta}.\]
For any $r \in R$, we define the projector $\Pi_r$ on $\mathcal{H} \otimes \mathcal{W}$ as
\[\Pi_r \coloneqq (\Pi_{V, r})_{\mathcal{H}} \otimes \ketbra{0}_{\mathcal{W}}.\]
We describe the extraction procedure $\mathsf{Extract}_{V}^{P^*}(x)$. The procedure is defined with respect to $k$ efficiently computable functions $f_1, \hdots, f_k: T\times R \times Z\rightarrow \{0,1\}^*$.
\begin{mdframed}
\begin{enumerate}
\item \label[step]{step:first-measurement} \textbf{Initial Execution.} Use $P^*$ to generate $(\mathsf{vk}, a)$, and let $\ket{\psi}$ denote the residual prover state. Apply $\mathsf{C} = \BMeas{\BProj{\mathsf{C}}}$ to $ \ket{\psi}_{\mathcal{H}}\otimes \ket{+_R}_{\mathcal{R}}$. If the outcome is $0$, terminate (note that we do not consider this an ``abort''). Otherwise:
\item \label[step]{step:variable-phase-est} \textbf{Estimate success probability.} Run $\mathsf{VarEstimate}[\mathsf{C} \rightleftarrows \mathsf{U}]$ (as defined in \cref{sec:vrsvt}) with $\frac 1 2$-multiplicative error and failure probability $\delta = 1/2^\lambda$, outputting a value $p$. Note that since the input state is in the image of $\BProj{\mathsf{C}}$, the algorithm produces an output state in the image of $\BProj{\mathsf{C}}$ with probability $1-\delta$.
Abort if $p < \lambda k \sqrt{\delta}$. Define $\varepsilon = \frac{p}{4k}$.
\item \label[step]{step:extract-transcript} \textbf{Main Loop.} Repeat the following ``main loop'' for $i$ from $1$ to $k$:
\begin{enumerate}
\item \label[step]{step:loop-reestimate} \textbf{Lower bound success probability.} Run $\mathsf{Threshold}_{p,\varepsilon,\delta}^{\mathsf{C},\mathsf{U}}$ on $\mathcal{H} \otimes \mathcal{R} $, obtaining outcome $b$. Abort if $b = 0$. Update $p \coloneqq p-\varepsilon$.
\item \label[step]{step:loop-measure-r} \textbf{Measure the challenge.} Measure the $\mathcal{R}$ register, obtaining a particular challenge $r_i \in R$. Discard the $\mathcal{R}$ register.
\item \label[step]{step:loop-phase-est} \textbf{Estimate the running time of Transform.} Initialize the $\mathcal{W}$ register to $\ket{0}_{\mathcal{W}}$ and run $\mathsf{VarEstimate}[\mathsf{D}_{r_i} \rightleftarrows \mathsf{G}_{p, \varepsilon,\delta}]$ with $\frac 1 2$-multiplicative error and failure probability $\delta = 2^{-\lambda}$, obtaining classical output $q$. Since the input state is in the image of $\Pi_{r_i}$, the algorithm produces an output state in the image of $\Pi_{r_i}$ with probability $1-\delta$.
\item \label[step]{step:loop-collapsing}\textbf{Record part of the accepting response.} Make a partial measurement of the prover response $z_i$; specifically, measure $y_i = f_i(z_i)$.\footnote{Formally, we (1) apply the prover unitary $U_{r_i}$ to $\mathcal{H}$, (2) apply the projective measurement $\big(\Pi_y\big)_y$ for $\Pi_y = \sum_{z: f_i(z) = y} \ketbra{z}_{\mathcal{Z}}\otimes \mathbf{I}_{\mathcal{H}'}$ (where $\mathcal{H} = \mathcal{Z} \otimes \mathcal{H}'$), and (3) apply $U_{r_i}^\dagger$ to $\mathcal{H}$.} \textbf{If $i=k$, go to Step 4.}
\item \label[step]{step:loop-svt} \textbf{Transform onto good states.} Apply $\mathsf{Transform}_q[\mathsf{D}_{r_i} \rightarrow \mathsf{G}_{p, \varepsilon,\delta}]$ with failure probability $\delta = 2^{- \lambda}$.
\item \label[step]{step:loop-discard-w} Next, apply $U_{p,\varepsilon,\delta}$ and then discard the $\mathcal{W}$ register. Update $p \coloneqq p-\varepsilon$.
\item \label[step]{step:loop-amplify-C} \textbf{Transform onto accepting executions.} Re-initialize $\mathcal{R}$ to $\ket{+_R}$ and then apply $\mathsf{Transform}_{p}[\mathsf{U} \rightarrow \mathsf{C}]$; abort if this procedure fails.
\end{enumerate}
\item Output $(\mathsf{vk},a,r_1,y_1,\cdots,r_k,y_k)$.
\end{enumerate}
The above procedure is deterministically terminated with an abort if it has not already stopped after $O(k/\sqrt{\delta})$ steps, for $\delta := 2^{-\lambda}$.
\end{mdframed}
\subsection{Partial Transcript Extraction Theorem}
Our most general extraction theorem is stated for \emph{any} partially collapsing protocol, but is only guaranteed to output partial transcripts (rather than a witness). In \cref{sec:obtaining-guaranteed-extraction}, we show how this theorem can be used to establish guaranteed extraction of a witness.
\begin{theorem}\label{thm:high-probability-extraction}
For any $4$-message public-coin interactive argument satisfying partial collapsing (\cref{def:collapsing-protocol}) with respect to the functions $f_1, \hdots, f_{k-1}$ (but \emph{not necessarily} $f_k$), the procedure $\mathsf{Extract}_V$ has the following properties for any instance $x$.
\begin{enumerate}
\item \textbf{Efficiency:} For any QPT prover $P^*$, $\mathsf{Extract}_V^{P^*}$ runs in expected polynomial time ($\mathsf{EQPT}_{m}$). More formally, the number of calls that $\mathsf{Extract}_V^{P^*}$ makes to $P^*$ is a classical random variable whose expectation is a fixed polynomial in $k, \lambda$.
\item \textbf{Correctness:} $\mathsf{Extract}$ aborts with negligible probability.
\item \textbf{Distribution of outputs}: For every choice of $(\mathsf{vk}, a)$, let $\gamma = \gamma_{\mathsf{vk}, a}$ denote the success probability of $P^*$ conditioned on first two messages $(\mathsf{vk}, a)$. Then, if $\gamma > \delta^{1/3}$, the distribution of $(r_1, \hdots, r_k)$ (conditioned on $(\mathsf{vk}, a)$ and a successful first execution) is $O(1/\gamma)$-admissible (\cref{def:admissible-dist}).
\end{enumerate}
\end{theorem}
\subsection{Proof of \cref{thm:high-probability-extraction}}
\subsubsection{Intermediate State Notation}
Our extraction procedure and analysis make use of four relevant registers:
\begin{itemize}[noitemsep]
\item A challenge randomness register $\mathcal{R}$,
\item A prover state register $\mathcal{H}$,
\item A phase estimation workspace register $\mathcal{W}$, and
\item A one-qubit register $\mathcal{B}$ that contains a bit $b$, where $b = 1$ indicates that the computation has not aborted during a sub-computation.
\end{itemize}
We now establish some conventions:
\begin{itemize}[noitemsep]
\item states written using the letter $\dm$ satisfy $\dm \in \Hermitians{\mathcal{B} \otimes \mathcal{H} \otimes \mathcal{R}}$ or $\dm \in \Hermitians{ \mathcal{H} \otimes \mathcal{R}}$, where we use $\Hermitians{\mathcal{H}}$ to denote the space of Hermitian operators on $\mathcal{H}$;
\item states using the letter $\DMatrixW$ satisfy $\DMatrixW \in \Hermitians{\mathcal{B} \otimes \mathcal{H} \otimes \mathcal{W}}$ or $\DMatrixW \in \Hermitians{\mathcal{H} \otimes \mathcal{W}}$;
\item states using the letter $\DMatrixH$ satisfy $\DMatrixH \in \Hermitians{\mathcal{B} \otimes \mathcal{H}}$ or $\DMatrixH \in \Hermitians{\mathcal{H}}$;
\item states using the letter $\DMatrixT$ satisfy $\DMatrixT\in \Hermitians{\mathcal{H}\otimes \mathcal{W} \otimes \mathcal{R}}$.
\end{itemize}
With these conventions in mind, we define some intermediate states related to the extraction procedure:
\begin{mdframed}
\begin{itemize}[noitemsep]
\item Let $\psi$ denote the prover state after $(\mathsf{vk}, a)$ is generated.
\item Let $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(2)}$ denote the state obtained at the end of \cref{step:variable-phase-est}.
\item For each iteration of the \cref{step:extract-transcript} loop, we define the following states:
\begin{itemize}[nolistsep]
\item Let $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathrm{init})}$ denote the state at the beginning of \cref{step:extract-transcript}. The register $\mathcal{B}$ is initialized to $\ketbra{1}$. For the rest of the loop iteration, $\mathcal{B}$ is set to $\ketbra{0}$ if the computation aborts.
\item Let $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})}$ denote the state at the end of \cref{step:loop-reestimate}.
\item Let $\DMatrixH_{\mathcal{B},\mathcal{H}}^{(3b)}$ denote the state at the end of \cref{step:loop-measure-r} (after the $\mathcal{R}$ register is discarded).
\item Let $\DMatrixW_{\mathcal{B},\mathcal{H},\mathcal{W}}^{(3c)}$ denote the state at the end of \cref{step:loop-phase-est}.
\item Let $\DMatrixW_{\mathcal{B},\mathcal{H},\mathcal{W}}^{(3e)}$ denote the state at the end of \cref{step:loop-svt}, immediately before the $\mathcal W$ register is traced out in \cref{step:loop-discard-w}.
\item Let $\DMatrixH_{\mathcal{B},\mathcal{H}}^{(3f)}$ denote the state at the end of \cref{step:loop-discard-w}.
\item Let $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(3g)}$ denote the state at the end of \cref{step:loop-amplify-C}.
\end{itemize}
\end{itemize}
\end{mdframed}
As in~\cref{lemma:pseudoinverse}, let the Jordan decomposition of $\mathcal{H} \otimes \mathcal{R}$ corresponding to $\BProj{\mathsf{C}},\BProj{\mathsf{U}}$ be $\{\mathcal{S}_j\}_j$, where subspace $\mathcal{S}_j$ is associated with the eigenvalue/success probability $p_j$. Let $\Pi^{\mathsf{Jor}}_j$ be the projection onto $\mathcal{S}_j$, i.e., $\image(\Pi^{\mathsf{Jor}}_j) = \mathcal{S}_j$. Define the following projections on $\mathcal{H} \otimes \mathcal{R}$:
\begin{itemize}
\item $\Pi_0^{\mathsf{Jor}} \coloneqq \sum_{j, p_j =0} \Pi^{\mathsf{Jor}}_j$
\item $\Pi^\mathsf{Jor}_{\geq p}= \sum_{j: p_j \geq p} \Pi_j^{\mathsf{Jor}}$
\item $\Pi^{\mathsf{Jor}}_{< p} = \sum_{j: p_j < p} \Pi_j^{\mathsf{Jor}}$
\end{itemize}
\noindent We additionally define the following projectors on $\mathcal{B} \otimes \mathcal{H} \otimes \mathcal{R}$.
\[ \Pi^{\mathsf{Jor}}_{\mathsf{Bad}} = \ketbra{1}_{\mathcal{B}} \otimes \Pi^{\mathsf{Jor}}_{< p} \hspace{10pt} \text{and} \hspace{10pt} \Pi^{\mathsf{Jor}}_{\mathsf{Good}} = \mathbf{I}_{\mathcal{B},\mathcal{H},\mathcal{R}} - \Pi^{\mathsf{Jor}}_{\mathsf{Bad}}.
\]
\begin{claim}\label{claim:reest-good-state}
For \emph{any} estimate $p$ and \emph{any} state $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathrm{init})}$ such that $\Tr((\mathbf{I}_{\mathcal{B}} \otimes \Pi_{\mathsf{C}}) \dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathrm{init})}) = 1$, the state $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})}$ obtained by running $b\gets \mathsf{Threshold}^{\mathsf{C}, \mathsf{U}}_{p, \varepsilon, \delta}$ (and then redefining $p := p-\varepsilon$) and setting $\mathcal{B} = \ketbra{b}$ satisfies
\[ \Tr( \Pi^{\mathsf{Jor}}_{\mathsf{Bad}} \dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})}) \leq \delta.\]
\end{claim}
\begin{proof}
When $\mathsf{Threshold}^{\mathsf{C}, \mathsf{U}}_{p, \varepsilon, \delta}$ returns $0$, the computation aborts. Therefore, the claim follows immediately from the almost-projectivity of $\mathsf{Threshold}$ (\cref{cor:threshold-almost}).
\end{proof}
\subsubsection{Analysis of \cref{step:first-measurement,step:variable-phase-est}}
We first show that \cref{step:first-measurement,step:variable-phase-est} run in expected polynomial time, and bound the statistic $\mathbb E[1/p \cdot X_1]$, where $p$ is the output of \cref{step:variable-phase-est} and $X_1$ is the indicator for the event ``\cref{step:first-measurement} does not abort''.
\begin{lemma}
\label{lemma:step-1-2}
The expected runtime of \cref{step:first-measurement,step:variable-phase-est} is $O(1)$. Moreover,
\[\mathbb E[1/p \cdot X_1] = O(1).
\]
\end{lemma}
\begin{proof}
Let $\ket{\psi}$ denote the state of $P^*$ after $(\mathsf{vk}, a)$ are generated. Then, consider the $(\mathsf{U}, \mathsf{C})$-Jordan decomposition
\[ \ket{\psi} \otimes \ket{+_R} = \sum_j \alpha_j \ket{v_{j,1}},
\]
where each $\ket{v_{j,1}} \in \mathcal S_j \cap \image(\Pi_{\mathsf{U}})$. Let $\gamma = \sum_j |\alpha_j|^2 p_j$ denote the initial success probability of $\ket{\psi}$.
\cref{step:first-measurement} runs in a fixed polynomial time and aborts with probability $1-\gamma$. Otherwise, \cref{step:variable-phase-est} is run on the residual state
\[ \frac 1 {\sqrt \gamma} \sum_j \alpha_j \sqrt{p_j} \ket{w_{j, 1}},
\]
where $\ket{w_{j,1}}$ is a basis vector in $\mathcal S_j \cap \image(\Pi_{\mathsf{C}})$. \cref{lemma:varest} tells us that \emph{both} the runtime of $\mathsf{VarEstimate}^{\mathsf{C}, \mathsf{U}}$ on this state (making oracle use of $\mathsf{C}, \mathsf{U}$) and the expectation of $1/p$ (where $p$ is the output of \cref{step:variable-phase-est}) are at most a constant times
\[ \frac 1 {\gamma} \sum_j |\alpha_j|^2 p_j \cdot \frac 1 {p_j} + \delta \cdot 1/\delta \leq \frac 1 {\gamma} \sum_j |\alpha_j|^2 + 1 = \frac 1 \gamma + 1,
\]
so since $\Pr[\text{\cref{step:first-measurement} does not abort}] = \gamma$, the overall expected value bounds are as claimed.
\end{proof}
\subsubsection{The Pseudoinverse State}\label{sec:extractor-analysis-pseudoinverse-state}
As defined earlier, let $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})} $ denote the state at the end of~\cref{step:loop-reestimate} (for some arbitrary iteration of \cref{step:extract-transcript}). We prove some important properties of the subsequent states in the execution of \cref{step:extract-transcript}.
We begin with \cref{step:loop-measure-r}, which measures the $\mathcal{R}$ register, obtaining a challenge $r$ and resulting state $\DMatrixH_{\mathcal{B},\mathcal{H}}^{(3b)}$. Let \[\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}'^{(\mathsf{C})} = \frac{\Pi^{\mathsf{Jor}}_{\mathsf{Good}} \dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})} \Pi^{\mathsf{Jor}}_{\mathsf{Good}}}{\Tr(\Pi^{\mathsf{Jor}}_{\mathsf{Good}} \dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})})}\]
denote the residual state; we write $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}'^{(\mathsf{C})} = \alpha_0 \ketbra{0}_{\mathcal{B}} \otimes \dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C},0)} + \alpha_1 \ketbra{1}_{\mathcal{B}} \otimes \dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C},1)}$. By \cref{claim:reest-good-state}, we have that:
\begin{claim}\label{claim:analysis-pi-jor-good}
$\Tr(\Pi^\mathsf{Jor}_{\mathsf{Good}} \dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})}) \geq 1-\delta$.
\end{claim}
By gentle measurement, it then follows that
\[ d(\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})}, \dm_{\mathcal{B},\mathcal{H},\mathcal{R}}'^{(\mathsf{C})}) \leq 2\sqrt{\delta}.
\]
Let $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{U})} = \alpha_0 \ketbra{0}_{\mathcal{B}}\otimes \dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},0)} + \alpha_1 \ketbra{1}_{\mathcal{B}}\otimes \dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)}$ where $\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)}$ denotes the state guaranteed to exist by applying~\cref{lemma:pseudoinverse} with $\mathsf{B} = \mathsf{U} $ and $\mathsf{A} = \mathsf{C}$ on $\dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C},1)}$. Recall from~\cref{lemma:pseudoinverse} that $\Tr\left( \BProj{\mathsf{U}} \dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)}\right) = 1$, since $\Tr( \Pi_0^{\mathsf{Jor}}\dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C},1)}) = 0$. Moreover, we also have:
\begin{claim}
\label{claim:exact-pseudoinverse}
\[\dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C},1)} = \frac{\BProj{\mathsf{C}}\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)} \BProj{\mathsf{C}}}{\Tr(\BProj{\mathsf{C}}\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)})}.
\]
\end{claim}
\begin{proof}
$\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{C},1)}$ is a state satisfying $\Tr(\BProj{\mathsf{C}}\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{C},1)}) = 1$, and $\dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C},1)}$ is then a (re-normalized) projection of $\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{C},1)}$ onto $(\mathsf{U}, \mathsf{C})$-Jordan subspaces with bounded Jordan $p_j$-value. Therefore, $\dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C},1)}$ also satisfies $\Tr(\BProj{\mathsf{C}}\dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C},1)}) = 1$.
From~\cref{lemma:pseudoinverse} (Property 2) we then have \[d(\dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C},1)}, \frac{\BProj{\mathsf{C}}\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)} \BProj{\mathsf{C}}}{ \Tr(\BProj{\mathsf{C}}\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)})}) \leq 2 \sqrt{\Tr(\Pi_0 \dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C},1)})} = 0,\] which implies $\dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C},1)} = \BProj{\mathsf{C}}\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)} \BProj{\mathsf{C}}/ \Tr(\BProj{\mathsf{C}}\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)})$.
\end{proof}
\noindent Finally, because $\Tr(\Pi^{\mathsf{Jor}}_{\mathsf{Bad}} \dm_{\mathcal{B},\mathcal{H},\mathcal{R}}'^{(\mathsf{C})}) = 0$, \cref{lemma:pseudoinverse} (Property 3) tells us that $\Tr(\Pi^{\mathsf{Jor}}_{\mathsf{Bad}} \dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{U})}) =0$ as well.
Define $p_{\mathsf{U}} = \Tr(\BProj{\mathsf{C}}\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)})$ to be the normalization factor above, which is equal to the ($\mathsf{C}$-)success probability of $\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)}$.
\begin{claim}\label{claim:pseudoinverse-win-probability}
$p_{\mathsf{U}} \geq p$.
\end{claim}
\begin{proof}
Since $\Tr(\Pi^{\mathsf{Jor}}_{\geq p} \dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)}) = 1$ and $\BProj{\mathsf{C}}$ commutes with each $\Pi_j^{\mathsf{Jor}}$, we have:
\begin{align*} \Tr( \BProj{\mathsf{C}}\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)}) &= \Tr( \BProj{\mathsf{C}}\Pi^{\mathsf{Jor}}_{\geq p} \dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U}, 1)})
\\ &= \Tr(\sum_{j: p_j \geq p} \BProj{\mathsf{C}}\Pi^{\mathsf{Jor}}_{j} \dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)})
\\ &\geq p \Tr(\sum_{j: p_j \geq p} \Pi^{\mathsf{Jor}}_{j} \dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)})
\\ &= p. \qedhere
\end{align*}
\end{proof}
Since $\Tr(\BProj{\mathsf{U}} \dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)}) =1$, it can be written in the form $\DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \otimes \ketbra{+_R} $. For each $r$, we define $\zeta_r := \Tr( \Pi_{V, r} \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)})$ to be the success probability of $\DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)}$ on $r$. Finally, define $\zeta_R = \sum_r \zeta_r$.
We now proceed to analyze the state $\DMatrixH_{\mathcal{B},\mathcal{H}}^{(3b)}$. To do so, we first define $\DMatrixH_{\mathcal{B},\mathcal{H}}'^{(3b)}$ to be the state at the end of \cref{step:loop-measure-r} when $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}'^{(\mathsf{C})}$ is used in place of $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})}$. We know that $d(\DMatrixH_{\mathcal{B},\mathcal{H}}^{(3b)}, \DMatrixH_{\mathcal{B},\mathcal{H}}'^{(3b)}) \leq 2\sqrt{\delta}$, so this characterization will suffice.
\begin{claim}
\label{claim:stratify-r}
The state $\DMatrixH_{\mathcal{B},\mathcal{H}}'^{(3b)}$ is a mixed state with the following form: with probability $\alpha_0$, it is in the abort state. Otherwise, with conditional probability $\zeta_r/ \zeta_R$, \cref{step:loop-measure-r} measures challenge $r$ and the resulting state is $\ketbra{1}_{\mathcal{B}} \otimes \frac{\Pi_{V, r} \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \Pi_{V, r}}{ \zeta_r}$.
\end{claim}
\begin{proof}
By definition of the pseudoinverse state $\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)} = \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)}\otimes \ketbra{+_R}$, we can write $\dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C},1)}$ as
\[\dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C},1)} = \frac{\BProj{\mathsf{C}}( \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)}\otimes \ketbra{+_R} )\BProj{\mathsf{C}}}{\Tr(\BProj{\mathsf{C}}( \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \otimes \ketbra{+_R} ))}.\]
Since $\BProj{\mathsf{C}}= \sum_{r \in R} \Pi_{V, r}\otimes \ketbra{r}$, we can write
\begin{align*}
\BProj{\mathsf{C}}( \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)}\otimes \ketbra{+_R})\BProj{\mathsf{C}}&= (\sum_{r \in R} \Pi_{V, r} \otimes \ketbra{r})\left( \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \otimes \ketbra{+_R} \right)(\sum_{r \in R} \Pi_{V, r} \otimes \ketbra{r})\\
&= \frac{1}{\abs{R}}\sum_{r \in R} \Pi_{V, r} \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \Pi_{V, r} \otimes \ketbra{r}.
\end{align*}
Thus, we can rewrite $\dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C},1)}$ as
\begin{align*}
\frac{\BProj{\mathsf{C}}( \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \otimes \ketbra{+_R})\BProj{\mathsf{C}}}{\Tr(\BProj{\mathsf{C}} ( \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \otimes \ketbra{+_R}))} &= \frac{\frac{1}{\abs{R}}\sum_{r \in R} \Pi_{V, r} \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \Pi_{V, r}\otimes \ketbra{r}}{\Tr(\frac{1}{\abs{R}}\sum_{r \in R} \Pi_{V, r} \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \Pi_{V, r} \otimes \ketbra{r})} \\
&= \frac{\frac{1}{\abs{R}}\sum_{r \in R} \Pi_{V, r} \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \Pi_{V, r} \otimes \ketbra{r}}{\frac{1}{\abs{R}}\sum_{r \in R} \zeta_r} \\
&= \frac{\sum_{r \in R} \Pi_{V, r} \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \Pi_{V, r}\otimes \ketbra{r}}{\zeta_R}.
\end{align*}
Therefore, the probability of obtaining $r$ after measuring $\mathcal{R}$ of $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}'^{(\mathsf{C})}$ is
\begin{align*}
\frac{\Tr((\mathbf{I} \otimes \ketbra{r}) \sum_{r' \in R} \Pi_{V, r'} \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \Pi_{V, r'}\otimes \ketbra{r'})}{\zeta_R} &= \frac{\Tr( \Pi_{V, r} \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)}\Pi_{V, r}\otimes \ketbra{r} )}{\zeta_R}\\
&= \frac{\zeta_r}{\zeta_R},
\end{align*}
and the post-measurement state is $ \Pi_{V, r} \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \Pi_{V, r}/\zeta_r$. Thus, the state $\DMatrixH_{\mathcal{B},\mathcal{H}}'^{(3b)}$ is as claimed.
\end{proof}
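For intuition, the measurement statistics in the claim above can be checked numerically on a toy instance. In the sketch below (not part of the proof), the dimension, the projectors standing in for $\Pi_{V,r}$, and the state standing in for $\DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)}$ are all arbitrary choices.

```python
import numpy as np

# Toy check that measuring the R register of the (renormalized) state
# Pi_C (sigma (x) |+_R><+_R|) Pi_C yields r with probability zeta_r / zeta_R.
rng = np.random.default_rng(0)
dH, nR = 3, 2                                   # dim(H), number of challenges

proj_diags = [np.diag([1.0, 0.0, 1.0]),         # stand-ins for Pi_{V,r}
              np.diag([0.0, 1.0, 1.0])]

A = rng.normal(size=(dH, dH)) + 1j * rng.normal(size=(dH, dH))
sigma = A @ A.conj().T
sigma /= np.trace(sigma).real                   # random density matrix on H

plus = np.full((nR, nR), 1.0 / nR)              # |+_R><+_R|

def rproj(r):                                   # |r><r| on R
    e = np.zeros(nR); e[r] = 1.0
    return np.outer(e, e)

# Pi_C = sum_r Pi_{V,r} (x) |r><r|
PiC = sum(np.kron(P, rproj(r)) for r, P in enumerate(proj_diags))
state = PiC @ np.kron(sigma, plus) @ PiC        # unnormalized projected state

zeta = [np.trace(P @ sigma).real for P in proj_diags]   # zeta_r
zeta_R = sum(zeta)

probs = [np.trace(np.kron(np.eye(dH), rproj(r)) @ state).real
         / np.trace(state).real for r in range(nR)]
print(probs, [z / zeta_R for z in zeta])        # the two lists agree
```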
In particular, \cref{claim:stratify-r} tells us that the ratio $\frac{\zeta_R}{\abs{R}}$ is exactly $\mathrm{Tr}(\BProj{\mathsf{C}}\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)}) = p_{\mathsf{U}}$.
We begin our analysis of the repair step by defining the following states:
\begin{enumerate}
\item $\DMatrixW_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)} \coloneqq \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \otimes \ketbra{0}_{\mathcal{W}}.$ Here, $\DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)}$ is the state satisfying $\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)} = \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \otimes \ketbra{+_R}$.
\item $\DMatrixW_{\mathcal{H},\mathcal{W}}^{(r,1)} \coloneqq \Pi_r \DMatrixW_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)} \Pi_r/\zeta_r$.
\end{enumerate}
By \cref{claim:stratify-r} we can view our variant of~\cref{step:loop-measure-r,step:loop-phase-est,step:loop-collapsing,step:loop-svt} as follows:
\begin{itemize}
\item With probability $\alpha_0$, abort. Otherwise:
\item A challenge is sampled so that each string $r$ occurs with probability $\frac{\zeta_r}{\zeta_R}$.
\item If the string $r$ is sampled, initialize the state to $\ketbra{1}_{\mathcal{B}} \otimes \DMatrixW_{\mathcal{H},\mathcal{W}}^{(r, 1)} $.
\end{itemize}
Unfortunately, the state $\DMatrixW_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)}$ only satisfies $ \Tr(\Pi_{p, \varepsilon} \DMatrixW_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)}) \geq 1-\delta$ (it is not \emph{quite} fully in the image of $\Pi_{p, \varepsilon}$). With this in mind, we define two additional states:
\begin{enumerate}
\item[3.] $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)} \coloneqq \frac{\Pi_{p,\varepsilon} \DMatrixW_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)} \Pi_{p,\varepsilon}}{\Tr(\Pi_{p,\varepsilon} \DMatrixW_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)})}$. Since $\Tr(\Pi_{p,\varepsilon} \DMatrixW_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)}) \geq 1-\delta$, we have $d(\DMatrixW_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)},\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)}) \leq 2\sqrt{\delta}$ by~\cref{lemma:gentle-measurement}.
\item[4.] $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(r,1)} \coloneqq \Pi_r \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)} \Pi_r / \Tr(\Pi_r \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)})$.
\end{enumerate}
\noindent Let $\widetilde{\zeta}_r \coloneqq \Tr(\Pi_r \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)})$, and observe that $\widetilde{\zeta}_r \in [\zeta_r \pm 2\sqrt{\delta}]$. Define $\widetilde{\zeta}_R \coloneqq \sum_{r \in R} \widetilde{\zeta}_r$ and $\widetilde{p}_{\mathsf{U}} \coloneqq \widetilde{\zeta}_R/\abs{R}$.
\begin{claim}\label{claim:fake-p-inequality} $|\widetilde{p}_{\mathsf{U}} - p_{\mathsf{U}}| \leq 2\sqrt{\delta}$.
\end{claim}
\begin{proof}
For every string $r$, we have that
\[ |\zeta_r - \widetilde \zeta_r | = \left|\Tr\left(\Pi_r( \DMatrixW_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)} - \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)})\right)\right| \leq 2\sqrt{\delta}
\]
since $\|\DMatrixW_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)} - \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)}\|_1 \leq 2\sqrt{\delta}$. Averaging these per-$r$ bounds over $r \in R$ then gives
\[ \left|\frac{\zeta_R}{\abs{R}} - \frac{\widetilde \zeta_R}{\abs{R}}\right| \leq \frac{1}{\abs{R}} \sum_{r \in R} |\zeta_r - \widetilde{\zeta}_r| \leq 2\sqrt{\delta}.
\]
\end{proof}
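The two norm facts used in this proof, the gentle-measurement bound and the bound $|\Tr(\Pi \Delta)| \leq \|\Delta\|_1$, can be checked numerically. The sketch below (not part of the proof) uses an arbitrary random state and arbitrary projectors.

```python
import numpy as np

# Toy check: projecting a state onto a subspace of weight 1 - delta moves it
# by at most 2*sqrt(delta) in trace norm (gentle measurement), and any
# probability Tr(Pi_r rho) then moves by at most that trace norm.
rng = np.random.default_rng(1)
d = 4
A = rng.normal(size=(d, d))
rho = A @ A.T
rho /= np.trace(rho)                         # random density matrix

Pi = np.diag([1.0, 1.0, 1.0, 0.0])           # "good" subspace projector
delta = 1.0 - np.trace(Pi @ rho)             # weight outside the subspace

rho_t = Pi @ rho @ Pi / np.trace(Pi @ rho)   # renormalized projection

tnorm = np.abs(np.linalg.eigvalsh(rho - rho_t)).sum()   # ||rho - rho_t||_1

Pi_r = np.diag([1.0, 0.0, 1.0, 0.0])         # an arbitrary test projector
gap = abs(np.trace(Pi_r @ (rho - rho_t)))    # analogue of |zeta_r - zeta~_r|

print(tnorm, 2 * np.sqrt(delta), gap)        # tnorm <= 2 sqrt(delta) >= gap
```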
Consider the following two mixed states
\[ \DMatrixT_{\mathcal{H},\mathcal{R},\mathcal{W}} := \sum_r \frac{\zeta_r}{\zeta_R} \ketbra r \otimes \frac{\Pi_r \DMatrixW_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)} \Pi_r}{\zeta_r} , \text{ and}
\]
\[\widetilde{\DMatrixT}_{\mathcal{H},\mathcal{R},\mathcal{W}} := \sum_r \frac{\widetilde \zeta_r}{\widetilde \zeta_R} \ketbra r \otimes \frac{\Pi_r \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)} \Pi_r}{\widetilde{\zeta}_r}.
\]
We claim that these two mixed states are close in trace distance.
\begin{claim}\label{claim:tau-trace-distance}
$|| \DMatrixT - \widetilde{\DMatrixT}||_1 \leq \frac{4\sqrt{\delta}}{p_{\mathsf{U}}}$.
\end{claim}
To see this, we first note that
\begin{align*} \left|\left| \widetilde{\DMatrixT} - \frac {\widetilde \zeta_R}{\zeta_R}\widetilde{\DMatrixT} \right|\right|_1 &= \left|1 -\frac{\widetilde \zeta_R}{\zeta_R}\right|\cdot \left|\left| \widetilde{\DMatrixT}\right|\right|_1
\\ &= \left|1 -\frac{\widetilde \zeta_R}{\zeta_R}\right|
\\ &= \left|1 -\frac{\widetilde p_{\mathsf{U}}}{p_{\mathsf{U}}}\right|
\\ &\leq \frac{2\sqrt{\delta}}{p_{\mathsf{U}}}
\end{align*}
\noindent Moreover, we have that
\begin{align*}
\left|\left| \DMatrixT - \frac {\widetilde \zeta_R}{\zeta_R}\widetilde{\DMatrixT} \right|\right|_1
&= \frac 1 {\zeta_R} \left|\left|\sum_r \ketbra{r} \otimes \Pi_r (\DMatrixW_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)} - \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)}) \Pi_r \right|\right|_1
\\ &\leq \frac {|R|}{\zeta_R} \cdot ||\DMatrixW_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)} - \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)}||_1
\\ &\leq \frac {|R|}{\zeta_R} \cdot 2\sqrt{\delta}
\\ &= \frac{2\sqrt{\delta}}{p_{\mathsf{U}}}.
\end{align*}
\noindent Thus, we conclude that $|| \DMatrixT - \widetilde{\DMatrixT}||_1 \leq \frac{2\sqrt{\delta}}{p_{\mathsf{U}}} + \frac {2\sqrt{\delta}}{p_{\mathsf{U}}} = \frac{4\sqrt{\delta}}{p_{\mathsf{U}}}$ by the triangle inequality.
This trace bound will allow us to analyze correctness and bound the expected runtime of the extractor by appealing to properties of the state $\widetilde{\DMatrixT}$.
\subsubsection{Runtime Analysis}
In this section, we bound the expected running time of $\mathsf{Ext}$ (proving property (1) of \cref{thm:high-probability-extraction}).
\begin{theorem}\label{thm:expected-qpt}
For any QPT $P^*$, $\mathsf{Ext}^{P^*}$ runs in $\mathsf{EQPT}_{m}$.
\end{theorem}
We note that \cref{lemma:step-1-2} already showed that the expected running time of \cref{step:first-measurement,step:variable-phase-est} is $O(1)$ calls to $(\mathsf{U}, \mathsf{C})$.
Next, we show that the expected runtime of the main loop (\cref{step:extract-transcript}) is also ${\rm poly}(\lambda)$. To prove this, we make use of the syntactic property (enforced by the definition of \cref{step:loop-amplify-C}) that for every $i\in \{0,1,\dots,k\}$, the state $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C},\mathrm{init})}$ at the beginning of the $i$th iteration of \cref{step:extract-transcript} is in $\image(\Pi_{\mathsf{C}})$ (provided that the computation has not aborted). We then show
\begin{lemma}\label{lemma:running-time-p}
Let $p$ be an arbitrary real number output by \cref{step:variable-phase-est}, and let $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C},\mathrm{init})}$ be an \emph{arbitrary} non-aborted state (i.e. $\mathcal{B}$ is initialized to $\ketbra{1}_{\mathcal{B}}$) that is in the image of $\Pi_{\mathsf{C}}$.
Then, the expected runtime of one iteration of \cref{step:extract-transcript} on $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C},\mathrm{init})}$ is ${\rm poly}(\lambda)/p$.
\end{lemma}
\begin{proof}
We analyze the running time assuming that the collapsing measurement of~\cref{step:loop-collapsing} is not performed. This is without loss of generality; the steps following the (partial) collapsing measurement have a fixed runtime (as a function of previously computed parameters in the execution), so the collapsing measurement cannot affect the overall expected running time.\footnote{We only remove the collapsing measurement of the \emph{current} loop iteration; previous collapsing measurements are baked into the (arbitrary) state $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C},\mathrm{init})}$.}
First, note that \cref{step:loop-reestimate,step:loop-amplify-C} run in a fixed ${\rm poly}(\lambda)/\sqrt{p}$ time by \cref{thm:svt}. Thus, we focus on \cref{step:loop-phase-est,step:loop-svt}.
We bound the expected runtime of \cref{step:loop-phase-est,step:loop-svt} via the following hybrid argument.
\begin{itemize}
\item $\mathsf{Hyb}_0$: This is the real procedure, assuming that \cref{step:loop-collapsing} is not performed.
\item $\mathsf{Hyb}_1$: In this hybrid, the $\mathcal{R}$-measurement outcome and residual state $\DMatrixH_{\mathcal{B},\mathcal{H}}^{(3b)}$ are prepared differently:
\begin{itemize}
\item With probability $\alpha_0$, abort. Otherwise:
\item The challenge $r$ is sampled with probability equal to $\zeta_r/\zeta_R$.
\item If the string $r$ is sampled, initialize the state to $\ketbra{1}_{\mathcal{B}} \otimes \frac{\Pi_{V, r} \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \Pi_{V, r}}{ \zeta_r}$.
\end{itemize}
This is an alternate description of the state $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}'^{(\mathsf{C})}$.
\item $\mathsf{Hyb}_2$: In this hybrid, the state at the beginning of \cref{step:loop-phase-est} (which is usually $\DMatrixH_{\mathcal{B},\mathcal{H}}^{(3b)}\otimes \ketbra{0}_{\mathcal{W}}$) is prepared differently:
\begin{itemize}
\item With probability $\alpha_0$, abort. Otherwise:
\item A challenge is sampled so that each string $r$ occurs with probability $\widetilde \zeta_r/\widetilde \zeta_R$.
\item If the string $r$ is sampled, initialize the state to $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(r,1)}$.
\end{itemize}
\end{itemize}
\begin{claim}
The expected running times of $\mathsf{Hyb}_0$, $\mathsf{Hyb}_1$ differ by at most $O(k)$, and the expected running times of $\mathsf{Hyb}_1$ and $\mathsf{Hyb}_2$ differ by at most $O(k/p)$.
\end{claim}
\begin{proof}
The \emph{worst-case} running time of $\mathsf{Ext}$ is capped at $O(k)/\sqrt{\delta}$ by definition. We will combine this with trace distance bounds to prove the claim.
For $\mathsf{Hyb}_0$ and $\mathsf{Hyb}_1$, we note that the running time of \cref{step:loop-phase-est,step:loop-svt} can be viewed as a classical distribution over integers obtained via applying a CPTP map to the input state, which is either $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})}$ or $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}'^{(\mathsf{C})}$. Since trace distance is contractive under CPTP maps, we conclude that these integer distributions are $2\sqrt{\delta}$-close in statistical distance. Since both distributions are supported on integers of size at most $O(k)/\sqrt{\delta}$, their expectations therefore differ by at most $O(k/\sqrt{\delta}) \cdot 2\sqrt{\delta} = O(k)$.
The argument is similar for $\mathsf{Hyb}_1$ and $\mathsf{Hyb}_2$, except that the running time of \cref{step:loop-phase-est,step:loop-svt} can instead be viewed as a classical distribution obtained via a CPTP map from either $\DMatrixT$ or $\widetilde{\DMatrixT}$, which have trace distance at most $\frac{4\sqrt{\delta}}{p_{\mathsf{U}}} \leq \frac{4\sqrt{\delta}}{p}$.
\end{proof}
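The step from statistical closeness to closeness of expected runtimes can be illustrated with a small numeric example (not part of the proof; the two distributions below are arbitrary): for distributions supported on $[0, B]$ at statistical distance $\varepsilon$, the expectations differ by at most $B\varepsilon$.

```python
# Toy check: two distributions over integers in [0, B] at statistical
# distance eps have expectations differing by at most B * eps.
# B stands in for the k/sqrt(delta) runtime cap; the probabilities are
# arbitrary illustrative numbers.
B = 100
P = {1: 0.70, 10: 0.20, B: 0.10}
Q = {1: 0.68, 10: 0.21, B: 0.11}

support = set(P) | set(Q)
eps = 0.5 * sum(abs(P.get(x, 0.0) - Q.get(x, 0.0)) for x in support)
EP = sum(x * p for x, p in P.items())
EQ = sum(x * q for x, q in Q.items())
print(eps, abs(EP - EQ), B * eps)   # |E_P - E_Q| <= B * eps
```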
\noindent Thus, it suffices to bound the expected runtime in the procedure $\mathsf{Hyb}_2$.
Without~\cref{step:loop-collapsing}, we can view~\cref{step:loop-phase-est,step:loop-svt} as a variable-runtime $\mathsf{Transform}^{\mathsf{D}_r\rightarrow \mathsf{G}_{p, \varepsilon, \delta}}$ with respect to the projectors $(\Pi_r, \Pi_{p, \varepsilon})$, where $r$ is sampled from the above distribution. We first analyze the runtime of this procedure for a fixed value of $r$.
Let $\Jor_r = (\Pi_j^{\Jor_r})_j$ denote the Jordan measurement corresponding to projections $(\Pi_r, \Pi_{p, \varepsilon})$, and let $q_j$ denote the eigenvalue associated with $\Pi_j^{\Jor_r}$. Define the \emph{Jordan weights} of $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)}$ as the vector $(y^{\Jor_r}_j)_j$ where
\[ y^{\Jor_r}_j \coloneqq \Tr(\Pi^{\Jor_r}_j \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)}).\]
Then, the Jordan weights of $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(r,1)}$ are $(z^{\Jor_r}_j)_j$ where
\[ z^{\Jor_r}_j \coloneqq q_j y^{\Jor_r}_j/\widetilde{\zeta}_r.\]
\begin{claim}\label{lemma:svt-runtime-r}
Given the string $r$ and state $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(r,1)}$ as input, \cref{step:loop-phase-est,step:loop-collapsing,step:loop-svt} make an expected ${\rm poly}(\lambda)/\sqrt{\widetilde \zeta_r}$ calls to $\Pi_{p,\varepsilon}$ and $\Pi_r$.
\end{claim}
\begin{proof}
By \cref{thm:vrsvt}, the expected running time (in number of calls to $\Pi_r,\Pi_{p,\varepsilon}$) of \cref{step:loop-phase-est,step:loop-collapsing,step:loop-svt} on a state with Jordan weights $(q_j y^{\Jor_r}_j/\widetilde{\zeta}_r)_j$ is
\begin{align*} \sum_j \frac{q_j y^{\Jor_r}_j}{\widetilde{\zeta}_r} \cdot \frac{{\rm poly}(\lambda)}{\sqrt{q_j}} &\leq {\rm poly}(\lambda) \sqrt{\sum_j \frac{q_j y^{\Jor_r}_j}{\widetilde{\zeta}_r q_j} }\\
&= {\rm poly}(\lambda) \sqrt{ \frac{1}{\widetilde{\zeta}_r } }.
\end{align*}
\noindent This completes the proof of \cref{lemma:svt-runtime-r}.
\end{proof}
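The Jensen step in this proof admits a quick numeric check. In the sketch below (not part of the proof), the Jordan weights $y_j$ and eigenvalues $q_j$ are arbitrary; the key identities are $\sum_j q_j y_j = \widetilde{\zeta}_r$ and $\sum_j y_j = 1$.

```python
import numpy as np

# Jensen's inequality for the concave function sqrt: for weights
# w_j = q_j y_j / zeta_r summing to 1,
#   sum_j w_j / sqrt(q_j) <= sqrt(sum_j w_j / q_j) = 1 / sqrt(zeta_r),
# where the last equality uses sum_j y_j = 1.  Values are arbitrary.
rng = np.random.default_rng(2)
y = rng.dirichlet(np.ones(5))            # Jordan weights, summing to 1
q = rng.uniform(0.01, 1.0, size=5)       # Jordan eigenvalues
zeta_r = float(np.dot(q, y))
w = q * y / zeta_r                       # weights of the post-selected state

lhs = float(np.sum(w / np.sqrt(q)))
rhs = float(np.sqrt(np.sum(w / q)))
print(lhs, rhs, 1.0 / np.sqrt(zeta_r))   # lhs <= rhs = 1/sqrt(zeta_r)
```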
\noindent By \cref{lemma:svt-runtime-r}, along with the fact that $\Pi_{p, \varepsilon}$ is implemented in a fixed ${\rm poly}(\lambda)/\sqrt{p}$ time, the expected running time of~\cref{step:loop-phase-est,step:loop-collapsing,step:loop-svt} in $\mathsf{Hyb}_2$ is:
\begin{align}
\sum_{r \in R} \frac{\widetilde{\zeta}_r}{\widetilde{\zeta}_R} \cdot \frac{{\rm poly}(\lambda)}{\sqrt{\widetilde{\zeta}_r p}} &= \frac{{\rm poly}(\lambda)}{\sqrt{ p}} \sum_{r \in R} \frac{\widetilde{\zeta}_r}{\widetilde{\zeta}_R} \cdot \frac 1 {\sqrt{\widetilde{\zeta}_r}} \nonumber \\
&\leq \frac{{\rm poly}(\lambda)}{\sqrt{ p}} \sqrt{\sum_{r \in R} \frac{\widetilde{\zeta}_r}{\widetilde{\zeta}_R} \cdot \frac 1 {\widetilde{\zeta}_r}} \nonumber\\
&= \frac{{\rm poly}(\lambda)}{\sqrt{ p}} \sqrt{ \frac{\abs{R}}{\widetilde{\zeta}_R}} \nonumber\\
&= \frac{{\rm poly}(\lambda)}{\sqrt{ p}} \sqrt{\frac 1 { \widetilde{p}_{\mathsf{U}}}} \nonumber \\
&\leq \frac{{\rm poly}(\lambda)}{ \sqrt{p (p - 2\sqrt{\delta})}} \nonumber \\ &\leq \frac{{\rm poly}(\lambda)}{p}
\end{align}
where the first inequality is an application of Jensen's inequality, the second inequality holds by \cref{claim:fake-p-inequality}, and the last inequality holds by the abort condition in \cref{step:variable-phase-est} ($p$ drops by a factor of at most $2$ in the entire process).
This completes the proof of \cref{lemma:running-time-p}.
\end{proof}
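Similarly, the first inequality in the runtime display above, $\sum_r \frac{\widetilde{\zeta}_r}{\widetilde{\zeta}_R} \cdot \widetilde{\zeta}_r^{-1/2} \leq \sqrt{\abs{R}/\widetilde{\zeta}_R}$, can be checked numerically (not part of the proof; the $\widetilde{\zeta}_r$ values below are arbitrary).

```python
import numpy as np

# Jensen check for the averaging over challenges r:
#   sum_r (zeta_r / zeta_R) / sqrt(zeta_r) <= sqrt(|R| / zeta_R).
rng = np.random.default_rng(3)
zetas = rng.uniform(0.01, 1.0, size=8)   # arbitrary per-challenge weights
zeta_R = float(zetas.sum())

lhs = float(np.sum((zetas / zeta_R) / np.sqrt(zetas)))
rhs = float(np.sqrt(len(zetas) / zeta_R))
print(lhs, rhs)                          # lhs <= rhs
```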
Finally, combining \cref{lemma:running-time-p} with \cref{lemma:step-1-2} along with the fact that throughout the extraction procedure, the updated value of $p$ is at most a factor of $2$ smaller than the initial output of \cref{step:variable-phase-est}, we conclude that the overall expected running time of \cref{step:extract-transcript} is at most $\mathbb E[{\rm poly}(\lambda)/p] \leq {\rm poly}(\lambda)$, completing the proof of \cref{thm:expected-qpt}.
\subsubsection{Correctness of the repair step}
\label{subsec:repair-runtime}
In this section, we prove that $\mathsf{Ext}$ aborts with negligible probability (property (2) of \cref{thm:high-probability-extraction}).
\begin{lemma}\label{lemma:negl-abort}
The probability that the procedure aborts is negligible.
\end{lemma}
\begin{proof}
By \cref{thm:expected-qpt} and Markov's inequality, the probability that the procedure aborts because it exceeds its runtime cutoff is at most ${\rm poly}(\lambda) \cdot \sqrt{\delta} = {\rm negl}(\lambda)$.
By \cref{lemma:step-1-2}, $\mathbb{E}[1/p \cdot X] = O(1)$, where $X$ is the indicator for whether \cref{step:first-measurement} outputs $1$. Hence by \cref{claim:abort-chosen-p} (proven below) and \cref{cor:threshold-est-almost} (which implies that the first iteration of \cref{step:loop-reestimate} only aborts with negligible probability), the probability that any iteration of the loop aborts when we remove \cref{step:loop-collapsing} is at most
\begin{equation*}
k \cdot O(\sqrt{\delta}) \cdot \mathbb{E}[1/p \cdot X] = O(k\sqrt{\delta}) = {\rm negl}(\lambda).
\end{equation*}
Then by the collapsing guarantee (applied to the measurements of $y_1, \hdots, y_{k-1}$; it is not necessary for $y_k$), and by \cref{thm:expected-qpt}, the probability that any iteration of the loop aborts is negligible.
\end{proof}
\begin{claim}
\label{claim:abort-chosen-p}
Let $\dm_{\mathcal{H},\mathcal{R}} \in \Hermitians{\mathcal{H} \otimes \mathcal{R}}$ be a state such that $\Tr(\BProj{\mathsf{C}} \dm_{\mathcal{H},\mathcal{R}}) = 1$, and consider running two iterations of \cref{step:extract-transcript} in sequence on $\dm_{\mathcal{H},\mathcal{R}}$, with the following modifications:
\begin{itemize}
\item \cref{step:loop-collapsing} is not applied, and
\item The $O(k)/\sqrt{\delta}$ runtime cutoff has been removed.
\end{itemize}
Then, for any choice of $p \in [0,1]$, the probability that the first iteration (with initial value $p$) does not abort in \cref{step:loop-reestimate} and the second iteration aborts in \cref{step:loop-reestimate} is at most $O(\sqrt{\delta}/p)$.
Also, the probability that an iteration of \cref{step:extract-transcript} (where \cref{step:loop-collapsing} is not applied) does not abort in \cref{step:loop-reestimate} but does abort in \cref{step:loop-amplify-C} is at most $O(\sqrt{\delta}/p)$.
\end{claim}
\begin{proof}
For a projector $\Pi$, we write $\Pi^{\mathcal{B}}$ to denote the projection
\[
\ketbra{0}_{\mathcal{B}} \otimes \mathbf{I} + \ketbra{1}_{\mathcal{B}} \otimes \Pi.
\]
Let $\mathcal{B}$ store the output of $\mathsf{Threshold}$ in the first application of \cref{step:loop-reestimate}. Recall that in \cref{sec:extractor-analysis-pseudoinverse-state} we have defined the following states:
\begin{itemize}
\item $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})}$ denotes the state after applying \cref{step:loop-reestimate} (i.e., coherently applying $\mathsf{Threshold}$ to $\dm_{\mathcal{H},\mathcal{R}}$ where the output is stored on $\mathcal{B}$). The extraction procedure now re-defines/updates $p := p - \varepsilon$. Note that $\Tr((\SProj[\mathsf{Jor}]{\geq p})^{\mathcal{B}} \dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})}) \geq 1 - \delta$ (\cref{claim:analysis-pi-jor-good}).
\item $\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{C},b)} \coloneqq \bra{b}_{\mathcal{B}} \dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})} \ket{b}_{\mathcal{B}}/q$ where $q \coloneqq \Tr(\bra{b}_{\mathcal{B}} \dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})} \ket{b}_{\mathcal{B}})$. We may assume $q > 0$ or else the claim holds trivially.
\item $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}'^{(\mathsf{C})} \coloneqq \frac{\left(\SProj[\mathsf{Jor}]{\geq p}\right)^{\mathcal{B}} \dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})} \left(\SProj[\mathsf{Jor}]{\geq p}\right)^{\mathcal{B}}}{\Tr(\left(\SProj[\mathsf{Jor}]{\geq p}\right)^{\mathcal{B}} \dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})} )}$ is the result of projecting $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})}$ onto eigenvalues $\geq p$ (for the Jordan decomposition corresponding to $\BProj{\mathsf{C}},\BProj{\mathsf{U}}$) when $\mathcal{B} = 1$.
\item $\dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C}, 1)} \coloneqq \bra{1}_{\mathcal{B}} \dm_{\mathcal{B},\mathcal{H},\mathcal{R}}'^{(\mathsf{C})} \ket{1}_{\mathcal{B}}/q'$ where $q' \coloneqq \Tr( \bra{1}_{\mathcal{B}} \dm_{\mathcal{B},\mathcal{H},\mathcal{R}}'^{(\mathsf{C})} \ket{1}_{\mathcal{B}})$. Note that $\Tr(\BProj{\mathsf{C}}\dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C}, 1)}) =1$.
\item $\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)}$ denotes the pseudoinverse of $\dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C}, 1)}$ with respect to $(\mathsf{U},\mathsf{C})$ as guaranteed by the pseudoinverse lemma (\cref{lemma:pseudoinverse}); by definition, $\Tr(\BProj{\mathsf{U}}\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U}, 1)}) =1$. Moreover $\Tr(\BProj{\mathsf{C}}\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U}, 1)}) \geq p$ (\cref{claim:pseudoinverse-win-probability}) since all the $(\BProj{\mathsf{C}},\BProj{\mathsf{U}})$-Jordan-eigenvalues of $\dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C}, 1)}$ are at least $p$, which implies the same property holds for the pseudoinverse state.
\item $\DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \coloneqq \Tr_{\mathcal{R}}(\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)})$. Note that since $\Tr(\BProj{\mathsf{U}}\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U}, 1)}) =1$, we have $\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U}, 1)} = \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \otimes \ketbra{+}_{\mathcal{R}}$.
\item $\dm_{\mathcal{H},\mathcal{R}}'^{(3b,1)} \coloneqq \frac{\sum_{r} \BProj{V,r} \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \BProj{V,r} \otimes \ketbra{r}}{|R| \cdot \Tr(\BProj{\mathsf{C}} \dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)})} $ is within trace distance $2\sqrt{\delta}$ of the state after measuring the $\mathcal{R}$ register in~\cref{step:loop-measure-r} but \emph{before} discarding $\mathcal{R}$ (\cref{claim:stratify-r}).
\item $\DMatrixW_{\mathcal{H},\mathcal{W}}^{(\mathsf{U}, 1)} \coloneqq \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \otimes \ketbra{0}_{\mathcal{W}}$. We have that $\Tr(\Pi_{p, \varepsilon, \delta}\DMatrixW_{\mathcal{H},\mathcal{W}}^{(\mathsf{U}, 1)}) \geq 1-\delta $ because $\mathsf{Threshold}_{p,\varepsilon,\delta}$ outputs $1$ on $\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)}$ with probability $1-\delta$ by the Jordan spectrum guarantee of \cref{lemma:pseudoinverse}.
\item $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)} \coloneqq \frac{\Pi_{p, \varepsilon, \delta}\DMatrixW^{(\mathsf{U}, 1)}\Pi_{p, \varepsilon, \delta}}{\Tr(\Pi_{p, \varepsilon, \delta}\DMatrixW^{(\mathsf{U}, 1)} )}$. By the gentle measurement lemma (\cref{lemma:gentle-measurement}), we have $d(\DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \otimes \ketbra{0}_{\mathcal{W}}, \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)}) \leq 2\sqrt{\delta}$.
\item $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(r,1)} \coloneqq \frac{\Pi_r \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)} \Pi_r }{\widetilde \zeta_r}$ where $\widetilde \zeta_r \coloneqq \Tr(\Pi_r \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)})$ for all $r \in R$.
\item $\widetilde{\DMatrixT}_{\mathcal{H},\mathcal{W},\mathcal{R}} \coloneqq \sum_r \frac{\widetilde \zeta_r}{\widetilde \zeta_R} \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(r,1)}\otimes \ketbra{r}$ for $\widetilde{\zeta}_R = \sum_r \widetilde{\zeta}_r$. By \cref{claim:tau-trace-distance}, we have that $\widetilde{\DMatrixT}_{\mathcal{H},\mathcal{W},\mathcal{R}}$ is within trace distance $\frac{4\sqrt{\delta}}{p}$ of the state $\DMatrixT = \dm_{\mathcal{H},\mathcal{R}}'^{(3b,1)} \otimes \ketbra{0}_{\mathcal{W}}$.
\end{itemize}
We now consider the application of the variable-runtime singular vector transform performed across \cref{step:loop-phase-est,step:loop-svt} (recall that we omit \cref{step:loop-collapsing} for this analysis). We consider applying these steps to the state $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(r,1)}$. Note that \cref{step:loop-phase-est,step:loop-svt} commute with $\Meas{\mathsf{Jor}}[\mathsf{D}_r,\mathsf{G}_{p,\varepsilon,\delta}]$. Hence writing $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(3e,r,1)}$ for the state after applying \cref{step:loop-phase-est,step:loop-svt} to $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(r,1)}$, we have
\[ \Tr(\SProj[\Jor_r]{j} \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(3e,r,1)}) = \Tr(\SProj[\Jor_r]{j} \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(r,1)}) = \frac{q_j \Tr(\SProj[\Jor_r]{j}\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)})}{\widetilde{\zeta}_r}~ \]
where $\SProj[\Jor_r]{j}$ is the $j$-th element of $\Meas{\mathsf{Jor}}[\mathsf{D}_r,\mathsf{G}_{p,\varepsilon,\delta}]$. Let $\SProj[\Jor_r]{\mathsf{stuck}}$ be defined (analogous to $\SProj[\mathsf{Jor}]{\mathsf{stuck}}$ in~\cref{claim:jordan-rotate}) as $\SProj[\Jor_r]{\mathsf{stuck}} \coloneqq \sum_{j \in S} \SProj[\Jor_r]{j}$ where $S$ is the set of all $j$ such that $\mathcal{S}_j$ is a one-dimensional Jordan subspace with $\mathcal{S}_j \subseteq \image(\mathbf{I}-\BProj{p,\varepsilon,\delta})$. We now invoke~\cref{claim:jordan-rotate} to ``rotate'' the state $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(3e,r,1)}$ into $\image(\BProj{p,\varepsilon,\delta})$ while preserving the Jordan spectrum, which is possible as long as the component of the state in $\image(\SProj[\Jor_r]{\mathsf{stuck}})$ is $0$. This is satisfied here because $\Tr(\SProj[\Jor_r]{\mathsf{stuck}} \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(3e,r,1)}) = \Tr(\SProj[\Jor_r]{\mathsf{stuck}} \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)}) = 0$ since $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)}$ was defined so that $\Tr(\BProj{p,\varepsilon,\delta}\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)}) = 1$.
Additionally, by the guarantee of \cref{thm:vrsvt}, $\Tr(\BProj{p,\varepsilon,\delta} \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(3e,r,1)}) \geq 1 - \delta$. Hence by \cref{claim:jordan-rotate}, there exists a state $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{G},r,1)}$ with the same Jordan spectrum as $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(3e,r,1)}$, $\Tr(\BProj{p,\varepsilon,\delta} \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{G},r,1)}) = 1$ and $d(\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{G},r,1)},\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(3e,r,1)}) \leq \sqrt{\delta}$. Note that $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{G},r,1)}$ also has the same Jordan spectrum as $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(r,1)}$.
Consider the pseudoinverse state $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{D},r,1)}$ under $(\mathsf{D}_r,\mathsf{G}_{p,\varepsilon,\delta})$ of $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{G},r,1)}$. Since the Jordan spectrum of $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{G},r,1)} \in \image(\BProj{p,\varepsilon,\delta})$ is identical to the Jordan spectrum of $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(r,1)} \in \image(\BProj{r})$, and $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)}$ is the pseudoinverse under $(\mathsf{G}_{p,\varepsilon,\delta},\mathsf{D}_r)$ of $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(r,1)}$, it follows that
\[\Tr( \BProj{p,\varepsilon,\delta} \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{D},r,1)}) = \Tr( \BProj{r} \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{U},1)}) = \widetilde{\zeta}_r,\]
and moreover $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{G},r,1)} = \BProj{p,\varepsilon,\delta} \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{D},r,1)} \BProj{p,\varepsilon,\delta}/\widetilde{\zeta}_r$.
Hence, if the state before \cref{step:loop-phase-est} is $\sum_r \frac{\widetilde{\zeta}_r}{\widetilde{\zeta}_R} \widetilde{\DMatrixW}^{(r,1)}_{\mathcal{H},\mathcal{W}}$ (which is $\frac{4\sqrt{\delta}}p$ close to the actual state before \cref{step:loop-phase-est}) then the state after \cref{step:loop-svt} is $O(\sqrt{\delta})$-close to the following state:
\begin{equation*}
\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(3e,1)} \coloneqq \frac{1}{\widetilde{\zeta}_R} \sum_{r} \BProj{p,\varepsilon,\delta} \left( \widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{D},r,1)}\right) \BProj{p,\varepsilon,\delta}.
\end{equation*}
Therefore, writing $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{D},r,1)} = \widetilde{\DMatrixH}_{\mathcal{H}}^{(\mathsf{D},r,1)}\otimes \ketbra{0}_{\mathcal{W}} $ ($\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{D},r,1)}$ has this form since $\widetilde{\DMatrixW}_{\mathcal{H},\mathcal{W}}^{(\mathsf{D},r,1)} \in \image(\BProj{r})$), the state at the end of \cref{step:loop-discard-w} is $O(\sqrt{\delta})$-close to
\begin{equation*}
\widetilde{\DMatrixH}_{\mathcal{H}}^{(3f,1)} \coloneqq \frac{1}{\widetilde{\zeta}_R} \sum_{r\in R} M_{p,\varepsilon,\delta} \left( \widetilde{\DMatrixH}_{\mathcal{H}}^{(\mathsf{D},r,1)}\right) M_{p,\varepsilon,\delta}^{\dagger},
\end{equation*}
where $M_{p,\varepsilon,\delta}$ is the measurement element of $\mathsf{T}_{p,\varepsilon,\delta}$ that corresponds to a $1$ outcome.
By the guarantee of $\mathsf{Threshold}$ (\cref{thm:svdisc}), it holds that for all states $\DMatrixH \in \Hermitians{\mathcal{H}}$,
\begin{equation*}
\Tr(\SProj[\mathsf{Jor}]{< p-\varepsilon} M_{p,\varepsilon,\delta} \DMatrixH M_{p,\varepsilon,\delta}^{\dagger}) \leq \delta,
\end{equation*}
and so by linearity,
\begin{align}
\Tr(\SProj[\mathsf{Jor}]{< p-\varepsilon} \widetilde{\DMatrixH}_{\mathcal{H}}^{(3f,1)}) &= \frac{1}{\widetilde{\zeta}_R} \sum_{r\in R} \Tr(\SProj[\mathsf{Jor}]{< p-\varepsilon}M_{p,\varepsilon,\delta} \left( \widetilde{\DMatrixH}_{\mathcal{H}}^{(\mathsf{D},r,1)}\right) M_{p,\varepsilon,\delta}^{\dagger}) \nonumber \\ &\leq \frac 1 {\widetilde{\zeta}_R} \cdot |R| \cdot \delta \nonumber \\
&\leq \frac{\delta}{p-2\sqrt{\delta}} \label{eq:fake-p}\\
&= O(\delta/p) \nonumber,
\end{align}
where \cref{eq:fake-p} holds by \cref{claim:fake-p-inequality}. It follows from the guarantees of the fixed-runtime singular vector transform (\cref{thm:svt}) that the state at the end of \cref{step:loop-amplify-C} is $O(\delta)$-close to the state $\widetilde{\dm}_{\mathcal{H},\mathcal{R}}^{(3g,1)} = \mathsf{Transform}_{p-\varepsilon}[\mathsf{U} \to \mathsf{C}](\widetilde{\DMatrixH}_{\mathcal{H}}^{(3f,1)}\otimes \ketbra{+_R}_{\mathcal{R}})$, which has the property that
\[ \Tr((\mathbf{I}-\BProj{\mathsf{C}}) \SProj[\mathsf{Jor}]{\geq p-\varepsilon} (\widetilde{\DMatrixH}_{\mathcal{H}}^{(3f,1)}\otimes \ketbra{+_R}_{\mathcal{R}})) \leq \delta.\]
Combining this with $\Tr(\SProj[\mathsf{Jor}]{< p-\varepsilon} \widetilde{\DMatrixH}_{\mathcal{H}}^{(3f,1)}) = O(\delta/p)$, we conclude that if the state before \cref{step:loop-phase-est} is $\sum_r \frac{\widetilde{\zeta}_r}{\widetilde{\zeta}_R} \widetilde{\DMatrixW}^{(r,1)}_{\mathcal{H},\mathcal{W}}$, then the probability that \cref{step:loop-amplify-C} aborts is at most $O(\delta/p)$. Additionally, the guarantee of $\mathsf{Threshold}$ (\cref{thm:svdisc}) implies that in the next iteration of \cref{step:extract-transcript}, the probability that \cref{step:loop-reestimate} aborts on $\widetilde{\dm}_{\mathcal{H},\mathcal{R}}^{(3g,1)}$ is also at most $O(\delta/p)$. By a trace distance argument, the probability that \cref{step:loop-amplify-C} or the subsequent \cref{step:loop-reestimate} aborts in a \emph{real} execution of \cref{step:extract-transcript} (with the modifications as in the statement of~\cref{claim:abort-chosen-p}) when the first \cref{step:loop-reestimate} did not abort is at most $O(\sqrt{\delta}/p)$. This completes the proof of \cref{claim:abort-chosen-p}.
\end{proof}
\subsubsection{Correctness of Transcript Generation}
Finally, we prove property (3) of \cref{thm:high-probability-extraction}.
\begin{lemma}
For every $\tau_{\mathrm{pre}} = (\mathsf{vk}, a)$, let $\gamma = \gamma_{\mathsf{vk}, a}$ denote the initial success probability of $P^*$ conditioned on $\tau_{\mathrm{pre}}$. Then, if $\gamma > \delta^{1/3}$, the distribution $D_k$ on $(r_1, \hdots, r_k)$ (conditioned on $(\mathsf{vk}, a)$ and a successful first execution) is $O(1/\gamma)$-admissible (\cref{def:admissible-dist}).
\end{lemma}
This follows by appealing to the following claim in each round, making use of the fact that the expectation of $1/p$ conditioned on an accepting initial execution is equal\footnote{Here (and elsewhere) we informally make use of the fact that the ``current'' value of $p$ in any iteration of \cref{step:extract-transcript} is always at least $p_0/2$, where $p_0$ is the \emph{initial} estimated $p$.} to $1/\gamma$; the $O(\sqrt{\delta})$-closeness from the claim also degrades to $O(\sqrt{\delta}/\gamma)$ when conditioning on an accepting initial execution.
\begin{claim}
Consider the distribution $D$ supported on $R \cup \{\bot\}$ obtained by running a single iteration of \cref{step:extract-transcript} with parameter $p$ on an arbitrary state $\dm \in \Hermitians{\mathcal{H} \otimes \mathcal{R}}$ with $\Tr(\BProj{\mathsf{C}} \dm) = 1$ (where $r \coloneqq \bot$ if $\mathsf{Extract}$ aborts). There exists a procedure $\mathsf{Samp}$ that makes an expected $O(1/p)$ queries to a uniform sampling oracle $O_R$ (but can otherwise behave arbitrarily and inefficiently), whose output distribution is $O(\sqrt{\delta})$-close to $D$, and whose output, if not $\bot$, is one of the responses to its oracle queries.
\end{claim}
\begin{proof}
$\mathsf{Samp}$ initially behaves similarly to $\mathsf{Extract}$: apply $\mathsf{Threshold}_{p,\varepsilon,\delta}$ to $\dm$; if $\mathsf{Threshold}$ outputs $0$ then output $\bot$. Let $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})}$ be the state after applying $\mathsf{Threshold}$, and (as in $\mathsf{Extract}$) re-set $p := p-\varepsilon$.
As before, let $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}'^{(\mathsf{C})} \coloneqq \frac{\left(\SProj[\mathsf{Jor}]{\geq p}\right)^{\mathcal{B}} \dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})} \left(\SProj[\mathsf{Jor}]{\geq p}\right)^{\mathcal{B}}}{\Tr(\left(\SProj[\mathsf{Jor}]{\geq p}\right)^{\mathcal{B}} \dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})} )}$. Let $\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)}$ be the pseudoinverse of $\dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C}, 1)} = \bra{1}_{\mathcal{B}} \dm_{\mathcal{B},\mathcal{H},\mathcal{R}}'^{(\mathsf{C})} \ket{1}_{\mathcal{B}}/\Tr( \bra{1}_{\mathcal{B}} \dm_{\mathcal{B}, \mathcal{H},\mathcal{R}}'^{(\mathsf{C})} \ket{1}_{\mathcal{B}})$ as guaranteed by the pseudoinverse lemma (\cref{lemma:pseudoinverse}).
We have by \cref{lemma:gentle-measurement,lemma:pseudoinverse} that $d(\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})},\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}'^{(\mathsf{C})}) \leq 2\sqrt{\delta}$ and $\dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C},1)} = \frac{\BProj{\mathsf{C}} \dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)} \BProj{\mathsf{C}} }{\Tr(\BProj{\mathsf{C}} \dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)})}$. Finally, write $\dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)} = \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \otimes \ketbra{+}_{\mathcal{R}}$.
$\mathsf{Samp}$ now behaves differently than $\mathsf{Extract}$. $\mathsf{Samp}$ ``clones'' $\DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)}$ (recall that $\mathsf{Samp}$ can be an arbitrary function) and repeats the following until $b = 1$: query $O_R$, obtaining $r \in R$; on a fresh copy of $\DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)}$, measure whether the verifier accepts on challenge $r$ (i.e., $\BMeas{\BProj{V,r}}$), obtaining bit $b$. Output $r$ if $b = 1$.
Let $p_{\mathsf{U}} \coloneqq \Tr(\BProj{\mathsf{C}} \dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)})$; we have that $p_{\mathsf{U}} \geq p$ by \cref{claim:pseudoinverse-win-probability}. Observe that $p_{\mathsf{U}}$ is the probability that a uniform $r$ is accepted; hence the expected number of queries $\mathsf{Samp}$ makes is $1/p_{\mathsf{U}} \leq 1/p$. Let $\zeta_r \coloneqq \Tr(\BProj{V,r} \DMatrixH_{\mathcal{H}}^{(\mathsf{U}, 1)})$; $\zeta_r$ is the probability that $r$ is accepted. Then, for every $r^*\in R$,
\begin{equation*}
\Pr_{\mathsf{Samp}}[r = r^*] = \sum_{n=0}^{\infty} \Pr[r_1,\ldots,r_{n} \text{ rejected}] \Pr[r_{n+1} = r^*, r^* \text{ accepted}] = \sum_{n = 0}^{\infty} (1-p_{\mathsf{U}})^n \cdot \frac{\zeta_{r^*}}{|R|} = \frac{\zeta_{r^*}}{p_{\mathsf{U}} \cdot |R|}~.
\end{equation*}
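This geometric-series computation can be checked numerically. The sketch below (with a hypothetical acceptance-probability vector $\zeta$, not derived from any concrete protocol) implements $\mathsf{Samp}$'s rejection-sampling loop and confirms both that the output distribution matches the closed form $\zeta_{r^*}/(p_{\mathsf{U}} \cdot |R|)$ and that the expected number of oracle queries is $1/p_{\mathsf{U}}$:

```python
import random

random.seed(0)
R = list(range(8))
zeta = {r: (r + 1) / 40 for r in R}   # hypothetical per-challenge acceptance probabilities
p_U = sum(zeta.values()) / len(R)     # probability that a uniform challenge is accepted

def samp():
    """Rejection-sampling loop: query O_R until the sampled challenge is accepted."""
    queries = 0
    while True:
        queries += 1
        r = random.choice(R)              # query the uniform sampling oracle O_R
        if random.random() < zeta[r]:     # measure whether the verifier accepts r
            return r, queries

N = 200_000
counts = {r: 0 for r in R}
total_queries = 0
for _ in range(N):
    r, q = samp()
    counts[r] += 1
    total_queries += q

# Empirical Pr[r = r*] matches zeta_{r*} / (p_U * |R|) for every challenge.
for r in R:
    assert abs(counts[r] / N - zeta[r] / (p_U * len(R))) < 0.01
# Expected number of oracle queries is 1/p_U (a geometric distribution).
assert abs(total_queries / N - 1 / p_U) < 0.2
```

The loop is exactly the geometric process summed in the displayed equation: the $n$-th term corresponds to $n$ rejected challenges followed by an accepted $r^*$.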
Consider now the distribution on $r$ obtained by measuring $\mathcal{R}$ on state $\dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C},1)}$: for every $r^*$,
\begin{equation*}
\Pr[r = r^*] = \Tr(\ketbra{r^*} \dm_{\mathcal{H},\mathcal{R}}'^{(\mathsf{C},1)}) = \frac{\Tr(\ketbra{r^*} \BProj{\mathsf{C}} \dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)} \BProj{\mathsf{C}})}{p_{\mathsf{U}}} = \frac{\Tr(\BProj{V,r^*} \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)})}{p_{\mathsf{U}}\cdot|R|} = \frac{\zeta_{r^*}}{p_{\mathsf{U}} \cdot |R|}
\end{equation*}
since $(\mathbf{I}_{\mathcal{H}}\otimes \ketbra{r^*} ) \BProj{\mathsf{C}} = \BProj{V,r^*}\otimes \ketbra{r^*} $ and $\ketbra{r^*} \dm_{\mathcal{H},\mathcal{R}}^{(\mathsf{U},1)} \ketbra{r^*} = \frac{1}{|R|} \DMatrixH_{\mathcal{H}}^{(\mathsf{U},1)} \otimes \ketbra{r^*}_{\mathcal{R}} $.
Overall, $D$ is obtained by measuring $(\mathcal{B},\mathcal{R})$ on the state $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}^{(\mathsf{C})}$, which is $O(\sqrt{\delta})$-close to $\dm_{\mathcal{B},\mathcal{H},\mathcal{R}}'^{(\mathsf{C})}$; the claim follows by contractivity of trace distance.
\end{proof}
Having established properties (1), (2), and (3), we have proved \cref{thm:high-probability-extraction}!
\subsection{Obtaining Guaranteed Extraction}\label{sec:obtaining-guaranteed-extraction}
In this section, we combine the guarantees of \cref{thm:high-probability-extraction} with additional analysis to prove that all of the example protocols from \cref{sec:examples} have guaranteed extractors (additionally assuming partial collapsing where necessary). We remark that handling the graph isomorphism subroutine requires a slight modification of the \cref{thm:high-probability-extraction}, which we detail below.
We begin with a general-purpose corollary of \cref{thm:high-probability-extraction} for the case of protocols satisfying $(k, g)$-PSS (\cref{def:k-g-pss}) in addition to $f_1, \hdots, f_{k-1}$-partial collapsing (which was assumed in \cref{thm:high-probability-extraction}).
\begin{corollary}\label{cor:guaranteed-extractor-or-inconsistent}
Let $(P_{\Sigma}, V_{\Sigma})$ be a 3- or 4-message public coin interactive argument with a consistency function $g: T \times (R \times \{0,1\}^*)^*\rightarrow \{0,1\}$, and let $f_1, \hdots, f_k$ be functions. Suppose that:
\begin{itemize}
\item The protocol is partially collapsing with respect to $f_1, \hdots, f_{k-1}$, and
\item The protocol is $(k, g)$-PSS for some $k = {\rm poly}(\lambda)$.
\end{itemize}
Then, one of the two following conclusions holds:
\begin{enumerate}
\item The extractor from \cref{thm:high-probability-extraction} composed with the PSS extractor $\mathsf{PSSExtract}$ satisfies \emph{guaranteed extraction}, OR
\item The extractor from \cref{thm:high-probability-extraction} outputs a $k$-tuple of partial transcripts $(r_1, y_1, \hdots, r_k, y_k)$ such that $g(\tau_{\mathrm{pre}}, r_1, y_1, \hdots, r_k, y_k) = 0$ (the transcripts are \emph{inconsistent}) with non-negligible probability.
\end{enumerate}
\end{corollary}
\begin{proof}
Suppose that conclusion (1) is false, meaning that there exist infinitely many $\lambda$ and a constant $c$ such that the extractor from \cref{thm:high-probability-extraction} has an accepting initial execution but the call to $\mathsf{PSSExtract}$ fails to produce a witness with probability at least $1/\lambda^c$. We know that the \cref{thm:high-probability-extraction} extractor aborts with negligible probability, so we also assume that the extractor does not abort here. Then, by an averaging argument, with probability at least $\frac 1 {2\lambda^c}$ over the distribution of $(\mathsf{vk}, a)$, the above event conditioned on $(\mathsf{vk}, a)$ holds with probability at least $\frac 1 {2\lambda^c}$. This in particular implies that $\gamma_{\mathsf{vk}, a}$ (as defined in \cref{thm:high-probability-extraction}) is at least $\frac 1 {2\lambda^c}$ for these choices of $(\mathsf{vk}, a)$. Then, property (3) of \cref{thm:high-probability-extraction} implies that the distribution of $(r_1, \hdots, r_k)$ is \emph{admissible} for these choices of $(\mathsf{vk}, a)$ (and choices of $\lambda$). Thus, the $(k, g)$-PSS property of $(P_{\Sigma}, V_{\Sigma})$ implies that for every such $(\mathsf{vk}, a)$, the $k$-tuple of partial transcripts must be inconsistent with probability at least $\frac 1 {2 \lambda^c}$ (as otherwise $\mathsf{PSSExtract}$ would succeed with $1-{\rm negl}$ probability). Therefore, assuming that conclusion (1) is false, the probability that the $k$-tuple of transcripts output by the \cref{thm:high-probability-extraction} extractor are inconsistent is at least $\frac 1 {4 \lambda^{2c}}$ for infinitely many $\lambda$, implying conclusion (2). \qedhere
\end{proof}
Finally, we apply \cref{cor:guaranteed-extractor-or-inconsistent} to obtain guaranteed extractors for all of the \cref{sec:examples} example protocols (along with a general result for $k$-special sound protocols).
\begin{corollary}\label{cor:collapsing-ss-extract}
If $(P_{\Sigma}, V_{\Sigma})$ is (fully) collapsing and $k$-special sound, and $|R|= 2^{\omega(\log \lambda)}$, then the protocol has guaranteed extraction.
\end{corollary}
\begin{proof}
Since $(P_{\Sigma}, V_{\Sigma})$ is $k$-special sound and $|R|= 2^{\omega(\log \lambda)}$, we know that the protocol is $(k, g)$-PSS for the ``trivial'' transcript consistency predicate $g$. Therefore, \cref{cor:guaranteed-extractor-or-inconsistent} applies to this protocol (where the extractor sets $f_1 = \hdots = f_k = \mathsf{Id}$). However, conclusion (2) of \cref{cor:guaranteed-extractor-or-inconsistent} cannot happen because the consistency predicate of $\mathsf{PSSExtract}$ in this case simply checks that the transcripts are accepting, which is guaranteed by the fact that $(r_i, y_i = z_i)$ was a measurement outcome of a state in $\Pi_C$.
\end{proof}
\begin{corollary}\label{cor:commit-and-open}
If $(P_{\Sigma}, V_{\Sigma})$ is a commit-and-open protocol (\cref{def:commit-and-open}) satisfying commit-and-open $k$-special soundness and $|R| = 2^{\omega(\log \lambda)}$ (either natively or enforced by parallel repetition), and the commitment scheme is instantiated using a collapse-binding commitment \cite{EC:Unruh16}, then the protocol has a guaranteed extractor.
\end{corollary}
\begin{proof}
Under the hypotheses of the corollary (along with \cref{claim:kss-to-kfss,k-g-ss-implies-k-g-pss}), the protocol satisfies either $(k, g)$-PSS (if it has a natively superpolynomial challenge space) or $(k^2\log^2(\lambda), g)$-PSS (if parallel repeated; see~\cref{lemma:parallel-repetition}), where $g$ is a predicate that enforces the constraint that all opened messages are consistent with each other. We set $f_1 = \hdots = f_k = f$ where $f(z)$ outputs the substring of $z$ corresponding to the opened messages (and not the openings). Then, the \cref{thm:high-probability-extraction} extraction procedure does not violate $g$-consistency by the unique-message binding of the commitment scheme (shown in \cref{lemma:collapse-binding-unique-message}). Thus, \cref{cor:guaranteed-extractor-or-inconsistent} implies that $(P_{\Sigma}, V_{\Sigma})$ has a guaranteed extraction procedure. \qedhere
\end{proof}
\begin{corollary}
\label{corollary:kilian-guaranteed}
Kilian's succinct argument system \cite{STOC:Kilian92}, when instantiated using a collapsing hash function and a PCP of knowledge, has a guaranteed extraction procedure.
\end{corollary}
\begin{proof}
We know from~\cref{claim:kilian-pss} that the \cite{STOC:Kilian92} succinct argument system is (1) (fully) collapsing, and (2) $(k, g)$-PSS for $k = {\rm poly}(n, \lambda)$ and $g$ defined so that when $z_i$ and $z_j$ contain overlapping leaves of the Merkle tree, the leaf values are equal. We set $f_1 = \hdots = f_k = \mathsf{Id}$, and observe that the \cref{thm:high-probability-extraction} extractor does not violate $g$-consistency: if it output two transcripts $(r_1, z_1), (r_2, z_2)$ with inconsistent leaf values, then, since the transcripts are accepting (they were obtained by measuring a state in $\Pi_C$), this would violate the collision resistance (implied by collapsing) of the hash family. Thus, by \cref{cor:guaranteed-extractor-or-inconsistent}, the protocol has a guaranteed extractor.
\end{proof}
\begin{corollary}\label{cor:gni-guaranteed-extractor}
The one-out-of-two graph isomorphism subroutine has a guaranteed extraction procedure that extracts the bit $b$ (when $G_0$ and $G_1$ are not isomorphic).
\end{corollary}
\begin{proof}
By~\cref{claim:gni-pss}, this protocol is $(2,g')$-PSS where $g'$ is the following asymmetric function:
\begin{itemize}
\item For the first partial transcript $(\tau_{\mathrm{pre}}, r^{(1)}, c^{(1)})$, $g'$ checks that for all $i$ such that $r_i = 0$, $(H_{0,i}, H_{1,i})$ are isomorphic to $(G_{c^{(1)}_i}, G_{1-c^{(1)}_i})$.
\item For the second partial transcript $(\tau_{\mathrm{pre}}, r^{(2)}, c^{(2)})$, $g'$ \emph{additionally} checks that for all $i$ such that $r_i = 1$, $H_{c^{(2)}_i, i}$ is isomorphic to $H$.
\end{itemize}
We define the following pair of functions $f_1, f_2$:
\begin{itemize}
\item $f_1(\tau_{\mathrm{pre}}, r, z)$ outputs the following substring of $z$. For every $i$ such that $r_i = 0$, the substring includes the bit $c_i$ (where $z_i = (c_i, \sigma_{0,i}, \sigma_{1,i})$).
\item $f_2(\tau_{\mathrm{pre}}, r, z)$ outputs the substring $c$ (the distinguished single bit of each $z_i$).
\end{itemize}
We note that the graph isomorphism subprotocol is $f_1$-collapsing; this follows from the fact that for any \emph{accepting} transcript $(\tau_{\mathrm{pre}}, r, z)$, the bits $c_i$ (for $r_i = 0$) are information-theoretically determined as a function of $(G_0, G_1, H_{0,i}, H_{1,i})$.
Thus, if we instantiate the \cref{thm:high-probability-extraction} extractor using $(f_1, f_2)$ (note that we require no properties of $f_2$) we have that \cref{cor:guaranteed-extractor-or-inconsistent} applies. Moreover, $g'$-consistency of the transcripts output by the extractor is not violated, because it is formally implied by the fact that they were obtained by partially measuring a state in $\Pi_C$ (any accepting partial transcript $(r_i, c_i)$ satisfies the condition checked by $g'$). Thus, we conclude that the protocol has a guaranteed extractor by \cref{cor:guaranteed-extractor-or-inconsistent}.
\end{proof}
\section{Expected Polynomial Time for Quantum Simulators}\label{sec:eqpt}
We introduce a notion of efficient computation we call coherent-runtime expected quantum polynomial time ($\mathsf{EQPT}_{c}$). We then formalize a new definition of post-quantum zero-knowledge with $\mathsf{EQPT}_{c}$ simulation.
\subsection{Quantum Turing Machines}
\label{sec:qtms}
We recall the definition of a quantum Turing machine (QTM) of Deutsch \cite{Deutsch85}. A QTM is a tuple $(\Sigma,Q,\delta,q_0,q_f)$ where $\Sigma$ is a finite set of symbols, $Q$ is a finite set of states, $\delta \colon Q \times \Sigma \to \mathbb{C}^{Q \times \Sigma \times \{-1,0,1\}}$ is a transition function, and $q_0,q_f$ are the initial and final (halting) states respectively.
We fix registers $\mathcal{Q}$ containing the state, $\mathcal{I}$ containing the position of the tape head, and $\mathcal{T}$ containing the tape. A configuration state of a Turing machine is a vector $\ket{q,i,\mathbb{T}} \in \mathcal{Q} \otimes \mathcal{I} \otimes \mathcal{T}$ where $q \in Q$ is the current state, $i \in \mathbb{N}$ is the location of the tape head, and $\mathbb{T} \in \Sigma^*$ is the (finite) contents of the tape.
A transition is given by the map $U_{\delta}$, which acts on basis states as follows:
\begin{equation*}
\ket{q,i,\mathbb{T}} \mapsto \sum_{q' \in Q} \sum_{a \in \Sigma} \sum_{d \in \{-1,0,1\}} \alpha_{q',a,d} \ket{q',i + d,\mathbb{T}_{i \to a}}
\end{equation*}
where $\delta(q,\mathbb{T}_i) = \sum_{q',a,d} \alpha_{q',a,d} \ket{q',a,d}$. $\delta$ is a valid transition function if and only if $U_{\delta}$ is unitary. The definition of QTMs generalises to multiple tapes in the natural way. We will consider QTMs having a separate input/output tape on register $\mathcal{A}$ (with head position in $\mathcal{I}_{\mathsf{in}}$).
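The condition that $\delta$ is valid exactly when $U_{\delta}$ is unitary can be illustrated on a finite toy example. The sketch below uses a hypothetical two-state machine with Hadamard-like transition amplitudes; to keep the configuration space finite it takes a single tape symbol (so the tape contents are omitted) and a circular tape of three cells, and then verifies that the columns of the resulting $U_{\delta}$ are orthonormal:

```python
# Toy check: a transition function with Hadamard-like amplitudes yields a unitary
# U_delta. Hypothetical machine: two states, one tape symbol (tape omitted),
# circular 3-cell tape so the configuration space {q0,q1} x {0,1,2} is finite.
L = 3
states = ["q0", "q1"]
basis = [(q, i) for q in states for i in range(L)]
idx = {b: k for k, b in enumerate(basis)}
s = 1 / 2 ** 0.5   # delta(q0) -> (|q0,+1> + |q1,-1>)/sqrt(2)
                   # delta(q1) -> (|q0,+1> - |q1,-1>)/sqrt(2)

def column(q, i):
    """Column of U_delta for the basis configuration (q, i)."""
    col = [0j] * len(basis)
    col[idx[("q0", (i + 1) % L)]] += s
    col[idx[("q1", (i - 1) % L)]] += s if q == "q0" else -s
    return col

U_cols = [column(q, i) for (q, i) in basis]

# U_delta is unitary iff its columns are orthonormal.
for a in range(len(basis)):
    for b in range(len(basis)):
        ip = sum(U_cols[a][r].conjugate() * U_cols[b][r] for r in range(len(basis)))
        assert abs(ip - (1 if a == b else 0)) < 1e-12
```

Dropping the minus sign in the second rule breaks the orthogonality of the columns for $(q_0,i)$ and $(q_1,i)$, so the corresponding $\delta$ would not be a valid transition function.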
The execution of a $T$-bounded QTM proceeds as follows.
\begin{enumerate}[noitemsep]
\item Initialize register $\mathcal{Q}$ to $\ket{q_0}$, $\mathcal{I},\mathcal{I}_{\mathsf{in}}$ to $\ket{0}$, and $\mathcal{T}$ to the empty tape state $\ket{\varnothing}$.
\item \label[step]{step:qtm-main-loop} Repeat the following for at most $T$ steps:
\begin{enumerate}
\item Apply the measurement $\BProj{f} = \BMeas{\ketbra{q_f}}$ to $\mathcal{Q}$. If the outcome is $1$, halt and discard all registers except $\mathcal{A}$.
\item Apply $U_{\delta}$.
\end{enumerate}
\end{enumerate}
The \defemph{output} $M(\bm{\rho})$ of a QTM $M$ on input $\bm{\rho} \in \Hermitians{\mathcal{A}}$ is the state on $\mathcal{A}$ when the machine halts. The \defemph{running time} $t_M(\bm{\rho})$ of $M$ on input $\bm{\rho}$ is the number of iterations of \cref{step:qtm-main-loop}. Note that both of these quantities are random variables.
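To illustrate that the running time $t_M(\bm{\rho})$ is a random variable driven by the halting measurement in each iteration, the sketch below classically simulates the main loop for a hypothetical machine whose halting measurement succeeds with probability $1/2$ in every step; its running time is then (truncated) geometric with mean $2$:

```python
import random

random.seed(1)
T = 60  # bound on the number of iterations of the main loop

def run_once(halt_prob=0.5):
    """One execution of the main loop: measure halting; if not halted, step."""
    for step in range(1, T + 1):
        if random.random() < halt_prob:  # outcome 1 of the halting measurement
            return step                  # halt and (notionally) discard ancillas
        # otherwise apply U_delta and continue
    return T  # T-bounded: the loop stops after T steps regardless

N = 100_000
times = [run_once() for _ in range(N)]
mean_t = sum(times) / N

assert abs(mean_t - 2.0) < 0.05   # E[t_M] = 2 for a geometric(1/2) running time
assert max(times) <= T            # the T-bound is always respected
```

This is only the classical shadow of the quantum loop (the measurement here is a coin flip rather than a projection onto $\ketbra{q_f}$), but it captures why $t_M(\bm{\rho})$ and $M(\bm{\rho})$ are random variables with well-defined expectations.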
\begin{definition}
The expected running time $E_M(n)$ of a QTM $M$ is the maximum over all $n$-qubit states $\bm{\rho}$ of $\Expectation[t_M(\bm{\rho})]$. A $T$-bounded QTM $M$ (for some $T \leq \exp(n)$) is $\mathsf{EQPT}_{m}$ if there exists a polynomial $p$ such that $E_M(n) \leq p(n)$ for all $n$.
\end{definition}
\subsection{Coherent-Runtime EQPT}
\newcommand{\mathsf{D}}{\mathsf{D}}
\begin{definition}
A $\mathsf{D}$-circuit is a quantum circuit $C$ with special gates $\{ G_i,G_i^{-1} \}_{i=1}^{k}$ with the following restriction: for each $i$, there is a single $G_i$ gate and a single $G_i^{-1}$ gate acting on a designated register $\mathcal{X}_i$, where $G_i$ acts before $G_i^{-1}$. All other gates may act arbitrarily on $\mathcal{Y} \otimes \bigotimes_{i=1}^{k} \mathcal{X}_i$, for some register $\mathcal{Y}$. For any CPTP maps $\Phi_i \colon \Hermitians{\mathcal{X}_i} \to \Hermitians{\mathcal{X}_i}$, $C[\Phi_1,\ldots,\Phi_k] \colon \Hermitians{\mathcal{Y} \otimes \bigotimes_{i=1}^k \mathcal{X}_i} \to \Hermitians{\mathcal{Y} \otimes \bigotimes_{i=1}^k \mathcal{X}_i}$ is the superoperator defined as follows:
\begin{enumerate}[noitemsep]
\item For each $i$, let $U_i$ be a unitary dilation of $\Phi_i$. That is, let $\mathcal{Z}_i$ be an ancilla Hilbert space and $U_i$ a unitary on $\mathcal{X}_i \otimes \mathcal{Z}_i$ such that $\Phi_i(\bm{\sigma}) = \Tr_{\mathcal{Z}_i}(U_i (\bm{\sigma} \otimes \ketbra{0}_{\mathcal{Z}_i}) U_i^{\dagger})$ for all $\bm{\sigma} \in \Hermitians{\mathcal{X}_i}$.
\item Construct a circuit $C'$ on $\mathcal{Y} \otimes \bigotimes_{i=1}^{k} (\mathcal{X}_i \otimes \mathcal{Z}_i)$ from $C$ by replacing $G_i$ with $U_i$ and $G_i^{-1}$ with $U_i^{\dagger}$ for each $i$.
\item $C[\Phi_1,\ldots,\Phi_k]$ is the superoperator $\bm{\rho} \mapsto \Tr_{\mathcal{Z}_1,\ldots,\mathcal{Z}_k}(C'(\bm{\rho} \otimes \bigotimes_{i=1}^{k} \ketbra{0}_{\mathcal{Z}_i}))$.
\end{enumerate}
Since all choices of $U_i$ are equivalent up to a local isometry on $\mathcal{Z}_i$, the map $C[\Phi_1,\ldots,\Phi_k]$ is well-defined.
\end{definition}
We are now ready to define our notion of \emph{coherent-runtime expected quantum polynomial time}.
\begin{definition}
\label{def:cr-eqpt}
A sequence of CPTP maps $\{\Phi_{n}\}_{n \in \mathbb{N}}$ is an \defemph{$\mathsf{EQPT}_{c}$ computation} if there exist a uniform family of $\mathsf{D}$-circuits $\{ C_n \}_{n \in \mathbb{N}}$ and $\mathsf{EQPT}_{m}$ computations $M_1,\ldots,M_k$ such that $C_n[M_1,\ldots,M_k] = \Phi_n$ for all $n$.
\end{definition}
\newcommand{\ket{\mathsf{init}}}{\ket{\mathsf{init}}}
We show that any $\mathsf{EQPT}_{c}$ computation can be approximated to any desired precision by a polynomial-size quantum circuit. We first show the following claim. Let $\ket{\mathsf{init}} \coloneqq \ket{q_0}_{\mathcal{Q}} \ket{0,0}_{\mathcal{I},\mathcal{I}_{\mathsf{in}}} \ket{\varnothing}_{\mathcal{T}}$.
\begin{claim}\label{lemma:truncation}
\label{claim:qtm-truncation}
Let $M$ be a $T$-bounded QTM running in expected time $t$, and let $U$ be the unitary dilation of $M$ as in \cref{fig:coherent-qtm}. For all $\gamma \colon \mathbb{N} \to (0,1]$, there is a uniform sequence of unitary circuits $\{ V_{n} \}_n$ of size ${\rm poly}(n)/\gamma(n)^2$ such that for every unitary $A$ on $\mathcal{A}$ and state $\ket{\psi} \in \mathcal{A}$:
\begin{equation*}
\norm{(U^{\dagger} (\mathbf{I} \otimes A) U - V_n^{\dagger} (\mathbf{I} \otimes A) V_n) \ket{\psi} \ket{\mathsf{init}} \ket{0^T}_{\mathcal{B}}} \leq \gamma(n).
\end{equation*}
\end{claim}
\begin{proof}
Let $V$ be the unitary given by truncating $U$ to just after the $\tau$-th iteration of $U_{\delta}$, where $\tau \coloneqq \lceil 4t/\gamma^2 \rceil$. Let $\Pi \coloneqq \ketbra{1^{T-\tau}}_{\mathcal{B}_{\tau+1},\ldots,\mathcal{B}_T}$. Observe that for every state $\ket{\psi}$,
\[ \Pi U \ket{\psi} \ket{\mathsf{init}} \ket{0^T}_{\mathcal{B}} = \Pi_f V \ket{\psi} \ket{\mathsf{init}} \ket{0^\tau 1^{T-\tau}}_{\mathcal{B}}, \]
because $\Pi$ projects onto computations that finish in at most $\tau$ steps, and once the computation finishes, the remaining $\mathsf{CNOT}_{\Pi_f}$ gates flip the corresponding $\mathcal{B}_i$ from $0$ to $1$.
Moreover, for every state $\ket{\phi} \in \mathcal{A} \otimes \mathcal{Q} \otimes \mathcal{W}$ and $z \in \{0,1\}^\tau$,
\[ U^{\dagger} \ket{\phi} \ket{z 1^{T-\tau}} = V^{\dagger} \ket{\phi} \ket{z 0^{T-\tau}}, \]
since the applications of $U_{\delta}$ controlled on $\mathcal{B}_{\tau+1},\ldots,\mathcal{B}_T$ act as the identity and the $\mathsf{CNOT}_{\Pi_f}$ gates acting on $\mathcal{B}_{\tau+1},\ldots,\mathcal{B}_T$ flip the corresponding register from $1$ to $0$. Hence
\[
U^{\dagger}(I \otimes A) \Pi U \ket{\psi} \ket{\mathsf{init}} \ket{0^T}_{\mathcal{B}} = V^{\dagger}(I \otimes A) \Pi_f V \ket{\psi}\ket{\mathsf{init}}\ket{0^T}_{\mathcal{B}}.
\]
The claim follows since, by Markov's inequality,
\[ \norm{(\mathbf{I}-\Pi) U \ket{\psi} \ket{\mathsf{init}} \ket{0^T}_{\mathcal{B}}} \leq \sqrt{t/\tau} \leq \gamma(n)/2. \qedhere \]
\end{proof}
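The Markov step at the end of the proof is easy to check numerically. The sketch below uses a hypothetical runtime distribution with mean $t$, truncates at $\tau \geq 4t/\gamma^2$ steps, and confirms that the tail mass beyond $\tau$ is at most $t/\tau \leq \gamma^2/4$, so the amplitude of the truncated branch is at most $\gamma/2$:

```python
import math

# Hypothetical runtime distribution (probabilities of each running time).
dist = {1: 0.6, 3: 0.2, 10: 0.15, 200: 0.05}
t = sum(k * p for k, p in dist.items())   # expected running time
gamma = 0.4
tau = math.ceil(4 * t / gamma ** 2)       # truncation point, tau >= 4t/gamma^2

tail = sum(p for k, p in dist.items() if k > tau)  # Pr[t_M > tau]

assert tail <= t / tau                    # Markov's inequality
assert math.sqrt(t / tau) <= gamma / 2    # amplitude bound on the truncated branch
```

The same computation explains the circuit-size bound in the claim: the truncated circuit runs for $\tau = O(t/\gamma^2)$ iterations, i.e., ${\rm poly}(n)/\gamma(n)^2$ gates.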
\begin{lemma}
For any $\mathsf{EQPT}_{c}$ computation $\{ \Phi_{n} \}_n$ and $\varepsilon \colon \mathbb{N} \to (0,1]$, there is a uniform sequence of (standard) quantum circuits $\{ C_{n} \}_n$ of size ${\rm poly}(n,1/\varepsilon(n))$ such that $d(\Phi_{n}(\bm{\rho}),C_n(\bm{\rho})) \leq \varepsilon(n)$ for all $\bm{\rho}$.
\end{lemma}
\begin{proof}
Let $D_{n}$ be a $\mathsf{D}$-circuit and $M_1,\ldots,M_k$ be $\mathsf{EQPT}_{m}$ computations such that $\Phi_n = D_n[M_1,\ldots,M_k]$, and let $U_n$ be the unitary circuit obtained by replacing each $G_i,G_i^{-1}$ with the corresponding coherent implementation of $M_i$ as in \cref{fig:coherent-qtm}. Let $U'_n$ be as $U_n$, but where the $G_i$-gates are replaced with unitaries $V_i$ as guaranteed by \cref{claim:qtm-truncation}, with $\gamma(n) \coloneqq \varepsilon(n)/k$. The circuit $C_n$ is obtained by initializing the ancillas to $\ket{\mathsf{init}} \ket{0^\tau}_{\mathcal{B}}$, applying $U'_n$, and then tracing out the ancillas.
We make use of the fidelity distance $d_F$, defined in \cite{STOC:Watrous06} to be
\begin{equation*}
d_F(\bm{\rho},\bm{\sigma}) \coloneqq \inf \{ \norm{\ket{\psi} - \ket{\phi}} : \text{$\ket{\psi},\ket{\phi}$ purify $\bm{\rho},\bm{\sigma}$, respectively} \}.
\end{equation*}
\cite{STOC:Watrous06} shows that $d_F(\bm{\rho},\bm{\sigma}) \geq d(\bm{\rho},\bm{\sigma})$. We can choose the purifications $U_n \ket{\psi} \ket{\mathsf{init}} \ket{0^T}$ of $\Phi_n$ and $U'_n \ket{\psi} \ket{0}$ of $C_n$. By \cref{claim:qtm-truncation}, and the triangle inequality, the distance between these states is at most $\varepsilon(n)$.
\end{proof}
\subsection{Zero Knowledge with $\normalfont{\textsf{EQPT}}_c$ Simulation}
\newcommand{\mathsf{out}}{\mathsf{out}}
Given our definition of $\mathsf{EQPT}_{c}$ above, we now formally define zero-knowledge with $\mathsf{EQPT}_{c}$ simulation for interactive protocols.
For an interactive protocol $(P, V)$, let $\mathsf{out}_{V^*}\langle P,V^* \rangle$ denote the output of $V^*$ after interacting with $P$.
\begin{definition}
An interactive argument is \emph{black-box} statistical (resp. computational) post-quantum zero knowledge if there exists an $\mathsf{EQPT}_{c}$ simulator $\mathsf{Sim}$ such that for all polynomial-size quantum malicious verifiers $V^*$ and all $(x,w) \in R_L$, the distributions
\begin{equation*}
\mathsf{out}_{V^*}\langle P(x,w),V^* \rangle \quad \text{and} \quad \mathsf{Sim}^{V^*}(x)
\end{equation*}
are statistically (resp. quantum computationally) indistinguishable.
\end{definition}
\section{Introduction}
Linear-optical architectures belong to the most prominent platforms for realizing protocols of quantum information processing \cite{kok07,kni01}. They are experimentally feasible and they work directly with photons without the necessity to transfer the quantum state of a photonic qubit into another quantum system like an ion etc. The latter feature is quite convenient because photons are good carriers of information for communication purposes. Linear-optical quantum gates achieve the non-linearity necessary for the interaction between qubits by means of the non-linearity of quantum measurement. Unfortunately, quantum measurement is not only non-linear but also probabilistic. Therefore linear-optical implementations of quantum gates are mostly probabilistic too---their operation sometimes fails. Partly, this is a fundamental limitation. But in many cases when data qubits appear in an improper state after the measurement on an ancillary system they can still be corrected by applying a proper unitary transformation which depends only on the measurement result. In these situations, implementation of the feed forward can increase the probability of success of the gate \cite{sci06,pre07}. In the present paper, we apply this approach to a linear-optical programmable quantum gate.
Most conventional computers use fixed hardware, and different tasks are performed using different software. This concept can, in principle, be applied to quantum computers as well: The executed unitary operation can be determined by some kind of a program. However, in 1997 Nielsen and Chuang \cite{nie97} showed that an $n$-qubit quantum register can perfectly encode at most $2^n$ distinct quantum operations. Although this bound rules out perfect universally-programmable quantum gates (even unitary transformations on only one qubit form a group with uncountably many elements), it is still possible to construct approximate or probabilistic programmable quantum gates and optimize their performance for a given size of the program register. Such gates can either operate deterministically, but with some noise added to the output state \cite{hil06}, or they can operate probabilistically, but error free \cite{hil02,vid02,hil04}. A combination of these regimes is also possible.
A probabilistic programmable phase gate was proposed by Vidal, Masanes, and Cirac \cite{vid02}. It carries out a rotation of a single-qubit state along the $z$-axis of the Bloch sphere. The angle of rotation (or the phase shift) is programmed into the state of a single-qubit program register. It is worth noting that an exact specification of an angle of rotation would require infinitely many classical bits, whereas here the information is encoded into a single qubit only. The price to pay is that the success probability of such a gate is limited to 50\,\% \cite{note1}. The programmable phase gate was experimentally implemented for the first time in 2008 \cite{mic08}. However, the success probability of that linear-optical implementation reached only 25\,\%. In the present paper we show how to increase the success probability of this scheme to its quantum-mechanical limit of 50\,\% by means of electronic feed forward.
\section{Theory}
The programmable phase gate works with a data qubit and a program qubit. The program qubit carries information about the phase shift $\phi$ encoded in the following way:
\begin{equation}
|\phi\rangle_{P}=\frac{1}{\sqrt{2}}(|0\rangle_P + e^{i\phi}|1\rangle_P).
\label{eq-prog_qubit}
\end{equation}
The gate performs a unitary evolution of the data qubit which depends on the state of the program qubit:
\begin{equation}
U(\phi) =|0\rangle_D \langle 0|+e^{i\phi}|1\rangle_D \langle 1|.
\label{eq-Uphi}
\end{equation}
Without loss of generality we can consider only pure input states of the data qubit:
\begin{equation}
|\psi_\mathrm{in}\rangle_{D} = \alpha|0\rangle_D+\beta|1\rangle_D.
\label{eq-dat_qubit_in}
\end{equation}
So the output state of the data qubit reads:
\begin{equation}
|\psi_\mathrm{out}\rangle_{D} = \alpha|0\rangle_D + e^{i\phi}\beta|1\rangle_D.
\label{eq-dat_qubit_out}
\end{equation}
Experimentally the programmable phase gate can be implemented by an optical setup shown in Fig.~\ref{fig-scheme}. Each qubit is represented by a single photon which may propagate in two optical fibers. The basis states $|0\rangle$ and $|1\rangle$ correspond to the presence
of the photon in the first or second fiber, respectively. When restricted only to the cases where a single photon emerges in each output port, the conditional two-photon output state reads
(the normalization reflects the fact that the probability of this situation is $1/2$):
\begin{eqnarray*}
&&\hspace{-5mm}
\frac{1}{\sqrt{2}}(\alpha|0\rangle_D \otimes |0\rangle_P+\beta e^{i\phi} |1\rangle_D \otimes |1\rangle_P)\\
&=& \frac{1}{2} \big[ (\alpha|0\rangle_D + \beta e^{i\phi}|1\rangle_D) \!\otimes\! |+\rangle_P\\
&+& (\alpha|0\rangle_D - \beta e^{i\phi}|1\rangle_D) \!\otimes\! |-\rangle_P \big],
\end{eqnarray*}
where $|\pm\rangle_P=\frac{1}{\sqrt{2}}(|0\rangle_P \pm |1\rangle_P)$. If we make a measurement on the program qubit in the basis $\{| \pm \rangle_P \}$ then also the output state of the data qubit collapses into one of the two following states according to the result of the measurement: $|\psi_{\mathrm{out}}\rangle_D=\alpha|0\rangle_D \pm \beta e^{i\phi}|1\rangle_D$.
If the measurement outcome is $|+\rangle_P$ then the unitary transformation $U(\phi)$ has been applied to the data qubit. If the outcome is $|-\rangle_P$ then the state acquires an extra $\pi$ phase shift, i.e., $U(\phi+\pi)$ has been executed. This is compensated by a fast electro-optical modulator which applies a corrective phase shift of $-\pi$ (in practice we apply a phase shift of $\pi$, which is equivalent).
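To make the protocol above concrete, the following numerical sketch (our own NumPy illustration, not the experimental control code; the function name is ours) simulates one run of the gate starting from the post-selected two-photon state and checks that, thanks to the feed-forward correction on the $|-\rangle_P$ outcome, the data qubit always ends up in $U(\phi)|\psi_\mathrm{in}\rangle$:

```python
import numpy as np

def phase_gate_with_feedforward(alpha, beta, phi, rng=None):
    """One run of the measurement-based programmable phase gate.

    Starts from the post-selected two-photon state
    (1/sqrt(2)) (alpha |0>_D |0>_P + beta e^{i phi} |1>_D |1>_P),
    measures the program qubit in the {|+>, |->} basis, and applies
    the feed-forward correction diag(1, -1) on the '-' outcome.
    Returns the normalized output state of the data qubit.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Unnormalized data-qubit states conditioned on the two outcomes;
    # each occurs with probability 1/4 (success probability 1/2 total).
    plus_branch = 0.5 * np.array([alpha, beta * np.exp(1j * phi)])
    minus_branch = 0.5 * np.array([alpha, -beta * np.exp(1j * phi)])
    if rng.random() < 0.5:                       # outcome |+>_P
        out = plus_branch
    else:                                        # outcome |->_P
        out = np.diag([1, -1]) @ minus_branch    # feed-forward pi shift
    return out / np.linalg.norm(out)

alpha, beta, phi = 0.6, 0.8, np.pi / 3
out = phase_gate_with_feedforward(alpha, beta, phi)
target = np.array([alpha, beta * np.exp(1j * phi)])
assert np.allclose(out, target)  # U(phi) is applied for either outcome
```

Without the correction, the $|-\rangle_P$ branch would have to be discarded, which is exactly the factor-of-two loss in success probability that the feed forward removes.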
\begin{figure}
\begin{center}
\resizebox{\hsize}{!}{\includegraphics*{fig1.pdf}}
\end{center}
\caption{(Color online) Scheme of the experiment. FC -- fiber couplers,
VRC -- variable ratio couplers, PM -- phase modulators, D -- detectors.}
\label{fig-scheme}
\end{figure}
\section{Experiment}
The scheme of the setup is shown in Fig.~\ref{fig-scheme}. Pairs of photons are created by type-II collinear frequency-degenerate spontaneous parametric down-conversion (SPDC) in a two-millimeter long BBO crystal pumped by a diode laser (Coherent Cube) at 405\,nm. The photons are separated by a polarizing beam splitter and coupled into single-mode fibers. Using fiber polarization controllers the same polarizations are set on both photons. By means of fiber couplers and electro-optical phase modulators the required input states of the program and data qubits are prepared. To prepare state (\ref{eq-prog_qubit}) of the program qubit a fiber coupler (FC) with a fixed splitting ratio of 50:50 is used. An arbitrary state of the data qubit (\ref{eq-dat_qubit_in}) is prepared using an electronically controlled variable ratio coupler (VRC). All employed phase modulators (EO Space) are based on the linear electro-optic effect in lithium niobate. Their half-wave voltages are about 1.5\,V. These phase modulators (PM) exhibit relatively high dispersion. Therefore one PM is placed in each interferometer arm in order to compensate dispersion effects. Because the overall phase of a quantum state is irrelevant, it is equivalent to apply either a phase shift $\varphi$ to $|1\rangle$ or $-\varphi$ to $|0\rangle$.
The gate itself consists of the exchange of two rails of the input qubits and of the measurement on the data qubit (see Fig.~\ref{fig-scheme}). The measurement in the basis $\{| \pm \rangle \}$ is accomplished by a fiber coupler with a fixed splitting ratio of 50:50 and two single-photon detectors. In this experiment we use single-photon counting modules (Perkin-Elmer) based on actively quenched silicon avalanche photodiodes. Detectors D$_{p0}$, D$_{d0}$, and D$_{d1}$ belong to a quad module SPCM-AQ4C (total efficiencies 50--60\,\%, dark counts 370--440\,s$^{-1}$, response time 33--40\,ns). As detector D$_{p1}$, which serves the feed forward, a single module SPCM AQR-14FC is used because of its faster response (total efficiency about 50\,\%, dark counts 180\,s$^{-1}$, response time 17\,ns). The output of the detector is a 31\,ns long TTL (5\,V) pulse.
To implement the feed forward the signal from detector D$_{p1}$ is led to a passive voltage divider in order to adapt the 5\,V voltage level to about 1.5\,V (necessary for the phase shift of $\pi$) and then it is led directly to the phase modulator. The coaxial jumpers are as short as possible. The total delay including the time response of the detector is 20\,ns. To compensate this delay, photon wave-packets representing data qubits are retarded by fiber delay lines (one coil of fiber of the length circa 8\,m in each interferometer arm). Timing of the feed-forward pulse and the photon arrival was precisely tuned. Coherence time of photons created by our SPDC source is only several hundreds of femtoseconds.
The right-most block in Fig.~\ref{fig-scheme} enables us to measure the data qubit at the output of the gate in an arbitrary basis. These measurements are necessary to evaluate the performance of the gate.
The whole experimental setup is formed by two Mach-Zehnder interferometers (MZI). The length of the arms of the shorter MZI is about 10.5\,m (the upper interferometer in Fig.~\ref{fig-scheme}). The length of the arms of the longer one is about 21.5\,m (the lower interferometer in Fig.~\ref{fig-scheme}). To balance the arm lengths we use motorized air gaps with adjustable lengths. Inside the air gaps, polarizers and wave plates are also mounted. They serve for accurate setting of the polarizations of the photons (to obtain high visibilities the polarizations in both arms of each MZI must be the same).
To reduce the effect of the phase drift caused by fluctuations of temperature and temperature gradients we apply both passive and active stabilization. The experimental setup is covered by a shield minimizing air flux around the components, and both delay fiber loops are wound on an aluminium cylinder which is thermally isolated. In addition, after every three seconds of measurement an active stabilization is performed. It measures intensities for phase shifts 0 and $\pi/2$ and, if necessary, calculates a phase compensation and applies a corresponding corrective voltage to the phase modulator. This keeps the precision of the phase setting during the measurement period better than $\pi/200$. For the stabilization purposes we use a laser diode at 810\,nm. To ensure the same spectral range, both the laser beam and the SPDC-generated photons pass through the same band-pass interference filter (spectral FWHM 2\,nm, Andover). During the active stabilization the source is automatically switched from SPDC to the laser diode.
\section{Results}
Any quantum operation can be fully described by a completely
positive (CP) map. According to the Jamiolkowski-Choi isomorphism
any CP map can be represented by a positive-semidefinite operator
$\chi$ on the tensor product of input and output Hilbert spaces
$\mathcal{H}_{\mathrm{in}}$ and $\mathcal{H}_{\mathrm{out}}$
\cite{jam72,cho75}. The input state $\rho_{\mathrm{in}}$ transforms
according to
$$
\rho_{\mathrm{out}}= \mathrm{Tr}_{\mathrm{in}}[\chi
(\rho_{\mathrm{in}}^T\otimes I_{\mathrm{out}})].
$$
Combinations of different input states with measurements on the output quantum
system represent effective measurements performed on
$\mathcal{H}_{\mathrm{in}}\otimes\mathcal{H}_{\mathrm{out}}$.
A proper selection of the input states and final measurements
allows us to reconstruct matrix $\chi$ from measured data
using maximum likelihood (ML) estimation technique \cite{jez03,par04}.
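As a minimal illustration of this isomorphism (a NumPy sketch with the ordering $\mathcal{H}_{\mathrm{in}}\otimes\mathcal{H}_{\mathrm{out}}$ assumed; this is not the maximum-likelihood reconstruction code), one can build the ideal Choi operator of the phase gate and verify the transformation rule:

```python
import numpy as np

def apply_choi(chi, rho_in):
    """rho_out = Tr_in[ chi (rho_in^T x I_out) ] for a single qubit."""
    d = rho_in.shape[0]
    op = chi @ np.kron(rho_in.T, np.eye(d))
    op = op.reshape(d, d, d, d)            # indices (in, out, in', out')
    return np.trace(op, axis1=0, axis2=2)  # partial trace over the input

phi = np.pi / 2
U = np.diag([1.0, np.exp(1j * phi)])
basis = np.eye(2)
# Ideal Choi operator: sum_{ij} |i><j| (x) U |i><j| U^dag
chi_id = sum(
    np.kron(np.outer(basis[i], basis[j]),
            U @ np.outer(basis[i], basis[j]) @ U.conj().T)
    for i in range(2) for j in range(2)
)
psi = np.array([1.0, 1.0]) / np.sqrt(2)    # input state |+>
rho_out = apply_choi(chi_id, np.outer(psi, psi.conj()))
assert np.allclose(rho_out, np.outer(U @ psi, (U @ psi).conj()))
```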
For each phase shift, i.e., for a fixed state of the program qubit, we used six different input states of the data qubit, namely
$|0\rangle, |1\rangle, |\pm\rangle = (|0\rangle \pm |1\rangle)/\sqrt{2},$ and
$|\pm i \rangle = (|0\rangle \pm i |1\rangle)/\sqrt{2}$.
For each of these input states the state of the data qubit at the output of the gate was measured
in three different measurement bases, $\{|0\rangle, |1\rangle \},
\{ |\pm \rangle \},$ and $\{ |\pm i \rangle \}$.
Each time we simultaneously measured two-photon coincidence counts between detectors
D$_{p0}$ \& D$_{d0}$, D$_{p0}$ \& D$_{d1}$, D$_{p1}$ \& D$_{d0}$, D$_{p1}$ \& D$_{d1}$
in 12 three-second intervals. The unequal detector efficiencies were compensated by proper rescaling of the measured coincidence counts.
From these data we have reconstructed Choi matrices describing the functioning of the gate for several different phase shifts. In Figs.~\ref{fig-choi-halfpi} and \ref{fig-choi-pi} there are examples of the Choi matrices of the gate for $\phi=\pi/2$ and $\phi=\pi$, respectively.
\begin{figure}
\begin{center}
Reconstructed: \hspace{24mm} Ideal: \hspace*{11mm} \\[1mm]
\resizebox{0.49\hsize}{!}{\includegraphics*{fig2a.png}}
\resizebox{0.49\hsize}{!}{\includegraphics*{fig2b.png}}\\[5mm]
\resizebox{0.49\hsize}{!}{\includegraphics*{fig2c.png}}
\resizebox{0.49\hsize}{!}{\includegraphics*{fig2d.png}}
\end{center}
\caption{(Color online) Choi matrix for the gate with the feed forward when
$\phi = \pi/2$ is encoded into the program qubit. The left top
panel shows the real part of the reconstructed process matrix while
the left bottom one displays its imaginary part.
The two right panels show the real and imaginary part
of the ideal matrix.}
\label{fig-choi-halfpi}
\end{figure}
\begin{figure}
\begin{center}
Reconstructed: \hspace{24mm} Ideal: \hspace*{11mm} \\[1mm]
\resizebox{0.49\hsize}{!}{\includegraphics*{fig3a.png}}
\resizebox{0.49\hsize}{!}{\includegraphics*{fig3b.png}}\\[5mm]
\resizebox{0.49\hsize}{!}{\includegraphics*{fig3c.png}}
\resizebox{0.49\hsize}{!}{\includegraphics*{fig3d.png}}
\end{center}
\caption{(Color online) Choi matrix for the gate with the feed forward when
$\phi = \pi$ is encoded into the program qubit. The left top
panel shows the real part of the reconstructed process matrix while
the left bottom one displays its imaginary part.
The two right panels show the real and imaginary part
of the ideal matrix.}
\label{fig-choi-pi}
\end{figure}
To quantify the quality of gate operation we have calculated the process fidelity. If $\chi_{\mathrm{id}}$ is a one-dimensional projector then the common definition of process fidelity is
$$
F_\chi=\mathrm{Tr}[\chi \chi_{\mathrm{id}}] /
(\mathrm{Tr}[\chi]\mathrm{Tr}[\chi_{\mathrm{id}}]).
$$
Here $\chi_{\mathrm{id}}$ represents the ideal transformation
corresponding to our gate. In particular,
$$
\chi_{\mathrm{id}} = \sum_{i,j=0,1} |i \rangle\langle j|
\otimes U | i \rangle\langle j | U^\dag,
$$
where $U$ stands for the unitary operation (\ref{eq-Uphi}) applied by the gate.
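As a quick numerical check of this definition (an illustrative sketch only; the depolarizing admixture is a toy noise model, not our reconstructed data):

```python
import numpy as np

def process_fidelity(chi, chi_id):
    """F_chi = Tr[chi chi_id] / (Tr[chi] Tr[chi_id])."""
    return float(np.real(np.trace(chi @ chi_id)
                         / (np.trace(chi) * np.trace(chi_id))))

phi = np.pi
U = np.diag([1.0, np.exp(1j * phi)])
basis = np.eye(2)
chi_id = sum(
    np.kron(np.outer(basis[i], basis[j]),
            U @ np.outer(basis[i], basis[j]) @ U.conj().T)
    for i in range(2) for j in range(2)
)
assert np.isclose(process_fidelity(chi_id, chi_id), 1.0)

# Toy noisy process: the ideal gate mixed with the fully depolarizing
# channel, whose Choi operator on the 4-dimensional space is I/2.
p = 0.04
chi_noisy = (1 - p) * chi_id + p * np.eye(4) / 2
assert np.isclose(process_fidelity(chi_noisy, chi_id), 1 - 3 * p / 4)
```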
We have also reconstructed density matrices of the output
states of the data qubit corresponding to all input states and calculated fidelities and purities of the output states. The fidelity of output state $\rho_{\mathrm{out}}$ is defined as
$F=\langle \psi_{\mathrm{out}}| \rho_{\mathrm{out}}
|\psi_{\mathrm{out}} \rangle,$
where $|\psi_{\mathrm{out}} \rangle = U |\psi_{\mathrm{in}} \rangle$
with $|\psi_{\mathrm{in}} \rangle$ being the (pure) input state.
The purity of the output state is defined as
$\mathcal{P}=\mathrm{Tr}[\rho_{\mathrm{out}}^2]$. If the input
state is pure the output state is expected to be pure as well.
Table~\ref{tab-withFF} shows process fidelities for seven different phase shifts. It also shows the average and minimal values of output state fidelities and average and minimal purities of output states. Fidelities and purities are averaged over six output states corresponding to six input states described above. Also the minimum values are related to these sets of states.
\begin{table}
\begin{ruledtabular}
\begin{tabular}{cccccc}
$\phi$ & $F_{\chi}$ & $F_{\mathrm{av}}$ & $F_{\mathrm{min}}$ &
$\mathcal{P}_{\mathrm{av}}$ & $\mathcal{P}_{\mathrm{min}}$
\\ \hline
0 & 0.976 & 0.985 & 0.970 & 0.974 & 0.947 \\
$\pi/6$ & 0.977 & 0.986 & 0.972 & 0.975 & 0.951 \\
$\pi/3$ & 0.977 & 0.985 & 0.970 & 0.975 & 0.943 \\
$\pi/2$ & 0.974 & 0.983 & 0.973 & 0.975 & 0.953 \\
$2\pi/3$ & 0.978 & 0.987 & 0.962 & 0.988 & 0.961 \\
$5\pi/6$ & 0.972 & 0.981 & 0.953 & 0.974 & 0.944 \\
$\pi$ & 0.980 & 0.987 & 0.975 & 0.977 & 0.961 \\
\end{tabular}
\end{ruledtabular}
\caption{Process fidelities ($F_{\chi}$), average ($F_{\mathrm{av}}$)
and minimal ($F_{\mathrm{min}}$) output-state fidelities,
average ($\mathcal{P}_{\mathrm{av}}$) and minimal
($\mathcal{P}_{\mathrm{min}}$) output-state purities for different
phases ($\phi$) \emph{with} feed forward ($p_\mathrm{succ}=50\,\%$).}
\label{tab-withFF}
\end{table}
To evaluate how the feed forward affects the performance of the gate we have also calculated process fidelities, output state fidelities and output state purities for the cases when the feed forward was not active. It means we have selected only the situations when detector D$_{p0}$ (corresponding to $|+\rangle_P$) clicked and no corrective action was needed (like in Ref.\ \cite{mic08}). These values are displayed in Table~\ref{tab-withoutFF}. One can see that there is no substantial difference between the case \emph{with} the feed forward (success probability 50\,\%) and the case \emph{without} the feed forward (success probability 25\,\%).
\begin{table}
\begin{ruledtabular}
\begin{tabular}{cccccc}
$\phi$ & $F_{\chi}$ & $F_{\mathrm{av}}$ & $F_{\mathrm{min}}$ &
$\mathcal{P}_{\mathrm{av}}$ & $\mathcal{P}_{\mathrm{min}}$
\\ \hline
0 & 0.977 & 0.985 & 0.973 & 0.975 & 0.953 \\
$\pi/6$ & 0.975 & 0.985 & 0.972 & 0.973 & 0.949 \\
$\pi/3$ & 0.988 & 0.989 & 0.971 & 0.980 & 0.946 \\
$\pi/2$ & 0.979 & 0.986 & 0.976 & 0.976 & 0.957 \\
$2\pi/3$ & 0.981 & 0.989 & 0.966 & 0.982 & 0.935 \\
$5\pi/6$ & 0.974 & 0.984 & 0.961 & 0.976 & 0.947 \\
$\pi$ & 0.979 & 0.986 & 0.977 & 0.978 & 0.960 \\
\end{tabular}
\end{ruledtabular}
\caption{Process fidelities ($F_{\chi}$), average ($F_{\mathrm{av}}$)
and minimal ($F_{\mathrm{min}}$) output-state fidelities,
average ($\mathcal{P}_{\mathrm{av}}$) and minimal
($\mathcal{P}_{\mathrm{min}}$) output-state purities for different
phases ($\phi$) \emph{without} feed forward ($p_\mathrm{succ}=25\,\%$).}
\label{tab-withoutFF}
\end{table}
\section{Conclusions}
We have implemented a reliable and relatively simple electronic feed-forward system which is fast and does not require high voltage. We employed this technique to double the success probability of a programmable linear-optical quantum phase gate. We showed that the application of the feed forward does not substantially affect either the process fidelity or the output-state fidelities. Besides improving the efficiency of linear-optical quantum gates, this feed-forward technique can be used for other tasks, such as quantum teleportation experiments or minimal disturbance measurement.
\bigskip
\begin{acknowledgments}
The authors thank Lucie \v{C}elechovsk\'{a} for her advice and for her help
in the preparatory phase of the experiment.
This work was supported by the Czech Science Foundation (202/09/0747),
Palacky University (PrF-2011-015), and the Czech Ministry of Education
(MSM6198959213, LC06007).
\end{acknowledgments}
\section{Introduction}
The last few years have witnessed a surge of interest in building machine learning (ML) methods for scientific applications in diverse disciplines, e.g., hydrology \cite{jia2019physics}, biological sciences \cite{yazdani2019systems},
and climate science~\cite{faghmous2014big}.
Given the promising results from previous research, expectations are rising for using ML to accelerate scientific discovery and help address some of the biggest challenges facing humanity, such as water quality, climate, and healthcare. However, ML models focus on mining statistical relationships from data and thus often require large amounts of labeled observation data to tune their model parameters.
\begin{figure} [!t]
\centering
\subfigure[]{ \label{fig:AL-a}{}
\includegraphics[width=0.44\columnwidth]{AL.png}
}\hspace{-.3in}
\subfigure[]{ \label{fig:AL-b}{}
\includegraphics[width=0.54\columnwidth]{RTAL.png}
}
\vspace{-.1in}
\caption{(a) Pool-based active learning. (b) Real-time active learning.}
\label{fig:AL}
\end{figure}
Collecting labeled data is often expensive in scientific applications due to the substantial manual labor and material cost required to deploy sensors or other measuring instruments.
For example, collecting water temperature data commonly requires highly trained scientists to travel to sampling locations and deploy sensors within a lake or stream, incurring personnel and equipment costs for data that may not improve model predictions.
To make the data collection more efficient, we aim to develop a data-driven method that assists domain experts in determining when and where to deploy measuring instruments in real time so that data collection can be optimized for training ML models.
Active learning has shown great promise for selecting representative samples~\cite{settles2009active,felder2009active}. In particular, it aims to find a query strategy based on which
we can annotate samples that optimize the training of a predictive ML model. Traditional pool-based active learning methods are focused on selecting query samples
from a fixed set of data points (Fig.~\ref{fig:AL} (a)).
These techniques have been widely explored in image recognition~\cite{li2013adaptive,gal2017deep}
and natural language processing~\cite{shen2017deep,zhou2013active}.
These approaches mostly select samples based on their uncertainty level, which can be measured by Bayesian inference~\cite{kendall2017uncertainties} and Monte Carlo drop-out approximation~\cite{gal2016dropout}.
The samples with higher uncertainty tend to stay closer to the current decision boundary
and thus can bring higher information gain to refine the boundary. Some other approaches also explore the diversity of samples so that they can annotate samples different from those that have already been labeled~\cite{sener2017active,cai2017active,wu2019active}.
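For example, a minimal sketch of Monte Carlo dropout uncertainty for a linear predictor (the model, dropout rate, and inverted-dropout scaling here are illustrative assumptions, not the exact setup of the cited works):

```python
import numpy as np

def mc_dropout_uncertainty(x, weights, p_drop=0.5, n_samples=200, rng=None):
    """Predictive mean and variance from Monte Carlo dropout.

    Each stochastic forward pass drops each weight independently with
    probability p_drop (inverted-dropout scaling keeps the mean
    unbiased); the spread of the predictions serves as the
    uncertainty score.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    preds = []
    for _ in range(n_samples):
        mask = rng.random(weights.shape) >= p_drop
        preds.append(x @ (weights * mask) / (1 - p_drop))
    preds = np.asarray(preds)
    return preds.mean(), preds.var()

mean, var = mc_dropout_uncertainty(np.ones(3), np.ones(3))
# `var` is the uncertainty score; samples with a larger spread across
# stochastic passes are better labeling candidates.
```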
However, the pool-based active learning approaches cannot be used in scientific problems as they assume all the data points are available in a fixed set
while scientific data have to be annotated in real time. When monitoring scientific systems, new labels can only be collected by deploying sensors or other measuring instruments. Such labeling decisions have to be made immediately
after we observe the data at the current time, which requires balancing the information gain against the budget cost. Hence, these labeling decisions are made without access to future data and also cannot be changed afterwards. Such a real-time labeling process (also referred to as stream-based selective sampling) is described in Fig.~\ref{fig:AL}.
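The contrast with pool-based selection can be made concrete with a small sketch (the names and the fixed uncertainty threshold are our own illustrative choices; the framework proposed below instead learns the labeling policy):

```python
def stream_selective_sampling(stream, budget, threshold):
    """Stream-based selective sampling: label an arriving sample iff its
    uncertainty exceeds a threshold and budget remains. Decisions are
    made on the spot and cannot be revisited later.

    `stream` yields (sample_id, uncertainty) pairs in arrival order.
    """
    labeled = []
    for sample_id, uncertainty in stream:
        if budget == 0:
            break                    # no future samples can be labeled
        if uncertainty >= threshold:
            labeled.append(sample_id)
            budget -= 1
    return labeled

scores = [0.1, 0.9, 0.4, 0.95, 0.8]
picked = stream_selective_sampling(enumerate(scores), budget=2, threshold=0.7)
assert picked == [1, 3]   # the later informative sample (id 4) is lost
```

Unlike pool-based querying, the sample with score 0.8 cannot be recovered once the budget is spent, which is why a good labeling policy must trade immediate information gain against the remaining budget.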
Moreover, existing approaches do not take into account the representativeness of the selected samples given their spatial and temporal context. Scientific systems commonly involve multiple physical processes that evolve over time and also interact with each other. For example, in a river network, different river segments can have different thermodynamic patterns due to different catchment characteristics as well as climate conditions. Connected river segments can also interact with each other through the water advected from upstream to downstream segments. In this case, new annotated samples can be less helpful if they are selected from river segments that have spatio-temporal patterns similar to those of previously labeled samples. Instead, the model should take samples that cover different time periods and river segments with distinct properties.
To address these challenges, we propose a new framework \textbf{G}raph-based \textbf{R}einforcement Learning for \textbf{Rea}l-Time \textbf{L}abeling (GR-REAL), in which we formulate the real-time active learning problem as a Markov decision process.
This proposed framework is developed in the context of modeling streamflow and water temperature in river networks but the framework can be generally applied to many complex physical systems with interacting processes. Our method makes labeling decisions based on the spatial and temporal context of each river segment as well as the uncertainty level at the current time.
Once we determine the actions of whether to label each segment at the current time step, the collected labels can be used to refine
the way we represent the spatio-temporal context and estimate uncertainty for the observed samples at the next time step (i.e., the next state).
In particular, the proposed framework consists of a predictive model and a decision model. The predictive model extracts spatial and temporal dependencies from data and embeds such contextual information in a state vector. The predictive model also generates final predictions and estimates uncertainty based on the obtained embeddings. At the same time, the decision model is responsible for determining whether we will take labeling actions at the current time step based on the embeddings and outputs obtained from the predictive model. The collected labels are then used to refine the predictive model.
We train the decision model via reinforcement learning using past observation data.
During the training phase, the reward of labeling each river segment at each time step
can be estimated as the expectation of the accumulated performance improvement via dynamic programming over sequential training data.
Since this proposed data-driven method requires separate training data from the past history, which can be scarce in many scientific systems, we also propose a way to transfer knowledge from existing physics-based models which are commonly used by domain scientists to study environmental and engineering problems. The transferred knowledge can be used to initialize the decision model and thus less training data is required to fine-tune it to a quality model.
We evaluate our proposed framework in predicting streamflow and water temperature in the Delaware River Basin. Our proposed method produces superior prediction performance given limited budget for labeling. We also show that the distribution of collected samples is consistent with the dynamic patterns in river networks.
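To illustrate the reward estimation used during training, the following sketch computes discounted returns by a backward dynamic-programming pass (the per-step performance improvements and the discount factor $\gamma$ are our own assumptions for the sketch, not values fixed by the description above):

```python
def accumulated_rewards(improvements, gamma=0.95):
    """Backward dynamic-programming pass over a training sequence: the
    estimated reward at step t is the discounted sum of the per-step
    performance improvements from t onward."""
    returns = [0.0] * len(improvements)
    future = 0.0
    for t in reversed(range(len(improvements))):
        future = improvements[t] + gamma * future
        returns[t] = future
    return returns

# e.g. labeling decisions whose collected labels improved accuracy by
# these (hypothetical) amounts at each step:
assert accumulated_rewards([1.0, 0.0, 2.0], gamma=0.5) == [1.5, 1.0, 2.0]
```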
\section{Related Work}
Active learning techniques have been used to intelligently select query samples that help improve the learning performance of ML models~\cite{settles2009active}. These methods have shown much success in annotating image data~\cite{li2013adaptive,gal2017deep}
and text data~\cite{shen2017deep,zhou2013active}. In those problems, we can always hire human experts to visually annotate samples at any time after data have been collected because the mapping from input data (e.g., images, text sentences or short phrases) to labels can be perceived by humans. The major difference in scientific problems is that the relationships from data samples to labels cannot be fully captured by humans but often require specific measuring instruments deployed by domain scientists. Hence, these measurements must be taken instantly after we observe the data and the decisions cannot be changed afterwards.
Such real-time active learning tasks are also referred to as stream-based selective sampling~\cite{settles2009active}. Traditional stream-based selective sampling approaches rely on heuristic or statistical metrics to select informative data samples~\cite{dagan1995committee,smailovic2014stream,cohn1996active}
and cannot fully exploit the complex relationships between data samples in a long sequence and estimate the long-term potential reward of labeling each sample. More recently, reinforcement learning has been used to learn how-to-label behaviors~\cite{cheng2013feedback,wassermann2019ral}. However, these methods do not consider the spatial and temporal context of each sample, which is often important for determining the information gain of labeling the data. Besides, they commonly require a separate large training set, which is hard to obtain in scientific applications.
Recent advances in deep learning models have brought a huge potential for representing spatial and temporal context. For example, the Long-Short Term Memory (LSTM) has found great success in capturing temporal dependencies~\cite{jia2019physics} in scientific problems.
The Graph Convolutional Networks (GCN) model has also proven effective in
representing interactions between multiple objects.
Given its unique capacity, GCN has achieved the improved prediction accuracy in several scientific problems~\cite{qi2019hybrid,xie2018crystal}.
Simulation data have been used to assist in training ML models~\cite{jia2019physics,read2019process,shah2018airsim}. Since many ML models require an initial choice of model parameters before training, researchers have explored different ways to physically inform a model starting state.
One way to harness physics-based modeling knowledge is to use the physics-based model's simulated data to pre-train the ML model, which also alleviates data paucity issues. Jia \textit{et al.} extensively discuss this strategy~\cite{jia2019physics}. They pre-train their Physics-Guided Recurrent Neural Network (PGRNN) models for lake temperature modeling on simulated data generated from a physics-based model and fine-tune them with minimal observed data. They show that pre-training can significantly reduce the training data needed for a quality model. In addition, Read et al.~\cite{read2019process} show that such models are able to generalize better to unseen scenarios.
\section{Problem Definition and Preliminaries}
\subsection{Problem Definition}
Our objective is to model the dynamics of temperature and streamflow in a set of connected river segments given limited budget for collecting labels. We represent the connections amongst these river segments in a graph structure $\mathcal{G} = \{\mathcal{V},\mathcal{E},\textbf{W}\}$, where $\mathcal{V}$ represents the set of $N$ river segments and $\mathcal{E}$ represents the set of connections amongst river segments. Specifically, we create an edge $(i,j)\in\mathcal{E}$ if the segment $j$ is anywhere downstream of the segment $i$.
The matrix $\textbf{W}$ represents the adjacency level between each pair of segments, i.e., $\textbf{W}_{ij}=0$ means there is no edge from the segment $i$ to the segment $j$ and a higher value of $\textbf{W}_{ij}$ indicates that the segment $i$ is closer to the segment $j$ in terms of the stream distance.
More details of the adjacency matrix are discussed in Section~\ref{sec:dataset}.
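For illustration, one plausible construction of $\textbf{W}$ is sketched below (the exponential decay in stream distance and the row normalization are assumptions made for this sketch; the construction actually used is described in Section~\ref{sec:dataset}):

```python
import numpy as np

def build_adjacency(n, downstream_edges, stream_distances):
    """Weighted adjacency for a river network: W[i, j] > 0 iff segment j
    is downstream of segment i, with weights decaying in stream distance
    and each row normalized over its downstream connections."""
    W = np.zeros((n, n))
    for (i, j), d in zip(downstream_edges, stream_distances):
        W[i, j] = np.exp(-d)       # closer segments get larger weights
    row_sums = W.sum(axis=1, keepdims=True)
    np.divide(W, row_sums, out=W, where=row_sums > 0)
    return W

# Segments 0 and 1 both drain into segment 2; segment 2 has no outlet.
W = build_adjacency(3, [(0, 2), (1, 2)], [1.0, 2.0])
assert np.isclose(W[0, 2], 1.0) and np.isclose(W[1, 2], 1.0)
assert W[2].sum() == 0.0
```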
We are provided with a time series of inputs for each river segment at daily scale. The input features $\textbf{x}_{i}^t$ for each segment $i$ at time $t$ are a $D$-dimensional vector, which includes meteorological drivers, geometric parameters of the segments, etc. (more details can be found in Section~\ref{sec:dataset}). Assume we are currently at time~$T$. Once we observe input features $\{\textbf{x}_{i}^{t}\}_{i=1}^N$ at each future time step $t=T+1$ to $T+M$ for all the segments, we have to determine immediately whether to collect labels $\textbf{y}^t_i$ for each segment $i$. We also have to ensure that the labeling cost will not exceed the budget limit.
To train our proposed model, we assume that we have access to observation data in the history. In particular, for each segment $i$, we have input features $\textbf{X}_i=\{\textbf{x}_{i}^{1}, \textbf{x}_{i}^2, ..., \textbf{x}_{i}^T\}$ and their labels $\textbf{Y}=\{y^t_i\}$. As elaborated later in the method and result discussion, we also divide the time period from $t=1$ to $T$ into separate training and hold-out periods.
\subsection{Physics-based Streamflow and Temperature Model}
The Precipitation-Runoff Modeling System (PRMS)~\cite{markstrom2015prms} coupled with the Stream Network Temperature Model (SNTemp)~\cite{sanders2017documentation} forms a physics-based model that simulates daily streamflow and water temperature for river networks, as well as other variables. PRMS is a one-dimensional, distributed-parameter modeling system that translates spatially explicit meteorological information into water information including evaporation, transpiration, runoff, infiltration, groundwater flow, and streamflow. PRMS has been used to simulate catchment hydrologic variables relevant to management decisions at regional~\cite{lafontaine2013application} to national scales~\cite{regan2018description}, among other applications. The SNTemp module for PRMS
simulates mean daily stream water temperature for each river segment by solving an energy mass balance model which accounts for the effect of inflows (upstream, groundwater, surface runoff), outflows, and surface heating and cooling on heat transfer in each river segment. The SNTemp module is driven by the same meteorological drivers used in PRMS and also driven by the hydrologic information simulated by PRMS (e.g. streamflow, groundwater flow).
Calibration of PRMS-SNTemp is very time-consuming because it involves a large number of parameters (84 in total) that interact with each other both within segments and across segments.
\section{Method}
The proposed GR-REAL framework aims to actively train a predictive model in real time from streaming data,
as shown in Fig.~\ref{fig:framework}.
At each time step, the predictive model embeds the currently observed samples by incorporating the spatio-temporal context and provides its outputs (embeddings, predictions and uncertainty) to the decision model. The decision model then decides whether to label a subset (possibly none) of the observed samples at the current time. The labeled samples are then used to update the predictive model. In the following, we describe the predictive model and the decision model.
\begin{figure} [!t]
\centering
\includegraphics[width=0.85\columnwidth]{framework_new.png}
\caption{The overview of the GR-REAL framework. The black arrows show the feed-forward process using the predictive model. The blue arrows show the feedback process by the decision model.}
\label{fig:framework}
\end{figure}
\subsection{Recurrent Graph Neural Network}
Effective modeling of river segments requires the ability to capture their temporal thermodynamics and the influence received from upstream segments. Hence, we incorporate the information from both previous time steps and neighbors (i.e., upstream segments) when modeling each segment.
Our model is based on the LSTM model which has proven to be effective in capturing long-term dependencies.
Different from the standard LSTM, we develop a customized recurrent cell that combines the spatial and temporal context.
When applied to each segment $i$ at time $t$, the recurrent cell has a cell state $\textbf{c}_i^t\in \mathbb{R}^H$, which serves as a memory and allows
preserving the information from its history and neighborhood. Then the recurrent cell outputs a hidden representation $\textbf{h}_i^t\in \mathbb{R}^H$, from which we generate the target output. In the following, we describe the recurrent process of generating $\textbf{c}_i^t$ and $\textbf{h}_i^t$ based on the input $\textbf{x}_i^t$ and collected information from the previous time $t-1$.
For each river segment $i$ at time $t-1$, the model extracts latent variables
which contain relevant information to pass to its downriver segments. We refer to these latent variables as transferred variables. For example, the amount of water advected from each segment and its water temperature can directly impact the change of water temperature for its downriver segments.
We generate the transferred variables $\textbf{q}_{i}^{t-1}$ from the hidden representation $\textbf{h}_{i}^{t-1}$ at the previous time step:
\begin{equation}
\small
\textbf{q}_{i}^{t-1} = \text{tanh}(\textbf{U}_q\textbf{h}_{i}^{t-1}+\textbf{b}_q),
\end{equation}
where $\textbf{U}_q$ and $\textbf{b}_q$ are model parameters.
Similar to the LSTM, we generate a candidate cell state $\bar{\textbf{c}}_i^t$, a forget gate $\textbf{f}_i^t$ and an input gate $\textbf{g}_i^t$ by combining $\textbf{x}_i^t$ and the hidden representation at the previous time step $\textbf{h}_i^{t-1}$, as follows: \begin{equation}
\small
\begin{aligned}
\bar{\textbf{c}}_i^t &= \text{tanh}(\textbf{U}_c \textbf{h}_i^{t-1} + \textbf{V}_c \textbf{x}_i^t+\textbf{b}_c),\\
\textbf{f}_i^t &= \sigma(\textbf{U}_f \textbf{h}_i^{t-1} + \textbf{V}_f \textbf{x}_i^t+\textbf{b}_f),\\
\textbf{g}_i^t &= \sigma(\textbf{U}_g \textbf{h}_i^{t-1} + \textbf{V}_g \textbf{x}_i^t+\textbf{b}_g),
\end{aligned}
\end{equation}
where $\{\textbf{U}_{l},\textbf{V}_l,\textbf{b}_l\}_{l=c,f,g}$ are model parameters.
After gathering the transferred variables for all the segments, we develop a new recurrent cell structure for each segment $i$ that integrates the transferred variables from its upstream segments into the computation of the cell state $\textbf{c}_i^t$. The forget gate $\textbf{f}_i^t$ and input gate $\textbf{g}_i^t$ are used to filter the information from previous time step and current time. This process can be expressed as:
\begin{equation}
\small
\textbf{c}_{i}^{t} = \textbf{f}_{i}^{t}\otimes (\textbf{c}_{i}^{t-1}+\sum_{(j,i)\in \mathcal{E}}\textbf{W}_{ji}\textbf{q}_{j}^{t-1})+\textbf{g}_{i}^t\otimes\bar{\textbf{c}}_{i}^t,
\label{conv}
\end{equation}
where $\otimes$ denotes the entry-wise product.
We can observe that the forget gate not only filters the previous information from the segment $i$ itself but also from its neighbors (i.e., upstream segments). Each upstream segment $j$ is weighted by the adjacency level $\textbf{W}_{ji}$ between $j$ and $i$. When a river segment has no upstream segments (i.e., headwater), the computation of
$\textbf{c}_i^t$ is the same as in the standard LSTM. In Eq.~\ref{conv}, we use $\textbf{q}_{j}^{t-1}$
from the previous time step
because of the time delay in transferring the influence from upstream to downriver segments (the maximum travel time is approximately one day according to PRMS).
Then we generate the output gate $\textbf{o}^t_i$ to filter the cell state at $t$ and
output the hidden representation
\begin{equation}
\small
\begin{aligned}
\textbf{o}_i^t &= \sigma(\textbf{U}_o \textbf{h}_i^{t-1} + \textbf{V}_o \textbf{x}_i^t+\textbf{b}_o),\\
\textbf{h}_i^t &= \textbf{o}_i^t\otimes \text{tanh}(\textbf{c}_i^t),
\end{aligned}
\label{hid}
\end{equation}
where $\{\textbf{U}_o,\textbf{V}_o,\textbf{b}_o\}$ are model parameters.
Finally, we generate the predicted output from the hidden representation as follows:
\begin{equation}
\small
\hat{\textbf{y}}_i^t = \textbf{W}_y \textbf{h}_i^t+\textbf{b}_y,
\label{prd}
\end{equation}
where $\textbf{W}_y$ and $\textbf{b}_y$ are model parameters.
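As an illustrative sketch (not the paper's released implementation), the full recurrent update above, including Eqs.~\ref{conv}--\ref{prd}, can be written for all $N$ segments at once with plain NumPy; the parameter names mirror the $\textbf{U}/\textbf{V}/\textbf{b}$ matrices in the equations, and the toy initializer is an assumption for demonstration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_params(D, H, out_dim=1, seed=0):
    # Toy initializer; names mirror the U/V/b matrices in the equations.
    rng = np.random.default_rng(seed)
    p = {}
    for g in ["q", "c", "f", "g", "o"]:
        p["U" + g] = 0.1 * rng.standard_normal((H, H))
        p["b" + g] = np.zeros(H)
    for g in ["c", "f", "g", "o"]:
        p["V" + g] = 0.1 * rng.standard_normal((D, H))
    p["Wy"] = 0.1 * rng.standard_normal((H, out_dim))
    p["by"] = np.zeros(out_dim)
    return p

def rgnn_step(x, h_prev, c_prev, W_adj, p):
    """One step of the spatio-temporal recurrent cell for all N segments.

    x: (N, D) inputs; h_prev, c_prev: (N, H) states from time t-1;
    W_adj: (N, N) adjacency levels, W_adj[j, i] weighting upstream j -> i.
    """
    q_prev = np.tanh(h_prev @ p["Uq"] + p["bq"])             # transferred variables
    c_bar = np.tanh(h_prev @ p["Uc"] + x @ p["Vc"] + p["bc"])  # candidate state
    f = sigmoid(h_prev @ p["Uf"] + x @ p["Vf"] + p["bf"])    # forget gate
    g = sigmoid(h_prev @ p["Ug"] + x @ p["Vg"] + p["bg"])    # input gate
    # Cell state: upstream influence enters as sum_j W[j, i] * q_j^{t-1}.
    c = f * (c_prev + W_adj.T @ q_prev) + g * c_bar
    o = sigmoid(h_prev @ p["Uo"] + x @ p["Vo"] + p["bo"])    # output gate
    h = o * np.tanh(c)
    y_hat = h @ p["Wy"] + p["by"]                            # prediction head
    return h, c, y_hat
```

For a headwater segment, the corresponding row of `W_adj.T @ q_prev` is zero, so the update reduces to a standard LSTM step, as stated above.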
After applying this recurrent process to all the time steps, we define a loss using true observations
$\textbf{Y}=\{\textbf{y}_{i}^{t}\}$ that are collected at certain time steps and certain segments, as follows:
\begin{equation}
\small
\mathcal{L}_{\text{RGrN}} = \frac{1}{|\textbf{Y}|} \sum_{\{(i,t)|\textbf{y}_{i}^{t}\in \textbf{Y}\}} (\textbf{y}_{i}^{t}-\hat{\textbf{y}}_{i}^{t})^2.
\label{loss_PGRGrN}
\end{equation}
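In code, this loss is simply a mean squared error restricted to the (segment, time) pairs that actually carry observations; a NumPy sketch:

```python
import numpy as np

def masked_mse(y_true, y_pred, observed):
    """MSE over observed (segment, time) pairs only.

    y_true, y_pred: (N, T) arrays; observed: (N, T) boolean mask that is
    True where a label y_i^t was actually collected.
    """
    err = (y_true - y_pred)[observed]
    return float(np.mean(err ** 2))
```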
\subsection{Markov Decision Process for Active Learning}
The real-time labeling problem can be naturally formulated as a Markov sequential decision making problem. Here each state corresponds to the current status of the predictive model given the input of currently observed samples.
Given the current state, an intelligent agent (i.e., the decision model) needs to take the optimal actions that maximize the potential reward. Here the actions represent the decisions of whether we will label each river segment at the current time $t$. The reward can be measured by the improvement of predictive performance on an independent hold-out dataset. Our goal is to learn a decision model that maximizes the gain in predictive performance.
Specifically, we represent the state variable at each time $t$ as $\textbf{S}^t\in \mathbb{R}^{N\times (H+3)}$,
where each row $\textbf{s}_i^t=[\textbf{h}_i^t, \hat{\textbf{y}}_i^t,\textbf{u}_i^t,\textbf{b}^t]$ is the concatenation of the model embedding $\textbf{h}_i^t$, prediction $\hat{\textbf{y}}_i^t$, and uncertainty $\textbf{u}_i^t$ of the segment $i$ obtained from the predictive model, as well as the remaining budget $\textbf{b}^t$. Here the uncertainty level is estimated by the Monte Carlo dropout strategy used in previous work~\cite{daw2020physics,gal2016dropout}. Actions taken at each time are encoded by an $N$-by-$2$ matrix $\textbf{A}^t$ where each row $\textbf{a}^t_{i.}$ is a one-hot vector indicating whether we label the segment $i$ ([1, 0]) or not ([0, 1]).
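The uncertainty entry of the state can be estimated in the spirit of Monte Carlo dropout: evaluate the predictor several times under random dropout masks and use the spread of the predictions. A hedged sketch, where the `predict` callback and its keep-mask interface are illustrative assumptions rather than the paper's API:

```python
import numpy as np

def mc_dropout_uncertainty(predict, x, n_samples=10, p_drop=0.2, seed=0):
    """Mean prediction and per-output std under random dropout masks.

    `predict(x, keep)` is assumed to evaluate the model with units masked
    by the boolean keep-vector `keep` (hypothetical interface).
    """
    rng = np.random.default_rng(seed)
    preds = [predict(x, rng.random(x.shape[-1]) >= p_drop)
             for _ in range(n_samples)]
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)
```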
The decision model is used to determine optimal actions to take at each time. In particular, the decision model takes the input of current state $\textbf{S}^t$ and outputs the potential future reward when the agent performs each possible action $\textbf{A}^t$
, i.e., the $Q$ value $Q(\textbf{S}^t,\textbf{A}^t)$.
To train such a decision model, we need to estimate training labels for the $Q$ values by dynamic programming.
Specifically, we consider a set of four-tuple samples over the sequence, each consisting of $\{\textbf{S}^t,\textbf{A}^t, R(\textbf{S}^t,\textbf{A}^t), \textbf{S}^{t+1}\}$, where $R(\textbf{S}^t,\textbf{A}^t)$ is the immediate reward of performing actions $\textbf{A}^t$ at state $\textbf{S}^t$, measured as the reduction of prediction RMSE on an independent hold-out dataset, and $\textbf{S}^{t+1}$ is the new model state after we observe data at $t+1$.
We can estimate the training labels $Q(\textbf{S}^t,\textbf{A}^t)$ via dynamic programming, as follows:
\begin{equation}
Q(\textbf{S}^t,\textbf{A}^t) = R(\textbf{S}^t,\textbf{A}^t)+\gamma \max_{\textbf{A}^{t+1}} Q(\textbf{S}^{t+1},\textbf{A}^{t+1}),
\end{equation}
where $\gamma$ is a discount factor.
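This bootstrapped target is a one-line computation; a minimal sketch:

```python
def q_target(reward, q_next, gamma=0.8):
    """Training label for Q(S^t, A^t): the immediate reward plus the
    discounted best Q value attainable from the next state S^{t+1}.

    q_next: iterable of Q(S^{t+1}, A^{t+1}) over candidate actions.
    """
    return reward + gamma * max(q_next)
```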
During the prediction phase, we can select optimal actions that maximize the $Q$ value, as follows:
\begin{equation}
\textbf{A}^{t}_* = \text{argmax}_{\textbf{A}^t} Q(\textbf{S}^t,\textbf{A}^t).
\end{equation}
Given a large number of river segments, the action space is exponentially large ($2^N$) and model learning can thus be computationally intractable. In this work, we consider a simplified action space by performing actions independently on each river segment given its current state, reducing the number of possible actions to $2N$. Nevertheless, the selected actions for different river segments are still related to each other since we embed the spatial context of each segment in its state vector.
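Under this simplification, greedy action selection becomes a row-wise comparison of per-segment $Q$ values; an illustrative sketch:

```python
import numpy as np

def select_actions(q_values):
    """q_values: (N, 2) array with column 0 = Q("label the segment") and
    column 1 = Q("skip it").  Treating segments independently shrinks the
    2^N joint action space to 2N per-segment choices.  Returns a boolean
    vector that is True where the segment should be labeled."""
    return q_values[:, 0] > q_values[:, 1]
```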
\begin{algorithm}[!h]
\footnotesize
\caption{Training procedure of GR-REAL.}
\begin{algorithmic}[1]
\REQUIRE Training data $\textbf{X}_{tr} = \{\textbf{x}_i^t\}$ for $i=1:N$ and $t=1:T1$, and their available labels $\{\textbf{y}_i^t\}$; Holdout data $\textbf{X}_{hd} =\{\textbf{x}_i^t\}$ for $i=1:N$ and $t=T1+1:T$, and their available labels $\{\textbf{y}_i^t\}$;
\FOR{training epoch $k\leftarrow 1$ to $\text{Train\_iteration}$}
\STATE{Labeled set $\mathcal{L}=\{\}$, state transition set $\mathcal{M}=\{\}$.}
\STATE{Initialize the predictive model.}
\FOR{time step $t\leftarrow 1$ to $T1$}
\IF{remaining budget$\le$0}
\STATE{Break}
\ENDIF
\STATE{Generate $\textbf{h}^t$, $\hat{\textbf{y}}^t$, $\textbf{u}^t$
by the predictive model.}
\STATE{Concatenate the obtained values into the state vector $\textbf{s}^t$.}
\IF{$t>1$}
\STATE{Add $(\textbf{S}^{t-1},\textbf{S}^{t},\textbf{A}^{t-1}, R(\textbf{S}^{t-1},\textbf{A}^{t-1}))$ to $\mathcal{M}$.}
\STATE{Update the decision model using samples in $\mathcal{M}$.}
\ENDIF
\FOR{river segment $i\leftarrow 1$ to $N$}
\STATE{Predict $Q(\textbf{s}_i^t,\cdot)$ for different actions by the decision model.}
\STATE{Select the action $\textbf{a}_i^t$ that leads to the highest $Q$ value. }
\ENDFOR
\STATE{Update $\mathcal{L}$ with labeled samples and reduce the budget.}
\STATE{Update the predictive model using samples in $\mathcal{L}$ and measure the performance improvement on $\textbf{X}_{hd}$ as the reward $R(\textbf{S}^t,\textbf{A}^t)$.}
\ENDFOR
\ENDFOR
\end{algorithmic}
\label{algorithm}
\end{algorithm}
\vspace{-.15in}
\subsection{Implementation details}
We summarize the training process in Algorithm~\ref{algorithm}. Here we wish to clarify the difference between the training procedure and the test procedure. It is noteworthy that we repeatedly train our model by going through the training period for multiple passes (Line 1 in Algorithm~\ref{algorithm}). This is critical for refining the decision model given the fact that it can make poor decisions (especially for the first few time steps) during the first few passes. When we apply the trained model to the test period (i.e., $T+1$ to $T+M$), we can only take one pass over the test data in a typical setting of real-time active learning.
In this case, the decision model cannot change any decisions that have been made in the past.
During the training process, we also allow the selection of random query samples from the training period with a probability of 0.5\%. This helps the decision model explore a diverse set of training samples and thus obtain a better estimate of the potential reward resulting from each labeling action.
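This random exploration can be folded into the greedy decision as a small $\epsilon$-greedy wrapper; the sketch below is an assumed formulation of that step, not code from the paper:

```python
import numpy as np

def explore_or_exploit(greedy_label, eps=0.005, rng=None):
    """With probability eps (0.5% during training), replace the greedy
    labeling decision by a random one so the decision model sees a
    diverse set of query samples."""
    rng = rng if rng is not None else np.random.default_rng()
    if rng.random() < eps:
        return bool(rng.integers(2))  # random label / skip decision
    return greedy_label
```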
\subsection{Policy Transfer}
Due to the substantial human effort and material costs to collect observation data, we often have limited data for training ML models. For example, the Delaware River Basin is one of the most widely studied basins in the United States but still has less than 2\% daily temperature observation coverage for over 80\% of its internal river segments~\cite{read2017water}. Such sparse data makes it challenging to train
a good decision model.
To address this issue, we propose to transfer the model learned from simulation data produced by physics-based models. Physics-based models are built based on known physical relationships that transform input features to the target variable. In the context of modeling river networks, we use the PRMS-SNTemp model to simulate target variables (i.e., temperature and streamflow) given the input drivers. Hence, for the input features $\textbf{x}_{1:N}^{1:T}$ for all the $N$ river segments in a past period of length $T$, we can generate their simulated target variables $\textbf{y}_{1:N}^{1:T}$ by running the PRMS-SNTemp model.
We can pre-train the decision model using the simulation data. Then we can refine the decision model when it is applied to true observations.
It is noteworthy that simulation data from the PRMS-SNTemp model are imperfect: they provide only a synthetic realization of the physical responses of a river system to a given set of input features. Nevertheless, pre-training a neural network using simulation data allows the network to emulate synthetic but physically realistic phenomena.
When applying the pre-trained model to a real system, we fine-tune the model using true observations. Our hypothesis is that the pre-trained model is much closer to the optimal solution and thus requires less true labeled data to train a good quality model.
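The transfer recipe itself is just two training phases sharing one set of weights; a schematic sketch in which `fit` stands in for any training loop and the epoch counts are illustrative assumptions:

```python
def pretrain_then_finetune(model, sim_data, obs_data, fit,
                           pre_epochs=50, tune_epochs=10):
    """Warm-start on abundant simulated (x, y) pairs from the
    physics-based model, then refine the same weights on the sparse
    true observations."""
    fit(model, *sim_data, epochs=pre_epochs)   # pre-train on simulations
    fit(model, *obs_data, epochs=tune_epochs)  # fine-tune on observations
    return model
```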
\section{Experimental Results}
\subsection{Dataset and test settings}
\label{sec:dataset}
The dataset is pulled from the U.S. Geological Survey's National Water Information System~\cite{us2016national} and the Water Quality Portal~\cite{read2017water}, the largest standardized water quality data set for inland and coastal waterbodies~\cite{read2017water}.
Observations at a specific latitude and longitude were matched to river segments that vary in length from 48 to 23,120 meters. The river segments were defined by the national geospatial fabric used for the National Hydrologic Model as described by Regan et al.~\cite{regan2018description}, and the river segments are split up to have roughly a one day water travel time. We match observations to river segments by snapping observations to the nearest river segment within a tolerance of 250 meters. Observations farther than 5,000 m along the river channel to the outlet of a segment were omitted from our dataset.
We study a subset of the Delaware River Basin with 42 river segments that feed into the mainstem Delaware River at Wilmington, DE. We use input features at the daily scale from Oct 01, 1980 to Sep 30, 2016 (13,149 dates). The input features have 10 dimensions which include daily average precipitation, daily average air temperature, date of the year, solar radiation, shade fraction, potential evapotranspiration and the geometric features of each segment (e.g., elevation, length, slope and width).
Water temperature observations were available for 32 segments but the temperature was observed only on certain dates. The number of temperature observations available
for each observed segment ranges from 1 to 9,810 with a total of 51,103
observations across all dates and segments. Streamflow observations were available for 18 segments. The number of streamflow observations available
for each observed segment ranges from 4,877 to 13,149
with a total of 206,920 observations across all dates and segments.
We divide the available data into four periods, training period (Oct 01, 1980 - Sep 30, 1989), hold-out period (Oct 01, 1989 - Sep 30, 1998), test period (Oct 01, 1998 - Sep 30, 2007) and evaluation period (Oct 01, 2007 - Sep 30, 2016). We assume the current time $T+1$ starts from Oct 01, 1998, i.e., the start of the testing period. We will apply the trained decision model to the test period to collect new samples and use them to train the predictive model. Then the predictive model will be evaluated in the evaluation period.
We will compare with a set of baselines:
\begin{itemize}
\vspace{-.1in}
\itemsep0em
\item Random selection: This method assumes access to the entire set of testing data (so it is a pool-based method).
We randomly select a subset of query samples (size equal to the budget) from the test data for labeling.
\item Uncertainty: For each time step, we estimate the uncertainty of each data sample (i.e., each node in the graph) using the method presented in previous work~\cite{daw2020physics,gal2016dropout}. Then we select the query samples if their uncertainty values are above a threshold. This threshold is estimated in the training period.
\item Uncertainty+Centrality+Density (UDC): This is a graph-based active learning method. For each time step, we compute the weighted summation of the uncertainty (measured by~\cite{daw2020physics,gal2016dropout}), the centrality and the density of each node following previous work~\cite{cai2017active,gao2018active}. Then we select the query samples if their summation values are above a threshold. This threshold is estimated in the training period.
\item ROAL~\cite{huang2019improvement}: This method solves the real-time labeling problem using reinforcement learning. It considers each segment separately but uses an LSTM to estimate the $Q$ value.
\vspace{-.1in}
\end{itemize}
Here the first three baselines use the same predictive model as our proposed GR-REAL method,
which uses a one-layer graph structure. Since the adjacency matrix $\textbf{W}$ includes $(i,j)$ pairs for segment $j$ anywhere downstream of segment $i$, the one-layer graph structure can still capture spatial dependencies between river segments that are not directly connected.
We generate the adjacency matrix $\textbf{W}$ based on the stream distance between each pair of river segment outlets, represented as $\text{dist}(i,j)$. We standardize the stream distance and then compute the adjacency level as $\textbf{W}_{ij}=1/(1+\text{exp}(\text{dist}(i,j)))$ for each edge $(i,j)\in\mathcal{E}$. The hidden variables in the networks have a dimension of 20. The hyper-parameter $\gamma$ is set to 0.8. For uncertainty estimation, we use a dropout probability of 0.2 and randomly create 10 different dropout networks. During training and testing, we also set a yearly limit so as to avoid having too many query samples selected from a single year. Here we set the yearly limit to $1.2\times\text{Budget}/\#\text{years}$.
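The adjacency construction can be sketched as follows; standardizing over the edge distances is an assumption about the exact normalization, which the paper does not spell out:

```python
import numpy as np

def adjacency_from_distances(dist, edges):
    """W_ij = 1 / (1 + exp(dist'(i, j))) on edges (i, j), 0 elsewhere,
    where dist' is the standardized stream distance between outlets.

    dist:  (N, N) stream distances; edges: (i, j) pairs with j downstream of i.
    """
    d = np.array([dist[i, j] for i, j in edges], dtype=float)
    d = (d - d.mean()) / (d.std() + 1e-12)  # standardize over edges (assumption)
    W = np.zeros_like(dist, dtype=float)
    for (i, j), dij in zip(edges, d):
        W[i, j] = 1.0 / (1.0 + np.exp(dij))
    return W
```

Because the logistic map is monotone decreasing in the distance, closer segment pairs receive higher adjacency levels, matching the description in Section~\ref{sec:dataset}.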
\subsection{Predictive performance}
\begin{table}[!t]
\footnotesize
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\newcolumntype{C}[1]{>{\centering\arraybackslash}p{#1}}
\centering
\caption{RMSE (standard deviation) for streamflow modeling ($m^3/s$) with different budgets. }
\begin{tabular}{l|C{0.7cm} C{0.7cm} C{0.7cm} C{0.7cm} C{0.7cm}}
\hline
\textbf{Samples} &\textbf{100} & \textbf{300} & \textbf{500} & \textbf{1000}& \textbf{2000}\\ \hline
\multirow{2}{*}{Random} & 6.24 & 5.97 & 5.92 &5.66 &5.59 \\
& (0.52) & (0.34) & (0.36) & (0.20)&(0.23)\\ \hline
\multirow{2}{*}{Uncertain} & 6.16 &5.88 &5.83 &5.64 &5.43 \\
& (0.26) & (0.24) & (0.20) & (0.12)&(0.14)\\ \hline
\multirow{2}{*}{UDC} & 6.12 &5.84 &5.76 &5.49 &5.44 \\
& (0.23) & (0.26) & (0.19) & (0.12)&(0.12)\\ \hline
\multirow{2}{*}{ROAL} & 5.79 &5.74 &5.70 &5.22 &5.36 \\
& (0.25) & (0.22) & (0.13) & (0.11)&(0.11)\\ \hline
\multirow{2}{*}{GR-REAL} & 5.33 &5.40 & 5.41 & 4.80 & 4.78 \\
& (0.21) & (0.22) & (0.14) & (0.11)&(0.08)\\
\hline
\end{tabular}
\label{perf_flow}
\end{table}
\begin{table}[!t]
\footnotesize
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\newcolumntype{C}[1]{>{\centering\arraybackslash}p{#1}}
\centering
\caption{RMSE (standard deviation) for temperature modeling ($^\circ C$) with different budgets. }
\begin{tabular}{l|C{0.7cm} C{0.7cm} C{0.7cm} C{0.7cm} C{0.7cm}}
\hline
\textbf{Samples} &\textbf{100} & \textbf{300} & \textbf{500} & \textbf{1000}& \textbf{2000}\\ \hline
\multirow{2}{*}{Random} & 4.42 &3.89& 3.42& 3.31& 3.25\\
& (0.45)& (0.46)& (0.34)& (0.14)& (0.09) \\\hline
\multirow{2}{*}{Uncertain} & 4.16& 3.51& 3.40& 3.28 &3.27 \\
& (0.38)& (0.35)& (0.35)& (0.17)& (0.06) \\ \hline
\multirow{2}{*}{UDC} & 4.17& 3.56& 3.33& 3.24& 3.26 \\
& (0.29)& (0.24)& (0.26)& (0.12)& (0.04) \\ \hline
\multirow{2}{*}{ROAL} & 4.04& 3.54& 3.38& 3.24& 3.24 \\
& (0.27)& (0.19)& (0.19)& (0.13)& (0.08) \\ \hline
\multirow{2}{*}{GR-REAL} & 3.40& 3.34& 3.29& 3.23& 3.24 \\
& (0.18)& (0.16)& (0.15)& (0.09)& (0.05) \\
\hline
\end{tabular}
\label{perf_temp}
\end{table}
In Tables~\ref{perf_flow} and~\ref{perf_temp}, we report the performance of each method with different budget limits (i.e., the number of allowed labeled samples). We repeat each method five times and report the mean and standard deviation of prediction RMSE (in the evaluation period). In general, the proposed GR-REAL outperforms other methods by a considerable margin. For temperature modeling, all the methods have similar performance when they have access to sufficient labeled samples (e.g., 2000 samples), but GR-REAL outperforms other methods when we have a limited budget. Here the methods based on model uncertainty, node density and centrality can be less helpful because these measures indicate only the representativeness of samples at the current time but not the potential future benefit in a real-time sequence. The ROAL method also does not perform as well as the proposed method as it does not take into account the spatial context when estimating the potential reward of labeling each sample.
\begin{figure} [!h]
\centering
\includegraphics[width=0.5\columnwidth]{rt_label.png}
\vspace{-.2in}
\caption{The relationship between the number of selected query samples and the change of prediction RMSE over time.}
\label{fig:track}
\end{figure}
In Fig.~\ref{fig:track}, we show an example of the samples selected by GR-REAL in streamflow modeling (training period). It can be clearly seen that many samples are labeled right at the time when the prediction error of the predictive model starts to increase. The error then decreases after we refine the model using the labeled samples. This shows the effectiveness of the proposed method in automatically detecting query samples which can bring large potential benefit for model training.
\subsection{Distribution of selected data}
In Fig.~\ref{fig:data_sel}, we show the distribution of selected data samples over the entire test period for temperature prediction. Given the budget limit of 1000, we observe that GR-REAL selects most samples in the summer period (Fig.~\ref{fig:data_sel}~(a)), especially before 2004. This is because river temperatures in the Delaware River Basin are usually
much more variable in the summer period, and thus more samples are selected to learn these complex dynamic patterns. After 2004, the model selects fewer data in the same ``peak'' period since the potential reward for training the model decreases after it has learned from samples in the same period in previous years. When we reduce the budget to 500, GR-REAL also selects more samples in time periods other than the summer period (especially after 2004) because the budget is limited and the model needs to balance the performance over the entire year to get the optimal overall performance.
\begin{figure} [!t]
\centering
\subfigure[]{ \label{fig:a}
\includegraphics[width=0.45\columnwidth]{AQL_1000.png}
}\hspace{-.1in}
\subfigure[]{ \label{fig:a}
\includegraphics[width=0.45\columnwidth]{AQL_500.png}
}
\vspace{-.2in}
\caption{The distribution of selected query samples over time for temperature modeling given the budget of (a) 1000 and (b) 500.}
\label{fig:data_sel}
\end{figure}
\begin{figure} [!t]
\centering
\subfigure[]{ \label{fig:a}
\includegraphics[width=0.45\columnwidth]{dist_flow.png}
}\hspace{-.1in}
\subfigure[]{ \label{fig:b}
\includegraphics[width=0.45\columnwidth]{dist_samples_flow.png}
}\vspace{-.2in}
\caption{(a) Number of river segments for each streamflow range. (b) Distribution of selected samples over river segments of different streamflow ranges. }
\label{fig:sel_sp}
\end{figure}
For streamflow modeling, we show the distribution of selected samples over segments with different streamflow ranges. For each segment, we first compute its average streamflow value, and we show the histogram of average streamflow over all the 18 segments (which have streamflow observations) in Fig.~\ref{fig:sel_sp}~(a). Then we show in Fig.~\ref{fig:sel_sp}~(b) the distribution of selected samples with respect to the average streamflow of the segments from which the samples are taken. We can see that most samples are selected from the two segments with the largest streamflow values and from the segments with average streamflow around 3\,$m^3/s$.
We hypothesize that labeled samples from segments with high streamflow ($>$12\,$m^3/s$) and in the middle flow range ($1$--$5\,m^3/s$) are more helpful to refine the model so as to reduce the overall prediction error. In contrast, more samples in the low-flow range ($<$1\,$m^3/s$) bring limited improvement to the overall error because errors made on low-flow segments tend to be much smaller.
This is one limitation of the proposed method as accurate prediction on low-flow segments is also important for understanding the aquatic ecosystem.
These limitations may be potential opportunities for future work.
\subsection{Performance of different graph models}
Here we show how the representation of graphs impacts the learning performance (Table~\ref{fig:perf_graph}). We consider the following three representations: 1) Downstream-neighbor graph, which is the graph used in our implementation. Here we add edges from segment $i$ to segment $j$ if segment $j$ is anywhere downstream of segment $i$. Consider three segments $\{a,b,c\}$ in an upstream-to-downstream consecutive sequence. We will include the edges $ab$, $bc$ and $ac$ in our graph, and their adjacency levels are determined by the stream distance. 2) Direct-neighbor graph, which only includes connected neighbors. In the above example, we will only create the edges $ab$ and $bc$. 3) No-neighbor graph, which is equivalent to an RNN model trained using data from all the segments.
The choice of graph representation can affect the selection of representative samples. The improvement from the no-neighbor graph to the direct-neighbor graph shows that modeling the spatial context helps select more informative samples. The downstream-neighbor graph results in better performance since a river segment can impact downriver segments that are multiple hops away, and thus incorporating multi-hop relationships can help better embed the contextual information.
\begin{table}[!t]
\footnotesize
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\centering
\caption{The prediction RMSE (budget=500) using different graph structures. }
\begin{tabular}{|l|cc|}
\hline
\textbf{Graph} &streamflow($m^3/s$) & temperature($^\circ C$) \\ \hline
Downstream neighbor & 5.41 & 3.29\\
Direct neighbor & 5.57& 3.33\\
No neighbor & 5.71 & 3.36\\
\hline
\end{tabular}
\label{fig:perf_graph}
\end{table}
\begin{figure} [!t]
\centering
\subfigure[]{ \label{fig:a}
\includegraphics[width=0.4\columnwidth]{pretraining_flow.png}
}
\subfigure[]{ \label{fig:b}
\includegraphics[width=0.4\columnwidth]{pretraining_temp.png}
}
\vspace{-.15in}
\caption{Prediction performance of GR-REAL before and after using the policy transfer in modeling (a) streamflow and (b) water temperature.}
\label{fig:pretrain}
\end{figure}
\subsection{Policy transfer}
We also study the efficacy of policy transfer from simulation data when we have access to limited training data.
In particular, we randomly reduce the labeled samples for temperature and streamflow in the training period so that GR-REAL will learn different decision models. Then we evaluate the performance using these decision models, as reported in Fig.~\ref{fig:pretrain}. We can observe that the predictive performance decreases as we reduce the amount of training data. This is because we can only learn a sub-optimal decision model using limited training data, and thus it may not be able to select the most helpful samples for training the predictive model. However, the method with policy transfer (GR-REAL$^\text{ptr}$) produces better performance since the model is initialized much closer to its optimal state. In temperature prediction, we notice that GR-REAL and GR-REAL$^\text{ptr}$ have similar performance when using more than 70\% of the training data. This is because the training data is then sufficient to learn an accurate decision model.
\section{Conclusion}
In this paper, we propose the GR-REAL framework which uses the spatial and temporal contextual information to select query samples in real time.
We demonstrate the effectiveness of GR-REAL in selecting informative samples for modeling streamflow and water temperature in the Delaware River Basin. We also show that policy transfer can further improve the performance when we have less training data. The proposed method may also be used to guide the
measurement of other water quality parameters for which sensors are costly or too difficult to maintain (e.g., metals, nutrients, or algal biomass).
While GR-REAL achieves better predictive performance, it estimates the potential reward based on accuracy improvement and thus remains limited in selecting samples that truly help understand an ecosystem. For example, GR-REAL ignores low-flow segments as they contribute less to the overall accuracy loss. Studying this element of GR-REAL has promise for future work.
\section{Acknowledgments}
Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.
\bibliographystyle{plain}
\vspace{-.05in}
\section{Introduction}
The Poincar\'e group sharply contrasts with the ${SU(3)\times SU(2)\times U(1)}$ symmetry group of the Standard Model, in that it is understood to act not only on particles of the Standard Model, but also on the spacetime manifold itself. This foundational assumption of the Standard Model sets gravity apart from the other three fundamental forces.
In the present work, we reconstrue the symmetries of the Poincar\'e group, not as `horizontal' symmetries acting on the spacetime manifold, but as `lifted' symmetries acting on a physical theory's `vertical' space. We develop \emph{5-vector theory}, an evolution of classical scalar field theory whose Poincar\'e symmetries are elevated to act solely on its vertical matter and solder fields---and whose dynamics are nevertheless equivalent to those of the scalar field.
To accommodate such a modified group action, we reimagine both the matter field of scalar field theory as well as the `background canvas' on which scalar field theory is constructed. Rather than embedding our physical model in a spacetime whose coordinates transform with translations and Lorentz transformations, we imagine a `static' four-dimensional background, with Cartesian coordinates $\{x^a\}$ and metric $\eta_{ab}$ of signature ($-$$+$$+$$+$). This Cartesian\footnote{Although they have a defined metric, the coordinates $\{x^a\}_{a\in\{0,1,2,3\}}$ are labeled as \emph{Cartesian}---and not \emph{Euclidean} or \emph{Minkowskian}---to emphasize their rigidity. Their discrete counterpart---the `integer lattice'---will not have \emph{any} infinitesimal symmetries.\label{whyCartesian}} spacetime is not to be viewed as a Poincar\'e-set---in particular, it does not transform under Poincar\'e transformations. In this sense, the coordinate $x^0$ represents an absolute time and $\v{x}$ an absolute space---together comprising an absolute reference frame of what we will show to be, nonetheless, a fully relativistic physical model.
On this background, we replace the familiar scalar field $\phi(x^b)$ with a \emph{5-vector} matter field, and we augment the $4\times4$ vierbein matrix $e_\mu^{~a}(x^b)$ into a solder field\footnote{We note that our use of the term \emph{solder field} is exceptional; in treatments of reductive Cartan geometry (e.g. \cite{wise_macdowell-mansouri_2010}), the \emph{soldering form} or \emph{coframe field} is a $\mathfrak{g}/\mathfrak{h}$-valued horizontal 1-form that shares the dimension of its base manifold. Because we augment the vierbein into the larger f\"unfbein, however, we find it simplest to regard both $e_\mu^{~a}(x^b)$ and $\v{e}(x^b)$ as matrix-valued 0-forms.} we call the \emph{f\"unfbein}, a $5\times5$ matrix at each point $x^b$, as follows:
\begin{eqn}
\phi(x^b)&\rightarrow\gv{\phi}(x^b)\defeq\left[\begin{matrix}[l]\phi^\mu\\~\phi\end{matrix}\right](x^b)\\
\vspace{10pt}\\
e_\mu^{~a}(x^b)&\rightarrow\v{e}(x^b)\defeq\left[\begin{matrix}e_\mu^{~a}&\v{0}\\e_\mu&1\end{matrix}\right](x^b).
\label{new5Fields}
\end{eqn}
In a manner that shall be made clear, the f\"unfbein serves as the `hinges' that bind, or solder, the vertical matter field $\gv{\phi}$ to the horizontal Cartesian background; we shall see that $\{x^a\}$ itself is independent of Poincar\'e transformations, and that the transformed f\"unfbein provides the mapping from this fixed background to the transformable matter field. It is for this reason that both Poincar\'e (Greek) indices and Cartesian (Latin) indices appear in the f\"unfbein.
The 5-vector and f\"unfbein serve as the targets of Poincar\'e transformations---as the Poincar\'e-sets---of our new field theory. As we will show, in establishing a more detailed matter field $\gv{\phi}$, we essentially transfer the data stored in spacetime---via spacetime derivatives $\partial^\mu\phi$ of the scalar field---to data stored in the $\phi^\mu$ components of the matter field itself. As further demonstrated in a companion paper \cite{glasser_discrete5vectortheory}, this data-transfer is crucial---it `unburdens' the Cartesian canvas of our physical theory, affording its discretization without sacrificing Poincar\'e symmetry.
In the following work, we motivate this evolution of scalar field theory. We apply at length the variational technology of \cite{olver_textbook_1993}, and progressively introduce modified forms of the scalar field Lagrangian until our 5-vector theory is discovered. We solve for the symmetries and Poincar\'e currents---the linear and angular energy-momentum tensors---of our theory, and we discuss the physical implications of this `Poincar\'e lift'.
\section{Real Scalar Field Theory}
We begin with a review of the Lagrangian for a real scalar field with an arbitrary potential $V(\phi)$ in ($-$$+$$+$$+$) Minkowski spacetime:
\begin{eqn}
\mL\defeq-\frac{1}{2}\partial_\mu\phi\partial^\mu\phi-V(\phi).
\label{phiLagrangian}
\end{eqn}
Applying the Euler operator\footnote{
An introduction of notation is helpful here. For a system of $M$ independent variables $\{x^i\}$ and $N$ dependent variables $\left\{u^\ell\right\}$, we denote a \emph{$(k\geq0)$-order multi-index} $J$ by $J\equiv(j_1,\dots,j_k)$, where ${1\leq j_i\leq M}$. We let $\#J$ denote the order (i.e., length) of the multi-index $J$, where any repetitions of indices are to be double-counted. Accordingly, $u_J$ represents a partial derivative taken with respect to $(x^{j_1},\dots,x^{j_k})$. For example, ${u_{ii}\equiv\partial^2u/\partial x^i\partial x^i}$ has $\#J=2$.
$\ws{D}_i$ denotes a \emph{total derivative}, i.e.:
\begin{equation*}
\ws{D}_iP\defeq\pd{P}{x^i}+\sum\limits_{\ell=1}^N\sum\limits_{\#J\geq0}u_{J,i}^\ell\pd{P}{u^\ell_J}
\end{equation*}
and $\ws{D}_J\equiv\ws{D}_{j_1}\cdots\ws{D}_{j_k}$. We let $(-\ws{D})_J$ denote a total derivative with negative signs included for each index. For example:
\begin{equation*}
(-\ws{D})_{xyz}u=(-\ws{D}_x)(-\ws{D}_y)(-\ws{D}_z)u=-\pd{^3u}{x\partial y\partial z}\equiv-u_{xyz}.
\end{equation*}
As is conventional, we will sometimes relax our notation and denote the total and partial derivatives by the same symbol: $\partial_i$.
}
for $\phi$---
\begin{eqn}
\ws{E}_\phi\defeq&\sum\limits_J(-\ws{D})_J\pd{}{\left(\phi_J\right)}\\
=&\partial_\phi-\ws{D}_\mu\partial_{(\partial_\mu\phi)}+\cdots
\end{eqn}
---we derive the following equation of motion (EOM):
\begin{eqn}
0=\ws{E}_\phi(\mL)=-V'(\phi)+\partial_\mu\partial^\mu\phi.
\end{eqn}
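For this first-order Lagrangian, only the ${\#J\leq1}$ terms of the Euler operator contribute, and the EOM may be checked term by term:
\begin{eqn}
\partial_\phi(\mL)&=-V'(\phi)\\
-\ws{D}_\mu\partial_{(\partial_\mu\phi)}(\mL)&=-\ws{D}_\mu\left(-\partial^\mu\phi\right)=\partial_\mu\partial^\mu\phi
\end{eqn}
recovering the familiar second-order scalar field equation ${\partial_\mu\partial^\mu\phi=V'(\phi)}$.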
Let us review the usual Poincar\'e symmetries associated with this Lagrangian---in particular its translation symmetry generators $P_\alpha$ and Lorentz symmetry generators $M_{\alpha\beta}$, for ${\alpha\neq\beta\in\{t,x,y,z\}}$:
\begin{eqn}
P_\alpha&\defeq\partial_\alpha\\
M_{\alpha\beta}&\defeq x_\alpha\partial_\beta-x_\beta\partial_\alpha
\label{PoincareSymmetries}
\end{eqn}
where ${\partial_\mu\equiv\partial/\partial x^\mu}$ and ${x_\mu=\eta_{\mu\nu}x^\nu}$. (For now, we employ the most familiar setting for field theories---a flat, four-dimensional spacetime labeled by Poincar\'e-transformable coordinates $x^\mu$.) These symmetries are called \emph{spacetime symmetries} because they operate on spacetime itself---that is, on the \emph{independent} or \emph{horizontal} variables of the theory.
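One may quickly verify that these generators obey the commutation relations of the Poincar\'e algebra; for example:
\begin{eqn}
\left[P_\gamma,M_{\alpha\beta}\right]&=\left(\partial_\gamma x_\alpha\right)\partial_\beta-\left(\partial_\gamma x_\beta\right)\partial_\alpha\\
&=\eta_{\gamma\alpha}P_\beta-\eta_{\gamma\beta}P_\alpha.
\end{eqn}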
We denote the $n^\text{th}$\emph{-order jet space} ${\ws{Jet}(n)}$ of scalar theory---comprised of (i) spacetime's independent coordinate variables $x^\mu$; (ii) the dependent variable $\phi(x^\mu)$; and (iii) the derivatives of the dependent variable up to order $n$: $\partial_{\mu_1\cdots \mu_n}\phi(x^\mu)$---as follows:
\begin{eqn}
\ws{Jet}(n)&\equiv X\times U^{(n)}\\
&\equiv X\times U\times U^1\times\cdots\times U^n
\end{eqn}
for $x^\mu\in X$, $\phi\in U$, $\partial_\mu\phi\in U^1$ and so on. $X$ is referred to as the \emph{horizontal subspace} of $\ws{Jet}(n)$, and $U^{(n)}$ as the \emph{vertical subspace} of $\ws{Jet}(n)$. Correspondingly, the jet space differential operator $\partial_\mu$ is referred to as a \emph{horizontal vector field}, while ${\partial_\phi+\partial_{(\partial_\mu\phi)}}$, for example, is referred to as a \emph{vertical vector field}.
Generalizing for the moment to an \emph{arbitrary} ${0^\text{th}\text{-order}}$ jet space ${\ws{Jet}(0)=X\times U}$, comprised of ${M=\ws{dim}(X)}$ horizontal and ${N=\ws{dim}(U)}$ vertical variables, we briefly review the elements of \cite{olver_textbook_1993} essential to our study. Following \cite{olver_textbook_1993} Eq.~(5.1), we define a \emph{generalized vector field} $\v{v}$ on $\ws{Jet}(0)$:
\begin{eqn}
\v{v}=\sum\limits_{i=1}^M\zeta^i[u]\pd{}{x^i}+\sum\limits_{\ell=1}^N\gamma^\ell[u]\pd{}{u^\ell}
\label{defineV}
\end{eqn}
where $\zeta^i$ and $\gamma^\ell$ are arbitrary smooth functions, and where ${[u]\equiv\left(x,u^{(n)}\right)}$ denotes their dependence on any variables of $\ws{Jet}(n)$, such that:
\begin{eqn}
[u]\defeq\Big(x^i\in X,u^\ell\in U,u^\ell_{x^i}\in U^1,\dots,u^\ell_{x^{i_1}\cdots x^{i_n}}\in U^n\Big).
\end{eqn}
$\v{v}$ is understood to be the generator of a smooth transformation of the variables in $\ws{Jet}(0)$. There is a unique extension of $\v{v}$ from ${X\times U}$ to ${X\times U^{(n)}}$ that self-consistently specifies the flow of the vertical `derivative subspace' $U^{[1,n]}\subset\ws{Jet}(n)$, given the flow of $\ws{Jet}(0)$ along $\v{v}$. This extension is referred to as the vector field's \emph{prolongation} and is given by:
\begin{eqn}
\ws{pr}[\v{v}]=\sum\limits_{i=1}^M\zeta^i[u]\pd{}{x^i}~+~\smashoperator{\sum\limits_{\substack{\ell\in[1,N]\\\#J\in[0,n]}}}\Omega^J_\ell[u]\pd{}{u^\ell_J}
\label{prolongationOfV}
\end{eqn}
where
\begin{eqn}
\Omega^J_\ell[u]\defeq\ws{D}_J\left(\gamma^\ell[u]-\sum\limits_{i=1}^Mu^\ell_i\zeta^i[u]\right)+\sum\limits_{i=1}^Mu^\ell_{J,i}\zeta^i[u]
\end{eqn}
and where the sum over ${\#J\in[0,n]}$ indicates a sum over all multi-indices of length ${0\leq\#J\leq n}$. (Note that ${\ws{pr}[\v{v}]=\v{v}}$ when restricted to its ${\#J=0}$ terms.) A jet space vector field $\v{v}$ is defined to act on (i.e., infinitesimally transform) any Lagrangian $\mL[u]$ via its prolongation: $\ws{pr}[\v{v}](\mL)$.
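As a simple illustration of Eq.~(\ref{prolongationOfV}), consider the rotation generator ${\v{v}=-u\partial_x+x\partial_u}$ on a jet space with ${M=N=1}$, an example treated in \cite{olver_textbook_1993}. Here ${\zeta=-u}$ and ${\gamma=x}$, so that
\begin{eqn}
\Omega^x=\ws{D}_x\left(x+uu_x\right)-uu_{xx}=1+u_x^2
\end{eqn}
and ${\ws{pr}[\v{v}]=-u\partial_x+x\partial_u+\left(1+u_x^2\right)\partial_{u_x}}$ to first order.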
Still following \cite{olver_textbook_1993}, we define the \emph{evolutionary representative} of $\v{v}$ by the following vertical vector field $\v{v}_Q$:
\begin{eqn}
\v{v}_Q=\smashoperator{\sum\limits_{\substack{\ell\in[1,N]}}}Q_\ell[u]\pd{}{u^\ell}
\label{evolRep}
\end{eqn}
where the \emph{characteristics} $Q_\ell[u]$ of $\v{v}_Q$ are given by
\begin{eqn}
Q_\ell[u]\defeq\gamma^\ell[u]-\sum\limits_{i=1}^Mu^\ell_i\zeta^i[u].
\label{charOfEvol}
\end{eqn}
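For example, the translation generator ${P_\alpha=\partial_\alpha}$ of scalar field theory has ${\zeta^\mu=\delta^\mu_\alpha}$ and ${\gamma=0}$, so that its characteristic is ${Q=-\partial_\alpha\phi}$ and its evolutionary representative is the purely vertical vector field
\begin{eqn}
{P_\alpha}_Q=-\pd{\phi}{x^\alpha}\pd{}{\phi}.
\end{eqn}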
Following Eq.~(\ref{prolongationOfV}), we calculate the prolongation of $\v{v}_Q$ as follows:
\begin{eqn}
\ws{pr}[\v{v}_Q]=\smashoperator{\sum\limits_{\substack{\ell\in[1,N]\\\#J\in[0,n]}}}\Big(\ws{D}_JQ_\ell[u]\Big)\pd{}{u^\ell_J}.
\label{prolongOfEvol}
\end{eqn}
We define a \emph{variational symmetry} of the Lagrangian $\mL$ as a generalized vector field $\v{v}$ for which there exists an $M$-tuple ${B=(B^1[u],\dots,B^M[u])}$ such that either of the following equivalent conditions holds:\footnote{We interchangeably employ the notations:
\begin{equation*}
\ws{Div}P\equiv\ws{Div}_i P^i\equiv\sum\limits_{i=1}^M\ws{D}_iP^i.
\end{equation*}
}
\begin{eqn}
\ws{pr}[\v{v}](\mL)+\mL~\ws{Div}_i\zeta^i&=\ws{Div}_i(B^i+\mL\zeta^i)\\
\ws{pr}[\v{v}_Q](\mL)&=\ws{Div}B.
\label{defineVarSymm}
\end{eqn}
As proven in \cite{olver_textbook_1993} Proposition 5.52, $\v{v}$ is a variational symmetry of $\mL[u]$ if and only if $\v{v}_Q$ is. (We note that $\v{v}_Q$ is its own evolutionary representative.)
Noether's theorem establishes a one-to-one correspondence between the (equivalence classes of) variational symmetries of a Lagrangian and (equivalence classes of) its conservation laws.\footnote{We refer the reader to \cite{olver_textbook_1993} pp. 264, 292 for a discussion of \emph{trivial} symmetries and conservation laws, and the \emph{equivalence classes} they generate.\label{footnoteEquivTrivial}} Because $\v{v}$ and $\v{v}_Q$ belong to the same equivalence class, the Noether procedure for $\v{v}_Q$ discovers conservation laws equivalent to those of $\v{v}$.
Returning now to our present scalar theory, therefore, we first find the evolutionary representatives ${P_\alpha}_Q$ and ${M_{\alpha\beta}}_Q$ of the Poincar\'e symmetries by applying the framework of Eqs.~(\ref{defineV})-(\ref{prolongOfEvol}) to the Poincar\'e generators of Eq.~(\ref{PoincareSymmetries}). We derive their prolongations $\ws{pr}[{P_\alpha}_Q]$ and $\ws{pr}[{M_{\alpha\beta}}_Q]$ to first order, which suffices because higher order derivatives do not appear in the Lagrangian $\mL$ of Eq.~(\ref{phiLagrangian}). We find:
\begin{eqn}
\ws{pr}\left[{P_\alpha}_Q\right]&=-\pd{\phi}{x^\alpha}\pd{}{\phi}-\ws{D}_\mu\left[\pd{\phi}{x^\alpha}\right]\pd{}{(\partial_\mu\phi)}\\
\ws{pr}\left[{M_{\alpha\beta}}_Q\right]&=-\left(x_\alpha\pd{\phi}{x^\beta}-x_\beta\pd{\phi}{x^\alpha}\right)\pd{}{\phi}\\
&-\ws{D}_\mu\left[x_\alpha\pd{\phi}{x^\beta}-x_\beta\pd{\phi}{x^\alpha}\right]\pd{}{(\partial_\mu\phi)}.
\label{verticalSymms}
\end{eqn}
Applying these prolonged vector fields to $\mL$, we calculate as follows:
\begin{eqn}
\ws{pr}\left[{P_\alpha}_Q\right](\mL)&=-\ws{D}_\mu(\delta^\mu_\alpha\mL)\\
\ws{pr}\left[{M_{\alpha\beta}}_Q\right](\mL)&=-\ws{D}_\mu\left[\left(x_\alpha\delta^\mu_\beta-x_\beta\delta^\mu_\alpha\right)\mL\right].
\label{phiTheoryDivergencesVQ}
\end{eqn}
It is immediately seen that
\begin{eqn}
\ws{pr}\left[\v{v}_Q\right](\mL)=-\ws{Div}\left(\mL\zeta\right)
\label{resultOfVQ}
\end{eqn}
for each of the Poincar\'e symmetries ${\v{v}_Q={P_\alpha}_Q}$ and ${\v{v}_Q={M_{\alpha\beta}}_Q}$, where $\zeta$ refers to the horizontal coefficients of Eq.~(\ref{defineV}), as defined in Eq.~(\ref{PoincareSymmetries}).
Eq.~(\ref{phiTheoryDivergencesVQ}) demonstrates that ${P_\alpha}$ and ${M_{\alpha\beta}}$---and their evolutionary representatives---are variational symmetries of $\mL$, as defined in Eq.~(\ref{defineVarSymm}). By Noether's theorem, then, we are guaranteed to find conservation laws associated to each of these symmetries, as we now do.
We carry out the Noether procedure as specified in \cite{olver_textbook_1993} Proposition 5.98 to explicitly solve for these conservation laws. To do so, we introduce the higher Euler operators $\ws{E}^J_{u^\ell}$, whose purpose is to facilitate the following `integration by parts' for \emph{any} vertical vector field $\v{v}_Q$ with characteristics $Q_\ell$, as notated in Eq.~(\ref{evolRep}):
\begin{eqn}
\ws{pr}[\v{v}_Q](P)=\sum\limits_{\ell=1}^N\sum\limits_{\#J\geq0}\ws{D}_J\left(Q_\ell\cdot\ws{E}^J_{u^\ell}(P)\right)
\label{higherEulerDef}
\end{eqn}
for any $P$. Eq.~(\ref{higherEulerDef}) is definitional, as it uniquely determines the form of these operators, as follows:\footnote{We let ${\left(\begin{smallmatrix}I\\J\end{smallmatrix}\right)\equiv I!/[J!(I\backslash J)!]}$ when ${J\subseteq I}$, and $0$ otherwise. We define $I!=(\tilde{i}_1!\cdots\tilde{i}_M!)$, where $\tilde{i}_k$ denotes the number of occurrences of the integer $k$ in multi-index $I$. $I\backslash J$ denotes the set difference of multi-indices, with repeated indices treated as distinct elements of the set, as they are in ${J\subseteq I}$.}
\begin{eqn}
\ws{E}^J_{u^\ell}\defeq\sum\limits_{I\supseteq J}\left(\begin{matrix}I\\J\end{matrix}\right)(-\ws{D})_{I\backslash J}\pd{}{u^\ell_I}.
\end{eqn}
We note that ${\ws{E}^J_{u^\ell}\equiv\ws{E}_{u^\ell}}$ is the conventional Euler operator for ${J=\emptyset}$.
For ${P=\mL}$ a Lagrangian, one observes that the right hand side of Eq.~(\ref{higherEulerDef}) splits $\ws{pr}[\v{v}_Q](\mL)$ into a divergence and a term that vanishes along solutions ${\ws{E}_{u^\ell}(\mL)=0}$ of our system---that is, \emph{on shell}:
\begin{eqn}
\ws{pr}\left[\v{v}_Q\right](\mL)=\left(\sum\limits_{\ell=1}^NQ_\ell\cdot\ws{E}_{u^\ell}(\mL)\right)+\ws{Div}A
\label{prQ2QEandA}
\end{eqn}
where the $M$-tuple ${A=(A^1[u],\dots,A^M[u])}$ is given by
\begin{eqn}
A^k&\defeq\sum\limits_{\ell=1}^N\sum\limits_{\#I\geq0}\frac{\tilde{i}_k+1}{\#I+1}\ws{D}_I\left[Q_\ell\ws{E}_{u^\ell}^{I,k}(\mL)\right].
\label{defFourTuple}
\end{eqn}
Combining the observation of Eq.~(\ref{prQ2QEandA}) with the second relation of Eq.~(\ref{defineVarSymm}), we see that a variational symmetry $\v{v}_Q$ yields the following conservation law on shell:
\begin{eqn}
\ws{Div}\left(A-B\right)=0.
\label{conciseConsLawForVariational}
\end{eqn}
We may now carry out this Noether procedure for our particular first-order scalar field Lagrangian. We need only find the higher Euler operator for multi-index ${J=(x^\mu)}$ and dependent variable ${u^\ell=\phi}$, as follows:
\begin{eqn}
\ws{E}_\phi^\mu(\mL)=\pd{\mL}{(\partial_\mu\phi)}=-\partial^\mu\phi.
\end{eqn}
For our Poincar\'e symmetries ${P_\alpha}_Q$ and ${M_{\alpha\beta}}_Q$, therefore:
\begin{eqn}
A^\mu_{P_\alpha}&=(\partial_\alpha\phi)(\partial^\mu\phi)\\
A^\mu_{M_{\alpha\beta}}&=\left(x_\alpha\pd{\phi}{x^\beta}-x_\beta\pd{\phi}{x^\alpha}\right)(\partial^\mu\phi).
\label{TuplesForScalarTheory}
\end{eqn}
We correspondingly substitute $A$ from Eq.~(\ref{TuplesForScalarTheory}) and ${B=-\mL\zeta}$ from Eq.~(\ref{resultOfVQ}) for each respective symmetry into Eq.~(\ref{conciseConsLawForVariational}) to solve for our 10 conservation laws:
\begin{eqn}
0&=\ws{D}_\mu\Bigg[\partial^\mu\phi\partial^\alpha\phi+\eta^{\mu\alpha}\mL\Bigg]\eqdef\ws{D}_\mu T^{\mu\alpha}\\
0&=\ws{D}_\mu\Bigg[x^\alpha T^{\mu\beta}-x^\beta T^{\mu\alpha}\Bigg]\eqdef\ws{D}_\mu L^{\mu\alpha\beta}.
\label{twoConsLawsOfOriginalTheory}
\end{eqn}
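As a consistency check, the first of these conservation laws may be verified directly from the Lagrangian of Eq.~(\ref{phiLagrangian}):
\begin{eqn}
\ws{D}_\mu T^{\mu\alpha}&=\left(\partial_\mu\partial^\mu\phi\right)\partial^\alpha\phi+\partial^\mu\phi\,\partial_\mu\partial^\alpha\phi\\
&\hspace{20pt}-\partial^\mu\phi\,\partial^\alpha\partial_\mu\phi-V'(\phi)\,\partial^\alpha\phi\\
&=\left(\partial_\mu\partial^\mu\phi-V'(\phi)\right)\partial^\alpha\phi
\end{eqn}
which indeed vanishes on shell.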
We have thus found that the formal manipulations of \cite{olver_textbook_1993} recover the familiar conservation laws of scalar field theory.
\section{Real `Scalar + 4-Vector' Field Theory}
We now explore the Lagrangian of a new field theory that replicates the dynamics of the familiar scalar theory described above. In part, we are inspired toward the following Lagrangian by the Goldstone model of \cite{kibble_symmetry_1967}. We define:
\begin{eqn}
\mL&\defeq\frac{1}{2}\phi^\mu\phi_\mu-\frac{1}{2}\phi^\mu\partial_\mu\phi+\frac{1}{2}\phi\partial_\mu\phi^\mu-V(\phi).
\label{ScalarPlus4VectorLagrangian}
\end{eqn}
This Lagrangian has five dynamical variables, in the form of a Lorentz 4-vector $\phi^\mu$ and a scalar $\phi$. In this flat-spacetime theory (which still possesses familiar `Poincar\'e-deformable' coordinates), we raise and lower Greek indices with the Minkowski metric $\eta_{\mu\nu}$ of signature $($$-$$+$$+$$+$$)$---for example: ${\phi_\mu=\eta_{\mu\nu}\phi^\nu}$ and ${\partial^\mu=\eta^{\mu\nu}\partial_\nu}$.
We again apply Euler operators to derive the following five EOM:
\begin{eqn}
0&=\ws{E}_{\phi}(\mL)=-V'(\phi)+\partial_\mu\phi^\mu\\
0&=\ws{E}_{\phi^\sigma}(\mL)=\phi_\sigma-\partial_\sigma\phi.
\label{RealPlusFourVectorEOM}
\end{eqn}
Upon combining these EOM, we see that our new Lagrangian replicates the dynamics of the familiar scalar field Lagrangian. Unlike the scalar field Lagrangian, however, our new Lagrangian produces this behavior with coupled first-order EOM.
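Explicitly, substituting the second EOM of Eq.~(\ref{RealPlusFourVectorEOM}) into the first yields
\begin{eqn}
0=-V'(\phi)+\partial_\mu\partial^\mu\phi
\end{eqn}
such that the scalar component $\phi$ obeys the same second-order dynamics as before.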
The latter EOM of Eq.~(\ref{RealPlusFourVectorEOM}) suggests that $\phi_\mu$ functions as the spacetime derivatives of the scalar field. In this respect, the theory's variables are reminiscent of a Hamiltonian system, with first-order EOM given by
\begin{eqn}
\left[\begin{matrix}\dot{\v{x}}\\\dot{\v{p}}\end{matrix}\right]=\left[\begin{matrix}\v{p}\\-\grad{V(\v{x})}\end{matrix}\right].
\end{eqn}
Still, the EOM's differences from the Hamiltonian formulation---including its democratic, relativistic treatment of space and time---are perhaps more important.
We now repeat the exercise of the prior section and calculate in the formalism of \cite{olver_textbook_1993} the symmetries and conservation laws of the Lagrangian in Eq.~(\ref{ScalarPlus4VectorLagrangian}). The dependent $\phi^\mu$ variables modify much of the analysis.
We begin with the same \emph{translation} symmetry generator $P_\alpha$ of Eq.~(\ref{PoincareSymmetries}), but we must modify $M_{\alpha\beta}$ to account for our new $\phi^\mu$ field. After all, as indicated by Eq.~(\ref{RealPlusFourVectorEOM}), $\phi_\mu$ must transform as a spacetime derivative. We therefore set:
\begin{eqn}
P_\alpha&\defeq\partial_\alpha\\
M_{\alpha\beta}&\defeq x_\alpha\partial_\beta-x_\beta\partial_\alpha+\phi_\alpha\partial_{\phi^\beta}-\phi_\beta\partial_{\phi^\alpha}
\label{FourVectorTheoryGenerators}
\end{eqn}
Calculating and applying prolongations of these symmetries to $\mL$, we find that:
\begin{eqn}
\ws{pr}\left[P_\alpha\right](\mL)&=0\\
\ws{pr}\left[M_{\alpha\beta}\right](\mL)&=0.
\end{eqn}
$\mL$ of Eq.~(\ref{ScalarPlus4VectorLagrangian}) is therefore invariant under transformation by the vector fields $P_\alpha$ and $M_{\alpha\beta}$ of Eq.~(\ref{FourVectorTheoryGenerators}). According to the first relation of Eq.~(\ref{defineVarSymm}), setting ${B^i=-\mL\zeta^i}$ and noting that ${\ws{D}_i\zeta^i=0}$ for each symmetry, respectively, $P_\alpha$ and $M_{\alpha\beta}$ are indeed variational symmetries of $\mL$.
To find `scalar + 4-vector' theory's associated conservation laws, therefore, we may again solve for the prolonged evolutionary representatives of the new Poincar\'e symmetries of Eq.~(\ref{FourVectorTheoryGenerators}), up to first order:
\begin{eqn}
\ws{pr}\left[{P_\alpha}_Q\right]&=-\pd{\phi}{x^\alpha}\pd{}{\phi}-\pd{\phi^\sigma}{x^\alpha}\pd{}{\phi^\sigma}\\
&-\ws{D}_\mu\left[\pd{\phi}{x^\alpha}\right]\pd{}{(\partial_\mu\phi)}-\ws{D}_\mu\left[\pd{\phi^\sigma}{x^\alpha}\right]\pd{}{(\partial_\mu\phi^\sigma)}\\
\ws{pr}\left[{M_{\alpha\beta}}_Q\right]&=-\left(x_\alpha\pd{\phi}{x^\beta}-x_\beta\pd{\phi}{x^\alpha}\right)\pd{}{\phi}\\
&-\left(x_\alpha\pd{\phi^\sigma}{x^\beta}-x_\beta\pd{\phi^\sigma}{x^\alpha}\right)\pd{}{\phi^\sigma}\\
&+\phi_\alpha\pd{}{\phi^\beta}-\phi_\beta\pd{}{\phi^\alpha}\\
&-\ws{D}_\mu\left[x_\alpha\pd{\phi}{x^\beta}-x_\beta\pd{\phi}{x^\alpha}\right]\pd{}{(\partial_\mu\phi)}\\
&-\ws{D}_\mu\left[x_\alpha\pd{\phi^\sigma}{x^\beta}-x_\beta\pd{\phi^\sigma}{x^\alpha}\right]\pd{}{(\partial_\mu\phi^\sigma)}\\
&+\left[\pd{\phi_\alpha}{x^\mu}\pd{}{(\partial_\mu\phi^\beta)}-\pd{\phi_\beta}{x^\mu}\pd{}{(\partial_\mu\phi^\alpha)}\right].
\label{prolongedVertFiveVectorSymmetries}
\end{eqn}
Applying these to the Lagrangian of Eq.~(\ref{ScalarPlus4VectorLagrangian}), we find that ${P_\alpha}_Q$ and ${M_{\alpha\beta}}_Q$ again satisfy Eq.~(\ref{resultOfVQ}):
\begin{eqn}
\ws{pr}\left[{P_\alpha}_Q\right](\mL)&=-\ws{D}_\mu(\delta^\mu_\alpha\mL)\\
\ws{pr}\left[{M_{\alpha\beta}}_Q\right](\mL)&=-\ws{D}_\mu\left[\left(x_\alpha\delta^\mu_\beta-x_\beta\delta^\mu_\alpha\right)\mL\right].
\label{resultOfVQ2}
\end{eqn}
Eq.~(\ref{resultOfVQ2}) demonstrates that ${P_\alpha}_Q$ and ${M_{\alpha\beta}}_Q$---prolonged in Eq.~(\ref{prolongedVertFiveVectorSymmetries})---are also variational symmetries of the Lagrangian in Eq.~(\ref{ScalarPlus4VectorLagrangian}), as they had to be. We can therefore solve for their associated currents via the Noether procedure of Eqs.~(\ref{higherEulerDef})-(\ref{conciseConsLawForVariational}).
To solve as before for our conserved currents, we first derive our new system's higher Euler operators:
\begin{eqn}
\ws{E}_\phi^\mu(\mL)&=\pd{\mL}{(\partial_\mu\phi)}=-\frac{1}{2}\phi^\mu\\
\ws{E}_{\phi^\sigma}^\mu(\mL)&=\pd{\mL}{(\partial_\mu\phi^\sigma)}=\frac{1}{2}\phi\delta^\mu_\sigma.
\end{eqn}
We thus solve for the 4-tuples given by Eq.~(\ref{defFourTuple}):
\begin{eqn}
A^\mu_{P_\alpha}&=\frac{1}{2}\left(\phi^\mu\partial_\alpha\phi-\phi\partial_\alpha\phi^\mu\right)\\
A^\mu_{M_{\alpha\beta}}&=\frac{1}{2}\Bigg(x_\alpha\pd{\phi}{x^\beta}\phi^\mu-x_\beta\pd{\phi}{x^\alpha}\phi^\mu+\phi_\alpha\phi\delta^\mu_\beta-\phi_\beta\phi\delta^\mu_\alpha\\
&\hspace{40pt}-x_\alpha\pd{\phi^\mu}{x^\beta}\phi+x_\beta\pd{\phi^\mu}{x^\alpha}\phi\Bigg).
\label{Tuplesfor4VectorTheory}
\end{eqn}
For each respective symmetry, we again substitute $A$ from Eq.~(\ref{Tuplesfor4VectorTheory}) and ${B=-\mL\zeta}$ from Eq.~(\ref{resultOfVQ2}) into Eq.~(\ref{conciseConsLawForVariational}), to derive 10 conservation laws:
\begin{eqn}
0&=\ws{D}_\mu\Bigg[\frac{1}{2}\Bigg(\phi^\mu\partial^\alpha\phi-\phi\partial^\alpha\phi^\mu\Bigg)+\eta^{\mu\alpha}\mL\Bigg]\\
&\eqdef\ws{D}_\mu T^{\mu\alpha}\\
0&=\ws{D}_\mu\Bigg[x^\alpha T^{\mu\beta}-x^\beta T^{\mu\alpha}+\frac{1}{2}\phi\bigg(\phi^\alpha\eta^{\mu\beta}-\phi^\beta\eta^{\mu\alpha}\bigg)\Bigg]\\
&\eqdef\ws{D}_\mu L^{\mu\alpha\beta}.
\label{consLawFlatFiveVectorNoVierbeinTheory}
\end{eqn}
It is straightforward to show that these conservation laws are equivalent to the conservation laws of Eq.~(\ref{twoConsLawsOfOriginalTheory})---in that they differ by trivial conservation laws, as defined in \cite{olver_textbook_1993}. (See footnote \ref{footnoteEquivTrivial}.) In particular, the familiar energy-momenta of scalar theory in Eq.~(\ref{twoConsLawsOfOriginalTheory}) are equivalent to the above energy-momenta of scalar + 4-vector theory.
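Indeed, substituting the on-shell relation ${\phi^\mu=\partial^\mu\phi}$ into Eq.~(\ref{consLawFlatFiveVectorNoVierbeinTheory}), the difference between the two linear energy-momentum tensors may be written as
\begin{eqn}
\Delta T^{\mu\alpha}=\frac{1}{2}\ws{D}_\nu\Big[\eta^{\mu\alpha}\phi\,\partial^\nu\phi-\eta^{\nu\alpha}\phi\,\partial^\mu\phi\Big]
\end{eqn}
the divergence of a superpotential antisymmetric in ${\mu\leftrightarrow\nu}$, so that ${\ws{D}_\mu\Delta T^{\mu\alpha}=0}$ holds identically.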
As such, an experiment measuring a theory's dynamics and conserved currents would not be able to distinguish between a scalar theory and a scalar + 4-vector theory.
\section{Real 5-Vector Field Theory}
In the two foregoing scalar and scalar + 4-vector theories, we have defined the Poincar\'e symmetries to act horizontally---that is, on the spacetime manifold. We note that, by definition, the target of a Poincar\'e Lie group action must take values in a continuum. These two observations imply that the prior theories' symmetries are inconsistent with a discretization of spacetime. This is the fundamental reason that lattice field theories, which define fields at fixed, discrete points embedded in spacetime, fail to preserve Poincar\'e symmetries and their associated invariants.
Although we defer to our companion paper the presentation of a \emph{discrete, Poincar\'e-invariant theory}, we briefly motivate the following section by considering the prerequisites for such a theory. In light of the preceding argument, the Poincar\'e symmetries of a theory with a discrete horizontal subspace $X$ must act only on the theory's vertical subspace $U^{(n)}$---that is, only on its dependent variables.
One might hope that the vertical evolutionary representatives---${P_\alpha}_Q$ and ${M_{\alpha\beta}}_Q$---of the previous sections would be sufficient for this purpose, but these merely camouflage their horizontal action by including derivatives of the dependent variables in their coefficients. Indeed, there is an effective sense in which any vector field with a term of the form $f[u]u_x\partial_u$ has an action on the horizontal subspace $X\subset\ws{Jet}(n)$. After all, $f[u]u_x\partial_u$ is equivalent---in the definition of \cite{olver_textbook_1993}---to the horizontal vector field $f[u]\partial_x$.
This equivalence is more than mere formality. If we examine ${P_\alpha}_Q=-(\partial\phi/\partial x^\alpha)\partial_\phi$ from Eq.~(\ref{verticalSymms}), for example, we see that the Poincar\'e index $\alpha$ adorns the spacetime coordinate. A discretization of this coordinate necessarily breaks, therefore, the infinitesimal Poincar\'e invariance of the theory; indeed, any theory set against a background of Poincar\'e-transformable spacetime presupposes a continuous universe.
In this section, therefore, we evolve our theory to discover the `Poincar\'e lift' that properly verticalizes our symmetries. We proceed in two pedagogical steps:
\begin{enumerate}[label=(\roman*)]
\item We first cleave the Poincar\'e group action from the background coordinates of our theory by introducing a vierbein solder field into the scalar + 4-vector Lagrangian. In this `vierbein formalism', we find vertical Poincar\'e symmetries of our EOM, but discover that the theory---i.e., the Lagrangian---is not entirely Poincar\'e-symmetric.
\item We then repair this vierbein theory with an edifying matrix formalism we call \emph{5-vector theory}, introducing (a) the 5-vector particle; (b) its antiparticle; and (c) the f\"unfbein solder field. We demonstrate the vertical Poincar\'e invariance of the 5-vector EOM \emph{and} Lagrangian, and derive the theory's conservation laws.
\end{enumerate}
\subsection{Poincar\'e Lift \#1: Vierbein Formalism}
We begin by taking inspiration from Eq.~(\ref{ScalarPlus4VectorLagrangian}) and define the following Lagrangian:
\begin{eqn}
\mL\defeq\frac{1}{2}\phi^\mu e_\mu^{~a}\eta_{ab}e_\nu^{~b}\phi^\nu-\frac{1}{2}\phi^\mu e_\mu^{~a}\partial_a\phi+\frac{1}{2}\phi e_\nu^{~b}\partial_b\phi^\nu-V(\phi).
\label{FiveVectorLagrangian}
\end{eqn}
In this definition, we have abandoned the Poincar\'e-deformable coordinate system $\{x^\mu\}$ and denote by $\{x^a\}$ the flat background of our theory---a 4-D Cartesian manifold with Minkowski metric $\eta_{ab}$ of signature $($$-$$+$$+$$+$$)$. (See footnote \ref{whyCartesian}.) We have facilitated the introduction of these Cartesian coordinates using the vierbein, substituting
\begin{eqn}
\partial_\mu~\rightarrow~e_\mu^{~a}\partial_a
\end{eqn}
in our Lagrangian and thereby cleaving the target indices $\{\mu\}$ of Poincar\'e symmetries from the background coordinate indices $\{a\}$. We shall often denote a point in this flat background as $x\in\{x^a\}$.
Latin (Cartesian) indices may be lowered with $\eta_{ab}$ and raised with its inverse, such that
\begin{eqn}
\partial^a=\eta^{ab}\partial_b.
\end{eqn}
The repetition of Latin indices indicates a sum---as in the familiar flat-spacetime Einstein summation convention.
Our Lagrangian again has five dynamical variables, in the form of a Lorentz 4-vector $\phi^\mu$ and a scalar field $\phi$. It additionally has a $4\times4$ matrix vierbein solder field $e_\mu^{~a}$, with inverse $e^\nu_{~b}$:
\begin{eqn}
e_\mu^{~a}e^\mu_{~b}=\delta^a_b~~~\text{and}~~~e_\mu^{~a}e^\nu_{~a}=\delta_\mu^\nu.
\label{vierbeinIdentities}
\end{eqn}
We note for convenience that $\partial_{\left(e^\alpha_{~a}\right)}e_\beta^{~b}=-e_\beta^{~a}e_\alpha^{~b}$, an identity readily derived by differentiating Eq.~(\ref{vierbeinIdentities}).
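This identity follows from differentiating the first relation of Eq.~(\ref{vierbeinIdentities}) with respect to $e^\alpha_{~a}$:
\begin{eqn}
0=\partial_{\left(e^\alpha_{~a}\right)}\left(e_\mu^{~b}e^\mu_{~c}\right)=\left(\partial_{\left(e^\alpha_{~a}\right)}e_\mu^{~b}\right)e^\mu_{~c}+e_\alpha^{~b}\,\delta^a_c
\end{eqn}
which, upon contraction with $e_\beta^{~c}$, yields the stated result.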
These fields are defined to be functions over $\{x^a\}$: $\phi(x),$ $\phi^\mu(x)$, $e_\nu^{~b}(x)$, and so on. Greek (Poincar\'e) indices may be raised and lowered by a `metric' $g_{\mu\nu}$ formed of the vierbeins:
\begin{eqn}
\phi_\mu=g_{\mu\nu}\phi^\nu\equiv e_\mu^{~a}\eta_{ab}e_\nu^{~b}\phi^\nu\\
\phi^\mu=g^{\mu\nu}\phi_\nu\equiv e^\mu_{~a}\eta^{ab}e^\nu_{~b}\phi_\nu.
\label{raiseLower}
\end{eqn}
One may profitably regard $g_{\mu\nu}$ and $g^{\mu\nu}$ simply as compact notations for these arrangements of vierbein fields.
Crucially, in this section we make the following assumptions for the `ungauged' vierbein solder field:
\begin{enumerate}[label=(\alph*)]
\item $e_\mu^{~a}(x)$ is constant---that is, ${\partial_b(e_\mu^{~a})=0}$ $\forall$ $x$;
\item $e_\mu^{~a}$ transforms under global Poincar\'e transformations, in a manner to be defined; and
\item $e_\mu^{~a}$ is a static, non-dynamical field.
\end{enumerate}
These assumptions will be relaxed in the ensuing matrix formalism. In the present vierbein formalism, however, these \emph{ungauged assumptions} will be applied steadfastly.
We may apply Euler operators---defined in terms of $\{x^a\}$ coordinates---to Eq.~(\ref{FiveVectorLagrangian}) and derive the EOM of our vierbein theory:
\begin{eqn}
(\star):~~0&=\ws{E}_{\phi}(\mL)=-V'(\phi)+e_\mu^{~a}\partial_a\phi^\mu\\
(\star\star)_\sigma:~~0&=\ws{E}_{\phi^\sigma}(\mL)=g_{\sigma\mu}\phi^\mu-e_\sigma^{~a}\partial_a\phi.
\label{theEOM}
\end{eqn}
We find it convenient to refer to these EOM in the analysis below as $(\star)$ and $(\star\star)_\sigma$. It is worth promptly noting their similarity to the EOM of Eq. (\ref{RealPlusFourVectorEOM}).
We now take the decisive step and lift our Poincar\'e generators to vertical vector fields. We define them as follows:
\begin{eqn}
P^\alpha&\defeq\phi^\alpha\partial_\phi+\left(\partial^a\right)\partial_{e_\alpha^{~a}}\\
M^{\alpha\beta}&\defeq\phi^\sigma\left(\eta^{\alpha\nu}\delta_\sigma^\beta-\eta^{\beta\nu}\delta_\sigma^\alpha\right)\partial_{\phi^\nu}\\
&\hspace{20pt}-e_\sigma^{~a}\left(\eta^{\alpha\sigma}\delta_\nu^\beta-\eta^{\beta\sigma}\delta_\nu^\alpha\right)\partial_{e_\nu^{~a}}.
\label{VierbeinFormalismGenerators}
\end{eqn}
These differential operators are defined to represent the action of the Poincar\'e generators on our matter and solder fields; we will soon verify that they satisfy the appropriate Lie algebra.
The Poincar\'e vector fields of Eq.~(\ref{VierbeinFormalismGenerators}) warrant some examination. To clarify the \emph{in situ} action of $P^\alpha$, we first note that
\begin{eqn}
P^\alpha[f(\phi^\mu)e_\sigma^{~a}g(\phi^\nu)]=f(\phi^\mu)\delta_\sigma^\alpha\partial^ag(\phi^\nu).
\end{eqn}
The logic for this unusual differential operator $\partial^a$ in the coefficient of the $P^\alpha$ symmetry will be clarified when we replace $P^\alpha$ and $M^{\alpha\beta}$ with improved matrix operators in the forthcoming matrix formalism.
We further note that, unlike the prolonged vertical symmetries defined in Eqs.~(\ref{verticalSymms}) and (\ref{prolongedVertFiveVectorSymmetries}), which are in fact equivalent to horizontal symmetries, we have defined vertical generators in Eq.~(\ref{VierbeinFormalismGenerators}) that do not involve derivatives of dependent variables in their coefficients.
The prolongation of our global symmetries to sufficient order takes the following form:
\begin{eqn}
\ws{pr}[P^\alpha]&=\phi^\alpha\partial_\phi+\left(\partial_a\phi^\alpha\right)\partial_{\left(\partial_a\phi\right)}+\left(\partial^a\right)\partial_{e_\alpha^{~a}}\\
\ws{pr}[M^{\alpha\beta}]&=\phi^\sigma\left(\eta^{\alpha\nu}\delta_\sigma^\beta-\eta^{\beta\nu}\delta_\sigma^\alpha\right)\partial_{\phi^\nu}\\
&\hspace{20pt}+\left(\partial_a\phi^\sigma\right)\left(\eta^{\alpha\nu}\delta_\sigma^\beta-\eta^{\beta\nu}\delta_\sigma^\alpha\right)\partial_{(\partial_a\phi^\nu)}\\
&\hspace{20pt}-e_\sigma^{~a}\left(\eta^{\alpha\sigma}\delta_\nu^\beta-\eta^{\beta\sigma}\delta_\nu^\alpha\right)\partial_{e_\nu^{~a}}.
\label{prolongedVierbeinFormalismGenerators}
\end{eqn}
We may demonstrate that these are infinitesimal symmetries of the Euler-Lagrange equations, as follows:
\begin{eqn}
\ws{pr}[P^\alpha](\star)&=-V''(\phi)g^{\alpha\mu}(\star\star)_\mu+e^\alpha_{~a}\ws{D}^a(\star)\\
&\hspace{17pt}+g^{\alpha\mu}\ws{D}_a\ws{D}^a(\star\star)_\mu-e^\alpha_{~a}e^\mu_{~b}\ws{D}^a\ws{D}^b(\star\star)_\mu\\
\ws{pr}[M^{\alpha\beta}](\star)&=0\\
\ws{pr}[P^\alpha](\star\star)_\sigma&=\delta_\sigma^\alpha e^\mu_{~a}\ws{D}^a(\star\star)_\mu\\
\ws{pr}[M^{\alpha\beta}](\star\star)_\sigma&=[\delta^\alpha_\sigma\eta^{\beta\mu}-\delta^\beta_\sigma\eta^{\alpha\mu}](\star\star)_\mu.
\label{EOMPoincareInvariance}
\end{eqn}
Each right-hand side above clearly vanishes on shell---that is, on the submanifold of the jet space satisfying the EOM of Eq.~(\ref{theEOM}). The flow of the dependent variables along these vector fields therefore carries solutions of our EOM into other solutions, leaving the solution submanifold invariant. By definition, therefore, $P^\alpha$ and $M^{\alpha\beta}$ of Eq.~(\ref{VierbeinFormalismGenerators}) are symmetries of our EOM.
It is furthermore straightforward to check that these symmetries, as defined in Eq.~(\ref{VierbeinFormalismGenerators}), satisfy the Poincar\'e Lie algebra:
\begin{eqn}
\left\llbracket P^\alpha,P^\beta\right\rrbracket&=0\\
\left\llbracket M^{\alpha\beta},P^\mu\right\rrbracket&=\eta^{\alpha\mu}P^\beta-\eta^{\beta\mu}P^\alpha\\
\left\llbracket M^{\alpha\beta},M^{\mu\nu}\right\rrbracket&=\eta^{\alpha\mu}M^{\beta\nu}+\eta^{\beta\nu}M^{\alpha\mu}-\eta^{\alpha\nu}M^{\beta\mu}-\eta^{\beta\mu}M^{\alpha\nu}.
\label{PoincareLieAlgebra}
\end{eqn}
The Poincar\'e invariance of the above theory is therefore apparent in the 10 symmetries of our EOM---defined in Eq.~(\ref{VierbeinFormalismGenerators}) and verified in Eq.~(\ref{EOMPoincareInvariance})---which satisfy the Poincar\'e Lie algebra in Eq.~(\ref{PoincareLieAlgebra}).
And yet, the Poincar\'e invariance of the vierbein formalism is incomplete. Calculating ${\ws{pr}[P^\alpha](\mL)}$ and ${\ws{pr}[M^{\alpha\beta}](\mL)}$ for $\mL$ in Eq.~(\ref{FiveVectorLagrangian}), it is easily shown that while $M^{\alpha\beta}$ of Eq.~(\ref{VierbeinFormalismGenerators}) is a variational symmetry of $\mL$---as defined in Eq.~(\ref{defineVarSymm})---$P^\alpha$ is not. While variational symmetries of a Lagrangian $\mL$ are always symmetries of its EOM ${\ws{E}_u(\mL)}$, the converse is not always true \cite{olver_textbook_1993}, as we have just seen.
In the subsequent 5-vector theory, we will find a completely Poincar\'e invariant theory by addressing the following three limitations of our vierbein formalism:
\begin{itemize}
\item The vierbein's transformation under $P^\alpha$ of Eq.~(\ref{VierbeinFormalismGenerators}) is too inflexible, in a sense that will be clarified. We will use the matrix formalism to define a more flexible notion of our solder field's translation, which will require the vierbein's expansion into the $5\times5$ f\"unfbein.
\item The vierbein theory lacks an antiparticle. If we consider the EOM of Eq.~(\ref{theEOM}), we note that the momentum of our $\phi$ field is---roughly speaking---characterized by the $\phi^\mu$ field. Because the 5-vector must transform under translations, its `linear momentum charge'---in a quantized theory---should be conserved in its interactions. The terms of a 5-vector Lagrangian should therefore couple only the 5-vector field and its antiparticle.
\item The vierbein field is non-dynamical. The setting of Noether's procedure in jet space requires that any vertical field---any dependent variable---have a corresponding Euler-Lagrange equation. We shall therefore define our f\"unfbein solder field to be dynamical.
\end{itemize}
In the following matrix formalism, we render the improvements motivated above. We will introduce three new elements into our Lagrangian: (a) the \emph{5-vector} particle; (b) its antiparticle---the \emph{twisted 5-vector}; and (c) the \emph{f\"unfbein} solder field.
\subsection{Poincar\'e Lift \#2: Matrix Formalism}
We therefore refine our vierbein theory by reexpressing its fields and group actions in the form of matrix representations. In the following, we recast the action of the Poincar\'e symmetries---represented as differential operators on jet space in Eq.~(\ref{VierbeinFormalismGenerators})---as matrix transformations on our matter and solder fields.
We choose the following faithful, non-unitary $5\times5$ matrix representation $\rho$ of the Poincar\'e group:
\begin{eqn}
\rho:\left\{
\begin{alignedat}{3}
&(\Lambda,\vphi)&&\rightarrow~~\gv{\Lambda}&&\equiv\left[\begin{matrix}\Lambda^\mu_{~\nu}&\v{0}\\\vphi_\nu&1\end{matrix}\right]\\
&(\mOne,\v{0})&&\rightarrow~~\gv{\Lambda}_0&&\equiv\left[\begin{matrix}\delta^\mu_\nu&\v{0}\\\v{0}&1\end{matrix}\right]\\
&(\Lambda,\vphi)^{-1}&&\rightarrow~~\gv{\Lambda}^{-1}&&\equiv\left[\begin{matrix}(\Lambda^{-1})^\mu_{~\nu}&\v{0}\\-\vphi_\sigma(\Lambda^{-1})^\sigma_{~\nu}&1\end{matrix}\right].
\end{alignedat}\right.
\end{eqn}
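As a sanity check on this representation, one can verify numerically that the $5\times5$ matrices close under multiplication and that the stated inverse formula indeed inverts $\gv{\Lambda}$. The sketch below uses an illustrative boost and randomly sampled translations (our assumptions, not data from the theory):

```python
import numpy as np

rng = np.random.default_rng(0)

def rep(Lam, trans):
    """Assemble the 5x5 block matrix [[Lam, 0], [trans, 1]]."""
    M = np.zeros((5, 5))
    M[:4, :4] = Lam
    M[4, :4] = trans
    M[4, 4] = 1.0
    return M

# an illustrative boost along x (any invertible 4x4 block works for these checks)
xi = 0.3
Lam = np.eye(4)
Lam[0, 0] = Lam[1, 1] = np.cosh(xi)
Lam[0, 1] = Lam[1, 0] = np.sinh(xi)

t1, t2 = rng.normal(size=4), rng.normal(size=4)
g1, g2 = rep(Lam, t1), rep(Lam, t2)

prod = g1 @ g2   # closure: the product retains the block form
assert np.allclose(prod[:4, 4], 0.0) and np.isclose(prod[4, 4], 1.0)

# the stated inverse: [[Lam^{-1}, 0], [-trans . Lam^{-1}, 1]]
Lam_inv = np.linalg.inv(Lam)
g1_inv = rep(Lam_inv, -t1 @ Lam_inv)
assert np.allclose(g1 @ g1_inv, np.eye(5))
assert np.allclose(g1_inv @ g1, np.eye(5))
```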
We correspondingly organize our matter fields into a 5-component column vector that we call a \emph{5-vector}---$\gv{\phi}$---whose components and Poincar\'e transformation are defined as follows:
\begin{eqn}
\gv{\phi}(x)&\defeq\left[\begin{matrix}[l]\phi^\mu\\~\phi\end{matrix}\right](x)\\
\gv{\Lambda}\cdot\gv{\phi}&\defeq\left[\begin{matrix}\Lambda^\mu_{~\nu}&\v{0}\\\vphi_\nu&1\end{matrix}\right]\cdot\left[\begin{matrix}\phi^\nu\\\phi\end{matrix}\right]=\left[\begin{matrix}\Lambda^\mu_{~\nu}\phi^\nu\\\phi+\vphi_\nu\phi^\nu\end{matrix}\right].
\label{5VectorDefn}
\end{eqn}
We furthermore augment our vierbein into a $5\times5$ matrix we call the \emph{f\"unfbein}---$\v{e}$---whose components and transformation are defined to be:
\begin{eqn}
\v{e}(x)\defeq&\left[\begin{matrix}e_\mu^{~a}&\v{0}\\e_\mu&1\end{matrix}\right](x)\\
\v{e}\cdot\gv{\Lambda}^{-1}\defeq&\left[\begin{matrix}e_\mu^{~a}&\v{0}\\e_\mu&1\end{matrix}\right]\cdot\left[\begin{matrix}\Lambda^\mu_{~\nu}&\v{0}\\\vphi_\nu&1\end{matrix}\right]^{-1}\\
=&\left[~\begin{matrix}[c|c]e_\mu^{~a}(\Lambda^{-1})^\mu_{~\nu}&\v{0}\\\hline e_\mu(\Lambda^{-1})^\mu_{~\nu}-\vphi_\sigma(\Lambda^{-1})^\sigma_{~\nu}&1\end{matrix}~\right].
\label{FunfbeinDefn}
\end{eqn}
We call particular attention to the \emph{inverse} action of $\gv{\Lambda}$ on the f\"unfbein, as well as the distinct use of left and right matrix multiplication for the 5-vector and f\"unfbein, respectively. For completeness, we note that $\v{e}^T$ transforms just as it should: ${\v{e}^T\rightarrow\left(\v{e}\cdot\gv{\Lambda}^{-1}\right)^T=\gv{\Lambda}^{-T}\v{e}^T}$.
We define our antiparticle---the \emph{twisted 5-vector} ${\gv{\tphi}}$---as a 5-component row vector with the following Poincar\'e transformation:
\begin{eqn}
\gv{\tphi}(x)&\defeq\left[\tphi^\mu~\tphi\right](x)\\
\gv{\tphi}\cdot\gv{\Lambda}^T&\defeq\left[\tphi^\nu~\tphi\right]\cdot\left[\begin{matrix}\Lambda^\mu_{~\nu}&\vphi_\nu\\\v{0}&1\end{matrix}\right]=\left[\tphi^\nu\Lambda^\mu_{~\nu}~\Big|~\tphi+\tphi^\nu\vphi_\nu\right].
\label{A5VectorDefn}
\end{eqn}
Despite the identical Poincar\'e transformations of $\gv{\tphi}$ and $\gv{\phi}^T$, the EOM of $\gv{\tphi}$ will distinguish it from $\gv{\phi}$, as we shall see.
As a point of clarification, our notation in Eq.~(\ref{A5VectorDefn}) is somewhat schematic. One might prefer to explicitly notate the implied transpose of the 4-vector $\tphi^\mu$ and the Poincar\'e matrix components, e.g. $[(\tphi^\nu)^T(\Lambda^\mu_{~\nu})^T]$. Wherever components of $\gv{\tphi}$, $\gv{\Lambda}^T$ and $\v{e}^T$ are written, however, we will continue to rely on indices to indicate the appropriate order of operations, as we have in Eq.~(\ref{A5VectorDefn}), and we trust that it will not be a source of confusion.
We observe that, given the matrix group actions defined in Eqs.~(\ref{5VectorDefn})-(\ref{A5VectorDefn}), the expressions
\begin{eqn}
\gv{\tphi}\cdot\v{e}^T=\left[\tphi^\mu~\tphi\right]\cdot\left[\begin{matrix}e_\mu^{~a}&e_\mu\\\v{0}&1\end{matrix}\right]~~~\text{and}~~~\v{e}\cdot\gv{\phi}=\left[\begin{matrix}e_\mu^{~a}&\v{0}\\e_\mu&1\end{matrix}\right]\cdot\left[\begin{matrix}\phi^\mu\\\phi\end{matrix}\right]
\label{invarPoincareExpressions}
\end{eqn}
are invariant under Poincar\'e transformations.
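This invariance is a direct consequence of matrix associativity: ${(\v{e}\gv{\Lambda}^{-1})(\gv{\Lambda}\gv{\phi})=\v{e}\gv{\phi}}$ and ${(\gv{\tphi}\gv{\Lambda}^T)(\gv{\Lambda}^{-T}\v{e}^T)=\gv{\tphi}\v{e}^T}$. A quick numerical check with randomly sampled fields (the sample values are our own illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# a generic 5x5 group element of the block form [[Lam, 0], [trans, 1]]
Lam = np.eye(4) + 0.1 * rng.normal(size=(4, 4))  # any invertible 4x4 block
G = np.zeros((5, 5))
G[:4, :4], G[4, :4], G[4, 4] = Lam, rng.normal(size=4), 1.0
G_inv = np.linalg.inv(G)

# a sample funfbein [[e_mu^a, 0], [e_mu, 1]], 5-vector, and twisted 5-vector
e5 = np.zeros((5, 5))
e5[:4, :4], e5[4, :4], e5[4, 4] = rng.normal(size=(4, 4)), rng.normal(size=4), 1.0
phi5 = rng.normal(size=(5, 1))   # 5-vector (column)
tphi5 = rng.normal(size=(1, 5))  # twisted 5-vector (row)

# e -> e G^{-1},  phi -> G phi,  tphi -> tphi G^T,  e^T -> G^{-T} e^T
assert np.allclose((e5 @ G_inv) @ (G @ phi5), e5 @ phi5)
assert np.allclose((tphi5 @ G.T) @ (G_inv.T @ e5.T), tphi5 @ e5.T)
```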
As a final introductory note before we define ungauged 5-vector theory, we state our \emph{solder field assumptions} for the `ungauged' f\"unfbein in the matrix formalism:
\begin{enumerate}[label=(\alph{enumi})]
\item $\v{e}(x)$ is constant\footnote{This assumption is applied as if it were an \emph{on-shell condition}. It is not required \emph{a priori}, for example, when deriving EOM or applying Poincar\'e transformations. In general relativity, a related assumption appears as the `tetrad postulate' \cite{carroll_textbook_2003}, wherein ${\nabla_a(e_\mu^{~b})=0}$ for the covariant derivative $\nabla_a$. Since our present theory is as yet ungauged, the analogous requirement is ${\partial_a\v{e}=0}$.}---that is, ${\partial_a\v{e}=0}$ $\forall$ $x$;
\item $\v{e}$ transforms under global Poincar\'e transformations, as defined in Eq.~(\ref{FunfbeinDefn}); and
\item $\v{e}$ is a dynamical field (though its EOM will turn out to be indeterminate in this as-yet-ungauged theory).
\end{enumerate}
In (c), we have relaxed the non-dynamical solder field assumption of the vierbein formalism.
We are at last ready to define the \emph{5-vector Lagrangian} in our matrix formalism:
\begin{eqn}
\mL&\defeq\gv{\tphi}\v{e}^T\Big\{\gv{\partial}_\eta-\gv{\partial}_{\text{\tiny{$V$}}}\Big\}\v{e}\gv{\phi}\\
&\defeq\left[\tphi^\mu~\tphi\right]\left[\begin{matrix}e_\mu^{~a}&e_\mu\\\v{0}&1\end{matrix}\right]\cdot\\
&\hspace{30pt}\left\{\left[\begin{matrix}\eta_{ab}&-\partial_a\\-\partial_b&\partial^2\end{matrix}\right]-\left[\begin{matrix}\mZero&\v{0}\\\v{0}&\partial^2-m^2\end{matrix}\right]\right\}\left[\begin{matrix}e_\nu^{~b}&\v{0}\\e_\nu&1\end{matrix}\right]\left[\begin{matrix}\phi^\nu\\\phi\end{matrix}\right].
\label{theLagrangian}
\end{eqn}
Defining the Poincar\'e-invariant quantities
\begin{eqn}
\phi_0&\defeq\phi+e_\mu\phi^\mu\\
\tphi_0&\defeq\tphi+e_\mu\tphi^\mu,
\label{PoincareInvariantQtys}
\end{eqn}
we may concisely rewrite this Lagrangian as:
\begin{eqn}
\mL=\tphi^\mu g_{\mu\nu}\phi^\nu-\tphi^\mu e_\mu^{~a}\partial_a\phi_0-\tphi_0\partial_b(e_\nu^{~b}\phi^\nu)+m^2\tphi_0\phi_0.
\label{5VectorUngaugedLagrangian}
\end{eqn}
We now make several comments about $\mL$.
First, we have restricted our theory to the (almost Klein-Gordon) potential ${V(\tphi_0,\phi_0)=m^2\tphi_0\phi_0}$. This potential is uniquely well-suited to our bilinear matrix formulation; however, self-interacting potentials ${V=V(\tphi_0,\phi_0)}$, defined in terms of the Poincar\'e-invariant quantities of Eq.~(\ref{PoincareInvariantQtys}), are equally admissible. We also note that, given the positive sign of $V$ within $\mL$, we have arbitrarily chosen to flip the overall sign of the Lagrangian.
Second, we have included in Eq.~(\ref{theLagrangian}) two matrices formed of `background $\{x^a\}$ operators', denoted by the symbols $\gv{\partial}_\eta$ and $\gv{\partial}_{\text{\tiny{$V$}}}$. Importantly, these matrices do not transform under Poincar\'e transformation. Loosely speaking, they characterize the \emph{horizontal kinematics} of our fields on $\{x^a\}$, while the other matrices of our Lagrangian characterize the \emph{vertical dynamics} of our fields.
There is a redundancy in the formulation of these matrices: Rather than combine $\gv{\partial}_\eta$ and $\gv{\partial}_{\text{\tiny{$V$}}}$ into a single matrix, we include second derivatives ${\partial^2\equiv\partial^a\partial_a}$ in $\gv{\partial}_\eta$ and $\gv{\partial}_{\text{\tiny{$V$}}}$ whose terms ${\tphi_0\partial^2\phi_0}$ cancel each other out.
We do this to highlight the relationship between the Poincar\'e transformation of the f\"unfbein, and the jet space transformations given in the vierbein formalism's Eq.~(\ref{VierbeinFormalismGenerators}). Taking ${\gv{\partial}_\eta}$ alone, the Poincar\'e translation ${[\v{e}^T\gv{\partial}_\eta\v{e}]\rightarrow[\gv{\Lambda}^{-T}\v{e}^T\gv{\partial}_\eta\v{e}\gv{\Lambda}^{-1}]}$ for ${\gv{\Lambda}=(\mOne,\vphi)}$ exactly reproduces the vierbein translation $P^\alpha$ previously defined in Eq.~(\ref{VierbeinFormalismGenerators}). Indeed, it is now clear why the unusual coefficient $\partial^a$ appears in the definition of $P^\alpha$ in the vierbein formalism: It effectively replaces the transformation of the four \emph{hidden} $e_\mu$ components of the f\"unfbein field, without explicitly including them in the theory.
In this sense, the transformation of $\v{e}$ in the full context of ${[\v{e}^T\{\gv{\partial}_\eta-\gv{\partial}_V\}\v{e}]}$ reveals the flexibility of the f\"unfbein's transformations relative to the vierbein.
Having characterized elements of our 5-vector Lagrangian, we now proceed to examine its Poincar\'e symmetry. The matrix generators of the Poincar\'e group are given by:
\begin{eqn}
[P^\alpha]^\mu_{~\sigma}&\defeq\left[\begin{matrix}\mZero&\v{0}\\\delta^\alpha_\sigma&0\end{matrix}\right]\\
[M^{\alpha\beta}]^\mu_{~\sigma}&\defeq\left[\begin{matrix}\left(\delta^\alpha_\sigma\eta^{\beta\mu}-\delta^\beta_\sigma\eta^{\alpha\mu}\right)&\v{0}\\\v{0}&0\end{matrix}\right].
\label{MatrixFormalismGenerators}
\end{eqn}
It is easily verified that these generators satisfy the Lie algebra of Eq.~(\ref{PoincareLieAlgebra}).
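Indeed, a short numerical check confirms that the ordinary matrix commutator of the generators in Eq.~(\ref{MatrixFormalismGenerators}) reproduces the Poincar\'e Lie algebra of Eq.~(\ref{PoincareLieAlgebra}). The sketch below (the signature choice is a convention; the relations hold for either sign) loops over all index combinations:

```python
import numpy as np
from itertools import product

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # eta^{mu nu}; signature is a convention

def P(a):
    """[P^a]: unit entry in the bottom row, column a."""
    m = np.zeros((5, 5)); m[4, a] = 1.0; return m

def M(a, b):
    """[M^{ab}]^mu_sigma = delta^a_sigma eta^{b mu} - delta^b_sigma eta^{a mu}."""
    m = np.zeros((5, 5))
    for mu, sg in product(range(4), range(4)):
        m[mu, sg] = (a == sg) * eta[b, mu] - (b == sg) * eta[a, mu]
    return m

def comm(A, B):
    return A @ B - B @ A

for a, b, mu in product(range(4), repeat=3):
    assert np.allclose(comm(P(a), P(b)), 0)
    assert np.allclose(comm(M(a, b), P(mu)),
                       eta[a, mu] * P(b) - eta[b, mu] * P(a))

for a, b, mu, nu in product(range(4), repeat=4):
    rhs = (eta[a, mu] * M(b, nu) + eta[b, nu] * M(a, mu)
           - eta[a, nu] * M(b, mu) - eta[b, mu] * M(a, nu))
    assert np.allclose(comm(M(a, b), M(mu, nu)), rhs)
```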
We apply these generators to $\mL$ of Eq.~(\ref{theLagrangian}) as we would differential operators, and note that the prolongations of their actions are equivalent to transforming our matrix fields wherever those fields appear---even under a derivative operator. In particular, the matrix generators of Eq.~(\ref{MatrixFormalismGenerators})---and their negations and transposes, as appropriate---applied to the fields of $\mL$ \emph{in situ}, generate the same transformations as the following prolonged vector fields:
\begin{eqn}
\ws{pr}[P^\alpha]&=\smashoperator{\sum\limits_{J=\{\emptyset,a,\cdots\}}}\Big[\phi^\alpha_J\partial_{\phi_J}+\tphi^\alpha_J\partial_{\tphi_J}\Big]-\partial_{e_\alpha}\\
\ws{pr}[M^{\alpha\beta}]&=\smashoperator{\sum\limits_{J=\{\emptyset,a,\cdots\}}}\Big[\phi^\sigma_J(\delta^\alpha_\sigma\eta^{\beta\nu}-\delta^\beta_\sigma\eta^{\alpha\nu})\partial_{\phi^\nu_J}\\
&\hspace{30pt}+\tphi^\sigma_J(\delta^\alpha_\sigma\eta^{\beta\nu}-\delta^\beta_\sigma\eta^{\alpha\nu})\partial_{\tphi^\nu_J}\\
&\hspace{30pt}-(e_\sigma)_J(\delta^\alpha_\nu\eta^{\beta\sigma}-\delta^\beta_\nu\eta^{\alpha\sigma})\partial_{(e_\nu)_J}\\
&\hspace{30pt}-(e_\sigma^{~c})_J(\delta^\alpha_\nu\eta^{\beta\sigma}-\delta^\beta_\nu\eta^{\alpha\sigma})\partial_{(e_\nu^{~c})_J}\Big].
\end{eqn}
We therefore observe that the 5-vector Lagrangian of Eq.~(\ref{theLagrangian}) is completely invariant under Poincar\'e transformations:
\begin{eqn}
\ws{pr}[P^\alpha](\mL)&=0\\
\ws{pr}[M^{\alpha\beta}](\mL)&=0.
\label{showMatrixVarSymm}
\end{eqn}
Eq.~(\ref{showMatrixVarSymm}) demonstrates that our matrix transformations are indeed variational symmetries of our Lagrangian in Eq.~(\ref{theLagrangian})---and that we have, therefore, repaired the vierbein formalism. Having successfully lifted the Poincar\'e symmetries of our classical field theory, we have thus fulfilled our `verticalization program'.
We may now derive the EOM for our matter and solder fields. We simply apply Euler operators---again defined in terms of $\{x^a\}$ coordinates---to the Lagrangian of Eq.~(\ref{theLagrangian}):
\begin{eqn}
\hspace{-5pt}&\v{0}=\ws{E}_{\tiny{\text{$\gv{\tphi}$}}}(\mL)=\v{e}^T\big\{\gv{\partial}_\eta-\gv{\partial}_{\text{\tiny{$V$}}}\big\}\v{e}\gv{\phi}\\
\hspace{-5pt}&\v{0}=\ws{E}_{\tiny{\text{$\gv{\phi}$}}}(\mL)=\v{e}^T\big\{\tilde{\gv{\partial}}_\eta-\gv{\partial}_{\text{\tiny{$V$}}}\big\}\v{e}\gv{\tphi}^T\\
\hspace{-5pt}&0=\ws{E}_{e_\sigma}(\mL)=-\tphi^\sigma\left[\partial_b(e_\nu^{~b}\phi^\nu)-m^2(e_\nu\phi^\nu+\phi)\right]\\
\hspace{-5pt}&\hspace{57pt}+\phi^\sigma\left[\partial_a(e_\mu^{~a}\tphi^\mu)+m^2(e_\mu\tphi^\mu+\tphi)\right]\\
\hspace{-5pt}&0=\ws{E}_{e_\sigma^{~a}}(\mL)=\tphi^\sigma\eta_{ab}\left[e_\nu^{~b}\phi^\nu-\partial^b\left(e_\nu\phi^\nu+\phi\right)\right]\\
\hspace{-5pt}&\hspace{52pt}+\phi^\sigma\eta_{ab}\left[e_\mu^{~b}\tphi^\mu+\partial^b\left(e_\mu\tphi^\mu+\tphi\right)\right].
\label{theUngaugedEOM}
\end{eqn}
In the second equation above, we have employed a new `kinematic matrix':
\begin{eqn}
\tilde{\gv{\partial}}_\eta\equiv\left[\begin{matrix}\eta_{ab}&\partial_a\\\partial_b&\partial^2\end{matrix}\right],
\end{eqn}
which reflects the change in sign of first-order derivatives upon the evaluation of an Euler operator.
We define the following Poincar\'e-invariant quantities
\begin{eqn}
(\circ)&\defeq m^2\phi_0-\partial_b(e_\nu^{~b}\phi^\nu)\\
(\bullet)&\defeq m^2\tphi_0+\partial_b(e_\mu^{~b}\tphi^\mu)\\
(\circ\circ)_a&\defeq\eta_{ab}e_\nu^{~b}\phi^\nu-\partial_a\phi_0\\
(\bullet\bullet)_a&\defeq\eta_{ab}e_\mu^{~b}\tphi^\mu+\partial_a\tphi_0,
\end{eqn}
so that we may reexpress our EOM of Eq.~(\ref{theUngaugedEOM}) as follows:
\begin{eqn}
\hspace{-5pt}&0=\ws{E}_{\tphi^\sigma}(\mL)=e_\sigma^{~a}(\circ\circ)_a+e_\sigma(\circ)\\
\hspace{-5pt}&0=\ws{E}_{\tphi}(\mL)=(\circ)\\
\hspace{-5pt}&0=\ws{E}_{\phi^\sigma}(\mL)=e_\sigma^{~a}(\bullet\bullet)_a+e_\sigma(\bullet)\\
\hspace{-5pt}&0=\ws{E}_{\phi}(\mL)=(\bullet)\\
\hspace{-5pt}&0=\ws{E}_{e_\sigma}(\mL)=\tphi^\sigma(\circ)+\phi^\sigma(\bullet)\\
\hspace{-5pt}&0=\ws{E}_{e_\sigma^{~a}}(\mL)=\tphi^\sigma(\circ\circ)_a+\phi^\sigma(\bullet\bullet)_a.
\label{eulerOperator5VectorEOM}
\end{eqn}
We see that all of our EOM are therefore solved when:
\begin{eqn}
(\circ)=(\bullet)=(\circ\circ)_a=(\bullet\bullet)_a=0.
\label{theBriefUngaugedEOM}
\end{eqn}
We observe that our EOM leave the solder field's 20 degrees of freedom (DOF) undetermined. This is to be expected in an ungauged theory---inasmuch as the `dynamics' of $\eta_{\mu\nu}$ are `indeterminate' in a flat theory of gravity without curvature. Indeed, our assumption ${\partial_a\v{e}=0}$ requires $\v{e}$ to be constant over all $\{x^a\}$.
We further note from Eq.~(\ref{theBriefUngaugedEOM}) that \emph{both} $\phi_0$ and $\tphi_0$ obey the Klein-Gordon equation on shell:
\begin{eqn}
(\partial^2-m^2)\phi_0=(\partial^2-m^2)\tphi_0=0.
\label{KGeqn}
\end{eqn}
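As an elementary check of Eq.~(\ref{KGeqn}), a plane wave with the on-shell frequency ${\omega^2=|\v{k}|^2+m^2}$ satisfies the Klein-Gordon equation. The symbolic sketch below assumes the mostly-plus convention ${\partial^2=-\partial_t^2+\nabla^2}$, i.e.\ ${\eta=\mathrm{diag}(-1,1,1,1)}$, under which ${(\partial^2-m^2)\phi_0=0}$ admits oscillatory solutions:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
m = sp.symbols('m', positive=True)
kx, ky, kz = sp.symbols('k_x k_y k_z', real=True)
omega = sp.sqrt(kx**2 + ky**2 + kz**2 + m**2)   # on-shell frequency

phi0 = sp.cos(omega * t - kx * x - ky * y - kz * z)

# d'Alembertian with eta = diag(-1, 1, 1, 1): box = -d_t^2 + laplacian
box = (-sp.diff(phi0, t, 2) + sp.diff(phi0, x, 2)
       + sp.diff(phi0, y, 2) + sp.diff(phi0, z, 2))

# (box - m^2) phi0 vanishes identically on shell
assert sp.simplify(box - m**2 * phi0) == 0
```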
Despite these identical dynamics, the $(\bullet)$ and $(\bullet\bullet)_a$ EOM for $\gv{\tphi}$ have an important sign difference with respect to $(\circ)$ and $(\circ\circ)_a$. The \emph{internal} dynamics of the components of $\gv{\tphi}$ are distinctly opposite those of $\gv{\phi}$, encouraging its interpretation as the antiparticle of $\gv{\phi}$.
Finally, we proceed to develop the 10 conservation laws of 5-vector theory. Because our symmetries are already vertical, and since $B$ of Eq.~(\ref{conciseConsLawForVariational}) exactly vanishes in Eq.~(\ref{showMatrixVarSymm}), we need only calculate the 4-tuple $A$ that completes our conservation laws, as defined in Eq.~(\ref{defFourTuple}). Gathering the nonzero data required, we find:
\begin{eqn}
\ws{E}_\phi^a(\mL)&=-e_\mu^{~a}\tphi^\mu\\
\ws{E}_{\phi^\sigma}^a(\mL)&=-e_\sigma^{~a}\tphi_0-e_\sigma e_\mu^{~a}\tphi^\mu\\
\ws{E}_{e_\sigma}^{a}(\mL)&=-e_\mu^{~a}\tphi^\mu\phi^\sigma\\
\ws{E}_{e_\sigma^{~b}}^{a}(\mL)&=-\delta^a_b\phi^\sigma\tphi_0\\
Q^{P^\alpha}_\phi&=\phi^\alpha\hspace{28pt}Q^{M^{\alpha\beta}}_{\phi^\nu}=\left(\delta_\sigma^\alpha\eta^{\beta\nu}-\delta_\sigma^\beta\eta^{\alpha\nu}\right)\phi^\sigma\\
Q^{P^\alpha}_{\tphi}&=\tphi^\alpha\hspace{28pt}Q^{M^{\alpha\beta}}_{\tphi^\nu}=\left(\delta_\sigma^\alpha\eta^{\beta\nu}-\delta_\sigma^\beta\eta^{\alpha\nu}\right)\tphi^\sigma\\
Q^{P^\alpha}_{e_\sigma}&=-\delta^\alpha_\sigma\hspace{22pt}Q^{M^{\alpha\beta}}_{e_\nu}=\left(\delta_\nu^\beta\eta^{\alpha\sigma}-\delta_\nu^\alpha\eta^{\beta\sigma}\right)e_\sigma\\
&\hspace{54pt}Q^{M^{\alpha\beta}}_{e_\nu^{~a}}=\left(\delta_\nu^\beta\eta^{\alpha\sigma}-\delta_\nu^\alpha\eta^{\beta\sigma}\right)e_\sigma^{~a}.
\end{eqn}
Following Eq.~(\ref{defFourTuple}), these data yield the following 4-tuples:
\begin{eqn}
A^a_{P^\alpha}&=0\\
A^a_{M^{\alpha\beta}}&=0.
\end{eqn}
We have therefore discovered that the \emph{canonical} Noether currents of 5-vector theory are \emph{trivial}---that is, they vanish on shell.
By Noether's second theorem, this triviality can be seen as the result of the gauge-like Poincar\'e symmetry of $\mL$ in Eq.~(\ref{theLagrangian})---that is, its \emph{local} Poincar\'e symmetry. Analogous results are found in other locally symmetric (gauge) theories, for example, in the `strong' conservation laws of general relativity \cite{fletcher_local_1960,goldberg_invariant_1980,kosmann-schwarzbach_noether_2011}.
Nevertheless, it is clear from the comparable physics of scalar + 4-vector theory and 5-vector theory that 5-vector theory also conserves 10 nontrivial currents as it evolves through its EOM solution subspace. These nontrivial conservation laws may be identified by simply noting that the EOM of Eq.~(\ref{theBriefUngaugedEOM}) have the same structure as the scalar + 4-vector EOM of Eq.~(\ref{RealPlusFourVectorEOM}).
We can therefore write down corresponding conservation laws, taking inspiration from Eq.~(\ref{consLawFlatFiveVectorNoVierbeinTheory}) and recalling that we have sign changes from the overall Lagrangian and antiparticle terms:
\begin{eqn}
\hspace{-15pt}0&=\ws{D}_a\Bigg[e^\alpha_{~b}\bigg(e_\mu^{~a}\tphi^\mu\partial^b\phi_0+\tphi_0\partial^b(e_\mu^{~a}\phi^\mu)\bigg)+e^\alpha_{~b}\eta^{ba}\mL\Bigg]\\
&\eqdef\ws{D}_aT^{a\alpha}\\
\hspace{-15pt}0&=\ws{D}_a\Bigg[\bigg(e^\alpha_{~b}x^bT^{a\beta}-e^\beta_{~b}x^bT^{a\alpha}\bigg)-\eta^{ab}\bigg(\tphi^\alpha e^\beta_{~b}-\tphi^\beta e^\alpha_{~b}\bigg)\phi_0\Bigg]\\
&\eqdef\ws{D}_aL^{a\alpha\beta}.
\label{consLawFiveVectorMatrixTheory}
\end{eqn}
We note that on-shell substitutions from the EOM of Eq.~(\ref{theBriefUngaugedEOM}) reduce the first of these conservation laws to the following equality: ${0=\left(\partial_ae^\alpha_{~b}\right)\cdot\left[\tphi_0\partial^a\partial^b\phi_0-\partial^a\tphi_0\partial^b\phi_0\right]}$. We therefore use our flat solder field assumption---${\partial_a\v{e}=0}$---to validate that ${\ws{D}_aT^{a\alpha}=0}$. The second relation, ${\ws{D}_aL^{a\alpha\beta}=0}$, follows similarly.
Before concluding our discussion of ungauged 5-vector theory's conservation laws, we seek to reexpress Eq.~(\ref{consLawFiveVectorMatrixTheory}). In the manner of a quantum mechanical probability current, we symmetrize the contributions of the 5-vector $\gv{\phi}$ and the twisted 5-vector $\gv{\tphi}$ to the energy-momentum $T^{a\alpha}$. We further note that the Lagrangian ${\mL=0}$ on shell, so that it may be omitted from $T^{a\alpha}$. With these considerations, we discover the following modified conservation laws:
\begin{eqn}
0&=\ws{D}_a\bar{T}^{a\alpha}\defeq\ws{D}_a\Bigg[e^\alpha_{~b}e_\mu^{~a}\bigg(\tphi^\mu\partial^b\phi_0-\phi^\mu\partial^b\tphi_0\bigg)\\
&\hspace{74pt}+e^\alpha_{~b}\bigg(\tphi_0\partial^b(e_\mu^{~a}\phi^\mu)-\phi_0\partial^b(e_\mu^{~a}\tphi^\mu)\bigg)\Bigg]\\
0&=\ws{D}_a\bar{L}^{a\alpha\beta}\defeq\ws{D}_a\Bigg[e^\alpha_{~b}x^b\bar{T}^{a\beta}-e^\beta_{~b}x^b\bar{T}^{a\alpha}\Bigg].
\label{finalConsLawFiveVectorMatrixTheory}
\end{eqn}
These symmetrized energy-momenta, which are readily verified by substitutions from Eq.~(\ref{theBriefUngaugedEOM}), will prove indispensable in our discrete companion paper. We further observe that the symmetrized angular momentum conservation law no longer requires an additional offsetting term, as appeared in Eqs.~(\ref{consLawFlatFiveVectorNoVierbeinTheory}) and (\ref{consLawFiveVectorMatrixTheory}).
\subsection{The Comparable Symmetries of\\5-Vector Theory and Scalar Theory}
We have demonstrated the lifted Poincar\'e symmetries of 5-vector theory, and discovered its nontrivial conservation laws. We now pause to compare the vertical Poincar\'e symmetries we have defined for 5-vector theory with the more familiar Poincar\'e transformations of scalar field theory.
In Eq.~(\ref{5VectorDefn}), the action of the translation symmetry $P^\alpha$ on our 5-vector particle is reminiscent of the familiar translation of scalar theory---truncated to first order:
\begin{align*}
&\text{Scalar:}&&\phi&&\rightarrow~~\phi+\vphi^\mu\partial_\mu\phi+\cdots\\
&\text{5-Vector:}&&[\phi_\mu~\phi]&&\rightarrow~~\left[\phi_\mu~\Big|~\phi+\vphi^\mu\phi_\mu\right].
\end{align*}
Similarly, inasmuch as the vierbein is a map from the Cartesian background to the local frame---${\partial^\mu=e^\mu_{~a}\partial^a}$---we may compare its transformation under $P^\alpha$ in the differential operator formalism of Eq.~(\ref{VierbeinFormalismGenerators}) with the more familiar transformation of the spacetime derivative under a translation in scalar field theory:
\begin{align*}
&\text{Spacetime Derivative:}&&\partial^\mu&&\rightarrow~~\partial^\mu+\vphi^\nu\partial_\nu\partial^\mu+\cdots\\
&\text{Vierbein:}&&e^\mu_{~a}\partial^a&&\rightarrow~~e^\mu_{~a}\partial^a+\vphi^\mu\partial_a\partial^a.
\end{align*}
Up to indexation, this translation resembles the first-order truncation of its scalar theory counterpart.
The Lorentz symmetry $M^{\alpha\beta}$ in Eq.~(\ref{5VectorDefn}) is even more immediately recognizable from scalar theory than $P^\alpha$. Indeed, $M^{\alpha\beta}$ effects the transformation of our 4-vectors---that is, the `Lorentz sectors' of our 5-component fields---just as it does for the 4-vector spacetime derivatives of scalar field theory.
\section{Conclusion}
In 5-vector theory, we have thus discovered a Poincar\'e-symmetric theory with vertical group transformations, as desired. In Eq.~(\ref{theLagrangian}), we have defined the physics of 5-vector theory, and found that its dynamics on a static Cartesian background replicate those of a scalar field in flat spacetime, as in Eq.~(\ref{KGeqn}). We have furthermore discovered its conservation laws, as expressed in Eq.~(\ref{finalConsLawFiveVectorMatrixTheory}). We conclude that 5-vector theory is a viable classical field theory, whose dynamics and conservation laws essentially reproduce the physics of a real scalar field.
Upon reflection, a revision of our physical intuition is prompted by this theory. We have demonstrated how the `background canvas' of 5-vector theory can be regarded as an invariant object---without sacrificing any consequential aspect of the spacetime symmetries of scalar theory. By constructing the 5-vector and solder field to have Lorentz and translation components, we have recast `background' spacetime symmetries as `foreground' symmetries of dynamical fields.
We furthermore observe that the invariant background $\{x^a\}$ constitutes an absolute reference frame, and appears to restore the notion of simultaneity to relativistic field theory. After all, two events $x,y\in\{x^a\}$ that satisfy ${x^0=y^0}$ are, formally, simultaneous.
However, a `5-vector observer'---that is, an observer composed of 5-vector matter fields---whose relativistic dynamics are described by Eq.~(\ref{theBriefUngaugedEOM}), would experience the passage of time in her own reference frame. Therefore, while two events may have a well-defined simultaneity in the Cartesian background, they may not be observed to be simultaneous by such a `vertical' observer.
To lift the Poincar\'e symmetries, 5-vector theory requires the coexistence of the `absolute universal clock' of Newtonian physics, and the `local relativistic clock' of Einsteinian physics. In our companion paper, we will further demonstrate that this absolute Newtonian clock might well be digital.
\section{Acknowledgments}
This research was supported by the U.S. Department of Energy (DE-AC02-09CH11466).
\section{Introduction} \label{sec-intro} %
In this paper, we investigate ground states of the energy for a system including both attractive and repulsive Coulomb interactions. The fundamental nature of such \textit{nonlocal} Coulomb interactions is attested by their ubiquitous presence in nature and by the vast number of physical systems in which they occur.
We consider the problem of finding the optimal shape taken by a uniform negative distribution of charge interacting with a fixed positively charged region; mathematically, this leads to a problem in \textit{potential theory} (\cite{Hel,Lan}). In our model, minimizing configurations are determined by the interplay between the repulsive self-interaction of the negative phase and the attractive interaction between the two oppositely charged regions. We investigate existence of global minimizers and their structure, and we obtain \textit{charge neutrality} and \textit{screening} as key features of our system. In particular, it is noteworthy that the positive phase is completely screened by the optimal negative distribution of charge, in the sense that outside the support of the two charges the long-range potential exerted by the positive region is canceled by the presence of the negative one.
On the basis of this screening result, we can draw a link to the classical theory of \textit{obstacle problems} \cite{Caf77,Caf80,Caf98,PetShaUra}: indeed, the net potential of the optimal configuration can be characterized -- outside the positively charged region and with respect to its own boundary conditions -- as the solution to an obstacle problem, a fact which in turn entails further regularity properties of the minimizer.
\medskip
Mathematically, we represent the fixed positively charged domain by a bounded open set $\Omega^+\subset\mathbb R^3$, and we are
interested in minimizing, among configurations $\Omega^-\subset\mathbb R^3\setminus\Omega^+$ with finite volume, the
nonlocal energy
\begin{align} \label{def-E-intro}
&I(\Omega^+,\Omega^-)\ := \ \notag\\
&\int_{\Omega^+}\int_{\Omega^+}\frac{1}{4\pi|x-y|} \ \mathrm{d} x \mathrm{d} y +
\int_{\Omega^-}\int_{\Omega^-}\frac{1}{4\pi|x-y|}\ \mathrm{d} x \mathrm{d} y -2 \int_{\Omega^+}\int_{\Omega^-}\frac{1}{4\pi|x-y|}\ \mathrm{d} x \mathrm{d} y\,.
\end{align}
Here the first two terms represent the repulsive self-interaction energies of $\Omega^+$ and $\Omega^-$, respectively, and the third term represents the
attractive mutual interaction between $\Omega^+$ and $\Omega^-$. The present model also arises in the study of
copolymer-homopolymer blends; see the remarks on related models below.
The first natural question concerns the existence of minimizers for this variational problem. Since the functional does
not include any interfacial penalization, and we just have a uniform bound on the charge density, the natural topology for the compactness of minimizing sequences is the
weak*-topology in $L^\infty(\mathbb R^3)$. While the lower semicontinuity of the functional follows from standard arguments in potential
theory, a non-trivial issue lies in the fact that the limit distribution could include intermediate densities of charge,
in the sense that the limit function could attain values in the whole interval $[0,1]$: as a result, the limit
configuration might not be admissible for our problem. We will however show below that minimizers only take values in $\{0,1\}$, which allows one to bring the negatively charged region and the positively charged one as ``close'' together as possible.
Our next aim is to identify specific properties of the optimal set. Here we first establish a \textit{charge neutrality} phenomenon: the total negative charge of the optimal configuration equals the given positive one, i.e.\ $|\Omega^-|=|\Omega^+|$. In particular, configurations with nonzero total net charge are never optimal. We also study the case where the total negative charge is prescribed: we discuss the minimization of the energy under the additional volume constraint $|\Omega^-|=\lambda$, and analyze the dependence of the solution on the parameter $\lambda$ (Theorem~\ref{thm-constrained}), proving that a minimizer exists if and only if $\lambda\leq|\Omega^+|$.
The issue of charge neutrality is a central question for systems including interacting positive and negative charges. For instance, we refer to the work of Lieb and Simon \cite{LieSim}, where charge neutrality is shown for minimizers of the Thomas-Fermi energy functional for atomic structures, in the context of quantum mechanics.
Another related question is whether the maximal negative \textit{ionization} (the number of extra electrons that a neutral atom can bind) remains small: we mention in particular the so-called \textit{ionization conjecture}, which gives an upper bound on the number of electrons that can be bound to an atomic nucleus. For some results in this direction, see e.g. \cite{BenLie,Lieb84,LiSiSiTh,Sol03}.
A second remarkable property of minimizers is that complete \textit{screening} is achieved (Theorem~\ref{thm-screening}): the
negative charge tends to arrange itself in a layer around the boundary of $\Omega^+$ in such a way to cancel the Coulomb
potential exerted by the positive charge. Indeed, the net potential
\begin{align*}
\phi(x):= \int_{\Omega^+}\frac{1}{4\pi|x-y|}\ \mathrm{d} y - \int_{\Omega^-}\frac{1}{4\pi|x-y|}\ \mathrm{d} y
\end{align*}
of the optimal configuration
actually vanishes in the external uncharged region $\mathbb R^3\setminus(\overline{\Omega^+\cup\Omega^-})$. Since it can be
proved (under some mild regularity assumptions on the boundary of $\Omega^+$) that $\phi$ is strictly positive in the closure of $\Omega^+$, this implies that the positive phase is completely
surrounded by $\Omega^-$ and the distance between $\Omega^+$ and the uncharged space is strictly positive; moreover,
each connected component of $\Omega^-$ has to touch the boundary of $\Omega^+$. Notice that, although the minimizer $\Omega^-$ is in general defined up to a Lebesgue-negligible set, we can always select a precise representative, which is in particular an open set, see \eqref{def-Ome-}. Such properties are established by combining information from the Euler-Lagrange equations with \textit{ad hoc} arguments based on the maximum principle.
\begin{figure}
\begin{tikzpicture}[scale=0.5]
\draw [fill=blue!15!white] (-2.5,2.4) to [out=40, in=225] (0,2.5) to [out=45, in=180 ] (2,3.4) to [out=0, in=90] (6.4,1) to [out=270, in=0] (0,-2.1) to [out=180, in=220] (-2.5,2.4);
\draw [fill=blue!70!white] (-2,2) to [out=40, in=225] (0,1.7) to [out=45, in=180 ] (2,3) to [out=0, in=90] (6,1) to [out=270, in=0] (0,-1.5) to [out=180, in=220] (-2,2);
\path [fill=white] (1,1) to [out=90, in=180] (3,2) to [out=0, in=90] (5,1) to [out=270, in=0] (3,0) to [out=180, in=270] (1,1);
\draw [fill=blue!15!white] (1,1) to [out=90, in=180] (3,2) to [out=0, in=90] (5,1) to [out=270, in=0] (3,0) to [out=180, in=270] (1,1);
\draw [fill=white] (1.6,1) to [out=90, in=180] (3,1.6) to [out=0, in=90] (4.6,1) to [out=270, in=0] (3,0.4) to [out=180, in=270] (1.6,1);
\draw [fill=blue!15!white] (9,1.3) to [out=0, in=90] (10.3,0) to [out=270, in=0] (9,-1.3) to [out=180, in=270] (7.7,0) to [out=90, in=180] (9,1.3);
\draw [fill=blue!70!white] (9,1) to [out=0, in=90] (10,0) to [out=270, in=0] (9,-1) to [out=180, in=270] (8,0) to [out=90, in=180] (9,1);
\node at (7.7,2.5) {$\Omega^+$};
\draw (7.5,2) to (5.5,1);
\draw (7.5,2) to (9,0);
\node at (7,-2) {$\Omega^-$};
\draw (7,-1.5) to (8.1,-0.7);
\draw (7,-1.5) to (4.7,0.7);
\draw (7,-1.5) to (5,-1);
\end{tikzpicture}
\caption{Sketch of the shape of the minimizer $\Omega^-$ (light-shaded area) corresponding to a given configuration $\Omega^+$ (dark-shaded area): the negative charges arrange in a layer around the positively charged region.}
\label{fig-intro}
\end{figure}
The screening result enables us to draw a connection between the classical theory of the obstacle problem and our
model, showing that the potential $\phi$ of a minimizing configuration is actually a solution of the former. In turn,
this allows us to exploit the regularity theory for the \textit{free boundary} of solutions to obstacle-type problems in order to recover further regularity properties of the minimizing set (Theorem~\ref{thm-regularity}).
In the last part of the paper, we investigate the regime in which the charge density of the negative phase is much
higher than the positive one, which is modeled mathematically by rescaling the negative charge density by $\frac{1}{\varepsilon}$
and letting $\varepsilon\to 0$. In this case, we prove $\Gamma$-convergence to a limit model where the distribution of the
negative charge is described by a positive Radon measure (Theorem~\ref{thm-gammaconv}); in turn, we show that the
optimal configuration for this limit model is attained by a surface distribution of charge on $\partial\Omega^+$
(Proposition~\ref{prp-surfacecharge}).
\medskip
\textbf{Related models with Coulomb interaction.}
Capet and Friesecke investigated in \cite{CapFri} a closely related discrete model, where the optimal
distribution of $N$ electrons of charge $-1$ in the potential of some fixed positively charged atomic nuclei is determined
in the large-$N$ limit. Under a hard-core radius constraint, which prevents electrons from falling into the nucleus,
they show via $\Gamma$-convergence that the negative charges tend to uniformly distribute on spheres around the atomic
nuclei, the number of electrons surrounding each nucleus matching the total nuclear charge; in particular, the potential
exerted by the nuclei is screened and in the limit the monopole moment, the higher multipole moments of each atom, and
the interaction energy between atoms vanish. Hence our analysis on charge neutrality, screening and on the limit surface charge model could also be interpreted as a macroscopic counterpart
of the discrete analysis developed in \cite{CapFri}.
Due to the universal nature of the Coulomb interaction, we expect that our results could also be instrumental in the investigation of more general models, where an interfacial penalization is possibly added and the phase $\Omega^+$ is no longer fixed.
Recently, the problem with a single self-interacting phase of prescribed volume, surrounded by a neutral phase, has been extensively studied (see e.g. \cite{BonCri,FraLie,Jul,KnuMur1,KnuMur2,LuOtto}).
A minimization problem for two phases with Coulomb interactions and an interfacial energy term arises for instance in the modeling of diblock-copolymers. These consist of two subchains of different type that repel each other but are chemically bonded, leading to a phase separation on a mesoscopic scale. Variational models derived by mean
field or density functional theory
\cite{MaSc94,Leib80,OhKa86,BaOo90,ChRe03} take the form of a nonlocal Cahn--Hilliard energy. A subsequent strong segregation limit results in a nonlocal perimeter problem \cite{ReWe00} with a Coulomb-type energy contribution. For a mathematical analysis of diblock-copolymer models see for example \cite{AlCo09,ReWe08,ChPW09}.
In a mixture of diblock-copolymers and homopolymers an additional \textit{macroscopic
phase separation} into homopolymers and diblock-copolymers occurs, where now three phases have to be distinguished. Choksi and Ren developed a density functional theory
\cite{ChRe05}; a subsequent strong segregation reduction leads to an energy of the form
\begin{align}
c_0{\mathcal H}^2(\partial (\overline{\Omega^+\cup \Omega^-})) + c_1 {\mathcal H}^2(\partial \Omega^+) + c_2 {\mathcal H}^2(\partial \Omega^-) + I(\Omega^+,\Omega^-) \label{eq-energy-bcp}
\end{align}
where $\Omega^+,\Omega^-$ are constrained to be open sets of finite perimeter, to have disjoint supports and equal
total volume, and where $I(\Omega^+,\Omega^-)$ denotes the Coulomb interaction energy defined in \eqref{def-E-intro}. In \cite{GenPel} the existence of minimizers
has been shown in one space dimension and lower and upper bounds on the energy of minimizers have been presented in higher dimensions. Furthermore, in \cite{GePe09} the stability of layered structures has been investigated.
The model that we analyze in the present paper can be understood as a reduction of \eqref{eq-energy-bcp} to the case $c_0=c_2=0$ and a
minimization in $\Omega^-$ only, for $\Omega^+$ given.
\medskip
\textbf{Structure of the paper.} The paper is organized as follows. The notation and the variational setting of the problem
are fixed in Section~\ref{sec-setting}, where we also state the main results. A detailed discussion of the
relaxed model with intermediate densities of charge, instrumental for the analysis of the original problem, is performed
in Section~\ref{sec-relaxed}, where the main existence theorems are proved. Section~\ref{sec-screening} contains the
proof of the screening property, while in Section~\ref{sec-obstacle} the relation with the obstacle problem and its
consequences are discussed (in particular, we prove the regularity of the minimizer). Section~\ref{sec-surface} is
devoted to the analysis of a limit surface-charge model. Finally, spherically symmetric configurations are explicitly
discussed in the concluding Appendix.
\medskip
\textbf{Notation.} We denote the ball centered at a point $x\in\mathbb R^3$ with radius $\rho>0$ by $B_\rho(x)$,
writing for simplicity $B_\rho$ for balls centered at the origin. For any measurable set $E \subset \mathbb R^3$, we denote its
Lebesgue measure by $|E| := {\mathcal L}^3(E)$. The integral average of an integrable function $f$ over a measurable set $E$ with
positive measure is $\average\int_E f := \frac{1}{|E|}\int_E f$. Sublevel sets of a function $f$ are indicated by
$\{f<\alpha\}:=\{x\in\mathbb R^3 : f(x) < \alpha\}$, and a similar notation is used for level sets and superlevel sets.
\section{Setting and main results} \label{sec-setting} %
Let $\Omega^+\subset\mathbb R^3$ be a fixed non-empty, bounded and open set. We assume that $\Omega^+$ is uniformly positively charged with charge density $1$. For any uniformly negatively charged measurable set $\Omega^-$ with finite Lebesgue measure, we consider the corresponding Coulombic energy $E(\Omega^-):=I(\Omega^+,\Omega^-)$, thus
\begin{align} \label{def-E}
E(\Omega^-)= \int_{\Omega^+}\int_{\Omega^+}\frac{1}{4\pi|x-y|} \ \mathrm{d} x \mathrm{d} y +
\int_{\Omega^-}\int_{\Omega^-}\frac{1}{4\pi|x-y|}\ \mathrm{d} x \mathrm{d} y -2 \int_{\Omega^+}\int_{\Omega^-}\frac{1}{4\pi|x-y|}\ \mathrm{d} x \mathrm{d} y\,.
\end{align}
Our aim is to find the optimal configuration of the negative charge,
under the assumption that the two oppositely charged regions do not overlap. We hence consider the minimization problem
\begin{align} \label{min-unc} %
\min \ \Bigl\{ E(\Omega^-) \ :\ \Omega^-\subset\mathbb R^3 \text{ measurable, } |\Omega^+\cap\Omega^-|=0,\ |\Omega^-|<\infty \Bigr\}
\end{align}
(notice that we require that $\Omega^-$ has finite volume in order for the energy \eqref{def-E} to be well defined). We also consider the closely related minimization problem where the total negative charge is prescribed, which for a given $\lambda>0$ reads
\begin{align} \label{min-con} %
\min \ \Bigl\{ E(\Omega^-) \ :\ \Omega^-\subset\mathbb R^3\text{ measurable, } |\Omega^+\cap\Omega^-|=0,\ |\Omega^-| = \lambda \ \Bigr\}\,.
\end{align}
\medskip
The energy \eqref{def-E} can be expressed in different ways. We will usually denote $u^+:=\chi_{\Omega^+}$ and
$u:=\chi_{\Omega^-}$, where $\chi_{\Omega^\pm}$ are the characteristic functions of the sets $\Omega^\pm$. For given charge
densities $u^+,u$, the associated \textit{potential} $\phi$ is defined as
\begin{align} \label{def-phi} %
\phi(x) := \int_{\mathbb R^3} \frac{u^+(y)-u(y)}{4\pi|x-y|} \ \mathrm{d} y\,.
\end{align}
Notice that the potential $\phi$ solves the elliptic problem
\begin{align*}
\begin{cases}
-\Delta\phi = u^+-u,\\
\lim_{|x|\to\infty} |\phi(x)| =0
\end{cases}
\end{align*}
(see Lemma~\ref{lem-decay} for the second condition). By classical elliptic regularity, we have $\phi\in
C^{1,\alpha}(\mathbb R^3)$ for every $\alpha<1$ and $\phi\in W^{2,p}_{\loc}(\mathbb R^3)$ for all $1\leq p<\infty$. In addition, $\phi\in L^p(\mathbb R^3)$ for all $p>3$, and $\nabla\phi\in L^q(\mathbb R^3)$ for all $q>\frac{3}{2}$ by \cite[Theorem~4.3, Theorem~10.2]{LiLo01}. A standard argument, based on integration by parts, shows that the energy of a
configuration can be expressed in terms of the associated potential as
\begin{align*}
E(\Omega^-) = \int_{\mathbb R^3}|\nabla\phi|^2\ \mathrm{d} x\,.
\end{align*}
Finally, yet another way to represent the energy is in terms of Sobolev norms: indeed,
\begin{align*}
E(\Omega^-) = \|u^+ - u\|^2_{H^{-1}(\mathbb R^3)}\,.
\end{align*}
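For radially symmetric configurations, the equivalence of these representations can be checked numerically. The following sketch is an illustration only: the test configuration $\Omega^+=B_1$, $\Omega^-=B_{r_2}\setminus B_{1.2}$ with matching volumes is an arbitrary admissible choice (not the minimizer), and the Dirichlet form is compared with $\int\phi\,(u^+-u)\ \mathrm{d} x$, which equals the double-integral Coulomb energy after integration by parts.

```python
import numpy as np
from scipy.integrate import quad

# Radially symmetric test configuration (arbitrary admissible choice,
# not the minimizer): Omega+ = B_1, Omega- = B_{r2} \ B_{1.2}, equal volumes.
r2 = (1.2**3 + 1.0) ** (1.0 / 3.0)

def rho(s):
    # net charge density u+ - u along the radius
    if s < 1.0:
        return 1.0
    if 1.2 < s < r2:
        return -1.0
    return 0.0

def F(r):
    # F(r) = int_0^r s^2 rho(s) ds; the radial potential has phi'(r) = -F(r)/r^2,
    # and F vanishes beyond r2 by charge neutrality of this configuration
    val, _ = quad(lambda s: s**2 * rho(s), 0.0, min(r, r2), limit=200)
    return val

def phi(r):
    # radial solution of -Delta phi = rho vanishing at infinity
    tail, _ = quad(lambda s: s * rho(s), min(r, r2), r2, limit=200)
    return F(r) / r + tail

# Dirichlet form: int |grad phi|^2 dx = int_0^{r2} 4*pi*(F(r)/r)^2 dr
E1, _ = quad(lambda r: 4.0 * np.pi * (F(r) / r) ** 2, 1e-8, r2, limit=200)
# after integration by parts: int phi (u+ - u) dx
E2, _ = quad(lambda r: 4.0 * np.pi * r**2 * phi(r) * rho(r), 0.0, r2, limit=200)
```

Both quadratures agree to solver tolerance, illustrating that the expressions for $E(\Omega^-)$ coincide.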
\medskip
We now state the main findings of our analysis. We first consider the unconstrained minimization problem \eqref{min-unc}, in which the total negative charge is not \textit{a priori} prescribed, proving existence and uniqueness of a minimizing configuration; interestingly, it turns out that the volume of the minimizer matches the volume of the positive charge and the system exhibits a charge neutrality phenomenon.
\begin{theorem}[The unconstrained problem: Existence and uniqueness]\label{thm-unconstrained} %
Let $\Omega^+\subset\mathbb R^3$ be a fixed, non-empty, bounded and open set. Then, the minimum problem \eqref{min-unc} admits a unique (up to a set of zero Lebesgue measure) solution
$\Omega^- \subset \mathbb R^3$. Furthermore, the minimizer satisfies the saturation property
\begin{align*}
|\Omega^-|= |\Omega^+|\,.
\end{align*}
\end{theorem}
We also obtain a corresponding result for the case \eqref{min-con} when the total negative charge is prescribed.
\begin{theorem}[The constrained problem: Existence and uniqueness/Nonexistence]\label{thm-constrained} %
Let $\Omega^+\subset\mathbb R^3$ be a fixed, non-empty, bounded and open set. Then:
\begin{enumerate}
\item For every $\lambda \leq |\Omega^+|$, there is a unique (up to a set of zero Lebesgue measure) minimizer $\Omega^-$ of
\eqref{min-con}.
\item For every $\lambda > |\Omega^+|$, there is no global minimizer of \eqref{min-con}.
\end{enumerate}
\end{theorem}
We will first prove Theorem~\ref{thm-constrained}; Theorem~\ref{thm-unconstrained} then follows as an easy consequence.
The main technical difficulty arising when we try to apply the direct method of the Calculus of Variations to
prove the existence of a minimizer is the following: for a minimizing sequence $(\Omega^-_n)_n$, we can pass to
a subsequence such that $\chi_{\Omega^-_n}\rightharpoonup u$ weakly* in $L^\infty(\mathbb R^3)$, but we cannot guarantee that $u$ takes
values in $\{0,1\}$: in other words, the limit object might no longer be a set with a uniform distribution of
charge. This obstacle will be bypassed by considering the relaxed problem where we allow for intermediate densities of
charge, and showing that a minimizer of this auxiliary problem is in fact a minimizer of the original one (see
Section~\ref{sec-relaxed} for the proofs of these results).
We remark that a similar strategy was also used in \cite{GenPel} for a related one-dimensional model.
\medskip
Having established existence of a solution to \eqref{min-unc}, we now discuss further properties of the minimizer. The
following theorem, whose proof is given in Section~\ref{sec-screening} and relies on the maximum principle, deals with
the \textit{screening effect} realized by the optimal configuration: the associated potential vanishes in the uncharged
region. Actually, it turns out that such a property (together with the nonnegativity of the potential) uniquely characterizes the minimizer (see Remark~\ref{rem-screening}).
\begin{theorem}[Screening]\label{thm-screening}
Assume that $\Omega^+\subset\mathbb R^3$ is a bounded and open set with Lipschitz boundary.
Let $\Omega^-$ be a representative of the minimizer of \eqref{min-unc} and let $\phi$ be the corresponding potential, defined in \eqref{def-phi}. Then $\phi \geq 0$ in $\mathbb R^3$ and
\begin{align} \label{eq-screen}
\phi=0 \qquad\text{ almost everywhere in } \mathbb R^3\setminus({\Omega^+\cup\Omega^-})\,.
\end{align}
After possibly changing $\Omega^-$ on a set of Lebesgue measure zero, we have
\begin{align} \label{def-Ome-} %
\Omega^- = \{ \phi > 0 \} \setminus \overline{\Omega^+}\,.
\end{align}
If $\Omega^+$ satisfies an interior ball condition, then we also have $\phi>0$ in $\overline{\Omega^+}$.
\end{theorem}
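For $\Omega^+$ a ball, the screening property can be checked against the explicit radial candidate (cf.\ the spherically symmetric configurations discussed in the Appendix): the concentric shell of equal volume. The following sketch is a numerical illustration for the assumed geometry $\Omega^+=B_1$, $\Omega^-=B_{2^{1/3}}\setminus B_1$.

```python
import numpy as np
from scipy.integrate import quad

# Candidate minimizer for Omega+ = B_1: the concentric shell
# Omega- = B_R \ B_1 with R = 2^(1/3), so that |Omega-| = |Omega+|.
R = 2.0 ** (1.0 / 3.0)

def rho(s):
    # net charge density u+ - u along the radius
    if s < 1.0:
        return 1.0
    if s < R:
        return -1.0
    return 0.0

def phi(r):
    # radial solution of -Delta phi = rho vanishing at infinity:
    # phi(r) = (1/r) int_0^r s^2 rho(s) ds + int_r^infty s rho(s) ds
    a = min(r, R)
    inner, _ = quad(lambda s: s**2 * rho(s), 0.0, a, limit=200)
    outer, _ = quad(lambda s: s * rho(s), a, R, limit=200)
    return inner / r + outer
```

Evaluating $\phi$ outside $B_R$ returns zero up to quadrature error: the positive charge is completely screened, while $\phi$ remains positive in the charged region.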
It is convenient to also introduce a notation for the uncharged region: in view of \eqref{def-Ome-}, we set
\begin{align} \label{def-Ome0} %
\Omega_0 := \mathbb R^3\setminus\overline{\{\phi>0\}}\,,
\end{align}
which by \eqref{def-Ome-} coincides, up to a set of measure zero, with $\mathbb R^3\setminus(\overline{\Omega^+\cup\Omega^-})$. Notice that by \eqref{def-Ome-} and \eqref{def-Ome0} we are selecting precise representatives of the sets $\Omega^-$ and $\Omega_0$, which in general are defined up to a set of Lebesgue measure zero, and that with this choice they are open sets.
\medskip
Based on the screening property and classical maximum principles for subharmonic functions, we establish some further qualitative properties on the shape of the minimizer $\Omega^-$.
\begin{theorem}[Structure of $\Omega^-$]\label{thm-support} %
Suppose that the assumptions of Theorem~\ref{thm-screening} hold and let $\Omega^-$ be the minimizer of problem \eqref{min-unc}, given by \eqref{def-Ome-}. Then $\Omega^-$ is open and bounded, and the following statements hold:
\begin{enumerate}
\item\label{it-2.4-1} ${\rm dist\,} (x,\Omega^+)\ \leq\ 2\,|\Omega^+|^{1/3}\quad\text{ for all }x\in {\Omega^-}$;
\item\label{it-2.4-2} ${\rm diam\,}\Omega^- \ \leq\ (1+2\sqrt{3})\ {\rm diam\,}\Omega^+$;
\item\label{it-2.4-3} for every connected component $V$ of ${\Omega^-}$ we have $\partial V \cap\partial\Omega^+\neq\emptyset$;
\item\label{it-2.4-4} if $\Omega^+$ satisfies an interior ball condition, then ${\rm dist\,}(\Omega_0,\partial\Omega^+)>0$.
\end{enumerate}
\end{theorem}
Notice that, as a consequence of Theorem~\ref{thm-screening} and Theorem~\ref{thm-support}, the potential $\phi$ of the minimizing configuration has compact support.
We complete our analysis of the minimum problem \eqref{min-unc} by discussing some further properties of the minimizer, including the regularity of its boundary. This relies heavily on the observation that, as a consequence of Theorem~\ref{thm-screening}, the potential $\phi$ associated with a minimizer of \eqref{min-unc} is in fact a solution to a classical obstacle problem.
Indeed, as a consequence of the characterization \eqref{def-Ome-}, $\phi$ solves
\begin{align} \label{eq-obstacle1}
\begin{cases}
\Delta\phi = \chi_{\{\phi>0\}} & \text{in }\mathbb R^3\setminus\overline{\Omega^+},\\
\phi\geq0\ .
\end{cases}
\end{align}
It then follows that $\phi$ solves the obstacle problem
\begin{equation*}
\min \biggl\{ \int_{\mathbb R^3\setminus\overline{\Omega^+}} \Bigl( |\nabla\psi|^2 +2\psi \Bigr)\ \mathrm{d} x
\ : \ \psi\in H^1(\mathbb R^3\setminus\overline{\Omega^+}),\ \psi\geq0, \ \psi=\phi \text{ on }\partial\Omega^+ \biggr\}\,,
\end{equation*}
see Proposition~\ref{prp-obstacle}.
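The obstacle-problem formulation also suggests a direct way to compute the minimizer numerically. The sketch below is an illustration under the assumed radial geometry $\Omega^+=B_1$: grid size, sweep count and the boundary datum $\phi(1)=\frac13-\frac{2^{2/3}-1}{2}$ (the value of the screened potential) are ad hoc choices. It solves the radial version $\phi''+\frac2r\phi'=\chi_{\{\phi>0\}}$, $\phi\geq0$, by projected successive over-relaxation.

```python
import numpy as np

# Radial obstacle problem for Omega+ = B_1 (assumed geometry):
#   phi'' + (2/r) phi' = 1 on {phi > 0},  phi >= 0,  phi(1) fixed,  phi -> 0,
# solved by projected SOR on a uniform grid over [1, L].
N, L = 300, 3.0
r = np.linspace(1.0, L, N)
h = r[1] - r[0]
phi = np.zeros(N)
phi[0] = 1.0 / 3.0 - (2.0 ** (2.0 / 3.0) - 1.0) / 2.0  # datum of the screened potential

a = 1.0 / h**2 - 1.0 / (h * r)   # coefficient of phi[i-1] in the stencil
b = 1.0 / h**2 + 1.0 / (h * r)   # coefficient of phi[i+1] in the stencil
omega = 1.95                     # over-relaxation parameter

for _ in range(5000):
    for i in range(1, N - 1):
        # unconstrained Gauss-Seidel value, then projection onto {phi >= 0}
        gs = (a[i] * phi[i - 1] + b[i] * phi[i + 1] - 1.0) / (2.0 / h**2)
        phi[i] = max(0.0, (1.0 - omega) * phi[i] + omega * gs)

# first grid point of the contact set {phi = 0}: the free boundary
free_boundary = r[np.argmax(phi <= 1e-10)]
```

The computed contact set starts at $r\approx 2^{1/3}\approx 1.26$, the outer radius of the screening shell, in agreement with \eqref{def-Ome-}.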
The well-established regularity theory for the so-called \textit{free boundary} of a solution to an obstacle problem also yields more information about the regularity of the boundary of $\Omega_0$ (for a comprehensive account of the available results, see, for instance, the book \cite{PetShaUra}).
\begin{theorem}[Regularity]\label{thm-regularity}
Under the assumptions of Theorem~\ref{thm-screening}, let $\Omega^-$ be the minimizer of problem \eqref{min-unc}, let $\phi$ be the associated potential, and let $\Omega_0$ be defined by \eqref{def-Ome0}. Then $\phi\in C^{1,1}_{\mathrm{loc}}(\mathbb R^3\setminus\overline{\Omega^+})$ and the boundary of $\Omega_0$ has finite $\mathcal{H}^2$-measure locally in $\mathbb R^3\setminus\overline{\Omega^+}$. Moreover, one has the decomposition $\partial\Omega_0 = \Gamma \cup \Sigma$, where $\Gamma$ is relatively open in $\partial\Omega_0$ and real analytic, while $x_0\in\Sigma$ if and only if
\begin{equation*}
\lim_{r\to0^+}\frac {{\rm min\,diam\,} \bigl( \{\phi=0\}\cap B_r(x_0) \bigr)}{r} =0\,,
\end{equation*}
where ${\rm min\,diam\,} (E)$ denotes the infimum of the distances between pairs of parallel planes enclosing the set $E$. The Lebesgue density of $\Omega_0$ is 0 at each point of $\Sigma$.
\end{theorem}
The proof of Theorem~\ref{thm-regularity} is given in Section~\ref{sec-obstacle}, and a more precise characterization of the singular points of $\partial \Omega_0$ is given in Proposition~\ref{prp-singbound}.
Notice that the only possible singularities allowed in a minimizer are of ``cusp-type'', since the set $\Omega_0$ has zero Lebesgue density at such points. An example of a singular point is presented in Remark~\ref{rem-singularity}.
\medskip
In the final section, for a given $u^+ \in L^1(\mathbb R^3;\{0,1\})$ and for $\varepsilon>0$, we consider the energy
\begin{align*}
\mathcal F_\varepsilon (u) \ :=\
\begin{cases}
\| u^+ - u\|_{H^{-1}(\mathbb R^3)}^2 &\text{if } u\in L^1(\mathbb R^3;\{0,\frac{1}{\varepsilon}\}), \ \int_{\Omega^+} u=0, \ \int_{\mathbb R^3} u\leq\lambda,\\
\infty &\text{ else.}
\end{cases}
\end{align*}
Here our main result is the $\Gamma$-convergence of $\mathcal F_\varepsilon$ to an energy defined on a class of positive Radon
measures, see Theorem~\ref{thm-gammaconv}. Furthermore, in Proposition~\ref{prp-surfacecharge} we show that minimizers
of the limit energy are supported on the boundary $\partial \Omega^+$ and thus describe a surface charge distribution.
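For $\Omega^+$ a ball, the limit configuration can be written down explicitly: a uniform surface charge of total mass $|\Omega^+|$ on $\partial\Omega^+$ cancels the potential of the ball everywhere outside. The following small sketch is an illustration for the assumed geometry $\Omega^+=B_1$, using the classical closed-form potentials of a uniform ball and a uniform sphere.

```python
import numpy as np

m = 4.0 * np.pi / 3.0   # total charge of Omega+ = B_1

def phi_ball(r):
    # potential of the uniformly charged unit ball
    return (3.0 - r**2) / 6.0 if r < 1.0 else 1.0 / (3.0 * r)

def phi_sphere(r):
    # potential of a uniform surface charge of total mass m on the unit sphere
    return m / (4.0 * np.pi) if r < 1.0 else m / (4.0 * np.pi * r)
```

Outside $B_1$ the two potentials cancel exactly, while the net potential $\phi_{\mathrm{ball}}-\phi_{\mathrm{sphere}}=\frac{1-r^2}{6}$ stays positive inside, consistent with the screening picture of Theorem~\ref{thm-screening}.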
\begin{remark}
Although we have restricted our analysis to the physically meaningful case of three dimensions with a Newtonian potential, we believe that the methods used in this paper can be extended in a straightforward way to obtain the corresponding results in higher space dimensions. Indeed, our analysis is based on general tools rather than on the specific three-dimensional structure of the problem. Similarly, it should be possible to treat also more general Riesz kernels $\frac{1}{|x-y|^\alpha}$ in the energy.
\end{remark}
\section{Existence and the relaxed problem} \label{sec-relaxed} %
In this section, we give the proofs of Theorems~\ref{thm-unconstrained} and \ref{thm-constrained}. In order to overcome
the difficulties in the proof of the existence of a minimizer pointed out in the discussion above, it is convenient to
relax the problem by allowing for intermediate densities of charge taking values in $[0,1]$, the convex hull of
$\{0,1\}$.
\medskip
In this section, we will always assume that $\Omega^+$ is an open and bounded set with $|\Omega^+|=m$ for some $m>0$. We also recall that $u^+:=\chi_{\Omega^+}$ is the characteristic function of $\Omega^+$. We then consider, for $\lambda>0$, the relaxed minimum problem
\begin{align} \label{min-rel} %
\min \ \biggl\{ \mathcal{E}(u)\ : \ u \in L^1(\mathbb R^3;[0,1]), \int_{\Omega^+} u\ \mathrm{d} x = 0,\ \int_{\mathbb R^3}u\ \mathrm{d} x \leq\lambda
\biggr\}\,,
\end{align}
where
\begin{align*}
\mathcal{E}(u):=\int_{\mathbb R^3}|\nabla\phi|^2\ \mathrm{d} x
\end{align*}
and $\phi$ is the potential associated to $u$, defined by \eqref{def-phi}. The corresponding class of admissible configurations is given by
\begin{align*}
\mathcal{A}_\lambda := \Bigl\{ u \in L^1(\mathbb R^3; [0,1]) \ : \ \int_{\Omega^+} u\ \mathrm{d} x = 0,\
\int_{\mathbb R^3}u\ \mathrm{d} x\leq\lambda \Bigr\}\,.
\end{align*}
We first note that the potential $\phi$ is uniformly bounded and indeed vanishes as $|x| \to \infty$.
This is \textit{a priori} not clear, since $u$ may have unbounded support.
\begin{lemma} \label{lem-decay} %
Assume $u\in\mathcal A_\lambda$. Then the potential $\phi$, defined in \eqref{def-phi}, satisfies
\begin{align}
-\frac{1}{2}\Big(\frac{3\lambda}{4\pi}\Big)^{\frac{2}{3}}\,&\leq\,\phi(x)\,\leq\,\frac{1}{2}\Big(\frac{3m}{4\pi}\Big)^{\frac{2}{3}}\quad\text{ for all }x\in\mathbb R^3, \label{eq:bounds-phi}\\
|\phi(x)| \,&\to\, 0\quad\text{ for }|x| \to \infty. \label{eq:decay-phi}
\end{align}
\end{lemma}
\begin{proof}
For $t>0$ let $r(t)$ denote the radius of a ball with volume $t$, thus $\frac{4\pi}{3}r(t)^3=t$. By classical rearrangement inequalities \cite[Theorem~3.4]{LiLo01} we deduce
\begin{align*}
\phi(x) \,\leq\, \int_{\mathbb R^3} \frac{u^+(y)}{4\pi|x-y|}\mathrm{d} y \,\leq\, \int_{B_{r(m)}}\frac{1}{4\pi|y|}\mathrm{d} y \,=\, \frac{r(m)^2}{2}\,.
\end{align*}
This shows the upper estimate in \eqref{eq:bounds-phi}. The lower bound follows similarly.
Next let $\varepsilon>0$ be given and fix $R_\varepsilon>\frac{1}{\varepsilon}$ such that $\int_{\mathbb R^3\setminus B_{R_\varepsilon}}u <\varepsilon$ and $\Omega^+\subset
B_{R_\varepsilon}$. Again by rearrangement inequalities we can bound
\begin{align*} %
\int_{\mathbb R^3\setminus B_{R_\varepsilon}}\frac{u(y)}{4\pi|x-y|}\ \mathrm{d} y \leq
\max_x\max_{\substack{0\leq w \leq 1\\\int w \leq \varepsilon}}\int_{\mathbb R^3}\frac{w(y)}{4\pi|x-y|}\ \mathrm{d} y \leq
\int_{B_{r(\varepsilon)}}\frac{1}{4\pi|y|}\ \mathrm{d} y = \frac{r(\varepsilon)^2}{2}\,.
\end{align*}
Then for every $x$ with $|x|>2 R_\varepsilon$ one has
\begin{align*}
|\phi(x)| \leq \int_{B_{R_\varepsilon}}\frac{|u^+(y)-u(y)|}{4\pi|x-y|}\ \mathrm{d} y + \int_{\mathbb R^3\setminus
B_{R_\varepsilon}}\frac{u(y)}{4\pi|x-y|}\ \mathrm{d} y \leq \frac{m+\lambda}{4\pi R_\varepsilon} + \frac{r(\varepsilon)^2}{2}\,,
\end{align*}
which shows \eqref{eq:decay-phi}.
\end{proof}
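The upper bound in \eqref{eq:bounds-phi} is sharp: it is attained when $\Omega^+$ is a ball of volume $m$ and $x$ is its center. A quick numerical check (an illustration only; the value $m=2$ is arbitrary) using the explicit radial potential of the ball:

```python
import numpy as np

m = 2.0                                          # total positive charge (arbitrary)
r_m = (3.0 * m / (4.0 * np.pi)) ** (1.0 / 3.0)   # radius with |B_{r_m}| = m

def phi_ball(x):
    # Newtonian potential of u+ = chi_{B_{r_m}} at distance x from the center:
    # phi(x) = (1/x) int_0^{min(x, r_m)} s^2 ds + int_{min(x, r_m)}^{r_m} s ds
    a = min(x, r_m)
    return a**3 / (3.0 * x) + (r_m**2 - a**2) / 2.0

xs = np.linspace(1e-6, 3.0 * r_m, 1000)
sup_phi = max(phi_ball(x) for x in xs)           # attained as x -> 0
```

One finds $\sup\phi = \frac12\bigl(\frac{3m}{4\pi}\bigr)^{2/3}$, matching the constant in \eqref{eq:bounds-phi}.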
Existence and uniqueness of a minimizer for the relaxed problem \eqref{min-rel} follow directly from standard arguments.
\begin{proposition}[Minimizer for the relaxed problem] \label{prp-existence} %
For every $\lambda > 0$, the relaxed minimum problem \eqref{min-rel} admits a unique solution $u_\lambda \in \mathcal{A}_\lambda$.
\end{proposition}
\begin{proof}
The existence of a minimizer follows by the Direct Method of the Calculus of Variations and standard semicontinuity
arguments. Indeed, for a minimizing sequence $u_n\in\mathcal{A}_\lambda$ we have that, up to subsequences, $u_n\rightharpoonup u$
weakly* in $L^\infty(\mathbb R^3)$ for some measurable function $u\in L^\infty(\mathbb R^3)$, which is clearly still an element of
the class $\mathcal{A}_\lambda$. To prove semicontinuity, we express the total energy as
\begin{align*}
\mathcal{E}(u) = \int_{\mathbb R^3}\int_{\mathbb R^3} \frac{(u^+-u)(x)(u^+-u)(y)}{4\pi|x-y|} \,\mathrm{d} x\mathrm{d} y\,.
\end{align*}
For the self-interaction energy of $u$ we have
\begin{align*}
\int_{\mathbb R^3}\int_{\mathbb R^3}\frac{u(x)u(y)}{4\pi|x-y|}\ \mathrm{d} x\mathrm{d} y \leq
\liminf_{n\to\infty}\int_{\mathbb R^3}\int_{\mathbb R^3}\frac{u_n(x)u_n(y)}{4\pi|x-y|}\ \mathrm{d} x\mathrm{d} y
\end{align*}
by classical potential theory (see, for instance, \cite[equation~(1.4.5)]{Lan}). For the mixed term, we have
\begin{align*}
\int_{\mathbb R^3}\int_{\mathbb R^3}\frac{u^+(x)u_n(y)}{4\pi|x-y|}\ \mathrm{d} x\mathrm{d} y &= \int_{\mathbb R^3}\phi^+(y)u_n(y)\ \mathrm{d} y \\
&\to
\int_{\mathbb R^3}\phi^+(y)u(y)\ \mathrm{d} y
= \int_{\mathbb R^3}\int_{\mathbb R^3}\frac{u^+(x)u(y)}{4\pi|x-y|}\ \mathrm{d} x\mathrm{d} y
\end{align*}
where in passing to the limit we used the fact that the potential $\phi^+$ associated to the positive phase $\Omega^+$
is a continuous function vanishing at infinity. This completes the proof of existence.
\medskip
Uniqueness of the minimizer follows by convexity of the problem: let $u_1,u_2\in\mathcal{A}_\lambda$ be two solutions to
the relaxed minimum problem \eqref{min-rel}, and let $\phi_1,\phi_2$ be the associated potentials. Setting
$u_\alpha:=\alpha u_1 + (1-\alpha)u_2$ for $\alpha\in(0,1)$, we have $u_\alpha\in\mathcal{A}_\lambda$ and the associated
potential is given by $\phi_\alpha = \alpha\phi_1 + (1-\alpha)\phi_2$. Hence
\begin{align*}
\mathcal{E}(u_\alpha) = \int_{\mathbb R^3}|\nabla\phi_\alpha|^2\ \mathrm{d} x
= \alpha\int_{\mathbb R^3}|\nabla\phi_1|^2\ \mathrm{d} x +
(1-\alpha)\int_{\mathbb R^3}|\nabla\phi_2|^2\ \mathrm{d} x
- \alpha(1-\alpha)\int_{\mathbb R^3}|\nabla(\phi_1-\phi_2)|^2\ \mathrm{d} x\,,
\end{align*}
so that $\mathcal{E}(u_\alpha) < \min_{\mathcal{A}_\lambda}\mathcal{E}$ unless $\nabla\phi_1=\nabla\phi_2$. Hence $u_1=u_2$ almost everywhere, and the minimizer is unique.
\end{proof}
We now turn our attention to some useful properties of a minimizer of the relaxed problem \eqref{min-rel}, following from first variation arguments.
\begin{lemma}[First variation of the relaxed problem] \label{lem-EL} %
Assume that $u$ is the minimizer of the relaxed problem \eqref{min-rel} and let $\phi$ be the associated potential.
Let $\eta$ be any bounded, Lebesgue integrable function such that $\eta=0$ almost everywhere in $\Omega^+$. Then the following
properties hold:
\begin{enumerate}
\item\label{it:lem3.3-1} If $\int_{\mathbb R^3}\eta=0$ and there exists $\delta>0$ such that ${\rm supp\,}\eta\subset\{\delta<u<1-\delta\}$, then
\begin{align*}
\int_{\mathbb R^3}\phi\eta\ \mathrm{d} x =0\,.
\end{align*}
\item\label{it:lem3.3-2} If $\int_{\mathbb R^3}\eta\leq0$ and there exists $\delta>0$ such that $\eta\geq0$ on $\{u<\delta\}$ and $\eta\leq0$ on
$\{u>1-\delta\}$, then
\begin{align*}
\int_{\mathbb R^3}\phi\eta\ \mathrm{d} x \leq 0\,.
\end{align*}
\item\label{it:lem3.3-3} If $\int_{\mathbb R^3} u < \lambda$, $\eta\geq 0$, and if there exists $\delta>0$ with
${\rm supp\,}\eta\subset \{u<1-\delta\}$ then
\begin{align*}
\int_{\mathbb R^3}\phi\eta\ \mathrm{d} x \leq 0\,.
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}
We first prove \ref{it:lem3.3-1}. The function $u_\varepsilon:=u\pm\varepsilon\eta$, for $\varepsilon>0$ sufficiently small, is admissible in the relaxed
problem \eqref{min-rel}. Let $\psi$ be such that $\Delta\psi=\eta$, so that $\phi_\varepsilon=\phi\pm\varepsilon\psi$ satisfies
$-\Delta\phi_\varepsilon=u^+ - u_\varepsilon.$ Then by minimality of $u$ we have
\begin{align*}
\int_{\mathbb R^3}|\nabla\phi|^2\ \mathrm{d} x \leq \int_{\mathbb R^3}|\nabla\phi_\varepsilon|^2\ \mathrm{d} x \,,
\end{align*}
from which, by letting $\varepsilon\to0$, we immediately deduce
\begin{align*}
0 = \int_{\mathbb R^3}\nabla\phi\cdot\nabla\psi\ \mathrm{d} x = -\int_{\mathbb R^3} \phi\Delta\psi\ \mathrm{d} x = -\int_{\mathbb R^3}\phi\eta\ \mathrm{d} x\,.
\end{align*}
Let now $u,\eta$ satisfy the assumptions in \ref{it:lem3.3-2} or \ref{it:lem3.3-3}. Then the function $u_\varepsilon:=u+\varepsilon\eta$, for $\varepsilon>0$
sufficiently small, is admissible in the relaxed problem \eqref{min-rel}, and arguing as before we obtain
\begin{align*}
0 \leq \int_{\mathbb R^3}\nabla\phi\cdot\nabla\psi\ \mathrm{d} x = -\int_{\mathbb R^3} \phi\Delta\psi\ \mathrm{d} x = -\int_{\mathbb R^3}\phi\eta\ \mathrm{d}
x\,,
\end{align*}
which completes the proof.
\end{proof}
As a consequence of the first-order conditions proved in the previous lemma, it follows that the potential associated with a minimizer is everywhere nonnegative.
\begin{lemma}[Nonnegativity of $\phi$] \label{lem-phipos} %
Assume that $u$ is the minimizer of the relaxed problem \eqref{min-rel} and let $\phi$ be the associated
potential. Then $\phi \geq 0$ in $\mathbb R^3$.
\end{lemma}
\begin{proof}
For $\delta > 0$, let $x \in E_\delta := \{ u > \delta \}$ be a point at which $E_\delta$ has positive Lebesgue density. By an application of Lemma~\ref{lem-EL}\ref{it:lem3.3-2} with $\eta:=-\chi_{E_\delta\cap B_r(x)}$, we then get for every $r>0$
\begin{align*}
\int_{E_\delta \cap B_r(x)}\phi(y)\ \mathrm{d} y \geq 0\,.
\end{align*}
Since $\phi$ is continuous, by letting $r\downarrow0$ it follows that $\phi(x)\geq 0$ at every such point $x$. By \cite[Corollary~2.14]{Mat95}, we hence have $\phi \geq 0$ a.e. in
$E_\delta$. Since $\delta > 0$ is arbitrary, it follows that $\phi \geq 0$ a.e. in $E_0 := \{ u > 0 \}$. By changing
$u$ on a set of Lebesgue measure zero, we may hence assume that $\phi \geq 0$ in $E_0$.
\medskip
By the above calculation, the open set $U:=\{\phi<0\}$ is contained in $\{ u = 0 \}$, and hence $-\Delta\phi\geq0$
in $U$. Since $\phi$ vanishes at the boundary of $U$ and at infinity, by the minimum principle we conclude that
$\phi$ must be nonnegative in $U$, which is a contradiction unless $U=\emptyset$. This shows that $\phi\geq0$ in
$\mathbb R^3$.
\end{proof}
The following simple lemma is used in the proofs of Proposition~\ref{prp-saturation} and Theorem~\ref{thm-screening}.
\begin{lemma} \label{lem-potential} %
Let $w\in L^1(\mathbb R^3)\cap L^\infty(\mathbb R^3)$ and let $\phi$ be the associated potential, that is
\begin{align*}
\phi(x) := \int_{\mathbb R^3} \frac{w(y)}{4\pi|x-y|}\ \mathrm{d} y\,.
\end{align*}
Then
\begin{align}\label{eq-pot1}
\int_{\partial B_R}\phi\ \mathrm{d}\mathcal{H}^2 = \int_R^{\infty} \frac{R^2}{r^2} \biggl( \int_{B_r}w\ \mathrm{d} x \biggr)\ \mathrm{d} r\,.
\end{align}
In particular, if ${\rm supp\,} w\subset B_R$ for some $R>0$ and $\int_{\mathbb R^3}w=\lambda$, then
\begin{align} \label{eq-pot2} %
\int_{\partial B_R}\phi \ \mathrm{d}\mathcal{H}^2 = \lambda R\,.
\end{align}
\end{lemma}
\begin{proof}
Since $-\Delta\phi=w$, we have
\begin{align} \label{prev-eq} %
\frac{\mathrm{d}}{\mathrm{d} R}\average\int_{\partial B_R} \phi\ \mathrm{d}\mathcal{H}^2 = \average\int_{\partial B_R}\frac{\partial\phi}{\partial\nu}\ \mathrm{d}\mathcal{H}^2 = \frac{1}{4\pi R^2}
\int_{B_R}\Delta\phi\ \mathrm{d} x = - \frac{1}{4\pi R^2}\int_{B_R}w\ \mathrm{d} x\,.
\end{align}
Integrating \eqref{prev-eq} between $R$ and $\infty$ and recalling that $\lim_{R\to\infty}\average\int_{\partial B_R}\phi=0$ by Lemma~\ref{lem-decay}, we obtain the conclusion.
\end{proof}
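For the reader's convenience we note how \eqref{eq-pot2} follows from \eqref{eq-pot1}: if ${\rm supp\,} w\subset B_R$ and $\int_{\mathbb R^3}w=\lambda$, then $\int_{B_r}w\ \mathrm{d} x=\lambda$ for every $r\geq R$, and hence
\begin{align*}
\int_{\partial B_R}\phi\ \mathrm{d}\mathcal{H}^2 = \lambda R^2\int_R^{\infty} \frac{\mathrm{d} r}{r^2} = \lambda R\,.
\end{align*}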
We next use the first variation formulas to show that minimizers of the relaxed problem \eqref{min-rel} minimize the absolute value of the total net charge within the set of admissible configurations.
\begin{proposition}[Saturation of charges] \label{prp-saturation}
For every $\lambda>0$, the solution $u_\lambda$ of the relaxed minimum problem \eqref{min-rel} with $|\Omega^+| = m$ satisfies
\begin{align} \label{sat-1} %
\int_{\mathbb R^3} u_\lambda \ \mathrm{d} x \ = \ \min \{ \lambda, m \}\,.
\end{align}
Furthermore, for all $\lambda \geq m$, we have $u_\lambda = u_m$.
\end{proposition}
\begin{proof}
Denote by $\phi$ the potential of $u^+-u_\lambda$ as in \eqref{def-phi} and choose $R_0>0$ such that $\Omega^+\subset B_{R_0}$. Arguing by contradiction, we first assume
\begin{align} \label{ass-fals} %
\mu \ :=\ \int_{\mathbb R^3} u_\lambda \ \mathrm{d} x \ <\ \min \{ \lambda, m \}\,.
\end{align}
Our argument is based on the fact that screening is not possible under the assumption \eqref{ass-fals}. Indeed, we will even show
\begin{align} \label{no-screening} %
\big| B_R^c\cap \{\phi>0\}\cap \{u_\lambda<1\} \big| \ =\ \infty \quad\text{ for all }R>0.
\end{align}
We first note that \eqref{ass-fals} yields a contradiction if \eqref{no-screening} holds. Indeed, by \eqref{no-screening} we can choose $\delta>0$, $R>R_0$ such that
$\big| (B_{R+1}\setminus B_R) \cap \{\phi>0\} \cap \{u_\lambda<1-\delta\} \big| >0$.
Letting $\eta:= \chi_{(B_{R+1}\setminus B_R)\cap \{u_\lambda<1-\delta\}}$, by Lemma~\ref{lem-EL}\ref{it:lem3.3-3} we deduce that
\begin{align*}
0\ \geq\ \int_{\mathbb R^3} \phi\eta \,\mathrm{d} x \ =\ \int_{(B_{R+1}\setminus B_R)\cap \{u_\lambda<1-\delta\}} \phi\,\mathrm{d} x \ >\
0\,,
\end{align*}
which is impossible. This shows that $\int_{\mathbb R^3} u_\lambda \geq \min \{ \lambda, m \}$ and proves \eqref{sat-1} for $\lambda \leq m$, since $u_\lambda\in \mathcal A_\lambda$.
We next give the argument for \eqref{no-screening}. By \eqref{eq-pot1}, we have for all $R>R_0$
\begin{align*}
\int_{\partial B_R}\phi\ \mathrm{d}{\mathcal H}^2
= \int_R^\infty \frac{R^2}{r^2} \Big(\int_{B_r} (u^+-u_\lambda) \ \mathrm{d} x \Big)\ \mathrm{d} r
\geq\ \int_R^\infty \frac{(m-\mu)R^2}{r^2}\ \mathrm{d} r = (m-\mu)R\,.
\end{align*}
By integrating this inequality from $R$ to $\infty$, we get $\int_{B_R^c} \phi \ =\ \infty$. Since $\phi$ is uniformly bounded, this implies $|B_R^c\cap\{\phi>0\}|=\infty$. On the other hand, we have $|\{u_\lambda=1\}|\ \leq\ \int_{\mathbb R^3}u_\lambda \ =\ \mu<\infty$, which yields \eqref{no-screening}.
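In fact, the divergence of the integral of $\phi$ over $B_R^c$ can be seen explicitly: by the coarea formula and the previous lower bound,
\begin{align*}
\int_{B_R^c}\phi\ \mathrm{d} x = \int_R^\infty \biggl( \int_{\partial B_r}\phi\ \mathrm{d}{\mathcal H}^2 \biggr)\ \mathrm{d} r \geq \int_R^\infty (m-\mu)\,r\ \mathrm{d} r = \infty\,,
\end{align*}
since $m-\mu>0$ by \eqref{ass-fals}.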
\medskip
It remains to consider the case $\lambda > m$ and to show that the assumption
\begin{align} \label{ass-fals2}
\mu \ :=\ \int_{\mathbb R^3} u_\lambda \ \mathrm{d} x > m
\end{align}
yields a contradiction.
By Lemma~\ref{lem-phipos}, we have $\phi\geq 0$ in $\mathbb R^3$. On the other hand, by the proof of Lemma~\ref{lem-potential} we get
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d} R} \average\int_{\partial B_R} \phi \ \mathrm{d}\mathcal{H}^2 %
&\lupref{prev-eq}= -\frac{1}{4\pi R^2} \int_{B_R} (u^+-u_\lambda)\ \mathrm{d} x = \frac{1}{4\pi R^2} \biggl(\mu - m -\int_{\mathbb R^3\setminus B_R}u_\lambda\ \mathrm{d} x \biggr)\,.
\end{align*}
Since by \eqref{ass-fals2} the right-hand side is positive for $R$ sufficiently large, it follows that the mean value $\average\int_{\partial B_R}\phi$ of $\phi$ on spheres is a strictly increasing function of the radius, for large radii, vanishing in the limit as $R\to\infty$; it must therefore be negative for $R$ large, which contradicts the fact that $\phi\geq0$. We conclude that $\int u_\lambda = m$ for $\lambda\geq m$ and, in turn, $u_\lambda=u_m$ by uniqueness of the minimizer.
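For completeness, the contradiction can also be obtained directly: integrating \eqref{prev-eq} over $(R,\infty)$ and using $\lim_{R\to\infty}\average\int_{\partial B_R}\phi=0$, we find
\begin{align*}
\average\int_{\partial B_R}\phi\ \mathrm{d}\mathcal{H}^2 = -\int_R^\infty \frac{1}{4\pi r^2} \biggl( \mu-m-\int_{\mathbb R^3\setminus B_r}u_\lambda\ \mathrm{d} x \biggr)\ \mathrm{d} r < 0
\end{align*}
for every $R$ so large that $\int_{\mathbb R^3\setminus B_r}u_\lambda<\mu-m$ for all $r\geq R$, which is again incompatible with $\phi\geq0$.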
\end{proof}
The previous proposition allows us to draw some conclusions on the dependence of the minimal energy in the relaxed problem \eqref{min-rel} on the parameter $\lambda$.
\begin{corollary}[Minimal energy as function of $\lambda$] \label{cor-e} %
For $\lambda>0$ let $e(\lambda):=\mathcal{E}(u_\lambda)=\min_{\mathcal{A}_\lambda}\mathcal{E}$. Then $e(\lambda)$ is continuous, strictly decreasing for $\lambda\in[0,m]$ and constant for $\lambda\geq m$.
\end{corollary}
\begin{proof}
Since $\mathcal A_\lambda\subset\mathcal A_{\lambda'}$ for $\lambda\leq \lambda'$, the minimal energy $e(\lambda)$ is nonincreasing. The strict monotonicity of $e(\lambda)$ for $\lambda\leq m$ follows from the fact that if $e(\lambda)=e(\lambda')$ for some $0<\lambda<\lambda'\leq m$, then the uniqueness of minimizers would imply $u_\lambda=u_{\lambda'}$, which is not permitted by \eqref{sat-1}. The fact that $e(\lambda)$ is constant for $\lambda\geq m$ also follows from Proposition~\ref{prp-saturation}.
To prove that $e$ is continuous, we observe that for $\lambda'>\lambda$ we can use the function
$\frac{\lambda}{\lambda'}u_{\lambda'} \in \mathcal{A}_\lambda$ as a competitor in the relaxed minimum problem \eqref{min-rel}, which yields $e(\lambda)\leq\mathcal{E}(\frac{\lambda}{\lambda'}u_{\lambda'})$. Hence, by monotonicity
\begin{align*}
e(\lambda)\leq\liminf_{\lambda'\searrow\lambda} \mathcal{E}\Bigl(\frac{\lambda}{\lambda'}u_{\lambda'}\Bigr) =
\liminf_{\lambda'\searrow\lambda} e(\lambda') \leq e(\lambda)\,,
\end{align*}
which implies continuity from the right. Similarly, by considering $\lambda''<\lambda$ and comparing with
$\frac{\lambda''}{\lambda}u_\lambda \in \mathcal{A}_{\lambda''}$ we obtain continuity from the left. Together this shows that $e$ is continuous.
\end{proof}
We next address the proofs of Theorem~\ref{thm-constrained} and Theorem~\ref{thm-unconstrained} on existence and uniqueness for the constrained and unconstrained minimum problems.
\begin{proof}[Proof of Theorem~\ref{thm-constrained}]
We divide the proof into two steps.
\smallskip
{\it Step 1: the case $\lambda \leq m$}. By Proposition~\ref{prp-existence} there exists a unique minimizer $u$ of $\mathcal E$ in the class of densities $\mathcal{A}_\lambda$. By Proposition~\ref{prp-saturation}, we have $\int_{\mathbb R^3}u\ \mathrm{d} x\ =\ \lambda$. It therefore remains to show that the set $\{0<u<1\}$ has zero Lebesgue measure. Arguing by contradiction, we assume that $|\{0<u<1\}|>0$. Then there exists $\delta>0$ such that the set
$\mathcal{U}_\delta:=\{\delta<u<1-\delta\}$ has positive measure. We set $\eta(x):=(\phi(x)-c)\chi_{\mathcal{U}_\delta}(x)$,
where
\begin{align} \label{avg-con} %
c:=\average\int_{\mathcal{U}_\delta} \phi(x)\ \mathrm{d} x\,.
\end{align}
The function $\eta$ satisfies the assumptions of Lemma~\ref{lem-EL}\ref{it:lem3.3-1}, and we deduce
\begin{align*}
0=\int_{\mathbb R^3}\phi\eta\ \mathrm{d} x = \int_{\mathcal{U}_\delta}\phi(\phi-c)\ \mathrm{d} x \lupref{avg-con}=
\int_{\mathcal{U}_\delta}(\phi-c)^2\ \mathrm{d} x\,,
\end{align*}
which implies $\phi=c$ almost everywhere in $\mathcal{U}_\delta$.
By \eqref{def-phi} and standard elliptic theory, we have $\phi \in W^{2,p}_{\mathrm{loc}}(\mathbb R^3)$ for all $p < \infty$. From Stampacchia's Lemma \cite[Proposition~3.23]{GiMa12}, one can deduce that $\nabla\phi=0$ almost everywhere in $\mathcal{U}_\delta$ and then that $\Delta \phi=0$ almost everywhere in $\mathcal{U}_\delta$. On the other hand $\Delta\phi = u > \delta$ in $\mathcal{U}_\delta$, which gives a contradiction.
\medskip
{\it Step 2: the case $\lambda > m$}. Suppose that there exists a minimizer $\Omega^-$ of \eqref{min-con}, and let $u:=\chi_{\Omega^-}$. By Proposition~\ref{prp-saturation} the unique minimizer $u_\lambda$ of the corresponding relaxed problem \eqref{min-rel} is given by $u_\lambda=u_m$, in particular we have
$$
{\mathcal E}(u_m) < {\mathcal E}(u)\,.
$$
Moreover, by the previous step, $u_m$ is in fact the characteristic function of a set.
For $R > 0$ let $u^R := u_m + (1-u_m)\chi_{B_R\setminus B_{\t R}}$, where $\t R= \t R(R) > 0$ is chosen such that $\int_{\mathbb R^3} u^R=\lambda$, which is equivalent to the condition
\begin{align*}
\lambda \,=\, m + \frac{4\pi}{3}(R^3-\t R^3) - \int_{B_R\setminus B_{\t R}} u_m\ \mathrm{d} x\,.
\end{align*}
We deduce first that $\t R(R)\to\infty$ as $R\to\infty$ and then that $\frac{4\pi}{3}(R^3-\t R^3)=\lambda-m +o(1)$ as $R\to\infty$. Since $\Omega^+$ is bounded, and since $u_m$ takes values in $\{0,1\}$ almost everywhere, we deduce that for $R$ sufficiently large $u^R$ is the characteristic function of an admissible set for the minimum problem \eqref{min-con}. We claim that ${\mathcal E}(u^R) \to {\mathcal E}(u_m) < {\mathcal E}(u)$. For $R$ large enough, this yields a contradiction to the statement that $u$ is a minimizer of \eqref{min-con}. To prove the convergence of ${\mathcal E}(u^R)$, observe that an explicit calculation for the self-interaction energy of $\chi_{B_R\setminus B_{\t R}}$ yields
\begin{align*}
\int_{B_R\setminus B_{\t R}}\int_{B_R\setminus B_{\t R}}\frac{1}{4\pi|x-y|} \mathrm{d} y\mathrm{d} x \,&=\,
c(3\t R^5+2R^5-5\t R^3R^2) \\
&=\, 15cR^3\delta^2 + c_1 R^2\delta^3 + c_2R\delta^4 + c_3\delta^5
\end{align*}
(see \eqref{eq:ring} in the Appendix), with $\delta=R-\t R=\frac{\lambda-m}{4\pi R^2} +\mathcal{O}(R^{-3})$.
This implies that the self-interaction energy of the annulus vanishes as $R\to\infty$. By a similar asymptotic analysis one shows that the interaction energy of the annulus with the charge distributions $u^+,u_m$ also tends to zero as $R\to\infty$.
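More precisely, inserting $\delta=\frac{\lambda-m}{4\pi R^2}+\mathcal{O}(R^{-3})$ into the leading term gives
\begin{align*}
15cR^3\delta^2 = \frac{15c(\lambda-m)^2}{16\pi^2 R} + \mathcal{O}(R^{-2}) \to 0 \qquad\text{as }R\to\infty\,,
\end{align*}
while the remaining terms $R^2\delta^3$, $R\delta^4$, $\delta^5$ are of order $R^{-4}$, $R^{-7}$, $R^{-10}$, respectively.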
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm-unconstrained}] %
By Theorem~\ref{thm-constrained}, there exists a unique minimizer $\Omega^-$ of the constrained minimum problem \eqref{min-con} for $\lambda=m$. We claim that $\Omega^-$ is also the unique solution of the unconstrained problem \eqref{min-unc}. Indeed, the conclusion follows immediately from the fact that $\chi_{\Omega^-}$ is the unique solution of the relaxed problem \eqref{min-rel} for $\lambda= m$, combined with Proposition~\ref{prp-saturation} and Corollary~\ref{cor-e}.
\end{proof}
\section{Proof of the screening property} \label{sec-screening} %
We now turn to the proof of Theorem~\ref{thm-screening} related to the screening of the positive charge.
\begin{proof}[Proof of Theorem~\ref{thm-screening}]
The nonnegativity of $\varphi$ is proved in Lemma~\ref{lem-phipos}.
We introduce the closed sets
\begin{align*}
A^- \,&=\, \{x\in\mathbb R^3\,:\, |B_r(x)\cap\Omega^-|>0\,\text{ for all }r>0\}\,,\\
A_0 \,&=\, \{x\in\mathbb R^3\,:\, |B_r(x)\setminus(\Omega^+\cup\Omega^-)|>0\,\text{ for all }r>0\}\,.
\end{align*}
Notice that $A^-$ and $A_0$ are independent of the precise representative $\Omega^-$ of the minimizer: indeed, they coincide with the closures of the sets of points of positive Lebesgue density of $\Omega^-$ and of $\mathbb R^3\setminus(\Omega^+\cup\Omega^-)$, respectively. Furthermore, $|\Omega^-\setminus A^-|=|(\mathbb R^3\setminus(\Omega^+\cup\Omega^-))\setminus A_0|=0$. We have selected the sets $A^-$ and $A_0$ as the largest possible sets for which we can apply the comparison argument in Step~1 below. We now divide the proof of the theorem into four steps.
\smallskip
{\it Step 1.} In this step of the proof, we show that
\begin{align} \label{screen-step1} %
\sup_{A_0}\phi \leq \inf_{A^-}\phi\,.
\end{align}
Indeed, for \eqref{screen-step1} it is sufficient to prove that $\phi(x_0)\leq\phi(x_1)$ for every pair of points $x_0\in A_0$ and $x_1\in A^-$. Define a variation field $\eta\in L^1(\mathbb R^3)\cap L^\infty(\mathbb R^3)$ by
\begin{align*}
\eta(x):=
\begin{cases}
\frac{1}{|B_r(x_0)\setminus(\Omega^+\cup\Omega^-)|} &\text{if }x\in B_r(x_0)\setminus(\Omega^+\cup\Omega^-), \\
-\frac{1}{|B_r(x_1)\cap\Omega^-|} &\text{if }x\in B_r(x_1)\cap\Omega^-, \\
0 & \text{otherwise}.
\end{cases}
\end{align*}
Then $\int_{\mathbb R^3} \eta = 0$, and, by an application of Lemma~\ref{lem-EL}\ref{it:lem3.3-2}, we hence get
\begin{align*}
\average\int_{B_r(x_0)\setminus(\Omega^+\cup\Omega^-)}\phi(y)\ \mathrm{d} y \leq \average\int_{B_r(x_1)\cap\Omega^-} \phi(x)\ \mathrm{d} x\,.
\end{align*}
Since $\phi$ is continuous we deduce by letting $r\downarrow 0$ that $\phi(x_0)\leq \phi(x_1)$, which completes the proof of \eqref{screen-step1}.
\medskip
{\it Step 2.} We next show that
\begin{align} \label{screen-step2} %
\inf_{A^-}\phi =0\,.
\end{align}
Since $\phi\geq 0$, this implies by \eqref{screen-step1} that $\phi=0$ in $A_0$, which proves \eqref{eq-screen}.
To prove \eqref{screen-step2} we first consider the case in which $A^-$ is unbounded. Then there is a sequence $(x_n)_n$ in $A^-$ with $|x_n|\to\infty$. In view of Lemma~\ref{lem-decay}, this implies $\phi(x_n)\to 0$ and hence \eqref{screen-step2}.
It remains to consider the case when $A^-$ is bounded: let $R_0>0$ be such that $\Omega^+\cup A^-\subset B_{R_0}$. Then \eqref{eq-pot2} yields $\int_{\partial B_R}\phi=0$ for every $R\geq R_0$, and since $\phi$ is nonnegative we obtain that $\phi\equiv0$ in $\mathbb R^3\setminus B_{R_0}$. Next we define the ``interior'' (in a measure-theoretic sense) of $A_0$, that is the open set
\begin{align*}
\tilde A_0 \,:=\, \{x\in\mathbb R^3\,:\, |B_r(x)\cap (\Omega^-\cup\Omega^+)|=0\,\text{ for some }r>0\}\subset A_0\,.
\end{align*}
As $\phi$ is harmonic in $\tilde A_0$ and $\phi\equiv0$ in $\mathbb R^3\setminus B_{R_0}$, we deduce that $\phi$ vanishes in the closure of the connected component $D$ of $\tilde A_0$ that contains $\mathbb R^3\setminus B_{R_0}$. If $\partial D\cap A^-\neq\emptyset$, we immediately obtain \eqref{screen-step2}.
It therefore remains to consider the case $\partial D\cap A^-=\emptyset$, and we now show that this case in fact never occurs. Hence we argue by contradiction, assuming that $\partial D\cap A^-=\emptyset$.
We first deduce that $\partial D \subset\partial\Omega^+$. In fact, by the assumption $\partial D\cap A^-=\emptyset$, any $x\in\partial D\setminus \partial\Omega^+$ has positive distance to the sets $\overline{\Omega^+}$ and $A^-$, which shows that $x\in \tilde A_0$ and implies that $x\not\in\partial D$, as $\tilde A_0$ is open. Therefore $\partial D\setminus \partial\Omega^+=\emptyset$.
We claim next that there exist a point $x_0\in\partial\Omega^+$ with $\varphi(x_0)=0$ and a ball $B_R\subset\Omega^+$ with $\partial B_R\cap\partial\Omega^+=\{x_0\}$ (we can assume without loss of generality that the ball is centered at the origin). In fact (note that at this stage we do not assume an inner sphere condition), choose any $x_*\in \partial D$. Without loss of generality we can assume that
\begin{align*}
\Omega^+\cap B_r(x_*) = \{(y,t)\in \mathbb R^{2}\times\mathbb R\,:\, t>\psi(y)\}\cap B_r(x_*)
\end{align*}
for some Lipschitz function $\psi:\mathbb R^{2}\to\mathbb R$. Since $x_*\not\in A^-$ by the contradiction assumption, after possibly decreasing $r$ we obtain that $\varphi=0$ on $\graph(\psi)\cap B_r(x_*)$. Choose now any $x_1\in \Omega^+\cap B_r(x_*)$ with $R:={\rm dist\,}(x_1,\graph(\psi))< {\rm dist\,}(x_1,\partial B_r(x_*))$. Then there exists $x_0\in \graph(\psi)\cap \partial B_R(x_1)$ and we deduce that $x_0$ and the ball $B_R(x_1)$ enjoy the desired properties (see Figure~\ref{fig-screen}).
\begin{figure}
\begin{tikzpicture}[scale=0.5]
\draw (0,0) circle [radius=4];
\draw [fill=blue!15!white] (-4,0) to [out=0, in=240] (-1,2) to (0,0) to [out=45, in=180] (1.5,1.5) to [out=0, in=180] (3,-0.5) to [out=0, in=200] (4,0) to [out=90, in=0] (0,4) to [out=180, in=90] (-4,0);
\draw [ultra thin, fill=white] (1.5,2.5) circle [radius=1];
\draw [fill=blue!70!white] (1.5,2.5) circle [radius=1];
\draw [fill] (0,0) circle [radius=0.07];
\node [below right] at (0,0) {$x_*$};
\draw [fill] (1.5,2.5) circle [radius=0.07];
\draw [fill] (1.5,1.5) circle [radius=0.07];
\node [below] at (1.5,1.3) {$x_0$};
\node [below right] at (3.5,-2) {$B_r(x_*)$};
\node [above right] at (2,3) {$B_R(x_1)$};
\node at (-2,2) {$\Omega^+$};
\node at (-2,0) {$\psi$};
\end{tikzpicture}
\caption{The construction of an interior ball touching $\partial\Omega^+$ at a point $x_0$, used in the second step of the proof of Theorem~\ref{thm-screening}.}
\label{fig-screen}
\end{figure}
Since $-\Delta\varphi=1$ in $B_R$ and since $\varphi$ is not constant in $B_R$ by Stampacchia's Lemma \cite[Proposition~3.23]{GiMa12}, the minimum principle shows that $\varphi>0$ in $B_R$. Then the Hopf boundary point Lemma \cite[Lemma~3.4]{GiTr01} further implies that $\partial_\nu\varphi(x_0)<0$ for $\nu=\frac{x_0}{|x_0|}$. Since $\varphi$ is of class $C^1$, we conclude that $\varphi(x_0+t\nu)<0$ for $t>0$ sufficiently small, which is a contradiction and completes the proof of claim \eqref{screen-step2} and, in turn, of \eqref{eq-screen}.
\medskip
{\it Step 3.} We now prove \eqref{def-Ome-} by showing that
\begin{align} \label{screen-step3} %
\big| \bigl(\Omega^- \bigtriangleup \{\phi>0\} \bigr) \setminus \Omega^+ \big|=0\,.
\end{align}
By \eqref{eq-screen}, we have $\phi(x) = 0$ for almost every $x \not\in {\Omega^+ \cup \Omega^-}$ and hence $\{ \phi > 0 \} \subset \Omega^+ \cup \Omega^-$ up to a Lebesgue nullset.
It remains to show that the set $U = \{\phi=0\} \cap \Omega^-$ satisfies $|U|=0$. Indeed, recalling that $\phi \in W^{2,p}_{\mathrm{loc}}(\mathbb R^3)$ for all $p < \infty$, using Stampacchia's Lemma \cite[Proposition~3.23]{GiMa12} as in the proof of Theorem~\ref{thm-constrained} we obtain $\nabla\phi=0$ almost everywhere in $U$ and then that $\Delta \phi=0$ almost everywhere in $U$. Since on the other hand $\Delta\phi = 1$ in $U$, this implies $|U| = 0$. The above arguments together yield \eqref{screen-step3}.
\medskip
{\it Step 4.} We finally show that, under the assumption that $\Omega^+$ satisfies the interior ball condition,
\begin{align} \label{screen-step4} %
\min_{\overline{\Omega}^+}\phi>0\,.
\end{align}
Indeed, if \eqref{screen-step4} does not hold, then we have $\min_{\overline{\Omega}^+}\phi = 0$. By the minimum principle and since $-\Delta\phi=1$ in $\Omega^+$, there is $x_0\in\partial\Omega^+$ such that $\phi(x_0)=0$. By the interior ball condition there exists $B_R(x_1)\subset\Omega^+$ with $\partial B_R(x_1)\cap\partial\Omega^+=\{x_0\}$. But then we can argue as in Step 2 above and obtain a contradiction to the fact that $\phi\geq0$. This proves \eqref{screen-step4}.
\end{proof}
\begin{remark}
A posteriori we can identify the set $\tilde A_0$ used in the previous proof with the set $\Omega_0$ defined in \eqref{def-Ome0}. To show this we fix the representative \eqref{def-Ome-} for $\Omega^-$. Since $\Omega^+$ has Lipschitz boundary, we have $|\partial\Omega^+|=0$, and hence $\Omega^-\cup\Omega^+=\{\phi>0\}\cup\Omega^+$ up to a set of measure zero. By Stampacchia's Lemma \cite[Proposition~3.23]{GiMa12} we deduce as above that $|\Omega^+\cap\{\phi=0\}|=0$ and thus $\Omega^-\cup\Omega^+=\{\phi>0\}$ up to a set of measure zero. This proves that
\begin{align*}
\tilde A_0 \,=\, \{x\in\mathbb R^3\,:\, |B_r(x)\cap\{\phi>0\}|=0\text{ for some }r>0\}
\end{align*}
and since $\{\phi>0\}$ is open
\begin{align*}
\tilde A_0 \,=\, \{x\in\mathbb R^3\,:\, B_r(x)\cap\{\phi>0\}=\emptyset \text{ for some }r>0\}\,=\,\Omega_0.
\end{align*}
\end{remark}
\begin{remark} \label{rem-screening}
The screening property uniquely characterizes the minimizer, in the following sense: there exists a unique set $\Omega^-$ (up to a set of Lebesgue measure zero) such that the corresponding potential is nonnegative and vanishes outside $\Omega^+\cup\Omega^-$. Indeed, assume by contradiction that there exist two such sets $\Omega^-_1$, $\Omega^-_2$, both disjoint from $\Omega^+$. We set $u_1=\chi_{\Omega^-_1}$, $u_2=\chi_{\Omega^-_2}$ and let $\phi_1$ and $\phi_2$ be the corresponding potentials characterized by
\begin{align*}
\begin{cases}
-\Delta\phi_i = \chi_{\Omega^+} - \chi_{\Omega^-_i},\\
\lim_{|x|\to\infty}\phi_i(x)=0.
\end{cases}
\end{align*}
Then one has $\phi_i\geq0$ and $\phi_i=0$ almost everywhere in $\mathbb R^3\setminus(\Omega^+\cup\Omega^-_i)$, $i=1,2$. Hence, $-\Delta (\phi_1-\phi_2)= -(u_1-u_2)$, and testing this equation with $\phi_1-\phi_2$ gives
\begin{align*}
\int_{\mathbb R^3} |\nabla(\phi_1-\phi_2)|^2\ \mathrm{d} x \,=\, -\int_{\mathbb R^3} (u_1-u_2)(\phi_1-\phi_2)\ \mathrm{d} x \,\leq\, 0\,,
\end{align*}
where in the last step we have used the screening property. Therefore $\phi_1-\phi_2$ is constant, and since both potentials vanish at infinity we deduce that $\phi_1=\phi_2$, which implies $|\Omega^-_1\bigtriangleup\Omega^-_2|=0$.
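To verify the last inequality, note that by the screening property $\int_{\mathbb R^3}u_i\phi_j\ \mathrm{d} x=\int_{\Omega^-_1\cap\Omega^-_2}\phi_j\ \mathrm{d} x$ for $i\neq j$, while $\int_{\mathbb R^3}u_i\phi_i\ \mathrm{d} x\geq\int_{\Omega^-_1\cap\Omega^-_2}\phi_i\ \mathrm{d} x$ by the nonnegativity of $\phi_i$; hence
\begin{align*}
-\int_{\mathbb R^3} (u_1-u_2)(\phi_1-\phi_2)\ \mathrm{d} x = -\int_{\mathbb R^3} \bigl( u_1\phi_1 + u_2\phi_2 \bigr)\ \mathrm{d} x + \int_{\Omega^-_1\cap\Omega^-_2} \bigl( \phi_1+\phi_2 \bigr)\ \mathrm{d} x \leq 0\,.
\end{align*}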
\end{remark}
Using the screening property we now further characterize $\Omega^-$ and in particular show that this set is essentially bounded.
\begin{proof}[Proof of Theorem~\ref{thm-support}]
The fact that $\Omega^-$, defined by \eqref{def-Ome-}, is open follows from the continuity of $\varphi$.
We now turn to the proofs of the other statements.
\medskip
{\it Proof of \ref{it-2.4-1}.}
We first recall that by the positivity of $\varphi$ and \eqref{eq:bounds-phi}
\begin{align*}
0\,\leq\, \phi(x) \,\leq\, \frac{3^{2/3}}{2(4\pi)^{2/3}}m^{\frac{2}{3}}\quad\text{ for all }x\in\mathbb R^3
\end{align*}
holds. We now adapt the proof of \cite[Lemma~1]{Wei}. Consider $x_0\in \{\varphi>0\}\setminus\overline{\Omega^+}$ and observe that for every $r<{\rm dist\,}(x_0,\Omega^+)$ we have $\overline{B_{r}(x_0)}\subset \mathbb R^3\setminus\overline{\Omega^+}$.
By the screening property, the function $w(x):=\varphi(x)-\frac16|x-x_0|^2$ is harmonic in $B_r(x_0)\cap\{\varphi>0\}$, and the maximum principle yields
$$
\max_{\partial(B_r(x_0)\cap\{\varphi>0\})} w \geq w(x_0) = \varphi(x_0) >0\,.
$$
Since $w(x)\leq 0$ on $\partial\{\varphi>0\}$, we obtain
\begin{align*}
0\,\leq\,\varphi(x_0) \,\leq\, \max_{\partial B_r(x_0)} w \,\leq\, \frac{3^{2/3}}{2(4\pi)^{2/3}}m^{\frac{2}{3}} -\frac{r^2}{6}\,,
\end{align*}
thus
\begin{align}
r^2 \,\leq\, \frac{3\cdot 3^{2/3}}{(4\pi)^{2/3}}m^{\frac{2}{3}} \, \leq\, \frac{3}{2}m^{\frac{2}{3}}\,. \label{eq:boundphi}
\end{align}
Letting $r\nearrow {\rm dist\,}(x_0,\Omega^+)$ we obtain \ref{it-2.4-1}.
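The elementary numerical bound used in the last inequality of \eqref{eq:boundphi} can be checked directly:
\begin{align*}
\frac{3\cdot 3^{2/3}}{(4\pi)^{2/3}} \leq \frac32
\quad\Longleftrightarrow\quad
2\cdot 3^{5/3} \leq 3\,(4\pi)^{2/3}
\quad\Longleftrightarrow\quad
8\cdot 3^5 \leq 27\cdot 16\,\pi^2\,,
\end{align*}
and indeed $8\cdot 3^5=1944< 432\,\pi^2$.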
\medskip
{\it Proof of \ref{it-2.4-2}.} By using $m\leq\frac43\pi(\frac12{\rm diam\,}\Omega^+)^3$ in \eqref{eq:boundphi} we easily obtain the estimate in \ref{it-2.4-2}.
\medskip
{\it Proof of \ref{it-2.4-3}.} Let $V$ be a connected component of $\Omega^-$, and assume by contradiction that $\partial V\cap\partial\Omega^+=\emptyset$. Then $-\Delta\phi\leq 0$ in $V$ and $\phi=0$ on $\partial V$, so that the maximum principle yields $\phi\leq0$ in $V$, a contradiction since $\phi>0$ in $\Omega^-$.
\medskip
{\it Proof of \ref{it-2.4-4}.} The fact that $\partial\Omega_0$ and $\partial\Omega^+$ have positive distance is a consequence of the continuity of $\phi$ and Theorem~\ref{thm-screening}.
\end{proof}
\section{Formulation as obstacle problem and regularity of minimizers} \label{sec-obstacle}
In the following proposition we show that, as a consequence of \eqref{eq-obstacle1}, the potential $\phi$ associated with a minimizer of \eqref{min-unc} can be characterized as the solution of an obstacle problem.
\begin{proposition}[Formulation as obstacle problem] \label{prp-obstacle} %
Let $\Omega^+$ be as in Theorem~\ref{thm-screening}, and let $D:=B_{R_0}\setminus\overline{\Omega^+}$, where $R_0$ is chosen so that $\overline{\Omega^+}\subset B_{R_0}$. Then the potential $\phi$ associated with the minimizer $\Omega^-$ of \eqref{min-unc} is the unique solution to the obstacle problem
\begin{align} \label{eq-obstacle2} \min \biggl\{ \int_{D} \Bigl( |\nabla\psi|^2 +2\psi \Bigr)\ \mathrm{d} x \ : \ \psi\in H^1(D),\ \psi\geq0, \ \psi-\phi \in H^1_0(D) \biggr\} \,.
\end{align}
\end{proposition}
\begin{proof}
Existence and uniqueness of a solution of \eqref{eq-obstacle2} can be easily established by the direct method of the Calculus of Variations, and by strict convexity of the functional. Moreover, one can show (see, for instance, \cite[Section~1.3.2]{PetShaUra} for details) that the solution $\psi$ belongs to $W^{2,p}_{\mathrm{loc}}(D)$ for every $1<p<\infty$, and that it solves the Euler-Lagrange equations
\begin{align} \label{eq-obstacle3}
\begin{cases}
\Delta\psi = \chi_{\{\psi>0\}} & \text{in }D,\\
\psi\geq0 & \text{in }D,\\
\psi=\phi & \text{on }\partial D.
\end{cases}
\end{align}
The conditions in \eqref{eq-obstacle3} completely characterize the minimizer of \eqref{eq-obstacle2}: indeed, if $\psi_1,\psi_2\in H^1(D)$ were two different solutions of \eqref{eq-obstacle3}, then for every $\eta\in H^1_0(D)$ we would have
\begin{align*}
\int_D \Bigl( \nabla\psi_i\cdot\nabla\eta + \eta\chi_{\{\psi_i>0\}} \Bigr) \ \mathrm{d} x=0\,, \qquad i=1,2.
\end{align*}
Using $\eta=\psi_1-\psi_2$ as a test function and subtracting the two resulting equations, we would get
\begin{align*}
0 = \int_D \Bigl( |\nabla(\psi_1-\psi_2)|^2 + (\psi_1-\psi_2) \bigl( \chi_{\{\psi_1>0\}} - \chi_{\{\psi_2>0\}}
\bigr) \Bigr) \ \mathrm{d} x \geq \int_D |\nabla(\psi_1-\psi_2)|^2\ \mathrm{d} x\,,
\end{align*}
which implies $\psi_1=\psi_2$. Hence, since $\phi$ itself is a solution to \eqref{eq-obstacle3} by \eqref{eq-obstacle1} (which, in turn, follows from the screening property), we conclude that $\phi$ is also the minimizer of \eqref{eq-obstacle2}.
\end{proof}
We next exploit the connection to the obstacle problem to deduce regularity properties of the free boundary.
\begin{proof}[Proof of Theorem~\ref{thm-regularity}]
By Proposition~\ref{prp-obstacle} the potential $\phi$ is the solution to the obstacle problem \eqref{eq-obstacle2} and solves equation \eqref{eq-obstacle1}. This problem has been widely investigated and we collect below the main results available in the literature, for whose proofs we refer the reader to the presentation in the book \cite{PetShaUra} and to the references contained therein (see, in particular, \cite{Caf77, Caf80, Caf98, Wei}).
First of all, by \cite[Theorem~2.3]{PetShaUra} one has that $\phi\in C^{1,1}_{\mathrm{loc}}(\mathbb R^3\setminus\overline{\Omega^+})$.
The \textit{free boundary} $\Gamma(\phi):=\partial\{\phi>0\}$ has locally finite $\mathcal{H}^2$-measure in $\mathbb R^3\setminus\overline{\Omega^+}$ by \cite[Lemma~3.13]{PetShaUra}.
Moreover, by \cite[Theorem~3.22, Theorem~3.23 and Definition~3.24]{PetShaUra} it follows that $\Gamma(\phi)=\Gamma_{\mathrm{reg}}\cup\Gamma_{\mathrm{sing}}$, where $\Gamma_{\mathrm{reg}}$ is a relatively open subset of $\Gamma(\phi)$ with analytic regularity (\cite[Theorem~4.20]{PetShaUra}), while $x_0\in\Gamma_{\mathrm{sing}}$ if and only if
\begin{align*}
\lim_{r\to0^+}\frac 1r \, {\rm min\,diam\,} \bigl( \{\phi=0\}\cap B_r(x_0) \bigr) = 0
\end{align*}
(\cite[Proposition~7.1]{PetShaUra}), from which it also follows that the Lebesgue density of $\{\phi=0\}$ is 0 at each point of $\Gamma_{\mathrm{sing}}$.
Now the properties in the statement follow by observing that $\partial\Omega_0\subset\Gamma(\phi)$ and $\Gamma(\phi)\setminus\partial\Omega_0\subset\Gamma_{\mathrm{sing}}$: indeed, the second inclusion is a consequence of the fact that a regular point $x_0\in\Gamma_{\mathrm{reg}}$ has a neighborhood in which $\Gamma(\phi)$ is regular, which implies that $x_0\in\partial\Omega_0$.
\end{proof}
In the following proposition we show how points in the regular part $\Gamma$ or in the singular part $\Sigma$ of $\partial\Omega_0$ (\emph{regular points} and \emph{singular points}, respectively) can be characterized in terms of the blow-up of the potential $\phi$ at those points \cite{Caf80}, and we state a structure result for the singular part \cite{Caf98}. A different characterization can be given in terms of the Ou-Weiss energy functional, see \cite{Ou-1994,Wei}.
\begin{proposition}[Characterization of the singular set of $\partial \Omega_0$] \label{prp-singbound} %
Under the assumptions of Theorem~\ref{thm-regularity}, let $\partial\Omega_0 = \Gamma \cup \Sigma$, where $\Gamma$ is the regular part of $\partial \Omega_0$ and $\Sigma$ is the singular part of $\partial \Omega_0$.
The sets $\Gamma$ and $\Sigma$ can be characterized as follows: for $x_0\in\partial\Omega_0$, define the corresponding rescaled potential $\phi_{r,x_0}$ by
\begin{align*}
\phi_{r,x_0}(x) := \frac{\phi(x_0+rx)-\phi(x_0)}{r^2}\,.
\end{align*}
Then, as $r\to0^+$ and after extraction of a subsequence, we have $\phi_{r,x_0}\to\phi_{x_0}$ in $C^{1,\alpha}_{\mathrm{loc}}(\mathbb R^3)$ for every $\alpha\in(0,1)$. The blow-up function $\phi_{x_0}$ has two possible behaviors, independent of the choice of subsequence: either $\phi_{x_0}$ is a half-space solution, i.e.
\begin{align} \label{phi-hs} \phi_{x_0}(x)=\frac12 \bigl[(x\cdot e)^+\bigr]^2 \qquad \text{(half-space solution)}
\end{align}
for some unit vector $e \in S^2$, or
\begin{align} \label{phi-ps} %
\phi_{x_0}(x)=\frac12 x \cdot A_{x_0}x \qquad \text{(polynomial solution)}
\end{align}
for some nonnegative definite symmetric matrix $A_{x_0}$ with ${\rm Tr\,} A_{x_0}=1$.
Then $x_0\in\Gamma$ if and only if \eqref{phi-hs} holds, while $x_0\in\Sigma$ if and only if \eqref{phi-ps} holds.
Moreover, setting for $d=0,1,2$
\begin{align*}
\Sigma^d := \{ x_0\in\Sigma \ :\ \dim\ker A_{x_0} = d\}\,,
\end{align*}
each set $\Sigma^d$ is contained in a countable union of $d$-dimensional $C^1$-manifolds.
Finally, $\Sigma^0=\emptyset$.
\end{proposition}
\begin{proof}
For a proof of the classification of regular and singular points in terms of the blow-up of the potential, see \cite[Theorem~3.22 and Theorem~3.23]{PetShaUra}, while for the structure of $\Sigma$ see \cite[Theorem~7.9]{PetShaUra}.
We have only to show that the set $\Sigma^0$ is actually empty. Indeed, for $x_0\in\Sigma$ one has the decay estimate
\begin{align*}
|\phi(x) - {\textstyle\frac12} A_{x_0}(x-x_0)\cdot(x-x_0)| \leq \sigma(|x-x_0|) \ |x-x_0|^2
\end{align*}
where $\sigma$ is a suitable modulus of continuity (see \cite[Proposition~7.7]{PetShaUra}). This property clearly implies that if $x_0\in\Sigma^0$ we have $\phi>0$ in $B_r(x_0)\setminus\{x_0\}$ for $r>0$ small enough, which in turn yields $x_0\notin\partial\Omega_0$.
\end{proof}
\section{A surface charge model} \label{sec-surface} %
In this section we discuss the asymptotic limit, in the variational sense of $\Gamma$-convergence (see
\cite{Bra,DM}), of our charge distribution model when the charge density of one phase is much higher than the one of the
other: this is achieved by rescaling the negative charge density by a factor $\frac{1}{\varepsilon}$ and by letting $\varepsilon$ go to
zero. In the limit model the admissible configurations are described by positive Radon measures supported in
$\mathbb R^3\setminus\Omega^+$, with the optimal configuration realized by a surface distribution of charge concentrated on
$\partial\Omega^+$. We remark that a similar limit model, in the particular case where the fixed domain $\Omega^+$ is
the union of a finite number of disjoint balls, was analyzed in \cite{CapFri}.
\medskip
Given two positive Radon measures $\mu,\nu\in\mathcal{M}^+(\mathbb R^3)$, we introduce the energy
\begin{equation*}
\mathcal{I}(\mu,\nu) := \int_{\mathbb R^3}\int_{\mathbb R^3} \frac{1}{4\pi|x-y|}\ \mathrm{d}\mu(x)\mathrm{d}\nu(y)\,,
\end{equation*}
and we set $\mathcal{I}(\mu):=\mathcal{I}(\mu,\mu)$. We also define the potential
\begin{align} \label{eq-potmu} \phi_\mu(x):= \int_{\mathbb R^3}\frac{1}{4\pi|x-y|}\ \mathrm{d}\mu(y)\,,
\end{align}
and we note that
\begin{equation*}
\mathcal{I}(\mu,\nu) = \int_{\mathbb R^3}\phi_\mu(x)\ \mathrm{d}\nu(x) = \int_{\mathbb R^3}\phi_\nu(y)\ \mathrm{d}\mu(y)\,.
\end{equation*}
We will denote by $\mu^+:=\chi_{\Omega^+}{\mathcal L}^3$ the measure associated with the uniform charge distribution in
$\Omega^+$, and by $\phi_{\mu^+}$ the associated potential. For $\lambda>0$, $\varepsilon>0$, we define the sets
\begin{align*}
\mathcal{A}_\lambda &:= \biggl\{ \mu\in\mathcal{M}^+(\mathbb R^3) \ :\ {\rm supp\,}\mu\subset\mathbb R^3\setminus\Omega^+,\ \int_{\mathbb R^3}\mathrm{d}\mu \leq\lambda \biggr\}\,,\\
\mathcal{A}_{\lambda,\varepsilon} &:= \biggl\{ \mu\in\mathcal{A}_\lambda \ :\ \mu = u{\mathcal L}^3,\ u:\mathbb R^3\to\big[0,{\varepsilon}^{-1}\big] \biggr\}\
\end{align*}
and the functionals on $\mathcal{M}(\mathbb R^3)$
\begin{equation*}
\mathcal{F}_\varepsilon(\mu):=
\begin{cases}
-2\mathcal{I}(\mu^+,\mu) + \mathcal{I}(\mu) & \text{if }\mu\in\mathcal{A}_{\lambda,\varepsilon},\\
\infty & \text{otherwise,}
\end{cases}
\end{equation*}
\begin{equation*}
\mathcal{F}(\mu):=
\begin{cases}
-2\mathcal{I}(\mu^+,\mu) + \mathcal{I}(\mu) & \text{if }\mu\in\mathcal{A}_{\lambda},\\
\infty & \text{otherwise.}
\end{cases}
\end{equation*}
\begin{theorem} \label{thm-gammaconv} %
Assume that $\Omega^+\subset\mathbb R^3$ is an open, bounded set with Lipschitz boundary.
The family of functionals $(\mathcal{F}_\varepsilon)_\varepsilon$ $\Gamma$-converges, as $\varepsilon\to0$, to the functional $\mathcal{F}$ with
respect to weak*-convergence in $\mathcal{M}(\mathbb R^3)$.
\end{theorem}
\begin{proof}
We prove the two properties of the definition of $\Gamma$-convergence.
\medskip
\textit{Liminf inequality.} Given $\mu_\varepsilon\stackrel{*}{\rightharpoonup}\mu$ weakly* in $\mathcal{M}(\mathbb R^3)$, we have to
show that $\mathcal{F}(\mu)\leq\liminf_{\varepsilon\to0}\mathcal{F}_\varepsilon(\mu_\varepsilon)$. We can assume without loss of generality that
$\liminf_{\varepsilon\to0}\mathcal{F}_\varepsilon(\mu_\varepsilon)<\infty$, so that $\mu_\varepsilon\in\mathcal{A}_{\lambda,\varepsilon}$ and $\mu_\varepsilon=u_\varepsilon{\mathcal L}^3$.
Clearly ${\rm supp\,}\mu\subset\mathbb R^3\setminus\Omega^+$, and by lower semicontinuity
$\mu(\mathbb R^3)\leq\liminf_{\varepsilon\to 0}\mu_\varepsilon(\mathbb R^3)\leq\lambda$, which implies that $\mu\in\mathcal{A}_\lambda$. We then need to
show that
\begin{align*}
-2\mathcal{I}(\mu^+,\mu)+\mathcal{I}(\mu) \leq\liminf_{\varepsilon\to0} \Bigl(
-2\mathcal{I}(\mu^+,\mu_\varepsilon)+\mathcal{I}(\mu_\varepsilon) \Bigr)\,.
\end{align*}
Since the functional $\mathcal{I}$ is lower semicontinuous with respect to weak*-convergence of positive measures (see
\cite[equation (1.4.4)]{Lan}), we immediately have
\begin{align*}
\mathcal{I}(\mu)\leq\liminf_{\varepsilon\to0}\mathcal{I}(\mu_\varepsilon)\,.
\end{align*}
Moreover, the convergence $\mu_\varepsilon\stackrel{*}{\rightharpoonup}\mu$ and $\sup_\varepsilon\mu_\varepsilon(\mathbb R^3)<\infty$ imply that
\begin{align*}
\lim_{\varepsilon\to0}\int_{\mathbb R^3}f\ \mathrm{d}\mu_\varepsilon = \int_{\mathbb R^3}f\ \mathrm{d}\mu
\end{align*}
for every $f\in C^0_0(\mathbb R^3):=\{g\in C^0(\mathbb R^3) : \{|g|>\delta\} \text{ is compact for every }\delta>0\}$. Hence, since
$\phi_{\mu^+}\in C^0_0(\mathbb R^3)$ we conclude that
\begin{align} \label{eq-gammaconv1} \lim_{\varepsilon\to0}\mathcal{I}(\mu^+,\mu_\varepsilon) = \lim_{\varepsilon\to0}\int_{\mathbb R^3}\phi_{\mu^+}\
\mathrm{d}\mu_\varepsilon = \int_{\mathbb R^3}\phi_{\mu^+}\ \mathrm{d}\mu = \mathcal{I}(\mu^+,\mu)\,,
\end{align}
which completes the proof of the liminf inequality.
\medskip
\textit{Limsup inequality.} Given a measure $\mu\in\mathcal{M}(\mathbb R^3)$, we need to construct a
recovery sequence $\mu_\varepsilon\stackrel{*}{\rightharpoonup}\mu$ such that $\limsup_{\varepsilon\to0}\mathcal{F}_\varepsilon(\mu_\varepsilon)\leq\mathcal{F}(\mu)$. We can
assume without loss of generality that $\mathcal{F}(\mu)<\infty$, so that $\mu\in\mathcal{A}_\lambda$.
We first show that without loss of generality we can assume that
${\rm supp\,}\mu\subset\subset\mathbb R^3\setminus\overline{\Omega^+}$. Indeed, since $\partial\Omega^+$ is Lipschitz, we can
define for every $\delta>0$ a map $\Phi_\delta\in C^\infty(\mathbb R^3;\mathbb R^3)$ such that
$\Omega^+\subset\subset\Phi_\delta(\Omega^+)$ and $\|\Phi_\delta-Id\|_{C^1(\mathbb R^3)}\to0$ as $\delta\to0$ (the map
$\Phi_\delta$ ``pushes'' the boundary of $\Omega^+$ in the complement of $\Omega^+$). We define the push-forward
$\mu_\delta$ of the measure $\mu$ by setting for every continuous function $f$
\begin{align*}
\int_{\mathbb R^3}f\ \mathrm{d}\mu_\delta := \int_{\mathbb R^3}f\circ\Phi_\delta\ \mathrm{d}\mu\,.
\end{align*}
It is not hard to see that $\mu_\delta\in\mathcal{A}_\lambda$,
${\rm supp\,}\mu_\delta\subset\subset\mathbb R^3\setminus\overline{\Omega^+}$, $\mu_\delta\stackrel{*}{\rightharpoonup}\mu$ weakly* in $\mathcal{M}(\mathbb R^3)$
and $\mathcal{F}(\mu_\delta)\to\mathcal{F}(\mu)$ as $\delta\to0$. This shows that it is sufficient to provide a
recovery sequence in the case ${\rm supp\,}\mu\subset\subset\mathbb R^3\setminus\overline{\Omega^+}$.
\medskip
We now reduce to the case of a measure absolutely continuous with respect to the Lebesgue measure. Indeed, we define
for $\delta>0$ the convolution
\begin{align*}
\mu_\delta := \rho_\delta*\mu = \int_{\mathbb R^3}\rho_\delta(\cdot - y)\ \mathrm{d}\mu(y)\,,
\end{align*}
where $\rho_\delta\in C^{\infty}_{\mathrm c}(B_\delta)$, $\rho_\delta\geq0$, $\int_{B_\delta}\rho_\delta=1$ is a
sequence of mollifiers. Then $\mu_\delta\in\mathcal{A}_\lambda$ for $\delta$ sufficiently small (since we are
assuming ${\rm supp\,}\mu\subset\subset\mathbb R^3\setminus\overline{\Omega^+}$), $\mu_\delta$ is absolutely continuous with respect
to the Lebesgue measure and $\mu_\delta\stackrel{*}{\rightharpoonup}\mu$ (see \cite[Theorem~1.26]{Mat95}). We now show that we also have
$\mathcal{F}(\mu_\delta)\to\mathcal{F}(\mu)$. Indeed, the convergence of $\mathcal{I}(\mu^+,\mu_\delta)$ to
$\mathcal{I}(\mu^+,\mu)$ can be proved exactly as in \eqref{eq-gammaconv1}; moreover
\begin{align*}
\mathcal{I}(\mu) = \int_{B_\delta}\rho_\delta(z)\mathcal{I}(\mu(\cdot - z))\ \mathrm{d} z \geq
\mathcal{I}\biggl(\int_{B_\delta}\rho_\delta(z)\ \mathrm{d}\mu(\cdot-z)\biggr) = \mathcal{I}(\mu_\delta)
\end{align*}
(the first equality is due to the translation invariance of the functional $\mathcal{I}$, while the inequality is a
consequence of Jensen's inequality and of the convexity of $\mathcal{I}$), which combined with the lower
semicontinuity of $\mathcal{I}$ leads to $\lim_{\delta\to0}\mathcal{I}(\mu_\delta)=\mathcal{I}(\mu)$. This yields
$\mathcal{F}(\mu_\delta)\to\mathcal{F}(\mu)$.
\medskip
Hence, to complete the proof it remains just to provide a recovery sequence in the case of a measure
$\mu\in\mathcal{A}_\lambda$ absolutely continuous with respect to the Lebesgue measure and such that
${\rm supp\,}\mu\subset\subset\mathbb R^3\setminus\overline{\Omega^+}$. This can be done by a simple truncation argument: denoting
by $u$ the Lebesgue density of $\mu$, we define $\mu_\varepsilon:=(u\wedge\frac{1}{\varepsilon})\mathcal{L}^3$: it is then clear that
$\mu_\varepsilon\in\mathcal{A}_{\lambda,\varepsilon}$ and that $\mathcal{F}_\varepsilon(\mu_\varepsilon)\to\mathcal{F}(\mu)$ by the Lebesgue Dominated
Convergence Theorem.
\end{proof}
In the following proposition we discuss the limit problem, showing that the minimizer is obtained by a surface
distribution of charge on $\partial\Omega^+$.
\begin{proposition}\label{prp-surfacecharge}
Let $\mu\in\mathcal{M}^+(\partial\Omega^+)$ be a solution to the minimum problem
\begin{align} \label{min3} \min\biggl\{ \mathcal{F}(\mu) : \mu\in\mathcal{M}^+(\partial\Omega^+),\
\int_{\partial\Omega^+}\ \mathrm{d}\mu=m \biggr\}\,.
\end{align}
Then $\mu$ is the unique minimizer of $\mathcal{F}$ over $\mathcal{A}_m$.
\end{proposition}
\begin{proof}
We start by observing that the existence of a minimizer is guaranteed by the direct method of the Calculus of
Variations. Indeed, given a minimizing sequence $(\mu_n)_n$, by the uniform bound $\mu_n(\partial\Omega^+)=m$ we can
extract a (not relabeled) subsequence weakly*-converging to some positive measure $\mu$ supported on
$\partial\Omega^+$, and such that $\mu(\partial\Omega^+)=m$. Moreover, by semicontinuity of $\mathcal{I}$ with respect
to weak*-convergence of positive measures, and by the same argument as in \eqref{eq-gammaconv1} with $\mu_\varepsilon$ replaced
by $\mu_n$, we easily obtain that $\mu$ is a minimizer of \eqref{min3}.
\medskip
We now claim that the potential $\phi_\mu$ generated by the minimizer $\mu$, according to \eqref{eq-potmu}, coincides
with the potential $\phi_{\mu^+}$ outside $\Omega^+$. Fix any point $x_0\in\partial\Omega^+$, let $\rho>0$ and denote
$\alpha_\rho:=\mathcal{H}^2(\partial\Omega^+\cap B_\rho(x_0))$. Consider for $\varepsilon>0$ the measure
\begin{align*}
\mu_\varepsilon := \varepsilon\mathcal{H}^2\mathbin{\vrule height 1.6ex depth 0pt width 0.13ex\vrule height 0.13ex depth 0pt width 1.3ex}(\partial\Omega^+\cap B_\rho(x_0)) + \frac{m-\varepsilon\alpha_\rho}{m}\ \mu\,,
\end{align*}
which is admissible in problem \eqref{min3} for $\varepsilon$ sufficiently small. Then, by minimality of $\mu$
\begin{align*}
0 &\geq -2\mathcal{I}(\mu^+,\mu) + \mathcal{I}(\mu) + 2\mathcal{I}(\mu^+,\mu_\varepsilon) - \mathcal{I}(\mu_\varepsilon)\\
& = \frac{2\varepsilon\alpha_\rho}{m} \int_{\partial\Omega^+}\int_{\partial\Omega^+}\frac{1}{4\pi|x-y|}\ \mathrm{d}\mu(x)\mathrm{d}\mu(y)
- \frac{2\varepsilon\alpha_\rho}{m} \int_{\Omega^+}\int_{\partial\Omega^+}\frac{1}{4\pi|x-y|}\ \mathrm{d} x\ \mathrm{d}\mu(y) \\
& \qquad + 2\varepsilon\int_{\Omega^+}\int_{\partial\Omega^+\cap B_\rho(x_0)}\frac{1}{4\pi|x-y|}\ \mathrm{d} x\ \mathrm{d}\mathcal{H}^2(y) \\
& \qquad - 2\varepsilon\int_{\partial\Omega^+}\int_{\partial\Omega^+\cap B_\rho(x_0)}\frac{1}{4\pi|x-y|}\
\mathrm{d}\mathcal{H}^2(x)\mathrm{d}\mu(y) + o(\varepsilon)\,.
\end{align*}
Dividing by $\varepsilon$ and letting $\varepsilon\to0^+$ we obtain
\begin{align*}
\frac{1}{m} \int_{\partial\Omega^+} (\phi_{\mu^+}-\phi_\mu)\ \mathrm{d}\mu \geq
\frac{1}{\alpha_\rho}\int_{\partial\Omega^+\cap B_\rho(x_0)}(\phi_{\mu^+}-\phi_\mu)\ \mathrm{d}\mathcal{H}^2\,,
\end{align*}
and since the right-hand side in the previous inequality converges as $\rho\to0$ to $(\phi_{\mu^+}-\phi_\mu)(x_0)$, we
obtain that
\begin{align*}
\frac{1}{\mu(\partial\Omega^+)}\int_{\partial\Omega^+}(\phi_{\mu^+}-\phi_\mu)\ \mathrm{d}\mu \geq
(\phi_{\mu^+}-\phi_\mu)(x_0)
\end{align*}
for every $x_0\in\partial\Omega^+$. We then conclude that there exists a constant $\alpha$ such that $\phi :=
\phi_{\mu^+}-\phi_\mu = \alpha$ $\mu$-a.e. on $\partial\Omega^+$, and $\phi\leq\alpha$ on $\partial\Omega^+$.
\medskip
Observe that, if $R_0>0$ denotes a radius such that $\Omega^+\subset B_{R_0}$, by \eqref{eq-pot2} (which still holds
in the present setting: see, for instance, \cite[Theorem~6.12]{Hel}) we have
\begin{align} \label{eq-gammaconv4} \int_{\partial B_R}\phi\ \mathrm{d}\mathcal{H}^2 = 0 \qquad\text{for every }R>R_0\ .
\end{align}
Now, since $\phi$ is superharmonic in $\mathbb R^3\setminus{\rm supp\,}\mu$, $\phi=\alpha$ on ${\rm supp\,}\mu$ and $\phi$ vanishes at
infinity, by the minimum principle we have that
\begin{align}\label{eq-gammaconv5}
\phi\geq\min\{0,\alpha\} \qquad\text{in }\mathbb R^3\setminus{\rm supp\,}\mu\ .
\end{align}
Hence condition \eqref{eq-gammaconv4} excludes the case $\alpha>0$. On the other hand, if $\alpha<0$ then we would
have that $\phi$ is harmonic in $\mathbb R^3\setminus\overline{\Omega^+}$, $\phi\leq\alpha<0$ on $\partial\Omega^+$ and
$\phi$ vanishes at infinity, so that $\phi<0$ in $\mathbb R^3\setminus\overline{\Omega^+}$, which is again a contradiction
with \eqref{eq-gammaconv4}. Thus $\alpha=0$ and combining \eqref{eq-gammaconv5} with the fact that $\phi\leq\alpha$
on $\partial\Omega^+$, we conclude that $\phi=0$ on $\partial\Omega^+$. In turn, this implies that $\phi=0$ in
$\mathbb R^3\setminus\Omega^+$ since $\phi$ is harmonic in $\mathbb R^3\setminus\overline{\Omega^+}$ and vanishes at infinity. We
have then proved that
\begin{align} \label{eq-gammaconv2} \int_{\partial\Omega^+}\frac{1}{4\pi|x-y|}\ \mathrm{d}\mu(y) =
\int_{\Omega^+}\frac{1}{4\pi|x-y|}\ \mathrm{d} y \qquad\text{for every }x\in\mathbb R^3\setminus\Omega^+\ .
\end{align}
\medskip
We can now complete the proof of the proposition, showing that $\mu$ is the minimizer of $\mathcal{F}$ over
$\mathcal{A}_m$. Indeed, for every $\nu\in\mathcal{A}_m$ we have, using \eqref{eq-gammaconv2}
\begin{align*}
\mathcal{F}(\nu) & = -2\int_{\mathbb R^3}\int_{\Omega^+}\frac{1}{4\pi|x-y|}\ \mathrm{d} x\ \mathrm{d}\nu(y)
+ \int_{\mathbb R^3}\int_{\mathbb R^3}\frac{1}{4\pi|x-y|}\ \mathrm{d}\nu(x)\mathrm{d}\nu(y) \\
& = -2\int_{\mathbb R^3}\int_{\partial\Omega^+}\frac{1}{4\pi|x-y|}\ \mathrm{d}\mu(x)\mathrm{d}\nu(y)
+ \int_{\mathbb R^3}\int_{\mathbb R^3}\frac{1}{4\pi|x-y|}\ \mathrm{d}\nu(x)\mathrm{d}\nu(y) \\
& = \int_{\mathbb R^3}\int_{\mathbb R^3}\frac{1}{4\pi|x-y|}\ \mathrm{d}(\mu-\nu)(x)\mathrm{d}(\mu-\nu)(y)
- \int_{\mathbb R^3}\int_{\mathbb R^3}\frac{1}{4\pi|x-y|}\ \mathrm{d}\mu(x)\mathrm{d}\mu(y) \\
& = \mathcal{I}(\mu-\nu) + \mathcal{F}(\mu)\,.
\end{align*}
Using the fact that $\mathcal{I}(\mu-\nu)\geq0$, with equality if and only if $\mu=\nu$ (see
\cite[Theorem~1.15]{Lan}), we obtain the conclusion.
\end{proof}
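The last step uses the positive definiteness of the Coulomb kernel, $\mathcal{I}(\mu-\nu)\geq0$. As an illustrative numerical sanity check (not part of the proof; the configuration below is our own arbitrary choice), one can test this on signed combinations of uniform spherical shells, for which $\mathcal{I}$ has a closed form: a shell of mass $m$ and radius $R$ has self-energy $m^2/(4\pi R)$, and two disjoint shells with centers at distance $d$ interact as $m_1m_2/(4\pi d)$ by Newton's theorem.

```python
import math, random

random.seed(0)

# Disjoint spherical shells: (center x-coordinate, radius); centers on a line,
# with gaps larger than the sums of radii so the Newton closed forms apply.
shells = [(0.0, 0.5), (3.0, 0.7), (7.0, 0.4), (12.0, 1.0)]

def energy(charges):
    """I(sum_i c_i * uniform unit-mass shell_i) via the closed forms above."""
    total = 0.0
    for i, (xi, Ri) in enumerate(shells):
        for j, (xj, Rj) in enumerate(shells):
            # self term uses the shell radius, cross term the center distance
            total += charges[i] * charges[j] / (4.0 * math.pi * (Ri if i == j else abs(xi - xj)))
    return total

# Random signed charges model a difference mu - nu; positive definiteness
# of the kernel forces the quadratic form to be nonnegative.
for _ in range(100):
    c = [random.uniform(-1.0, 1.0) for _ in shells]
    assert energy(c) >= -1e-12
print("all signed shell configurations have nonnegative energy")
```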
\begin{remark} \label{rem-surfacepot} The proof of the previous proposition shows, in particular, the following
interesting fact: if $\mu$ solves the minimum problem \eqref{min3}, then
\begin{align*}
\int_{\partial\Omega^+}\frac{1}{4\pi|x-y|}\ \mathrm{d}\mu(y) = \int_{\Omega^+}\frac{1}{4\pi|x-y|}\ \mathrm{d} y \qquad\text{for
every }x\in\mathbb R^3\setminus\Omega^+,
\end{align*}
that is, the potential $\phi_\mu$ generated by $\mu$ coincides outside of $\Omega^+$ with the potential $\phi_{\mu^+}$
generated by the uniform distribution of charge in $\Omega^+$. In particular, we again find a complete screening
property: the potential of $\mu^+ -\mu$ vanishes outside of the support of that measure.
\end{remark}
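For instance, when $\Omega^+$ is a ball the optimal measure is the uniform surface charge, and the identity above reduces to Newton's shell theorem. A quick Monte Carlo sanity check at one exterior point (illustrative only; the radius, observation point, and sample count are arbitrary choices):

```python
import math, random

random.seed(0)
R = 1.0                      # ball radius (Omega^+ = unit ball)
x = (2.0, 0.0, 0.0)          # exterior observation point, |x| > R
N = 200_000                  # Monte Carlo samples

def k(p, q):
    """Newtonian kernel 1/(4 pi |p - q|)."""
    return 1.0 / (4.0 * math.pi * math.dist(p, q))

# Potential of the unit-density charge on the ball, by rejection sampling.
vol = 4.0 / 3.0 * math.pi * R**3
vals = []
while len(vals) < N:
    p = tuple(random.uniform(-R, R) for _ in range(3))
    if p[0]**2 + p[1]**2 + p[2]**2 <= R * R:
        vals.append(k(p, x))
phi_ball = vol * sum(vals) / N

# Potential of the same total charge spread uniformly over the sphere |y| = R.
def sphere_point():
    g = [random.gauss(0.0, 1.0) for _ in range(3)]
    r = math.sqrt(sum(c * c for c in g))
    return tuple(R * c / r for c in g)

phi_sphere = vol * sum(k(sphere_point(), x) for _ in range(N)) / N

exact = R**3 / (3.0 * math.dist((0.0, 0.0, 0.0), x))   # closed form for |x| > R
print(phi_ball, phi_sphere, exact)
```

Both estimates agree with the closed form $R^3/(3|x|)$, i.e.\ the ball and its equilibrium surface charge are indistinguishable from outside.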
A longstanding challenge is to explain the discrete spectrum of black hole microstates using spacetime geometry. In recent years, some statistical aspects of these microstates have been explained using spacetime wormholes.\footnote{The role of spacetime wormholes in quantum gravity has also been a longstanding puzzle, \cite{Hawking:1987mz,Lavrelashvili:1987jg,Giddings:1987cg,Coleman:1988cy,Maldacena:2004rf,ArkaniHamed:2007js}.} Examples include: aspects of the spectral form factor \cite{Saad:2018bqo,Saad:2019lba,Stanford:2019vob} and late-time correlation functions \cite{Blommaert:2019hjr,Saad:2019pqd,Blommaert:2020seb,Stanford:2021bhl}, the Page curve \cite{Almheiri:2019qdq,Penington:2019kki} and matrix elements \cite{Stanford:2020wkf} of an evaporating black hole, and the ETH behavior of matrix elements \cite{Belin:2020hea,Belin:2021ryy,Chandra:2022bqq}.
A statistical theory of microstates is far from a complete description, but it is enough to probe discreteness of the energy spectrum. One tool to discuss this is the spectral form factor
\begin{equation}\label{intro:SFF}
K_\beta(t) = \langle Z(\beta+\i t)Z(\beta - \i t)\rangle
\end{equation}
where $Z(x) = \text{Tr}\,e^{-x H}$ is the thermal partition function, and the brackets represent some form of averaging for which the statistical description is sufficient. The discrete nature of chaotic energy levels is reflected in the ``plateau,'' the approach at late times to the value $K_\beta(\infty) = Z(2\beta)$.
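For a finite system with a nondegenerate spectrum, the plateau value is simply the long-time average of $K_\beta(t)$: the off-diagonal phases $e^{\i(E_j-E_k)t}$ average to zero, leaving $\sum_j e^{-2\beta E_j} = Z(2\beta)$. A quick numerical sketch (the five-level spectrum and the value of $\beta$ below are made up for illustration):

```python
import cmath, math

E = [0.1, 0.5, 1.1, 1.9, 2.6]   # made-up nondegenerate spectrum
beta = 0.3

def K(t):
    """K_beta(t) = Z(beta + i t) Z(beta - i t) = |Z(beta + i t)|^2."""
    Z = sum(cmath.exp(-(beta + 1j * t) * e) for e in E)
    return abs(Z) ** 2

# Long-time average: the off-diagonal phases e^{i(E_j - E_k) t} wash out.
T, steps = 2.0e4, 40_000
avg = sum(K((n + 0.5) * T / steps) for n in range(steps)) / steps

plateau = sum(math.exp(-2.0 * beta * e) for e in E)   # Z(2 beta)
print(avg, plateau)
```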
For systems in the unitary symmetry class (no time reversal symmetry), the random matrix theory (RMT) prediction for the spectral form factor is simple. The microcanonical version
\begin{equation}
K_E(t) = \int \frac{\d\beta}{2\pi \i} e^{2\beta E} K_\beta(t)
\end{equation}
should have the form of a linear ramp connected to a plateau: $\text{min}\{t/2\pi, e^{S(E)}\}$ with $S(E)$ the microcanonical entropy at energy $E$. This sharp transition at $t_p = 2\pi e^{S(E)}$ is a signature of discreteness of the spectrum; it arises from oscillations in the density pair correlator with wavelength $e^{-S(E)}$, representing the mean spacing between discrete energy levels.
The sharpness of the transition from the ramp to the plateau is an apparent obstruction to an explanation in terms of geometry. In particular, in two-dimensional dilaton gravity models such as JT gravity, the genus expansion should roughly be thought of as an expansion in $e^{-S(E)}$. But the transition from the ramp to the plateau comes from contributions that go as $e^{\i \# e^{S(E)}}$, nonperturbative in the genus counting parameter, suggesting that it is not captured by the conventional sum over geometries.\footnote{Some previous approaches to explaining the plateau through a sum over geometries have involved ``spacetime D-branes" \cite{Saad:2019lba,Blommaert:2019wfy,Marolf:2020xie}, which generalize the sum over geometries to include contributions from an infinite number of asymptotic boundaries.}
However, the spectral form factor $K_\beta(t)$ is an integral of $K_E(t)$ over energy, and this integral has the potential to smooth out the transition to the plateau. As first shown by \cite{Okuyama:2020ncd,okounkov2002generating} for the Airy matrix integral, the resulting function can have a convergent genus expansion, smoothly transitioning from the ramp to the plateau. We conjecture a generalization of this result below, in a limit that will be referred to as ``$\tau$-scaling.'' This convergent series makes it possible to explain the plateau in terms of a conventional sum over geometries, rather than from a radical nonperturbative effect.
In this paper, we will explain some features of this genus expansion for the spectral form factor, primarily working in the low-energy limit of JT gravity: the Airy model. Our explanations will connect with the encounter computations in semiclassical periodic orbit theory, used to explain the RMT corrections to the ramp \cite{2009arXiv0906.4930A,Sieber_2001,Sieber_2002,PhysRevLett.93.014103,PhysRevE.72.046207}. The sum over encounters is closely analogous to a genus expansion, so it is natural to try to interpret the genus expansion for the plateau in terms of a gravitational analog of encounters. Encounters alone cannot be sufficient to explain the genus expansion for $K_\beta(t)$ because without time-reversal symmetry, the encounters cancel genus by genus.
The models that we study, in particular the Airy model, allow us to generalize the theory of encounters beyond their usual regime of validity in the high-energy, semiclassical limit. At very low energies, of order $1/t$, the encounters receive large quantum corrections that disturb the cancellation between encounters, reproducing the expected $\tau$-scaled $K_\beta(t)$.
In \hyperref[SectionTwo]{\textbf{Section Two}}, we introduce a formula for $K_\beta(t)$ in a double-scaled matrix integral in the ``$\tau$-scaled" limit, generalizing \cite{Okuyama:2020ncd,Okuyama:2021cub}. We reconcile the existence of a convergent genus expansion for $K_\beta(t)$ with the absence of such an expansion for $K_E(t)$. In particular, one can think of the genus expansion for $K_\beta(t)$ as coming entirely from very low energies.
In \hyperref[SectionThree]{\textbf{Section Three}} we review an analog of the genus expansion for $K_E(t)$ in periodic orbit theory: the sum over encounters. The sum over encounters gives an expansion in $e^{-S(E)}$, valid at high energies. For periodic orbit systems in the GUE symmetry class (no time-reversal), corrections to the ramp coming from encounters cancel order by order \cite{PhysRevLett.93.014103,PhysRevE.72.046207}. In JT gravity, we discuss a direct analog of the simplest type of encounter contribution in a theory with time-reversal symmetry, contributing to the SFF at genus one-half.
In \hyperref[SectionFour]{\textbf{Section Four}} we study the Airy model, the low-energy limit of JT gravity. The wormhole geometries in this model are very simple, and in one-to-one correspondence with ribbon graphs in the Feynman diagram expansion of Kontsevich's matrix model. These graphs allow us to generalize the encounter computations beyond the semiclassical, high-energy regime. At genus one and high energies, the encounter contributions mutually cancel in the GUE symmetry class. At low energies, quantum corrections to the encounters spoil this cancellation, leading to the nonzero contribution to $K_\beta(t)$. The full answer at this genus comes from a large region of moduli space, far from the semiclassical encounter regime.
{\bf Note:} Two recent papers \cite{Blommaert:2022lbh,Weber:2022sov} are closely related to our work. A preliminary version of section two of this paper was shared with the authors of \cite{Blommaert:2022lbh,Weber:2022sov} in October 2021.
\section{Tau scaling of the spectral form factor}\label{SectionTwo}
In this section we discuss the ``$\tau$-scaling'' limit of matrix integrals in which we conjecture that the spectral form factor has a simple form with a convergent genus expansion. Consider a double-scaled matrix integral with unitary symmetry class and classical density of states
\begin{equation}
\rho(E) = e^{S_0}\rho_0(E).
\end{equation}
The spectral form factor is defined as
\begin{equation}
K_\beta(t) \equiv \langle Z(\beta+\i t)Z(\beta - \i t)\rangle, \hspace{20pt} Z(x) \equiv \text{Tr}\, e^{-x H}.
\end{equation}
Here the angle brackets represent the average in the matrix integral. We would like to analyze this in a limit where $t$ goes to infinity and $e^{S_0}$ also goes to infinity, holding fixed $\beta$, and also holding fixed the ratio
\begin{equation}
\tau = t e^{-S_0}.
\end{equation}
This will be referred to as the ``$\tau$-scaled'' limit.
In the $\tau$-scaled limit, the time $t = e^{S_0}\tau$ is large, so the SFF will be dominated by correlations of nearby energy levels. Pair correlations of nearby levels are described by the universal sine-kernel formula, which translates to a ramp-plateau structure $\text{min}\{t/2\pi,\rho(E)\}$ as a function of the center of mass energy $E$. By integrating this contribution over $E$, one gets the following candidate expression for the spectral form factor
\begin{align}\label{foldingformula}
K_\beta(t) &\stackrel{?}{\approx} \int_{E_0}^\infty \d E \ e^{-2\beta E} \text{min}\left\{\frac{t}{2\pi},\rho(E)\right\}.
\end{align}
This was previously discussed as an uncontrolled approximation to the SFF \cite{Cotler:2016fpe}. Here we would like to propose that it is exact in the $\tau$-scaled limit,
\begin{equation}
\lim_{S_0\to\infty} e^{-S_0}K_\beta(\tau e^{S_0}) = \int_{E_0}^\infty \d E \ e^{-2\beta E}\text{min}\left\{\frac{\tau}{2\pi},\rho_0(E)\right\}.\label{tauconj}
\end{equation}
Let's try an example by taking $\rho_0(E) = \frac{\sqrt{E}}{2\pi}$, which is sometimes called the Airy model, or the Kontsevich-Witten model. Then (\ref{tauconj}) becomes
\begin{align}
e^{-S_0}K_\beta(\tau e^{S_0}) &= \frac{1}{2\pi}\int_0^\infty \d E e^{-2\beta E}\text{min}\left\{\tau,\sqrt{E}\right\}\\
&= \frac{1}{2\pi}\frac{\pi^{1/2}}{2^{5/2}\beta^{3/2}}\text{Erf}(\sqrt{2\beta}\tau)\label{airyconj}\\
&=\frac{\tau}{4\pi\beta} - \frac{\tau^3}{6\pi}+ \frac{\beta}{10\pi}\tau^5 - \frac{\beta^2}{21\pi} \tau^7 +\dots.\label{airyanssec1}
\end{align}
We can compare this to the exact answer for the spectral form factor of the Airy model \cite{okounkov2002generating,Okuyama:2021cub}
\begin{align}
K_\beta(t) &= \langle Z(2\beta)\rangle \text{Erf}(e^{-S_0}\sqrt{2\beta(\beta^2+t^2)})\\
&= \frac{\exp\left(S_0 + \frac{8}{3}e^{-2S_0}\beta^3\right)}{8\sqrt{2\pi}\beta^{3/2}}\text{Erf}(e^{-S_0}\sqrt{2\beta(\beta^2+t^2)}).\label{eqn:airySFF}
\end{align}
This agrees with (\ref{airyconj}) in the $\tau$-scaled limit.
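As a further check, the closed form (\ref{airyconj}) can be compared with direct numerical quadrature of the energy integral (an illustrative sketch; the values of $\beta$ and $\tau$ are arbitrary):

```python
import math

beta, tau = 1.3, 0.8

# Left side of (airyconj): (1/2pi) * int_0^inf dE e^{-2 beta E} min(tau, sqrt(E)),
# evaluated by the midpoint rule on a truncated interval (the tail is negligible).
E_max, n = 20.0, 400_000
h = E_max / n
lhs = 0.0
for i in range(n):
    E = (i + 0.5) * h
    lhs += math.exp(-2.0 * beta * E) * min(tau, math.sqrt(E))
lhs *= h / (2.0 * math.pi)

# Right side: the error-function closed form.
rhs = (math.sqrt(math.pi) / (2.0 * math.pi * 2.0**2.5 * beta**1.5)
       * math.erf(math.sqrt(2.0 * beta) * tau))
print(lhs, rhs)
```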
As a second example, we can take $\rho_0(E) =\frac{1}{4\pi^2}\sinh(2\pi\sqrt{E})$, which corresponds to JT gravity:
\begin{align}
e^{-S_0}K_\beta(\tau e^{S_0}) &= \frac{1}{2\pi} \int_0^\infty \d E e^{-2\beta E} \text{min}\left\{\tau,\frac{1}{2\pi}\sinh(2\pi\sqrt{E})\right\}\\
&= \frac{e^{\frac{\pi^2}{2\beta}}}{16\sqrt{2\pi}\beta^{3/2}}\left[\text{Erf}\left(\frac{\frac{\beta}{\pi}\text{arcsinh}(2\pi \tau) + \pi}{\sqrt{2\beta}}\right)+\text{Erf}\left(\frac{\frac{\beta}{\pi}\text{arcsinh}(2\pi \tau)- \pi}{\sqrt{2\beta}}\right)\right]\label{JTerfs}\\
&= \frac{\tau}{4\pi\beta} - \frac{\tau^3}{6\pi}+ \left(\frac{\beta}{10\pi}+\frac{2\pi}{15}\right)\tau^5 - \left(\frac{\beta^2}{21\pi} + \frac{4\pi\beta}{21} + \frac{64\pi^3}{315}\right)\tau^7 +\dots\label{tauexp}
\end{align}
For JT gravity, no exact formula for the SFF is known,\footnote{See \cite{Okuyama:2020ncd,Okuyama:2021cub} for discussion of a different limit where $\beta$ is also large, and see \cite{Johnson:2020exp} for numerical evaluation.} but (\ref{JTerfs}) can be checked by using topological recursion \cite{Eynard:2004mh,Eynard:2007kz} to compute the exact spectral form factor to a given order in $e^{-S_0}$, and then applying $\tau$-scaling. Using this method, we confirmed (\ref{JTerfs}) up to order $\tau^{13}$.
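Independently of topological recursion, the closed form (\ref{JTerfs}) can also be checked against direct numerical quadrature of the energy integral (a sketch; the parameter values are arbitrary):

```python
import math

beta, tau = 1.0, 0.3

# Left side: (1/2pi) * int_0^inf dE e^{-2 beta E} min(tau, sinh(2 pi sqrt(E))/(2 pi)),
# by the midpoint rule on a truncated interval.
E_max, n = 15.0, 300_000
h = E_max / n
lhs = 0.0
for i in range(n):
    E = (i + 0.5) * h
    rho = math.sinh(2.0 * math.pi * math.sqrt(E)) / (2.0 * math.pi)
    lhs += math.exp(-2.0 * beta * E) * min(tau, rho)
lhs *= h / (2.0 * math.pi)

# Right side: the closed form (JTerfs).
s = (beta / math.pi) * math.asinh(2.0 * math.pi * tau)
pref = math.exp(math.pi**2 / (2.0 * beta)) / (16.0 * math.sqrt(2.0 * math.pi) * beta**1.5)
rhs = pref * (math.erf((s + math.pi) / math.sqrt(2.0 * beta))
              + math.erf((s - math.pi) / math.sqrt(2.0 * beta)))
print(lhs, rhs)
```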
Note that the series in $\tau$ corresponds to the genus expansion, as one can see by undoing the $\tau$-scaling and replacing $\tau \to t e^{-S_0}$. In particular, the power of $\tau$ is $\tau^{2g+1}$. Normally, the genus expansion of the SFF in JT gravity is an asymptotic series. But after $\tau$-scaling it has a nonzero radius of convergence $|\tau| < {1\over 2\pi}$, and the analytic continuation is nonsingular along the entire real $\tau$ axis. For large $\tau$ it reproduces the plateau.\footnote{In appendix \ref{Pappendix} we show that (\ref{tauconj}) always has a nonzero radius of convergence.}
The presence of powers of $\beta$ in the leading $\tau$-scaled answer indicates that there are cancellations of higher powers of $t$. For example, the term proportional to $\beta \tau^5$ arises from a linear combination of terms $(\beta + \i t)^{p_1}(\beta - \i t)^{p_2}$ with $p_1 + p_2 = 6$ such that the leading power $t^6$ cancels. This cancellation has been studied by \cite{Blommaert:2022lbh,Weber:2022sov}.
The conjecture (\ref{tauconj}) was designed so that if we compute the inverse Laplace transform to $K_E(\tau e^{S_0})$, the answer will simply be $e^{S_0}\min\{\tau/2\pi,\rho_0(E)\}$. In particular, for fixed $E > 0$, the expansion in powers of $\tau$ terminates after the linear term -- naively there is simply no genus expansion for fixed energy in the $\tau$-scaled limit. A more refined viewpoint is that the genus expansion has coefficients that are derivatives of $\delta$ functions of $\rho(E)$. This can be seen by writing $\min(x,y) = x - (x-y)\theta(x-y)$ and expanding in powers of $x$. It can also be seen by inverse Laplace transforming (\ref{tauexp}) term by term.
So the genus expansion of the canonical SFF can be understood as arising from contributions localized at zero energy where the plateau time is short. To see this from another perspective, consider a $\rho_0$ of the general form
\begin{equation}\label{generaldensity}
2\pi\rho_0(E) = a_1 E^{1/2} + a_3 E^{3/2} + a_5 E^{5/2} + a_7 E^{7/2}+\dots
\end{equation}
Then the conjecture (\ref{tauconj}) gives
\begin{align}\label{tauexpansionintro}
\int \d E e^{-2\beta E} &\text{min}\left\{\frac{\tau}{2\pi},\rho_0(E)\right\}\\& = \frac{\tau}{4\pi\beta} - \frac{\tau^3}{6\pi a_1^2} + \frac{(a_1\beta + 2a_3)}{10 \pi a_1^5}\tau^5 - \frac{(2a_1^2\beta^2 + 12 a_1a_3\beta - 6 a_1 a_5 + 21 a_3^2)}{42\pi a_1^8}\tau^7+\dots\notag
\end{align}
The contribution from genus $g$ depends only on the first $g$ terms in the expansion of $\rho_0(E)$ around $E = 0$. Indeed, in appendix \ref{Pappendix} we show that the coefficient of $\tau^{2g+1}$ for $g \ge 1$ is
\begin{equation}
-\frac{1}{g(2g+1)(2\pi)^{2g+1}} \oint_0 \frac{\d E}{2\pi \i} \frac{e^{-2\beta E}}{\rho_0(E)^{2g}}.
\end{equation}
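This formula can be checked mechanically (the sketch below, including the helper functions and the value of $\beta$, is our own): writing $2\pi\rho_0(E) = \sqrt{E}\,(a_1 + a_3 E + \cdots)$, the residue reduces to $(2\pi)^{2g}$ times the coefficient of $E^{g-1}$ in $e^{-2\beta E}(a_1 + a_3 E + \cdots)^{-2g}$, so the $\tau^{2g+1}$ coefficient is $-\frac{1}{2\pi g(2g+1)}$ times that series coefficient, which truncated power-series arithmetic extracts and compares against (\ref{tauexpansionintro}):

```python
import math

def mul(a, b, n):
    """Multiply truncated power series (coefficient lists), keeping n terms."""
    return [sum(a[i] * b[k - i] for i in range(k + 1) if i < len(a) and k - i < len(b))
            for k in range(n)]

def inv(a, n):
    """Reciprocal of a truncated power series with a[0] != 0."""
    out = [1.0 / a[0]]
    for k in range(1, n):
        out.append(-sum(a[i] * out[k - i] for i in range(1, k + 1) if i < len(a)) / a[0])
    return out

def tau_coeff(g, a_odd, beta):
    """Coefficient of tau^{2g+1} for 2*pi*rho_0(E) = sqrt(E)*(a1 + a3 E + ...).
    Equals -1/(2 pi g (2g+1)) times the coefficient of E^{g-1}
    in exp(-2 beta E) * (a1 + a3 E + ...)^{-2g}."""
    n = g
    expE = [(-2.0 * beta) ** k / math.factorial(k) for k in range(n)]
    denom = [1.0]
    for _ in range(2 * g):
        denom = mul(denom, a_odd, n)
    f = mul(expE, inv(denom, n), n)
    return -f[g - 1] / (2.0 * math.pi * g * (2 * g + 1))

beta = 0.7
print(tau_coeff(1, [1.0], beta), -1.0 / (6.0 * math.pi))        # Airy tau^3
print(tau_coeff(2, [1.0], beta), beta / (10.0 * math.pi))       # Airy tau^5
a_jt = [1.0, (2 * math.pi) ** 2 / 6, (2 * math.pi) ** 4 / 120]  # sinh expansion
print(tau_coeff(2, a_jt, beta), beta / (10.0 * math.pi) + 2.0 * math.pi / 15.0)  # JT tau^5
```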
In the rest of the paper we will try to understand where this series comes from. We will start by comparing to another type of expansion associated to the spectral form factor -- the theory of encounters in periodic orbits.
\newpage
\section{Encounters in orbits and in JT}\label{SectionThree}
One case where the spectral form factor has been studied extensively is semiclassical chaotic billiards.\footnote{See the introduction of \cite{muller2005periodic} for history and references.} There, the Gutzwiller trace formula is used to write an expression for the spectral form factor in terms of a sum over pairs of periodic orbits. Special pairings of orbits called ``encounters'' lead to a series in $\tau$ that is vaguely reminiscent of (\ref{tauexpansionintro}).
However, there are important differences: the encounters cancel between themselves for systems with unitary symmetry class, and the encounter analysis is only valid at high energies, in the semiclassical region. It is tempting to view the genus expansion (\ref{tauexpansionintro}) as analogous to a type of ``souped up'' encounter theory that can accurately treat very low energies, outside the semiclassical limit, and for which the encounters do not quite cancel.
We will explore this further in section \ref{SectionFour}. In the current section we prepare by reviewing the theory of encounters in periodic orbits and finding an analog of the simplest (Sieber-Richter) encounter in a JT gravity calculation.
\subsection{Review of periodic-orbit theory}
Consider a semiclassical billiards system, consisting of a particle moving in a stadium.\footnote{We set $\hbar = 1$. The semiclassical limit corresponds to high energies.} The starting point for the theory of encounters is Gutzwiller's trace formula for the oscillating part of the density of states $\rho_{\text{osc}}(E)$ in terms of a sum over classical periodic orbits $\gamma$:
\begin{equation}
\rho_{\text {osc}}(E)\sim {1\over \pi }\text{Re}\sum_{\gamma} A_{\gamma} e^{\i S_{\gamma}}.
\end{equation}
Here $A_{\gamma}$ is the stability amplitude (one-loop determinant) and $S_{\gamma}$ is the classical action. The microcanonical spectral form factor is then given by a double sum over orbits $\gamma,\gamma'$:
\begin{align}
K_E(t) &=\langle \int \d\epsilon e^{\i \epsilon t} \rho_{\text{osc}}(E+{\epsilon \over 2})\rho_{\text{osc}}(E-{\epsilon\over 2})\rangle \\
&={1\over 2\pi }\langle \sum_{\gamma,\gamma'} A_{\gamma}A^*_{\gamma'}e^{\i (S_{\gamma}-S_{\gamma'})}\delta\left(t-{t_{\gamma}+t_{\gamma'}\over 2}\right)\rangle.\label{eqn:SFF}
\end{align}
Here $t_{\gamma}={\partial S_{\gamma}\over \partial E}$ is the period of the orbit $\gamma$ and $\langle \cdot \rangle$ represents an average over the energy window.
$K_E(t)$ receives both diagonal ($S_\gamma=S_{\gamma'}$) and off-diagonal ($S_\gamma\neq S_{\gamma'}$) contributions. In a chaotic system, one expects $S_{\gamma}=S_{\gamma'}$ only if $\gamma$ and $\gamma'$ are identical or related by symmetry -- the simplest (GUE) case is to assume there is no symmetry, so $\gamma = \gamma'$. Berry showed \cite{berry1985semiclassical} that the sum over $\gamma = \gamma'$ leads to the linear ramp $t/2\pi$ in the GUE spectral form factor. The factor of $t$ comes from the possibility of a relative time shift between $\gamma$ and $\gamma'$. In the GOE case, there is additional time reversal symmetry $\mathcal{T}^2=1$, and the diagonal sum also contains the time-reversed orbit $\gamma'=\mathcal{T}\gamma$. This leads to an additional factor of two, so $K_E(t) \sim t/\pi$.
The off-diagonal contributions are weighted by an oscillatory factor $e^{\i (S_{\gamma}-S_{\gamma'})}$. Encounter theory is a way of identifying systematic classes of orbits such that the difference in actions is small. These consist of orbit pairs $\gamma,\gamma'$ that closely follow each other except for small off-shell regions known as encounters. The impressive achievement of encounter theory is that a sum over such encounters reproduces the fact that the GUE $K_E(t)$ has no corrections before the plateau, and the GOE $K_E(t)$ has a particular expansion
\begin{align}
K_E^{(GOE)}(t)&={t\over \pi}-{t\over 2\pi}\log\left[1+{t\over \pi \rho(E)}\right]\\ &={t\over \pi}-{2t^2\over\rho(E)(2\pi)^2}+{2 t^3\over \rho(E)^2(2\pi)^3}-...
\end{align}
Berry's analysis explains the linear term. The quadratic term was explained by Sieber and Richter \cite{Sieber_2001}, the cubic term was explained in \cite{heusler2004universal}, and the full series was reproduced in \cite{muller2005periodic}.
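The small-$t$ coefficients quoted in the second line can be verified symbolically; a short sympy check (variable names are ours):

```python
import sympy as sp

t, rho = sp.symbols('t rho', positive=True)

# GOE spectral form factor before the plateau, expanded at small t
K_goe = t/sp.pi - t/(2*sp.pi)*sp.log(1 + t/(sp.pi*rho))
series = sp.series(K_goe, t, 0, 4).removeO()

expected = t/sp.pi - 2*t**2/(rho*(2*sp.pi)**2) + 2*t**3/(rho**2*(2*sp.pi)**3)
print(sp.simplify(series - expected))  # 0: the quoted coefficients agree
```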
\subsubsection{Sieber-Richter pair}
The simplest example of an encounter is the Sieber-Richter pair or ``2-encounter'' which exists in a theory with time-reversal symmetry. The pair of orbits $\gamma,\gamma'$ can be sketched in configuration space as follows (this figure and (\ref{fig223}) were modified from \cite{muller2005periodic} with permission):
\begin{equation}\label{SRfig}
\includegraphics[valign = c, scale = 0.4]{figures/SR.pdf}
\end{equation}
We will focus on the case with only two degrees of freedom.
The key feature is that the orbit $\gamma$ (the red/solid orbit) returns close to itself at some point $t_1$ along the orbit. This point is referred to as an encounter, and the partner orbit $\gamma'$ differs from $\gamma$ only in the vicinity of the encounter (and as a consequence it is time-reversed in one of the two ``stretches'' outside the encounter region).
The encounter can be characterized by the deviation of the two nearby segments of $\gamma$, and it is convenient to decompose this deviation into the stable and unstable directions $s,u$. Within the encounter region, the $s,u$ variables decay and grow exponentially in time, with a Lyapunov exponent $\lambda$. This determines the duration of the encounter region:
\begin{equation}
t_{enc}={1\over \lambda}\log {c^2\over |s u|},
\end{equation}
where $c$ characterizes the regime of validity of the linearized analysis near the encounter.\footnote{At high energies, the result does not depend on the precise value of $c$.} In the two regions outside the encounter (called stretches), the two orbits follow each other closely, up to time reversal. This means that the difference in actions $S_\gamma - S_{\gamma'}$ comes only from the encounter region itself. This difference in action is determined by the $s,u$ variables and takes the form\footnote{This is reminiscent of the action that controls out-of-time-order correlators \cite{Stanford:2021bhl,Gu:2021xaj}.}
\begin{equation}
S_{\gamma}-S_{\gamma'}=s u.
\end{equation}
The probability that orbit $\gamma$ will have such an encounter is determined by ergodicity, which gives a uniform measure in the phase space ${1\over (2\pi)^2\rho_E}\d t_1 \d s \d u$. The Sieber-Richter pair's contribution to the spectral form factor can then be evaluated using the following integral:
\begin{equation}\label{srint}
K_{E}(t)\supset {t\over \pi }{1\over (2\pi)^2\rho(E)}\int^{c}_{-c} \d s \int_{-c}^{c}\d u{t\over 2t_{enc}} \int_{0}^{t-2t_{enc}} \d t_1 e^{\i su}.
\end{equation}
The factors in the integral are explained as follows:
\begin{enumerate}
\item The overall ${t\over \pi }=2\times {t\over 2\pi }$ factor reflects the relative time shift between $\gamma$ and $\gamma'$ and the time reversal symmetry. This part is the same as in the linear ramp.
\item The additional ${t\over 2t_{enc}}$ factor reflects the fact that the encounter region can be anywhere along the orbit: $t$ comes from integrating over the time of the reference point; ${1\over 2t_{enc}}$ fixes an over-counting from the choice of the reference point inside the encounter (changing this reference point would rescale $s$ and $u$ oppositely).
\item The integration range of the time where the encounter takes place, $t_1$, is upper bounded by $t-2t_{enc}$ to ensure the existence of the encounter region.
\end{enumerate}
The integral (\ref{srint}) gives:
\begin{equation}\label{eqn:k1/2}
K_E(t)\supset {t\over \pi }{t\over (2\pi)^2\rho(E)}\int \d s \d u e^{\i su}({t\over 2t_{enc}}-1)\approx-{2t^2\over (2\pi)^2\rho(E)}.
\end{equation}
Naively the answer should be of order $t^3$, but this term is proportional to $\int \d s \d u\, e^{\i s u}/\log(c^2/|su|) \approx 0$, and the nonzero answer comes from the subleading $t^2$ term.
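The surviving $t^2$ term uses the fact that $\int_{-c}^{c}\d s \int_{-c}^{c}\d u\, e^{\i s u} \to 2\pi$ at large $c$, while the $1/t_{enc}$ piece averages to zero. A pure-Python sanity check of the first statement (the $u$-integral is done analytically; the value of $c$ is an arbitrary choice):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

c = 50.0
# int_{-c}^{c} du e^{i s u} = 2 sin(c s)/s  (removable singularity at s = 0)
integrand = lambda s: 2 * math.sin(c * s) / s if s != 0.0 else 2 * c
total = simpson(integrand, -c, c, 200000)
print(total, 2 * math.pi)  # the double integral approaches 2*pi as c grows
```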
\subsubsection{Cancellation of encounters in GUE}
The Sieber-Richter pair does not contribute in a theory without time-reversal symmetry (GUE case) because the portions of the orbits in the right stretch of figure (\ref{SRfig}) would have no reason to follow each other. Instead, in the GUE case the leading encounters (in the ${1\over \rho_E}$ expansion) are a configuration with two 2-encounters, and a configuration with a single 3-encounter where three segments of the orbit simultaneously approach each other:
\begin{equation}\label{fig223}
\includegraphics[valign = c, scale = .382]{figures/2_2_and_3_encounters.pdf}
\end{equation}
The configuration with two 2-encounters (denoted $(2)^2$) is a straightforward generalization of the single 2-encounter. It contains two pairs of $(s_i,u_i)$ soft modes with encounter times $t_{enc}^i$ and three zero modes $t_i$ labelling the stretch lengths.
Its contribution $K_{E,(2)^2}(t)$ to the spectral form factor is given by the following integral:
\begin{equation}
K_E(t)\supset K_{E,(2)^2}(t)= {t\over 2\pi}{1\over (4\pi^2\rho(E))^2}\int \prod_{i=1}^2\d s_i \d u_i e^{\i \sum_{i=1}^2s_i u_i} {t\over 4 \prod_{i=1}^2t_{enc}^i} {(t-\sum_{i=1}^22t_{enc}^i)^3\over 6},
\end{equation}
where the ${(t-\sum_{i=1}^22t_{enc}^i)^3\over 6}$ comes from the integration of the three zero modes $t_i$. As before, the $s_i,u_i$ integral is nonzero only for the part of the integrand that is independent of the $t_{enc}^i$.
This kills the $t^5$ and $t^4$ powers in the two 2-encounter contribution, leaving only a $t^3$ piece:
\begin{equation}\label{twoEncounters}
K_{E,(2)^2}(t)={t^3\over (2\pi)^3\rho(E)^2}.
\end{equation}
The 3-encounter (denoted as $(3)^1$) is a limiting case of the two 2-encounters where one of the stretches shrinks to zero.
It can be thought of as a sequential swap of pairs of trajectories where each swap leads to an action difference $s_i u_i$ between the swapped trajectories. These deviations $(s_i,u_i)$ can be related to the deviations between nearest neighbor trajectories $(\hat s_i, \hat u_i)$\footnote{See Sec.~II of \cite{muller2005periodic} for a detailed discussion.}, which determine the encounter duration $t_{enc}={1\over \lambda }\log{c^2\over \text{max}(\hat s_i)\text{max}(\hat u_i)}$. The contribution to the spectral form factor is
\begin{align}
K_E(t)\supset K_{E,(3)^1}(t)&={t\over 2\pi}{1\over (4\pi^2\rho(E))^2}\int \prod_{i=1}^2\d s_i \d u_i e^{\i \sum_{i=1}^2s_i u_i} {t\over 3 t_{enc}} {(t-3t_{enc})^2\over 2}\\ &=-{t^3\over (2\pi)^3\rho(E)^2}.\label{threeencounter}
\end{align}
In particular, these two contributions cancel (although GOE variants of them that include the possibility of time-reversed stretches do not cancel). In \cite{muller2005periodic} it was shown that this cancellation between the GUE encounters continues to hold to all orders in the ${1\over \rho_E}$ expansion, reproducing the RMT expectation that the ramp is exact before the plateau time.
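The cancellation between (\ref{twoEncounters}) and (\ref{threeencounter}) can be reproduced with a few lines of symbolic bookkeeping. The rule encoded below -- only terms with no $t_{enc}$ dependence survive the $s,u$ integrals, each $(s_i,u_i)$ pair contributing a factor of $2\pi$ -- is the standard encounter-theory rule, not a new derivation:

```python
import sympy as sp

t, t1, t2, rho = sp.symbols('t t1 t2 rho', positive=True)

def surviving(expr, tencs):
    # Keep only the monomials with no dependence on the encounter times:
    # int ds du e^{i s u} f(t_enc) vanishes unless f is constant.
    out = sp.Integer(0)
    for term in sp.expand(expr).as_ordered_terms():
        if not any(term.has(tc) for tc in tencs):
            out += term
    return out

prefactor = (t/(2*sp.pi)) / (4*sp.pi**2*rho)**2
# each surviving (s_i, u_i) pair contributes a factor of 2*pi
two_two = prefactor * (2*sp.pi)**2 * surviving(t/(4*t1*t2)*(t - 2*t1 - 2*t2)**3/6, [t1, t2])
three_one = prefactor * (2*sp.pi)**2 * surviving(t/(3*t1)*(t - 3*t1)**2/2, [t1])

print(sp.simplify(two_two))              # t^3/((2 pi)^3 rho^2)
print(sp.simplify(two_two + three_one))  # 0: the GUE encounters cancel
```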
\subsection{Sieber-Richter pair in JT}\label{section:SRJT}
In this section, we will explain the analog of the Sieber-Richter pair in JT gravity. As explained in \cite{Altland:2020ccq}, this corresponds to the topology of a cylinder with a crosscap inserted. We are grateful to Adel Rahman for collaboration on the calculations in this section.
The cylinder with a crosscap inserted corresponds to the following quotient of hyperbolic space:
\begin{equation}
\includegraphics[valign = c, scale = 1.1]{figures/xcap.pdf}\label{embeddingdiagram}
\end{equation}
The identification that defines the quotient is specified by gluing together the two geodesics with single arrows and also gluing together the two geodesics with double arrows, keeping in mind the orientations of the arrows.
The wiggly solid red segments form a single $S^1$ boundary, and have renormalized length $\beta_1$, which will be continued to $\beta + \i t$. Similarly, the wiggly dashed black segments form a single $S^1$ of renormalized length $\beta_2$, which will be continued to $\beta - \i t$.
The two curves labeled $b_1$ together form a circular geodesic of length $b_1$, and the two curves labeled $b_2$ form a circular geodesic of length $b_2$. The two lines labeled $a,a'$ form two circular geodesics that intersect at a point. These are ``one-sided'' geodesics, meaning that a neighborhood of either one is a Mobius strip, rather than a cylinder. Hyperbolic geometry imposes one constraint on these parameters \cite{Norbury}:
\begin{equation}\label{CCrelation}
\sinh(\frac{a}{4})\sinh(\frac{a'}{4})=\cosh(\frac{b_1+b_2}{4})\cosh(\frac{b_1-b_2}{4}).
\end{equation}
The geometry has a $\mathbb{Z}_2$ mapping class group that interchanges $a\leftrightarrow a'$. A convenient way to fix this is to parametrize the geometry by $a$ and to require that $a < a'$. This amounts to requiring $a < a^*$, where
\begin{equation}\label{CCcutoff}
\sinh^2(\frac{a^*}{4})=\cosh(\frac{b_1+b_2}{4})\cosh(\frac{b_1-b_2}{4}).
\end{equation}
The path integral of JT gravity on this space is then
\begin{equation}
2\times e^{-S_0}\int_0^\infty b_1\d b_1 b_2\d b_2 Z_{\text{Tr}}(\beta_1,b_1)Z_{\text{Tr}}(\beta_2,b_2)\int_{\epsilon}^{a^*} \frac{\d a}{2\tanh(\frac{a}{4})}.
\end{equation}
Let's explain each of the factors in this expression. The factor of two is from the possibility of an orientation reversal on going from one boundary to the other. The factor of $e^{-S_0}$ is from the topological weighting $e^{S_0\chi}$ where $\chi = -1$ is the Euler characteristic of the crosscap cylinder. The integral over $b_1$ comes with a factor of $b_1$ that represents the integral of the Weil-Petersson measure $\d b \wedge \d\tau$ over the twist parameter $\tau$. The factors of $Z_{\text{Tr}}(\beta,b)$ represent the integral over the boundary wiggles. Finally, the $a$ parameter is integrated with the crosscap measure \cite{Norbury,Gendulphe,Stanford:2019vob} with an upper limit specified by $a^*$ to account for the mapping class group. Note that the integral would be divergent near $a = 0$, which represents the fact that in JT gravity, the path integral on non-orientable surfaces is divergent. We regularized the integral by cutting it off at $\epsilon$, and we will see that this divergence does not survive the $\tau$-scaling limit.\footnote{This regularization corresponds e.g.~to studying the $(2,p)$ minimal string with large but finite $p$.}
To obtain the contribution to the spectral form factor, we continue the parameters $\beta_1,\beta_2$ to $\beta \pm \i t$:
\begin{equation}\label{canonicalCC}
\begin{aligned}
K_{\beta}(t)&\supset 2\times e^{-S_0}\int_0^\infty b_1\d b_1 b_2\d b_2\int_{\epsilon}^{a^*} \frac{\d a}{2\tanh(\frac{a}{4})} Z_{\text{Tr}}(\beta+\i t,b_1)Z_{\text{Tr}}(\beta-\i t,b_2) \\
&= 2\times e^{-S_0}\int_0^\infty b_1\d b_1 b_2\d b_2\int_{\epsilon}^{a^*} \frac{\d a}{2\tanh(\frac{a}{4})} \frac{e^{-b_1^2/(4(\beta+\i t))}}{\sqrt{4\pi (\beta+\i t)}}\frac{e^{-b_2^2/(4(\beta-\i t))}}{\sqrt{4\pi (\beta-\i t)}} \\
&\approx \frac{e^{-S_0}}{2\pi t}\int_0^\infty b_1\d b_1 b_2\d b_2\int_{\epsilon}^{a^*} \frac{\d a}{2\tanh(\frac{a}{4})} \exp\left\{\i\frac{b_1^2-b_2^2}{4t} - \frac{\beta}{4t^2}(b_1^2+b_2^2)\right\}.
\end{aligned}
\end{equation}
Here, the lower bound $\epsilon$ of the integration range of $a$ is the cutoff that regularizes the crosscap integral mentioned before -- it will drop out below. In the last step we used $t \gg \beta$.
Because the answer we expect is proportional to ${1\over \rho(E)}$, it is convenient to go to the microcanonical spectral form factor, using
\begin{align}
\int {\d\beta\over 2\pi \i} e^{2\beta E}Z_{\text{Tr}}(\beta+\i t,b_1)Z_{\text{Tr}}(\beta-\i t,b_2) &\approx {1\over 4\pi t} \int {\d \beta\over 2\pi \i} e^{2\beta E}\exp\left\{\i\frac{b_1^2-b_2^2}{4t} - \frac{\beta}{4t^2}(b_1^2+b_2^2)\right\} \\
&={1\over 4\pi t} \exp\left\{\i\frac{b_1^2-b_2^2}{4t}\right\}\delta\left({b_1^2+b_2^2\over 4t^2}-2E \right).
\end{align}
We can also evaluate the integral over $a$, getting\footnote{We choose to include in $V$ the factor of two from the sum over orientation reversal of one of the trumpets.}
\begin{equation}\label{eqn:v1/2_2}
V_{1/2}(b_1,b_2) = 2\times\int_\epsilon^{a^*}\frac{\d a}{2\tanh(\frac{a}{4})} = 2\log \cosh\frac{b_1+b_2}{4} + 2\log \cosh\frac{b_1-b_2}{4} - 4\log\frac{\epsilon}{4}.
\end{equation}
We can now evaluate the microcanonical spectral form factor for fixed $E$ and large $t$:
\begin{align}
K_E(t)&\supset \frac{e^{-S_0}}{4\pi t} \int_0^\infty b_1\d b_1 b_2\d b_2 V_{1/2}(b_1,b_2) \exp\left\{\i\frac{b_1^2-b_2^2}{4t}\right\}\delta\left({b_1^2+b_2^2\over 4t^2}-2E \right)\label{structure}\\
&\approx e^{-S_0} {2t^2\sqrt{E}\over \pi} \int_{-\infty}^\infty \d (\delta b) e^{\i \sqrt{E} \delta b} (\log \cosh(\sqrt{E}t)+\log \cosh{\delta b\over 4} - 2\log\frac{\epsilon}{4})\label{deltab}\\
&=e^{-S_0} {2t^2\sqrt{E}\over \pi} \int_{-\infty}^\infty \d (\delta b) e^{\i \sqrt{E} \delta b}\log \cosh{\delta b\over 4}\\
&=-{t^2\over 2\pi ^2 \rho(E)}.\label{expdecay}
\end{align}
This matches the encounter result (\ref{eqn:k1/2}). In the last step we used that in JT gravity, the density of states is $\rho_0(E) = \sinh(2\pi \sqrt{E})/(2\pi)^2$.
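The Fourier transform used in the last step can be checked against the standard result $\int e^{\i k x}\,\text{sech}^2(ax)\,\d x = \pi k/(a^2\sinh(\pi k/(2a)))$: differentiating $\log\cosh(x/4)$ twice gives $\frac{1}{16}\text{sech}^2(x/4)$, so distributionally $\int e^{\i k x}\log\cosh(x/4)\,\d x = -\pi/(k\sinh(2\pi k))$, which at $k=\sqrt{E}$ produces the exponential suppression $\propto 1/\sinh(2\pi\sqrt{E})$ of (\ref{expdecay}). A pure-Python numerical check of the sech$^2$ transform (the value of $k$ is an arbitrary choice):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

k = 0.7
# int_{-inf}^{inf} e^{ikx} sech^2(x/4) dx = 2 int_0^inf cos(kx) sech^2(x/4) dx
f = lambda x: math.cos(k * x) / math.cosh(x / 4) ** 2
numeric = 2 * simpson(f, 0.0, 120.0, 24000)
exact = 16 * math.pi * k / math.sinh(2 * math.pi * k)  # a = 1/4 in the formula above
print(numeric, exact)
```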
We will now make a few remarks connecting this calculation to the encounter picture (see appendix \ref{app:soft} for more details). The $b_1,b_2$ parameters can be regarded as analogous to the lengths of the periodic orbits, with difference of orbit actions $\Delta S = S_\gamma - S_{\gamma'}$ analogous to $(b_1^2-b_2^2)/4t \approx \sqrt{E}\delta b$. With this understanding we can write the moduli space volume in JT as
\begin{equation}
V_{1/2}(b_1,b_2) = 2\sqrt{E} t + 2\log\cosh\frac{\Delta S}{4\sqrt{E}} + \text{const.}
\end{equation}
In periodic orbit theory, we should compare this moduli space volume to the integral over the parameters of the encounter with fixed action difference $\Delta S$:
\begin{equation}
\int_{-c}^{c}\d s \d u \delta(su-\Delta S) {t-2t_{enc}\over t_{enc}}=\lambda\left(t-2t_{enc}\right)=\lambda t+2\log{|\Delta S| \over c^2}.
\end{equation}
The Lyapunov exponent $\lambda$ from periodic orbit theory should be compared to the JT gravity chaos exponent $2\pi/\beta = 2\sqrt{E}$, so the terms linear in $t$ match. However, these terms drop out after integrating over $\Delta S$ with weighting $e^{\i \Delta S}$. Instead, the answer is determined by the subleading terms, and in particular by the locations of their singularities in the upper half-plane for $\Delta S$. In the JT case, the closest singularity to the real axis is at $\Delta S = 2\pi\i \sqrt{E}$, leading to the exponential suppression of (\ref{expdecay}).
\section{Beyond encounters}\label{SectionFour}
In a GUE-like theory such as orientable JT gravity, the analog of encounters are expected to cancel exactly at fixed energy. In this section, we will discuss a convenient decomposition of the moduli space that separates the contributions of different encounters. This will make it possible to understand the encounter contributions and their cancellation, as well as the failure of their cancellation at low energies.
Instead of JT gravity, we will work with the simpler Airy model, which may be viewed as the low energy or low temperature limit of JT gravity, in which the $\sinh(2\pi\sqrt{E})$ density of states is approximated by its small-$E$ behavior $\rho_0(E) \propto \sqrt{E}$. In this limit, the lengths of the asymptotic boundaries, as well as the lengths of any internal closed geodesics, go to infinity. One can see this by taking this limit in the JT gravity formula for partition functions as trumpets integrated against the Weil-Petersson (WP) volume
\begin{equation}\label{JTpartitionfunctions}
\langle Z(\beta_1)\dots Z(\beta_n)\rangle_{\text{JT}}\supset e^{\chi S_0} \int_0^\infty b_1 \d b_1 \dots \int_0^\infty b_n \d b_n \; \frac{e^{-\frac{b_1^2}{4\beta_1}}}{\sqrt{4\pi \beta_1}} \dots \frac{e^{-\frac{b_n^2}{4\beta_n}}}{\sqrt{4\pi \beta_n}} V_{g,n}(b_1,\dots, b_n).
\end{equation}
Partition functions for the Airy model can be obtained from the JT answers by an infinite rescaling of $\beta$, accompanied by a renormalization of $S_0$
\begin{equation}
\langle Z(\beta_1)\dots Z(\beta_n)\rangle_{\text{Airy}}= \lim_{\Lambda\rightarrow \infty} \Lambda^{\frac{3}{2}\chi} \langle Z(\Lambda \beta_1)\dots Z(\Lambda \beta_n)\rangle_{\text{JT}}
\end{equation}
To take this limit in (\ref{JTpartitionfunctions}), we rescale the $b_i$ by $\sqrt{\Lambda}$. The WP volumes are polynomials in the $b_i$, with degree $6g+2n-6$. We define the Airy volumes as
\begin{equation}
V_{g,n}^{\text{Airy}}(b_1\dots b_n) = \lim_{\Lambda\rightarrow \infty} \Lambda^{3-3g-n} V_{g,n}(\sqrt{\Lambda} b_1,\dots,\sqrt{\Lambda} b_n).
\end{equation}
These Airy volumes are then homogeneous polynomials in the $b_i$ of degree $6g+2n-6$, given by the leading powers of the full WP volumes. The Airy partition functions can be written as trumpets integrated against the Airy volumes, with $S_0 \rightarrow S_0 +\frac{3}{2}\log(\Lambda)$.
In the limit where the boundary lengths $b_1\dots b_n$ become infinitely long, the surfaces counted by the WP volumes simplify. The Gauss-Bonnet theorem implies that a constant negative curvature surface with geodesic boundaries has a fixed volume proportional to its Euler character. As the lengths of the boundaries are going to infinity, the surfaces must become infinitely thin strips in order for the volume to remain fixed.
This thin strip limit allows for a simple decomposition of the moduli space of these surfaces, described by Kontsevich \cite{kontsevich1992intersection}, which will connect in a transparent way to the encounters discussed in the previous section and to the description of the Airy model using a double-scaled matrix integral. We now briefly review this decomposition, following \cite{DoThesis}.
\subsection{Kontsevich's decomposition of moduli space}
In the thin strip (Airy) limit, the moduli space can be described as a sum over trivalent ribbon graphs, together with an integral over the lengths of the edges that make up the graphs, subject to the constraint that the boundaries have lengths $b_i$:
\begin{equation}
V^{\text{Airy}}_{g,n}(b_1\dots b_n) =\sum_{\Gamma \in \Gamma_{g,n}}\frac{2^{2g-2+n}}{|\text{Aut}(\Gamma)|}\prod_{k = 1}^E \int_0^{\infty} \d l_k \prod_{i=1}^n \delta(b_i - \sum_{k = 1}^E n^i_k l_{k})
\end{equation}
Here $E = 6g-6+3n$ is the number of edges in the graph, $l_k$ is the length of edge $k$, and $n^i_k \in \{0,1,2\}$ is the number of sides of edge $k$ that belong to boundary $i$.
The Laplace transform of this expression is a little simpler:
\begin{align}\label{LapVMat}
\tilde{V}^{\text{Airy}}_{g,n}(z_1\dots z_n) &\equiv \int_0^\infty \prod_{i = 1}^n\left[\d b_i e^{-b_i z_i}\right] V^{\text{Airy}}_{g,n}(b_1\dots b_n) \\ &=\sum_{\Gamma \in \Gamma_{g,n}}\frac{2^{2g-2+n}}{|\text{Aut}(\Gamma)|} \prod_{k=1}^{6g-6+3n}\frac{1}{z_{l(k)}+z_{r(k)}}.
\end{align}
Here $\Gamma_{g,n}$ is the set of trivalent ribbon graphs with genus $g$ and $n$ boundaries, constructed from $E=6g-6+3n$ edges and $V=4g-4+2n$ trivalent vertices. The $k$ variable runs over the $6g-6+3n$ edges, and the $l(k)\in \{1\cdots n \}$ index labels which boundary of the Riemann surface the left side of the ribbon belongs to. Similarly, $r(k)$ labels which boundary the right side of the ribbon belongs to.
We are interested in the case where there are two boundaries, so we will draw ribbon graphs with ribbon edges denoted by solid red ($1$) and dashed black ($2$) lines. Then a $11$ edge comes with a factor of $1/(2z_1)$, a $22$ edge comes with a factor of $1/(2z_2)$, and a $12$ edge comes with a factor of $1/(z_1+z_2)$. These ribbon graphs can be orientable or non-orientable, depending on what variety of JT gravity or Airy gravity we are interested in.\footnote{In the non-orientable case, one also has additional factors of two due to the possibility of inserting orientation reversing operators along particular cycles.} An example in the non-orientable case is
\begin{equation}
\includegraphics[scale = .65, valign = c]{figures/exampleGraph.pdf}
\end{equation}
This graph has two boundaries and genus one-half, and together with three other graphs discussed in section \ref{genusonehalfsubsection} below, it gives the Kontsevich-graph description of the Sieber-Richter two-encounter.
We will also consider the graphs with two boundaries and genus one. To enumerate the graphs, a useful fact is that all of the orientable graphs for fixed $(g,n)$ can be obtained from a single graph by repeatedly applying the cross operation (or Whitehead collapse) \cite{Whitehead1936equivalent,Penner1988perturbative}:
\begin{equation}\label{crossOp}
\includegraphics[valign = c, scale = .7]{figures/cross.pdf}
\end{equation}
This is consistent with the fact that the moduli space $\overline{\mathcal{M}}_{g,n}$ is a connected space: if we back off of the Airy limit of JT gravity, then the strips have finite width, and the operation (\ref{crossOp}) is a smooth transition.
\subsection{Genus one-half}\label{genusonehalfsubsection}
In this section we will illustrate the connection between encounters and Kontsevich graphs by studying the example of genus one-half, with two boundaries. In this case, the volume of the moduli space is
\begin{equation}\label{Airygenusonehalfvol}
V^{\text{Airy}}_{\frac{1}{2},2}(b_1,b_2) = \text{Max}(b_1,b_2).
\end{equation}
This can be obtained by taking the large $b_1,b_2$ limit of the JT gravity answer (\ref{eqn:v1/2_2}). To take this limit, one drops the constant piece and replaces $\log \cosh(\frac{b_1\pm b_2}{4})$ with $\frac{1}{4}|b_1\pm b_2|$.
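The limit is easy to verify directly: $\log\cosh X = |X| - \log 2 + O(e^{-2|X|})$, and the exact identity $\frac{x+y}{2} + \frac{|x-y|}{2} = \text{Max}(x,y)$ then gives (\ref{Airygenusonehalfvol}). A small numerical illustration (the test values of $b_1,b_2$ are arbitrary):

```python
import math, random

random.seed(0)
for _ in range(100):
    b1, b2 = random.uniform(0.1, 50), random.uniform(0.1, 50)
    # exact identity behind the limit: (x + y)/2 + |x - y|/2 = max(x, y)
    assert abs((b1 + b2) / 2 + abs(b1 - b2) / 2 - max(b1, b2)) < 1e-12

# large-b behavior of the JT volume: drop the constants, log cosh -> |.|/4
b1, b2 = 400.0, 130.0
V_jt_growing = 2 * math.log(math.cosh((b1 + b2) / 4)) + 2 * math.log(math.cosh((b1 - b2) / 4))
V_airy = max(b1, b2)
print(V_jt_growing + 4 * math.log(2), V_airy)  # agree up to exponentially small terms
```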
There are four Kontsevich graphs with two boundaries and genus one half:
\begin{equation}
\includegraphics[scale = .55,valign = c]{figures/g=halfgraphs.pdf}
\end{equation}
Here the graphs are labeled by $(n_{11},n_{22})$, the number of $11$ and $22$ propagators. The contributions of these graphs to $\tilde{V}(z_1,z_2)$ are
\begin{align}
(1,0):\hspace{10pt} \frac{c_{1,0}}{z_1 (z_1+z_2)^2} &, \hspace{20pt} (0,1):\hspace{10pt} \frac{c_{0,1}}{z_2 (z_1+z_2)^2} \\
(2,0):\hspace{10pt} \frac{c_{2,0}}{z_1^2 (z_1+z_2)} &, \hspace{20pt} (0,2):\hspace{10pt} \frac{c_{0,2}}{z_2^2 (z_1+z_2)}
\end{align}
where the coefficients $c_{0,1} = c_{1,0}$ and $c_{0,2} = c_{2,0}$ are determined by the symmetry factor of the graph, together with a factor of two from the possibility of orientation reversal along one boundary.
Rather than computing the symmetry factors, we can compute $c_{1,0}$ and $c_{2,0}$ indirectly by matching to the volume (\ref{Airygenusonehalfvol}). To find the contribution of each graph to the volume, we take the inverse Laplace transform, for example
\begin{equation}
(1,0):\hspace{10pt} \int_{\gamma+i\mathbb{R}}\frac{\d z_1}{2\pi i} \frac{\d z_2}{2\pi i}e^{b_1 z_1 +b_2 z_2} \frac{c_{1,0}}{z_1 (z_1+z_2)^2} = c_{1,0} b_2 \theta(b_1-b_2).
\end{equation}
Together with a similar term from $(0,1)$, this gives
\begin{equation}\label{genusonehalfvolsfirstgraph}
(1,0)+(0,1) = c_{1,0} \text{min}(b_1,b_2).
\end{equation}
Similarly,
\begin{equation}
(2,0) + (0,2) = c_{2,0} |b_1-b_2|.
\end{equation}
To match to (\ref{Airygenusonehalfvol}) we conclude that $c_{1,0} = c_{2,0} = 1$.
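The inverse Laplace transforms used above can be cross-checked by transforming in the forward direction; a sympy sketch (variable names are ours):

```python
import sympy as sp

b1, b2, z1, z2 = sp.symbols('b1 b2 z1 z2', positive=True)

# (1,0) graph: forward Laplace transform of b2*theta(b1 - b2)
inner = sp.integrate(sp.exp(-b1 * z1), (b1, b2, sp.oo))              # = exp(-b2*z1)/z1
lt_10 = sp.integrate(b2 * sp.exp(-b2 * z2) * inner, (b2, 0, sp.oo))

# (2,0) graph: forward Laplace transform of (b1 - b2)*theta(b1 - b2)
inner2 = sp.integrate((b1 - b2) * sp.exp(-b1 * z1), (b1, b2, sp.oo))  # = exp(-b2*z1)/z1**2
lt_20 = sp.integrate(sp.exp(-b2 * z2) * inner2, (b2, 0, sp.oo))

print(sp.simplify(lt_10 - 1/(z1*(z1 + z2)**2)))  # 0
print(sp.simplify(lt_20 - 1/(z1**2*(z1 + z2))))  # 0
```

Adding the $b_1\leftrightarrow b_2$ partners, $\text{min}(b_1,b_2) + |b_1-b_2| = \text{Max}(b_1,b_2)$, consistent with the volume above.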
The corresponding contributions to the spectral form factor
\begin{equation}\label{b1b2integral}
\langle Z(\beta_1)Z(\beta_2)\rangle \supset e^{-S_0}\int \frac{b_1 \d b_1}{\sqrt{4\pi \beta_1}}\frac{b_2 \d b_2}{\sqrt{4\pi \beta_2}} e^{-\frac{b_1^2}{4\beta_1} - \frac{b_2^2}{4\beta_2}}V(b_1,b_2)
\end{equation}
are then (keeping the leading power of $t$)
\begin{align}
(1,0) + (0,1) &= e^{-S_0}\frac{t^2}{\sqrt{2\pi\beta}},\\
(2,0) + (0,2) &= -2e^{-S_0}\frac{t^2}{\sqrt{2\pi\beta}}.
\end{align}
The sum of these contributions is $-e^{-S_0}t^2/\sqrt{2\pi\beta}$, the Laplace transform of the microcanonical answer $-e^{-S_0} t^2/(\pi\sqrt{E})$, which matches the two-encounter contribution (\ref{eqn:k1/2}) in the special case of the Airy density of states $\rho(E)=\frac{\sqrt{E}}{2\pi}$. Of course, this follows from the low-energy limit of the match we previously found in JT gravity. The interesting feature is that both classes of graphs contribute at the same order, and we have to sum both in order to reproduce the answer from the encounter.
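Both consistency statements (stripping the overall factor of $e^{-S_0}$) can be verified symbolically:

```python
import sympy as sp

E, beta, t = sp.symbols('E beta t', positive=True)

# Laplace transform (int dE e^{-2 beta E}) of the microcanonical answer -t^2/(pi*sqrt(E))
lap = sp.integrate(sp.exp(-2*beta*E) * (-t**2/(sp.pi*sp.sqrt(E))), (E, 0, sp.oo))
print(sp.simplify(lap + t**2/sp.sqrt(2*sp.pi*beta)))  # 0

# and -2t^2/((2 pi)^2 rho) with rho = sqrt(E)/(2 pi) reproduces -t^2/(pi sqrt(E))
rho = sp.sqrt(E)/(2*sp.pi)
print(sp.simplify(-2*t**2/((2*sp.pi)**2*rho) + t**2/(sp.pi*sp.sqrt(E))))  # 0
```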
The $(1,0)$ and $(0,1)$ graphs naively resemble a two-encounter; if one shrinks away the $11$ (or $22$) propagator, we find a graph with only 12 propagators and a quartic vertex. The $12$ propagators correspond to nearly parallel stretches of the $1$ and $2$ geodesic boundaries on the surface, so these graphs represent contributions for which the two boundaries are nearly parallel in a pattern that matches the Sieber-Richter pair. Of course, in computing the spectral form factor one glues on trumpets to the surface with geodesic boundaries, but the asymptotic boundaries also remain almost parallel for the stretches. Along these stretches, the geometry locally looks like the double-cone (or a non-orientable ``twisted'' double-cone).
The $(2,0)$ and $(0,2)$ graphs are not as obviously connected to encounter theory, but they do represent a small part of the moduli space integral that is analogous to the $s,u$ integration in the encounter. To see which part of moduli space it corresponds to, consider the $a$ geodesic. In the Airy limit, this is simply the shortest loop on the Kontsevich graph that includes the twisted edge. For the $(2,0)$ and $(0,2)$ graphs, this means that $a$ is the twisted edge itself, which forms a loop shorter than $|b_1-b_2|/2$. For the $(1,0)$ and $(0,1)$ graphs, $a$ corresponds to a loop that includes the twisted edge plus the shorter untwisted edge, with total length longer than $|b_1-b_2|/2$. So the two classes of graphs divide the moduli space up as
\begin{align}
V^{(\text{Airy})}_{\frac{1}{2},2}(b_1,b_2) &= 2\int_0^{a^*=\frac{1}{2}\text{Max}(b_1,b_2)} \d a \\ &= 2\underbrace{\int_{\frac{|\delta b|}{2}}^{a^*=\frac{1}{2}\text{Max}(b_1,b_2)} \d a}_{(1,0)+(0,1)= \frac{1}{2} \text{Min}(b_1,b_2)}+ 2\underbrace{\int_0^{\frac{|\delta b|}{2}}\d a}_{(2,0)+(0,2)=\frac{|\delta b|}{2}}.
\end{align}
By splitting the integral into two parts, we introduce ``fictitious'' endpoint contributions, proportional to $|\delta b|$, which cancel between the two graphs.
We can understand the geometry a bit better by fattening the Kontsevich graphs up and connecting them to the embedding space diagram (\ref{embeddingdiagram}). Here we will focus on the part of the embedding diagram bounded by the $b_1$, $b_2$ geodesics, removing the asymptotic trumpets. The two classes of Kontsevich graphs correspond to two limiting embedding space diagrams, with the $a$, $a'$ geodesics shown:
\begin{equation*}\label{limitingdiagrams}
\includegraphics[valign = c, width = \textwidth]{figures/xcap2.pdf}
\end{equation*}
On the left we start with a diagram similar to the middle of (\ref{embeddingdiagram}). A limiting case of this diagram represents a strip-like geometry. Upon making the identifications indicated by the arrows, we end up with the $(1,0)$ graph. On the right we begin with a somewhat different-looking embedding space diagram, which limits to the $(2,0)$ graph.
Though the two embedding diagrams that we start with look somewhat different, we can see that their topology is the same after making the indicated identifications. To see this more clearly, we may cut the embedding space diagram corresponding to the $(2,0)$ graph, then glue a pair of the identified edges to end up with an embedding space diagram resembling the $(1,0)$ diagram.
\begin{equation}
\includegraphics[valign = c, scale = 1.2]{figures/xcap3.pdf}
\end{equation}
After cutting and gluing the $(2,0)$, it must also be deformed somewhat to match the $(1,0)$ diagram; for instance, the newly cut geodesic, with an identification indicated by three arrows, is ``long'' on the left diagram, but ``short'' on the right. This corresponds to the fact that as shown, each of these two embedding space diagrams represents a different limiting region of moduli space, corresponding to the distinct $(1,0)$ and $(2,0)$ graphs. The limiting case of this deformation corresponds to the cross operation on the ``middle'' edge of the $(2,0)$ graph.
\subsection{Genus one}
We now turn to our main interest, which is the first nontrivial ($\tau^3$) term in the series (\ref{airyanssec1}) for a GUE-like theory. This term arises at genus one. At genus one with two boundaries, the volume of the moduli space in the Airy limit is
\begin{equation}\label{AIRY12}
V^{\text{Airy}}_{1,2}(b_1,b_2) = \frac{(b_1^2+b_2^2)^2}{192}.
\end{equation}
Integrating this against trumpet wave functions and taking the limit of large $t$ leads to the term $-\tau^3/(6\pi)$ in the spectral form factor (\ref{airyanssec1}). We can gain a bit of insight by understanding how this contribution arises from different Kontsevich graphs, which can be related in turn to encounters.
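As a cross-check, (\ref{AIRY12}) is the leading homogeneous part (degree $6g+2n-6 = 4$) of the full Weil-Petersson volume. Using Mirzakhani's $V_{1,2}(b_1,b_2) = \frac{1}{192}(4\pi^2+b_1^2+b_2^2)(12\pi^2+b_1^2+b_2^2)$, quoted here with the normalization we believe is standard:

```python
import sympy as sp

b1, b2, L = sp.symbols('b1 b2 L', positive=True)

# Full WP volume for genus one with two boundaries (standard normalization assumed)
V12 = (4*sp.pi**2 + b1**2 + b2**2) * (12*sp.pi**2 + b1**2 + b2**2) / 192

# Airy volume = leading homogeneous part, extracted by scaling b_i -> L*b_i
airy = sp.limit(V12.subs({b1: L*b1, b2: L*b2}) / L**4, L, sp.oo)
print(sp.expand(airy))  # (b1^2 + b2^2)^2 / 192, matching the Airy volume above
```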
\begin{figure}
\includegraphics[width = \textwidth]{figures/g=1graphs.pdf}
\caption{{\sf \small The nine Kontsevich graphs with two boundaries and genus one consist of these five, together with another four given by interchanging the solid/red and dashed/black lines on the last four graphs. The dashed/black lines correspond to $1$ boundaries, and the solid/red lines correspond to $2$. The gray lines with arrows show what happens if we apply a cross operation to a given edge.}}\label{figGenusOneGraphs}
\end{figure}
The Kontsevich graphs that contribute to $V_{1,2}$ have six propagators total, which can be $11$, $22$ and $12$ propagators. Up to symmetries, there are nine distinct graphs, see Figure \ref{figGenusOneGraphs}, and they can be characterized by the number of $11$ and $22$ propagators,
\begin{equation}\label{k1k2}
(5,0), \ (4,0), \ (3,0), \ (2,0), \ (1,1), \ (0,2), \ (0,3), \ (0,4), \ (0,5).
\end{equation}
For example, the $(5,0)$ graph has five $11$ propagators, zero $22$ propagators, and one $12$ propagator. It is given by
\begin{equation}
\frac{c_{5,0}}{z_1^5 (z_1+z_2)}
\end{equation}
where the constant $c_{5,0}$ can be computed by evaluating the symmetry factor of the graph. As in genus one-half, these factors can be determined indirectly by matching to (\ref{AIRY12}). For example, after inverse Laplace transforming this, we find that the contribution to the volume $V^{\text{Airy}}_{1,2}(b_1,b_2)$ is
\begin{equation}
\int_{\gamma + \i \mathbb{R}}\frac{\d z_1}{2\pi \i}\frac{\d z_2}{2\pi\i} e^{b_1 z_1 + b_2 z_2} \frac{c_{5,0}}{z_1^5 (z_1+z_2)} = \frac{c_{5,0}}{24}(b_1-b_2)^4\theta(b_1-b_2).
\end{equation}
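As a quick cross-check of this inverse Laplace transform (an editorial \texttt{sympy} sketch, not part of the original derivation), one can Laplace transform the result back: the step function restricts the integration to the wedge $b_1 > b_2 > 0$, and the transform should recover the $(5,0)$ graph expression.

```python
import sympy as sp

b1, b2, z1, z2 = sp.symbols("b1 b2 z1 z2", positive=True)

# Laplace transform of (b1-b2)^4 theta(b1-b2)/24 back to the z variables;
# the theta function restricts the b1 integral to b1 > b2.
forward = sp.integrate(
    sp.exp(-b1*z1 - b2*z2) * (b1 - b2)**4 / 24,
    (b1, b2, sp.oo), (b2, 0, sp.oo),
)
# Should reproduce the (5,0) graph expression 1/(z1^5 (z1+z2))
assert sp.simplify(forward - 1/(z1**5 * (z1 + z2))) == 0
```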
One can work out a similar expression for each of the $(k_1,k_2)$ cases in (\ref{k1k2}), and the coefficients $c_{k_1,k_2}$ are uniquely determined by the condition that the contributions of all of the graphs should add up to (\ref{AIRY12}). Explicitly,
\begin{equation}
c_{5,0} = c_{4,0} = \frac{1}{8}, \hspace{20pt} c_{3,0} = \frac{1}{6}, \hspace{20pt} c_{2,0} = \frac{1}{4}, \hspace{20pt} c_{1,1} = \frac{1}{2}
\end{equation}
and equal values for $k_1\leftrightarrow k_2$.
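These coefficients can be verified symbolically: the sum of the nine graph expressions should equal the double Laplace transform of the volume (\ref{AIRY12}). A short \texttt{sympy} check (our sketch; the mirror graphs are obtained by swapping $z_1 \leftrightarrow z_2$):

```python
import sympy as sp

z1, z2, b1, b2 = sp.symbols("z1 z2 b1 b2", positive=True)

# Graph contributions c_{k1,k2} / (z1^k1 z2^k2 (z1+z2)^(6-k1-k2)),
# including the k1 <-> k2 mirror graphs.
coeffs = {(5, 0): sp.Rational(1, 8), (4, 0): sp.Rational(1, 8),
          (3, 0): sp.Rational(1, 6), (2, 0): sp.Rational(1, 4),
          (1, 1): sp.Rational(1, 2)}
graphs = 0
for (k1, k2), c in coeffs.items():
    graphs += c / (z1**k1 * z2**k2 * (z1 + z2)**(6 - k1 - k2))
    if k1 != k2:  # mirror graph with z1 <-> z2
        graphs += c / (z2**k1 * z1**k2 * (z1 + z2)**(6 - k1 - k2))

# Double Laplace transform of V_{1,2}(b1,b2) = (b1^2 + b2^2)^2 / 192
volume = sp.integrate(sp.exp(-b1*z1 - b2*z2) * (b1**2 + b2**2)**2 / 192,
                      (b1, 0, sp.oo), (b2, 0, sp.oo))
assert sp.simplify(graphs - volume) == 0
```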
The contribution of a given graph to the spectral form factor is then obtained from
\begin{equation}\label{b1b2integral}
\langle Z(\beta_1)Z(\beta_2)\rangle \supset e^{-2S_0}\int \frac{b_1 \d b_1}{\sqrt{4\pi \beta_1}}\frac{b_2 \d b_2}{\sqrt{4\pi \beta_2}} e^{-\frac{b_1^2}{4\beta_1} - \frac{b_2^2}{4\beta_2}}\int_{\gamma + \i \mathbb{R}}\frac{\d z_1}{2\pi \i}\frac{\d z_2}{2\pi\i} e^{b_1 z_1 + b_2 z_2}\frac{c_{k_1,k_2}}{z_1^{k_1}z_2^{k_2}(z_1+z_2)^{6-k_1-k_2}}.
\end{equation}
This integral reduces to a sum of hypergeometric functions. We can simplify the expression by setting $\beta_1 = \beta + \i t$ and $\beta_2 = \beta - \i t$ with large $t$, and keeping all terms that grow at order $t^3$ or faster. This leads to
\begin{align}
(5,0) + (0,5) &= e^{-2S_0}\frac{t^3}{6\pi}\\
(4,0) + (0,4) &= e^{-2S_0}\frac{t^3}{6\pi}\left(3\log\frac{2t}{\beta} - 9\right)\\
(3,0) + (0,3) &= e^{-2S_0}\frac{t^3}{6\pi}\left(-6\log\frac{2t}{\beta}+10\right)\\
(2,0) + (0,2) &= e^{-2S_0}\frac{t^3}{6\pi}\left(-\frac{t^2}{\beta^2} + 3\log\frac{2t}{\beta}-3\right)\label{02}\\
(1,1) &= e^{-2S_0}\frac{t^3}{6\pi}\cdot\frac{t^2}{\beta^2}\label{11}
\end{align}
The sum of these contributions gives $-e^{-2S_0}t^3/(6\pi) = - e^{S_0}\tau^3/(6\pi)$, which produces the cubic term in (\ref{airyanssec1}). However, individual graphs contain terms that grow faster with time.
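The cancellation is simple bookkeeping: writing $L = \log(2t/\beta)$ and stripping the common factor $e^{-2S_0}t^3/(6\pi)$, the bracketed contributions sum to $-1$, with the $L$ terms and the $t^2/\beta^2$ terms cancelling separately. In \texttt{sympy} (our sketch):

```python
import sympy as sp

t, beta, L = sp.symbols("t beta L", positive=True)  # L stands for log(2t/beta)

# Contributions in units of e^{-2 S_0} t^3/(6 pi), read off from the list above
contributions = [
    1,                        # (5,0) + (0,5)
    3*L - 9,                  # (4,0) + (0,4)
    -6*L + 10,                # (3,0) + (0,3)
    -t**2/beta**2 + 3*L - 3,  # (2,0) + (0,2)
    t**2/beta**2,             # (1,1)
]
total = sp.expand(sum(contributions))
assert total == -1  # log terms and t^2/beta^2 terms cancel separately
```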
\subsubsection{Encounters}
As in the genus one-half case from section \ref{genusonehalfsubsection}, we can make a map from Kontsevich graphs to encounters by shrinking the $11$ and $22$ edges to form a graph with only $12$ edges but with higher-degree vertices. If we do this, the $(1,1)$ and $(0,2)$ graphs will correspond to a case with two two-encounters, and the $(0,3)$ and $(0,4)$ graphs will correspond to a three-encounter. After shrinking the $22$ edges, the final graph $(0,5)$ does not correspond to an encounter, but in parallel to the discussion of the $(0,2)$ graph from genus one-half, we believe it should be considered part of the extended three-encounter moduli space.
Let's examine the Kontsevich graphs that correspond to a pair of two-encounters. The contribution is the sum of (\ref{02}) and (\ref{11}), which gives
\begin{equation}
(2,0) + (0,2) + (1,1) = e^{-2S_0}\frac{t^3}{2\pi}\left(\log\frac{2t}{\beta} - 1\right).\label{graph2enc}
\end{equation}
We would like to compare this to the semiclassical answer (\ref{twoEncounters}) for a pair of two-encounters, evaluated with the density of states of the Airy model, $\rho_0(E) = \sqrt{E}/(2\pi)$:
\begin{align}
\text{two two-encounters} &= e^{-2S_0}\frac{t^3}{2\pi E}.
\end{align}
In the canonical ensemble, this gives the naive expression
\begin{equation}
\text{two two-encounters} \stackrel{?}{=} e^{-2S_0}\frac{t^3}{2\pi}\int_0^\infty\frac{\d E}{E} e^{-2\beta E}.
\end{equation}
The reason this expression is naive is that at very low energies, the encounter picture breaks down, because the action is small enough that we do not require orbits to form pairs whose action cancels.
For the case of genus one-half, this breakdown was not significant because the analogous integral over energy was $\int \d E e^{-2\beta E}/ \sqrt{E}$ which is convergent. But in the present case, the integral diverges and the cutoff associated to the breakdown of encounter theory becomes important. We can estimate the energy of the breakdown from the point where the action $S \sim E t$ becomes of order one, which gives $E \sim 1/t$. A revised estimate for the semiclassical encounter contribution would then be
\begin{equation}\label{twotwoencounters}
\text{two two-encounters} \stackrel{?}{=} e^{-2S_0}\frac{t^3}{2\pi}\int_{1/t}^\infty\frac{\d E}{E} e^{-2\beta E} = e^{-2S_0}\frac{t^3}{2\pi}\left(\log\frac{t}{\beta} +\text{const}\right).
\end{equation}
This matches the form of (\ref{graph2enc}).
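Numerically, the cutoff integral in (\ref{twotwoencounters}) is an exponential integral, $\int_{1/t}^\infty \d E\, e^{-2\beta E}/E = E_1(2\beta/t)$, and the small-argument expansion of $E_1$ makes the logarithmic growth explicit. A quick \texttt{scipy} check (our sketch):

```python
import numpy as np
from scipy.special import exp1

beta, t = 1.0, 1e8
# Integral of e^{-2 beta E}/E over E in (1/t, infinity) equals E_1(2 beta / t)
integral = exp1(2*beta/t)
# Small-argument expansion: E_1(x) = -gamma - log(x) + O(x),
# so the integral grows like log(t/beta) + const at large t
approx = np.log(t/beta) - np.log(2) - np.euler_gamma
assert abs(integral - approx) < 1e-6
```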
One can similarly find agreement between the predicted contribution of a three-encounter and the sum of the graphs $(5,0) + (0,5) + (4,0) + (0,4) + (3,0) + (0,3)$. In particular, the cancellation of the encounters demonstrated in \cite{muller2005periodic} is visible here in the fact that the log terms cancel between the graphs summed in (\ref{graph2enc}) and the three-encounter graphs.
\subsubsection{Beyond encounters}
Because the $t^3\log(t)$ terms cancel, the entire contribution comes from the $t^3$ terms, and in encounter language, these contributions depend on the details of the small energy region (e.g.~the precise cutoff one uses in (\ref{twotwoencounters})). This cannot be computed using standard encounter theory. However, the Kontsevich graphs continue to be valid for all energies. In this sense, the Kontsevich graphs give a quantum completion of the semiclassical encounter theory for this particular system.
It is interesting to understand the region of the $b_1,b_2$ integral (\ref{b1b2integral}) that is relevant for the $t^3\log(t)$ pieces that cancel out vs.~the full $t^3$ answer. The log terms arise from non-analyticities at $b_1 = b_2$, where the phases contributed by the trumpet wave functions cancel. This is analogous to the fact that encounter contributions in periodic-orbit theory arise from a non-analyticity in the region $\Delta S = 0$, where a pair of orbits have approximately the same length and cancelling actions.
However, the full moduli space volume is analytic in $b_1,b_2$, which implies that the log terms must cancel when we sum over graphs. What region of the $b_1,b_2$ integral is important for producing the leftover $t^3$? We have an integral of the form
\begin{equation}
\frac{1}{t}\int_0^\infty b_1 \d b_1 b_2 \d b_2 e^{\i (b_1^2-b_2^2)/(4t)} (b_1^2-b_2^2)^2.
\end{equation}
In this integral, $b_1^2\sim t$ and $b_2^2\sim t$, but with no particular preference for the region where $b_1 = b_2$. So the $1$ and $2$ boundaries have significantly different lengths, and the $11$ or $22$ portions of the Kontsevich graphs are as long as the $12$ portions. This corresponds to the idea that we are probing low energies, so that the action $b_1^2/t$ is of order one and does not need to cancel between the two ``orbits.'' Note that for periodic orbits, the analog of this region would be outside the regime of validity of the semiclassical encounter approximations.
\section{Discussion}\label{Discussion}
In the Airy model at genus one, we found that the answer for $K_\beta(t)$ came from an integral over a large portion of moduli space. This poses a challenge for understanding the geometric origin of the series for $K_\beta(t)$ at higher genus and for theories with other densities of states; $K_\beta(t)$ has a universal form, fixed entirely by $\rho_0(E)$, but this universal answer comes from a highly quantum integral over moduli space. This suggests that there is some universal structure in the moduli space responsible for this series. Though we have not understood this structure, our findings in the Airy model hint at a relationship with encounters.
In the Airy model the moduli space has a natural structure, given by the Kontsevich graphs. In a sense we made precise at genus one-half and genus one, the whole moduli space should be thought of as made up of ``quantum corrected" encounters, valid at very low energies. Perhaps in JT (and even in more general large $N$ chaotic systems) there is a ``fattened" version of this quantum encounter region of moduli space that is responsible for the answer, rather than the entire moduli space.
We can see a hint that the connection between the genus expansion for $K_\beta(t)$ and encounters generalizes to higher genus/other spectral curves by generalizing the estimate (\ref{twotwoencounters}) of $K_\beta(t)$ from encounters. An encounter configuration is expected to give a contribution to $K_E(t)$ proportional to $t^{2g+1}/\rho(E)^{2g}$. An estimated contribution of the encounter to $K_\beta(t)$, generalizing (\ref{twotwoencounters}) and extrapolating to low energies, would then be\footnote{We have dropped all terms that would be small in the $\tau$-scaling limit.}
\begin{align}\label{highergenuscutoff}
K_\beta(t) & \stackrel{?}{\supset} C e^{-2g S_0} t^{2g+1} \int_{\frac{1}{t}}^{\infty} \frac{\d E}{\rho_0(E)^{2g}} e^{-2\beta E},
\cr
&= C e^{-2g S_0} t^{2g+1} \bigg[P_g^{(\rho)}(\beta) \bigg(\log\frac{t}{\beta}+\text{const}\bigg) + \text{higher powers of } t\bigg].
\end{align}
Here $P_{g}^{(\rho)}(\beta)$ is a polynomial in $\beta$ of degree $g-1$, whose coefficients depend on the first $g$ coefficients in the expansion (\ref{generaldensity}) for $\rho_0(E)$.
Summing over encounters at each genus, the familiar cancellations between encounters in $K_E(t)$ imply that the log terms cancel, leaving us with cutoff-dependent terms which may or may not cancel between encounters. We can compare this estimate with the conjecture (\ref{tauexpansionintro}) for $K_\beta(t)$
\begin{equation}\label{Kbetaexpansion}
K_\beta(t) = \sum_{g=0}^\infty P^{(\rho)}_{g}(\beta) \; e^{-2 g S_0}\; t^{2g+1}.
\end{equation}
In appendix \ref{Pappendix} we show that the polynomials $P^{(\rho)}_{g}(\beta)$ in (\ref{highergenuscutoff}) and (\ref{Kbetaexpansion}) are indeed the same.\footnote{Up to an overall genus-dependent coefficient which can be absorbed into the coefficient $C$ in (\ref{highergenuscutoff}).} So the ``const" terms in the estimate (\ref{highergenuscutoff}) match the genus $g$ contribution in (\ref{Kbetaexpansion}), up to an overall cutoff-dependent factor.
Another set of questions concerns the relationship between the genus expansion for $K_\beta(t)$ and other approaches to understanding the plateau, such as the sigma model approach \cite{Wegner:1979tu,doi:10.1080/00018738300101531,PhysRevLett.75.902}, the Riemann-Siegel lookalike formula \cite{Berry_1990,Keating:1992tq,BerryKeatin1992}, and orbit action correlation functions \cite{argaman1993correlations}. Understanding the relationship between these approaches and the approach taken in this paper may be useful for learning lessons about theories that do not have a $\tau$-scaled spectral form factor. We discuss the sigma model approach in \hyperref[Appendix A]{Appendix A} and the action correlation approach in \hyperref[Appendix B]{Appendix B}.
\section*{Acknowledgements}
We thank Alexander Altland, Adel Rahman, Julian Sonner and the authors of \cite{Blommaert:2022lbh,Weber:2022sov} for discussions and Raghu Mahajan and Stephen Shenker for initial collaboration. PS is supported by a grant from the Simons Foundation (385600, PS), and by NSF grant PHY-2207584. DS is supported in part by DOE grant DE-SC0021085 and by the Sloan Foundation. ZY is supported in part by the Simons Foundation.
\section{Introduction}
A longstanding challenge is to explain the discrete spectrum of black hole microstates using spacetime geometry. In recent years, some statistical aspects of these microstates have been explained using spacetime wormholes.\footnote{The role of spacetime wormholes in quantum gravity has also been a longstanding puzzle \cite{Hawking:1987mz,Lavrelashvili:1987jg,Giddings:1987cg,Coleman:1988cy,Maldacena:2004rf,ArkaniHamed:2007js}.} Examples include: aspects of the spectral form factor \cite{Saad:2018bqo,Saad:2019lba,Stanford:2019vob} and late-time correlation functions \cite{Blommaert:2019hjr,Saad:2019pqd,Blommaert:2020seb,Stanford:2021bhl}, the Page curve \cite{Almheiri:2019qdq,Penington:2019kki} and matrix elements \cite{Stanford:2020wkf} of an evaporating black hole, and the ETH behavior of matrix elements \cite{Belin:2020hea,Belin:2021ryy,Chandra:2022bqq}.
A statistical theory of microstates is far from a complete description, but it is enough to probe discreteness of the energy spectrum. One tool to discuss this is the spectral form factor
\begin{equation}\label{intro:SFF}
K_\beta(t) = \langle Z(\beta+\i t)Z(\beta - \i t)\rangle
\end{equation}
where $Z(x) = \text{Tr}\,e^{-x H}$ is the thermal partition function, and the brackets represent some form of averaging for which the statistical description is sufficient. The discrete nature of chaotic energy levels is reflected in the ``plateau,'' in which $K_\beta(t)$ saturates at the late-time value $K_\beta(\infty) = Z(2\beta)$.
For systems in the unitary symmetry class (no time reversal symmetry), the random matrix theory (RMT) prediction for the spectral form factor is simple. The microcanonical version
\begin{equation}
K_E(t) = \int \frac{\d\beta}{2\pi \i} e^{2\beta E} K_\beta(t)
\end{equation}
should have the form of a linear ramp connected to a plateau: $\text{min}\{t/2\pi, e^{S(E)}\}$ with $S(E)$ the microcanonical entropy at energy $E$. This sharp transition at $t_p = 2\pi e^{S(E)}$ is a signature of discreteness of the spectrum; it arises from oscillations in the density pair correlator with wavelength $e^{-S(E)}$, representing the mean spacing between discrete energy levels.
The sharpness of the transition from the ramp to the plateau is an apparent obstruction to an explanation in terms of geometry. In particular, in two-dimensional dilaton gravity models such as JT gravity, the genus expansion should roughly be thought of as an expansion in $e^{-S(E)}$. But the transition from the ramp to the plateau comes from contributions that go as $e^{i \# e^{S(E)}}$, nonperturbative in the genus counting parameter, suggesting that it is not captured by the conventional sum over geometries.\footnote{Some previous approaches to explaining the plateau through a sum over geometries have involved ``spacetime D-branes" \cite{Saad:2019lba,Blommaert:2019wfy,Marolf:2020xie}, which generalize the sum over geometries to include contributions from an infinite number of asymptotic boundaries.}
However, the spectral form factor $K_\beta(t)$ is an integral of $K_E(t)$ over energy, and this integral has the potential to smooth out the transition to the plateau. As first shown by \cite{Okuyama:2020ncd,okounkov2002generating} for the Airy matrix integral, the resulting function can have a convergent genus expansion, smoothly transitioning from the ramp to the plateau. We conjecture a generalization of this result below, in a limit that will be referred to as ``$\tau$-scaling.'' This convergent series makes it possible to explain the plateau in terms of a conventional sum over geometries, rather than from a radical nonperturbative effect.
In this paper, we will explain some features of this genus expansion for the spectral form factor, primarily working in the low-energy limit of JT gravity: the Airy model. Our explanations will connect with the encounter computations in semiclassical periodic orbit theory, used to explain the RMT corrections to the ramp \cite{2009arXiv0906.4930A,Sieber_2001,Sieber_2002,PhysRevLett.93.014103,PhysRevE.72.046207}. The sum over encounters is closely analogous to a genus expansion, so it is natural to try to interpret the genus expansion for the plateau in terms of a gravitational analog of encounters. Encounters alone cannot be sufficient to explain the genus expansion for $K_\beta(t)$ because without time-reversal symmetry, the encounters cancel genus by genus.
The models that we study, in particular the Airy model, allow us to generalize the theory of encounters beyond their usual regime of validity in the high-energy, semiclassical limit. At very low energies, of order $1/t$, the encounters receive large quantum corrections that disturb the cancellation between encounters, reproducing the expected $\tau$-scaled $K_\beta(t)$.
In \hyperref[SectionTwo]{\textbf{Section Two}}, we introduce a formula for $K_\beta(t)$ in a double-scaled matrix integral in the ``$\tau$-scaled" limit, generalizing \cite{Okuyama:2020ncd,Okuyama:2021cub}. We reconcile the existence of a convergent genus expansion for $K_\beta(t)$ with the absence of such an expansion for $K_E(t)$. In particular, one can think of the genus expansion for $K_\beta(t)$ as coming entirely from very low energies.
In \hyperref[SectionThree]{\textbf{Section Three}} we review an analog of the genus expansion for $K_E(t)$ in periodic orbit theory: the sum over encounters. The sum over encounters gives an expansion in $e^{-S(E)}$, valid at high energies. For periodic orbit systems in the GUE symmetry class (no time-reversal), corrections to the ramp coming from encounters cancel order by order \cite{PhysRevLett.93.014103,PhysRevE.72.046207}. In JT gravity, we discuss a direct analog of the simplest type of encounter contribution in a theory with time-reversal symmetry, contributing to the SFF at genus one-half.
In \hyperref[SectionFour]{\textbf{Section Four}} we study the Airy model, the low-energy limit of JT gravity. The wormhole geometries in this model are very simple, and in one-to-one correspondence with ribbon graphs in the Feynman diagram expansion of Kontsevich's matrix model. These graphs allow us to generalize the encounter computations beyond the semiclassical, high-energy regime. At genus one and high energies, the encounter contributions mutually cancel in the GUE symmetry class. At low energies, quantum corrections to the encounters spoil this cancellation, leading to the nonzero contribution to $K_\beta(t)$. The full answer at this genus comes from a large region of moduli space, far from the semiclassical encounter regime.
{\bf Note:} Two recent papers \cite{Blommaert:2022lbh,Weber:2022sov} are closely related to our work. A preliminary version of section two of this paper was shared with the authors of \cite{Blommaert:2022lbh,Weber:2022sov} in October 2021.
\section{Tau scaling of the spectral form factor}\label{SectionTwo}
In this section we discuss the ``$\tau$-scaling'' limit of matrix integrals in which we conjecture that the spectral form factor has a simple form with a convergent genus expansion. Consider a double-scaled matrix integral with unitary symmetry class and classical density of states
\begin{equation}
\rho(E) = e^{S_0}\rho_0(E).
\end{equation}
The spectral form factor is defined as
\begin{equation}
K_\beta(t) \equiv \langle Z(\beta+\i t)Z(\beta - \i t)\rangle, \hspace{20pt} Z(x) \equiv \text{Tr}\, e^{-x H}.
\end{equation}
Here the angle brackets represent the average in the matrix integral. We would like to analyze this in a limit where $t$ goes to infinity and $e^{S_0}$ also goes to infinity, holding fixed $\beta$, and also holding fixed the ratio
\begin{equation}
\tau = t e^{-S_0}.
\end{equation}
This will be referred to as the ``$\tau$-scaled'' limit.
In the $\tau$-scaled limit, the time $t = e^{S_0}\tau$ is large, so the SFF will be dominated by correlations of nearby energy levels. Pair correlations of nearby levels are described by the universal sine-kernel formula, which translates to a ramp-plateau structure $\text{min}\{t/2\pi,\rho(E)\}$ as a function of the center of mass energy $E$. By integrating this contribution over $E$, one gets the following candidate expression for the spectral form factor
\begin{align}\label{foldingformula}
K_\beta(t) &\stackrel{?}{\approx} \int_{E_0}^\infty \d E \ e^{-2\beta E} \text{min}\left\{\frac{t}{2\pi},\rho(E)\right\}.
\end{align}
This was previously discussed as an uncontrolled approximation to the SFF \cite{Cotler:2016fpe}. Here we would like to propose that it is exact in the $\tau$-scaled limit,
\begin{equation}
\lim_{S_0\to\infty} e^{-S_0}K_\beta(\tau e^{S_0}) = \int_{E_0}^\infty \d E \ e^{-2\beta E}\text{min}\left\{\frac{\tau}{2\pi},\rho_0(E)\right\}.\label{tauconj}
\end{equation}
Let's try an example by taking $\rho_0(E) = \frac{\sqrt{E}}{2\pi}$, which is sometimes called the Airy model, or the Kontsevich-Witten model. Then (\ref{tauconj}) becomes
\begin{align}
e^{-S_0}K_\beta(\tau e^{S_0}) &= \frac{1}{2\pi}\int_0^\infty \d E e^{-2\beta E}\text{min}\left\{\tau,\sqrt{E}\right\}\\
&= \frac{1}{2\pi}\frac{\pi^{1/2}}{2^{5/2}\beta^{3/2}}\text{Erf}(\sqrt{2\beta}\tau)\label{airyconj}\\
&=\frac{\tau}{4\pi\beta} - \frac{\tau^3}{6\pi}+ \frac{\beta}{10\pi}\tau^5 - \frac{\beta^2}{21\pi} \tau^7 +\dots.\label{airyanssec1}
\end{align}
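The closed form (\ref{airyconj}) can be checked against direct numerical integration of the min formula (our \texttt{scipy} sketch; the integral is split at $E = \tau^2$, where the two branches of the min exchange):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def airy_sff(tau, beta):
    """tau-scaled Airy SFF from the min formula, by direct integration."""
    f = lambda E: np.exp(-2*beta*E) * min(tau, np.sqrt(E))
    v1, _ = quad(f, 0, tau**2)       # region where min = sqrt(E)
    v2, _ = quad(f, tau**2, np.inf)  # region where the min saturates at tau
    return (v1 + v2) / (2*np.pi)

def airy_erf(tau, beta):
    """Closed form: sqrt(pi)/(2 pi 2^{5/2} beta^{3/2}) Erf(sqrt(2 beta) tau)."""
    return np.sqrt(np.pi)/(2*np.pi * 2**2.5 * beta**1.5) * erf(np.sqrt(2*beta)*tau)

for tau, beta in [(0.3, 1.0), (2.0, 0.5), (10.0, 2.0)]:
    assert abs(airy_sff(tau, beta) - airy_erf(tau, beta)) < 1e-6
```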
We can compare this to the exact answer for the spectral form factor of the Airy model \cite{okounkov2002generating,Okuyama:2021cub}
\begin{align}
K_\beta(t) &= \langle Z(2\beta)\rangle \text{Erf}(e^{-S_0}\sqrt{2\beta(\beta^2+t^2)})\\
&= \frac{\exp\left(S_0 + \frac{1}{3}e^{-2S_0}(2\beta)^3\right)}{4\sqrt{\pi}(2\beta)^{3/2}}\text{Erf}(e^{-S_0}\sqrt{2\beta(\beta^2+t^2)}).\label{eqn:airySFF}
\end{align}
This agrees with (\ref{airyconj}) in the $\tau$-scaled limit.
As a second example, we can take $\rho_0(E) =\frac{1}{4\pi^2}\sinh(2\pi\sqrt{E})$, which corresponds to JT gravity:
\begin{align}
e^{-S_0}K_\beta(\tau e^{S_0}) &= \frac{1}{2\pi} \int_0^\infty \d E e^{-2\beta E} \text{min}\left\{\tau,\frac{1}{2\pi}\sinh(2\pi\sqrt{E})\right\}\\
&= \frac{e^{\frac{\pi^2}{2\beta}}}{16\sqrt{2\pi}\beta^{3/2}}\left[\text{Erf}\left(\frac{\frac{\beta}{\pi}\text{arcsinh}(2\pi \tau) + \pi}{\sqrt{2\beta}}\right)+\text{Erf}\left(\frac{\frac{\beta}{\pi}\text{arcsinh}(2\pi \tau)- \pi}{\sqrt{2\beta}}\right)\right]\label{JTerfs}\\
&= \frac{\tau}{4\pi\beta} - \frac{\tau^3}{6\pi}+ \left(\frac{\beta}{10\pi}+\frac{2\pi}{15}\right)\tau^5 - \left(\frac{\beta^2}{21\pi} + \frac{4\pi\beta}{21} + \frac{64\pi^3}{315}\right)\tau^7 +\dots\label{tauexp}
\end{align}
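The error-function expression (\ref{JTerfs}) can likewise be checked against direct numerical integration (our \texttt{scipy} sketch; here $E_*$ denotes the energy where $\rho_0(E) = \tau/2\pi$, the kink of the min):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def jt_sff(tau, beta):
    """tau-scaled JT SFF by direct integration of the min formula."""
    f = lambda E: np.exp(-2*beta*E) * min(tau, np.sinh(2*np.pi*np.sqrt(E))/(2*np.pi))
    Estar = (np.arcsinh(2*np.pi*tau) / (2*np.pi))**2  # where the min has a kink
    v1, _ = quad(f, 0, Estar)
    v2, _ = quad(f, Estar, np.inf)
    return (v1 + v2) / (2*np.pi)

def jt_erf(tau, beta):
    """Closed form in terms of error functions, as quoted in the text."""
    x = beta/np.pi * np.arcsinh(2*np.pi*tau)
    pref = np.exp(np.pi**2/(2*beta)) / (16*np.sqrt(2*np.pi)*beta**1.5)
    return pref * (erf((x + np.pi)/np.sqrt(2*beta)) + erf((x - np.pi)/np.sqrt(2*beta)))

for tau, beta in [(0.05, 1.0), (0.5, 2.0), (3.0, 1.0)]:
    assert abs(jt_sff(tau, beta) - jt_erf(tau, beta)) < 1e-6
```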
For JT gravity, no exact formula for the SFF is known,\footnote{See \cite{Okuyama:2020ncd,Okuyama:2021cub} for discussion of a different limit where $\beta$ is also large, and see \cite{Johnson:2020exp} for numerical evaluation.} but (\ref{JTerfs}) can be checked by using topological recursion \cite{Eynard:2004mh,Eynard:2007kz} to compute the exact spectral form factor to a given order in $e^{-S_0}$, and then applying $\tau$-scaling. Using this method, we confirmed (\ref{JTerfs}) up to order $\tau^{13}$.
Note that the series in $\tau$ corresponds to the genus expansion, as one can see by undoing the $\tau$-scaling and replacing $\tau \to t e^{-S_0}$. In particular, the power of $\tau$ is $\tau^{2g+1}$. Normally, the genus expansion of the SFF in JT gravity is an asymptotic series. But after $\tau$-scaling it has a nonzero radius of convergence $|\tau| < {1\over 2\pi}$, and the analytic continuation is nonsingular along the entire real $\tau$ axis. For large $\tau$ it reproduces the plateau.\footnote{In appendix \ref{Pappendix} we show that (\ref{tauconj}) always has a nonzero radius of convergence.}
The presence of powers of $\beta$ in the leading $\tau$-scaled answer indicates that there are cancellations of higher powers of $t$. For example, the term proportional to $\beta \tau^5$ arises from a linear combination of terms $(\beta + \i t)^{p_1}(\beta - \i t)^{p_2}$ with $p_1 + p_2 = 6$ such that the leading power $t^6$ cancels. This cancellation has been studied by \cite{Blommaert:2022lbh,Weber:2022sov}.
The conjecture (\ref{tauconj}) was designed so that if we compute the inverse Laplace transform to $K_E(\tau e^{S_0})$, the answer will simply be $e^{S_0}\min\{\tau/2\pi,\rho_0(E)\}$. In particular, for fixed $E > 0$, the expansion in powers of $\tau$ terminates after the linear term -- naively there is simply no genus expansion for fixed energy in the $\tau$-scaled limit. A more refined viewpoint is that the genus expansion has coefficients that are derivatives of $\delta$ functions of $\rho(E)$. This can be seen by writing $\min(x,y) = x - (x-y)\theta(x-y)$ and expanding in powers of $x$. It can also be seen by inverse Laplace transforming (\ref{tauexp}) term by term.
So the genus expansion of the canonical SFF can be understood as arising from contributions localized at zero energy where the plateau time is short. To see this from another perspective, consider a $\rho_0$ of the general form
\begin{equation}\label{generaldensity}
2\pi\rho_0(E) = a_1 E^{1/2} + a_3 E^{3/2} + a_5 E^{5/2} + a_7 E^{7/2}+\dots
\end{equation}
Then the conjecture (\ref{tauconj}) gives
\begin{align}\label{tauexpansionintro}
\int \d E e^{-2\beta E} &\text{min}\left\{\frac{\tau}{2\pi},\rho_0(E)\right\}\\& = \frac{\tau}{4\pi\beta} - \frac{\tau^3}{6\pi a_1^2} + \frac{(a_1\beta + 2a_3)}{10 \pi a_1^5}\tau^5 - \frac{(2a_1^2\beta^2 + 12 a_1a_3\beta - 6 a_1 a_5 + 21 a_3^2)}{42\pi a_1^8}\tau^7+\dots\notag
\end{align}
The contribution from genus $g$ depends on only the first $g$ terms in the expansion of $\rho_0(E)$ around $E = 0$. Indeed, in appendix \ref{Pappendix} we show that the coefficient of $\tau^{2g+1}$ for $g \ge 1$ is
\begin{equation}
-\frac{1}{g(2g+1)(2\pi)^{2g+1}} \oint_0 \frac{\d E}{2\pi \i} \frac{e^{-2\beta E}}{\rho_0(E)^{2g}}.
\end{equation}
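This residue formula can be checked against the coefficients quoted in (\ref{tauexpansionintro}) with a few lines of \texttt{sympy} (our sketch; since $\rho_0(E)^{2g} = E^g(a_1 + a_3 E + \dots)^{2g}/(2\pi)^{2g}$, the residue is the coefficient of $E^{g-1}$ in the analytic part of the integrand):

```python
import sympy as sp

E, beta = sp.symbols("E beta", positive=True)
a1, a3, a5, a7 = sp.symbols("a1 a3 a5 a7", positive=True)

def tau_coeff(g):
    # analytic part of e^{-2 beta E}/rho0(E)^{2g} after stripping the pole E^g
    f = sp.exp(-2*beta*E) * (2*sp.pi)**(2*g) / (a1 + a3*E + a5*E**2 + a7*E**3)**(2*g)
    res = sp.expand(f.series(E, 0, g).removeO()).coeff(E, g - 1)  # residue at E=0
    return -res / (g*(2*g + 1)*(2*sp.pi)**(2*g + 1))

# coefficients of tau^3, tau^5, tau^7 quoted in the series above
expected = {
    1: -1/(6*sp.pi*a1**2),
    2: (a1*beta + 2*a3)/(10*sp.pi*a1**5),
    3: -(2*a1**2*beta**2 + 12*a1*a3*beta - 6*a1*a5 + 21*a3**2)/(42*sp.pi*a1**8),
}
for g, val in expected.items():
    assert sp.simplify(tau_coeff(g) - val) == 0
```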
In the rest of the paper we will try to understand where this series comes from. We will start by comparing to another type of expansion associated to the spectral form factor -- the theory of encounters in periodic orbits.
\newpage
\section{Encounters in orbits and in JT}\label{SectionThree}
One case where the spectral form factor has been studied extensively is semiclassical chaotic billiards.\footnote{See the introduction of \cite{muller2005periodic} for history and references.} There, the Gutzwiller trace formula is used to write an expression for the spectral form factor in terms of a sum over pairs of periodic orbits. Special pairings of orbits called ``encounters'' lead to a series in $\tau$ that is vaguely reminiscent of (\ref{tauexpansionintro}).
However, there are important differences: the encounters cancel between themselves for systems with unitary symmetry class, and the encounter analysis is only valid at high energies, in the semiclassical region. It is tempting to view the genus expansion (\ref{tauexpansionintro}) as analogous to a type of ``souped up'' encounter theory that can accurately treat very low energies, outside the semiclassical limit, and for which the encounters do not quite cancel.
We will explore this further in section \ref{SectionFour}. In the current section we prepare by reviewing the theory of encounters in periodic orbits and finding an analog of the simplest (Sieber-Richter) encounter in a JT gravity calculation.
\subsection{Review of periodic-orbit theory}
Consider a semiclassical billiards system, consisting of a particle moving in a stadium.\footnote{We set $\hbar = 1$. The semiclassical limit corresponds to high energies.} The starting point for the theory of encounters is Gutzwiller's trace formula for the oscillating part of the density of states $\rho_{\text{osc}}(E)$ in terms of a sum over classical periodic orbits $\gamma$:
\begin{equation}
\rho_{\text {osc}}(E)\sim {1\over \pi }\text{Re}\sum_{\gamma} A_{\gamma} e^{\i S_{\gamma}}.
\end{equation}
Here $A_{\gamma}$ is the stability amplitude (one-loop determinant) and $S_{\gamma}$ is the classical action. The microcanonical spectral form factor is then given by a double sum over orbits $\gamma,\gamma'$:
\begin{align}
K_E(t) &=\langle \int \d\epsilon e^{\i \epsilon t} \rho_{\text{osc}}(E+{\epsilon \over 2})\rho_{\text{osc}}(E-{\epsilon\over 2})\rangle \\
&={1\over 2\pi }\langle \sum_{\gamma,\gamma'} A_{\gamma}A^*_{\gamma'}e^{\i (S_{\gamma}-S_{\gamma'})}\delta\left(t-{t_{\gamma}+t_{\gamma'}\over 2}\right)\rangle.\label{eqn:SFF}
\end{align}
Here $t_{\gamma}={\partial S_{\gamma}\over \partial E}$ is the period of the orbit $\gamma$ and $\langle \cdot \rangle$ represents an average over the energy window.
$K_E(t)$ receives both diagonal ($S_\gamma=S_{\gamma'}$) and off-diagonal ($S_\gamma\neq S_{\gamma'}$) contributions. In a chaotic system, one expects $S_{\gamma}=S_{\gamma'}$ only if $\gamma$ and $\gamma'$ are identical or related by symmetry -- the simplest (GUE) case is to assume there is no symmetry, so $\gamma = \gamma'$. Berry showed \cite{berry1985semiclassical} that the sum over $\gamma = \gamma'$ leads to the linear ramp $t/2\pi$ in the GUE spectral form factor. The factor of $t$ comes from the possibility of a relative time shift between $\gamma$ and $\gamma'$. In the GOE case, there is additional time-reversal symmetry $\mathcal{T}^2=1$, and the diagonal sum also contains the time-reversed orbit $\gamma'=\mathcal{T}\gamma$. This leads to an additional factor of two, so $K_E(t) \sim t/\pi$.
The off-diagonal contributions are weighted by an oscillatory factor $e^{\i (S_{\gamma}-S_{\gamma'})}$. Encounter theory is a way of identifying systematic classes of orbits such that the difference in actions is small. These consist of orbit pairs $\gamma,\gamma'$ that closely follow each other except for small off-shell regions known as encounters. The impressive achievement of encounter theory is that a sum over such encounters reproduces the fact that the GUE $K_E(t)$ has no corrections before the plateau, and the GOE $K_E(t)$ has a particular expansion
\begin{align}
K_E^{(GOE)}(t)&={t\over \pi}-{t\over 2\pi}\log\left[1+{t\over \pi \rho(E)}\right]\\ &={t\over \pi}-{2t^2\over\rho(E)(2\pi)^2}+{2 t^3\over \rho(E)^2(2\pi)^3}-...
\end{align}
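The second line is just the Taylor expansion of the logarithm in the first; as a sanity check (our \texttt{sympy} sketch):

```python
import sympy as sp

t, rho = sp.symbols("t rho", positive=True)

# GOE spectral form factor and its expansion in t/rho
K_goe = t/sp.pi - t/(2*sp.pi)*sp.log(1 + t/(sp.pi*rho))
series = sp.series(K_goe, t, 0, 4).removeO().expand()

expected = (t/sp.pi - 2*t**2/(rho*(2*sp.pi)**2)
            + 2*t**3/(rho**2*(2*sp.pi)**3)).expand()
assert sp.simplify(series - expected) == 0
```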
Berry's analysis explains the linear term. The quadratic term was explained by Sieber and Richter \cite{Sieber_2001}, the cubic term was explained in \cite{heusler2004universal}, and the full series was reproduced in \cite{muller2005periodic}.
\subsubsection{Sieber-Richter pair}
The simplest example of an encounter is the Sieber-Richter pair or ``2-encounter'' which exists in a theory with time-reversal symmetry. The pair of orbits $\gamma,\gamma'$ can be sketched in configuration space as follows (this figure and (\ref{fig223}) were modified from \cite{muller2005periodic} with permission):
\begin{equation}\label{SRfig}
\includegraphics[valign = c, scale = 0.4]{figures/SR.pdf}
\end{equation}
We will focus on the case with only two degrees of freedom.
The key feature is that the orbit $\gamma$ (the red/solid orbit) returns close to itself at some point $t_1$ along the orbit. This point is referred to as an encounter, and the partner orbit $\gamma'$ differs from $\gamma$ only in the vicinity of the encounter (and as a consequence it is time-reversed in one of the two ``stretches'' outside the encounter region).
The encounter can be characterized by the deviation of the two nearby segments of $\gamma$, and it is convenient to decompose this deviation into the stable and unstable directions $s,u$. Within the encounter region, the $s,u$ variables decay and grow exponentially in time, with a Lyapunov exponent $\lambda$. This determines the duration of the encounter region:
\begin{equation}
t_{enc}={1\over \lambda}\log {c^2\over |s u|},
\end{equation}
where $c$ characterizes the regime of validity of the linearized analysis near the encounter.\footnote{At high energies, the result does not depend on the precise value of $c$.} In the two regions outside the encounter (called stretches), the two orbits follow each other closely, up to time reversal. This means that the difference in actions $S_\gamma - S_{\gamma'}$ comes only from the encounter region itself. This difference in action is determined by the $s,u$ variables and takes the form\footnote{This is reminiscent of the action that controls out-of-time-order correlators \cite{Stanford:2021bhl,Gu:2021xaj}.}
\begin{equation}
S_{\gamma}-S_{\gamma'}=s u.
\end{equation}
The probability that orbit $\gamma$ will have such an encounter is determined by ergodicity, which gives a uniform measure in the phase space ${1\over (2\pi)^2\rho_E}\d t_1 \d s \d u$. The Sieber-Richter pair's contribution to the spectral form factor can then be evaluated using the following integral:
\begin{equation}\label{srint}
K_{E}(t)\supset {t\over \pi }{1\over (2\pi)^2\rho(E)}\int^{c}_{-c} \d s \int_{-c}^{c}\d u{t\over 2t_{enc}} \int_{0}^{t-2t_{enc}} \d t_1 e^{\i su}.
\end{equation}
The factors in the integral are explained as follows:
\begin{enumerate}
\item The overall ${t\over \pi }=2\times {t\over 2\pi }$ factor reflects the relative time shift between $\gamma$ and $\gamma'$ and the time reversal symmetry. This part is the same as in the linear ramp.
\item The additional ${t\over 2t_{enc}}$ factor reflects the fact that the encounter region can be anywhere along the orbit: $t$ comes from integrating over the time of the reference point; ${1\over 2t_{enc}}$ fixes an over-counting from the choice of the reference point inside the encounter (changing this reference point would rescale $s$ and $u$ oppositely).
\item The integration range of the time where the encounter takes place, $t_1$, is upper bounded by $t-2t_{enc}$ to ensure the existence of the encounter region.
\end{enumerate}
The integral (\ref{srint}) gives:
\begin{equation}\label{eqn:k1/2}
K_E(t)\supset {t\over \pi }{t\over (2\pi)^2\rho(E)}\int \d s \d u e^{\i su}({t\over 2t_{enc}}-1)\approx-{2t^2\over (2\pi)^2\rho(E)}.
\end{equation}
Naively the answer should be of order $t^3$, but this term is proportional to $\int \d s \d u e^{\i s u}/\log|su| \approx 0$, and the nonzero answer comes from the subleading $t^2$ term.
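The nonzero answer therefore rests on $\int_{-c}^{c}\d s\int_{-c}^{c}\d u\, e^{\i su} \to 2\pi$ at large cutoff $c$. As a quick numerical sanity check of this statement (an illustration we add here, not part of the original analysis; plain Simpson quadrature in Python):

```python
import math

# Check that I(c) = ∫_{-c}^{c} ds ∫_{-c}^{c} du e^{i s u} → 2π for large c.
# The u-integral gives 2 sin(c s)/s (real by symmetry), which we then
# integrate over s with Simpson's rule.
def inner(s, c):
    return 2.0 * c if s == 0.0 else 2.0 * math.sin(c * s) / s

def double_integral(c, n=200001):
    h = 2.0 * c / (n - 1)
    total = 0.0
    for k in range(n):
        s = -c + k * h
        w = 1.0 if k in (0, n - 1) else (4.0 if k % 2 == 1 else 2.0)
        total += w * inner(s, c)
    return total * h / 3.0

value = double_integral(50.0)
print(value)  # ≈ 2π, up to O(1/c²) oscillatory corrections
```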
\subsubsection{Cancellation of encounters in GUE}
The Sieber-Richter pair does not contribute in a theory without time-reversal symmetry (GUE case) because the portions of the orbits in the right stretch of figure (\ref{SRfig}) would have no reason to follow each other. Instead, in the GUE case the leading encounters (in the ${1\over \rho_E}$ expansion) are a configuration with two 2-encounters, and a configuration with a single 3-encounter where three segments of the orbit simultaneously approach each other:
\begin{equation}\label{fig223}
\includegraphics[valign = c, scale = .382]{figures/2_2_and_3_encounters.pdf}
\end{equation}
The configuration with two 2-encounters (denoted $(2)^2$) is a straightforward generalization of the single 2-encounter. It contains two pairs of $(s_i,u_i)$ soft modes with encounter times $t_{enc}^i$ and three zero modes $t_i$ labelling the stretch lengths.
Its contribution $K_{E,(2)^2}(t)$ to the spectral form factor is given by the following integral:
\begin{equation}
K_E(t)\supset K_{E,(2)^2}(t)= {t\over 2\pi}{1\over (4\pi^2\rho(E))^2}\int \prod_{i=1}^2\d s_i \d u_i e^{\i \sum_{i=1}^2s_i u_i} {t\over 4 \prod_{i=1}^2t_{enc}^i} {(t-\sum_{i=1}^22t_{enc}^i)^3\over 6},
\end{equation}
where the ${(t-\sum_{i=1}^22t_{enc}^i)^3\over 6}$ comes from the integration over the three zero modes $t_i$. As before, the $s_i,u_i$ integral is nonzero only for the part of the integrand that is independent of $t_{enc}^i$.
This kills the $t^5$ and $t^4$ powers in the two 2-encounter configuration, leaving only a $t^3$ piece:
\begin{equation}\label{twoEncounters}
K_{E,(2)^2}(t)={t^3\over (2\pi)^3\rho(E)^2}.
\end{equation}
The 3-encounter (denoted as $(3)^1$) is a limiting case of the two 2-encounters where one of the stretches shrinks to zero.
It can be thought of as a sequential swap of pairs of trajectories where each swap leads to an action difference $s_i u_i$ between the swapped trajectories. These deviations $(s_i,u_i)$ can be related to the deviations between nearest neighbor trajectories $(\hat s_i, \hat u_i)$\footnote{See Sec.~II of \cite{muller2005periodic} for a detailed discussion.}, which determine the encounter duration $t_{enc}={1\over \lambda }\log{c^2\over \text{max}(\hat s_i)\text{max}(\hat u_i)}$. The contribution to the spectral form factor is
\begin{align}
K(t)\supset K_{E,(3)^1}(t)&={t\over 2\pi}{1\over (4\pi^2\rho(E))^2}\int \prod_{i=1}^2\d s_i \d u_i e^{\i \sum_{i=1}^2s_i u_i} {t\over 3 t_{enc}} {(t-3t_{enc})^2\over 2}\\ &=-{t^3\over (2\pi)^3\rho(E)^2}.\label{threeencounter}
\end{align}
In particular, these two contributions cancel (although GOE variants of them that include the possibility of time-reversed stretches do not cancel). In \cite{muller2005periodic} it was shown that this cancellation between the GUE encounters continues to hold to all orders in the ${1\over \rho_E}$ expansion, reproducing the RMT expectation that the ramp is exact before the plateau time.
\subsection{Sieber-Richter pair in JT}\label{section:SRJT}
In this section, we will explain the analog of the Sieber-Richter pair in JT gravity. As explained in \cite{Altland:2020ccq}, this corresponds to the topology of a cylinder with a crosscap inserted. We are grateful to Adel Rahman for collaboration on the calculations in this section.
The cylinder with a crosscap inserted corresponds to the following quotient of hyperbolic space:
\begin{equation}
\includegraphics[valign = c, scale = 1.1]{figures/xcap.pdf}\label{embeddingdiagram}
\end{equation}
The identification that defines the quotient is specified by gluing together the two geodesics with single arrows and also gluing together the two geodesics with double arrows, keeping in mind the orientation of the arrows.
The wiggly solid red segments form a single $S^1$ boundary, and have renormalized length $\beta_1$, which will be continued to $\beta + \i t$. Similarly, the wiggly dashed black segments form a single $S^1$ of renormalized length $\beta_2$, which will be continued to $\beta - \i t$.
The two curves labeled $b_1$ together form a circular geodesic of length $b_1$, and the two curves labeled $b_2$ form a circular geodesic of length $b_2$. The two lines labeled $a,a'$ form two circular geodesics that intersect at a point. These are ``one-sided'' geodesics, meaning that a neighborhood of either one is a Mobius strip, rather than a cylinder. Hyperbolic geometry imposes one constraint on these parameters \cite{Norbury}:
\begin{equation}\label{CCrelation}
\sinh(\frac{a}{4})\sinh(\frac{a'}{4})=\cosh(\frac{b_1+b_2}{4})\cosh(\frac{b_1-b_2}{4}).
\end{equation}
The geometry has a $\mathbb{Z}_2$ mapping class group that interchanges $a\leftrightarrow a'$. A convenient way to fix this is to parametrize the geometry by $a$ and to require that $a < a'$. This amounts to requiring $a < a^*$, where
\begin{equation}\label{CCcutoff}
\sinh^2(\frac{a^*}{4})=\cosh(\frac{b_1+b_2}{4})\cosh(\frac{b_1-b_2}{4}).
\end{equation}
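As a small numerical illustration (our addition, with arbitrary sample values of $b_1,b_2$): solving (\ref{CCrelation}) for $a'$ as a function of $a$, the value $a^*$ defined in (\ref{CCcutoff}) is precisely the fixed point $a = a'$, and $a < a^*$ indeed gives $a' > a$:

```python
import math

# Solve sinh(a/4) sinh(a'/4) = cosh((b1+b2)/4) cosh((b1-b2)/4) for a'(a),
# and check that a* with sinh²(a*/4) equal to the right-hand side is the
# fixed point a = a'.
b1, b2 = 2.3, 1.7  # arbitrary sample values
F = math.cosh((b1 + b2) / 4) * math.cosh((b1 - b2) / 4)

def a_prime(a):
    return 4.0 * math.asinh(F / math.sinh(a / 4))

a_star = 4.0 * math.asinh(math.sqrt(F))
print(a_prime(a_star) - a_star)  # ≈ 0
print(a_prime(1.0) > 1.0)        # True: below a*, the partner geodesic is longer
```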
The path integral of JT gravity on this space is then
\begin{equation}
2\times e^{-S_0}\int_0^\infty b_1\d b_1 b_2\d b_2 Z_{\text{Tr}}(\beta_1,b_1)Z_{\text{Tr}}(\beta_2,b_2)\int_{\epsilon}^{a^*} \frac{\d a}{2\tanh(\frac{a}{4})}.
\end{equation}
Let's explain each of the factors in this expression. The factor of two is from the possibility of an orientation reversal on going from one boundary to the other. The factor of $e^{-S_0}$ is from the topological weighting $e^{S_0\chi}$ where $\chi = -1$ is the Euler characteristic of the crosscap cylinder. The integral over $b_1$ comes with a factor of $b_1$ that represents the integral of the Weil-Petersson measure $\d b \wedge \d\tau$ over the twist parameter $\tau$. The factors of $Z_{\text{Tr}}(\beta,b)$ represent the integral over the boundary wiggles. Finally, the $a$ parameter is integrated with the crosscap measure \cite{Norbury,Gendulphe,Stanford:2019vob} with an upper limit specified by $a_*$ to account for the mapping class group. Note that the integral would be divergent near $a = 0$, which represents the fact that in JT gravity, the path integral on non-orientable surfaces is divergent. We regularized the integral by cutting it off at $\epsilon$, and we will see that this divergence does not survive the $\tau$-scaling limit.\footnote{This regularization corresponds e.g.~to studying the $(2,p)$ minimal string with large but finite $p$.}
To obtain the contribution to the spectral form factor, we continue the parameters $\beta_1,\beta_2$ to $\beta \pm \i t$:
\begin{equation}\label{canonicalCC}
\begin{aligned}
K_{\beta}(t)&\supset 2\times e^{-S_0}\int_0^\infty b_1\d b_1 b_2\d b_2\int_{\epsilon}^{a^*} \frac{\d a}{2\tanh(\frac{a}{4})} Z_{\text{Tr}}(\beta+it,b_1)Z_{\text{Tr}}(\beta-it,b_2) \\
&= 2\times e^{-S_0}\int_0^\infty b_1\d b_1 b_2\d b_2\int_{\epsilon}^{a^*} \frac{\d a}{2\tanh(\frac{a}{4})} \frac{e^{-b_1^2/(4(\beta+\i t))}}{\sqrt{4\pi (\beta+\i t)}}\frac{e^{-b_2^2/(4(\beta-\i t))}}{\sqrt{4\pi (\beta-\i t)}} \\
&\approx \frac{e^{-S_0}}{2\pi t}\int_0^\infty b_1\d b_1 b_2\d b_2\int_{\epsilon}^{a^*} \frac{\d a}{2\tanh(\frac{a}{4})} \exp\left\{\i\frac{b_1^2-b_2^2}{4t} - \frac{\beta}{4t^2}(b_1^2+b_2^2)\right\}.
\end{aligned}
\end{equation}
Here, the lower bound $\epsilon$ of the integration range of $a$ is the cutoff that regularizes the crosscap integral mentioned before -- it will drop out below. In the last step we used $t \gg \beta$.
Because the answer we expect is proportional to ${1\over \rho(E)}$, it is convenient to go to the microcanonical spectral form factor, using
\begin{align}
\int {\d\beta\over 2\pi \i} e^{2\beta E}Z_{\text{Tr}}(\beta+it,b_1)Z_{\text{Tr}}(\beta-it,b_2) &\approx {1\over 4\pi t} \int {\d \beta\over 2\pi \i} e^{2\beta E}\exp\left\{\i\frac{b_1^2-b_2^2}{4t} - \frac{\beta}{4t^2}(b_1^2+b_2^2)\right\} \\
&={1\over 4\pi t} \exp\left\{\i\frac{b_1^2-b_2^2}{4t}\right\}\delta\left({b_1^2+b_2^2\over 4t^2}-2E \right).
\end{align}
We can also evaluate the integral over $a$, getting\footnote{We choose to include in $V$ the factor of two from the sum over orientation reversal of one of the trumpets.}
\begin{equation}\label{eqn:v1/2_2}
V_{1/2}(b_1,b_2) = 2\times\int_\epsilon^{a_*}\frac{\d a}{2\tanh(\frac{a}{4})} = 2\log \cosh\frac{b_1+b_2}{4} + 2\log \cosh\frac{b_1-b_2}{4} - 4\log\frac{\epsilon}{4}.
\end{equation}
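As a numerical check of this closed form (our addition; it follows from the antiderivative $\int \d a\,\coth(a/4) = 4\log\sinh(a/4)$ together with $\sinh(\epsilon/4)\approx \epsilon/4$ for small $\epsilon$):

```python
import math

# Verify eq. (v1/2_2): 2 ∫_ε^{a*} da/(2 tanh(a/4)) against
# 2 log cosh((b1+b2)/4) + 2 log cosh((b1-b2)/4) - 4 log(ε/4),
# up to O(ε²) corrections from sinh(ε/4) ≈ ε/4.
b1, b2, eps = 3.1, 2.4, 0.01  # arbitrary sample values
F = math.cosh((b1 + b2) / 4) * math.cosh((b1 - b2) / 4)
a_star = 4.0 * math.asinh(math.sqrt(F))

n = 200001  # Simpson's rule on [eps, a*]
h = (a_star - eps) / (n - 1)
total = 0.0
for k in range(n):
    a = eps + k * h
    w = 1.0 if k in (0, n - 1) else (4.0 if k % 2 == 1 else 2.0)
    total += w / math.tanh(a / 4)
numeric = total * h / 3.0  # equals 2 × ∫ da / (2 tanh(a/4))

closed = (2 * math.log(math.cosh((b1 + b2) / 4))
          + 2 * math.log(math.cosh((b1 - b2) / 4))
          - 4 * math.log(eps / 4))
print(numeric, closed)
```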
We can now evaluate the microcanonical spectral form factor for fixed $E$ and large $t$:
\begin{align}
K_E(t)&\supset \frac{e^{-S_0}}{4\pi t} \int_0^\infty b_1\d b_1 b_2\d b_2 V_{1/2}(b_1,b_2) \exp\left\{\i\frac{b_1^2-b_2^2}{4t}\right\}\delta\left({b_1^2+b_2^2\over 4t^2}-2E \right)\label{structure}\\
&\approx e^{-S_0} {2t^2\sqrt{E}\over \pi} \int_{-\infty}^\infty \d (\delta b) e^{\i \sqrt{E} \delta b} (\log \cosh(\sqrt{E}t)+\log \cosh{\delta b\over 4} - 2\log\frac{\epsilon}{4})\label{deltab}\\
&=e^{-S_0} {2t^2\sqrt{E}\over \pi} \int_{-\infty}^\infty \d (\delta b) e^{\i \sqrt{E} \delta b}\log \cosh{\delta b\over 4}\\
&=-{t^2\over 2\pi ^2 \rho(E)}.\label{expdecay}
\end{align}
This matches the encounter result (\ref{eqn:k1/2}). In the last step we used the JT gravity density of states $\rho(E) = e^{S_0}\rho_0(E)$ with $\rho_0(E) = \sinh(2\pi \sqrt{E})/(2\pi)^2$.
We will now make a few remarks connecting this calculation to the encounter picture (see appendix \ref{app:soft} for more details). The $b_1,b_2$ parameters can be regarded as analogous to the lengths of the periodic orbits, with difference of orbit actions $\Delta S = S_\gamma - S_{\gamma'}$ analogous to $(b_1^2-b_2^2)/4t \approx \sqrt{E}\delta b$. With this understanding we can write the moduli space volume in JT as
\begin{equation}
V_{1/2}(b_1,b_2) = 2\sqrt{E} t + 2\log\cosh\frac{\Delta S}{4\sqrt{E}} + \text{const.}
\end{equation}
In periodic orbit theory, we should compare this moduli space volume to the integral over the parameters of the encounter with fixed action difference $\Delta S$:
\begin{equation}
\int_{-c}^{c}\d s \d u \delta(su-\Delta S) {t-2t_{enc}\over t_{enc}}=\lambda\left(t-2t_{enc}\right)=\lambda t+2\log{|\Delta S| \over c^2}.
\end{equation}
The Lyapunov exponent $\lambda$ from periodic orbit theory should be compared to the JT gravity chaos exponent $2\pi/\beta = 2\sqrt{E}$, so the terms linear in $t$ match. However, these terms drop out after integrating over $\Delta S$ with weighting $e^{\i \Delta S}$. Instead, the answer is determined by the subleading terms, and in particular by the locations of their singularities in the upper half-plane for $\Delta S$. In the JT case, the closest singularity to the real axis is at $\Delta S = 2\pi\i \sqrt{E}$, leading to the exponential suppression of (\ref{expdecay}).
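The exponential suppression can be checked directly (a numerical aside that we add here). Differentiating $\log\cosh(\delta b/4)$ twice gives $\tfrac{1}{16}\,\mathrm{sech}^2(\delta b/4)$, whose Fourier transform is $16\pi k/\sinh(2\pi k)$; the exponential decay in $k$ reflects the singularity of $\log\cosh(\delta b/4)$ at $\delta b = 2\pi\i$:

```python
import math

# Check ∫ e^{ikx} sech²(x/4) dx = 16πk / sinh(2πk) by Simpson quadrature.
# This is the (twice-differentiated) Fourier transform used in (expdecay).
def ft_sech2(k, L=100.0, n=200001):
    h = 2.0 * L / (n - 1)
    total = 0.0
    for j in range(n):
        x = -L + j * h
        w = 1.0 if j in (0, n - 1) else (4.0 if j % 2 == 1 else 2.0)
        total += w * math.cos(k * x) / math.cosh(x / 4) ** 2
    return total * h / 3.0

for k in (0.5, 1.0, 2.0):
    exact = 16 * math.pi * k / math.sinh(2 * math.pi * k)
    print(k, ft_sech2(k), exact)
```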
\section{Beyond encounters}\label{SectionFour}
In a GUE-like theory such as orientable JT gravity, the analog of encounters are expected to cancel exactly at fixed energy. In this section, we will discuss a convenient decomposition of the moduli space that separates the contributions of different encounters. This will make it possible to understand the encounter contributions and their cancellation, as well as the failure of their cancellation at low energies.
Instead of JT gravity, we will work with the simpler Airy model, which may be viewed as the low energy or low temperature limit of JT gravity, where one approximates the $\sinh( c \sqrt{E})$ density of states as $\rho(E) = c\sqrt{E} $. In this limit, the lengths of the asymptotic boundaries, as well as the lengths of any internal closed geodesics, go to infinity. One can see this by taking this limit in the JT gravity formula for partition functions as trumpets integrated against the Weil-Petersson (WP) volume
\begin{equation}\label{JTpartitionfunctions}
\langle Z(\beta_1)\dots Z(\beta_n)\rangle_{\text{JT}}\supset e^{\chi S_0} \int_0^\infty b_1 \d b_1 \dots \int_0^\infty b_n \d b_n \; \frac{e^{-\frac{b_1^2}{4\beta_1}}}{\sqrt{4\pi \beta_1}} \dots \frac{e^{-\frac{b_n^2}{4\beta_n}}}{\sqrt{4\pi \beta_n}} V_{g,n}(b_1,\dots, b_n).
\end{equation}
Partition functions for the Airy model can be obtained from the JT answers by an infinite rescaling of $\beta$, accompanied by a renormalization of $S_0$
\begin{equation}
\langle Z(\beta_1)\dots Z(\beta_n)\rangle_{\text{Airy}}= \lim_{\Lambda\rightarrow \infty} \Lambda^{\frac{3}{2}\chi} \langle Z(\Lambda \beta_1)\dots Z(\Lambda \beta_n)\rangle_{\text{JT}}
\end{equation}
To take this limit in (\ref{JTpartitionfunctions}), we rescale the $b_i$ by $\sqrt{\Lambda}$. The WP volumes are polynomials in the $b_i$, with degree $6g+2n-6$. We define the Airy volumes as
\begin{equation}
V_{g,n}^{\text{Airy}}(b_1\dots b_n) = \lim_{\Lambda\rightarrow \infty} \Lambda^{3-3g-n} V_{g,n}(\sqrt{\Lambda} b_1,\dots,\sqrt{\Lambda} b_n).
\end{equation}
These Airy volumes are then homogeneous polynomials in the $b_i$ of degree $6g+2n-6$, given by the leading powers of the full WP volumes. The Airy partition functions can be written as trumpets integrated against the Airy volumes, with $S_0 \rightarrow S_0 +\frac{3}{2}\log(\Lambda)$.
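For instance (an illustration we add, using Mirzakhani's standard volume $V_{1,1}(b) = (b^2+4\pi^2)/48$ as an outside input), for $(g,n)=(1,1)$ the prefactor is $\Lambda^{3-3g-n} = \Lambda^{-1}$ and the limit picks out the leading power $b^2/48$:

```python
import math

# Airy limit of Mirzakhani's V_{1,1}(b) = (b² + 4π²)/48 (standard input):
# Λ^{-1} V_{1,1}(√Λ b) → b²/48 as Λ → ∞.
def v11(b):
    return (b ** 2 + 4 * math.pi ** 2) / 48

b = 2.0
for lam in (1e2, 1e4, 1e6):
    print(lam, v11(math.sqrt(lam) * b) / lam)  # → b²/48 = 1/12 ≈ 0.08333
```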
In the limit where the boundary lengths $b_1\dots b_n$ become infinitely long, the surfaces counted by the WP volumes simplify. The Gauss-Bonnet theorem implies that a constant negative curvature surface with geodesic boundaries has a fixed volume proportional to its Euler character. As the lengths of the boundaries are going to infinity, the surfaces must become infinitely thin strips in order for the volume to remain fixed.
This thin strip limit allows for a simple decomposition of the moduli space of these surfaces, described by Kontsevich \cite{kontsevich1992intersection}, which will connect in a transparent way to the encounters discussed in the previous section and to the description of the Airy model using a double-scaled matrix integral. We now briefly review this decomposition, following \cite{DoThesis}.
\subsection{Kontsevich's decomposition of moduli space}
In the thin strip (Airy) limit, the moduli space can be described as a sum over trivalent ribbon graphs, together with an integral over the lengths of the edges that make up the graphs, subject to the constraint that the boundaries have lengths $b_i$:
\begin{equation}
V^{\text{Airy}}_{g,n}(b_1\dots b_n) =\sum_{\Gamma \in \Gamma_{g,n}}\frac{2^{2g-2+n}}{|\text{Aut}(\Gamma)|}\prod_{k = 1}^E \int_0^{\infty} \d l_k \prod_{i=1}^n \delta(b_i - \sum_{k = 1}^E n^i_k l_{k}).
\end{equation}
Here $E = 6g-6+3n$ is the number of edges in the graph, $l_k$ is the length of edge $k$, and $n^i_k \in \{0,1,2\}$ is the number of sides of edge $k$ that belong to boundary $i$.
The Laplace transform of this expression is a little simpler:
\begin{align}\label{LapVMat}
\tilde{V}^{\text{Airy}}_{g,n}(z_1\dots z_n) &\equiv \int_0^\infty \prod_{i = 1}^n\left[\d b_i e^{-b_i z_i}\right] V^{\text{Airy}}_{g,n}(b_1\dots b_n) \\ &=\sum_{\Gamma \in \Gamma_{g,n}}\frac{2^{2g-2+n}}{|\text{Aut}(\Gamma)|} \prod_{k=1}^{6g-6+3n}\frac{1}{z_{l(k)}+z_{r(k)}}.
\end{align}
Here $\Gamma_{g,n}$ is the set of trivalent ribbon graphs with genus $g$ and $n$ boundaries, constructed from $E=6g-6+3n$ edges and $V=4g-4+2n$ trivalent vertices. The $k$ variable runs over the $6g-6+3n$ edges, and the $l(k)\in \{1\cdots n \}$ index labels which boundary of the Riemann surface the left side of the ribbon belongs to. Similarly, $r(k)$ labels which boundary the right side of the ribbon belongs to.
We are interested in the case where there are two boundaries, so we will draw ribbon graphs with ribbon edges denoted by solid red ($1$) and dashed black ($2$) lines. Then a $11$ edge comes with a factor of $1/(2z_1)$, a $22$ edge comes with a factor of $1/(2z_2)$, and a $12$ edge comes with a factor of $1/(z_1+z_2)$. These ribbon graphs can be orientable or non-orientable, depending on what variety of JT gravity or Airy gravity we are interested in.\footnote{In the non-orientable case, one also has additional factors of two due to the possibility of inserting orientation reversing operators along particular cycles.} An example in the non-orientable case is
\begin{equation}
\includegraphics[scale = .65, valign = c]{figures/exampleGraph.pdf}
\end{equation}
This graph has two boundaries and genus one-half, and together with three other graphs discussed in section \ref{genusonehalfsubsection} below, it gives the Kontsevich-graph description of the Sieber-Richter two-encounter.
We will also consider the graphs with two boundaries and genus one. To enumerate the graphs, a useful fact is that all of the orientable graphs for fixed $(g,n)$ can be obtained from a single graph by repeatedly applying the cross operation (or Whitehead collapse) \cite{Whitehead1936equivalent,Penner1988perturbative}:
\begin{equation}\label{crossOp}
\includegraphics[valign = c, scale = .7]{figures/cross.pdf}
\end{equation}
This is consistent with the fact that the moduli space $\overline{\mathcal{M}}_{g,n}$ is connected: if we back off of the Airy limit of JT gravity, then the strips have finite width, and the operation (\ref{crossOp}) is a smooth transition.
\subsection{Genus one-half}\label{genusonehalfsubsection}
In this section we will illustrate the connection between encounters and Kontsevich graphs by studying the example of genus one-half, with two boundaries. In this case, the volume of the moduli space is
\begin{equation}\label{Airygenusonehalfvol}
V^{\text{Airy}}_{\frac{1}{2},2}(b_1,b_2) = \text{Max}(b_1,b_2).
\end{equation}
This can be obtained by taking the large $b_1,b_2$ limit of the JT gravity answer (\ref{eqn:v1/2_2}). To take this limit, one drops the constant piece and replaces $\log \cosh(\frac{b_1\pm b_2}{4})$ with $\frac{1}{4}|b_1\pm b_2|$.
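Concretely (a numerical aside we add): up to the $b$-independent constant that was dropped, the JT expression approaches $\text{Max}(b_1,b_2)$, with the difference saturating at $-4\log 2$ once both $b_1+b_2$ and $|b_1-b_2|$ are large:

```python
import math

# Large-b check: 2 log cosh((b1+b2)/4) + 2 log cosh((b1-b2)/4) - Max(b1, b2)
# → -4 log 2 (the dropped constant) when b1+b2 and |b1-b2| are both large.
def v_half_b_dependence(b1, b2):
    return (2 * math.log(math.cosh((b1 + b2) / 4))
            + 2 * math.log(math.cosh((b1 - b2) / 4)))

b1, b2 = 300.0, 200.0
print(v_half_b_dependence(b1, b2) - max(b1, b2))  # ≈ -4 log 2 ≈ -2.7726
```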
There are four Kontsevich graphs with two boundaries and genus one half:
\begin{equation}
\includegraphics[scale = .55,valign = c]{figures/g=halfgraphs.pdf}
\end{equation}
Here the graphs are labeled by $(n_{11},n_{22})$, the number of $11$ and $22$ propagators. The contributions of these graphs to $\tilde{V}(z_1,z_2)$ are
\begin{align}
(1,0):\hspace{10pt} \frac{c_{1,0}}{z_1 (z_1+z_2)^2} &, \hspace{20pt} (0,1):\hspace{10pt} \frac{c_{0,1}}{z_2 (z_1+z_2)^2} \\
(2,0):\hspace{10pt} \frac{c_{2,0}}{z_1^2 (z_1+z_2)} &, \hspace{20pt} (0,2):\hspace{10pt} \frac{c_{0,2}}{z_2^2 (z_1+z_2)}
\end{align}
where the coefficients $c_{0,1} = c_{1,0}$ and $c_{0,2} = c_{2,0}$ are determined by the symmetry factor of the graph, together with a factor of two from the possibility of orientation reversal along one boundary.
Rather than computing the symmetry factors, we can compute $c_{1,0}$ and $c_{2,0}$ indirectly by matching to the volume (\ref{Airygenusonehalfvol}). To find the contribution of each graph to the volume, we take the inverse Laplace transform, for example
\begin{equation}
(1,0):\hspace{10pt} \int_{\gamma+i\mathbb{R}}\frac{\d z_1}{2\pi i} \frac{\d z_2}{2\pi i}e^{b_1 z_1 +b_2 z_2} \frac{c_{1,0}}{z_1 (z_1+z_2)^2} = c_{1,0} b_2 \theta(b_1-b_2).
\end{equation}
Together with a similar term from $(0,1)$, this gives
\begin{equation}\label{genusonehalfvolsfirstgraph}
(1,0)+(0,1) = c_{1,0} \text{min}(b_1,b_2).
\end{equation}
Similarly,
\begin{equation}
(2,0) + (0,2) = c_{2,0} |b_1-b_2|.
\end{equation}
To match to (\ref{Airygenusonehalfvol}) we conclude that $c_{1,0} = c_{2,0} = 1$.
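These coefficients can be cross-checked numerically (our addition): the Laplace transform of $\text{Max}(b_1,b_2)$ should reproduce the sum of the four graph terms with $c_{1,0}=c_{2,0}=1$. Reducing the double integral to one dimension (on the region $b_1 > b_2$ one has $\text{Max} = b_1$, and the inner $b_2$-integral is elementary):

```python
import math

# Check ∫∫ db1 db2 e^{-z1 b1 - z2 b2} Max(b1,b2) against the four graph terms
# 1/(z1(z1+z2)²) + 1/(z2(z1+z2)²) + 1/(z1²(z1+z2)) + 1/(z2²(z1+z2))
# with c_{1,0} = c_{2,0} = 1.
z1, z2 = 1.0, 2.0  # arbitrary sample values

def integrand(b):
    # region b1 > b2 (Max = b1, inner integral done analytically) + mirror
    return (b * math.exp(-z1 * b) * (1 - math.exp(-z2 * b)) / z2
            + b * math.exp(-z2 * b) * (1 - math.exp(-z1 * b)) / z1)

L, n = 60.0, 20001  # Simpson's rule; tail beyond b = 60 is negligible
h = L / (n - 1)
total = 0.0
for k in range(n):
    b = k * h
    w = 1.0 if k in (0, n - 1) else (4.0 if k % 2 == 1 else 2.0)
    total += w * integrand(b)
numeric = total * h / 3.0

zsum = z1 + z2
graphs = (1 / (z1 * zsum**2) + 1 / (z2 * zsum**2)
          + 1 / (z1**2 * zsum) + 1 / (z2**2 * zsum))
print(numeric, graphs)  # both ≈ 0.5833
```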
The corresponding contributions to the spectral form factor
\begin{equation}\label{b1b2integralGenusHalf}
\langle Z(\beta_1)Z(\beta_2)\rangle \supset e^{-S_0}\int \frac{b_1 \d b_1}{\sqrt{4\pi \beta_1}}\frac{b_2 \d b_2}{\sqrt{4\pi \beta_2}} e^{-\frac{b_1^2}{4\beta_1} - \frac{b_2^2}{4\beta_2}}V(b_1,b_2)
\end{equation}
are then (keeping the leading power of $t$)
\begin{align}
(1,0) + (0,1) &= e^{-S_0}\frac{t^2}{\sqrt{2\pi\beta}}\\
(2,0) + (0,2) &= -2e^{-S_0}\frac{t^2}{\sqrt{2\pi\beta}}
\end{align}
The sum of these contributions is $-e^{-S_0}t^2/\sqrt{2\pi\beta}$, the Laplace transform of the microcanonical answer $-e^{-S_0} t^2/(\pi\sqrt{E})$, which matches the two-encounter contribution (\ref{eqn:k1/2}) in the special case of the Airy density of states $\rho(E)=\frac{\sqrt{E}}{2\pi}$. Of course, this follows from the low-energy limit of the match we previously found in JT gravity. The interesting feature is that both classes of graphs contribute at the same order, and we have to sum both in order to reproduce the answer from the encounter.
The $(1,0)$ and $(0,1)$ graphs naively resemble a two-encounter; if one shrinks away the $11$ (or $22$) propagator, we find a graph with only 12 propagators and a quartic vertex. The $12$ propagators correspond to nearly parallel stretches of the $1$ and $2$ geodesic boundaries on the surface, so these graphs represent contributions for which the two boundaries are nearly parallel in a pattern that matches the Sieber-Richter pair. Of course, in computing the spectral form factor one glues on trumpets to the surface with geodesic boundaries, but the asymptotic boundaries also remain almost parallel for the stretches. Along these stretches, the geometry locally looks like the double-cone (or a non-orientable ``twisted" double-cone).
The $(2,0)$ and $(0,2)$ graphs are not as obviously connected to encounter theory, but they do represent a small part of the moduli space integral that is analogous to the $s,u$ integration in the encounter. To see which part of moduli space it corresponds to, consider the $a$ geodesic. In the Airy limit, this is simply the shortest loop on the Kontsevich graph that includes the twisted edge. For the $(2,0)$ and $(0,2)$ graphs, this means that $a$ is the twisted edge itself, which forms a loop shorter than $|b_1-b_2|/2$. For the $(1,0)$ and $(0,1)$ graphs, $a$ corresponds to a loop that includes the twisted edge plus the shorter untwisted edge, with total length longer than $|b_1-b_2|/2$. So the two classes of graphs divide the moduli space up as
\begin{align}
V^{(\text{Airy})}_{\frac{1}{2},2}(b_1,b_2) &= 2\int_0^{a^*=\frac{1}{2}\text{Max}(b_1,b_2)} \d a \\ &= 2\underbrace{\int_{\frac{|\delta b|}{2}}^{a^*=\frac{1}{2}\text{Max}(b_1,b_2)} \d a}_{(1,0)+(0,1)= \frac{1}{2} \text{Min}(b_1,b_2)}+ 2\underbrace{\int_0^{\frac{|\delta b|}{2}}\d a}_{(2,0)+(0,2)=\frac{|\delta b|}{2}}.
\end{align}
By splitting the integral into two parts, we introduce ``fictitious" endpoint contributions, proportional to $|\delta b|$, which cancel between the two graphs.
We can understand the geometry a bit better by fattening the Kontsevich graphs up and connecting them to the embedding space diagram (\ref{embeddingdiagram}). Here we will focus on the part of the embedding diagram bounded by the $b_1$, $b_2$ geodesics, removing the asymptotic trumpets. The two classes of Kontsevich graphs correspond to two limiting embedding space diagrams, with the $a$, $a'$ geodesics shown:
\begin{equation*}\label{limitingdiagrams}
\includegraphics[valign = c, width = \textwidth]{figures/xcap2.pdf}
\end{equation*}
On the left we start with a diagram similar to the middle of (\ref{embeddingdiagram}). A limiting case of this diagram represents a strip-like geometry. Upon making the identifications indicated by the arrows, we end up with the $(1,0)$ graph. On the right we begin with a somewhat different-looking embedding space diagram, which limits to the $(2,0)$ graph.
Though the two embedding diagrams that we start with look somewhat different, we can see that their topology is the same after making the indicated identifications. To see this more clearly, we may cut the embedding space diagram corresponding to the $(2,0)$ graph, then glue a pair of the identified edges to end up with an embedding space diagram resembling the $(1,0)$ diagram.
\begin{equation}
\includegraphics[valign = c, scale = 1.2]{figures/xcap3.pdf}
\end{equation}
After cutting and gluing the $(2,0)$, it must also be deformed somewhat to match the $(1,0)$ diagram; for instance, the newly cut geodesic, with an identification indicated by three arrows, is ``long" on the left diagram, but ``short" on the right. This corresponds to the fact that, as shown, each of these two embedding space diagrams represents a different limiting region of moduli space, corresponding to the distinct $(1,0)$ and $(2,0)$ graphs. The limiting case of this deformation corresponds to the cross operation on the ``middle" edge of the $(2,0)$ graph.
\subsection{Genus one}
We now turn to our main interest, which is the first nontrivial ($\tau^3$) term in the series (\ref{airyanssec1}) for a GUE-like theory. This term arises at genus one. At genus one with two boundaries, the volume of the moduli space in the Airy limit is
\begin{equation}\label{AIRY12}
V^{\text{Airy}}_{1,2}(b_1,b_2) = \frac{(b_1^2+b_2^2)^2}{192}.
\end{equation}
Integrating this against trumpet wave functions and taking the limit of large $t$ leads to the term $-\tau^3/(6\pi)$ in the spectral form factor (\ref{airyanssec1}). We can gain a bit of insight by understanding how this contribution arises from different Kontsevich graphs, which can be related in turn to encounters.
\begin{figure}
\includegraphics[width = \textwidth]{figures/g=1graphs.pdf}
\caption{{\sf \small The nine Kontsevich graphs with two boundaries and genus one consist of these five, together with another four given by interchanging the solid/red and dashed/black lines on the last four graphs. The dashed/black lines correspond to $1$ boundaries, and the solid/red lines correspond to $2$. The gray lines with arrows show what happens if we apply a cross operation to a given edge.}}\label{figGenusOneGraphs}
\end{figure}
The Kontsevich graphs that contribute to $V_{1,2}$ have six propagators total, which can be $11$, $22$ and $12$ propagators. Up to symmetries, there are nine distinct graphs, see Figure \ref{figGenusOneGraphs}, and they can be characterized by the number of $11$ and $22$ propagators,
\begin{equation}\label{k1k2}
(5,0), \ (4,0), \ (3,0), \ (2,0), \ (1,1), \ (0,2), \ (0,3), \ (0,4), \ (0,5).
\end{equation}
For example, the $(5,0)$ graph has five $11$ propagators, zero $22$ propagators, and one $12$ propagator. It is given by
\begin{equation}
\frac{c_{5,0}}{z_1^5 (z_1+z_2)}
\end{equation}
where the constant $c_{5,0}$ can be computed by evaluating the symmetry factor of the graph. As in genus one-half, these factors can be determined indirectly by matching to (\ref{AIRY12}). For example, after inverse Laplace transforming this, we find that the contribution to the volume $V^{\text{Airy}}_{1,2}(b_1,b_2)$ is
\begin{equation}
\int_{\gamma + \i \mathbb{R}}\frac{\d z_1}{2\pi \i}\frac{\d z_2}{2\pi\i} e^{b_1 z_1 + b_2 z_2} \frac{c_{5,0}}{z_1^5 (z_1+z_2)} = \frac{c_{5,0}}{24}(b_1-b_2)^4\theta(b_1-b_2).
\end{equation}
One can work out a similar expression for each of the $(k_1,k_2)$ cases in (\ref{k1k2}), and the coefficients $c_{k_1,k_2}$ are uniquely determined by the condition that the contributions of all of the graphs should add up to (\ref{AIRY12}). Explicitly,
\begin{equation}
c_{5,0} = c_{4,0} = \frac{1}{8}, \hspace{20pt} c_{3,0} = \frac{1}{6}, \hspace{20pt} c_{2,0} = \frac{1}{4}, \hspace{20pt} c_{1,1} = \frac{1}{2}
\end{equation}
and equal values for $k_1\leftrightarrow k_2$.
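As a cross-check (our addition), one can assemble the volume from the inverse Laplace transforms of the individual graphs. On the region $b_1 > b_2$ these are elementary, e.g.\ $(5,0) \to c_{5,0}(b_1-b_2)^4/24$ and $(1,1) \to c_{1,1}\min(b_1,b_2)^4/24$ from $1/(z_1 z_2(z_1+z_2)^4)$, and with the coefficients above they sum to (\ref{AIRY12}):

```python
# Sum the nine genus-one graph contributions (sketched under the stated
# conventions) and compare with V^Airy_{1,2}(b1,b2) = (b1²+b2²)²/192.
# Inverse Laplace transforms on the region b1 > b2 (d = b1 - b2):
#   (5,0): c d⁴/24        (4,0): c b2 d³/6     (3,0): c b2² d²/4
#   (2,0): c b2³ d/6      (1,1): c min(b1,b2)⁴/24
# The mirror graphs (0,k) contribute on b2 > b1, handled via max/min below.
def volume_from_graphs(b1, b2):
    hi, lo = max(b1, b2), min(b1, b2)
    d = hi - lo
    return ((1 / 8) * d**4 / 24
            + (1 / 8) * lo * d**3 / 6
            + (1 / 6) * lo**2 * d**2 / 4
            + (1 / 4) * lo**3 * d / 6
            + (1 / 2) * lo**4 / 24)

for b1, b2 in [(3.0, 1.7), (0.4, 5.2), (2.0, 2.0)]:
    exact = (b1**2 + b2**2) ** 2 / 192
    print(volume_from_graphs(b1, b2), exact)
```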
The contribution of a given graph to the spectral form factor is then obtained from
\begin{equation}\label{b1b2integral}
\langle Z(\beta_1)Z(\beta_2)\rangle \supset e^{-2S_0}\int \frac{b_1 \d b_1}{\sqrt{4\pi \beta_1}}\frac{b_2 \d b_2}{\sqrt{4\pi \beta_2}} e^{-\frac{b_1^2}{4\beta_1} - \frac{b_2^2}{4\beta_2}}\int_{\gamma + \i \mathbb{R}}\frac{\d z_1}{2\pi \i}\frac{\d z_2}{2\pi\i} e^{b_1 z_1 + b_2 z_2}\frac{c_{k_1,k_2}}{z_1^{k_1}z_2^{k_2}(z_1+z_2)^{6-k_1-k_2}}.
\end{equation}
This integral reduces to a sum of hypergeometric functions. We can simplify the expression by setting $\beta_1 = \beta + \i t$ and $\beta_2 = \beta - \i t$ with large $t$, and keeping all terms that grow at order $t^3$ or faster. This leads to
\begin{align}
(5,0) + (0,5) &= e^{-2S_0}\frac{t^3}{6\pi}\\
(4,0) + (0,4) &= e^{-2S_0}\frac{t^3}{6\pi}\left(3\log\frac{2t}{\beta} - 9\right)\\
(3,0) + (0,3) &= e^{-2S_0}\frac{t^3}{6\pi}\left(-6\log\frac{2t}{\beta}+10\right)\\
(2,0) + (0,2) &= e^{-2S_0}\frac{t^3}{6\pi}\left(-\frac{t^2}{\beta^2} + 3\log\frac{2t}{\beta}-3\right)\label{02}\\
(1,1) &= e^{-2S_0}\frac{t^3}{6\pi}\cdot\frac{t^2}{\beta^2}\label{11}
\end{align}
The sum of these contributions gives $-e^{-2S_0}t^3/(6\pi) = - e^{S_0}\tau^3/(6\pi)$, which produces the cubic term in (\ref{airyanssec1}). However, individual graphs contain terms that grow faster with time.
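The cancellation pattern in this sum can be made explicit with a trivial bookkeeping check (our addition):

```python
# Bookkeeping check for the sum of the genus-one graph contributions.
# Each entry records (coefficient of 1, of log(2t/β), of t²/β²)
# in units of e^{-2S0} t³/(6π).
contribs = {
    "(5,0)+(0,5)": (1, 0, 0),
    "(4,0)+(0,4)": (-9, 3, 0),
    "(3,0)+(0,3)": (10, -6, 0),
    "(2,0)+(0,2)": (-3, 3, -1),
    "(1,1)":       (0, 0, 1),
}
total = tuple(sum(c[i] for c in contribs.values()) for i in range(3))
print(total)  # (-1, 0, 0): the log and t²/β² terms cancel, leaving -t³/(6π)
```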
\subsubsection{Encounters}
As in the genus one-half case from section \ref{genusonehalfsubsection}, we can make a map from Kontsevich graphs to encounters by shrinking the $11$ and $22$ edges to form a graph with only $12$ edges but with higher-degree vertices. If we do this, the $(1,1)$, $(2,0)$, and $(0,2)$ graphs will correspond to a case with two two-encounters, and the $(3,0)$, $(0,3)$, $(4,0)$, and $(0,4)$ graphs will correspond to a three-encounter. After shrinking the $11$ or $22$ edges, the final graphs $(5,0)$ and $(0,5)$ do not correspond to an encounter, but in parallel to the discussion of the $(2,0)$ and $(0,2)$ graphs from genus one-half, we believe they should be considered part of the extended three-encounter moduli space.
Let's examine the Kontsevich graphs that correspond to a pair of two-encounters. The contribution is the sum of (\ref{02}) and (\ref{11}), which gives
\begin{equation}
(2,0) + (0,2) + (1,1) = e^{-2S_0}\frac{t^3}{2\pi}\left(\log\frac{2t}{\beta} - 1\right).\label{graph2enc}
\end{equation}
We would like to compare this to the semiclassical answer for a pair of two-encounters (\ref{twoEncounters}), evaluated for the density of states of the Airy model, $\rho(E) = \sqrt{E}/(2\pi)$:
\begin{align}
\text{two two-encounters} &= e^{-2S_0}\frac{t^3}{2\pi E}.
\end{align}
In the canonical ensemble, this gives the naive expression
\begin{equation}
\text{two two-encounters} \stackrel{?}{=} e^{-2S_0}\frac{t^3}{2\pi}\int_0^\infty\frac{\d E}{E} e^{-2\beta E}.
\end{equation}
The reason this expression is naive is that at very low energies, the encounter picture breaks down, because the action is small enough that we no longer require orbits to form pairs whose actions nearly cancel.
For the case of genus one-half, this breakdown was not significant because the analogous integral over energy was $\int \d E e^{-2\beta E}/ \sqrt{E}$ which is convergent. But in the present case, the integral diverges and the cutoff associated to the breakdown of encounter theory becomes important. We can estimate the energy of the breakdown from the point where the action $S \sim E t$ becomes of order one, which gives $E \sim 1/t$. A revised estimate for the semiclassical encounter contribution would then be
\begin{equation}\label{twotwoencounters}
\text{two two-encounters} \stackrel{?}{=} e^{-2S_0}\frac{t^3}{2\pi}\int_{1/t}^\infty\frac{\d E}{E} e^{-2\beta E} = e^{-2S_0}\frac{t^3}{2\pi}\left(\log\frac{t}{\beta} +\text{const}\right).
\end{equation}
This matches the form of (\ref{graph2enc}).
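As a quick numerical sanity check (our own illustration, not part of the paper's calculation), one can verify that the cutoff integral in (\ref{twotwoencounters}) indeed grows as $\log(t/\beta)$ plus a constant; for this integral the constant is $-\gamma - \log 2$, with $\gamma$ the Euler--Mascheroni constant. The sketch below assumes NumPy is available.

```python
# Numerically check that I(t) = \int_{1/t}^\infty (dE/E) e^{-2 beta E}
# behaves as log(t/beta) + const for large t, with const = -gamma - log(2).
import numpy as np

def cutoff_integral(t, beta, n=200_000):
    # A log-spaced grid resolves the 1/E behaviour near the lower cutoff;
    # the exponential makes the tail beyond E ~ 25/beta negligible.
    E = np.geomspace(1.0 / t, 25.0 / beta, n)
    f = np.exp(-2.0 * beta * E) / E
    # trapezoid rule (written out to avoid NumPy version differences)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E)))

beta = 1.0
diffs = [cutoff_integral(t, beta) - np.log(t / beta) for t in (50.0, 200.0, 800.0)]
print(diffs)  # drifts toward -euler_gamma - log(2), roughly -1.270, as t grows
```

This also makes the divergence at $t \to \infty$ of the naive $\int_0^\infty \d E/E$ version explicit: removing the $1/t$ cutoff sends the integral to infinity logarithmically.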
One can similarly find agreement between the predicted contribution of a three-encounter and the sum of the graphs $(5,0) + (0,5) + (4,0) + (0,4) + (3,0) + (0,3)$. In particular, the cancellation of the encounters demonstrated in \cite{muller2005periodic} is visible here in the fact that the log terms cancel between the graphs summed in (\ref{graph2enc}) and the three-encounter graphs.
\subsubsection{Beyond encounters}
Because the $t^3\log(t)$ terms cancel, the entire contribution comes from the $t^3$ terms, and in encounter language, these contributions depend on the details of the small energy region (e.g.~the precise cutoff one uses in (\ref{twotwoencounters})). This cannot be computed using standard encounter theory. However, the Kontsevich graphs continue to be valid for all energies. In this sense, the Kontsevich graphs give a quantum completion of the semiclassical encounter theory for this particular system.
It is interesting to understand the region of the $b_1,b_2$ integral (\ref{b1b2integral}) that is relevant for the $t^3\log(t)$ pieces that cancel out vs.~the full $t^3$ answer. The log terms arise from non-analyticities at $b_1 = b_2$, where the phases contributed by the trumpet wave functions cancel. This is analogous to the fact that encounter contributions in periodic-orbit theory arise from a non-analyticity in the region $\Delta S = 0$, where a pair of orbits have approximately the same length and actions that cancel.
However, the full moduli space volume is analytic in $b_1,b_2$, which implies that the log terms must cancel when we sum over graphs. What region of the $b_1,b_2$ integral is important for producing the leftover $t^3$? We have an integral of the form
\begin{equation}
\frac{1}{t}\int_0^\infty b_1 \d b_1 b_2 \d b_2 e^{\i (b_1^2-b_2^2)/(4t)} (b_1^2-b_2^2)^2.
\end{equation}
In this integral, $b_1^2\sim t$ and $b_2^2\sim t$, but with no particular preference for the region where $b_1 = b_2$. So the $1$ and $2$ boundaries have significantly different lengths, and the $11$ or $22$ portions of the Kontsevich graphs are as long as the $12$ portions. This corresponds to the idea that we are probing low energies, so that action $b_1^2/t$ is of order one and it does not need to cancel between the two ``orbits.'' Note that in periodic orbits, the analog of this region would be outside the regime of the validity of the semiclassical encounter approximations.
\section{Discussion}\label{Discussion}
In the Airy model at genus one, we found that the answer for $K_\beta(t)$ came from an integral over a large portion of moduli space. This poses a challenge for understanding the geometric origin of the series for $K_\beta(t)$ at higher genus and for theories with other densities of states; $K_\beta(t)$ has a universal form, fixed entirely by $\rho_0(E)$, but this universal answer comes from a highly quantum integral over moduli space. This suggests that there is some universal structure in the moduli space responsible for this series. Though we have not understood this structure, our findings in the Airy model hint at a relationship with encounters.
In the Airy model the moduli space has a natural structure, given by the Kontsevich graphs. In a sense we made precise at genus one-half and genus one, the whole moduli space should be thought of as made up of ``quantum corrected'' encounters, valid at very low energies. Perhaps in JT (and even in more general large $N$ chaotic systems) there is a ``fattened'' version of this quantum encounter region of moduli space that is responsible for the answer, rather than the entire moduli space.
We can see a hint that the connection between the genus expansion for $K_\beta(t)$ and encounters generalizes to higher genus/other spectral curves by generalizing the estimate (\ref{twotwoencounters}) of $K_\beta(t)$ from encounters. An encounter configuration is expected to give a contribution to $K_E(t)$ proportional to $t^{2g+1}/\rho(E)^{2g}$. An estimated contribution of the encounter to $K_\beta(t)$, generalizing (\ref{twotwoencounters}) and extrapolating to low energies, would then be\footnote{We have dropped all terms that would be small in the $\tau$-scaling limit.}
\begin{align}\label{highergenuscutoff}
K_\beta(t) & \stackrel{?}{\supset} C e^{-2g S_0} t^{2g+1} \int_{\frac{1}{t}}^{\infty} \frac{\d E}{\rho_0(E)^{2g}} e^{-2\beta E},
\cr
&= C e^{-2g S_0} t^{2g+1} \bigg[P_g^{(\rho)}(\beta) \bigg(\log\frac{t}{\beta}+\text{const}\bigg) + \text{higher powers of } t\bigg].
\end{align}
Here $P_{g}^{(\rho)}(\beta)$ is a polynomial in $\beta$ of degree $g-1$, whose coefficients depend on the first $g$ coefficients in the expansion (\ref{generaldensity}) for $\rho_0(E)$.
Summing over encounters at each genus, the familiar cancellations between encounters in $K_E(t)$ imply that the log terms cancel, leaving us with cutoff-dependent terms which may or may not cancel between encounters. We can compare this estimate with the conjecture (\ref{tauexpansionintro}) for $K_\beta(t)$
\begin{equation}\label{Kbetaexpansion}
K_\beta(t) = \sum_{g=0}^\infty P^{(\rho)}_{g}(\beta) \; e^{-2 g S_0}\; t^{2g+1}.
\end{equation}
In appendix \ref{Pappendix} we show that the polynomials $P^{(\rho)}_{g}(\beta)$ in (\ref{highergenuscutoff}) and (\ref{Kbetaexpansion}) are indeed the same.\footnote{Up to an overall genus-dependent coefficient which can be absorbed into the coefficient $C$ in (\ref{highergenuscutoff}).} So the ``const'' terms in the estimate (\ref{highergenuscutoff}) match the genus $g$ contribution in (\ref{Kbetaexpansion}), up to an overall cutoff-dependent factor.
Another set of questions concerns the relationship between the genus expansion for $K_\beta(t)$ and other approaches to understanding the plateau, such as the sigma model approach \cite{Wegner:1979tu,doi:10.1080/00018738300101531,PhysRevLett.75.902}, the Riemann-Siegel lookalike formula \cite{Berry_1990,Keating:1992tq,BerryKeatin1992}, and orbit action correlation functions \cite{argaman1993correlations}. Understanding the relationship between these approaches and the approach taken in this paper may be useful for learning lessons about theories that do not have a $\tau$-scaled spectral form factor. We discuss the sigma model approach in \hyperref[Appendix A]{Appendix A} and the action correlation approach in \hyperref[Appendix B]{Appendix B}.
\section*{Acknowledgements}
We thank Alexander Altland, Adel Rahman, Julian Sonner and the authors of \cite{Blommaert:2022lbh,Weber:2022sov} for discussions and Raghu Mahajan and Stephen Shenker for initial collaboration. PS is supported by a grant from the Simons Foundation (385600, PS), and by NSF grant PHY-2207584. DS is supported in part by DOE grant DE-SC0021085 and by the Sloan Foundation. ZY is supported in part by the Simons Foundation.
\section{Introduction}
\label{sec:introduction}
Evolutionary game theory in spatial environments has attracted much interest
from researchers who seek to understand cooperative behaviour among rational
individuals in complex environments. Many models have considered scenarios
in which participants' interactions are constrained by particular graph
topologies, such as lattices \cite{Hauert2002,Nowak1992}, small-world graphs
\cite{Chen2008,Fu2007}, scale-free graphs \cite{Szolnoki2016,Xia2015} and
bipartite graphs \cite{Gomez2011}. It has been shown that the spatial
organisation of strategies on these topologies affects the evolution of
cooperation \cite{Cardinot2016sab}.
The Prisoner's Dilemma (PD) game remains one of the most studied games in
evolutionary game theory as it provides a simple and powerful framework to
illustrate the conflicts inherent in the formation of cooperation. In addition,
some extensions of the PD game, such as the Optional Prisoner's Dilemma (OPD)
game, have been studied in an effort to investigate how levels of cooperation
can be increased. In the OPD game, participants are afforded a third option ---
that of abstaining from playing, thus obtaining the loner's payoff ($L$).
Incorporating this concept of abstention leads to a three-strategy game where
participants can choose to cooperate, defect or abstain from a game
interaction.
The vast majority of the spatial models in previous work have used static and
unweighted networks. However, in many social scenarios that we wish to model,
such as social networks and real biological networks, the number of
individuals, their connections and environment are often dynamic. Thus, recent
studies have also investigated the effects of evolutionary games played on
dynamically weighted networks
\cite{Huang2015,Wang2014,Cao2011,Szolnoki2009,Zimmermann2004} where it has
been shown that the coevolution of both networks and game strategies can play a
key role in resolving social dilemmas in a more realistic scenario.
In this paper we define and explore the Coevolutionary Optional Prisoner's
Dilemma (COPD) game, which is a simple coevolutionary spatial model where both
the game strategies and the link weights between agents evolve over time. In
this model, the interaction between agents is described by an OPD game.
Previous research on spatial games has shown that when the temptation to defect
is high, defection is the dominant strategy in most cases. We believe that the
combination of both optional games and coevolutionary rules can help in the
emergence of cooperation in a wider range of scenarios.
Thus, given the Coevolutionary Optional Prisoner's Dilemma game (i.e., an OPD
game in a spatial environment, where links between agents can be evolved), the
aims of the work are to understand the effect of varying the parameters $T$
(temptation to defect), $L$ (loner's payoff), $\Delta$ and $\delta$ for both
unbiased and biased environments.
By investigating the effect of these parameters, we aim to:
\begin{itemize}
\item Compare the outcomes of the COPD game with other games.
\item Explore the impact of the link update rules and its properties.
\item Investigate the evolution of cooperation when abstainers are present
in the population.
\item Investigate how many abstainers would be necessary to guarantee
robust cooperation.
\end{itemize}
The results show that cooperation emerges even in extremely adverse scenarios
where the temptation to defect is almost at its maximum. It can be observed
that the presence of the abstainers are fundamental in protecting cooperators
from invasion. In general, it is shown that, when the coevolutionary rules are
used, cooperators do much better, being also able to dominate the whole
population in many cases. Moreover, for some settings, we also observe
interesting phenomena of cyclic competition between the three strategies, in
which abstainers invade defectors, defectors invade cooperators and cooperators
invade abstainers.
The paper outline is as follows: Section~\ref{sec:related} presents a brief
overview of the previous work in both spatial evolutionary game theory with
dynamic networks and in the Optional Prisoner's Dilemma game.
Section~\ref{sec:methodology} gives an overview of the methodology employed,
outlining the Optional Prisoner's Dilemma payoff matrix, the coevolutionary
model used (Monte Carlo simulation), the strategy and link weight update rules,
and the parameter values that are varied in order to explore the effect of
coevolving both strategies and link weights.
Section~\ref{sec:results1} discusses the benefits of combining the concept of
abstention and coevolution.
Section~\ref{sec:results2} further explores the effect of using the COPD game
in an unbiased environment.
Section~\ref{sec:results3} investigates the robustness of cooperative behaviour
in a biased environment.
Finally, Section~\ref{sec:conclusion} summarizes the main conclusions and
outlines future work.
\section{Related Work}
\label{sec:related}
The use of coevolutionary rules constitutes a new trend in evolutionary game
theory. These rules were first introduced by Zimmermann et al.
\cite{Zimmermann2001}, who proposed a model in which agents can adapt their
neighbourhood during a dynamical evolution of game strategy and graph topology.
Their model uses computer simulations to implement two rules: firstly, agents
playing the Prisoner's Dilemma game update their strategy (cooperate or defect)
by imitating the strategy of an agent in their neighbourhood with a higher
payoff; and secondly, the network is updated by allowing defectors to break
their connection with other defectors and replace the connection with a
connection to a new neighbour selected randomly from the whole network.
Results show that such an adaptation of the network is responsible for an
increase in cooperation.
In fact, as stated by Perc and Szolnoki \cite{Perc2010}, the spatial
coevolutionary game is a natural upgrade of the traditional spatial
evolutionary game initially proposed by Nowak and May \cite{Nowak1992}, who
considered static and unweighted networks in which each individual can interact
only with its immediate neighbours. In general, it has been shown that
coevolving the spatial structure can promote the emergence of cooperation in
many scenarios \cite{Wang2014,Cao2011}, but the understanding of cooperative
behaviour is still one of the central issues in evolutionary game theory.
Szolnoki and Perc \cite{Szolnoki2009} proposed a study of the impact of
coevolutionary rules on the spatial version of three different games, i.e., the
Prisoner's Dilemma, the Snow Drift and the Stag Hunt game. They introduce the
concept of a teaching activity, which quantifies the ability of each agent to
enforce its strategy on the opponent. It means that agents with higher teaching
activity are more likely to reproduce than those with a low teaching activity.
Differing from previous research \cite{Zimmermann2004,Zimmermann2001}, they
also consider coevolution affecting either only the defectors or only the
cooperators. They discuss that, in both cases and irrespective of the applied
game, their coevolutionary model is much more beneficial to the cooperators
than that of the traditional model.
Huang et al. \cite{Huang2015} present a new model for the coevolution of game
strategy and link weight. They consider a population of $100 \times 100$ agents
arranged on a regular lattice network which is evolved through a Monte Carlo
simulation. An agent's interaction is described by the classical Prisoner's
Dilemma with a normalized payoff matrix. A new parameter, $\Delta/\delta$, is
defined as the link weight amplitude and is calculated as the ratio of
$\Delta/\delta$. They found that some values of $\Delta/\delta$ can provide the
best environment for the evolution of cooperation. They also found that their
coevolutionary model can promote cooperation efficiently even when the
temptation of defection is high.
In addition to investigations of the classical Prisoner's Dilemma on spatial
environments, some extensions of this game have also been explored as a means
to favour the emergence of cooperative behaviour. For instance, the Optional
Prisoner's Dilemma game, which introduces the concept of abstention, has been
studied since Batali and Kitcher \cite{Batali1995}. In their work, they
proposed the opt-out or ``loner's'' strategy in which agents could choose to
abstain from playing the game, as a third option, in order to avoid cooperating
with known defectors. There have been a number of recent studies exploring
this type of game \cite{Xia2015,Ghang2015,Olejarz2015,Jeong2014,Hauert2008}.
Cardinot et al. \cite{Cardinot2016sab} discuss that, with the introduction of
abstainers, it is possible to observe new phenomena and, in a larger range of
scenarios, cooperators can be robust to invasion by defectors and can
dominate.
Although recent work has discussed the inclusion of optional games with
coevolutionary rules \cite{Cardinot2016ecta}, this still needs to be investigated
in a wider range of scenarios. Therefore, our work aims to combine both of
these trends in evolutionary game theory in order to identify favourable
configurations for the emergence of cooperation in adverse scenarios, where,
for example, the temptation to defect is very high or when the initial
population of abstainers is either very scarce or very abundant.
\section{Methodology}
\label{sec:methodology}
The goal of the experiments outlined in this section is to investigate the
environmental settings when coevolution of both strategy and link weights of
the Optional Prisoner's Dilemma on a weighted network takes place.
This section includes a complete description of the Optional Prisoner's
Dilemma (PD) game, the spatial environment and the coevolutionary rules for
both the strategy and link weights. Finally, we also outline the experimental
set-up.
In the classical version of the Prisoner's Dilemma, two agents can choose
either cooperation or defection. Hence, there are four payoffs associated with
each pairwise interaction between the two agents. In consonance with common
practice \cite{Huang2015,Nowak1992}, payoffs are characterized by the reward
for mutual cooperation ($R=1$), punishment for mutual defection ($P=0$),
sucker's payoff ($S=0$) and temptation to defect ($T=b$, where $1<b<2$).
Note that this parametrization refers to the weak version of the Prisoner's
Dilemma game, where $P$ can be equal to $S$ without destroying the nature of
the dilemma. In this way, the constraints $T > R > P \ge S$ maintain the dilemma.
The extended version of the PD game presented in this paper includes the
concept of abstention, in which agents can not only cooperate ($C$) or defect
($D$) but can also choose to abstain ($A$) from a game interaction, obtaining
the loner's payoff ($L=l$) which is awarded to both players if one or both
abstain. As defined in other studies \cite{Cardinot2016sab,Hauert2002},
abstainers receive a payoff greater than $P$ and less than $R$ (i.e., $P<L<R$).
Thus, considering the normalized payoff matrix adopted, $0<l<1$. The payoff
matrix and the associated values are illustrated in Table~\ref{tab:payoffs}.
\begin{table}[htb]
\caption{The Optional Prisoner's Dilemma game matrix.}
\label{tab:payoffs}
\begin{subtable}{.4\linewidth}
\centering
\begin{tabular}{c c | c | c}
& {\bf C} & {\bf D} & {\bf A} \\
\cline{2-4}
{\bf C} & \multicolumn{1}{|c|}{\backslashbox{R}{R}}
& \backslashbox{S}{T}
& \multicolumn{1}{c|}{\backslashbox{L}{L}} \\
\cline{1-4}
{\bf D} & \multicolumn{1}{|c|}{\backslashbox{T}{S}}
& \backslashbox{P}{P}
& \multicolumn{1}{c|}{\backslashbox{L}{L}} \\
\cline{1-4}
{\bf A} & \multicolumn{1}{|c|}{\backslashbox{L}{L}}
& \backslashbox{L}{L}
& \multicolumn{1}{c|}{\backslashbox{L}{L}} \\
\cline{2-4}
\end{tabular}
\caption{Extended game matrix.}
\end{subtable}
\begin{subtable}{.6\linewidth}
\centering
\setlength{\tabcolsep}{8pt}
\begin{tabular}{l|c}
{\bf Payoff} & {\bf Value} \\
\hline
{Temptation to defect (T)} & $]1,2[$ \\
{Reward for mutual cooperation (R)} & $1$ \\
{Punishment for mutual defection (P)} & $0$ \\
{Sucker's payoff (S)} & $0$ \\
{Loner's payoff (L)} & $]0,1[$ \\
\end{tabular}
\caption{Payoff values.}
\end{subtable}
\end{table}
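For concreteness, the payoff rules of Table~\ref{tab:payoffs} can be encoded as a small helper function. This is our own sketch, not the authors' code; the function name and default parameter values ($b=1.9$, $l=0.6$, values used later in the paper) are illustrative.

```python
# A minimal sketch of the Optional Prisoner's Dilemma payoff of Table 1:
# strategies are 'C' (cooperate), 'D' (defect), 'A' (abstain); b is the
# temptation to defect (1 < b < 2) and l the loner's payoff (0 < l < 1).
def opd_payoff(s_x, s_y, b=1.9, l=0.6):
    """Return the payoff earned by agent x against agent y."""
    if s_x == 'A' or s_y == 'A':          # one abstainer => both get L
        return l
    if s_x == 'C':
        return 1.0 if s_y == 'C' else 0.0  # R = 1, S = 0
    return b if s_y == 'C' else 0.0        # T = b, P = 0

print(opd_payoff('D', 'C'))  # 1.9 (temptation to defect)
print(opd_payoff('C', 'A'))  # 0.6 (loner's payoff)
```

Note that the weak-dilemma ordering $T > R > P \ge S$ and the constraint $P < L < R$ are built into these values.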
In these experiments, the following parameters are used: a $102 \times 102$
~($N=102^2$) regular lattice grid with periodic boundary conditions is created
and fully populated with agents, which can play with their eight immediate
neighbours (Moore neighbourhood). We adopt an unbiased environment (in which
initially each agent is designated as a cooperator ($C$), defector ($D$) or
abstainer ($A$) with equal probability) and also investigate a biased
environment (in which the percentage of abstainers present in the environment
is varied). Also, each edge linking agents has the same initial weight $w=1$,
which will adaptively change in accordance with the interaction.
Monte Carlo methods are used to perform the Coevolutionary Optional Prisoner's
Dilemma game. In one Monte Carlo (MC) step, each player is selected once on
average. This means that one MC step comprises $N$ inner steps where the
following calculations and updates occur:
\begin{itemize}
\item Select an agent ($x$) at random from the population.
\item Calculate the utility $u_{xy}$ of each interaction of $x$ with its
eight neighbours (each neighbour represented as agent $y$) as follows:
\begin{equation}
u_{xy} = w_{xy} P_{xy},
\end{equation}
where $w_{xy}$ is the edge weight between agents $x$ and $y$, and
$P_{xy}$ corresponds to the payoff obtained by agent $x$ on playing the
game with agent $y$.
\item Calculate $U_x$ the accumulated utility of $x$, that is:
\begin{equation}
U_x = \sum_{y \in \Omega_x}u_{xy},
\end{equation}
where $\Omega_x$ denotes the set of neighbours of the agent $x$.
\item In order to update the link weights, $w_{xy}$, between agents, compare
the values of $u_{xy}$ and the average accumulated utility
(i.e., $\bar{U_x}=U_x/8$) as follows:
\begin{equation}
\label{eq:bigdelta}
w_{xy} =
\begin{dcases*}
w_{xy} + \Delta & if $u_{xy} > \bar{U_x}$ \\
w_{xy} - \Delta & if $u_{xy} < \bar{U_x}$ \\
w_{xy} & otherwise
\end{dcases*},
\end{equation}
where $\Delta$ is a constant such that $0 \le \Delta \le \delta$.
\item In line with previous research \cite{Huang2015,Wang2014}, $w_{xy}$
is adjusted to be within the range
\begin{equation}
\label{eq:smalldelta}
1-\delta \le w_{xy} \le 1+\delta,
\end{equation}
where $\delta$ ($0 < \delta \le 1$) defines the weight heterogeneity.
Note that when $\Delta$ or $\delta$ are equal to $0$, the link weight
keeps constant (${w=1}$), which results in the traditional scenario
where only the strategies evolve.
\item In order to update the strategy of $x$, the accumulated utility $U_x$
is recalculated (based on the new link weights) and compared with the
accumulated utility of one randomly selected neighbour ($U_y$).
If $U_y>U_x$, agent $x$ will copy the strategy of agent $y$ with a
probability proportional to the utility difference
(Equation~\ref{eq:prob}), otherwise, agent $x$ will keep its strategy
for the next step.
\begin{equation}
\label{eq:prob}
p(s_x=s_y) = \frac{U_y-U_x}{8(T-P)},
\end{equation}
where $T$ is the temptation to defect and $P$ is the punishment for
mutual defection. This equation has been considered previously by
Huang et al. \cite{Huang2015}.
\end{itemize}
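The inner step above can be sketched in Python. This is a simplified illustration under our own naming conventions, not the authors' implementation; the lattice, payoff values and update rules follow the description above, with $T=b$ and $P=0$ in Equation~\ref{eq:prob}.

```python
# Sketch of one inner Monte Carlo step on an n x n lattice with Moore
# neighbourhood and periodic boundaries. Link weights w evolve by +/- Delta
# and are clamped to [1 - delta, 1 + delta].
import random

def payoff(s_x, s_y, b, l):
    if s_x == 'A' or s_y == 'A':
        return l
    if s_x == 'C':
        return 1.0 if s_y == 'C' else 0.0
    return b if s_y == 'C' else 0.0

def neighbours(i, j, n):
    return [((i + di) % n, (j + dj) % n)
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]

def mc_inner_step(strat, w, n, b, l, Delta, delta):
    x = (random.randrange(n), random.randrange(n))
    nbrs = neighbours(*x, n)
    # utilities u_xy = w_xy * P_xy and accumulated utility U_x
    u = {y: w[frozenset((x, y))] * payoff(strat[x], strat[y], b, l) for y in nbrs}
    U_x = sum(u.values())
    # link-weight update: strengthen above-average links, weaken the rest
    for y in nbrs:
        e = frozenset((x, y))
        if u[y] > U_x / 8:
            w[e] += Delta
        elif u[y] < U_x / 8:
            w[e] -= Delta
        w[e] = min(1 + delta, max(1 - delta, w[e]))
    # strategy update: recompute U_x with the new weights, then imitate a
    # random richer neighbour with probability (U_y - U_x) / (8 (T - P))
    U_x = sum(w[frozenset((x, y))] * payoff(strat[x], strat[y], b, l) for y in nbrs)
    y = random.choice(nbrs)
    U_y = sum(w[frozenset((y, z))] * payoff(strat[y], strat[z], b, l)
              for z in neighbours(*y, n))
    if U_y > U_x and random.random() < (U_y - U_x) / (8 * b):
        strat[x] = strat[y]

# tiny demo: 6 x 6 lattice, random strategies, unit link weights
n, b, l, Delta, delta = 6, 1.9, 0.6, 0.72, 0.8
strat = {(i, j): random.choice('CDA') for i in range(n) for j in range(n)}
w = {frozenset((p, q)): 1.0 for p in strat for q in neighbours(*p, n)}
for _ in range(n * n):  # one full MC step = N inner steps on average
    mc_inner_step(strat, w, n, b, l, Delta, delta)
```

In the full simulation this full MC step is repeated $10^5$ times on the $102 \times 102$ lattice; the toy sizes here are for illustration only.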
Simulations are run for $10^5$ MC steps and the fraction of cooperation is
determined by calculating the average of the final $10^3$ MC steps. To alleviate
the effect of randomness in the approach, the final results are obtained by
averaging $10$ independent runs.
The following scenarios are investigated:
\begin{itemize}
\item The benefits of coevolution and abstention.
\item Presence of abstainers in the coevolutionary model.
\item Inspecting the coevolutionary environment.
\item Investigating the properties of the parameters $\Delta$ and $\delta$.
\item Varying the number of states.
\item Investigating the relationship between $\Delta/\delta$, $b$ and $l$.
\item Investigating the robustness of cooperation in a biased environment.
\end{itemize}
\section{The Benefits of Coevolution and Abstention}
\label{sec:results1}
This section presents some of the main differences between the outcomes
obtained by the proposed Coevolutionary Optional Prisoner's Dilemma (COPD) game
and other models which do not adopt the concept of coevolution and/or
abstention. In the COPD game, we also investigate how a population in an
unbiased environment evolves over time.
\subsection{Presence of Abstainers in the Coevolutionary Model}
In order to provide a means to effectively explore the impact of our
coevolutionary model, i.e., the Coevolutionary Optional Prisoner's Dilemma (COPD) game,
in the emergence of cooperation, we start by investigating the performance of
some of the existing models. Namely, the Coevolutionary Prisoner's Dilemma
(CPD) game (i.e., same coevolutionary model as the COPD but without the concept of
abstention), the traditional Prisoner's Dilemma (PD) game, and the Optional
Prisoner's Dilemma game.
As shown in Figure~\ref{fig:compare}, it can be observed that for both PD and
CPD games, when the defector's payoff is very high (i.e., $b > 1.7$) defectors
spread quickly and dominate the environment. On the other hand, when abstainers
are present in a static and unweighted network, i.e., playing the OPD game, we
end up with abstainers dominating the environment. Undoubtedly, in many
scenarios, having a population of abstainers is better than a population of
defectors. However, these results provide clear evidence that all three models fail
to sustain cooperation. In fact, results show that in this type of adverse
environment (i.e., with a high temptation to defect), cooperation has no
chance of emerging.
\begin{figure}[t]
\captionsetup[subfigure]{labelformat=empty}
\centering
\begin{subfigure}{0.325\textwidth}
\centering
\epsfig{file=comparePD, width=\textwidth}
\caption{PD}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\epsfig{file=compareCPD, width=\textwidth}
\caption{CPD (${\Delta=0.72;\ \delta=0.8}$)}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\centering
\epsfig{file=compareOPD, width=\textwidth}
\caption{OPD ($l=0.6$)}
\end{subfigure}
\caption{
Comparison of the Prisoner's Dilemma (PD), the Coevolutionary Prisoner's
Dilemma (CPD) and the Optional Prisoner's Dilemma (OPD) games. All with the
same temptation to defect, $b=1.9$.
}
\label{fig:compare}
\end{figure}
Surprisingly, as shown in Figure~\ref{fig:phase}, when considering the
Coevolutionary Optional Prisoner's Dilemma (COPD) game for the same
environmental settings of Figure~\ref{fig:compare} (i.e., $l=0.6$,
$\Delta=0.72$ and $\delta=0.8$), with the temptation to defect almost at
its peak (i.e., $b=1.9$), it is possible to reach high levels of cooperation.
Figure~\ref{fig:phase} shows a typical phase diagram for both CPD and COPD
games for a fixed value of $\delta=0.8$ and $l=0.6$ (in the COPD game). It can be
observed that if a given environmental setting (i.e, $b$, $\Delta$ and
$\delta$) produces a stable population of cooperators in the CPD game, then the
presence of abstainers will not change it. In other words, the COPD game does
not affect the outcome of scenarios in which cooperation is stable in the
absence of abstainers. Thus, the main changes occur in scenarios in which
defection dominates or survives ($b>1.5$).
\begin{figure}[htb]
\centering
{\epsfig{file=phase_l0_ld08_0pA, width=0.492\textwidth}}
{\epsfig{file=phase_l06_ld08_33pA, width=0.492\textwidth}}
\caption{
Typical phase diagram for an initial balanced population playing the
Coevolutionary Prisoner's Dilemma game (left) and the Coevolutionary
Optional Prisoner's Dilemma game with $l=0.6$ (right), both with $\delta=0.8$.
}
\label{fig:phase}
\end{figure}
To summarize, although the Coevolutionary Prisoner's Dilemma (CPD) game succeeds
in promoting cooperation in a wide range of scenarios, it is still unable to
prevent invasion by defectors when $b>1.5$; such invasion does not occur when
abstainers are present (i.e., in the COPD game).
\subsection{Inspecting the Coevolutionary Environment}
In order to further explain the results witnessed in the previous experiments,
we investigate how the population evolves over time for the Coevolutionary
Optional Prisoner's Dilemma game. Figure~\ref{fig:c_mcs} features the time
course of cooperation for three different values of $\Delta/\delta=\{0.0,\
0.2,\ 1.0\}$, which are some of the critical points when $b=1.9$, $l=0.6$ and
$\delta=0.8$. Based on these results, in Figure~\ref{fig:snapshots} we show
snapshots for the Monte Carlo steps $0$, $45$, $1113$ and $10^5$ for the three
scenarios shown in Figure~\ref{fig:c_mcs}.
\begin{figure}[p]
\centering
{\epsfig{file=Fig2, width=0.65\linewidth}}
\caption{
Progress of the fraction of cooperation $\rho_c$ during a Monte Carlo
simulation for $b=1.9$, $l=0.6$ and $\delta=0.8$.
}
\label{fig:c_mcs}
\end{figure}
\begin{figure}[p]
\centering
{\epsfig{file=step0, width=0.244\linewidth}}
{\epsfig{file=00_step45, width=0.244\linewidth}}
{\epsfig{file=00_step1113, width=0.244\linewidth}}
{\epsfig{file=00_step99999, width=0.244\linewidth}}
\vspace{0.07cm}
{\epsfig{file=step0, width=0.244\linewidth}}
{\epsfig{file=02_step45, width=0.244\linewidth}}
{\epsfig{file=02_step1113, width=0.244\linewidth}}
{\epsfig{file=02_step99999, width=0.244\linewidth}}
\vspace{0.07cm}
{\epsfig{file=step0, width=0.244\linewidth}}
{\epsfig{file=10_step45, width=0.244\linewidth}}
{\epsfig{file=10_step1113, width=0.244\linewidth}}
{\epsfig{file=10_step99999, width=0.244\linewidth}}
\caption{
Snapshots of the distribution of the strategy in the Monte Carlo steps
$0$, $45$, $1113$ and $10^5$ (from left to right) for $\Delta/\delta$
equal to $0.0$, $0.2$ and $1.0$ (from top to bottom). In this Figure,
cooperators, defectors and abstainers are represented by the colours
blue, red and green respectively. All results are obtained for $b=1.9$,
$l=0.6$ and $\delta=0.8$.
}
\label{fig:snapshots}
\end{figure}
We see from Figure~\ref{fig:snapshots} that for the traditional case (i.e.,
$\Delta/\delta=0.0$), abstainers spread quickly and reach a stable state in
which single defectors are completely isolated by abstainers. In this way, as
the payoffs obtained by a defector and an abstainer are the same, neither will
ever change their strategy. In fact, even if a single cooperator survives up to
this stage, for the same aforementioned reason, its strategy will not change
either. Indeed, the same behaviour is observed for any value of $b>1.2$ and
$\Delta/\delta=0$ (COPD in Figure~\ref{fig:phase}).
When $\Delta/\delta=0.2$, it is possible to observe some sort of equilibrium
between the three strategies. They reach a state of cyclic competition in which
abstainers invade defectors, defectors invade cooperators and cooperators
invade abstainers.
This behaviour, of balancing the three possible outcomes, is very common in
nature where species with different reproductive strategies remain in
equilibrium in the environment. For instance, the same scenario was observed to
be responsible for preserving biodiversity in colonies of
\textit{Escherichia coli}, a bacterium commonly found in the lower
intestine of warm-blooded organisms. According to Fisher \cite{Fisher2008},
studies were performed with three natural populations mixed together, in which
one population produces a natural antibiotic but is immune to its effects; a
second population is sensitive to the antibiotic but can grow faster than the
third population; and the third population is resistant to the antibiotic.
Because of this balance, they observed that each population ends up
establishing its own territory in the environment, as the first population
could kill off any other bacteria sensitive to the antibiotic, the second
population could use their faster growth rate to displace the bacteria which
are resistant to the antibiotic, and the third population could use their
immunity to displace the first population.
Another interesting behaviour is noticed for $\Delta/\delta=1.0$. In this
scenario, defectors are dominated by abstainers, allowing a few clusters of
cooperators to survive. As a result of the absence of defectors, cooperators
invade abstainers and dominate the environment.
\section{Exploring the Coevolutionary Optional Prisoner's Dilemma game}
\label{sec:results2}
In this section, we present some of the relevant experimental results of the
Monte Carlo simulations of the Coevolutionary Optional Prisoner's Dilemma game
in an unbiased environment. That is, a well-mixed initial population with a
balanced amount of cooperators, defectors and abstainers.
\subsection{Investigating the properties of $\Delta$ and $\delta$}
\label{sec:delta}
This section aims to investigate the properties of the presented model
(Sect.~\ref{sec:methodology}) in regard to the parameters $\Delta$ and
$\delta$. These parameters play a key role in the evolutionary dynamics of this
model because they define the number of possible link weights that an agent is
allowed to have (i.e., they define the number of states).
Despite the fact that the number of states is discrete, the act of counting
them is not straightforward. For instance, when counting the number of
states between $1-\delta$ and $1+\delta$ for $\Delta=0.2$ and $\delta=0.3$, we
could incorrectly state that there are four possible states for this scenario
(i.e., $\{0.7,\ 0.9,\ 1.1,\ 1.3\}$). However, considering that the link weights
of all edges are initially set to $w=1$, and due to the other constraints
(Equations \ref{eq:bigdelta} and \ref{eq:smalldelta}), the number of states is
actually seven (i.e., $\{0.7,\ 0.8,\ 0.9,\ 1.0,\ 1.1,\ 1.2,\ 1.3\}$).
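This counting can be reproduced with a short numerical sketch. We assume here, as our reading of Equations \ref{eq:bigdelta} and \ref{eq:smalldelta}, that weights start at $w=1$, move in steps of $\pm\Delta$, and are clipped to the interval $[1-\delta,\ 1+\delta]$:

```python
def count_states(big_delta, small_delta, w0=1.0, ndigits=9):
    """Breadth-first enumeration of link weights reachable from w0 by
    steps of +/- big_delta, clipped to [1 - small_delta, 1 + small_delta].
    Rounding avoids spurious floating-point duplicates."""
    lo, hi = 1.0 - small_delta, 1.0 + small_delta
    seen, frontier = {round(w0, ndigits)}, [w0]
    while frontier:
        w = frontier.pop()
        for step in (big_delta, -big_delta):
            nxt = round(min(hi, max(lo, w + step)), ndigits)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return len(seen)

print(count_states(0.2, 0.3))  # 7 states: {0.7, 0.8, ..., 1.3}
```

The clipping is what makes intermediate values such as $0.8$ reachable (e.g., $0.8 - 0.2$ is clipped to $0.7$, and $0.7 + 0.2 = 0.9$), giving seven states rather than four.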
In order to better understand the relationship between $\Delta$ and $\delta$,
we plot $\Delta$, $\delta$ and $\Delta/\delta$ as a function of the number of
states (numerically counted) for a number of different values of both
parameters (Figure~\ref{fig:delta}). It was observed that given the pairs
$(\Delta_1,\ \delta_1)$ and $(\Delta_2,\ \delta_2)$, if $\Delta_1/\delta_1$ is
equal to $\Delta_2/\delta_2$, then the number of states of both settings is the
same.
\begin{figure}[tb]
\centering
{\epsfig{file=delta, height=6.1cm}}
\caption{
The ratio $\Delta/\delta$ as a function of the number of states. Any combinations
of $\Delta$ and $\delta$ with the same ratio $\Delta/\delta$ always yield the same
number of states.
}
\label{fig:delta}
\end{figure}
Figure~\ref{fig:delta} shows the ratio $\Delta/\delta$ as a function of the
number of states. As we can see, although the function is non-linear and
non-monotonic, higher values of $\Delta/\delta$ generally correspond to fewer states.
\subsection{Varying the Number of States}
Figure~\ref{fig:c_amp} shows the impact of the coevolutionary model on the
emergence of cooperation when the ratio $\Delta/\delta$ varies for a range of
fixed values of the loner's payoff ($l$), temptation to defect ($b$) and
$\delta$. In this experiment, we observe that when $l=0.0$, the outcomes of the
Coevolutionary Optional Prisoner's Dilemma (COPD) game are very similar to
those observed by Huang et al. \cite{Huang2015} for the Coevolutionary
Prisoner's Dilemma (CPD) game. This result can be explained by the normalized
payoff matrix adopted in this work (Table~\ref{tab:payoffs}). Clearly,
when $l=0.0$, there is no advantage in abstaining from playing the game,
thus agents choose the option to cooperate or defect.
Results indicate that, in cases where the temptation to defect is very
low (e.g., $b \le 1.34$), the level of cooperation does not seem to be affected
by increasing the loner's payoff, except when the advantage of abstaining
is very high (e.g., $l>0.8$). However, these results highlight that the presence
of the abstainers may protect cooperators from invasion. Moreover, the
difference between the traditional Optional Prisoner's Dilemma (i.e.,
$\Delta/\delta=0.0$) for $l=\{0.0,\ 0.6\}$ and all other values of
$\Delta/\delta$ is strong evidence that our coevolutionary model is very
advantageous to the promotion of cooperative behaviour.
Namely, when $l=0.6$, in the traditional case with a static and unweighted
network ($\Delta/\delta=0.0$), the cooperators have no chance of surviving;
except, of course, when $b$ is very close to the reward for mutual cooperation
$R$, where it is possible to observe scenarios of quasi-stable states of the
three strategies or between cooperators and defectors. In fact, in the
traditional OPD ($\Delta/\delta=0.0$), when $l>0.0$ and $b>1.2$, abstainers
are always the dominant strategy. However, when the coevolutionary rules are
used, cooperators do much better, being also able to dominate the whole
population in many cases.
It is noteworthy that the curves in Figure~\ref{fig:c_amp} are usually
non-linear and/or non-monotonic because of the properties of the ratio
$\Delta/\delta$ in regard to the number of states of each combination
of $\Delta$ and $\delta$ (Sect.~\ref{sec:delta}).
\begin{figure}[tb]
\centering
{\epsfig{file=Fig1_l000, height=3.92cm}}
{\epsfig{file=Fig1_l060, height=3.92cm}}
\caption{
Relationship between cooperation and the ratio $\Delta/\delta$ when
the loner's payoff ($l$) is equal to $0.0$ (left) and $0.6$ (right).
}
\label{fig:c_amp}
\end{figure}
\subsection{Investigating the Relationship between $\Delta/\delta$, $b$ and $l$}
\label{sec:varyall}
To investigate the outcomes in other scenarios, we explore a wider range of
settings by varying the values of the temptation to defect ($b$), the loner's
payoff ($l$) and the ratio $\Delta/\delta$ for a fixed value of $\delta = 0.8$.
As shown in Figure~\ref{fig:ternary}, cooperation is the dominant strategy in
the majority of cases. Note that in the traditional case, with an unweighted
and static network, i.e., $\Delta/\delta=0.0$, abstainers dominate in all
scenarios illustrated in this ternary diagram. In addition, it is also possible
to observe that certain combinations of $l$, $b$ and $\Delta/\delta$ guarantee
higher levels of cooperation. In these scenarios, cooperators are protected by
abstainers against exploitation from defectors.
Another observation is that defectors are attacked more efficiently by
abstainers as we increase the loner’s payoff ($l$). Simulations reveal that,
for any scenario, if the loner’s payoff is greater than $0.7$ ($l > 0.7$),
defectors have no chance of surviving.
However, the drawback of increasing the value of $l$ is that it makes it difficult
for cooperators to dominate abstainers, which might produce a quasi-stable
population of cooperators and abstainers. This is a counter-intuitive
result for the COPD game: since the loner’s payoff is always less than the
reward for mutual cooperation (i.e., $L < R$), even for extremely high values
of $L$, such a population of cooperators and abstainers should always lead
cooperators to quickly dominate the environment.
In fact, it is still expected that, in the COPD game, cooperators dominate
abstainers, but depending on the value of the loner’s payoff, or the amount of
abstainers in the population at this stage, it might take several Monte Carlo
steps to reach a stable state, which is usually a state of cooperation fully
dominating the population.
An interesting behaviour is noticed when $l \in [0.45, 0.55]$ and $b > 1.8$. In
this scenario, abstainers quickly dominate the population, making a clear
division between two states: before this range (defectors hardly die off) and
after this range (defectors hardly survive). In this way, a loner’s payoff value
greater than $0.55$ ($l > 0.55$) is usually the best choice to promote
cooperation. This result is probably related to the difference between the
possible utilities for each type of interaction, which still needs further
investigation in future work.
Although the combinations shown in Figure~\ref{fig:ternary} for higher values
of $b$ ($b>1.8$) are just a small subset of an infinite number of possible
values, it is clearly shown that a reasonable fraction of cooperators can
survive even in an extremely adverse situation where the advantage of defecting
is very high. Indeed, our results show that some combinations of high values
of $l$ and $\delta$, such as for $\delta=0.8$ and $l=0.7$, can further improve the
levels of cooperation, allowing for the full dominance of cooperation.
\begin{figure}[tb]
\centering
{\epsfig{file=ternary_c, width=0.325\linewidth}}
{\epsfig{file=ternary_d, width=0.325\linewidth}}
{\epsfig{file=ternary_a, width=0.325\linewidth}}
\caption{
Ternary diagrams of different values of $b$, $l$ and $\Delta/\delta$
for $\delta=0.8$.
}
\label{fig:ternary}
\end{figure}
\section{Investigating the Robustness of Cooperation in a Biased Environment}
\label{sec:results3}
The previous experiments revealed that the presence of abstainers together with
simple coevolutionary rules (i.e., the COPD game) act as a powerful mechanism to
avoid the spread of defectors, which also allows the dominance of cooperation
in a wide range of scenarios.
However, the distribution of the strategies in the initial population used in
all of the previous experiments was uniform. That is, we have explored cases in which
the initial population contained a balanced amount of cooperators, defectors and
abstainers. Thus, in order to explore the robustness of these outcomes in
regard to the initial amount of abstainers in the population, we now aim to
investigate how many abstainers would be necessary to guarantee robust
cooperation.
Figure~\ref{fig:unbalanced} features the fraction of each strategy in the
population (i.e., cooperators, defectors and abstainers) over time for fixed
values of $b=1.9$, $\Delta=0.72$ and $\delta=0.8$. In this experiment, several
independent simulations were performed, in which the loner’s payoff ($l$) and the
number of abstainers in the initial population were varied from $0.0$ to $1.0$
and from $0.1\%$ to $99.9\%$, respectively. Other special cases were also
analyzed, such as placing only one abstainer into a balanced population of
cooperators and defectors, and placing only one defector and one cooperator in
a population of abstainers. For the sake of simplicity, we report only the
values of $l=\{0.2,\ 0.6,\ 0.8\}$ for an initial population with one abstainer
and with $5\%$, $33\%$ and $90\%$ abstainers, which are representative of the
outcomes at other values also.
\begin{figure}[p]
\centering
{\epsfig{file=b19_l02_bd072_ld08_1A, width=0.325\linewidth}}
{\epsfig{file=b19_l06_bd072_ld08_1A, width=0.325\linewidth}}
{\epsfig{file=b19_l08_bd072_ld08_1A, width=0.325\linewidth}}
{\epsfig{file=b19_l02_bd072_ld08_5pA, width=0.325\linewidth}}
{\epsfig{file=b19_l06_bd072_ld08_5pA, width=0.325\linewidth}}
{\epsfig{file=b19_l08_bd072_ld08_5pA, width=0.325\linewidth}}
{\epsfig{file=b19_l02_bd072_ld08_33pA, width=0.325\linewidth}}
{\epsfig{file=b19_l06_bd072_ld08_33pA, width=0.325\linewidth}}
{\epsfig{file=b19_l08_bd072_ld08_33pA, width=0.325\linewidth}}
{\epsfig{file=b19_l02_bd072_ld08_90pA, width=0.325\linewidth}}
{\epsfig{file=b19_l06_bd072_ld08_90pA, width=0.325\linewidth}}
{\epsfig{file=b19_l08_bd072_ld08_90pA, width=0.325\linewidth}}
\caption{
Time course of each strategy for $b=1.9$, $\Delta=0.72$,
$\delta=0.8$ and different values of $l$ (from left to right,
$l=\{0.2,\ 0.6,\ 0.8\}$). The same settings are also tested
on populations seeded with different amounts of abstainers
(i.e., from top to bottom: 1 abstainer, 5\% of the population,
1/3 of the population, 90\% of the population).
}
\label{fig:unbalanced}
\end{figure}
Note that, for all these simulations, the initial population of cooperators and
defectors remained in balance. For instance, an initial population with $50\%$
of abstainers will consequently have $25\%$ of cooperators and $25\%$ of
defectors.
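This seeding rule can be sketched as follows (the function name and the fixed random seed are illustrative assumptions, not part of the original experimental setup):

```python
import random

def seed_population(n, frac_abstainers, seed=0):
    """Seed a biased initial population: a fraction of abstainers ('A'),
    with cooperators ('C') and defectors ('D') splitting the remainder
    evenly, as in the experiments above."""
    n_a = round(n * frac_abstainers)
    n_c = (n - n_a) // 2
    n_d = n - n_a - n_c
    pop = ['A'] * n_a + ['C'] * n_c + ['D'] * n_d
    random.Random(seed).shuffle(pop)  # well-mixed placement
    return pop
```

For example, `seed_population(100, 0.5)` yields 50 abstainers, 25 cooperators, and 25 defectors, randomly mixed.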
Experiments reveal that the COPD game is actually extremely robust to radical
changes in the initial population of abstainers. It has been shown that if the
loner’s payoff is greater than $0.55$ ($l>0.55$), then one abstainer might alone
be enough to protect cooperators from the invasion of defectors (see Figures
\ref{fig:unbalanced}a, \ref{fig:unbalanced}b and \ref{fig:unbalanced}c).
However, this outcome is only possible if the single abstainer is in the middle
of a big cluster of defectors.
This outcome can happen because the payoff obtained by the abstainers is always
greater than the one obtained by pairs of defectors (i.e., $L>P$). Thus, in a
cluster of defectors, abstention is always the best choice. However, as this single
abstainer reduces the population of defectors, which consequently increases the
population of abstainers and cooperators in the population, defection may start
to be a good option again due to the increase of cooperators. Therefore, the
exploitation of defectors by abstainers must be as fast as possible, otherwise,
they might not be able to effectively attack the population of defectors. In
this scenario, the loner’s payoff is the key parameter to control the speed in
which abstainers invade defectors. This explains why a single abstainer
is usually not enough to avoid the dominance of defectors when $l<0.55$.
In this way, as the loner’s payoff is the only parameter that directly affects
the evolutionary dynamics of the abstainers, intuition might lead one to expect
to see a clear and perhaps linear relationship between the loner’s payoff and
the initial number of abstainers in the population. That is, given the same set
of parameters, increasing the initial population of abstainers or the loner’s
payoff ($l$) would probably make it easier for abstainers to increase or even
dominate the population. Although this might be true for high values
of the loner’s payoff (i.e., $l\ge0.8$, as observed in Figure~\ref{fig:unbalanced}),
it is not applicable to other scenarios. Actually, as is also shown in
Figure~\ref{fig:unbalanced}, if the loner’s payoff is less than $0.55$,
changing the initial population of abstainers does not change the outcome at
all. When $0.55 \le l < 0.8$, a huge initial population of abstainers can
actually promote cooperation best.
As discussed in Section~\ref{sec:varyall}, populations of cooperators and
abstainers tend to converge to cooperation. In this way, the scenario shown in
Figure~\ref{fig:unbalanced} for $l=0.8$ will probably end up with cooperators
dominating the population, but as the loner’s payoff is close to the reward for
mutual cooperation, the case in Figure~\ref{fig:unbalanced}i will converge
faster than the one shown in Figure~\ref{fig:unbalanced}l.
Another very counter-intuitive behaviour occurs for $l \in [0.45, 0.55]$
(this range may shift a little bit depending on the value of $b$), where the
outcome is usually of abstainers quickly dominating the population
(Sect.~\ref{sec:varyall}). In this scenario, we would expect that changes in the
initial population of abstainers would at least change the speed in which the
abstainers fixate in the population. That is, a huge initial population of
abstainers would probably converge quickly. However, it was observed that the
convergence speed is almost the same regardless of the size of the initial
population of abstainers.
In summary, results show that an initial population with $5\%$ of abstainers
is usually enough to make reasonable changes in the outcome, increasing the
chances of cooperators surviving or dominating the population.
\section{Conclusions and Future Work}
\label{sec:conclusion}
This paper studies the impact of a simple coevolutionary model in which not
only the agents’ strategies but also the network evolves over time. The model
consists of placing agents playing the Optional Prisoner’s Dilemma game in a
dynamic spatial environment, which in turn, defines the Coevolutionary Optional
Prisoner’s Dilemma (COPD) game.
In summary, based on the results of several Monte Carlo simulations, it was
shown that the COPD game allows for the emergence of cooperation in a wider
range of scenarios than the Coevolutionary Prisoner’s Dilemma (CPD) game (i.e.,
the same coevolutionary model in populations which do not have the option to
abstain from playing the game). Results also showed that COPD performs much
better than the traditional version of these games (i.e., the Prisoner’s
Dilemma (PD) and the Optional Prisoner’s Dilemma (OPD) games) where only the
strategies evolve over time in a static and unweighted network.
Moreover, we observed that the COPD game is actually able to reproduce outcomes
similar to other games by setting the parameters as follows:
\begin{itemize}
\item CPD: $l=0$.
\item OPD: $\Delta=0$ (or $\delta=0$).
\item PD: $l=0$ and $\Delta=0$ (or $\delta=0$).
\end{itemize}
Also, it was possible to observe that abstention acts as an important
mechanism to avoid the dominance of defectors. For instance, in adverse
scenarios such as when the defector’s payoff is very high (i.e., $b>1.7$), for both
PD and CPD games, defectors spread quickly and dominated the environment. On
the other hand, when abstainers were present (COPD game), cooperation was able
to survive and even dominate.
Furthermore, simulations showed that defectors die off when the loner’s payoff is
greater than $0.7$ ($l>0.7$). However, it was observed that increasing the loner’s
payoff makes it difficult for cooperators to dominate abstainers, which is a
counter-intuitive result, since the loner’s payoff is always less than the
reward for mutual cooperation (i.e., $L < R$), this scenario should always lead
cooperators to dominance very quickly. In this scenario, cooperation is still
the dominant strategy in most cases, but it might require several Monte Carlo
steps to reach a stable state.
Results revealed that the COPD game also allows scenarios of cyclic dominance
between the three strategies (i.e., cooperation, defection and abstention),
indicating that, for some parameter settings, the COPD game is intransitive.
That is, the population remains balanced in such a way that cooperators invade
abstainers, abstainers invade defectors and defectors invade cooperators,
closing a cycle.
We also explored the robustness of these outcomes in regard to the initial
amount of abstainers in the population (biased population). In summary, it was
shown that, in some of the scenarios, even one abstainer might alone be enough
to protect cooperators from the invasion of defectors, which in turn increases
the chances of cooperators surviving or dominating the population.
Although recent research has considered coevolving game strategy (with optional
games) and link weights \cite{Cardinot2016ecta}, this work presents a more complete
analysis. We conclude that the combination of both of these trends in
evolutionary game theory may shed additional light on gaining an in-depth
understanding of the emergence of cooperative behaviour in real-world
scenarios.
Future work will consider the exploration of different topologies and the
influence of a wider range of scenarios, where, for example, agents could
rewire their links, which, in turn, adds another level of complexity to the
model. Future work will also involve applying our studies and results to
realistic scenarios, such as social networks and real biological networks.
\bigskip
\subsubsection*{Acknowledgments. }
This work was supported by the National Council for Scientific and Technological Development (CNPq-Brazil).
\bibliographystyle{splncs03}
\label{section:introduction}
Deep learning (DL), in particular supervised deep learning, has achieved tremendous success during the past decade, and this success relies on a large amount of high-quality labeled data.
However, high-quality data is often difficult to collect and labeling is expensive.
\emph{Self-supervised learning (SSL)} is proposed to resolve labeled data restrictions by generating ``labels'' from the unlabeled dataset (called \emph{pre-training dataset}), and use the derived ``labels'' to pre-train an \emph{encoder}.
With huge amounts of unlabeled data and advanced model architectures, one can train a powerful encoder to learn informative representations (also referred to as features) from the input data, which can be further leveraged as a feature extractor to train a \emph{downstream classifier}.
Such encoders pre-trained by self-supervised learning show great promise in various downstream tasks.
For instance, on the ImageNet dataset~\cite{RDSKSMHKKBBF15}, Chen et al.~\cite{CKNH20} show that, by using SimCLR pre-trained with ImageNet (unlabeled), the downstream classifier can achieve $85.8\%$ top-5 accuracy with only $1\%$ labels.
He et al.~\cite{HFWXG20} show that self-supervised learning can surpass supervised learning under seven downstream tasks including segmentation and detection.
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{pic/scenerio.pdf}
\caption{An illustration of deploying self-supervised learning pre-trained encoders as a service. The legitimate user aims to train downstream classifiers while the adversary tries to generate a surrogate encoder.}
\label{fig::scenario}
\end{figure}
Compared to the supervised learning-based classifier which only suits a specific classification task, the SSL pre-trained encoder can achieve remarkable performance on different downstream tasks.
Despite having amazing performance on many downstream tasks, the data collection and training process of SSL is also expensive as it benefits from larger datasets and more powerful computing devices.
For instance, MoCo~\cite{HFWXG20}, one popular image encoder, pre-trained with the Instagram-1B dataset ($\sim 1$ billion images) outperforms the same encoder pre-trained with the ImageNet-1M dataset (1.28 million images).
SimCLR requires 128 TPU v3 cores to train a ResNet-50 encoder due to the large batch size setting~\cite{CKNH20}.
The cost to train a powerful encoder by SSL is usually prohibitive for individuals.
Therefore, the high-performance encoders are usually pre-trained by leading AI companies with sufficient computing resources, and shared via cloud platforms for commercial usage.
For instance, OpenAI's API provides access to GPT-3~\cite{BMRSKDNSSAAHKHCRZWWHCSLGCCBMRSA20} which can be considered as a powerful encoder for a variety of natural language processing (NLP) downstream tasks, such as code generation, style transfer, etc.
Once being deployed on the cloud platform, the pre-trained encoders are not only accessible for legitimate users but also threatened by potential adversaries.
As illustrated in \autoref{fig::scenario}, for the legitimate user, the pre-trained encoder is used to train a downstream classifier.
On the other hand, an adversary may perform \emph{model stealing attacks}~\cite{TZJRR16,OSF19,KTPPI20,SHHZ22} which aim to learn a surrogate encoder that has similar functionality as the pre-trained encoder published online.
Such attacks may not only compromise the intellectual property of the service provider but also serve as a stepping stone for further attacks such as membership inference attacks~\cite{SSSS17,SZHBFB19,LJQG21}, backdoor attacks~\cite{JLG22}, and adversarial examples~\cite{PMGJCS17}.
The copyright of the pre-trained encoder is threatened by these attacks, which call for effective defenses.
As one major technique to protect the copyright of a given model, model watermarking~\cite{LHZG19,JCCP21} inserts a secret watermark pattern into the model.
The ownership of the model can then be claimed if similar or exactly the same pattern is successfully extracted from the model.
Recent studies on model watermarking mainly focus on the classifier that targeted a specific task~\cite{ABCPK18,ZGJWSHM18,JCCP21}.
Meanwhile, compared to classifiers, watermarking SSL pre-trained encoders may face several intrinsic challenges.
First, model watermarking against the classifier usually needs to specify a target class before being executed, while the SSL pre-trained encoder does not have such information.
Second, downstream tasks for SSL pre-trained encoders are flexible, which challenges the traditional model watermarking scheme that is only suitable for one specific downstream task.
To the best of our knowledge, no watermarking schemes have been proposed for SSL pre-trained encoders.
\mypara{Our Work}
In this paper, we first quantify the copyright breaching threat against SSL pre-trained encoder through the lens of model stealing attacks.
Then, we introduce \emph{SSLGuard}\xspace, the first watermarking algorithm for the SSL pre-trained encoders to protect their copyright.
Note that in this work, we consider the image encoder.
For model stealing attacks, we first assume that the adversary only has black-box access to the SSL pre-trained encoder (i.e., \emph{victim} encoder).
The adversary's goal is to build a surrogate encoder to ``copy'' the functionality of the victim encoder.
We then characterize the adversary's background knowledge into two dimensions, i.e., surrogate dataset's distribution and surrogate encoder's architecture.
Regarding the surrogate dataset which is used to train the surrogate encoder, we consider the adversary may or may not know the victim encoder's pre-training dataset.
Regarding the surrogate encoder's architecture, we first assume that it shares the same architecture as the victim encoder.
Then, we relax this assumption and find that the effectiveness of model stealing attacks can even increase by leveraging a larger model architecture.
We empirically show that the model stealing attacks against victim encoders achieve remarkable performance.
For instance, given a ResNet-18 encoder pre-trained on CIFAR-10, the ResNet-50 surrogate encoder can achieve $89.71\%$ accuracy on CIFAR-10 and $76.56\%$ accuracy on STL-10, while the accuracy for the victim encoder is $92.00\%$ on CIFAR-10 and $79.54\%$ on STL-10, respectively.
This is because the rich information hidden in the features can better help the surrogate encoder mimic the behavior of the victim encoder.
Such observation emphasizes the underlying threat of jeopardizing the model owner's intellectual property and the emergence of copyright protection.
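As a sketch of how such an attack can operate, the surrogate encoder can be trained to minimize the distance between its representations and the victim's on the surrogate dataset; a mean-squared-error objective is one common choice (an assumption here, not necessarily the exact loss used in the attacks above):

```python
import numpy as np

def stealing_loss(victim_feats, surrogate_feats):
    """Mean squared error between victim and surrogate representations;
    minimizing it over the surrogate dataset trains the surrogate to
    mimic the victim encoder's feature outputs."""
    v = np.asarray(victim_feats)
    s = np.asarray(surrogate_feats)
    return float(np.mean((v - s) ** 2))
```

The loss is zero only when the surrogate reproduces the victim's features exactly on the queried inputs.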
To protect the copyright of SSL pre-trained encoders, we propose \emph{SSLGuard}\xspace, a robust \emph{black-box} watermarking algorithm for SSL pre-trained encoders.
Concretely, given a secret key, the goal of \emph{SSLGuard}\xspace is to embed a watermark based on the secret key into a clean SSL pre-trained encoder.
The output of \emph{SSLGuard}\xspace contains a watermarked encoder and a key-tuple.
To be specific, the key-tuple consists of the secret key, a verification dataset, and a decoder.
\emph{SSLGuard}\xspace finetunes a clean encoder to a watermarked encoder, which can map samples in the verification dataset to secret representations, and these secret representations can be transformed to the secret key through the decoder.
For other encoders, the decoder only transforms the representations generated from the verification dataset into random vectors.
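The verification step just described can be sketched as follows; the cosine-similarity test and the threshold value are illustrative assumptions rather than the paper's exact criterion:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_watermark(encoder, decoder, verification_data, secret_key,
                     threshold=0.5):
    """Decode the representation of each verification sample and compare
    it with the secret key; ownership is claimed if the mean cosine
    similarity exceeds the threshold."""
    sims = [cosine(decoder(encoder(x)), secret_key) for x in verification_data]
    return float(np.mean(sims)) > threshold
```

For a watermarked encoder the decoded vectors align with the secret key; for a clean encoder they behave like random vectors and the check fails.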
Recent research has shown that if a watermarked encoder is stolen, its corresponding watermark will vanish~\cite{JCCP21}.
To remedy this situation, \emph{SSLGuard}\xspace adopts shadow dataset and shadow encoder to locally simulate the process of model stealing attacks.
In the training process, \emph{SSLGuard}\xspace optimizes a trigger that can be recognized by both the watermarked encoder and the shadow encoder.
We later show in \autoref{section:evaluation} that such a design can strongly preserve the watermark even in the surrogate encoder stolen by the adversary.
Empirical evaluations over 4 datasets (i.e., CIFAR-10, CIFAR-100, STL-10, and GTSRB) and 3 encoder pre-training algorithms (i.e., SimCLR, MoCo v2, and BYOL) show that \emph{SSLGuard}\xspace can successfully embed/extract the watermark to/from the SSL pre-trained encoder without sacrificing its performance and is robust to model stealing attacks.
Moreover, we consider an adaptive adversary who has the knowledge that the victim encoder is being watermarked and will perform further watermark removal attacks such as pruning and finetuning to ``clean'' the model.
We empirically show that \emph{SSLGuard}\xspace is still effective in such a scenario.
In summary, we make the following contributions:
\begin{itemize}
\item We unveil that the SSL pre-trained encoders are highly vulnerable to model stealing attacks.
\item We propose \emph{SSLGuard}\xspace, the first watermarking algorithm against SSL pre-trained encoders, which is able to protect the intellectual property of published encoders.
\item Extensive evaluations show that \emph{SSLGuard}\xspace is effective in embedding and extracting watermarks and robust against model stealing and other watermark removal attacks such as pruning and finetuning.
\end{itemize}
\section{Background}
\label{section:background}
\subsection{Self-supervised Learning}
Self-supervised learning is a rising AI paradigm that aims to train an encoder by a large scale of unlabeled data.
A high-performance pre-trained encoder can be shared into the public platform as an upstream service.
In downstream tasks, customers can use the features output from the pre-trained encoder to train their classifiers with limited labeled data~\cite{CKNH20} or even no data~\cite{RKHRGASAMCKS21}.
One of the most remarkable self-supervised learning paradigms is contrastive learning~\cite{CKNH20,HFWXG20,CFGH20,GSATRBDPGAPKMV20,RKHRGASAMCKS21}.
In general, encoders are pre-trained through contrastive losses which calculate the similarities of features in a latent space.
In this paper, we consider three representative contrastive learning algorithms, i.e., SimCLR~\cite{CKNH20}, MoCo v2~\cite{CFGH20}, and BYOL~\cite{GSATRBDPGAPKMV20}.
\mypara{SimCLR~\cite{CKNH20}}
SimCLR is a simple framework for contrastive learning.
It consists of four components, including \emph{Data augmentation}, \emph{Base encoder $f(\cdot)$}, \emph{Projection head $g(\cdot)$} and \emph{Contrastive loss function}.
The data augmentation module is used to transform a data sample $x$ randomly into two augmented views.
Specifically, the augmentations include random cropping, random color distortions, and random Gaussian blur.
If two augmented views are generated from the same data sample $x$, we treat
them as a positive pair; otherwise, they are considered a negative pair.
The positive pair of $x$ is denoted as $\tilde{x_i}$ and $\tilde{x_j}$.
Base encoder $f(\cdot)$ extracts feature vectors $h_i=f(\tilde{x_i})$ from augmented inputs. Projection head $g(\cdot)$ is a small neural network that maps feature vectors to a latent space where contrastive loss is applied.
SimCLR uses a multilayer perceptron (MLP) as the projection head $g(\cdot)$ to obtain the output $z_i = g(h_i)$.
For a set of samples $\{\tilde{x_k}\}$ including both positive and negative pairs, contrastive loss aims to maximize the similarity between the feature vectors of positive pairs and minimize those of negative pairs.
Given $N$ samples in each mini-batch, we could get $2N$ augmented samples.
Formally, the loss function for a positive pair $\tilde{x_i}$ and $\tilde{x_j}$ can be formulated as:
\begin{equation*}
\label{equ:a}
l(i,j) = -\log \frac{\exp(\text{sim}(z_i,z_j)/\tau)}{\sum^{2N}_{k=1,k\neq i}\exp(\text{sim}(z_i,z_k)/\tau)},
\end{equation*}
where $\text{sim}(\cdot,\cdot)$ denotes the cosine similarity function and $\tau$ denotes a temperature parameter.
SimCLR jointly trains the base encoder and projection head by minimizing the final loss function:
\begin{equation*}
\label{equ:a}
\mathcal{L}_{SimCLR} = \frac{1}{2N}\sum_{k=1}^N[l(2k-1,2k)+l(2k,2k-1)],
\end{equation*}
where $2k-1$ and $2k$ are the indexes for each positive pair.
Once the model is trained, SimCLR discards the projection head and keeps the base encoder $f(\cdot)$ only, which serves as the pre-trained encoder.
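As an illustration, the NT-Xent loss above can be sketched in NumPy for a batch of $2N$ embeddings in which rows $2k$ and $2k+1$ (0-indexed) form a positive pair:

```python
import numpy as np

def nt_xent_loss(z, tau=0.5):
    """SimCLR-style contrastive loss for a (2N, d) array z whose rows
    2k and 2k+1 are positive pairs."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)           # exclude k = i from the denominator
    partner = np.arange(len(z)) ^ 1          # pair indices: 0<->1, 2<->3, ...
    log_denom = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(log_denom - sim[np.arange(len(z)), partner]))
```

Batches whose positive pairs are well aligned yield a lower loss than batches whose positives point in different directions.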
\mypara{MoCo v2~\cite{CFGH20}}
Momentum Contrast (MoCo)~\cite{HFWXG20} is a famous contrastive learning algorithm, and MoCo v2 is the modified version (using projection head and more data augmentations).
MoCo points out that contrastive learning can be regarded as a dictionary look-up task.
The ``keys'' in the dictionary are the features output from the encoder.
A ``query'' matches a key if they are encoded from the same image.
MoCo aims to train an encoder that outputs similar features for a query and its matching key, and dissimilar features for others.
The dictionary should ideally be large and consistent, as it then contains rich
negative samples and helps the encoder learn good representations.
MoCo contains two parts: \emph{query encoder} $f_q(x;\theta_q)$ and \emph{key encoder} $f_k(x;\theta_k)$.
Given a query sample $x^q$, MoCo gets an encoded query $q=f_q(x^q)$.
For other samples $x^k$, MoCo builds a dictionary whose keys are $\{k_0,k_1,...\}$, $k_i=f_k(x_i^k)$, $i\in \mathbb{N}$.
The dictionary is a dynamic queue that keeps the current mini-batch encoded features and discards ones in the oldest mini-batch.
The benefit of using a queue is decoupling the dictionary size from the mini-batch size, so the dictionary size can be set as a hyper-parameter.
Assume $k_+$ is the key that $q$ matches, then the loss function will be defined as:
\begin{equation*}
\mathcal{L}_{MoCo} = -\log\frac{\exp(q\cdot k_+/\tau)}{\sum_{i=0}^K \exp(q\cdot k_i/\tau)}.
\end{equation*}
$\tau$ is a temperature hyper-parameter.
MoCo trains $f_q$ by minimizing contrastive loss and updates $\theta_q$ by gradient descent.
However, it is difficult to update $\theta_k$ by back-propagation because of the queue, so $f_k$ is updated as a moving average:
\begin{equation*}
\theta_k \leftarrow m \theta_k + (1-m)\theta_q,
\end{equation*}
where $m \in [0,1)$ denotes a momentum coefficient. Finally, $f_q$ is kept as the pre-trained encoder.
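The queue and momentum update can be sketched as follows (an illustrative NumPy toy, not MoCo's actual implementation; the class and function names are ours, and keys are $\ell_2$-normalized as in MoCo):

```python
import numpy as np
from collections import deque

class MomentumQueue:
    """Toy version of MoCo's dictionary: a FIFO queue of encoded keys
    plus the momentum (EMA) update of the key-encoder parameters."""
    def __init__(self, dict_size, m=0.999):
        self.keys = deque(maxlen=dict_size)  # oldest mini-batch dropped automatically
        self.m = m

    def momentum_update(self, theta_q, theta_k):
        # theta_k <- m * theta_k + (1 - m) * theta_q
        return self.m * theta_k + (1 - self.m) * theta_q

    def enqueue(self, batch_keys):
        self.keys.extend(batch_keys)

def info_nce(q, k_pos, negative_keys, tau=0.07):
    """InfoNCE loss of one query against its positive key and the queue."""
    keys = np.vstack([k_pos] + [np.asarray(k) for k in negative_keys])
    keys = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    q = q / np.linalg.norm(q)
    logits = keys @ q / tau
    return float(np.log(np.exp(logits).sum()) - logits[0])
```

Decoupling the dictionary size from the mini-batch size is exactly what `maxlen` achieves here: each `enqueue` evicts the oldest keys once the queue is full.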
\mypara{BYOL~\cite{GSATRBDPGAPKMV20}}
Bootstrap Your Own Latent (BYOL) is a novel self-supervised learning algorithm.
Different from previous methods, BYOL does not rely on negative pairs, and it is more robust to the choice of image augmentations.
BYOL's architecture consists of two neural networks: \emph{online networks} and \emph{target networks}.
The online networks, with parameters $\theta$, consist of an encoder $f_{\theta}$, a projector $g_{\theta}$ and a predictor $q_{\theta}$.
The target networks are made up of an encoder $f_{\xi}$ and a projector $g_{\xi}$.
The two networks bootstrap the representations and learn from each other.
Given an input sample $x$, BYOL produces two augmented views $v \leftarrow t(x)$ and $v' \leftarrow t'(x)$ by using image augmentations $t$ and $t'$, respectively.
The online networks output a projection $z_{\theta} \leftarrow g_{\theta}(f_{\theta}(v))$ and target networks output
a target projection $z'_{\xi} \leftarrow g_{\xi}(f_{\xi}(v'))$.
The online networks' goal is to make prediction $q_\theta(z_\theta)$ similar to $z'_{\xi}$.
Formally, the similarity can be defined as the following:
\begin{equation*}
L_{\theta,\xi} = 2 - 2 \cdot \frac{\langle q_{\theta}(z_{\theta}), z'_{\xi} \rangle}{\Vert q_{\theta}(z_{\theta}) \Vert_2 \cdot \Vert z'_{\xi} \Vert_2}.
\end{equation*}
Conversely, BYOL feeds $v'$ to online networks and $v$ to target networks separately and gets $ \widetilde{L_{\theta,\xi}}$.
The final loss function can be formulated as:
\begin{equation*}
\mathcal{L}_{BYOL} = L_{\theta,\xi} + \widetilde{L_{\theta,\xi}}.
\end{equation*}
BYOL updates the weights of the online and target networks by:
\begin{equation*}
\theta \leftarrow {\rm optimizer}(\theta, \nabla_{\theta}\mathcal{L}_{BYOL}, \eta),
\end{equation*}
\begin{equation*}
\xi \leftarrow \tau \xi + (1-\tau)\theta,
\end{equation*}
where $\eta$ is a learning rate of the online networks.
The target networks' weight $\xi$ is updated in a weighted average way, and $\tau \in [0,1]$ denotes the decay rate of the target encoder.
Once the model is trained, we treat the online networks' encoder $f_{\theta}$ as the pre-trained encoder.
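The BYOL objective and the target-network update reduce to a few lines; a NumPy sketch (our own simplification, operating on already-computed prediction and projection vectors):

```python
import numpy as np

def byol_loss(pred, target):
    """Row-wise L = 2 - 2 * cos(q_theta(z_theta), z'_xi), averaged over the batch."""
    p = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    t = target / np.linalg.norm(target, axis=-1, keepdims=True)
    return float((2 - 2 * (p * t).sum(axis=-1)).mean())

def ema_update(xi, theta, tau=0.99):
    """Target-network update: xi <- tau * xi + (1 - tau) * theta."""
    return tau * xi + (1 - tau) * theta
```

The loss is bounded in $[0,4]$: it vanishes when prediction and target projection are perfectly aligned and peaks when they point in opposite directions.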
\subsection{Model Stealing Attacks}
Model stealing attacks~\cite{TZJRR16,CCGJY18,OSF19,DR19,KTPPI20,JCBKP20,CCGJY20,WYPY20,SHHZ22} aim to steal the parameters or the functionality of the victim model.
To achieve this goal, given a victim model $f(x;\theta)$, the adversary can issue a bunch of queries to the victim model and obtain the corresponding responses.
Then the queries and responses serve as the inputs and ``labels'' to train the surrogate model, denoted as $f'(x;\theta')$. Formally, given a query dataset $\mathcal{D}$, the adversary can train $f'(x;\theta')$ by
\begin{equation}
\label{equ:modelstealing}
\mathcal{L}_{steal} = -\mathbb{E}_{x \sim \mathcal{D}}[{\rm sim}(f(x;\theta),f'(x;\theta'))],
\end{equation}
where ${\rm sim(\cdot,\cdot)}$ is a similarity function.
Note that if the victim model is a classifier, the response can be the prediction probability of each class.
If the victim model is an encoder, the response can be the feature vector.
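A toy illustration of this query-and-train loop, assuming a linear victim encoder and minimizing the mean squared error between features (a common stand-in for maximizing the similarity in the stealing objective above; all names here are hypothetical):

```python
import numpy as np

def steal_step(victim_query, W_surrogate, x_batch, lr=0.1):
    """One surrogate update: query the victim (black-box, features only),
    then take a gradient step on the MSE between surrogate and victim features."""
    target = victim_query(x_batch)            # responses from the victim encoder
    pred = x_batch @ W_surrogate              # linear surrogate for illustration
    grad = 2 * x_batch.T @ (pred - target) / len(x_batch)
    return W_surrogate - lr * grad
```

With enough queries, the surrogate converges toward the victim's input-output behavior, which is precisely what a watermark must survive.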
A successful model stealing attack may not only breach the intellectual property of the victim model but also serve as a springboard for further attacks such as membership inference attacks~\cite{SSSS17,SZHBFB19,LJQG21,HZ21,SM21,LZ21,HWWBSZ21}, backdoor attacks~\cite{YLZZ19,SSP20,CSBMSWZ21,JLG22} and adversarial examples~\cite{GSS15,PMGJCS17,CW172,KGB16,LCLS16}.
Previous work has demonstrated that neural networks are vulnerable to model stealing attacks.
In this paper, we concentrate on model stealing attacks to SSL pre-trained encoders, which have not been studied yet.
\subsection{DNN Watermarking}
Considering the cost of training deep neural networks (DNNs), DNN watermarking algorithms have received wide attention as an effective method to protect the copyright of DNNs.
Watermarking is a traditional concept for media such as audio and video, and it has recently been extended to protect the intellectual property of deep learning models~\cite{UNSS17,MPT17,RCK18,ABCPK18,JCCP21}.
Concretely, the watermarking procedure can be divided into two steps, i.e., injection and verification.
In the injection step, the model owner injects a watermark and a pre-defined behavior into the model in the training process.
The watermark is usually secret, such as a trigger that is only known to the model owner~\cite{LHZG19}.
In the verification step, the ownership of a suspect model can be claimed if the watermarked encoder has the pre-defined behavior when the input samples contain the trigger.
So far, existing watermarking algorithms mainly focus on classifiers for a specific task.
However, how to design a watermarking algorithm for SSL pre-trained encoders that fits various downstream tasks remains largely unexplored.
\section{Threat Model}
In this paper, we consider two parties: the \emph{defender} and the \emph{adversary}.
The defender is the owner of the victim encoder, whose goal is to protect the copyright of the victim encoder when publishing it as an online service.
The adversary, on the contrary, aims to steal the functionality of the victim encoder and construct a surrogate encoder that can (1) behave similarly as the victim encoder in various downstream tasks and (2) bypass the copyright protection method for the victim encoder.
\mypara{Adversary's Background Knowledge}
For the adversary, we first assume that they only have black-box access to the victim encoder, which is the most challenging setting for the adversary~\cite{OSF19,JCBKP20,KTPPI20,SHHZ22}.
In this setting, the adversary can only query the victim encoder with data samples and obtain their corresponding responses, i.e., the features.
Then, data samples and the corresponding responses are used to train the surrogate encoders.
We categorize the adversary's background knowledge into two dimensions, i.e., pre-training dataset and victim encoder's architecture.
Concretely, we assume that the adversary has a query dataset to perform the attack.
Note that the query dataset can be the same as the victim encoder's training dataset (i.e., the pre-training dataset $\mathcal{D}_{train}$).
However, we later show that the attack is still effective even with a query dataset drawn from a distribution different from that of $\mathcal{D}_{train}$.
Regarding the victim encoder's architecture, we first assume that the adversary can obtain it since such information is usually publicly accessible.
Then we empirically show that this assumption can be relaxed as well, and the attack is even more effective when the surrogate encoder leverages deeper model architectures.
\mypara{Adaptive Adversary}
We then consider an adaptive adversary who knows that the victim encoder is already watermarked.
This means they can leverage watermark removal techniques like pruning~\cite{LKDSG17} and finetuning~\cite{LDG18} on the surrogate encoder to bypass the watermark verification.
A more powerful adversary (e.g., insider threat~\cite{CJG21}) with white-box access to the victim encoder can directly perform pruning and finetuning against the victim encoder and treat it as the surrogate encoder.
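For instance, magnitude pruning, one of the removal techniques mentioned above, can be sketched as follows (a simple unstructured variant written for illustration; a real attack would prune layer-wise and finetune afterwards):

```python
import numpy as np

def magnitude_prune(W, sparsity=0.5):
    """Zero out the smallest-magnitude entries of a weight matrix."""
    k = int(W.size * sparsity)
    out = W.copy()
    if k == 0:
        return out
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    out[np.abs(out) <= thresh] = 0.0  # ties may prune slightly more than k entries
    return out
```

A robust watermark must remain extractable even after a substantial fraction of such weights are removed.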
\section{Design on Watermarking Algorithm}
\label{section:watermark_design}
In this section, we present \emph{SSLGuard}\xspace, a watermarking scheme to preserve the copyright of the SSL pre-trained encoders.
\emph{SSLGuard}\xspace should have the following properties:
\begin{itemize}
\item {\bf Fidelity:}
To minimize the impact of \emph{SSLGuard}\xspace on the legitimate user, the influence of \emph{SSLGuard}\xspace on clean pre-trained encoders should be negligible, which means \emph{SSLGuard}\xspace keeps the utility of downstream tasks.
%
\item {\bf Effectiveness: } \emph{SSLGuard}\xspace should judge whether a suspect model is a surrogate (or clean) model with high precision. In other words, \emph{SSLGuard}\xspace should extract watermarks from surrogate encoders effectively.
%
\item {\bf Undetectability: }The watermark cannot be extracted by a \emph{no-match} secret key-tuple. Undetectability ensures that the ownership of the SSL pre-trained encoder cannot be falsely claimed.
%
\item {\bf Efficiency: } \emph{SSLGuard}\xspace should embed and extract watermark efficiently. For instance, the time costs for the watermark embedding and extracting process should be less than pre-training an SSL model.
%
\item {\bf Robustness: } \emph{SSLGuard}\xspace should be robust to watermark removal attacks, such as model stealing, pruning, and finetuning.
\end{itemize}
In the following subsections, we introduce the design of \emph{SSLGuard}\xspace.
\autoref{table:notation} summarizes the notations used in this paper.
\subsection{Overview}
\begin{figure}[t]
\centering
\includegraphics[width=9cm]{pic/workflow.pdf}
\caption{The workflow of \emph{SSLGuard}\xspace. Given a clean SSL pre-trained encoder (colored in green), \emph{SSLGuard}\xspace outputs a key-tuple and a watermarked encoder (colored in yellow).
The defender can employ the watermarked encoder on the cloud platform or adopt the key-tuple to extract watermark from a suspect encoder.}
\label{fig::workflow}
\end{figure}
Firstly, we recall a mathematical result~\cite{CFJ13}:
In $\mathbb{R}^n$, the empirical distribution of the angle $\theta$ between two random vectors converges to the distribution with the following probability density function:
\begin{equation*}
f(\theta) = \frac{1}{\sqrt \pi}\cdot \frac{\Gamma (\frac{n}{2})}{\Gamma (\frac{n-1}{2})} \cdot (\sin \theta)^{n-2}, \theta \in [0,\pi].
\end{equation*}
The distribution $f(\theta)$ is already very close to a normal distribution when $n \geq 5$.
It follows that two random vectors in a high-dimensional space (such as $\mathbb{R}^{128}$) are almost \emph{orthogonal}.
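This concentration is easy to check numerically (NumPy sketch; the standard deviation of the cosine is $1/\sqrt{n}$):

```python
import numpy as np

# Empirical check: cosines of random vector pairs in R^128 concentrate near 0.
rng = np.random.default_rng(0)
n, trials = 128, 2000
a = rng.normal(size=(trials, n))
b = rng.normal(size=(trials, n))
cos = (a * b).sum(axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
# mean ~ 0 and std ~ 1/sqrt(128) ~ 0.088: random pairs are nearly orthogonal
```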
The inspiration for \emph{SSLGuard}\xspace is based on this mathematical fact.
Consider a vector with the same dimension as the features.
If this vector is randomly initialized, the average cosine similarity between the features and this vector should concentrate around zero.
However, if the average cosine similarity is much larger than $0$, or even close to $1$, this can be taken as a signal that those features are strongly correlated with this vector.
Therefore, the defender can generate a verification dataset $\mathcal{D}_{v}$ and a secret key $sk$.
The defender can finetune a clean encoder to transform samples from $\mathcal{D}_{v}$ to the features which have high cosine similarity with $sk$.
Meanwhile, if the defender inputs these verification samples to a clean encoder, the cosine similarity between the features and $sk$ should follow an approximately normal distribution with mean $0$.
We leverage this mechanism to design \emph{SSLGuard}\xspace.
The workflow of \emph{SSLGuard}\xspace is shown in \autoref{fig::workflow}.
Concretely, given a clean encoder $F$ which is pre-trained by a certain SSL algorithm, \emph{SSLGuard}\xspace will output a watermarked encoder $F_*$ and a secret key-tuple $\kappa$ as:
\begin{equation*}
\begin{split}
F_*, \kappa & \leftarrow SSLGuard(F), \\
\kappa &= \{\mathcal{D}_v, \mathcal{G}, sk\}.
\end{split}
\end{equation*}
The secret key-tuple $\kappa$ consists of three items: verification dataset $\mathcal{D}_v$, decoder $\mathcal{G}$ and secret key $sk$.
$\mathcal{G}$ is an MLP that maps the features from encoders to a new latent space to calculate the cosine similarity.
$sk \in \mathbb{R}^m$ is a vector whose dimension is the same as $\mathcal{G}$'s outputs.
\emph{SSLGuard}\xspace contains two processes, i.e., watermark injection and extraction.
For the injection process, \emph{SSLGuard}\xspace uses a secret key-tuple $\kappa$ to embed the watermark into a clean encoder $F$ and outputs watermarked encoder $F_*$ as:
\begin{equation*}
F_* \leftarrow Embed(F, \kappa).
\end{equation*}
The defender can release $F_*$ to the cloud platform and keep $\kappa$ secret.
For the extraction process, given a suspect encoder $F'$, the defender can use $\kappa$ to extract a feature vector $sk'$ (called \emph{decoded key}) from $F'$ by:
\begin{equation*}
sk' \leftarrow Extract(F', \kappa).
\end{equation*}
Then, the defender can measure the cosine similarity between $sk'$ and $sk$, and judge if a suspect encoder $F'$ is copied from the released encoders by:
\begin{equation*}
Verify(F') = \left\{
\begin{array}{ll}
1 & {WR >th_v}\\
0 & {\text{otherwise}}\\
\end{array} \right.
\end{equation*}
Here we adopt the watermark rate (WR) as the metric, i.e., the ratio of verification samples whose decoded keys $sk'$ are close to $sk$.
Concretely, WR is defined as:
\begin{equation*}
WR = \frac{1}{|\mathcal{D}_v|}\sum_{x \in \mathcal{D}_v}\mathds{1}({\rm sim}(sk',sk)>th_w).
\end{equation*}
We need two thresholds here: $th_v$ and $th_w$.
$th_w$ is used to calculate WR, and $th_v$ is a threshold to verify the copyright.
We set $th_w = 0.5$ and $th_v = 50 \%$ by default.
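Putting the two thresholds together, the verification step amounts to the following (an illustrative NumPy sketch; `decoded_keys` stacks the decoded keys $sk'$ of all verification samples):

```python
import numpy as np

def watermark_rate(decoded_keys, sk, th_w=0.5):
    """Fraction of verification samples whose decoded key sk' is close to sk."""
    sk = sk / np.linalg.norm(sk)
    d = decoded_keys / np.linalg.norm(decoded_keys, axis=1, keepdims=True)
    return float((d @ sk > th_w).mean())

def verify(decoded_keys, sk, th_w=0.5, th_v=0.5):
    """Return 1 (copied) iff the watermark rate exceeds the threshold th_v."""
    return int(watermark_rate(decoded_keys, sk, th_w) > th_v)
```

Because random high-dimensional vectors are nearly orthogonal, decoded keys from an unrelated encoder almost never clear $th_w$, so clean encoders pass verification with WR close to $0$.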
The overview of \emph{SSLGuard}\xspace is depicted in \autoref{figure:overview}.
Concretely, we first train a watermarked encoder that contains the information of the verification dataset and secret key.
The clean encoder serves as a query-based API to guide the training process.
The shadow encoder is used to simulate the model stealing process to better preserve the watermark under model stealing attacks.
The watermarked encoder should keep the utility of the clean encoder while preserving the watermark embedded in it.
More details will be introduced in the following subsections.
\begin{figure*}[t]
\centering
\includegraphics[width=18cm]{pic/algorithm.pdf}
\caption{The overview of \emph{SSLGuard}\xspace.}
\label{figure:overview}
\end{figure*}
\begin{table}[!t]
\centering
\caption{List of notations.}
\label{table:notation}
\begin{tabular}{r l }
\toprule
{\bf Notation} & {\bf Description} \\
\midrule
$F$, $F_*$, $F_s$ & Clean/Watermarked/Shadow encoder \\
$\mathcal{D}_{train}$, $\mathcal{D}_{shadow}$ & Pre-training/Shadow dataset \\
$\mathcal{D}_{priv}$, $\mathcal{D}_{v}$ & Private/Verification dataset \\
$T$, $M$ & Trigger, Mask \\
$\kappa$, $\mathcal{G}$ &Key-tuple, Decoder \\
$sk$, $sk'$ & Secret key, Decoded key \\
S & Surrogate encoder \\
DA & Downstream accuracy \\
WR & Watermark rate \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Preparation}
To watermark a pre-trained encoder, the defender should prepare a private dataset $\mathcal{D}_{priv}$, a mask $M$, and a random trigger $T$.
The mask $M$ is a binary matrix that contains the position information of trigger $T$, which means $M$ and $T$ have the same size.
Following~\cite{YLZZ19,GWZZLSW21}, we inject the trigger into private samples $x_p$ by:
\begin{equation}
\mathcal{P}(x_p, T) =(1-M) \circ x_p + M \circ T, x_p \in \mathcal{D}_{priv},
\end{equation}
where $\circ$ denotes the element-wise product.
Finally, once we obtain the final trigger $T_F$, we can generate the verification dataset $\mathcal{D}_v=\{x_v \mid x_v=\mathcal{P}(x_p, T_F), x_p \in \mathcal{D}_{priv}\}$.
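The patching operation $\mathcal{P}$ is a straightforward masked blend (NumPy sketch):

```python
import numpy as np

def apply_trigger(x, trigger, mask):
    """P(x, T) = (1 - M) * x + M * T, element-wise, with a binary mask M."""
    return (1 - mask) * x + mask * trigger
```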
Besides, inspired by Chen et al.~\cite{CKNH20}, the defender prepares a decoder, denoted as $\mathcal{G}$, to map the features generated from the pre-trained encoder to a new latent space for the cosine similarity calculation.
We use an MLP as $\mathcal{G}$ in this paper.
$\mathcal{G}$ takes the features generated from an encoder $E$ as inputs, and outputs decoded key $sk' = \mathcal{G}(E(x))$.
Here we define two loss functions.
Our first goal is to make the decoded keys obtained from a dataset $\mathcal{D}_1$ similar to the secret key $sk$, and we define the \emph{correlated loss function} as:
\begin{equation}
\mathcal{L}_{corr}(\mathcal{D}_1,E) =-\frac{\sum_{x \sim \mathcal{D}_1}{{\rm sim}(\mathcal{G}(E(x)),sk)}}{|\mathcal{D}_1|},
\end{equation}
where ${\rm sim}(\cdot,\cdot)$ is a similarity function.
If not otherwise specified, we use cosine similarity as the similarity function.
The goal of $\mathcal{L}_{corr}$ is to train an encoder and decoder together to transform the correlated sample $x$ into $sk'$, where $sk'$ is similar to $sk$.
The more similar $sk'$ and $sk$ are, the smaller $\mathcal{L}_{corr}$ becomes, until it converges to $-1$.
Secondly, given a dataset $\mathcal{D}_2$ of uncorrelated samples, the decoder $\mathcal{G}$ should map their decoded keys to directions orthogonal to $sk$; this yields the \emph{uncorrelated loss function}:
\begin{equation}
\mathcal{L}_{uncorr}(\mathcal{D}_2, E) =(\frac{\sum_{x \sim \mathcal{D}_2}{{\rm sim}(\mathcal{G}(E(x)),sk)}}{|\mathcal{D}_2|})^2.
\end{equation}
\emph{SSLGuard}\xspace applies $\mathcal{L}_{uncorr}$ to decoded keys that should not be encoded in a direction similar to $sk$.
Finally, we here define an \emph{embedding match loss function} to match the features generated from encoder $E'$ and $E''$:
\begin{equation}
\mathcal{L}_{match}(\mathcal{D}_3,E',E'') =-\frac{\sum_{x \sim \mathcal{D}_3}{{\rm sim}(E'(x),E''(x))}}{|\mathcal{D}_3|}.
\end{equation}
\emph{SSLGuard}\xspace leverages $\mathcal{L}_{match}$ to maintain the utility of the watermarked encoder and simulate the model stealing attacks.
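The three loss functions are direct cosine-similarity statistics over feature batches; a NumPy sketch (operating on already-decoded batches for brevity; in the scheme above the keys are $\mathcal{G}(E(x))$):

```python
import numpy as np

def cos_sim(a, b):
    return (a * b).sum(axis=-1) / (
        np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))

def l_corr(keys, sk):
    """Correlated loss: pull decoded keys toward sk; converges to -1."""
    return float(-cos_sim(keys, sk).mean())

def l_uncorr(keys, sk):
    """Uncorrelated loss: squared mean cosine, minimized at orthogonality."""
    return float(cos_sim(keys, sk).mean() ** 2)

def l_match(f1, f2):
    """Embedding match loss: align two encoders' features on the same inputs."""
    return float(-cos_sim(f1, f2).mean())
```

Note that $\mathcal{L}_{uncorr}$ squares the \emph{mean} cosine, so it only pushes the batch average toward orthogonality rather than every individual key.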
\subsection{Watermark Embedding}
As shown in \autoref{figure:overview}, \emph{SSLGuard}\xspace adopts three encoders in total: the clean encoder $F(x;\theta)$, the watermarked encoder $F_*(x;\theta_w)$, and the shadow encoder $F_s(x;\theta_s)$.
Meanwhile, \emph{SSLGuard}\xspace also uses three datasets: the pre-training dataset $\mathcal{D}_{train}$, the shadow dataset $\mathcal{D}_{shadow}$, and the verification dataset $\mathcal{D}_{v}$.
In the following, we introduce the loss functions for each encoder.
\mypara{Watermarked Encoder}
For the watermarked encoder, we want it to keep the utility of the clean encoder.
Therefore, for the samples from $\mathcal{D}_{train}$, we force the features from $F$ and $F_*$ to become similar through $\mathcal{L}_{match}$, so the loss $\mathcal{L}_0$ can be defined as:
\begin{equation}
\mathcal{L}_0 = \mathcal{L}_{match}(\mathcal{D}_{train}, F,F_*).
\end{equation}
Meanwhile, the decoder $\mathcal{G}$ should extract $sk$ from the verification dataset $\mathcal{D}_{v}$ but not from the pre-training dataset $\mathcal{D}_{train}$. The loss $\mathcal{L}_1$ to achieve this goal is defined as:
\begin{equation}
\mathcal{L}_1 = \mathcal{L}_{uncorr}(\mathcal{D}_{train}, F_*) + \mathcal{L}_{corr}(\mathcal{D}_v, F_*).
\end{equation}
So the final loss function for the watermarked encoder is
\begin{equation}
\mathcal{L}_w = \mathcal{L}_0 + \mathcal{L}_1.
\end{equation}
\mypara{Shadow Encoder}
The shadow encoder's task is to mimic model stealing attacks.
Here we use $\mathcal{D}_{shadow}$ to simulate the query process.
The loss function of the shadow encoder is:
\begin{equation}
\mathcal{L}_{s} = \mathcal{L}_{match}(\mathcal{D}_{shadow}, F_*,F_s).
\end{equation}
\mypara{Trigger and Decoder}
Given the verification dataset, we aim to optimize the trigger $T$ and the decoder $\mathcal{G}$ to extract $sk$ from both the watermarked encoder and the shadow encoder. The corresponding loss can be defined as:
\begin{equation}
\mathcal{L}_2 = \mathcal{L}_{corr}(\mathcal{D}_v,F_*) + \mathcal{L}_{corr}(\mathcal{D}_v,F_s).
\end{equation}
Meanwhile, for the clean encoder $F$, the decoder $\mathcal{G}$ should not map the decoded keys of any dataset close to $sk$. The corresponding loss can be defined as:
\begin{equation}
\mathcal{L}_3 = \mathcal{L}_{uncorr}(\mathcal{D}_{train},F) + \mathcal{L}_{uncorr}(\mathcal{D}_{v},F).
\end{equation}
Besides, for the watermarked encoder $F_*$ and the shadow encoder $F_s$, the decoder should be able to distinguish between normal samples and verification samples. The loss to achieve this goal can be defined as:
\begin{equation}
\mathcal{L}_4 = \mathcal{L}_{uncorr}(\mathcal{D}_{train},F_*) + \mathcal{L}_{uncorr}(\mathcal{D}_{train},F_s).
\end{equation}
Given the above losses, the final loss function for trigger and decoder can be defined as:
\begin{equation}
\mathcal{L}_{T} = \mathcal{L}_2 + \mathcal{L}_3 + \mathcal{L}_4.
\end{equation}
\mypara{Optimization Problem}
After designing all loss functions, we formulate \emph{SSLGuard}\xspace as an optimization problem.
Concretely, we update the parameters as follows:
\begin{equation}
\begin{split}
\theta_s & \leftarrow {\rm Optimizer}(\theta_s, \nabla_{\theta_s}\mathcal{L}_{s}, \eta_s), \\
\theta_w & \leftarrow {\rm Optimizer}(\theta_w, \nabla_{\theta_w}\mathcal{L}_{w}, \eta_w), \\
T & \leftarrow {\rm Optimizer}(T, \nabla_{T}\mathcal{L}_{T}, \eta_T), \\
\mathcal{G} & \leftarrow {\rm Optimizer}(\mathcal{G}, \nabla_{\mathcal{G}}\mathcal{L}_{T}, \eta_G). \\
\end{split}
\end{equation}
Here, $\eta_s$, $\eta_w$, $\eta_T$, and $\eta_G$ are the learning rates of the shadow encoder, watermarked encoder, trigger, and decoder, respectively.
\section{Evaluation}
\label{section:evaluation}
\subsection{Experimental Setup}
\mypara{Datasets} We use the following datasets to conduct our experiments.
\begin{itemize}
\item {\bf CIFAR-10~\cite{CIFAR}} The CIFAR-10 dataset has $60,000$ images in $10$ classes. Among them, there are $50,000$ images for training and $10,000$ images for testing. The size of each sample is $32 \times 32 \times 3$.
We randomly sample $100$ images from the $10$ classes of the testing set as our private dataset $\mathcal{D}_{priv}$.
We use the training set as our query dataset in model stealing attacks.
%
\item {\bf CIFAR-100~\cite{CIFAR}.} Similar to CIFAR-10, the CIFAR-100 dataset contains $60,000$ images of size $32 \times 32 \times 3$ in $100$ classes, with $500$ training images and $100$ testing images per class. We use the training set as our query dataset in model stealing attacks.
%
\item {\bf STL-10~\cite{CNL11}.}
The STL-10 dataset consists of $5,000$ training samples and $8,000$ test samples.
Besides, it also contains $100,000$ unlabeled samples. The unlabeled data are extracted from a similar but broader distribution of images than labeled data.
We use unlabeled samples to query the victim encoder in model stealing attacks.
We resize each sample from $96 \times 96 \times 3$ to $32 \times 32 \times 3$.
%
\item {\bf GTSRB~\cite{SSSI11}.} The German Traffic Sign Recognition Benchmark (GTSRB) contains $39,209$ training images and $12,630$ test images of traffic signs in $43$ categories.
We resize images to $32 \times 32 \times 3$.
The attackers use the training set to query the victim encoder in model stealing attacks.
\end{itemize}
\mypara{Pre-training Encoder}
In our experiment, we use the training set of CIFAR-10 as the pre-training dataset and adopt ResNet-18~\cite{HZRS16} as the model architecture to pre-train the encoder.
Regarding the training algorithm, we consider SimCLR, MoCo v2, and BYOL.
Our implementation is based on the publicly available code of contrastive learning.\footnote{\url{https://github.com/vturrisi/solo-learn}}
The encoders are pre-trained for $1,000$ epochs with a batch size of $256$ using the SGD optimizer.
The dimension of the encoder's output, i.e., the feature, is $512$.
Once being trained, those encoders are considered as the clean encoder.
\mypara{SSLGuard}
We reload the clean encoder and finetune it to be the watermarked encoder.
Note that we freeze the weights in batch normalization layers following the settings by Jia et al.~\cite{JLG22}.
We use ResNet-50 as the shadow encoder's architecture since the adversary may also leverage a larger model architecture to perform model stealing attacks.
We leverage the SGD optimizer with $0.01$ learning rate to train both the watermarked and shadow encoders for $200$ epochs.
The dimension of $sk$ is $128$.
For the trigger, we randomly sample a $32 \times 32 \times 3$ tensor from a uniform distribution as the initial trigger, and the trigger's learning rate is $0.005$.
For each sample in $\mathcal{D}_{priv}$, $35\%$ of its area is patterned by the trigger.
We use a 3-layer MLP as the decoder $\mathcal{G}$.
The number of neurons of $\mathcal{G}$ is 512, 256, and 128, respectively.
We use the SGD optimizer with $0.005$ learning rate to update its parameters.
\mypara{Downstream Classifier}
We use a $2$-layer MLP as the downstream classifier with $256$ neurons in its hidden layer.
For each downstream task, we freeze the parameters of the pre-trained encoder and train the downstream classifier for $20$ epochs using the Adam optimizer~\cite{KB15} with a $0.005$ learning rate.
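As a minimal stand-in for this evaluation protocol, a softmax linear probe on frozen features can be trained as follows (NumPy sketch; the paper uses a 2-layer MLP, and all hyper-parameters below are illustrative):

```python
import numpy as np

def train_linear_probe(feats, labels, n_classes, lr=0.2, epochs=300):
    """Softmax classifier on frozen encoder features (the encoder is never updated)."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(feats.shape[1], n_classes))
    Y = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = feats @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * feats.T @ (p - Y) / len(feats)  # cross-entropy gradient step
    return W
```

Because only the classifier head is trained, the downstream accuracy directly measures the quality of the frozen features.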
\subsection{Clean Pre-trained Encoder}
\label{subsection:clean_pretrained_encoder}
Given three \emph{clean} SSL pre-trained encoders (i.e., pre-trained by SimCLR, MoCo v2, and BYOL), we first measure their downstream accuracy for different tasks.
Then we evaluate the effectiveness of model stealing attacks against those SSL pre-trained encoders.
\mypara{Clean Downstream Accuracy (CDA)}
We consider two downstream classification tasks, i.e., CIFAR-10 and STL-10.
The downstream accuracy (DA) is shown in \autoref{table:clean_DA}.
We observe that the SSL pre-trained encoders can achieve remarkable performance on different downstream tasks.
For instance, DA is over $90\%$ on CIFAR-10 for all SSL pre-trained encoders.
Although all three encoders are pre-trained on CIFAR-10, the DA on STL-10 is also above $75\%$, which means SSL pre-trained encoders can learn high-level semantic information from one task, and the informative features generalize to other tasks.
Such observation further demonstrates the necessity of protecting the copyright of the SSL pre-trained encoders.
Note that we adopt clean DA (CDA) as our baseline accuracy.
Later we measure an encoder's performance by comparing its DA with CDA.
\begin{table}[h]
\centering
\caption{The baseline DA on different downstream tasks with different SSL algorithms.}
\label{table:clean_DA}
\begin{tabular}{l c c }
\toprule
{\bf SSL} & {\bf CIFAR-10} & {\bf STL-10} \\
\midrule
SimCLR & $90.11\%$ & $75.93 \% $ \\
MoCo v2 & $92.00\%$ & $79.54\%$ \\
BYOL & $92.06\%$ & $79.35\%$ \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.66\columnwidth}
\includegraphics[width=\columnwidth]{pic/simclr_cl_arch.pdf}
\caption{SimCLR}
\label{fig:steal_clean_arch_a}
\end{subfigure}
\begin{subfigure}{0.66\columnwidth}
\includegraphics[width=\columnwidth]{pic/moco_cl_arch.pdf}
\caption{MoCo v2}
\label{fig:steal_clean_arch_b}
\end{subfigure}
\begin{subfigure}{0.66\columnwidth}
\includegraphics[width=\columnwidth]{pic/byol_cl_arch.pdf}
\caption{BYOL}
\label{fig:steal_clean_arch_c}
\end{subfigure}
\caption{The performance of surrogate encoders trained with different architectures. The x-axis represents different downstream tasks. The y-axis represents DA.}
\label{fig:steal_clean_arch}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.65\columnwidth}
\includegraphics[width=\columnwidth]{pic/simclr_cl_data.pdf}
\caption{SimCLR}
\label{fig:steal_clean_data_a}
\end{subfigure}
\begin{subfigure}{0.65\columnwidth}
\includegraphics[width=\columnwidth]{pic/moco_cl_data.pdf}
\caption{MoCo v2}
\label{fig:steal_clean_data_b}
\end{subfigure}
\begin{subfigure}{0.65\columnwidth}
\includegraphics[width=\columnwidth]{pic/byol_cl_data.pdf}
\caption{BYOL}
\label{fig:steal_clean_data_c}
\end{subfigure}
\caption{The performance of surrogate encoders trained with different query data. The x-axis represents different downstream tasks. The y-axis represents DA.}
\label{fig:steal_clean_data}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.66\columnwidth}
\includegraphics[width=\columnwidth]{pic/simclr_cl_loss.pdf}
\caption{SimCLR}
\label{fig:steal_clean_sim_a}
\end{subfigure}
\begin{subfigure}{0.66\columnwidth}
\includegraphics[width=\columnwidth]{pic/moco_cl_loss.pdf}
\caption{MoCo v2}
\label{fig:steal_clean_sim_b}
\end{subfigure}
\begin{subfigure}{0.66\columnwidth}
\includegraphics[width=\columnwidth]{pic/byol_cl_loss.pdf}
\caption{BYOL}
\label{fig:steal_clean_sim_c}
\end{subfigure}
\caption{The performance of surrogate encoders trained with different similarity functions. The x-axis represents different similarity functions. The y-axis represents DA.}
\label{fig:steal_clean_sim}
\end{figure*}
\mypara{Model Stealing Attacks}
Since the SSL pre-trained encoders (victim encoders) are powerful, we then evaluate whether they are vulnerable to model stealing attacks.
To build the surrogate encoder, we consider three key factors, i.e., the surrogate encoder's architecture, the distribution of the query dataset, and the similarity function used to ``copy'' the victim encoder.
\mypara{Surrogate Encoder's Architecture}
We first investigate the impact of the surrogate encoder's architecture.
Note that here we only adopt CIFAR-10 as the query dataset and cosine similarity as the similarity function to measure the difference between the victim and surrogate encoders' features.
Since the architecture of the victim encoder can be non-public, the attacker may try different surrogate encoder's architectures to perform the model stealing attack.
Concretely, we assume attackers may leverage ResNet-18, ResNet-34, ResNet-50, and ResNet-101 as the surrogate encoder's architecture.
If the output dimension is different from ResNet-18's, e.g., ResNet-50/ResNet-101 outputs 2048-dimensional features, we add an extra linear layer to transform them into $512$ dimensions (the same as ResNet-18 and ResNet-34).
The DA of surrogate encoders is summarized in \autoref{fig:steal_clean_arch}.
A general trend is that the deeper the surrogate encoder's architecture, the better performance it can achieve on the downstream tasks.
For instance, for the victim encoder which is pre-trained by SimCLR (\autoref{fig:steal_clean_arch_a}), the DA on CIFAR-10 and STL-10 are $82.88\%$ and $71.02\%$ when the surrogate model's architecture is ResNet-18, while the DA increases to $88.48\%$ and $74.38\%$ when the surrogate encoder's architecture is changed to ResNet-101.
This may be because deeper model architecture can provide a wider parameter space and greater representation ability.
Therefore, in general, deeper surrogate encoder's architectures can better ``copy'' functionality from victim encoders.
However, we also observe that the deepest surrogate encoder's architecture does not always yield the best model stealing performance.
Taking the victim encoder pre-trained by MoCo v2 as an example (\autoref{fig:steal_clean_arch_b}), the DA on CIFAR-10 and STL-10 are $89.71\%$ and $76.56\%$ when the surrogate model's architecture is ResNet-50, while the DA decreases slightly to $88.79\%$ and $73.58\%$ when the surrogate encoder's architecture becomes ResNet-101.
We speculate this is because a larger model capacity leads to a better memorization of training samples, which degrades its performance on downstream tasks.
Note that in the following experiments, the adversary uses ResNet-50 as the surrogate encoder's architecture by default as it has the best performance in most cases.
\mypara{Distribution of The Query Dataset}
Secondly, we evaluate the impact of the query dataset's distribution.
In a real-world scenario, the adversary may or may not have access to the original pre-training dataset of the victim encoder, so the query dataset may come from a distribution different from that of the original pre-training dataset.
Here the adversary leverages ResNet-50 as the surrogate model's architecture and cosine similarity as the similarity function.
Regarding the query dataset, the adversary may leverage CIFAR-10, STL-10, CIFAR-100, and GTSRB as the query dataset to perform the attack.
The results are shown in \autoref{fig:steal_clean_data}.
First, we observe that the model stealing attack is more effective with the same distribution query dataset.
For instance, given the victim model trained by SimCLR (\autoref{fig:steal_clean_data_a}), for the downstream task CIFAR-10, the DA is $87.62\%$, $82.89\%$, $80.20\%$, and $62.73\%$ when the query dataset is CIFAR-10, STL-10, CIFAR-100, and GTSRB, respectively.
Meanwhile, for the task STL-10, the DA is $74.90\%$, $76.96\%$, $69.67\%$, and $54.57\%$, respectively.
This demonstrates the efficacy of model stealing attacks.
Another observation is that the distribution of the query dataset may also influence the DA on different downstream tasks.
For instance, given the victim model trained by BYOL (\autoref{fig:steal_clean_data_c}),
when the downstream task is CIFAR-10 classification, the DA is $89.76\%$ with CIFAR-10 as the query dataset, while only $83.64\%$ with STL-10 as the query dataset.
However, when the downstream task is STL-10 classification, the DA is $77.23\%$ with CIFAR-10 as the query dataset, but increases to $79.04\%$ with STL-10 as the query dataset.
Therefore, if the adversary is aware of the downstream task, they can construct a query dataset that is close to the downstream tasks to improve the performance.
\mypara{Similarity Function}
Finally, we investigate the effect of similarity functions.
Besides cosine similarity, the adversary can also use mean squared error (MSE) or mean absolute error (MAE) to match the victim encoder's features.
Here the adversary leverages ResNet-50 as the surrogate model's architecture and CIFAR-10 as the query dataset.
The results are shown in \autoref{fig:steal_clean_sim}.
We can see that cosine similarity outperforms MSE and MAE in all settings.
For instance, given the victim model trained by BYOL (\autoref{fig:steal_clean_sim_c}), the DA is $89.76\%$, $50.24\%$, and $32.06\%$ ($77.23\%$, $42.20\%$, and $28.01\%$) on CIFAR-10 (STL-10) when the similarity function is cosine similarity, MSE, and MAE, respectively.
This indicates that cosine similarity can better facilitate the stealing process.
This can be credited to the normalization effect of cosine similarity, which helps to learn the features better~\cite{GSATRBDPGAPKMV20}.
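The three objectives can be made concrete with a small sketch. The following is a minimal NumPy illustration (not the authors' implementation; the function names and batch shapes are our assumptions), where each row of a feature matrix corresponds to one queried sample:

```python
import numpy as np

def cosine_loss(f_victim, f_surrogate):
    """Negative mean cosine similarity between two feature batches (rows = samples).

    Normalizing each feature vector is what gives cosine similarity its
    advantage over MSE/MAE according to the evaluation above.
    """
    a = f_victim / np.linalg.norm(f_victim, axis=1, keepdims=True)
    b = f_surrogate / np.linalg.norm(f_surrogate, axis=1, keepdims=True)
    return -np.mean(np.sum(a * b, axis=1))

def mse_loss(f_victim, f_surrogate):
    """Mean squared error between the raw (unnormalized) feature batches."""
    return np.mean((f_victim - f_surrogate) ** 2)

def mae_loss(f_victim, f_surrogate):
    """Mean absolute error between the raw (unnormalized) feature batches."""
    return np.mean(np.abs(f_victim - f_surrogate))
```

During stealing, the surrogate encoder would be trained to minimize one of these losses between its features and the victim encoder's features on the query dataset.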
\mypara{Summary}
Our evaluation shows that SSL pre-trained encoders are highly vulnerable to model stealing attacks.
Also, we have several key findings: 1) A deeper surrogate encoder's architecture is helpful to conduct better model stealing attacks; 2) A query dataset with similar or the same distribution as the downstream task can enhance the performance of the surrogate encoder in this downstream task; and 3) cosine similarity is better than MSE and MAE to conduct model stealing attacks against the SSL pre-trained encoders.
\begin{figure}[t]
\centering
\begin{subfigure}{0.45\columnwidth}
\includegraphics[width=\columnwidth]{pic/tsne_simclr_clean.pdf}
\caption{$F^{simclr}$}
\label{fig:tsne_embed_simclr_a}
\end{subfigure}
\begin{subfigure}{0.45\columnwidth}
\includegraphics[width=\columnwidth]{pic/tsne_simclr_embed.pdf}
\caption{$F_*^{simclr}$}
\label{fig:tsne_embed_simclr_b}
\end{subfigure}
\caption{The t-SNE visualizations of features output from $F^{simclr}$ and $F_*^{simclr}$ when we input 800 samples in 10 classes randomly chosen from the CIFAR-10 dataset. Each point represents a feature vector. Each color represents one class.}
\label{fig:tsne_embed_simclr}
\end{figure}
\subsection{\emph{SSLGuard}\xspace}
In this section, we adopt \emph{SSLGuard}\xspace to embed the watermark into clean encoders pre-trained by SimCLR, MoCo v2, and BYOL.
We aim to validate four properties of \emph{SSLGuard}\xspace, i.e., effectiveness, utility, undetectability, and robustness.
\mypara{Effectiveness}
We first evaluate the effectiveness of \emph{SSLGuard}\xspace.
Concretely, we check whether the model owner can extract the watermark from the watermarked encoders.
Ideally, the watermark should be successfully extracted from the watermarked encoder and shadow encoder, but not the clean encoder.
We use the generated key-tuple $\kappa$ to measure watermark rate (WR) for clean encoder $F$, watermarked encoder $F_*$, and shadow encoder $F_s$ on three SSL algorithms.
As shown in \autoref{table:effectiveness}, the WR of watermarked encoders and shadow encoders are all $100\%$, which means encoder $F_*$ and $F_s$ both contain information of $\mathcal{D}_v$ and $sk$.
Meanwhile, the WR of clean encoders is almost $0\%$.
This means that our watermarking algorithm \emph{SSLGuard}\xspace is generic and does not judge a clean encoder to be a watermarked encoder.
We later show that \emph{SSLGuard}\xspace is robust to the adversary with different capabilities.
\begin{table}[h]
\centering
\caption{WR on clean and watermarked encoders.}
\label{table:effectiveness}
\begin{tabular}{l c c c}
\toprule
{\bf Encoder} & {\bf SimCLR} & {\bf MoCo v2} & {\bf BYOL} \\
\midrule
Clean Encoder & $ 10\%$ & $ 2\%$ & $ 0\%$ \\
Watermarked Encoder & $100 \%$ & $100 \%$ & $ 100\%$ \\
Shadow Encoder & $100 \%$ & $100 \%$ & $ 100\%$ \\
\bottomrule
\end{tabular}
\end{table}
\mypara{Utility}
One of the initial intentions of \emph{SSLGuard}\xspace is to maintain the utility of the original task.
Firstly, we visualize features output from $F^{simclr}$ (the clean encoder pre-trained by SimCLR) and $F_*^{simclr}$ using t-Distributed Neighbor Embedding (t-SNE)~\cite{MH08}, which is depicted in \autoref{fig:tsne_embed_simclr}.
We observe that the t-SNE results of $F^{simclr}$ and $F_*^{simclr}$ are almost identical and the features are successfully separated by both encoders.
This demonstrates that the watermarked encoder trained by \emph{SSLGuard}\xspace can faithfully reproduce the features generated by the clean encoder.
Also, we train downstream classifiers by using three watermarked encoders $F_*^{simclr}$, $F_*^{moco}$ and $F_*^{byol}$ on CIFAR-10 and STL-10.
\autoref{table:utility} shows the DA in different scenarios.
We can see that the DA of the watermarked encoder is similar to the clean encoder.
For instance, given the clean encoder pre-trained by SimCLR, the DA for the watermarked encoder only drops $0.02\%$ from CDA.
This is expected since the watermarked encoder can almost identically reproduce the features generated from the clean encoder.
The evaluation shows that our watermarking algorithm \emph{SSLGuard}\xspace does not sacrifice the utility of clean encoders on different downstream tasks.
\begin{table}[h]
\centering
\caption{DA of different watermarked encoders on different downstream tasks. }
\label{table:utility}
\begin{tabular}{l c c }
\toprule
{\bf Encoder} & {\bf CIFAR-10} & {\bf STL-10} \\
\midrule
$F_*^{simclr}$ & $90.09\%$ $(-0.02\%)$ & $75.82\%$ $(-0.11\%)$ \\
$F_*^{moco}$ & $91.94\%$ $(-0.06\%)$ & $79.56\%$ $(+0.02\%)$ \\
$F_*^{byol}$ & $92.03\%$ $(-0.03\%)$ & $79.37\%$ $(+0.02\%)$ \\
\bottomrule
\end{tabular}
\end{table}
\mypara{Undetectability}
We then check if the watermark can be extracted by a \emph{no-matching} key-tuple.
Through \emph{SSLGuard}\xspace, we generate three key-tuples: $\kappa^{simclr}$, $\kappa^{moco}$ and $\kappa^{byol}$.
We use one of the key-tuple to verify other watermarked encoders, such as using $\kappa^{simclr}$ to judge $F_*^{moco}$.
As shown in \autoref{table:undetectability}, the WR is almost $0\%$ for all non-matching pairs, which means one cannot use a non-matching $\kappa$ to verify a watermarked encoder.
Therefore, the adversary cannot generate a key-tuple to declare wrong ownership for a watermarked encoder.
Moreover, given an encoder $F_*^{simclr}$, we randomly generate $1,000$ 128-dimensional vectors and calculate the cosine similarity between decoder $\mathcal{G}$'s outputs and these random vectors.
We visualize the probability distribution of the cosine similarity in \autoref{fig:pdf_cos_a}.
We observe that if encoder $F_*$ and decoder $\mathcal{G}$ do not contain the information about $sk$, the cosine similarity values are concentrated near 0, which is consistent with the theoretical analysis above.
The results show that a cosine similarity above 0.5 is an abnormal signal.
Therefore, ``WR > $50\%$'' is strong evidence for an ownership claim.
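The verification rule can be sketched as follows. This is our minimal NumPy reading of the procedure (names such as `watermark_rate` and the simulated decoder outputs are assumptions, not the authors' code); it also illustrates why cosine similarities of random 128-dimensional vectors concentrate near 0:

```python
import numpy as np

def watermark_rate(decoded, sk, threshold=0.5):
    """Fraction of decoded outputs whose cosine similarity with the secret
    key sk exceeds the threshold; WR > 50% is taken as evidence of ownership."""
    d = decoded / np.linalg.norm(decoded, axis=1, keepdims=True)
    s = sk / np.linalg.norm(sk)
    return float(np.mean(d @ s > threshold))

# For random 128-dim outputs, the cosine similarity with a fixed key has
# standard deviation ~ 1/sqrt(128) ~ 0.09, so exceeding 0.5 is a multi-sigma
# event and the measured WR stays near 0%.
rng = np.random.default_rng(0)
random_outputs = rng.standard_normal((1000, 128))
secret_key = rng.standard_normal(128)
```

Only an encoder/decoder pair that actually embeds $sk$ pushes the similarities toward 1 and hence the WR toward $100\%$.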
\begin{table}[h]
\centering
\caption{WR with different key-tuples on different watermarked encoders.}
\label{table:undetectability}
\begin{tabular}{l c c c}
\toprule
{\bf Key-tuple} & $F_*^{simclr}$ & $F_*^{moco}$ & $F_*^{byol}$ \\
\midrule
$\kappa^{simclr}$ & $100\%$ & $0\%$ & $0\%$ \\
$\kappa^{moco}$ & $0\%$ & $100\%$ & $0\%$ \\
$\kappa^{byol}$ & $0\%$ & $0\%$ & $100\%$ \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[t]
\centering
\begin{subfigure}{0.45\columnwidth}
\includegraphics[width=\columnwidth]{pic/pdf_simclr.pdf}
\caption{Normal distribution.}
\label{fig:pdf_cos_a}
\end{subfigure}
\begin{subfigure}{0.45\columnwidth}
\includegraphics[width=\columnwidth]{pic/simclr_steal1_pdf.pdf}
\caption{Abnormal distribution.}
\label{fig:pdf_cos_b}
\end{subfigure}
\caption{The histogram of ${\rm sim}(\mathcal{G}(E(x_v)),v)$. In (a), we adopt $F_*^{simclr}$ as the encoder $E$ and $1,000$ random vectors as $v$ to calculate the distribution of the cosine similarity. In (b), we set $S_1^{simclr}$ as the encoder $E$ and $sk$ from $\kappa^{simclr}$ as $v$ to calculate the distribution. }
\label{fig:pdf_cos}
\end{figure}
\begin{table}[h]
\centering
\caption{Details of different model stealing attacks.}
\label{table:steal_type}
\begin{tabular}{l l l l}
\toprule
{\bf Attack} & {\bf Architecture} & {\bf Query Dataset} & {\bf Similarity} \\
\midrule
\texttt{Steal-1}\xspace & ResNet-50 & CIFAR-10 & Cosine \\
\texttt{Steal-2}\xspace & ResNet-50 & STL-10 & Cosine \\
\texttt{Steal-3}\xspace & ResNet-101 & CIFAR-10 & Cosine \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[h]
\centering
\caption{DA and WR on different surrogate encoders.}
\label{table:steal_results}
\scalebox{0.9}{
\begin{tabular}{llccc}
\toprule
{\bf Attack} & {\bf Metric} & {\bf SimCLR} & {\bf MoCo v2} & {\bf BYOL} \\
\midrule
\multirow{3}{*}{\texttt{Steal-1}\xspace} & WR & $94\%$ & $96\%$ & $100\%$ \\
~ & DA (CIFAR-10) & $88.62\%$ & $89.96\%$ & $90.14\%$ \\
~ & DA (STL-10) & $75.78\%$ & $76.63\%$ & $77.22\%$ \\ \midrule
\multirow{3}{*}{\texttt{Steal-2}\xspace} & WR & $77\%$ & $94\%$ & $100\%$ \\
~ & DA (CIFAR-10) & $82.99\%$ & $84.35\%$ & $83.83\%$ \\
~ & DA (STL-10) & $77.00\%$ & $78.87\%$ & $79.02\%$ \\ \midrule
\multirow{3}{*}{\texttt{Steal-3}\xspace} & WR & $95\%$ & $98\%$ & $100\%$ \\
~ & DA (CIFAR-10) & $87.20\%$ & $88.97\%$ & $88.35\%$ \\
~ & DA (STL-10) & $73.38\%$ & $73.43\%$ & $73.86\%$ \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table*}[t]
\centering
\caption{The results of DA on downstream task CIFAR-10 and WR on pruned and finetuned surrogate encoders.}
\label{table:wr_prune}
\scalebox{0.8}{
\begin{tabular}{lcccccccccc}
\toprule
\multirow{2}{*}{{\bf Encoder}} & \multicolumn{5}{c}{{\bf WR/DA under Pruning ($\%$)}} & \multicolumn{5}{c}{{\bf WR/DA under Finetuning ($\%$)}} \\ \cline{2-11}
~ & $r=0.1$ & $r=0.2$ & $r=0.3$ & $r=0.4$ & $r=0.5$ & $r=0.1$ & $r=0.2$ & $r=0.3$ & $r=0.4$ & $r=0.5$ \\
\midrule
$F_*^{simclr}$ & {\bf 100}/90.13 & {\bf 100}/90.36 & {\bf 100}/89.59 & {\bf 100}/90.13 & {\bf 100}/89.89 & {\bf 80}/90.19 & {\bf 80}/90.35 & {\bf 80}/89.67 & {\bf 80}/90.08 & {\bf 80}/89.76 \\
$S_1^{simclr}$ & {\bf 94}/88.42 & {\bf 92}/88.60 & {\bf 93}/88.65 & {\bf 95}/88.29 & {\bf 87}/87.62 & {\bf 92}/88.75 & {\bf 91}/88.68 & {\bf 90}/88.58 & {\bf 92}/88.70 & {\bf 91}/88.87 \\
$S_2^{simclr}$ & {\bf 76}/82.92 & {\bf 77}/82.87 & {\bf 73}/82.96 & {\bf 75}/82.49 & {\bf 61}/82.08 & {\bf 73}/83.67 & {\bf 69}/84.06 & {\bf 74}/83.70 & {\bf 76}/84.12 & {\bf 75}/84.42 \\
$S_3^{simclr}$ & {\bf 95}/87.29 & {\bf 95}/87.45 & {\bf 96}/86.72 & {\bf 96}/86.70 & {\bf 96}/85.53 & {\bf 89}/87.43 & {\bf 92}/87.37 & {\bf 90}/87.54 & {\bf 91}/86.97 & {\bf 92}/87.72\\
\midrule
$F_*^{moco}$ & {\bf 100}/91.95 & {\bf 100}/91.82 & {\bf 100}/91.92 & {\bf 100}/92.22 & {\bf 100}/92.28 & {\bf 100}/92.02 & {\bf 100}/91.94 & {\bf 100}/92.14 & {\bf 100}/92.36 & {\bf 100}/92.50 \\
$S_1^{moco}$ & {\bf 96}/89.85 & {\bf 95}/90.18 & {\bf 95}/89.98 & {\bf 96}/89.57 & {\bf 95}/89.39 & {\bf 96}/90.33 & {\bf 96}/90.29 & {\bf 95}/90.43 & {\bf 96}/90.74 & {\bf 96}/90.74 \\
$S_2^{moco}$ & {\bf 94}/84.11 & {\bf 94}/83.85 & {\bf 94}/83.75 & {\bf 95}/83.86 & {\bf 91}/83.09 & {\bf 92}/85.40 & {\bf 92}/85.36 & {\bf 92}/85.11 & {\bf 92}/85.95 & {\bf 92}/85.74 \\
$S_3^{moco}$ & {\bf 97}/88.95 & {\bf 95}/88.99 & {\bf 97}/88.49 & {\bf 100}/87.38 & {\bf 100}/86.54 & {\bf 95}/89.29 & {\bf 94}/89.22 & {\bf 95}/89.37 & {\bf 93}/88.46 & {\bf 95}/88.63 \\
\midrule
$F_*^{byol}$ & {\bf 100}/91.95 & {\bf 100}/91.82 & {\bf 100}/91.92 & {\bf 100}/92.22 & {\bf 100}/92.28 & {\bf 100}/92.13 & {\bf 100}/91.77 & {\bf 100}/91.73 & {\bf 100}/92.43 & {\bf 100}/92.23 \\
$S_1^{byol}$ & {\bf 100}/89.85 & {\bf 100}/90.18 & {\bf 100}/89.98 & {\bf 100}/89.57 & {\bf 100}/89.39 & {\bf 100}/90.31 & {\bf 100}/90.40 & {\bf 100}/90.39 & {\bf 100}/90.33 & {\bf 100}/90.28 \\
$S_2^{byol}$ & {\bf 100}/84.11 & {\bf 100}/83.85 & {\bf 100}/83.75 & {\bf 100}/83.86 & {\bf 100}/83.09 & {\bf 100}/85.16 & {\bf 100}/85.43 & {\bf 100}/85.26 & {\bf 100}/85.36 & {\bf 100}/85.64 \\
$S_3^{byol}$ & {\bf 100}/88.95 & {\bf 100}/88.99 & {\bf 100}/88.49 & {\bf 100}/87.38 & {\bf 100}/86.54 & {\bf 100}/88.49 & {\bf 100}/88.96 & {\bf 100}/88.80 & {\bf 100}/88.59 & {\bf 100}/88.83 \\
\bottomrule
\end{tabular}
}
\end{table*}
\mypara{Robustness}
We then quantify the robustness of \emph{SSLGuard}\xspace through the lens of model stealing attacks.
Note that we consider the most powerful surrogate encoder architectures and query datasets.
Concretely, based on the evaluation in \autoref{subsection:clean_pretrained_encoder}, we consider ResNet-50 and ResNet-101 as the surrogate encoder's architectures when the query dataset is CIFAR-10, and ResNet-50 as the surrogate encoder's architecture when the query dataset is STL-10.
We name the three attacks \texttt{Steal-1}\xspace, \texttt{Steal-2}\xspace, and \texttt{Steal-3}\xspace.
The details of each attack are shown in \autoref{table:steal_type}.
The WR and DA for different attacks are shown in \autoref{table:steal_results}.
We observe that although the model stealing attack is effective against the watermarked encoder, we can still verify the ownership of the surrogate model as the WR is also high.
For instance, for \texttt{Steal-1}\xspace against the watermarked encoder pre-trained by SimCLR (denoted as $S_1^{simclr}$), the DA is $88.62\%$ and $75.78\%$ on CIFAR-10 and STL-10, while the WR is $94\%$.
Moreover, we plot the distribution of the cosine similarity in \autoref{fig:pdf_cos_b}.
We observe that the cosine similarity distribution is quite abnormal compared to \autoref{fig:pdf_cos_a}, which indicates that the watermark embedded by \emph{SSLGuard}\xspace is still preserved in the surrogate model stolen by the adversary.
We also find that \emph{SSLGuard}\xspace transfers to attacks whose query datasets differ from both the pre-training dataset and the shadow dataset.
For instance, when the adversary leverages STL-10 as the query dataset to launch the model stealing attacks, i.e., \texttt{Steal-2}\xspace, the WR are $77\%$, $94\%$, and $100\%$ on $S_2^{simclr}$, $S_2^{moco}$, and $S_2^{byol}$, respectively.
Such observation demonstrates that the watermark embedded by \emph{SSLGuard}\xspace can consistently be verified even if the query dataset comes from a distribution unknown to the model owner.
Besides, \emph{SSLGuard}\xspace can also transfer to the attacks with different surrogate model architectures.
For instance, in \texttt{Steal-3}\xspace, the adversary adopts ResNet-101 as the surrogate encoder's architecture, which has a different architecture from both the victim encoder (ResNet-18) and the shadow encoder (ResNet-50).
\autoref{table:steal_results} shows that the WR is $95\%$, $98\%$, and $100\%$ on the surrogate encoders $S_3^{simclr}$, $S_3^{moco}$, and $S_3^{byol}$, respectively.
This implies that the watermark embedded by \emph{SSLGuard}\xspace can also be verified when the adversary leverages a totally different surrogate encoder architecture to conduct the attack.
\mypara{Summary}
Our results show that the \emph{SSLGuard}\xspace can effectively embed the watermark into a clean encoder while preserving its utility in the downstream tasks.
Also, the watermark embedding by \emph{SSLGuard}\xspace cannot be extracted by a no-matching key-tuple.
Coupling with the results under different types of model stealing attacks, we demonstrate that the \emph{SSLGuard}\xspace is robust against model stealing attacks with different query datasets and surrogate model's architectures.
\subsection{Adaptive Adversary}
We then consider an adaptive adversary who is aware of the fact the victim encoder is watermarked.
In this case, the adversary may leverage further watermark removal techniques such as pruning and finetuning against the surrogate encoder to eliminate the watermark.
A more powerful adversary may have white-box access to the victim encoder (e.g., insider threats), which means that they can directly perform pruning and finetuning to eliminate the watermark in the victim encoder.
\mypara{Pruning}
Pruning is an effective technique for model compression.
It is also considered a watermark removal attack.
In this part, we set the fraction $r$ of convolutional-layer weights with the smallest absolute values to zero.
We range $r$ from $0.1$ to $0.5$ as a larger $r$ degrades the DA significantly.
For instance, for the downstream task CIFAR-10, the DA for $S_2^{simclr}$ drops from $82.08\%$ to $79.99\%$ when $r$ increases from $0.5$ to $0.6$.
We summarize the results of WR (in bold) and DA (on task CIFAR-10) under pruning in \autoref{table:wr_prune}.
We observe that the WR is still preserved to a large extent with different $r$.
For instance, for the surrogate encoder $S_1^{simclr}$, the WR is $94\%$ when $r=0.1$ and remains $87\%$ when $r$ increases to $0.5$.
For the adversary with white-box access (i.e., they can directly conduct pruning against the victim encoder), the WR is also preserved.
Take $F_*^{moco}$ as an example: the WR is always $100\%$ with $r$ ranging from $0.1$ to $0.5$.
This means \emph{SSLGuard}\xspace is still robust against pruning.
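The pruning step above can be sketched as magnitude pruning. The following is a minimal NumPy illustration under our reading of the description (operating on a flattened weight array; not the authors' code):

```python
import numpy as np

def magnitude_prune(weights, r):
    """Return a copy of `weights` with the fraction r of entries that have
    the smallest absolute values set to zero (magnitude pruning)."""
    pruned = weights.astype(float).copy()
    k = int(r * pruned.size)                   # number of weights to zero out
    if k > 0:
        idx = np.argsort(np.abs(pruned))[:k]   # indices of the smallest |w|
        pruned[idx] = 0.0
    return pruned
```

In the experiments this operation would be applied to the convolutional-layer weights of each (surrogate or victim) encoder before re-evaluating WR and DA.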
\mypara{Finetuning}
After pruning the surrogate encoders, the adversary can finetune them under the victim encoder's supervision, following the setting in~\cite{JCCP21}.
This process is also called fine-pruning.
Pruning the surrogate encoders yields 60 pruned copies in total.
We finetune each of them for 20 epochs with a learning rate of 0.01.
The results are shown in \autoref{table:wr_prune}.
Firstly, we can see that the adversary can improve the DA of pruned surrogate encoders through finetuning.
For instance, for $S_3^{simclr}$, DA increases from $85.53\%$ to $87.72\%$ when $r=0.5$.
We also observe that all WR values are above $50\%$, which means \emph{SSLGuard}\xspace can still extract watermarks from these finetuned surrogate encoders.
Therefore, our results show that \emph{SSLGuard}\xspace is robust to finetuning as well.
\mypara{Summary}
Our evaluation over the adaptive adversary shows that \emph{SSLGuard}\xspace is still robust against further watermark removal attacks such as pruning and finetuning, which shows the great potential of protecting the copyright of SSL pre-trained encoders.
\section{Discussion}
\mypara{The Necessity of The Shadow Encoder}
The reason why \emph{SSLGuard}\xspace can extract watermarks from the surrogate encoder is that it locally simulates a model stealing process by using a shadow dataset and shadow encoder.
In this part, we aim to demonstrate the need for such a design.
We discard the shadow encoder, embed the watermark into a clean encoder pre-trained by SimCLR, and obtain a watermarked encoder $F_*^{sim}$ and key-tuple $\kappa^{sim}$.
We then mount three model stealing attacks, i.e., \texttt{Steal-1}\xspace, \texttt{Steal-2}\xspace, and \texttt{Steal-3}\xspace, against $F_*^{sim}$ to obtain three surrogate encoders.
We adopt $\kappa^{sim}$ to check the WR of these surrogate encoders and find that the watermark rates are $23\%$, $63\%$, and $32\%$, respectively.
This means the watermark may not be verified.
Meanwhile, the DA of the three surrogate encoders is $89.90\%$, $82.99\%$, and $87.36\%$ on CIFAR-10, respectively.
This indicates that the adversary can successfully steal the victim encoder as the DA for the surrogate encoder is close to the target encoder.
In conclusion, \emph{SSLGuard}\xspace cannot work well without the shadow encoder as the adversary can steal a surrogate encoder with high utility while bypassing the watermark verification process.
Therefore, the shadow encoder is crucial for defending against model stealing attacks.
\mypara{Model Level Verification}
To further demonstrate the efficacy of \emph{SSLGuard}\xspace, we set $10$ different random seeds to launch the model stealing attacks, and we get 30 surrogate encoders in total, including 10 surrogate encoders on \texttt{Steal-1}\xspace, \texttt{Steal-2}\xspace, and \texttt{Steal-3}\xspace, respectively.
We calculate WR for each surrogate encoder and summarize the results in \autoref{fig:model_level}.
We observe that \emph{SSLGuard}\xspace can successfully verify the copyright of all 30 surrogate encoders, as the WR for all surrogate encoders is larger than $50\%$.
The results indicate that \emph{SSLGuard}\xspace can consistently defend against model stealing attacks to a large extent.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth]{pic/model_level.pdf}
\caption{The results of WR of surrogate encoders on \texttt{Steal-1}\xspace, \texttt{Steal-2}\xspace, and \texttt{Steal-3}\xspace, respectively. We launch each type of model stealing attack $10$ times.}
\label{fig:model_level}
\end{figure}
\section{Related Work}
\mypara{Privacy and Security for SSL}
There have been more and more studies on the privacy and security of self-supervised learning.
Jia et al.~\cite{JLG21} summarize 10 security and privacy problems for SSL, of which only a small part has been studied so far.
Liu et al.~\cite{LJQG21} study membership inference attacks against contrastive learning-based pre-trained encoders.
Concretely, they leverage data augmentations over the original samples to generate multiple augmented views to query the target encoder and obtain the features.
Then, the authors measure the similarities among the features.
The intuition is that, if the sample is a member, then the similarities should be high since many augmented views of the sample are used during the training procedure, which makes them embedded closer.
He and Zhang~\cite{HZ21} perform the first privacy analysis of contrastive learning.
Concretely, the authors observe that the contrastive models are less vulnerable to membership inference attacks, while more vulnerable to attribute inference attacks.
The reason is that contrastive models are more generalized with a lower level of overfitting, which leads to fewer membership inference risks, but the representations learned by contrastive learning are more informative, thus leaking more attribute information.
Jia et al.~\cite{JLG22} propose the first backdoor attack against SSL pre-trained encoders.
By injecting the trigger pattern in the pre-training process of an encoder that correlated to a specific downstream task, the backdoored encoder can behave abnormally for this downstream task.
The authors further show that triggers for multiple tasks can be injected into the encoder simultaneously.
\mypara{DNNs Copyright Protection}
In recent years, several techniques for DNNs copyright protection have been proposed.
Among them, DNNs watermarking is one of the most representative algorithms.
Jia et al.~\cite{JCCP21} propose an entangled watermarking algorithm that encourages classifiers to represent training data and watermarks similarly.
The goal of the entanglement is to force the adversary to learn the knowledge of the watermarks when they steal the model.
DNN fingerprinting is another protection method.
Unlike watermarking, the goal of fingerprinting is to extract a specific property from the model.
Cao et al.~\cite{CJG21} introduce a fingerprinting extraction algorithm, namely IPGuard.
IPGuard regards the data points near the classification boundary as the model's fingerprint.
If a suspect classifier predicts the same labels for these points, then it will be judged as a surrogate classifier.
Chen et al.~\cite{CWPSCJMLS22} propose a testing framework for supervised learning models.
They propose six metrics to measure whether a suspect model is a copy of the victim model.
Among these metrics, four of them need white-box access, and black-box access is enough for the rest.
\section{Conclusion}
In this paper, we first quantify the copyright breaching threats of SSL pre-trained encoders through the lens of model stealing attacks.
We empirically show that the SSL pre-trained encoders are highly vulnerable to model stealing attacks.
This is because the rich information in the features can be leveraged to better capture the behavior of the victim encoder.
To protect the copyright of SSL pre-trained encoders, we propose \emph{SSLGuard}\xspace, a robust black-box watermarking algorithm for SSL pre-trained encoders.
Concretely, given a secret key, \emph{SSLGuard}\xspace embeds a watermark into a clean pre-trained encoder and outputs a watermarked version.
The shadow training technique is also applied to preserve the watermark under potential model stealing attacks.
Extensive evaluations show that \emph{SSLGuard}\xspace is effective in embedding and extracting watermarks and robust against model stealing and other watermark removal attacks such as pruning and finetuning.
\newpage
\balance
\bibliographystyle{plain}
\section{An enlightening way of looking for new physics}
Several considerations (e.g., UV-completions for the Standard Model) and
observations (e.g., Dark Matter)
lead us to assume the existence of particles beyond the Standard Model.
In order to have evaded detection so far, such new particles can
be either very heavy, or rather
light (e.g., at sub-eV scale) if they have
extremely small coupling to known particles \cite{Jaeckel:2010ni}.
The most renowned example
of such proposed weakly interacting slim particles, ``WISPs'' for short,
is arguably the axion \cite{Wilczek:1977pj},
which is a consequence of the Peccei-Quinn solution to the strong CP-problem.
In addition, strong interest has emerged recently for so-called
axion-{\it like} particles (ALPs), whose mass-coupling
relation is relaxed compared to
the QCD axion:
ALPs can, e.g., appear in intermediate string
scale scenarios \cite{Cicoli:2012sz},
and
constitute Dark Matter \cite{Arias:2012az}. Moreover they could
explain yet puzzling observations\footnote{Note that
astrophysical observations
of course also put strong constraints on the existence of ALPs.
A recent comprehensive overview of the corresponding
parameter space can be found in \cite{Hewett:2012ns}.}
in some astrophysical processes
such as anomalous White Dwarf cooling \cite{Isern:2008nt} and the
transmissibility of the universe to
high-energetic photons \cite{Horns:2012fx}.
Further WISPs can be particles of hidden sectors in string- and field-theoretic
extensions of the
Standard Model, see, e.g., \cite{Abel:2008ai}:
Particularly hidden photons (HPs), i.e., gauge bosons
of an extra U(1) gauge group
as well as hidden sector matter. The latter
can acquire an electromagnetic fractional charge
and thus can constitute so-called minicharged particles (MCPs),
see \cite{Jaeckel:2010ni}
for an overview. Hidden photons are also a
viable Dark Matter candidate \cite{Arias:2012az}
and could be responsible for the phenomenon of
Dark Radiation \cite{Jaeckel:2008fi}.
In addition, WISPs can appear as scalar modes in theories of massive gravity \cite{Brax:2012ie}.
If such WISPs exist, it is expedient to search for
them also by their interactions
with photons. Amongst others, this is advantageous
because photons can be easily produced at high rates and do not have tree-level self-interactions within the Standard Model.
Thus, beyond-Standard-Model physics becomes readily accessible.
Experiments particularly apt to look for WISPs with photons are of the
`light-shining-through-a-wall'-type (LSW) \cite{Redondo:2010dp}:
Laser photons can be converted into a WISP in front of a light-blocking
barrier (generation region)
and reconverted into photons behind that barrier (regeneration region).
Depending on the particle type, these
conversion processes are induced by magnetic
fields\footnote{Note that
for the test of ALPs and MCPs the optimal direction of these fields
is rather different \cite{Dobrich:2012sw}.} or are manifest as oscillations.
The most sensitive LSW laboratory setup thus far is the first
stage of the Any Light Particle Search (ALPS-I) \cite{Ehret:2010mh} at DESY.
With major upgrades in magnetic length, laser power and the detection system,
the proposed ALPS-II experiment
aims at improving the sensitivity by a few orders of magnitude for the
different WISPs.
Following last year's workshop contribution \cite{vonSeggern:2011em} we shortly
present an updated status of ALPS-II.
\section{ALPS-II status and prospect}
The reason for proposing to realize an upgraded version of
ALPS is its sensitivity
to particularly interesting parameter regions for various WISPs,
as indicated in the previous section.
Three key ingredients are responsible for this sensitivity boost \cite{TDR},
cf. Tab.~\ref{tab:param}:
Foremost, the magnetic length of
ALPS-II is expected to be 468~Tm. This can be achieved by a string of
10+10 HERA dipoles which can be taken
from a reserve of 24 spare magnets manufactured for HERA at DESY.
Note that the setup of this string requires an aperture increase by
straightening the beam pipe
to avoid clipping losses for the laser.
The viability of this undertaking has been proven with the
straightened ALPS-I magnet
achieving quench currents above values measured for the unbent magnet \cite{TDR}.
Secondly, the effective photon flux in the setup is planned \cite{baehre} to be increased
through a higher power buildup in the production cavity and
by virtue of `resonant regeneration' \cite{Hoogeveen:1990vq},
i.e., an optical resonator in the
regeneration region,
locked to the resonator in the
generation region\footnote{Note that `resonant regeneration' is already
successfully used in
related setups with microwaves \cite{Betz:2012tp}. For the optical regime,
different locking schemes have been proposed \cite{TDR,Mueller:2010zzc}.}.
To assure long-term stability of the cavity mirrors,
ALPS-II will employ infrared light at 1064nm wavelength
(instead of green 532nm for ALPS-I).
Thirdly, ALPS-II will feature a nearly background-free transition edge sensor
allowing for a high detection efficiency even for infrared light.
Whilst this sensor is under development,
the ALPS-I CCD can be used as a fall-back option.
\begin{table}
\begin{tabular}{|l|c|c|c|c|}
\hline
Parameter & Scaling & ALPS-I & ALPS-IIc & Sens. gain \\ [1pt] \hline
Effective laser power $P_{\rm laser}$ & $g_{a\gamma} \propto P_{\rm laser}^{-1/4}$ & 1\,kW & 150\,kW & 3.5\\[1pt] \hline
Rel. photon number flux $n_\gamma$ & $g_{a\gamma} \propto n_\gamma^{-1/4}$ & 1 (532\,nm) & 2 (1064\,nm) & 1.2\\[1pt] \hline
Power built up in RC $P_{\rm RC}$ & $g_{a\gamma} \propto P_{\rm RC}^{-1/4}$ & 1 & 40,000 & 14\\[1pt] \hline
$BL$ (before \& after the wall) & $g_{a\gamma} \propto (BL)^{-1}$ & 22\,Tm & 468\,Tm & 21\\[1pt] \hline
Detector efficiency $QE$ & $g_{a\gamma} \propto QE^{-1/4}$ & 0.9 & 0.75 & 0.96\\[1pt] \hline
Detector noise $DC$ & $g_{a\gamma} \propto DC^{1/8}$ & 0.0018\,s$^{-1}$ & 0.000001\,s$^{-1}$ & 2.6\\[1pt] \hline
Combined improvements & & & & 3082\\[1pt] \hline
\end{tabular}
\caption{Parameters of the ALPS-I experiment in comparison to the ALPS-II proposal.
The second column shows
the dependence of the reachable ALP-photon coupling on the experimental parameters.
The last column lists the approximate
sensitivity gain for ALP searches compared to ALPS-I.
For hidden photons, there is no gain from the magnetic field. Thus the
sensitivity gain follows as above except for the factor coming from $BL$
and amounts to 147.}
\label{tab:param}
\end{table}
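As an illustration of the scaling laws in Tab.~\ref{tab:param}, the combined sensitivity gain can be recomputed directly from the raw experimental parameters. The following Python sketch is illustrative only; small deviations from the quoted per-line factors stem from rounding in the table.

```python
# Recompute the approximate ALP sensitivity gain of ALPS-IIc over ALPS-I
# from the scaling laws quoted in the table. All numbers come from the
# table itself; the combined result is close to the quoted 3082.

# (ALPS-I value, ALPS-IIc value, exponent of the improvement factor)
factors = {
    "laser power P_laser": (1e3, 150e3, 1 / 4),   # g ~ P^(-1/4)
    "photon number flux":  (1.0, 2.0, 1 / 4),     # g ~ n^(-1/4)
    "RC power build-up":   (1.0, 40_000.0, 1 / 4),# g ~ P_RC^(-1/4)
    "BL":                  (22.0, 468.0, 1.0),    # g ~ (BL)^(-1)
    "detector QE":         (0.9, 0.75, 1 / 4),    # g ~ QE^(-1/4)
}
gain = 1.0
for old, new, p in factors.values():
    gain *= (new / old) ** p
# detector dark count: g ~ DC^(+1/8), so lower noise improves sensitivity
gain *= (0.0018 / 0.000001) ** (1 / 8)

print(f"ALP sensitivity gain: {gain:.0f}")   # table quotes 3082 (rounded factors)
print(f"HP  sensitivity gain: {gain / (468 / 22):.0f}")  # no BL factor; table: 147
```

For hidden photons the $BL$ factor simply drops out, which is why the caption quotes a reduced combined gain.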
\begin{figure*}[htbp]
\begin{tabular}{cc}
\begin{minipage}[t]{0.48\textwidth}
\begin{center}
\includegraphics[width=1\textwidth]{doebrich_babette_fig1.pdf}
\caption{Sketch of the prospective reach of ALPS-IIc (orange) in the axion-like particle parameter space,
see text for details.
}
\label{fig:ALPs}
\end{center}
\end{minipage}
&
\begin{minipage}[t]{0.5\textwidth}
\begin{center}
\includegraphics[width=1\textwidth]{doebrich_babette_fig2.pdf}
\caption{Expected sensitivity range of ALPS-IIa/b (red/blue) for hidden photons,
cf. text for details.
}
\label{fig:HPs}
\end{center}
\end{minipage}
\end{tabular}
\end{figure*}
ALPS-II is set to be realized in three stages:
ALPS-IIa is already well under way and will search for hidden photons
in a 10\,m+10\,m LSW setup without magnets.
It is meant to demonstrate
the viability of the optics setup, in particular the locking scheme of the regeneration
cavity.
As a physics benefit, it will be sensitive to the Dark Radiation
hint \cite{Jaeckel:2008fi} as well as
to parameter regions favored in intermediate string scale scenarios and a small region compatible with
HP Dark Matter, see Fig.~\ref{fig:HPs}.
As already done successfully at ALPS-I \cite{Ehret:2010mh}, gas will be inserted into the vacuum
tube at each stage to close the gaps arising
at minima of the conversion probability.
ALPS-IIb, still without magnets, is planned to test smooth operation at 100\,m+100\,m length in the HERA tunnel.
Only the final stage, ALPS-IIc, is planned to operate with magnets and thus explore
the parameter space of axion-like particles,
cf. Fig.~\ref{fig:ALPs}.
\section{Photons as a probe of asymptotically safe gravity}
A further field of physics beyond the Standard Model
in which laser-based setups could
yield insight is the search for a quantum theory of gravity.
Since photon-photon scattering has no tree-level background within the Standard Model,
it is tempting to explore whether measuring its cross-section at
high energies can teach us about quantum gravity (QG): the small QG signal is more easily
accessible in the absence of a tree-level background.
To achieve high photon energies in a collider mode,
several future options are being advanced:
Compton-backscattering is possible within linacs or from wake-field-accelerated charged particles
using pulsed high-intensity lasers \cite{Kando:2008te}.
The tiny cross section for photon-photon scattering through graviton exchange \cite{Gupta}
may be drastically enhanced in scenarios with extra dimensions \cite{Cheung:1999ja}.
This is not a new idea,
but it strongly deserves revisiting \cite{Dobrich:2012nv} in the context of UV-complete theories
that do not rely on an energy cutoff scale, such as asymptotically
safe gravity, see \cite{AS_reviews} for reviews.
It is thus worth exploring further whether laser-based searches
could eventually even shed light on QG.
\section{Summary}
Laser-based laboratory searches can be a strong tool to address various questions of
physics beyond the Standard Model. In this contribution we have discussed the status and prospects of
ALPS-II, which is designed to explore particularly well-motivated parameter regions of different WISPs.
Further, we have briefly pointed to the possibility of even testing quantum gravity scenarios with purely
laser-based setups.
\vspace{0.1cm}
\noindent
{\it The author would like to thank the Patras conference organizers for a marvelous
and stimulating workshop in Chicago
and all fellow colleagues from ALPS-II for fruitful and fun collaboration.}
\section{Introduction}
\label{sec:intro}
Experiments on ultracold atomic gases in optical lattices have opened the opportunity to simulate condensed-matter models in a well-controlled setup~\cite{Lewenstein-07,Bloch-08,ReichelVuletic11}, and also to probe the non-equilibrium dynamics of such strongly correlated quantum systems~\cite{Kinoshita-06,Hofferberth-07,Gring-12,Trotzky-12,Cheneau-12,Langen-13}. These systems are isolated from their environment to the extent that we can consider them closed, and they allow for dynamic control of the system parameters, enabling investigation of dynamics induced by quantum quenches~\cite{CalabreseCardy06,CalabreseCardy07}, ie, the time evolution following a sudden change in one of the system parameters. This has triggered considerable interest in theoretically addressing the non-equilibrium behaviour of various condensed-matter models after sudden quenches, see References~\cite{Polkovnikov-11,Eisert-15,GogolinEisert16} for reviews. \change{A natural extension of this setting is the finite-time quench, ie, the study of the dynamics during and after the change of the system parameters over a finite time interval $\tau$. Obviously the two extreme limits of this problem are the sudden quench and the adiabatically slow change of parameters, but the huge regime between these limits and the generic time dependence of the parameters open many possibilities for new features in the non-equilibrium dynamics. The main aim of the present manuscript is the study of this dynamics in the prototypical transverse-field Ising chain. So far finite-time quenches have been investigated mostly in bosonic as well as fermionic Hubbard models~\cite{MoeckelKehrein10,Bernier-12,Sandri-12} and one-dimensional Luttinger liquids~\cite{Dora-11,DziarmagaTylutki11,PerfettoStefanucci11,Pollmann-13,Bernier-14,Sachdeva-14,CS16,Porta-16}.}
In this work, we focus on finite-time quenches in the transverse field Ising (TFI) model. The Ising model has been realised experimentally in an ultracold gas of bosonic atoms in a linear potential~\cite{Simon-11}, and its behaviour following sudden quenches across the critical point has been observed~\cite{Meinert-13}. Theoretically, going back to the 1970s~\cite{Barouch-70,BarouchMcCoy71,BarouchMcCoy71-2}, sudden quenches have been studied extensively in this system. In particular, the order parameter and spin correlation functions~\cite{Calabrese-11,Calabrese-12jsm1} as well as the generalised Gibbs ensemble~\cite{Rigol-07,Calabrese-12jsm2,FagottiEssler13} describing the late-time stationary state have been investigated in detail. For example, as a consequence of the Lieb--Robinson bounds~\cite{LiebRobinson72,Bravyi-06} the spin correlation functions show a clear light-cone effect~\cite{CalabreseCardy06}. In our results, these behaviours are reproduced and generalised to take into account the finite quench duration and non-sudden protocols. We note that in the context of the Kibble--Zurek mechanism some attention has been given to linear ramps through the quantum phase transition in the TFI model~\cite{Dziarmaga05,CherngLevitov06,Kolodrubetz-12}. Furthermore, the behaviour of the order parameter in the late-time limit after linear ramps in the ferromagnetic phase has been investigated~\cite{Maraga-14}.
This paper is organised as follows: In section~\ref{sec:TFI} we set the notation and review the diagonalisation of the TFI chain. In section~\ref{sec:quench} we discuss the setup of a finite-time quantum quench and derive expressions for the time evolution of the correlation functions in the TFI chain without addressing the specifics of the quench protocol. We also construct the generalised Gibbs ensemble describing the stationary state at late times after the quench, again without addressing specific quench protocols. In section~\ref{sec:results protocols} we discuss specific quench protocols: linear, exponential, cosine and sine, cubic and quartic quenches, as well as piecewise differentiable versions thereof. \change{The protocols are chosen to cover different features of the time dependence, such as non-differentiable kinks. This allows us to identify properties of the non-equilibrium dynamics that are universal, ie, independent of these details.} We derive the equations governing the time-dependent Bogoliubov coefficients for each of the protocols and calculate their exact solutions for the linear and exponential quenches. In section~\ref{sec:results observables} we analyse the behaviour of the total energy, transverse magnetisation, transverse and longitudinal spin correlation functions, and the Loschmidt echo during and after the quench. In section~\ref{sec:SL} we briefly discuss the scaling limit of our results. Finally, in section~\ref{sec:curved} we re-interpret our results in the context of time-dependently curved spacetimes~\cite{NeuenhahnMarquardt15} before concluding with an outlook in section~\ref{sec:conclusion}.
\section{Transverse field Ising (TFI) chain}
\label{sec:TFI}
The Hamiltonian of the $N$-site TFI chain is given by
\begin{equation}
\label{eq:TFIM}
H = -J\sum_{i=1}^{N} \left( \sigma^x_i \sigma^x_{i+1} + g \sigma^z_i \right),
\end{equation}
where $\sigma^a$, $a=x,y,z$, are the Pauli matrices, $J>0$ sets the energy scale and periodic boundary conditions $\sigma^a_{N+1}=\sigma^a_1$ are imposed. The dimensionless parameter $g$ describes the coupling to an external, transverse magnetic field. In the thermodynamic limit, the TFI chain at zero temperature is a prototype system which exhibits a quantum phase transition~\cite{Sachdev99}. The transition occurs between the ferromagnetic (ordered) phase for $g<1$ and the paramagnetic (disordered) phase for $g>1$, with the critical point being $g_\mathrm{c}=1$.
The Hamiltonian \eqref{eq:TFIM} can be exactly diagonalised by transforming to a spinless representation~\cite{Lieb-61,Pfeuty70,Calabrese-12jsm1}. First, using the spin raising and lowering operators $\sigma^{\pm}_i = \frac{1}{2}\left( \sigma^x_i \pm \rmi \sigma^y_i\right)$ it is recast into
\begin{equation}
H=-J\sum_{i=1}^N \left( \sigma^+_{i} + \sigma^-_{i}\right)\left( \sigma^+_{i+1} + \sigma^-_{i+1}\right) - Jg\sum_{i=1}^N \left( 1- 2 \sigma^+_{i} \sigma^-_{i}\right).
\end{equation}
We can then transform the spin operators to fermions by means of a Jordan--Wigner transformation
\begin{equation}
c_i = \mathrm{exp}\bigg( \rmi \pi \sum_{j<i} \sigma^+_j \sigma^-_j \bigg) \sigma^-_i,\quad c_i^{\dagger}= \sigma^+_i \mathrm{exp}\bigg( \rmi \pi \sum_{j<i} \sigma^+_j \sigma^-_j \bigg),
\label{eq:Jordan-Wigner}
\end{equation}
where $c_i$ and $c_i^{\dagger}$ are spinless fermionic annihilation and creation operators at lattice site $i$. In terms of the Jordan--Wigner fermions the Hamiltonian acquires a block-diagonal structure $H=H_{\mathrm{e}} \oplus H_{\mathrm{o}}$, where
\begin{equation}
H_{\mathrm{e/o}} = -J\sum_{i=1}^{N} \big( c_i^{\dagger} -c_i^{\phantom{\dagger}} \big)\big( c_{i+1}^{\dagger} +c_{i+1}^{\phantom{\dagger}} \big) - Jg \sum_{i=1}^{N} \big( 1- 2 c_i^{\dagger} c_i^{\phantom{\dagger}} \big).
\end{equation}
The reduced Hamiltonian $H_{\mathrm{e/o}}$ acts only on the subspace of the Fock space with even/odd number of fermions. In the sector with an even fermion number, the so-called Neveu--Schwarz (NS) sector, the fact that $\mathrm{exp}(\rmi \pi \sum_{i=1}^{N}c_i^{\dagger} c_i^{\phantom{\dagger}})=1$ implies that the fermions have to satisfy antiperiodic boundary conditions $c_{N+1}=-c_1$. Similarly, in the sector with odd fermion number, usually referred to as Ramond (R) sector, the relation $\mathrm{exp}(\rmi \pi \sum_{i=1}^{N}c_i^{\dagger} c_i^{\phantom{\dagger}})=-1$ implies periodic boundary conditions $c_{N+1}=c_1$.
We perform a discrete Fourier transformation to momentum space as ${c_k = \frac{1}{\sqrt{N}} \sum_{i=1}^{N} c_i^{\phantom{\dagger}} \rme^{\rmi k i}}$, where we have set the lattice spacing to unity. The Hamiltonian becomes
\begin{equation}
H_{\mathrm{e/o}} = - JgN + 2J \sum_{k} (g-\cos{k}) c_k^{\dagger} c_k^{\phantom{\dagger}} - \rmi J \sum_k \sin{k} \big( c_{-k}^{\dagger} c_k^{\dagger} + c_{-k}^{\phantom{\dagger}} c_k^{\phantom{\dagger}} \big),
\end{equation}
where the sum over momenta $k$ implies the sum over $n = -\frac{N}{2}, \dots, \frac{N}{2}-1$, and the momenta are quantised as $k_n^{\mathrm{NS}} = \frac{2 \pi}{N} (n+\frac{1}{2})$ in the even and $k_n^{\mathrm{R}}=\frac{2\pi n}{N}$ in the odd sector.
In the even sector, the Hamiltonian can finally be diagonalised by applying the Bogoliubov transformation
\begin{equation}
\eta_k^{\phantom{\dagger}} = u_k^{\phantom{\dagger}} c_k^{\phantom{\dagger}} - \rmi v_k^{\phantom{\dagger}} c_{-k}^{\dagger},\quad \eta_k^{\dagger} = u_k^{\phantom{\dagger}} c_k^{\dagger} + \rmi v_k^{\phantom{\dagger}} c_{-k}^{\phantom{\dagger}}.
\label{eq:Bogoliubov transformation}
\end{equation}
We choose the transformation such that the Bogoliubov coefficients $u_k$ and $v_k$ are real. From the requirement that the Bogoliubov operators satisfy the usual fermionic anticommutation relations, we obtain $u_k^2+v_k^2=1$ as well as $u_k=u_{-k}$ and $v_k=-v_{-k}$. We can therefore parametrise the Bogoliubov coefficients as $u_k= \cos{\frac{\theta_k}{2}}$ and $v_k= \sin{\frac{\theta_k}{2}}$. Requiring that the off-diagonal terms of the Hamiltonian vanish in the new representation yields the condition
\begin{equation}
\label{eq:theta}
\rme^{\rmi \theta_k}=\frac{g-\rme^{\rmi k}}{\sqrt{1+g^2-2g\cos{k}}}.
\end{equation}
The Hamiltonian is then diagonalised as $H_\mathrm{e} = \sum_k \varepsilon_k \left( \eta_k^{\dagger} \eta_k^{\phantom{\dagger}} - \frac{1}{2}\right)$, where the single particle dispersion relation is $\varepsilon_k=2J\sqrt{1+g^2-2g\cos{k}}$. The excitation gap is thus given by $\Delta=\varepsilon_{k=0}=2J|1-g|$.
In the odd sector, the diagonalisation proceeds similarly using the Bogoliubov transformation \eqref{eq:Bogoliubov transformation}. Additional care has to be taken for the momenta $k_0=0$ and $k_{-N/2}=-\pi$, which do not have partners with $-k$. The resulting Hamiltonian is $H_\mathrm{o} = \sum_{k \neq 0} \varepsilon_k \left( \eta_k^{\dagger} \eta_k^{\phantom{\dagger}} - \frac{1}{2}\right)-2J(1-g)\left( \eta_0 ^{\dagger} \eta_0 ^{\phantom{\dagger}} - \frac{1}{2}\right)$.
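Both the dispersion relation and the parametrisation of the Bogoliubov angle are easily checked numerically. The following minimal Python sketch (with illustrative values $J=1$, $g=0.5$, not tied to any result in the text) verifies the gap formula $\Delta=2J|1-g|$ and the normalisation $u_k^2+v_k^2=1$:

```python
import numpy as np

J, g = 1.0, 0.5                        # couplings (illustrative values)
k = np.linspace(-np.pi, np.pi, 2001)   # momenta in the Brillouin zone

# single-particle dispersion and excitation gap
eps = 2 * J * np.sqrt(1 + g**2 - 2 * g * np.cos(k))
gap = eps.min()
assert np.isclose(gap, 2 * J * abs(1 - g))    # Delta = eps(k=0) = 2J|1-g|

# Bogoliubov angle from exp(i theta_k) = (g - exp(ik)) / sqrt(1+g^2-2g cos k)
theta = np.angle(g - np.exp(1j * k))
u, v = np.cos(theta / 2), np.sin(theta / 2)
assert np.allclose(u**2 + v**2, 1.0)          # normalisation of (u_k, v_k)
```

At the critical point $g=1$ the same code gives a vanishing gap, consistent with the quantum phase transition.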
\section{Finite-time quantum quenches}
\label{sec:quench}
\subsection{General quench protocols}
In the setup we consider, the system is initially prepared in the ground state of the Hamiltonian $H(g_{\rmi})$, which is the vacuum state for the Bogoliubov fermions $\eta_k^{\phantom{\dagger}}$ and $\eta_k^{\dagger}$. Starting at $t=0$ the coupling to the transverse field is continuously changed over a finite quench time $\tau$ until it reaches its final value $g_{\mathrm{f}}$. Following the quench, ie, for times $t>\tau$, the system evolves according to the Hamiltonian $H(g_{\mathrm{f}})$. In other words, we consider the time-dependent system
\begin{equation}
\label{eq:TFIMtime}
H(t) = -J\sum_{i=1}^{N} \left( \sigma^x_i \sigma^x_{i+1} + g(t) \sigma^z_i \right),
\end{equation}
with the continuous function $g(t)$ taking the limiting values
\begin{equation}
g(t<0)=g_\mathrm{i},\quad g(t>\tau)=g_\mathrm{f}.
\end{equation}
Some examples of protocols $g(t)$ are shown in figure~\ref{fig:protocols and gap change rate}.
\begin{figure}[t]
\centering
\includegraphics[width=.4\linewidth]{figure1a.pdf}
\quad
\includegraphics[width=.4\linewidth]{figure1b.pdf}
\caption{Left: Sketch of different quench protocols for $g_{\rmi}=0$ to $g_{\mathrm{f}}=\frac{2}{3}$. Right: Corresponding rate of change of the gap for the different protocols.}
\label{fig:protocols and gap change rate}
\end{figure}
We are interested in the dynamical behaviour of physical observables, ie, in calculating the expectation values of time-evolved operators taken with respect to the initial ground state. Unlike in the sudden-quench case where the pre- and post-quench Bogoliubov fermions are directly related~\cite{Sengupta-04}, for general protocols one has different instantaneous Bogoliubov fermions and Bogoliubov coefficients at each time, which is not practical. Instead, we assume the following ansatz for the time evolution of the Jordan--Wigner fermions~\cite{Dziarmaga05}
\begin{equation}
\label{eq:time dependent JW}
c_k (t) = u_k(t) \eta_k + \rmi v_k(t) \eta_{-k}^{\dagger},
\end{equation}
ie, we keep the Bogoliubov fermions $\eta_k$ which diagonalise the initial Hamiltonian and cast the temporal dependence into the functions $u_k(t)$ and $v_k(t)$. Making use of the Heisenberg equations of motion for the operators $c_k^{\phantom{\dagger}}(t)$ and $c_k^{\dagger}(t)$ we obtain
\begin{equation}
\rmi \frac{\rmd}{\rmd t}\pmatrix{u_k(t) \cr v^{\ast}_{-k}(t)} = \pmatrix{A_k(t) & B_k\cr B_k & -A_k(t)} \pmatrix{u_k(t) \cr v^{\ast}_{-k}(t)},
\label{eq:uvDGL}
\end{equation}
with $A_k(t)=2J\big[g(t)-\cos{k}\big]$, $B_k=2J \sin{k}$, and the asterisk $\ast$ denoting complex conjugation. According to \eqref{eq:Bogoliubov transformation} the initial conditions read
\begin{equation}
u_k(t=0)=\cos\frac{\theta_k^\mathrm{i}}{2},\quad v_k(t=0)=\sin\frac{\theta_k^\mathrm{i}}{2},
\end{equation}
with the angle $\theta_k^\mathrm{i}$ defined by \eqref{eq:theta} with the initial value $g=g_\mathrm{i}$. The equations \eqref{eq:uvDGL} can also be decoupled as
\begin{equation}
\label{eq:during quench u and v}
\frac{\partial^2}{\partial t^2} y_k^{\phantom{\dagger}}(t) + \left(A_k^{\phantom{\dagger}}(t)^2+B_k^2 \pm \rmi \frac{\partial}{\partial t} A_k^{\phantom{\dagger}}(t)\right)y_k^{\phantom{\dagger}}(t) = 0,
\end{equation}
where the upper and lower sign refers to the equation for $y_k^{\phantom{\ast}}(t)=u_{k}^{\phantom{\ast}}(t)$ and $y_k^{\phantom{\ast}}(t)=v_{-k}^{\ast}(t)$ respectively. During the quench, the solutions to these equations depend on the precise form of $g(t)$ and in some cases allow for explicit analytic solutions. We will address several of these protocols in section \ref{sec:results protocols}.
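When no closed-form solution is available, the coupled equations \eqref{eq:uvDGL} can always be integrated numerically. The sketch below (illustrative parameter values, not taken from the text) integrates a single mode for a linear ramp and checks that $|u_k|^2+|v_{-k}|^2=1$ is preserved, as guaranteed by the hermiticity of the $2\times 2$ matrix:

```python
import numpy as np
from scipy.integrate import solve_ivp

J, gi, gf, tau = 1.0, 0.2, 1.5, 4.0    # illustrative quench parameters
k = 0.7                                # a single momentum mode

def g(t):                              # linear protocol g(t) = gi + (gf-gi) t/tau
    return gi + (gf - gi) * t / tau

B = 2 * J * np.sin(k)

def rhs(t, y):
    # y = (Re u, Im u, Re v*, Im v*);  i dY/dt = [[A,B],[B,-A]] Y
    u = y[0] + 1j * y[1]
    w = y[2] + 1j * y[3]               # w = v_{-k}^*
    A = 2 * J * (g(t) - np.cos(k))
    du = -1j * (A * u + B * w)
    dw = -1j * (B * u - A * w)
    return [du.real, du.imag, dw.real, dw.imag]

# initial condition: ground state of H(gi), u_k = cos(theta_k/2), v_{-k} = sin(theta_{-k}/2)
theta_p = np.angle(gi - np.exp(1j * k))
theta_m = np.angle(gi - np.exp(-1j * k))
y0 = [np.cos(theta_p / 2), 0.0, np.sin(theta_m / 2), 0.0]

sol = solve_ivp(rhs, (0.0, tau), y0, rtol=1e-10, atol=1e-12)
norm = sum(sol.y[i, -1]**2 for i in range(4))
assert abs(norm - 1.0) < 1e-6          # norm conserved during the quench
```

The same routine applies to any protocol $g(t)$, so it also serves as an independent check of the exact solutions derived below.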
After the quench, the equations for the Bogoliubov coefficients simplify to
\begin{equation}
\frac{\partial^2}{\partial t^2} y_k^{\phantom{\dagger}}(t) + \omega_k^2\, y_k^{\phantom{\dagger}}(t) = 0,
\end{equation}
with the solution
\begin{equation}
\label{eq:post-quench u and v}
y_k(t) = c_3^y \rme^{\rmi \omega_k t} + c_4^y \rme^{-\rmi \omega_k t},\quad \omega_k = \sqrt{A_k^{\phantom{\dagger}}(\tau)^2+B_k^2}=\varepsilon_k(g_{\mathrm{f}}),
\end{equation}
where we have defined the single-mode energies $\omega_k$ after the quench. The constants $c_3^y$ and $c_4^y$ are determined by the continuity of the solutions at $t=\tau$ with the results
\begin{eqnarray}
c_3^u = \frac{\rme^{-\rmi \omega_k \tau}}{2 \omega_k} \left[ \Big(\omega_k^{\phantom{\dagger}}-A_k^{\phantom{\dagger}}(\tau)\Big) u_k^{\phantom{\ast}}(\tau) - B_k^{\phantom{\dagger}} v_{-k}^{\ast}(\tau)\right],\\
\label{eq: u post-quench constants}
c_4^u = \frac{\rme^{\rmi \omega_k \tau}}{2 \omega_k} \left[ \Big(\omega_k^{\phantom{\dagger}}+A_k^{\phantom{\dagger}}(\tau)\Big) u_k^{\phantom{\ast}}(\tau) + B_k^{\phantom{\dagger}} v_{-k}^{\ast}(\tau)\right],
\end{eqnarray}
for $u_k(t)$, and
\begin{eqnarray}
\label{eq: v post-quench constants}
c_3^v = \frac{\rme^{-\rmi \omega_k \tau}}{2 \omega_k} \left[ \Big(\omega_k^{\phantom{\dagger}}+A_k^{\phantom{\dagger}}(\tau)\Big) v_{-k}^{\ast}(\tau) - B_k^{\phantom{\dagger}} u_k^{\phantom{\ast}}(\tau)\right], \\
c_4^v = \frac{\rme^{\rmi \omega_k \tau}}{2 \omega_k} \left[ \Big(\omega_k^{\phantom{\dagger}}-A_k^{\phantom{\dagger}}(\tau)\Big) v_{-k}^{\ast}(\tau) + B_k^{\phantom{\dagger}} u_k^{\phantom{\dagger}}(\tau)\right],
\end{eqnarray}
for $v_{-k}^{\ast}(t)$. We stress that these constants depend on the momenta $k$ and the quench duration $\tau$, but we use the shorthand notation $c_n^y=c_{n,k}^y(\tau)$ for brevity.
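The matching constants can be verified numerically: for arbitrary (illustrative) values of $u_k(\tau)$ and $v_{-k}^{\ast}(\tau)$, the post-quench solutions built from them must be continuous at $t=\tau$ in both value and first derivative, the latter fixed by \eqref{eq:uvDGL}. A minimal sketch:

```python
import numpy as np

# illustrative post-quench parameters and boundary data at t = tau
J, gf, tau, k = 1.0, 1.5, 4.0, 0.7
A = 2 * J * (gf - np.cos(k))           # A_k(tau)
B = 2 * J * np.sin(k)                  # B_k
w = np.sqrt(A**2 + B**2)               # omega_k
u = 0.6 * np.exp(0.3j)                 # u_k(tau)       (illustrative)
v = 0.8 * np.exp(-0.1j)                # v_{-k}^*(tau)  (illustrative)

c3u = np.exp(-1j*w*tau) / (2*w) * ((w - A) * u - B * v)
c4u = np.exp( 1j*w*tau) / (2*w) * ((w + A) * u + B * v)
c3v = np.exp(-1j*w*tau) / (2*w) * ((w + A) * v - B * u)
c4v = np.exp( 1j*w*tau) / (2*w) * ((w - A) * v + B * u)

# continuity of the values at t = tau
assert np.isclose(c3u*np.exp(1j*w*tau) + c4u*np.exp(-1j*w*tau), u)
assert np.isclose(c3v*np.exp(1j*w*tau) + c4v*np.exp(-1j*w*tau), v)
# continuity of the derivatives, which must match the equations of motion
assert np.isclose(1j*w*(c3u*np.exp(1j*w*tau) - c4u*np.exp(-1j*w*tau)),
                  -1j*(A*u + B*v))
assert np.isclose(1j*w*(c3v*np.exp(1j*w*tau) - c4v*np.exp(-1j*w*tau)),
                  -1j*(B*u - A*v))
```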
\subsection{Transverse magnetisation and correlation functions}
In order to probe the system, we aim at calculating local observables during and after a time-dependent quench. The observables we have in mind are the transverse magnetisation and two-point functions in the transverse and longitudinal direction. Here we briefly sketch how to express these observables in terms of the time-dependent Bogoliubov coefficients $u_k(t)$ and $v_k(t)$.
Firstly, we write the correlators in terms of Jordan--Wigner fermions in \eqref{eq:Jordan-Wigner} and define auxiliary operators $a_i^{\phantom{\dagger}}=c_i^{\dagger}+c_i^{\phantom{\dagger}}$ and $b_i^{\phantom{\dagger}}=c_i^{\dagger} - c_i^{\phantom{\dagger}}$ to obtain
\begin{eqnarray}
M^z & = & \braket{ \sigma^z_i} = \braket{b_i a_i}, \\
\rho^z_n & = & \braket{\sigma^z_i\sigma^z_{i+n}} = \braket{a_ib_ia_{i+n}b_{i+n}}, \\
\rho^x_n & = & \braket{\sigma^x_i\sigma^x_{i+n}} = \braket{b_ia_{i+1}b_{i+1} \dots a_{i+n-1}b_{i+n-1}a_{i+n}}.
\end{eqnarray}
Here we have used $1-2c_i^{\dagger} c_i^{\phantom{\dagger}}=a_i b_i=-b_i a_i$ and suppressed the time dependence of the operators for concise notation. Furthermore, due to translational invariance the observables do not depend on the lattice site. Secondly, we define the contractions, vacuum expectation values of pairs of operators, as $S_{ij}=\braket{b_ib_j}$, $Q_{ij}=\braket{a_ia_j}$ and $G_{ij}=\braket{b_ia_j}=-\braket{a_jb_i}$. The transverse magnetisation is then simply given by
\begin{equation}
\label{eq:M^z}
M^z=-G_{ii}.
\end{equation}
Employing the Wick theorem, we can express the two-point functions in terms of sums of products of all possible contractions. This can be conveniently written in the form of Pfaffians~\cite{BarouchMcCoy71}
\begin{equation}
\fl \label{eq:rho^z}
\rho^z_n =
\left. \matrix{
\big| S_{i,i+n} & G_{i,i} & G_{i,i+n} \cr
& G_{i+n,i} & G_{i+n,i+n} \cr
& & Q_{i,i+n}} \right|,
\end{equation}
\begin{equation}
\fl\label{eq:rho^x}
\rho^x_n =
\left. \setlength\arraycolsep{4pt}
\begin{array}{cccccccc}
\big| S_{i,i+1} & S_{i,i+2} & \dots & S_{i,i+n-1} & G_{i,i+1} & G_{i,i+2} & \dots & G_{i,i+n} \cr
& S_{i+1,i+2} & \dots & S_{i+1,i+n-1} & G_{i+1,i+1} & G_{i+1,i+2} & \dots & G_{i+1,i+n} \cr
& & & \vdots & \vdots & \vdots & & \vdots \cr
& & & S_{i+n-2,i+n-1} & G_{i+n-2,i+1} & G_{i+n-2,i+2} & \dots & G_{i+n-2,i+n} \cr
& & & & G_{i+n-1,i+1} & G_{i+n-1,i+2} & \dots & G_{i+n-1,i+n} \cr
& & & & & Q_{i+1,i+2} & \dots & Q_{i+1,i+n} \cr
& & & & & & & \vdots \cr
& & & & & & & Q_{i+n-1,i+n} \cr
\end{array} \right|.
\end{equation}
In this triangular notation, Pfaffians can be viewed as generalised determinants which can be expanded along ``rows'' and ``columns'' in terms of minor Pfaffians~\cite{AmicoOsterloh04}. They can also be evaluated from the corresponding antisymmetric matrices $A$ as ${\mathrm{Pf}(A)}^2=\det{A}$. Here the matrix $A$ has vanishing elements on the main diagonal, its upper triangular part is the Pfaffian as written in \eqref{eq:rho^z} and \eqref{eq:rho^x}, and the lower triangular part is the negative transpose of the Pfaffian in question. Thirdly, we introduce auxiliary quadratic correlators $\alpha_{ij}$ and $\beta_{ij}$, and express their time-dependence in terms of Bogoliubov coefficients by using \eqref{eq:time dependent JW}
\begin{eqnarray}
\label{eq:auxiliary correlators alpha}
\alpha_{ij}(t) = \braket{c_i^{\phantom{\dagger}}(t) c_j^{\dagger}(t)} = \frac{1}{2 \pi} \int_{-\pi}^{\pi} \rmd k \, \rme^{-\rmi k(i-j)} |u_k(t)|^2, \\
\label{eq:auxiliary correlators beta}
\beta_{ij}(t) = \braket{c_i^{\phantom{\dagger}}(t) c_j^{\phantom{\dagger}}(t)} = \frac{\rmi}{2 \pi} \int_{-\pi}^{\pi} \rmd k \, \rme^{-\rmi k(i-j)} u_k(t) v_{-k}(t).
\end{eqnarray}
Using these functions, the various contractions become
\begin{eqnarray}
S_{ij}(t) = 2 \rmi \, \mathrm{Im}(\beta_{ij}(t)) - \delta_{ij}, \\
Q_{ij}(t) = 2 \rmi \, \mathrm{Im}(\beta_{ij}(t)) + \delta_{ij}, \\
G_{ij}(t) = 2 \, \mathrm{Re}(\beta_{ij}(t)) - 2 \alpha_{ij}(t) + \delta_{ij}.
\end{eqnarray}
These functions are the entries of \eqref{eq:M^z}--\eqref{eq:rho^x}, which give the general expressions for the time dependence of the correlation functions. An additional simplification is the fact that we may use the translational invariance of the system to write $S_{ij}=S(j-i)$, $Q_{ij}=Q(j-i)$ and $G_{ij}=G(j-i)$. It is then evident that the corresponding matrices in \eqref{eq:rho^z} and \eqref{eq:rho^x} are block-Toeplitz matrices, with identical entries along each descending diagonal within a block, which reduces the computational effort.
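In practice the Pfaffians can be evaluated by recursive expansion along the first row. The following sketch is a minimal illustration (not a production algorithm, which would exploit the block-Toeplitz structure); it checks the identity $\mathrm{Pf}(A)^2=\det{A}$ on a random antisymmetric matrix:

```python
import numpy as np

def pfaffian(A):
    """Pfaffian of an antisymmetric matrix by recursive expansion
    along the first row; adequate for small matrices."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2:                           # odd dimension: Pfaffian vanishes
        return 0.0
    total = 0.0
    for j in range(1, n):
        # delete rows/columns 0 and j, keep the alternating sign
        idx = [i for i in range(n) if i not in (0, j)]
        total += (-1)**(j + 1) * A[0, j] * pfaffian(A[np.ix_(idx, idx)])
    return total

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M - M.T                             # antisymmetric test matrix
pf = pfaffian(A)
assert np.isclose(pf**2, np.linalg.det(A))   # Pf(A)^2 = det(A)
```

Computing $|\mathrm{Pf}(A)|$ as $\sqrt{\det A}$ is cheaper but loses the sign, which the recursive expansion retains.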
The behaviour of these functions depends on the form and duration of the quench and will be discussed in subsequent sections. Some general features will be treated analytically.
\subsection{Generalised Gibbs ensemble}
It is by now well established that at very late times after a sudden quench a stationary state is formed which is well described by a generalised Gibbs ensemble (GGE)~\cite{Rigol-07,BarthelSchollwoeck08,Calabrese-12jsm2,FagottiEssler13,VidmarRigol16}. This ensemble contains the infinitely many integrals of motion in the TFI chain and thus retains more information about the initial state than just its energy. Considering finite-time quenches, a similar situation appears where the role of the initial state is taken by the time-evolved state at $t=\tau$. More precisely, since the time evolution for times $t>\tau$ is governed by the time-independent Hamiltonian $H(\tau)=H(g_\mathrm{f})$, we can construct the GGE
\begin{equation}
\rho_{\mathrm{GGE}} = \frac{1}{Z}\,\mathrm{exp}\left(-\sum_k \lambda_k n_k^{\mathrm{f}} \right),
\end{equation}
where $n_k^{\mathrm{f}} = {\eta_k^{\mathrm{f}}}^{\dagger} {\eta_k^{\mathrm{f}}}$ are the post-quench mode occupations with ${\eta^{\mathrm{f}}_k}^{\dagger}$ and $\eta^{\mathrm{f}}_k$ being the Bogoliubov fermions which diagonalise $H(g_{\mathrm{f}})$. We note in passing that the mode occupations are non-local in space, but that they are related to local integrals of motion via a linear transformation~\cite{FagottiEssler13} and can thus be used in the construction of the GGE. The Lagrange multipliers $\lambda_k$ are fixed by
\begin{equation}
\braket{{\eta_k^{\mathrm{f}}}^{\dagger} {\eta_k^{\mathrm{f}}}} = \Tr\left(\rho_{\mathrm{GGE}}\,{\eta_k^{\mathrm{f}}}^{\dagger} {\eta_k^{\mathrm{f}}}\right)
\end{equation}
and $Z=\Tr\big[\mathrm{exp}(-\sum_k \lambda_k n_k^\mathrm{f})\big]$ is the normalisation. Explicitly, by first reverting to Jordan--Wigner fermions and then to the initial Bogoliubov fermions $\eta_k$, we find
\begin{equation}
\label{eq:post-quench occupation number}
\eqalign{ \braket{{\eta_k^{\mathrm{f}}}^{\dagger} (\tau) \eta_k^{\mathrm{f}}(\tau)} = & (u_k^{\mathrm{f}})^2 |v_k^{\phantom{\dagger}}(\tau)|^2 + (v_{k}^{\mathrm{f}})^2 |u_{k}^{\phantom{\dagger}}(\tau)|^2 \cr & + u_k^{\mathrm{f}} v_{-k}^{\mathrm{f}} \left( u_{-k}^{\phantom{\dagger}}(\tau) v_k^{\phantom{\dagger}}(\tau) + u_{-k}^{\ast} (\tau) v_k^{\ast}(\tau)\right),}
\end{equation}
where $u_k^{\mathrm{f}}$ and $v_k^{\mathrm{f}}$ are the Bogoliubov coefficients corresponding to Bogoliubov fermions $\eta_k^{\mathrm{f}}$ as defined in \eqref{eq:Bogoliubov transformation}. This allows us to fix the Lagrange multipliers by equating \eqref{eq:post-quench occupation number} with
\begin{equation}
\Tr\left(\rho_{\mathrm{GGE}}\,{\eta_k^{\mathrm{f}}}^{\dagger} \eta_k^{\mathrm{f}} \right) = \frac{1}{1+\rme^{\lambda_k}}.
\end{equation}
We see that the Lagrange multipliers, and consequently the expectation values in the stationary state, depend on the duration $\tau$ and form $g(t)$ of the quench through the Bogoliubov coefficients $u_k^{\phantom{\ast}}(\tau)$ and $v_{-k}^{\ast}(\tau)$.
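Fixing the Lagrange multipliers amounts to inverting a Fermi function, $\lambda_k=\ln(1/n_k-1)$ with $n_k=\braket{{\eta_k^{\mathrm{f}}}^{\dagger}\eta_k^{\mathrm{f}}}$. A minimal sketch with illustrative occupations:

```python
import numpy as np

# illustrative post-quench mode occupations n_k in (0, 1)
n_k = np.array([0.05, 0.2, 0.45])

# invert n_k = 1 / (1 + exp(lambda_k)) for the Lagrange multipliers
lam = np.log(1.0 / n_k - 1.0)

# round trip: the GGE expectation value reproduces the occupations
assert np.allclose(1.0 / (1.0 + np.exp(lam)), n_k)
```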
To show the validity of the GGE, we prove the equivalence of the stationary values and the GGE values of the quadratic correlators introduced in \eqref{eq:auxiliary correlators alpha} and \eqref{eq:auxiliary correlators beta}. Putting \eqref{eq:post-quench u and v} into \eqref{eq:auxiliary correlators alpha} and \eqref{eq:auxiliary correlators beta}, and taking the long-time average, we obtain
\begin{eqnarray}
\label{eq:alpha GGE}
\fl \eqalign{\alpha_{ij}^{\mathrm{s}} = \frac{1}{4 \pi} \int_{-\pi}^{\pi} \rmd k \, \rme^{-\rmi k(i-j)} & \left[ 1+\cos^2{\theta_k^{\mathrm{f}}}\left(|u_k^{\phantom{\ast}}(\tau)|^2 - |v_{-k}^{\phantom{\ast}}(\tau)|^2\right) \right. \cr & \left. -\cos{\theta_k^{\mathrm{f}}}\sin{\theta_k^{\mathrm{f}}}\left(u_k^{\phantom{\ast}}(\tau) v_{-k}^{\phantom{\ast}}(\tau)+u_k^{\ast}(\tau) v_{-k}^{\ast}(\tau)\right) \right],} \\
\label{eq:beta GGE}
\fl \eqalign{\beta_{ij}^{\mathrm{s}} = \frac{\rmi}{4 \pi} \int_{-\pi}^{\pi} \rmd k \, \rme^{-\rmi k(i-j)} & \left[ \sin^2{\theta_k^{\mathrm{f}}}\left(u_k^{\phantom{\ast}}(\tau) v_{-k}^{\phantom{\ast}}(\tau)+u_k^{\ast}(\tau) v_{-k}^{\ast}(\tau)\right) \right. \cr & \left. - \cos{\theta_k^{\mathrm{f}}}\sin{\theta_k^{\mathrm{f}}}\left( |u_k^{\phantom{\ast}}(\tau)|^2 - |v_{-k}^{\phantom{\ast}}(\tau)|^2\right) \right].}
\end{eqnarray}
The same result is obtained for the GGE expectation values $\alpha_{ij}^{\mathrm{GGE}}=\Tr\left(\rho_{\mathrm{GGE}}c_ic_j^\dagger\right)$ and $\beta_{ij}^{\mathrm{GGE}}=\Tr\left(\rho_{\mathrm{GGE}}c_ic_j\right)$.
\section{Results for different quench protocols}
\label{sec:results protocols}
In this section we collect some results for the explicit quench protocols sketched in figure~\ref{fig:protocols and gap change rate}, namely the linear, exponential, cosine, sine and polynomial quenches. We also define some piecewise differentiable protocols, which are later used as a proof of principle but not studied extensively.
\subsection{Linear quench}
We start with the simplest finite-time quench protocol, which is the linear quench of the form (see the blue line in figure~\ref{fig:protocols and gap change rate})
\begin{equation}
g(t)=g_{\rmi} + v_g t,
\end{equation}
where $v_g=(g_{\mathrm{f}}-g_{\rmi})/\tau$ is the rate of change in the transverse field. For the linear quench the differential equations in \eqref{eq:during quench u and v} become
\begin{equation}
\partial_t^2 y_k(t) + \left[at^2+b_k t+c_k\right] y_k(t) = 0,
\end{equation}
with $a=4v_g^2$, $b_k=8 v_g(g_{\rmi}-\cos{k})$, $c_k=4(1+g_{\rmi}^2-2g_{\rmi}\cos{k})\pm2\rmi v_g$, and the upper and lower signs referring to the equations for $y_k(t)=u_{k}^{\phantom{\ast}}(t)$ and $y_k(t)=v_{-k}^{\ast}(t)$ respectively. This equation can be cast into the form of a Weber differential equation~\cite{AbramowitzStegun65} whose solutions in terms of parabolic cylinder functions are
\begin{equation}
\label{eq:Bogoliubov coefficients linear}
y_k(t)=c_1^y D_{\nu_k}(\tilde{t}(t)) + c_2^y D_{-\nu_k-1}(\rmi \tilde{t}(t)),
\end{equation}
where $\nu_k=(-4\rmi a c_k^{\phantom{2}} + \rmi b_k^2 - 4a^{3/2})/(8a^{3/2})$ and $\tilde{t}(t) = \rme^{\rmi \pi/4}(2^{1/2} a^{1/4} t + 2^{-1/2} a^{-3/4} b_k)$. The constants $c_1^y$ and $c_2^y$ are set by the initial conditions
\begin{eqnarray}
u_k(t)|_{t=0}&=&u_k^{\rmi},\\
v_{-k}^\ast (t)|_{t=0}&=&v_{-k}^{\rmi},\\
\frac{\mathrm{d}}{\mathrm{d} t} u_k(t) |_{t=0} &=& -\rmi A_k(0) u_k^\rmi -\rmi B_k v_{-k}^\rmi,\\
\frac{\mathrm{d}}{\mathrm{d} t} v_{-k}^\ast(t) |_{t=0} &=& - \rmi B_k u_k^\rmi + \rmi A_k(0) v_{-k}^\rmi;
\end{eqnarray}
explicitly we find
\begin{eqnarray}
c^u_1 &= &
\frac{-D_{-\nu -1}\left(\rmi d_2\right) \left\{ u_{k}^\rmi \left[2 A_k(0)+\rmi d_1 d_2\right]+2 v_{-k}^\rmi B_k \right\}+2 d_1 u_k^\rmi D_{-\nu }\left(\rmi d_2\right)}{2 d_1 \left\{ D_{-\nu }\left(\rmi d_2\right) D_{\nu }\left(d_2\right)+\rmi D_{-\nu -1}\left(\rmi d_2\right) \left[D_{\nu +1}\left(d_2\right)-d_2 D_{\nu }\left(d_2\right)\right]\right\}},\nonumber\\
c^u_2 &=&
\frac{D_{\nu }\left(d_2\right) \left\{ u_k^\rmi \left[2 A_k(0)-\rmi d_1 d_2\right]+2 v_{-k}^\rmi B_k \right\}+2 \rmi d_1 u_k^\rmi D_{\nu+1}\left(d_2\right)}{2 d_1 \left\{ D_{-\nu }\left(\rmi d_2\right) D_{\nu }\left(d_2\right)+\rmi D_{-\nu -1}\left(\rmi d_2\right) \left[D_{\nu +1}\left(d_2\right)-d_2 D_{\nu }\left(d_2\right)\right]\right\}},
\end{eqnarray}
for $u_{k}^{\phantom{\ast}}(t)$, and
\begin{eqnarray}
c^v_1 &= &
\frac{D_{-\nu -1}(\rmi d_2) \left\{ v_{-k}^\rmi \left[ 2A_k(0)-\rmi d_1 d_2\right] -2 u_k^\rmi B_k\right\}
+2 d_1 v_{-k}^\rmi D_{-\nu }(\rmi d_2)}{2 d_1 \left\{ D_{-\nu }(\rmi d_2) D_{\nu }(d_2)- \rmi D_{-\nu -1}(\mathrm{i} d_2) \left[d_2 D_{\nu }(d_2)-D_{\nu +1}(d_2) \right] \right\}},\nonumber\\
c^v_2 &=&
\frac{D_{\nu } \left(d_2\right) \left\{v_{-k}^\rmi \left[-2 A_k(0)-\rmi d_1 d_2\right]+2 u_k^\rmi B_k\right\}+2 \rmi d_1 v_{-k}^\rmi D_{\nu+1}\left(d_2\right)}{2 d_1\left\{ D_{-\nu }\left(\rmi d_2\right) D_{\nu }\left(d_2\right)- \rmi D_{-\nu -1}\left(\rmi d_2\right)\left[d_2 D_{\nu }\left(d_2\right)-D_{\nu +1}\left(d_2\right)\right]\right\}},
\end{eqnarray}
for $v_{-k}^{\ast}(t)$. In both cases, for brevity, we use $\tilde{t}(t) = d_1 t + d_2$ with $d_1=\rme^{\rmi \pi/4} 2^{1/2} a^{1/4} $ and $d_2=\rme^{\rmi \pi/4} 2^{-1/2} a^{-3/4} b_k$, and we have suppressed the subindex $k$ in $\nu_k$ as well as $d_2$.
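The solution \eqref{eq:Bogoliubov coefficients linear} can be cross-checked numerically: $D_{\nu_k}(\tilde{t}(t))$ should satisfy the Weber-type equation above. A minimal sketch using mpmath (the quench parameters and the mode $k$ are illustrative choices, not taken from the figures):

```python
# Numerical cross-check: D_nu(t~(t)) solves y'' + (a t^2 + b_k t + c_k) y = 0.
import mpmath as mp

mp.mp.dps = 30                       # high precision for the finite differences

# illustrative parameters: linear quench g: 0 -> 2/3 over tau = 1, mode k = pi/3
gi, gf, tau = mp.mpf(0), mp.mpf(2)/3, mp.mpf(1)
k = mp.pi/3
vg = (gf - gi)/tau
a = 4*vg**2
b = 8*vg*(gi - mp.cos(k))
c = 4*(1 + gi**2 - 2*gi*mp.cos(k)) + 2j*vg        # upper sign: u-equation

nu = (-4j*a*c + 1j*b**2 - 4*a**mp.mpf('1.5'))/(8*a**mp.mpf('1.5'))
d1 = mp.exp(1j*mp.pi/4)*mp.sqrt(2)*a**mp.mpf('0.25')
d2 = mp.exp(1j*mp.pi/4)*b/(mp.sqrt(2)*a**mp.mpf('0.75'))

y = lambda t: mp.pcfd(nu, d1*t + d2)              # one of the two solutions

t0, h = mp.mpf('0.4'), mp.mpf('1e-6')
ypp = (y(t0 + h) - 2*y(t0) + y(t0 - h))/h**2      # numerical second derivative
res = abs(ypp + (a*t0**2 + b*t0 + c)*y(t0))
print(res)                                         # ODE residual: tiny
```

The same check with `mp.pcfd(-nu - 1, 1j*(d1*t + d2))` verifies the second, linearly independent solution.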
\begin{figure}[t]
\centering
\includegraphics[width=.4\textwidth]{figure2.pdf}
\caption{Comparison of the mode occupations $n_k^\mathrm{f}$ after different finite-time quenches with the sudden-quench result $n_k^\mathrm{s}$. We observe that $n_k^\mathrm{f}\to n_k^\mathrm{s}$ for $\tau\to 0$ irrespective of the quench protocol.}
\label{fig:figure1}
\end{figure}
In order to investigate the limit of sudden quenches we calculated the post-quench mode occupation $n_k^\mathrm{f}$ given in \eqref{eq:post-quench occupation number} after a linear quench and compared it to the post-quench mode occupation after a sudden quench $n_k^{\mathrm{s}}$. The latter is given by~\cite{Sengupta-04} $n_k^{\mathrm{s}}=\frac{1}{2}\left[1-\cos(\theta_k^{\mathrm{f}}-\theta_k^{\rmi})\right]$. As is shown in \fref{fig:figure1} the difference between the two vanishes as the quench duration is decreased, ie, $\lim_{\tau\to 0}n_k^\mathrm{f}=n_k^\mathrm{s}$. As can be seen from the figure, this is true for the exponential, cosine and sine protocols as well.
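The sudden and adiabatic limits can also be checked by direct numerical integration of the mode dynamics. The sketch below is not taken from the paper: it assumes the standard TFI parametrisation $A_k(t)=2J(g(t)-\cos k)$, $B_k=2J\sin k$ with $J=1$ (consistent with $c_k=A_k^2+B_k^2$ above) and represents the Bogoliubov vacuum by one eigenvector branch of the corresponding $2\times2$ matrix; the branch choice does not affect the check.

```python
# Sketch: evolve (u, v*) under i d/dt psi = h(t) psi with
# h(t) = [[A_k(t), B_k], [B_k, -A_k(t)]] (standard TFI form assumed, J = 1)
# and compare the final mode occupation with the sudden-quench formula.
import numpy as np
from scipy.integrate import solve_ivp

def h_matrix(g, k):
    A, B = 2*(g - np.cos(k)), 2*np.sin(k)
    return np.array([[A, B], [B, -A]])

def mode_occupation(gi, gf, tau, k):
    """n_k^f after a linear quench g(t) = g_i + (g_f - g_i) t/tau."""
    g = lambda t: gi + (gf - gi)*t/tau
    rhs = lambda t, y: -1j*h_matrix(g(t), k) @ y
    psi0 = np.linalg.eigh(h_matrix(gi, k))[1][:, 0].astype(complex)  # vacuum branch
    sol = solve_ivp(rhs, (0.0, tau), psi0, rtol=1e-10, atol=1e-12)
    psif = np.linalg.eigh(h_matrix(gf, k))[1][:, 0]
    return 1.0 - abs(np.vdot(psif, sol.y[:, -1]))**2

def n_sudden(gi, gf, k):
    """Sudden-quench result n_k^s = [1 - cos(theta_k^f - theta_k^i)]/2."""
    th = lambda g: np.arctan2(2*np.sin(k), 2*(g - np.cos(k)))
    return 0.5*(1.0 - np.cos(th(gf) - th(gi)))

gi, gf, k = 1.5, 2.5, np.pi/2               # quench inside the paramagnetic phase
ns = n_sudden(gi, gf, k)
n_fast = mode_occupation(gi, gf, 1e-5, k)   # tau -> 0: approaches n_k^s
n_slow = mode_occupation(gi, gf, 50.0, k)   # large tau: adiabatic, nearly 0
print(ns, n_fast, n_slow)
```

The fast quench reproduces the sudden result while the slow quench leaves the mode essentially unoccupied, in line with the limits discussed above.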
\subsection{Exponential quench}
Another quench protocol which allows for an explicit analytical solution is the exponential quench of the form
\begin{equation}
g(t)=g_{\rmi}-1+\mathrm{exp}\left(\ln{(|g_{\mathrm{f}}-g_{\rmi}+1|)}\frac{t}{\tau}\right),
\end{equation}
which is shown as green line in figure~\ref{fig:protocols and gap change rate}. The differential equations in \eqref{eq:during quench u and v} in this case become
\begin{equation}
\partial^2_t y_k(t) + \left[a_k+b_k\rme^{f t}+c\rme^{2 f t}\right] y_k(t) = 0,
\end{equation}
where $a_k=4\left[g_{\rmi}^2+2(1+\cos{k})(1-g_{\rmi})\right]$, $b_k=8\left[g_{\rmi}-1-\cos{k}\pm\rmi (4\tau)^{-1}\ln|g_{\mathrm{f}}-g_{\rmi}+1|\right]$, $c=4$ and $f=\tau^{-1}\ln |g_{\mathrm{f}}-g_{\rmi}+1|$. The upper and lower signs in $b_k$ refer to the equation for $y_k(t)=u_{k}^{\phantom{\ast}}(t)$ and $y_k(t)=v_{-k}^{\ast}(t)$ respectively. This equation can be solved using a substitution $y_k(t)=w_k(t)z_k(t)$, where $w_k(t)$ is chosen such that the equation for $z_k(t)$ reduces to an associated Laguerre equation~\cite{AbramowitzStegun65}. The full solution is
\begin{equation}
y_k(t)=\rme^{\rmi \sqrt{a_k}t} \rme^{\rmi \frac{\sqrt{c}}{f}(1-\rme^{ft})} \left[ c_1^y U(-\lambda_k,1+\nu_k,\tilde{t}(t))+c_2^y L^{\nu_k}_{\lambda_k}(\tilde{t}(t)) \right]
\end{equation}
where $U(-\lambda,1+\nu,\tilde{t})$ denotes a confluent hypergeometric function of the second kind and $L^{\nu}_{\lambda}(\tilde{t})$ is a generalised Laguerre polynomial. Here $\lambda_k=-\rmi \sqrt{a_k}/f-\rmi b_k /(2 f \sqrt{c})-1/2$, $\nu_k=2\rmi \sqrt{a_k}/f$ and $\tilde{t}(t) = d_1 \rme^{f t}$ with $d_1=2 \rmi \sqrt{c}/f$. The constants $c_1^y$ and $c_2^y$ are set by the initial conditions with the explicit results given by
\begin{eqnarray}
\fl \eqalign{c_1^u = \frac{\left[ \rmi \left(-A_k(0)-\sqrt{a}+\sqrt{c} \right) L_{\lambda }^{\nu }(d_1)+d_1 f L_{\lambda -1}^{\nu+1}(d_1)\right]u_k^\rmi -\rmi B_k L_{\lambda }^{\nu }(d_1) v_{-k}^\rmi }{d_1 f \left[ U(-\lambda ,\nu +1,d_1) L_{\lambda -1}^{\nu +1}(d_1)+ \lambda U(1-\lambda,\nu +2,d_1) L_{\lambda }^{\nu }(d_1)\right]}},\\
\fl \eqalign{c_2^u = &\frac{\left[\rmi \left(A_k(0)+\sqrt{a}-\sqrt{c}\right) U(-\lambda,\nu+1,d_1)+d_1 f \lambda U(1-\lambda,\nu +2,d_1)\right]u_k^\rmi }{d_1 f \left[ U(-\lambda ,\nu +1,d_1) L_{\lambda -1}^{\nu +1}(d_1)+ \lambda U(1-\lambda ,\nu +2,d_1) L_{\lambda }^{\nu }(d_1)\right]} \\
& +\frac{\rmi B_k U(-\lambda ,\nu +1,d_1) v_{-k}^\rmi}{d_1 f \left[ U(-\lambda ,\nu +1,d_1) L_{\lambda -1}^{\nu +1}(d_1)+ \lambda U(1-\lambda ,\nu +2,d_1) L_{\lambda }^{\nu }(d_1)\right]},}\nonumber\\
\fl \eqalign{c_1^v = \frac{\left[\rmi \left( A_k(0)-\sqrt{a}+\sqrt{c}\right) L_{\lambda }^{\nu }(d_1)+d_1 f L_{\lambda -1}^{\nu +1}(d_1)\right]v_{-k}^\rmi -\rmi B_k L_{\lambda }^{\nu }(d_1)u_k^\rmi}{d_1 f \left[U(-\lambda ,\nu +1,d_1) L_{\lambda -1}^{\nu +1}(d_1)+\lambda U(1-\lambda ,\nu +2,d_1) L_{\lambda }^{\nu }(d_1)\right]}},\\
\fl \eqalign{c_2^v = & \frac{\left[\rmi \left(-A_k(0) +\sqrt{a}-\sqrt{c}\right) U(-\lambda ,\nu +1,d_1)+d_1 f \lambda U(1-\lambda ,\nu +2,d_1)\right]v_{-k}^\rmi }{d_1 f \left[U(-\lambda ,\nu +1,d_1) L_{\lambda -1}^{\nu +1}(d_1)+\lambda U(1-\lambda ,\nu+2,d_1)L_{\lambda }^{\nu }(d_1)\right]} \\
&+\frac{\rmi B_k U(-\lambda ,\nu +1,d_1) u_k^\rmi}{d_1 f \left[U(-\lambda ,\nu +1,d_1) L_{\lambda -1}^{\nu +1}(d_1)+\lambda U(1-\lambda ,\nu+2,d_1) L_{\lambda }^{\nu }(d_1)\right]}.}\nonumber
\end{eqnarray}
We stress that we have suppressed the subindex $k$ of $\nu_k$ and $\lambda_k$ for clarity.
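As for the linear quench, the closed-form solution can be cross-checked numerically: a generic combination $c_1^y=c_2^y=1$ of the two solutions must satisfy the differential equation. A sketch with illustrative parameter values (our choice):

```python
# Cross-check of the exponential-quench solution in terms of U and L^nu_lam.
import mpmath as mp

mp.mp.dps = 40

gi, gf, tau = mp.mpf(0), mp.mpf(2)/3, mp.mpf(1)    # illustrative parameters
k = mp.pi/3
L = mp.log(abs(gf - gi + 1))
f = L/tau
a = 4*(gi**2 + 2*(1 + mp.cos(k))*(1 - gi))
b = 8*(gi - 1 - mp.cos(k) + 1j*L/(4*tau))          # upper sign: u-equation
c = mp.mpf(4)

lam = -1j*mp.sqrt(a)/f - 1j*b/(2*f*mp.sqrt(c)) - mp.mpf(1)/2
nu = 2j*mp.sqrt(a)/f
d1 = 2j*mp.sqrt(c)/f

def y(t):                                   # generic combination c1 = c2 = 1
    tt = d1*mp.exp(f*t)
    pref = mp.exp(1j*mp.sqrt(a)*t)*mp.exp(1j*mp.sqrt(c)/f*(1 - mp.exp(f*t)))
    return pref*(mp.hyperu(-lam, 1 + nu, tt) + mp.laguerre(lam, nu, tt))

t0, h = mp.mpf('0.3'), mp.mpf('1e-8')
ypp = (y(t0 + h) - 2*y(t0) + y(t0 - h))/h**2
res = abs(ypp + (a + b*mp.exp(f*t0) + c*mp.exp(2*f*t0))*y(t0))
rel = res/abs(y(t0))
print(rel)                                  # relative ODE residual: tiny
```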
As in the case of a linear quench, we have compared the post-quench mode occupations $n_k^\mathrm{f}$ with the sudden-quench result (see \fref{fig:figure1}). We find very similar behaviour to the linear quench even for moderate quench durations.
\subsection{Cosine and sine quench}
The cosine quench is defined as a half period of a negative cosine
\begin{equation}
g(t)=\frac{g_{\rmi}+g_{\mathrm{f}}}{2}+\frac{g_{\rmi}-g_{\mathrm{f}}}{2}\,\cos\frac{\pi t}{\tau}.
\end{equation}
Unlike the two protocols discussed above, this protocol is differentiable for all times. Unfortunately, the differential equations \eqref{eq:during quench u and v} in this case have no analytic solution, so we have to resort to a numerical treatment.
Similarly, the sine quench is defined as a quarter period of a sine
\begin{equation}
g(t)=g_{\rmi}+(g_{\mathrm{f}}-g_{\rmi})\,\sin\frac{\pi t}{2\tau}.
\end{equation}
In this protocol the transverse field initially changes faster than in the others, but slows down close to $t=\tau$. It is differentiable everywhere except at $t=0$. Again, the differential equations \eqref{eq:during quench u and v} have no analytic solution and we study this case numerically.
The comparison of the obtained post-quench mode occupations for the cosine and sine protocols with the sudden-quench result is again shown in \fref{fig:figure1}.
\subsection{Polynomial quenches}
The cubic quench is defined as
\begin{equation}
g=g_\rmi-3 (g_\rmi-g_\mathrm{f}) \left(\frac{t}{\tau}\right)^2+2(g_\rmi-g_\mathrm{f})\left(\frac{t}{\tau}\right)^3,
\end{equation}
and the quartic quench is
\begin{equation}
g=g_\rmi-2 (g_\rmi-g_\mathrm{f}) \left(\frac{t}{\tau}\right)^2+(g_\rmi-g_\mathrm{f})\left(\frac{t}{\tau}\right)^4.
\end{equation}
Both protocols are differentiable everywhere, ie, they feature no kinks. The differential equations \eqref{eq:during quench u and v} have no analytic solution in these cases.
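In these cases the mode equations can be integrated directly. A sketch (not the paper's code; it assumes the standard TFI parametrisation $A_k(t)=2(g(t)-\cos k)$, $B_k=2\sin k$, $J=1$) that integrates the coupled first-order equations for all four smooth protocols and checks that the normalisation of the Bogoliubov vector is preserved:

```python
# Direct integration of the mode dynamics for protocols without closed forms,
# with a unitarity check |u|^2 + |v|^2 = 1 (standard TFI form assumed, J = 1).
import numpy as np
from scipy.integrate import solve_ivp

gi, gf, tau, k = 0.0, 2.0/3.0, 1.0, np.pi/3   # illustrative quench and mode

protocols = {
    'cosine':  lambda t: (gi+gf)/2 + (gi-gf)/2*np.cos(np.pi*t/tau),
    'sine':    lambda t: gi + (gf-gi)*np.sin(np.pi*t/(2*tau)),
    'cubic':   lambda t: gi - 3*(gi-gf)*(t/tau)**2 + 2*(gi-gf)*(t/tau)**3,
    'quartic': lambda t: gi - 2*(gi-gf)*(t/tau)**2 + (gi-gf)*(t/tau)**4,
}

def evolve(g, k, tau, psi0):
    rhs = lambda t, y: -1j*np.array([[2*(g(t)-np.cos(k)), 2*np.sin(k)],
                                     [2*np.sin(k), -2*(g(t)-np.cos(k))]]) @ y
    return solve_ivp(rhs, (0.0, tau), psi0, rtol=1e-10, atol=1e-12).y[:, -1]

# illustrative normalised initial vector (the physical initial condition
# would be the Bogoliubov vacuum at g_i)
psi0 = np.array([1.0 + 0j, 0.0 + 0j])
norms = {name: np.linalg.norm(evolve(g, k, tau, psi0))
         for name, g in protocols.items()}
print(norms)                                  # unitary evolution: all stay ~ 1
```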
\subsection{Piecewise quenches with a kink}
Finally we introduce a quench protocol composed of two cosine functions stitched together at $t=\frac{\tau}{2}$ so that the protocol is continuous, but the derivative is not. The protocol is defined as
\begin{equation}
g(t)=
\left\{
\begin{array}{l l}
\frac{g_\mathrm{f}+(1-\sqrt{2})g_\rmi}{2-\sqrt{2}} -\frac{g_\mathrm{f}-g_\rmi}{2-\sqrt{2}} \cos{\frac{\pi t}{2\tau}}, & 0 \le t \leq \frac{\tau}{2}, \\ [1ex]
\frac{3 g_\mathrm{f}+g_\rmi}{4} +\frac{g_\mathrm{f}-g_\rmi}{4} \cos{\frac{2\pi t}{\tau}}, & \frac{\tau}{2} < t \leq \tau.
\end{array} \right.
\end{equation}
Similarly, we define a protocol consisting of two linear functions with different slopes. We do this by stitching them at $t=\frac{\tau}{2}$, leading to a discontinuous derivative:
\begin{equation}
g(t)=
\left\{
\begin{array}{l l}
g_\rmi+\frac{4}{3\tau}(g_\mathrm{f}-g_\rmi)t, & 0 \le t \leq \frac{\tau}{2}, \\ [1ex]
\frac{1}{3}(2g_\rmi + g_\mathrm{f})+\frac{2}{3\tau}(g_\mathrm{f}-g_\rmi)t,& \frac{\tau}{2} < t \leq \tau.
\end{array} \right.
\end{equation}
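The stitching conditions are easy to verify numerically: both protocols are continuous at $t=\tau/2$ and interpolate between $g_\rmi$ and $g_\mathrm{f}$, while their derivatives jump at the stitch. A quick check (parameter values are illustrative):

```python
# Verify continuity and the derivative kink of the piecewise protocols.
import numpy as np

gi, gf, tau = 0.0, 2.0/3.0, 1.0          # illustrative values
s2 = np.sqrt(2.0)

def g_pw_cos(t):
    if t <= tau/2:
        return (gf + (1 - s2)*gi)/(2 - s2) - (gf - gi)/(2 - s2)*np.cos(np.pi*t/(2*tau))
    return (3*gf + gi)/4 + (gf - gi)/4*np.cos(2*np.pi*t/tau)

def g_pw_lin(t):
    if t <= tau/2:
        return gi + 4*(gf - gi)*t/(3*tau)
    return (2*gi + gf)/3 + 2*(gf - gi)*t/(3*tau)

eps = 1e-9
for g in (g_pw_cos, g_pw_lin):
    jump = g(tau/2 + eps) - g(tau/2 - eps)                  # continuity
    slope_l = (g(tau/2 - eps) - g(tau/2 - 2*eps))/eps       # left slope
    slope_r = (g(tau/2 + 2*eps) - g(tau/2 + eps))/eps       # right slope
    print(g.__name__, abs(jump), slope_l - slope_r)         # ~0, kink size
```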
\section{Results for observables}
\label{sec:results observables}
\subsection{Total energy}
\begin{figure}[t]
\centering
\includegraphics[width=.45\linewidth]{figure3a.pdf}
\quad
\includegraphics[width=.45\linewidth]{figure3b.pdf}
\caption{Left: Total energy per site $E_\mathrm{tot}(t)$ during a quench from $g_{\rmi}=0$ to $g_{\mathrm{f}}=\frac{2}{3}$ over $\tau J=1$ for different quench protocols. Inset: Quenches from $g_{\rmi}=0$ to $g_{\mathrm{f}}=\frac{2}{3}$ and varying durations show the approach of $E(\tau)$ to the ground-state energy of the final Hamiltonian $E_{\mathrm{gs}}^{\mathrm{f}}$. Right: Mode occupation $n_k^\mathrm{f}$ of the final Hamiltonian at $t=\tau$ for quenches between phases (upper panel) and inside a phase (lower panel).}
\label{fig:total energy}
\end{figure}
The simplest observable in the system is the total energy per site, $E_{\mathrm{tot}}=\frac{1}{N}\braket{H(t)}$. Following the quench the total energy is conserved, since the Hamiltonian is then time independent. During the quench, however, it depends on the quench protocol as
\begin{equation}
\label{eq: energy}
\eqalign{
E_{\mathrm{tot}} =
\frac{J}{2 \pi} \int_{-\pi}^{\pi} \rmd k \, & \bigg[ 2\Big(g(t)-\cos{k}\Big) |v_k^{\phantom{\dagger}}(t)|^2 \Bigg.
\cr & \Bigg. + \sin{k} \left(u_{k}^{\phantom{\dagger}}(t) v_k^{\phantom{\dagger}}(t)+u_{k}^{\ast}(t)v_{k}^{\ast}(t)\right) -g(t) \bigg].}
\end{equation}
Clearly the total energy in the system depends on the quench details, as shown in \fref{fig:total energy}, while it becomes independent of them in the sudden and adiabatic limits, as expected. We find that the total energy after the quench $E_{\mathrm{tot}}(\tau)$ approaches the adiabatic value $E_{\mathrm{gs}}^{\mathrm{f}}$ as a power law, with the exponent depending on the quench details. For quenches within either the ferromagnetic or paramagnetic phase we observe two types of behaviour depending on whether the protocol has any kinks, ie, non-differentiable points: quenches which feature kinks, ie, the linear, exponential, sine, piecewise linear and piecewise cosine quenches, all approach the adiabatic value as $E_\mathrm{tot}(\tau)-E_\mathrm{gs}^\mathrm{f} \propto \tau^{-2}$. Strikingly, quenches without kinks, such as the cosine, cubic and quartic protocols, display a much faster approach, ie, $E_\mathrm{tot}(\tau)-E_\mathrm{gs}^\mathrm{f} \propto \tau^{-4}$. The inset of figure~\ref{fig:total energy} demonstrates the different approaches to $E_\mathrm{gs}^\mathrm{f}$ for several protocols. In contrast, for quenches across the critical point we find $E_\mathrm{tot}(\tau)-E_\mathrm{gs}^\mathrm{f} \propto \tau^{-1/2}$ irrespective of the details of the protocol.
\change{The different adiabatic behaviour for quenches between different parts of the phase diagram may be related to differences in the behaviour of the mode occupations at $k=0$ as illustrated in figure~\ref{fig:total energy}. For quenches across the critical point (upper panel) the mode occupation at $k=0$ is finite, while, in contrast, for quenches within a phase one finds $n_{k=0}^\mathrm{f}=0$. However, we observe no obvious difference between quench protocols with and without kinks.} We note that the cosine and sine quenches lead to a higher mode occupation, especially of the high-energy modes, and consequently to a higher total energy after the quench than the linear and exponential quenches, as visible in the left panel.
Furthermore, at $t=\tau$ the total energy will be smooth for the cosine and sine quenches since the transverse field $g(t)$ is differentiable, while $E_\mathrm{tot}$ possesses kinks for the linear and exponential quenches originating from the kinks in $g(t)$.
\subsection{Transverse magnetisation}
Let us now turn to the behaviour of the transverse magnetisation. We can compare the magnetisation during the quench to the equilibrium ground-state magnetisation corresponding to the instantaneous value of the coupling $g(t)$, as shown in \fref{fig:Mz during quench}. If the quench duration is short compared to the inverse energy scales of the system, the quench is effectively fast. In this case, the magnetisation deviates significantly from the equilibrium value for the corresponding $g(t)$, since the system cannot follow the change by relaxing to the ground state of the instantaneous Hamiltonian $H(t)$. On the other hand, as the quench slows down, the system starts to relax during the quench, as demonstrated by the oscillatory behaviour of the magnetisation. However, in both cases there is a noticeable lag in the reaction at the very beginning of the quench. This behaviour remains qualitatively the same for different quench protocols, although there are quantitative differences, as can be seen in \fref{fig:Mz during quench}. These differences can be understood by comparing the behaviour to the gap change rates $|\dot{\Delta}|$ shown in \fref{fig:protocols and gap change rate}. The sine quench has the highest initial gap change rate, which means that the system experiences this quench as the most violent, as demonstrated by the large-amplitude oscillations of the magnetisation around its equilibrium value. On the other hand, the cosine quench has the slowest initial gap change rate, and the magnetisation in this case stays much closer to the equilibrium value.
\begin{figure}[t]
\centering
\includegraphics[width=.4\linewidth]{figure4a.pdf}
\quad
\includegraphics[width=.42\linewidth]{figure4b.pdf}
\caption{Transverse magnetisation during the quench for various quench durations (left) and protocols (right). Left: Linear quench from $g_{\rmi}=0$ to $g_{\mathrm{f}}=\frac{2}{3}$. Full line is the equilibrium value of transverse magnetisation for a given $g(t)$. Right: Deviation of the magnetisation in linear, cosine, sine and exponential quenches from the equilibrium magnetisation for the corresponding $g(t)$.}
\label{fig:Mz during quench}
\end{figure}
Following the quench, the magnetisation approaches a steady value. This stationary part of the magnetisation is given by $M^z_{\mathrm{s}}=\lim_{t\to\infty} M^z(t)$ with the result
\begin{equation}
\eqalign{M^z_{\mathrm{s}} = \frac{1}{\pi} \int_{-\pi}^{\pi} \rmd k & \left[ \cos^2{\theta_k^{\mathrm{f}}} \Big(|v_k^{\phantom{\ast}}(\tau)|^2 - |u_{k}^{\phantom{\ast}}(\tau)|^2\Big) \right. \cr &\left. -\cos{\theta_k^{\mathrm{f}}}\sin{\theta_k^{\mathrm{f}}}\Big(u_k^{\phantom{\ast}} (\tau)v_{k}^{\phantom{\ast}}(\tau)+u_k^{\ast}(\tau) v_{k}^{\ast}(\tau)\Big) \right],}
\end{equation}
which coincides with the GGE value. The dependence of the stationary value on the duration of the quench $\tau$ and quench protocol $g(t)$ is shown in \fref{fig:Mz stationary values}. In the left panel we notice that for quenches within the ferromagnetic regime an oscillatory behaviour in $\tau$ exists which is most pronounced for the linear and exponential quench protocols and may be linked to the existence of a kink in $g(t)$ at $t=\tau$. We notice similar oscillatory behaviour in the sine and piecewise quenches. In the inset we see the large-$\tau$ behaviour of the stationary magnetisation, which is similar to the large-$\tau$ behaviour of the total energy. The deviation from the adiabatic value for quenches with a kink behaves as $|M_\mathrm{s}^z-M_\mathrm{a}^z|\propto \tau^{-2}$. In contrast, quenches without such kinks show $|M_\mathrm{s}^z-M_\mathrm{a}^z|\propto \tau^{-4}$. The same types of approach are observed for quenches within the paramagnetic regime. On the other hand, for quenches through the phase transition, no oscillations are observed and the approach to the adiabatic limit is much slower, ie, $|M_\mathrm{s}^z-M_\mathrm{a}^z|\propto \tau^{-1/2}$ (right panel).
\begin{figure}[t]
\centering
\includegraphics[width=.45\linewidth]{figure5a.pdf}
\quad
\includegraphics[width=.45\linewidth]{figure5b.pdf}
\caption{Stationary part of the transverse magnetisation as a function of the quench duration for several quench protocols. The dashed black and full grey lines show the adiabatic and sudden values respectively. The insets show large $\tau$ behaviour, where the adiabatic value is defined by $M_\mathrm{a}^z=\lim_{\tau\to\infty}M_\mathrm{s}^z$.}
\label{fig:Mz stationary values}
\end{figure}
The relaxation to the stationary value is described by the time-dependent part of the magnetisation ($t>\tau$)
\begin{equation}
M^z_{\mathrm{r}}(t)=M^z(t)-M^z_\mathrm{s}=-\frac{2}{\pi} \int_{-\pi}^{\pi} \, \rmd k\,\mathrm{Re} \left[ f(k) \rme^{2\rmi \omega_k t} \right],
\end{equation}
where we recall that $\omega_k=\varepsilon_k(g_\mathrm{f})$ defines the single-mode energies after the quench and we have defined
\begin{equation}
\eqalign{f(k) =\frac{1}{4} \rme^{-2\rmi\omega_k\tau} & \left[ \sin^2{\theta_k^{\mathrm{f}}} \Big(|v_k^{\phantom{\ast}}(\tau)|^2 - |u_{k}^{\phantom{\ast}}(\tau)|^2\Big) \right. \cr & \left. + (\cos{\theta_k^{\mathrm{f}}}-1)\sin{\theta_k^{\mathrm{f}}}\Big(u_k^{\phantom{\ast}} (\tau)v_{k}^{\phantom{\ast}}(\tau)+u_k^{\ast}(\tau) v_{k}^{\ast}(\tau)\Big) \right].}
\end{equation}
Using a stationary phase approximation we can evaluate the late-time behaviour of this integral to be
\begin{equation}
\fl M^z_{\mathrm{r}} (t) = - \sqrt{\frac{2}{\pi}} \sum_{k_0} |\Phi''(k_0)|^{-3/2} \mathrm{Re} \left[ f''(k_0) \mathrm{exp} \left( \rmi \Phi(k_0) t + \rmi\,\mathrm{Sgn} (\Phi''(k_0)) \frac{3\pi}{4} \right) \right] t^{-3/2},
\end{equation}
where $\Phi(k)=2\omega_k = 4 J \sqrt{1+g_{\mathrm{f}}^2-2g_{\mathrm{f}} \cos{k}}$ and the stationary points are $k_0=-\pi,0,\pi$. \Fref{fig:Mz relaxation} shows the relaxation of the magnetisation. As in the case of a sudden quench~\cite{Barouch-70,FiorettoMussardo10} the relaxation follows a $t^{-3/2}$ law. Superimposed on the decay are oscillations with frequencies $2\omega_0$ and $2\omega_{\pi}$ originating from the stationary points of the phase. The quench protocol and the duration of the quench enter the prefactor of the $t^{-3/2}$ decay implicitly via the Bogoliubov coefficients in $f(k)$.
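The $t^{-3/2}$ law can be illustrated on a toy integral with the relevant structure, namely a smooth $2\pi$-periodic weight vanishing quadratically at the stationary points of $\Phi(k)$ (the weight $\sin^2 k$ below is our choice, not the physical $f(k)$):

```python
# Toy stationary-phase check: I(t) = int dk sin^2(k) e^{i Phi(k) t} ~ t^{-3/2},
# since the weight vanishes quadratically at the stationary points of Phi.
import numpy as np

gf = 2.0/3.0                              # final coupling, J = 1
N = 40000
k = np.linspace(-np.pi, np.pi, N, endpoint=False)
dk = 2*np.pi/N
phi = 4.0*np.sqrt(1.0 + gf**2 - 2.0*gf*np.cos(k))   # Phi(k) = 2 omega_k
w = np.sin(k)**2                          # toy weight, zero at k_0 = 0, +-pi

def I(t):
    # periodic trapezoid sum of the oscillatory integral
    return np.sum(w*np.exp(1j*phi*t))*dk

# window-averaged |I|^2 scales as t^{-3} if |I(t)| ~ t^{-3/2}
ts = np.linspace(100.0, 120.0, 200)
A1 = np.mean([abs(I(t))**2 for t in ts])
A2 = np.mean([abs(I(t))**2 for t in 2*ts])
est = np.log2(A1/A2)
print(est)                                # close to 3, ie |I(t)| ~ t^{-3/2}
```

The beating of the two stationary-point frequencies averages out over the time window, leaving the clean power law.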
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figure6.pdf}
\caption{Approach to the stationary value of the transverse magnetisation following a linear quench. The full grey line is the stationary phase approximation result, the dashed line is $t^{-3/2}$ with a constant prefactor, and the blue dots show the numerical evaluation for certain times. The inset shows the spectral analysis of the oscillations demonstrating peaks at frequencies $2\omega_0$ and $2\omega_{\pi}$ indicated by the vertical lines.}
\label{fig:Mz relaxation}
\end{figure}
\subsection{Transverse two-point function}
The two-point function in the transverse direction is given by the Pfaffian \eqref{eq:rho^z} which can be evaluated from a $4\times4$ matrix. The elements of this matrix are $\alpha_{ij}$ and $\beta_{ij}$, the quadratic correlators introduced in \eqref{eq:auxiliary correlators alpha} and \eqref{eq:auxiliary correlators beta} respectively. At late times we evaluate the behaviour of these correlators using a stationary phase approximation with the result
\begin{equation}
\label{eq:alpha SPA}
\alpha_{i,i+n}(t) = \alpha^{\mathrm{s}}_{i,i+n} + F_n^1(t) t^{-3/2},\quad \beta_{i,i+n}(t) = \beta^{\mathrm{s}}_{i,i+n} + F_n^2(t) t^{-3/2}.
\end{equation}
The stationary parts of the functions are given in \eqref{eq:alpha GGE} and \eqref{eq:beta GGE}; they are found to be negligibly small in comparison to the amplitudes of the time-dependent parts. The prefactors $F_n^1(t)$ and $F_n^2(t)$ are sums of oscillatory terms at $k_0=-\pi,0,\pi$, with constant amplitudes and frequencies $2\omega_{k_0}$. Based on this, the connected two-point function in the transverse direction behaves as
\begin{eqnarray}
\rho_{\mathrm{C},n}^z(t) &=&\langle\sigma_i^z(t)\sigma_{i+n}^z(t)\rangle-\langle\sigma_i^z(t)\rangle^2\\
&=&4 \left(|\beta_{i,i+n}(t)|^2-|\alpha_{i,i+n}(t)|^2\right) = \rho_{\mathrm{s},n}^z + G_n^1(t) t^{-3/2} + G_n^2(t) t^{-3}.\label{eq:connected rhoz}
\end{eqnarray}
Since $\alpha^{\mathrm{s}}_{ij}$ and $\beta^{\mathrm{s}}_{ij}$ are negligibly small, the first two terms in \eqref{eq:connected rhoz} are suppressed, and the observed late-time behaviour is a $t^{-3}$ decay. The prefactor $G_n^2(t)$ is a sum of oscillatory terms with constant amplitudes and frequencies $2(\omega_0+\omega_{\pi})$, $2(\omega_0-\omega_{\pi})$, $4\omega_0$ and $4\omega_{\pi}$. This is shown in \fref{fig:rhoz relaxation}. The power-law decay is independent of the quench details, ie, the quench protocol or whether the initial and final values of the quench parameter are in the paramagnetic or the ferromagnetic phase.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figure7.pdf}
\caption{Connected two-point correlation function in the transverse direction for a linear quench. The full line is the stationary phase approximation result, the dashed line is the $t^{-3}$ envelope. The time scale at which correlations set in is indicated by $t_\mathrm{F}$. Inset: Spectral analysis of the oscillations demonstrating a pronounced peak at $2(\omega_0+\omega_{\pi})$ and washed out peaks at the frequencies $2(\omega_0-\omega_{\pi})$, $4\omega_0$, $4\omega_{\pi}$ respectively.}
\label{fig:rhoz relaxation}
\end{figure}
The connected two-point function is exponentially small in the spatial separation $n$ up to the Fermi time $t_\mathrm{F}$ when it exhibits the onset of correlations. At later times it shows an algebraic decay $\propto t^{-3}$ with oscillations as shown in \fref{fig:rhoz relaxation}. The appearance of the time $t_\mathrm{F}$ corresponds to the physical picture of quasiparticles spreading through the system after sudden quantum quenches as originally put forward by Calabrese and Cardy~\cite{CalabreseCardy06,CalabreseCardy07}. The picture adapted to the case of finite-time quenches is as follows~\cite{Bernier-14,CS16}: during the quench, $0\le t\le \tau$, pairs of quasiparticles with momenta $-k$ and $k$ are created. The quasiparticles originating from closely separated points are entangled and propagate through the system semi-classically with the instantaneous velocity $v_k(t)$, which is the propagation velocity of the elementary excitations $v_k(t)=\rmd \varepsilon_k[g(t)]/\rmd k$ for a given transverse field $g(t)$. A consequence of this is the light-cone effect---entangled quasiparticles arriving at the same time at points separated by $n$ induce correlations between local observables at these points. This can be seen in \fref{fig:rhoz relaxation}, where the connected transverse correlation function does not change significantly until $t_\mathrm{F} J \simeq nJ/2v_{\mathrm{max}}=15$. In this \change{rough} estimate, we use that the velocity of the fastest mode after the quench is $v_{\mathrm{max}}(t)=2 J\,\mathrm{min}[1,g(t)]$. As stated before, following the onset, the correlations algebraically decay to time-independent values.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{figure8.pdf}
\caption{Left: Density plot of the two-point correlation function in the transverse direction following a linear quench. Points of onset are extracted from the first variation with an absolute value not smaller than 1\% of the global maximum, lines are linear fits on those points. In white we also indicate the horizon in the sudden-quench case. Right: Density plots of the two-point correlation function in the transverse direction during and after several quench protocols with the same initial and final parameters as on the left. Lines are square root fits on onset points.}
\label{fig:rhoz light cone}
\end{figure}
The main effects of the finite quench time on the light-cone effect are shown in \fref{fig:rhoz light cone}. Firstly, the quasiparticles are not only created at $t=0$, but over the entire quench duration $\tau$. Secondly, during the quench, the particles with momentum $k$ propagate with the instantaneous velocity $v_k(t)$, leading to a bending~\cite{Bernier-14,CS16} of the light cone for times $t\le \tau$, clearly visible in the plots. Together, these two effects result in a delay of the light cone as compared to the sudden case. A simple estimate for this delay can be obtained by considering the fastest mode created at $t=0$, which will have propagated at $t=\tau$ to $x_\mathrm{est}=\int_0^{\tau} \mathrm{d}t \, v_\mathrm{max}(t)$. On the other hand, in the sudden case the horizon will be at the position $x_\mathrm{sq}=v_{\mathrm{max}}(\tau)\tau$, implying for the delay $\Delta x\approx x_\mathrm{sq}-x_\mathrm{est}$, which is consistent with the results shown in \fref{fig:rhoz light cone}.
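For a linear quench staying inside the ferromagnetic phase ($g(t)\le 1$ throughout), the delay estimate is elementary: $x_\mathrm{est}=\int_0^\tau \rmd t\, 2Jg(t)=(g_\rmi+g_\mathrm{f})J\tau$, so $\Delta x\approx(g_\mathrm{f}-g_\rmi)J\tau$. A numerical sketch of this worked example (parameter values are illustrative):

```python
# Light-cone delay estimate for a linear quench with g(t) <= 1 throughout.
import numpy as np
from scipy.integrate import quad

J, gi, gf, tau = 1.0, 0.0, 2.0/3.0, 10.0   # illustrative linear quench
g = lambda t: gi + (gf - gi)*t/tau
v_max = lambda t: 2.0*J*min(1.0, g(t))      # fastest instantaneous mode

x_est, _ = quad(v_max, 0.0, tau)            # horizon built up during the quench
x_sq = v_max(tau)*tau                       # sudden-quench horizon at t = tau
print(x_est, x_sq, x_sq - x_est)            # delay Delta x of the light cone

# here the integral is elementary: x_est = (gi + gf) J tau,
# hence Delta x = (gf - gi) J tau
```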
\subsection{Longitudinal two-point function}
The two-point function in the longitudinal direction can be evaluated from the Pfaffian \eqref{eq:rho^x}. The corresponding antisymmetric matrix is of dimension $2n \times 2n$, where $n$ is the separation of the spins we are considering. The elements of the matrix are $\alpha_{ij}$ and $\beta_{ij}$ from equations \eqref{eq:auxiliary correlators alpha} and \eqref{eq:auxiliary correlators beta}.
We consider the longitudinal two-point function in the disordered phase only, which equals the connected correlation function because the expectation value of the order parameter vanishes. We analyse its behaviour by using the results of the stationary phase approximation given in \eqref{eq:alpha SPA}. Based on this, the connected two-point function in the longitudinal direction and for a quench within the paramagnetic phase behaves as
\begin{equation}
\rho_{n}^x(t) = \rho_{\mathrm{s},n}^x + F_n(t) t^{-3/2},
\end{equation}
to leading order. We note that the power-law decay $\propto t^{-3/2}$ is identical to the sudden-quench case~\cite{Calabrese-12jsm1}. The prefactor $F_n(t)$ is a sum of oscillatory terms, and $\rho_{\mathrm{s},n}^x$ is exponentially small in the separation $n$, as can be seen in \fref{fig:rhox}. Similar to the transverse two-point function discussed in the previous section there is a clear light-cone effect with a bending of the horizon during the quench.
\begin{figure}[t]
\centering
\includegraphics[width=.45\linewidth]{figure9a.pdf}
\quad
\includegraphics[width=.45\linewidth]{figure9b.pdf}
\caption{Left: Two-point correlation function in the longitudinal direction following a linear quench. The full line is the stationary phase approximation result, the dashed line its $t^{-3/2}$ envelope. Right: Stationary value of the two-point function for varying spin separations. The full line is an exponential fit to the data.}
\label{fig:rhox}
\end{figure}
Finally we note that the longitudinal two-point function and order parameter have been investigated in the late-time limit after linear ramps within the ferromagnetic phase by Maraga~\emph{et al.}~\cite{Maraga-14}. In particular, they showed that the stationary longitudinal two-point function decays exponentially in the separation $n$, ie, $\rho_{\mathrm{s},n}^x\propto \rme^{-n/\xi}$, with the correlation length $\xi$ being finite even for arbitrarily small quench rates $v_g=(g_\mathrm{f}-g_\rmi)/\tau$, implying the absence of order $\lim_{n\to\infty}\rho_{\mathrm{s},n}^x=0$ after linear ramps. The decay towards this stationary state was not investigated in detail, but in analogy to the sudden-quench case~\cite{Calabrese-12jsm1} we expect the stationary value to be approached as $\propto t^{-3}$.
\subsection{Loschmidt echo}
It was observed previously~\cite{Heyl-13} that the time evolution of the Loschmidt amplitude after sudden quenches will show non-analytic behaviour provided the quench connects different equilibrium phases. Due to the formal similarity of this behaviour to equilibrium phase transitions it was dubbed a dynamical phase transition. Subsequently, various aspects of these dynamical phase transitions have been investigated \change{theoretically in various models~\cite{KS13,Kriel-14,Heyl14,Canovi-14,VajnaDora15,SchmittKehrein15,HuangBalatsky16},} in particular revealing important differences to the usual equilibrium phase transitions~\cite{VajnaDora14,AndraschkoSirker14}. \change{The experimental observation of a dynamical phase transition in the time evolution of a fermionic quantum gas has been recently reported in Ref.~\cite{Flaeschner-16}.}
In the present work we investigate the signature of the dynamical phase transition following finite-time quenches in the TFI chain. More precisely, we consider the return amplitude between the time evolved state $\ket{\Psi(t)}=U(t)\ket{\Psi_0}$ and the initial state $\ket{\Psi_0}=\ket{\Psi(t=0)}$, ie,
\begin{equation}
\label{eq:defG}
G(t)=\langle\Psi_0|\Psi(t)\rangle=\langle\Psi_0|U(t)|\Psi_0\rangle.
\end{equation}
The expectation is that the corresponding rate function $l(t)=-\frac{1}{N}\ln|G(t)|^2$ will show non-analytic behaviour at specific times $t_n^\star$ provided the finite-time quench crossed the quantum phase transition at $g=1$. We note in passing that the Loschmidt echo after finite-time quenches has been considered previously by Sharma~\emph{et al.}~\cite{Sharma-16}. However, this work considered solely the evolution after the quench, ie, the amplitude $\langle\Psi(\tau)|\Psi(t>\tau)\rangle$, and the finite-time quench appears as a way to prepare the ``initial state'' $\ket{\Psi(\tau)}$. We stress that, in contrast, we consider the full time evolution both during and after the quench.
To compute the return amplitude \eqref{eq:defG}, we start by noting that the Hamiltonian has the form $H(t)=\sum_{k>0}H_k(t)$ with
\begin{equation}
\label{eq:H_k}
H_k(t)=A_k(t) \big( c_k^{\dagger} c_k^{\phantom{\dagger}} + c_{-k}^{\dagger} c_{-k}^{\phantom{\dagger}}\big) -\rmi B_k \big( c_{-k}^{\dagger} c_k^{\dagger} + c_{-k}^{\phantom{\dagger}} c_k^{\phantom{\dagger}} \big),
\end{equation}
ie, the individual Hamiltonians $H_k(t)$ couple only pairs of modes $-k$ and $k$. The time-evolution operator thus also decomposes as $U(t)=\prod_{k>0}U_k(t)$. Next, we revert to the pre-quench operators $\eta_k$ to write the single-mode Hamiltonian \eqref{eq:H_k} in terms of the operators
\begin{equation}
K_k^+=\eta_k^{\dagger} \eta_{-k}^{\dagger},\quad K_k^-=\eta_k^{\phantom{\dagger}} \eta_{-k}^{\phantom{\dagger}}, \quad K_k^0=\frac{1}{2} \big( \eta_k^{\dagger} \eta_k^{\phantom{\dagger}} - \eta_{-k}^{\phantom{\dagger}} \eta_{-k}^{\dagger} \big),
\end{equation}
which satisfy the SU(1,1) algebra $[K_k^-,K_p^+]=2\delta_{kp}K_k^0$, $[K_k^0,K_p^\pm]=\pm\delta_{kp}K_k^\pm$. Now we can make the following ansatz for the time-evolution operator~\cite{Perelomov86,Dora-13}
\begin{equation}
U_k(t) = \mathrm{exp}\Big[\rmi \varphi_k^{\phantom{}}(t)\Big] \mathrm{exp}\Big[a_k^+(t) K_k^+\Big] \mathrm{exp}\Big[a_k^0(t) K_k^0\Big] \mathrm{exp}\Big[a_k^-(t) K_k^-\Big].
\end{equation}
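As a quick numerical sanity check of the SU(1,1) commutation relations stated above, one can evaluate them in a two-dimensional (non-unitary) matrix representation; the representation chosen below is an illustrative assumption, not the fermionic realisation used in the text.

```python
import numpy as np

# Two-dimensional (non-unitary) representation of su(1,1):
# K0 = sigma_z/2, K+ = i*sigma_+, K- = i*sigma_-  (illustrative choice).
K0 = np.diag([0.5, -0.5]).astype(complex)
Kp = 1j * np.array([[0, 1], [0, 0]], dtype=complex)
Km = 1j * np.array([[0, 0], [1, 0]], dtype=complex)

def comm(A, B):
    return A @ B - B @ A

# [K^-, K^+] = 2 K^0 and [K^0, K^pm] = pm K^pm
print(np.allclose(comm(Km, Kp), 2 * K0))  # True
print(np.allclose(comm(K0, Kp), Kp))      # True
print(np.allclose(comm(K0, Km), -Km))     # True
```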
From $\rmi \partial_t U_k(t)=H_k(t) U_k(t)$ we then obtain differential equations for the coefficients $\varphi_k(t)$, $a_k^+(t)$, $a_k^0(t)$ and $a_k^-(t)$ which we solve with the initial conditions $\varphi_k(0)=a_k^+(0)=a_k^0(0)=a_k^-(0)=0$. With this result the return amplitude becomes
\begin{equation}
\fl\label{eq:LE}
G(t)=\mathrm{exp}\left[ -\frac{\rmi N}{2\pi} \int_{0}^{\pi} \mathrm{d}k \int_{0}^{t} \mathrm{d}t' \, A_k(t') \right] \mathrm{exp}\left[ \frac{N}{2\pi} \int_{0}^{\pi} \mathrm{d}k \, \ln{\left(u_k^{\rmi} u_k^{\ast}(t) +v_k^{\rmi} v_k(t)\right)} \right].
\end{equation}
The corresponding rate function is given by
\begin{equation}
\label{eq:rate function}
l(t)=-\frac{1}{\pi}\int_0^\pi\mathrm{d} k\,\ln\Big|u_k^\rmi u_k^{\ast}(t) +v_k^{\rmi} v_k(t)\Big|,
\end{equation}
which we plot in figure \ref{fig:loschmidt echo} during and after quenches across the critical point with different quench durations and protocols. The rate function clearly features non-analyticities at times $t_n^{\star}(\tau)$. We note that the superscript $\star$ denotes the critical times rather than complex conjugation, which we denote by the superscript $\ast$ throughout the paper. In contrast, we did not observe such non-analyticities for quenches within a phase.
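For orientation, the sudden-quench limit of the rate function admits a compact closed form that is easy to evaluate numerically. The sketch below uses the standard Bogoliubov-angle expression~\cite{Heyl-13} with J=1; the angle and dispersion conventions are our assumption and may differ from the main text by rescalings. It reproduces the two qualitative features discussed here: l(0)=0 and a strictly positive rate after a quench across the critical point.

```python
import numpy as np

def bog_angle(k, g):
    # Bogoliubov angle theta_k(g), defined via tan(2*theta) = sin k / (g - cos k)
    return 0.5 * np.arctan2(np.sin(k), g - np.cos(k))

def rate_sudden(t, g0=0.5, g1=1.5, nk=4000):
    # l(t) = -(1/pi) * int_0^pi dk ln|cos^2(D_k) + sin^2(D_k) e^{2 i eps_k t}|
    k = (np.arange(nk) + 0.5) * np.pi / nk          # midpoint rule on (0, pi)
    D = bog_angle(k, g1) - bog_angle(k, g0)         # difference of Bogoliubov angles
    eps = 2.0 * np.sqrt(1.0 + g1**2 - 2.0 * g1 * np.cos(k))
    amp = np.cos(D)**2 + np.sin(D)**2 * np.exp(2j * eps * t)
    return -np.mean(np.log(np.abs(amp)))

print(abs(rate_sudden(0.0)) < 1e-12)   # True: no evolution yet at t = 0
print(rate_sudden(0.5) > 0.0)          # True for a quench across g = 1
```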
\begin{figure}[t]
\centering
\begin{minipage}{0.45\linewidth}
\includegraphics[width=\linewidth]{figure10a.pdf}
\end{minipage}
\quad
\begin{minipage}{0.45\linewidth}
\includegraphics[width=\linewidth]{figure10b.pdf}
\end{minipage}
\caption{Left: Rate function of the return probability following linear quenches of different durations. Vertical lines are the times $t_n^{\star}(\tau)$. The full line is the rate function for the sudden quench~\cite{Silva08,Heyl-13}. Right: Rate function following quenches of various protocols. Inset: Dependence of the offset $\Delta t^{\star}(\tau)$ on the quench duration and protocol. Full lines are guides to the eye.}
\label{fig:loschmidt echo}
\end{figure}
We observe that a short quench reproduces the sudden-quench result~\cite{Silva08,Heyl-13}, whereas for longer quenches there is an offset in the characteristic times. This can be further investigated by considering the return amplitude \eqref{eq:LE} after the quench $t>\tau$ using the post-quench solutions from \eqref{eq:post-quench u and v} for $u_k(t)$ and $v_k(t)$. In this case the corresponding rate function becomes
\begin{equation}
l(t)=-\frac{1}{\pi}\int_0^\pi\mathrm{d} k\,\ln\Big|u_k^{\rmi}c_4^{u*}-v_k^{\rmi}c_4^{v*}+(u_k^{\rmi}c_3^{u*}-v_k^{\rmi}c_3^{v*})e^{-2\mathrm{i}\omega_k t}\Big|,
\label{eq:lpostquench}
\end{equation}
where $u_k^\rmi$ and $v_k^\rmi$ are the Bogoliubov coefficients of the initial Hamiltonian and $c_{3/4}^{u/v}$ are the momentum- and quench duration-dependent functions given in equations \eqref{eq: v post-quench constants}--\eqref{eq: u post-quench constants}. When considering the analytic continuation $t\to-\mathrm{i} z$ of \eqref{eq:lpostquench}, the argument of the logarithm will vanish at lines in the complex plane parametrised by the momentum $k$ and explicitly located at
\begin{equation}
\change{z_m(k)}=\frac{1}{2 \omega_k} \left( \ln{\frac{u_k^{\rmi}{c_3^u}^{\ast}-v_k^{\rmi}{c_3^v}^{\ast}}{u_k^{\rmi}{c_4^u}^{\ast}-v_k^{\rmi}{c_4^v}^{\ast}}}+\rmi \pi (2\change{m}+1) \right),
\label{eq:LElines}
\end{equation}
with $m$ being an integer. The lines \eqref{eq:LElines} will cut the real time axis provided there exists a momentum $k^\star$ with $\mathrm{Re}\,z_m(k^\star)=0$. The corresponding critical times at which the rate function $l(t)$ will show non-analytic behaviour are given by $t_m^\star=-\mathrm{i}\,\mathrm{Im}\,z_m(k^\star)$ with the explicit result
\begin{equation}
t_m^\star(\tau)=\Delta t^\star(\tau)+t^\star\left(m+\frac{1}{2}\right),\quad m=0,1,2,\ldots,
\label{eq:tstar}
\end{equation}
where the periodicity is given by $t^\star= \pi/\omega_{k^\star}$ and the offset reads
\begin{equation}
\Delta t^\star(\tau) = \frac{1}{2\omega_k}\arg\frac{u_k^{\rmi}c_3^{u*}-v_k^{\rmi}c_3^{v*}}{u_k^{\rmi}c_4^{u*}-v_k^{\rmi}c_4^{v*}}\Bigg|_{k=k^\star}.
\end{equation}
We note that $\Delta t^\star(\tau)$ depends on $\tau$ via the coefficients $c_{3/4}^{u/v}$. We also stress that the result \eqref{eq:tstar} is only valid for critical times after the quench $t^\star_m>\tau$. A comparison with the explicit numerical evaluation of the rate function defined via \eqref{eq:LE} is plotted in figure~\ref{fig:loschmidt echo}; it shows excellent agreement. In particular, the non-analyticities occur periodically and are shifted relative to each other. The latter finding originates from the fact that the critical mode $k^\star$, obtained from \change{$\mathrm{Re}\,z_m(k^\star)=0$,} depends implicitly on the quench protocol. We also note that the condition $\mathrm{Re}\,z_m(k^\star)=0$ cannot be satisfied for quenches within the same phase, while for quenches across the critical point such a mode exists.
\begin{figure}[t]
\centering
\begin{minipage}{0.45\linewidth}
\includegraphics[width=\linewidth]{figure11a.pdf}
\end{minipage}
\quad
\begin{minipage}{0.45\linewidth}
\includegraphics[width=\linewidth]{figure11b.pdf}
\end{minipage}
\caption{Left: Rate function of the return probability following linear quenches of different durations. We stress that the first critical time $t_1^\star$ occurs during the quench, ie, $t_1^\star<\tau$. \change{On the time axis we indicate the quench durations $\tau_i$ as well as the times $t_i^\mathrm{c}$ at which the critical point $g_\mathrm{c}=1$ is crossed.} Right: Scaling of the critical momenta defined via $\mathrm{Re}\,z_n(k^\star)=0$ and $n_{\tilde{k}}=1/2$ for a linear quench. The behaviour is consistent with $k^\star,\tilde{k}\propto\tau^{-1/2}$.}
\label{fig:loschmidt echo 2}
\end{figure}
The analysis above is restricted to $t>\tau$, but for relatively short quenches it captures all critical times $t^\star_m$, since they all occur after the quench. However, for slower quenches kinks in the rate function occur during the quench. We plot several such situations in which $t_1^\star<\tau$ in figure~\ref{fig:loschmidt echo 2}. In this case, the analysis of the rate function in \eqref{eq:rate function} would require the use of solutions of the differential equations with time-dependent coefficients in \eqref{eq:during quench u and v}, which are not always explicitly known.
Finally we compare the critical mode $k^\star$ obtained from \change{$\mathrm{Re}\,z_m(k^\star)=0$} with the mode $\tilde{k}$ defined by $n_{\tilde{k}}=1/2$, ie, corresponding to infinite temperatures. For dynamical phase transitions after sudden quenches it was found that these two modes are identical~\cite{Heyl-13}. In contrast, for the finite-time quenches we considered here this is not the case, ie, in general we find $k^\star\neq\tilde{k}$. Nevertheless, as shown in figure~\ref{fig:loschmidt echo 2} the scaling behaviour of these two critical momenta is consistent with $k^\star,\tilde{k}\propto\tau^{-1/2}$ as expected in the Kibble--Zurek scaling limit~\cite{Chandran-12,Kolodrubetz-12}.
\section{Scaling limit}
\label{sec:SL}
As is well known, the vicinity of the quantum phase transition at $g=1$ can be described by the scaling limit~\cite{ItzyksonDrouffe89vol1,GogolinNersesyanTsvelik98}
\begin{equation}
J\to\infty,\quad g\to 1,\quad a_0\to 0,
\label{scalingI}
\end{equation}
where $a_0$ denotes the lattice spacing, while keeping fixed both the gap $\Delta$ and the velocity $v$ defined by
\begin{equation}
2J|1-g|=\Delta,\quad 2Ja_0=v.
\label{scalingII}
\end{equation}
The Hamiltonian in the scaling limit reads
\begin{equation}
H =\int_{-\infty}^\infty \frac{\mathrm{d} x}{2\pi}\left[ \frac{\mathrm{i} v}{2}
(\psi\partial_x\psi-{\bar\psi}\partial_x\bar\psi) -
\mathrm{i} \Delta{\bar\psi}\psi\right],
\label{eq:Hscaling}
\end{equation}
where $\psi$ and $\bar{\psi}$ are right- and left-moving components of a Majorana fermion possessing the relativistic dispersion relation $\varepsilon(k)=\sqrt{\Delta^2+(vk)^2}$. Thus we see that the finite-time quenches considered in this article will lead to a time-dependent fermion mass~\cite{Das-16} $\Delta(t)=2J|1-g(t)|$. We expect our results to directly carry over to the field theory \eqref{eq:Hscaling}, eg, the post-quench relaxation of the transverse magnetisation should follow $M_\mathrm{r}^z(t)\propto t^{-3/2}$ as is observed after sudden quenches~\cite{FiorettoMussardo10,SE12}.
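The approach to the relativistic dispersion can be checked numerically: fixing $\Delta$ and $v$ while letting $J$ grow according to the scaling relations $2J|1-g|=\Delta$, $2Ja_0=v$, the lattice dispersion converges to $\sqrt{\Delta^2+(vq)^2}$. The lattice form $\varepsilon_k=2J\sqrt{1+g^2-2g\cos(qa_0)}$ used below is the standard TFI dispersion; its use here is our assumption insofar as the text does not restate it.

```python
import numpy as np

def eps_lattice(q, J, Delta=1.0, v=1.0):
    # Scaling-limit parametrisation: 2J(1-g) = Delta, 2J*a0 = v (here g < 1).
    g = 1.0 - Delta / (2.0 * J)
    a0 = v / (2.0 * J)
    return 2.0 * J * np.sqrt(1.0 + g**2 - 2.0 * g * np.cos(q * a0))

q, Delta, v = 0.7, 1.0, 1.0
eps_rel = np.sqrt(Delta**2 + (v * q)**2)       # relativistic dispersion
errors = [abs(eps_lattice(q, J) - eps_rel) for J in (10.0, 100.0, 1000.0)]
print(errors[0] > errors[1] > errors[2])       # True: convergence as J grows
```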
\section{Quantum field on curved spacetime}
\label{sec:curved}
Recently Neuenhahn and Marquardt~\cite{NeuenhahnMarquardt15} put forward the idea of using one-dimensional bosonic condensates with time-dependent Hamiltonians in order to simulate a 1+1-dimensional expanding universe. In the following we argue that a similar construction can be performed for the Ising field theory \eqref{eq:Hscaling}. We start from the corresponding action in Minkowski space
\begin{equation}
S_\mathrm{IFT}=\frac{1}{2}\int\mathrm{d} t\,\mathrm{d} x\,\Bigl[\mathrm{i} v\,\bar{\Psi}\gamma^\mu\partial_\mu\Psi+\mathrm{i}\Delta(t)\bar{\Psi}\gamma_3\Psi\Bigr],
\end{equation}
where we introduced the two-spinor $\Psi$ and two-dimensional gamma matrices in the Weyl representation via
\begin{equation}
\Psi=\left(\begin{array}{c}\psi\\\bar{\psi}\end{array}\right),\; \bar{\Psi}=\Psi^\dagger\gamma^0=\bigl(\bar{\psi},\psi\bigr),\;\gamma^0=\sigma^x,\;\gamma^1=\mathrm{i}\sigma^y,\;\gamma_3=\sigma^z,
\end{equation}
and set $\partial_0=\partial_t$ and $\partial_1=\partial_x$.
On the other hand, the action of a Dirac field with mass $m$ in curved spacetime in 1+1-dimensions is given by~\cite{MukhanovWinitzki07,ParkerToms09}
\begin{equation}
S_g=\frac{1}{2}\int\mathrm{d}^2x \sqrt{-g} \Bigl[\mathrm{i} v\,\bar{\Psi}\gamma^ae_a^\mu\nabla_\mu\Psi+\mathrm{i} m\bar{\Psi}\gamma_3\Psi\Bigr],
\end{equation}
where $g$ is the determinant of the metric tensor, $e_a^\mu$ is the corresponding zweibein and $\nabla_\mu$ denotes the covariant derivative. Specifically we consider the spatially flat Friedmann--Robertson--Walker metric
\begin{equation}
\mathrm{d}s^2=\mathrm{d}t^2-R^2(t)\mathrm{d}x^2,
\end{equation}
which describes a homogeneous, spatially expanding spacetime. With conformal time $\mathrm{d}\eta=\mathrm{d}t/R(t)$ the metric becomes
\begin{equation}
\mathrm{d}s^2=R^2(\eta) \left(\mathrm{d}\eta^2-\mathrm{d}x^2\right)=R^2(\eta)\eta_{\mu\nu}\mathrm{d}x^\mu\mathrm{d}x^\nu=g_{\mu\nu}\mathrm{d} x^\mu\mathrm{d} x^\nu,
\end{equation}
where $\eta_{\mu\nu}=\mathrm{diag}(1,-1)$ is the Minkowski metric. Using this, the zweibein defined via $\eta_{ab}=e_a^\mu e_b^\nu g_{\mu\nu}$ is found to be $e_a^\mu=R^{-1}\delta_a^\mu$ with the inverse $e^a_\mu=R\delta^a_\mu$. The covariant derivative is given by
\begin{equation}
\nabla_\mu=\partial_\mu+\frac{1}{8}\omega_\mu^{ab}\bigl[\gamma_a,\gamma_b\bigr]=\partial_\mu-\frac{\partial_\eta R}{2R}\delta_\mu^1\gamma_3,
\end{equation}
where we have evaluated the spin connection $\omega_\mu^{ab}=\eta^{bc}\omega_{\mu\phantom{a}c}^{\phantom{\mu}a}$ defined using the Christoffel symbols $\Gamma^\lambda_{\mu\nu}$ as $\omega_{\mu\phantom{a}b}^{\phantom{\mu}a}=-e_b^\nu(\partial_\mu e^a_\nu-\Gamma^\lambda_{\mu\nu}e^a_\lambda)$. Thus with $-g=R^4$ we find
\begin{equation}
S_g=\frac{1}{2}\int\mathrm{d}\eta\mathrm{d} x\,R\left[\mathrm{i} v\,\bar{\Psi}\gamma^\mu\partial_\mu\Psi+\mathrm{i} \bar{\Psi}\left(mR\gamma_3+\frac{v\partial_\eta R}{2R}\gamma^0\right)\Psi\right].
\end{equation}
Finally, rescaling the fields according to $\chi=\sqrt{R}\Psi$, we obtain
\begin{equation}
S_g=\frac{1}{2}\int\mathrm{d}\eta\,\mathrm{d} x\,\Bigl[\mathrm{i} v\,\bar{\chi}\gamma^\mu\partial_\mu\chi+\mathrm{i} mR(\eta)\bar{\chi}\gamma_3\chi\Bigr],
\end{equation}
thus establishing the relation $\Delta(t)=mR(\eta(t))$ between the time-dependent gap and the scaling factor in the Friedmann--Robertson--Walker metric. Hence we conclude that the spreading of correlations during the finite-time quench can be interpreted as propagation of particles in an expanding space time. This result is very similar to the relation obtained in the bosonic case~\cite{NeuenhahnMarquardt15}.
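The zweibein manipulations above involve only linear algebra and can be spot-checked numerically: in conformal coordinates the metric is $g_{\mu\nu}=R^2\eta_{\mu\nu}$, and $e_a^\mu=R^{-1}\delta_a^\mu$ must satisfy $\eta_{ab}=e_a^\mu e_b^\nu g_{\mu\nu}$, while $-g=R^4$. The numerical value of $R$ below is an arbitrary choice.

```python
import numpy as np

R = 1.7                                    # arbitrary value of the scale factor
eta = np.diag([1.0, -1.0])                 # Minkowski metric eta_{mu nu}
g = R**2 * eta                             # conformal FRW metric g_{mu nu}
e = np.eye(2) / R                          # zweibein e_a^mu = R^{-1} delta_a^mu

# eta_ab = e_a^mu e_b^nu g_{mu nu}
print(np.allclose(np.einsum('am,bn,mn->ab', e, e, g), eta))  # True

# -g := -det(g_{mu nu}) = R^4
print(np.isclose(-np.linalg.det(g), R**4))                   # True
```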
\section{Conclusion}
\label{sec:conclusion}
In conclusion, we have investigated finite-time quantum quenches in the transverse-field Ising chain, ie, continuous changes in the transverse field over a finite time $\tau$. We discussed the general treatment of such time-dependent quenches in the TFI model and applied this framework to several quench protocols. \change{The precise forms of these protocols were chosen to cover different features like kinks in the time dependence. Specifically we} derived exact expressions for the time evolution of the system in the case of a linear and an exponential protocol, and for several others we obtained numerical solutions. Furthermore, we constructed the GGE for the post-quench dynamics using the mode occupations of the eigenmodes of the final Hamiltonian.
Using these results, we analysed the behaviour of several observables during and after the quench. Namely, we investigated the behaviour of the total energy, transverse magnetisation, transverse and longitudinal spin correlation functions and the Loschmidt echo. We confirmed that the stationary values to which the observables relax correspond to the GGE expectation values, as was of course expected. The approaches to the stationary values are oscillatory power laws, details of which can be extracted from a stationary-phase approximation. Furthermore, we checked that the stationary values reproduce the corresponding results for sudden quenches in the short-$\tau$ limit as well as the adiabatic expectation values in the long-$\tau$ limit. As a function of the quench time $\tau$ the approach to the adiabatic values was shown to follow different power laws, depending on whether the quench is within a phase, or if it is done across the critical point.
In the time evolution of the two-point functions we observed the light-cone effect known from sudden quenches. In comparison to the sudden case, however, there is an offset in the horizon after the quench as well as a non-linear regime during the quench. These effects can be ascribed to the production of quasiparticles during the quench as well as to the fact that their instantaneous velocities depend on the quench protocol.
Furthermore, we investigated the behaviour of Loschmidt echo and found signatures of dynamical phase transitions when quenching across the critical point, as was observed previously in sudden quenches. We analysed the rate function of the return amplitude and observed smooth behaviour when quenching within a phase, and periodic non-analyticities when quenching across the critical point. The latter are delayed as compared to the sudden-quench case. We found exact analytical expressions for the post-quench times at which these non-analyticities occur, characterising their periodicity and the delay. In addition, we showed numerically that the non-analyticities can occur during the quench as well, provided the quench duration is sufficiently long.
Finally, we looked into the scaling limit of the theory in which the transverse-field quench corresponds to a quench in the mass of the Majorana fermions. We showed that, alternatively, we can describe the quenching procedure by a field theory with constant parameters put on a curved expanding spacetime, as was proposed previously for a bosonic field theory.
In the future it would be interesting to study the behaviour of other models during and after finite-time quenches. As such investigations presumably have to be based on numerical simulations, the results presented here may serve as an ideal starting point. From our perspective, a natural model to begin with would be the axial next-nearest-neighbour Ising chain, which, in the language of Jordan--Wigner fermions, would correspond to an interacting, non-quadratic theory. While universal results like the scaling behaviour close to the phase transition are expected to be identical to the TFI chain, the non-universal details of the time evolution may reveal interesting interaction effects and their interplay with the energy scale set by the finite quench time.
\section*{Acknowledgements}
We would like to thank Piotr Chudzinski, Maurizio Fagotti, Nava Gaddam, Markus Heyl, Dante Kennes and especially Michael Kolodrubetz for useful comments and discussions. This work is part of the D-ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). This work was supported by the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO), under 14PR3168.
\section*{References}
\providecommand{\newblock}{}
\section{Introduction}
For a faithful presentation of the problem and result discussed
in \cite{GaoLu:16} we quote from this paper:
\medskip
\textquotedblleft The fourth-order polynomial defined by
$H(x):=\nu/2(1/2x^{2}-\lambda)^{2}$, where $x\in\mathbb{R},$
$\nu,\lambda$ are positive constants (1)
\noindent is the well-known Landau's second-order free energy, each of its
local minimizers represents a possible phase state of the material, while each
local maximizer characterizes the critical conditions that lead to the phase transitions.
...
The purpose of this paper is to find the extrema of the following nonconvex
total potential energy functional in 1D,
$I[u]:=\int_{a}^{b}\left( H\left( \frac{du}{dx}\right) -fu\right) dx$. (2)
\noindent The function $f\in C[a,b]$ satisfies the normalized balance condition
$\int_{a}^{b}f(x)dx=0$, (3)
\noindent and
there exists a unique zero root for $f$ in $[a,b]$. (4)
\noindent Moreover, its $L^{1}$-norm is sufficiently small such that
$\left\Vert f\right\Vert _{L^{1}(a,b)}<2\lambda\nu\sqrt{2\lambda}/(3\sqrt{3})$. (5)
The above assumption is reasonable since large $\left\Vert f\right\Vert
_{L^{1}(a,b)}$ may possibly lead to instant fracture, which is represented by
nonsmooth solutions. The deformation $u$ is subject to the following two constraints,
$u\in C^{1}[a,b]$, (6)
\smallskip$\frac{du}{dx}(a)=\frac{du}{dx}(b)=0$. (7)
...
Before introducing the main result, we denote
$F(x):=-\int_{a}^{x}f(\rho)d\rho,~~x\in\lbrack a,b]$.
\noindent Next, we define a polynomial of third order as follows,
$E(y):=2y^{2}(\lambda+y/\nu),~~y\in\lbrack-\nu\lambda,+\infty)$.
\noindent Furthermore, for any $A\in\lbrack0,8\lambda^{3}\nu^{2}/27)$,
$E_{3}^{-1}(A)\leq E_{2}^{-1}(A)\leq E_{1}^{-1}(A)$
\noindent stand for the three real-valued roots for the equation $E(y)=A$.
At the moment, we would like to introduce the main theorem.
\textbf{Theorem 1.1.} For any function $f\in C[a,b]$ satisfying (3)--(5), one
can find the local extrema for the nonconvex functional (2).
\textbullet\ For any $x\in\lbrack a,b]$, $\overline{u}_{1}$ defined below is a
local minimizer for the nonconvex functional (2),
\noindent$\overline{u}_{1}(x)=\int_{a}^{x}F(\rho)/E_{1}^{-1}(F^{2}(\rho
))d\rho+C_{1},~~\forall C_{1}\in\mathbb{R}$. (9)
\textbullet\ For any $x\in\lbrack a,b]$, $\overline{u}_{2}$ defined below is a
local minimizer for the nonconvex functional (2),
$\overline{u}_{2}(x)=\int_{a}^{x}F(\rho)/E_{2}^{-1}(F^{2}(\rho))d\rho
+C_{2},~~\forall C_{2}\in\mathbb{R}$. (10)
\textbullet\ For any $x\in\lbrack a,b]$, $\overline{u}_{3}$ defined below is a
local maximizer for the nonconvex functional (2),
$\overline{u}_{3}(x)=\int_{a}^{x}F(\rho)/E_{3}^{-1}(F^{2}(\rho))d\rho
+C_{3},~~\forall C_{3}\in\mathbb{R}$. (11)\textquotedblright
\medskip
As mentioned in \cite{GaoLu:16}, in getting the above result the
authors use ``the canonical duality method".
\medskip
Let us observe from the beginning that nothing is said about the norm (and the
corresponding topology) on $C^{1}[a,b]$ when speaking about local extrema
(minimizers or maximizers).
In the following we discuss a slightly more general problem and
compare our conclusions with those of Theorem 1.1 in
\cite{GaoLu:16}. We do not analyze the method by which the
conclusions in Theorem 1.1 of \cite{GaoLu:16} are obtained, even though
this would be worth doing. Similar problems are considered by Gao and
Ogden in \cite{GaoOgd:08} and \cite{GaoOgd:08z} which are discussed
by Voisei and Z\u{a}linescu in \cite{VoiZal:11} and \cite{VoiZal:12},
respectively.
More precisely consider $\theta\in C[a,b]$ such that $\theta(x)>0$ for
$x\in\lbrack a,b]$, the polynomial $H$ defined by $H(y):=\tfrac{1}{2}(\tfrac{1}{2}y^{2}-\lambda)^{2}$ with $\lambda>0$, and the function
\[
J:=J_{f}:C^{1}[a,b]\rightarrow\mathbb{R},\quad J_{f}(u):=\int_{a}^{b}\theta\cdot\left( H\circ u^{\prime}-fu\right) ,
\]
where $\int_{a}^{b}h$ denotes the Riemann integral
$\int_{a}^{b}h(x)dx$ of the function $h:[a,b]\rightarrow\mathbb{R}$
(when it exists). Of course, taking $\theta$ the constant function
$\nu$ $(>0)$ and replacing $f$ by $\nu^{-1}f$ we get the functional
$I$ considered in \cite{GaoLu:16}.
Let us set
\begin{align*}
X & :=C_{0}[a,b]:=\{v\in C[a,b]\mid v(a)=v(b)=0\},\\
Y & :=C_{1,0}[a,b]:=\{u\in C^{1}[a,b]\mid u^{\prime}:=du/dx\in C_{0}[a,b]\}.
\end{align*}
Of course $X$ is a linear subspace of $C[a,b]$; it is even a closed subspace
(and so a Banach space) if $C[a,b]$ is endowed with the supremum norm
$\left\Vert \cdot\right\Vert _{\infty}$. Clearly, other norms could be
considered on $X$.
Observe that the function $F$ defined in \cite{GaoLu:16} (and quoted
above) is in $C^{1}[a,b]\cap X$ with $F^{\prime}:=dF/dx=-f$.
Moreover, condition (5) implies that $\left\Vert F\right\Vert
_{\infty}<2\lambda\sqrt{2\lambda }/(3\sqrt{3})=(2\lambda/3)^{3/2}$
because
\[
\left\vert F(x)\right\vert =\left\vert \int_{a}^{x}f(\xi)d\xi\right\vert
\leq\int_{a}^{x}\left\vert f(\xi)\right\vert d\xi\leq\int_{a}^{b}\left\vert
f(\xi)\right\vert d\xi=\left\Vert f\right\Vert _{L^{1}(a,b)}.
\]
Furthermore, condition (4) implies that $F(x)>0$ for $x\in(a,b)$, or
$F(x)<0$ for $x\in(a,b)$.
For $u\in Y$ and $v:=u^{\prime}$ we have that
\begin{equation}
\int_{a}^{b}uf=-\int_{a}^{b}uF^{\prime}=-\left. u(x)F(x)\right\vert _{a}^{b}+\int_{a}^{b}u^{\prime}F=\int_{a}^{b}vF. \label{r-gl1}
\end{equation}
Using this fact, for $u$ satisfying the constraints (6) and (7), and
$v:=u^{\prime}$, one has
\[
J(u)=\int_{a}^{b}\theta\left( H\circ v-Fv\right) =:K(v).
\]
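The integration-by-parts identity (\ref{r-gl1}) uses only $F(a)=F(b)=0$; it is easy to confirm numerically for one concrete pair $(f,u)$. The specific test functions below are arbitrary choices, not taken from \cite{GaoLu:16}.

```python
import numpy as np

a, b, n = 0.0, 1.0, 200001
x = np.linspace(a, b, n)
dx = x[1] - x[0]
trap = lambda y: np.sum(0.5 * (y[1:] + y[:-1])) * dx   # trapezoidal rule

f = np.sin(2.0 * np.pi * x)                            # satisfies int_a^b f = 0
F = (np.cos(2.0 * np.pi * x) - 1.0) / (2.0 * np.pi)    # F(x) = -int_a^x f
u = x**2                                               # an arbitrary C^1 function
v = 2.0 * x                                            # v = u'

lhs, rhs = trap(u * f), trap(v * F)                    # int u f  vs  int v F
print(abs(lhs - rhs) < 1e-8)                           # True
```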
\section{Study of local extrema of the function $K$}
As mentioned above, in the sequel
$H:\mathbb{R}\rightarrow\mathbb{R}$ is defined by
$H(y):=\tfrac{1}{2}\left( \tfrac{1}{2}y^{2}-\lambda\right) ^{2}$
with $\lambda>0$, $\theta\in C[a,b]$ is such that
$\mu:=\min_{x\in\lbrack a,b]}\theta(x)>0$; moreover $F\in
C^{1}[a,b]\cap X$ is such that $F(x)\ne 0$ for $x\in (a,b)$ and
$\left\Vert F\right\Vert _{\infty}<(2\lambda/3)^{3/2}$.
Our first purpose is to find the local extrema of
\begin{equation}
K:=K_{F}:X\rightarrow\mathbb{R},\quad K_{F}(v):=\int_{a}^{b}\theta\cdot\left( H\circ v-Fv\right) \label{r-K}
\end{equation}
on $X=C_{0}[a,b]$ endowed with the norm $\left\Vert \cdot\right\Vert _{p}$,
where $p\in\lbrack1,\infty]$.
First we study the Fr\'{e}chet and G\^{a}teaux differentiability of $K.$
\begin{lemma}
\label{l1}Let $g\in C[a,b]\setminus\{0\}$, $s\in\mathbb{N}^{\ast}\setminus\{1\}$ and $p\in\lbrack1,\infty]$. Then, with $h\in X,$
\[
\lim_{\left\Vert h\right\Vert _{p}\rightarrow0}\frac{1}{\left\Vert
h\right\Vert _{p}}\int_{a}^{b}gh^{s}=0\iff p\geq s.
\]
\end{lemma}
Proof. Set $\gamma:=\left\Vert g\right\Vert _{\infty}$ $(>0)$. For
$s<p<\infty$ and $h\in X$ we have that
\[
\left\vert \int_{a}^{b}gh^{s}\right\vert \leq\gamma\int_{a}^{b}\left\vert
h\right\vert ^{s}\cdot1\leq\gamma\left( \int_{a}^{b}\left( \left\vert
h\right\vert ^{s}\right) ^{p/s}\right) ^{s/p}\left( \int_{a}^{b}
1^{p/(p-s)}\right) ^{(p-s)/p},
\]
and so
\begin{equation}
\left\vert \int_{a}^{b}gh^{s}\right\vert \leq\gamma(b-a)^{(p-s)/p}\left\Vert h\right\Vert _{p}^{s}\quad\forall h\in X.\label{r-gl7}
\end{equation}
The above inequality is true also, as easily seen, for $p=s$ and $p=\infty$
(setting $(p-s)/p:=1$ in the former case); from it we get $\lim_{\left\Vert
h\right\Vert _{p}\rightarrow0}\frac{1}{\left\Vert h\right\Vert _{p}}\int
_{a}^{b}gh^{s}=0$ because $s>1.$
Assume now that $p<s$. Since $g\in C[a,b]\setminus\{0\}$, there
exist $\delta>0$, and $a^{\prime},b^{\prime}\in\lbrack a,b]$ with
$a^{\prime
}<b^{\prime}$ such that $g(x)\geq\delta$ for $x\in\lbrack a^{\prime},b^{\prime}]$ or $g(x)\leq-\delta$ for $x\in\lbrack a^{\prime},b^{\prime}]$.
Doing a translation, we suppose that $a^{\prime}=0$. For $n\in\mathbb{N}^{\ast}$ with $n\geq n_{0}$ $(\geq2/b^{\prime})$ consider
\begin{equation}
h_{n}(x):=\left\{
\begin{array}
[c]{ll}
\alpha_{n}x & \text{if }x\in\lbrack0,1/n],\\
\alpha_{n}(2/n-x) & \text{if }x\in(1/n,2/n),\\
0 & \text{if }x\in\lbrack a,0)\cup\lbrack2/n,b],
\end{array}
\right. \label{r-gl8}
\end{equation}
with $\alpha_{n}:=n^{1+\gamma/p}>0$, where
$\frac{p-1}{s-1}<\gamma<1$.
Clearly, $h_{n}\in X=C_{0}[a,b]$. In this situation
\[
\left\vert \int_{a}^{b}gh_{n}^{s}\right\vert =\int_{0}^{2/n}\left\vert
g\right\vert h_{n}^{s}\geq2\delta\int_{0}^{1/n}(\alpha_{n}x)^{s}dx=2\delta\alpha_{n}^{s}\frac{1}{s+1}\frac{1}{n^{s+1}}=\frac{2\delta}{s+1}n^{\frac{s\gamma-p}{p}},
\]
while a similar argument gives
\[
\left\Vert h_{n}\right\Vert _{p}=\left( 2\alpha_{n}^{p}\frac{1}{p+1}\frac
{1}{n^{p+1}}\right) ^{1/p}=\left( \frac{2}{p+1}\right) ^{1/p}
n^{\frac{\gamma-1}{p}}\rightarrow0.
\]
On the other hand,
\[
\frac{1}{\left\Vert h_{n}\right\Vert _{p}}\left\vert \int_{a}^{b}gh_{n}
^{s}\right\vert \geq\frac{2\delta}{s+1}\left( \frac{p+1}{2}\right)
^{1/p}n^{\frac{\gamma(s-1)-(p-1)}{p}}\rightarrow\infty,
\]
which proves our assertion. The proof is complete. \hfill$\square$
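The closed-form expressions for $\int_{a}^{b}\left\vert h_{n}\right\vert ^{s}$ and $\left\Vert h_{n}\right\Vert _{p}$ used in the proof are easy to confirm by quadrature; the values of $n$, $s$, $p$ and $\gamma$ below are arbitrary admissible choices.

```python
import numpy as np

n_, s, p, gamma = 10, 3, 2, 0.9          # arbitrary admissible parameters
alpha = n_**(1.0 + gamma / p)            # alpha_n = n^(1 + gamma/p)

x = np.linspace(0.0, 2.0 / n_, 400001)   # h_n vanishes outside [0, 2/n]
h = np.where(x <= 1.0 / n_, alpha * x, alpha * (2.0 / n_ - x))
dx = x[1] - x[0]
trap = lambda y: np.sum(0.5 * (y[1:] + y[:-1])) * dx

int_s = trap(np.abs(h)**s)               # should equal 2 alpha^s / ((s+1) n^(s+1))
norm_p = trap(np.abs(h)**p)**(1.0 / p)   # should equal (2/(p+1))^(1/p) n^((gamma-1)/p)

print(np.isclose(int_s, 2.0 * alpha**s / ((s + 1) * n_**(s + 1))))             # True
print(np.isclose(norm_p, (2.0 / (p + 1))**(1.0 / p) * n_**((gamma - 1) / p)))  # True
```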
\begin{proposition}
\label{p1}Let $X=C_{0}[a,b]$ be endowed with the norm $\left\Vert
\cdot\right\Vert _{p}$, where $p\in\lbrack1,\infty]$. Then $K$ is
G\^{a}teaux differentiable; moreover, for $v\in X$, $K$ is Fr\'{e}chet
differentiable at $v$ if and only if $p\geq4$.
\end{proposition}
Proof. Let us set $g_{2}:=\tfrac{1}{2}\theta\left( \tfrac{3}{2}v^{2}-\lambda\right) $, $g_{3}:=\tfrac{1}{2}\theta v$ and $g_{4}:=\tfrac{1}{8}\theta$; of course, $g_{2},g_{3},g_{4}\in C[a,b]$. Set also $\beta
:=\max\{\left\Vert g_{2}\right\Vert _{\infty},\left\Vert g_{3}\right\Vert
_{\infty}\}$.
Observe that for all $v,h\in X$ we have that
\begin{equation}
K(v+h)=K(v)+\int_{a}^{b}\theta\left[ v(\tfrac{1}{2}v^{2}-\lambda)-F\right]
h+\int_{a}^{b}\tfrac{1}{2}\theta\left( \tfrac{3}{2}v^{2}-\lambda\right)
h^{2}+\int_{a}^{b}\tfrac{1}{2}\theta vh^{3}+\int_{a}^{b}\tfrac{1}{8}\theta
h^{4}. \label{r-gl2}
\end{equation}
For $v\in X$ consider
\begin{equation}
T_{v}:X\rightarrow\mathbb{R},\quad T_{v}(h):=\int_{a}^{b}\theta\left[
v(\tfrac{1}{2}v^{2}-\lambda)-F\right] h\quad(h\in X). \label{r-gl6}
\end{equation}
Clearly, $T_{v}$ is a linear operator; $T_{v}$ is also continuous for every
$p\in\lbrack1,\infty]$. Indeed, setting $\gamma_{v}:=\left\Vert \theta\left[
v(\tfrac{1}{2}v^{2}-\lambda)-F\right] \right\Vert _{\infty}\in\mathbb{R}_{+}$
we have that
\[
\left\vert T_{v}(h)\right\vert \leq\gamma_{v}\int_{a}^{b}\left\vert
h\right\vert \leq\gamma_{v}\left\Vert h\right\Vert _{p}\cdot\left\Vert
1\right\Vert _{p^{\prime}}=\gamma_{v}(b-a)^{1/p^{\prime}}\left\Vert
h\right\Vert _{p}\quad\forall h\in X
\]
for $p,p^{\prime}\in\lbrack1,\infty]$ with $p^{\prime}$ the
conjugate of $p,$ that is $p^{\prime}:=p/(p-1)$ for
$p\in(1,\infty)$, $p^{\prime}:=\infty$ for $p=1$ and $p^{\prime}:=1$
for $p=\infty$. Hence $T_{v}$ is continuous.
Let $p\in\lbrack1,\infty]$ and $v\in X$ be fixed. Using (\ref{r-gl2}) we have
that
\[
\left\vert \frac{K(v+h)-K(v)-T_{v}(h)}{\left\Vert h\right\Vert _{p}}\right\vert \leq\frac{1}{\left\Vert h\right\Vert _{p}}\left( \left\vert \int_{a}^{b}g_{2}h^{2}\right\vert +\left\vert \int_{a}^{b}g_{3}h^{3}\right\vert +\left\vert \int_{a}^{b}g_{4}h^{4}\right\vert \right)
\]
for $h\neq0$. Using Lemma \ref{l1} for $p\geq4$, we obtain that $\lim_{\left\Vert h\right\Vert _{p}\rightarrow0}\frac{K(v+h)-K(v)-T_{v}(h)}{\left\Vert h\right\Vert _{p}}=0$. Hence $K$ is Fr\'{e}chet differentiable
at $v.$
Assume now that $p<4$. Using again (\ref{r-gl2}) we have that
\[
K(v+h)-K(v)-T_{v}(h)\geq\tfrac{\mu}{8}\int_{a}^{b}h^{4}-\beta\int_{a}^{b}\left\vert h\right\vert ^{3}-\beta\int_{a}^{b}h^{2}\quad\forall h\in X.
\]
Take $a=a^{\prime}=0<b^{\prime}=b$ (possible after a translation), $\alpha_{n}:=n^{1+\gamma/p}$ with $\frac{p-1}{3}<\gamma<1$ and $h:=h_{n}$ defined by
(\ref{r-gl8}). Using the computations from the proof of Lemma \ref{l1}, we
get
\begin{equation}
\int_{a}^{b}\left\vert h_{n}\right\vert ^{s}=2\int_{0}^{1/n}(\alpha_{n}x)^{s}dx=\frac{2}{s+1}n^{\frac{s\gamma-p}{p}},\quad\left\Vert h_{n}\right\Vert_{p}=\left( \frac{2}{p+1}\right) ^{1/p}\frac{1}{n^{(1-\gamma)/p}}\rightarrow0, \label{r-gl11}
\end{equation}
whence
\begin{align*}
\frac{K(v+h_{n})-K(v)-T_{v}(h_{n})}{\left\Vert h_{n}\right\Vert _{p}} &
\textstyle\geq\left( \frac{p+1}{2}\right) ^{1/p}n^{\frac{1-\gamma}{p}}\left( \tfrac{\mu}{8}\cdot\tfrac{2}{5}n^{\frac{4\gamma-p}{p}}-\tfrac{2}{4}\beta n^{\frac{3\gamma-p}{p}}-\tfrac{2}{3}\beta n^{\frac{2\gamma-p}{p}}\right) \\
& \textstyle=\left( \frac{p+1}{2}\right) ^{1/p}n^{\frac{1-p+3\gamma}{p}}\left( \tfrac{\mu}{20}-\tfrac{1}{2}\beta n^{-\frac{\gamma}{p}}-\tfrac{2}{3}\beta n^{-\frac{2\gamma}{p}}\right) \rightarrow\infty.
\end{align*}
This shows that $K$ is not Fr\'{e}chet differentiable at $v.$
Because $K:(X,\left\Vert \cdot\right\Vert _{\infty})\rightarrow\mathbb{R}$ is
Fr\'{e}chet differentiable at $v\in X$, it follows that
\begin{equation}
\lim_{t\rightarrow0}\frac{K(v+th)-K(v)}{t}=T_{v}(h)\in\mathbb{R}\quad\forall
h\in X. \label{r-gl5}
\end{equation}
Because $T_{v}:(X,\left\Vert \cdot\right\Vert
_{p})\rightarrow\mathbb{R}$ is linear and continuous, it follows
that $K$ is G\^{a}teaux differentiable at $v$ for every
$p\in\lbrack1,\infty]$ with $\nabla K(v)=T_{v}$. \hfill$\square$
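The identification $\nabla K(v)=T_{v}$ can be probed numerically: the difference quotient $(K(v+th)-K(v))/t$ should approach $T_{v}(h)$ as $t\to0$. All concrete functions below ($\theta$, $F$, $v$, $h$) are arbitrary test choices consistent with the standing assumptions.

```python
import numpy as np

lam = 1.0
x = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]
trap = lambda y: np.sum(0.5 * (y[1:] + y[:-1])) * dx

H = lambda y: 0.5 * (0.5 * y**2 - lam)**2
theta = 2.0 + np.cos(x)                        # theta > 0 on [a, b]
F = 0.1 * np.sin(np.pi * x)**2                 # F(a) = F(b) = 0
v = 0.3 * np.sin(np.pi * x)                    # v in C_0[a, b]
h = x * (1.0 - x)                              # direction h in C_0[a, b]

K = lambda w: trap(theta * (H(w) - F * w))     # the functional K_F
T_v = trap(theta * (v * (0.5 * v**2 - lam) - F) * h)

for t in (1e-2, 1e-3, 1e-4):
    print(abs((K(v + t * h) - K(v)) / t - T_v))  # shrinks roughly like O(t)
```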
\medskip
We consider now the problem of finding the stationary points of $K$,
that is those points $v\in X$ with $T_{v}=0.$
\begin{proposition}
\label{p2}The functional $K$ has only one stationary point
$\overline{v}$. More precisely, for each $x\in\lbrack a,b]$,
$\overline{v}(x)$ is the unique solution from
$(-\sqrt{2\lambda/3},\sqrt{2\lambda/3})$ of the equation
$z(\tfrac{1}{2}z^{2}-\lambda)=F(x)$.
\end{proposition}
Proof. Assume that $v\in X$ is stationary; hence
$T_{v}h=\int_{a}^{b}Vh=0$ for every $h\in X$, where $V:=\theta\left[ v(\tfrac{1}{2}v^{2}-\lambda)-F\right] $ $(\in X\subset C[a,b])$. We claim
that $V=0$. In the contrary case, since $V$ is continuous, there
exists $x_{0}\in(a,b)$ with $V(x_{0})\neq0$. Suppose that
$V(x_{0})>0$. By the continuity of $V$ there exist $a^{\prime},b^{\prime}\in\mathbb{R}$ such that $a<a^{\prime}<x_{0}<b^{\prime}<b$ and $V(x)>0$ for
every $x\in\lbrack a^{\prime},b^{\prime}]$. Take
\[
\overline{h}:[a,b]\rightarrow\mathbb{R},\quad\overline{h}(x):=\left\{
\begin{array}
[c]{ll}
\frac{x-a^{\prime}}{b^{\prime}-a^{\prime}} & \text{if }x\in(a^{\prime},\tfrac{1}{2}(a^{\prime}+b^{\prime})],\\
\frac{b^{\prime}-x}{b^{\prime}-a^{\prime}} & \text{if }x\in(\tfrac{1}{2}(a^{\prime}+b^{\prime}),b^{\prime}],\\
0 & \text{if }x\in\lbrack a,a^{\prime}]\cup(b^{\prime},b].
\end{array}
\right.
\]
Then $\overline{h}\in X$ and $\overline{h}(x)>0$ for $x\in(a^{\prime},b^{\prime})$. Since $0=\int_{a}^{b}V\overline{h}=\int_{a^{\prime}}^{b^{\prime}}V\overline{h}$ and $V\overline{h}$ is continuous and nonnegative
on $[a^{\prime},b^{\prime}]$ we obtain that $V(x)\overline{h}(x)=0$ for
$x\in\lbrack a^{\prime},b^{\prime}]$, and so $0=V(x_{0})\overline{h}(x_{0})>0$. This contradiction shows that $V=0$. The proof in the case $V(x_{0})<0$
reduces to the preceding one replacing $V$ by $-V$. Hence, since $\theta>0$,
\begin{equation}
v(\tfrac{1}{2}v^{2}-\lambda)=F~~\text{on~~}[a,b]. \label{r-gl3}
\end{equation}
Consider the polynomial function $G:\mathbb{R}\rightarrow\mathbb{R}$ defined
by $G(z):=z\left( \tfrac{1}{2}z^{2}-\lambda\right) $. Then $G^{\prime
}(z)=\tfrac{3}{2}z^{2}-\lambda$ having the zeros $\pm\kappa$, where
\begin{equation}
\kappa:=\sqrt{2\lambda/3}. \label{r-k}
\end{equation}
The behavior of $G$ is given in the table below.
\begin{center}
{\footnotesize
\begin{tabular}
[c]{c|ccccccccccccccccc}
$z$ & $-\infty$ & & $-2\kappa$ & & $-\sqrt{3}\kappa$ & & $-\kappa$ & & $0$
& & $\kappa$ & & $\sqrt{3}\kappa$ & & $2\kappa$ & & $\infty$\\\hline
$G^{\prime}(z)$ & & $+$ & & $+$ & & $+$ & $0$ & $-$ & & $-$ & $0$ & $+$ &
& $+$ & & $+$ & \\\hline
$G(z)$ & $-\infty$ & $\nearrow$ & $-\kappa^{3}$ & $\nearrow$ & $0$ &
$\nearrow$ & $\kappa^{3}$ & $\searrow$ & $0$ & $\searrow$ & $-\kappa^{3}$ &
$\nearrow$ & $0$ & $\nearrow$ & $\kappa^{3}$ & $\nearrow$ & $\infty$
\end{tabular}
}
\end{center}
This table shows that the equation $G(z)=A$ with $A\in(-\sqrt{8\lambda^{3}/27},\sqrt{8\lambda^{3}/27})$ has three real solutions, more precisely,
\begin{equation}
z_{1}(A)\in(-2\kappa,-\kappa),\quad z_{2}(A)\in(-\kappa,\kappa),\quad
z_{3}(A)\in(\kappa,2\kappa). \label{r-gl4}
\end{equation}
Moreover, the mappings $z_{i}:(-\kappa^{3},\kappa^{3})\rightarrow\mathbb{R}$
are continuous with $z_{1}(0)=-\sqrt{3}\kappa$, $z_{2}(0)=0$, $z_{3}(0)=\sqrt{3}\kappa$. This shows that $z_{i}\circ F\in X$ if and only if $i=2$,
and so the only solution in $X$ of the equation $v(\tfrac{1}{2}v^{2}-\lambda)=F$ is $\overline{v}:=z_{2}\circ F$. \hfill$\square$
\medskip
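Proposition \ref{p2} is easy to test numerically: for a sample $\lambda$ and a value $A=F(x)$ in the admissible range, the cubic $G(z)=A$ has exactly one root in $(-\sqrt{2\lambda/3},\sqrt{2\lambda/3})$. The sketch below (the values of $\lambda$ and $A$ are arbitrary admissible choices) is a sanity check only, not part of the proof.

```python
import numpy as np

lam = 1.0                          # arbitrary lambda > 0
kappa = np.sqrt(2 * lam / 3)
A = 0.5 * kappa**3                 # a value of F(x) in (-kappa^3, kappa^3)

# Roots of G(z) = z*(z^2/2 - lam) = A, i.e. 0.5*z^3 - lam*z - A = 0
roots = np.sort(np.roots([0.5, 0.0, -lam, -A]).real)

# Exactly one root lies in (-kappa, kappa): this is z_2(A)
inside = [z for z in roots if -kappa < z < kappa]
assert len(inside) == 1

# The other two roots fall in (-2*kappa, -kappa) and (kappa, 2*kappa)
assert -2 * kappa < roots[0] < -kappa
assert kappa < roots[2] < 2 * kappa
```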
Let us analyze if $\overline{v}:=z_{2}\circ F$ is a local extremum of $K.$
\begin{proposition}
\label{p3}Let $\overline{v}\in X$ be the stationary point of $K$.
Then $\overline{v}:=z_{2}\circ F$ [with $z_{2}$ defined in
(\ref{r-gl4})] is a local maximizer for $K$ with respect to
$\left\Vert \cdot\right\Vert _{\infty }$, and $\overline{v}$ is not
a local extremum point of $K$ with respect to $\left\Vert
\cdot\right\Vert _{p}$ for $p\in\lbrack1,4).$
\end{proposition}
Proof. Let us consider first the case $p=\infty$. From (\ref{r-gl2}) we get
\begin{equation}
K(\overline{v}+h)-K(\overline{v})=\int_{a}^{b}\theta\left[ \tfrac{1}{2}\left( \tfrac{3}{2}\overline{v}^{2}-\lambda\right) +\tfrac{1}{2}\overline{v}h+\tfrac{1}{8}h^{2}\right] h^{2}\quad\forall h\in X.
\label{r-gl10}
\end{equation}
Since $F\in C[a,b]$, there exists some $x_{0}\in\lbrack a,b]$ such
that $\left\Vert F\right\Vert _{\infty}=\left\vert
F(x_{0})\right\vert <(2\lambda/3)^{3/2}$, and so $\left\vert
\overline{v}(x)\right\vert \leq\left\vert
\overline{v}(x_{0})\right\vert =:\gamma<\sqrt{2\lambda/3}$ for
$x\in\lbrack a,b]$. It follows that $\tfrac{1}{2}\left( \tfrac{3}{2}\overline{v}^{2}-\lambda\right) \leq\tfrac{1}{2}\left( \tfrac{3}{2}\gamma^{2}-\lambda\right) =:-\eta<\tfrac{1}{2}\left( \tfrac{3}{2}\frac{2\lambda}{3}-\lambda\right) =0$. Hence
\begin{equation}
\tfrac{1}{2}\left( \tfrac{3}{2}\overline{v}^{2}-\lambda\right) +\tfrac{1}{2}\overline{v}h+\tfrac{1}{8}h^{2}\leq-\eta+\tfrac{1}{2}\gamma\left\Vert h\right\Vert _{\infty}+\tfrac{1}{8}\left\Vert h\right\Vert _{\infty}^{2}<0\quad\forall h\in X,~\left\Vert h\right\Vert _{\infty}<\varepsilon,
\label{r-gl9}
\end{equation}
where $\varepsilon:=2\big(\sqrt{\gamma^{2}+2\eta}-\gamma\big)$. It follows
that $\overline{v}$ is a (strict) local maximizer of $K.$
Assume now that $p\in\lbrack1,4)$. Of course, there exists a sequence
$(h_{n})_{n\geq1}\subset X\setminus\{0\}$ such that $\left\Vert h_{n}\right\Vert _{\infty}\rightarrow0$. Taking into account
(\ref{r-gl9}), we have that $K(\overline{v}+h_{n})<K(\overline{v})$
for large $n$. Since $\left\Vert h_{n}\right\Vert _{p}\rightarrow0$,
$\overline{v}$ is not a local minimizer of $K$ with respect to
$\left\Vert \cdot\right\Vert _{p}$. In the proof of Proposition
\ref{p1} we found a sequence $(h_{n})_{n\geq1}\subset
X\setminus\{0\}$ such that $\left\Vert h_{n}\right\Vert
_{p}\rightarrow0$ and
$\left\Vert h_{n}\right\Vert _{p}^{-1}\left( K(\overline{v}+h_{n})-K(\overline{v})-T_{\overline{v}}h_{n}\right) \rightarrow\infty$.
Since $T_{\overline{v}}=0$, we obtain that
$K(\overline{v}+h_{n})-K(\overline{v})>0$ for large $n$, proving
that $\overline{v}$ is not a local maximizer of $K$. Hence
$\overline{v}$ is not a local extremum point of $K$. \hfill$\square$
\medskip
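The threshold $\varepsilon=2\big(\sqrt{\gamma^{2}+2\eta}-\gamma\big)$ used in (\ref{r-gl9}) is exactly the positive root of $-\eta+\tfrac{1}{2}\gamma t+\tfrac{1}{8}t^{2}=0$; this is easy to verify symbolically (the concrete values $\gamma=\eta=1$ in the sign check are arbitrary):

```python
import sympy as sp

t, gamma, eta = sp.symbols('t gamma eta', positive=True)

quad = -eta + gamma * t / 2 + t**2 / 8
eps = 2 * (sp.sqrt(gamma**2 + 2 * eta) - gamma)

# eps is exactly the positive root of the quadratic bound in (r-gl9)
assert sp.expand(quad.subs(t, eps)) == 0

# and the quadratic is negative strictly below it (sample: gamma = eta = 1)
sample = quad.subs(t, eps / 2)
assert sample.subs({gamma: 1, eta: 1}).evalf() < 0
```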
We do not know whether $\overline{v}$ is a local maximizer of $K$ for $p\in\lbrack4,\infty)$; however, in view of (\ref{r-gl9}), $\overline{v}$ is certainly not a local minimizer of $K.$
Proposition \ref{p3} shows the importance of the norm (and more generally, of
the topology) on a space when speaking about local extrema.
\medskip
Let us establish now the relations between the local extrema of $J$
with the constraints (6) and (7) in \cite{GaoLu:16}, that is local
extrema of $J$ restricted to $C_{1,0}[a,b]$, and the local extrema
of $K$ in the case in which $C^{1}[a,b]$ is endowed with the (usual)
norm defined by
\begin{equation}
\left\Vert u\right\Vert :=\left\Vert u\right\Vert _{\infty}+\left\Vert
u^{\prime}\right\Vert _{\infty}\quad(u\in C^{1}[a,b]), \label{r-gl13}
\end{equation}
and $C_{0}[a,b]$ is endowed with the norm $\left\Vert \cdot\right\Vert
_{\infty}.$
\begin{proposition}
\label{p4}Consider the norm $\left\Vert \cdot\right\Vert $ (defined
in (\ref{r-gl13})) on $C^{1}[a,b]$ and the norm $\left\Vert
\cdot\right\Vert _{\infty}$ on $C_{0}[a,b]$. If $\overline{u}$ is a
local minimizer (maximizer) of $J$ on $C_{1,0}[a,b]$, then
$\overline{u}^{\prime}$ is a local minimizer (maximizer) of $K$.
Conversely, if $\overline{v}$ is a local minimizer (maximizer) of
$K$, then $\overline{u}\in C^{1}[a,b]$ defined by $\overline
{u}(x):=u_{0}+\int_{a}^{x}\overline{v}(\xi)d\xi$ for $x\in\lbrack
a,b]$ and a fixed $u_{0}\in\mathbb{R}$ is a local minimizer
(maximizer) of $J$ on $C_{1,0}[a,b].$
\end{proposition}
Proof. Assume that $\overline{u}$ is a local minimizer of $J$ on
$C_{1,0}[a,b]$; hence $\overline{u}\in C_{1,0}[a,b]$. It follows that there
exists $r>0$ such that $J(\overline{u})\leq J(u)$ for every $u\in
C_{1,0}[a,b]$ with $\left\Vert u-\overline{u}\right\Vert <r$. Set
$\overline{v}:=\overline{u}^{\prime}$ and take $v\in X=C_{0}[a,b]$ with
$\left\Vert v-\overline{v}\right\Vert _{\infty}<r^{\prime}:=r/(1+b-a)$. Define
$u:[a,b]\rightarrow\mathbb{R}$ by $u(x):=\overline{u}(a)+\int_{a}^{x}v(\xi)d\xi$ for $x\in\lbrack a,b]$. Then $u\in C_{1,0}[a,b]$ and
$u^{\prime }=v$. Since
$\overline{u}(x)=\overline{u}(a)+\int_{a}^{x}\overline{v}(\xi
)d\xi$, we get
\[
\left\Vert u-\overline{u}\right\Vert =\left\Vert u-\overline{u}\right\Vert _{\infty}+\left\Vert u^{\prime}-\overline{u}^{\prime}\right\Vert _{\infty}\leq(b-a)\left\Vert v-\overline{v}\right\Vert _{\infty}+\left\Vert v-\overline{v}\right\Vert _{\infty}<r^{\prime}(1+b-a)=r.
\]
Hence $K(\overline{v})=J(\overline{u})\leq J(u)=K(v)$. This shows that
$\overline{v}$ is a local minimizer for $K$.
Conversely, assume that $\overline{v}$ is a local minimizer for $K$.
Then there exists $r>0$ such that $K(\overline{v})\leq K(v)$ for
$v\in C_{0}[a,b]$ with $\left\Vert v-\overline{v}\right\Vert _{\infty}<r$,
and take $u_{0}\in\mathbb{R}$ and
$\overline{u}:[a,b]\rightarrow\mathbb{R}$ defined by $\overline
{u}(x):=u_{0}+\int_{a}^{x}\overline{v}(\xi)d\xi$ for $x\in\lbrack
a,b]$. Then $\overline{u}\in C_{1,0}[a,b]$. Consider $u\in
C_{1,0}[a,b]$ with $\left\Vert u-\overline{u}\right\Vert <r$, that
is
\[
\left\Vert u-\overline{u}\right\Vert _{\infty}+\left\Vert u^{\prime}-\overline{u}^{\prime}\right\Vert _{\infty}=\left\Vert u-\overline{u}\right\Vert _{\infty}+\left\Vert u^{\prime}-\overline{v}\right\Vert _{\infty}<r;
\]
then $\left\Vert u^{\prime}-\overline{v}\right\Vert _{\infty}<r$.
Since $u^{\prime}\in C_{0}[a,b]$, it follows that
$J(u)=K(u^{\prime})\geq K(\overline{v})=J(\overline{u})$, and so
$\overline{u}$ is a local minimizer of $J$ on $C_{1,0}[a,b]$. The
case of local maximizers for $J$ and $K$ is treated similarly.
\hfill$\square$
\medskip
Putting together Propositions \ref{p2}, \ref{p3} and \ref{p4} we get the next result.
\begin{theorem}
\label{t-gl}Consider the norm $\left\Vert \cdot\right\Vert $ (defined in
(\ref{r-gl13})) on $C^{1}[a,b]$ and the norm $\left\Vert \cdot\right\Vert
_{\infty}$ on $C_{0}[a,b]$. Let $\overline{u}\in C_{1,0}[a,b]$ and set
$\overline{v}:=\overline{u}^{\prime}$. Then the following assertions are equivalent:
\emph{(i)} $\overline{u}$ is a local maximum point of $J$ restricted to
$C_{1,0}[a,b].$
\emph{(ii)} $\overline{u}$ is a local extremum point of $J$ restricted to
$C_{1,0}[a,b].$
\emph{(iii)} $\overline{v}$ is a stationary point of $K.$
\emph{(iv)} $\overline{v}$ is a local extremum point of $K.$
\emph{(v)} $\overline{v}$ is a local maximum point of $K.$
\emph{(vi)} $\overline{v}=z_{2}\circ F$, where $z_{2}(A)$ is the
unique solution of the equation $z\left(
\tfrac{1}{2}z^{2}-\lambda\right) =A$ in the interval
$(-\sqrt{2\lambda/3},\sqrt{2\lambda/3})$ for $A\in(-(2\lambda/3)^{3/2},(2\lambda/3)^{3/2}).$
\emph{(vii)} there exists $u_{0}\in\mathbb{R}$ such that $\overline
{u}(x)=u_{0}+\int_{a}^{x}z_{2}(F(\rho))d\rho$ for every $x\in\lbrack a,b].$
\end{theorem}
\section{Discussion of Theorem 1.1 from Gao and Lu's paper \cite{GaoLu:16}}
First of all, we think that in the formulation of \cite[Th.
1.1]{GaoLu:16}, \textquotedblleft local extrema for the nonconvex
functional (2)\textquotedblright\ must be replaced by
\textquotedblleft local extrema for the nonconvex functional (2)
with the constraints (6) and (7)\textquotedblright,
\textquotedblleft local minimizer for the nonconvex functional
(2)\textquotedblright\ must be replaced by \textquotedblleft local
minimizer for the nonconvex functional (2) with the constraints (6)
and (7)\textquotedblright\ (twice), and \textquotedblleft local
maximizer for the nonconvex functional (2)\textquotedblright\ must
be replaced by \textquotedblleft local maximizer for the nonconvex
functional (2) with the constraints (6) and (7)\textquotedblright.
Below, we interpret \cite[Th. 1.1]{GaoLu:16} with these
modifications.
As pointed out in the Introduction, no norms are considered on the spaces
mentioned in \cite{GaoLu:16}. For this reason in Theorem \ref{t-gl}
we considered the usual norms on $C^{1}[a,b]$ and $C_{0}[a,b]$;
these norms are used in this discussion. Moreover, let
$\theta(x):=1$ for $x\in\lbrack a,b]$ in Theorem \ref{t-gl} and
$\nu=1$ in \cite[Th. 1.1]{GaoLu:16}. Under the conditions of \cite[Th.
1.1]{GaoLu:16}, either $F(x)>0$ for $x\in(a,b)$ or $F(x)<0$ for $x\in(a,b).$
For the present discussion we take the case $F>0$ on $(a,b).$
Assume that the mappings
\begin{equation}
\rho\mapsto F(\rho)/E_{j}^{-1}\left( F^{2}(\rho)\right) =:v_{j}(\rho)
\label{r-vj}
\end{equation}
[where \textquotedblleft$E_{3}^{-1}(A)\leq E_{2}^{-1}(A)\leq E_{1}^{-1}(A)$
stand for the three real-valued roots for the equation $E(y)=A$\textquotedblright\ with $E(y)=2y^{2}(y+\lambda)$ and $A\in\lbrack
0,8\lambda^{3}/27)$] are well defined for $\rho\in\{a,b\}$ [there are no
problems for $\rho\in(a,b)$].
If \cite[Th. 1.1]{GaoLu:16} is true, then $v_{1},v_{2},v_{3}\in
C_{0}[a,b]$; moreover, $v_{1}$ and $v_{2}$ are local minimizers of
$K$, and $v_{3}$ is a local maximizer of $K$. This is of course
\textbf{false} taking into account Theorem \ref{t-gl}, because $K$
has no local minimizers.
Because $z_{2}\circ F$ is the unique local maximizer of $K$, we must have that
$v_{3}=z_{2}\circ F$. Let us see if this is true. Because $z_{i}(A)$ are
solutions of the equation $G(z)=A$ and $E_{j}^{-1}(A)$ are solutions of the
equation $E(y)=A$, we must study the relationships among these numbers.
First, the behavior of $E$ is given in the next table.
\begin{center}
{\small
\begin{tabular}
[c]{c|ccccccccccc}
$y$ & $-\infty$ & & $-\lambda$ & & $-\frac{2\lambda}{3}$ & & $0$ & &
$\frac{\lambda}{3}$ & & $\infty$\\\hline
$E^{\prime}(y)$ & & $+$ & & $+$ & $0$ & $-$ & $0$ & $+$ & & $+$ & \\\hline
$E(y)$ & $-\infty$ & $\nearrow$ & $0$ & $\nearrow$ & $\frac{8\lambda^{3}}{27}$
& $\searrow$ & $0$ & $\nearrow$ & $\frac{8\lambda^{3}}{27}$ & $\nearrow$ &
$\infty$
\end{tabular}
}
\end{center}
Secondly, for $y,z,A\in\mathbb{C}\setminus\{0\}$ such that $yz=A$ we have
that
\begin{equation}
G(z)=A\Leftrightarrow\frac{A}{y}\left( \frac{1}{2}\frac{A^{2}}{y^{2}}-\lambda\right) =A\Leftrightarrow2y^{2}(y+\lambda)=A^{2}\Leftrightarrow E(y)=A^{2}. \label{r-gl12}
\end{equation}
Analyzing the behavior of $G$ and $E$ (recall that $\kappa=\sqrt{2\lambda/3}$), and the relation $yz=A$ for $A\neq0$ (mentioned above), the correspondence
among the solutions of the equations $G(z)=A$ and $E(y)=A^{2}$ for
$A\in(0,(2\lambda/3)^{3/2})$ is:
\begin{equation}
z_{1}(A)=A/E_{2}^{-1}(A^{2}),\quad z_{2}(A)=A/E_{3}^{-1}(A^{2}),\quad
z_{3}(A)=A/E_{1}^{-1}(A^{2}) \label{r-ze}
\end{equation}
for all $A\in(0,\left( 2\lambda/3\right) ^{3/2})$. This shows that
only the third assertion of \cite[Th. 1.1]{GaoLu:16} is true (of
course, considering the norm defined in (\ref{r-gl13}) on
$C^{1}[a,b]$).
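The correspondence (\ref{r-ze}) is also easy to confirm numerically; the sketch below (with arbitrary admissible $\lambda$ and $A$, and $\nu=1$ as in this discussion) compares the sorted roots of $G(z)=A$ with those of $E(y)=A^{2}$:

```python
import numpy as np

lam = 1.0                              # lambda (with nu = 1)
kappa = np.sqrt(2 * lam / 3)
A = 0.5 * kappa**3                     # A in (0, (2*lam/3)^{3/2})

# Roots of G(z) = z*(z^2/2 - lam) = A, sorted: z_1 < z_2 < z_3
z1, z2, z3 = np.sort(np.roots([0.5, 0.0, -lam, -A]).real)

# Roots of E(y) = 2*y^2*(y + lam) = A^2, sorted: E_3^{-1} <= E_2^{-1} <= E_1^{-1}
E3, E2, E1 = np.sort(np.roots([2.0, 2 * lam, 0.0, -A**2]).real)

# The correspondence (r-ze)
assert np.isclose(z1, A / E2)
assert np.isclose(z2, A / E3)
assert np.isclose(z3, A / E1)
```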
\section{Discussion of Theorem 1.1 from Lu and Gao's paper \cite{LuGao:16}}
A similar problem to that in \cite{GaoLu:16}, discussed above, is
considered in \cite{LuGao:16}. In the abstract of this paper one
finds:
\textquotedblleft In comparison with the 1D case discussed by D. Gao and R.
Ogden, there exists huge difference in higher dimensions, which will be
explained in the theorem\textquotedblright.
More precisely, in \cite{LuGao:16} it is said:
\textquotedblleft In this paper, we consider the fourth-order polynomial
defined by
$H(|\vec{\gamma}|):=\nu/2\left(
1/2|\vec{\gamma}|^{2}-\lambda\right) ^{2},$
$\vec{\gamma}\in\mathbb{R}^{n}$, $\nu,\lambda>0$ are constants,
$|\vec{\gamma }|^{2}=\vec{\gamma}\cdot\vec{\gamma}$.
...
The purpose of this paper is to find the extrema of the following nonconvex
total potential energy functional in higher dimensions,
(1) $I[u]:=\int_{\Omega}\left( H(|\nabla u|)-fu\right) dx,$
\noindent where $\Omega=$Int$\left\{ \mathbb{B}(O,R_{1})\setminus
\mathbb{B}(O,R_{2})\right\} $, $R_{1}>R_{2}>0,$
$\mathbb{B}(O,R_{1})$ and $\mathbb{B}(O,R_{2})$ denote two open
balls with center $O$ and radii $R_{1}$ and $R_{2}$ in the Euclidean
space $\mathbb{R}^{n}$, respectively. \textquotedblleft
Int\textquotedblright\ denotes the interior points. In
addition, let $\Sigma_{1}:=\{x:|x|=R_{1}\}$, and $\Sigma_{2}:=\{x:|x|=R_{2}\}$, then the boundary $\partial\Omega=\Sigma_{1}\cup\Sigma_{2}$. The radially
symmetric function $f\in C(\overline{\Omega})$ satisfies the normalized
balance condition
(2) $\int_{\Omega}f(|x|)dx=0$,
\noindent and
(3) $f(|x|)=0$ if and only if $|x|=R_{3}\in(R_{2},R_{1})$.
\noindent Moreover, its $L^{1}$-norm is sufficiently small such that
(4) $\left\Vert f\right\Vert _{L^{1}(\Omega)}<4\lambda\nu R_{2}^{n-1}\sqrt{2\lambda\pi^{n}}/(3\sqrt{3}\Gamma(n/2)),$
\noindent where $\Gamma$ stands for the Gamma function. This assumption is
reasonable since large $\left\Vert f\right\Vert _{L^{1}(\Omega)}$ may possibly
lead to instant fracture. The deformation $u$ is subject to the following
three constraints,
(5) $u$ is radially symmetric on $\overline{\Omega}$,
(6) $u\in W^{1,\infty}(\Omega)\cap C(\overline{\Omega})$,
(7) $\nabla u\cdot\vec{n}=0$ on both $\Sigma_{1}$ and $\Sigma_{2}$,
\noindent where $\vec{n}$ denotes the unit outward normal on $\partial\Omega$.
By variational calculus, one derives a correspondingly nonlinear
Euler--Lagrange equation for the primal nonconvex functional, namely,
(8) $\operatorname*{div}\left( \nabla H(|\nabla u|)\right) +f=0$ in $\Omega$,
\noindent equipped with the Neumann boundary condition (7). Clearly, (8) is a
highly nonlinear partial differential equation which is difficult to solve by
the direct approach or numerical method [2, 15]. However, by the canonical
duality method, one is able to demonstrate the existence of solutions for this
type of equations.
...
Before introducing the main result, we denote
$F(r):=-1/r^{n}\int_{R_{2}}^{r}f(\rho)\rho^{n-1}d\rho,~~r\in\lbrack
R_{2},R_{1}]$.
\noindent Next, we define a polynomial of third order as follows,
$E(y):=2y^{2}(\lambda+y/\nu),~~y\in\lbrack-\nu\lambda,+\infty)$.
\noindent Furthermore, for any $A\in\lbrack0,8\lambda^{3}\nu^{2}/27)$,
$E_{3}^{-1}(A)\leq E_{2}^{-1}(A)\leq E_{1}^{-1}(A)$
\noindent stand for the three real-valued roots for the equation $E(y)=A$.
At the moment, we would like to introduce the theorem of multiple extrema for
the nonconvex functional (2).
\textbf{Theorem 1.1.} For any radially symmetric function $f\in C(\overline
{\Omega})$ satisfying (2)--(4), we have three solutions for the nonlinear
Euler--Lagrange equation (8) equipped with the Neumann boundary condition, namely
\textbullet\ For any $r\in\lbrack R_{2},R_{1}]$, $\overline{u}_{1}$ defined
below is a local minimizer for the nonconvex functional (2),
\noindent$(9)\quad\overline{u}_{1}(\left\vert x\right\vert )=\overline{u}_{1}(r):=\int_{R_{2}}^{r}F(\rho)\rho/E_{1}^{-1}(F^{2}(\rho)\rho^{2})d\rho+C_{1},~~\forall C_{1}\in\mathbb{R}$.
\textbullet\ For any $r\in\lbrack R_{2},R_{1}]$, $\overline{u}_{2}$ defined
below is a local minimizer for the nonconvex functional (2) in 1D. While for
the higher dimensions $n\geq2$, $\overline{u}_{2}$ is not necessarily a local
minimizer for (2) in comparison with the 1D case.
\noindent$(10)\quad\overline{u}_{2}(\left\vert x\right\vert )=\overline{u}_{2}(r):=\int_{R_{2}}^{r}F(\rho)\rho/E_{2}^{-1}(F^{2}(\rho)\rho^{2})d\rho+C_{2},~~\forall C_{2}\in\mathbb{R}$.
\textbullet\ For any $r\in\lbrack R_{2},R_{1}]$, $\overline{u}_{3}$ defined
below is a local maximizer for the nonconvex functional (2),
\noindent$(11)\quad\overline{u}_{3}(\left\vert x\right\vert )=\overline{u}_{3}(r):=\int_{R_{2}}^{r}F(\rho)\rho/E_{3}^{-1}(F^{2}(\rho)\rho^{2})d\rho+C_{3},~~\forall C_{3}\in\mathbb{R}$.
...
In the final analysis, we apply the canonical duality theory to prove Theorem
1.1.\textquotedblright
\medskip
First, observe that one must have (1) instead of (2) just before the
statement of \cite[Th.~1.1]{LuGao:16}, as well as in its statement,
except for (2)--(4). Secondly, even from the quoted texts, one observes
that the wording in \cite{GaoLu:16} and \cite{LuGao:16} is almost the
same; the mathematical content is very similar, too.
\medskip
To avoid any confusion, in the sequel the Euclidean norm on $\mathbb{R}^{n}$
will be denoted by $\left\vert \cdot\right\vert _{n}$ instead of $\left\vert
\cdot\right\vert .$
Remark that it is said $f\in C(\overline{\Omega})$, which implies
$f$ is applied to elements $x\in\overline{\Omega}$, while a line
below one considers $f(\left\vert x\right\vert )$ (that is
$f(\left\vert x\right\vert _{n})$ with our notation); because the
(Euclidean) norm $\left\vert x\right\vert _{n}$ of
$x\in\overline{\Omega}$ belongs to $[R_{2},R_{1}]$, writing
$f(\left\vert x\right\vert )$ shows that
$f:[R_{2},R_{1}]\rightarrow\mathbb{R}$. Of course, this creates
ambiguities. Probably the authors wished to say that a function
$g:\overline{\Omega}\rightarrow\mathbb{R}$ is radially symmetric if
there exists $\psi:[R_{2},R_{1}]\rightarrow\mathbb{R}$ such that
$g(x)=\psi (\left\vert x\right\vert _{n})$ for every
$x\in\overline{\Omega}$, that is $g=\psi\circ\left\vert
\cdot\right\vert _{n}$ on $\overline{\Omega}$; observe that $\psi$
is continuous if and only if $\psi\circ\left\vert \cdot\right\vert
_{n}$ is continuous. Because also the functions $u$ in the
definition of $I$
are asked to be radially symmetric on $\overline{\Omega}$ (see \cite[(5)]{LuGao:16}), it is useful to observe that for a Riemann integrable
function $\psi:[R_{2},R_{1}]\rightarrow\mathbb{R}$, using the usual
spherical change of variables, we have that
\begin{equation}
\int_{\Omega}\left( \psi\circ\left\vert \cdot\right\vert _{n}\right)
(x)dx=\frac{2\pi^{n/2}}{\Gamma(n/2)}\cdot\int_{R_{2}}^{R_{1}}r^{n-1}\psi(r)dr=\gamma_{n}\int_{R_{2}}^{R_{1}}\theta\psi, \label{r-int}
\end{equation}
where
\begin{equation}
\gamma_{n}:=\frac{2\pi^{n/2}}{\Gamma(n/2)},\text{~~and~~}\theta:[R_{2},R_{1}]\rightarrow\mathbb{R},~~\theta(r):=r^{n-1}. \label{r-gn-teta}
\end{equation}
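As a sanity check on (\ref{r-int}) and (\ref{r-gn-teta}): $\gamma_{n}$ is the surface area of the unit sphere in $\mathbb{R}^{n}$, and for $n=3$, $\psi\equiv1$, formula (\ref{r-int}) reproduces the volume of the spherical shell. A quick numerical check (the radii are arbitrary):

```python
import math

def gamma_n(n):
    # gamma_n = 2*pi^(n/2)/Gamma(n/2): surface area of the unit (n-1)-sphere
    return 2 * math.pi**(n / 2) / math.gamma(n / 2)

assert math.isclose(gamma_n(2), 2 * math.pi)   # unit circle circumference
assert math.isclose(gamma_n(3), 4 * math.pi)   # unit sphere surface area

# n = 3, psi = 1: gamma_3 * int_{R2}^{R1} r^2 dr = volume of the shell
R1, R2 = 2.0, 1.0
lhs = gamma_n(3) * (R1**3 - R2**3) / 3
assert math.isclose(lhs, 4 * math.pi / 3 * (R1**3 - R2**3))
```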
So, in the sequel we consider that $f:[R_{2},R_{1}]\rightarrow\mathbb{R}$ is
continuous. Condition \cite[(2)]{LuGao:16} becomes $\int_{R_{2}}^{R_{1}}\theta
f=0$ [for the definition of $\theta$ see (\ref{r-gn-teta})], condition
\cite[(3)]{LuGao:16} is equivalent to the existence of a unique $R_{3}\in(R_{2},R_{1})$ such that $f(R_{3})=0$ (that is $(\theta f)(R_{3})=0$),
while condition \cite[(4)]{LuGao:16} is equivalent to $\big\Vert\theta
f\big\Vert_{L^{1}[R_{2},R_{1}]}<\nu R_{2}^{n-1}(2\lambda/3)^{3/2}.$
Moreover, condition \cite[(5)]{LuGao:16} is equivalent to the
existence of $\upsilon:[R_{2},R_{1}]\rightarrow\mathbb{R}$ such that
$u=\upsilon \circ\left\vert \cdot\right\vert _{n}$, while the
condition $u\in C(\overline{\Omega})$ is equivalent to $\upsilon\in
C[R_{2},R_{1}].$
What is the meaning of $\nabla u(x)$ in condition \cite[(7)]{LuGao:16} for
$u\in W^{1,\infty}(\Omega)$ and $x\in\Sigma_{1}$ (or $x\in\Sigma_{2}$)? For
example, let us consider $\upsilon:[1,3]\rightarrow\mathbb{R}$ defined by
$\upsilon(t):=(t-1)^{2}\sin\frac{1}{t-1}$ for $t\in(1,2)$. Is $u:=\upsilon
\circ\left\vert \cdot\right\vert _{n}$ in $W^{1,\infty}(\Omega)$ for
$R_{2}:=1$ and $R_{1}:=2?$ If YES, what is $\nabla u(x)$ for $x\in
\mathbb{R}^{n}$ with $\left\vert x\right\vert _{n}=1?$
Let us assume that $\upsilon\in
C^{1}(R_{2}-\varepsilon,R_{1}+\varepsilon)$ for some
$\varepsilon\in(0,R_{2})$ and take $u:=\upsilon\circ\left\vert
\cdot\right\vert _{n}$. Then clearly $u\in C^{1}(\Delta)$, where
$\Delta:=\{x\in\mathbb{R}^{n}\mid\left\vert x\right\vert _{n}\in
(R_{2}-\varepsilon,R_{1}+\varepsilon)\}$, and
\begin{equation}
\nabla u(x)=\upsilon^{\prime}(\left\vert x\right\vert _{n})\left\vert
x\right\vert _{n}^{-1}x,\quad\left\vert \nabla u(x)\right\vert _{n}=\left\vert
\upsilon^{\prime}(\left\vert x\right\vert _{n})\right\vert \label{r-ngr}
\end{equation}
for all $x\in\Delta$. Without any doubt, $u|_{\Omega}\in W^{1,\infty}(\Omega)$; moreover, $\nabla
u(x)\cdot\vec{n}=\upsilon^{\prime}(\left\vert x\right\vert
_{n})\left\vert x\right\vert _{n}^{-1}x\cdot(\left\vert x\right\vert
_{n}^{-1}x)=\upsilon^{\prime}(R_{1})$ for every $x\in\Sigma_{1}$ and
$\nabla u(x)\cdot\vec{n}=-\upsilon^{\prime}(R_{2})$ for $x\in\Sigma_{2}$.
Hence such a $u|_{\Omega}$ satisfies condition \cite[(7)]{LuGao:16}
if and only if
$\upsilon^{\prime}(R_{1})=\upsilon^{\prime}(R_{2})=0.$
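Formula (\ref{r-ngr}) itself can be verified symbolically for a concrete radial profile; in the sketch below we take $n=2$ and $\upsilon(t)=t^{3}$ (both arbitrary choices):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
r = sp.sqrt(x**2 + y**2)

u = r**3                               # u = upsilon(|x|_2) with upsilon(t) = t^3
grad = sp.Matrix([sp.diff(u, x), sp.diff(u, y)])

ups_prime = 3 * r**2                   # upsilon'(|x|_2)
# grad u = upsilon'(|x|) |x|^{-1} x  and  |grad u| = |upsilon'(|x|)|
assert sp.simplify(grad[0] - ups_prime * x / r) == 0
assert sp.simplify(grad[1] - ups_prime * y / r) == 0
assert sp.simplify(sp.sqrt(grad.dot(grad)) - ups_prime) == 0
```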
Having in view the remark above, we discuss the result in \cite[Th.~1.1]{LuGao:16} for $W^{1,\infty}(\Omega)$ replaced by $C^{1}(\overline{\Omega}),$
more precisely the result in \cite{LuGao:16} concerning the local extrema of
$I$ defined in \cite[(1)]{LuGao:16} (quoted above) on the space
\begin{align*}
U:= & \{u:=\upsilon\circ\left\vert \cdot\right\vert _{n}\mid\upsilon\in
C^{1}[R_{2},R_{1}],~\upsilon^{\prime}(R_{1})=\upsilon^{\prime}(R_{2})=0\}\\
= & \{\upsilon\circ\left\vert \cdot\right\vert _{n}\mid\upsilon\in
C_{1,0}[R_{2},R_{1}]\}\subset C^{1}\left( \overline{\Omega}\right)
\end{align*}
when $C^{1}\left( \overline{\Omega}\right) $ (and $U$) is endowed with the
norm
\begin{equation}
\left\Vert u\right\Vert :=\left\Vert u\right\Vert _{\infty}+\left\Vert \nabla
u\right\Vert _{\infty}; \label{r-gl13b}
\end{equation}
moreover, in the sequel, $V:=C_{0}[R_{2},R_{1}]$ is endowed with the norm
$\left\Vert \cdot\right\Vert _{\infty}.$
Unlike \cite{LuGao:16}, let us set
\begin{equation}
F(r):=-\frac{1}{r^{n-1}}\int_{R_{2}}^{r}f(\rho)\rho^{n-1}d\rho=-\frac
{1}{r^{n-1}}\int_{R_{2}}^{r}\theta f,\quad r\in\lbrack R_{2},R_{1}],
\label{r-F}
\end{equation}
where $\theta$ is defined in (\ref{r-gn-teta}).
\begin{remark}
\label{rem1}Notice that our $F(r)$ is $r$ times the one introduced in
\cite{LuGao:16}.
\end{remark}
From (\ref{r-F}) and the hypotheses on $f$, we have that
$F(R_{1})=F(R_{2})=0$ and $(\theta F)^{\prime}=-\theta f$ on
$[R_{2},R_{1}]$. Since
\[
(\theta F)^{\prime}(r)=0\iff(\theta f)(r)=0\iff f(r)=0\iff r=R_{3}
\]
and $(\theta F)(R_{1})=(\theta F)(R_{2})=0$, it follows that $\theta
F>0$ or $\theta F<0$ on $(R_{2},R_{1})$, that is $F>0$ or $F<0$ on
$(R_{2},R_{1}).$ Moreover, from the definition of $F$ we get
\[
R_{2}^{n-1}\left\vert F(r)\right\vert \leq\left\vert r^{n-1}F(r)\right\vert
=\left\vert \int_{R_{2}}^{r}\theta f\right\vert \leq\int_{R_{2}}^{R_{1}}\left\vert \theta f\right\vert =\big\Vert\theta f\big\Vert_{L^{1}[R_{2},R_{1}]}<R_{2}^{n-1}(2\lambda/3)^{3/2}
\]
for every $r\in\lbrack R_{2},R_{1}]$, whence $\left\vert
F(r)\right\vert <(2\lambda/3)^{3/2}$ for $r\in\lbrack R_{2},R_{1}].$
Let $u\in U$, that is $u:=\upsilon\circ\left\vert \cdot\right\vert
_{n}$ with $\upsilon\in C_{1,0}[R_{2},R_{1}]$, and set
$v:=\upsilon^{\prime}$ $(\in C_{0}[R_{2},R_{1}])$. We have that
\begin{equation}
\int_{R_{2}}^{R_{1}}\theta f\upsilon=-\int_{R_{2}}^{R_{1}}(\theta F)^{\prime
}\upsilon=-(\theta F\upsilon)|_{R_{2}}^{R_{1}}+\int_{R_{2}}^{R_{1}}\theta
F\upsilon^{\prime}=\int_{R_{2}}^{R_{1}}\theta Fv. \label{r-gl14}
\end{equation}
Using (\ref{r-int}) and (\ref{r-ngr}) we get
\begin{align*}
I[u] & =\int_{\Omega}\left[ H(\left\vert \nabla u(x)\right\vert
)-f(\left\vert x\right\vert )u(x)\right] dx=\int_{\Omega}\left[ H(\left\vert
\upsilon^{\prime}(\left\vert x\right\vert _{n})\right\vert )-f(\left\vert
x\right\vert )\upsilon(\left\vert x\right\vert )\right] dx\\
& =\gamma_{n}\int_{R_{2}}^{R_{1}}\theta(H\circ\left\vert v\right\vert
-Fv)=\gamma_{n}\int_{R_{2}}^{R_{1}}\theta(H\circ v-Fv),
\end{align*}
that is
\[
I[u]=\gamma_{n}K(v),
\]
where $K$ is defined in (\ref{r-K}) and $[a,b]:=[R_{2},R_{1}]$.
Therefore, Theorem \ref{t-gl} applies also in this situation.
Applying it we get that $I$ defined in \cite[(1)]{LuGao:16} has no
local minimizers and $\overline{u}\in C^{1}(\overline{\Omega})$ is a
local maximizer of $I|_{U}$ if and only if there exists
$u_{0}\in\mathbb{R}$ such that $\overline{u}(x)=u_{0}+\int
_{R_{2}}^{\left\vert x\right\vert _{n}}z_{2}(F(\rho))d\rho$ for
every $x\in\overline{\Omega}$, where $z_{2}(A)$ is the unique
solution of the equation $z\left( \tfrac{1}{2}z^{2}-\lambda\right)
=A$ in the interval
$(-\sqrt{2\lambda/3},\sqrt{2\lambda/3})$ for $A\in(-(2\lambda/3)^{3/2},(2\lambda/3)^{3/2}).$
For the present discussion we take the case in which $F>0$ on $(a,b)$. In this
case observe that $z_{2}(A)=A/E_{3}^{-1}(A^{2})$ for $A\in(0,(2\lambda
/3)^{3/2})$. This proves that the first and second assertions of \cite[Th.
1.1]{LuGao:16} are \emph{false}; in particular, $\overline{u}_{2}$ is not a
local minimizer of $I|_{U}$ (exactly as in the 1D case).
Moreover, from the discussion above, we can conclude that the assertion
\textquotedblleft In comparison with the 1D case discussed by D. Gao and R.
Ogden, there exists huge difference in higher dimensions\textquotedblright\ from the abstract of
\cite{LuGao:16} is also false.
\section{Introduction}
If time were an observable, then, in standard quantum mechanics, it would be represented by an operator that is (i) self-adjoint, or at least Hermitian; and (ii) canonically conjugate to the Hamiltonian. The former assures us that the eigenvalues are real, and the latter is a consequence of its dynamics. Since our time operator $\operator{T}$ evolves as $\dv*{\operator{T}}{t} = \pm 1$ (where the sign depends on whether it increases or decreases in step with parametric time), the Heisenberg equation of motion gives the time-energy canonical commutation relation
\begin{equation} \label{eq:teccr}
[\operator{T},\operator{H}] = \pm i\hbar \, ,
\end{equation}
where $\operator{H}$ is the Hamiltonian. In contrast with parametric time, this time operator $\operator{T}$ contains a dynamical aspect and usually encompasses questions regarding the duration of an event, or the time of occurrence of an event, whose value changes as parametric time changes. The relation \eqref{eq:teccr} is a consequence of Dirac's correspondence principle between the Poisson bracket and the commutator \cite{gotay2000}. This commutation relation between time and energy is also connected to the time-energy uncertainty relation $\Delta\operator{T} \Delta\operator{H} \geq \hbar/2$, and contributes to some of the many possible interpretations of $\Delta \operator{T}$ and $\Delta \operator{H}$ \cite{muga2008}.
Finding such a time operator has met with several obstacles; in general, the problem of time in quantum mechanics has been a subject of much controversy throughout the years \cite{muga2008,muga2009}. Very early on, Pauli's infamous argument denying the existence of such an operator \cite{pauli1980} shaped much of the research, and pushed towards a nonstandard approach in dealing with time. In his footnote regarding the Heisenberg equation of motion, Pauli rejected the existence of a Hermitian operator $\operator{T}$ which satisfies
\eqref{eq:teccr} (the unitarity of $\exp(-iE\operator{T}/\hbar)$ indicates that he actually meant that there exists no \textit{self-adjoint} operator $\operator{T}$), relegating time to being merely an ordinary number. However, it was rigorously shown that there is no inconsistency in assuming a bounded self-adjoint time operator conjugate to the Hamiltonian with a semi-bounded, unbounded, or finitely countable spectrum \cite{galapon2002}. We note, though, that while the self-adjointness of $\operator{T}$ is also a desired property, it will be sufficient to find Hermitian operators $\operator{T}$ (without any detailed analysis of the domains) satisfying the commutation relation \eqref{eq:teccr}.
Various authors have worked on finding such an operator $\operator{T}$: using quantization \cite{aharonov1961,galapon2018}, using a wave function in time-representation \cite{kijowski1974}, the energy shift operator \cite{bauer1983}, and a partial derivative with respect to the energy \cite{razavy1969,olkhovsky1974,goto1981}. Here, we highlight three methods of finding operators conjugate to the Hamiltonian: the work of Bender and Dunne using basis operators \cite{bender1989exact,bender1989integration}, the work of Galapon using supraquantization \cite{galapon2004}, and the work of Galapon and Villanueva using the Liouville superoperator \cite{galapon2008}.
Bender and Dunne's solution \cite{bender1989exact,bender1989integration} consists of expanding both $\operator{T}$ and $\operator{H}$ in terms of basis operators
\begin{align}
\operator{T}_{m,n} &= \frac{1}{2^n} \sum_{k=0}^n \binom{n}{k} \operator{q}^k \operator{p}^m \operator{q}^{n-k} \label{eq:bdbo1} \\
&= \frac{1}{2^m} \sum_{j=0}^m \binom{m}{j} \operator{p}^j \operator{q}^n \operator{p}^{m-j} \, , \label{eq:bdbo2}
\end{align}
for $m \geq 0$ and $n \geq 0$. Note that the two forms are equivalent, due to the commutation relation between $\operator{q}$ and $\operator{p}$, i.e., $[\operator{q},\operator{p}]=i\hbar$. This can also be extended when either $m$ or $n$ is negative: for $n \geq 0$ and $m < 0$, we use \eqref{eq:bdbo1}; while for $m \geq 0$ and $n < 0$, we use \eqref{eq:bdbo2}. The Bender-Dunne basis operators \eqref{eq:bdbo1} and \eqref{eq:bdbo2} are the Weyl-ordered quantization of $p^m q^n$ and are densely defined operators in the Hilbert space $L^2(\mathbb{R})$ \cite{bunao2014}. Other basis operators such as the simple symmetric and Born-Jordan ordering are also possible choices \cite{domingo2015,bagunu2021}.
For a given Hamiltonian and for a given choice of basis operators, the coefficients of the expansion of $\operator{H}$ will already be known. The unknown operator $\operator{T}$ may then take the form
\begin{equation} \label{eq:benderdunne}
\operator{T} = \pm \sum_{m,n} \alpha_{m,n} \operator{T}_{m,n} \, ,
\end{equation}
where the $\alpha_{m,n}$'s are solved by imposing \eqref{eq:teccr} (and Hermiticity if desired) and getting a recurrence relation. Note the non-uniqueness of this solution is due to the fact that we can add any operator $\operator{C}$ which commutes with the Hamiltonian to get another solution $\operator{T} + \operator{C}$. Bender and Dunne's minimal solution is obtained by vanishing as many $\alpha_{m,n}$'s as possible while still satisfying the recurrence relation generated by \eqref{eq:teccr}. Using the quantized $p^m q^n$ as basis operators is in contrast with doing quantization of the classical observable itself, wherein the latter does not guarantee that \eqref{eq:teccr} is satisfied \cite{gotay2000,galapon2018}.
Galapon's supraquantization approach \cite{galapon2004} provides a solution of \eqref{eq:teccr} in coordinate representation. While addressing the problems of quantization and of the quantum time of arrival, and inspired by earlier efforts of Mackey \cite{mackey1968}, Galapon introduced the idea of supraquantization, which aims to construct quantum observables from the axioms of quantum mechanics and the properties of the system. In supraquantization, the classical observable serves as the boundary condition, in contrast with quantization, wherein the classical observable is the starting point.
The supraquantized operators are constructed under the rigged Hilbert space $\dual{\Phi} \supset \mathcal{H} \supset \Phi$ \cite{bohm1978}, and the generalized observable in $\Phi$-representation takes the integral form
\begin{equation}
(\mathcal{T}\varphi)(q) = \int_{-\infty}^\infty \braket{ q | \mathcal{T} | q' } \varphi(q') \dd{q'} \, .
\end{equation}
The kernel is assumed to take the form
\begin{equation} \label{eq:tkintro}
\braket{ q | \mathcal{T} | q' } = \frac{\mu}{i\hbar} T(q,q') \sgn(q - q') \, ,
\end{equation}
inspired by the 1-dimensional time of arrival of a particle of mass $\mu$ under some continuous potential $V(q)$. The rigged extension of \eqref{eq:teccr} imposes that, for $\mathcal{T}$ to be conjugate to the rigged Hilbert space extension of $\operator{H}$, the kernel factor $T(q,q')$ must satisfy
\begin{equation} \label{eq:tkeintro}
-\frac{\hbar^2}{2\mu} \, \pdv[2]{T(q,q')}{q} + \frac{\hbar^2}{2\mu} \, \pdv[2]{T(q,q')}{{q'}} + [V(q) - V(q')] T(q,q') = 0 \, ,
\end{equation}
and
\begin{equation} \label{eq:tkegbcpm}
\dv{T(q,q)}{q} + \pdv{T(q,q')}{q} \bigg|_{q'=q} + \pdv{T(q,q')}{{q'}} \bigg|_{q'=q} = \mp 1 \, ,
\end{equation}
where, in the right hand side of \eqref{eq:tkegbcpm}, $-1$ is used for operators that increase in step with time, and $+1$ for those that decrease (the latter being the case for the time of arrival solution). The hyperbolic second-order partial differential equation \eqref{eq:tkeintro} is referred to as the time kernel equation \cite{galapon2004}, and, together with condition \eqref{eq:tkegbcpm}, defines a family of solutions which are canonically conjugate to the Hamiltonian. The quantum time of arrival is but one possible solution of \eqref{eq:tkeintro} and \eqref{eq:tkegbcpm}, and hence just one example of a Hermitian operator conjugate to the Hamiltonian.
Galapon and Villanueva's Liouville solution \cite{galapon2008} uses the Liouville superoperator $\mathcal{L}_\operator{A} = [\operator{A}, \cdot]$ so that the solution takes the form
\begin{equation} \label{eq:liouville}
\operator{T} = \mp \mathcal{L}_\operator{H}^{-1} (i\hbar\operator{1}) \, .
\end{equation}
Dividing $\mathcal{L}_\operator{H}$ into its kinetic and potential parts, $\mathcal{L}_\operator{K}$ and $\mathcal{L}_\operator{V}$, enables a geometric expansion of \eqref{eq:liouville}. The domain is to be restricted to the Bender-Dunne operators \eqref{eq:benderdunne} so that the inverse can be well-defined. The Liouville solution is then given by
\begin{equation}
\operator{T} = \pm \mu \sum_{k=0}^\infty (-1)^k \qty(\mathcal{L}_\operator{K}^{-1} \mathcal{L}_\operator{V})^k \operator{T}_{-1,1} \, .
\end{equation}
It turns out that this is just the quantum version of the classical time of arrival at the origin, and is in the same vein as supraquantization. For linear systems, the Liouville solution and the Bender-Dunne minimal solution coincide for $\alpha_{-1,1} = \mu$. The Liouville solution is equal to the Weyl quantization of the local time of arrival (the expansion of the time of arrival about the free solution) for linear systems; for nonlinear systems, only the leading term equals the Weyl quantization. In coordinate representation, the Liouville kernel satisfies both \eqref{eq:tkeintro} and \eqref{eq:tkegbcpm} for everywhere analytic potentials. The Liouville solution thus highlights the connection between the Bender-Dunne solution and the supraquantized solution via the time kernel equation.
While much of the focus has been on the time of arrival solution, it should be noted that the time of arrival is not the only possible time observable. The multi-faceted nature of time means that there are other time observables satisfying the commutation relation \eqref{eq:teccr}, not just the time of arrival. To reiterate, if $\operator{T}$ satisfies \eqref{eq:teccr}, then one can construct another operator, $\operator{T} + \operator{C}$, that is also a solution to \eqref{eq:teccr}, where $\operator{C}$ commutes with the Hamiltonian. One example is obtained by choosing $\operator{C} = f(\operator{H})$, where $f$ is some suitable function of the Hamiltonian $\operator{H}$. Our interest is then to investigate these other conjugate solutions.
In this paper, we focus our attention on constructing a conjugate operator in $\Phi$-representation under the supraquantization approach. That is, we shall use the differential equation \eqref{eq:tkeintro} and the condition \eqref{eq:tkegbcpm} to study these other Hamiltonian conjugate solutions. Solutions to the time kernel equation \eqref{eq:tkeintro} satisfying the conjugacy condition \eqref{eq:tkegbcpm} and the additional boundary conditions $T(q,q) = q/2$ and $T(q,-q) = 0$ along the diagonals correspond to the quantum time of arrival at the origin of a particle in 1-dimension under the potential $V(q)$. This solution is unique, is conjugate to the Hamiltonian, is Hermitian, satisfies time-reversal symmetry, and reduces to the classical time of arrival in the classical limit \cite{galapon2004}. We also study the condition \eqref{eq:tkegbcpm} further to extract a more general form of the boundary conditions for $T(q,q)$ and $T(q,-q)$, so that the other conjugate solutions can be constructed using the methods of \cite{galapon2004}. Since the method is intertwined with the time of arrival, we restrict ourselves to operators which decrease in step with time, i.e., to $-1$ in the right hand side of \eqref{eq:teccr} and $+1$ in the right hand side of \eqref{eq:tkegbcpm}. The negative of our solution is then the corresponding operator which increases in step with time (e.g., the time of flight).
The rest of the paper is organized as follows. In Section \ref{sec:tke}, we briefly review the time kernel equation of \cite{galapon2004}. In Section \ref{sec:conjugate}, we derive the general form of $T(q,q)$ and $T(q,-q)$ satisfying \eqref{eq:tkegbcpm}, prove the existence and uniqueness of the solution, and obtain conditions for Hermiticity and time-reversal symmetry. In Section \ref{sec:examples}, we look at some interesting examples of other conjugate solutions, primarily the time of arrival plus negative powers of the Hamiltonian. In Section \ref{sec:mtke}, we explore a modified form of the time kernel equation \eqref{eq:tkeintro} and condition \eqref{eq:tkegbcpm} introduced by Domingo \cite{domingo2004}, which removes the assumption on the form of the kernel \eqref{eq:tkintro}. We look at the modified conjugate solutions generated there and compare them with the solutions of the original time kernel equation. We conclude in Section \ref{sec:conclusion}.
\section{The Time of Arrival Operator in 1-Dimension and the Time Kernel Equation} \label{sec:tke}
Classically, the time for a particle initially at $(q,p)$ in phase space at time $t = 0$ to arrive at some other point $x$ in the configuration space is given by
\begin{equation} \label{eq:ctoa}
T_x(q,p) = -\sgn(p) \sqrt{\frac{\mu}{2}} \int_x^q \frac{\dd{q'}}{\sqrt{H(q,p) - V(q')}} \, ,
\end{equation}
where $\sgn$ is the signum function, $\mu$ is the mass of the particle, $H(q,p)$ is the Hamiltonian, and $V(q)$ is the interaction potential. We are only interested in the region $\Omega = \Omega_q \times \Omega_p$ where \eqref{eq:ctoa} is real-valued. In this region, we can expand $T_x$ about the free particle solution and get the local time of arrival $t_x(q,p)$ \cite{galapon2004}
\begin{equation} \label{eq:ltoa}
t_x(q,p) = \sum_{k=0}^\infty (-1)^k T_k(q,p;x) \, ,
\end{equation}
where $T_k(q,p;x)$ satisfies the recurrence relation
\begin{equation} \label{eq:ltoarecur}
\begin{split}
T_0(q,p;x) &= \frac{\mu}{p} (x - q) \, , \\
T_k(q,p;x) &= \frac{\mu}{p} \int_q^x \dv{V}{q'} \, \pdv{T_{k-1}(q',p;x)}{p} \dd{q'} \, .
\end{split}
\end{equation}
For $p \neq 0$ and $V$ continuous at $q$, it was shown in \cite{galapon2004} that there exists a neighborhood of $q$ determined by $\abs{V(q) - V(q')} < K_\epsilon \leq p^2(2\mu)^{-1}$ such that for every $x$ in the said neighborhood, $t_x(q,p)$ converges absolutely and uniformly to $T_x(q,p)$. Since $T_x(q,p)$ holds in the entire $\Omega$, while $t_x(q,p)$ holds only in some local neighborhood $\omega$ of $\Omega$, $T_x(q,p)$ is the analytic continuation of $t_x(q,p)$ in $\Omega \backslash \omega$. It is this local form of the time of arrival \eqref{eq:ltoa} that was supraquantized in \cite{galapon2004}.
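To illustrate the recurrence \eqref{eq:ltoarecur}, one can compute the first few $T_k$ symbolically for the harmonic oscillator $V(q) = \mu\omega^2 q^2/2$ and compare the partial sum \eqref{eq:ltoa} at $x = 0$ against the expansion of the exact classical arrival time $-\omega^{-1}\arctan(\mu\omega q/p)$. The sketch below (illustrative only, using \texttt{sympy}) confirms agreement up to order $q^5$:

```python
import sympy as sp

q, p, x = sp.symbols("q p x", real=True)
qs = sp.Symbol("q'", real=True)              # integration variable q'
mu, w = sp.symbols("mu omega", positive=True)

V = mu * w**2 * q**2 / 2                     # harmonic oscillator potential

def T_k(k):
    # recurrence (eq:ltoarecur) for the expansion terms T_k(q, p; x)
    if k == 0:
        return mu * (x - q) / p
    integrand = sp.diff(V, q).subs(q, qs) * sp.diff(T_k(k - 1).subs(q, qs), p)
    return (mu / p) * sp.integrate(integrand, (qs, q, x))

# partial sum of the local time of arrival at the origin (x = 0)
local = sp.expand(sum((-1)**k * T_k(k) for k in range(3)).subs(x, 0))

# exact classical arrival time at the origin for the oscillator
exact = sp.series(-sp.atan(mu * w * q / p) / w, q, 0, 6).removeO()

assert sp.simplify(local - sp.expand(exact)) == 0
```

The leading term reproduces the free result $-\mu q/p$, and each successive $T_k$ supplies the next odd power of $q$ in the arctangent expansion.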
The problem of supraquantizing the time of arrival conjugate with the Hamiltonian is treated in the rigged Hilbert space $\dual{\Phi} \supset \mathcal{H} \supset \Phi$, where $\Phi$ is chosen in such a way that $\Phi$ is a dense subset of the domain of the Hamiltonian $\operator{H}$, and is invariant under $\operator{H}$. As with \cite{galapon2004}, we choose $\Phi$ to be the infinitely differentiable complex valued functions with compact support in the real line; $\dual{\Phi}$ is its corresponding dual space. The supraquantized time operator $\mathcal{T}: \Phi \to \dual{\Phi}$ is given by
\begin{equation} \label{eq:tintop}
(\mathcal{T}\varphi)(q) = \int_{-\infty}^\infty \braket{ q | \mathcal{T} | q' } \varphi(q') \dd{q'} \, .
\end{equation}
Next, the transfer principle is hypothesized, which states that a particular property of one element of a class of observables can be transferred to the rest of the class \cite{galapon2004}. From the free particle solution, the kernel for all continuous potentials is then assumed to take the form
\begin{equation} \label{eq:tk}
\braket{ q | \mathcal{T} | q' } = \frac{\mu}{i\hbar} T(q,q') \sgn(q - q') \, ,
\end{equation}
where $T(q,q')$ is real valued, symmetric $T(q,q') = T(q',q)$, and analytic.
Let $\rhsextension{H}$ be the extension of $\operator{H}$ in the entire $\dual{\Phi}$, i.e., $\rhsextension{H}: \dual{\Phi} \to \dual{\Phi}$ such that $\braket{\rhsextension{H}\phi|\varphi} = \braket{\phi|\adjoint{H}\varphi}$ for all $\phi$ in $\dual{\Phi}$ and $\varphi$ in $\Phi$, where $\adjoint{H}$ is the adjoint of $\operator{H}$ in $\Phi$. For $\dv*{\mathcal{T}}{t} = -\mathcal{I}$, the canonical commutation relation \eqref{eq:teccr} can be written as
\begin{equation} \label{eq:htccr}
\braket{ \tilde{\varphi} | [ \rhsextension{H}, \mathcal{T} ] \varphi } = i\hbar \braket{ \tilde{\varphi} | \varphi } \, ,
\end{equation}
for all $\tilde{\varphi}$ and $\varphi$ in $\Phi$. The left hand side gives
\begin{equation} \label{eq:teccrtkegbc}
\begin{split}
&\braket{ \tilde{\varphi} | [ \rhsextension{H}, \mathcal{T} ] \varphi } = i\hbar \int \tilde{\varphi}^*(q) \qty(\dv{T(q,q)}{q} + \pdv{T(q,q')}{q} \bigg|_{q'=q} + \pdv{T(q,q')}{{q'}} \bigg|_{q'=q}) \varphi(q) \dd{q} \\
&\qquad + \frac{\mu}{i\hbar} \iint \tilde{\varphi}^*(q) \qty[ -\frac{\hbar^2}{2\mu} \, \pdv[2]{T(q,q')}{q} + V(q)T(q,q') ] \sgn(q-q') \varphi(q') \dd{q'}\dd{q} \\
&\qquad - \frac{\mu}{i\hbar} \iint \tilde{\varphi}^*(q) \qty[ -\frac{\hbar^2}{2\mu} \, \pdv[2]{T(q,q')}{{q'}} + V(q')T(q,q') ] \sgn(q-q') \varphi(q') \dd{q'}\dd{q} \, .
\end{split}
\end{equation}
The integration is across the common support of $\varphi(q)$ and $\tilde{\varphi}(q)$. We then need $\Phi$ to be invariant under $\rhsextension{H}$; if $\operator{H}$ is self-adjoint, then this holds for $\Phi$ invariant under $\operator{H}$. The $\mathcal{T}\rhsextension{H}$ term, the last term in \eqref{eq:teccrtkegbc}, is obtained by integration by parts, noting that $\varphi(q)$ and $\tilde{\varphi}(q)$ vanish at the boundary of their respective compact supports.
For this to satisfy the right hand side of \eqref{eq:htccr}, the kernel factor $T(q,q')$ must satisfy the so-called time kernel equation \cite{galapon2004}
\begin{equation} \label{eq:tke}
-\frac{\hbar^2}{2\mu} \, \pdv[2]{T(q,q')}{q} + \frac{\hbar^2}{2\mu} \, \pdv[2]{T(q,q')}{{q'}} + [V(q) - V(q')] T(q,q') = 0 \, ,
\end{equation}
and the boundary condition
\begin{equation} \label{eq:tkegbc}
\dv{T(q,q)}{q} + \pdv{T(q,q')}{q} \bigg|_{q'=q} + \pdv{T(q,q')}{{q'}} \bigg|_{q'=q} = 1 \, .
\end{equation}
The boundary conditions along the diagonal of $T(q,q')$ can be fixed in such a way that it satisfies certain properties. For example, if one requires the kernel solution to satisfy conjugacy \eqref{eq:tkegbc}, to be Hermitian, to satisfy time-reversal symmetry, and to approach the local time of arrival at the origin via the Wigner-Weyl transform,
\begin{equation} \label{eq:wignerweyl}
\mathcal{T}_\hbar(q,p) = \int_{-\infty}^\infty \Braket{ q + \frac{v}{2} | \mathcal{T} | q - \frac{v}{2} } \exp\left(-i\frac{vp}{\hbar}\right) \dd{v} \, ,
\end{equation}
in the classical limit of vanishing $\hbar$, then the specific boundary conditions should be \cite{galapon2004}
\begin{equation} \label{eq:tkebc}
T(q,q) = \frac{q}{2} \, , \qquad T(q,-q) = 0 \, .
\end{equation}
Solutions of the time kernel equation \eqref{eq:tke} satisfying these boundary conditions \eqref{eq:tkebc} are unique for continuous potentials, and correspond to the quantum time of arrival operator at the origin (if the arrival point is elsewhere, we just shift the potential to move the arrival point to the origin). To illustrate, the following are some time of arrival kernel solutions for linear systems (i.e., linear equations of motion):
\paragraph{Free Particle} For $V(q) = 0$, the solution satisfying \eqref{eq:tke} and \eqref{eq:tkebc} is
\begin{equation} \label{eq:freetkesoln}
T(q,q') = \frac{1}{4} (q + q') \, .
\end{equation}
\paragraph{Harmonic Oscillator} For $V(q) = \mu\omega^2 q^2/2$,
\begin{equation} \label{eq:hoscitkesoln}
T(q,q') = \frac{1}{4} \sum_{j=0}^\infty \frac{1}{(2j+1)!} \qty(\frac{\mu \omega}{2\hbar})^{2j} (q+q')^{2j+1} (q-q')^{2j} = \frac{1}{4} \qty(\frac{2\hbar}{\mu\omega}) \frac{1}{q-q'} \sinh\qty(\frac{\mu\omega}{2\hbar}(q+q')(q-q')) \, .
\end{equation}
These solutions give the local time of arrival at the origin upon using the Wigner-Weyl transform \eqref{eq:wignerweyl} and the relation
\begin{equation} \label{eq:fouriertnsgnt}
\int_{-\infty}^\infty t^n \sgn(t) e^{-it\omega} \dd{t} = \frac{2 n!}{(i\omega)^{n+1}} \, , \qquad \omega \neq 0 \, .
\end{equation}
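The identity \eqref{eq:fouriertnsgnt} is distributional; it can be checked by damping the integrand with $e^{-\epsilon\abs{t}}$ and taking $\epsilon \to 0^+$. The sketch below (illustrative only, using \texttt{sympy}) splits the integral at $t = 0$ and verifies the identity for the first few $n$:

```python
import sympy as sp

t, w, eps = sp.symbols('t omega epsilon', positive=True)

def regularized_transform(n):
    # int t^n sgn(t) e^{-i w t} dt, split at t = 0 and damped by e^{-eps|t|};
    # the t < 0 piece becomes (-1)^{n+1} int_0^oo t^n e^{-(eps - i w) t} dt
    pos = sp.integrate(t**n * sp.exp(-(eps + sp.I * w) * t), (t, 0, sp.oo),
                       conds='none')
    neg = (-1)**(n + 1) * sp.integrate(t**n * sp.exp(-(eps - sp.I * w) * t),
                                       (t, 0, sp.oo), conds='none')
    return sp.limit(pos + neg, eps, 0, '+')

for n in range(4):
    rhs = 2 * sp.factorial(n) / (sp.I * w)**(n + 1)
    assert sp.simplify(regularized_transform(n) - rhs) == 0
```

For instance, $n = 1$ gives $-2/\omega^2$, in agreement with $2 \cdot 1!/(i\omega)^2$.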
It was also shown that for nonlinear systems, the solution approaches the local time of arrival up to terms of order $\order{\hbar^2}$.
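As a consistency check on the harmonic oscillator kernel \eqref{eq:hoscitkesoln}, the series and the closed sinh form can be compared order by order in the variables $u = q + q'$ and $v = q - q'$. The sketch below (illustrative only, using \texttt{sympy}) confirms agreement of the first several terms:

```python
import sympy as sp

u, v, mu, w, hbar = sp.symbols('u v mu omega hbar', positive=True)

a = mu * w / (2 * hbar)          # shorthand for mu*omega/(2*hbar)
N = 5

# partial sum of the series form of the kernel, with u = q + q', v = q - q'
partial = sum(a**(2*j) * u**(2*j + 1) * v**(2*j) / sp.factorial(2*j + 1)
              for j in range(N + 1)) / 4

# closed form: (1/4)(2*hbar/(mu*omega)) sinh(a*u*v)/v
closed = sp.sinh(a * u * v) / (4 * a * v)

# expand the closed form in v and compare term by term up to v^(2N)
expansion = sp.series(closed, v, 0, 2 * N + 1).removeO()
assert sp.expand(expansion - partial) == 0
```

Along the diagonal $v = 0$, only the $j = 0$ term survives, recovering the boundary value $T(u,0) = u/4$, i.e., $T(q,q) = q/2$.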
\section{The Boundary Conditions of the Time Kernel Equation} \label{sec:conjugate}
The time of arrival boundary conditions \eqref{eq:tkebc} give just one type of kernel solution satisfying the general boundary condition \eqref{eq:tkegbc}---it is but one member of the family of conjugate solutions. This is due to the requirement that the solution approach the classical time of arrival in the classical limit. Other Hamiltonian conjugates may correspond to different kinds of boundary conditions along the diagonals.
It will be convenient to rewrite the time kernel equation in canonical form,
\begin{equation} \label{eq:tkecanon}
-\frac{2 \hbar^2}{\mu} \pdv{T(u,v)}{u}{v} + \qty[ V\qty(\frac{u+v}{2}) - V\qty(\frac{u-v}{2}) ] T(u,v) = 0 \, ,
\end{equation}
where we have changed variables to $u = q + q'$ and $v = q - q'$. Throughout the paper, we will often be using this canonical form to solve for $T(u,v)$ via Frobenius method, where $T(q,q')$ can be retrieved by changing the variables back to $q$ and $q'$.
We will be looking at analytic solutions to \eqref{eq:tkecanon} of the form
\begin{equation} \label{eq:tkesolncanon}
T(u,v) = \sum_{m,n} \alpha_{m,n} u^m v^n \, ,
\end{equation}
or equivalently, in $qq'$-coordinates,
\begin{equation} \label{eq:tkesoln}
T(q,q') = \sum_{m,n} \alpha_{m,n} (q+q')^m (q-q')^n \, ,
\end{equation}
for nonnegative $m$ and $n$.
\subsection{Boundary Conditions for the Hamiltonian Conjugate Solutions}
\begin{figure}
\centering
\begin{tikzpicture}
\draw[thick,->] (0,0) -- (6,0) node[anchor=north west] {$x$};
\draw[thick,->] (0,0) -- (0,5) node[anchor=south east] {$y$};
\draw[thick] (0,0) -- (4,0) node[anchor=north west] {$(u,0)$};
\draw[thick] (4,0) -- (4,4) node[anchor=south west] {$(u,v)$};
\draw[thick] (4,4) -- (0,4) node[anchor=south east] {$(0,v)$};
\draw[thick] (0,4) -- (0,0) node[anchor=north east] {$(0,0)$};
\draw[fill=gray!50] (0,0)--(4,0)--(4,4)--(0,4)--cycle;
\node at (2,2) {$\mathcal{R}$};
\fill (0,0) circle (2pt);
\fill (4,0) circle (2pt);
\fill (4,4) circle (2pt);
\fill (0,4) circle (2pt);
\end{tikzpicture}
\caption{Rectangular region $\mathcal{R}$ in the plane defined by $0 \leq x \leq u$ and $0 \leq y \leq v$.} \label{fig:region}
\end{figure}
It would be preferable to rewrite the general condition \eqref{eq:tkegbc} as boundary conditions along the diagonals $q' = q$ and $q' = -q$. This lets us use the methods of \cite{galapon2004} to construct the other conjugate solutions. We thus start by deriving the form of the boundary conditions along the diagonals that is equivalent to \eqref{eq:tkegbc}. Since we will only be looking at entire analytic potentials, it is reasonable to restrict ourselves to analytic solutions of the time kernel equation. In canonical form, \eqref{eq:tkecanon} and \eqref{eq:tkesolncanon}, the boundary conditions are along the axes $u = 0$ and $v = 0$.
\begin{theorem}
Analytic solutions \eqref{eq:tkesolncanon} to the time kernel equation \eqref{eq:tkecanon} satisfy the general boundary condition \eqref{eq:tkegbc} if and only if they satisfy the specific boundary conditions
\begin{equation} \label{eq:tkebchccanon}
T(u,0) = \frac{u}{4} + c \, , \qquad T(0,v) = g(v) + c \, ,
\end{equation}
where $c$ is a constant, $g$ is some differentiable function, and $g(0)=0$.
\end{theorem}
\begin{proof}
Going back to $(q,q')$, we substitute the assumed form of the solution \eqref{eq:tkesoln} to the general boundary condition \eqref{eq:tkegbc} to get
\begin{equation}
\sum_m \alpha_{m,0} 2^{m+1} m q^{m-1} = 1 \, .
\end{equation}
To satisfy this, we need to set $\alpha_{1,0} = 1/4$, $\alpha_{2,0} = \alpha_{3,0} = \cdots = 0$. We can set the $\alpha_{0,n}$'s to be any constant for $n\geq0$. Thus, for $\alpha_{0,0} = c$, $\alpha_{0,k} = \beta_k$, and $g(v) = \sum_{k=1}^\infty \beta_k v^k$, we see that \eqref{eq:tkesoln} gives $T(q,q) = \alpha_{0,0} + \alpha_{1,0} (2q) = c + q/2$ and $T(q,-q) = \sum_n \alpha_{0,n} (2q)^n = c + g(2q)$. In canonical form, we get $T(u,0) = u/4 + c$ and $T(0,v) = g(v) + c$, verifying \eqref{eq:tkebchccanon}.
Conversely, if $T(u,v)$ satisfies \eqref{eq:tkebchccanon}, then from \eqref{eq:tkecanon}, the integral form of the time kernel equation gives
\begin{equation} \label{eq:tkeinthccanon}
T(u,v) = \frac{u}{4} + g(v) + c + \frac{\mu}{2\hbar^2} \int_0^v \int_0^u \qty[V\qty(\frac{x+y}{2}) - V\qty(\frac{x-y}{2}) ] T(x,y) \dd{x} \dd{y} \, ,
\end{equation}
and, in $(q,q')$,
\begin{equation} \label{eq:tkeinthc}
\begin{split}
T(q,q') &= \frac{1}{4}(q+q') + \sum_{k=1}^\infty \beta_k (q-q')^k + c \\
&\quad + \frac{\mu}{2\hbar^2} \int_0^{q-q'} \int_0^{q+q'} \qty[V\qty(\frac{x+y}{2}) - V\qty(\frac{x-y}{2}) ] T(x,y) \dd{x} \dd{y} \, ,
\end{split}
\end{equation}
where, again, we let $g(v) = \sum_{k=1}^\infty \beta_k v^k$. The integration is evaluated over the region in the plane defined by $0 \leq x \leq u$ and $0 \leq y \leq v$ (see Figure \ref{fig:region}). Then, using the Leibniz integral rule, we get $\dv{q}T(q,q) = 1/2$, $\pdv{q}T(q,q')|_{q'=q} = 1/4 + \beta_1$, and $\pdv{q'}T(q,q')|_{q'=q} = 1/4 - \beta_1$, i.e., $T(q,q')$, and hence $T(u,v)$, satisfies the general boundary condition \eqref{eq:tkegbc}.
\end{proof}
In $qq'$-coordinates, the boundary conditions \eqref{eq:tkebchccanon} can be written as
\begin{equation} \label{eq:tkebchc}
T(q,q) = \frac{q}{2} + c \qquad\text{and}\qquad T(q,-q) = g(2q) + c \, ,
\end{equation}
where $c$ is a constant, $g$ is some differentiable function, and $g(0)=0$. Note the notation $g(2q)$: from the canonical form we expect $g(v) \to g(q-q')$, and evaluating along $q' = -q$ makes this $g(2q)$.
Therefore, solutions to the time kernel equation \eqref{eq:tke} satisfying the boundary conditions \eqref{eq:tkebchc} also satisfy the general boundary condition \eqref{eq:tkegbc}, and thus correspond to operators canonically conjugate to the Hamiltonian. Conversely, solutions satisfying the general condition \eqref{eq:tkegbc} also satisfy \eqref{eq:tkebchc}. This means that the boundary conditions \eqref{eq:tkebchc} constitute the \textit{same} family of Hamiltonian conjugates as \eqref{eq:tkegbc}. One can then use the Frobenius method to construct solutions of the time kernel equation \eqref{eq:tke} satisfying \eqref{eq:tkebchc}. The Wigner-Weyl transform \eqref{eq:wignerweyl} will be used to examine their classical limits.
\subsection{Existence and Uniqueness}
Note that the rest of the $\alpha_{m,n}$'s are obtained from a recurrence relation imposed by the time kernel equation. The above theorem assumed that this analytic solution exists. We shall focus our attention only on continuous potentials. We prove below the existence and uniqueness of the solution for these types of potentials \cite{freiling2008}, following steps analogous to those in \cite{domingo2004}.
\begin{theorem} \label{thm:tkeunique}
If $V(q)$ is a continuous function at any point of the real line, then there exists a unique continuous solution $T(q,q')$ to the time kernel equation \eqref{eq:tke} satisfying the boundary conditions \eqref{eq:tkebchc}.
\end{theorem}
\begin{proof}
In canonical form, we have the time kernel equation \eqref{eq:tkecanon} and the boundary conditions \eqref{eq:tkebchccanon}, with the integral form of the time kernel equation given by \eqref{eq:tkeinthccanon}.
Let $T(u,v) = \lim_{n \to \infty} T_n(u,v)$, where
\begin{equation}
T_0(u,v) = \frac{u}{4} + g(v) + c \, ,
\end{equation}
and
\begin{equation}
T_n(u,v) = \frac{u}{4} + g(v) + c + \frac{\mu}{2\hbar^2} \int_0^v \int_0^u \qty[V\qty(\frac{x+y}{2}) - V\qty(\frac{x-y}{2}) ] T_{n-1}(x,y) \dd{x} \dd{y} \, .
\end{equation}
We can also write $T_n$ as
\begin{equation}
T_n(u,v) = \frac{u}{4} + g(v) + c + \sum_{j=1}^n \qty[T_j(u,v) - T_{j-1}(u,v)] \, .
\end{equation}
Since $V(q)$ is continuous, there exists an $M > 0$ such that $\abs{V\qty(\frac{x+y}{2}) - V\qty(\frac{x-y}{2})} \leq M$ for any point $(x,y)$ in the region $\mathcal{R}$ in Figure \ref{fig:region}. Since $g$ is differentiable, we could write it as $g(v) = \sum_{k=1}^\infty \beta_{k} v^k$. To simplify things, we let $g(v) + c = \sum_{k=0}^\infty \beta_{k} v^k$, where $\beta_0 = c$. We then have,
\begin{align}
\abs{T_1(u,v) - T_0(u,v)} &\leq \frac{\mu M}{2 \hbar^2} \int_0^v \int_0^u \abs{\frac{x}{4} + \sum_k \beta_k y^k} \dd{x} \dd{y} \nonumber \\
&\leq \frac{\mu M}{2 \hbar^2} \int_0^v \int_0^u \qty(\abs{x} + \sum_k \abs{\beta_k}\abs{y^k}) \dd{x} \dd{y} \nonumber \\
&\leq \frac{\mu M}{2 \hbar^2} \qty(\abs{u}^2 \abs{v} + \sum_k \abs{\beta_k} \abs{u} \abs{v}^{k+1} \frac{1}{k+1}) \nonumber \\
&\quad = \frac{\mu M}{2 \hbar^2} \abs{u}\abs{v} \qty(\abs{u} + \sum_k \abs{\beta_k} \frac{\abs{v}^k}{k+1}) \, .
\end{align}
Similarly,
\begin{align}
\abs{T_2(u,v) - T_1(u,v)} &\leq \frac{\mu M}{2 \hbar^2} \int_0^v \int_0^u \abs{T_1(x,y) - T_0(x,y)} \dd{x} \dd{y} \nonumber \\
&\leq \qty(\frac{\mu M}{2 \hbar^2})^2 \frac{\abs{u}^2 \abs{v}^2}{2} \qty(\abs{u} + \sum_k \abs{\beta_k} \frac{\abs{v}^k}{(k+1)(k+2)}) \, ,
\end{align}
and,
\begin{equation}
\abs{T_3(u,v) - T_2(u,v)} \leq \qty(\frac{\mu M}{2 \hbar^2})^3 \frac{\abs{u}^3 \abs{v}^3}{2 \cdot 3} \qty(\abs{u} + \sum_k \abs{\beta_k} \frac{\abs{v}^k}{(k+1)(k+2)(k+3)}) \, .
\end{equation}
By induction,
\begin{equation}
\abs{T_j - T_{j-1}} \leq \qty(\frac{\mu M}{2 \hbar^2})^j \frac{\abs{u}^j \abs{v}^j}{j!} \qty(\abs{u} + \sum_k \abs{\beta_k} \frac{\abs{v}^k}{(k+1)(k+2)\dotsm(k+j)} ) \, .
\end{equation}
We see that this goes to zero as $j$ approaches infinity. Therefore, $T_n(u,v)$ is absolutely and uniformly convergent for all finite values of $u$ and $v$, and the limit $T(u,v)$ defines the solution to the time kernel equation.
Now, suppose $T_a(u,v)$ and $T_b(u,v)$ are two solutions to the time kernel equation. The existence of a continuous solution implies that there exists a $K > 0$ such that $\abs{T_a(u,v)} \leq K$ and $\abs{T_b(u,v)} \leq K$ in the bounded region in Figure \ref{fig:region}. Then, from their integral forms \eqref{eq:tkeinthccanon}, we get as a first approximation,
\begin{align}
\abs{T_a(u,v) - T_b(u,v)} &\leq \frac{\mu M}{2 \hbar^2} \int_0^v \int_0^u \abs{T_a(x,y) - T_b(x,y)} \dd{x} \dd{y} \nonumber \\
&\leq \frac{\mu M}{2 \hbar^2} (2K) \abs{u}\abs{v} \, .
\end{align}
We substitute this back to get a second approximation,
\begin{equation}
\abs{T_a(u,v) - T_b(u,v)} \leq \qty(\frac{\mu M}{2\hbar^2})^2 (2K) \frac{\abs{u}^2 \abs{v}^2}{2} \, .
\end{equation}
By induction, the $n$th approximation gives
\begin{equation}
\abs{T_a(u,v) - T_b(u,v)} \leq \qty(\frac{\mu M}{2\hbar^2})^n (2K) \frac{\abs{u}^n \abs{v}^n}{n!} \, .
\end{equation}
This approaches zero as $n$ approaches infinity. Therefore, the solution is unique.
\end{proof}
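The successive approximations in the proof can be carried out explicitly. For the harmonic oscillator with the time of arrival boundary data ($g = 0$, $c = 0$), each iteration reproduces one more term of the kernel \eqref{eq:hoscitkesoln} in canonical coordinates. The sketch below (illustrative only, using \texttt{sympy}) performs three iterations and compares against the first four terms of the series:

```python
import sympy as sp

u, v, x, y = sp.symbols('u v x y', positive=True)
mu, w, hbar = sp.symbols('mu omega hbar', positive=True)

def V(s):
    return mu * w**2 * s**2 / 2

# V((x+y)/2) - V((x-y)/2) reduces to mu*omega**2*x*y/2 for the oscillator
factor = sp.expand(V((x + y) / 2) - V((x - y) / 2))

T0 = u / 4                        # boundary data T(u,0) = u/4, T(0,v) = 0
T = T0
for _ in range(3):                # three successive approximations
    T = T0 + mu / (2 * hbar**2) * sp.integrate(
        factor * T.subs({u: x, v: y}), (x, 0, u), (y, 0, v))
T = sp.expand(T)

# first four terms of the oscillator kernel in canonical coordinates
a = mu * w / (2 * hbar)
expected = sum(a**(2*j) * u**(2*j + 1) * v**(2*j) / sp.factorial(2*j + 1)
               for j in range(4)) / 4
assert sp.expand(T - expected) == 0
```

Each pass of the integral map generates exactly the next term of the series, making the geometric-style convergence of the proof concrete.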
\subsection{Hermiticity and Time-Reversal Symmetry} \label{subsec:hermtime}
Since we are interested in constructing Hermitian operators conjugate to the Hamiltonian, we need to impose additional conditions to the time kernel equation solutions to satisfy Hermiticity. To compare with the time of arrival solution, we will also be looking at the conditions for time-reversal symmetry.
\paragraph{Hermiticity} It would be sufficient to construct conjugate operators that are Hermitian. The supraquantized time operator is Hermitian if $\adjoint{\mathcal{T}} = \mathcal{T}$. From \eqref{eq:tintop} and \eqref{eq:tk} in canonical form, the Hermiticity condition can be rewritten as
\begin{equation}
T(u,v) = T^*(u,-v) \, .
\end{equation}
We look at the integral form of $T(u,v)$ in \eqref{eq:tkeinthccanon}, and get
\begin{equation}
T^*(u,-v) = \frac{u}{4} + g^*(-v) + c^* + \frac{\mu}{2\hbar^2} \int_0^v \int_0^u \qty[V\qty(\frac{x+y}{2}) - V\qty(\frac{x-y}{2}) ]^* T^*(x,-y) \dd{x} \dd{y} \, .
\end{equation}
So, if $T(u,v) = T^*(u,-v)$ (taking $T(x,y) = T^*(x,-y)$ to hold inside the integral as well), then a sufficient condition is obtained by equating terms, giving
\begin{equation}
g(v) = g^*(-v) \, ,
\end{equation}
\begin{equation} \label{eq:chermitian}
c = c^* \, ,
\end{equation}
\begin{equation} \label{eq:vhermitian}
V(q) = V^*(q) \, .
\end{equation}
With $g(v) = \sum_k \beta_k v^k$, we can rewrite the first condition as
\begin{equation} \label{eq:bhermitian}
\beta_k = \beta_k^* (-1)^k \, .
\end{equation}
Thus, $c$ and $V(q)$ should be purely real, and the coefficients of the expansion of $g(v) = \sum_k \beta_k v^k$ should satisfy $\beta_k = \beta_k^* (-1)^k$. One way of satisfying the condition for $\beta_k$ is by letting $\beta_k = i^k \tilde{\beta}_k$, where $\tilde{\beta}_k$ is purely real. Another way is by letting $\beta_k$ vanish for odd $k$ and letting $\beta_k$ be purely real for even $k$.
\paragraph{Time-reversal symmetry} For completeness, we shall look at the conditions for time-reversal symmetry. The time operator satisfies time reversal symmetry if $\Theta\mathcal{T}\Theta^{-1} = -\mathcal{T}$ where $\Theta$ is the time-reversal operator, which in canonical form, gives
\begin{equation}
T(u,v) = T^*(u,v) \, .
\end{equation}
As with the Hermiticity conditions, the conditions for time-reversal symmetry are found to be
\begin{equation}
g(v) = g^*(v) \, ,
\end{equation}
\begin{equation} \label{eq:ctimereversal}
c = c^* \, ,
\end{equation}
\begin{equation} \label{eq:vtimereversal}
V(q) = V^*(q) \, .
\end{equation}
Thus, we require that $g(v)$, $c$, and $V(q)$ be purely real. The first condition is equivalent to having its power series coefficients satisfy
\begin{equation} \label{eq:btimereversal}
\beta_k = \beta_k^* \, ,
\end{equation}
for all $k$, that is, the $\beta_k$'s are all purely real.
\paragraph{Hermiticity and Time-reversal symmetry} The operator will satisfy both Hermiticity and time-reversal symmetry if the time kernel equation solution $T(u,v)$ satisfies all the above conditions. We see that $g$ should satisfy
\begin{equation}
g(v) = g^*(v) = g^*(-v) \, ,
\end{equation}
meaning that it is purely real and even. Looking at its expansion coefficients, both $\beta_k$ conditions \eqref{eq:bhermitian} and \eqref{eq:btimereversal} are satisfied if
\begin{equation}
(-1)^k = 1 \, ,
\end{equation}
which can only be satisfied for even $k$. Therefore, the solution is both Hermitian and satisfies time reversal symmetry if
\begin{equation}
\beta_k = 0 \, , \qquad \text{for odd $k$,}
\end{equation}
\begin{equation}
\beta_k = \beta_k^* \, , \qquad \text{for even $k$,}
\end{equation}
\begin{equation}
c = c^* \, ,
\end{equation}
\begin{equation}
V(q) = V^*(q) \, ,
\end{equation}
where the $\beta_k$'s are from $g(v) = \sum_k \beta_k v^k$.
\paragraph{General Free Particle Solution} Let us consider the general solution \eqref{eq:tkeinthccanon} for the free particle case $V(q) = 0$,
\begin{equation}
T(u,v) = \frac{u}{4} + c + g(v) \, ,
\end{equation}
which gives
\begin{equation}
\braket{q|\mathcal{T}|q'} = \frac{\mu}{i\hbar}\sgn(q-q') \qty(\frac{q+q'}{4} + c + g(q-q')) \, .
\end{equation}
The Wigner-Weyl transform \eqref{eq:wignerweyl} gives
\begin{equation}
\mathcal{T}_\hbar(q,p) = -\frac{\mu q}{p} - \frac{2\mu}{p}c + \frac{\mu}{i\hbar}\int_{-\infty}^\infty g(v) \sgn(v) \exp\qty(-iv\frac{p}{\hbar})\dd{v} \, ,
\end{equation}
where \eqref{eq:fouriertnsgnt} was used. Using $g(v) = \sum_{k=1}^\infty \beta_k v^k$, the above equation becomes
\begin{equation}
\mathcal{T}_\hbar(q,p) = -\frac{\mu q}{p} - \frac{2\mu}{p}c - \frac{2\mu}{p} \sum_{k=1}^\infty \beta_k \frac{k!}{i^k} \qty(\frac{\hbar}{p})^k \, .
\end{equation}
For purely real $c$, the first two terms satisfy Hermiticity and time-reversal symmetry. Imposing only the time-reversal condition $\beta_k = \beta_k^*$ leaves the third term with imaginary components. The Hermiticity condition removes these imaginary components, for example by setting $\beta_k = i^k \tilde{\beta}_k$ for purely real $\tilde{\beta}_k$. Imposing both the Hermiticity and time-reversal symmetry conditions leaves the third term purely real, with only terms of order $\order{\hbar^{2k}}$ surviving. The analysis of other potentials becomes tedious since the method of successive approximations is required, but one can continue to hypothesize the transfer principle so that these properties carry over.
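The coefficient pattern in the last expression can be verified term by term using \eqref{eq:fouriertnsgnt}. The sketch below (illustrative only, using \texttt{sympy}) carries out the Wigner-Weyl transform of the free-particle kernel for $g(v) = \beta_1 v + \beta_2 v^2$:

```python
import sympy as sp

q, p, mu, hbar, c, b1, b2 = sp.symbols('q p mu hbar c beta_1 beta_2', real=True)

wfreq = p / hbar                  # frequency appearing in the transform

def F(n):
    # int t^n sgn(t) e^{-i t wfreq} dt = 2 n! / (i wfreq)^{n+1}  (eq:fouriertnsgnt)
    return 2 * sp.factorial(n) / (sp.I * wfreq)**(n + 1)

# free-particle kernel factor along u = 2q: T = q/2 + c + b1*v + b2*v**2
T_ww = mu / (sp.I * hbar) * ((q / 2 + c) * F(0) + b1 * F(1) + b2 * F(2))

# expected result: -mu q/p - 2 mu c/p - (2 mu/p) sum_k b_k k!/i^k (hbar/p)^k
expected = (-mu * q / p - 2 * mu * c / p
            - (2 * mu / p) * (b1 * sp.factorial(1) / sp.I * (hbar / p)
                              + b2 * sp.factorial(2) / sp.I**2 * (hbar / p)**2))
assert sp.simplify(T_ww - expected) == 0
```

The $k = 1$ term is purely imaginary for real $\beta_1$, while the $k = 2$ term is real, matching the parity pattern discussed above.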
\section{Explicit Examples} \label{sec:examples}
One simple way of generating other Hamiltonian conjugates is by adding a term which commutes with the Hamiltonian: if $\mathcal{T}_0$ is canonically conjugate to the Hamiltonian, then so is $\mathcal{T}_0 + \mathcal{T}_C$, where $\mathcal{T}_C$ commutes with the Hamiltonian; one example is $\mathcal{T}_C = f(\operator{H})$, where $f$ is some suitable function of the Hamiltonian. Note that the Hamiltonians here are extended to the rigged Hilbert space, though from here on we drop the $\cross$ notation for convenience. Thus, we get other possible kernel solutions of the form
\begin{equation} \label{eq:tkc}
T(q,q') = T_0(q,q') + T_C(q,q') \, ,
\end{equation}
where $T_0(q,q')$ corresponds to a Hamiltonian conjugate, and $T_C(q,q')$ corresponds to an operator that commutes with the Hamiltonian. From \eqref{eq:teccrtkegbc}, we must require the kernel factor $T_C(q,q')$ in \eqref{eq:tkc} to satisfy the time kernel equation
\begin{equation} \label{eq:tkec}
-\frac{\hbar^2}{2\mu} \, \pdv[2]{T_C(q,q')}{q} + \frac{\hbar^2}{2\mu} \, \pdv[2]{T_C(q,q')}{{q'}} + [V(q) - V(q')] T_C(q,q') = 0 \, ,
\end{equation}
and the boundary condition
\begin{equation} \label{eq:tkemgbc}
\dv{T_C(q,q)}{q} + \pdv{T_C(q,q')}{q} \bigg|_{q'=q} + \pdv{T_C(q,q')}{{q'}} \bigg|_{q'=q} = 0 \, ,
\end{equation}
where the right hand side is now zero (cf. \eqref{eq:tkegbc}). This $T_C(q,q')$ solution corresponds to operators $\mathcal{T}_C$ which commute with the Hamiltonian, i.e., $\mathcal{T}_C$ is a constant of motion.
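For instance, in the free-particle case ($V = 0$), the linear function $T_C(q,q') = -i\beta(q-q')$, which reappears in the examples below, satisfies both \eqref{eq:tkec} and \eqref{eq:tkemgbc}; a quick symbolic sketch:

```python
import sympy as sp

q, qp, beta, hbar, mu = sp.symbols("q q_prime beta hbar mu")

# A commuting-part candidate for the free particle, V(q) = 0:
TC = -sp.I * beta * (q - qp)

# Homogeneous time kernel equation with V = 0: both second derivatives vanish.
tke = -hbar**2 / (2 * mu) * sp.diff(TC, q, 2) + hbar**2 / (2 * mu) * sp.diff(TC, qp, 2)
assert sp.simplify(tke) == 0

# Boundary condition with zero right-hand side:
diag = sp.diff(TC.subs(qp, q), q)  # d/dq T_C(q,q) = 0
bc = diag + sp.diff(TC, q).subs(qp, q) + sp.diff(TC, qp).subs(qp, q)
assert sp.simplify(bc) == 0
print("T_C = -i*beta*(q - q') solves the homogeneous equation and boundary condition")
```

The two first-derivative contributions cancel pairwise, which is what makes the right-hand side of the boundary condition vanish.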
We now proceed with examples of other Hamiltonian conjugate solutions. We look at different boundary conditions along the diagonal of the form \eqref{eq:tkebchc} which satisfy the Hermiticity conditions \eqref{eq:bhermitian}, \eqref{eq:chermitian}, and \eqref{eq:vhermitian}. However, for all the examples except the last one, we do not impose time-reversal symmetry. In this paper, we are mostly interested in operators which are Hermitian and are conjugate to the Hamiltonian.
\subsection{Reciprocal of the Hamiltonian}
\paragraph{Free Particle} Consider, for $V(q)=0$, the solution
\begin{equation}
T(q,q') = \frac{1}{4}(q+q') - i \beta (q-q') \, ,
\end{equation}
where $\beta$ is some constant. The first term is the usual free time of arrival solution. The second term is also a solution to the free time kernel equation which satisfies \eqref{eq:tkemgbc} instead. Note that $T(q,q') = T^*(q',q)$. This solution satisfies the boundary conditions
\begin{equation} \label{eq:tkehbc}
T(q,q) = \frac{q}{2} \qquad \text{and} \qquad T(q,-q) = - 2 i \beta q \, .
\end{equation}
This is just \eqref{eq:tkebchc} with $c=0$ and $g(2q) = -i\beta (2q)$. This means that the solution is unique and corresponds to a Hamiltonian conjugate. From Section \ref{subsec:hermtime}, for purely real $\beta$, the solution is Hermitian, which one clearly sees from $T(q,q')$; however, the solution does not satisfy time-reversal symmetry since $T(q,-q)$ is purely imaginary.
The solution is in the form $T(q,q') = T_{\text{TOA}}(q,q') + T_C(q,q')$, where $T_{\text{TOA}}$ is the usual time of arrival solution, and $T_C$ is a solution satisfying \eqref{eq:tkemgbc}. Using \eqref{eq:tk}, we get the kernel
\begin{equation}
\braket{ q | \mathcal{T} | q' } = \frac{\mu}{4i\hbar} (q + q') \sgn(q - q') - \frac{\mu \beta}{\hbar} (q - q') \sgn(q - q') \, ,
\end{equation}
The Wigner-Weyl transform \eqref{eq:wignerweyl} of this kernel is given by
\begin{equation}
\mathcal{T}_\hbar(q,p) = -\frac{\mu q}{p} + \beta \frac{2\mu\hbar}{p^2} \, ,
\end{equation}
for $p \neq 0$, where we have used the relation \eqref{eq:fouriertnsgnt}. We see that the first term is the free classical time of arrival (at the origin), while the second term $T_C$ becomes $\beta\hbar H^{-1}$, where $H(q,p) = p^2/2\mu$ is the free Hamiltonian. In the classical limit, this second term vanishes as $\hbar \to 0$ unless $\beta$ is sufficiently large, in the sense that $\beta \propto \hbar^{-1}$.
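In the convention used here, the relation \eqref{eq:fouriertnsgnt} amounts to the regularized integral $\int v \sgn(v) \, e^{-ipv/\hbar} \dd{v} = -2\hbar^2/p^2$; since the even integrand is $\abs{v}\cos(pv/\hbar)$, the sign of the exponent does not matter. A symbolic sketch with $\hbar = 1$, using an exponential regulator that is removed at the end:

```python
import sympy as sp

p, s, v = sp.symbols("p s v", positive=True)

# |v| is even, so int v*sgn(v) e^{-i p v} dv = 2 int_0^inf v cos(p v) dv; regularize
# the one-sided integral with e^{-s v} (a Laplace transform), then send s -> 0.
F = sp.laplace_transform(v * sp.cos(p * v), v, s, noconds=True)
limit0 = sp.limit(F, s, 0)

assert sp.simplify(limit0 + 1 / p**2) == 0
print(2 * limit0)  # the regularized integral: -2/p**2
```

Multiplying by the kernel prefactor $-\mu\beta/\hbar$ then reproduces the shift term $2\mu\beta\hbar/p^2$ above.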
Note that $\hbar/H$ has the correct units of time. Therefore, the boundary conditions \eqref{eq:tkehbc} correspond to the time of arrival \textit{shifted} by $\beta\hbar H^{-1}$. One could interpret the shift term in the kernel solution to be the supraquantization of $\beta\hbar H^{-1}$, i.e., for the free particle,
\begin{equation}
\beta\hbar \braket{q|\mathcal{T}_C|q'} = - \frac{\mu \beta}{\hbar} (q - q') \sgn(q - q') \, .
\end{equation}
Setting $\beta = 0$ recovers the time of arrival solution, since \eqref{eq:tkehbc} reduces to the time of arrival boundary conditions. Applying the transfer principle \cite{galapon2004}, the boundary conditions \eqref{eq:tkehbc} must then correspond to a solution of the form $T_{\text{TOA}} + \beta\hbar H^{-1}$ for all continuous potentials.
\paragraph{Harmonic Oscillator} The boundary conditions \eqref{eq:tkehbc} become
\begin{equation}
T(u,0) = \frac{u}{4} \qquad \text{and} \qquad T(0,v) = -i\beta v \, ,
\end{equation}
in canonical form. We solve for $T(u,v)$ using the Frobenius method: the solution is of the form \eqref{eq:tkesolncanon}, and substituting into the time kernel equation \eqref{eq:tkecanon} gives a recurrence relation for $\alpha_{m,n}$ which depends on $V(q)$. The boundary conditions impose that
\begin{equation}
\alpha_{m,0} = \frac{1}{4} \delta_{m,1} \qquad\text{and}\qquad \alpha_{0,n} = -i\beta \delta_{n,1} \, .
\end{equation}
We also let $\alpha_{m,n} = 0$ for $m < 0$ or $n < 0$ so that the solution is analytic.
For $V(q) = \frac{1}{2} \mu \omega^2 q^2$, the time kernel equation gives the recurrence relation
\begin{equation}
\alpha_{m,n} = \frac{\mu^2 \omega^2}{4\hbar^2} \frac{1}{mn} \alpha_{m-2,n-2} \, .
\end{equation}
The condition $\alpha_{m,0} = \frac{1}{4} \delta_{m,1}$ produces nonvanishing values for $\alpha_{1,0}$, $\alpha_{3,2}$, $\alpha_{5,4}$, and so on---these are just the terms of the time of arrival solution $T_{\text{TOA}}$. The second condition $\alpha_{0,n} = -i\beta \delta_{n,1}$ produces a separate branch of nonvanishing terms
\begin{equation}
\alpha_{0,1} = -i\beta \, ,
\end{equation}
\begin{equation}
\alpha_{2,3} = -i\beta \frac{\mu^2 \omega^2}{4\hbar^2} \qty(\frac{1}{2}) \qty(\frac{1}{3}) \, ,
\end{equation}
\begin{equation}
\alpha_{4,5} = -i\beta \qty(\frac{\mu^2 \omega^2}{4\hbar^2})^2 \qty(\frac{1}{2 \cdot 4}) \qty(\frac{1}{3 \cdot 5}) \, ,
\end{equation}
and so on. These can be collected into a single-index relation,
\begin{equation}
\alpha_{2j,2j+1} = -i\beta \qty(\frac{\mu\omega}{2\hbar})^{2j} \frac{1}{(2j+1)!} \, ,
\end{equation}
and thus, upon changing back to $(q,q')$, the solution is
\begin{equation}
T(q,q') = T_{\text{TOA}}(q,q') - i\beta \sum_{j=0}^\infty \frac{1}{(2j+1)!} \qty(\frac{\mu\omega}{2\hbar})^{2j} (q+q')^{2j} (q-q')^{2j+1} \, ,
\end{equation}
where the first term $T_{\text{TOA}}(q,q')$ is the usual harmonic oscillator time of arrival solution. One then gets the kernel \eqref{eq:tk} by multiplying $(\mu/i\hbar)\sgn(q-q')$. Upon using the Wigner-Weyl transform \eqref{eq:wignerweyl}, we get
\begin{equation}
\mathcal{T}_\hbar(q,p) = t_{\text{TOA}}(q,p) + \beta \frac{2\mu\hbar}{p^2} \sum_{j=0}^\infty (-1)^j \qty(\mu^2 \omega^2 \frac{q^2}{p^2})^j \, ,
\end{equation}
where \eqref{eq:fouriertnsgnt} was used. We see again that the first term is the local time of arrival at the origin for the harmonic oscillator case. Meanwhile, the second term is a series expansion of $\beta\hbar H^{-1}$ for the harmonic oscillator Hamiltonian $H(q,p) = p^2/2\mu + \mu\omega^2 q^2 / 2$. This converges for $q$ close to the origin, in accordance with the local time of arrival (the shifting term converges for $\abs{q}<p/\mu\omega$). We therefore arrive at the shifted time of arrival as well for sufficiently large $\beta$.
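As a consistency check, iterating the recurrence directly reproduces the closed form $\alpha_{2j,2j+1} \propto 1/(2j+1)!$; a minimal sketch in exact arithmetic, with the overall factor $-i\beta$ and the constant $(\mu\omega/2\hbar)^2$ scaled out to unity:

```python
from fractions import Fraction
from math import factorial

# Recurrence alpha_{m,n} = K/(m n) * alpha_{m-2,n-2}, seeded by alpha_{0,1};
# the factor -i*beta and K = (mu*omega/(2*hbar))^2 are scaled out (set to 1).
alpha = {(0, 1): Fraction(1)}
for j in range(1, 9):
    m, n = 2 * j, 2 * j + 1
    alpha[(m, n)] = Fraction(1, m * n) * alpha[(m - 2, n - 2)]

# Closed form: alpha_{2j,2j+1} = 1/(2j+1)!
for j in range(9):
    assert alpha[(2 * j, 2 * j + 1)] == Fraction(1, factorial(2 * j + 1))
print("recurrence reproduces 1/(2j+1)! for j = 0,...,8")
```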
The time kernel equation \eqref{eq:tke} and the boundary conditions \eqref{eq:tkehbc} then constitute the supraquantization of the series expansion of the classical time of arrival at the origin shifted by $\beta\hbar H^{-1}$. These supraquantized operators are Hermitian and are canonically conjugate to their corresponding Hamiltonians. The shifting term takes the form
\begin{equation}
\beta \hbar \braket{q|\mathcal{T}_C|q'} = \frac{\mu}{i\hbar} T_C(q,q') \sgn(q-q') \, .
\end{equation}
For the harmonic oscillator case, we see that
\begin{equation}
\braket{q|\mathcal{T}_C|q'} = -\frac{\mu}{\hbar^2} \sum_{j=0}^\infty \frac{1}{(2j+1)!} \qty(\frac{\mu\omega}{2\hbar})^{2j} (q+q')^{2j} (q-q')^{2j+1} \sgn(q-q') \, .
\end{equation}
These solutions are a manifestation of the multiple solutions of $[\operator{H},\mathcal{T}] = \pm i\hbar$, i.e., we can add any multiple of $\hbar \mathcal{T}_C$ to $\mathcal{T}$ and the commutation relation will still be satisfied. The multiple solutions to the time-energy canonical commutation relation can be differentiated by looking at their dynamics and their internal symmetry \cite{caballar2009,caballar2010}. For example, one can see that if $\beta \neq 0$, then $\braket{q|\mathcal{T}|q'}^* \neq -\braket{q|\mathcal{T}|q'}$, i.e., time-reversal symmetry is broken. This is in contrast with the time of arrival solution (the $\beta = 0$ case), where time-reversal symmetry is satisfied.
Finally, we note that when we set the first condition to $T(q,q) = 0$ (which removes the time of arrival term), we obtain a method of constructing the supraquantization of just the reciprocal of the Hamiltonian. One can see that, for linear systems, its supraquantization $\braket{q|\mathcal{T}_C|q'}$ is equivalent to the corresponding Weyl quantization $\braket{q|\operator{H}^{-1}|q'}$ \cite{caballar2010,magadan2018}, consistent with the supraquantization of the local time of arrival being equal to its Weyl quantized form \cite{galapon2004}. For nonlinear systems, however, only the leading term coincides.
\subsection{Negative Powers of the Hamiltonian}
We can generalize the above example and consider the boundary conditions
\begin{equation} \label{eq:tkehnbc}
T(q,q) = \frac{q}{2} \qquad \text{and} \qquad T(q,-q) = - i^{2N-1} \beta \mu^{-2(N-1)} (2q)^{2N-1} \, ,
\end{equation}
for any integer $N \geq 1$. Since these are also a special case of \eqref{eq:tkebchc}, the solutions likewise correspond to Hamiltonian conjugates. Since the boundary condition satisfies \eqref{eq:bhermitian}, i.e., $(i^{2N-1}\tilde{\beta}_{2N-1})^* = (-1)^{2N-1} (i^{2N-1}\tilde{\beta}_{2N-1})$ for purely real $\tilde{\beta}_{2N-1} = \beta$, the solutions are also Hermitian. However, they do not satisfy time-reversal symmetry.
Changing variables to $u=q+q'$ and $v=q-q'$ gives the conditions in canonical form
\begin{equation}
T(u,0) = \frac{u}{4} \qquad \text{and} \qquad T(0,v) = -i^{2N-1} \frac{\beta}{\mu^{2N-2}} v^{2N-1} \, .
\end{equation}
Assuming an analytic solution \eqref{eq:tkesolncanon} to the time kernel equation \eqref{eq:tkecanon}, then we get conditions
\begin{equation} \label{eq:tkehnbca}
\alpha_{m,0} = \frac{1}{4} \delta_{m,1} \qquad\text{and}\qquad \alpha_{0,n} = -i^{2N-1} \frac{\beta}{\mu^{2N-2}} \delta_{n,2N-1} \, ,
\end{equation}
and $\alpha_{m,n} = 0$ for negative $m$ or $n$.
\paragraph{Free Particle} For $V(q) = 0$, the boundary conditions give the solution
\begin{equation}
T(q,q') = T_{\text{TOA}}(q,q') - i^{2N-1} \frac{\beta}{\mu^{2N-2}} (q-q')^{2N-1} \, ,
\end{equation}
which is just the time of arrival solution plus a $T_C$ term satisfying \eqref{eq:tkemgbc}. Using the Wigner-Weyl transform \eqref{eq:wignerweyl} and relation \eqref{eq:fouriertnsgnt}, we get
\begin{equation}
\mathcal{T}_\hbar = -\frac{\mu q}{p} + \beta \frac{(2N-1)!}{2^{N-1}} \frac{\hbar^{2N-1}}{\mu^{3(N-1)}} \frac{1}{H^N} \, .
\end{equation}
Note that the second term of the transform, proportional to $\hbar^{2N-1} \mu^{-3(N-1)} H^{-N}$, also has the correct units of time. This gives us the free time of arrival shifted by a term proportional to $H^{-N}$, where $H$ is the classical Hamiltonian. It follows that the corresponding kernel solution $T_C$ is its supraquantization, for integer $N \geq 1$. The shift disappears in the classical limit unless $\beta$ is sufficiently large, e.g., when $\beta \propto \hbar^{-(2N-1)}$.
\paragraph{Harmonic Oscillator} For $V(q) = \frac{1}{2} \mu \omega^2 q^2$, the boundary conditions give a solution
\begin{equation}
T(q,q') = T_{\text{TOA}} -i^{2N-1}\frac{\beta}{\mu^{2N-2}} \sum_{s=0}^\infty \qty(\frac{\mu\omega}{2\hbar})^{2s} \frac{1}{2^s s!} \frac{\Gamma\qty(N+\frac{1}{2})}{2^s \Gamma\qty(N+\frac{1}{2}+s)} (q+q')^{2s} (q-q')^{2N-1+2s} \, .
\end{equation}
After using \eqref{eq:wignerweyl} and \eqref{eq:fouriertnsgnt},
\begin{equation}
\mathcal{T}_\hbar = t_{\text{TOA}}(q,p) + \beta \frac{(2N-1)!}{2^{N-1}} \frac{\hbar^{2N-1}}{\mu^{3(N-1)}} \qty(\frac{2\mu}{p^2})^N \sum_{s=0}^\infty (-1)^s \frac{\Gamma(N+s)}{s! \Gamma(N)} \qty(\mu^2 \omega^2 \frac{q^2}{p^2})^s \, .
\end{equation}
We see that the second term is proportional to the series expansion of $H^{-N}$ for the harmonic oscillator. Thus, we get the harmonic oscillator time of arrival shifted by a term proportional to $H^{-N}$, where $H$ is the harmonic oscillator Hamiltonian, for $q$ sufficiently close to the origin.
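The identification rests on the binomial resummation $\sum_{s \geq 0} (-1)^s \frac{\Gamma(N+s)}{s!\,\Gamma(N)} x^s = (1+x)^{-N}$ with $x = \mu^2\omega^2 q^2/p^2$, together with the prefactor identity $(p^2/2\mu)(1+x) = H$. Both can be checked numerically; a sketch:

```python
from math import comb, isclose

# For integer N, Gamma(N+s)/(s! Gamma(N)) = binom(N+s-1, s), so the alternating
# series is the binomial expansion of (1+x)^(-N), valid for |x| < 1.
def partial_sum(N, x, terms):
    return sum((-1) ** s * comb(N + s - 1, s) * x**s for s in range(terms))

for N in (1, 2, 3):
    for x in (0.1, 0.3, 0.5):
        assert isclose(partial_sum(N, x, 200), (1 + x) ** (-N), rel_tol=1e-9)

# With x = (mu*omega*q/p)^2, the prefactor resums to the Hamiltonian:
mu, omega, q, p = 2.0, 1.5, 0.7, 3.0
H = p**2 / (2 * mu) + mu * omega**2 * q**2 / 2
assert isclose((p**2 / (2 * mu)) * (1 + (mu * omega * q / p) ** 2), H)
print("series resums to (1+x)^(-N); prefactor matches H")
```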
Therefore, the boundary conditions \eqref{eq:tkehnbc} correspond to time kernel solutions that are the supraquantization of the series expansion of the time of arrival at the origin shifted by a term proportional to $H^{-N}$. The constructed supraquantized operators are unique, are Hermitian, and are conjugate to the Hamiltonian. The form of the supraquantized $H^{-N}$ is
\begin{equation}
\beta \frac{(2N-1)!}{2^{N-1}} \frac{\hbar^{2N-1}}{\mu^{3(N-1)}} \braket{q | \mathcal{T}_C | q'} = \frac{\mu}{i\hbar} T_C(q,q') \sgn(q-q') \, ,
\end{equation}
where $T_C(q,q')$ is the shifting term of the time of arrival solution.
\paragraph{General Potential} We shall now look at the general potential case,
\begin{equation} \label{eq:genpol}
V(q) = \sum_{s=1}^\infty a_s q^s \, .
\end{equation}
With this potential, the time kernel equation gives the recurrence relation \cite{galapon2004}
\begin{equation} \label{eq:genpolrecur}
\alpha_{m,n} = \frac{\mu}{2\hbar^2} \frac{1}{mn} \sum_{s=1}^\infty \frac{a_s}{2^{s-1}} \sum_{k=0}^{[s]} \binom{s}{2k+1} \alpha_{m-s+2k,n-2k-2} \, ,
\end{equation}
where $[s] = \frac{s}{2} - 1$ for even $s$, and $[s] = \frac{s-1}{2}$ for odd $s$. The first boundary condition in \eqref{eq:tkehnbca} again generates the time of arrival solution, while the second boundary condition in \eqref{eq:tkehnbca} generates the shifting term $T_C$.
We then focus on the $\alpha_{m,n}$'s generated by the second condition of \eqref{eq:tkehnbca}. For the general potential, we have, for positive $m$ and $j$ (see Appendix \ref{app:genpolsoln}),
\begin{equation}
\alpha_{m,2N-1+2j} = \sum_{s=0}^{j-1} \qty(\frac{\mu}{2\hbar^2})^{j-s} \alpha_{m,j}^{(s)} \, ,
\end{equation}
where
\begin{equation}
\alpha_{m,j}^{(s)} = \frac{1}{m(2N-1+2j)} \sum_{r=0}^{s} \sum_{n=2r+1}^{m+2r} \frac{a_n}{2^{n-1}} \binom{n}{2r+1} \alpha^{(s-r)}_{m-n+2r,j-r-1} \, .
\end{equation}
The Wigner-Weyl transform \eqref{eq:wignerweyl} gives
\begin{equation}
\mathcal{T}_\hbar \propto \sum_{s=0}^{j-1} (-1)^j \qty(\frac{\mu}{2})^{j-s} (2N-1+2j)! \alpha^{(s)}_{m,j} \frac{\hbar^{2N-1+2s}}{p^{2j}} \, .
\end{equation}
We are only interested in the leading term $s = 0$; we ignore the higher order terms $s > 0$ for now. This leading term is of order $\order{\hbar^{2N-1}}$, and the $s > 0$ terms are of order $\order{\hbar^{2N+1}}$. We then deal with
\begin{equation}
\alpha^{(0)}_{m,j} = -i^{2N-1} \frac{\beta}{\mu^{2N-2}} \frac{\Gamma\qty(N+\frac{1}{2})}{\Gamma\qty(N+\frac{1}{2}+j)} \frac{1}{2^m} C_{m,j} \, ,
\end{equation}
where
\begin{equation}
C_{m,j} = \frac{1}{m} \sum_{s=1}^m s a_s C_{m-s,j-1} \, ,
\end{equation}
and $C_{m,0} = \delta_{m,0}$. Then, our time kernel solution is of the form
\begin{equation}
T^{(0)}(u,v) = T_{\text{TOA}}^{(0)}(u,v) - i^{2N-1} \frac{\beta}{\mu^{2N-2}} \sum_{j=0}^\infty \sum_{m=0}^\infty \qty(\frac{\mu}{2\hbar^2})^j \frac{\Gamma\qty(N+\frac{1}{2})}{\Gamma\qty(N+\frac{1}{2}+j)} \frac{1}{2^m} C_{m,j} u^m v^{2N-1+2j} \, ,
\end{equation}
where $T_{\text{TOA}}^{(0)}(u,v)$ is the leading term of the time of arrival solution generated by the first condition of \eqref{eq:tkehnbca}. The second term meanwhile is the leading term of the solution generated by the second condition in \eqref{eq:tkehnbca}. The Wigner-Weyl transform then gives
\begin{equation}
\begin{split}
\mathcal{T}^{(0)}_\hbar(q,p) &= t_{\text{TOA}}(q,p) + \beta \frac{(2N-1)!}{2^{N-1}} \frac{\hbar^{2N-1}}{\mu^{3(N-1)}} \\
&\qquad \qquad \qquad \quad \times \qty(\frac{2\mu}{p^2})^N \sum_{k=0}^\infty (-1)^k \frac{\Gamma(N+k)}{k!\Gamma(N)} \qty(\frac{2\mu}{p^2})^k \qty(\sum_{s=1}^\infty a_s q^s)^k \, ,
\end{split}
\end{equation}
where the second term is nothing but a term proportional to $H^{-N}$ for the general potential.
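The recurrence for $C_{m,j}$ is what resums the potential: one can check that $C_{m,j}$ coincides with the coefficient of $u^m$ in $V(u)^j/j!$, which produces the powers $(\sum_s a_s q^s)^k$ above. A sketch with sample (hypothetical) coefficients $a_s$, in exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

# Sample (hypothetical) potential coefficients a_s for V(u) = sum_s a_s u^s
a = {1: Fraction(2), 2: Fraction(-1, 3), 3: Fraction(5, 7)}
M, J = 10, 4

# C_{m,j} from the recurrence C_{m,j} = (1/m) sum_s s a_s C_{m-s,j-1}, C_{m,0} = delta_{m,0}
C = {(m, 0): Fraction(1 if m == 0 else 0) for m in range(M + 1)}
for j in range(1, J + 1):
    C[(0, j)] = Fraction(0)
    for m in range(1, M + 1):
        C[(m, j)] = Fraction(1, m) * sum(
            s * a.get(s, Fraction(0)) * C[(m - s, j - 1)] for s in range(1, m + 1)
        )

# Compare against the u^m coefficient of V(u)^j / j!
poly = {0: Fraction(1)}  # coefficients of V(u)^0
for j in range(J + 1):
    for m in range(M + 1):
        assert C[(m, j)] == poly.get(m, Fraction(0)) / factorial(j)
    # multiply by V(u) for the next power, truncating beyond u^M
    nxt = {}
    for m1, c1 in poly.items():
        for s, c2 in a.items():
            if m1 + s <= M:
                nxt[m1 + s] = nxt.get(m1 + s, Fraction(0)) + c1 * c2
    poly = nxt
print("C_{m,j} matches [u^m] V(u)^j / j! for m <= 10, j <= 4")
```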
Thus, even in the general potential case, we still get the time of arrival shifted by a term proportional to an inverse power of the Hamiltonian. We then have the following Proposition.
\begin{proposition}
For an entire analytic potential $V(q)$, and for constant $\beta \in \mathbb{R}$ and $N \in \mathbb{Z}^+$, the solution to the time kernel equation
\begin{equation}
-\frac{\hbar^2}{2\mu} \, \pdv[2]{T(q,q')}{q} + \frac{\hbar^2}{2\mu} \, \pdv[2]{T(q,q')}{{q'}} + [V(q) - V(q')] T(q,q') = 0 \, ,
\end{equation}
with boundary conditions
\begin{equation}
T(q,q) = \frac{q}{2} \qquad \text{and} \qquad T(q,-q) = - i^{2N-1} \beta \mu^{-2(N-1)} (2q)^{2N-1} \, ,
\end{equation}
is given by
\begin{equation}
T(q,q') = T_\text{TOA}(q,q') + T_C(q,q') \, ,
\end{equation}
where the first term $T_\text{TOA}(q,q')$ is the kernel of the time of arrival operator, and the second term $T_C(q,q')$ gives the Wigner-Weyl transform
\begin{equation} \label{eq:wwtkehnbc}
\mathcal{T}_{\hbar,C}(q,p) =
\begin{cases}
\beta \frac{(2N-1)!}{2^{N-1}} \frac{\hbar^{2N-1}}{\mu^{3(N-1)}} \qty(\frac{1}{H(q,p)})^N \, , &\qquad \text{for linear systems,} \\
\beta \frac{(2N-1)!}{2^{N-1}} \frac{\hbar^{2N-1}}{\mu^{3(N-1)}} \qty(\frac{1}{H(q,p)})^N + \order{\hbar^{2N+1}} \, , &\qquad \text{for nonlinear systems.}
\end{cases}
\end{equation}
\end{proposition}
In the classical limit $\hbar \to 0$, all that remains in the Wigner-Weyl transform of $T(q,q')$ is the local time of arrival (at the origin) term $t_0(q,p)$. If $\beta \propto 1/\hbar^{2N-1}$, then the higher order terms in \eqref{eq:wwtkehnbc} reduce from $\order{\beta\hbar^{2N+1}}$ to $\order{\hbar^2}$, and the classical limit is then the local time of arrival shifted by a term proportional to a negative power of the Hamiltonian. We then interpret the time kernel solution
\begin{equation} \label{eq:tkehnbcgen}
\braket{q|\mathcal{T}|q'} = \braket{q|\mathcal{T}_\text{TOA}|q'} + \beta \frac{(2N-1)!}{2^{N-1}} \frac{\hbar^{2N-1}}{\mu^{3(N-1)}} \braket{q | \mathcal{T}_C | q'} \, ,
\end{equation}
as the supraquantization of
\begin{equation}
\mathcal{T}_0(q,p) = t_0(q,p) + \beta \frac{(2N-1)!}{2^{N-1}} \frac{\hbar^{2N-1}}{\mu^{3(N-1)}} \qty(\frac{1}{H(q,p)})^N \, .
\end{equation}
Again, we note that $\braket{q | \mathcal{T}_C | q'}$ and $\braket{q | \operator{H}^{-N} | q'}$ are not equal in general, since only the leading term of $\braket{q | \mathcal{T}_C | q'}$ corresponds to $H^{-N}$.
\subsection{An Example Satisfying Time-Reversal Symmetry}
The inverse powers of the Hamiltonian constitute solutions that can be Hermitian but do not satisfy time-reversal symmetry. The second boundary condition for these solutions was of the form $\beta_{2N-1} q^{2N-1}$ for positive integer $N$, i.e., only the $\beta_k$'s with odd $k$ are nonvanishing. To satisfy time-reversal symmetry, $\beta_k$ should instead vanish for odd $k$ and be purely real for even $k$.
Suppose we have the boundary condition
\begin{equation}
T(q,q) = \frac{q}{2} \qquad\text{and}\qquad T(q,-q) = \lambda q^2 \, ,
\end{equation}
where $\lambda$ is some constant with units of inverse length. This gives, for the free particle case $V(q) = 0$, the time kernel solution
\begin{equation}
T(q,q') = \frac{1}{4}(q+q') + \lambda \qty(\frac{q-q'}{2})^2 \, ,
\end{equation}
which gives
\begin{equation}
\braket{q|\mathcal{T}|q'} = \frac{\mu}{4i\hbar} (q+q') \sgn(q-q') + \lambda \frac{\mu}{4i\hbar} (q-q')^2 \sgn(q-q') \, .
\end{equation}
This kernel is Hermitian and satisfies time-reversal symmetry. The Wigner-Weyl transform gives
\begin{equation}
\mathcal{T}_\hbar(q,p) = -\frac{\mu q}{p} + \lambda \frac{\mu \hbar^2}{p^3} \, .
\end{equation}
The second term again has units of time, and is likewise interpreted as shifting the time of arrival; this shift vanishes as $\hbar \to 0$. Note that this shifting term is no longer a function of the Hamiltonian, yet it still corresponds to an operator which commutes with the Hamiltonian. This demonstrates that the term added to a Hamiltonian conjugate need not be a function of the Hamiltonian to still satisfy the canonical commutation relation.
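The coefficient of the shift can be cross-checked against \eqref{eq:fouriertnsgnt}: with $\hbar = 1$, the required regularized integral is $\int v^2 \sgn(v)\, e^{-ipv} \dd{v} = -2i \int_0^\infty v^2 \sin(pv) \dd{v} \to 4i/p^3$, and multiplying by the kernel prefactor $\lambda\mu/4i\hbar$ (with $\hbar$ restored by dimensional analysis) reproduces $\lambda\mu\hbar^2/p^3$. A symbolic sketch of the one-sided integral:

```python
import sympy as sp

p, s, v = sp.symbols("p s v", positive=True)

# Regularize int_0^inf v^2 sin(p v) dv as a Laplace transform, then remove the regulator.
F = sp.laplace_transform(v**2 * sp.sin(p * v), v, s, noconds=True)
limit0 = sp.limit(F, s, 0)

# -2/p^3, so the full signed integral is (-2i)*(-2/p^3) = 4i/p^3
assert sp.simplify(limit0 + 2 / p**3) == 0
```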
\section{The Modified Time Kernel Equation} \label{sec:mtke}
The transfer principle can be exhibited in a less explicit way in deriving the time of arrival operator \cite{domingo2004}. This gives a more general way of establishing quantum-classical correspondence via supraquantization. Instead of assuming a definite form of the kernel, the whole kernel $\braket{q | \mathcal{T} | q'}$ can be regarded as unknown. We then interpret our time operator $\mathcal{T}$ as a distribution on some test function space, i.e., $\braket{\mathcal{T},\varphi} = \int \mathcal{T}(q,q') \varphi(q,q') \dd{q} \dd{q'}$.
Since $\rhsextension{H}$ is a mapping from $\dual{\Phi}$ to $\dual{\Phi}$, $\rhsextension{H}\mathcal{T}$ is well defined for $\mathcal{T}\varphi \in \dual{\Phi}$. For the $\mathcal{T}\rhsextension{H}$ term, we choose $\Phi$ to be invariant under $\rhsextension{H}$ (which is satisfied if $\Phi$ is invariant under a self-adjoint $\operator{H}$); here we continue to choose $\Phi$ to be the space of infinitely differentiable functions with compact support. We then have, for $\varphi$ and $\tilde{\varphi}$ in $\Phi$,
\begin{equation}
\begin{split}
\braket{ \tilde{\varphi} | [ \rhsextension{H}, \mathcal{T} ] \varphi } &= \iint \tilde{\varphi}^*(q) \qty[-\frac{\hbar^2}{2\mu} \, \pdv[2]{ \mathcal{T}(q,q')}{q} + V(q)\mathcal{T}(q,q')] \varphi(q')\dd{q'}\dd{q} \\
& \quad - \iint \tilde{\varphi}^*(q) \qty[-\frac{\hbar^2}{2\mu} \, \pdv[2]{ \mathcal{T}(q,q')}{{q'}} + V(q')\mathcal{T}(q,q')] \varphi(q')\dd{q'}\dd{q} \, ,
\end{split}
\end{equation}
where $\mathcal{T}(q,q')$ is essentially the kernel $\braket{q|\mathcal{T}|q'}$ in the original time kernel equation, and where the $\mathcal{T}\rhsextension{H}$ term is obtained using the definition of the derivative of a distribution, $\braket{\mathcal{T}, \varphi^{(n)}} = (-1)^n \braket{\mathcal{T}^{(n)}, \varphi}$, thus
\begin{equation}
\mathcal{T}\rhsextension{H}\varphi(q') = \mathcal{T}\qty(-\frac{\hbar^2}{2\mu} \dv[2]{{q'}}\varphi(q') + V(q')\varphi(q')) = -\frac{\hbar^2}{2\mu} \dv[2]{\mathcal{T}}{{q'}}\varphi(q') + V(q')\mathcal{T}\varphi(q') \, .
\end{equation}
This holds for any $\Phi$ where the derivative of a distribution is related to the distribution of the derivative in this way. Note that the integrals here are to be understood in the distributional sense. They reduce to the usual integration for regular distributions, but are only symbolic for singular distributions (e.g., the Dirac delta distribution).
If we then impose the canonical commutation relation $[ \rhsextension{H}, \mathcal{T} ] = i\hbar$, we must require that
\begin{equation} \label{eq:mtke}
-\frac{\hbar^2}{2\mu} \, \pdv[2]{ \mathcal{T}(q,q')}{q} + \frac{\hbar^2}{2\mu} \, \pdv[2]{ \mathcal{T}(q,q')}{{q'}} + [V(q) - V(q')]\mathcal{T}(q,q') = i\hbar\delta(q-q') \, .
\end{equation}
We call \eqref{eq:mtke} the \textbf{modified time kernel equation}. Note that \textit{any} solution to the modified time kernel equation \eqref{eq:mtke} automatically corresponds to an operator that is canonically conjugate to the Hamiltonian. For the boundary condition $\mathcal{T}(\Gamma) = 0$ on any boundary $\Gamma$, it was shown to yield the same result as the original time kernel equation for linear systems \cite{domingo2004}. Compared with the original time kernel equation \eqref{eq:tke} and boundary condition \eqref{eq:tkegbc}, the modified time kernel equation is written as one equation instead of two, and does not assume that $\mathcal{T}(q,q') = (\mu/i\hbar)T(q,q')\sgn(q-q')$ for analytic $T(q,q')$.
We first express \eqref{eq:mtke} in its canonical form by changing variables to $u = q + q'$ and $v = q - q'$,
\begin{equation} \label{eq:mtkecanon}
-\frac{2\hbar^2}{\mu} \, \pdv[2]{\mathcal{T}(u,v)}{u}{v} + \qty[V\qty(\frac{u+v}{2}) - V\qty(\frac{u-v}{2})] \mathcal{T}(u,v) = i\hbar \delta(v) \, .
\end{equation}
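For instance, in the free-particle case ($V = 0$), one can check directly that the time of arrival kernel $\mathcal{T}(u,v) = (\mu/4i\hbar) \, u \sgn(v)$ satisfies \eqref{eq:mtkecanon} in the distributional sense:
\begin{equation}
-\frac{2\hbar^2}{\mu} \, \pdv{}{u}{v} \qty[\frac{\mu}{4i\hbar} \, u \sgn(v)] = -\frac{2\hbar^2}{\mu} \cdot \frac{\mu}{4i\hbar} \cdot 2\delta(v) = i\hbar \, \delta(v) \, ,
\end{equation}
since $\dv*{\sgn(v)}{v} = 2\delta(v)$.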
In \cite{domingo2004}, given the time of arrival boundary conditions, \eqref{eq:mtkecanon} was solved using a Green's function and the method of successive approximation. Here, to get the general solution, we first note that the general antiderivative of the Dirac delta is
\begin{equation}
\int^v \delta(y) \dd{y} = \alpha H(v) - \beta H(-v) + c \, ,
\end{equation}
where $H(x)$ is the Heaviside function, $\alpha$ and $\beta$ are constants, $c$ is the constant distribution, and $\alpha + \beta = 1$; indeed, $\dv{v} [\alpha H(v) - \beta H(-v)] = (\alpha + \beta) \delta(v) = \delta(v)$. Note that the antiderivative of $\delta$ still resides inside $\dual{\Phi}$.
From \eqref{eq:mtkecanon}, we then get the integral form of the modified time kernel equation,
\begin{equation}
\begin{split} \label{eq:mtkeint}
\mathcal{T}(u,v) &= \frac{\mu}{2 i \hbar} u [\alpha H(v) - \beta H(-v)] + f(u) + g(v) \\
&\quad + \frac{\mu}{2\hbar^2} \int_0^v \int_0^u \qty[V\qty(\frac{x+y}{2}) - V\qty(\frac{x-y}{2})] \mathcal{T}(x,y) \dd{x} \dd{y} \, ,
\end{split}
\end{equation}
where $f$ and $g$ are distributions in $u$ and $v$, respectively. The distribution $\mathcal{T}$ acts on test functions of two independent variables $\phi(u,v) \in \Phi$. Note that here, $\frac{\partial^{k_1+k_2}}{\partial u^{k_1} \partial v^{k_2}} \phi(u,v)$ exists and is continuous everywhere for all positive integers $k_1$ and $k_2$. Also, the support of $\phi$ is the closure of the set of all points $(u,v)$ where $\phi(u,v)$ is nonzero.
The definite integrals in \eqref{eq:mtkeint} will depend on the nature of the potential $V(q)$ and the time kernel distribution $\mathcal{T}$. For now, we will only deal with the case of continuous potentials and with regular distributions $\mathcal{T}$, i.e., with $f(u)$ and $g(v)$ locally integrable functions. One could interpret the integrals as
\begin{equation}
\begin{split}
&\Braket{\qty[V\qty(\frac{x+y}{2}) - V\qty(\frac{x-y}{2})] \mathcal{T}(x,y) \theta(x,y), \lambda(x,y)} \\
&\qquad\qquad\qquad\qquad\qquad\qquad = \int_0^v \int_0^u \qty[V\qty(\frac{x+y}{2}) - V\qty(\frac{x-y}{2})] \mathcal{T}(x,y) \dd{x} \dd{y} \, ,
\end{split}
\end{equation}
where $\theta(x,y)$ is identically equal to one over the region $0 \leq x \leq u$ and $0 \leq y \leq v$ (cf. the region $\mathcal{R}$ in Figure \ref{fig:region}) and is zero outside, and where $\lambda(x,y) \in \Phi$ is identically equal to one over a neighborhood of the region $0 \leq x \leq u$ and $0 \leq y \leq v$, and is zero outside some bigger region.
Finally, we note that two distributions $\mathcal{T}_1$ and $\mathcal{T}_2$ are equal if
\begin{equation}
\braket{\mathcal{T}_1,\phi} = \braket{\mathcal{T}_2,\phi} \, ,
\end{equation}
for every $\phi \in \Phi$; in other words, $\braket{\mathcal{T}_1 - \mathcal{T}_2,\phi} = 0$. If these regular distributions correspond to continuous functions, then $\mathcal{T}_1$ and $\mathcal{T}_2$ must be identical (at least in the neighborhood of the support of $\phi$). In general, regular distributions $\mathcal{T}_1$ and $\mathcal{T}_2$ correspond to locally integrable functions, which include functions with points of discontinuity (e.g., the Heaviside function). This means that the functions underlying two equal regular distributions can differ at most on a set of measure zero \cite{zemanian1987}.
Below, we will show that there exists a regular distribution $\mathcal{T}$ that is a solution to the modified time kernel equation. The proof is analogous to Theorem \ref{thm:tkeunique}, where here, we recast it in a distributional sense.
\begin{theorem} \label{thm:mtkeexist}
Consider the integral form of the modified time kernel equation given by \eqref{eq:mtkeint}. If $V(q)$ is continuous at every point of the real line, and if $f$ and $g$ are locally integrable functions, then there exists a regular distribution $\mathcal{T}$ that is a solution to the modified time kernel equation \eqref{eq:mtke}.
\end{theorem}
\begin{proof}
From the integral form of the modified time kernel equation \eqref{eq:mtkeint}, we can use the method of successive approximation. Let $\qty{\mathcal{T}_n}_{n=0}^\infty$ be a sequence of locally integrable functions where
\begin{equation}
\mathcal{T}_0(u,v) = \frac{\mu}{2 i \hbar} u [\alpha H(v) - \beta H(-v)] + f(u) + g(v) \, ,
\end{equation}
and
\begin{equation}
\begin{split}
\mathcal{T}_n(u,v) &= \frac{\mu}{2 i \hbar} u [\alpha H(v) - \beta H(-v)] + f(u) + g(v) \\
&\quad + \frac{\mu}{2\hbar^2} \int_0^v \int_0^u \qty[V\qty(\frac{x+y}{2}) - V\qty(\frac{x-y}{2})] \mathcal{T}_{n-1}(x,y) \dd{x} \dd{y} \, .
\end{split}
\end{equation}
For locally integrable functions $f(u)$ and $g(v)$, the first approximation $\mathcal{T}_0(u,v)$ is also a locally integrable function, and it follows that each $\mathcal{T}_n(u,v)$ is locally integrable as well. They then correspond to regular distributions in $\dual{\Phi}$. We would like to show that $\qty{\mathcal{T}_n(u,v)}_{n=0}^\infty$ converges.
It is convenient to also write $\mathcal{T}_n$ as
\begin{equation} \label{eq:mtketnsum}
\mathcal{T}_n(u,v) = \frac{\mu}{2 i \hbar} u [\alpha H(v) - \beta H(-v)] + f(u) + g(v) + \sum_{j=1}^n \qty[\mathcal{T}_j(u,v) - \mathcal{T}_{j-1}(u,v)] \, .
\end{equation}
Since $V(q)$ is continuous, there exists an $M > 0$ such that $\abs{V\qty(\frac{x+y}{2}) - V\qty(\frac{x-y}{2})} \leq M$ for any point $(x,y)$ in the region $\mathcal{R}$ in Figure \ref{fig:region}. Also, since $f$ and $g$ are locally integrable, then there exists an $N_f > 0$ and $N_g > 0$ such that $\int_{\Omega_f} \abs{f(x)} \dd{x} \leq N_f$ and $\int_{\Omega_g} \abs{g(y)} \dd{y} \leq N_g$ for all compact subsets $\Omega_f \subset \domain{f}$ and $\Omega_g \subset \domain{g}$, where $\domain{f}$ and $\domain{g}$ are the domains of $f$ and $g$ respectively. We then have,
\begin{align}
\abs{\mathcal{T}_1(u,v) - \mathcal{T}_0(u,v)} &\leq \frac{\mu M}{2 \hbar^2} \int_0^v \int_0^u \abs\bigg{\frac{\mu}{2 i \hbar} x \qty\Big[\alpha H(y) - \beta H(-y)] + f(x) + g(y)} \dd{x} \dd{y} \nonumber \\
&\leq \frac{\mu M}{2 \hbar^2} \int_0^v \int_0^u \qty\bigg[\frac{\mu}{2 \hbar} \abs{x} \qty\Big( \abs{\alpha} \abs{H(y)} + \abs{\beta} \abs{H(-y)} ) + \abs{f(x)} + \abs{g(y)}] \dd{x} \dd{y} \nonumber \\
&\leq \frac{\mu M}{2 \hbar^2} \qty[ \frac{\mu}{2\hbar} \frac{\abs{u}^2 \abs{v}}{2} \qty\Big(\abs{\alpha} + \abs{\beta}) + \abs{u}\abs{v} \qty\Big(N_f + N_g) ] \nonumber \\
&\quad = \frac{\mu M}{2 \hbar^2} \abs{u}\abs{v} \qty[\frac{\mu}{2\hbar} \frac{\abs{u}}{2} \qty\Big(\abs{\alpha} + \abs{\beta}) + \qty\Big(N_f + N_g) ] \, .
\end{align}
Similarly,
\begin{align}
\abs{\mathcal{T}_2(u,v) - \mathcal{T}_1(u,v)} &\leq \frac{\mu M}{2 \hbar^2} \int_0^v \int_0^u \abs{\mathcal{T}_1(x,y) - \mathcal{T}_0(x,y)} \dd{x} \dd{y} \nonumber \\
&\leq \qty(\frac{\mu M}{2 \hbar^2})^2 \frac{\abs{u}^2\abs{v}^2}{2 \cdot 2} \qty[\frac{\mu}{2\hbar} \frac{\abs{u}}{3} \qty\Big(\abs{\alpha} + \abs{\beta}) + \qty\Big(N_f + N_g) ] \, ,
\end{align}
and,
\begin{equation}
\abs{\mathcal{T}_3(u,v) - \mathcal{T}_2(u,v)} \leq \qty(\frac{\mu M}{2 \hbar^2})^3 \frac{\abs{u}^3\abs{v}^3}{(3 \cdot 2)(3 \cdot 2)} \qty[\frac{\mu}{2\hbar} \frac{\abs{u}}{4} \qty\Big(\abs{\alpha} + \abs{\beta}) + \qty\Big(N_f + N_g) ] \, .
\end{equation}
By induction,
\begin{equation}
\abs{\mathcal{T}_j(u,v) - \mathcal{T}_{j-1}(u,v)} \leq \qty(\frac{\mu M}{2 \hbar^2})^j \frac{\abs{u}^j\abs{v}^j}{(j!)^2} \qty[\frac{\mu}{2\hbar} \frac{\abs{u}}{(j+1)} \qty\Big(\abs{\alpha} + \abs{\beta}) + \qty\Big(N_f + N_g) ] \, .
\end{equation}
We see that this goes to zero as $j$ approaches infinity. Thus, the partial sum in \eqref{eq:mtketnsum} is absolutely and uniformly convergent for all finite values of $u$ and $v$. Therefore, the sequence $\qty{\mathcal{T}_n(u,v)}_{n=0}^\infty$ converges.
Let the limit of $\mathcal{T}_n$ as $n \to \infty$ be denoted by $\mathcal{T}$. Then, $\mathcal{T}$ is a solution to the modified time kernel equation. Since $\dual{\Phi}$ is closed under convergence \cite{zemanian1987}, $\mathcal{T}_n \in \dual{\Phi}$ implies that $\mathcal{T}$ is also in $\dual{\Phi}$, i.e., $\mathcal{T}$ is a regular distribution for locally integrable functions $f$ and $g$.
\end{proof}
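The factorial-squared suppression that drives this convergence can be illustrated directly; in the sketch below, the constants $K = \mu M/2\hbar^2$, the prefactor $\mu(\abs{\alpha}+\abs{\beta})\abs{u}/2\hbar$, and $N_f + N_g$ are arbitrary placeholder values:

```python
from math import factorial

# Majorant b_j = K^j (u v)^j / (j!)^2 * [ pref/(j+1) + Nf + Ng ] from the induction
# step of the proof, with sample (hypothetical) values for all constants.
K, u, v = 4.0, 3.0, 2.5   # K = mu*M/(2*hbar^2)
pref, NfNg = 1.7, 2.0     # mu*(|alpha|+|beta|)*|u|/(2*hbar), N_f + N_g

b = [K**j * (u * v) ** j / factorial(j) ** 2 * (pref / (j + 1) + NfNg)
     for j in range(60)]

assert max(b[40:]) < 1e-12                  # terms vanish as j grows
assert abs(sum(b) - sum(b[:40])) < 1e-12    # partial sums have converged
print(sum(b))
```

Even though $K \abs{u}\abs{v} > 1$ here, the $(j!)^2$ in the denominator eventually dominates any geometric growth, which is why the telescoping sum converges for all finite $u$ and $v$.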
When $\mathcal{T}$ is a regular distribution, there is a one-to-one relation between $\mathcal{T}$ and a locally integrable function $\mathcal{T}(q,q') = \braket{q|\mathcal{T}|q'}$. The uniqueness proof of Theorem \ref{thm:tkeunique} could also be attempted here. One has to note that all the locally integrable functions which differ at most on a set of measure zero produce the same regular distribution. Each solution can then be thought of as an equivalence class of functions corresponding to a particular regular distribution. Additionally, Theorem \ref{thm:mtkeexist} only shows the existence of a distributional solution for locally integrable functions $f$ and $g$. This does not encompass all possible solutions, since one could also choose $f$ and $g$ to be Dirac deltas, making $\mathcal{T}$ a singular distribution.
\paragraph{Hermiticity} Our operator $\mathcal{T}$ is Hermitian if $\mathcal{T}(q,q') = \mathcal{T}^*(q',q)$; in canonical form,
\begin{equation}
\mathcal{T}(u,v) = \mathcal{T}^*(u,-v) \, .
\end{equation}
From the general solution \eqref{eq:mtkeint}, we see that
\begin{equation}
\begin{split}
\mathcal{T}^*(u,-v) &= \frac{\mu}{2 i \hbar} u [\beta^* H(v) - \alpha^* H(-v)] + f^*(u) + g^*(-v) \\
&\quad + \frac{\mu}{2\hbar^2} \int_0^v \int_0^u \qty[V\qty(\frac{x+y}{2}) - V\qty(\frac{x-y}{2})]^* \mathcal{T}^*(x,-y) \dd{x} \dd{y} \, .
\end{split}
\end{equation}
Comparing the first terms of $\mathcal{T}(u,v)$ and $\mathcal{T}^*(u,-v)$ gives
\begin{equation}
(\alpha - \beta^*)H(v) = (\beta - \alpha^*)H(-v) \, .
\end{equation}
Since $H(v)$ and $H(-v)$ are linearly independent, we get
\begin{equation}
\alpha = \beta^* \, .
\end{equation}
With $\alpha + \beta = 1$, the above condition also implies that $\alpha + \alpha^* = 1$, and similarly, $\beta + \beta^* = 1$.
Thus, the Hermiticity condition $\mathcal{T}(u,v) = \mathcal{T}^*(u,-v)$ is satisfied if
\begin{equation}
\Re(\alpha) = \Re(\beta) = \frac{1}{2} \, ,
\end{equation}
\begin{equation}
\alpha = \beta^* \, ,
\end{equation}
\begin{equation}
f(u) = f^*(u) \, ,
\end{equation}
\begin{equation}
g(v) = g^*(-v) \, ,
\end{equation}
\begin{equation}
V(q) = V^*(q) \, .
\end{equation}
If the real parts of $\alpha$ and $\beta$ are $1/2$, then $\mathcal{T}(u,v)$ will contain a term with $\sgn(v)$, which will generate the usual time of arrival solution.
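To see this explicitly, write $\alpha = \frac{1}{2} + ic$ with $c$ real, so that $\beta = \alpha^* = \frac{1}{2} - ic$. Then, for $v \neq 0$,
\begin{equation}
\alpha H(v) - \beta H(-v) = \frac{1}{2} \qty[H(v) - H(-v)] + ic \qty[H(v) + H(-v)] = \frac{1}{2} \sgn(v) + ic \, ,
\end{equation}
so a Hermitian solution differs from the time of arrival form only by the constant $ic$, whose contribution $(\mu c/2\hbar) u$ to $\mathcal{T}(u,v)$ is a real function of $u$ and can be absorbed into $f(u)$.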
\paragraph{Time-reversal symmetry} Our operator $\mathcal{T}$ satisfies time-reversal symmetry if $\mathcal{T}^*(q,q') = -\mathcal{T}(q,q')$; in canonical form,
\begin{equation}
\mathcal{T}^*(u,v) = -\mathcal{T}(u,v) \, .
\end{equation}
Then, comparing the first terms gives
\begin{equation}
(\alpha - \alpha^*) H(v) = (\beta - \beta^*) H(-v) \, .
\end{equation}
Again, since $H(v)$ and $H(-v)$ are linearly independent, $\alpha = \alpha^*$ and $\beta = \beta^*$.
Thus, time-reversal symmetry holds when
\begin{equation}
\alpha = \alpha^* \, , \qquad \beta = \beta^* \, ,
\end{equation}
\begin{equation}
\alpha + \beta = 1 \, ,
\end{equation}
\begin{equation}
f(u) = -f^*(u) \, ,
\end{equation}
\begin{equation}
g(v) = -g^*(v) \, ,
\end{equation}
\begin{equation}
V(q) = V^*(q) \, .
\end{equation}
In other words, $\alpha$, $\beta$, and $V(q)$ are purely real while $f(u)$ and $g(v)$ are purely imaginary.
\paragraph{Both Hermitian and Time-reversal symmetric} If we want our solution to satisfy both Hermiticity and time-reversal symmetry, then the conditions for both must be imposed simultaneously. For the constants $\alpha$ and $\beta$: time-reversal symmetry requires them to be purely real, while Hermiticity requires that their real parts be $1/2$; thus $\alpha = \beta = 1/2$. For $f$, the two conditions $f(u) = f^*(u)$ and $f(u) = -f^*(u)$ can only hold together if $f(u) = 0$. For $g$, we have $g(v) = g^*(-v) = -g^*(v)$, which means that $g$ is purely imaginary and odd. Thus, both Hermiticity and time-reversal symmetry are satisfied if
\begin{equation}
\alpha = \beta = \frac{1}{2} \, ,
\end{equation}
\begin{equation}
f(u) = 0 \, ,
\end{equation}
\begin{equation}
g(v) = g^*(-v) = -g^*(v) \, ,
\end{equation}
\begin{equation}
V(q) = V^*(q) \, .
\end{equation}
The $\alpha = \beta = 1/2$ condition makes the first term in \eqref{eq:mtkeint} contain a $\sgn(v)$, which gives the time of arrival solution. We let $f$ vanish, $g(v)$ be odd and purely imaginary, and the potential $V(q)$ be purely real.
\subsection{Free Particle}
For the free particle case $V(q) = 0$, the general solution is simply
\begin{equation} \label{eq:mtkefree}
\mathcal{T}(u,v) = \frac{\mu}{2 i \hbar} u [\alpha H(v) - \beta H(-v)] + f(u) + g(v) \, .
\end{equation}
Using the Wigner-Weyl transform \eqref{eq:wignerweyl} and the Fourier transform of $H(t)$, with the convention $\mathfrak{F} f = \int_{-\infty}^\infty f(t) e^{-i\omega t} \dd{t}$, we have \cite{zemanian1987},
\begin{equation}
\int_{-\infty}^\infty H(t) e^{-i t\omega} \dd{t} = \pi \delta(\omega) + \frac{1}{i\omega} \, .
\end{equation}
Using $\mathfrak{F} 1 = 2\pi \delta(\omega)$ as well, we then get
\begin{equation}
\begin{split}
\mathcal{T}_\hbar(q,p) &= -\frac{\mu q}{p} - i \pi \mu q (\alpha - \beta) \delta(p) \\
& \quad + 2\pi \hbar f(2q) \delta(p) + \int_{-\infty}^\infty g(v) \exp\qty(-i\frac{p}{\hbar}v) \dd{v} \, .
\end{split}
\end{equation}
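As a check on the first two terms, the Fourier relations above give $\int_{-\infty}^\infty H(\pm v) e^{-ipv/\hbar} \dd{v} = \pi \hbar \delta(p) \pm \hbar/(ip)$, so that, with $u = 2q$ and $\alpha + \beta = 1$,
\begin{equation}
\frac{\mu}{2 i \hbar} (2q) \int_{-\infty}^\infty \qty[\alpha H(v) - \beta H(-v)] e^{-i p v/\hbar} \dd{v} = \frac{\mu q}{i\hbar} \qty[(\alpha - \beta) \pi \hbar \delta(p) + \frac{\hbar}{ip}] = -\frac{\mu q}{p} - i \pi \mu q (\alpha - \beta) \delta(p) \, .
\end{equation}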
The first term is nothing but the free time of arrival at the origin. Even without requiring Hermiticity or time-reversal symmetry, conjugacy with the Hamiltonian alone brought about this time of arrival term. Conditions on $\alpha$, $\beta$, $f$, and $g$ then determine the shift from the usual time of arrival result---and, in essence, determine all the other Hamiltonian conjugate solutions.
One could observe that the modified time kernel equation gives a more general solution than the original time kernel equation. For example, if $\alpha \neq \beta$ and $f(u) \neq 0$, then the Wigner-Weyl transform will have terms containing Dirac deltas. Since the time of arrival term $-\mu q/p$ is only valid for $p \neq 0$, we interpret the delta term as a contribution of a stationary particle, i.e., when $p=0$, the particle will never reach the arrival point unless it is already there; we must necessarily require that $f(0) = 0$ in this interpretation. This ``stationary particle" term will disappear in the classical limit $\hbar \to 0$ unless $f$ is sufficiently large, e.g., $f \propto \hbar^{-1}$.
These Dirac delta terms cannot appear in the original time kernel equation solution, since the form of the kernel is always assumed to have $\sgn(v)$ multiplied by an analytic function in $v$. In using the transform \eqref{eq:wignerweyl} and the Fourier transform of $v\sgn(v)$, we see that a Dirac delta can never arise in the classical limit of the original kernel. Therefore, the original free time kernel equation solutions only correspond to moving particles. Meanwhile, the free modified time kernel equation allows stationary particles.
Let us now look at the consequences of imposing Hermiticity and time-reversal symmetry on the Wigner-Weyl transform.
\begin{itemize}
\item Hermiticity only: The second term containing $i(\alpha - \beta) = -2\Im(\alpha)$ will become purely real, and the third term containing $f$ will be purely real. The condition on $g$ will ensure that the Fourier transform is real ($g(v) = g^*(-v)$ implies that $\mathfrak{F} g = (\mathfrak{F} g)^*$). The Wigner-Weyl transform will be purely real and will continue to contain Dirac delta terms.
\item Time-reversal symmetry only: The second and third terms are purely imaginary. The Fourier transform of $g$ is not necessarily real anymore. Dirac deltas are still present.
\item Both Hermiticity and time-reversal symmetry: Both the second and third terms vanish, and the last term is purely real. Here, all the Dirac delta terms have vanished.
\end{itemize}
Note that by letting $\alpha = \beta = 1/2$ and $f(u) = g(v) = 0$, we get the free classical time of arrival (at the origin) $\mathcal{T}_\hbar(q,p) = -\mu q p^{-1}$, and the corresponding free time kernel $\mathcal{T}(u,v) = \mu (4 i \hbar)^{-1} u \sgn(v)$.
The term containing $g$ is essentially a distributional Fourier transform, and its form will depend on $g(v)$. One interesting example is by setting $g(v) = - \mu \hbar^{-1} v \sgn(v)$. The Wigner-Weyl transform of this term is $\hbar/H(q,p)$, where $H(q,p)$ is the free classical Hamiltonian. This gives the inverse Hamiltonian shifted time of arrival, which we also constructed with the original time kernel equation.
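This follows from the distributional Fourier transform $\int_{-\infty}^\infty \abs{t} e^{-i\omega t} \dd{t} = -2/\omega^2$ (understood in the finite-part sense): with $\omega = p/\hbar$,
\begin{equation}
\int_{-\infty}^\infty \qty(-\frac{\mu}{\hbar} v \sgn(v)) \exp\qty(-i\frac{p}{\hbar}v) \dd{v} = -\frac{\mu}{\hbar} \qty(-\frac{2\hbar^2}{p^2}) = \frac{2\mu\hbar}{p^2} = \frac{\hbar}{p^2/2\mu} \, .
\end{equation}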
\subsection{Harmonic Oscillator}
In general, for $V(q) \neq 0$, the solution can be constructed by iterating the Fredholm integral equation of the second kind. For the harmonic oscillator $V(q) = \mu\omega^2 q^2/2$, we have
\begin{equation}
\begin{split}
\mathcal{T}(u,v) &= \frac{\mu}{2 i \hbar} u [\alpha H(v) - \beta H(-v)] + f(u) + g(v) \\
&\quad + \frac{\mu^2 \omega^2}{4\hbar^2} \int_0^v \int_0^u x y \mathcal{T}(x,y) \dd{x} \dd{y} \, .
\end{split}
\end{equation}
Let the initial approximation $\mathcal{T}_0$ be
\begin{equation}
\mathcal{T}_0(u,v) = \frac{\mu}{2 i \hbar} u [\alpha H(v) - \beta H(-v)] + f(u) + g(v) \, ,
\end{equation}
and let $\mathcal{T}_n$ be
\begin{equation}
\begin{split}
\mathcal{T}_n(u,v) &= \frac{\mu}{2 i \hbar} u [\alpha H(v) - \beta H(-v)] + f(u) + g(v) \\
&\quad + \frac{\mu^2 \omega^2}{4\hbar^2} \int_0^v \int_0^u xy \mathcal{T}_{n-1}(x,y) \dd{x} \dd{y} \, .
\end{split}
\end{equation}
Using the above equations, we get the next approximation $\mathcal{T}_1$, given by
\begin{equation}
\begin{split}
\mathcal{T}_1 &= \mathcal{T}_0 + \frac{\mu}{2 i \hbar} \frac{\mu^2 \omega^2}{4\hbar^2} \int_0^v \int_0^u x^2 y [\alpha H(y) - \beta H(-y)] \dd{x} \dd{y} \\
&\quad + \frac{\mu^2 \omega^2}{4\hbar^2} \int_0^v \int_0^u x y f(x) \dd{x} \dd{y} + \frac{\mu^2 \omega^2}{4\hbar^2} \int_0^v \int_0^u x y \, g(y) \dd{x} \dd{y} \, .
\end{split}
\end{equation}
Note that
\begin{equation}
\int_0^v y [\alpha H(y) - \beta H(-y)] \dd{y} =
\begin{cases}
\frac{v^2}{2} \alpha \, , & \quad \text{for } v > 0 \, , \\
\frac{v^2}{2} (-\beta) \, , & \quad \text{for } v < 0 \, .
\end{cases}
\end{equation}
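Both cases combine into the single expression
\begin{equation}
\int_0^v y \qty[\alpha H(y) - \beta H(-y)] \dd{y} = \frac{v^2}{2} \qty[\alpha H(v) - \beta H(-v)] \, ,
\end{equation}
which shows why the Heaviside structure of the inhomogeneous term is preserved in every iterate.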
Hence, we have
\begin{equation}
\begin{split}
\mathcal{T}_1 &= \mathcal{T}_0 + \frac{\mu}{2 i \hbar} \frac{\mu^2 \omega^2}{4\hbar^2} \frac{u^3}{3} \frac{v^2}{2} [\alpha H(v) - \beta H(-v)] \\
&\quad + \frac{\mu^2 \omega^2}{4\hbar^2} \frac{v^2}{2} \int_0^u x f(x) \dd{x} + \frac{\mu^2 \omega^2}{4\hbar^2} \frac{u^2}{2} \int_0^v y \, g(y) \dd{y} \, .
\end{split}
\end{equation}
The next approximation gives
\begin{equation}
\begin{split}
\mathcal{T}_2 &= \mathcal{T}_1 + \frac{\mu}{2 i \hbar} \qty(\frac{\mu^2 \omega^2}{4\hbar^2})^2 \qty(\frac{u^5}{1 \cdot 3 \cdot 5}) \qty(\frac{v^4}{2 \cdot 4}) [\alpha H(v) - \beta H(-v)] \\
&\quad + \qty(\frac{\mu^2 \omega^2}{4\hbar^2})^2 \frac{v^4}{2 \cdot 4} \int_0^u \int_0^x x' f(x') \dd{x'} \dd{x} + \qty(\frac{\mu^2 \omega^2}{4\hbar^2})^2 \frac{u^4}{2 \cdot 4} \int_0^v \int_0^y y' \, g(y') \dd{y'} \dd{y} \, .
\end{split}
\end{equation}
By induction, we get
\begin{equation}
\begin{split}
\mathcal{T}_n(u,v) &= \frac{\mu}{2 i \hbar} \sum_{j=0}^n \qty(\frac{\mu\omega}{2\hbar})^{2j} \frac{1}{(2j+1)!} u^{2j+1} v^{2j} [\alpha H(v) - \beta H(-v)] \\
&+ \sum_{j=0}^n \qty(\frac{\mu\omega}{2\hbar})^{2j} \frac{v^{2j}}{2^j j!} F_j(u) + \sum_{j=0}^n \qty(\frac{\mu\omega}{2\hbar})^{2j} \frac{u^{2j}}{2^j j!} G_j(v) \, ,
\end{split}
\end{equation}
for $n \geq 0$, where,
\begin{equation}
F_0(u) = f(u) \, , \qquad G_0(v) = g(v) \, ,
\end{equation}
\begin{equation}
F_s(u) = \int_0^u x F_{s-1}(x) \dd{x} \, , \qquad \text{and} \qquad G_s(v) = \int_0^v y G_{s-1}(y) \dd{y} \qquad \text{for } s \geq 1 \, .
\end{equation}
We arrive at the solution by letting $n \to \infty$,
\begin{equation}
\begin{split}
\mathcal{T}(u,v) &= \frac{\mu}{2 i \hbar} \sum_{j=0}^\infty \qty(\frac{\mu\omega}{2\hbar})^{2j} \frac{1}{(2j+1)!} u^{2j+1} v^{2j} [\alpha H(v) - \beta H(-v)] \\
&+ \sum_{j=0}^\infty \qty(\frac{\mu\omega}{2\hbar})^{2j} \frac{v^{2j}}{2^j j!} F_j(u) + \sum_{j=0}^\infty \qty(\frac{\mu\omega}{2\hbar})^{2j} \frac{u^{2j}}{2^j j!} G_j(v) \, .
\end{split}
\end{equation}
This is the general solution to the modified time kernel equation for the harmonic oscillator case.
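The first series can be summed in closed form using $\sum_{j=0}^\infty z^{2j}/(2j+1)! = \sinh(z)/z$:
\begin{equation}
\frac{\mu}{2 i \hbar} \sum_{j=0}^\infty \qty(\frac{\mu\omega}{2\hbar})^{2j} \frac{u^{2j+1} v^{2j}}{(2j+1)!} = \frac{1}{i \omega v} \sinh\qty(\frac{\mu \omega u v}{2\hbar}) \, ,
\end{equation}
so that for $\alpha = \beta = 1/2$ the first term reduces to $(2 i \omega v)^{-1} \sinh(\mu\omega u v/2\hbar) \sgn(v)$, a $\sinh$ kernel of the same form as the harmonic oscillator time of arrival solution.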
Using $\mathfrak{F}(it)^k = (-1)^k 2\pi \delta^{(k)}(\omega)$, and \cite{zemanian1987}
\begin{equation}
\int_{-\infty}^\infty t^n H(t) e^{-it\omega} \dd{t} = i^n \pi \delta^{(n)}(\omega) + \frac{n!}{(i\omega)^{n+1}} \, ,
\end{equation}
the Wigner-Weyl transform gives
\begin{equation}
\begin{split}
\mathcal{T}_\hbar(q,p) &= - \sum_{j=0}^\infty \frac{(-1)^j}{2j+1} \mu^{2j+1} \omega^{2j} \frac{q^{2j+1}}{p^{2j+1}} - i \pi (\alpha - \beta) \sum_{j=0}^\infty \frac{(-1)^j}{(2j+1)!} \frac{\mu^{2j+1} \omega^{2j}}{\hbar^{2j}} q^{2j+1} \delta^{(2j)}(p) \\
&\quad + 2\pi \sum_{j=0}^\infty \frac{(-1)^j}{8^j j!} \frac{\mu^{2j} \omega^{2j}}{\hbar^{2j-1}} F_j(2q) \delta^{(2j)}(p) + \sum_{j=0}^\infty \frac{1}{2^j j!} \frac{\mu^{2j}\omega^{2j}}{\hbar^{2j}} q^{2j} \int_{-\infty}^\infty G_j(v) \exp\qty(-i\frac{p}{\hbar}v) \dd{v} \, .
\end{split}
\end{equation}
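The first sum is the classical time of arrival of the oscillator: using $\sum_{j=0}^\infty (-1)^j x^{2j+1}/(2j+1) = \arctan(x)$ for $\abs{x} < 1$,
\begin{equation}
- \sum_{j=0}^\infty \frac{(-1)^j}{2j+1} \mu^{2j+1} \omega^{2j} \frac{q^{2j+1}}{p^{2j+1}} = -\frac{1}{\omega} \arctan\qty(\frac{\mu \omega q}{p}) \, ,
\end{equation}
which is the time it takes a classical oscillator with initial data $(q,p)$ to reach the origin.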
The consequences of Hermiticity and time-reversal symmetry are similar to those of the free particle case. Imposing Hermiticity makes the second and third terms purely real. For $g(v) = g^*(-v)$, we get $G_s(v) = G^*_s(-v)$ for $s \geq 0$, and so the Fourier transform in the last term is purely real as well. Imposing time-reversal symmetry gives imaginary components in the solution. Imposing both Hermiticity and time-reversal symmetry leaves us with the first and last terms (removing all Dirac delta terms and their derivatives), both purely real.
We therefore see that the general solution of the modified time kernel equation has terms in its Wigner-Weyl transform containing Dirac deltas and their derivatives. Imposing conditions on $\alpha$ (or $\beta$), $f$ and $g$ determines one specific (equivalence class) solution. These give solutions to the time-energy canonical commutation relation. We get the usual time of arrival solution when $\alpha = \beta$ and $f(u) = g(v) = 0$.
\section{Conclusion} \label{sec:conclusion}
The time kernel equation and the accompanying boundary conditions provide a solution to the time-energy canonical commutation relation in position representation, one specific case of which is the time of arrival solution. In this paper, we accomplished the following: (1) we rewrote the boundary conditions in a more convenient form, (2) gave conditions for Hermiticity and time-reversal symmetry, (3) provided some interesting examples of other Hamiltonian conjugate solutions, namely the shifted time of arrival solutions, and (4) considered a modified form of the time kernel equation and studied its solutions. The modified time kernel equation provides an even more general way of constructing Hermitian Hamiltonian conjugates: it removes the assumption on the form of the kernel and admits locally integrable functions, which results in Dirac deltas in the Wigner-Weyl transform that are not present in the original kernel solution.
Both of the above methods of solving a differential equation constitute the supraquantization of an operator that is canonically conjugate to the Hamiltonian in position representation. We see that with this requirement, for a particle in one dimension under a continuous potential $V(q)$, both solutions give the time of arrival plus some other terms. This time of arrival term is always present, and could be thought of as the ``master time'', with the other possible Hamiltonian conjugates being shifted times of arrival.
The (original) time kernel equation solutions can be written as Bender-Dunne operators in position representation \cite{galapon2008}, and are thus Hilbert space operators \cite{bunao2014}. This connection with the Bender-Dunne operators is not present in the modified kernel solution. In position representation, the Bender-Dunne minimal solution gives a kernel that is an analytic function multiplied by a signum function; this does not encompass all possible solutions that the modified time kernel equation provides. The status of the modified time kernel equation solutions as Hilbert space operators is still an unanswered question.
Are all these solutions time observables? While, at the minimum, we require conjugacy with the Hamiltonian and Hermiticity, one can only interpret a particular solution as some time observable by looking at its classical limit. What about its self-adjointness? That would require further study, for instance, of the deficiency indices of these operators \cite{reed1975}. What is the role of time-reversal symmetry? For now, it is unclear whether this is a strict requirement for all time observables.
\section*{Data Availability Statement}
There is no data associated with this manuscript.
\begin{appendices}
\section{The General Potential Solution} \label{app:genpolsoln}
The general potential \eqref{eq:genpol} for the time kernel equation \eqref{eq:tke} gives the recurrence relation \eqref{eq:genpolrecur} for the coefficients of the analytic solution. The boundary conditions \eqref{eq:tkehnbca} give two branches of nonvanishing terms: the first condition gives the time of arrival, and the second condition gives the shifting term. This second condition gives a nonvanishing $\alpha_{0,2N-1}$ for positive integer $N$, and so the recurrence relation \eqref{eq:genpolrecur} makes all $\alpha_{m,n}$ terms with even $n$ vanish.
We solve this following steps analogous to those in \cite{galapon2004}. We now look at the branch of coefficients $\alpha_{m,n}$ with odd $n$. First, we have, for $m \geq 0$,
\begin{equation} \label{eq:am2n-1}
\alpha_{m,2N-1} = -i^{2N-1} \frac{\beta}{\mu^{2N-2}} \delta_{m,0} \, .
\end{equation}
Using the recurrence relation \eqref{eq:genpolrecur}, we get $\alpha_{m,2N+1}$ for $m \geq 1$,
\begin{equation}
\alpha_{m,2N+1} = -i^{2N-1} \frac{\beta}{\mu^{2N-2}} \frac{\mu}{2\hbar^2} \frac{1}{2N+1} \frac{a_m}{2^{m-1}} \, .
\end{equation}
Similarly,
\begin{equation}
\begin{split}
\alpha_{m,2N+3} &= -i^{2N-1} \frac{\beta}{\mu^{2N-2}} \qty(\frac{\mu}{2\hbar^2})^2 \frac{1}{m(2N+1)(2N+3)} \sum_{s=1}^{m-1} \frac{s a_s a_{m-s}}{2^{m-2}} \\
&\qquad -i^{2N-1} \frac{\beta}{\mu^{2N-2}} \frac{\mu}{2\hbar^2} \frac{1}{m(2N+3)} \frac{a_{m+2}}{2^{m+1}} \binom{m+2}{3} \, .
\end{split}
\end{equation}
This suggests that, for $m \geq 1$ and $j \geq 1$,
\begin{equation} \label{eq:amnasmj}
\alpha_{m,2N-1+2j} = \sum_{s=0}^{j-1} \qty(\frac{\mu}{2\hbar^2})^{j-s} \alpha_{m,j}^{(s)} \, .
\end{equation}
The recurrence relation \eqref{eq:genpolrecur} gives
\begin{equation}
\alpha_{m,2N-1+2j} = \frac{\mu}{2\hbar^2} \frac{1}{m(2N-1+2j)} \sum_{s=1}^\infty \frac{a_s}{2^{s-1}} \sum_{k=0}^{[s]} \binom{s}{2k+1} \alpha_{m-s+2k,2N+2j-2k-3} \, .
\end{equation}
Note that $\alpha_{m-s+2k,2N+2j-2k-3}$ is nonvanishing for $m-s+2k \geq 0$ and $2N+2j-2k-3 \geq 2N - 1$. Also note that the binomial coefficient is nonvanishing for $s \geq 2k+1$. We can then rewrite the above equation as
\begin{equation}
\alpha_{m,2N-1+2j} = \frac{\mu}{2\hbar^2} \frac{1}{m(2N-1+2j)} \sum_{k=0}^{j-1} \sum_{s=2k+1}^{m+2k} \frac{a_s}{2^{s-1}} \binom{s}{2k+1} \alpha_{m-s+2k,2N-1+2(j-k-1)} \, .
\end{equation}
Substituting \eqref{eq:amnasmj} to the right hand side gives
\begin{equation}
\begin{split}
\alpha_{m,2N-1+2j} &= \frac{\mu}{2\hbar^2} \frac{1}{m(2N-1+2j)} \sum_{k=0}^{j-1} \sum_{s=2k+1}^{m+2k} \frac{a_s}{2^{s-1}} \binom{s}{2k+1} \\
&\quad \times \sum_{r=0}^{j-k-2} \qty(\frac{\mu}{2\hbar^2})^{j-k-1-r} \alpha^{(r)}_{m-s+2k,j-k-1} \, .
\end{split}
\end{equation}
Since $0 \leq k \leq j - 1$, the summation in $r$ is nonvanishing up to $j-k-1$, and so we can rewrite this as
\begin{equation}
\begin{split}
\alpha_{m,2N-1+2j} &= \frac{1}{m(2N-1+2j)} \\
&\quad \times \sum_{k=0}^{j-1} \sum_{r=0}^{j-k-1} \sum_{s=2k+1}^{m+2k} \frac{a_s}{2^{s-1}} \binom{s}{2k+1} \qty(\frac{\mu}{2\hbar^2})^{j-k-r} \alpha^{(r)}_{m-s+2k,j-k-1} \, .
\end{split}
\end{equation}
Interchanging the $r$ and $k$ summations, we get
\begin{equation}
\begin{split}
\alpha_{m,2N-1+2j} &= \frac{1}{m(2N-1+2j)} \\
&\qquad \times \sum_{r=0}^{j-1} \sum_{k=0}^{j-r-1} \sum_{s=2k+1}^{m+2k} \frac{a_s}{2^{s-1}} \binom{s}{2k+1} \qty(\frac{\mu}{2\hbar^2})^{j-k-r} \alpha^{(r)}_{m-s+2k,j-k-1} \, .
\end{split}
\end{equation}
Rewriting this by letting $k$ run from $0$ to $r$ gives
\begin{equation}
\begin{split}
\alpha_{m,2N-1+2j} &= \sum_{r=0}^{j-1} \qty(\frac{\mu}{2\hbar^2})^{j-r} \\
&\qquad \times \frac{1}{m(2N-1+2j)} \sum_{k=0}^{r} \sum_{s=2k+1}^{m+2k} \frac{a_s}{2^{s-1}} \binom{s}{2k+1} \alpha^{(r-k)}_{m-s+2k,j-k-1} \, .
\end{split}
\end{equation}
Comparing this with \eqref{eq:amnasmj}, we thus get the recurrence relation
\begin{equation} \label{eq:asmjrecur}
\alpha_{m,j}^{(s)} = \frac{1}{m(2N-1+2j)} \sum_{r=0}^{s} \sum_{n=2r+1}^{m+2r} \frac{a_n}{2^{n-1}} \binom{n}{2r+1} \alpha^{(s-r)}_{m-n+2r,j-r-1} \, .
\end{equation}
From \eqref{eq:amnasmj}, we see that the nonvanishing coefficients contribute to $v^{2N-1+2j}$, giving a Wigner-Weyl contribution \eqref{eq:wignerweyl} proportional to
\begin{equation}
\mathcal{T}_\hbar \propto \frac{1}{\hbar} \alpha_{m,2N-1+2j} \int_{-\infty}^\infty v^{2N-1+2j} \sgn(v) \exp\qty(-i\frac{p}{\hbar}v) \dd{v} \, ,
\end{equation}
which gives, upon using \eqref{eq:fouriertnsgnt},
\begin{equation}
\mathcal{T}_\hbar \propto \sum_{s=0}^{j-1} (-1)^j \qty(\frac{\mu}{2})^{j-s} (2N-1+2j)! \alpha^{(s)}_{m,j} \frac{\hbar^{2N-1+2s}}{p^{2j}} \, .
\end{equation}
From our results for linear systems, we infer that the Wigner-Weyl transform here should look like
\begin{equation}
\mathcal{T}_\hbar = \beta \frac{(2N-1)!}{2^{N-1}} \frac{\hbar^{2N-1}}{\mu^{3(N-1)}} \frac{1}{H^N} \, .
\end{equation}
This means that we are only interested in the $\hbar^{2N-1}$ term in the Wigner-Weyl transform since that is the only contributing factor to our desired classical limit. This term is the $s = 0$ term in our calculations. The leading $\hbar$ correction is then $\mathcal{O}(\hbar^{2N+1})$, corresponding to $s=1$.
Since only the $s=0$ terms correspond to the classical limit, then from \eqref{eq:asmjrecur}, the recurrence relation that we are interested in studying is
\begin{equation} \label{eq:a0mjrecur}
\alpha^{(0)}_{m,j} = \frac{1}{m(2N-1+2j)} \sum_{n=1}^m \frac{n a_n}{2^{n-1}} \alpha^{(0)}_{m-n,j-1} \, ,
\end{equation}
where we have let $\alpha^{(s)}_{m,j}$ for $s > 0$ vanish.
We note in passing that the leading $\hbar$ correction of the time of arrival result is $\mathcal{O}(\hbar^2)$ for nonlinear systems \cite{galapon2004}. We then need to let $\hbar$ vanish if we are to recover the correct classical limit for the time of arrival. One could choose $\beta$ such that $\beta\hbar^{2N-1}$ does not vanish, i.e., $\beta \propto 1/\hbar^{2N-1}$, so that the leading $\hbar$ correction for $T_C$ becomes $\mathcal{O}(\hbar^2)$. In this scenario, letting $\mathcal{O}(\hbar^2)$ vanish will not remove the $H^{-N}$ term.
Going back, since $\alpha_{m,2N-1}$ is given by \eqref{eq:am2n-1}, then
\begin{equation} \label{eq:a0m0}
\alpha^{(0)}_{m,0} = -i^{2N-1} \frac{\beta}{\mu^{2N-2}} \delta_{m,0} \, ,
\end{equation}
\begin{equation}
\alpha^{(0)}_{m,1} = -i^{2N-1} \frac{\beta}{\mu^{2N-2}} \frac{1}{2N+1} \frac{a_m}{2^{m-1}} \, ,
\end{equation}
\begin{equation}
\alpha^{(0)}_{m,2} = -i^{2N-1} \frac{\beta}{\mu^{2N-2}} \frac{1}{m(2N+3)(2N+1)} \frac{1}{2^{m-2}} \sum_{s=1}^m s a_s a_{m-s} \, ,
\end{equation}
\begin{equation}
\alpha^{(0)}_{m,3} = -i^{2N-1} \frac{\beta}{\mu^{2N-2}} \frac{1}{m(2N+5)(2N+3)(2N+1)} \frac{1}{2^{m-3}} \sum_{s=1}^m \frac{s a_s}{m-s} \sum_{r=1}^{m-s} r a_r a_{m-s-r} \, ,
\end{equation}
where we have used the recurrence relation \eqref{eq:a0mjrecur} to get the other nonvanishing terms. We infer that
\begin{equation} \label{eq:a0mjcmj1}
\alpha^{(0)}_{m,j} = -i^{2N-1} \frac{\beta}{\mu^{2N-2}} \frac{\Gamma\qty(N+\frac{1}{2})}{\Gamma\qty(N+\frac{1}{2}+j)} \frac{1}{2^m} C_{m,j} \, .
\end{equation}
Substituting this to the right side of \eqref{eq:a0mjrecur} gives
\begin{equation} \label{eq:a0mjcmj2}
\alpha^{(0)}_{m,j} = -i^{2N-1} \frac{\beta}{\mu^{2N-2}} \frac{\Gamma\qty(N+\frac{1}{2})}{\Gamma\qty(N+\frac{1}{2}+j)} \frac{1}{2^m} \frac{1}{m} \sum_{s=1}^m s a_s C_{m-s,j-1} \, .
\end{equation}
Comparing \eqref{eq:a0mjcmj1} and \eqref{eq:a0mjcmj2}, we get a recurrence relation for $C_{m,j}$,
\begin{equation} \label{eq:cmjrecur}
C_{m,j} = \frac{1}{m} \sum_{s=1}^m s a_s C_{m-s,j-1} \, .
\end{equation}
From \eqref{eq:a0m0} and \eqref{eq:a0mjcmj1}, we see that $\alpha^{(0)}_{m,0} = -i^{2N-1} \frac{\beta}{\mu^{2N-2}} \delta_{m,0} = -i^{2N-1} \frac{\beta}{\mu^{2N-2}} \frac{1}{2^m} C_{m,0}$, so $C_{m,0} = 2^m \delta_{m,0}$, i.e.,
\begin{equation} \label{eq:cm0}
C_{m,0} = \delta_{m,0} \, .
\end{equation}
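As a consistency check, the recurrence \eqref{eq:cmjrecur} together with \eqref{eq:cm0} reproduces the coefficients listed above:
\begin{equation}
C_{m,1} = \frac{1}{m} \sum_{s=1}^m s a_s \delta_{m-s,0} = a_m \, , \qquad C_{m,2} = \frac{1}{m} \sum_{s=1}^m s a_s a_{m-s} \, ,
\end{equation}
which, upon substitution into \eqref{eq:a0mjcmj1}, recover $\alpha^{(0)}_{m,1}$ and $\alpha^{(0)}_{m,2}$.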
We now go back to the time kernel solution, which takes the form
\begin{equation}
T(u,v) = T_{\text{TOA}}(u,v) + \sum_{j=0}^\infty \sum_{m=0}^\infty \alpha_{m,2N-1+2j} u^m v^{2N-1+2j} \, .
\end{equation}
From \eqref{eq:amnasmj}, this becomes
\begin{equation}
T(u,v) = T_{\text{TOA}}^{(0)}(u,v) + \sum_{j=0}^\infty \sum_{m=0}^\infty \qty(\frac{\mu}{2\hbar^2})^j \alpha^{(0)}_{m,j} u^m v^{2N-1+2j} \, ,
\end{equation}
wherein we only consider the $s = 0$ term. In the time of arrival solution, only the $s = 0$ term is taken as well \cite{galapon2004}. With our condition that $\beta\hbar^{2N-1}$ does not vanish, we see that this equation is the leading order solution to the general potential case, as we have let $\order{\hbar^2}$ vanish. To continue, we use \eqref{eq:a0mjcmj1} to get
\begin{equation}
T(u,v) = T_{\text{TOA}}^{(0)}(u,v) - i^{2N-1} \frac{\beta}{\mu^{2N-2}} \sum_{j=0}^\infty \sum_{m=0}^\infty \qty(\frac{\mu}{2\hbar^2})^j \frac{\Gamma\qty(N+\frac{1}{2})}{\Gamma\qty(N+\frac{1}{2}+j)} \frac{1}{2^m} C_{m,j} u^m v^{2N-1+2j} \, .
\end{equation}
Thus, the Wigner-Weyl transform \eqref{eq:wignerweyl} gives
\begin{equation}
\begin{split}
\mathcal{T}_\hbar(q,p) &= t_{\text{TOA}}(q,p) - \frac{\mu}{\hbar} \frac{\beta}{2^{2N-2}} \sum_{j=0}^\infty \sum_{m=0}^\infty \qty(\frac{\mu}{2\hbar^2})^j \frac{\Gamma\qty(N+\frac{1}{2})}{\Gamma\qty(N+\frac{1}{2}+j)} \frac{1}{2^m} C_{m,j} (2q)^m \\
&\qquad \qquad \qquad \quad \times \int_{-\infty}^\infty v^{2N-1+2j} \sgn(v) \exp\qty(-i\frac{p}{\hbar}v) \dd{v} \, ,
\end{split}
\end{equation}
which becomes, after using \eqref{eq:fouriertnsgnt} and some simplifications,
\begin{equation}
\begin{split}
\mathcal{T}_\hbar(q,p) &= t_{\text{TOA}}(q,p) + \beta \frac{(2N-1)!}{2^{N-1}} \frac{\hbar^{2N-1}}{\mu^{3(N-1)}} \\
&\qquad \qquad \qquad \quad \times \qty(\frac{2\mu}{p^2})^N \sum_{k=0}^\infty (-1)^k \frac{\Gamma(N+k)}{k!\Gamma(N)} \qty(\frac{2\mu}{p^2})^k k! \sum_{m=0}^\infty C_{m,k} q^m \, .
\end{split}
\end{equation}
Note that for the general potential \eqref{eq:genpol},
\begin{equation}
\frac{1}{H^N} = \qty(\frac{2\mu}{p^2})^N \sum_{k=0}^\infty (-1)^k \frac{\Gamma(N+k)}{k!\Gamma(N)} \qty(\frac{2\mu}{p^2})^k \qty(\sum_{s=1}^\infty a_s q^s)^k \, ,
\end{equation}
for sufficiently small $q$. We are then left to show that
\begin{equation} \label{eq:cmk}
k! \sum_{m=0}^\infty C_{m,k} q^m = \qty(\sum_{s=1}^\infty a_s q^s)^k \, ,
\end{equation}
so that the shifting term approaches the correct classical limit.
Firstly, for $k=0$,
\begin{equation}
\sum_{m=0}^\infty C_{m,0} q^m = 1 \, ,
\end{equation}
and so, by \eqref{eq:cm0}, we know that this equality holds. To show that it holds for $k \geq 1$, we first differentiate both sides of \eqref{eq:cmk} with respect to $q$,
\begin{equation}
k! \sum_{m=0}^\infty m C_{m,k} q^{m-1} = k \qty(\sum_{s=1}^\infty a_s q^s)^{k-1} \sum_{r=1}^\infty r a_r q^{r-1} \, .
\end{equation}
Suppose \eqref{eq:cmk} holds at order $k-1$; we can then rewrite the above equation as
\begin{equation}
k! \sum_{m=0}^\infty m C_{m,k} q^{m-1} = k \qty((k-1)! \sum_{n=0}^\infty C_{n,k-1} q^n) \sum_{r=1}^\infty r a_r q^{r-1} \, .
\end{equation}
We rewrite the double sum as
\begin{equation}
\sum_{m=0}^\infty m C_{m,k} q^{m-1} = \sum_{m=0}^\infty \sum_{j=0}^m (m-j) a_{m-j} C_{j,k-1} q^{m-1} \, ,
\end{equation}
or,
\begin{equation}
\sum_{m=0}^\infty m C_{m,k} q^{m-1} = \sum_{m=0}^\infty \sum_{s=0}^m s a_s C_{m-s,k-1} q^{m-1} \, ,
\end{equation}
and thus, we obtain
\begin{equation}
m C_{m,k} = \sum_{s=0}^m s a_s C_{m-s,k-1} \, .
\end{equation}
By \eqref{eq:cmjrecur}, we know that this equality holds as well, implying that \eqref{eq:cmk} is indeed true.
\end{appendices}
\section{Introduction}
\vskip 0.15cm
While many new data on charmed baryon nonleptonic weak decays have become
available in recent years, the experimental study of hadronic weak decays of bottom
baryons is only just getting into gear. This is best illustrated
by the decay mode $\Lambda_b\to J/\psi\Lambda$ which is interesting both
experimentally and theoretically. Its branching ratio was originally
measured by the UA1 Collaboration to be $(1.8\pm 1.1)\times 10^{-2}$
\cite{UA1}. However,
both CDF \cite{CDF93} and LEP \cite{LEP} Collaborations did not see any
evidence for this decay. The theoretical situation is equally ambiguous:
The predicted branching ratio ranges from $10^{-3}$ to $10^{-5}$. Two early
estimates \cite{Dunietz,Cheng92} based on several different approaches
for treating the $\Lambda_b\to\Lambda$ form factors
yield a branching ratio of order $10^{-3}$. It was reconsidered in
\cite{CT96} within the nonrelativistic quark model by taking into account
the $1/m_Q$ corrections to baryonic form factors at zero recoil and
the result
${\cal B}(\Lambda_b\to J/\psi\Lambda)=1.1\times 10^{-4}$ was
obtained (see the erratum in \cite{CT96}).
Recently, it was found that ${\cal B}(\Lambda_b\to J/\psi\Lambda)$
is of order $10^{-5}$ in \cite{Datta} by extracting form factors at
zero recoil from experiment and in \cite{Guo} by generalizing the Stech's
approach for form factors to the baryon case. This issue is finally settled
experimentally: The decay $\Lambda_b\to J/\psi\Lambda$ is observed
by CDF \cite{CDF96}
and the ratio of cross section times branching fraction, $\sigma_{\Lambda_b}
{\cal B}(\Lambda_b\to J/\psi\Lambda)/[\sigma_{B^0}{\cal B}(B^0\to J/\psi
K_S)]$ is measured. The branching ratio of $\Lambda_b\to J/\psi \Lambda$
turns out to be $(3.7\pm 1.7\pm 0.4)\times
10^{-4}$, assuming $\sigma_{\Lambda_b}/\sigma_B=0.1/0.375$ and ${\cal B}
(B^0\to J/\psi K_S)=3.7\times 10^{-4}$. It is interesting to note that this is
also the first successful measurement of exclusive hadronic decay rate
of bottom baryons, even though the branching ratio of
$\Lambda_b\to\Lambda\pi$ is expected
to exceed that of $\Lambda_b\to J/\psi\Lambda$ by an order of magnitude.
Needless to say, more and more bottom baryon decay data will be
accumulated in the near future.
Encouraged by the consistency between experiment and our nonrelativistic
quark model calculations for $\Lambda_b\to J/\psi\Lambda$, we would like to
present in this work a systematic study of exclusive nonleptonic decays of
bottom baryons (for earlier studies, see \cite{Rudaz,Mannel}).
Just as the meson case, all hadronic weak decays of baryons
can be expressed in terms of the following quark-diagram amplitudes
\cite{CCT}: ${\cal A}$, the external $W$-emission diagram; ${\cal B}$, the internal
$W$-emission diagram; ${\cal C}$, the $W$-exchange diagram and ${\cal E}$, the
horizontal $W$-loop diagram. The external and
internal $W$-emission diagrams are sometimes referred to as color-allowed and
color-suppressed factorizable contributions. However, baryons being made out
of three quarks, in contrast to two quarks for mesons, bring along several
essential complications. First of all, the factorization approximation that
the hadronic matrix element is factorized into the product of two matrix
elements of single currents and that the nonfactorizable term such as
the $W$-exchange contribution is negligible relative to the factorizable
one is known empirically to be working reasonably well for describing the
nonleptonic weak decays of heavy mesons \cite{Cheng89}. However, this
approximation is {\it a priori} not directly applicable to the charmed
baryon case as $W$-exchange there, manifested as pole diagrams, is no
longer subject to helicity and color suppression.\footnote{This is
different from the naive color suppression of internal
$W$-emission. It is known in the heavy meson case that nonfactorizable
contributions will render the color suppression of internal $W$-emission
ineffective. However, the
$W$-exchange in baryon decays is not subject to color suppression
even in the absence of nonfactorizable terms. A simple way to see this is
to consider the large-$N_c$ limit. Although the $W$-exchange diagram is down
by a factor of $1/N_c$ relative to the external $W$-emission one, it is
compensated by the fact that the baryon contains $N_c$ quarks in the limit
of large $N_c$, thus allowing $N_c$ different possibilities for $W$ exchange
between heavy and light quarks \cite{Korner}.}
That is, the pole contribution can be as important as the factorizable one.
The experimental measurement of the decay modes $\Lambda_c^+\to\Sigma^0\pi^+,~
\Sigma^+\pi^0$ and $\Lambda^+_c\to\Xi^0K^+$, which do not receive any
factorizable contributions, indicates that $W$-exchange
indeed plays an essential role in charmed baryon decays. Second,
there are more possibilities in drawing the ${\cal B}$ and ${\cal C}$ types of
amplitudes \cite{CCT}; in general there exist two distinct internal
$W$-emissions and
several different $W$-exchange diagrams and only one of the internal
$W$-emission amplitudes is factorizable.
The nonfactorizable pole contributions to hadronic weak decays
of charmed baryons have been studied in the literature \cite{CT92,CT93,XK92}.
For example, the nonfactorizable $s$- and $p$-wave amplitudes for
${1\over 2}^+\to {1\over 2}^++P(V)$ decays ($P$: pseudoscalar meson, $V$:
vector meson) are dominated by low-lying ${1\over 2}^-$ baryon resonances
and ${1\over 2}^+$ ground-state baryon poles, respectively. However, the
estimation of pole amplitudes is a
difficult and nontrivial task since it involves weak baryon matrix elements
and strong coupling constants of ${1\over 2}^+$ and ${1\over 2}^-$ baryon
states. This is the case in particular for $s$-wave terms as
we know very little about the ${1\over 2}^-$ states.
As a consequence, the evaluation of pole diagrams is far more uncertain
than the factorizable terms. Nevertheless, the bottom baryon system
has some advantages over the charmed baryon one. First, $W$-exchange is
expected to be less important in the nonleptonic decays of the former.
The argument goes as follows. The $W$-exchange contribution to the total
decay width of the heavy baryon relative to the spectator diagram is of
order $R=32\pi^2|\psi_{Qq}(0)|^2/m^3_Q$ \cite{Cheng92}, where the square
of the wave function $|\psi_{Qq}(0)|^2$ determines the probability of
finding a light quark $q$ at the location of the heavy quark $Q$. Since
$|\psi_{cq}(0)|^2\sim |\psi_{bq}(0)|^2\sim (1-2)\times 10^{-2}\,{\rm GeV}^3$
\cite{Cheng92},
it is clear that $R$ is of order unity in the charmed baryon case, while it
is largely suppressed in bottom baryon decays. Therefore, although
$W$-exchange plays a dramatic role in charmed baryon case (it even dominates
over the spectator contribution in hadronic decays of $\Lambda_c^+$ and
$\Xi_c^0$ \cite{Cheng92}), it becomes negligible in inclusive hadronic decays
of bottom baryons. It is thus reasonable to assume that the same suppression
also holds for the two-body nonleptonic weak decays of bottom
baryons. Second, for charmed baryon decays, there are only a few decay modes
which proceed through external or internal $W$-emission diagram, namely,
Cabibbo-allowed $\Omega_c^0\to\Omega^-\pi^+(\rho^+),~\Xi^{*0}\bar{K}^0
(\bar{K}^{*0})$ and Cabibbo-suppressed $\Lambda_c^+\to p\phi$,
$\Omega_c^0\to\Xi^-\pi^+(\rho^+)$. However, even at the Cabibbo-allowed
level, there already exist a significant number of bottom baryon decays
which receive
contributions only from factorizable diagrams (see Tables II and III below)
and $\Lambda_b\to J/\psi \Lambda$ is one of the most noticeable examples.
For these decay modes we can make a reliable estimate based on the
factorization approach as they do not involve troublesome nonfactorizable
pole terms. Moreover, with the aforementioned suppression of
$W$-exchange, many decay channels are dominated by
external or internal $W$-emission. Consequently, contrary to the charmed
baryon case, it suffices to apply the factorization hypothesis to describe
most of Cabibbo-allowed two-body nonleptonic decays of bottom baryons, and
this makes the study of bottom baryon decays considerably simpler than that
in charmed baryon decays.
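The suppression argument above is easy to check numerically. The sketch below (our own illustration) evaluates $R=32\pi^2|\psi_{Qq}(0)|^2/m^3_Q$ over the quoted wave-function range $|\psi_{Qq}(0)|^2\sim(1-2)\times 10^{-2}\,{\rm GeV}^3$, with the constituent masses $m_c=1.6$ GeV and $m_b=5$ GeV used later in Sec.~II:

```python
import math

def R(psi_sq, m_Q):
    """W-exchange/spectator ratio R = 32*pi^2*|psi(0)|^2 / m_Q^3 (GeV units)."""
    return 32 * math.pi**2 * psi_sq / m_Q**3

# quoted range |psi(0)|^2 ~ (1-2) x 10^-2 GeV^3; constituent masses in GeV
for psi_sq in (1e-2, 2e-2):
    print(f"charm:  R = {R(psi_sq, 1.6):.2f}")   # order unity
    print(f"bottom: R = {R(psi_sq, 5.0):.3f}")   # a few percent
```

The charm ratio indeed comes out of order unity, while the bottom ratio lands at the few-percent level quoted in Sec.~II.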
Under the factorization approximation, the baryon decay amplitude is
governed by a decay constant and form factors. In order to study
heavy-to-heavy and heavy-to-light baryon form factors, we will follow
\cite{CT96} to employ the nonrelativistic quark model to evaluate the
form factors at zero recoil. Of course, the quark model results should be
in agreement with the predictions of the heavy quark effective theory (HQET)
for antitriplet-to-antitriplet
heavy baryon form factors to the first order in $1/m_Q$ and for
sextet-to-sextet ones to the zeroth order in $1/m_Q$. The
quark model, however, has the merit that it is applicable to heavy-to-light
baryonic transitions as well and accounts for $1/m_Q$ effects for
sextet-to-sextet heavy baryon transition.
In this paper, we will generalize the work of \cite{CT96}
to ${1\over 2}^+-{3\over 2}^+$ transitions in order to study the decays
${1\over 2}^+\to{3\over 2}^++P(V)$. Following conventional practice, we then
make the pole dominance assumption for the $q^2$ dependence to
extrapolate the form factor from zero recoil to the desired $q^2$ point.
The present paper is organized as follows. In Sec.~II we first
discuss the quark-diagram amplitudes for Cabibbo-allowed bottom baryon
decays. Then with the form factors calculated using the nonrelativistic quark
model, the external and internal $W$-emission amplitudes are computed
under the factorization approximation. Results of model calculations and
their physical implications are discussed in Sec.~III. Details of
the quark model evaluation of form factors are presented in Appendix A and the
kinematics for nonleptonic decays of baryons is summarized in Appendix B.
\section{Nonleptonic Weak Decays of Bottom Baryons}
\subsection{Quark Diagram Classification}
The light quarks of the bottom baryons belong to either a ${\bf {\bar 3}}$ or
a ${\bf 6}$ representation of the flavor SU(3). The $\Lambda_b^0$,
$\Xi_b^{0A}$, and $\Xi_b^{-A}$ form a
${\bf {\bar 3}}$ representation and they all decay weakly. The $\Omega_b^-$,
$\Xi_b^{0S}$, $\Xi_b^{-S}$, $\Sigma_b^{+,0,-}$ form a ${\bf 6}$
representation; among them, however, only $\Omega_b^-$ decays weakly.
Denoting the bottom baryon, charmed baryon, octet baryon, decuplet baryon
and octet meson by $B_b,~B_c,~B({\bf 8}),~B({\bf 10})$ and $M({\bf 8})$, respectively,
the two-body nonleptonic decays of bottom baryon can be classified into:
\begin{eqnarray}
& (a) &~~~~ B_b({\bf {\bar 3}})\to B_c({\bf {\bar 3}})+M({\bf 8}), \nonumber \\
& (b) &~~~~ B_b({\bf {\bar 3}})\to B_c({\bf 6})+M({\bf 8}), \\
& (c) &~~~~ B_b({\bf {\bar 3}})\to B({\bf 8})+M({\bf 8}), \nonumber \\
& (d) &~~~~ B_b({\bf {\bar 3}})\to B({\bf 10})+M({\bf 8}), \nonumber
\end{eqnarray}
and
\begin{eqnarray}
& (e) &~~~~ B_b({\bf 6})\to B_c({\bf 6})+M({\bf 8}), \nonumber \\
& (f) &~~~~ B_b({\bf 6})\to B_c^*({\bf 6})+M({\bf 8}), \nonumber \\
& (g) &~~~~ B_b({\bf 6})\to B_c({\bf {\bar 3}})+M({\bf 8}), \\
& (h) &~~~~ B_b({\bf 6})\to B({\bf 8})+M({\bf 8}), \nonumber \\
& (i) &~~~~ B_b({\bf 6})\to B({\bf 10})+M({\bf 8}), \nonumber
\end{eqnarray}
where $B_c^*$ designates a spin-${3\over 2}$ sextet charmed baryon. In
\cite{CCT} we have given a general formulation of the quark-diagram
scheme for the nonleptonic weak decays of charmed baryons, which can
be generalized directly to the bottom baryon case.
The general quark diagrams for decays in (2.1) and (2.2) are:
the external $W$-emission ${\cal A}$, internal $W$-emission diagrams ${\cal B}$ and ${\cal B'}$,
$W$-exchange diagrams ${\cal C}_1,~{\cal C}_2$ and ${\cal C'}$, and the horizontal $W$-loop
diagrams ${\cal E}$ and ${\cal E}'$ (see Fig.~2 of \cite{CCT} for notation and for
details).\footnote{The quark diagram amplitudes ${\cal A},~{\cal B},~{\cal B'}\cdots$ etc. in
each type of hadronic decays are
in general not the same. For octet baryons in the final state, each of the
$W$-exchange and $W$-loop amplitudes has two more independent types: the
symmetric and the antisymmetric, for example, ${\cal C}_{1A}$, ${\cal C}_{1S}$,
${\cal E}_A,~{\cal E}_S,\cdots$ etc. \cite{CCT}.}
The quark coming from the bottom quark decay in diagram ${\cal B'}$ contributes
to the final meson formation, whereas it contributes to the final
baryon formation in diagram ${\cal B}$. Consequently, diagram ${\cal B'}$ contains
factorizable contributions,
whereas ${\cal B}$ does not. Note that, contrary to the charmed baryon case,
the horizontal $W$-loop diagrams (or the so-called penguin diagrams under
one-gluon-exchange approximation) can contribute to some of
Cabibbo-allowed decays of bottom baryons. Since the two spectator light
quarks in the heavy baryon are antisymmetrized in $B_Q({\bf {\bar 3}})$ and
symmetrized in $B_Q({\bf 6})$
and since the wave function of $B({\bf 10})$ is totally symmetric, it is clear
that factorizable amplitudes ${\cal A}$ and ${\cal B'}$ cannot contribute to the decays
of types (b), (d) and (g). For example, decays of type (d)
receive contributions only from the $W$-exchange and $W$-loop diagrams,
namely ${\cal C}_{2S},~{\cal C}'_S$ and ${\cal E}_S$ (see Fig.~1 of \cite{CCT}).
There are only a few Cabibbo-allowed $B_b({\bf {\bar 3}})\to B({\bf 10})+M({\bf 8})$ decays:
\begin{eqnarray}
\Lambda_b^0\to\,D^0\Delta^0,~D^{*0}\Delta^0;~~~~\Xi_b^{0,-}\to D^0\Sigma
^{*0,-},~D^{*0}\Sigma^{*0,-}.
\end{eqnarray}
They all only receive contributions from the $W$-exchange diagram ${\cal C}'_S$.
We have shown in Tables II and III the quark-diagram amplitudes for those
Cabibbo-allowed bottom baryon decays that do receive contributions from the
external $W$-emission ${\cal A}$ or the internal $W$-emission ${\cal B'}$.
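As a bookkeeping aid, the symmetry argument above can be encoded in a small table. The sketch below is our own summary of the text (not part of \cite{CCT}): the factorizable amplitudes ${\cal A}$ and ${\cal B'}$ drop out whenever the symmetry of the two spectator light quarks differs between the initial and final baryons, which is what forbids them in types (b), (d) and (g).

```python
# Amplitude classes surviving for the decay types of Eqs. (2.1)-(2.2)
# discussed in the text (C and E stand generically for the W-exchange
# and W-loop families). The spectator light-quark pair is antisymmetric
# in 3bar, symmetric in 6 and in the decuplet.
allowed = {
    "a": {"A", "B", "B'", "C", "E"},   # 3bar -> Bc(3bar) + M(8)
    "b": {"B", "C", "E"},              # 3bar -> Bc(6)    + M(8)
    "c": {"A", "B", "B'", "C", "E"},   # 3bar -> B(8)     + M(8)
    "d": {"C", "E"},                   # 3bar -> B(10)    + M(8): W-exchange/loop only
    "g": {"B", "C", "E"},              # 6    -> Bc(3bar) + M(8)
}

factorizable = {"A", "B'"}
for t in ("b", "d", "g"):
    assert not (allowed[t] & factorizable)
print("types b, d, g carry no factorizable amplitude")
```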
\subsection{Factorizable Contributions}
At the quark level, the hadronic decays of bottom baryons proceed through the
various quark diagrams mentioned above. At the hadronic level, the decay
amplitudes are conventionally evaluated using factorization approximation
for quark diagrams ${\cal A}$ and ${\cal B'}$ and pole approximation for the remaining
diagrams ${\cal B},~{\cal C}_1,~{\cal C}_2,\cdots$ \cite{Korner,CT92,CT93,XK92}. Among all
possible pole contributions, including resonances and continuum states, one
usually focuses on the most important poles such as the low-lying ${1\over
2}^+,{1\over 2}^-$ states. However, it is difficult to make a reliable
estimate of pole contributions since they involve baryon matrix elements and
strong coupling constants of the pole states. Fortunately, among the 32
decay modes of Cabibbo-allowed decays ${1\over 2}^+\to{1\over 2}^++P(V)$
listed in Table II and 8 channels of ${1\over 2}^+\to {3\over 2}^++P(V)$ in
Table III, 20 of them receive
contributions only from factorizable terms.
Furthermore, as discussed in the Introduction, the $W$-exchange
contribution to the inclusive decay rate of bottom baryons relative to
the spectator decay is of order $32\pi^2|\psi_{bq}(0)|^2/m^3_b\sim (3-5)\%$.
It is thus reasonable to assume that the same suppression persists at the
exclusive two-body decay level. The penguin contributions ${\cal E}$ and ${\cal E}'$ to
the Cabibbo-allowed decay modes e.g.,
$\Lambda_b\to D_s^{(*)}\Lambda_c,~\Xi_b\to D_s^{(*)}
\Xi_c,~\Omega_b\to D_s^{(*)}\Omega_c$ (see Table II) can be safely neglected
since the Wilson coefficient $c_6(m_b)$ of the penguin operator $O_6$ is
of order 0.04 \cite{Buras} and there is no chiral enhancement in the
hadronic matrix element of $O_6$ due to the absence of a light meson in
the final state. Therefore, by neglecting the $W$-exchange contribution as
a first order approximation, we can make sensible predictions for most of
decay modes exhibited in Tables II and III. As for the nonfactorizable
internal $W$-emission ${\cal B}$, there is no reason to argue that it is negligible.
To proceed we first consider the Cabibbo-allowed decays $B_b(
{1\over 2}^+)\to B({1\over 2}^+)+P(V)$. The general amplitudes are
\begin{eqnarray}
{\cal M}[B_i(1/2^+)\to B_f(1/2^+)+P] &=& i\bar{u}_f(p_f)(A+B
\gamma_5)u_i(p_i), \\
{\cal M}[B_i(1/2^+)\to B_f(1/2^+)+V] &=& \bar{u}_f(p_f)\varepsilon^{*\mu}[A_1
\gamma_\mu\gamma_5+A_2(p_f)_\mu\gamma_5+B_1\gamma_\mu+B_2(p_f)_\mu] u_i(p_i),
\nonumber
\end{eqnarray}
where $\varepsilon_\mu$ is the polarization vector of the vector meson. The
QCD-corrected weak Hamiltonian responsible for Cabibbo-allowed
hadronic decays of bottom baryons reads
\begin{eqnarray}
{\cal H}_W=\,{G_F\over\sqrt{2}}\,V_{cb}V_{ud}^*(c_1O_1+c_2O_2)+(u\to
c,~d\to s),
\end{eqnarray}
with $O_1=(\bar{u}d)(\bar{c}b)$ and $O_2=(\bar{u}b)(\bar{c}d)$, where
$(\bar{q}_1q_2)\equiv\bar{q}_1\gamma_\mu(1-\gamma_5)q_2$. Under factorization
approximations, the external or internal $W$-emission contributions to the
decay amplitudes are given by
\begin{eqnarray}
A &=& \lambda a_{1,2}f_P(m_i-m_f)f_1(m^2_P), \nonumber \\
B &=& \lambda a_{1,2}f_P(m_i+m_f)g_1(m^2_P),
\end{eqnarray}
and
\begin{eqnarray}
A_1 &=& -\lambda a_{1,2}f_Vm_V[g_1(m_V^2)+g_2(m^2_V)(m_i-m_f)], \nonumber \\
A_2 &=& -2\lambda a_{1,2}f_Vm_Vg_2(m^2_V), \nonumber \\
B_1 &=& \lambda a_{1,2}f_Vm_V[f_1(m_V^2)-f_2(m^2_V)(m_i+m_f)], \\
B_2 &=& 2\lambda a_{1,2}f_Vm_Vf_2(m^2_V), \nonumber
\end{eqnarray}
where $\lambda=G_F V_{cb}V_{ud}^*/\sqrt{2}$ or $G_F V_{cb}V_{cs}^*/\sqrt{2}$,
depending on the final meson state under consideration, $f_i$ and $g_i$
are the form factors defined by ($q=p_i-p_f$)
\begin{eqnarray}
\langle B_f(p_f)|V_\mu-A_\mu|B_i(p_i)\rangle &=& \bar{u}_f(p_f)
[f_1(q^2)\gamma_\mu+if_2(q^2)\sigma_{\mu\nu}q^\nu+f_3(q^2)q_\mu \nonumber \\
&& -(g_1(q^2)\gamma_\mu+ig_2(q^2)\sigma_{\mu\nu}q^\nu+g_3(q^2)q_\mu)\gamma_5]
u_i(p_i),
\end{eqnarray}
$m_i~(m_f)$ is the mass of the initial (final) baryon,
$f_P$ and $f_V$ are the decay constants
of pseudoscalar and vector mesons, respectively, defined by
\begin{eqnarray}
\langle 0|A_\mu|P\rangle =-\langle P|A_\mu|0\rangle=\,if_P q_\mu,~~~~\langle 0|V_\mu|V\rangle=\,\langle
V|V_\mu|0\rangle=\,f_Vm_V\varepsilon^*_\mu,
\end{eqnarray}
with the normalization $f_\pi=132$ MeV.
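As a concrete illustration of Eq.~(2.6), the sketch below assembles the $S$-wave ($A$) and $P$-wave ($B$) amplitudes for a ${1\over 2}^+\to{1\over 2}^++P$ mode. The form factor values and $a_1$ are placeholders (the actual numbers come from Table I), the $\Lambda_c$ mass is an assumed input, and $|V_{cb}|=0.038$ is the value quoted later in the text:

```python
import math

G_F = 1.16637e-5           # Fermi constant in GeV^-2
V_cb, V_ud = 0.038, 0.974  # CKM elements (|V_cb| as quoted in the text)

def amplitudes_P(a1, f_P, m_i, m_f, f1, g1):
    """S-wave (A) and P-wave (B) amplitudes of Eq. (2.6), GeV units."""
    lam = G_F * V_cb * V_ud / math.sqrt(2.0)
    A = lam * a1 * f_P * (m_i - m_f) * f1
    B = lam * a1 * f_P * (m_i + m_f) * g1
    return A, B

# Illustrative numbers for Lambda_b -> Lambda_c pi^-: f_pi = 0.132 GeV,
# m_i = 5.621 GeV, assumed m_f = 2.285 GeV; f1(m_pi^2) ~ g1(m_pi^2) ~ 0.5
# are placeholders, not Table I values.
A, B = amplitudes_P(a1=1.0, f_P=0.132, m_i=5.621, m_f=2.285, f1=0.5, g1=0.5)
print(f"A = {A:.3e} GeV,  B = {B:.3e} GeV")
```

Note that for equal form factors the $P$-wave amplitude exceeds the $S$-wave one by the kinematic ratio $(m_i+m_f)/(m_i-m_f)$.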
Since in this paper we rely heavily on the factorization approximation to
describe bottom baryon decay, we digress for a moment to discuss its content.
In the naive factorization approach,
the coefficients $a_1$ for the external $W$-emission amplitude and $a_2$
for internal $W$-emission are given by $(c_1+{c_2\over 3})$ and
$(c_2+{c_1\over 3})$, respectively. However, we have learned from charm
decay that
the naive factorization approach never works for the decay rates of
color-suppressed decay modes, though it usually does for color-allowed
decays. For example, the predicted rate of $\Lambda_c^+\to p\phi$ in the
naive approach is too small when compared with experiment \cite{CT92}.
This implies that the inclusion of
nonfactorizable contributions is inevitable and necessary. If nonfactorizable
effects amount to a redefinition of the effective parameters $a_1$,
$a_2$ and are universal (i.e., channel-independent) in charm or bottom
decays, then we still have a new factorization scheme with the
universal parameters $a_1,~a_2$ to be determined from experiment. Throughout
this paper, we will thus treat $a_1$ and $a_2$ as free effective parameters.
The factorization hypothesis implies universal and channel-independent
$a_1^{\rm eff}$ and $a_2^{\rm eff}$ in charm or bottom decay.\footnote{
For $D(B)\to PP$ or $VP$ decays ($P$ denotes a pseudoscalar meson, V a
vector meson),
nonfactorizable effects can always be lumped into the effective parameters
$a_1$ and $a_2$. For $D(B)\to VV$ and heavy baryon decays, universal
nonfactorizable terms are assumed under the factorization approximation.
The first systematic study of heavy meson decays within the framework
of improved factorization was carried out by Bauer, Stech and Wirbel
\cite{BSW}. Theoretically, nonfactorizable terms come mainly from
color-octet currents. Phenomenological analyses of $D$ and $B$ decay data
\cite{Cheng,Kamal} indicate that while the factorization hypothesis in
general works reasonably well, the effective parameters $a_{1,2}$ do
show some variations from channel to channel.}
Since we shall consider heavy-to-heavy and heavy-to-light baryonic
transitions, it is clear that HQET is not adequate for our purposes: the
predictive power of HQET for baryon form factors at order $1/m_Q$ is limited
to antitriplet-to-antitriplet heavy baryonic transitions. Hence, we will
follow \cite{CT96} to apply the nonrelativistic quark model to evaluate
the weak current-induced baryon
form factors at zero recoil in the rest frame of the heavy parent baryon,
where the quark model is most trustworthy. This quark model approach has
the merit that it is applicable to heavy-to-heavy and heavy-to-light
baryonic transitions at maximum $q^2$ and that it becomes meaningful
to consider $1/m_q$ corrections so long as the recoil momentum is smaller
than the $m_q$ scale.
The complete quark model results for form factors $f_i$ and $g_i$ at zero
recoil read \cite{CT96}
\begin{eqnarray}
f_1(q^2_m)/N_{fi} &=& 1-{\Delta m\over 2m_i}+{\Delta m\over
4m_im_q}\left(1-{\bar{\Lambda}\over 2m_f}\right)(m_i+m_f-\eta\Delta m)\nonumber \\
&& -{\Delta m\over 8m_im_f}\,{\bar{\Lambda}\over m_Q}(m_i+m_f+\eta\Delta m),
\nonumber \\
f_2(q^2_m)/N_{fi} &=& {1\over 2m_i}+{1\over 4m_im_q}\left(1-{\bar{\Lambda}
\over 2m_f}\right)[\Delta m-(m_i+m_f)\eta] \nonumber \\
&& -{\bar{\Lambda}\over 8m_im_fm_Q}[\Delta m+(m_i+m_f)\eta], \nonumber \\
f_3(q^2_m)/N_{fi} &=& {1\over 2m_i}-{1\over 4m_im_q}\left(1-{\bar{\Lambda}
\over 2m_f}\right)(m_i+m_f-\eta\Delta m) \nonumber \\
&& +{\bar{\Lambda}\over 8m_im_fm_Q}(m_i+m_f+\eta\Delta m), \\
g_1(q^2_m)/N_{fi} &=& \eta+{\Delta m\bar{\Lambda}\over 4}\left({1\over m_i
m_q}-{1\over m_fm_Q}\right)\eta, \nonumber \\
g_2(q^2_m)/N_{fi} &=& -{\bar{\Lambda}\over 4}\left({1\over m_i m_q}-
{1\over m_fm_Q}\right)\eta, \nonumber \\
g_3(q^2_m)/N_{fi} &=& -{\bar{\Lambda}\over 4}\left({1\over m_i m_q}+
{1\over m_fm_Q}\right)\eta, \nonumber
\end{eqnarray}
where $\bar{\Lambda}=m_f-m_q$, $\Delta m=m_i-m_f$, $q_m^2=\Delta m^2$,
$\eta=1$ for the ${\bf {\bar 3}}$ baryon $B_i$, and $\eta=-{1\over 3}$ for the
${\bf 6}$ baryon $B_i$, and $N_{fi}$ is a flavor factor:
\begin{eqnarray}
N_{fi}=\,_{\rm flavor-spin}\langle B_f|b_q^\dagger b_Q|B_i\rangle_{\rm flavor-spin}
\end{eqnarray}
for the heavy quark $Q$ in the parent baryon $B_i$ transiting into the
quark $q$ (being a heavy quark $Q'$ or a light quark) in the daughter baryon
$B_f$. It has been shown in \cite{CT96} that the quark model predictions
agree with HQET for antitriplet-to-antitriplet (e.g., $\Lambda_b\to\Lambda_c,~
\Xi_b\to\Xi_c$) form factors to order $1/m_Q$. For sextet $\Sigma_b\to
\Sigma_c$ and $\Omega_b\to\Omega_c$ transitions, the HQET predicts that
to the zeroth order in $1/m_Q$ (see e.g., \cite{Yan})
\begin{eqnarray}
\langle B_f(v',s')|V_\mu|B_i(v,s)\rangle &=& -{1\over 3}\bar{u}_f(v',s')\Big\{\left[
\omega\gamma_\mu-2(v+v')_\mu\right]\xi_1(\omega) \nonumber \\
&& +\left[(1-\omega^2)\gamma_\mu-2(1-\omega)(v+v')_\mu\right]\xi_2(\omega)
\Big\}u_i(v,s), \nonumber \\
\langle B_f(v',s')|A_\mu|B_i(v,s)\rangle &=& {1\over 3}\bar{u}_f(v',s')\Big\{\left[
\omega\gamma_\mu+2(v-v')_\mu\right]\xi_1(\omega) \\
&& +\left[(1-\omega^2)\gamma_\mu-2(1+\omega)(v-v')_\mu\right]\xi_2(\omega)
\Big\}u_i(v,s), \nonumber
\end{eqnarray}
where $\omega\equiv v\cdot v'$, $\xi_1$ and $\xi_2$ are two universal baryon
Isgur-Wise functions with the normalization of $\xi_1$ known to be
$\xi_1(1)=1$. From Eq.~(2.12) we obtain
\begin{eqnarray}
f_1 &=& F_1+{1\over 2}(m_i+m_f)\left({F_2\over m_i}+{F_3\over m_f}\right),
\nonumber \\
f_2 &=& {1\over 2}\left({F_2\over m_i}+{F_3\over m_f}\right), \nonumber \\
f_3 &=& {1\over 2}\left({F_2\over m_i}-{F_3\over m_f}\right), \\
g_1 &=& G_1-{1\over 2}(m_i-m_f)\left({G_2\over m_i}+{G_3\over m_f}\right),
\nonumber \\
g_2 &=& {1\over 2}\left({G_2\over m_i}+{G_3\over m_f}\right), \nonumber \\
g_3 &=& {1\over 2}\left({G_2\over m_i}-{G_3\over m_f}\right), \nonumber
\end{eqnarray}
with
\begin{eqnarray}
&& F_1=-G_1=-{1\over 3}\,[\omega\xi_1+(1-\omega^2)\xi_2], \nonumber \\
&& F_2=F_3={2\over 3}\,[\xi_1+(1-\omega)\xi_2], \\
&& G_2=-G_3={2\over 3}\,[\xi_1-(1+\omega)\xi_2]. \nonumber
\end{eqnarray}
Since $N_{fi}=1$ and $\eta=-{1\over 3}$ for sextet-to-sextet transition, it
follows from (2.10) that
\begin{eqnarray}
&& f_1(q_m^2) = -{1\over 3}\left[ 1-(m_i+m_f)\left({1\over m_i}+{1\over m_f}
\right)\right], \nonumber \\
&& f_2(q^2_m) =\,{1\over 3}\left({1\over m_i}+{1\over m_f}\right),~~~~
f_3(q^2_m) =\,{1\over 3}\left({1\over m_i}-{1\over m_f}\right), \\
&& g_1(q^2_m)=-{1\over 3},~~~~g_2(q^2_m)=g_3(q_m^2)=0. \nonumber
\end{eqnarray}
It is easily seen that at zero recoil, $\omega=1$, the quark model results
(2.15) are in accord with the HQET predictions (2.13) and (2.14) provided that
\begin{eqnarray}
\xi_2(1)=\,{1\over 2}\,\xi_1(1)=\,{1\over 2}.
\end{eqnarray}
This is precisely the prediction of large-$N_c$ QCD \cite{Chow}.
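This consistency can be verified mechanically. The sketch below (our own cross-check) evaluates $f_{1,2,3}$ at $\omega=1$ from Eqs.~(2.13) and (2.14) with $\xi_1(1)=1$, $\xi_2(1)={1\over 2}$, for arbitrary baryon masses, and compares with the zero-recoil expressions (2.15); the masses used are illustrative:

```python
def hqet_f(m_i, m_f, xi1=1.0, xi2=0.5, w=1.0):
    """f_1, f_2, f_3 from Eqs. (2.13)-(2.14) for sextet-to-sextet transitions."""
    F1 = -(w * xi1 + (1 - w**2) * xi2) / 3.0
    F2 = F3 = 2.0 * (xi1 + (1 - w) * xi2) / 3.0
    f1 = F1 + 0.5 * (m_i + m_f) * (F2 / m_i + F3 / m_f)
    f2 = 0.5 * (F2 / m_i + F3 / m_f)
    f3 = 0.5 * (F2 / m_i - F3 / m_f)
    return f1, f2, f3

def quark_model_f(m_i, m_f):
    """f_1, f_2, f_3 at zero recoil from Eq. (2.15)."""
    f1 = -(1 - (m_i + m_f) * (1 / m_i + 1 / m_f)) / 3.0
    f2 = (1 / m_i + 1 / m_f) / 3.0
    f3 = (1 / m_i - 1 / m_f) / 3.0
    return f1, f2, f3

# illustrative Sigma_b -> Sigma_c masses in GeV
m_i, m_f = 5.81, 2.455
for a, b in zip(hqet_f(m_i, m_f), quark_model_f(m_i, m_f)):
    assert abs(a - b) < 1e-12
print("quark model (2.15) matches HQET at zero recoil for xi2(1) = 1/2")
```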
Three remarks are in order. First, there are two different quark model
calculations of baryon form factors \cite{Marcial,Sing} prior to the work
\cite{CT96}. An obvious criterion for testing the reliability of quark
model calculations is that model results must satisfy all the constraints
imposed by heavy quark symmetry. In the heavy quark limit, normalizations of
heavy-to-heavy form factors and hence some relations between form factors at
zero recoil are fixed by heavy quark symmetry. These constraints are not
respected in \cite{Marcial}. While this discrepancy is improved in the work
of \cite{Sing}, its prediction for $\Lambda_b\to\Lambda_c$ (or
$\Xi_b\to\Xi_c$) form factors at order $1/m_Q$ is still too large by a
factor of 2 when compared with HQET \cite{CT96}. Second, the flavor factor
$N_{fi}$ (2.11) for heavy-to-light transition is usually smaller than
unity (see Table I) due to the fact that SU(N) flavor symmetry is badly
broken. As stressed in \cite{Sing,Kroll,Korner94}, it is important to take
into account this flavor-suppression factor when evaluating the heavy-to-light
baryon form factors. Third, in deriving the baryon matrix elements at zero
recoil in the rest frame of the parent baryon, we have neglected the
kinetic energy (k.e.) of the quark participating in the weak transition relative
to its constituent mass $M_q$. This is justified in the nonrelativistic
constituent quark model even when the final baryon
is a hyperon or a nucleon. The kinetic energy of the QCD current quark
inside the nucleon at rest is of order a few hundred MeV. In the
nonrelativistic quark model this kinetic energy is essentially absorbed
into the constituent quark mass. As a result, it is a good
approximation to neglect (k.e./$M_q$) for the constituent quarks inside
the nucleon (or hyperon) at rest. Of course, this approximation works best
for $Q\to Q'$ transition, and is fairly good for $Q\to s$ or $Q\to u(d)$
transition.
We next turn to the Cabibbo-allowed decays $B_b({1\over 2}^+)\to
B^*({3\over 2}^+)+P(V)$ with the general amplitudes:
\begin{eqnarray}
{\cal M}[B_i(1/2^+)\to B_f^*(3/2^+)+P] &=& iq_\mu\bar{u}^\mu_f(p_f)(C+D
\gamma_5)u_i(p_i), \nonumber \\
{\cal M}[B_i(1/2^+)\to B_f^*(3/2^+)+V] &=& \bar{u}_f^\nu(p_f)\varepsilon^{*\mu}[
g_{\nu\mu}(C_1+D_1\gamma_5) \\
&& +p_{1\nu}\gamma_\mu(C_2+D_2\gamma_5)+p_{1\nu}
p_{2\mu}(C_3+D_3\gamma_5)]u_i(p_i), \nonumber
\end{eqnarray}
with $u^\mu$ being the Rarita-Schwinger vector spinor for a spin-${3\over 2}$
particle. The external and internal $W$-emission contributions under
factorization approximation become
\begin{eqnarray}
C &=& -\lambda a_{1,2}f_P[\bar g_1(m^2_P)+(m_i-m_f)\bar g_2(m_P^2)+(m_iE_f-
m^2_f)\bar g_3(m_P^2)],
\nonumber \\
D &=& \lambda a_{1,2}f_P[\bar f_1(m_P^2)-(m_i+m_f)\bar f_2(m_P^2)+(m_iE_f-
m^2_f)\bar f_3(m_P^2)],
\end{eqnarray}
and
\begin{eqnarray}
C_i =- \lambda a_{1,2}f_Vm_V\bar g_i(m_V^2),~~~~
D_i =\, \lambda a_{1,2}f_Vm_V\bar f_i(m_V^2),
\end{eqnarray}
where $i=1,2,3$, and the form factors $\bar f_i$ as well as $\bar g_i$ are
defined by
\begin{eqnarray}
\langle B_f^*(p_f)|V_\mu-A_\mu|B_i(p_i)\rangle &=& \bar{u}_f^\nu
[(\bar f_1(q^2)g_{\nu\mu}+\bar f_2(q^2)p_{1\nu}\gamma_\mu+\bar f_3(q^2)
p_{1\nu}p_{2\mu})\gamma_5 \nonumber \\
&& -(\bar g_1(q^2)g_{\nu\mu}+\bar g_2(q^2)p_{1\nu}\gamma_\mu+
\bar g_3(q^2)p_{1\nu}p_{2\mu})]u_i.
\end{eqnarray}
In deriving Eq.~(2.18) we have applied the constraint $p_\nu u^\nu(p)=0$.
As before, form factors are evaluated at zero recoil
using the nonrelativistic quark model and the results are (see
Appendix A for details):
\begin{eqnarray}
\bar f_1(q_m^2)/N_{fi} &=& {2\over\sqrt{3}}\left(1+{\bar{\Lambda}\over
2m_q}+{\bar{\Lambda}\over 2m_Q}\right), \nonumber \\
\bar f_2(q_m^2)/N_{fi} &=& {1\over\sqrt{3}m_i}\left(1+{\bar{\Lambda}\over
2m_q}+{\bar{\Lambda}\over 2m_Q}\right), \nonumber \\
\bar f_3(q^2_m)/N_{fi} &=& -{1\over\sqrt{3}m_im_f}\left(1+{\bar{\Lambda}\over
2m_q}+{\bar{\Lambda}\over 2m_Q}\right), \nonumber \\
\bar g_1(q^2_m)/N_{fi} &=& -{2\over \sqrt{3}}, \\
\bar g_2(q^2_m)/N_{fi} &=& -{1\over \sqrt{3}}\,{\bar \Lambda\over m_qm_i},
\nonumber \\
\bar g_3(q^2_m)/N_{fi} &=& -\bar f_3(q^2_m)/N_{fi}. \nonumber
\end{eqnarray}
The above form factors are applicable to heavy-to-heavy (i.e.,
${\bf 6}\to{\bf 6}^*$)
and heavy-to-light (i.e., ${\bf 6}\to{\bf 10}$) baryon transitions.
In HQET the ${{1\over 2}}^+\to {3\over 2}^+$ matrix elements are given by
(see e.g., \cite{Yan})
\begin{eqnarray}
\langle B_f^*(v')|V_\mu|B_i(v)\rangle &=& {1\over\sqrt{3}}\,\bar u_f^\nu(v')\Big\{
(2g_{\mu\nu}+\gamma_\mu v_\nu)\xi_1
+ v_\nu[(1-v\cdot v')\gamma_\mu-2v'_\mu]\xi_2\Big\}\gamma_5 u_i(v), \nonumber \\
\langle B_f^*(v')|A_\mu|B_i(v)\rangle &=& -{1\over\sqrt{3}}\,\bar u_f^\nu(v')\Big\{
(2g_{\mu\nu}-\gamma_\mu v_\nu)\xi_1
+ v_\nu[(1+v\cdot v')\gamma_\mu-2v'_\mu]\xi_2\Big\}u_i(v), \nonumber \\
\end{eqnarray}
where $\xi_1$ and $\xi_2$ are the baryon Isgur-Wise functions introduced
in (2.12). We find that at zero recoil
\begin{eqnarray}
&& \bar f_1(q^2_m)={2\over\sqrt{3}},~~~\bar f_2(q^2_m)={1\over\sqrt{3}m_i},
~~~\bar f_3(q^2_m)=-{2\over\sqrt{3}}\,{\xi_2(1)\over m_im_f}, \nonumber \\
&& \bar g_1(q^2_m)=-{2\over\sqrt{3}},~~~\bar g_2(q^2_m)={1\over\sqrt{3}m_i}[1-
2\xi_2(1)],~~~\bar g_3(q^2_m)={2\over\sqrt{3}}\,{\xi_2(1)\over m_im_f}.
\end{eqnarray}
Since $N_{fi}=1$ for heavy-to-heavy transition, it is clear that the
quark model results for ${{1\over 2}}^+\to {3\over 2}^+$ form factors (2.21)
in the heavy quark limit are in agreement with the HQET predictions (2.23)
with $\xi_2(1)={{1\over 2}}$ [see Eq.~(2.16)].
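This agreement can likewise be checked term by term: in the heavy quark limit ($\bar\Lambda/m_{q,Q}\to 0$, $N_{fi}=1$), each form factor of (2.21) should reduce to the corresponding zero-recoil HQET value (2.23) with $\xi_2(1)={1\over 2}$. A small sketch with illustrative masses:

```python
import math

def qm_bar_ff(m_i, m_f, Lam_over_mq=0.0, Lam_over_mQ=0.0):
    """Eq. (2.21) form factors with N_fi = 1; heavy limit when Lam/m -> 0."""
    s3 = math.sqrt(3.0)
    common = 1.0 + 0.5 * Lam_over_mq + 0.5 * Lam_over_mQ
    f1 = 2.0 / s3 * common
    f2 = common / (s3 * m_i)
    f3 = -common / (s3 * m_i * m_f)
    g1 = -2.0 / s3
    g2 = -Lam_over_mq / (s3 * m_i)
    g3 = -f3
    return f1, f2, f3, g1, g2, g3

def hqet_bar_ff(m_i, m_f, xi2=0.5):
    """Eq. (2.23) zero-recoil form factors."""
    s3 = math.sqrt(3.0)
    return (2.0 / s3, 1.0 / (s3 * m_i), -2.0 * xi2 / (s3 * m_i * m_f),
            -2.0 / s3, (1.0 - 2.0 * xi2) / (s3 * m_i), 2.0 * xi2 / (s3 * m_i * m_f))

m_i, m_f = 5.81, 2.52   # illustrative Sigma_b -> Sigma_c* masses (GeV)
for a, b in zip(qm_bar_ff(m_i, m_f), hqet_bar_ff(m_i, m_f)):
    assert abs(a - b) < 1e-12
print("Eq. (2.21) reproduces Eq. (2.23) in the heavy quark limit for xi2(1) = 1/2")
```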
Since the calculation for the $q^2$ dependence of form factors is
beyond the scope of the nonrelativistic quark model, we will follow the
conventional practice of assuming pole
dominance for the form-factor $q^2$ behavior:
\begin{eqnarray}
f(q^2)={f(0)\over \left(1-{q^2\over m_V^2}\right)^n}\,,~~~~g(q^2)={g(0)\over
\left(1-{q^2\over m_A^2}\right)^n}\,,
\end{eqnarray}
where $m_V$ ($m_A$) is the pole mass of the vector (axial-vector) meson
with the same quantum number as the current under consideration.
The function
\begin{eqnarray}
G(q^2)=\left({1-q^2_m/m^2_{\rm pole}\over 1-q^2/m_{\rm pole}^2} \right)^2
\nonumber
\end{eqnarray}
plays the role of the baryon Isgur-Wise function $\zeta(\omega)$ for
$\Lambda_Q\to
\Lambda_{Q'}$ transition, namely $G=1$ at $q^2=q^2_m$. The function
$\zeta(\omega)$ has been calculated in the literature in various different
models \cite{Jenkins,Sadzi,GuoK,Iva}. Using the pole masses $m_V=6.34$ GeV,
$m_A=6.73$ GeV for $\Lambda_b\to\Lambda_c$ transition, it is found
in \cite{CT96} that $G(q^2)$ is consistent with $\zeta(\omega)$ only if
$n=2$. Nevertheless, one should bear in mind that the $q^2$ behavior of form
factors is probably more complicated and it is likely that a simple
pole dominance only applies to a certain $q^2$ region.
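The dipole extrapolation and the function $G(q^2)$ can be sketched as follows, using the $b\to c$ pole mass quoted in the text for $\Lambda_b\to\Lambda_c$ (the $\Lambda_c$ mass is an assumed input); the normalization $G(q^2_m)=1$ is checked explicitly:

```python
def dipole(f0, q2, m_pole, n=2):
    """Pole-dominance ansatz f(q^2) = f(0) / (1 - q^2/m_pole^2)^n."""
    return f0 / (1.0 - q2 / m_pole**2)**n

def G(q2, q2_m, m_pole):
    """G(q^2) = [(1 - q2_m/m^2)/(1 - q^2/m^2)]^2, normalized to 1 at q^2 = q2_m."""
    return ((1.0 - q2_m / m_pole**2) / (1.0 - q2 / m_pole**2))**2

# Lambda_b -> Lambda_c: vector pole mass 6.34 GeV, q2_max = (m_i - m_f)^2
m_i, m_f, m_V = 5.621, 2.285, 6.34
q2_m = (m_i - m_f)**2
assert abs(G(q2_m, q2_m, m_V) - 1.0) < 1e-12   # G = 1 at zero recoil
print(f"G(0) = {G(0.0, q2_m, m_V):.3f}")       # falloff from zero recoil to q^2 = 0
```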
Assuming a dipole $q^2$ behavior for form factors, we have tabulated
in Table I the numerical values of $B_b({{1\over 2}}^+)\to{{1\over 2}}^+$, $B_b({{1\over 2}}^+)
\to{3\over2}^+$ and $B_c({{1\over 2}}^+)\to {3\over 2}^+$ form factors at
$q^2=0$ calculated using (2.10) and (2.21).
Use has been made of $|V_{cb}|=0.038$ \cite{Stone}, the constituent
quark masses (light quark masses being taken from p.619 of PDG \cite{PDG})
\begin{eqnarray}
m_b=5\,{\rm GeV},~~~m_c=1.6\,{\rm GeV},~~m_s=510\,{\rm MeV},~~m_d=322\,{\rm
MeV},~~m_u=338\,{\rm MeV},
\end{eqnarray}
the pole masses:
\begin{eqnarray}
b\to c: &&~~~~~m_V=6.34\,{\rm GeV},~~~m_A=6.73\,{\rm GeV}, \nonumber \\
b\to s: &&~~~~~m_V=5.42\,{\rm GeV},~~~m_A=5.86\,{\rm GeV}, \nonumber \\
b\to d: &&~~~~~m_V=5.32\,{\rm GeV},~~~m_A=5.71\,{\rm GeV}, \\
c\to s: &&~~~~~m_V=2.11\,{\rm GeV},~~~m_A=2.54\,{\rm GeV}, \nonumber \\
c\to u: &&~~~~~m_V=2.01\,{\rm GeV},~~~m_A=2.42\,{\rm GeV}, \nonumber
\end{eqnarray}
and the bottom baryon masses:
\begin{eqnarray}
m_{\Lambda_b}=5.621\,{\rm GeV},~~~m_{\Xi_b}=5.80\,{\rm GeV},~~~m_{\Omega_b}
=6.04\,{\rm GeV}.
\end{eqnarray}
Note that the CDF measurement \cite{CDF96} $m_{\Lambda_b}=5621\pm 4\pm 3$ MeV
has better accuracy than the PDG value $5641\pm 50$ MeV \cite{PDG}; the
combined value is $m_{\Lambda_b}=5621\pm 5$ MeV.
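As a cross-check of Eq.~(2.10), the sketch below evaluates the $\Lambda_b\to\Lambda_c$ form factors $f_1$ and $g_1$ at zero recoil with the masses listed above ($m_{\Lambda_c}=2.285$ GeV is an assumed input; $N_{fi}=1$, $\eta=1$ for this antitriplet-to-antitriplet transition). One finds $f_1(q^2_m)=g_1(q^2_m)$ identically in this model, consistent with the HQET expectation at zero recoil:

```python
def ff_zero_recoil(m_i, m_f, m_Q, m_q, eta=1.0, N=1.0):
    """f_1 and g_1 at q^2 = q^2_max from Eq. (2.10)."""
    Lam, dm = m_f - m_q, m_i - m_f
    f1 = N * (1 - dm / (2 * m_i)
              + dm / (4 * m_i * m_q) * (1 - Lam / (2 * m_f)) * (m_i + m_f - eta * dm)
              - dm / (8 * m_i * m_f) * (Lam / m_Q) * (m_i + m_f + eta * dm))
    g1 = N * (eta + dm * Lam / 4 * (1 / (m_i * m_q) - 1 / (m_f * m_Q)) * eta)
    return f1, g1

# Lambda_b -> Lambda_c: m_b = 5 GeV, m_c = 1.6 GeV from Eq. (2.25);
# m_Lambda_c = 2.285 GeV is assumed, not taken from the text.
f1, g1 = ff_zero_recoil(m_i=5.621, m_f=2.285, m_Q=5.0, m_q=1.6)
print(f"f1(q2_m) = {f1:.4f}, g1(q2_m) = {g1:.4f}")
assert abs(f1 - g1) < 1e-12
```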
\section{Results and Discussion}
With the baryon form factors tabulated in Table I we are in a position
to compute the factorizable contributions to the decay rate and up-down
asymmetry for Cabibbo-allowed weak decays of bottom baryons $B_b({{1\over 2}}^+)\to
{{1\over 2}}^+({3\over 2}^+)+P(V)$. The factorizable external and internal
$W$-emission amplitudes are given by (2.6), (2.7), (2.18) and (2.19).
The calculated results are summarized in Tables II and III. (The formulas for
decay rates and up-down asymmetries are given in Appendix B.) For decay
constants we use
\begin{eqnarray}
&& f_\pi=132\,{\rm MeV},~~~~~f_D=200\,{\rm MeV}~\cite{Sach},~~~~~f_{D_s}=
241\,{\rm MeV}~\cite{Sach}, \nonumber \\
&& f_\rho=216\,{\rm MeV},~~~~~f_{J/\psi}=395\,{\rm MeV},~~~~~f_{\psi'}=
293\,{\rm MeV},
\end{eqnarray}
where we have taken into account the momentum dependence of the fine-structure
constant to determine $f_{J/\psi}$ and $f_{\psi'}$ from experiment. In the
absence of reliable theoretical estimates for $f_{D^*}$ and $f_{D^*_s}$, we
have taken $f_{D^*}=f_{D}$ and $f_{D^*_s}=f_{D_s}$ for numerical calculations.
From Tables II and III we see that, except for those decay modes with
$\psi'$ in the final state and for $\Omega_b\to{{1\over 2}}^++P(V)$ decays,
the up-down asymmetry parameter $\alpha$ is found to be negative.\footnote{The
parameter $\alpha$ of $\Lambda_b\to J/\psi\Lambda$ is estimated to be 0.25
in \cite{Datta}, whereas it is $-0.10$ in our case.}
As noted in \cite{Mannel}, the parameter $\alpha$ in
${{1\over 2}}^+\to{{1\over 2}}^++P(V)$ decay becomes
$-1$ in the soft pseudoscalar meson or vector meson limit, i.e., $m_P\to
0$ or $m_V\to 0$. In practice, $\alpha$ is sensitive to $m_V$ but much less so
to $m_P$. For example, $\alpha\approx -1$ for $\Lambda_b\to D_s\Lambda_c$
and $\Xi_b\to D_s\Xi_c$ even though the $D_s$ meson is heavy, but it changes
from $\alpha=-0.88$ for $\Lambda_b\to\rho\Lambda_c$ to $-0.10$ for $\Lambda_b
\to J/\psi\Lambda$. As stressed in Sec.~II, by treating $a_1$ and $a_2$
as free parameters, our predictions should be most reliable
for those decay modes which proceed only through the external $W$-emission
diagram ${\cal A}$ or the internal $W$-emission ${\cal B}'$.
Moreover, we have argued that the penguin contributions ${\cal E}'$ and ${\cal E}$ to
Cabibbo-allowed decays are safely negligible and that
the $W$-exchange amplitudes ${\cal C}_1,~{\cal C}_2,~{\cal C'}$
are very likely to be suppressed in bottom baryon decays. It is thus very
interesting to test the suppression of $W$-exchange in decay modes of
$B_b({\bf {\bar 3}})\to B({\bf 10})+P(V)$ that proceed only through $W$-exchange
[see (2.3)] and in decays $B_b({\bf {\bar 3}})\to B_c({\bf {\bar 3}})+P(V)$, e.g.,
$\Xi_b\to\pi(\rho)\Xi_c,~\Xi_b\to D_s^{(*)}\Xi_c$, that receive
contributions from factorizable terms and $W$-exchange. Since the
nonfactorizable internal $W$-emission amplitude ${\cal B}$ is {\it a priori} not
negligible, our results for $\Lambda_b\to\pi(\rho)\Lambda_c,~\Omega_b\to
D_s^{(*)}\Omega_c^{(*)}$ (see Tables II and III) are subject to the
uncertainties due to possible contributions from the quark diagram ${\cal B}$.
In order to get an idea of the
magnitude of the branching ratios, let us take $a_1\sim 1$ as inferred from
$B\to D^{(*)}\pi(\rho)$ decays \cite{CT95} and $a_2\sim 0.28$ as in $B\to
J/\psi K^{(*)}$ decays.\footnote{A fit to recent measurements of $B\to
J/\psi K(K^*)$ decays by CDF and CLEO yields \cite{Cheng96} $a_2(B\to
J/\psi K)=0.30$ and $a_2(B\to J/\psi K^*)=0.26$.}
Using the current world average $\tau(\Lambda_b)=(1.23\pm 0.06)\times
10^{-12}\,{\rm s}$ \cite{Stone}, we find from Table II that
\begin{eqnarray}
&& {\cal B}(\Lambda_b^0\to D_s^-\Lambda_c^+) \cong 1.1\times 10^{-2},~~~~~{\cal B}(
\Lambda_b^0\to D_s^{*-}\Lambda_c^+) \cong 9.1\times 10^{-3}, \nonumber \\
&& {\cal B}(\Lambda_b^0\to\pi^-\Lambda_c^+)\sim 3.8\times 10^{-3},~~~~~
{\cal B}(\Lambda_b^0\to\rho^-\Lambda_c^+) \sim 5.4\times 10^{-3}, \\
&& {\cal B}(\Lambda_b^0\to J/\psi \Lambda)=1.6\times 10^{-4},~~~~~{\cal B}(\Lambda_b^0\to
\psi'\Lambda)=1.4\times 10^{-4}. \nonumber
\end{eqnarray}
Our estimate for the branching ratio of $\Lambda_b\to J/\psi \Lambda$
is consistent with the CDF result \cite{CDF96}:
\begin{eqnarray}
{\cal B}(\Lambda_b\to J/\psi \Lambda)=\,(3.7\pm 1.7\pm 0.4)\times 10^{-4}.
\end{eqnarray}
Recall that the predictions (3.2) are obtained for $|V_{cb}|=0.038\,$.
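As a check of the arithmetic behind such numbers, the conversion from a partial width to a branching ratio is simply ${\cal B}=\Gamma\tau/\hbar$. The sketch below uses a hypothetical width (a placeholder, not a value from Table II) purely to illustrate the unit conversion.

```python
# Branching ratio from a partial width and a lifetime: B = Gamma * tau / hbar.
# The width below is a hypothetical placeholder, NOT a value from Table II.
HBAR_GEV_S = 6.582e-25  # hbar in GeV * s

def branching_ratio(gamma_gev, tau_s):
    """Convert a partial width Gamma (GeV) and a lifetime tau (s) to a branching ratio."""
    return gamma_gev * tau_s / HBAR_GEV_S

tau_lambda_b = 1.23e-12       # s, the world-average lifetime quoted in the text
gamma_placeholder = 1.0e-16   # GeV, illustrative only
B = branching_ratio(gamma_placeholder, tau_lambda_b)
```

A width of order $10^{-16}$ GeV thus corresponds to a branching ratio of order $10^{-4}$, the scale of the $J/\psi\Lambda$ modes above.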
Since the decay mode $\Omega_c^0\to\pi^+\Omega^-$ has been seen
experimentally, we also show the estimate of $\Gamma$ and $\alpha$ in
Table IV for $\Omega_c^0\to {3\over 2}^++P(V)$ decays with the relevant
form factors being given in Table I. For comparison, we have
displayed in Table IV the model results of Xu and Kamal
\cite{XK92b}\footnote{The $B$ and $D$ amplitudes in Eq.~(4) of \cite{XK92b},
where the formulas for $\Gamma$ and $\alpha$ in ${{1\over 2}}^+\to{3\over 2}^++P$
decay are given, should be interchanged.},
K\"orner and Kr\"amer \cite{Korner}. In the model of Xu and Kamal,
the $D$-wave amplitude in (2.17), and hence the parameter $\alpha$,
vanishes in the decay $\Omega_c\to {3\over 2}^++P$ due to the fact that
the vector current is conserved at all $q^2$ in their scheme 1 and at $q^2=0$
in scheme 2. By contrast, the $D$-wave amplitude in our case does not
vanish. Assuming that the form factors $\bar f_1,~\bar f_2,~\bar f_3$
have the same $q^2$ dependence, we see from (2.18) and (2.21) that
the amplitude $D$ is proportional to $(E_f-m_f)/m_f$, which vanishes at
$q^2=q^2_{\rm max}$ but not at $q^2=m_P^2$. Contrary to the decay
$\Omega_b^-\to {3\over 2}^++P(V)$, the
up-down asymmetry is found to be positive in $\Omega_c^0\to{3\over 2}^+
+P(V)$ decays. Note that the sign of $\alpha$ for $\Omega_c\to
{3\over 2}^++V$ is
opposite to that of \cite{XK92b}.\footnote{It seems to us that the sign
of $A_i$ and $B_i$ (or $C_i$ and $D_i$ in our notation) in
Eq.~(58) of \cite{XK92b} should be flipped. This sign change
would render $\alpha$ positive in $\Omega_c\to {3\over 2}^++V$ decay.}
Therefore, it is desirable to measure the parameter $\alpha$ in
decays $\Omega_c\to{3\over 2}^++P(V)$ to discern
different models. To have an estimate of the branching ratio, we take
the large-$N_c$ values $a_1(m_c)=1.10,~a_2(m_c)=-0.50$ as an illustration
and obtain
\begin{eqnarray}
&& {\cal B}(\Omega_c^0\to \pi^+\Omega^-)\simeq 1.0\times 10^{-2},~~~~~
{\cal B}(\Omega_c^0\to \rho^+\Omega^-)\simeq 3.6\times 10^{-2}, \nonumber \\
&& {\cal B}(\Omega_c^0\to \overline K^0\Xi^{*0})\simeq 2.5\times 10^{-3},~~~~~
{\cal B}(\Omega_c^0\to \overline K^{*0}\Xi^{*0})\simeq 3.7\times 10^{-3},
\end{eqnarray}
where use of $\tau(\Omega_c)=6.4\times 10^{-14}\,{\rm s}$ \cite{PDG} has been made.
Three important ingredients on which the calculations in this work
are built are: factorization, the nonrelativistic quark model, and the dipole $q^2$
behavior of form factors. The factorization hypothesis can be tested by
extracting the effective parameters $a_1$, $a_2$ from data and
seeing if they are channel independent. Thus far we have neglected the
effects of final-state interactions, which are expected to be less important
in bottom baryon decays since the particles in the two-body final state
are energetic and fast-moving,
leaving less time for significant final-state interactions. We have argued
that, in the nonrelativistic quark model, the ratio of (k.e./$M_q$) is small
even for the constituent quark inside the nucleon (or hyperon) at rest.
As for the $q^2$ dependence of baryon form factors, we have applied
dipole dominance motivated by the consistency with the $q^2$ behavior
of the baryon Isgur-Wise function. Nevertheless, in order to check the
sensitivity of the form factor $q^2$ dependence, we have repeated
calculations using the monopole form. Since, for a given $q^2$, the absolute
values of the form factors in the monopole behavior are larger than those in
the dipole one, the branching ratios obtained
under the monopole ansatz are expected to be enhanced, especially when the final-state
baryons are hyperons. Numerically, we find that, while decay asymmetries
remain essentially unchanged, the decay rates of $B_b({1\over 2}^+)\to
B_c({1\over 2}^+)+P(V)$ and $B_b({1\over 2}^+)\to {\rm hyperon}+P(V)$ are in
general enhanced by factors of $\sim 1.8$ and $\sim 3.5$, respectively.
In reality, the use of a simple $q^2$
dependence, monopole or dipole, is probably an oversimplification. It thus
appears that major calculational uncertainties arise mainly from the {\it
ad hoc} ansatz on the form factor $q^2$ behavior.
In conclusion, if the $W$-exchange contribution to the hadronic decays of
bottom baryons is negligible, as we have argued, then the theoretical
description of bottom baryons decaying into ${{1\over 2}}^++P(V)$ and
${3\over 2}^++P(V)$ is relatively clean since these decays either receive
contributions only from external/internal $W$-emission or
are dominated by factorizable terms. The absence or the suppression of
the so-called pole terms makes the study of Cabibbo-allowed decays of bottom
baryons considerably simpler than that in charmed baryon decay. We have
evaluated the heavy-to-heavy and heavy-to-light baryon form factors at
zero recoil using the nonrelativistic quark model and reproduced the HQET
results for heavy-to-heavy baryon transition. It is stressed that for
heavy-to-light baryon form factors, there is a flavor-suppression factor
which must be taken into account in calculations. Predictions of the
decay rates and up-down asymmetries for $B_b\to{{1\over 2}}^++P(V)$
and $\Omega_c\to{3\over 2}^++P(V)$ are given. The parameter
$\alpha$ is found to be negative except for $\Omega_b\to{{1\over 2}}^++P(V)$ decays
and for those decay modes with $\psi'$ in the final state. We also
present estimates of $\Gamma$ and $\alpha$ for $\Omega_c\to{3\over 2}^++P(V)$
decays. It is very desirable to measure the asymmetry parameter to discern
different models.
\vskip 1.5 cm
\centerline{\bf ACKNOWLEDGMENT}
\vskip 0.5cm
This work was supported in part by the National Science Council of ROC
under Contract No. NSC86-2112-M-001-020.
\vskip 1.0 cm
\centerline{\bf Appendix A.~~Baryon Form Factors in the Quark Model}
\vskip 0.5 cm
\renewcommand{\thesection}{\Alph{section}}
\renewcommand{\theequation}{\thesection\arabic{equation}}
\setcounter{equation}{0}
\setcounter{section}{1}
Since the ${{1\over 2}}^+$ to ${{1\over 2}}^+$ baryon form factors have been evaluated
at zero recoil in the nonrelativistic quark model \cite{CT96}, we will focus
in this Appendix on the baryon form factors in ${{1\over 2}}^+$ to ${3\over 2}^+$
transition. Let $u^\alpha$ be the Rarita-Schwinger vector-spinor for a
spin-${3\over 2}$ particle. The four general plane-wave
solutions for $u^\alpha$ are (see, for example, \cite{Lurie})
\begin{eqnarray}
&& u^\alpha_1=(u^0_1,\,\vec{u}_1)=(0,\,\vec{\epsilon}_1u_\uparrow), \nonumber \\
&& u^\alpha_2=(u^0_2,\,\vec{u}_2)=\left(\sqrt{2\over 3}\,{|p|\over m}u_\uparrow,\,
{1\over\sqrt{3}}\vec{\epsilon}_1u_\downarrow-\sqrt{2\over 3}\,{E\over m}\vec{
\epsilon}_3u_\uparrow\right), \nonumber \\
&& u^\alpha_3=(u^0_3,\,\vec{u}_3)=\left(\sqrt{2\over 3}\,{|p|\over m}u_\downarrow,\,
{1\over\sqrt{3}}\vec{\epsilon}_2u_\uparrow-\sqrt{2\over 3}\,{E\over m}\vec{
\epsilon}_3u_\downarrow\right), \\
&& u^\alpha_4=(u^0_4,\,\vec{u}_4)=(0,\,\vec{\epsilon}_2u_\downarrow), \nonumber
\end{eqnarray}
in the frame where the baryon momentum $\vec{p}$ is along the $z$-axis, and
\begin{eqnarray}
\epsilon_1={1\over\sqrt{2}}\left(\matrix{ 1 \cr i \cr 0 \cr}\right),~~~
\epsilon_2={1\over\sqrt{2}}\left(\matrix{ 1 \cr -i \cr 0 \cr}\right),~~~
\epsilon_3=\left(\matrix{ 0 \cr 0 \cr 1 \cr}\right).
\end{eqnarray}
Note that the spin $z$-component of the four solutions (A1) corresponds to
${3\over 2},{{1\over 2}},-{{1\over 2}},-{3\over 2}$, respectively. Substituting (A1)
into (2.20) yields
\begin{eqnarray}
\langle B^*_f(+1/2)|V_0|B_i(+1/2)\rangle &=& \sqrt{2\over 3}\,{p\over m_f}\bar{u}
_\uparrow(\bar f_1\gamma_5+\bar f_2m_i\gamma_0\gamma_5+\bar f_3m_iE_f\gamma_5)
u_\uparrow, \\
\langle B^*_f(+1/2)|A_0|B_i(+1/2)\rangle &=& \sqrt{2\over 3}\,{p\over m_f}\bar{u}
_\uparrow(\bar g_1+\bar g_2m_i\gamma_0+\bar g_3m_iE_f)u_\uparrow, \\
\langle B^*_f(+3/2)|\vec{V}|B_i(+1/2)\rangle &=& -\bar f_1\vec{\epsilon}_1\bar{u}_\uparrow
\gamma_5u_\uparrow, \\
\langle B^*_f(+3/2)|\vec{A}|B_i(+1/2)\rangle &=& -\bar g_1\vec{\epsilon}_1\bar{u}_\uparrow
u_\uparrow, \\
\langle B^*_f(+1/2)|\vec{V}|B_i(-1/2)\rangle &=& -\bar f_1\left({1\over\sqrt{3}}\vec{
\epsilon}_1\bar{u}_\downarrow-\sqrt{2\over 3}\,{E_f\over m_f}\vec{\epsilon}_3\bar{u}
_\uparrow\right)\gamma_5u_\downarrow \nonumber \\
&& +\sqrt{2\over 3}\,{pm_i\over m_f}\,\bar{u}_\uparrow(\bar f_2\vec{\gamma}
\gamma_5+\bar f_3\vec{p}\gamma_5)u_\downarrow, \\
\langle B^*_f(+1/2)|\vec{A}|B_i(-1/2)\rangle &=& -\bar g_1\left({1\over\sqrt{3}}\vec{
\epsilon}_1\bar{u}_\downarrow-\sqrt{2\over 3}\,{E_f\over m_f}\vec{\epsilon}_3\bar{u}
_\uparrow\right)u_\downarrow \nonumber \\
&& +\sqrt{2\over 3}\,{pm_i\over m_f}\,\bar{u}_\uparrow(\bar g_2\vec{\gamma}+\bar
g_3\vec{p}\,)u_\downarrow,
\end{eqnarray}
where $\vec{p}$ is the momentum of the daughter baryon along the $z$-axis in
the rest frame of the parent baryon. The baryon matrix elements in
(A3)-(A8) can be evaluated in the nonrelativistic quark model. Following
the same procedure outlined in \cite{CT96}, we obtain
\begin{eqnarray}
\langle B_f^*|V_0|B_i\rangle/N_f &=& \langle 1\rangle, \nonumber \\
\langle B_f^*|\vec{V}|B_i\rangle/N_f &=& -{1\over 2m_q}\left(1-{\bar{\Lambda}\over 2
m_f}\right)\langle\vec{q}+i\vec{\sigma}\times\vec{q}\,\rangle+{\bar{\Lambda}\over 4m_Q
m_f}\langle\vec{q}-i\vec{\sigma}\times\vec{q}\,\rangle, \nonumber \\
\langle B_f^*|A_0|B_i\rangle/N_f &=& \left[-{1\over 2m_q}\left(1-{\bar{\Lambda}\over 2
m_f}\right)+{\bar{\Lambda}\over 4m_Qm_f}\right]\langle\vec{\sigma}\cdot\vec{q}\,
\rangle, \\
\langle B_f^*|\vec{A}|B_i\rangle/N_f &=& \langle\vec{\sigma}\rangle-{\bar{\Lambda}\over 4m_Q
m_f^2}\,\langle(\vec{\sigma}\cdot\vec{q}\,)\vec{q}-{1\over 2}\vec{\sigma}q^2\rangle,
\nonumber
\end{eqnarray}
where $\vec{q}=\vec{p}_i-\vec{p}_f$, $N_f=\sqrt{(E_f+m_f)/2m_f}$,
$m_q$ is the mass of the quark $q$ in $B_f^*$ coming from the decay
of the heavy quark $Q$ in $B_i$, and $\langle X\rangle$ stands for $_{\rm
flavor-spin}\langle B^*_f|X|B_i\rangle_{\rm flavor-spin}$. Form factors $\bar f_i$
and $\bar g_i$ are then determined from (A3) to (A9). For example, $\bar
f_1$ can be determined from the $x$ (or $y$) component of (A5) which is
\begin{eqnarray}
\langle B_f^*(+3/2)|V_x|B_i(+1/2)\rangle=-{\bar f_1\over\sqrt{2}}\,{N_f\over
E_f+m_f}\,\chi^\dagger_\uparrow\vec{\sigma}\cdot\vec{q}\chi_\uparrow={\bar f_1\over
\sqrt{2}}\,{pN_f\over E_f+m_f},
\end{eqnarray}
where $\chi$ is a two-component Pauli spinor. From (A9) we find
\begin{eqnarray}
\langle B_f^*(+3/2)|V_x|B_i(+1/2)\rangle = {pN_f\over 4m_q}\left(1-{\bar{\Lambda}
\over 2m_f}+{\bar\Lambda m_q\over 2m_Qm_f}\right)\langle(\sigma_+-\sigma_-)
b_q^\dagger b_Q\rangle.
\end{eqnarray}
Since [with $N_{fi}$ defined in (2.11)]
\begin{eqnarray}
_{\rm flavor-spin}\langle B_f^*(+3/2)|(\sigma_+-\sigma_-)b_q^\dagger b_Q|B_i(+1/2)
\rangle_{\rm flavor-spin}={4\over\sqrt{6}}N_{fi},
\end{eqnarray}
for sextet $B_i$ and vanishes for antitriplet $B_i$, it is evident that
only the decay of $\Omega_b$ into ${3\over 2}^++P(V)$ can receive
factorizable contributions. Indeed the decays $B_b({\bf {\bar 3}})\to B({\bf 10})+P(V)$
proceed only through $W$-exchange or $W$-loop, as discussed in Sec.~II.
It follows from (A10)-(A12) that at zero recoil
\begin{eqnarray}
\bar f_1(q_m^2)/N_{fi}=\,{2\over\sqrt{3}}\left(1+{\bar{\Lambda}\over
2m_q}+{\bar{\Lambda}\over 2m_Q}\right),
\end{eqnarray}
which is the result shown in (2.21). The form factor $\bar f_2$ is then fixed
by the $x$ (or $y$) component of (A7). Substituting $\bar f_1$ and $\bar f_2$
into (A3) determines $\bar f_3$. The remaining form factors $\bar g_i$ are
determined in a similar way.
\vskip 1.0 cm
\centerline{\bf Appendix B.~~Kinematics}
\vskip 0.7 cm
\setcounter{equation}{0}
\setcounter{section}{2}
In this Appendix we summarize the kinematics relevant to the two-body
hadronic decays of ${{1\over 2}}^+\to{{1\over 2}}^+({3\over 2}^+)+P(V)$.
With the amplitudes (2.4) for ${{1\over 2}}^+\to{{1\over 2}}^++P$ decay and (2.17)
for ${{1\over 2}}^+\to {3\over 2}^++P$, the decay rates and up-down asymmetries read
\begin{eqnarray}
\Gamma (1/2^+\to 1/2^++P) &=& {p_c\over 8\pi}\left\{ {(m_i+m_f)^2-m_P^2
\over m_i^2}\,|A|^2+{(m_i-m_f)^2-m^2_P\over m_i^2}\,|B|^2\right\}, \nonumber \\
\alpha (1/2^+\to 1/2^++P) &=& -{2\kappa{\rm Re}(A^*B)\over |A|^2+
\kappa^2|B|^2},
\end{eqnarray}
and
\begin{eqnarray}
\Gamma (1/2^+\to {3/ 2}^++P) &=& {p_c^3\over 8\pi}\left\{ {(m_i-m_f)^2-
m_P^2\over m_i^2}\,|C|^2+{(m_i+m_f)^2-m^2_P\over m_i^2}\,|D|^2\right\},\nonumber \\
\alpha (1/2^+\to{3/ 2}^++P) &=& -{2\kappa{\rm Re}(C^*D)\over \kappa^2
|C|^2+|D|^2},
\end{eqnarray}
where $p_c$ is the c.m. momentum and $\kappa=p_c/(E_f+m_f)=\sqrt{(E_f-m_f)/(
E_f+m_f)}$. For ${{1\over 2}}^+\to{{1\over 2}}^++V$ decay we have \cite{Pak}
\footnote{The formulas for the decay rate of ${{1\over 2}}^+\to{{1\over 2}}^++V$ decay given
in \cite{CT92,CT96} contain some errors which are corrected in errata.}
\begin{eqnarray}
\Gamma(1/2^+\to 1/2^++V) &=& {p_c\over 8\pi}\,{E_f+m_f\over m_i}\left[
2(|S|^2+|P_2|^2)+{E^2_V\over m_V^2}(|S+D|^2+|P_1|^2)\right], \nonumber \\
\alpha(1/2^+\to 1/2^++V) &=& {4m^2_V{\rm Re}(S^*P_2)+2E^2_V{\rm Re}(S+D)^*
P_1\over 2m_V^2(|S|^2+|P_2|^2)+E^2_V(|S+D|^2+|P_1|^2)},
\end{eqnarray}
with the $S,~P$ and $D$ waves given by
\begin{eqnarray}
S &=& -A_1, \nonumber \\
P_1 &=& -{p_c\over E_V}\left( {m_i+m_f\over E_f+m_f}B_1+m_iB_2\right), \nonumber \\
P_2 &=& {p_c\over E_f+m_f}B_1, \\
D &=& -{p_c^2\over E_V(E_f+m_f)}\,(A_1-m_iA_2), \nonumber
\end{eqnarray}
where the amplitudes $A_1,~A_2,~B_1$ and $B_2$ are defined in (2.4).
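As a numerical illustration of the $P$-mode formulas in (B1), the sketch below implements $\Gamma$ and $\alpha$ for ${1\over 2}^+\to{1\over 2}^++P$ decay; the amplitudes and rounded masses are placeholders for illustration, not values from the tables of this paper.

```python
import math

def two_body_momentum(m_i, m_f, m_P):
    """c.m. momentum p_c of the two daughters in the parent rest frame."""
    return math.sqrt((m_i**2 - (m_f + m_P)**2)
                     * (m_i**2 - (m_f - m_P)**2)) / (2.0 * m_i)

def rate_and_asymmetry_P(m_i, m_f, m_P, A, B):
    """Gamma and alpha for 1/2+ -> 1/2+ + P as in (B1); A and B are the
    S- and P-wave amplitudes (arbitrary normalization here)."""
    p_c = two_body_momentum(m_i, m_f, m_P)
    E_f = math.sqrt(p_c**2 + m_f**2)
    kappa = p_c / (E_f + m_f)
    Gamma = (p_c / (8.0 * math.pi)) * (
        ((m_i + m_f)**2 - m_P**2) / m_i**2 * abs(A)**2
        + ((m_i - m_f)**2 - m_P**2) / m_i**2 * abs(B)**2)
    alpha = (-2.0 * kappa * (A.conjugate() * B).real
             / (abs(A)**2 + kappa**2 * abs(B)**2))
    return Gamma, alpha, kappa

# Illustrative call with rounded Lambda_b -> pi Lambda_c masses (GeV)
# and placeholder amplitudes A = B = 1.
Gamma, alpha, kappa = rate_and_asymmetry_P(5.62, 2.29, 0.14, 1.0, 1.0)
```

One can also verify numerically that $\kappa=p_c/(E_f+m_f)$ coincides with $\sqrt{(E_f-m_f)/(E_f+m_f)}$, as stated below (B2).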
However, as emphasized in \cite{Korner}, it is also convenient to express
$\Gamma$ and $\alpha$ in terms of the helicity amplitudes
\begin{eqnarray}
h_{\lambda_f,\lambda_V;\lambda_i}=\,\langle B_f(\lambda_f)V(\lambda_V)|H_W|B_i(
\lambda_i)\rangle
\end{eqnarray}
with $\lambda_i=\lambda_f-\lambda_V$. Then \cite{Korner}
\begin{eqnarray}
\Gamma &=& {p_c\over 32\pi m_i^2}\sum_{\lambda_f,\lambda_V}\left(|h_{
\lambda_f,\lambda_V;1/2}|^2+|h_{-\lambda_f,-\lambda_V;-1/2}|^2\right), \nonumber\\
\alpha &=& {\sum_{\lambda_f,\lambda_V} \left(|h_{
\lambda_f,\lambda_V;1/2}|^2-|h_{-\lambda_f,-\lambda_V;-1/2}|^2\right)\over
\sum_{\lambda_f,\lambda_V}\left(|h_{\lambda_f,\lambda_V;1/2}|^2+|h_{-\lambda_f,-\lambda_V;-1/2}|^2
\right)}\,.
\end{eqnarray}
The helicity amplitudes for ${{1\over 2}}^+\to{{1\over 2}}^++V$ decay are given by
\cite{Korner}
\begin{eqnarray}
H_{\lambda_1,\lambda_2;1/2}^{ {\rm p.v.}\,({\rm p.c.})}
&=& H_{\lambda_1,\lambda_2;1/2}\mp H_{-\lambda_1,-\lambda_2;-1/2}, \nonumber\\
H_{-1/2,-1;1/2}^{ {\rm p.v.}\,({\rm p.c.}) } &=&
2\left\{ \matrix{\sqrt{Q_+}A_1 \cr -\sqrt{Q_-}B_1 \cr}\right\}, \\
H_{1/2,0;1/2}^{ {\rm p.v.}\,({\rm p.c.}) } &=&
{\sqrt{2}\over m_V}\left\{ \matrix{\sqrt{Q_+}\,(m_i-m_f)A_1-\sqrt{Q_-}\,m_ip_c
A_2\cr \sqrt{Q_-}\,(m_i+m_f)B_1+\sqrt{Q_+}\,m_ip_cB_2 \cr} \right\}, \nonumber
\end{eqnarray}
where the upper (lower) entry is for parity-violating (-conserving) helicity
amplitude, and
\begin{eqnarray}
Q_\pm=\,(m_i\pm m_f)^2-m^2_V=2m_i(E_f\pm m_f).
\end{eqnarray}
Note that the helicity amplitudes for ${{1\over 2}}^+\to {{1\over 2}}^++V$ decay shown in
Eq.~(20) of \cite{Korner} are too large by a factor of $\sqrt{2}$.
One can check explicitly that the decay rates and up-down asymmetries
evaluated
using the partial-wave method and the helicity-amplitude method are
equivalent. For completeness, we also list the helicity amplitudes for
${{1\over 2}}^+\to{3\over 2}^++V$ decay \cite{Korner}:
\begin{eqnarray}
H_{\lambda_1,\lambda_2;1/2}^{ {\rm p.v.}\,({\rm p.c.})}
&=& H_{\lambda_1,\lambda_2;1/2}\pm H_{-\lambda_1,-\lambda_2;-1/2}, \nonumber \\
H_{3/2,1;1/2}^{ {\rm p.v.}\,({\rm p.c.}) } &=&
2\left\{ \matrix{-\sqrt{Q_+}C_1 \cr \sqrt{Q_-}D_1 \cr} \right\}, \\
H_{-1/2,-1;1/2}^{ {\rm p.v.}\,({\rm p.c.}) } &=&
{2\over \sqrt{3}}\left\{ \matrix{-\sqrt{Q_+}\,[C_1-2(Q_-/m_f)C_2] \cr
\sqrt{Q_-}\,[D_1-2(Q_+/m_f)D_2] \cr} \right\}, \nonumber \\
H_{1/2,0;1/2}^{ {\rm p.v.}\, ({\rm p.c.}) } &=&
{2\sqrt{2}\over \sqrt{3}\,m_fm_V}\left\{ \matrix{-\sqrt{Q_+}\,\big[{{1\over 2}}(m_i^2
-m_f^2-m_V^2)C_1 +Q_-(m_i+m_f)C_2+m_i^2p_c^2C_3\big] \cr
\sqrt{Q_-}\,\big[{{1\over 2}}(m_i^2-m_f^2-m_V^2)D_1 -Q_+(m_i-m_f)D_2+
m_i^2p_c^2D_3\big] \cr} \right\}. \nonumber
\end{eqnarray}
\renewcommand{\baselinestretch}{1.1}
\newpage
\section{Transformations and Information}
Cosmologists have been trained to look at the world through linear two-point statistics: the power spectrum and correlation function of the overdensity field, $\delta=\rho/\bar{\rho}-1$. This is for good reason: linear perturbation theory is naturally expressed in terms of the power spectrum of $\delta$, which sources gravity. The raw power spectrum works well for the nearly Gaussian cosmic microwave background (CMB) as well, and the galaxy and matter density fields on large scales. Also, $\delta$ has the benefit that the power at a given scale is largely invariant if the resolution is increased. But the usual correlation function and power spectrum dramatically lose constraining power in a non-Gaussian field such as the matter or galaxy density field on small scales, so to reach the highest-possible precision in cosmology, other approaches are necessary.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{twotransforms.png}
\includegraphics[width=\columnwidth]{pdfpower.pdf}
\end{center}
\caption{
\textit{\textbf{Upper left}}: a quadrant of the overdensity $\delta=\rho/\bar{\rho} - 1$ in a 2\,$h^{-1}$\,Mpc-thick slice of the 500\,$h^{-1}$\,Mpc Millennium simulation \citep{mill}, viewed with an unfortunate linear color scale.
\textit{\textbf{Upper middle}}: the same slice, rank-order-Gaussianized, i.e., applying a function on each pixel to give a Gaussian PDF.
\textit{\textbf{Upper right}}: the $\delta$ field with its Fourier phases randomized.
\textit{\textbf{Lower left:}} PDFs of the upper-left and upper-right 3D fields.
\textit{\textbf{Lower right:}} $P_\delta$\ and $P_{G(\delta)}$\ of $\delta$ (red) and $G(\delta)$ (blue), measured from each of several 2D slices such as those shown in the upper-left and middle panels. The wild, coherent fluctuations in $P_\delta$\ from slice to slice illustrate its high (co)variance, absent in $P_{G(\delta)}$, which has nearly the same covariance properties as a Gaussian random field.
}
\label{fig:showdens}
\end{figure}
Illustrating the nature of non-Gaussianity encountered in cosmology, Fig.\ \ref{fig:showdens} shows a dark-matter density slice from the Millennium simulation, together with the results after two operations: PDF (probability density function) Gaussianization, and randomizing Fourier phases. The original field and the phase-randomized field look staggeringly different, but have the same $\delta$ two-point statistics. The fact that the power spectrum does not distinguish these fields has been used to demonstrate the need to go to higher-point statistics. But I assert that this is the wrong way to go, toward added complexity and difficulty of analysis.
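The phase-randomization operation is easy to reproduce in one dimension; the toy sketch below (a pure-Python illustration, not the 2-D Millennium analysis) scrambles the Fourier phases of a skewed field while preserving its power spectrum exactly, the same degeneracy as between the upper-left and upper-right panels of Fig.~\ref{fig:showdens}.

```python
import cmath, math, random

def dft(x):
    """Naive discrete Fourier transform (adequate for a toy-sized field)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT; returns the real part (the input spectrum is Hermitian)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

random.seed(1)
N = 32
field = [random.gauss(0.0, 1.0) ** 2 for _ in range(N)]  # skewed, non-Gaussian toy field
F = dft(field)

# Randomize phases, keeping |F(k)| and the Hermitian symmetry F(N-k) = F(k)*
# (k = 0 and k = N/2 are left untouched so the inverse transform stays real).
G = list(F)
for k in range(1, N // 2):
    phase = random.uniform(0.0, 2.0 * math.pi)
    G[k] = abs(F[k]) * cmath.exp(1j * phase)
    G[N - k] = G[k].conjugate()

shuffled = idft(G)
power_orig = [abs(c) ** 2 for c in F]
power_shuf = [abs(c) ** 2 for c in G]
```

The two power spectra agree to machine precision, while the real-space fields differ.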
The main difference by eye is simply in the PDF, shown at bottom left. But even more differences are extractable between the two starkly different fields at `lower order' ($N\le 2$). Applying a nonlinear mapping to a field changes its higher-point statistics. For example, a Gaussian field, with higher-point ($N>2$) functions identically zero, will sprout nonzero higher-point functions at all orders if a nonlinear function is applied to it \citep{Szalay1988}. I assert that it is useful to define Gaussianized clustering statistics ($[N>1]$-point functions), in which the field is first PDF-Gaussianized (i.e.\ a mapping is applied to give a field with as Gaussian a 1-point PDF as possible), and then clustering statistics are measured.
The phase randomization already imparts a Gaussian PDF to the upper-right panel, so Gaussianization does nothing. Gaussianization greatly changes the upper-left panel, though, into the upper-middle panel. A mapping was applied to give a Gaussian PDF of variance ${\rm Var}[\ln(1+\delta)]$ for each slice. This changes the (2-point) power spectra in slices from the red to blue curves, as shown at bottom right in Fig.\ \ref{fig:showdens}. This produces a change in both the mean, and, crucially, the covariance of the power spectrum. The covariance reduction can be seen qualitatively in the lower-right panel of Fig.\ \ref{fig:showdens}, which shows the density power spectra $P_\delta$\ and Gaussianized-density power spectra $P_{G(\delta)}$\ for several Millennium Simulation slices. There are wild fluctuations (i.e.\ variance, or covariance) on small scales of $P_\delta$\ that are not present in $P_{G(\delta)}$. These wild fluctuations show up as a drastic reduction in the cosmological-parameter Fisher information in $P_\delta$, i.e. an enlargement in error bars \citep[\eg][]{MeiksinWhite1998,rh05,NeyrinckEtal2006,ns07,TakahashiEtal2009}.
Analyzing $P_{G(\delta)}$, on the other hand, enhances cosmological parameter constraints, in principle by a factor of several \citep{Neyrinck2011b}.
My proposal is to measure the 1-point PDF and Gaussianized clustering statistics together; the Gaussianization step not only reduces covariance in the power spectrum itself, but also the covariance between the power spectrum and 1-point PDF. A mathematical reason to analyze the complete 1-point PDF (not simply its moments) is that, if it is sufficiently non-Gaussian, analyzing even arbitrarily high moments does not give all of its information, as has long been known in the statistical literature \citep[\eg][]{aitchison1957lognormal}. This was pointed out in a cosmological context by \citet{ColesJones1991}, and \citet{Carron2011} more recently emphasized the mistakenness of the conventional wisdom that measuring all $N$-point functions gives all spatial statistical information in cosmology, and the consequences for constraining parameters.
\section{Tracer bias}
From a statistical point of view, measuring the 1-point PDF along with Gaussianized clustering statistics is a superior approach to measuring just the raw power spectrum. One might be fearful of additional irritants from galaxy-vs-matter bias in Gaussianized clustering statistics, but in fact Gaussianization automatically provides a natural framework to incorporate bias issues, potentially {\it simplifying} analysis greatly. Suppose the tracer $\delta_g$ and the matter density $\delta_m$ are related by any invertible function. Then mapping both PDFs onto the same function (for example, a Gaussian) will give precisely the same fields. This fact has long been exploited in the field of genus statistics \citep[\eg][]{RhoadsEtal1994,GottEtal2009}; any local monotonic density transformation will leave Gaussianized statistics unchanged in a local-bias approximation. Gaussianization was first applied to power spectra in cosmology by \citet{Weinberg1992}. Unfortunately, it does not seem to reconstruct the initial conditions perfectly on small scales, as was the original aim, although it does largely restore the initial power spectrum's shape \citep{NeyrinckEtal2009}. One way to understand this shape restoration is that whereas in $P_\delta$, power smears only from large to small scales, power in $P_{G(\delta)}$\ migrates rather evenly both upward and downward in scale \citep{NeyrinckYang2013}. This is because $P_\delta$\ is mainly sensitive to overdense regions where fluctuations contract, and a sort of one-halo shot noise appears from sharp spikes \citep{NeyrinckEtal2006}. $P_{G(\delta)}$, on the other hand, is rather equally sensitive to underdense regions as well, where fluctuations expand.
Fig.\ \ref{fig:gausspower}, taken from \citet{NeyrinckEtal2014}, shows what Gaussianizing does for different tracers explicitly. It uses the MIP ({\it multum in parvo}) ensemble of $N$-body simulations \citep{AragonCalvo2012}, in which 225 realizations were run with the same initial large-scale modes (with $k<2\pi/(4h^{-1}\,{\rm Mpc})$), but differing small-scale modes. So each simulation gives a different realization of haloes in the same cosmic web. For Fig.\ \ref{fig:gausspower}, we averaged together the halo and matter density fields over the realizations, and measured the $\delta$ and $G(\delta)$ power spectra. In the ensemble, there is a rather clean mapping between mean matter density and mean halo density, a power law with an exponential cutoff at low density; see \citet{NeyrinckEtal2014} for details. The correspondence in the Gaussianized power spectra is impressive.
However, in this discussion, we have ignored an important caveat: galaxy discreteness. If empty (zero-density) pixels exist, this makes a naive logarithmic transform inapplicable. Also, if there are multiple pixels with the same density, then any assumed mapping from a perfect Gaussian to $\delta$ is not invertible. In this case, $\delta$ can be rank-ordered, and for a $\delta$ appearing multiple times, $G(\delta)$ can be set to the average of all $G(\delta)$ that would map to that range of $\delta$; see \citet{Neyrinck2011b} for more details. This problem can be negligible, as in the cases of the two figures above, or it can be substantial, in the high-discreteness limit. A rule of thumb is to use pixels large enough to contain on average several galaxies. As long as this scale is in the non-linear regime, it will be fruitful to Gaussianize. A promising new alternative is an optimal transform for a pixelized Poisson-lognormal field \citep{CarronSzapudi2014}. This gives the maximum posterior density from a single pixel in a lognormal-Poisson Bayesian reconstruction, as in \citet{KitauraEtal2010}.
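The rank-ordering procedure just described is short to implement; below is a minimal pure-Python sketch (mid-quantile Gaussian targets are one simple choice of mapping here; see \citet{Neyrinck2011b} for the precise prescription), together with a toy check of the bias argument above: any strictly increasing transformation of the field Gaussianizes to exactly the same field.

```python
import math, statistics

def rank_gaussianize(values):
    """Map `values` onto a standard normal by rank order, assigning tied
    entries the average of the Gaussian targets they would span."""
    n = len(values)
    nd = statistics.NormalDist()
    # Gaussian targets for ranks 1..n via mid-quantiles (i + 0.5)/n.
    targets = [nd.inv_cdf((i + 0.5) / n) for i in range(n)]
    order = sorted(range(n), key=lambda i: values[i])
    out = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend over a block of tied values
        avg = sum(targets[i:j + 1]) / (j - i + 1)
        for k in range(i, j + 1):
            out[order[k]] = avg
        i = j + 1
    return out

# Toy field with repeated (tied) densities, e.g. from identical pixel counts.
delta = [0.0, 0.0, -0.5, 2.0, 1.0, -0.9, 0.3, 0.3]
g1 = rank_gaussianize(delta)
# A monotonic "bias" delta -> exp(delta) leaves the Gaussianized field unchanged.
g2 = rank_gaussianize([math.exp(d) for d in delta])
```

With tied entries averaged this way, the map stays well defined even when the assumed Gaussian-to-$\delta$ mapping is not invertible.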
\begin{figure}[H]
\begin{minipage}[b]{0.6\linewidth}
\begin{center}
\includegraphics[width=0.9\columnwidth]{gausspower.pdf}
\end{center}
\end{minipage}
\begin{minipage}[b]{0.4\linewidth}
\begin{center}
\leavevmode
\end{center}
\caption{Power spectra of matter and two mass ranges of haloes in the MIP ensemble-mean fields. The Gaussianized-density power spectra $P_{{\rm Gauss}(\delta)}$ show substantially less difference among the various density fields than the raw density power spectra $P_\delta$, supporting the hypothesis that a local, strictly-increasing density mapping captures the mean relationship between matter and haloes.}
\vskip 2 cm
\normalsize \hspace{1 em} The usual $\delta$ clustering statistics have large statistical error bars on nonlinear scales, which can swamp errors from sub-optimal measurement. But Gaussianized clustering statistics have great statistical power; with that power comes great responsibility to measure them accurately, which is what we plan to do in future work.
\label{fig:gausspower}
\end{minipage}
\end{figure}
\section{Introduction}
\label{sec:introduction}
The Orion Nebula is one of the most studied objects in the sky, with observational records dating back about 400 years, coinciding with the early developments of the telescope \citep{Muench2008}. It is an object of critical importance for astrophysics as it contains the nearest (400 pc) massive star formation region to Earth, the Orion Nebula Cluster (ONC) \citep[e.g.][]{1965ApJ...142..964J,1972ApJ...175...89W}, which is the benchmark region for massive star and cluster formation studies. Recent distance estimates to the Orion Nebula using parallax put this object at about 400 pc from Earth (389$^{+24}_{-21}$ pc \citep{2007ApJ...667.1161S}, 414$\pm$7 pc \citep{Menten2007}, 437$\pm$19 pc \citep{2007PASJ...59..897H}, and 419$\pm$6 pc \citep{2008PASJ...60..991K}). Some of the most basic observables of the star formation process, such as 1) star formation rates \citep{Lada1995,Lada2010}, 2) star formation history \citep{Hillenbrand1997}, 3) age spreads \citep{Jeffries11,Reggiani11}, 4) the initial mass function to the substellar regime \citep{2000ApJ...540..236H,2002ApJ...573..366M,2012ApJ...748...14D,2012ApJ...752...59H}, 5) the fraction, size distribution, and lifetime of circumstellar disks \citep{Hillenbrand1998b,Lada2000,Muench2001,Vicente2005}, 6) their interplay with massive stars \citep{Odell1993}, binarity \citep{1998ApJ...500..825P,2006A&A...458..461K}, rotation \citep{2002A&A...396..513H}, magnetic fields \citep{2003ApJ...584..911F}, and 7) young cluster dynamics \citep{Hillenbrand1998,2008ApJ...676.1109F,2009ApJ...697.1103T}, have all been derived from this benchmark region \citep[see the meticulous reviews of][]{Bally2008,Muench2008,ODell2008}. Naturally, the ONC is also the benchmark region for theoretical and numerical models of massive and clustered star formation \citep{Palla1999,Klessen2000,Clarke2000,Bonnell2001,Bate2003,Tan2006,Huff2006,Krumholz2011,Allison2011}.
\begin{figure*}[!tbp]
\begin{center}
\includegraphics[width=7in]{fig_m42_christensen_o}
\caption{Optical image of the North end of the Orion A molecular cloud, including the relatively more evolved populations of NGC 1981, NGC 1977 and NGC 1980 (Orion OB 1c subgroup) and the Orion Nebula Cluster (Orion OB 1d subgroup), projected against the Orion Nebula (M42). This image illustrates well the complicated distribution of young stars in the vicinity of the ONC, with scattered groups of more evolved blue massive stars projected against partially embedded groups of younger stars (M43, ONC, OMC-2/3, L1641N). Image courtesy of Jon Christensen (christensenastroimages.com)
}
\label{fig:1}
\end{center}
\end{figure*}
\begin{figure}[!tbp]
\begin{center}
\includegraphics[width=\hsize,angle=0]{coverage}
\caption{Coverage of the optical datasets used in this study. The SDSS images used in this study are represented in green, the {\it CFHT/Megacam} images in red, and the Calar Alto 1.23m CCD observations are represented in light blue. The contour corresponding to an integrated intensity of $^{13}$CO of 30~K km/s is represented in white. North is up and east is left. The angular scale is indicated in the lower left. Background photograph courtesy of Rogelio Bernal Andreo (DeepSkyColors.com) }
\label{fig:coverage}
\end{center}
\end{figure}
It is quite remarkable that within only 1.5$^{\circ}$ of the ONC there are several contiguous, and likely overlapping, groups of young stars (see Figure~\ref{fig:1}), although few studies have tackled the entire region as a whole. Unfortunately the three-dimensional arrangement of star formation regions, in particular massive ones, is far from simple and is essentially unknown given the current distance accuracy to even the nearest star formation regions. It is clear however that the Orion Nebula cluster is partially embedded in its parental Orion A molecular cloud which in turn is inside the large $\sim 200$ pc Orion star formation complex, where groups of young stars with ages from a few to about 10 Myr are seen \citep{Brown1994,Briceno2007}. It has long been suspected that a more evolved group \citep[subgroup Ori OB 1c, including NGC 1981 and NGC 1980,][]{1964ARA&A...2..213B,1978ApJS...36..497W} is in the foreground of the molecular cloud where the younger ONC population (subgroup Ori OB 1d) is still partially embedded \citep[see][for a large scale analysis of the possible interplay between these two subgroups]{Gomez1998}.
There are two different views on the stellar population inside the Orion Nebula. The first suggests that the core of the ONC, the Trapezium cluster, is a distinct entity from the rest of the stellar population in the nebula, while the second, more prevalent, suggests the Trapezium is instead the core region of a larger cluster emerging from the Orion Nebula. \citet{1986ApJ...307..609H} performed one of the first CCD observations of an area centered on the Trapezium cluster (covering $\sim9.2^{\prime \,2}$), and from the exceptional high stellar density found they argued that the Trapezium cluster was a distinct entity from the surrounding stellar population, including the stellar population inside the Orion Nebula. An opposite view was proposed by \citet{Hillenbrand1998} who compared optical and near-infrared surveys of the ONC with virial equilibrium cluster models to argue that the entire ONC is likely a single young stellar population.
Confirming which view is correct is critical because they imply different formation scenarios for the ONC and, assuming the ONC is typical, different scenarios for the formation of stellar clusters in general. While the first view implies the bursty formation of the bulk of the stars in a relatively small volume of the cloud, the second, by assuming a more extended cluster, necessarily calls for a longer and more continuous process, allowing for measurable age spreads in the young population and for substantial fractions of young stellar objects (YSOs) at all evolutionary phases, from Class 0 to Class III. Observationally, the first view argues that the Orion OB 1c subgroup is a distinct star formation event from the 1d subgroup, while the second and more prevalent view argues that the two subgroups are the same population, i.e., the Ori OB 1c subgroup is simply the more evolved stellar population emerging from the cloud where group 1d still resides.
If the first view prevails, i.e., if the Trapezium cluster and ongoing star formation in the dense gas in its surroundings represent a distinct population from the rest of the stars in the larger ONC region, then what is normally taken in the literature as the ONC is likely to be a superposition of different stellar populations. If this is the case, then the basic star formation observables currently accepted for this benchmark region (e.g., ages, age spread, cluster size, mass function, disk frequency, etc.) could be compromised.
In this paper we address this important issue by attempting to characterize the stellar populations between Earth and the Orion Nebula. Our approach consists of using the Orion A cloud to block optical background light, effectively isolating the stellar population in front of it. We then use a multi-wavelength observational approach to characterize the cloud's and nebula's foreground population. We find that there are two well defined, distinct, and unfortunately overlapping stellar populations: 1) a foreground, ``zero'' extinction population dominated by the poorly studied but massive NGC 1980 cluster, and 2) the reddened population associated with the Trapezium cluster and L1641N star forming regions, supporting the first view on the structure of the ONC as described above. This result calls for a revision of most of the star formation observables for this fiducial object.
This paper is structured as follows. In Sect. 2 we describe the observational data acquired for this project as well as the archival data used. In Sect. 3 we present the results of our approach, namely the identification and characterization of the two foreground populations. We present a general discussion of the importance of this result in Sect. 4 and summarize the main results of the paper in Sect. 5.
\begin{figure*}[!tbp]
\begin{center}
\includegraphics[width=\hsize,angle=0]{g_gr_tdust.jpg}
\caption{$g$ vs. $g-r$ color-magnitude diagram for the entire survey in regions of increasing total line-of-sight extinction.}
\label{fig:grcmd}
\end{center}
\end{figure*}
\section{Data}
\label{sec:data}
To characterize the foreground population to the Orion A molecular cloud we make use of existing surveys together with raw data from the Canada-France-Hawaii Telescope (CFHT), the Calar Alto Observatory (CAHA 1.23m), and the {\it Spitzer} satellite, which were processed and analyzed for the purpose of this investigation.
\subsection{Catalogues}
\label{sec:catalogues}
We retrieved the astrometry and photometry for all sources within a box of 5\degr$\times$15\degr\, centered around RA=85.7\degr\, and Dec=-4\degr (J2000) in the {\it Sloan Digital Sky Survey III}, the {\it Wide-field Infrared Survey Explorer} \citep[WISE,][]{2012yCat.2311....0C}, the Third XMM-Newton serendipitous source catalogue \citep{2009A&A...493..339W},
and the 2MASS catalogue \citep{2006AJ....131.1163S}. Table~\ref{table_obs} gives an overview of the properties of these catalogues.
\begin{table}
\caption{Catalogues and observations used in this study \label{table_obs}}
\begin{tabular}{lc}\hline\hline
Instrument & Band / \\
& Channel \\
\hline
XMM-Newton/EPIC & 0.1--10~keV \\
SDSS & $u$,$g$,$r$,$i$,$z$ \\
CFHT/MegaCam & $u$,$g$,$r$ \\
2MASS & $J$,$H$,$Ks$ \\
WISE & 3.4,4.6,12,22~$\mu$m \\
Spitzer/IRAC & 3.6,4.5,5.8,8.0~$\mu$m \\
Spitzer/MIPS & 24~$\mu$m \\
Calar Alto 1.23m & $u$,$g$,$r$ \\
\hline
\end{tabular}
\end{table}
\subsection{CFHT/Megacam}
\label{sec:cfhtmegacam}
A mosaic of 2$\times$2 pointings covering 2\degr$\times$2\degr\, centered on the Orion Nebula Cluster (ONC) was observed with {\it CFHT/Megacam} \citep{2003SPIE.4841...72B} with the Sloan $ugr$ filters on 2005 February 14 (P.I. Cuillandre). Figure~\ref{fig:coverage} gives an overview of the area covered by these observations. The conditions were photometric, as described in the \emph{Skyprobe} database \citep{2004ASSL..300..287C}. Seeing was variable, oscillating between 1--2\arcsec\, as measured in the images. A total of 5 exposures of 150~s ($u$-band), 40~s ($g$-band), and 40~s ($r$-band) each were obtained at each of the 4 positions. The observations were made in dither mode, with a jitter width of a few arcminutes at each position. This allows filling the CCD-to-CCD and position gaps and correcting for deviant pixels and cosmic ray events. The images were processed using the recommended \emph{Elixir} reduction package \citep{2004PASP..116..449M}. Aperture photometry was then extracted using {\it SExtractor} \citep{1996A&AS..117..393B} and the photometric zero-points in the SDSS system were derived by cross-matching with the SDSS catalogue. The CFHT/Megacam observations complement the SDSS data in one critical aspect: they provide data for regions around bright stars and nebulae, in particular the Orion Nebula region, which is missing from the SDSS data.
\subsection{Calar Alto/1.23m CCD Camera}
\label{sec:caha123}
Selected pointings of the ONC (see Table~\ref{table_obs_caha} and Fig.~\ref{fig:coverage}) were observed on 2011 December 15 with the Calar Alto CCD camera mounted on the 1.23m telescope (hereafter CAHA123). The CCD camera is a 2k$\times$2k optical imager with a 17\arcmin\, field-of-view. The Sloan filters available at Calar Alto vignette the field and reduce it to a circular 11\arcmin\, diameter field-of-view. These observations are meant to complement the CFHT and SDSS observations below their saturation limits (at $ugr\approx$12~mag), and in the vicinity of bright saturated stars. Short exposures of 0.1 and 5.0~s were obtained in the Sloan $gr$ filters, and of 0.1 and 10~s in the Sloan $u$ filter. The telescope was slightly defocused to avoid saturation of the brightest stars. Three standard fields \citep[SA~97, SA~92 and BD+21D0607,][]{2002AJ....123.2121S} were observed during the course of the night to derive accurate zero-points. Each pointing was observed with a small dithering of a couple of arcminutes in order to correct for deviant pixels and cosmic ray events. The images were pre-processed (bias subtraction and flat-field correction) using standard procedures with the \textit{Eclipse} reduction package \citep{1997Msngr..87...19D}. The astrometric registration and stacking were then performed using the \textit{AstrOmatic} software suite \citep[][]{2010jena.confE.227B}. Aperture photometry was finally extracted using \textit{SExtractor} and the photometric zero-points in the SDSS system were derived by cross-matching with the SDSS and {\it Megacam} catalogues. The night was clear but not photometric. We observe a dispersion in the zero-point measurements through the night of 0.06~mag in $u$ and $g$, and 0.16~mag in $r$, which we add quadratically to the photometric measurement uncertainties.
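For concreteness, the zero-point bookkeeping described above (an offset against a reference catalogue, with the nightly zero-point dispersion added in quadrature to the photometric errors) can be sketched as follows. The function names and magnitudes are illustrative only, not part of the actual pipeline.

```python
import numpy as np

def zero_point(instrumental_mag, reference_mag):
    """Derive a photometric zero-point from cross-matched sources.

    Returns the median offset and its dispersion (a sigma-clipped
    mean would work equally well for clean matches)."""
    offsets = reference_mag - instrumental_mag
    return np.median(offsets), np.std(offsets)

def total_uncertainty(phot_err, zp_dispersion):
    """Add the nightly zero-point dispersion in quadrature."""
    return np.hypot(phot_err, zp_dispersion)

# Toy example with made-up instrumental and reference magnitudes
inst = np.array([14.2, 15.1, 16.0])
ref = np.array([17.3, 18.2, 19.1])
zp, zp_sig = zero_point(inst, ref)
err = total_uncertainty(np.array([0.02, 0.03, 0.05]), 0.06)
```

The same quadrature step applies per band (e.g. the 0.16~mag dispersion measured in $r$).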
\begin{table}
\caption{CAHA 1.23m CCD observations\label{table_obs_caha}}
\begin{tabular}{lcc}\hline\hline
Field & RA (J2000) & Dec (J2000) \\
\hline
Trapezium & 05:35:19.341 & $-$05:23:30.35 \\
Field 1 & 05:35:24.651 & $-$05:55:06.69 \\
Field 2 & 05:34:56.819 & $-$05:59:59.55 \\
Field 3 & 05:35:25.044 & $-$05:59:15.44 \\
Field 4 & 05:34:52.120 & $-$05:34:04.66 \\
Field 5 & 05:33:59.980 & $-$05:35:40.17 \\
Field 6 & 05:36:09.483 & $-$05:38:17.68 \\
Field 7 & 05:37:23.116 & $-$05:56:10.97 \\
\hline
\end{tabular}
\end{table}
\subsection{Spitzer IRAC}
\label{sec:spitzer-irac}
The ONC has been extensively observed with {\it IRAC} on-board the {\it Spitzer} observatory in the course of programs 30641, 43 and 50070. We retrieved the corresponding IRAC BCD images and associated ancillary products from the public archive, and processed them following standard procedures with the recommended {\it MOPEX} software \citep{2005PASP..117.1113M}. The observations were all made using the High Dynamic Range mode, providing short (0.6~s) and long (12~s) exposures. We processed the two sets independently so as to cover the largest dynamic range. The procedure within {\it MOPEX} includes overlap correction, resampling, interpolation (to have an output pixel scale of 0\farcs6) and outlier rejection. The individual frames were then median combined using {\it Swarp} \citep{SWARP} using the rms maps produced by MOPEX as weight maps. Aperture photometry of all the sources brighter than the 3-$\sigma$ noise of the local background was extracted using \textit{SExtractor}. We verify that the corresponding photometry is in good agreement with IRAC photometry from \citet{2009A&A...504..461F} within the uncertainties.
\subsection{Spitzer MIPS1}
\label{sec:spitzer-mips}
The ONC was observed with the {\it Spitzer Space Telescope} and its MIPS instrument in the course of programs 202, 30641, 30765, 3315, 40503, 47, 50070, and 58. We retrieved from the public archive all the corresponding MIPS1 (24~$\mu$m) BCD images, and processed them with the recommended MOPEX software. The procedure includes self-calibration (flat-fielding), overlap correction, outlier rejection, and weighted coaddition into the final mosaic. Aperture photometry of all the sources brighter than the 3-$\sigma$ noise of the local background was extracted using \textit{SExtractor}. We also verify that the corresponding photometry is in very good agreement with MIPS photometry from \citet{2009A&A...504..461F} within the uncertainties.
\begin{figure*}[!tbp]
\begin{center}
\includegraphics[width=6in,angle=0]{jh_hk.pdf}
\caption{J$-$H versus H$-$K Color-Color versus J brightness diagram. The solid grey lines represent the main sequence and giant sequences from \cite{Bessel1988}. The dashed grey line represents the main sequence from \cite{2007AJ....134.2340K}. The sizes of the symbols are proportional to J-band brightness. Sources taken as foreground candidates are marked in blue, while rejected sources, mostly extincted sources, are marked in red. }
\label{fig:jhkcc}
\end{center}
\end{figure*}
\section{Results}
\label{sec:results}
We are interested in the foreground population to the Orion A cloud, in particular towards the ONC. To separate it from the background we use the optical properties of dust grains in the Orion A cloud to block the optical light from the cloud's background. This is a very effective way of isolating the stellar population between Earth and the Orion A cloud, in particular if we use blue optical bands where dust extinction is most effective. To select the final sample of foreground stars we take two filtering steps informed by Color-Magnitude and Color-Color diagrams in the optical and infrared. In particular, we start by 1) using blue optical magnitudes and colors to define a reliable subsample of sources in front of the cloud, and then 2) using a near-infrared color-color diagram to reject sources affected by extinction (these are sources that are either young stars inside the cloud or background sources that are bright enough to be detected in the optical survey).
\begin{figure*}[!tbp]
\begin{center}
\includegraphics[width=\hsize,angle=0]{density-crop_o.pdf}
\caption{Left panel: spatial density of the foreground sources (blue sample). The unshaded area in the Figure represents the region of the cloud where A$_V \geq 5$ mag, on which the selection was performed. The blue contours (with increments of 10\% from the maximum) represent the surface density of foreground sources (constructed with a gaussian kernel with a width of 20\arcmin). Right panel: same as in the left panel but for the reddened sources. The distribution of foreground sources shows a well defined peak coinciding with the poorly studied NGC 1980. The reddened sources, on the other hand, peak around 1) the Trapezium cluster and are mostly confined to the nebula and 2) the L1641N star forming region, with a peak towards a hitherto unrecognized group of YSOs (see text). The reddened and foreground populations are spatially uncorrelated but there is significant overlap between the two, in particular with the sources inside the Orion Nebula.
}
\label{fig:density}
\end{center}
\end{figure*}
\begin{figure}[!tbp]
\begin{center}
\includegraphics[width=\hsize,angle=0]{density_xray_o.pdf}
\caption{Spatial density of all the X-ray sources in the Third XMM-Newton serendipitous source catalog for the same region as in the previous Figure. The gaussian kernel used and contour separation are the same. NGC 1980 is detected as a distinct enhancement in the surface density of X-ray sources, together with the Trapezium cluster, the L1641N population, and the hitherto unrecognized group that we name L1641W.}
\label{fig:density-xrays}
\end{center}
\end{figure}
The first filtering step is displayed in Figure~\ref{fig:grcmd} where we present a $g$ vs $g-r$ color-magnitude (CMD) diagram combining the SDSS, CFHT/Megacam and CAHA123 photometry. We chose $g-r$ over the more extinction sensitive $u-g$ as the $u-$band observations are significantly less deep, and extremely sensitive to excesses related to accretion and activity.
The first diagram (on the left) represents the $g$ vs $g-r$ color-magnitude diagram for all the sources in our combined database. The three CMD-diagrams to the right are a subset of the first containing only sources projected against increasing contours of dust extinction of the Orion A cloud (about 4, 5, and 6 magnitudes of visual extinction). These column density thresholds were estimated from the $^{13}$CO map of \citet{Bally87}, cross-calibrated with the extinction map of \cite{Lombardi2011}. While using directly the extinction map of \cite{Lombardi2011} gives similar results, we preferred to avoid dealing with any possible systematics affecting this map caused by a potential substantial population of foreground sources. As we impose the condition of keeping only sources that are seen against increasing levels of dust extinction, two things occur: 1) the number of stars decreases, naturally, because the solid angle on the sky decreases, and 2) a well defined sequence appears. This sequence is not what is expected from the general Galactic population between Earth and the Orion A cloud at 400 pc, as confirmed with the Besan\c{c}on stellar population model \citep{Robin03}. From this step we retain the subsample of sources that is seen in projection against column densities of greater than $\sim5$ visual magnitudes of extinction (third panel in Figure~\ref{fig:grcmd}), or a total of 2169 sources from more than 1.25$\times 10^5$ sources in the combined SDSS--MEGACAM--CAHA123 catalog. Most of the discarded sources have colors consistent with unreddened and slightly reddened unrelated field stars towards the background of the Orion A cloud. Among the sources that pass the first filter there could be some with g-band excess emission, but these should have a negligible effect on the selection process, in particular because the next filtering step is done in the near-infrared.
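A minimal sketch of this first filter, assuming a simple linear cross-calibration between the $^{13}$CO integrated intensity and A$_V$; the calibration coefficients and intensities below are made up for illustration and are not the actual values used.

```python
import numpy as np

def calibrate(w_co_pixels, av_pixels):
    """Linear cross-calibration of A_V against W(13CO), fitted on
    pixels covered by both the CO map and the extinction map."""
    return np.polyfit(w_co_pixels, av_pixels, 1)

def co_to_av(w_co, coeffs):
    """Convert 13CO integrated intensity (K km/s) to A_V (mag)."""
    slope, intercept = coeffs
    return slope * w_co + intercept

# hypothetical calibration pixels (a perfectly linear toy case)
w_pix = np.array([0.0, 10.0, 20.0, 30.0])
av_pix = np.array([0.5, 2.0, 3.5, 5.0])
coeffs = calibrate(w_pix, av_pix)

# keep only sources projected against A_V >= 5 mag
w_at_sources = np.array([2.0, 20.0, 45.0])
keep = co_to_av(w_at_sources, coeffs) >= 5.0
```

In practice the calibration would be done pixel by pixel against the \cite{Lombardi2011} map before applying the threshold at the source positions.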
The second filtering step consists of discarding extincted sources. We want to remove from the sample any source that might be associated with the cloud (young stellar objects still embedded in the cloud, for example), as well as background sources that are bright enough to be detected at $\sim$0.487$\mu$m (g-band) through A$_V\sim5$ mag of cloud material. We perform this filtering using a J$-$H vs. H$-$K Color-Color diagram where extincted sources are easily identified along the reddening band, away from their unreddened main sequence (and giant) colors \citep[e.g.][]{1992ApJ...393..278L,1998ApJ...506..292A,2001A&A...377.1023L}. We present in Figure~\ref{fig:jhkcc} the J$-$H vs. H$-$K Color-Color diagram for the 2169 sources that passed the first filtering step. The sizes of the symbols in this Figure are proportional to J-band brightness. The selection criteria used to identify the likely foreground population were:
\begin{equation}
\label{eq:1}
H-K < \frac{0.96-(J-H)}{1.05} \,\, \mathrm{mag}
\end{equation}
\begin{equation}
\label{eq:2}
J < 15 \,\, \mathrm{mag}
\end{equation}
\begin{equation}
\label{eq:3}
J-H<0.74 \,\, \wedge \,\, H-K>-0.2 \,\, \wedge \,\, H-K<0.43 \,\, \mathrm{mag}
\end{equation}
\noindent
Sources that are consistent with having no extinction within the photometric errors are marked in blue, while rejected sources are marked in red. Condition (1), the main filter, is taken as the border between extincted and non-extincted sources, and it was chosen to be roughly parallel to the main-sequence early M-star branch (to about the color of an M4-M5 star). Conditions (2) and (3) further reject sources that are faint or have dubious NIR colors (either bluer than physically possible, or redder than main-sequence stars, suggestive of a NIR excess). Condition (2) in particular makes the selection more robust against photometric errors (the typical photometric error imposed by condition (2) is J$_{err} \sim$ 0.03$\pm$0.01, H$_{err} \sim$ 0.03$\pm$0.01, and K$_{err} \sim0.03\pm0.01$ mag, which translates into a maximum A$_V$ error of $\sim$1 mag), and should reach a sensitivity limit capable of including M4 main-sequence stars at the distance of the cloud (J $\sim$ 15 mag). More than two thirds of the 2169 sources are rejected (red sources), and a total of 624 sources have colors consistent with foreground stars suffering no or negligible amounts of extinction.\footnote{A table of candidate foreground sources is provided in electronic format.}
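The three cuts can be transcribed directly. Note that condition (1) is read here as a border in the J$-$H vs. H$-$K plane (the left-hand side is taken to be the H$-$K color, an assumption on the intended form), and the conditions are combined as simultaneous requirements; the magnitudes in the example are invented.

```python
import numpy as np

def foreground_candidate(J, H, K):
    """Apply the three selection cuts described in the text.

    Condition (1) is interpreted as a border in the J-H vs H-K
    plane (left-hand side assumed to be H-K); conditions (2) and
    (3) must hold simultaneously."""
    jh, hk = J - H, H - K
    c1 = hk < (0.96 - jh) / 1.05                   # border roughly parallel to the M-star branch
    c2 = J < 15.0                                   # brightness cut against photometric errors
    c3 = (jh < 0.74) & (hk > -0.2) & (hk < 0.43)    # reject dubious NIR colors
    return c1 & c2 & c3

# e.g. an unreddened star vs. a heavily reddened background source
J = np.array([11.0, 14.5])
H = np.array([10.5, 13.2])
K = np.array([10.4, 12.3])
sel = foreground_candidate(J, H, K)
```

The first (unreddened) source passes all three cuts; the second, with J$-$H = 1.3 and H$-$K = 0.9, lies along the reddening band and is rejected.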
\begin{figure}[!tbp]
\begin{center}
\includegraphics[width=\hsize,angle=0]{klf_ecdf_o.pdf}
\caption{a) K-band luminosity functions for the foreground (blue) and the extincted (red) sample (see Figure~\ref{fig:jhkcc}). Despite being affected by extinction, the red sample is, surprisingly, brighter than the foreground sample (blue), suggesting that the two populations are intrinsically different. b) Empirical cumulative distribution functions for both samples, together with upper and lower simultaneous 95\% confidence curves, confirming that the two populations have statistically different luminosity functions.
}
\label{fig:klf}
\end{center}
\end{figure}
\subsection{Spatial distribution of the foreground and reddened sources}
\label{sec:spatial-distribution}
The addition of the third dimension (J-band brightness) to Figure~\ref{fig:jhkcc} makes obvious the presence of a well populated main-sequence branch, from early types (B-stars) to late types (M-stars). The clear presence of so many early-type stars, as well as a well defined main sequence, suggests that the foreground population is not dominated by the random Galactic field from Earth to the ONC region, but could instead be a well defined stellar population. To investigate this idea we present in the left panel of Figure~\ref{fig:density} the spatial density of foreground sources (blue sample). The unshaded area in the Figure represents the region of the cloud where A$_V \geq 5$ mag, on which the selection was performed. We constructed a surface density map of foreground sources using a gaussian kernel with a width of 20\arcmin\, represented in the Figure as blue contours (with increments of 10\% from the maximum). It is clear from Figure~\ref{fig:density} that the distribution of foreground sources is not uniform, as would be expected for the Galactic field between Earth and the ONC region in the Orion A cloud, but is instead strongly peaked and fairly symmetric. The peak of the distribution coincides spatially with the poorly studied iota Ori cluster, or NGC 1980, suggesting that the foreground population is dominated by NGC 1980 cluster members. The elongated shape of the peak is not meaningful as it is caused by the relatively narrow A$_V \geq 5$ mag region on which the density was calculated.
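The surface density map construction (a fixed-width gaussian kernel evaluated on a coordinate grid) can be sketched as follows, using a flat-sky approximation adequate for a few-degree field; the grid limits and source coordinates below are illustrative only.

```python
import numpy as np

def surface_density(ra, dec, grid_ra, grid_dec, sigma_deg):
    """Kernel density of sources on a coordinate grid (deg^-2),
    smoothed with a gaussian of width sigma_deg (here ~20')."""
    dens = np.zeros(grid_ra.shape)
    for r, d in zip(ra, dec):
        # small-angle metric; cos(dec) stretch for RA offsets
        dr = (grid_ra - r) * np.cos(np.radians(d))
        dd = grid_dec - d
        dens += np.exp(-(dr**2 + dd**2) / (2 * sigma_deg**2))
    return dens / (2 * np.pi * sigma_deg**2)

# toy grid around a single source at (RA, Dec) = (84, -6)
gra, gdec = np.meshgrid(np.linspace(83, 85, 50), np.linspace(-7, -5, 50))
dens = surface_density(np.array([84.0]), np.array([-6.0]),
                       gra, gdec, 20.0 / 60.0)
```

Contours at fixed fractions of the maximum of such a map give the 10\%-increment curves shown in the Figure.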
The right panel of Figure~\ref{fig:density} shows the spatial distribution of reddened sources. This distribution is dominated by two peaks, a relatively well defined one in the Trapezium cluster region and another, more diffuse, coinciding with the overall gas distribution around the L1641N star forming region. This is not surprising as the reddened sources are expected to be dominated by the embedded young stellar objects in the Orion A cloud, and these are known to cluster around these two regions. More striking, instead, is that the foreground sources and the reddened sources appear spatially anti-correlated: the maximum of the distribution of the foreground sources coincides with the minimum of the distribution of reddened sources. This is evidence that the foreground population is not the emerging young stellar population from the ONC but is instead an entirely different population. Because it contains a fully sampled and unreddened main-sequence from B- to M-stars (see Figure~\ref{fig:jhkcc}), this population is most likely the stellar population of the NGC 1980 cluster, seen in projection against the Orion A cloud. But because this population overlaps significantly with the ONC (the distribution of foreground sources appears symmetric to about 7 pc from its center), it implies that the ONC is not a single cluster with the Trapezium as its core, but contains instead three stellar populations: a) the youngest population, including the Trapezium and ongoing star formation associated with the dense gas in the nebula, b) part of the NGC 1980 cluster in its foreground, and c) the unrelated, Galactic, foreground and background population.
In Figure~\ref{fig:density-xrays} we present, for the same region shown in Figure~\ref{fig:density}, the distribution of all the X-ray sources in the Third XMM-Newton serendipitous source catalog. We want to investigate whether the NGC 1980 X-ray source counts appear as a distinct peak in the surface density map (constructed using a gaussian kernel with a width of 20\arcmin\, as in the previous Figure). As can be seen in this Figure, both the foreground and the extincted populations are detected, and a distinct enhancement in the surface density of X-ray sources is seen towards the center of NGC 1980, giving strength to the idea that the foreground population is not the emerging young stellar population from the ONC but is instead an entirely different population. The highest peak in this surface density map is centered on the Trapezium cluster. This was not the case for the extincted population as seen in Figure~\ref{fig:density}, which peaks slightly to the South of the Trapezium, although this mismatch could be due simply to the fact that our MEGACAM g-band observations are affected by the bright nebula and dust extinction in the Trapezium region, and hence are substantially less sensitive to the embedded population of the Trapezium cluster.
\subsubsection{A hitherto unrecognized group of YSOs?}
\label{sec:hith-unrec-group}
The presence of an enhancement of the reddened sources in Figure~\ref{fig:density} towards RA: $5^h35^m$, Dec: $-6^\circ18^\prime$, immediately South of NGC 1980 and towards the West of what is normally taken as the L1641N cluster \citep[e.g.][]{Allen2008}, is tantalizing. Could this relatively small enhancement be another hitherto unrecognized group of YSOs? The enhancement is clearly detected in X-rays (Figure~\ref{fig:density-xrays}), and is tentatively detected in the optical (Figure~\ref{fig:density}), providing support in favor of this possibility. We name this potentially new group of about 50 stars (counted in the reddened sample) L1641W. The group is not associated with any obvious nebula nor does it include any obvious bright star. The fact that it appears less extincted than the L1641N population, and is not obviously detected in the Spitzer survey \citep[e.g.][]{Allen2008}, suggests that it is probably more evolved than the L1641N population. We speculate that this new group is either a foreground young group ramming into the Orion A cloud, or a slightly older sibling of L1641N, leaving the cloud.
\begin{table}
\caption{Position of clusters in Figure~\ref{fig:density-xrays}, including the newly identified L1641W \label{clusters}}
\begin{tabular}{lcc}\hline\hline
Field & RA (J2000) & Dec (J2000) \\
\hline
L1641W & 05:34:51.0 & $-$06:17:40 \\
NGC 1980 & 05:35:11.0 & $-$05:58:00 \\
Trapezium & 05:35:16.5 & $-$05:23:14 \\
L1641N & 05:35:55.7 & $-$06:23:55 \\
\hline
\end{tabular}
\end{table}
\subsection{Luminosity function of foreground and reddened sources}
\label{sec:lumin-funct-blue}
In Figure~\ref{fig:klf} a) we present the K-band luminosity functions for both the foreground (blue) and reddened (red) samples (see Figure~\ref{fig:jhkcc}). To enable a direct comparison, the reddened sample was also constrained with condition (2), namely, J $<$ 15 mag. Surprisingly, the extincted sample (red) is brighter than the foreground sample (blue), even though no de-reddening procedure was applied to the red sample. We confirm that the differences between the two luminosity functions are significant, to a 95\% confidence level, by analyzing their empirical cumulative distribution functions (see Figure~\ref{fig:klf} b)). Note that had we de-reddened the extincted sample, the difference between the luminosity functions would have been even larger. This suggests, as in the previous section, that the foreground and reddened populations are intrinsically different. A likely explanation for the difference in the luminosity functions is that the reddened sample is dominated by very young stellar objects still embedded in the cloud, which are intrinsically brighter than normal stars because of both stellar evolution and the presence of K-band excess emission.
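The comparison via empirical cumulative distribution functions with simultaneous 95\% confidence curves can be sketched as follows. We use the Dvoretzky-Kiefer-Wolfowitz inequality as one standard construction of such bands (an assumption; the paper does not specify the construction), and the two magnitude samples below are synthetic.

```python
import numpy as np

def ecdf(sample):
    """Empirical cumulative distribution function."""
    x = np.sort(sample)
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

def dkw_band(n, alpha=0.05):
    """Half-width of a simultaneous (1-alpha) confidence band
    around an ECDF, from the DKW inequality."""
    return np.sqrt(np.log(2.0 / alpha) / (2.0 * n))

# synthetic K-band magnitude samples for illustration
rng = np.random.default_rng(0)
fg = rng.normal(12.0, 1.0, 300)    # fainter, "foreground-like"
red = rng.normal(11.0, 1.0, 300)   # brighter, "reddened-like"
x_fg, y_fg = ecdf(fg)
eps = dkw_band(len(fg))            # 95% band half-width
```

If the two ECDFs separate by more than the sum of their band half-widths anywhere, the luminosity functions differ at the stated confidence level.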
\begin{figure}[!tbp]
\begin{center}
\includegraphics[width=\hsize,angle=0]{vdispprofile_o.pdf}
\caption{The North-South velocity dispersion profile of the ONC region. The filled circles represent the sources in \citet{2009ApJ...697.1103T} with reliable radial velocities. NGC 1977, the Trapezium cluster, and NGC 1980 are indicated, as well as the extent of the Orion Nebula (light open circle). The thick red line represents the North-South velocity dispersion profile measured in bins of Declination (indicated by the thin horizontal lines). There is an increase from the Trapezium to the edge of the nebula to the South, followed by a clear minimum around NGC 1980, strongly suggesting that NGC 1980 is a well defined and different population from the stellar population inside the nebula. Data from Table 13 of \citet{2009ApJ...697.1103T}.}
\label{fig:vdispprofile}
\end{center}
\end{figure}
\subsection{Velocity dispersion profile}
\label{sec:veloc-disp}
\citet{2009ApJ...697.1103T} presented a large kinematic study of the ONC, covering about 2$^\circ$ of Declination centered on the Orion A cloud, from NGC 1977 down to L1641N. This survey builds on previous work by \citet{2008ApJ...676.1109F} and constitutes the largest and highest precision kinematic survey of this region to date, offering a unique possibility to characterize kinematically the ONC foreground population identified in this paper. In Figure~\ref{fig:vdispprofile} we present the North-South velocity dispersion profile of the ONC region, taken from Table 13 of \citet{2009ApJ...697.1103T}. The filled circles represent the sources in that survey with reliable radial velocities. NGC 1977, OMC-2/3, the Trapezium cluster, and NGC 1980 are indicated, as well as the extent of the Orion Nebula (light open circle). The thick red line represents the North-South velocity dispersion profile measured in bins of Declination (indicated by the thin horizontal lines).
It is striking that the velocity dispersion profile has a minimum at the location of NGC 1980. This is perhaps the strongest indication we have that the stellar population of NGC 1980 is a distinct population from the reddened population inside Orion A. The measurement of velocity dispersion in the bin that mostly includes NGC 1980 ($\sigma=$2.1 km/s) was not optimized to isolate the most probable members of this cluster, and should therefore be seen as an upper limit to the true velocity dispersion in this cluster. Still, this value is close to the velocity dispersion of the Trapezium cluster as measured from the proper motions of stars within one half degree of the center of the Trapezium, namely, 1.34$\pm$0.18 km/s for a sample of brighter stars \citep{1988AJ.....95.1744V} and 1.99$\pm$0.08 km/s for a larger sample including relatively fainter stars \citep{1988AJ.....95.1755J}. Both velocity dispersions were corrected to the more recent estimate of the distance to the Orion A cloud (400 pc).
Given the striking differences in the velocity dispersion profile, we then calculated the mean radial velocity per bin from the subsample of single sources (not directly available in the Tobin et al. 2009 paper) and found that, although these means vary from bin to bin, the variations are of the order of the measured dispersions. In particular, the mean velocities for the bins including the Trapezium ($-5.4^\circ < \delta < -5.3^\circ$) and NGC 1980 ($-6.0^\circ < \delta < -5.8^\circ$) are $25.7\pm3.0$ km/s and $24.3\pm2.7$ km/s, respectively. Within the errors, estimated as the median absolute deviation in each bin, the NGC 1980 cluster has virtually the same radial velocity as the ONC. We note, however, that we take the bins as simple slices at constant declination, without trying to optimize their boundaries to better separate the different populations.
Because of the importance of measuring the velocity differences between the Trapezium and NGC 1980, especially for a discussion on the origin of NGC 1980, we made an alternative source selection and created two new subsamples that are in principle purer, but have about three times fewer sources. For the NGC 1980 subsample we matched the Tobin et al. 2009 catalog with the foreground population identified in this paper. For the Trapezium we matched the Tobin et al. 2009 catalog with the COUP sample \citep{2002ApJ...574..258F}, which is dominated by Trapezium sources, and removed sources that matched the foreground population. Because this Trapezium subsample should be of ``high confidence'', we used the radial velocity limits found in this subsample (6.2 km/s and 36.6 km/s) to exclude 5 extreme outliers in the NGC 1980 sample (with velocities of $\sim -40$ and $\sim 90$ km/s). In these subsamples, the mean velocities for the Trapezium and NGC 1980 clusters are $25.4\pm3.0$ km/s and $24.4\pm1.5$ km/s, respectively, or essentially the same values as derived above, with the important difference that the dispersion of velocities in NGC 1980 is now reduced by about a factor of two, once again suggesting that this cluster is a distinct population from the reddened population inside Orion, as argued above. Still, even with the decreased velocity dispersion, the measured velocity difference of 1 km/s is not statistically significant.
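The per-bin statistics used above (mean radial velocity per declination slice, with the median absolute deviation taken as the error) can be sketched as follows; the declinations and velocities below are invented for illustration and are not the Tobin et al. measurements.

```python
import numpy as np

def binned_velocity_stats(dec, rv, edges):
    """Mean radial velocity and median absolute deviation (used
    in the text as the per-bin error) in declination bins."""
    means, mads = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        v = rv[(dec >= lo) & (dec < hi)]
        means.append(v.mean())
        mads.append(np.median(np.abs(v - np.median(v))))
    return np.array(means), np.array(mads)

# toy declinations (deg) and radial velocities (km/s)
dec = np.array([-5.35, -5.32, -5.90, -5.85, -5.95])
rv = np.array([24.8, 26.0, 23.0, 25.0, 24.6])
edges = np.array([-6.0, -5.8, -5.3])   # two declination slices
means, mads = binned_velocity_stats(dec, rv, edges)
```

Optimizing the bin boundaries to follow the cluster memberships, rather than using constant-declination slices, would be the natural refinement.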
\subsection{On the age and population size of NGC 1980}
In order to estimate an age for the NGC 1980 cluster we compare the evolutionary status of class~II sources in various clusters through the analysis of the median spectral energy distribution (SED) of late-type (spectral type later than K0) members. We follow the \citet{2005ApJ...629..881H} definition of Class II, namely objects with $0.2<[3.6]-[4.5]<0.7$~mag and $0.6<[5.8]-[8.0]<1.1$~mag. To compute the median SED for the different clusters we retrieved the optical, near-infrared (2MASS) and mid-infrared ({\it Spitzer} and {\it WISE}) photometry for samples of confirmed members of Taurus \citep[1--3~Myr,][]{2010ApJS..186..111L}, IC~348 \citep[1--3~Myr,][]{2006AJ....131.1574L}, NGC1333 \citep[1~Myr,][]{2010AJ....140..266W,2008ApJ...674..336G}, $\lambda-$Ori \citep[5--7~Myr,][]{2001AJ....121.2124D,2004ApJ...610.1064B}, and $\eta-$Cha \citep[5--10~Myr,][]{2005ApJ...634L.113M}. To compute the median SED for the Trapezium cluster we first defined a ``high confidence'' Trapezium member catalog, as we did in the previous Section, by cross-matching the X-ray COUP sample from \citet{2008ApJ...677..401P} with the foreground (NGC 1980) sample, and excluding all matches as unrelated foregrounds. The individual SEDs within each cluster were normalized to the $J$-band flux, and the median SED of each cluster was computed. Figure~\ref{fig:medsed} shows the result. One can see from this figure that the optical part of the SED varies from cluster to cluster, mostly due to dust extinction. More strikingly, the mid-infrared ($>3$~$\mu$m) excesses, related to the presence of a disk, decrease systematically with age.
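The median-SED construction just described can be sketched in a few lines: each member's SED is normalized to its $J$-band flux, and the median is then taken band by band. The band names and flux values below are illustrative placeholders only, not the photometry used in this work.

```python
# Sketch of the median-SED construction: normalize each member's SED to
# its J-band flux, then take the per-band median. Toy data, not real
# photometry.
def median_sed(members, bands, j_band="J"):
    """members: list of dicts mapping band name -> flux (arbitrary units)."""
    normed = [{b: sed[b] / sed[j_band] for b in bands if b in sed}
              for sed in members]
    result = {}
    for b in bands:
        vals = sorted(s[b] for s in normed if b in s)
        if not vals:
            continue
        mid = len(vals) // 2
        # Median of the normalized fluxes in this band.
        result[b] = vals[mid] if len(vals) % 2 else 0.5 * (vals[mid - 1] + vals[mid])
    return result

# Toy example: three "members" with I, J and K fluxes.
members = [{"J": 1.0, "I": 2.0, "K": 0.5},
           {"J": 2.0, "I": 2.0, "K": 2.0},
           {"J": 1.0, "I": 1.0, "K": 1.5}]
sed = median_sed(members, ["I", "J", "K"])
```

By construction the normalized $J$-band value is always 1, so cluster-to-cluster comparisons are driven by the shape of the SED rather than its absolute brightness.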
The median SED of NGC 1980 seems to fit between the median SEDs of Taurus (1--3~Myr) and $\lambda-$Ori (5--7~Myr), suggesting an age in between those of these regions. But another constraint is given by the massive stars in the center of the cluster. Of the five brightest stars at the peak of the spatial distribution in Figure~\ref{fig:density}, only the brightest, iota Ori (O9 III, V$=$2.77 mag), seems to have evolved off the main sequence. This implies an age of about 4--5 Myr for this star, assuming it started its life as a 25 M$_\odot$ star \citep[e.g.][]{Massey:2003fk}. This age fits well with the age inferred from the median SED and is also in agreement with the estimate of \citet{1978ApJS...36..497W} for the age of the Ori OB 1c subgroup (about 4 Myr).
To estimate the size of the cluster population we concentrate on the distribution of foreground sources from the center of the cluster to the South, in order to avoid incompleteness issues caused by the bright Orion Nebula. We counted the number of sources falling within a 20$^\circ$ ``pie slice'' inside the A$_V \geq 5$ mag region, centered on the cluster and with a radius of 7 pc. This radius corresponds approximately to the extent of the 10\% contour in Figure~\ref{fig:density}, chosen to account for contamination from the Galactic field between Earth and Orion (estimated to be 6--9\% of the foreground population in Section~\ref{sec:unrel-galact-field}). Note that this radius is not the half-mass radius but simply the radius to which we can trace the enhancement of sources over the unrelated foreground field.
We repeated this measurement several times to account for uncertainties in the location of the cluster center and obtained an average number of 110 sources in the 20$^\circ$ ``pie slice''. Assuming spherical symmetry for the distribution of sources in NGC 1980, we then expect a total of about 2000 sources in NGC 1980, or a total cluster mass of about 1000 M$_\odot$ (assuming an average mass per star of 0.5 M$_\odot$).
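The extrapolation above is simple azimuthal scaling of the slice count to the full circle; as a sanity check, using the counts and mean stellar mass quoted in the text:

```python
# Scale the 110 sources counted in a 20-degree "pie slice" to the full
# circle, assuming azimuthal symmetry about the cluster center.
counted_in_slice = 110
slice_deg = 20.0
n_total = counted_in_slice * 360.0 / slice_deg   # ~2000 sources
mean_stellar_mass = 0.5                          # Msun, as assumed in the text
cluster_mass = n_total * mean_stellar_mass       # ~1000 Msun
```

This yields 1980 sources and 990 M$_\odot$, i.e. the rounded "about 2000 sources" and "about 1000 M$_\odot$" quoted above.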
Assuming NGC 1980 has a normal Initial Mass Function (IMF), we can make a consistency test on the likelihood of the number of sources in this cluster being of the order of 2000. For this we constructed 200000 synthetic clusters of 1000 M$_\odot$ each by randomly sampling the Kroupa and the Chabrier IMFs \citep{Kroupa2001,2003PASP..115..763C} and tracked the mass of the most massive star in each synthetic cluster. The mean mass of the most massive star was 54$\pm$26 M$_\odot$ (Kroupa) and 22$\pm$11 M$_\odot$ (Chabrier). Assuming there have been no supernovae in NGC 1980 yet, and that iota Ori is the most massive star in the cluster, then, to first approximation, a population of about 2000 sources seems a plausible estimate of the size of the NGC 1980 population.
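A minimal version of this Monte Carlo consistency test can be sketched as follows. We use a simplified two-segment Kroupa-like IMF ($dN/dm \propto m^{-1.3}$ for $0.08$--$0.5$ M$_\odot$ and $m^{-2.3}$ above, truncated at 120 M$_\odot$); the segment boundaries, upper cutoff, and the reduced number of trial clusters (200 rather than 200000) are illustrative choices, not the exact setup used in the text.

```python
import random

# Simplified Kroupa-like broken power-law IMF: (m_lo, m_hi, slope a)
# with dN/dm ~ m**-a on each segment (illustrative choices).
SEGMENTS = [(0.08, 0.5, 1.3), (0.5, 120.0, 2.3)]
# Continuity constants so the piecewise PDF is continuous at 0.5 Msun.
COEFFS = [1.0, 0.5 ** (2.3 - 1.3)]

def _number_weight(lo, hi, a):
    """Integral of m**-a from lo to hi (unnormalized star counts)."""
    return (hi ** (1.0 - a) - lo ** (1.0 - a)) / (1.0 - a)

WEIGHTS = [c * _number_weight(lo, hi, a)
           for c, (lo, hi, a) in zip(COEFFS, SEGMENTS)]

def sample_mass(rng):
    """Draw one stellar mass by inverse-transform sampling."""
    u = rng.random() * sum(WEIGHTS)
    for w, (lo, hi, a) in zip(WEIGHTS, SEGMENTS):
        if u <= w:
            frac = u / w  # CDF position within this segment
            x = lo ** (1.0 - a) + frac * (hi ** (1.0 - a) - lo ** (1.0 - a))
            return x ** (1.0 / (1.0 - a))
        u -= w
    return SEGMENTS[-1][1]

def max_mass_in_cluster(target_mass, rng):
    """Fill a synthetic cluster up to target_mass; return its most massive star."""
    total = mmax = 0.0
    while total < target_mass:
        m = sample_mass(rng)
        total += m
        mmax = max(mmax, m)
    return mmax

rng = random.Random(42)
maxima = [max_mass_in_cluster(1000.0, rng) for _ in range(200)]
mean_max = sum(maxima) / len(maxima)
```

With this simplified IMF the mean most-massive star of a 1000 M$_\odot$ cluster comes out at a few tens of M$_\odot$, in line with the 54$\pm$26 M$_\odot$ (Kroupa) figure quoted above, so a $\sim$25 M$_\odot$ iota Ori is unsurprising for a $\sim$2000-star cluster.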
\begin{figure}[!tbp]
\begin{center}
\includegraphics[width=\hsize,angle=0]{medsed}
\caption{Median SED of class~II sources in clusters of various evolutionary stages: NGC1333 (magenta, 1~Myr), Trapezium COUP sources (black, 1~Myr), Taurus (green, 1--3~Myr), NGC1980 members (red), $\lambda$ Ori (blue, 5--7~Myr) and $\eta-$Cha (orange, 5--10~Myr).}
\label{fig:medsed}
\end{center}
\end{figure}
At the moment we cannot derive a reliable cluster radial profile or a half-mass radius, or even be certain about the position of the center of the cluster, as our optical observations are incomplete in the vicinity of the early type stars of NGC 1980, and we only have a ``pie slice'' view of the radial extent of the cluster. This should be improved in follow-up work, in particular in combination with dedicated NIR observations, which are less sensitive to the large brightness contrasts between early and late type stars in this cluster.
\begin{figure*}[!tbp]
\begin{center}
\includegraphics[width=\hsize,angle=0]{contamination_o.pdf}
\caption{Matches between the foreground contamination and well known catalogs of the ONC. a) In green, the optical spectroscopic survey of \cite{2009ApJ...697.1103T}, with the matches with the foreground population marked in blue. b) In yellow, the well studied sample of \cite{Hillenbrand1997}, with the matches with the foreground population marked in blue. c) In red, the sample of X-ray sources from the COUP project \citep{2002ApJ...574..258F}, with the matches with the foreground population marked in blue. As in Figure~\ref{fig:density} the unshaded area represents the region of the cloud where A$_V \geq 5$ mag, on which the selection of foreground sources was performed. }
\label{fig:contamination}
\end{center}
\end{figure*}
\subsubsection{On the origin of NGC 1980 and impact on the Orion A cloud}
\label{sec:possible-impact-ngc}
We found in Section~\ref{sec:veloc-disp} that the radial velocity of NGC 1980 is indistinguishable from, or differs by at most a few km/s from, the radial velocity of the embedded Trapezium population. This surprising result implies that the radial velocity of NGC 1980 is essentially the same as the velocity of the gas in the Orion A cloud, since the ONC population has the same radial velocity as the cloud \citep{2009ApJ...697.1103T}. This strongly suggests that NGC 1980 is somehow connected to the Orion A cloud, or better, that the cloud that formed NGC 1980 was physically related to the current Orion A cloud. One would then not expect the distance to NGC 1980 to be substantially different from the current distance estimate to the ONC, and a fit of the ZAMS to the optical data presented in this paper is indeed consistent with a distance of 400 pc.
Despite its relatively older age, lack of an obvious H$_{\rm{II}}$ region, and lack of measurable dust extinction, NGC 1980 moves away from Earth at the same velocity as the large Orion A cloud on which it is seen in projection. Because of their likely proximity, one wonders about the effects of the ionizing stars of NGC 1980 on the Orion A cloud, or on what the cloud was like about 4--5 Myr ago, in particular about a possible acceleration and compression of the cloud by the UV radiation from these stars. How important was/is this process in this region? Could the formation of the ONC have been triggered by its older sibling, as suggested in \citet{Bally2008}? At first look our results would argue that the impact must have been minimal, since NGC 1980 has essentially the same radial velocity as the Orion A cloud, but the work of \citet{1954BAN....12..177O,1954BAN....12..187K,Oort:1955cn} suggests that final speeds between the ionizing star and the cloud would be of the order of a few km/s, which cannot be ruled out by the current accuracy of the data. While our results do not give final evidence in support of the tantalizing suggestion that the formation of the ONC could have been triggered by NGC 1980, they are not inconsistent with it either. A new dedicated radial velocity survey of the region, together with a sensitive proper motion survey, is needed to understand the interplay between these two massive clusters. This configuration (an embedded cluster in the vicinity of a $\sim 5$ Myr cluster) is unlikely to be unique in massive star forming clouds, but it will be best addressed in the nearest example.
\subsection{Contamination of ONC catalogs}
\label{sec:cont-catalogs}
We have shown above that there is a rich and distinct foreground population of stars, likely associated with the young ($\sim 5$ Myr), poorly studied but massive NGC 1980 cluster, that is not directly associated with the ongoing star formation in the ONC. This finding raises concerns about the contamination of currently available observables for this important region, and future studies should take this foreground population into account. But how large is this contamination? There are two well known ONC catalogs used in the literature, namely the \citet{Hillenbrand1997} catalog and the catalogs of \citet{2009ApJS..183..261D,DaRio:2010cz,2012ApJ...748...14D}, covering a roughly square area of about $0.5^\circ\times0.5^\circ$ ($\sim 3.5 \times 3.5$ pc) centered on the Trapezium cluster. The \citet{2012ApJ...748...14D} catalog supersedes all previous ones but, being the most recent, is also the least used in the community. On the other hand, the \citet{Hillenbrand1997} catalog has been used extensively in the literature and has spawned a large number of studies on the star formation properties of the ONC region. We estimate here the likely foreground contamination fraction for the \citet{Hillenbrand1997} catalog as it is the most used one, but also because it is likely to be the least contaminated, since the Da Rio catalogs cover a slightly larger area of the sky towards NGC 1980.
To estimate the probable contamination fraction of \citet{Hillenbrand1997} we matched the foreground population with this catalog for stars falling within the A$_V \geq 5$ mag region where the foreground was selected (see Figure~\ref{fig:density}) and where I-band $<$ 16 mag. The last constraint accounts for the fact that the \citet{Hillenbrand1997} sample is not uniformly deep (it reaches about 2 magnitudes deeper around the Trapezium cluster), and that the selection of foreground stars, made at g-band, seems complete to about I-band $\sim$ 16 mag (after transformation of the SDSS photometry into the Johnson system \citep{2007AJ....134..973I}). We find that 11\% of the sources in the \citet{Hillenbrand1997} catalog have a match in the foreground sample (8\% if we remove the constraint on the I-band brightness). If one sees the Trapezium cluster as a component of the ONC, and not as the only component, and removes it from consideration, then the fraction of foreground contaminants in the ONC rises to 32\%. For this estimate the area on the sky covered by the Trapezium cluster is taken as an ellipse with a$=7.5^\prime$ and b$=3.8^\prime$, with a position angle of $-10^\circ$, similar to the definition in \citet{Hillenbrand1998}. One can also estimate the possible contamination of the entire \citet{Hillenbrand1997} catalog by applying equation~\eqref{eq:1} to the full catalog, which then gives a contamination fraction of 20\%, or 63\% when the Trapezium is removed from consideration. Note that all these estimates assume that the fraction of ONC stars without measurable extinction is negligible, which is likely given the distribution of foreground stars in Figure~\ref{fig:density}, but will need to be investigated further in future work.
Even without removing the Trapezium cluster from consideration, contamination fractions of the order of 10--20\% are significant and will necessarily lead to systematic errors in the basic derived physical quantities for this star formation benchmark. Still, these are necessarily lower limits to the true contamination fraction of the ONC sample for at least two reasons: 1) our g-band MEGACAM survey is not as sensitive in regions of high nebular brightness, especially around the Trapezium, and 2) we are not sensitive, by design, to background sources. While it is normally argued that the high background extinction behind the Trapezium blocks most background stars, this is only valid for the inner regions of the Trapezium cluster ($\sim25^{\prime \,2}$), and not for the entire $\sim700^{\prime \,2}$ Orion Nebula \citep[e.g.][]{1999ApJ...510L..49J,2012MNRAS.422..521B}. So background contamination is variable across the ONC and expected for any optical or infrared survey of this region. As for the contamination of the ONC region by NGC 1980, it is a function of the position in the Nebula, having a minimum at the center of the Trapezium, where the Trapezium cluster stellar density is highest, and increasing gradually towards the South as one approaches the core of NGC 1980 (see Figure~\ref{fig:density}).
\subsubsection{Unrelated Galactic field foreground population}
\label{sec:unrel-galact-field}
Most of the identified foreground population (624 sources) is likely to belong to NGC 1980, as seen from the symmetric and peaked spatial distribution in Figure~\ref{fig:density}, but some fraction of it must consist of the Galactic field population between Earth and the Orion A cloud. A first and simple estimate of the size of this population can be made by correlating the foreground population with the sample of \citet{2009ApJ...697.1103T}, for which good radial velocity measurements exist (see Figure~\ref{fig:contamination}). The distribution of radial velocities of the 188 sources in common reveals a Gaussian-like distribution centered at $\sim 26.1$ km/s, with gaps containing no sources roughly between $-20$ and $0$ km/s and between $40$ and $60$ km/s, and with 3 sources below $-20$ km/s and 9 above 60 km/s (i.e., 12 potential outliers). Not surprisingly, if we sigma-clip the entire distribution at 3$\sigma$ we find 11 outliers. If we remove from the distribution the 12 potential outliers described above and then sigma-clip the rest of the distribution we find 5 more outliers, but this time in the wings of the distribution. This suggests that about 6--9\% of the foreground sources identified in this work are likely field sources unrelated to NGC 1980. This estimate of the Galactic field foreground contamination for Orion A is in rough agreement with what would be expected from the Besan\c{c}on stellar population model \citep{Robin03} for the depth of our MEGACAM survey.
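The iterative $3\sigma$-clipping used here is a standard procedure: compute the mean and standard deviation, discard points more than $3\sigma$ away, and repeat until no point is flagged. A minimal sketch follows; the radial velocities are synthetic toy values (a cluster near 26 km/s plus two field interlopers), not the Tobin et al. measurements.

```python
import statistics

def sigma_clip(values, nsigma=3.0, max_iter=10):
    """Iteratively discard points more than nsigma standard deviations
    from the mean; return (kept_values, outliers)."""
    kept, outliers = list(values), []
    for _ in range(max_iter):
        mu = statistics.mean(kept)
        sd = statistics.pstdev(kept)
        flagged = [v for v in kept if abs(v - mu) > nsigma * sd]
        if not flagged:
            break
        outliers.extend(flagged)
        kept = [v for v in kept if abs(v - mu) <= nsigma * sd]
    return kept, outliers

# Toy distribution: 20 "cluster" velocities near 26 km/s plus two
# field interlopers at 90 and -40 km/s.
velocities = [24.5, 25.1, 25.6, 25.9, 26.0, 26.2, 26.4, 26.7, 27.1, 27.5,
              24.8, 25.3, 25.7, 26.1, 26.3, 26.5, 26.9, 27.2, 24.2, 27.8,
              90.0, -40.0]
kept, outliers = sigma_clip(velocities)
```

On this toy sample the two interlopers are removed on the first pass and the clipped mean lands near 26 km/s, mirroring the behavior described in the text.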
\section{Conclusions}
We have made a link between the foreground population towards the well known star formation benchmark ONC region and the stellar population of the poorly studied NGC 1980 cluster (or iota Ori cluster). Not only did we detect the presence of a well populated main sequence (from B-stars to M-stars), but the foreground sources also have: 1) a well defined spatial distribution peaking near iota Ori, 2) a fainter luminosity function when compared to the extincted young population embedded inside the cloud, and 3) a low velocity dispersion, typical of that of other young clusters.
Unlike the ONC, NGC 1980 is a relatively older cluster ($4$--$5$ Myr), lacks an obvious H$_{\rm{II}}$ region, and is comparatively free of dust extinction. Surprisingly, the radial velocity of NGC 1980 is currently indistinguishable from the radial velocity of the ONC embedded population or the radial velocity of the Orion A cloud, suggesting that both clusters are genetically related, and at about the same distance from Earth.
A general concern that this study raises is the risk of population mixing in star formation studies. It is unlikely that the ONC is atypical in this respect, and a dedicated multi-wavelength study to disentangle the different populations, together with a sensitive proper motion survey of the region, is urgently needed. The ONC is still the closest massive star formation region to Earth and, albeit more complicated than first assumed, it is still the one offering the best detailed view of the formation of massive stars and clusters.
The main conclusions of this work are:
\begin{itemize}
\item We make use of the optical effects of dust extinction to block the background stellar population to the Orion A cloud, and find that there is a rich foreground stellar population in front of the cloud, in particular the ONC. This population contains a well populated main-sequence, from B-stars to M-stars.
\item The spatial distribution of the foreground population is not random but clusters strongly around NGC 1980 (iota Ori); it has a fainter luminosity function and a different velocity dispersion from the reddened population inside the Orion A cloud. This foreground population is, in all likelihood, the extended stellar content of the poorly studied NGC 1980 cluster.
\item We estimate the number of members of NGC 1980 to be of the order of 2000, which makes it one of the most massive clusters in the entire Orion complex, and estimate its age to be $\sim 4-5$ Myr by making a comparative study of median spectral energy distributions among known young populations and constraints from the age of post main sequence star iota Ori.
\item This newly found population overlaps significantly with what is currently assumed to be the ONC and the L1641N populations, and can make up more than 10--20\% of what is currently taken as the ONC population (30--60\% if the Trapezium cluster is removed from consideration).
\item Our results suggest that what is normally taken in the literature as the ONC should be seen as a mix of several unrelated populations: 1) the youngest population, including the Trapezium cluster and ongoing star formation in the dense gas inside the nebula, 2) the young foreground population, dominated by the NGC 1980 cluster, and 3) the poorly constrained population of foreground and background Galactic field stars.
\item We re-determine the mean radial velocities for the Trapezium and NGC 1980 clusters to be $25.4\pm3.0$ km/s and $24.4\pm1.5$ km/s respectively, or indistinguishable within the errors, and similar to the radial velocity of the Orion A cloud, suggestive of a genetic connection between the two.
\item We identify a hitherto unrecognized group of about 50 YSOs West of L1641N (L1654W) that we speculate is either a foreground group ramming into the Orion A cloud, or a slightly older sibling of L1641N, leaving the cloud.
\item This work supports a scenario where the ONC and L1641N are not directly associated with NGC 1980, i.e., they are not the same population emerging from its parental cloud but are instead distinct overlapping populations. This calls for a revision of most of the observables in the benchmark ONC region (e.g., ages, age spread, mass function, disk frequency, etc.).
\end{itemize}
\acknowledgements We thank the referee, John Bally, for comments that improved the manuscript. We also thank John Tobin, Nicola Da Rio, and Lynne Hillenbrand for comments and clarifications that improved the presentation of results. H. Bouy is funded by the Ram\'on y Cajal fellowship program number RYC-2009-04497. We acknowledge support from the Faculty of the European Space Astronomy Centre (ESAC). This publication is supported by the Austrian Science Fund (FWF). We thank Calar Alto Observatory for allocation of director's discretionary time to this programme.
Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/.
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, University of Cambridge, University of Florida, the French Participation Group, the German Participation Group, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration.
Based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA.
This research used the facilities of the Canadian Astronomy Data Centre operated by the National Research Council of Canada with the support of the Canadian Space Agency.
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA.
\bibliographystyle{aa}
The study of D-branes in string theory has undergone a number of refinements, requiring the introduction of increasingly mathematical tools. They were first found to admit an interpretation in topological K-theory in \cite{1998}, and then as an object of the derived category in \cite{kontsevich1994homological} together with a BPS condition refined as $\Pi$-stability in \cite{2001} in the topological sector. Since its inception as a precise mathematical structure in \cite{2002math.....12237B}, the theory of Bridgeland stability conditions has occupied a unique place in mathematical physics; it has become a foundational tool in algebraic geometry for the construction and study of moduli spaces of sheaves and their birational geometry together with a wealth of applications to classical problems. For a survey of D-branes and $\Pi$-stability in the derived category, we refer the reader to the standard surveys~\cite{1999,2005} and for applications of Bridgeland stability to algebraic geometry, we refer to the surveys~\cite{macri2019lecturesbs,macri2019lectures,bayer2022unreasonable}.
A somewhat less studied line of inquiry is the investigation of general properties enjoyed by triangulated categories which admit a Bridgeland stability condition. Results in this direction would complement a growing, pervasive theme in algebraic geometry, namely, that many higher dimensional geometric phenomena are often governed by lower dimensional non-commutative smooth projective varieties together with their moduli spaces of Bridgeland stable objects.
This leads us to the main subject of this article: Can we classify non-commutative smooth projective varieties and to what extent does the existence of a stability condition constrain their structure?
Given an arbitrary triangulated category equipped with a stability condition, either a systematic classification or even the deduction of general properties is intractable without further assumptions.
As an example of the technical complexity involved, there are many possible definitions for even the dimension of a triangulated category, as pointed out in~\cite{2019arXiv190109461E}. Thus, to facilitate our analysis, we will focus on the notion of a global dimension of a stability condition~\cite{qiu2018global}, which roughly bounds the possible difference between phases of semistable objects related by a nonzero morphism.
A first step was undertaken in \cite{Kikuta_2021}, which classified triangulated categories admitting a stability condition $\sigma$ of global dimension $gldim(\sigma) < 1$ as $D^b(Q)$, with $Q$ a Dynkin quiver, and also characterized the global dimension of stability conditions on $D^b(C)$, with $C$ a smooth projective curve. In this article, we will study admissible subcategories $\mathcal{D} \xhookrightarrow{} D^b(X)$ with a stability condition $\sigma$ of global dimension $gldim(\sigma) < 2$. One of the fundamental implications for categories $\mathcal{D}$ admitting a stability condition is the existence of well-behaved moduli spaces of semistable objects. By leveraging the fundamental results developed and discussed in \cite{macri2019lectures,2021}, we will identify a suitable curve $C$ in the moduli space of semistable objects in $\mathcal{D}$, under appropriate assumptions, and construct an equivalence of categories $\mathcal{D} \simeq D^b(C)$. As a consequence, we deduce the non-existence of such categories with global dimension $gldim(\sigma) < \frac{6}{5}$, deduce a converse to \cite[Theorem 5.16]{Kikuta_2021}, and partially extend \cite[Theorem 5.12]{Kikuta_2021}.
\subsection{Summary of results} Throughout this paper, we will work over a fixed field $k$, algebraically closed and of characteristic $0$. Our main interest will be in the classification of $k$-linear triangulated categories $\mathcal{D}$ that are {\em connected}, i.e. do not admit completely orthogonal decompositions, and that are {\em geometric non-commutative schemes}~\cite{MR3545926}, i.e. admit an admissible embedding $\mathcal{D} \xhookrightarrow{} D^b(X)$ with $X$ a smooth, projective variety. Our techniques rely heavily on the theory of Bridgeland stability conditions, and we denote by $Stab(\mathcal{D})$ the space of stability conditions. Moreover, we denote by $Stab_{\mathcal{N},d}(\mathcal{D})$ the subspace of stability conditions $\sigma = (\mathcal{A},Z)$ that are {\em numerical}, i.e. $Z$ factors through the numerical Grothendieck group, and such that the image $Im(Z) \subset \mathbb{C}$ is discrete.
In the following, let $\mathcal{D}$ be a connected, geometric non-commutative scheme. Our main theorem asserts that, under the assumption that the infimal global dimension is less than $\frac{6}{5}$, we can classify all such categories.
\begin{reptheorem}{thm:mainthm}
Assume that $\mathcal{D}$ has no exceptional objects and $\inf\limits_{\sigma \in Stab_{\mathcal{N},d}(\mathcal{D})}gldim(\sigma) < \frac{6}{5}$.
Then there exists an equivalence $\mathcal{D} \simeq D^b(C)$ with $C$ a smooth projective curve of genus $g \geq 1$.
\end{reptheorem}
As a consequence of \cite[Theorem 5.16]{Kikuta_2021}, which deduced the equality $\inf\limits_{\sigma \in Stab_{\mathcal{N},d}(D^b(C))}gldim(\sigma) = 1$ for all smooth projective curves $C$ of genus $g \geq 1$, we obtain the following corollary.
\begin{repcorollary}{cor:nonexistence}
There exists no such $\mathcal{D}$ with no exceptional objects and $\inf\limits_{\sigma\in Stab_{\mathcal{N},d}(\mathcal{D})}gldim(\sigma) \in( 1, \frac{6}{5})$.
\end{repcorollary}
We further build on Theorem~\ref{thm:mainthm} in dimension $1$ to obtain a characterization in terms of the upper Serre dimension, which roughly quantifies the growth of phases of objects under iterated applications of the Serre functor. The following serves as a converse to \cite[Theorem 5.16]{Kikuta_2021}.
\begin{reptheorem}{thm:infdim1}
Assume that $\mathcal{D}$ has no exceptional objects. Assume that there exists a numerical stability condition $\sigma \in Stab(\mathcal{D})$ such that $gldim(\sigma) < 2$ and $Im(Z) \subset \mathbb{C}$ is discrete. Then the following are equivalent:
\begin{enumerate}
\item
$\inf\limits_{\sigma \in Stab(\mathcal{D})}gldim(\sigma) = 1$
\item
$\mathcal{D} \simeq D^b(C)$ with $C$ a smooth projective curve of genus $g \geq 1$.
\setcounter{serreinv}{\value{enumi}}
\end{enumerate}
If in addition, $\mathcal{D}$ admits a Serre-invariant Bridgeland stability condition, then the above is equivalent to the following.
\begin{enumerate}
\setcounter{enumi}{\value{serreinv}}
\item
$\overline{Sdim}(\mathcal{D}) = 1$
\end{enumerate}
\end{reptheorem}
Finally, we specialize to cases with a stability condition $\sigma$ such that $gldim(\sigma) = 1$ and we obtain a characterization of the semi-orthogonal decomposition in the presence of exceptional objects.
\begin{repcorollary}{cor:gldim1}
Assume that there exists a numerical Bridgeland stability condition $\sigma \in Stab(\mathcal{D})$ such that $Im(Z) \subset \mathbb{C}$ is discrete and such that $gldim(\sigma) = 1$. Then $\mathcal{D} = \langle \mathcal{C}, E_1 ,\ldots E_n \rangle$ admits a semi-orthogonal decomposition for some integer $n$ with $E_i \in \mathcal{D}$ exceptional objects and $\mathcal{C}$ is either zero or equivalent to $D^b(E)$, with $E$ a smooth elliptic curve.
\end{repcorollary}
It would be interesting to investigate the sharpness of our results in Theorem~\ref{thm:mainthm}, and whether the upper bound $\frac{6}{5}$ can be raised to $\frac{4}{3}$. Indeed, the Kuznetsov component $Ku(X) \subset D^b(X)$, with $X$ the cubic surface, is fractional Calabi-Yau of dimension $\frac{4}{3}$. Though it is generated by a full exceptional collection, we expect that our results are insensitive to the base field and that the category $Ku(X)$, for $X$ a Picard rank $1$ cubic surface over $\mathbb{Q}$, is indecomposable and in fact satisfies the equality $\inf\limits_{\sigma \in Stab_{\mathcal{N},d}(Ku(X))}gldim(\sigma) =\frac{4}{3}$.\footnote{We thank Xiaolei Zhao for conversations regarding this example.}
\subsection{Related works}
We note that the papers \cite{MR2427460} and \cite{ctx31462736420006531} classified abelian categories of homological dimension $1$, without exceptional objects and admitting Serre duality, up to derived equivalence, by relying on the general results in~\cite{MR1887637}. In particular, this implies our reconstruction results in the case of $1$-Calabi-Yau categories. On the other hand, our analysis using moduli space techniques is more direct and is ultimately necessary in higher dimensional cases, where a heart of a bounded t-structure of homological dimension $1$ satisfying Serre duality does not a priori exist.
\subsection{Organization}
The organization of this paper is as follows. In Section~\ref{generalprop}, we formulate basic results for $\sigma$-semistable objects in categories satisfying $gldim(\sigma) < 2$. In Section~\ref{sec:1spherical}, we formulate a stronger condition which guarantees the existence of a $1$-spherical object. In Section~\ref{1cyreconstruction}, we prove a geometric reconstruction result using the moduli space of $\sigma$-semistable objects. Finally in Section~\ref{sec:main}, we prove our main theorem and deduce a number of implications in dimension $1$.
\subsection*{Acknowledgements.}
I would like to thank my advisor, Emanuele Macr\`i, for extensive discussions throughout the years. I am grateful to the University of Paris-Saclay for hospitality during the completion of this work. This work was partially supported by the NSF Graduate Research Fellowship under grant DGE-1451070 and by the ERC Synergy Grant ERC-2020-SyG-854361-HyperK.
\section{Stable objects on non-commutative curves}\label{generalprop}
Throughout this paper, the theory of Bridgeland stability conditions will serve as a fundamental tool. We will abide by the standard terminology: given a stability condition $\sigma = (\mathcal{A},Z)$ on a triangulated category $\mathcal{T}$, $\mathcal{A}$ denotes the corresponding heart of bounded t-structure, $Z \colon K_0(\mathcal{A}) \rightarrow \mathbb{C}$ denotes the central charge, and $P_\sigma$ denotes the associated slicing. Moreover, given an object $E \in \mathcal{T}$, $\varphi_\sigma^+(E)$ and $\varphi_\sigma^-(E)$ denote the maximal and minimal phases respectively, i.e. the phases of the maximal and minimal $\sigma$-semistable objects in the unique Harder-Narasimhan filtration of $E$.
In this section, we will prove several general results for categories behaving similarly to $D^b(C)$, with $C$ a smooth projective curve of genus $g \geq 1$. We first recall the following definition.
\begin{defn}
Let $\mathcal{T}$ be a triangulated category with a Bridgeland stability condition $\sigma = (\mathcal{A},Z)$. The {\em global dimension} of $\sigma$ is defined as follows.
\[
gldim(\sigma) \coloneqq \sup\{\varphi_2 -\varphi_1 \vert Hom_\mathcal{T}(E_1,E_2) \neq 0 \text{ for } E_i \in \mathcal{P}_\sigma(\varphi_i) \} \in [0,+\infty]
\]
\end{defn}
In order to parallel the setting of higher genus curves, we also introduce the following definition.
\begin{defn}\label{defn:noncommcurve}
A triangulated category $\mathcal{T}$ is
\begin{enumerate}
\item
{\em connected} if it admits no non-trivial completely orthogonal decomposition.
\item
{\em non-rational} if there exist no exceptional objects in $\mathcal{T}$.
\end{enumerate}
\end{defn}
In the following, we will always assume that $\mathcal{T}$ is a non-rational triangulated category with Serre functor $S$ and of finite type over $k$, i.e. for any objects $E,F$, the vector space $\bigoplus\limits_i Hom(E,F[i])$ is finite dimensional. In addition, we assume that $\mathcal{T}$ admits a stability condition $\sigma = (\mathcal{A},Z)$ of global dimension $gldim(\sigma) < 2$.
Recall that there exists a {\em Mukai vector} $v \colon Ob(\mathcal{T}) \rightarrow K_{num}(\mathcal{T})$ sending an object $E \in \mathcal{T}$ to its class in the numerical Grothendieck group where $K_{num}(\mathcal{T}) = K_0(\mathcal{T}) / ker(\chi)$ with $$\chi \coloneqq \sum (-1)^i hom^i(-,-)\colon K_0(\mathcal{T}) \times K_0(\mathcal{T})\rightarrow \mathbb{Z}$$ the Euler pairing. For convenience of notation, for any objects $E, F \in \mathcal{T}$, we define the pairing $(v(E), v(F))\coloneqq -\chi(E,F)$. We first observe the following.
\begin{lemma}\label{lem:semistable}
Let $E \in \mathcal{T}$ be a $\sigma$-semistable object. Then the following holds.
\begin{enumerate}
\item
We have the inequality $v(E)^2 \geq 0$.
\item
If $v(E)^2 = 0$, then every $\sigma$-stable factor $E_i$ satisfies $v(E_i)^2 = 0$.
\item
$Hom(E,SE[-1]) \neq 0$.
\end{enumerate}
\end{lemma}
\begin{proof}
For the first claim, by definition and the assumption $gldim(\sigma) < 2$, we have the equalities
\[
v(E)^2 = -\chi(E,E) = -hom(E,E) + hom(E, E\left[1\right])
\]
If $E$ is $\sigma$-stable, then $Hom(E,E) = k$, and by the assumption of nonexistence of exceptional objects, we must have $Hom(E,E[1]) \neq 0$, so the claim follows. Otherwise, take a Jordan-H\"{o}lder filtration of $E$ with $\sigma$-stable factors $E_i$ of the same phase. Then we have $v(E) = \sum_i v(E_i)$ and in particular, the equality
\begin{equation}\label{eq:vsquare}
v(E)^2 = \sum_i v(E_i)^2 + \sum_{i\neq j} (v(E_i),v(E_j))
\end{equation}
It suffices to prove that $\sum_{i\neq j} (v(E_i),v(E_j)) \geq 0$. By definition, recall that
\[(v(E_i),v(E_j)) = -hom(E_i,E_j) + hom(E_i,E_j\left[1\right])
\]
If $E_i \cong E_j$, then $(v(E_i),v(E_j)) = v(E_i)^2 \geq 0$ by the stable case above. If they are not isomorphic, then $Hom(E_i,E_j) = 0$, as $E_i$ and $E_j$ are $\sigma$-stable of the same phase, so $(v(E_i),v(E_j)) = hom^1(E_i,E_j) \geq 0$ and we conclude.
For the second claim, assume that $v(E)^2 = 0$. Then by equation~\eqref{eq:vsquare}, together with the preceding paragraph, every term in the sum on the right-hand side of equation~\eqref{eq:vsquare} is non-negative and therefore must vanish. In particular, we must have $v(E_i)^2 = 0$ for all $i$ and we conclude.
For the third claim, assume on the contrary that $Hom(E,SE[-1]) = 0$. Then by Serre duality, we have $Hom(E,E[1]) = 0$. But then $v(E)^2 = -hom(E,E) < 0$, contradicting the first claim.
\end{proof}
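The sign analysis in the proof above can be sanity-checked numerically. The snippet below is only an illustration, not part of the argument: it encodes $v(E)^2 = -\chi(E,E) = -hom(E,E) + hom^1(E,E)$ and tests it on hypothetical hom-dimensions (the values are modeled on stable objects on an elliptic curve, where $hom = hom^1 = 1$).

```python
# Illustration only: v(E)^2 = -chi(E,E) = -hom(E,E) + hom^1(E,E)
# when gldim(sigma) < 2, since hom^i then vanishes for i != 0, 1.

def v_squared(hom0, hom1):
    """Square of the Mukai vector in terms of hom-dimensions."""
    return -hom0 + hom1

# A stable, non-exceptional object has hom = 1 and hom^1 >= 1, so v^2 >= 0;
# the hypothetical values below are modeled on an elliptic curve, where
# every stable object satisfies hom = hom^1 = 1 and hence v^2 = 0.
assert v_squared(1, 1) == 0
assert all(v_squared(1, h1) >= 0 for h1 in range(1, 10))
```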
Next, we will reformulate the well-known weak Mukai Lemma in our setting using the assumption on the global dimension.
\begin{lemma}[Weak Mukai Lemma]\label{lem:wml}
Let $A \rightarrow E \rightarrow B$ be an exact triangle in $\mathcal{T}$ satisfying the inequality $\varphi_\sigma^-(A) > \varphi_\sigma^+(B)$.
Then we have the following inequality:
\[
hom^1(A,A) + hom^1(B,B) \leq hom^1(E,E)
\]
\end{lemma}
\begin{proof}
It suffices to prove the two vanishings $Hom(B, A[2]) = 0, Hom(A,B) = 0$. The conclusion then follows as in the proof of \cite[Lemma 2.5]{2017}. To see the first claim, we observe the following:
\[
\varphi_\sigma^-(A[2]) = \varphi_\sigma^-(A) + 2 > \varphi_\sigma^+(B) +2
\]
where the first equality follows by uniqueness of the Harder-Narasimhan filtration of $A$ into $\sigma$-semistable factors and the second inequality follows by assumption. In particular, this implies that $\varphi_\sigma(A_i[2]) > \varphi_\sigma(B_j) +2$ for any $\sigma$-semistable factor $A_i[2], B_j$ of $A[2],B$ respectively. But this implies that $Hom(B_j, A_i[2]) = 0$ for all $i, j$ by the assumption that $gldim(\sigma) < 2$. In particular, this implies that $Hom(B,A[2]) = 0$. The second claim follows from an analogous argument using the inequality $\varphi_\sigma^-(A) > \varphi_\sigma^+(B)$ and we conclude.
\end{proof}
We conclude this section by studying objects $E$ with minimal $hom^1(E,E)$, obtaining a certain converse statement to Lemma~\ref{lem:semistable}.
\begin{lemma}\label{lem:1cy}
Let $d$ be the minimum integer such that there exists an object $D \in \mathcal{T}$ with $d = hom^1(D,D)$.
\begin{enumerate}
\item
The integer $d$ is strictly positive.
\item
If $d \geq 2$, then any object $E$ such that $hom^1(E,E) \leq 2d -2$ is $\sigma$-stable.
\item
If $d = 1$, then any object $E$ such that $hom^1(E,E) = 1$ is $\sigma$-semistable and $v(E)^2 = 0$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1): Assume on the contrary that $d = 0$. Let $E$ be an object such that $Hom^1(E,E) = 0$. By Lemma~\ref{lem:wml}, we may assume that $E$ is $\sigma$-semistable. Indeed if not, then taking the Harder-Narasimhan filtration, Lemma~\ref{lem:wml} implies that every $\sigma$-semistable factor $E_i$ must satisfy $Hom^1(E_i,E_i) = 0$. On the other hand, this contradicts Lemma~\ref{lem:semistable}(1).
(2): We first prove that if $d > 0$ and $E$ satisfies $hom^1(E,E) \leq 2d-1$, then $E$ must be $\sigma$-semistable. Assume not. Then the Harder-Narasimhan filtration yields $n$ $\sigma$-semistable factors $E_i$. In particular, we have the inequalities
\[
nd \leq \sum\limits_{i = 1}^nhom^1(E_i,E_i) \leq hom^1(E,E) \leq 2d -1
\]
where the first follows from the minimality of $d$, the second follows from Lemma~\ref{lem:wml}, and the third follows by assumption. Clearly, we must have $n = 1$.
We now prove that if $d \geq 2$ and $E$ satisfies $hom^1(E,E) \leq 2d-2$, then $E$ is also $\sigma$-stable. Taking a Jordan-H\"{o}lder filtration into $m$ $\sigma$-stable factors $F_i$ of the same phase, we obtain the equality
\begin{equation*}
v(E)^2 = \sum_i v(F_i)^2 + \sum_{i\neq j} (v(F_i),v(F_j))
\end{equation*}
As in the proof of the first paragraph of Lemma~\ref{lem:semistable}, we must have $ (v(F_i),v(F_j)) \geq 0$ for all non-equal pairs $i,j$. We obtain the inequalities
\[
md - m \leq \sum\limits_{i = 1}^m hom^1(F_i,F_i) -m = \sum\limits_{i=1}^m v(F_i)^2 \leq v(E)^2 \leq (2d-2) - hom(E,E) \leq 2d-3
\]
where the first follows from the minimality of $d$, the second holds since each $F_i$ is $\sigma$-stable (so $hom(F_i,F_i) = 1$), the third follows from the preceding paragraph, the fourth follows from the assumption $hom^1(E,E) \leq 2d-2$, and the fifth follows from the inequality $hom(E,E) \geq 1$. On the other hand, if $m \geq 2$, then $md - m \geq 2d-2 > 2d-3$, a contradiction; this again implies $m=1$, and we conclude.
(3): The fact that $E$ is $\sigma$-semistable follows from the first paragraph of the proof of part $(2)$. The claim that $v(E)^2 = 0$ follows from the fact that $hom(E,E) \geq 1$ and Lemma~\ref{lem:semistable}(1).
\end{proof}
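The two counting arguments in the proof reduce to elementary arithmetic, which can be checked exhaustively for small values; the following sketch is an illustration, not part of the proof.

```python
# The semistability step: n Harder-Narasimhan factors each contribute at
# least d to hom^1, so n*d <= 2d - 1 must fail for every n >= 2.
for d in range(1, 100):
    assert all(n * d > 2 * d - 1 for n in range(2, 10))

# The stability step (d >= 2): m stable factors give m*(d-1) <= 2d - 3,
# which must fail for every m >= 2.
for d in range(2, 100):
    assert all(m * (d - 1) > 2 * d - 3 for m in range(2, 10))
```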
\section{Global dimension bounds and $1$-spherical objects}\label{sec:1spherical}
Let $\mathcal{T}$ be a triangulated category of finite type over $k$ with a Serre functor $S$, admitting a Bridgeland stability condition, i.e. $Stab(\mathcal{T}) \neq \emptyset$. In the following, let $d$ be the minimum integer such that there exists $E \in \mathcal{T}$ with $d = hom^1(E,E)$. The goal of this section is to prove the following, largely following the methods exhibited in \cite[Section 3]{ctx31462736420006531}.
\begin{prop}\label{prop:1sphericalnew}
Assume $\mathcal{T}$ satisfies $d \geq 1$ and $\inf\limits_{\sigma \in Stab(\mathcal{T})} gldim(\sigma) < \frac{6}{5}$. Then $d = 1$ and every object $F \in \mathcal{T}$ satisfying $hom^1(F,F) = 1$ is $1$-spherical.
\end{prop}
\subsection{Bounds on extensions of objects with minimal $\bm{hom^1}$}\label{sec:minimal} In this subsection, we introduce a stronger condition than the existence of a stability condition $\sigma$ satisfying $gldim(\sigma) <2$, summarized in Definition~\ref{defn:noncurve}. Under this assumption, we deduce a number of restrictions on objects $E$ with minimal $hom^1(E,E)$. Most notably, we demonstrate in Lemma~\ref{lem:1spherical} that if $d = 1$, then any such object must be $1$-spherical. In Lemma~\ref{lem:restrictC}, we derive bounds on the object $Cone(E \rightarrow SE[-1])$ in the case $d \geq 2$.
We first introduce the following definition which refines the assumption on the existence of a stability condition $\sigma$ satisfying $gldim(\sigma) <2$.
\begin{defn}\label{defn:noncurve}
We will say that $\mathcal{T}$ is a {\em noncommutative curve} if there exists a stability condition $\sigma = (\mathcal{A} = \mathcal{P}(\varphi_0,\varphi_0+1],Z)$ such that
\begin{enumerate}
\item
There exists an object $E \in \mathcal{T}$ with $hom^1(E,E) = d$ such that $E, SE[-1] \in \mathcal{A}$.
\item
We have the inequalities: $$(\varphi_0 + 2) - \varphi(SE[-1]) > gldim(\sigma),\qquad (\varphi(E) + 2) - (\varphi_0 + 1) > gldim(\sigma)$$
\end{enumerate}
\end{defn}
\begin{rem}\label{rem:noncurvedim2}
If $\mathcal{T}$ is a noncommutative curve, then Definition~\ref{defn:noncurve}(2) immediately implies that the stability condition $\sigma$ satisfies $gldim(\sigma) < 2$. In addition, if $d \geq 1$, then $\mathcal{T}$ is also non-rational and the results in Section~\ref{generalprop} apply.
\end{rem}
In the following, $\mathcal{T}$ will always be a noncommutative curve such that $d \geq 1$. We fix an object $E \in \mathcal{T}$ and a stability condition $\sigma = (\mathcal{A} = \mathcal{P}(\varphi_0,\varphi_0+1],Z)$ satisfying Definition~\ref{defn:noncurve}. By Lemma~\ref{lem:1cy}, we note that $E$ and $SE[-1]$ are semistable with respect to any stability condition of global dimension less than $2$. Since $hom(E,SE[-1]) = hom^1(E,E) = d \geq 1$, we may fix a nonzero morphism $f \in Hom(E,SE[-1])$ and consider the exact triangle.
\[
\begin{tikzcd}
E \arrow[r, "f"] & SE[-1] \arrow[r] & C
\end{tikzcd}
\]
Taking cohomology with respect to $\mathcal{A}$, we have the following exact sequences
\begin{equation}\label{eq:exact1}
\begin{tikzcd}
0 \arrow[r] &\mathcal{H}^{-1}(C) \arrow[r] & E \arrow[r] & im(f) \arrow[r]& 0
\end{tikzcd}
\end{equation}
\begin{equation}\label{eq:exact2}
\begin{tikzcd}
0 \arrow[r] & im(f) \arrow[r]& SE[-1] \arrow[r] & \mathcal{H}^0(C) \arrow[r] &0
\end{tikzcd}
\end{equation}
We begin by recording the following general lemmas, which will be used extensively later in establishing Proposition~\ref{prop:1sphericalnew}. We first study some general bounds imposed on the phases of the object $C$.
\begin{lemma}\label{lem:boundC}
The objects $E, SE[-1]$ are $\sigma$-semistable. If $\mathcal{H}^{-1}(C), \mathcal{H}^0(C) \neq 0$, then we have the following chain of inequalities on the phases.
\begin{align*}
\varphi_0 &< \varphi^-(\mathcal{H}^{-1}(C)) \leq \varphi^+(\mathcal{H}^{-1}(C)) \leq \varphi(E) \\
&\leq \varphi^-(im(f)) \leq \varphi^+(im(f)) \leq \varphi(SE[-1])\\ &\leq \varphi^-(\mathcal{H}^0(C)) \leq \varphi^+(\mathcal{H}^0(C)) \leq \varphi_0+1
\end{align*}
\end{lemma}
\begin{proof}
The claim that $E,SE[-1]$ are $\sigma$-semistable follows directly from the proof of Lemma~\ref{lem:1cy}(2). The first and last inequalities follow from the assumption that all objects are contained in $\mathcal{A}$. The second, fifth, and eighth inequalities are clear by definition. The rest of the inequalities follow directly from \cite[Lemma 3.4]{2002math.....12237B}.
\end{proof}
\begin{lemma}\label{lem:simple}
$E$ is a simple object, i.e. $hom(E,E) = 1$.
\end{lemma}
\begin{proof}
If $d = 1$, then the statement follows from Lemma~\ref{lem:semistable}(1) and the fact that $E$ is $\sigma$-semistable by Lemma~\ref{lem:boundC}. If $d \geq 2$, then this follows directly from Lemma~\ref{lem:1cy}(2), as $E$ is $\sigma$-stable.
\end{proof}
Finally, we record the following calculation, which will be used extensively later and which highlights the utility of Definition~\ref{defn:noncurve}.
\begin{lemma}\label{lem:mainbound}
We have the following inequalities.
\begin{align*}
hom^1(\mathcal{H}^{-1}(C),\mathcal{H}^{-1}(C)) &\leq hom^1(E,\mathcal{H}^{-1}(C)) = d - 1 + hom(E, im(f)) - hom^1(E, im(f)) \\
hom^1(\mathcal{H}^0(C),\mathcal{H}^0(C)) &\leq hom^1(\mathcal{H}^0(C), SE[-1]) = d - 1 - hom(E, im(f)) + hom^1(E, im(f))
\end{align*}
\end{lemma}
\begin{proof}
We first establish the following equalities.
\begin{align*}
hom^1(E,\mathcal{H}^{-1}(C)) &= d - 1 + hom(E, im(f)) - hom^1(E, im(f)) \\
hom^1(\mathcal{H}^0(C), SE[-1]) &= d - 1 - hom(E, im(f)) + hom^1(E, im(f))
\end{align*}
We claim the vanishings $Hom(E, \mathcal{H}^{-1}(C))= 0$ and $Hom^2(E, \mathcal{H}^{-1}(C)) = 0$. The first equality then follows by applying $Hom(E,\cdot)$ to sequence~\eqref{eq:exact1}. For the first claim, assume $Hom(E, \mathcal{H}^{-1}(C)) \neq 0$. Then the composition with the injection $\mathcal{H}^{-1}(C) \hookrightarrow E$ yields a nonzero morphism which is not an isomorphism, contradicting the fact that $hom(E,E) = 1$ by Lemma~\ref{lem:simple}. The second claim follows from the fact that $\mathcal{H}^{-1}(C), SE[-1] \in \mathcal{A}$ and thus $Hom^2(E, \mathcal{H}^{-1}(C)) = Hom(\mathcal{H}^{-1}(C)[1], SE[-1]) = 0$. The second equality follows from an analogous argument by applying $Hom(\cdot, SE[-1])$ to sequence~\eqref{eq:exact2} and noting that $Hom(\mathcal{H}^0(C), SE[-1]) = 0$ and $Hom^2(\mathcal{H}^0(C), SE[-1]) = Hom(E[1], \mathcal{H}^0(C)) = 0$.
We now establish the inequalities
\begin{align*}
hom^1(\mathcal{H}^{-1}(C),\mathcal{H}^{-1}(C)) &\leq hom^1(E,\mathcal{H}^{-1}(C))\\
hom^1(\mathcal{H}^0(C),\mathcal{H}^0(C)) &\leq hom^1(\mathcal{H}^0(C),SE[-1])
\end{align*}
which, together with the above, implies the result. We claim the vanishings $Hom^2(im(f), \mathcal{H}^{-1}(C)) = 0$ and $Hom^2(\mathcal{H}^0(C) , im(f)) = 0$. The inequalities then follow by applying $Hom(\cdot,\mathcal{H}^{-1}(C))$ to sequence~\eqref{eq:exact1} and $Hom(\mathcal{H}^0(C),\cdot)$ to sequence~\eqref{eq:exact2}, respectively. The first claim follows from the inequalities
\[
(\varphi^-(\mathcal{H}^{-1}(C)) + 2) - \varphi^+(im(f)) \geq (\varphi_0 + 2) - \varphi(SE[-1]) > gldim(\sigma)
\]
by applying Definition~\ref{defn:noncurve}(2) and Lemma~\ref{lem:boundC}. Similarly, the second claim follows from the inequalities
\[
(\varphi^-(im(f)) + 2) - \varphi^+(\mathcal{H}^0(C)) \geq (\varphi(E) + 2) - (\varphi_0 + 1) > gldim(\sigma)
\]
\end{proof}
We now proceed to the critical Lemmas in establishing Proposition~\ref{prop:1sphericalnew}. We first demonstrate that it suffices to reduce to the case $d = 1$.
\begin{lemma}\label{lem:1spherical}
Assume that $d = 1$. Then $E$ is a $1$-spherical object.
\end{lemma}
\begin{proof}
From the injection $im(f) \hookrightarrow SE[-1]$, the surjection $E \twoheadrightarrow im(f)$, and the assumption $hom(E,SE[-1]) = hom^1(E,E) = 1$, we obtain the inequalities
\begin{align*}
1 &\leq hom(E, im(f)) \leq hom(E,SE[-1]) = 1 \\
1 &\leq hom(im(f),SE[-1]) \leq hom(E,SE[-1]) = 1
\end{align*}
and hence, we have $hom(E,im(f)) = hom^1(E,im(f)) = 1$.
We may assume that $f$ is not an isomorphism, otherwise the conclusion follows immediately. Then it must be the case that either $\mathcal{H}^{-1}(C)$ or $\mathcal{H}^0(C)$ is nonzero. Assume the former case. By Lemma~\ref{lem:mainbound}, we have the following inequality
\[hom^1(\mathcal{H}^{-1}(C),\mathcal{H}^{-1}(C)) \leq d - 1 + hom(E, im(f)) - hom^1(E, im(f)) = 0
\]
and so $hom^1(\mathcal{H}^{-1}(C),\mathcal{H}^{-1}(C)) = 0$, contradicting the assumption that $d = 1$ is minimal.
The proof for the case that $\mathcal{H}^0(C)$ is nonzero follows from an identical argument.
\end{proof}
To address the case $d \geq 2$, we prove that the object $C$ must satisfy stronger bounds.
\begin{lemma}\label{lem:restrictC}
Assume that $d \geq 2$. Then exactly one of the objects $\mathcal{H}^{-1}(C)$ or $\mathcal{H}^0(C)$ is nonzero. In addition, $C$ must be $\sigma$-stable.
\end{lemma}
\begin{proof}
If both objects were zero, then $f$ must be an isomorphism. In particular, we must have $hom^1(E,E) = hom(E,SE[-1]) = hom(E,E) = 1$ by Lemma~\ref{lem:simple}, contradicting the assumption that $d \geq 2$.
If both objects were non-zero, then by Lemma~\ref{lem:mainbound}, we have the following inequalities
\begin{align*}
hom^1(\mathcal{H}^{-1}(C), \mathcal{H}^{-1}(C)) &\leq d-1 + \chi(E,im(f))\\
hom^1(\mathcal{H}^{0}(C), \mathcal{H}^{0}(C)) &\leq d-1 - \chi(E,im(f))
\end{align*}
Summing these two inequalities gives $hom^1(\mathcal{H}^{-1}(C), \mathcal{H}^{-1}(C)) + hom^1(\mathcal{H}^{0}(C), \mathcal{H}^{0}(C)) \leq 2d-2$, while the minimality of $d$ would force the left-hand side to be at least $2d$, a contradiction. Therefore either $\mathcal{H}^{-1}(C)$ or $\mathcal{H}^0(C)$ must be trivial.
We consider the case where $\mathcal{H}^{-1}(C) = 0$; the other case follows from an identical argument. Sequence~\eqref{eq:exact2} then yields the following exact sequence in $\mathcal{A}$.
\begin{equation}\label{eq:exact3}
\begin{tikzcd}
0 \arrow[r] & E \arrow[r] & SE[-1] \arrow[r] & \mathcal{H}^0(C) \arrow[r] & 0
\end{tikzcd}
\end{equation}
By Lemma~\ref{lem:mainbound}, we have the inequality
\begin{equation}\label{eq:3}
hom^1(\mathcal{H}^0(C),\mathcal{H}^0(C)) \leq hom^1(\mathcal{H}^0(C),SE[-1]) = d - 1 - \chi(E, E) = 2d-2
\end{equation}
In particular, this implies that $C = \mathcal{H}^0(C)$ is $\sigma$-stable by Lemma~\ref{lem:1cy}.
\end{proof}
\subsection{Applications of global dimension bounds}
In this subsection, we prove Proposition~\ref{prop:1sphericalnew}. We begin by reformulating Definition~\ref{defn:noncurve} in terms of the quantity $\inf\limits_{\sigma \in Stab(\mathcal{T})}gldim(\sigma)$.
In the following, we assume that $\mathcal{T}$ satisfies $d \geq 1$ and $\inf\limits_{\sigma \in Stab(\mathcal{T})} gldim(\sigma) < \frac{6}{5}$. Fix a stability condition $\sigma = (\mathcal{P},Z)$ such that $gldim(\sigma) < \frac{6}{5}$ and an object $E \in \mathcal{T}$ with $hom^1(E,E) = d$. By Lemma~\ref{lem:1cy}, we again note that $E$ and $SE[-1]$ are semistable with respect to any stability condition $\sigma$ with $gldim(\sigma) < 2$. Consider the following phase
\begin{equation}\label{eq:phi0}
\varphi_0 \coloneqq \frac{1}{2}(\varphi(E) + \varphi(SE[-1])) - \frac{1}{2}
\end{equation}
and the heart $\mathcal{A} = \mathcal{P}(\varphi_0,\varphi_0+1]$.
We first observe the following simple Lemma, which will be used repeatedly in the below.
\begin{lemma}\label{lem:sbound}
Let $F, SF[-1]\in \mathcal{T}$ be $\sigma$-semistable objects. Then the following inequalities hold.
\[0\leq\varphi(SF[-1]) - \varphi(F) \leq gldim(\sigma)-1
\]
\end{lemma}
\begin{proof}
The first inequality follows immediately from $hom(F,SF[-1]) = hom^1(F,F) \geq 1$ by the assumption on $d$. The second follows from $hom(F,SF) = hom(F,F) \neq 0$ and the definition of the global dimension.
\end{proof}
We now demonstrate that the results of section~\ref{sec:minimal} apply automatically.
\begin{lemma}\label{lem:65noncurve}
$\mathcal{T}$ is a noncommutative curve with respect to $E$ and the heart $\mathcal{A}= \mathcal{P}(\varphi_0,\varphi_0+1]$ in the sense of Definition~\ref{defn:noncurve}.
\end{lemma}
\begin{proof}
It suffices to show that the heart $\mathcal{A} = \mathcal{P}(\varphi_0, \varphi_0+1]$ satisfies the conditions in Definition~\ref{defn:noncurve}.
We first prove that $E,SE[-1] \in \mathcal{A}$. It suffices to prove the inequalities
\[\varphi_0 <\varphi(E) \leq \varphi(SE[-1]) \leq \varphi_0+1
\]
Applying Lemma~\ref{lem:sbound}, we must have
\begin{equation}\label{eq:sbound}
\varphi(E) \leq \varphi(SE[-1]) \leq \varphi(E) + \frac{1}{5}
\end{equation}
Applying the inequality~\eqref{eq:sbound} to the definition of $\varphi_0$ in~\eqref{eq:phi0}, we obtain
\begin{equation}\label{eq:chain}
\varphi_0 \leq \varphi(E) - \frac{2}{5} < \varphi(E) \leq \varphi(SE[-1]) \leq \varphi(E) + \frac{1}{2} = \frac{1}{2}(\varphi(E) + \varphi(E)) + \frac{1}{2} \leq \varphi_0 + 1
\end{equation}
which implies the claim.
Finally, we verify the inequalities:
\begin{equation}\label{eq:noncurveineq}
(\varphi_0 + 2) - \varphi(SE[-1]) > gldim(\sigma),\qquad (\varphi(E) + 2) - (\varphi_0 + 1) > gldim(\sigma)
\end{equation}
The first follows from the inequalities
\begin{align*}
(\frac{1}{2}(\varphi(E) + \varphi(SE[-1])) + \frac{3}{2})- \varphi(SE[-1]) &= \frac{1}{2}(\varphi(E) - \varphi(SE[-1])) + \frac{3}{2} \\
& \geq \frac{1}{2}(-\frac{1}{5}) + \frac{3}{2} \geq \frac{6}{5} > gldim(\sigma)
\end{align*}
where we used the second inequality in~\eqref{eq:sbound} in the second line. The second inequality in~\eqref{eq:noncurveineq} follows from an identical argument.
\end{proof}
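The phase arithmetic in the proof can be verified with exact rational arithmetic over a sweep of admissible gaps $t = \varphi(SE[-1]) - \varphi(E) \in [0, 1/5)$; the check below is an illustration under those assumptions only, not part of the argument.

```python
from fractions import Fraction

# Sweep the gap t = phi(SE[-1]) - phi(E) over [0, 1/5), as imposed by
# gldim(sigma) < 6/5, with phi_0 = (phi(E) + phi(SE[-1]))/2 - 1/2.
for k in range(0, 200):
    t = Fraction(k, 1000)            # gap, swept over [0, 0.2)
    phiE = Fraction(0)               # phases only enter through the gap
    phiS = phiE + t
    phi0 = (phiE + phiS) / 2 - Fraction(1, 2)
    # E and SE[-1] lie in the heart P(phi0, phi0 + 1]
    assert phi0 < phiE <= phiS <= phi0 + 1
    # Condition (2) of the noncommutative-curve definition, vs gldim < 6/5
    assert (phi0 + 2) - phiS >= Fraction(6, 5)
    assert (phiE + 2) - (phi0 + 1) >= Fraction(6, 5)
```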
\begin{rem}
We note that Lemma~\ref{lem:65noncurve} applies with an identical argument if $gldim(\sigma) < \frac{4}{3}$.
\end{rem}
Finally, with the stronger bound on the global dimension, we now prove Proposition~\ref{prop:1sphericalnew} by applying Lemmas~\ref{lem:1spherical} and~\ref{lem:restrictC}.
\begin{proof}[Proof of Proposition~\ref{prop:1sphericalnew}]
We will assume that $d \geq 2$. Fix a stability condition $\sigma = (\mathcal{P},Z)$ such that $gldim(\sigma) < \frac{6}{5}$, an object $E$ with $hom^1(E,E) = d$, and heart $\mathcal{A} = \mathcal{P}(\varphi_0,\varphi_0+1]$ with $\varphi_0$ given in~\eqref{eq:phi0}.
Consider the following sequence, where we recall $E,SE[-1] \in \mathcal{A}$ by Lemma~\ref{lem:65noncurve}.
\begin{equation}\label{eq:exact6}
\begin{tikzcd}
E \arrow[r] & SE[-1] \arrow[r] & C
\end{tikzcd}
\end{equation}
By Lemma~\ref{lem:restrictC}, we may also assume that $C \in \mathcal{A}$ and is $\sigma$-stable. The case with $C \in \mathcal{A}[1]$ will follow by an identical, dual argument.
We define the rotated slicing $\mathcal{A}' \coloneqq \mathcal{P}(\varphi_0+\frac{1}{5}, \varphi_0 + \frac{6}{5}]$. We first observe the inclusion $E,SE[-1],C \in \mathcal{A}'$. Indeed, we have the following chain of inequalities
\[
\varphi_0 + \frac{1}{5} \leq \varphi(E) - \frac{1}{5} < \varphi(E) \leq \varphi(SE[-1]) \leq \varphi(C) \leq \varphi_0 + 1 \leq \varphi_0 + \frac{6}{5}
\]
where the first follows from the first inequality in the chain~\eqref{eq:chain}, the third and fourth by Lemma~\ref{lem:boundC}, and the fifth from the inclusion $C \in \mathcal{A}$.
Applying $Hom(E,\cdot)$ to sequence~\eqref{eq:exact6}, and using that $hom(E,E) =1$ by Lemma~\ref{lem:simple}, it must be the case that $Hom(E,C) \neq 0$. We claim that this implies that $hom(E, SC[-1]) = hom^1(C, E) \geq d $. Indeed, let $g \in Hom(E,C)$ be non-zero. This yields the following exact sequences in $\mathcal{A}'$:
\begin{equation}\label{eq:exact5}
\begin{tikzcd}
0 \arrow[r]& ker(g) \arrow[r] & E \arrow[r] & im(g) \arrow[r] & 0
\end{tikzcd}
\end{equation}
\begin{equation}\label{eq:exact4}
\begin{tikzcd}
0 \arrow[r]& im(g) \arrow[r] & C\arrow[r] & coker(g) \arrow[r] & 0
\end{tikzcd}
\end{equation}
We apply $Hom(im(g), \cdot)$ to the Serre dual of sequence~\eqref{eq:exact4}
\[
\begin{tikzcd}
S(coker(g))[-2] \arrow[r] & S(im(g))[-1] \arrow[r] & SC[-1] \arrow[r] & S(coker(g))[-1]
\end{tikzcd}
\]
and observe that $Hom(im(g),S(coker(g))[-2]) = Hom(coker(g),im(g)[2]) = 0$ from the inequalities
\begin{equation}\label{eq:a'bound}
(\varphi^-(im(g)) + 2) - \varphi^+(coker(g)) \geq (\varphi(E) + 2) - (\varphi_0 + \frac{6}{5}) > gldim(\sigma)
\end{equation}
where $\varphi(E) \leq \varphi^-(im(g))$ from \cite[Lemma 3.4]{2002math.....12237B} and $\varphi^+(coker(g)) \leq \varphi_0 + \frac{6}{5}$ as $coker(g) \in \mathcal{A}'$. The second inequality in~\eqref{eq:a'bound} then follows from the inequalities:
\begin{align*}
(\varphi(E) + 2) - (\varphi_0 + \frac{6}{5}) &= \frac{1}{2}(\varphi(E) - \varphi(SE[-1])) + 2 - \frac{7}{10}\\
&\geq \frac{1}{2}(-\frac{1}{5}) + 2 - \frac{7}{10} = \frac{6}{5} > gldim(\sigma)
\end{align*}
where the second line follows from Lemma~\ref{lem:sbound}.
Thus, we have $d \leq hom(im(g), S(im(g))[-1])$, and the injection $Hom(im(g), S(im(g))[-1]) \hookrightarrow Hom(im(g), SC[-1])$ gives $d \leq hom(im(g), SC[-1])$. As $C$ and $SC[-1]$ are $\sigma$-stable by Lemma~\ref{lem:restrictC}, we apply Lemma~\ref{lem:sbound} and obtain $0 \leq\varphi(SC[-1]) - \varphi(C) \leq \frac{1}{5}$, and hence $SC[-1] \in \mathcal{A}'$. The claim follows by applying $Hom(\cdot, SC[-1])$ to the surjection $E \twoheadrightarrow im(g)$ in sequence~\eqref{eq:exact5}.
Altogether, we obtain the inequalities
\begin{align*}
hom^1(C,C) &= hom^1(C, SE[-1]) - hom^1(C,E) + hom(C,C) \\
&\leq (2d -2) - d + hom(C,C)\\
&= d - 2 + hom(C,C)
\end{align*}
where the first follows by applying $Hom(C, \cdot)$ to sequence~\eqref{eq:exact6} and the vanishings $Hom(C,SE[-1])= 0$ and $Hom^2(C,E) = 0$ which follow from the proof of Lemma~\ref{lem:mainbound}. The second follows from Lemma~\ref{lem:mainbound} applied to sequence~\eqref{eq:exact6} and the above paragraph. On the other hand, as $C$ is $\sigma$-stable, we have $hom(C,C) = 1$ and obtain $hom^1(C,C) \leq d-1$, contradicting the minimality of $d$. Thus, it must be the case that $d = 1$.
Finally, assume $F \in \mathcal{T}$ satisfies $hom^1(F,F) = d =1$. By Lemma~\ref{lem:1cy}, the objects $F, SF[-1]$ are $\sigma$-semistable. By Lemma~\ref{lem:65noncurve}, there exists a heart $\mathcal{A}$ with $F, SF[-1] \in \mathcal{A}$ satisfying Definition~\ref{defn:noncurve}. By Lemma~\ref{lem:1spherical}, $F$ must be a $1$-spherical object and we conclude.
\end{proof}
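Two arithmetic steps close the proof: the phase bound $(\varphi(E)+2) - (\varphi_0 + \frac{6}{5}) \geq \frac{6}{5}$ and the final contradiction $hom^1(C,C) \leq d-1 < d$. As a sketch under the same assumptions as above (a gap $t \in [0,1/5)$), both can be checked numerically:

```python
from fractions import Fraction

# The phase bound: (phi(E) + 2) - (phi_0 + 6/5) = 13/10 - t/2 >= 6/5
# for any gap t = phi(SE[-1]) - phi(E) in [0, 1/5).
for k in range(0, 200):
    t = Fraction(k, 1000)
    phiE = Fraction(0)
    phi0 = (2 * phiE + t) / 2 - Fraction(1, 2)
    lhs = (phiE + 2) - (phi0 + Fraction(6, 5))
    assert lhs == Fraction(13, 10) - t / 2
    assert lhs >= Fraction(6, 5)

# The final contradiction: (2d - 2) - d + hom(C,C) = d - 1 < d for d >= 2,
# since sigma-stability of C gives hom(C,C) = 1.
for d in range(2, 50):
    hom_CC = 1
    bound = (2 * d - 2) - d + hom_CC
    assert bound == d - 1 and bound < d
```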
\section{Reconstruction from a $1$-Calabi-Yau object}\label{1cyreconstruction}
Let $\mathcal{D}$ be a triangulated category of finite type over $k$. We recall that $\mathcal{D}$ is a {\em geometric noncommutative scheme} if there exists an admissible embedding $\mathcal{D} \xhookrightarrow{} D^b(X)$ with $X$ a smooth, projective variety over $k$. In this section, we prove the following reconstruction result.
\begin{prop}\label{prop:reconstructionA}
Assume that $\mathcal{D}$ is a non-rational, geometric noncommutative scheme with a numerical stability condition $\sigma = (\mathcal{A},Z)$ such that $gldim(\sigma) <2$ and $Im(Z) \subset \mathbb{C}$ is discrete. If there exists an object $E \in \mathcal{D}$ such that $Hom^1(E,E) = k$, then there exists an admissible embedding $\Phi \colon D^b(C) \xhookrightarrow{} \mathcal{D}$ where $C$ is a smooth projective curve.
In addition, if $\mathcal{D}$ is connected and if every $\sigma$-stable object $E$ satisfying $Hom^1(E,E) = k$ is $1$-spherical, then $\Phi$ is an equivalence of categories.
\end{prop}
In this section, we will always assume that $\mathcal{D} \xhookrightarrow{} D^b(X)$ is a geometric noncommutative scheme and that all Bridgeland stability conditions are numerical, i.e. that the central charge factors through $K_{num}(\mathcal{D})$, the numerical Grothendieck group. We first recall the basic definitions and notions of the moduli spaces of objects in $\mathcal{D}$ following \cite[Section 5.3]{macri2019lectures}, and we defer to \cite[Part II]{2021} for the analogous definitions in greater generality and for the fundamental properties.
Given a scheme $B$, locally of finite type over $k$, we define
\[
\mathcal{D}_{qcoh} \boxtimes D_{qcoh}(B) \subset D_{qcoh}(X\times B)
\]
to be the smallest triangulated subcategory in the unbounded derived category of quasi-coherent sheaves on $X\times B$ closed under direct sums and containing $\mathcal{D} \boxtimes D^b(B)$.
\begin{defn}
An object $E\in D_{qcoh}(X\times B)$ is \textit{B-perfect} if, locally over $B$, it is isomorphic to a bounded complex of quasi-coherent sheaves on $B$ that is flat and of finite presentation.
\end{defn}
\noindent
Let $D_{B-perf}(X \times B)$ be the full subcategory of $B$-perfect complexes in $D_{qcoh}(X\times B)$ and consider the following restriction to $\mathcal{D}$.
\[
\mathcal{D}_{B-perf} \coloneqq (\mathcal{D}_{qcoh} \boxtimes D_{qcoh}(B)) \cap D_{B-perf}(X \times B)
\]
Let $\mathcal{M} \colon (Sch/k)^{op} \rightarrow Grp$ be the $2$-functor defined as follows for $B$, locally of finite type.
\[
\mathcal{M}(B) \coloneqq \bigg\{ E\in \mathcal{D}_{B-perf} \colon \begin{array}{l} Ext^i(E|_{X \times \{b \}}, E|_{X \times \{b\}}) = 0,\text{ for all }i < 0 \\ \text{ and all geometric points }b \in B \end{array}\bigg\}
\]
Given a stability condition $\sigma \in Stab(\mathcal{D})$, we denote by $\mathcal{M}_\sigma(v)$ (resp. $\mathcal{M}_\sigma^{st}(v)$) the substack of $\mathcal{M}$ parametrizing $\sigma$-semistable (resp. $\sigma$-stable) objects in $\mathcal{D}$ of class $v$.
The following Theorem summarizes the relevant properties that we will need for these objects.
\begin{theorem}[{\cite[Theorem 21.24]{2021}}, {\cite[Theorem A.5]{muk87}}]\label{thm:stack}
Fix a class $v \in K_{num}(\mathcal{D})$ such that $\mathcal{M}_\sigma(v) = \mathcal{M}_\sigma^{st}(v)$. Then $\mathcal{M}_\sigma(v)$ is an algebraic stack, the coarse moduli space $M_\sigma(v)$ is a proper algebraic space, and $\mathcal{M}_\sigma(v)$ is a $\mathbb{G}_m$-gerbe over $M_\sigma(v)$. In particular, there exists a quasi-universal family in $\mathcal{D}_{B-perf}$.
\end{theorem}
The critical ingredient in the proof of Proposition~\ref{prop:reconstructionA} is to identify a suitable curve in the moduli space of $\sigma$-semistable objects in $\mathcal{D}$. By imposing a bound on the global dimension of the stability condition, we see that $M_\sigma(v)$ is particularly simple.
\begin{corollary}\label{cor:mprojcurve}
Assume that $\mathcal{D}$ admits a stability condition $\sigma$ such that $gldim(\sigma) < 2$. Assume that there exists a $\sigma$-semistable object $E$ of class $v$ such that $v^2 = 0$ and $\mathcal{M}_\sigma(v) = \mathcal{M}_\sigma^{st}(v)$. Then $M_\sigma(v)$ is a smooth projective curve.
\end{corollary}
\begin{proof}
By assumption, we have $\chi(v,v) = 0$. Let $F$ be any $\sigma$-semistable object of class $v$; by assumption, $F$ is $\sigma$-stable, so $Hom(F,F) = k$. Since $gldim(\sigma) < 2$, we have $Hom(F,F[i]) = 0$ for all $i \neq 0,1$, and $\chi(v,v) = 0$ then forces $Hom(F,F[1]) = k$.
In particular, $Hom(F,F[1]) = k$ and $Hom(F,F[2]) = 0$, so $M_\sigma(v)$ is of dimension $1$ and smooth, respectively. Thus, $M_\sigma(v)$ is a smooth and proper algebraic curve, and so is automatically projective by \cite[\href{https://stacks.math.columbia.edu/tag/0A26}{Tag 0A26}]{stacks-project}.
\end{proof}
In order to fully utilize Theorem~\ref{thm:stack}, we will need to find a suitable class $v \in K_{num}(\mathcal{D})$ such that any $\sigma$-semistable object of class $v$ is also $\sigma$-stable. Under mild conditions on the stability condition, this is indeed the case.
\begin{lemma}\label{lem:conditionsreconstruct}
Assume that there exists a stability condition $\sigma = (\mathcal{A}, Z)\in Stab(\mathcal{D})$ such that $gldim(\sigma) < 2$ and $Im(Z) \subset \mathbb{C}$ is discrete. Assume that there exists a $\sigma$-semistable object $E$ such that $v(E)^2 = 0$. Then there exists a class $v \in K_{num}(\mathcal{D})$ such that $v^2 = 0$ and $\mathcal{M}_\sigma(v) = \mathcal{M}_\sigma^{st}(v) \neq \emptyset$.
\end{lemma}
\begin{proof}
By assumption, there exists an object $E \in \mathcal{D}$ which is $\sigma$-semistable and satisfies $v(E)^2 = 0$. If there does not exist a strictly $\sigma$-semistable object of class $v(E)$, then we are done. Otherwise, assume $F$ is strictly $\sigma$-semistable of the same class. Taking a Jordan-H\"{o}lder filtration, we obtain $\sigma$-stable factors $F_i$ such that $\sum_i v(F_i) = v(F)$ and satisfying $v(F_i)^2 = 0$ by Lemma~\ref{lem:semistable}(2). Fixing a class $v_i = v(F_i)$ for some $i$, we replace the class $v(F)$ by $v_i$ and repeat. Since the $Z(v(F_i))$ lie on the same ray as $Z(v(F))$ with $|Z(v_i)| < |Z(v(F))|$, the discreteness of $Im(Z)$ guarantees that this process terminates.
\end{proof}
Finally, we combine the above observations to deduce Proposition~\ref{prop:reconstructionA}. The following proof largely follows the methods exhibited in \cite[Lemma 32.5]{2021}.
\begin{proof}[Proof of Proposition~\ref{prop:reconstructionA}]
Fix a stability condition $\sigma = (\mathcal{A}, Z) \in Stab(\mathcal{D})$ such that $Im(Z) \subset \mathbb{C}$ is discrete and such that $gldim(\sigma) < 2$. If there exists an object $E\in \mathcal{D}$ such that $Hom^1(E,E) = k$, then by Lemma~\ref{lem:1cy}(3), $E$ must be $\sigma$-semistable and $v(E)^2 = 0$.
By Lemma~\ref{lem:conditionsreconstruct}, we may fix a class $v \in K_{num}(\mathcal{D})$ such that $\mathcal{M}_\sigma (v) = \mathcal{M}_\sigma^{st}(v) \neq 0$ and $v^2 = 0$.
By restricting to a connected component of the moduli functor, we may assume that $M_\sigma(v)$ is connected. By Corollary~\ref{cor:mprojcurve}, $C \coloneqq M_\sigma(v)$ is a smooth, projective curve. By Theorem~\ref{thm:stack}, there exists a quasi-universal family $\mathcal{E} \in \mathcal{D}_{C-perf} \subset D^b(C \times X)$ which must be universal as there does not exist a non-trivial Brauer class in dimension $1$. Given two non-isomorphic $\sigma$-stable objects $E,F \in \mathcal{D}$ such that $v(E) = v(F) = v$, we have $Hom(E,F) = 0 = Hom(F,E)$ and thus $Hom^i(E,F) = 0 = Hom^i(F,E)$ for all $i$ from the fact that $v^2 = 0$ and the assumption on the global dimension. Thus, by \cite{https://doi.org/10.48550/arxiv.alg-geom/9506012}, the integral transform $\Phi_\mathcal{E} \colon D^b(C) \xhookrightarrow{} D^b(X)$ is an admissible embedding, and moreover factors through $\mathcal{D}$ by definition.
Under the additional assumption, every $\sigma$-stable object of class $v$ is also $1$-Calabi-Yau. Thus, the universal family $\mathcal{E} \in D^b(C \times X)$ is a family of $1$-Calabi-Yau objects in $\mathcal{D}$ and so $\Phi_\mathcal{E}$ is essentially surjective by the same argument as in the second paragraph of the proof of \cite[Theorem 5.4]{bridgeland2019equivalences}.
\end{proof}
\section{Reconstruction of non-commutative curves}\label{sec:main}
In this section, we prove our main Theorem~\ref{thm:mainthm} and deduce a number of additional corollaries for categories of dimension $1$. For convenience of notation, let $Stab_\mathcal{N}(\mathcal{D})$ be the space of numerical Bridgeland stability conditions and let $Stab_{\mathcal{N},d}(\mathcal{D})$ be the subspace of stability conditions $\sigma = (\mathcal{A},Z)$ such that $Im(Z) \subset \mathbb{C}$ is discrete.
\begin{theorem}\label{thm:mainthm}
Assume that $\mathcal{D}$ is a connected, non-rational, geometric non-commutative scheme, and that $\inf\limits_{\sigma \in Stab_{\mathcal{N},d}(\mathcal{D})}gldim(\sigma) < \frac{6}{5}$.
Then there exists an equivalence $\mathcal{D} \simeq D^b(C)$ with $C$ a smooth projective curve of genus $g \geq1$.
\end{theorem}
\begin{proof}
By Lemma~\ref{lem:1cy} and Proposition~\ref{prop:1sphericalnew}, there exists a $1$-spherical object and every object $E \in \mathcal{D}$ satisfying $hom^1(E,E) = 1$ is also $1$-spherical. The conclusion then follows by Proposition~\ref{prop:reconstructionA}.
\end{proof}
\begin{rem}\label{rem:infdim}
We note that the assumptions in Theorem~\ref{thm:mainthm} can be weakened in various ways. In particular, the existence of a $1$-spherical object only needs the condition $\inf\limits_{\sigma\in Stab(\mathcal{D})}gldim(\sigma) < \frac{6}{5}$ where the infimum ranges over the full space of stability conditions. Then, we only need a separate stability condition $\sigma$ which in addition is numerical, has discrete central charge and satisfies $gldim(\sigma) < 2$ to achieve the desired equivalence.
\end{rem}
In the case of dimension $1$, we recall the following result.
\begin{theorem}\cite[Theorem 5.16]{Kikuta_2021}\label{thm:kikuta}
Let $C$ be a smooth projective curve of genus $g$.
\begin{enumerate}
\item
If $g = 0$, then there exists a stability condition on $D^b(C)$ such that $gldim(\sigma) = 1$.
\item
If $g = 1$, then $gldim(\sigma) = 1$ for any stability condition $\sigma \in Stab_\mathcal{N}(D^b(C))$.
\item
If $g \geq 2$, then $gldim(\sigma) > 1$ for any stability condition $\sigma \in Stab_\mathcal{N}(D^b(C))$ and\\ $\inf\limits_{\sigma \in Stab_\mathcal{N}(D^b(C))}gldim(\sigma) = 1$.
\end{enumerate}
\end{theorem}
As a consequence, we obtain the following corollary.
\begin{corollary}\label{cor:nonexistence}
There exists no connected, non-rational, geometric non-commutative scheme $\mathcal{D}$ satisfying $\inf\limits_{\sigma\in Stab_{\mathcal{N},d}(\mathcal{D})}gldim(\sigma) \in (1, \frac{6}{5})$.
\end{corollary}
\subsection{Serre-invariance and higher genus curves}
As a consequence of Theorem~\ref{thm:mainthm}, we obtain two sharper categorical characterizations of higher genus curves, and a partial converse to Theorem~\ref{thm:kikuta}. In this section, we relate our results to the weaker notion (see \cite[Theorem 4.2]{Kikuta_2021}) of the upper Serre dimension of $\mathcal{T}$ under the additional assumption that the stability condition $\sigma$ is invariant under the action of the Serre functor.
We first recall the notion of a Serre-invariant stability condition and the Serre dimension on $\mathcal{T}$, a triangulated category of finite type over $k$ with a Serre functor $S$ and a classical generator $G$.
\begin{defn}
Let $\sigma = (\mathcal{A},Z)$ be a Bridgeland stability condition on $\mathcal{T}$. We say that $\sigma$ is Serre-invariant if $S \cdot \sigma = \sigma \cdot g$ for some $g \in \widetilde{GL}_2^+(\mathbb{R})$.
\end{defn}
More concretely, we have the following characterization of the $\widetilde{GL}_2^+(\mathbb{R})$ action.
\begin{rem}[{\cite[Lemma 8.2]{2002math.....12237B}}]\label{rem:faction}
Recall that specifying an element $g \in \widetilde{GL}_2^+(\mathbb{R})$ is equivalent to specifying a pair $(T,f)$ with $f \colon \mathbb{R} \rightarrow \mathbb{R}$ an increasing map such that $f (\varphi + 1) = f(\varphi) + 1$ and $T \colon \mathbb{R}^2 \rightarrow \mathbb{R}^2$ an orientation-preserving linear isomorphism, such that the induced maps on $\mathbb{R}/2\mathbb{Z}$ agree.
\end{rem}
We also introduce the notion of the upper and lower Serre dimensions and we defer to \cite[Section 5]{2019arXiv190109461E} for the relevant definitions in greater generality. In the presence of a stability condition, the Serre dimension of $\mathcal{T}$ is particularly simple.
\begin{defn}[\cite{Kikuta_2021}, Proposition 3.9]\label{lem:asymptoticphase}
The upper and lower Serre dimension of $\mathcal{T}$ are defined respectively as follows.
\begin{enumerate}
\item
$\overline{Sdim}(\mathcal{T}) \coloneqq \limsup\limits_{n \rightarrow \infty}\frac{1}{n}\varphi_\sigma^+(S^nG)$
\item
$\underline{Sdim}(\mathcal{T}) \coloneqq \liminf\limits_{n \rightarrow \infty}\frac{1}{n}\varphi_\sigma^-(S^nG)$
\end{enumerate}
\end{defn}
\begin{rem}
The fact that these definitions are independent of the classical generator $G$ can be deduced from \cite[Definition 5.3]{2019arXiv190109461E}.
\end{rem}
In this subsection, we prove the following.
\begin{theorem}\label{thm:infdim1}
Assume that $\mathcal{D}$ is a connected, non-rational, geometric non-commutative scheme. Assume that there exists a numerical stability condition $\sigma = (\mathcal{A},Z) \in Stab(\mathcal{D})$ such that $gldim(\sigma) < 2$ and $Im(Z) \subset \mathbb{C}$ is discrete. Then the following are equivalent:
\begin{enumerate}
\item
$\inf\limits_{\sigma \in Stab(\mathcal{D})}gldim(\sigma) = 1$
\item
$\mathcal{D} \simeq D^b(C)$ with $C$ a smooth projective curve of genus $g \geq 1$.
\setcounter{serreinv}{\value{enumi}}
\end{enumerate}
If in addition, $\mathcal{D}$ admits a Serre-invariant Bridgeland stability condition, then the above is equivalent to the following.
\begin{enumerate}
\setcounter{enumi}{\value{serreinv}}
\item
$\overline{Sdim}(\mathcal{D}) = 1$
\end{enumerate}
\end{theorem}
In the following, $\mathcal{T}$ will always denote a triangulated category of finite type over $k$ with a Serre functor $S$, admitting a classical generator $G$, and we fix a Bridgeland stability condition $\sigma = (\mathcal{A},Z)$. We first note that, in the presence of an appropriate $\sigma$-semistable object, we can bound the asymptotic phases of such an object under iterated applications of the Serre functor.
\begin{lemma}\label{lem:stablephase}
Assume there exists an object $E \in \mathcal{T}$ such that $S^nE$ is $\sigma$-semistable for all $n \in \mathbb{Z}$. Then we have the following inequalities:
\[
\underline{Sdim}(\mathcal{T}) \leq \limsup\limits_{n \rightarrow \infty} \frac{1}{n} \varphi(S^nE) \leq \overline{Sdim}(\mathcal{T})
\]
\end{lemma}
\begin{proof}
Fix $G \in \mathcal{T}$ to be any classical generator. Then, there must exist integers $i,j$ such that $Hom(G, E[i]), Hom(E, G[j]) \neq 0$. In particular, these imply the following inequalities.
\begin{align*}
\varphi^-(S^nG) \leq \varphi(S^nE) + i \\
\varphi(S^nE) \leq \varphi^+(S^nG) + j
\end{align*}
for all $n$. Applying $\limsup\limits_{n \rightarrow \infty} \frac{1}{n}$ together with Definition~\ref{lem:asymptoticphase} implies the claim.
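To unpack the first of these inequalities (a sketch, using that $S$ is an equivalence and that a nonzero morphism from $G$ to a $\sigma$-semistable object $X$ forces $\varphi^-(G) \leq \varphi(X)$): since $S^nE$ is $\sigma$-semistable by assumption,
\[
0 \neq Hom(G,E[i]) \cong Hom(S^nG, S^nE[i]) \quad \Longrightarrow \quad \varphi^-(S^nG) \leq \varphi(S^nE[i]) = \varphi(S^nE) + i.
\]
The second inequality follows in the same way from $Hom(S^nE, S^nG[j]) \neq 0$.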
\end{proof}
In the presence of a Serre-invariant stability condition and under the assumption that the Serre dimension is a positive integer $n$, we can identify a heart of a bounded t-structure $\mathcal{A} \subset \mathcal{T}$ of homological dimension $n$ satisfying Serre duality.
\begin{prop}\label{prop:reconstructionS}
Assume the following:
\begin{itemize}
\item
$\sigma = (\mathcal{P},Z)$ is a Serre-invariant stability condition on $\mathcal{T}$.
\item
$\overline{Sdim}(\mathcal{T}) = n$ for $n\in \mathbb{Z}_+$.
\end{itemize}
Then there exists $\varphi \in \mathbb{R}$ such that the heart $\mathcal{A}_\varphi = \mathcal{P}((\varphi-1,\varphi])$ satisfies $S\mathcal{A}_\varphi[-n] = \mathcal{A}_\varphi$. In addition, $\mathcal{A}_\varphi$ is of homological dimension $n$.
\end{prop}
To prove the above Proposition, we will study the asymptotic action of $S$ on the phases $\varphi$ given by the action described in Remark~\ref{rem:faction}.
\begin{lemma}\label{lem:fixedphase}
Under the assumptions of Proposition~\ref{prop:reconstructionS}, let $(T,f) \in \widetilde{GL}_2^+(\mathbb{R})$ correspond to the action of $S$ on $\sigma = (\mathcal{P},Z)$. Then there exists a real number $\varphi \in \mathbb{R}$ such that $f(\varphi) = \varphi + n$.
\end{lemma}
\begin{proof}
By assumption, the action of the auto-equivalence $F \coloneqq S[-n]$ on $\sigma$ is equivalent to an action by an element of $\widetilde{GL}_2^+(\mathbb{R})$ with the phase function $f_F(\varphi) = f(\varphi) - n$. Similarly, we denote by $f_{F^{-1}}$ the phase function associated with the inverse auto-equivalence $F^{-1} = S^{-1}[n]$. We will prove that the function $f_F \colon \mathbb{R} \rightarrow \mathbb{R}$ has a fixed point.
Fix a phase $\varphi_0$ such that $\mathcal{P}(\varphi_0) \neq 0$. By \cite[Proposition 6.17]{kuznetsov2021serre} and Lemma~\ref{lem:stablephase}, we must have $\limsup\limits_{k\rightarrow \infty} \frac{1}{k} f^k_F(\varphi_0) = 0$. We first prove that the set $\{f_F^k(\varphi_0)\}_{k \in \mathbb{Z}_+} \subset \mathbb{R}$ is contained in an interval of length $<1$. Assume on the contrary that this set is not contained in any interval of length $<1$. Then, acting by $f_{F^{-1}}$, which is increasing and satisfies $f_{F^{-1}}(\varphi + 1) = f_{F^{-1}}(\varphi) + 1$, there must exist $m \in \mathbb{Z}_+$ such that either $f_F^m(\varphi_0) \geq \varphi_0 + 1$ or $\varphi_0 -1 \geq f_F^m(\varphi_0)$.
We assume the first case. As $f_F$ is increasing and satisfies $f_F(\varphi + 1) = f_F(\varphi) + 1$, for any $k \in \mathbb{Z}_+$ we have that $f_F^{mk}(\varphi_0) \geq f_F^{m(k-1)}(\varphi_0+1) = f_F^{m(k-1)}(\varphi_0) + 1 \geq \ldots \geq \varphi_0 + k$. But then we have
\[
0 = \limsup\limits_{k\rightarrow\infty}\frac{1}{k} f^k_F(\varphi_0) \geq \limsup\limits_{k\rightarrow\infty}\frac{1}{km}f^{km}_F(\varphi_0) \geq \limsup\limits_{k\rightarrow\infty}\frac{1}{km} (\varphi_0 +k) = \frac{1}{m}
\]
where the first inequality follows as $\frac{1}{km}f_F^{km}(\varphi_0)$ is a subsequence of $\frac{1}{k}f_F^k(\varphi_0)$ and the second follows from the inequalities of the previous sentence. As $m \in \mathbb{Z}_+$, this is a contradiction.
In the latter case, an identical argument as in Lemma~\ref{lem:stablephase} with the inverse Serre functor together with \cite[Proposition 3.9]{Kikuta_2021} implies that $\limsup\limits_{k\rightarrow \infty} \frac{1}{k} f^k_{F^{-1}}(\varphi_0) = 0$. In particular, we have $f^m_{F^{-1}}(\varphi_0) \geq \varphi_0 + 1$ and an identical argument as in the above paragraph yields a contradiction.
Define $\varphi_0^+ \coloneqq \limsup\limits_{k\rightarrow \infty}f_F^k(\varphi_0)$. We claim that this gives the desired fixed point. Indeed, we have the sequence of equalities
\[f_F(\limsup\limits_{k\rightarrow \infty}f_F^k(\varphi_0)) = \limsup\limits_{k\rightarrow\infty}f_F^{k+1}(\varphi_0) = \limsup\limits_{k\rightarrow\infty}f_F^k(\varphi_0)
\]
In the first equality above, we have used the elementary fact that given a continuous increasing function $f \colon A \rightarrow \mathbb{R}$ with $A$ compact and a sequence $(x_k)_{k \in \mathbb{Z}_+} \subset A$, we have the equality
\[
f(\limsup\limits_{k \rightarrow \infty}x_k) = \limsup\limits_{k \rightarrow \infty}f(x_k)
\]
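For completeness, this elementary fact can be justified as follows (a sketch; write $x^* \coloneqq \limsup_k x_k$): monotonicity and continuity give
\[
\limsup\limits_{k \rightarrow \infty} f(x_k) \leq \lim\limits_{k \rightarrow \infty} f\Big(\sup\limits_{j \geq k} x_j\Big) = f(x^*),
\]
since $\sup_{j \geq k} x_j$ decreases to $x^*$, while choosing a subsequence $x_{k_j} \rightarrow x^*$ gives $f(x_{k_j}) \rightarrow f(x^*)$ and hence $\limsup\limits_{k \rightarrow \infty} f(x_k) \geq f(x^*)$.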
Thus, we have the desired fixed point, and we conclude.
\end{proof}
We now turn to proving Proposition~\ref{prop:reconstructionS}.
\begin{proof}[Proof of Proposition~\ref{prop:reconstructionS}]
By Lemma~\ref{lem:fixedphase}, there exists $\varphi_0 \in \mathbb{R}$ such that $f(\varphi_0) = \varphi_0+ n$, where $(T,f)$ corresponds to the action of $S$ on $\sigma$. As $f$ satisfies $f(\varphi + 1) = f(\varphi) + 1$, we have $f(\varphi_0 - 1) = f(\varphi_0) - 1 = \varphi_0 + n-1$. As $f$ is continuous and increasing, we have that $f((\varphi_0 - 1, \varphi_0]) = (\varphi_0 + n-1, \varphi_0 + n]$. Thus, we conclude that $S(\mathcal{P}((\varphi_0 - 1, \varphi_0])) = \mathcal{P}((\varphi_0 + n - 1, \varphi_0 +n])$ as desired.
We now prove that the heart $\mathcal{A}_\varphi\coloneqq \mathcal{P}((\varphi_0 -1, \varphi_0])$ is indeed of homological dimension $n$. For any objects $A,B \in \mathcal{A}_\varphi$, we have $Hom(A,B[i]) = 0$ for $i < 0 $. On the other hand, we have $Hom(A,B[i]) = Hom(B[i], SA) = Hom(B[i-n],SA[-n]) = Hom^{n-i}(B,SA[-n]) = 0$ for $i > n$, as $SA[-n] \in \mathcal{A}_\varphi$ by the previous paragraph.
\end{proof}
Finally, we apply the conclusion of Proposition~\ref{prop:reconstructionS} to deduce Theorem~\ref{thm:infdim1}.
\begin{proof}[Proof of Theorem~\ref{thm:infdim1}]
$(1) \implies (2)$: This follows directly by Theorem~\ref{thm:mainthm} and Remark~\ref{rem:infdim}.
$(2) \implies (1)$: This follows directly by Theorem~\ref{thm:kikuta}.
$(1) \implies (3)$: By \cite[Theorem 4.2]{Kikuta_2021} and the assumption, we have that $\overline{Sdim}(\mathcal{D}) \leq 1$. Assume on the contrary that $\overline{Sdim}(\mathcal{D}) < 1$. By Proposition~\ref{prop:1sphericalnew}, there exists an object $A$ such that $hom^1(A,A) = 1$, and hence $S^nA$ is $\sigma$-semistable for all $n$ by Lemma~\ref{lem:1cy}(3). By Lemma~\ref{lem:stablephase}, we have that $\limsup\limits_{n \rightarrow \infty} \frac{1}{n} \varphi(S^nA) < 1$. In particular, there must exist an integer $N$ such that $\varphi(S^NA) - \varphi(S^{N-1}A) < 1$. On the other hand, we have $Hom(A,A[n]) = Hom(S^{N-1}A,S^{N-1}A[n]) = Hom(S^{N-1}A[n],S^NA) = 0$ for $n \neq 0$ by the assumption on the phases and $\sigma$-semistability of $A$. Thus, $A$ must be an exceptional object, contradicting $hom^1(A,A) = 1$.
$(3) \implies (1)$: By Proposition~\ref{prop:reconstructionS}, there exists a heart $\mathcal{A}_\varphi$ of homological dimension $1$. By \cite[Theorem 2]{ctx31462736420006531}, there exists a $1$-spherical object in $\mathcal{A}_\varphi$. By \cite[Lemma 3.3]{ctx31462736420006531} and the second part of Proposition~\ref{prop:reconstructionA}, we have that $\mathcal{D} \simeq D^b(C)$ with $C$ a smooth projective curve. As there do not exist any exceptional objects, it must be the case that the genus $g$ of $C$ satisfies $g \geq 1$.
\end{proof}
\subsection{The case of $\bm{gldim(\sigma) = 1}$}\label{gldim1}
In this subsection, we apply Theorem~\ref{thm:mainthm} to deduce a structural result for $\mathcal{D}$ when there exists a stability condition $\sigma$ with global dimension $gldim(\sigma) = 1$. In particular, our conclusion applies in the setting when there exists exceptional objects in $\mathcal{D}$.
\begin{corollary}\label{cor:gldim1}
Assume that $\mathcal{D}$ is a connected, geometric non-commutative scheme. Assume that there exists a numerical Bridgeland stability condition $\sigma \in Stab(\mathcal{D})$ such that $Im(Z) \subset \mathbb{C}$ is discrete and such that $gldim(\sigma) = 1$. Then $\mathcal{D} = \langle \mathcal{C}, E_1 ,\ldots, E_n \rangle$ admits a semi-orthogonal decomposition for some integer $n$, with $E_i \in \mathcal{D}$ exceptional objects and $\mathcal{C}$ either zero or equivalent to $D^b(E)$, with $E$ a smooth elliptic curve.
\end{corollary}
The main additional input is the following lemma, which allows us to inductively study admissible subcategories of $\mathcal{D}$ using the theory of stability conditions.
\begin{lemma}\cite[Proposition 5.2]{Kikuta_2021}\label{lem:mon}
Let $\sigma \in Stab(\mathcal{D})$ be a stability condition such that $gldim(\sigma) = 1$ and $\mathcal{D}' \xhookrightarrow{} \mathcal{D}$ a nonzero admissible subcategory. Then there exists a stability condition $\sigma' \in Stab(\mathcal{D}')$ such that $gldim(\sigma') \leq gldim(\sigma)$.
\end{lemma}
We can now easily deduce Corollary~\ref{cor:gldim1}.
\begin{proof}[Proof of Corollary~\ref{cor:gldim1}]
If $\mathcal{D}$ contains an exceptional object $E$, then we pass to the orthogonal subcategory $E^\perp$. By Lemma~\ref{lem:mon}, there exists a stability condition $\sigma' \in Stab(E^\perp)$ such that $gldim(\sigma') \leq gldim(\sigma)$. If $gldim(\sigma') < gldim(\sigma)$, then by \cite[Lemma 5.5]{Kikuta_2021}, any object $E' \in E^\perp$ is exceptional if and only if it is $\sigma'$-stable and, in particular, there must exist an exceptional object. As $HH_0(\mathcal{D})$ is finite-dimensional because $\mathcal{D}$ is a geometric noncommutative scheme, we proceed by induction and conclude that $\mathcal{D} = \langle E_1 , \ldots , E_n \rangle$ is generated by a full exceptional collection.
If $gldim(\sigma') = gldim(\sigma) = 1$ and $E^\perp$ contains no exceptional objects, then we conclude by Theorem~\ref{thm:infdim1} together with Theorem~\ref{thm:kikuta}(1). If not, then we take an exceptional object in $E^\perp$ and its orthogonal and continue by induction. This procedure must terminate, again by finite-dimensionality of the Hochschild homology, and the claim follows.
\end{proof}
\bibliographystyle{amsplain}
Understanding the origin of the small, positive value of dark energy density measured at present \cite{Planck:2018vyg} is one of the most challenging open problems in fundamental physics. One could hope that splitting the problem into two parts might help in seeking its solution, but this is not quite the case. Indeed, on the one hand, one should understand how to embed a positive cosmological constant, or more generally a positive background energy density, into a consistent UV complete theory of quantum gravity. On the other hand, one should understand the precise details of the supersymmetry breaking mechanism, which necessarily occurs when the background energy is positive, but it can also occur more generally. Both of these problems are challenging in themselves.
Effective field theories are the typical framework in which to perform such investigations. However, the presence of dynamical gravity in the setup might change the standard paradigm according to which these theories are constructed, based on a genuinely Wilsonian approach. That this is indeed the case is the central idea underlying the swampland program \cite{Palti:2019pca,vanBeest:2021lhn}, which collects a number of conjectures that are supposed to encode essential features of quantum gravity. Following a purely bottom-up perspective, one should thus assume swampland conjectures as actual consistency criteria (or perhaps even principles) of quantum gravity and check whether or not they are satisfied in any given effective theory.
At present, it is fair to say that not all of the swampland conjectures share the same level of rigor and accuracy. For example, the absence of global symmetries \cite{Banks:1988yz} or the weak gravity conjecture \cite{Arkani-Hamed:2006emk} are widely believed to be established facts of quantum gravity, as they have been tested extensively through the years; more recent conjectures, such that the one forbidding de Sitter vacua \cite{Obied:2018sgi} and its various refinements, are instead still under debate. Nevertheless, there is evidence that all of the swampland conjectures should be connected to one another to form some sort of web. This implies that, even if a precise statement on the fate of de Sitter vacua in quantum gravity might not yet be known, some piece of information should be present in others, possibly more established, conjectures.
This line of reasoning has been followed by \cite{Cribiori:2020use,DallAgata:2021nnr}, where it is shown that, under certain assumptions, de Sitter vacua in extended supergravity are in tension with the magnetic weak gravity conjecture. The assumptions are the presence of an unbroken abelian gauge group (needed to apply the weak gravity conjecture), and a vanishing gravitino mass on the vacuum.\footnote{As in \cite{Cribiori:2020use,DallAgata:2021nnr}, here we always mean Lagrangian mass.} In those setups in which the argument of \cite{Cribiori:2020use,DallAgata:2021nnr} applies, any known version of the de Sitter conjecture seems thus to be redundant, since the fate of de Sitter critical points is already dictated by the weak gravity conjecture. This material is reviewed in section \ref{sec_WGCvsdS}.
Given that the interplay between the weak gravity conjecture and a vanishing gravitino mass has an impact on the fate of de Sitter critical points, one could wonder whether some piece of information about quantum gravity is actually encoded in the gravitino mass itself. Indeed, the gravitino is the superpartner of the graviton and it is thus expected to be related to (quantum) gravity. Furthermore, in a (quasi)-flat background, similar to the one we are living in at present, the gravitino mass is a good estimate of the supersymmetry breaking scale. Whether or not quantum gravity can say something non-trivial in this respect is thus of clear phenomenological interest. This motivated \cite{Cribiori:2021gbf,Castellano:2021yye} to propose a new swampland conjecture, stating that the limit of vanishing gravitino mass corresponds to a breakdown of the effective description. This material is reviewed in section \ref{sec_gravconj}.
Finally, in section \ref{sec_fincomm} we comment on possible extensions of these works and ideas. We work in Planck units.
\section{Weak gravity versus de Sitter}
\label{sec_WGCvsdS}
In this section, we review the general argument of \cite{Cribiori:2020use,DallAgata:2021nnr} showing that de Sitter critical points of extended supergravity with vanishing gravitino mass and with an unbroken abelian factor are in tension with the weak gravity conjecture. As discussed at the end of the section, this excludes almost all known examples in the literature (notably, \cite{DallAgata:2012plb} found an unstable de Sitter vacuum with massive gravitini and thus the argument here reviewed does not apply).
\subsection{A weak gravity constraint on de Sitter}
\label{arg}
Any effective field theory is characterised by at least two energy scales, namely an IR cutoff, $\Lambda_{IR}$, and a UV cutoff, $\Lambda_{UV}$. It is natural to ask for the existence of a hierarchy between them,
\begin{equation}
\label{natcond}
\Lambda_{IR} \ll \Lambda_{UV},
\end{equation}
otherwise there would be almost no room for the theory to live in. The question is then what these two scales are.
In a de Sitter space, there is a natural notion of IR cutoff given by the Hubble scale
\begin{equation}
\label{LIRH}
\Lambda_{IR} \sim H,
\end{equation}
since a distance of order $1/H$ is the longest length that can be measured. The choice of $\Lambda_{UV}$ is instead more subtle. Assuming gravity to behave classically in the effective description, a natural guess would be $\Lambda_{UV} \sim M_p$. However, one of the main lessons of the swampland program is that gravity is not genuinely wilsonian and thus the UV cutoff of a given effective theory can be lowered from its naive expectation by (quantum) gravity effects. A realisation of this scenario is the magnetic weak gravity conjecture \cite{Arkani-Hamed:2006emk}, which states that in an effective theory with gravity and with an abelian gauge coupling $g$, the UV cutoff is bounded by
\begin{equation}
\label{mWGC}
\Lambda_{UV} \lesssim g M_p.
\end{equation}
For an effective theory on a positive background and with an abelian gauge coupling, the condition \eqref{natcond} translates thus into
\begin{equation}
\label{Hg}
H \ll g M_P.
\end{equation}
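In summary, combining \eqref{natcond}, \eqref{LIRH} and \eqref{mWGC} gives the chain
\[
H \sim \Lambda_{IR} \ll \Lambda_{UV} \lesssim g M_p ,
\]
from which \eqref{Hg} follows.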
Alternatively, as explained in \cite{Cribiori:2020use}, one can arrive at the same conclusion by remaining agnostic about \eqref{natcond} and \eqref{LIRH}, but by asking that corrections to the two-derivative effective action are suppressed, thus giving $H\ll \Lambda_{UV}$, and by then invoking the weak gravity conjecture.
If the weak gravity conjecture holds, the relation \eqref{Hg} is a consistency condition for any effective theory on a positive energy background and with an abelian gauge coupling. Clearly, it can be violated if the vacuum energy of the theory is such that
\begin{equation}
\label{Hquant}
H \sim g M_p,
\end{equation}
with no parameter which can be arbitrarily tuned entering the relation. The point is then whether or not this situation is indeed realised in concrete examples. As shown in \cite{Cribiori:2020use,DallAgata:2021nnr}, this is realised in de Sitter critical points (regardless of stability) of extended supergravity with a vanishing gravitino mass and with an unbroken abelian gauge group. In other words, \cite{Cribiori:2020use,DallAgata:2021nnr} showed that the Hubble scale in those extended supergravity models is quantised in terms of the UV cutoff given by the magnetic weak gravity conjecture. This implies that such models are not compatible with the condition \eqref{natcond}, or equivalently they are not protected against corrections, and thus cannot be consistent effective theories.\footnote{The same argument has been used in \cite{Cribiori:2020wch} to show that pure Fayet-Iliopoulos terms in $N=1$ supergravity are in the swampland, as it was already argued by \cite{Komargodski:2009pc} enforcing the absence of global symmetries. This is yet another confirmation of the fact that swampland conjectures are related to one another. } (They can still be consistent truncations.)
This can also be seen as a manifestation of the Dine-Seiberg problem \cite{Dine:1985he}, in the sense that the vacuum of the theory lies outside the region in which corrections are under control.
Before reviewing the analysis of \cite{Cribiori:2020use,DallAgata:2021nnr} showing that de Sitter critical points in $N=2$ supergravity with vanishing gravitino mass are of the type \eqref{Hquant}, let us mention possible loopholes in the argument above. First, it relies on the original formulation of the weak gravity conjecture in flat space. It might happen that corrections proportional to the spacetime curvature modify \eqref{mWGC} in such a way that the argument can no longer be applied. However, as long as the curvature is small enough that we can safely assume gravity to behave classically, it is reasonable to expect that corrections due to this effect are suppressed and can be neglected in first approximation. This seems to be in line with the analysis of \cite{Huang:2006hc} (see also \cite{Antoniadis:2020xso}). Another reason for concern is related to the condition \eqref{natcond}, which might not hold in an effective theory coupled to gravity, or at least not parametrically. In other words, it might be that gravity necessarily introduces an IR/UV mixing already at the level of the cutoff scales. This idea has been recently made precise in \cite{Castellano:2021mmx} (see \cite{Cohen:1998zx} for earlier work). It is a possibility we cannot exclude and whose effects on the argument would be interesting to investigate. However, as explained in \cite{Cribiori:2020use}, one can arrive at \eqref{Hg} even without starting from \eqref{natcond}, but just by requiring consistency of the two-derivative effective action.
\subsection{The argument in $N=2$ supergravity}
\label{argN=2}
In this section, we review the general argument of \cite{DallAgata:2021nnr} showing that de Sitter critical points of $N=2$ supergravity with a vanishing gravitino mass (or parametrically lighter than the Hubble scale) are of the type \eqref{Hquant}, and thus cannot be used as consistent effective theories, according to the weak gravity conjecture. The argument is rather general and extends the analysis of \cite{Cribiori:2020use}. Besides asking for a vanishing gravitino mass, it requires the presence of an unbroken abelian gauge group in the vacuum, in order for the weak gravity conjecture to apply, but it covers both stable and unstable critical points. We follow the conventions of \cite{Ceresole:1995ca,Andrianopoli:1996cm}.
The sigma-trace of the gravitino mass matrix in $N=2$ supergravity is
\begin{equation}
S^x \equiv S_{AB} {(\sigma^{x})_C}^A \epsilon^{BC} = i \mathcal{P}^x_\Lambda L^\Lambda
\end{equation}
and it contributes to the scalar potential with a negative sign, $- 4S^x \bar S^x$. It is the only negative definite contribution to the vacuum energy and it vanishes if (sum over $x=1,2,3$ understood)
\begin{equation}
| \mathcal{P}^x_\Lambda L^\Lambda|^2=0.
\end{equation}
Under this assumption, the complete $N=2$ scalar potential,
\begin{equation}
\label{VN=2el}
\mathcal{V} = g_{i \bar\jmath} k^i_\Lambda k^{\bar \jmath}_\Sigma \bar L^\Lambda L^\Sigma + 4 h_{uv} k^u_\Lambda k^v_\Sigma \bar L^\Lambda L^\Sigma + (U^{\Lambda \Sigma} - 3 \bar L^\Lambda L^\Sigma) \mathcal{P}^x_\Lambda \mathcal{P}^x_\Sigma,
\end{equation}
reduces to
\begin{equation}
\begin{aligned}
\label{VN=2dS1}
\mathcal{V}_{dS} =-\frac12 ({\rm Im}\mathcal{N}^{-1})^{\Lambda \Sigma} (\mathcal{P}^x_\Lambda \mathcal{P}^x_\Sigma + \mathcal{P}^0_\Lambda \mathcal{P}^0_\Sigma) + 4 h_{uv} k^u_\Lambda k^v_\Sigma \bar L^\Lambda L^\Sigma ,
\end{aligned}
\end{equation}
where we used that the matrix $U^{\Lambda \Sigma}$ is defined as $U^{\Lambda \Sigma} = -\frac12 ({\rm Im}\mathcal{N}^{-1})^{\Lambda \Sigma} - \bar L^\Lambda L^\Sigma$ and we employed the special geometry relation $g_{i \bar\jmath} k^i_\Lambda k^{\bar \jmath}_\Sigma \bar L^\Lambda L^\Sigma = - \frac12 ({\rm Im} \mathcal{N}^{-1})^{\Lambda \Sigma}\mathcal{P}^0_\Lambda \mathcal{P}^0_\Sigma$, which can be derived from $\mathcal{P}^0_\Lambda L^\Lambda=0$.
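Explicitly, the reduction of the term quadratic in $\mathcal{P}^x$ can be sketched as follows, using the above definition of $U^{\Lambda \Sigma}$ together with $|\mathcal{P}^x_\Lambda L^\Lambda|^2 = 0$:
\[
(U^{\Lambda \Sigma} - 3 \bar L^\Lambda L^\Sigma) \mathcal{P}^x_\Lambda \mathcal{P}^x_\Sigma = -\frac12 ({\rm Im}\mathcal{N}^{-1})^{\Lambda \Sigma} \mathcal{P}^x_\Lambda \mathcal{P}^x_\Sigma - 4 |\mathcal{P}^x_\Lambda L^\Lambda|^2 = -\frac12 ({\rm Im}\mathcal{N}^{-1})^{\Lambda \Sigma} \mathcal{P}^x_\Lambda \mathcal{P}^x_\Sigma ,
\]
while the first term of \eqref{VN=2el} supplies the $\mathcal{P}^0_\Lambda \mathcal{P}^0_\Sigma$ piece through the quoted special geometry relation.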
Since the last term in \eqref{VN=2dS1} is positive definite, we can write
\begin{equation}
\mathcal{V}_{dS} \geq -\frac 12 ({\rm Im}\mathcal{N}^{-1})^{\Lambda \Sigma} (\mathcal{P}^x_\Lambda \mathcal{P}^x_\Sigma + \mathcal{P}^0_\Lambda \mathcal{P}^0_\Sigma),
\end{equation}
and we recall that the matrix $({\rm Im} \mathcal{N}^{-1})^{\Lambda \Sigma}$ is negative definite.
Now, we want to recast this expression in terms of the gravitino charge and gauge coupling, in order to apply the weak gravity conjecture. We follow a manifestly symplectic-covariant procedure, similar to that of \cite{Cribiori:2022trc}, which was performed for supersymmetric anti-de Sitter vacua. We introduce an SU(2) charge matrix
\begin{equation}
2 {{\mathcal{Q}_\Lambda}_A}^B = \mathcal{P}^0_\Lambda \delta_A^B + {(\sigma_x)_A}^B \mathcal{P}_\Lambda^x, \qquad s.t. \quad {\rm Tr}\, \mathcal{Q}_\Lambda \mathcal{Q}_\Sigma = \frac12 \left(\mathcal{P}^0_\Lambda \mathcal{P}^0_\Sigma + \mathcal{P}^x_\Lambda \mathcal{P}^x_\Sigma\right).
\end{equation}
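The trace relation above is a simple consequence of the tracelessness and mutual orthogonality of the Pauli matrices. As a quick numerical sanity check, the sketch below builds the charge matrices from randomly sampled moment maps (the number of vectors and the sampled values are arbitrary, not taken from any specific model):

```python
import numpy as np

# Pauli matrices sigma^x, x = 1, 2, 3
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

rng = np.random.default_rng(0)
nv = 4                          # number of vectors (Lambda index), arbitrary
P0 = rng.normal(size=nv)        # sample moment maps P^0_Lambda
Px = rng.normal(size=(3, nv))   # sample SU(2) moment maps P^x_Lambda

# Charge matrices: 2 Q_Lambda = P^0_Lambda * 1 + sigma^x P^x_Lambda
Q = 0.5 * (P0[:, None, None] * np.eye(2)
           + np.einsum('xab,xL->Lab', sigma, Px))

lhs = np.einsum('Lab,Sba->LS', Q, Q).real  # Tr Q_Lambda Q_Sigma
rhs = 0.5 * (np.outer(P0, P0) + np.einsum('xL,xS->LS', Px, Px))
assert np.allclose(lhs, rhs)
```

The identity holds for any number of vectors, since it only uses ${\rm Tr}\,\sigma_x=0$ and ${\rm Tr}\,\sigma_x\sigma_y=2\delta_{xy}$.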
By employing the projectors ${{P^\parallel}^\Lambda}_\Sigma$, ${{P^\perp}^\Lambda}_\Sigma$ defined as in \cite{Cribiori:2020use}, we can split $\mathcal{Q}_\Lambda = \mathcal{Q}_\Lambda^\parallel + \mathcal{Q}_\Lambda^\perp$, where
\begin{equation}
\label{Qparallel}
{{\mathcal{Q}_\Lambda^\parallel}_A}^B = \Theta_\Lambda {q_A}^B, \qquad \text{s.t.} \quad {\rm Tr}\, \mathcal{Q}^\parallel_\Lambda \mathcal{Q}^\parallel_\Sigma = \Theta_\Lambda \Theta_\Sigma {\rm Tr}(q^2).
\end{equation}
Here, ${q_A}^B$ is the SU(2) charge entering the gravitino covariant derivative as
\begin{equation}
D_\mu \psi_{\nu\,A} = \dots+ i \tilde A_\mu {q_A}^B \psi_{\nu \, B},
\end{equation}
where $\tilde A_\mu = \Theta_\Lambda A^\Lambda_\mu$ is the combination of vectors $A^\Lambda_\mu$ with coefficients $\Theta_\Lambda$ which is gauging the weak gravity U$(1)$ group. By canonically normalising the kinetic term of $\tilde A_\mu$, we can then read off the gauge coupling
\begin{equation}
\label{gcoup}
g^2 = - \Theta_\Lambda( {\rm Im} \mathcal{N}^{-1})^{\Lambda \Sigma} \Theta_\Sigma.
\end{equation}
Using the orthogonality of the projectors, the bound on the vacuum energy can thus be rewritten as
\begin{equation}
\begin{aligned}
\mathcal{V}_{dS} &\geq - ({\rm Im}\mathcal{N}^{-1})^{\Lambda \Sigma} {\rm Tr} \, \mathcal{Q}_\Lambda \mathcal{Q}_\Sigma \\
&=- ({\rm Im}\mathcal{N}^{-1})^{\Lambda \Sigma} ({\rm Tr} \, \mathcal{Q}^\parallel_\Lambda \mathcal{Q}^\parallel_\Sigma +{\rm Tr} \, \mathcal{Q}^\perp_\Lambda \mathcal{Q}^\perp_\Sigma ) \\
&\geq - ({\rm Im}\mathcal{N}^{-1})^{\Lambda \Sigma} {\rm Tr} \, \mathcal{Q}^\parallel_\Lambda \mathcal{Q}^\parallel_\Sigma .
\end{aligned}
\end{equation}
Employing \eqref{Qparallel} and \eqref{gcoup}, we eventually get
\begin{equation}
\mathcal{V}_{dS} \geq g^2 {\rm Tr} (q^2) \gtrsim {\rm Tr} (q^2) \Lambda^2_{UV},
\end{equation}
where in the last step we enforced the magnetic weak gravity conjecture, which bounds the UV cutoff as $\Lambda_{UV} \lesssim g M_P$. These vacua are of the form \eqref{Hquant} and thus cannot be consistent effective field theory descriptions. Notice that we are assuming some form of charge quantisation for the eigenvalues of ${q_A}^B$, but the precise details are not relevant for the sake of the argument. The analogous proof for $N=8$ de Sitter critical points can be found in \cite{DallAgata:2021nnr}.
We see that asking for massless but charged gravitini on a positive energy background leads to an inconsistency, according to the weak gravity conjecture. This fact is also in agreement with the so-called Festina Lente bound \cite{Montero:2019ekk,Montero:2021otb}, which proposes that $m^2\gtrsim \sqrt 6 g q H $ for every particle of mass $m$ and charge $q$. Thus, it rules out massless charged gravitini as well, albeit from a different perspective. Nevertheless, one expects all of the swampland considerations to be related to one another.
\subsubsection{Example: stable de Sitter from U$(1)_R\times$SO$(2,1)$ gauging}
Historically, the first examples of stable de Sitter vacua in $N=2$ supergravity have been proposed in \cite{Fre:2002pd}. One can check \cite{Cribiori:2020use} that the vacuum energy has precisely the form \eqref{Hquant}, but the gravitino mass vanishes at the critical point. Thus, the argument of the previous section applies and shows that those vacua are in tension with the weak gravity conjecture. As an illustrative example, below we review the simplest model of \cite{Fre:2002pd} from this perspective.
The model has $n_V=3$ vector multiplets and no hyper multiplet. The $3+1$ vectors are gauging an SO$(2,1)\times$U$(1)_R$ isometry of the scalar manifold $\frac{{\rm SU}(1,1)}{{\rm U}(1)}\times \frac{{\rm SO}(2,2)}{{\rm SO}(2)\times {\rm SO}(2)}$. One can start from a basis with prepotential
\begin{equation}
\mathcal{F} = -\frac12 \frac{X^1\left(\left(X^2\right)^2 - \left(X^3\right)^2\right)}{X^0}.
\end{equation}
In such a symplectic frame, the generators of SO$(2,1)$ are
\begin{equation}
T_0 =\scalebox{.65}{ $ \left(\begin{array}{cccccccc}
&&1&&&&&\\
&&&&&&-1&\\
\frac12&&&&&-1&&\\
&&&&&&&\\
&&&&&&-\frac12&\\
&&-\frac12&&&&&\\
&-\frac12&&&-1&&&\\
&&&&&&&
\end{array}
\right)$},
\quad
T_1 =\scalebox{.65}{ $ \left(\begin{array}{cccccccc}
&&-1&&&&&\\
&&&&&&1&\\
\frac12&&&&&1&&\\
&&&&&&&\\
&&&&&&-\frac12&\\
&&-\frac12&&&&&\\
&-\frac12&&&1&&&\\
&&&&&&&
\end{array}
\right)$},
\quad
T_2 = \scalebox{.65}{ $\left(\begin{array}{cccccccc}
-1&&&&&&&\\
&-1&&&&&&\\
&&&&&&&\\
&&&&&&&\\
&&&&1&&&\\
&&&&&1&&\\
&&&&&&&\\
&&&&&&&
\end{array}
\right),$}
\end{equation}
and we collect them into $T_\Lambda = (T_0, T_1, T_2, 0)$ for later convenience. When acting on $V^M=(X^\Lambda, F_\Lambda)$, the ${(T_\Lambda)^M}_N$ mix $X^\Lambda$ with $F_\Lambda$, i.e.~they generate so-called non-perturbative symmetries, corresponding to gaugings which are not purely electric. To use the standard formula for the $N=2$ scalar potential \eqref{VN=2el}, we need to rotate the sections $V^M$ with an appropriate symplectic matrix,
\begin{equation}
V^M\to \tilde V^M = {S^M}_N V^N,
\end{equation}
where
\begin{equation}
{S^M}_N= \scalebox{.75}{ $\left(\begin{array}{cccccccc}
\frac12&&&&&1&&\\
-\frac12&&&&&1&&\\
&&1&&&&&\\
&&&\cos \delta&&&&\sin \delta\\
&-\frac12&&&1&&&\\
&-\frac12&&&-1&&&\\
&&&&&&1&\\
&&&-\sin\delta&&&&\cos \delta\\
\end{array}
\right)$}.
\end{equation}
Here, $\delta$ is a parameter defining the embedding of the U$(1)_R$ factor into Sp$(8,\mathbb{R})$; it is historically known as the de Roo-Wagemans angle \cite{deRoo:1985jh}. In the rotated basis the prepotential does not exist, but the new $X^\Lambda$ and $F_\Lambda$ are not mixed by the action of the SO$(2,1)$ generators. From this point on, we work in this basis for the symplectic sections, omitting the tilde for convenience.
We now look at the scalar potential. Given that there are no hyper multiplets in the setup, the second term in the scalar potential \eqref{VN=2el} is vanishing, but the others are not. We analyse them separately. The last term (we denote it $\mathcal{V}_F$ below) is simpler, since it is associated with the U$(1)_R$ gauging corresponding to a Fayet--Iliopoulos term. Indeed, we have a constant prepotential
\begin{equation}
\mathcal{P}^x_\Lambda= e_F \delta^{3x} \eta_{3\Lambda},
\end{equation}
with charge $e_F$, giving rise to the contribution
\begin{equation}
\mathcal{V}_F = e_F^2 \frac{(\cos \delta + x^1 \sin \delta )^2 + (y^1)^2\sin^2\delta }{2y^1},
\end{equation}
where we set $z^i = \frac{X^i}{X^0} = x^i-i y^i$. Next, we concentrate on the first term in \eqref{VN=2el} (we denote it $\mathcal{V}_D$ below). This is non-vanishing since it corresponds to the gauging of SO$(2,1)$. The special K\"ahler prepotentials can be found with the general formula \cite{Andrianopoli:1996cm}\footnote{We are using the symplectic metric $\Omega_{MN} = \left(
\begin{array}{cc}
0&-1\\
1&0\\
\end{array}
\right).
$}
\begin{equation}
\mathcal{P}^0_\Lambda = e^K \bar V^M \Omega_{MN} {(T_\Lambda)^N}_P V^P,
\end{equation}
and in turn the Killing vectors are computed as
\begin{equation}
k^i_{\Lambda} = i g^{i\bar\jmath}\partial_{\bar \jmath} \mathcal{P}^0_\Lambda.
\end{equation}
One can check that the consistency conditions $L^\Lambda \mathcal{P}^0_\Lambda = 0 =k^i_\Lambda L^\Lambda$ are satisfied. The contribution to the scalar potential is then
\begin{equation}
\mathcal{V}_D = e_D^2 \frac{(x^3)^2 + (y^2)^2}{2y^1((y^2)^2-(y^3)^2)},
\end{equation}
where we inserted a charge $e_D$ to keep track of the SO(2,1) gauging. The total scalar potential is the sum of these two contributions
\begin{equation}
\mathcal{V} = \mathcal{V}_F + \mathcal{V}_D.
\end{equation}
It has a stable de Sitter vacuum at
\begin{equation}
x^1 = - \frac{1}{\tan \delta},\qquad y^1 =\frac{e_D}{e_F \sin \delta},\qquad x^3 = 0, \qquad y^3 =0,
\end{equation}
with vacuum energy
\begin{equation}
\label{VexdS}
\mathcal{V}_{dS} = e_D e_F \sin \delta.
\end{equation}
Stability is explicitly verified in \cite{Fre:2002pd}.
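Both the location of the critical point and the value \eqref{VexdS} of the vacuum energy can be confirmed numerically from $\mathcal{V}_F$ and $\mathcal{V}_D$. The sketch below uses arbitrary sample values for the charges and the angle (any choice with $e_D e_F \sin\delta>0$ works); $x^2$ does not appear in the potential and $y^2$ is flat at the critical point:

```python
import numpy as np

# arbitrary sample values for the charges and the de Roo-Wagemans angle
eF, eD, delta = 0.7, 1.3, 0.9
y2 = 1.5  # flat direction: the vacuum energy does not depend on it

def V(x1, y1, x3, y3):
    VF = eF**2 * ((np.cos(delta) + x1*np.sin(delta))**2
                  + y1**2 * np.sin(delta)**2) / (2*y1)
    VD = eD**2 * (x3**2 + y2**2) / (2*y1*(y2**2 - y3**2))
    return VF + VD

# claimed critical point (x^1, y^1, x^3, y^3)
p = np.array([-1/np.tan(delta), eD/(eF*np.sin(delta)), 0.0, 0.0])

# vacuum energy matches V_dS = e_D e_F sin(delta)
assert np.isclose(V(*p), eD*eF*np.sin(delta))

# central finite-difference gradient vanishes at the critical point
h = 1e-6
grad = [(V(*(p + h*e)) - V(*(p - h*e))) / (2*h) for e in np.eye(4)]
assert np.allclose(grad, 0, atol=1e-6)
```

At the critical point $\mathcal{V}_F$ and $\mathcal{V}_D$ each contribute $\frac12 e_D e_F \sin\delta$.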
Notice that the cosmological constant is strictly positive, since on the vacuum the K\"ahler potential is $K = - \log (4y^1 (y^2)^2) = -\log \left(\frac{4e_D^2 (y^2)^2}{\mathcal{V}_{dS}}\right)$.
One can check that on the vacuum a subgroup U$(1)_R\times$U$(1)$ of the original gauge group is unbroken. Therefore, we can enforce the weak gravity conjecture with respect to the U$(1)_R$ factor. Furthermore, the gravitino mass is vanishing and thus the argument of section \ref{argN=2} applies.
It remains only to canonically normalise the kinetic term of the U$(1)_R$ vector and correctly identify the gauge coupling and charge. In the language of the general argument given previously, we have
\begin{equation}
2 {{\mathcal{Q}^\parallel_\Lambda}_A}^B = \left(0,0,0,e_F {(\sigma^3)_A}^B\right) \equiv 2 \Theta_\Lambda {q_A}^B,
\end{equation}
meaning
\begin{equation}
{q_A}^B = {(\sigma^3)_A}^B \mathrm{q},\qquad \Theta_\Lambda = \left(0,0,0,\frac{e_F}{2\mathrm{q}}\right),\qquad {\rm Tr}\,(q^2) = 2 \mathrm{q}^2,
\end{equation}
where $\mathrm{q}$ is the charge.
The kinetic matrix on the vacuum is (in the properly rotated symplectic frame)
\begin{equation}
({\rm Im}\mathcal{N}^{-1})^{\Lambda \Sigma} = -\sin \delta\,{\rm diag} \left( \frac{e_F}{e_D},\frac{e_F}{e_D},\frac{e_F}{e_D},\frac{e_D}{e_F}\right),
\end{equation}
and we thus find the gauge coupling
\begin{equation}
4g^2 \mathrm{q}^2 = e_D e_F \sin \delta.
\end{equation}
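This identification follows directly from \eqref{gcoup} and the kinetic matrix above; a short symbolic check (treating $e_F$, $e_D$, $\mathrm{q}$ and $\delta$ as positive symbols):

```python
import sympy as sp

eF, eD, q, delta = sp.symbols('e_F e_D q delta', positive=True)

# Theta_Lambda = (0, 0, 0, e_F/(2 q)) and the kinetic matrix on the vacuum
Theta = sp.Matrix([0, 0, 0, eF/(2*q)])
ImNinv = -sp.sin(delta) * sp.diag(eF/eD, eF/eD, eF/eD, eD/eF)

g2 = -(Theta.T * ImNinv * Theta)[0]  # g^2 = -Theta (Im N^-1) Theta
assert sp.simplify(4*g2*q**2 - eD*eF*sp.sin(delta)) == 0
```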
Eventually, we see that the vacuum energy \eqref{VexdS} has precisely the form \eqref{Hquant}. If the weak gravity conjecture holds, these vacua are not protected against corrections and thus are not within the controlled region of the effective description.
\section{The gravitino conjecture}
\label{sec_gravconj}
A crucial assumption in the discussion of the previous section is that the gravitino mass is vanishing (or very light with respect to the Hubble scale). It is then natural to wonder whether the limit of parametrically small or even vanishing gravitino mass can be problematic for a given effective description more generally, regardless of the background. That this is indeed the case has been conjectured in \cite{Cribiori:2021gbf,Castellano:2021yye}, and it is the topic of the present section. We stress that here we will be concerned with the limit, while the argument of the previous section involved the actual evaluation of the gravitino mass at a specific (critical) point. Even if related, the two operations are in principle different.
\subsection{The statement and its motivation}
The gravitino conjecture \cite{Cribiori:2021gbf,Castellano:2021yye} states that the limit of vanishing gravitino mass, $m_{3/2}$, corresponds to a breakdown of the effective field theory description
\begin{equation}
m_{3/2}\to 0 \qquad \Rightarrow \qquad \text{swampland},
\end{equation}
as it is associated to the emergence of an infinite tower of states becoming light in the same limit. Some motivations behind this statement and based on \cite{Cribiori:2020use,DallAgata:2021nnr} have already been reviewed in the previous section. Below, we would like to recall briefly some more.
Clearly, for supersymmetric anti-de Sitter vacua, the statement is equivalent to the anti-de Sitter distance conjecture \cite{Lust:2019zwm} and thus, at least in this specific case, it shares the same motivation. However, the two conjectures crucially differ on other backgrounds and indeed they lead to different predictions. Yet another motivation can be drawn from the idea of supersymmetric protection \cite{Palti:2020qlc}, which states that the superpotential of an $N=1$ theory cannot vanish, unless the theory is related in a specific way to a parent one with more supersymmetry. If the superpotential $W$ cannot vanish, the same applies to the gravitino mass, since in $N=1$ this is given by $m_{3/2}^2 = e^K W \bar W$. Instead, if the superpotential is exactly vanishing everywhere on the moduli space, the background is supersymmetric Minkowski, but this is not excluded by the conjecture, since the latter is really a statement about the limit (in the same way as the anti-de Sitter distance conjecture does not forbid supersymmetric Minkowski backgrounds).
Let us notice that unitarity alone seems not to be enough to motivate the gravitino conjecture on a positive background. Indeed, unitarity bounds for the gravitino have been studied in \cite{Deser:2001pe} and one can clearly see that there is no non-trivial unitarity bound on the gravitino mass if the cosmological constant is positive. It thus seems that the arguments reviewed in the previous section and the Festina Lente bound are capturing some more information on quantum gravity on a positive background beyond unitarity.
In a quasi-flat universe, the gravitino mass is a good approximation for the supersymmetry breaking scale,
\begin{equation}
M_{\text{susy}}^2 \simeq m_{3/2} M_P.
\end{equation}
The fact that supersymmetry has not been observed so far is thus very much compatible with the gravitino conjecture, which might point towards a scenario with supersymmetry breaking at a high scale. This is naturally realised in string theory through a mechanism known as brane supersymmetry breaking \cite{Sugimoto:1999tx,Antoniadis:1999xk} (see \cite{Basile:2021vxh} for a recent review in a swampland perspective), in which the supersymmetry breaking scale is the string scale. Due to this fact, the effective descriptions of these models typically require a non-linear realisation of supersymmetry at low energies, which is not directly captured by the framework of \cite{Cribiori:2021gbf,Castellano:2021yye}. It would be interesting to extend the gravitino conjecture also to these setups.
Several examples in which the conjecture is checked explicitly can be found in \cite{Cribiori:2020use,DallAgata:2021nnr}. Furthermore, the new string construction proposed in \cite{Coudarchet:2021qwc} seems also to be compatible with it, since it is argued that supersymmetry is unavoidably (and softly) broken in the closed string sector. Below, we discuss and review some new and old examples, and we summarise various results. For concreteness, we will assume that the mass of the states becoming light is related to the gravitino mass as (in Planck units)
\begin{equation}
\label{m32n}
m_\text{tower} \sim m_{3/2}^n,
\end{equation}
with $n$ some model-dependent parameter. Clearly, this assumption might not hold in general, but it does indeed hold in the examples considered in \cite{Cribiori:2021gbf,Castellano:2021yye}. Furthermore, one can try to find bounds for $n$ or relate it to the parameters entering other swampland conjectures, as done in \cite{Castellano:2021yye}. It would also be interesting to check how the logarithmic corrections systematically proposed in \cite{Blumenhagen:2019vgj} can affect the analysis and the bounds on $n$.
\subsection{$N=1$ models}
As it is well-known, the structure of four-dimensional $N=1$ supergravity with chiral and vector multiplets is completely fixed by three ingredients: a real gauge invariant function $G(z,\bar z)$, a holomorphic gauge kinetic function $f_{ab}(z)$, and a choice of holomorphic Killing vectors $k_a^i(z)$ generating the analytic isometries of the scalar manifold that are gauged by the vectors.
Let us focus on the first ingredient, since it is closely related to the gravitino mass. Indeed, $G(z,\bar z)$ is defined as (assuming $W \neq 0$)
\begin{equation}
G(z,\bar z) = K + \log W \bar W,
\end{equation}
where $K$ and $W$ are the K\"ahler potential and superpotential respectively. This expression stems from the known fact that in supergravity, contrary to global supersymmetry, $K$ and $W$ are not functions but sections. In particular, they both vary under K\"ahler transformations. This means that they are not physical quantities: any statement regarding $K$ and $W$ separately depends crucially on a specific choice of K\"ahler frame. Instead, the only actual physical quantity to be considered is their K\"ahler invariant combination, $G(z,\bar z)$. The gravitino mass is then
\begin{equation}
m_{3/2}^2 = e^G
\end{equation}
and it is indeed K\"ahler-invariant.
We can give a geometric interpretation of the gravitino conjecture. The scalar manifold of $N=1$ supergravity is restricted by supersymmetry to be K\"ahler-Hodge (while in the global case it is just K\"ahler). This is a K\"ahler manifold $\mathcal{M}$ together with a holomorphic line bundle $\mathcal{L}\to \mathcal{M}$, whose first Chern class is equal to the cohomology class of the K\"ahler form of $\mathcal{M}$, namely $c_1(\mathcal{L}) = [\mathcal{K}]$. Given a Hermitian metric $h(z,\bar z)$ on the fiber, the first Chern class is $c_1(\mathcal{L}) = \frac{i}{2\pi} \bar \partial \partial \log h$. Setting it equal to the K\"ahler form $\mathcal{K} = \frac{i}{2\pi} \bar \partial \partial K$, one learns that for a K\"ahler-Hodge manifold the fiber metric is the exponential of the K\"ahler potential, $h = e^K$. The gravitino conjecture can then be geometrically rephrased as the statement that the limit of vanishing norm of a (non-vanishing) section of $\mathcal{L}$ leads to the breakdown of the effective theory.
Indeed, consider a holomorphic section $W(z)$, which is in fact the superpotential of the $N=1$ theory. Its norm is defined as $W h \bar W = || W||^2$. Thanks to holomorphicity, one can write $ \frac{i}{2\pi} \bar \partial \partial \log h = \frac{i}{2\pi} \bar \partial \partial \log ||W||^2 $, identifying thus $G(z,\bar z) = K + \log W \bar W = \log ||W||^2 $. The gravitino mass is then
\begin{equation}
m_{3/2}^2 = e^G = || W||^2.
\end{equation}
The limit of vanishing gravitino mass corresponds to the limit of vanishing norm of the section $W(z)$. The gravitino conjecture implies that K\"ahler-Hodge manifolds compatible with quantum gravity are those in which this norm cannot be continuously sent to zero. Notice that special K\"ahler manifolds, parametrised by scalars of $N=2$ vector multiplets, are also K\"ahler-Hodge and thus this picture can be extended directly. One possible way in which the limit is obstructed consists in the existence of a lower positive bound for the gravitino mass. Simple models with this property are presented in section \ref{sec_modinvmod}. A possible refinement of the conjecture, inspired by the idea that domestic geometry is the natural framework underlying supergravity theories, as proposed in \cite{Cecotti:2021cvv}, consists in postulating that also the (dual) limit of infinite gravitino mass (or norm $||W||^2$) is pathological. This seems to be the case in a simple STU model reviewed in section \ref{stumodel}.
In order to test explicitly the conjecture, one should identify the tower of states becoming light in the limit of vanishing gravitino mass. Given that we are working within the framework of supergravity, a natural candidate for the first light tower are Kaluza-Klein states. Even if there might be subtleties in the precise identification of this scale (see e.g.~\cite{DeLuca:2021mcj} for recent work in this direction), we will then assume that their mass is parametrically related to the volume of the internal manifold. In Calabi-Yau or orbifold compactifications, the volume ({\rm Vol}) typically appears in the K\"ahler potential as
\begin{equation}
K(z,\bar z) = - \alpha \log {\rm Vol} + K'(z, \bar z),
\end{equation}
where $\alpha$ is a model-dependent parameter, while the remaining part $K'$ will not play any role. In compactifications to four dimensions, one has that $m_{KK} \sim {\rm Vol}^{-\frac 23}$. We will also assume that the superpotential scales as
\begin{equation}
W \sim {\rm Vol}^{\frac \beta2},
\end{equation}
with $\beta$ a second model-dependent parameter. If $m_\text{tower} \sim m_{KK}$, we thus get a relation of the type \eqref{m32n}, with
\begin{equation}
n = \frac{4}{3(\alpha-\beta)}.
\end{equation}
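The exponent follows directly from the assumed scalings $m_{3/2}^2 \sim {\rm Vol}^{\beta-\alpha}$ and $m_{KK} \sim {\rm Vol}^{-\frac23}$; a symbolic derivation, working with logarithms and parametrising $\log {\rm Vol} = t$:

```python
import sympy as sp

alpha, beta, n, t = sp.symbols('alpha beta n t', positive=True)

# m_{3/2}^2 = e^K |W|^2 ~ Vol^(beta - alpha)  and  m_KK ~ Vol^(-2/3)
log_m32 = sp.Rational(1, 2)*(beta - alpha)*t
log_mKK = -sp.Rational(2, 3)*t

# impose m_KK ~ m_{3/2}^n and solve for the exponent n
sol = sp.solve(sp.Eq(log_mKK, n*log_m32), n)[0]
assert sp.simplify(sol - sp.Rational(4, 3)/(alpha - beta)) == 0
```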
This behavior can now be checked in concrete examples. As noted in \cite{Cribiori:2021gbf}, for heterotic compactifications to four-dimensional Minkowski spacetime one finds $n=\frac 43$, for type IIB GKP orientifolds one has $n=\frac23$ and for Scherk-Schwarz compactifications $n=4$. As discussed in \cite{Castellano:2021yye}, for general CY$_3$ orientifolds one finds a lower bound $n\geq \frac13$, while for F-theory flux compactifications on CY$_4$ one has $n\geq \frac 14$; however, for toroidal orientifolds the range seems to be further restricted to $\frac 23 \leq n \leq 1$.
\subsubsection{Modular invariant models}
\label{sec_modinvmod}
We would like to briefly review a class of models, not discussed in \cite{Cribiori:2021gbf,Castellano:2021yye}, in which the gravitino conjecture is satisfied by construction, since the gravitino mass admits a positive lower bound while varying over the moduli space. In these models, the interactions are fixed by asking that the action is left invariant by modular transformations acting on the scalar fields. Their construction in $N=1$ supergravity dates back to \cite{Ferrara:1989bc,Ferrara:1989qb}, while their string theory origin has been explored in more detail shortly after \cite{Font:1990nt,Font:1990gx,Cvetic:1991qm}. Recently, they have been revisited in a swampland context in \cite{Gonzalo:2018guu}.
In the simplest version, there is a single chiral multiplet $T$ transforming under SL$(2,\mathbb{Z})$ as
\begin{equation}
T \to \frac{aT + b}{cT + d}.
\end{equation}
The most generic $N=1$ Lagrangian invariant under this transformation is then given by
\begin{equation}
G(T,\bar T) = - 3 \log (-i (T-\bar T)) + \log W \bar W,
\end{equation}
with
\begin{equation}
W(T) = \frac{H(T)}{\eta(T)^6}.
\end{equation}
Here, $H(T)$ is a modular invariant holomorphic function and $\eta(T)$ is the Dedekind eta function. It can be shown that, to avoid singularities in the fundamental domain, one has to choose
\begin{equation}
H(T) = (j(T) - 1728)^{\frac m2} j(T)^{\frac n3} P(j(T)),
\end{equation}
where $m$, $n$ are positive integers, and $P$ is a polynomial in the holomorphic Klein modular invariant form $j(T)$.
The origin of this superpotential is purely non-perturbative (gaugino condensation in heterotic compactifications). Therefore, the modulus $T$ which would be flat at the perturbative level is stabilised by non-perturbative corrections. The nature of these non-perturbative corrections is such that the decompactification limit, ${\rm Im} \, T \to \infty$, is prohibited, as the potential diverges in the same limit. This has to be contrasted with the standard behavior of perturbative potentials, which are typically vanishing at large distances. In other words, in these models a large distance limit is dynamically censored. This has consequences on the behavior of $m_{3/2}$ as a function of $T$: the limit of vanishing gravitino mass is prohibited and the gravitino conjecture is automatically satisfied.
As an illustration, let us consider the simplest superpotential which is realised in string compactifications,
\begin{equation}
W = \frac{1}{\eta(T)^6}.
\end{equation}
The gravitino mass is
\begin{equation}
m_{3/2}^2 = e^G = \frac{1}{(2{\rm Im} \,T)^3 \eta(T)^6\bar \eta(\bar T)^6}
\end{equation}
and it can be easily checked that it is minimised at ${\rm Im} \, T=1$ at a strictly positive value. A plot is reported in the figure below.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.6]{m32plot.pdf}
\end{center}
\end{figure}
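The existence of this strictly positive minimum can be checked numerically with a truncated $q$-product for the Dedekind eta function. The sketch below restricts to the imaginary axis ${\rm Re}\,T=0$ and also verifies the modular ($T\to -1/T$) symmetry of the gravitino mass:

```python
import numpy as np

def eta(t, nmax=60):
    """Dedekind eta at T = i*t (Re T = 0), via the truncated q-product."""
    q = np.exp(-2*np.pi*t)
    return q**(1/24) * np.prod([1 - q**k for k in range(1, nmax)])

def m32sq(t):
    # m_{3/2}^2 = e^G = 1 / ((2 Im T)^3 |eta(T)|^12) on the imaginary axis
    return 1.0 / ((2*t)**3 * eta(t)**12)

# modular symmetry T -> -1/T: m32sq(t) = m32sq(1/t)
assert np.isclose(m32sq(2.0), m32sq(0.5))

# Im T = 1 is the minimum along the axis, at a strictly positive value
assert m32sq(1.0) > 0
assert all(m32sq(1.0) < m32sq(t) for t in (0.5, 0.8, 1.25, 2.0))
```

The symmetry under $t\to 1/t$ follows from $\eta(i/t) = \sqrt{t}\,\eta(it)$, which already forces an extremum at ${\rm Im}\,T=1$.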
A natural generalisation would be to consider models with a non-trivial function $H(T)$ or with two chiral multiplets $S$ and $T$ \cite{Cvetic:1991qm}, described by $G(T,\bar T) = -\log(-i(S-\bar S))- 3 \log (-i (T-\bar T)) + \log W \bar W$, with $W =\frac{\Omega(S)H(T)}{\eta(T)^6}$.
\subsection{$N=2$ STU model}
\label{stumodel}
The so-called STU model is an instructive and yet simple example in which to test the gravitino conjecture, as done in \cite{Cribiori:2021gbf}. It can arise from heterotic string compactification on $K3\times T^2$ and we refer e.g.~to \cite{deWit:1995dmj} for more details. The four-dimensional low energy theory is $N=2$ supergravity with $n_V=3$ vector multiplets, whose scalars are coordinates of the manifold $\left(\frac{SU(1,1)}{U(1)}\right)^3$. The prepotential is
\begin{equation}
\mathcal{F} = \frac{X^1 X^2 X^3}{X^0}.
\end{equation}
We consider the case of a U$(1)_R\subset$ SU$(2)_R$ gauging with a constant moment map, namely a Fayet--Iliopoulos term. For definiteness, we can choose it as
\begin{equation}
\mathcal{P}^x_\Lambda = q \delta^{3x}\eta_{0\Lambda}.
\end{equation}
In the presence of only Fayet--Iliopoulos terms, the $N=2$ scalar potential can be recast into an $N=1$ form \cite{Andrianopoli:1996cm}. Indeed, from \eqref{VN=2el} with $k^i_\Lambda = 0 = k^u_\Lambda$ one finds\footnote{One has to use that $U^{\Lambda \Sigma} = g^{i\bar\jmath}f^\Lambda_i f^\Sigma_{\bar \jmath} = e^K g^{i\bar\jmath}D_i X^\Lambda \bar D_{\bar \jmath}\bar X^\Sigma$, where $g_{i\bar\jmath}$ is the special K\"ahler metric.}
\begin{equation}
\mathcal{V} = e^K \left(g^{i\bar \jmath}D_i W \bar D_{\bar \jmath}\bar W - 3 W \bar W\right),
\end{equation}
where we define the would be ``superpotential''
\begin{equation}
W = X^\Lambda \mathcal{P}^3_\Lambda.
\end{equation}
The K\"ahler potential is
\begin{equation}
K = -\log( S T U)= - \log {\rm Vol},
\end{equation}
where $S = -2{\rm Im} z^1$, $T = -2{\rm Im} z^2$ and $U = -2{\rm Im} z^3$ and {\rm Vol} is the volume of the compact manifold. One can easily check that the scalar potential is identically vanishing and thus the model is no-scale. The gravitino mass matrix is
\begin{equation}
S_{AB} = \frac i2 e^{\frac K2}W
\left(\begin{array}{cc}
0&1\\
1&0\\
\end{array}
\right).
\end{equation}
The mass of the gravitini is thus given by $m_{3/2}^2\simeq e^K W\bar W$, similarly to the $N=1$ case. Furthermore, the gauge coupling \eqref{gcoup} is
\begin{equation}
\label{gel}
g^2 = 2 e^{K} =2 {\rm Vol}^{-1}.
\end{equation}
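The no-scale property stated above can be verified symbolically: for $K=-\log(STU)$ and constant $W$, one finds $g^{i\bar\jmath}K_i K_{\bar\jmath}=3$, so the two terms in the potential cancel. A sketch in the gauge $X^0=1$, treating $\bar z^i$ as independent symbols:

```python
import sympy as sp

q = sp.symbols('q', positive=True)
z = sp.symbols('z1 z2 z3')
zb = sp.symbols('zb1 zb2 zb3')

# K = -sum_i log(i(z^i - zbar^i)), i.e. K = -log(S T U)
K = -sum(sp.log(sp.I*(zi - zbi)) for zi, zbi in zip(z, zb))

g = sp.Matrix(3, 3, lambda i, j: sp.diff(K, z[i], zb[j]))  # metric g_{i jbar}
ginv = g.inv()

W = q                                           # constant W from the FI term
DW = [sp.diff(W, zi) + sp.diff(K, zi)*W for zi in z]        # D_i W = K_i W
DWb = [sp.diff(W, zbi) + sp.diff(K, zbi)*W for zbi in zb]

V = sp.exp(K) * (sum(ginv[i, j]*DW[i]*DWb[j]
                     for i in range(3) for j in range(3)) - 3*W**2)
assert sp.simplify(V) == 0
```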
We see that the limit $m_{3/2}^2 \to 0$ is realised by ${\rm Vol} \to \infty$. Interestingly, in the same limit the gauge coupling vanishes and a global symmetry is restored
\begin{equation}
{\rm Vol} \to \infty\qquad \Rightarrow \qquad m_{3/2} \to 0 \quad \Leftrightarrow \quad g \to 0.
\end{equation}
This would be problematic, since there should be no global symmetries in quantum gravity. This is a simple example in which the gravitino conjecture is closely related to the absence of global symmetries, and it would be interesting to explore further other relations of this kind within the web of swampland conjectures. In the limit of large volume, one can identify the tower of states becoming light as Kaluza-Klein modes. Notice that the T-dual limit of vanishing volume (very large gravitino mass) would also be problematic, since in this case winding modes would become light and the supergravity approximation would not hold anymore. Correspondingly, the magnetic dual of the gauge coupling \eqref{gel} would then vanish.
\section{Discussion: de Sitter and gravitino mass in extended supergravity}
\label{sec_fincomm}
The general argument of \cite{Cribiori:2020use,DallAgata:2021nnr}, reviewed in section \ref{sec_WGCvsdS}, proves that de Sitter critical points of $N=2$ supergravity with vanishing gravitino mass have a cosmological constant of the order of the UV cutoff predicted by the magnetic weak gravity conjecture and thus cannot be consistent effective theories.
The assumption of a vanishing gravitino mass might seem restrictive, but in \cite{Cribiori:2020use,DallAgata:2021nnr} it is found to hold true in almost all of the examples considered from the literature. In particular, in \cite{Cribiori:2020use} it is shown to occur in de Sitter critical points of minimal coupling models and cubic prepotential models with only vector multiplets, both for abelian and non-abelian gauging, while in \cite{DallAgata:2021nnr} explicit models with hyper multiplets are constructed. A model with massive gravitini in which the argument does not apply is the second de Sitter vacuum of \cite{DallAgata:2012plb}, which is unstable. A general strategy to construct de Sitter vacua with massive gravitini in $N=2$ supergravity has been proposed in \cite{Catino:2013syn}, but finding explicit models remains challenging \cite{DallAgata:2021nnr}.
Notice also that the argument requires an unbroken abelian gauge group in the vacuum, but in case there are only non-abelian groups it is still possible to formulate a weaker version, as explained in detail in \cite{DallAgata:2021nnr}.
In \cite{DallAgata:2021nnr}, the argument has also been formulated in $N=8$ supergravity, and it should be possible to extend it to any $2<N<8$ theory. Assuming this step to be straightforward, we could conclude that, if the gravitino mass is vanishing or parametrically light compared to the Hubble scale, the resulting de Sitter critical points (regardless of stability) of extended supergravity are in the swampland due to the weak gravity conjecture. In turn, this would mean that $N=0,1$ supersymmetry at the Lagrangian level seem to be the most promising chances to obtain a consistent effective description for de Sitter critical points, if the gravitino mass is vanishing. Notice that this is compatible with the recent conjecture of \cite{Andriot:2022way} and has a nice parallel with the scale separation analysis of \cite{Cribiori:2022trc}. In particular, minimal supersymmetry in four dimensions is peculiar since it allows for the existence of a superpotential which might not be directly related to a gauge coupling. This possibility cannot occur in $N>2$, as it is most clearly illustrated in the example of the STU model reviewed in section \ref{stumodel}.
The analysis presented here was performed in four spacetime dimensions, but it would be interesting to extend it to higher dimensions. While we leave a detailed study for future work, we can provide an argument why de Sitter critical points of $d>4$, $N>0$ theories with vanishing gravitino mass suffer from the same problem discussed in section \ref{sec_WGCvsdS} and belong to the swampland, according to the weak gravity conjecture. First, one has to recall that in $d>4$ the supergravity scalar potential is always of order $\mathcal{O}(g^2)$, since it stems from a gauging procedure. Then, in any dimension and for any number of supercharges, the structure of the scalar potential is fixed by the supersymmetric Ward identity \cite{Cecotti:1984wn}, which has the schematic form
\begin{equation}
\mathcal{V} \sim \left(\delta (\text{spin-1/2})\right)^2 - \left(\delta (\text{spin-3/2})\right)^2.
\end{equation}
The precise numerical factors and indices entering this relation depend on $d$ (spacetime dimensions) and $N$ (number of supersymmetries), but crucially the relative negative sign between the two terms does not, since it corresponds to the gravitational contribution. Moreover, the second term in the relation above is precisely the gravitino mass, namely the supersymmetry transformation of the gravitino on the vacuum, with (Lorentz covariant) derivative and other maximal symmetry breaking terms turned off. Thus, if the gravitino mass is vanishing, the scalar potential is non-negative and of order $\mathcal{O}(g^2)$ for $d>4$. The argument of \cite{Cribiori:2020use,DallAgata:2021nnr} then applies, provided one shows that there is no arbitrarily tunable parameter in the expression of the vacuum energy.
\acknowledgments
I would like to thank G.~Dall'Agata, F.~Farakos, D.~L\"ust and M.~Scalisi for discussions. This work is supported by the Alexander von Humboldt foundation.
\section{Introduction}
As known, coherent states ($CS$) are widely and fruitfully utilized in
different areas of theoretical physics \cite{ab1}-\cite{ab5}. The $CS$, introduced
by Schr\"odinger and Glauber, turned out to be orbits of the Heisenberg-Weyl
group.
That observation allowed one to formulate, by analogy, a general definition of
$CS$ for any Lie group \cite{ab4}. A connection between the $CS$ and the
quantization of classical systems, in particular, systems with a curved phase
space, was also established \cite{ab6,ab7}. By origin, $CS$ are
quantum states, but, at the same time, they are parametrized by the points of
the phase space of a corresponding classical mechanics. Namely that
circumstance makes them
very convenient in the analysis of problems of the correspondence between the
quantum and the classical description. All that explains the interest both in
the investigation of general problems of the $CS$ theory and in the construction of
$CS$ of concrete groups. In the present work, the $CS$ of the groups $SU(N)$,
which are so important in physics, are built and investigated in a uniform way.
The $CS$ of the
group $SU(2)$, from that family, are well known. One can point out some of the
first references \cite{ab8}-\cite{ab12}, where those states were built on the
basis of the
well investigated structure of the $SU(2)$ matrices in the fundamental
representation. Another approach to
the $CS$ construction for the $SU(2)$ group was used by Berezin \cite{ab6,ab7}.
That approach is connected with the utilization of the representations of the
$SU(2)$ group in the space of polynomials of powers not greater than a
given one. A modification of the latter method in a gauge invariant form (with
an extended number of variables in the coset space or phase space) allows us to
build the $CS$ for all the groups $SU(N)$ in a uniform way.
We construct the $CS$
by means of orbits of highest weights in the space of polynomials of a
fixed power. The representations used are equivalent to the totally symmetric
irreducible unitary representations of the $SU(N)$ groups. The stationary
subgroups of the highest weights, in the case under consideration, are $U(N-1)$, so
that the $CS$ are parametrized by the points of the coset space $SU(N)/U(N-1)$,
which plays the role of the phase space of a corresponding classical mechanics,
and at the same time is the well known projective space $CP^{N-1}$. The
logarithm of the modulus of the $CS$ overlapping, being interpreted as a symmetric in
the space $CP^{N-1}$, generates the Fubini-Study metric in that space. The $CS$ form
an overcomplete system in the representation space and minimize an
invariant dispersion of the quadratic Casimir operator.
The classical limit is investigated in terms of the operator symbols which are
constructed as the mean values in the $CS$. The role of the Planck constant
is played by the quantity $h=P^{-1}$, where $P$ is the signature of the
representation. The
classical limit of the so-called star commutator of symbols generates the classical
Poisson bracket on the corresponding phase space. The present work is the
continuation of our papers \cite{ab13}-\cite{ab15}, where a part of the results was
preliminarily expounded.
\section{The construction of the $CS$ by means of the representations
of the $SU(N)$ groups on polynomials}
We are going to construct the $CS$ of the $SU(N)$ groups as orbits in some
irreducible representations of the groups, factorized with respect to
stationary subgroups. First, we describe the corresponding representations.
Let ${\bf C}^N$ be the $N$-dimensional space of complex rows
$z=(z_\mu),\:\mu=\overline{1,N}$, with the scalar
product $(z,z')_C=\sum_{\mu}\bar{z}_\mu z'_\mu$,
and $\widetilde{\bf C}^N$ be the dual space of complex columns with the
scalar product
$(\tilde{z},\tilde{z}')_{\widetilde{C}}=\sum_{\mu}\bar{\tilde{z}^\mu}\tilde{z}^
\mu$. The anti-isomorphism is given by the relation
$z\leftrightarrow\tilde{z}\Leftrightarrow\bar{z}_\mu=\tilde{z}^\mu $.
The mixed (Dirac) scalar product between elements of
${\bf C}^N$ and $\widetilde{\bf C}^N$ is defined by the equation:
\begin{equation} \label{e1}
\langle z',\tilde{z}\rangle =(\tilde{z}',\tilde{z})_{\widetilde{C}}=
\overline{(z',z)_C}=z'_\mu\tilde{z}^\mu\:.
\end{equation}
Let $g$ be matrices of the fundamental representation of the $SU(N)$ group.
This representation induces irreducible representations of the group in the
spaces $\Pi_P$ and $\widetilde{\Pi}_P$ of polynomials of a fixed power $P$
in the vectors $z$ and $\tilde{z}$, respectively, \begin{eqnarray}
T(g)\Psi_P(z)&=&\Psi_P(z_g),\: z_g=zg,\: \Psi_P\in\Pi_P\:,\nonumber\\
\widetilde{T}(g)\Psi_P(\tilde{z})&=&\Psi_P(\tilde{z}_g),\: \tilde{z}_g=g^{-1}
\tilde{z},\:\Psi_P(\tilde{z})\in\widetilde{\Pi}_P\:.\label{e2} \end{eqnarray}
\noindent The anti-isomorphism $z\leftrightarrow\tilde{z}$ induces the
correspondence $\Psi_P(\tilde{z})=\overline{\Psi_P(z)}$.
The representation (\ref{e2}) is equivalent to the one on totally symmetric
tensors of
signature $P$. So, we will further call $P$ the signature of the
irreducible
representation.
Obviously the monomials
\begin{eqnarray}
\Psi_{P,\{n\}}(z)=\sqrt{\frac{P!}{n_1!...n_N!}}z_1^{n_1}...z_N^{n_N}\:,\label
{e3}\\
\{n\}=\{n_{1},\ldots,n_{N}|\sum_{\mu}n_{\mu}=P\}\:,\nonumber
\end{eqnarray}
\noindent form a discrete basis in $\Pi_{P}$ , and the monomials $\Psi_{P,
\{n\}}(\tilde{z})=\overline{\Psi_{P,\{n\}}(z)}$ form a basis in
$\widetilde{\Pi_{P}}$. The monomials obey the remarkable relation
\begin{equation}\label{e4}
\sum_{\{n\}}\Psi_{P,\{n\}}(z')\Psi_{P,\{n\}}(\tilde{z})=\langle
z',\tilde{z}\rangle^{P}\:,
\end{equation}
\noindent which is group invariant on account of the invariance of the
scalar product (\ref{e1}) under the group transformation, $\langle
z'_{g},\tilde
{z}_{g}\rangle=\langle z',\tilde{z}\rangle$ .
We introduce also the scalar product of two polynomials:
\begin{eqnarray}\label{e5}
\langle\Psi_{P}|\Psi'_{P}\rangle&=&\int\overline{\Psi_{P}(z)}\Psi'_{P}
(z){\rm d}\mu_{P}(\bar{z},z)\:,\\
{\rm d}\mu_{P}(\bar{z},z)&=&\frac{(P+N-1)!}{(2\pi)^{N}P!}\delta(\sum|
z_{\mu}|^{2}-1)\prod{\rm d}\bar{z}_{\nu}{\rm d}z_{\nu}\:,\nonumber\\
{\rm d}\bar{z}{\rm d}z&=&{\rm d}(|z|^{2}){\rm d}(\arg z)\:\nonumber.
\end{eqnarray}
\noindent Using the integral
\[
\int_{0}^{1}{\rm d}\rho_{1}\ldots\int_{0}^{1}{\rm d}\rho_{N}\delta(\sum\rho_
{\mu}-1)\prod_{\nu=1}^{N}\rho_{\nu}^{n_{\nu}}=\frac{\prod_{\nu=1}^{N}n_{\nu}!}
{\left(\sum_{\nu=1}^{N}n_{\nu}+N-1\right)!}\:,
\]
\noindent it is easy to verify that the orthonormality relation holds:
\begin{equation}\label{e6}
\langle\Psi_{P,\{n\}}|\Psi_{P,\{n'\}}\rangle=\langle P,n|P,n'\rangle=
\delta_{\{n\},\{n'\}}\:.
\end{equation}
\noindent The completeness relation takes place as well
\begin{equation}\label{e7}
\sum_{\{n\}}|P,n\rangle\langle P,n|=I_{P}\:,
\end{equation}
\noindent
where $|P,n\rangle$ and $\langle P,n|$ are Dirac's notations for the vectors
$\Psi_{P,\{n\}}(z)$ and $\Psi_{P,\{n\}}(\tilde{z})$
respectively, and $I_{P}$ is the identity operator in the irreducible space
of the representation of signature $P$ .
It is convenient to introduce the operators $a_{\mu}^{\dag}$ and $a^{\mu}$,
which act on the basis vectors by the formulas:
\begin{eqnarray}
a_{\mu}^{\dag}\Psi_{P,\{n\}}(z)&=&z_{\mu}\Psi_{P,\{n\}}(z)\:\rightarrow
\:a_{\mu}^{\dag}|P,n\rangle=\sqrt{\frac{n_{\mu}+1}{P+1}}|P+1,\ldots,n_
{\mu}+1,\ldots\rangle\:,\nonumber\\
a^{\mu}\Psi_{P,\{n\}}(z)&=&\frac{\partial}{\partial z_{\mu}}\Psi_{P,\{n\}}
(z)\:\rightarrow\:a^{\mu}|P,n\rangle =\sqrt{Pn_{\mu}}
|P-1,\ldots,n_{\mu}-1,\ldots\rangle\:,\nonumber\\
{}[a^{\mu},a_{\nu}^{\dag}]&=&\delta^{\mu}_{\nu},\;
[a^{\mu},a^{\nu}]=[a^{\dag}_
{\mu},a_{\nu}^{\dag}]=0\:.\label{e8}
\end{eqnarray}
\noindent One can find the action of these operators on the left,
\begin{eqnarray}
\langle P,n|a_{\mu}^{\dag}&=&\sqrt{\frac{n_{\mu}}{P}}\langle P-1,\ldots,n_{\mu}
-1,\ldots|=\frac{1}{P}\frac{\partial}{\partial\tilde{z}^{\mu}}\Psi_{P,\{n\}}(
\tilde{z})\:,\label{e9}\\
\langle P,n|a^{\mu}&=&\sqrt{(P+1)(n_{\mu}+1)}\langle P+1,\ldots,n_{\mu}+1,
\ldots|=(P+1)\tilde{z}^{\mu}\Psi_{P,\{n\}}(\tilde{z})\:.\nonumber
\end{eqnarray}
\noindent Their quadratic combinations $A^{\nu}_{\mu}$ can serve as generators
in each irreducible representation of signature $P$ ,
\begin{eqnarray}
A^{\nu}_{\mu}&=&a_{\mu}^{\dag}a^{\nu}=z_{\mu}\frac{\partial}{\partial z_{\nu}},
\; \left[A^{\nu}_{\mu},A^{\kappa}_{\lambda}\right]=\delta^{\nu}_{\lambda}A^
{\kappa}_{\mu}-\delta^{\kappa}_{\mu}A^{\nu}_{\lambda}\:,\label{e10}\\
A^{\nu}_{\mu}|P,n\rangle&=&\sqrt{n_{\nu}(n_{\mu}+1)}|P,\ldots,n_{\nu}-1,\ldots,
n_{\mu}+1,\ldots\rangle\:,\;\mu\neq\nu\:,\nonumber\\
A_{\mu}^{\mu}|P,n\rangle&=&n_{\mu}|P,n\rangle,\; \sum_{\mu}A^{\mu}_{\mu}|P,n
\rangle=P|P,n\rangle\:.\nonumber
\end{eqnarray}
\noindent Obviously, the $A^{\mu}_{\mu}$ are Cartan's generators and $(n_{1},
\ldots,n_{N})$ the weight vector.
The independent generators $\hat{\Gamma}_{a},\:a=\overline{1,N^{2}-1},$
can be expressed in terms of the operators $A_{\mu}^{\nu}$:
\begin{equation}\label{e11}
\hat{\Gamma}_{a}=\left(\Gamma_{a}\right)^{\nu}_{\mu}A^{\mu}_{\nu},\;
\left[\hat{\Gamma}_{a},\hat{\Gamma}_{b}\right]=\imath f_{abc}\hat{\Gamma}_{c}
\;,
\end{equation}
where $\Gamma_{a}$ are generators in the fundamental representation, $\left[
\Gamma_{a},\Gamma_{b}\right]=\imath f_{abc}\Gamma_{c}$.
The quadratic Casimir operator $C_{2}=\sum_{a}\hat{\Gamma}_{a}^{2}$ can be
expressed via the operators $A^{\nu}_{\mu}$ alone by means of the well known formula
\begin{equation}\label{e12}
\sum_{a}\left(\Gamma_{a}\right)^{\nu}_{\mu}\left(\Gamma_{a}\right)^{\kappa}_
{\lambda}=\frac{1}{2}\delta^{\nu}_{\lambda}\delta^{\kappa}_{\mu}-\frac{1}
{2N}\delta^{\nu}_{\mu}\delta^{\kappa}_{\lambda}\;,
\end{equation}
and evaluated in every irreducible representation explicitly,
\begin{equation}\label{e13}
C_{2}=\frac{1}{2}\tilde{A}^{\nu}_{\mu}\tilde{A}^{\mu}_{\nu}=\frac{P(N+P)(N-1)}
{2N},\; \tilde{A}^{\nu}_{\mu}=A^{\nu}_{\mu}-\frac{\delta_{\mu}^{\nu}}{N}\sum_{\lambda}A^
{\lambda}_{\lambda}\;.
\end{equation}
Now we are going to construct the orbits of highest weights (of a vector of the
basis (\ref{e3}) with the maximal length $\sqrt{\sum n^{2}_{\mu}}=P$). Let
this highest weight be the state $\Psi_{P,\{P,0\ldots 0\}}(z)=(z_{1})^{P}$.
Then we get,
in accordance with (\ref{e2}):
\begin{equation}\label{e14}
T(g)\Psi_{P,\{P,0\ldots \}}(z)=\left[z_{\mu}g^{\mu}_{1}\right]^{P}=\langle
z,\tilde{u}\rangle^{P},\; \tilde{u}^{\mu}=g^{\mu}_{1}\;,
\end{equation}
\noindent where the vector $\tilde{u}\in\widetilde{\bf C}^{N}$ is the first
column of the $SU(N)$ matrix in the fundamental representation.
If we interpret the representation space as a Hilbert space of quantum states,
then we have to identify all the states differing from each other by a constant
phase.
Let us look at the states of the orbit (\ref{e14}) from that point of view. One can
notice that the transformation $\arg\tilde{u}^{\mu}\rightarrow\arg\tilde{u}^
{\mu}+\lambda$ changes all the states (\ref{e14}) by the constant phase
$\exp(iP\lambda)$.
So, in a certain sense, one can treat the transformation as a gauge one. To select
only physically different quantum states ($CS$) from all the states of the orbit,
one has to impose a gauge condition on $\tilde{u}$ which fixes the total phase
of the orbit (\ref{e14}). Such a condition may be chosen in the form $\sum_{
\mu}\arg\tilde{u}^{\mu}=0$.
Taking into account that the quantities $\tilde{u}$ obey the condition
$\sum|\tilde{u}^{\mu}|^{2}=1$ by origin, as elements of the first column
of an $SU(N)$ matrix, we get the explicit form of the $CS$ of the $SU(N)$
group in the space $\Pi_{P}$:
\begin{eqnarray}
\Psi_{P,\tilde{u}}(z)=\langle z,\tilde{u}\rangle^{P}\;,\label{e15}\\
\sum_{\mu}|\tilde{u}^{\mu}|^{2}=1,\; \sum_{\mu}\arg \tilde{u}^{\mu}=0 .
\label{e16}
\end{eqnarray}
\noindent In the same manner we construct the orbit of the highest weight $\Psi
_{P,\{P,0\ldots 0\}}(\tilde{z})=\left(\tilde{z}^{1}\right)^{P}$ in the space
$\tilde{\Pi}_{P}$ , and the corresponding $CS$ have the form:
\begin{eqnarray}
\Psi_{P,u}(\tilde{z})=\langle u,\tilde{z}\rangle^{P}, \label{e17}\\
\sum_{\mu}|u_{\mu}|^{2}=1,\; \sum_{\mu}\arg u_{\mu}=0 .\label{e18}
\end{eqnarray}
\noindent Obviously, $\Psi_{P,\tilde{u}}(z)=\overline{\Psi_{P,u}(\tilde{z})},
\;z\leftrightarrow\tilde{z}, u\leftrightarrow\tilde{u}$.
It is easy to see that all the elements of the discrete basis (\ref{e3}) with
weight vectors of the form $(n_{\mu}=\delta^{\nu}_{\mu}P,\: \mu=\overline
{1,N})$
belong to the $CS$ set (\ref{e15}) with parameters $(\tilde{u}^{\mu}=\delta^{
\mu}_{\nu},\;\mu=\overline{1,N})$. The analogous statement is valid with regard
to the dual basis and the $CS$ (\ref{e17}).
The quantities $\tilde{u}$ and $u$, which parametrize the $CS$ (\ref{e15})
and (\ref{e17}),
are elements of the coset space $SU(N)/U(N-1)$, in accordance with the fact
that the stationary subgroups of both the initial vectors from the spaces $\Pi_
{P}$ and $\tilde{\Pi}_{P}$ are $U(N-1)$. At the same time, the coset space is
the so-called projective space $CP^{N-1}$ (we recall that the complex
projective space is defined as the set of all nonzero vectors $z$ in ${\bf C}^{N}
$, where $z$ and $\lambda z,\:\lambda\neq 0$, are equivalent \cite{ab16}). The
eqs.(\ref{e16}) and (\ref{e18}) are just possible conditions which fix
representatives in the projective space. The coordinates $u$ or $\tilde{u}$ are
called homogeneous ones
in $CP^{N-1}$. Thus, the $CS$ constructed are parametrized by the elements of
the projective space $CP^{N-1}$, which is a symplectic manifold \cite{ab16} and
therefore can be considered as the phase space of a classical mechanics.
To decompose the $CS$ in the discrete bases, one can use the scalar product
(\ref{e5}) directly, but there exists a simpler way. One can use the relation
(\ref{e4}), since its right side can be treated as the $CS$ (\ref{e15})
or
(\ref{e17}). So,
it follows from (\ref{e4}):
\begin{equation}\label{e19}
\Psi_{P,\tilde{u}}(z)=\sum_{\{n\}}\Psi_{P,\{n\}}(\tilde{u})\Psi_{P,\{n\}}(z)\:.
\end{equation}
That implies:
\begin{equation}\label{e20}
\langle P,u|P,n\rangle=\Psi_{P,\{n\}}(u),\: \langle P,n|P,u\rangle=\Psi_{P,\{n
\}}(\tilde{u}),
\end{equation}
where $|P,u\rangle$ and $\langle P,u|$ are Dirac's notations for the $CS$
$\Psi_{P,\tilde{u}}(z)$ and $\Psi_{P,u}(\tilde{z})$ respectively. So we come
to a result important for the understanding: the discrete bases in the spaces
$\Pi_{P}$ and $\tilde{\Pi}_{P}$ are bases in the $CS$ representation.
The completeness relation for the $CS$ can be extracted from eq.(\ref{e6}).
Using
the formulas (\ref{e20}) in the integral (\ref{e5}), we get:
\[
\int\langle P,n|P,u\rangle\langle P,u|P,n'\rangle\rm d\mu_{P}(\bar{u},u)=
\delta_{\{n\},\{n'\}}\;.
\]
That proves the completeness relation
\begin{equation}\label{e21}
\int|P,u\rangle\langle P,u|\rm d\mu_{P}(\bar{u},u)=I_{P}\;.
\end{equation}
\section {Uncertainty relation}
The orbit of each vector of the discrete basis $|P,n\rangle$ (\ref{e3}) and,
in particular, the $CS$ constructed, are eigenvectors of a nonlinear operator $C'_{2}
$, which is defined by its action on an arbitrary vector $|\Psi\rangle$ as
\begin{equation}\label{e22}
C'_{2}|\Psi\rangle=\sum_{a}\langle\Psi|\hat{\Gamma}_{a}|\Psi\rangle\hat{\Gamma}_
{a}|\Psi\rangle\;.
\end{equation}
First, we note that $T^{-1}(g)C'_{2}T(g)=C'_{2}$ , where $T(g)$ are operators
of the representation. Indeed, it follows from the relation $T^{-1}(g)\hat{
\Gamma}_{a}T(g)=t^{c}_{a}\hat{\Gamma}_{c}$ and $[C_{2},T(g)]=0$, that $t^{c}_
{a}$ is an orthogonal matrix, so that
\begin{eqnarray*}
T^{-1}(g)C'_{2}T(g)|\Psi\rangle&=&\sum_{a}\langle\Psi|T^{-1}(g)\hat{\Gamma}_{a}
T(g)|\Psi\rangle T^{-1}(g)\hat{\Gamma}_{a}T(g)|\Psi\rangle \\
&=&\sum_{a}\langle\Psi|\hat{\Gamma}_{a}|\Psi\rangle\hat{\Gamma}_{a}|\Psi\rangle=
C'_{2}|\Psi\rangle\;.
\end{eqnarray*}
After that, it is easy to show that the orbit $T(g)|P,n\rangle$ is an eigenvector of
$C'_{2}$. We write:
\begin{equation}\label{e23}
C'_{2}T(g)|P,n\rangle=T(g)C'_{2}|P,n\rangle=T(g)\sum_{a}\langle
P,n|\hat{\Gamma}
_{a}|P,n\rangle\hat{\Gamma}_{a}|P,n\rangle\:,
\end{equation}
\noindent and use the formulas (\ref{e11}) , (\ref{e12}) in the right side of
(\ref{e23}) ,
\begin{eqnarray*}
&&\sum_{a}\langle P,n|\hat{\Gamma}_{a}|P,n\rangle\hat{\Gamma}_{a}|P,n\rangle\\
&=&\frac{1}{2}\left[\langle P,n|A^{\nu}_{\mu}|P,n\rangle A^{\mu}_{\nu}-\frac{P}
{N}\sum_{\mu}A^{\mu}_{\mu}\right]|P,n\rangle=\lambda(P,n)|P,n\rangle\:,\\
\lambda(P,n)&=&\frac{1}{2}\left(\sum_{\mu}n^{2}_{\mu}-P^{2}/N\right)=
\frac{1}{2}\sum_{\mu}(n_{\mu}-P/N)^{2}\:.
\end{eqnarray*}
The latter results in
\begin{equation}\label{e24}
C'_{2}T(g)|P,n\rangle=\lambda(P,n)T(g)|P,n\rangle\:.
\end{equation}
The eigenvalue $\lambda(P,n)$ attains its maximum for the highest weights,
for which $\sum_{\mu}n^{2}_{\mu}=P^{2}=\max$. The $CS$ $|P,u\rangle$ belong to
the orbit of the highest weight $\{n\}=\{P,0,\ldots,0\}$. So we get:
\begin{equation}\label{e25}
C'_{2}|P,u\rangle=\frac{P^{2}(N-1)}{2N}|P,u\rangle\;.
\end{equation}
One can introduce a dispersion of the square of the length of the isospin
vector \cite{ab12} ,
\begin{equation}\label{e26}
\Delta C_{2}=\langle\Psi|\sum_{a}\hat{\Gamma}^{2}_{a}|\Psi\rangle-\sum_{a}
\langle\Psi|\hat{\Gamma}_{a}|\Psi\rangle^{2}=\langle\Psi|C_{2}-C'_{2}|\Psi
\rangle\:.
\end{equation}
The dispersion serves as a measure of the uncertainty of the state
$|\Psi\rangle$.
Due to the properties of the operators $C_{2}$ and $C'_{2}$, it is group
invariant and has the least value $P(N-1)/2$ for the orbits of the highest weights,
in particular for the $CS$ constructed, among all the orbits of the
discrete basis (\ref{e3}).
In the $CS$, the relative dispersion of the square of the length of the isospin
vector has the value:
\begin{equation}\label{e27}
\Delta C_{2}/C_{2}=\frac{N}{N+P}\:,
\end{equation}
\noindent and tends to zero as $h\rightarrow 0,\;h=\frac{1}{P}$. That fact
already indicates that the quantity $h$ plays here the role of the Planck
constant. In Sect. 5 this analogy is traced in more detail.
\section {The $CS$ overlapping}
The overlapping of the $CS$ can be evaluated in different ways. For instance,
using the completeness relation (\ref{e7}) and the formulas (\ref{e20}), (\ref{e4}), we
get:
\begin{eqnarray}
\langle P,u|P,v\rangle&=&\sum_{\{n\}}\langle P,u|P,n\rangle\langle P,n|P,v
\rangle\nonumber\\
&=&\sum_{\{n\}}\Psi_{P,\{n\}}(u)\Psi_{P,\{n\}}(\tilde{v})=\langle u,
\tilde{v}\rangle^{P}\:.\label{e28}
\end{eqnarray}
\noindent Comparing the result with eq. (\ref{e14}) , one can write
\begin{equation}
\langle P,u|P,v\rangle=\Psi_{P,\tilde{v}}(u)\;,\label{e29}
\end{equation}
\noindent which confirms once again that the spaces $\Pi_{P}$ and $\tilde{\Pi}
_{P}$ are, in the quantum mechanical sense, merely the spaces of abstract vectors
in the $CS$ representation.
Let $\Psi_{P}(u)$ be a vector $|\Psi\rangle$ in the $CS$ representation,
$\Psi
_{P}(u)=\langle P,u|\Psi\rangle$. Then the following formula takes place
\begin{equation}\label{e30}
\Psi_{P}(u)=\int\langle P,u|P,v\rangle\Psi_{P}(v){\rm d}\mu_{P}(\bar{v},v)\:.
\end{equation}
\noindent That means the $CS$ overlapping plays the role of the $\delta$-
function in the $CS$ representation.
The modulus of the $CS$ overlapping (\ref{e28}) possesses the properties:
\begin{eqnarray}
|\langle P,u|P,v\rangle|&<&1,\;\lim_{P\rightarrow\infty}|\langle P,u|P,v\rangle|
=0,\ {\rm if}\ u\neq v\:,\nonumber\\
|\langle P,u|P,v\rangle|&=&1,\ {\rm only\ if}\ u=v\:.\label{e31}
\end{eqnarray}
\noindent That follows from the Cauchy inequality for the scalar product (\ref
{e1}) , $|\langle u,\tilde{v}\rangle|\leq\sqrt{\langle u,\tilde{u}\rangle
\langle v,\tilde{v}\rangle}$, and from the conditions on the parameters of the
$CS$, $\langle u,\tilde{u}\rangle=\langle v,\tilde{v}\rangle=1$.
One can introduce a function $s(u,v)$ of the coordinates of two points of
the projective space $CP^{N-1}$ ,
\begin{equation}\label{e32}
s^{2}(u,v)=-\ln|\langle P,u|P,v\rangle|^{2}=-P\ln|\langle u,\tilde{v}\rangle|^
{2}\;.
\end{equation}
The properties (\ref{e31}) of the modulus of the $CS$ overlapping allow one to
interpret this function as a symmetric. We recall that a real and positive
symmetric obeys only two of the axioms of a distance, $s(u,v)=s(v,u)$ and $s(u,v)=0$
if and only if $u=v$, the triangle axiom being excepted. For the $CS$ of the
Heisenberg-Weyl group the function $s^{2}(u,v)=-\ln|\langle u|v\rangle|^{2}=
|u-v|^{2}$ indeed gives the square of the distance on the complex plane of
the $CS$ parameters. It turns out that, in the case under consideration, the symmetric
$s(u,v)$ generates the metric of the projective space $CP^{N-1}$. To
demonstrate that, it is convenient to go over from the homogeneous coordinates
$u_{\mu}$, subjected to the
supplementary conditions (\ref{e18}), to local independent coordinates in
$CP^{N-1}$.
For instance, in the domain where $u_{N}\neq 0$, we introduce the local
coordinates $\alpha_{i},\:i=\overline{1,N-1}$,
\begin{eqnarray}
\alpha_{i}&=&u_{i}/u_{N}\:,\label{e33}\\
u_{i}&=&\alpha_{i}u_{N},\; u_{N}=\frac{\exp(-\frac{i}{N}\sum\arg\alpha_{k})}
{\sqrt{1+\sum|\alpha_{k}|^{2}}}\nonumber\:.
\end{eqnarray}
\noindent In the local coordinates (\ref{e33}) the symmetric (\ref{e32}) takes
the form
\begin{equation}\label{e34}
s^{2}(\alpha,\beta)=-P\ln\frac{\lambda(\alpha,\bar{\beta})\lambda(\beta,\bar{
\alpha})}{\lambda(\alpha,\bar{\alpha})\lambda(\beta,\bar{\beta})},
\end{equation}
\noindent where $\lambda(\alpha,\bar{\beta})=1+\sum_{i}\alpha_{i}\bar{\beta}_
{i}$ .
So, we are in a position to calculate the square of the ``distance'' between
two infinitesimally close points $\alpha$ and $\alpha+{\rm d}\alpha$. For
${\rm d}s^{2}$, which is defined as the quadratic part of the expansion of
$s^{2}(\alpha,\alpha+{\rm d}\alpha)$ in powers of ${\rm d}\alpha$, one
finds:
\begin{eqnarray}
{\rm d}s^{2}&=&g_{i\bar{k}}{\rm d}\alpha_{i}{\rm d}\bar{\alpha}_{k},\:
g_{i\bar{k}}=P\lambda^{-2}(\alpha,\bar{\alpha})\left[\lambda(\alpha,\bar{
\alpha})\delta_{ik}-\bar{\alpha}_{i}\alpha_{k}\right]\:,\nonumber\\
g_{i\bar{k}}&=&\frac{\partial^{2}F}{\partial\alpha_{i}\partial\bar{\alpha}_{k}},
\; F=P\ln\lambda(\alpha,\bar{\alpha})\:,\label{e35}\\
\det\|g_{i\bar{k}}\|&=&P^{N-1}\lambda^{-N}(\alpha,\bar{\alpha}),\
g^{\bar{k}i}=
\frac{1}{P}\lambda(\alpha,\bar{\alpha})(\delta_{ki}+\bar{\alpha}_{k}\alpha_{i})
\:.\nonumber
\end{eqnarray}
Now one can recognize in the expression (\ref{e35}) the so-called Fubini-Study
metric of
the complex projective space $CP^{N-1}$ with the constant holomorphic sectional
curvature $C=4/P$ \cite{ab16}. It follows from (\ref{e35}) that we deal with a
K\"ahlerian manifold. As known, a K\"ahlerian manifold is a symplectic one, and a
classical mechanics exists on it. The Poisson bracket has the form:
\begin{equation}\label{e36}
\{f,g\}=ig^{\bar{k}i}\left(\frac{\partial f}{\partial\alpha_{i}}\frac{\partial
g}{\partial\bar{\alpha}_{k}}-\frac{\partial f}{\partial\bar{\alpha}_{k}}\frac
{\partial g}{\partial\alpha_{i}}\right)\:.
\end{equation}
\noindent In the next Sect. we show that the classical limit of the commutator
of the operator symbols connected with the $CS$ generates exactly that
Poisson bracket.
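As a simple check of (\ref{e35}), one may specialize it to $N=2$ (an illustration added here; the stereographic substitution below is one possible choice):

```latex
% N=2: CP^1 with one local coordinate \alpha, \lambda = 1+|\alpha|^2.
% Eq.(35) gives the single metric component
g_{1\bar{1}}=\frac{P}{(1+|\alpha|^{2})^{2}},\qquad
{\rm d}s^{2}=\frac{P\,{\rm d}\alpha\,{\rm d}\bar{\alpha}}{(1+|\alpha|^{2})^{2}}\:,
% and the substitution \alpha=\tan(\theta/2)e^{i\varphi} yields
{\rm d}s^{2}=\frac{P}{4}\left({\rm d}\theta^{2}
+\sin^{2}\theta\,{\rm d}\varphi^{2}\right)\:,
```

i.e., the metric of a sphere of radius $\sqrt{P}/2$, whose Gaussian curvature $4/P$ agrees with the holomorphic sectional curvature $C=4/P$ quoted above.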
\section {The classical limit}
One of the merits of the $CS$ is that they allow one to construct operator
symbols
in a simple way, i.e. a correspondence between operators and classical
functions on the phase space of a system. The reproduction of operations with
operators in the language of symbols is, in fact, equivalent to the quantization
problem. That approach to quantization was developed by Berezin \cite{ab6}.
Our aim is of a more restricted character: in this Sect. we are going to
investigate the conditions of the classical limit in terms of the operator
symbols constructed by means of the $CS$.
Let us turn to the so-called covariant symbol \cite{ab17}, which is, in fact,
the mean value of an operator $\hat{A}$ in the $CS$:
\begin{equation}\label{e37}
Q_{A}(u,\bar{u})=\langle P,u|\hat{A}|P,u\rangle\;.
\end{equation}
\noindent We also restrict ourselves to operators which are polynomial
functions of the generators, of power not greater than some given $M<P$. Such
operators can be written via the operators $a^{\dag}_{\mu}, a^{\nu}$,
using (\ref{e10}), (\ref{e11}), and presented in the ``normal'' form,
\begin{equation}\label{e38}
\hat{A}=\sum_{K=0}^{M}A^{\mu_{1}\ldots\mu_{K}}_{\nu_{1}\ldots\nu_{K}}
a^{\dag}_{\mu_{1}}\ldots a^{\dag}_{\mu_{K}}a^{\nu_{1}}\ldots a^{\nu_{K}}
\:.
\end{equation}
\noindent It is easy to find the action of the operators $a^{\dag}_{\mu}, a^{
\nu}$ on the $CS$ and to calculate the symbols (\ref{e37}) ,
\begin{eqnarray}
a^{\dag}_{\mu}|P,u\rangle&=&\frac{1}{P+1}\frac{\partial}{\partial\tilde
{u}^{\mu}}|P+1,u\rangle,\; a^{\mu}|P,u\rangle=P\tilde{u}^{\mu}|P-1,u\rangle\:,
\nonumber\\
\langle P,u|a^{\dag}_{\mu}&=&u_{\mu}\langle P-1,u|,\; \langle P,u|a^{\mu}=
\frac{\partial}{\partial u_{\mu}}\langle P+1,u|\:.\label{e39}
\end{eqnarray}
\noindent So,
\begin{equation}\label{e40}
Q_{A}(u,\bar{u})=\sum_{K=0}^{M}\frac{P!}{(P-K)!}A^{\mu_{1}\ldots\mu_{K}}_{\nu_{1}
\ldots\nu_{K}}u_{\mu_{1}}\ldots u_{\mu_{K}}\bar{u}_{\nu_{1}}\ldots\bar{u}
_{\nu_{K}}\:.
\end{equation}
\noindent Obviously, there is a one-to-one correspondence between an operator
and its covariant symbol.
In the local independent variables $\alpha$, which were defined in (\ref{e33}),
the covariant symbol has the form
\begin{equation}\label{e41}
Q_{A}(\alpha,\bar{\alpha})=\sum_{K=0}^{M}\frac{P!}{(P-K)!}\left(1+\sum_{i}^
{N-1}|\alpha_{i}|^{2}\right)^{-K}A^{\mu_{1}\ldots\mu_{K}}_{\nu_{1}\ldots
\nu_{K}}\alpha_{\mu_{1}}\ldots\alpha_{\mu_{K}}\bar{\alpha}_{\nu_{1}}\ldots
\bar{\alpha}_{\nu_{K}}\:,
\end{equation}
\noindent where the summation over the Greek indices runs from $1$ to $N$ as
before, but one has to set $\alpha_{N}=1$.
In manipulations it is convenient to deal with the nondiagonal symbols
\begin{eqnarray}
Q_{A}(u,\bar{v})&=&\frac{\langle P,u|\hat{A}|P,v\rangle}{\langle
P,u|P,v\rangle}
\label{e42}\\
&=&\sum_{K=0}^{M}\frac{P!}{(P-K)!}\left(\sum_{\lambda=1}^{N}u_{\lambda}\bar{v}_
{\lambda}\right)^{-K}\!\!A^{\mu_{1}\ldots\mu_{K}}_{\nu_{1}\ldots\nu_{K}}u_{\mu_
{1}}\ldots u_{\mu_{K}}\bar{v}_{\nu_{1}}\ldots\bar{v}_{\nu_{K}}\:,\nonumber\\
Q_{A}(\alpha,\bar{\beta})&=&\sum_{K=0}^{M}\frac{P!}{(P-K)!}\left(1+\sum
_{i}^{N-1}\alpha_{i}\bar{\beta}_{i}\right)^{-K}\!\!A^{\mu_{1}\ldots\mu_{K}}
_{\nu_{1}\ldots\nu_{K}}\alpha_{\mu_{1}}\ldots\alpha_{\mu_{K}}\bar{\beta}_{\nu
_{1}}\ldots\bar{\beta}_{\nu_{K}}\:,\nonumber\\
\alpha_{i}&=&u_{i}/u_{N},\; \beta_{i}=v_{i}/v _{N},\;\alpha_{N}=\beta_{N}=1\:.
\nonumber
\end{eqnarray}
The symbols $Q_{A}(\alpha,\bar{\beta})$ are analytic functions of $\alpha$
and $\bar{\beta}$ separately and coincide with the covariant symbols
(\ref{e41})
at $\beta\rightarrow\alpha$. It is these symbols, and not $\langle P,\alpha|
\hat{A}|P,\beta\rangle$, that provide the nondiagonal analytic continuation of the
covariant symbols.
Using the completeness relation (\ref{e21}) and eq.(\ref{e32}), one can find for the
symbol of the product of two operators $\hat{A}_{1}$ and $\hat{A}_{2}$:
\begin{equation}\label{e43}
Q_{A_{1}A_{2}}(u,\bar{u})=\int
Q_{A_{1}}(u,\bar{v})Q_{A_{2}}(v,\bar{u})e^{-s^{2}
(u,v)}{\rm d}\mu_{P}(\bar{v},v)\:.
\end{equation}
\noindent Because $s^{2}(u,v)$ tends to infinity as $P\rightarrow\infty$
if $u\neq v$, and equals zero if $u=v$, one can conclude that in that limit
only the domain $v\approx u$ contributes to the integral. Thus,
\begin{eqnarray}
\lim_{h\rightarrow 0}Q_{A_{1}A_{2}}(u,\bar{u})&=&Q_{A_{1}}(u,\bar{u})Q_{A_{2}}
(u,\bar{u})\int e^{-s^{2}(u,v)}{\rm d}\mu_{P}(\bar{v},v)\nonumber\\
&=&Q_{A_{1}}(u,\bar{u})Q_{A_{2}}(u,\bar{u})\:,\; h=\frac{1}{P}\:.\label{e44}
\end{eqnarray}
\noindent The integral in (\ref{e44}) equals unity by the definition
(\ref{e32}) and the completeness relation (\ref{e21}).
If we define, according to Berezin \cite{ab6,ab17}, the so-called star
multiplication of symbols
\begin{equation}\label{e45}
Q_{A_{1}}\star Q_{A_{2}}=Q_{A_{1}A_{2}}\;,
\end{equation}
\noindent then we have for the covariant symbols
\begin{equation}\label{e46}
\lim_{h\rightarrow 0}Q_{A_{1}}\star Q_{A_{2}}=Q_{A_{1}}Q_{A_{2}}\;.
\end{equation}
\noindent That is the first demand of the classical limit. Thus, the quantity
$h$ plays the role of the Planck constant here, as we have already noticed
in Sect. 3.
Now we are going to get the next term of the expansion of the star
multiplication (\ref{e45}) in powers of $h$. That is more appropriate to do
in the local independent coordinates $\alpha$, because the expansion
includes differentiation with respect to the coordinates. The
formula (\ref{e43}) in the local coordinates (\ref{e33}) takes the form
\begin{eqnarray}\label{e47}
Q_{A_{1}A_{2}}(\alpha,\bar{\alpha})&=&\int Q_{A_{1}}(\alpha,\bar{\beta})Q_{A_{2
}}
(\beta,\bar{\alpha})e^{-s^{2}(\alpha,\beta)}{\rm d}\mu_{P}(\bar{\beta},\beta)\:
,\\
{\rm d}\mu_{P}(\bar{\beta},\beta)&=&\frac{(P+N-1)!}{P!P^{N-1}}\det\|g_{l\bar
{m}}(\beta,\bar{\beta})\|\prod_{i=1}^{N-1}\frac{{\rm d}Re\beta_{i}{\rm d}Im
\beta_{i}}{\pi}\:,\nonumber
\end{eqnarray}
where ${\rm d}\mu_{P}(\bar{\beta},\beta)$ is proportional to the well known
$G$-invariant measure on $CP^{N-1}$ (see eq.(\ref{e35})).
Expanding the integrand near the point $\beta=\alpha$ and going over to the
integration over $z=\beta-\alpha$, we get, in the zeroth and first order in
$h$:
\begin{eqnarray}
Q_{A_{1}A_{2}}(\alpha,\bar{\alpha})&=&Q_{A_{1}}(\alpha,\bar{\alpha})Q_{A_{2}}(
\alpha,\bar{\alpha})+\frac{\partial Q_{A_{1}}(\alpha,\bar{\alpha})}{\partial
\bar{\alpha}_{k}}\frac{\partial Q_{A_{2}}(\alpha,\bar{\alpha})}{\partial\alpha_
{i}} \label{e48}\\
&\times&\det\|g_{l,\bar{m}}(\alpha,\bar{\alpha})\|\int\bar{z}_{k}z_{i}e^{-g_{a
\bar{b}}z_{a}\bar{z}_{b}}
\prod_{j=1}^{N-1}\frac{{\rm d}Re z_{j}{\rm d}Im z_{j}}{\pi}\nonumber \\
&=&Q_{A_{1}}(\alpha,\bar{\alpha})Q_{A_{2}}(\alpha,\bar{\alpha}) + g^{\bar{k}i}
\frac{\partial Q_{A_{1}}(\alpha,\bar{\alpha})}{\partial\bar{\alpha}_{k}}
\frac{\partial Q_{A_{2}}(\alpha,\bar{\alpha})}{\partial\alpha_{i}}\:,\nonumber
\end{eqnarray}
where the matrix $g^{\bar{k}i}$ was defined in (\ref{e35}) and is proportional
to $h$. Taking into account the expression (\ref{e36}) for the Poisson
bracket on the projective space $CP^{N-1}$, we get for the star commutator of
the symbols
\begin{equation}\label{e49}
Q_{A_{1}}\star Q_{A_{2}}-Q_{A_{2}}\star Q_{A_{1}}=i\{Q_{A_{1}},Q_{A_{2}}\}
+{\rm o}(h)\:.
\end{equation}
Thus, the second demand of the classical limit is satisfied.
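A simple consistency check of (\ref{e49}), added here as an illustration, is provided by the symbols of the generators themselves. From (\ref{e11}) and (\ref{e40}),

```latex
% Covariant symbols of the generators (K=1 in eq.(40)):
Q_{\Gamma_{a}}(u,\bar{u})=P\left(\Gamma_{a}\right)^{\nu}_{\mu}
u_{\mu}\bar{u}_{\nu}\:,
% and, since the symbol of an operator product is the star product of
% the symbols, the commutation relations (11) together with eq.(49)
% yield, in the classical limit,
\{Q_{\Gamma_{a}},Q_{\Gamma_{b}}\}=f_{abc}\,Q_{\Gamma_{c}}\:,
```

so the symbols of the generators realize the Lie algebra of $SU(N)$ with respect to the Poisson bracket (\ref{e36}); for $SU(2)$ these are the classical spin variables of length $P/2$ on the sphere.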
\section{Conclusion}
One has to note that the uniform description of the $CS$ for all the groups
$SU(N)$ proved to be possible due to the choice of the irreducible
representations
of the groups in the spaces of polynomials of a fixed power, so that the coset
space is parametrized by the homogeneous coordinates from $CP^{N-1}$. If one
constructs orbits using the concrete structure of the matrices of the group
in the fundamental representation, as was done in the majority of works
devoted to the $CS$ of the $SU(2)$ group, then the complications of the
generalization of the method are connected with the increasing complexity
of the structure of the $SU(N)$ matrices with the growth of the
number $N$. The
representations in the spaces of polynomials of a power not greater than a
given one, used in \cite{ab7} for the $SU(2)$ group, lead at once to the
parametrization of the coset space by local coordinates from $CP^{N-1}$; due to
the nonsymmetrical form of the expressions in that case, the generalization to
an arbitrary $SU(N)$ group does not appear to be obvious. Another approach to the
problem is possible in a Fock space, by means of representations of the Jordan-
Schwinger type. To construct orbits explicitly here, it is necessary to
disentangle rather complicated operators of the representation of the group in
the Fock space. We can do that in the $SU(2)$ case, but the complexity grows
essentially with the number $N$.
The $CS$ constructed are useful for the quasi-classical analysis of the quantum
equations of $SU(N)$ symmetric gauge theories. With their help one can, for
instance, derive the so-called Wong equations and find ``quantum'' corrections
to these equations \cite{ab14}.
\section{Introduction}
The superinsulating state, having infinite resistance at finite temperatures\,\cite{Diamantini1996,Doniach1998,vinokur2008superinsulator,vinokurAnnals,cbkt} is the state dual to superconductivity, endowed with a finite temperature infinite conductance.
Originally\,\cite{Diamantini1996,Doniach1998}, the emergence of superinsulation was attributed to logarithmic Coulomb interactions in two spatial dimensions (2d) arising from the dimensional reduction of the effective Coulomb interactions due to the divergence of the dielectric constant $\varepsilon$\,\cite{vinokur2008superinsulator, vinokurAnnals} near the superconductor-insulator transition (SIT)\,\cite{Efetov1980,Haviland1989,Paalanen1990,Fisher1990,Fisher1990-2,fazio,Goldman2010} in disordered superconducting films. A more recent approach\,\cite{dtv} based on the condensation of magnetic monopoles\,\cite{goddardolive}, however, derived superinsulation as a result of the linear confinement of Cooper pairs by electric strings\,\cite{polyakov_original,polyakov}, which are the S-dual of Abrikosov vortices in superconductors\,\cite{mandelstam, thooft, polyakov_original}. This offers a more general view of superinsulation as a phenomenon that is not specific to two dimensions but can also exist in 3d systems.
This immediately poses the question of which experimental effect could serve as a hallmark of superinsulation and could, at the same time, unequivocally discriminate between the 3d and 2d superinsulators, exposing the linear nature of the underlying confinement. In disordered superconducting films that host the superinsulating state at the insulating side of the SIT, it is the charge Berezinskii-Kosterlitz-Thouless (BKT) transition\,\cite{Berezinskii1970,Berezinskii1971,Kosterlitz1972,Kosterlitz1973} that marks the emergence of the superinsulating state\,\cite{vinokur2008superinsulator,vinokurAnnals} and which was detected experimentally\,\cite{cbkt} by the BKT critical behavior\,\cite{Kosterlitz1974} of the conductance $G\propto\exp[-b/\sqrt{|T/T_{\rs{CBKT}}-1|}]$, where $T_{\rs{CBKT}}$ is the temperature of the charge BKT transition and $b$ is a constant of order unity. This suggests that it is the conductance critical behavior that provides the criterion for identifying the superinsulating state.
The BKT critical scaling of the conductance follows from the exponential critical scaling of the correlation length\,\cite{Kosterlitz1974}
\begin{equation}
\xi_{\pm} \propto \exp\left[\frac{b_{\pm}}{\sqrt{|T/T_{\mathrm c}-1|}}\right] \,,
\label{bkt}
\end{equation}
where $\pm$ subscript labels $T>T_{\mathrm c}$ and $T<T_{\mathrm c}$ regions respectively, and $\xi_{-}$ is interpreted as the maximum size of the bound charge-anticharge pair.
It is known\,\cite{yaffe} that, in two dimensions, both logarithmic and linear confinement lead to the BKT
critical scaling, so we use the notation $T_{\mathrm c}$ for either the BKT or deconfinement transition temperature. Our goal is now to reveal how the deconfinement scaling of Eq.\,(\ref{bkt}) evolves in 3d systems.
We show below that, in 3d, the critical behaviour of superinsulators is modified to the Vogel-Fulcher-Tammann (VFT) critical form\,\cite{vft}
\begin{equation}
\xi_{\pm} \propto \exp\left(\frac{b^{\prime}_{\pm}}{|T/T_{\mathrm c}-1|}\right) \,.
\label{vft}
\end{equation}
This behaviour is characteristic of the one-dimensional confining strings in 3d, where the world-surface elements interact logarithmically as particles in 2d. The VFT scaling is typical of glassy systems and has recently been derived\,\cite{vasin} for the 3d XY model with quenched disorder. Here we show that it arises naturally, without assuming any disorder, for confining strings and is thus a signature for 3d superinsulators.
\section{Confining strings}
The electromagnetic effective action of a superinsulator is given by\,\cite{dtv} $S^{\rs SI}\propto\sum_{x,\mu,\nu}[1-\cos(2e\ell^2F_{\mu\nu})]$, where $\{x\}$ are the sites of a $d$-dimensional lattice, $\ell$ is the corresponding lattice spacing, $e$ is the electron charge, and $F_{\mu\nu}$ is the electromagnetic field strength. This is Polyakov's compact QED action\,\cite{polyakov_original, polyakov}: hence the conclusion\,\cite{dtv} that, in superinsulators, Cooper pair dipoles are bound together into neutral ``mesons'' by Polyakov's
confining strings\,\cite{polyakov_confining}.
These strings have an action which is induced by coupling their world-sheet elements to a massive Kalb-Ramond tensor gauge field\,\cite{kalb}. They can be explicitly derived for compact QED\,\cite{quevedo}, for the induced electromagnetic action of the superinsulator\,\cite{dtv} and for Abelian-projected SU(2)\,\cite{antonov1, antonov2}. Their world-sheet formulation is thus in terms of a non-local, long-range interaction between surface elements\,\cite{confstrings1}. Best suited to deriving the physical and geometric properties of these strings, however, is the corresponding derivative expansion truncated at a certain level $n$\,\cite{confstrings2} (we use natural units $c=1$, $\hbar = 1$),
\begin{eqnarray}
S &&= \int d^2 \xi \sqrt{g} g^{ab}{\cal D}_a x_\mu V_n ({\cal D}^2){\cal D}_b x_\mu \ ,
\nonumber \\
V_n ({\cal D}^2) &&= t \Lambda^2 + \sum_{k=1}^{2n} {c_k \over \Lambda^{2k-2} } ({\cal
D}^2)^k\ ,
\label{nmodel}
\end{eqnarray}
where ${\cal D}_a$ are the covariant derivatives with respect to the induced metric
$g_{ab} = \partial_a x_\mu \partial_b x_\mu$ on the world-sheet ${\bf x}(\xi_0,
\xi_1)$ embedded in $D=d+1$-dimensional Euclidean space-time and $g$ is the metric determinant.
$V_n ({\cal D}^2)$ expresses the level-$n$ truncated derivative expansion of the non-local interaction on the world-sheet, with $\Lambda$ being the fundamental ultraviolet (UV) cutoff mass scale.
The first term in the bracket provides the bare surface tension $2t$. The numerical coefficients $c_k$ alternate in sign\,\cite{confstrings1}, so that a stable truncation must end with an even $k = 2n$. In particular, the second coefficient is the stiffness parameter accounting for the string rigidity. In confining strings it is actually {\it negative}. The string is stabilized by the last term in the truncation, which generates a string tension $\propto \Lambda^2/c_{2n}$ that takes control of the fluctuations where the orientational correlations die off and leads to long-range correlations, thus avoiding the crumpling affecting most string models\,\cite{kleinert}. For example, in the simplest version with $n=1$, the third term in the derivative expansion, the string hyperfine structure, contains the square of the gradient of the extrinsic curvature matrices and suppresses the
formation of spikes on the world-sheet.
In a general model, the parameters $c_k$ are free: the only condition that must be imposed on them is the absence of both tachyons and ghosts in the theory. This requires that the Fourier transform $V_n\left( p^2 \right)$ has no zeros on the real $p^2$-axis. The polynomial $V_n \left( p^2 \right)$ thus has $n$ pairs of complex-conjugate zeros in the complex $p^2$-plane.
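As a hypothetical illustration (not taken from the cited references), the lowest truncation $n=1$ with $c_1=0$ shows how the ghost- and tachyon-free condition works:

```latex
% Level n=1 truncation with c_1 = 0:
\begin{equation*}
V_1(p^2) = t\Lambda^2 + \frac{c_2}{\Lambda^2}\,(p^2)^2
= \frac{c_2}{\Lambda^2}\left(p^4 + \alpha_1^2\Lambda^4\right),
\qquad \alpha_1^2 = \frac{t}{c_2}\,.
\end{equation*}
% For t>0 and c_2>0 the zeros p^2 = \pm i\,\alpha_1\Lambda^2 form a single
% complex-conjugate pair on the imaginary axis, so V_1 has no zeros on the
% real p^2-axis: this truncation is ghost- and tachyon-free, with one
% resonance whose scale is set by \alpha_1.
```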
The associated mass scales represent the $n$ string resonances determining the string structure on finer and finer scales, the first resonance being the hyperfine structure. Increasing $n$ amounts thus to measuring the string on ever finer scales.
To simplify computations one can set all coefficients with odd $k$ to zero, $c_{2m+1} =0$ for $0\le m \le n-1$. This, however, is no drastic restriction since, as was shown in \cite{kleinert}, this is their value at the infrared-stable fixed point anyhow. Of course, when deriving the confining string from compact QED, all coefficients $c_k$ are fixed in terms of the only two dimensionless parameters available, $e/\Lambda^{2-D/2}$, with $e$ the QED coupling constant, and the monopole fugacity $z^2$. In particular\,\cite{confstrings1}
\begin{equation}
t={z^2\over 4(2\pi)^{D/2-1}} \tau^{D/2-2} K_{D/2-2} (\tau) \ ,
\label{baretension}
\end{equation}
where $\tau = \Lambda^{2-D/2} z/e$.
In the following we will consider the confining string model at finite temperature in the large-$D$ approximation. In\,\cite{confstrings2} it was shown that the high-temperature limit of confining strings matches the expected high-temperature behaviour of large $N$ QCD\,\cite{polchinski}. Here we will, instead, concentrate on the critical behaviour at the deconfinement transition, where the renormalized string tension vanishes and strings become infinitely long on the cutoff scale.
\section{Finite temperature behaviour}
Following\,\cite{kleinert2, david}, we introduce a Lagrange multiplier
$\lambda^{ab}$ that forces the induced metric
$\partial_a x_\mu \partial_b x_\mu$ to be equal to the intrinsic metric $g_{ab}$. The action becomes thus
\begin{equation}
S\rightarrow S + \int d^2 \xi \sqrt{g} \left[\Lambda^2\lambda^{ab} (\partial_a x_\mu \partial_b x_\mu - g_{ab}
) \right] \ . \label{cmodel}
\end{equation}
We then parametrize the world-sheet in a Gauss map by $x_\mu (\xi) = (\xi_0, \xi_1,
\phi^i(\xi)),\ i = 2,...,D-2$. $\xi_0$ is taken as a periodic coordinate satisfying
$-\beta/2 \leq \xi_0 \leq \beta/2$, with $\beta = 1/T$ and $T$ the temperature, while
$-R/2 \leq \xi_1 \leq R/2$, $R$ being the string length. Finally, the $\phi^i(\xi)$ describe the $D-2$ transverse
fluctuations. We will be looking for a saddle-point solution with a diagonal metric $g_{ab}
= {\rm diag}\ (\rho_0, \rho_1)$, and a Lagrange multiplier of the form
$\lambda^{ab} = {\rm diag}\ (\lambda_0/\rho_0, \lambda_1/\rho_1)$.
With this Ansatz the action becomes the combination of a tree-level contribution $S_0$ and a fluctuations contribution $S_1$,
\begin{eqnarray}
S &&= S_0 + S_1\nonumber \\
S_0 &&= A_{\rm ext}\ \Lambda^2 \sqrt{\rho_0 \rho_1} \left[ \right. t \left( {\rho_0 + \rho_1
\over \rho_0 \rho_1} \right) + \lambda_0 \left( { 1 -\rho_0 \over \rho_0}
\right)
+ \lambda_1 \left( { 1 -\rho_1 \over \rho_1} \right) \left. \right] \nonumber \ ,\\
S_1&&= \int d^2 \xi \sqrt{g} \left[ g^{ab}\partial_a \phi^i V_n ({\cal D}^2)
\partial_b \phi^i + \Lambda^2 \lambda^{ab} \partial_a \phi^i \partial_b \phi^i \right]\ ,
\end{eqnarray}
with $\beta R = A_{\rm ext}$ the extrinsic area in coordinate space.
Integrating over the transverse fluctuations in the limit $R \to
\infty$ we get
\begin{equation}
S_1 = {D - 2\over 2} R \sqrt{\rho_1} \sum_{l = - \infty}^{+ \infty} \int {d
p_1\over 2 \pi} \ln\left[
(p_1^2 \lambda_1 + \omega_l^2 \lambda_0) \Lambda^2 + p^2 V_n(p^2) \right] \ ,
\end{equation}
where $p^2 = p_1^2 + \omega_l^2$, and $\omega_l = {2 \pi \over \beta \sqrt{\rho_0} } l$.
We will now focus on temperatures such that
\begin{equation}
{c_{2n}\over \Lambda^{4n-2}} {1\over \beta^{4n}} \gg \Lambda^2 t + \sum_{ k = 1 }^{2n -1} {c_{k} \over \Lambda^{2k-2}}
{1\over \beta^{2k}}\ .
\label{valid}
\end{equation}
In this case the highest-order term in the derivative expansion dominates the one-loop term $S_1$
when $ l \neq 0$. This $l \neq 0$ contribution can be computed by using analytic regularization and analytic continuation of the expression $\sum_{n =1}^\infty n^{-z} = \zeta(z)$ for the Riemann zeta function, with $\zeta(-1) = -
1/12$,
\begin{eqnarray}
&&{D - 2\over 2} R \sqrt{\rho_1} \sum_{l = - \infty}^{+ \infty} \int {d
p_1\over 2 \pi}\ {\rm ln}\ {c_{2n}\over \Lambda^{4n-2}}
\left( \omega_l^2 + p_1^2 \right)^{2n +1}\nonumber \\
&&= {D - 2\over 2}\sqrt{\rho_1 \over \rho_0} (2n + 1) 4 \pi {R\over \beta}
\sum_{l = 1}^{+ \infty}\sqrt{l^2}\\
&&= -{D - 2\over 2}\sqrt{\rho_1 \over \rho_0}{(2n + 1) \pi \over3}{ R\over \beta}\ .
\end{eqnarray}
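The last equality uses the zeta-regularized value of the divergent sum:

```latex
% Analytic continuation used in the l \neq 0 sum:
\begin{equation*}
\sum_{l=1}^{\infty} \sqrt{l^2} \;=\; \sum_{l=1}^{\infty} l
\;=\; \zeta(-1) \;=\; -\frac{1}{12}\,,
\end{equation*}
% so that the prefactor 4\pi \cdot (-1/12) = -\pi/3 produces the factor
% -(2n+1)\pi/3 appearing in the final line.
```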
The calculation of the $l=0$ contribution
\begin{equation}
S_1 = {D - 2\over 2} R \sqrt{\rho_1} \int {dp_1\over 2 \pi} \ln \left( p_1^2 \bar V_n(p_1^2)\right) \ ,
\label{lzero}
\end{equation}
with
\begin{equation}
\bar V_n(p_1^2) =
\left( \Lambda^2 (t + \lambda_1) + \sum_{k =1}^{2n}
{c_{k} \over \Lambda^{2k-2}} p_1^{2k}\right) \ ,\label{newpot}
\end{equation}
requires a bit more care. Since we have chosen $c_k=0$ for all odd $k$ and have imposed the physical
requirement that the model is ghost- and tachyon-free,
all pairs of complex-conjugate zeros of $\bar V_n\left( p_1^2
\right)$ lie on the imaginary axis and we can represent\,\cite{kleinert} $\bar V_n
\left( p_1^2 \right)$ as
\begin{equation}
{\Lambda^{4n-2}\over c_{2n}} \ \bar V_n\left( p_1^2
\right) = \prod_{k=1}^n \left( p_1^4 + \alpha_k^2 \Lambda^4 \right) \ ,
\label{poten}
\end{equation}
with purely numerical coefficients $\alpha_k$. Using again an analytic regularization and the analytic continuation of the Riemann zeta function we obtain
\begin{eqnarray}
S_1 &&= {D - 2\over 2} R \sqrt{\rho_1} \sum_{k = 1}^{n} \int {d
p_1\over 2 \pi} \ln\left( p_1^4 + \alpha_k^2 \Lambda^4 \right) \nonumber \\
&&= {D - 2\over 2} R \sqrt{\rho_1} \sum_{k = 1}^{n} \int {d
p_1\over 2 \pi} 2 {\rm Re} \ln\left( p_1^2 + i \alpha_k \Lambda^2 \right)
\nonumber \\
&&= {D - 2\over 2} R \sqrt{\rho_1} \sum_{k = 1}^{n} \Lambda \sqrt{2 \alpha_k} \ .
\end{eqnarray}
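The momentum integral in the last step is evaluated with the standard analytically regularized formula (a sketch; all polynomial divergences are discarded):

```latex
% Differentiating with respect to m^2,
%   d/d(m^2) \int dp_1/(2\pi)\,\ln(p_1^2+m^2)
%     = \int dp_1/(2\pi)\,(p_1^2+m^2)^{-1} = 1/(2m),
% so, up to a discarded constant,
\begin{equation*}
\int \frac{dp_1}{2\pi}\,\ln\!\left(p_1^2+m^2\right) = m \,.
\end{equation*}
% Setting m^2 = i\,\alpha_k\Lambda^2 gives
%   2\,{\rm Re}\,m = 2\sqrt{\alpha_k}\,\Lambda\cos(\pi/4)
%                  = \sqrt{2\alpha_k}\,\Lambda ,
% which is the result quoted above.
```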
Summing finally the $l=0$ and the $l \neq 0$ contribution we obtain the full action
\begin{equation}
S = S_0 + {D - 2\over 2} R \sqrt{\rho_1} \left[\sum_{k = 1}^{n} \Lambda
\sqrt{2 \alpha_k} -
{(2n + 1) \pi \over 3 \sqrt{\rho_0}}{ 1\over \beta}\right] \ .
\label{finac}
\end{equation}
The coefficients in the representation (\ref{poten}) are not entirely free. Indeed, in order to match (\ref{newpot}), the
$p_1$-independent term must satisfy
\begin{equation}
\prod_{k=1}^n \alpha_k^2 \Lambda^4 = {\Lambda^{4n}\over c_{2n}}(t + \lambda_1)\ .
\label{gamcon}
\end{equation}
For simplicity's sake we shall also assume that all $\alpha_k$ are equal, implying essentially that there is a unique resonance that determines the fine details of the string oscillations. Then (\ref{gamcon}) implies:
\begin{equation}
\alpha^2_k = ( t + \lambda_1)^{1/n} \alpha^2\ ,\ \ \ \alpha = \left({1\over c_{2n}}\right)^{1/2n} \ .
\label{coeff}
\end{equation}
Since the fluctuations contribution $S_1$ is proportional to $(D-2)$, it is forced onto its ground state in the large $D$ limit. In this limit the metric components $\rho_0$ and $\rho_1$ and the Lagrange multipliers $\lambda_0$ and $\lambda_1$ take on their classical values obtained by setting the respective derivatives of the total action to zero. This gives the four large-$D$ gap equations
\begin{eqnarray}
&&{ 1 -\rho_0 \over \rho_0} = 0 \ , \label{gap1}\\
&&{1 \over \rho_1} = 1 - {D - 2\over 2} { 1 \over 4 \beta \Lambda}
\sqrt{2 \alpha} (\lambda_1 + t )^{1/4n -1} \ , \label{gap2} \\
&&\left[ {1\over 2}(t - \lambda_1) + {1 \over 2\rho_1}(\lambda_1 + t ) - t -
\lambda_0 \right] + {D - 2\over 2} { (2n +1)\pi \over 6 \beta^2 \Lambda^2} = 0 \ ,\label{gap3} \\
&&(t - \lambda_1) - {1 \over \rho_1}(\lambda_1 + t ) +
\nonumber \\
&&+
{D - 2\over 2} {1 \over \beta \Lambda} \left[\sqrt{2 \alpha}\ n \left( \lambda_1
+ t \right)^{1/4n} - {\pi (2n+1) \over 3 \beta \Lambda } \right] = 0
\label{gap4} \ .
\end{eqnarray}
Inserting (\ref{gap4}) and (\ref{gap1}) into (\ref{finac}) and using $\rho_0 =1$ from
(\ref{gap1}) we obtain the action in the form
\begin{equation}
S= A_{\rm ext}\ {\cal T} \ ,
\label{effac}
\end{equation}
with ${\cal T} = 2\Lambda^2 (\lambda_1 + t )/\sqrt{\rho_1}={\cal T}_0/\sqrt{\rho_1}$ representing the renormalized string tension, expressed in terms of the zero-temperature renormalized string tension ${\cal T}_0$. Eq. (\ref{gap2}) for the spatial metric can be reformulated in the limit $n \to \infty$ as
\begin{equation}
{1\over \rho_1} = 1- {D-2 \over 2} T {\sqrt{2\alpha} \Lambda \over 4} {1\over (\lambda_1+t) \Lambda ^2} \ .
\label{spatialmetric}
\end{equation}
From here we recognize that the renormalization of the string requires taking the simultaneous limits $(\lambda_1 + t) \to 0$, $\sqrt{2\alpha} \to 0$ and $\Lambda \to \infty$ so that $(\lambda_1 + t) \Lambda^2$ and $\sqrt{2\alpha} \Lambda$ stay finite. In this case both the $\rho_1$ metric element and the renormalized string tension acquire finite values. The scale $\sqrt{2\alpha} \Lambda$ represents the renormalized mass $M$ of the string resonance that determines, together with ${\cal T}_0$, all physical properties of the string. In particular, the finite-temperature deconfinement critical behaviour is obtained in the limit $(\lambda_1+t) \to 0$. In this limit the strings become infinitely long on the scale of the cutoff and the particles at their ends are liberated. The critical behaviour is embodied by the behaviour of the (dimensionless) correlation length $\xi = 1/\sqrt{\lambda_1 +t} $ near the critical temperature.
\section{Critical behaviour}
In order to study the critical behaviour we derive the gap equation for $(\lambda_1 + t)$ alone, by substituting (\ref{gap2}) into (\ref{gap4}). This gives
\begin{equation}
(\lambda_1 + t ) - {D - 2\over 2} {4 n +1 \over 8 \beta \Lambda}\sqrt{2\alpha}
\left( \lambda_1 + t \right)^{1\over 4n} +\ {D - 2\over 2}{2 n +1 \over 6}
{ \pi \over \beta^2\Lambda^2 } - t = 0 \ .
\label{quart}
\end{equation}
For $(\lambda_1 + t) \ll 1$ the first term can be neglected with respect to the second for large $n \gg 1$, which gives
\begin{equation}
4n\sqrt{2\alpha} (\lambda_1 + t)^{1\over 4n} = n {8\pi\over 3} {T\over \Lambda} -{2\over D-2} {8t\Lambda \over T} \ .
\label{firstmod}
\end{equation}
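In more detail (a consistency sketch using the same approximations), neglecting the $(\lambda_1+t)$ term of (\ref{quart}) gives

```latex
\begin{equation*}
\frac{D-2}{2}\,\frac{4n+1}{8\beta\Lambda}\,\sqrt{2\alpha}\,
(\lambda_1+t)^{\frac{1}{4n}}
= \frac{D-2}{2}\,\frac{2n+1}{6}\,\frac{\pi}{\beta^2\Lambda^2} - t \,.
\end{equation*}
% Multiplying by 16\beta\Lambda/(D-2) and using 4n+1 \simeq 4n,
% 2n+1 \simeq 2n for n >> 1, together with \beta = 1/T:
\begin{equation*}
4n\sqrt{2\alpha}\,(\lambda_1+t)^{\frac{1}{4n}}
\simeq n\,\frac{8\pi}{3}\,\frac{T}{\Lambda}
- \frac{2}{D-2}\,\frac{8t\Lambda}{T}\,,
\end{equation*}
% which is the equation displayed above.
```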
Dividing by $\sqrt{2\alpha}$ and subtracting on both sides of the equation a term $4n$ we obtain
\begin{equation}
4n\left[ (\lambda_1 + t)^{1\over 4n} -1\right] = {8\pi n\over 3} {T\over M} \left[ 1- {3 \over (D-2)\pi} {\vartheta \over T^2} -{3 \over 2\pi} {M\over T} \right] \ ,
\label{inter1}
\end{equation}
where $\vartheta = 2t\Lambda^2/n$ is the bare string tension divided by $n$. The expression in square brackets on the right hand side can be formulated as
\begin{equation}
\left[ 1- {3 \over (D-2)\pi} {\vartheta \over T^2} -{3 \over 2\pi} {M\over T} \right] =
\left( 1-{T_{+}\over T} \right) \left( 1+ {T_{-} \over T} \right) \ ,
\label{plusminus}
\end{equation}
where
\begin{equation}
T_{\pm} = {4\over (D-2)} {\vartheta \over M} {1\over \mp1 + \sqrt{1+ {16 \pi \over 3(D-2)} {\vartheta \over M^2}}} \ .
\label{solutions}
\end{equation}
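These expressions can be checked directly (a consistency sketch): viewing the bracket as a quadratic in $1/T$, the two temperatures are fixed by the difference and product of its roots.

```latex
% Expanding (1 - T_+/T)(1 + T_-/T) = 1 - (T_+ - T_-)/T - T_+ T_-/T^2
% and comparing term by term with
% 1 - (3/(2\pi)) M/T - (3/((D-2)\pi)) \vartheta/T^2 requires
\begin{equation*}
T_{+}-T_{-}=\frac{3M}{2\pi}\,,\qquad
T_{+}\,T_{-}=\frac{3\vartheta}{(D-2)\pi}\,.
\end{equation*}
% With s = \sqrt{1+\frac{16\pi}{3(D-2)}\frac{\vartheta}{M^2}}, so that
% s^2-1 = \frac{16\pi}{3(D-2)}\frac{\vartheta}{M^2}, the quoted values
% T_\pm = \frac{4\vartheta}{(D-2)M}\,\frac{1}{\mp 1+s} indeed give
%   T_+ T_-  = \frac{16\vartheta^2}{(D-2)^2 M^2 (s^2-1)}
%            = \frac{3\vartheta}{(D-2)\pi} ,
%   T_+ - T_- = \frac{4\vartheta}{(D-2)M}\,\frac{2}{s^2-1}
%             = \frac{3M}{2\pi} .
```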
From here we read off the critical deconfinement temperature as $T_c = T_{+}$. As expected it is determined by a combination of the two mass scales $\sqrt{\vartheta}$ and $M$ in the model. Expanding the left-hand side of (\ref{inter1}) around this critical temperature we get
\begin{equation}
4n\left[ (\lambda_1 + t)^{1\over 4n} -1\right] = {8\pi \over 3} \left[ 1+ \left( {T_{-} \over T} \right) \right] \left( {-n\Delta T\over M} \right) - O\left( n\Delta T^2 \right) \ ,
\label{inter2}
\end{equation}
where $\Delta T = T_c-T$.
Taking the limit $n\to\infty$, one obtains on the left-hand side ${\rm ln}(\lambda_1+t)$. The right-hand side, however, requires more care. Clearly, we recognize immediately that increasing $n\to \infty$ drives the string to its critical point $\Delta T=0$. To establish in detail how this occurs, however, we must resort to the behaviour (\ref{baretension}) of the bare string tension. For the relevant regime of small $\tau$ (strong coupling and low monopole fugacity) the quantity $\vartheta$ appearing in the equations above is given by
\begin{eqnarray}
\vartheta_{D=3} &&= {ze\over 8} {\Lambda^{3\over 2} \over n} \ ,
\nonumber \\
\vartheta_{D=4} &&= {z^2\over 4\pi} {\rm ln}\left( {2e\over z}\right) {\Lambda^2\over n} \ .
\label{scalingzero}
\end{eqnarray}
The zero-temperature fixed point being given by $\ell = 1/\Lambda =0$, this shows that $n$ must scale as $n\propto \ell^{-3/2}$ for $D=3$ and $n\propto \ell^{-2}$ for $D=4$ in the approach to the fixed point. Correspondingly, we have $n\propto (\Delta T)^{-3/2}$ for $D=3$ and $n\propto (\Delta T)^{-2}$ for $D=4$ when approaching the finite-temperature critical point. Therefore the gap equation
\begin{equation}
{\rm ln}(\lambda_1 + t) = {8\pi \over 3} \left[ 1+ \left( {T_{-} \over T} \right) \right] {\rm lim}_{n\to \infty} \left( {-n\Delta T\over M} \right) + \dots \ ,
\label{finalgap}
\end{equation}
leads directly to the critical scaling behaviours given by Eq.\,(\ref{bkt}) and Eq.\,(\ref{vft}) when approaching the deconfinement transition from below,
\iffalse
\begin{eqnarray}
\xi_{d=2} &&= {\rm e}^{c \sqrt{T_c\over \Delta T}} \ ,
\nonumber \\
\xi_{d=3} &&= {\rm e}^{c {T_c\over \Delta T}} \ .
\label{finalcritical}
\end{eqnarray}
\fi
reproducing thus the BKT criticality predicted in $d=2$ by the Svetitsky-Yaffe conjecture\,\cite{yaffe} and the VFT criticality of the deconfinement transition in $d=3$.
Remarkably, exactly this 3d-like signature has been
recently observed\,\cite{ovadia} for the finite-temperature insulating phase in disordered InO films whose thickness is much larger than the superconducting coherence length. While it seems premature to view this result as conclusive evidence, one can view it as a possible indication of linear confinement in 3d superinsulators. Another important corollary of our results is that the disorder strength in disordered superconducting films plays the role of a parameter tuning the strength of the Coulomb interactions, but that disorder in itself is irrelevant for the nature of the various phases around the SIT. Finally, an important and deep implication of our findings is that, since the VFT behavior of Eq.\,(\ref{vft}) is recognized as heralding glassy behavior, our results suggest that, in 3d, topological defects endowed with long-range interactions generate a glassy state without any quenched disorder. Note that the VFT behavior in 3d superinsulators arises as a characteristic of the deconfinement transition of a strongly interacting gauge theory. The putative glassy behavior below this transition heralds the formation of a {\it quantum glass} arising due to the condensation and entanglement of extended string-like topological excitations.
\subsection{Acknowledgments}
M. C. D. thanks CERN, where she completed this work, for kind hospitality. The work at Argonne (V.M.V.) was supported by the U.S. Department of Energy, Office of Science, Materials Sciences and Engineering Division.
\section{Introduction}
Let $(X,\omega)$ be a closed K\"ahler manifold. Calabi initiated the study of K\"ahler metrics with constant scalar curvature (cscK) in a given K\"ahler class $[\omega]$ on $X$. Recently, Chen-Cheng \cite{MR4301557} solved the geodesic stability conjecture and the properness conjecture on the existence of cscK metrics, via establishing new a priori estimates for the 4th order cscK equation \cite{MR4301558}. However, the Yau-Tian-Donaldson conjecture expects the equivalence between the existence of cscK metrics and algebraic stabilities, which requires further understanding of the possible singularity/degeneration developing from the cscK problem. In this article, we work on a priori estimates for the scalar curvature equation of both singular and degenerate K\"ahler metrics.
Before we move further, we fix some notations. We are given an ample divisor $D$ in $X$, together with its associated line bundle $L_D$. We let $s$ be a holomorphic section of $L_D$ and $h$ be a Hermitian metric on $L_D$.
We set $\Omega$ to be a big and semi-positive cohomology class.
The \textbf{singular scalar curvature equation} in $\Omega$ was introduced in \cite{arXiv:1803.09506}, c.f. Definition \ref{Singular cscK eps defn}. In the same article, we derived estimates for its solution when $0<\beta\leq1,$ which results in the existence of cscK metrics on some normal varieties. If we further let $\Omega$ be K\"ahler, the singularities are called cone singularities and the solution of the singular scalar curvature equation is called a \textbf{cscK cone metric}. The uniqueness, existence and the necessary direction of the corresponding log version of the Yau-Tian-Donaldson conjecture were proven in \cite{MR3968885,MR4020314,arXiv:1803.09506,arXiv:2110.02518,MR4088684}. In this article, we continue studying the singular scalar curvature equation, when $$\beta>1.$$
The singular scalar curvature equation has the expression
\begin{equation}\label{Singular cscK}
\omega_\varphi^n=(\omega_{sr}+i\partial\bar\partial\varphi)^n=e^F \omega_{\theta}^n,\quad
\triangle_{\varphi} F=\mathop{\rm tr}\nolimits_{\varphi}\theta-R,
\end{equation}
where $\omega_{sr}$ is a smooth representative of the class $\Omega$ and $R$ is a real-valued function.
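For orientation (a standard observation for coupled systems of this type, under the assumption that $\theta$ denotes the Ricci form of the reference metric $\omega_\theta$), the second equation in \eqref{Singular cscK} states that $R$ is the scalar curvature of $\omega_\varphi$:

```latex
% Since \omega_\varphi^n = e^F \omega_\theta^n, applying
% -i\partial\bar\partial\log to both sides gives
%   {\rm Ric}(\omega_\varphi) = {\rm Ric}(\omega_\theta)
%                               - i\partial\bar\partial F .
% Tracing with respect to \omega_\varphi,
\begin{equation*}
S(\omega_\varphi)
= \mathop{\rm tr}\nolimits_{\varphi}{\rm Ric}(\omega_\theta)
- \triangle_{\varphi}F
= \mathop{\rm tr}\nolimits_{\varphi}\theta - \triangle_{\varphi}F \,,
\end{equation*}
% so \triangle_\varphi F = tr_\varphi\theta - R is exactly the
% prescribed scalar curvature condition S(\omega_\varphi) = R.
```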
We introduce two parameters to approximate the singular equation \eqref{Singular cscK}: $t$ for the loss of positivity and $\epsilon$ for the degeneration, see Definition \ref{Singular cscK t eps defn}.
We say a solution $(\varphi_{t,\epsilon},F_{t,\epsilon})$ to the approximate equation \eqref{Singular cscK t eps} is \textbf{almost admissible}, if it has uniform weighted estimates independent of $t,\epsilon$, including the $L^\infty$-estimates of both $\varphi_{t,\epsilon}$ and $F_{t,\epsilon}$, the gradient estimate of $\varphi_{t,\epsilon}$ and the $W^{2,p}$-estimate. The precise definition is given in Definition \ref{a priori estimates approximation singular}.
\begin{thm}\label{intro almost admissible}
The solution to the approximate singular scalar curvature equation \eqref{Singular cscK t eps} with bounded entropy is almost admissible.
\end{thm}
The detailed statement of these estimates will be given in the $L^\infty$-estimates (\thmref{L infty estimates Singular equation}), the gradient estimate of $\varphi$ (\thmref{gradient estimate}), the $W^{2,p}$-estimate (\thmref{w2pestimates degenerate Singular equation}).
The singular scalar curvature equation extends the singular Monge-Amp\`ere equation. Actually, the reference metric $\omega_\theta$ in the singular scalar curvature equation \eqref{Singular cscK}, is defined to be a solution to the singular Monge-Amp\`ere equation
\begin{align}\label{Rictheta intro}
\omega_\theta^n=(\omega_{sr}+i\partial\bar\partial\varphi_{\theta})^n=|s|_h^{2\beta-2}e^{h_{\theta}}\omega^n.
\end{align} The precise definition is given in Definition \ref{Reference metric singular} in Section \ref{Reference metrics}.
\begin{rem}
There is a large literature on the singular Monge-Amp\`ere equations \cite{MR2746347,MR944606,MR2647006,MR2505296,MR2233716,MR4319060} and the corresponding K\"ahler-Ricci flow \cite{MR3956691,MR2869020,MR3595934,MR4157554}, see also the references therein. However, more effort is required to tackle the new difficulties arising from deriving estimates for the scalar curvature equation \eqref{Singular cscK}, because of its 4th order nature. The tool we apply here is the integration method, instead of the maximum principle.
\end{rem}
\begin{rem}
We extend Chen-Cheng's estimates \cite{MR4301557} to the singular scalar curvature equation \eqref{Singular cscK}.
\end{rem}
\bigskip
If we focus on the degeneration, setting $\Omega$ to be a K\"ahler class, then the singular equation \eqref{Singular cscK} is named the \textbf{degenerate scalar curvature equation}. The precise definition is given in Definition \ref{Degenerate cscK defn} and the corresponding approximation is stated in Definition \ref{Degenerate cscK 1 approximation}.
The almost admissible estimates (Definition \ref{a priori estimates approximation}) for the approximate solution $\varphi_\epsilon$ are obtained from \thmref{intro almost admissible}, immediately.
In Section \ref{A priori estimates for approximate degenerate cscK equations}, we further show metric equivalence from the volume ratio bound, i.e. to prove the Laplacian estimate for the degenerate scalar curvature equation, \thmref{cscK Laplacian estimate}.
An almost admissible function is called \textbf{$\gamma$-admissible}, if it admits a weighted Laplacian estimate, which is defined in Definition \ref{admissible}.
\begin{thm}\label{admissible estimate}
The almost admissible solution to the approximate degenerate scalar curvature equation \eqref{Degenerate cscK approximation} with bounded $\|\partial F_\epsilon\|_{\varphi_\epsilon}$ is
\begin{itemize}
\item
admissible, i.e. $\mathop{\rm tr}\nolimits_\omega\om_{\varphi_\epsilon}\leq C$, when $\beta> \frac{n+1}{2}$.
\item
$\gamma$-admissible for any $\gamma>0$, when $1<\beta\leq\frac{n+1}{2}$.
\end{itemize}
\end{thm}
\begin{rem}
The Laplacian estimate holds for the smooth cscK metric $\beta=1$ in \cite[Theorem 1.7]{MR4301557}, the cscK cone metric when $0<\beta<1$ in \cite[Theorem 5.25]{arXiv:1803.09506} and see also \cite[Question 4.42]{arXiv:1803.09506} for further development. However, when $\beta>1$, the Laplacian estimate is quite different from the one when the cone angle is not larger than $1$.
\end{rem}
For the degenerate scalar curvature equation, the Laplacian estimate is more involved. Our new approach is \textit{an integration method with weights}. The proof is divided into seven main steps.
The first step is to transform the approximate degenerate scalar curvature equation \eqref{Degenerate cscK approximation} into an integral inequality with weights. The second step is to deal with the trouble term containing $\triangle F$ by integration by parts. Unfortunately, this causes new difficulties, because of the loss of the weights, see Remark \ref{4th term}.
The third step is to apply the Sobolev inequality to the integral inequality to conclude a rough iteration inequality. However, weights are lost again in this step, see Remark \ref{Laplacian estimate: lose weight}. Both the fourth and fifth steps are designed to overcome these difficulties. The fourth step aims to construct several inverse weighted inequalities, with the help of two useful parameters $\sigma$ and $\gamma$. The fifth step, in turn, establishes weighted inequalities, by carefully computing the sub-critical exponent for the weights.
At last, with all these efforts, we arrive at an iteration inequality and carry out the iteration argument to conclude the Laplacian estimates.
\bigskip
After we establish the estimation, in Section \ref{Convergence and regularity}, we have two quick applications on the singular/degenerate cscK metrics, which were introduced in \cite{arXiv:1803.09506} as the $L^1$-limit of the approximate solutions, Definition \ref{Singular metric with prescribed scalar curvature}.
One is to show regularities of the singular/degenerate cscK metrics. Precise statements are given in \thmref{existence singular} and \thmref{existence degenerate}.
We see that the potential function of the degenerate cscK metric is almost admissible and the convergence is smooth outside the divisor $D$. We also find that the volume form $\omega^n_\varphi$ is prescribed with the given degenerate type along $D$ as
$
|s|_h^{2\beta-2}\omega^n.
$
The volume ratio bound could be further improved to a global metric bound, under the assumption that the volume ratio of the approximate sequence has bounded gradient. Consequently, the degenerate cscK metric has a weighted complex Hessian on the whole manifold $X$. In particular, when the cone angle is larger than $\frac{n+1}{2}$, the complex Hessian is globally bounded.
The other application is to revisit Yau's result on degenerate K\"ahler-Einstein (KE) metrics in the seminal work \cite{MR480350}.
A special situation of the degenerate scalar curvature equation is the degenerate K\"ahler-Einstein equation
\begin{align}\label{critical pt Ding intro}
\omega^n_\varphi=|s|_h^{2\beta-2} e^{h_\omega-\lambda\varphi +c} \omega^n.
\end{align}
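For orientation (a sketch; the exact normalization of $h_\omega$ follows the convention fixed at the end of this paper's setup), applying $-i\partial\bar\partial\log$ to \eqref{critical pt Ding intro} and using the Poincar\'e-Lelong equation shows that the solution is K\"ahler-Einstein away from $D$, with a prescribed current along $D$:

```latex
% Taking -i\partial\bar\partial\log of
% \omega_\varphi^n = |s|_h^{2\beta-2} e^{h_\omega-\lambda\varphi+c}\omega^n :
\begin{equation*}
{\rm Ric}(\omega_\varphi)
= {\rm Ric}(\omega)
- i\partial\bar\partial h_\omega
- (\beta-1)\, i\partial\bar\partial \log |s|_h^{2}
+ \lambda\, i\partial\bar\partial\varphi \,.
\end{equation*}
% Using i\partial\bar\partial\log|s|_h^2 = 2\pi[D]-\Theta_D and assuming
% h_\omega is normalized so that
%   {\rm Ric}(\omega) - i\partial\bar\partial h_\omega
%     + (\beta-1)\Theta_D = \lambda\,\omega ,
% together with \omega_\varphi = \omega + i\partial\bar\partial\varphi,
% this reduces to
\begin{equation*}
{\rm Ric}(\omega_\varphi) = \lambda\,\omega_\varphi
- 2\pi(\beta-1)[D]\,,
\end{equation*}
% i.e. a Kahler-Einstein metric on X \setminus D with the prescribed
% degeneration along D.
```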
In Section \ref{Reference metric and prescribed Ricci curvature problem}, the variational structure of the degenerate K\"ahler-Einstein metrics will be presented. We will show that they are all critical solutions of a logarithmic variant of Ding's functional. The Ding functional was introduced in \cite{MR967024}.
A direct corollary from \thmref{intro almost admissible} and \thmref{admissible estimate} is
\begin{cor}[Yau \cite{MR480350}]\label{admissible estimate KE}
For any $\beta\geq1$, the solution $\varphi_\epsilon$ to the degenerate K\"ahler-Einstein equation \eqref{critical pt Ding intro} has $L^\infty$-estimate and bounded Hessian.
\end{cor}
\begin{rem}
The precise statement is given in \thmref{KE Laplacian}.
\end{rem}
For the degenerate Monge-Amp\`ere problem, the difficult fourth order nonlinear term $\triangle F$ in the scalar curvature problem becomes the second order term $\triangle \varphi$, which can be controlled directly; compare Proposition \ref{Laplacian estimate: integration inequality pro} in Section \ref{Step 2} with \eqref{KE tri F} in Section \ref{Log Kahler Einstein metric}.
\begin{rem}
As a result, Yau's theorem for degenerate K\"ahler-Einstein metrics is recovered: when $\lambda\leq 0$, the degenerate Monge-Amp\`ere equation \eqref{critical pt Ding intro} admits a solution which has a bounded complex Hessian and is smooth outside the divisor $D$.
However, the method we apply to derive the Laplacian estimates is the integration method, which is different from Yau's maximum principle.
\end{rem}
Furthermore, the integration method with weights is also applied to derive the gradient estimates in Section \ref{Gradient estimate of vphi}.
\begin{rem}Our results also extend the gradient and Laplacian estimates for the non-degenerate Monge-Amp\`ere equation, which were obtained by the integral method in \cite{MR2993005}.
\end{rem}
Finally, as a continuation of Question 1.14 in \cite{MR4020314} and Question 1.9 in \cite{arXiv:1803.09506}, we propose the following uniqueness question.
\begin{ques}
Is the $\gamma$-admissible degenerate cscK metric constructed above unique (up to automorphisms)?
\end{ques}
It is worthwhile to compare this question with its counterpart for cone metrics \cite{MR3496771,MR3405866,MR4020314}.
\begin{rem}
On Riemann surfaces, there are intensive studies of constant curvature metrics with large cone angles; see \cite{MR3990195,MR3340376} and references therein.
\end{rem}
\section{Degenerate scalar curvature problems}\label{cscK}
We denote the curvature form of $h$ to be
$
\Theta_D=-i\partial\bar\partial\log h,
$
which represents the class $C_1(L_D)$.
The Poincar\'e-Lelong equation gives us that
\begin{align}\label{PL}
2\pi[D]=i\partial\bar\partial\log|s|_h^2+\Theta_D=i\partial\bar\partial\log|s|^2.
\end{align}
In this section, we assume that the given K\"ahler class $\Omega$ is proportional to $$C_1(X,D):=C_1(X)-(1-\beta)C_1(L_D).$$ We let $\lambda$ be a constant such that
\begin{align}\label{cohomology condition}
C_1(X,D)=\lambda \Omega .
\end{align}
This cohomology condition \eqref{cohomology condition} implies the existence of a smooth function $h_\omega$ such that
\begin{align}\label{h omega}
Ric(\omega)=\lambda\omega+(1-\beta)\Theta_D+i\partial\bar\partial h_\omega.
\end{align}
\subsection{Critical points of the log Ding functional}\label{Critical points of the log Ding functional}
The log Ding functional is a modification of the Ding functional \cite{MR967024} by adding the weight $|s|_h^{2\beta-2}$.
\begin{defn}For all $\varphi$ in $\mathcal H_\Omega$, we define the \textbf{log Ding functional} to be
\begin{align*}
F_\beta(\varphi)=-D_{\omega}(\varphi)-\frac{1}{\lambda}\log(\frac{1}{V}\int_X e^{h_\omega-\lambda\varphi}|s|_h^{2\beta-2}\omega^n).
\end{align*}
Here
$
D_{\omega}(\varphi)
:=\frac{1}{V}\frac{1}{n+1}\sum_{j=0}^{n}\int_{X}\varphi\omega^{j}\wedge\omega_{\varphi}^{n-j} .
$
\end{defn}
We compute the critical points of the log Ding functional.
\begin{prop}
The 1st variation of the log Ding functional at $\varphi$ is
\begin{align*}
\delta F_\beta(u)&=-\frac{1}{V}\int_X u \omega_\varphi^n+\left(\int_Xe^{h_\omega-\lambda\varphi}|s|_h^{2\beta-2}\omega^n\right)^{-1} \int_X u e^{h_\omega-\lambda\varphi}|s|_h^{2\beta-2}\omega^n.
\end{align*}
The critical point of the log Ding functional satisfies the following equation
\begin{align}\label{critical pt Ding}
\omega^n_\varphi
&=|s|_h^{2\beta-2} \frac{V\cdot e^{h_\omega-\lambda\varphi } \omega^n}{\int_Xe^{h_\omega-\lambda\varphi} |s|_h^{2\beta-2} \omega^n}.
\end{align}
\end{prop}
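For the reader's convenience, we sketch this computation, using the standard variational formula $\delta D_{\omega}(u)=\frac{1}{V}\int_X u\,\omega_\varphi^n$:
\begin{align*}
\delta F_\beta(u)&=-\frac{1}{V}\int_X u\,\omega_\varphi^n-\frac{1}{\lambda}\cdot\frac{\int_X (-\lambda u) e^{h_\omega-\lambda\varphi}|s|_h^{2\beta-2}\omega^n}{\int_Xe^{h_\omega-\lambda\varphi}|s|_h^{2\beta-2}\omega^n},
\end{align*}
which is the stated first variation. Setting $\delta F_\beta(u)=0$ for all test functions $u$ yields \eqref{critical pt Ding}.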
\begin{defn}[Log K\"ahler-Einstein metric]
We call a solution of \eqref{critical pt Ding} a \textbf{log K\"ahler-Einstein metric}.
When $0<\beta<1$, it is called K\"ahler-Einstein cone metric, c.f. \cite{MR3761174,MR3911741}.
\end{defn}
Further computation shows that the log K\"ahler-Einstein metric satisfies the following identity
\begin{align}\label{Ric log KE}
Ric(\omega_\varphi)=\lambda\omega_\varphi+2\pi (1-\beta)[D].
\end{align}
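Indeed, taking $-i\partial\bar\partial\log$ of both sides of \eqref{critical pt Ding} and using $Ric(\omega)-i\partial\bar\partial h_\omega=\lambda\omega+(1-\beta)\Theta_D$ together with the Poincar\'e-Lelong equation \eqref{PL}, we compute
\begin{align*}
Ric(\omega_\varphi)&=Ric(\omega)-i\partial\bar\partial h_\omega+\lambda i\partial\bar\partial\varphi+(1-\beta)i\partial\bar\partial\log|s|_h^2\\
&=\lambda\omega_\varphi+(1-\beta)\Theta_D+(1-\beta)(2\pi[D]-\Theta_D)\\
&=\lambda\omega_\varphi+2\pi(1-\beta)[D].
\end{align*}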
\begin{prop}
The 2nd variation of the log Ding functional at $\varphi$ is
\begin{align*}
&\delta^2 F_\beta(u,v)=\frac{1}{V}\int_X (\partial u,\partial v)_\varphi\omega_\varphi^n\\
&+\lambda\left\{\frac{\int_X u e^{h_\omega-\lambda\varphi}|s|_h^{2\beta-2}\omega^n \int_Xv e^{h_\omega-\lambda\varphi}|s|_h^{2\beta-2}\omega^n}{(\int_Xe^{h_\omega-\lambda\varphi}|s|_h^{2\beta-2}\omega^n)^{2}}
-\frac{\int_X uv e^{h_\omega-\lambda\varphi}|s|_h^{2\beta-2}\omega^n}{\int_Xe^{h_\omega-\lambda\varphi}|s|_h^{2\beta-2}\omega^n}\right\}.
\end{align*}
\end{prop}
\begin{proof}
We continue the preceding computation to see that
\begin{align*}
&\delta^2 F_\beta(u,v)=-\frac{1}{V}\int_X u \triangle_\varphi v \omega_\varphi^n\\
&-\left(\int_Xe^{h_\omega-\lambda\varphi}|s|_h^{2\beta-2}\omega^n\right)^{-2} \int_X (-\lambda v) e^{h_\omega-\lambda\varphi}|s|_h^{2\beta-2}\omega^n \int_X u e^{h_\omega-\lambda\varphi}|s|_h^{2\beta-2}\omega^n\\
&+\left(\int_Xe^{h_\omega-\lambda\varphi}|s|_h^{2\beta-2}\omega^n\right)^{-1} \int_X u (-\lambda v) e^{h_\omega-\lambda\varphi}|s|_h^{2\beta-2}\omega^n.
\end{align*}
So the conclusion follows directly.
\end{proof}
Then inserting the critical point equation \eqref{critical pt Ding} into the 2nd variation of the log Ding functional, we have the following corollaries.
\begin{cor}\label{the 2nd variation of the log Ding}
At a log K\"ahler-Einstein metric, the 2nd variation of the log Ding functional becomes
\begin{align*}
\delta^2 F_\beta(u,v)
&=\frac{1}{V}\int_X (\partial u,\partial v)_\varphi\omega_\varphi^n+\lambda\left\{\frac{1}{V^2}\int_X u\omega_\varphi^n\int_X v\omega_\varphi^n-\frac{1}{V}\int_X u v\omega_\varphi^n\right\}.
\end{align*}
\end{cor}
\begin{cor}\label{log Ding convex}The log Ding functional is convex at its critical points, if one of the following conditions holds
\begin{itemize}
\item $C_1(X,D)\leq 0$,\label{convex negative}
\item $X$ is a projective Fano manifold, $D$ lies in the linear system $|K_X^{-1}|$ and $\Omega=C_1(X)$, $0< \beta\leq 1$. \label{convex positive}
\end{itemize}
\end{cor}
\begin{proof}
For the first statement, since $C_1(X,D)=\lambda\Omega$ by \eqref{cohomology condition}, we have $\lambda\leq 0$. The corollary then follows from applying the Cauchy-Schwarz inequality to Corollary \ref{the 2nd variation of the log Ding}.
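Concretely, by the Cauchy-Schwarz inequality,
\begin{align*}
\frac{1}{V^2}\left(\int_X u\,\omega_\varphi^n\right)^2\leq \frac{1}{V}\int_X u^2\,\omega_\varphi^n,
\end{align*}
so the term in the braces of Corollary \ref{the 2nd variation of the log Ding} is nonpositive; since $\lambda\leq 0$, we obtain $\delta^2 F_\beta(u,u)\geq \frac{1}{V}\int_X|\partial u|^2_\varphi\,\omega_\varphi^n\geq 0$.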
From the assumptions of the second statement, we have $C_1(X,D)=\beta C_1(X)=\beta\Omega$ and then $\lambda=\beta$ by \eqref{cohomology condition}. Thus convexity follows from the lower bound of the first eigenvalue of the Laplacian for a K\"ahler-Einstein cone metric \cite{MR3761174}.
\end{proof}
\subsection{Reference metric and prescribed Ricci curvature problem}\label{Reference metric and prescribed Ricci curvature problem}
We choose a smooth $(1,1)$-form $\theta\in C_1(X,D).$
Then there exists a smooth function $h_\theta$ such that
\begin{align}\label{h0}
Ric(\omega)=\theta+(1-\beta)\Theta_D+i\partial\bar\partial h_\theta.
\end{align}
\begin{defn}[Reference metric]\label{Reference metric}
A reference metric $\omega_\theta$ is defined to be the solution of the degenerate Monge-Amp\`ere equation
\begin{align}\label{Rictheta}
\omega^n_\theta=|s|_h^{2\beta-2}e^{h_\theta} \omega^n
\end{align}
under the normalisation condition
\begin{align}\label{Rictheta normalisation}
\int_X |s|_h^{2\beta-2}e^{h_\theta} \omega^n=V.
\end{align}
\end{defn}
The reference metric $\omega_\theta=\omega+i\partial\bar\partial\varphi_\theta$ satisfies the equation for prescribing the Ricci curvature problem
\begin{align}\label{Rictheta ricci}
Ric(\omega_\theta)=\theta+2\pi (1-\beta)[D].
\end{align}
The equation \eqref{Rictheta} is a special case of the degenerate complex Monge-Amp\`ere equation \eqref{critical pt Ding}.
\begin{lem}With the reference metric $\omega_\theta$ in \eqref{Rictheta}, the log Ding functional is rewritten as
\begin{align*}
F_\beta(\varphi)=-D_{\omega}(\varphi)-\frac{1}{\lambda}\log(\frac{1}{V}\int_X e^{h_\omega-h_\theta-\lambda\varphi}\omega_\theta^n).
\end{align*}
The critical point satisfies the following equation
\begin{align}\label{critical pt Ding theta}
\omega^n_\varphi
=\frac{V\cdot e^{h_\omega-h_\theta-\lambda\varphi} \omega^n_\theta}{\int_Xe^{h_\omega-h_\theta-\lambda\varphi}\omega^n_\theta}.
\end{align}
\end{lem}
From the log K\"ahler-Einstein equation \eqref{critical pt Ding} and the reference metric equation \eqref{Rictheta}, we have
\begin{align*}
e^{h_\omega-\lambda\varphi}\omega^n_\theta= e^{h_\omega-\lambda\varphi}|s|_h^{2\beta-2}e^{h_\theta} \omega^n=e^{h_\theta}\omega^n_\varphi.
\end{align*}
The 2nd variation of the log Ding functional at $\varphi$ is
\begin{align*}
\delta^2 F_\beta(u,v)&=\frac{1}{V}\int_X (\partial u,\partial v)_\varphi\omega_\varphi^n\\
&+\lambda\left\{\frac{\int_X u e^{h_\omega-h_\theta-\lambda\varphi}\omega^n_\theta\int_Xv e^{h_\omega-h_\theta-\lambda\varphi}\omega^n_\theta}{(\int_Xe^{h_\omega-h_\theta-\lambda\varphi}\omega^n_\theta)^{2}}
-\frac{\int_X uv e^{h_\omega-h_\theta-\lambda\varphi}\omega^n_\theta}{\int_Xe^{h_\omega-h_\theta-\lambda\varphi}\omega^n_\theta}\right\}.
\end{align*}
\subsection{Degenerate Monge-Amp\`ere equations}
The degenerate Monge-Amp\`ere equations \eqref{critical pt Ding} and \eqref{Rictheta} can be summarised as
\begin{align}\label{general}
\omega^n_\varphi=|s|_h^{2\beta-2} e^{F(x,\varphi)}\omega^n.
\end{align}
When the cone angle $\beta>1$, we define a solution of \eqref{general} as follows.
\begin{defn}\label{admissible}
A function $\varphi$ is said to be $\gamma$-\textbf{admissible} for some $\gamma\geq 0$, if
\begin{itemize}
\item $\mathop{\rm tr}\nolimits_\omega\om_{\varphi}\leq C |s|_h^{-2\gamma}$, on $X$,
\item $\varphi$ is smooth on $M$.
\end{itemize}
Moreover, we say a function $\varphi$ is a $\gamma$-admissible solution to the equation \eqref{general}, if it is a $\gamma$-admissible function and satisfies \eqref{general} on $M$.
When $\gamma=0$, the function $\varphi$ is said to be \textbf{admissible}.
\end{defn}
Yau initiated the study of these degenerate Monge-Amp\`ere equations in his seminal article \cite{MR480350} on the Calabi conjecture.
For the K\"ahler-Einstein cone metrics with cone angle $0<\beta\leq1$, see \cite{MR3761174} and references therein.
\subsubsection{Approximation of the degenerate Monge-Amp\`ere equations}\label{Approximation}
In this section, we discuss some properties of the approximation of the reference metric.
We let
$$
\mathfrak h_\epsilon:=(\beta-1)\log S_\epsilon, \quad S_\epsilon:=|s|^2_h+\epsilon.
$$
We define $\omega_{\theta_\epsilon}$ to be the approximation of the reference metric \eqref{Rictheta},
\begin{align}\label{Rictheta approximation}
\omega^n_{\theta_\epsilon}=e^{h_\theta+\mathfrak h_\epsilon+c_\epsilon} \omega^n
\text{ with }
\int_X e^{h_\theta+\mathfrak h_\epsilon+c_\epsilon} \omega^n=V.
\end{align}
By \eqref{h0}, its Ricci curvature is given by the following formula.
\begin{lem}
\begin{align}\label{Ricomthetaeps}
Ric(\omega_{\theta_\epsilon})&=Ric(\omega)-i\partial\bar\partial h_\theta-i\partial\bar\partial\mathfrak h_\epsilon\\
&=\theta+(1-\beta)\Theta_D+(1-\beta)i\partial\bar\partial\log S_\epsilon.\nonumber
\end{align}
\end{lem}
\begin{lem}\label{h eps}
There exists a nonnegative constant $C$ such that
\begin{align}\label{h eps equation}
CS_\epsilon^{-1}\omega\geq i\partial\bar\partial \log S_\epsilon\geq -\frac{|s|^2_h}{S_\epsilon}\Theta_D\geq -C\omega.
\end{align}
\end{lem}
\begin{proof}
It follows from the calculation
\begin{align*}
i\partial\bar\partial \log S_\epsilon&=\frac{i\partial\bar\partial |s|^2_h}{S_\epsilon}-\frac{i\partial|s|^2_h \bar\partial|s|^2_h }{S_\epsilon^2}.
\end{align*}
The upper bound is immediate; using $|\partial S_\epsilon|_{\omega}\leq C S_\epsilon^{\frac{1}{2}}$, we see that
\begin{align*}
i\partial\bar\partial \log S_\epsilon\leq CS_\epsilon^{-1}\omega.
\end{align*}
Meanwhile, making use of the identity
\begin{align*}
|s|^4_h i\partial\bar\partial \log|s|^2_h&=|s|^2_hi\partial\bar\partial |s|^2_h-i\partial|s|^2_h \bar\partial|s|^2_h,
\end{align*}
we have
\begin{align*}
i\partial\bar\partial \log S_\epsilon&=\frac{i\partial\bar\partial |s|^2_h}{S_\epsilon}
+\frac{|s|^4_h i\partial\bar\partial \log|s|^2_h - |s|^2_hi\partial\bar\partial |s|^2_h }{S_\epsilon^2}\\
&=\frac{\epsilon i\partial\bar\partial |s|^2_h}{S_\epsilon^2}
+\frac{|s|^4_h i\partial\bar\partial \log|s|^2_h }{S_\epsilon^2}.
\end{align*}
Applying the identity above again, this is further reduced to
\begin{align*}
&=\frac{\epsilon |s|^2_h i\partial\bar\partial \log|s|^2_h}{S_\epsilon^2}
+\frac{\epsilon i\partial|s|^2_h \bar\partial|s|^2_h}{|s|^2_hS_\epsilon^2}
+\frac{|s|^4_h i\partial\bar\partial \log|s|^2_h }{S_\epsilon^2}\\
&=\frac{|s|^2_h i\partial\bar\partial \log|s|^2_h }{S_\epsilon}+\frac{\epsilon i\partial|s|^2_h \bar\partial|s|^2_h}{|s|^2_hS_\epsilon^2}
\geq -\frac{|s|^2_h}{S_\epsilon}\Theta_D.
\end{align*}
The last inequality uses $i\partial\bar\partial\log|s|^2_h=-\Theta_D$ away from $D$, by \eqref{PL}, together with the nonnegativity of the middle term.
\end{proof}
\subsubsection{Approximation of the log KE metrics}
Similarly, the approximation for the log KE metric is defined to be
\begin{align}\label{critical pt Ding approximation}
\omega^n_{\varphi_\epsilon}
= e^{h_\omega+\mathfrak h_\epsilon-\lambda\varphi_\epsilon +c_\epsilon} \omega^n,\quad \int_Xe^{h_\omega+\mathfrak h_\epsilon-\lambda\varphi_\epsilon+c_\epsilon} \omega^n=V.
\end{align}
They are critical points of the approximation of the log Ding functional
\begin{align*}
F^\epsilon_\beta(\varphi)=-D_{\omega}(\varphi)-\frac{1}{\lambda}\log[\frac{1}{V}\int_X e^{h_\omega-\lambda\varphi}S_\epsilon^{\beta-1}\omega^n].
\end{align*}
When $\beta>1$, we call an admissible solution to \eqref{critical pt Ding} a degenerate KE metric. If $0<\beta< 1$, the H\"older space $C^{2,\alpha,\beta}$ was introduced in \cite{MR2975584}; a KE cone metric is a $C^{2,\alpha,\beta}$ solution to \eqref{critical pt Ding} which is smooth on $M$.
Corresponding to Corollary \ref{log Ding convex}, we have
\begin{prop}\label{smooth approximation KE}
The following statements on the existence of log KE metrics and their smooth approximations hold.
\begin{itemize}
\item
When $\lambda>0$ and $0<\beta\leq 1$, we have $F^\epsilon_\beta \geq F_\beta$. Furthermore, if $F_\beta$ is proper, then there exists a KE cone metric of cone angle $\beta$, which has smooth approximation \eqref{critical pt Ding approximation}.
\item
When $\lambda<0$ and $\beta> 1$, we have $F^\epsilon_\beta \geq F_\beta$. Furthermore, there exists a degenerate KE metric, which has smooth approximation \eqref{critical pt Ding approximation}.
\end{itemize}
\end{prop}
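A minimal sketch of the comparison $F^\epsilon_\beta\geq F_\beta$, in both cases, from the elementary inequality $S_\epsilon=|s|^2_h+\epsilon\geq |s|^2_h$: when $\beta>1$ and $\lambda<0$,
\begin{align*}
S_\epsilon^{\beta-1}\geq |s|_h^{2\beta-2}
\quad\Longrightarrow\quad
-\frac{1}{\lambda}\log[\frac{1}{V}\int_X e^{h_\omega-\lambda\varphi}S_\epsilon^{\beta-1}\omega^n]
\geq-\frac{1}{\lambda}\log(\frac{1}{V}\int_X e^{h_\omega-\lambda\varphi}|s|_h^{2\beta-2}\omega^n),
\end{align*}
since $-\frac{1}{\lambda}>0$; when $0<\beta\leq 1$ and $\lambda>0$, both the inequality between the integrands and the sign of $-\frac{1}{\lambda}$ reverse, giving the same conclusion.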
\subsection{Degenerate scalar curvature equation}\label{cscK pde}
In this section, we assume $n\geq 2$ and consider the degenerate case when $\beta>1$.
Recall that $\theta$ is a smooth $(1,1)$-form in $C_1(X,D)$ and $\omega_\theta$ is the reference metric defined in \eqref{Rictheta}. We also set $\mathfrak h:=-(1-\beta)\log |s|_h^2$ and define
\begin{align}\label{S const}
\underline S_\beta=\frac{C_1(X,D)[\omega]^{n-1}}{[\omega]^n}.
\end{align}
\begin{defn}\label{Degenerate cscK defn}
The degenerate scalar curvature equation is defined as
\begin{equation}\label{Degenerate cscK}
\omega_{\varphi}^n=e^F \omega_{\theta}^n,\quad
\triangle_{\varphi} F=\mathop{\rm tr}\nolimits_{\varphi}\theta-R.
\end{equation}
Here, the reference metric $\omega_\theta$ is introduced in Definition \ref{Reference metric}.
When $R=\underline S_\beta$, a solution to the degenerate scalar curvature equation is called a degenerate cscK metric.
\end{defn}
Direct computation shows that
\begin{lem}The scalar curvature of a solution to the degenerate scalar curvature equation satisfies
\begin{align}
S(\omega_\varphi)=R \text{ on }M.
\end{align}
\end{lem}
Finally, we close this subsection by making use of the reference metric \eqref{Rictheta} and rewriting \eqref{Degenerate cscK} with respect to the smooth K\"ahler metric $\omega$. We let
\begin{align*}
f=-h_\theta-\mathfrak h, \quad \tilde F=F-f.
\end{align*}
\begin{lem}The degenerate scalar curvature equation satisfies the following equations
\begin{equation}\label{Degenerate cscK 1}
\omega_{\varphi}^n=e^{\tilde F} \omega^n,\quad
\triangle_{\varphi} \tilde F=\mathop{\rm tr}\nolimits_{\varphi}(\theta-i\partial\bar\partial f)-R.
\end{equation}
\end{lem}
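To verify this, note that by \eqref{Rictheta} and $e^{\mathfrak h}=|s|_h^{2\beta-2}$, the reference volume form is $\omega_\theta^n=e^{h_\theta+\mathfrak h}\omega^n=e^{-f}\omega^n$, hence
\begin{align*}
\omega_\varphi^n=e^{F}\omega_\theta^n=e^{F-f}\omega^n=e^{\tilde F}\omega^n,
\end{align*}
while $\triangle_\varphi\tilde F=\triangle_\varphi F-\mathop{\rm tr}\nolimits_\varphi(i\partial\bar\partial f)=\mathop{\rm tr}\nolimits_\varphi(\theta-i\partial\bar\partial f)-R$.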
\subsection{Log $K$-energy}\label{log K energy section}
Motivated by the log $K$-energy for cone angle $\beta$ in $(0,1]$ in \cite{MR4020314}, we define the log $K$-energy for large cone angle $\beta>1$ as follows.
\begin{defn}\label{logKenergy}
The log $K$-energy is defined as
\begin{align}\label{log K energy}
\nu_\beta(\varphi)
&:=E_\beta(\varphi)
+J_{-\theta}(\varphi)+\frac{1}{V}\int_M (\mathfrak h+h_\theta)\omega^n, \quad \forall \varphi\in \mathcal H_\Omega.
\end{align}
Here $\omega_\theta$ is the reference metric, which is the admissible solution to the degenerate complex Monge-Amp\`ere equation \eqref{Rictheta}. Also, the log entropy is defined in terms of the reference metric $\omega_\theta$ as
\begin{align*}
E_\beta(\varphi)=\frac{1}{V}\int_M\log\frac{\omega^n_\varphi}{\omega_\theta^n}\omega_\varphi^{n}
\end{align*}
and the $j_{\chi}$- and $J_{\chi}$-functionals are defined to be
\begin{align*}
j_{\chi}(\varphi):=\frac{1}{V}\int_{X}\varphi
\sum_{j=0}^{n-1}\omega^{j}\wedge
\omega_\varphi^{n-1-j}\wedge \chi,\quad
J_{\chi}(\varphi):=j_{\chi}(\varphi)-\underline{\chi} D_{\omega}(\varphi).
\end{align*}
\end{defn}
\begin{cor}For all $\varphi$ in $\mathcal H_\Omega$, we have
\begin{align*}
\nu_\beta(\varphi)&=\nu_1(\varphi)+(1-\beta)\frac{1}{V}\int_X\log|s|^2_h(\omega^n_\varphi-\omega^n)
+(1-\beta)J_{\Theta_D}(\varphi)\\
&=\nu_1(\varphi)+(1-\beta)\cdot[D_{\omega,D}(\varphi)
-\frac{\mathrm{Vol}(D)}{V}\cdot D_{\omega}(\varphi)].
\end{align*}
Here $\nu_1(\varphi)=E_1(\varphi)
+J_{-Ric(\omega)}(\varphi)$ is the Mabuchi $K$-energy, the entropy of $\omega^n_\varphi$ is
\begin{align*}
&E_1(\varphi)=\frac{1}{V}\int_X \log\frac{\omega^n_{\varphi}}{\omega^n}\omega^n_{\varphi}.
\end{align*} and,
the corresponding volume and the normalisation functional on the divisor $D$ are defined to be
\begin{align*}
\mathrm{Vol}(D)=\int_D \Omega^{n-1},\quad D_{\omega,D}(\varphi)=\frac{n}{V}\int_0^1\int_D \partial_t\varphi \omega_\varphi^{n-1}dt.
\end{align*}
\end{cor}
\begin{cor}\label{K and log K}
Writing in terms of the smooth background metric $\omega$, we have
\begin{align}\label{log K energy equation}
\nu_\beta(\varphi)=E_1(\varphi)
+J_{-\theta}(\varphi)+\frac{1}{V}\int_M (\mathfrak h+h_\theta)(\omega^n-\omega_\varphi^n),
\end{align}for all $\varphi\in \mathcal H_\Omega$.
\end{cor}
We observe that the last term involves $\beta$:
\begin{align}
\frac{-(1-\beta)}{V}\int_M \log |s|_h^2 (\omega^n-\omega_\varphi^n).
\end{align}
\subsection{Approximate degenerate scalar curvature equation}
We first define the approximate degenerate cscK equation.
\begin{defn}
We say $\omega_{\varphi_\epsilon}$ is an approximation of the degenerate scalar curvature equation, if it satisfies the following PDEs
\begin{equation}\label{Degenerate cscK approximation}
\omega_{\varphi_\epsilon}^n=e^{F_\epsilon} \omega_{\theta_\epsilon}^n,\quad
\triangle_{\varphi_\epsilon} F_\epsilon=\mathop{\rm tr}\nolimits_{\varphi_\epsilon}{\theta}-R.
\end{equation}
Particularly, when $R$ is a constant, we call it the approximate degenerate cscK equation.
\end{defn}
Then we write the approximate equation \eqref{Degenerate cscK approximation} in terms of the smooth background metric $\omega$.
\begin{lem}The approximate degenerate scalar curvature equation satisfies the equations
\begin{equation}\label{Degenerate cscK 1 approximation}
\omega_{\varphi_\epsilon}^n=e^{\tilde F_\epsilon} \omega^n,\quad
\triangle_{\varphi_\epsilon} \tilde F_\epsilon=\mathop{\rm tr}\nolimits_{\varphi_\epsilon}(\theta-i\partial\bar\partial \tilde f_\epsilon)-R.
\end{equation}
Here
$
\tilde F_\epsilon=F_\epsilon-\tilde f_\epsilon,\quad
\tilde f_\epsilon=-h_\theta-\mathfrak h_\epsilon-c_\epsilon.
$
\end{lem}
Section \ref{A priori estimates for approximate degenerate cscK equations} is devoted to proving the Laplacian estimates of the smooth solution $(\varphi_\epsilon,F_\epsilon)$ to the approximate degenerate scalar curvature equations \eqref{Degenerate cscK approximation} or \eqref{Degenerate cscK 1 approximation}.
\subsection{Almost admissible solutions}
In order to clarify the idea of the Laplacian estimates, we introduce the following definition.
\begin{defn}[Almost admissible solution for degenerate equations]\label{a priori estimates approximation}
We say $\varphi_\epsilon$ is an \textbf{almost admissible solution} to the approximate degenerate scalar curvature equation \eqref{Degenerate cscK 1 approximation}, if the following uniform estimates independent of $\epsilon$ hold
\begin{itemize}
\item $L^\infty$-estimates in \thmref{L infty estimates degenerate equation}:
\begin{align}\label{almost admissible C0}
\|\varphi_\epsilon\|_{\infty},\quad \|F_\epsilon\|_{\infty}\leq C;
\end{align}
\item gradient estimate of $\varphi_\epsilon$ in \thmref{gradient estimate}:
\begin{align}\label{almost admissible C1 sigmaD}
S_\epsilon^{\sigma^1_D}\|\partial\varphi_\epsilon\|_{L^\infty(\omega)}^2\leq C, \quad1>\sigma^1_D>\max\{1-\frac{2\beta}{n+2},0\};
\end{align}
\end{align}
\item $W^{2,p}$-estimate in \thmref{w2pestimates degenerate equation}: for any $p\geq 1$,
\begin{align}\label{almost admissible w2p sigmaD}
\int_X (\mathop{\rm tr}\nolimits_{\omega}\omega_{\varphi_\epsilon})^{p} S_\epsilon^{\sigma^2_D}\omega^n\leq C(p),\quad\sigma^2_D:=(\beta-1)\frac{n-2}{n-1+p^{-1}}.
\end{align}
\end{itemize}
\end{defn}
\begin{rem}
These three estimates will be obtained for a more general equation, namely the singular scalar curvature equation introduced in Section \ref{Singular cscK metrics}; see \thmref{L infty estimates Singular equation}, \thmref{gradient estimate} and \thmref{w2pestimates degenerate Singular equation}.
\end{rem}
This definition of the almost admissible solution further gives us the following estimates of $\tilde F_\epsilon$.
\begin{lem}\label{Laplacian estimate: F}
Assume the $L^\infty$-estimate of $F_\epsilon$ holds.
Then there exists a uniform nonnegative constant $C$ depending on $\theta$, $\|h_\theta+c_{\epsilon} \|_{C^2(\omega)}$ and $\Theta_D$ such that
\begin{align*}
&\tilde F_\epsilon\leq C,\quad \triangle \tilde F_\epsilon\geq \triangle F_\epsilon-C,\\
&C[1+(\beta-1)S_\epsilon^{-1}]\mathop{\rm tr}\nolimits_{\varphi_\epsilon}\omega-R\geq \triangle_{\varphi_\epsilon} \tilde F_\epsilon\geq -C \mathop{\rm tr}\nolimits_{\varphi_\epsilon}\omega-R.
\end{align*}
\end{lem}
\begin{proof}
The upper bound of $\tilde F_\epsilon$ follows from the $L^\infty$-estimate of $F_\epsilon$ and
\begin{align*}
\tilde F_\epsilon=F_\epsilon+h_\theta+(\beta-1)\log S_\epsilon+c_\epsilon.
\end{align*}
The remaining results are obtained from the following identities
\begin{align*}
\triangle_{\varphi_\epsilon} \tilde F_\epsilon=\mathop{\rm tr}\nolimits_{\varphi_\epsilon}[\theta+i\partial\bar\partial (h_\theta+\mathfrak h_\epsilon)]-R,\quad
i\partial\bar\partial \tilde F_\epsilon=i\partial\bar\partial (F_\epsilon + h_\theta+\mathfrak h_\epsilon)
\end{align*}
and the lower bound of $i\partial\bar\partial\mathfrak h_\epsilon=(\beta-1)i\partial\bar\partial\log S_\epsilon$ from \lemref{h eps}.
\end{proof}
\begin{rem}
This lemma tells us that, when $\beta>1$, the directions of the inequalities above are exactly opposite to their counterparts when $0<\beta<1$, where $\tilde F_\epsilon$ has a lower bound and both $\triangle_{\varphi_\epsilon} \tilde F_\epsilon$ and $\triangle \tilde F_\epsilon$ have upper bounds.
\end{rem}
\begin{lem}\label{Laplacian estimate: p F}
Assume the gradient estimate of $F_\epsilon$ holds i.e.
\begin{align}\label{almost admissible C1 f sigmaD}
\|\partial F_\epsilon\|_{L^\infty(\omega_{\varphi_\epsilon})}\leq C.
\end{align} Then there exists a uniform constant $C$ depending on $\|h_\theta+c_{\epsilon} \|_{C^1(\omega)}$ and $\sup_X S_\epsilon^{-\frac{1}{2}}|\partial S_\epsilon|_\omega$ such that
\begin{align*}
|\partial F_\epsilon|^2\leq C \mathop{\rm tr}\nolimits_\omega\om_{\varphi_\epsilon},\quad
|\partial\tilde F_\epsilon|^2\leq C[1+\mathop{\rm tr}\nolimits_\omega\om_{\varphi_\epsilon}+\frac{(\beta-1)^2}{S_\epsilon}].
\end{align*}
\end{lem}
\begin{proof}
We first make use of the inequality
\begin{align*}
|\partial F_\epsilon|^2\leq |\partial F_\epsilon|^2_{\varphi_\epsilon} \cdot \mathop{\rm tr}\nolimits_\omega\om_{\varphi_\epsilon}.
\end{align*}
We compute and see that
\begin{align*}
|\partial\tilde F_\epsilon|^2&=|\partial (F_\epsilon+h_\theta+(\beta-1)\log S_\epsilon)|^2\\
&\leq C[1+|\partial F_\epsilon|^2+(\beta-1)^2\frac{|\partial |s|_h^2|^2}{S_\epsilon^2}]\\
&\leq C[1+\mathop{\rm tr}\nolimits_\omega\om_{\varphi_\epsilon}+\frac{(\beta-1)^2}{S_\epsilon}].
\end{align*}
In the last inequality, we use $|\partial S_\epsilon|_\omega\leq C_{2.4} S_\epsilon^{\frac{1}{2}}$.
\end{proof}
\subsection{Approximation of log $K$-energy}In this subsection, we consider the degenerate cscK equation.
\begin{defn}
The approximate degenerate cscK equation is \eqref{Degenerate cscK 1 approximation} with $R=\underline S_\beta$, that is,
\begin{equation}\label{Degenerate cscK 1 approximation const}
\omega_{\varphi_\epsilon}^n=e^{\tilde F_\epsilon} \omega^n,\quad
\triangle_{\varphi_\epsilon} \tilde F_\epsilon=\mathop{\rm tr}\nolimits_{\varphi_\epsilon}(\theta-i\partial\bar\partial \tilde f_\epsilon)-\underline S_\beta.
\end{equation}
\end{defn}
\begin{defn}[Approximate log $K$-energy]\label{log K energy approximation}
The approximate log $K$-energy is defined as
\begin{align}\label{log K energy approximation equation}
\nu^\epsilon_\beta(\varphi)=E_1(\varphi)
+J_{-\theta}(\varphi)-\frac{1}{V}\int_X \tilde f_\epsilon (\omega^n-\omega_\varphi^n), \quad \forall \varphi\in \mathcal H_\Omega.
\end{align}
Here $\tilde f_\epsilon=-(\beta-1)\log S_\epsilon-h_\theta-c_\epsilon$.
\end{defn}
\begin{lem}[\cite{arXiv:1803.09506} Lemma 3.12]\label{1st derivative}
The first derivative of the approximate log $K$-energy is
\begin{align*}
\partial_t\nu^\epsilon_\beta=\frac{1}{V}\int_X \varphi_t\left[\triangle_\varphi \log\frac{\omega^n_\varphi}{\omega^n}-\mathop{\rm tr}\nolimits_\varphi(\theta-i\partial\bar\partial \tilde f_\epsilon)+\underline S_\beta\right]
\omega_\varphi^{n} .
\end{align*}
Its critical point satisfies the approximate degenerate cscK equation \eqref{Degenerate cscK 1 approximation const}.
\end{lem}
\begin{lem}\label{uniform energy bound}
Let $\varphi_\epsilon$ be the solution to the approximate degenerate cscK equation \eqref{Degenerate cscK 1 approximation const} with bounded entropy $E^\beta_{t,\epsilon}=\frac{1}{V}\int_X F_{\epsilon}\omega_{\varphi_{\epsilon}}^n$. Then
\begin{align}
\nu^\epsilon_\beta(\varphi_\epsilon)\geq \nu_\beta(\varphi_\epsilon)-C.
\end{align}
\end{lem}
\begin{proof}
Comparing the log $K$-energy with its approximation \eqref{log K energy approximation equation}, we have
\begin{align*}
\nu_\beta^\epsilon-\nu_\beta=\frac{1}{V}\int_M [(\beta-1)(\log S_\epsilon-\log |s|_h^2)+c_\epsilon](\omega^n-\omega_\varphi^n).
\end{align*}
We see that
\begin{align*}
\frac{\beta-1}{V}\int_X( \log S_\epsilon - \log |s|_h^2)\omega^n\geq 0.
\end{align*}
We use the volume ratio bound from \lemref{Laplacian estimate: F} and the $L^\infty$-estimate of $F$ in \thmref{L infty estimates Singular equation} to show that
\begin{align*}
\frac{\beta-1}{V}\int_X(\log S_\epsilon-\log |s|_h^2 )\omega_{\varphi_\epsilon}^n
&=\frac{\beta-1}{V}\int_X(\log S_\epsilon-\log |s|_h^2)e^{\tilde F_\epsilon} \omega^n\\
&\leq C_1 \frac{\beta-1}{V}\int_X( \log S_\epsilon-\log |s|_h^2) \omega^n\leq C_2.
\end{align*}
Here the constants $C_1, C_2$ are independent of $\epsilon$. Thus the lemma is proved.
\end{proof}
We compute the second variation of the approximate log $K$-energy and obtain the upper bound of $\nu^\epsilon_\beta(\varphi_\epsilon)$.
\begin{prop}\label{upper bound of approximate log K energy prop}
The second derivative of the approximate log $K$-energy at the critical point is
\begin{align*}
\delta^2 \nu^\epsilon_\beta(u,v)
&=\frac{1}{V}\int_X (\partial\p u, \partial\p v)\omega_\varphi^{n}+\frac{1}{V}\int_X [Ric(\omega)-\theta+i\partial\bar\partial\tilde f_\epsilon](\partial u, \partial v)\omega^n_\varphi.
\end{align*}
\end{prop}
Note that, by \lemref{h eps}, we have $i\partial\bar\partial \log S_\epsilon\geq -\frac{|s|^2_h}{S_\epsilon}\Theta_D$ and, by \eqref{Ricomthetaeps},
\begin{align*}
Ric(\omega)-\theta+i\partial\bar\partial\tilde f_\epsilon=Ric(\omega_{\theta_\epsilon})-\theta=(1-\beta)\Theta_D+(1-\beta)i\partial\bar\partial\log S_\epsilon.
\end{align*}
When $\beta\leq 1$, we have
\begin{align*}
Ric(\omega)-\theta+i\partial\bar\partial\tilde f_\epsilon\geq (1-\beta)\Theta_D-(1-\beta) \frac{|s|^2_h}{S_\epsilon}\Theta_D=(1-\beta)\frac{\epsilon}{S_\epsilon}\Theta_D.
\end{align*}
\begin{cor}
The approximate log $K$-energy is convex at its critical points, when $\beta=1$, or $0<\beta< 1$ and $\Theta_D\geq 0$.
\end{cor}
\section{Laplacian estimate for degenerate scalar curvature equation}\label{A priori estimates for approximate degenerate cscK equations}
\begin{thm}[Laplacian estimate]\label{cscK Laplacian estimate}
Suppose that $\varphi_\epsilon$ is an almost admissible solution to \eqref{Degenerate cscK 1 approximation} and the gradient estimate $\|\partial F_\epsilon\|_{L^\infty(\omega_{\varphi_\epsilon})}$ holds.
Assume $\beta>1$ and the degenerate exponent satisfies that
\begin{equation}\label{Laplacian estimate: degenerate exponent}
\left\{
\begin{aligned}
&\sigma_D=0,\text{ when }\beta>\frac{n+1}{2};\\
&\sigma_D>0,\text{ when } \beta\leq\frac{n+1}{2}.
\end{aligned}
\right.
\end{equation}
Then there exists a uniform constant $C$ such that
\begin{align}\label{Laplacian estimate gamma}
\mathop{\rm tr}\nolimits_\omega\om_{\varphi_\epsilon}\leq C \cdot S_\epsilon^{-\sigma_D}.
\end{align}
The uniform constant $C$ depends on the gradient estimate of $\varphi_\epsilon$, the $W^{2,p}$-estimate, $\beta, \sigma_D, c_\epsilon, n$ and
\begin{align*}
&\inf_{i\neq j}R_{i\bar i j\bar j}(\omega), \quad C_S(\omega), \quad \Theta_D,\quad \sup_X S_\epsilon,\quad \sup_X S_\epsilon^{-\frac{1}{2}}|\partial S_\epsilon|_\omega,\quad \|h_\theta+c_{\epsilon} \|_{C^2(\omega)}.
\end{align*}
\end{thm}
\begin{proof}
In this proof, we let $C_1$ be a constant determined in \eqref{Laplacian estimate: C1}. We denote the degenerate exponent $\sigma_D$ by $\gamma$ and also set
\begin{align*}
v:=\mathop{\rm tr}\nolimits_\omega\om_{\varphi_\epsilon},\quad w:=e^{-C_1\varphi_\epsilon} v, \quad u:= S_\epsilon^\gamma w=S_\epsilon^\gamma e^{-C_1\varphi_\epsilon} \mathop{\rm tr}\nolimits_\omega\om_{\varphi_\epsilon} .
\end{align*}
We will omit the lower index $\epsilon$ for convenience. We will apply the integration method and the proof of this theorem will be divided into several steps as follows.
\end{proof}
\subsection{Step 1: integral inequality}
We now transform the approximate degenerate scalar curvature equations \eqref{Degenerate cscK 1 approximation} into an integral inequality.
\begin{prop}[Differential inequality]\label{Laplacian estimate: integral inequality pre prop}
Assume that $\gamma$ is a nonnegative real number.
Then there exist constants $C_{1.4}$ \eqref{Laplacian estimate: C1} and $C_{1.5}$ \eqref{Laplacian estimate: C15} such that the following differential inequality holds,
\begin{align*}
\triangle_\varphi u
\geq C_{1.4}u \mathop{\rm tr}\nolimits_\varphi\omega+e^{-C_1\varphi}\triangle F\cdot S_\epsilon^\gamma-C_{1.5}(1+u).
\end{align*}
\end{prop}
\begin{proof}
According to Yau's computation,
\begin{align}\label{Yau computation}
\triangle_\varphi \log (\mathop{\rm tr}\nolimits_{\omega}\omega_\varphi) &\geq \frac{g_\varphi^{k\bar l}{R^{i\bar j}}_{k\bar l}(\omega)g_{\varphi i\bar j}-S(\omega)+\triangle \tilde F}{\mathop{\rm tr}\nolimits_{\omega}\omega_\varphi}\notag\\
&\geq - C_{1.1}\mathop{\rm tr}\nolimits_\varphi\omega+\frac{\triangle \tilde F}{\mathop{\rm tr}\nolimits_{\omega}\omega_\varphi}.
\end{align} The constant $-C_{1.1}$ is the lower bound of the bisectional curvature of $\omega$, i.e. $\inf_{i\neq j}R_{i\bar i j\bar j}(\omega)$.
Moreover,
\begin{align}\label{Laplacian estimate: laplacian w}
\triangle_\varphi \log w
&\geq (C_1- C_{1.1})\mathop{\rm tr}\nolimits_\varphi\omega+\frac{\triangle \tilde F}{v}-C_1 n.
\end{align}
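Here the intermediate step is standard: since $\log w=-C_1\varphi+\log v$ and $\triangle_\varphi\varphi=n-\mathop{\rm tr}\nolimits_\varphi\omega$, we have
\begin{align*}
\triangle_\varphi\log w=C_1\mathop{\rm tr}\nolimits_\varphi\omega-C_1 n+\triangle_\varphi\log v,
\end{align*}
which combined with \eqref{Yau computation} gives \eqref{Laplacian estimate: laplacian w}.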
Also, due to \lemref{h eps}, there exists $C_{1.2}$ depending on $\Theta_D$ such that
\begin{align}\label{Laplacian estimate: C12}
\triangle_\varphi \log S_\epsilon\geq -C_{1.2}\mathop{\rm tr}\nolimits_\varphi\omega.
\end{align}
Adding these together, we establish the differential inequality for $u= w S_\epsilon^\gamma$,
\begin{align*}
\triangle_\varphi \log u\geq (C_1- C_{1.1}-\gamma C_{1.2})\mathop{\rm tr}\nolimits_\varphi\omega+\frac{\triangle \tilde F}{v}-C_1 n.
\end{align*}Note that here we need the nonnegativity of $\gamma$, i.e. $\gamma\geq 0$.
The lower bound of $\triangle\tilde F\geq \triangle F-C_{1.3}$ follows from Corollary \ref{Laplacian estimate: F}. The constant $C_{1.3}$ is
\begin{align}\label{Laplacian estimate: C13}
-C_{1.3}&:=\inf_X [\triangle(h_\theta+\mathfrak h_\epsilon)]
=\inf_X \triangle h_\theta+(\beta-1)\inf_X\triangle\log S_\epsilon\\
&= \inf_X \triangle h_\theta-(\beta-1)C_{1.2}n\notag.
\end{align}
Choosing sufficiently large nonnegative $C_1$ such that
\begin{align}\label{Laplacian estimate: C1}
C_{1.4}:=C_1-C_{1.1}-\gamma \cdot C_{1.2}\geq1,
\end{align}
we find
\begin{align*}
\triangle_\varphi \log u
\geq C_{1.4}\mathop{\rm tr}\nolimits_\varphi\omega+\frac{\triangle F-C_{1.3}}{v}-C_1 n.
\end{align*}
Therefore, the inequality can be rewritten in the form
\begin{align}\label{Laplacian estimate: tri phi u}
\triangle_\varphi u&= u\cdot[\triangle_\varphi \log u+|\partial\log u|_\varphi^2]\geq u \triangle_\varphi \log u\notag\\
&\geq C_{1.4}u \mathop{\rm tr}\nolimits_\varphi\omega+e^{-C_1\varphi}(\triangle F-C_{1.3})S_\epsilon^\gamma-C_1 n u.
\end{align}
Setting
\begin{align}\label{Laplacian estimate: C15}
C_{1.5}:=2\max\{C_{1.3}e^{-C_1\inf_X\varphi}\sup_X S_\epsilon^\gamma,C_1n\},
\end{align} and inserting it into \eqref{Laplacian estimate: tri phi u}, we thus obtain the expected differential inequality.
\end{proof}
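For the reader's convenience, we spell out the last step of the proof above: when $C_{1.3}\geq 0$, the definition \eqref{Laplacian estimate: C15} of $C_{1.5}$ gives
\begin{align*}
e^{-C_1\varphi}(\triangle F-C_{1.3})S_\epsilon^\gamma-C_1 n u
&\geq e^{-C_1\varphi}\triangle F\cdot S_\epsilon^\gamma-C_{1.3}e^{-C_1\inf_X\varphi}\sup_X S_\epsilon^\gamma-C_1 n u\\
&\geq e^{-C_1\varphi}\triangle F\cdot S_\epsilon^\gamma-\frac{C_{1.5}}{2}(1+u)
\geq e^{-C_1\varphi}\triangle F\cdot S_\epsilon^\gamma-C_{1.5}(1+u),
\end{align*}
while, when $C_{1.3}<0$, the term $-C_{1.3}e^{-C_1\varphi}S_\epsilon^\gamma$ is nonnegative and can simply be dropped.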
We introduce the notation $\tilde u=u+K$ for some nonnegative constant $K$ and set
\begin{align*}
&LHS_1:=\int_X|\partial\tilde u^p|^2S_\epsilon^\gamma\omega^n_\varphi,\quad
LHS_2:=\int_X\tilde u^{2p}u^{\frac{n}{n-1}}S_\epsilon^{-\frac{\gamma+\beta-1}{n-1}}\omega_\varphi^n.
\end{align*}
\begin{prop}[Integral inequality]\label{Laplacian estimate: integration cor}
There exist constants $C_{1.6}(C_1, \sup_X\varphi)$ and $C_{1.7}$, depending on $C_{1.4}$, $\inf_X\varphi$, $\sup_X F$, $\sup_X h_\theta$, $C_1$, $c_\epsilon$, such that
\begin{align}\label{Laplacian estimate: integration cor}
&\frac{2C_{1.6}}{p}LHS_1
+ C_{1.7} LHS_2
\leq RHS:=\mathcal N+C_{1.5}\int_X\tilde u^{2p+1}\omega_\varphi^n,\\
&\mathcal N:=-\int_Xe^{-C_1\varphi}\tilde u^{2p}\triangle F S_\epsilon^{\gamma}e^{\tilde F}\omega^n.\notag
\end{align}
\end{prop}
\begin{proof}
Multiplying the differential inequality in Proposition \ref{Laplacian estimate: integral inequality pre prop} by $\tilde u^{2p}$, $p\geq 1$, and integrating with respect to $\omega^n_\varphi$ over $X$, we obtain
\begin{align*}
&\frac{2}{p}\int_X\tilde u|\partial\tilde u^p|^2_\varphi\omega^n_\varphi
=-\int_X\tilde u^{2p}\triangle_\varphi \tilde u\omega^n_\varphi\\
&\leq -\int_X\tilde u^{2p}[C_{1.4}u \mathop{\rm tr}\nolimits_\varphi\omega+e^{-C_1\varphi}\triangle F S_\epsilon^\gamma-C_{1.5} \tilde u]\omega^n_\varphi.
\end{align*}
Replacing $u$ by $e^{-C_1\varphi}vS_\epsilon^\gamma$ and using the relation between the two norms $|\cdot|_\varphi$ and $|\cdot|$, i.e. $v |\cdot|^2_\varphi\geq |\cdot|^2$, we obtain the lower bound of the gradient term
\begin{align}\label{Laplacian estimate: C16}
\frac{2}{p}\int_X\tilde u|\partial\tilde u^p|^2_\varphi\omega^n_\varphi
&\geq \frac{2}{p}\int_X u|\partial\tilde u^p|^2_\varphi\omega^n_\varphi
=\frac{2}{p}\int_Xe^{-C_1\varphi}v S_\epsilon^\gamma |\partial\tilde u^p|^2_\varphi\omega^n_\varphi\notag\\
&\geq C_{1.6}(C_1,\sup_X\varphi) \frac{2}{p}\int_X |\partial\tilde u^p|^2 S_\epsilon^\gamma\omega^n_\varphi.
\end{align}
Consequently, moving the positive terms to the left hand side of the integral inequality, we conclude that
\begin{align}\label{Laplacian estimate: integral inequality pre}
&\frac{2C_{1.6}}{p}\int_X|\partial\tilde u^p|^2S_\epsilon^\gamma\omega^n_\varphi
+ C_{1.4} \int_X\tilde u^{2p} u \mathop{\rm tr}\nolimits_\varphi\omega\om^n_\varphi
\leq RHS.
\end{align}
Substituting $\tilde F=F+h_\theta+(\beta-1)\log S_\epsilon+c_\epsilon$ into the fundamental inequality
$\mathop{\rm tr}\nolimits_\varphi\omega\geq v^{\frac{1}{n-1}}e^{-\frac{\tilde F}{n-1}}$, we see that
\begin{align*}
\mathop{\rm tr}\nolimits_\varphi\omega
\geq u^{\frac{1}{n-1}}e^{\frac{C_1\varphi}{n-1}}S_\epsilon^{\frac{-\gamma}{n-1}}e^{-\frac{F+h_\theta+(\beta-1)\log S_\epsilon+c_\epsilon}{n-1}}
\geq C_{1.7}u^{\frac{1}{n-1}}S_\epsilon^{-\frac{\gamma+\beta-1}{n-1}}.
\end{align*}
Thus
\begin{align*}
\int_X\tilde u^{2p} u \mathop{\rm tr}\nolimits_\varphi\omega\om^n_\varphi
\geq C_{1.7}\int_X\tilde u^{2p}u^{\frac{n}{n-1}}S_\epsilon^{-\frac{\gamma+\beta-1}{n-1}}\omega_\varphi^n.
\end{align*}
The constant $C_{1.7}$ depends on $n,C_1,\inf\varphi, \sup F,\sup h_\theta,c_\epsilon$.
Therefore, \eqref{Laplacian estimate: integration cor} is obtained by inserting this inequality into \eqref{Laplacian estimate: integral inequality pre}.
\end{proof}
\begin{rem}
The constant $C_{1.3}$ depends on the cone angle $\beta\geq1$.
\end{rem}
\begin{rem}
Actually, we could choose $K=0$ and derive the integral inequality for $u$,
\begin{align*}
\frac{2C_{1.6}}{p}\int_X|\partial u^p|^2S_\epsilon^\gamma\omega^n_\varphi
+ C_{1.4} \int_X u^{2p+1} \mathop{\rm tr}\nolimits_\varphi\omega\om^n_\varphi
\leq RHS.
\end{align*}
We denote $k:=(2p+1)\gamma+(\beta-1)$.
With the help of the $L^\infty$-estimates of $\varphi$ and $F$, this inequality simplifies to
\begin{align}\label{Laplacian estimate: inverse weighted inequalities general}
&\int_Xu^{2p+1}\mathop{\rm tr}\nolimits_\varphi\omega\om^n_\varphi
\geq C_{1.7} \int_Xw^{2p+\frac{n}{n-1}}S_\epsilon^{k-\frac{\beta-1}{n-1}}\omega^n.
\end{align}
\end{rem}
\subsection{Step 2: nonlinear term containing $\triangle F$}\label{Step 2}
The term $\mathcal N$ containing $\triangle F$ in the $RHS$ of the integral inequality \eqref{Laplacian estimate: integration cor} requires further simplification by integration by parts.
\begin{prop}[Integral inequality]\label{Laplacian estimate: integration inequality pro}
There exists a constant $C_{2}$ depending on $\inf_X\varphi$, $\sup_X F$, $\|\partial\varphi\|_{L^{\infty}(\omega)}$, $\|\partial F\|_{L^{\infty}(\omega_\varphi)}$, the constants in \lemref{Laplacian estimate: F} and \lemref{Laplacian estimate: p F}, and the constants $C_{1.5}$, $C_{1.6}$, $C_{1.7}$, $\beta$
such that
\begin{align}
&p^{-1}LHS_1
+ LHS_2 \leq RHS
\leq p C_{2}\cdot RHS_1,\label{Laplacian estimate: integral inequality}\\
&RHS_1:=\int_X \tilde u^{2p}[ u+1+u^{\frac{1}{2}}S_\epsilon^{\frac{\gamma}{2}}+u^{\frac{1}{2}}S_\epsilon^{\frac{\gamma-\sigma_D^1}{2}}
+u^{\frac{1}{2}} S_\epsilon^{\frac{\gamma-1}{2}}
] \omega_\varphi^n.\label{Laplacian estimate: RHS1 defn}
\end{align}
\end{prop}
\begin{proof}
By integration by parts, we split the nonlinear term $\mathcal N$ into four sub-terms,
\begin{align}\label{Laplacian estimate: tri F}
&\mathcal N:=I+II+III+IV
=-C_1\int_X e^{-C_1\varphi}(\partial \varphi,\partial F)\tilde u^{2p}S_\epsilon^{\gamma} e^{\tilde F}\omega^n\\
&+2p\int_Xe^{-C_1\varphi} \tilde u^{2p-1}(\partial \tilde u, \partial F) S_\epsilon^{\gamma} e^{\tilde F}\omega^n\notag\\
&+\int_Xe^{-C_1\varphi}\tilde u^{2p}(\partial F,\partial S_\epsilon^{\gamma}) e^{\tilde F}\omega^n\notag
+\int_Xe^{-C_1\varphi}\tilde u^{2p}(\partial F,\partial \tilde F)S_\epsilon^{\gamma} e^{\tilde F}\omega^n\notag.
\end{align}
The assumption on the gradient estimate of the volume ratio $F$ and \lemref{Laplacian estimate: p F} give us that
$$
|\partial F|\leq C e^{\frac{C_1}{2}\varphi}S_\epsilon^{-\frac{\gamma}{2}}u^{\frac{1}{2}}.
$$
Inserting it, together with the gradient estimate of $\varphi$ in \eqref{almost admissible C1 sigmaD}, i.e. $$|\partial\varphi|\leq C S_\epsilon^\frac{-\sigma^1_D}{2},\quad \sigma^1_D<1,$$ into the first term, we see that
\begin{align*}
I&\leq C_1\int_X e^{-C_1\varphi}|\partial \varphi||\partial F|\tilde u^{2p}S_\epsilon^{\gamma} \omega_\varphi^n
\leq C_{2.1}\int_X e^{-\frac{C_1}{2}\varphi}\tilde u^{2p}u^{\frac{1}{2}}S_\epsilon^{\frac{\gamma-\sigma_D^1}{2}} \omega_\varphi^n,
\end{align*}
where, the constant $C_{2.1}$ depends on $\|S_\epsilon^\frac{\sigma^1_D}{2}\partial\varphi\|_{L^{\infty}(\omega)},\|\partial F\|_{L^{\infty}(\omega_\varphi)}$.
The second term is estimated directly by Young's inequality and the bound on $|\partial F|^2$,
\begin{align*}
II&=2\int_Xe^{-C_1\varphi}\tilde u^{p}(\partial\tilde u^p, \partial F) S_\epsilon^{\gamma} \omega_\varphi^n\\
&\leq \frac{C_{1.6}}{p}\int_X|\partial\tilde u^p|^2S_\epsilon^\gamma\omega^n_\varphi
+\frac{p}{C_{1.6}}\int_Xe^{-2C_1\varphi}\tilde u^{2p} |\partial F|^2 S_\epsilon^{\gamma} \omega^n_\varphi\\
& \leq \frac{C_{1.6}}{p}\int_X|\partial\tilde u^p|^2S_\epsilon^\gamma\omega^n_\varphi
+p C_{2.2}\int_Xe^{-C_1\varphi}\tilde u^{2p}u \omega^n_\varphi.
\end{align*}
In order to bound the third term, we use $|\partial F|$ again and the fact that
\begin{align}\label{Laplacian estimate: C24}
|\partial S^\gamma_\epsilon|\leq \gamma C_{2.4} |S_\epsilon|^{\gamma-\frac{1}{2}}.
\end{align}
As a result, we have
\begin{align*}
III\leq \gamma C_{2.3}\int_Xe^{-\frac{C_1}{2}\varphi} \tilde u^{2p}u^{\frac{1}{2}} S_\epsilon^{\frac{\gamma-1}{2}} \omega^n_\varphi.
\end{align*}
Due to \lemref{Laplacian estimate: p F} again, we get
$$|\partial\tilde F|^2\leq C[1+e^{\frac{C_1}{2}\varphi}S_\epsilon^{-\frac{\gamma}{2}}u^{\frac{1}{2}}+(\beta-1)S_\epsilon^{-\frac{1}{2}}].$$
Then the fourth term is bounded by
\begin{align*}
IV&\leq\int_Xe^{-\frac{C_1}{2}\varphi}\tilde u^{2p}u^{\frac{1}{2}}|\partial \tilde F|S_\epsilon^{\frac{\gamma}{2}} \omega_\varphi^n\\
&\leq C_{2.5}[\int_Xe^{-\frac{C_1}{2}\varphi}\tilde u^{2p}u^{\frac{1}{2}}S_\epsilon^{\frac{\gamma}{2}} \omega_\varphi^n
+\int_X\tilde u^{2p}u\omega_\varphi^n\\
&+(\beta-1)\int_Xe^{-\frac{C_1}{2}\varphi}\tilde u^{2p}u^{\frac{1}{2}}S_\epsilon^{\frac{\gamma-1}{2}} \omega_\varphi^n].
\end{align*}
Inserting these estimates back into \eqref{Laplacian estimate: tri F}, we obtain the bound of $\mathcal N$.
Substituting this bound for $\mathcal N$ into \eqref{Laplacian estimate: integral inequality pre} and noting that $\sigma_D^1< 1$,
we have thus proved \eqref{Laplacian estimate: integral inequality}.
\end{proof}
\begin{rem}
When $\gamma=0$, the third term $III=0$.
\end{rem}
\begin{rem}
When $\beta=1$, $\partial\tilde F=\partial F+\partial h_\theta$. Then the fourth term $IV=\int_Xe^{-C_1\varphi}u^{2p}(\partial F,\partial \tilde F)S_\epsilon^{\gamma} e^{\tilde F}\omega^n$.
\end{rem}
\begin{rem}\label{4th term}
In the third and the fourth terms, the power of $S_\epsilon$ loses $\frac{1}{2}$, which causes trouble.
\end{rem}
\subsection{Step 3: rough iteration inequality}
We will apply the Sobolev inequality to the gradient term
$$
LHS_1=\int_X|\partial\tilde u^p|^2S_\epsilon^\gamma\omega^n_\varphi
$$
in \eqref{Laplacian estimate: integral inequality}.
We set
\begin{align*}
k_\gamma:=\gamma+\beta-1+\sigma, \quad \chi:=\frac{n}{n-1}, \quad \tilde\mu:=S_\epsilon^{k_\gamma \chi}\omega^n.
\end{align*}
It is straightforward to see that
$ k+\sigma=k_\gamma+2p\gamma$ and
\begin{align*}
\|\tilde u\|^{2p\chi}_{L^{2p\chi}(\tilde\mu)}=\int_X(\tilde u^{2p}S_\epsilon^{k_\gamma})^\chi\omega^n=\int_X\tilde u^{2p\chi}\tilde \mu.
\end{align*}
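Indeed, recalling $k=(2p+1)\gamma+(\beta-1)$, the first identity can be checked directly:
\begin{align*}
k+\sigma=(2p+1)\gamma+(\beta-1)+\sigma=2p\gamma+(\gamma+\beta-1+\sigma)=2p\gamma+k_\gamma.
\end{align*}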
\begin{prop}[Rough iteration inequality]\label{Rough iteration inequality prop}
There exists a constant $C_{3}$ depending on $C_{1.6}$, $C_{1.7}$, the constants entering $C_2$, $\inf_X F$, $\inf_X h_\theta$, $c_\epsilon$ and the Sobolev constant $C_S(\omega)$ such that
\begin{align}\label{Laplacian estimate: Rough iteration inequality}
\|\tilde u\|^{2p}_{L^{2p\chi}(\tilde\mu)}+p LHS_2
\leq C_{3}(p^2 RHS_1 + RHS_2+1)
\end{align}
where $RHS_1$ is given in \eqref{Laplacian estimate: RHS1 defn} and
\begin{align}\label{Laplacian estimate: RHS2 defn}
RHS_2:=\int_X (u^2 \tilde u^{2p-2}S_\epsilon^{\gamma+\sigma}
+C_{3.1} u^2 \tilde u^{2p-2}S_\epsilon^{\gamma+\sigma-1} ) \omega_\varphi^n.
\end{align}
\end{prop}
\begin{proof}We need to deal with the weights.
Recall the equations of the volume form $\omega^n_\varphi$ from \eqref{Degenerate cscK approximation} and the volume form of the approximate reference metric $\omega^n_{\theta_\epsilon}$ by \eqref{Rictheta approximation},
\begin{align*}
\omega^n_{\varphi_\epsilon}=e^{\tilde F_\epsilon}\omega,\quad e^{\tilde F_\epsilon}=e^{F_\epsilon+h_\theta+c_\epsilon}S_\epsilon^{\beta-1}.
\end{align*}
We assume $S_\epsilon\leq 1$.
We see that there exists a constant $C_{3.0}$ depending on $\inf_X F$, $\inf_X h_\theta$ and $c_\epsilon$ such that
\begin{align*}
LHS_1
&\geq C_{3.0} \int_X|\partial\tilde u^p|^2S_\epsilon^{\gamma+\beta-1}\omega^n
\geq C_{3.0} \int_X|\partial \tilde u^p|^2S_\epsilon^{k_\gamma}\omega^n,\quad \sigma\geq 0.
\end{align*}
Using \lemref{Laplacian estimate: key trick} with $p_1=1, p_2=p-1$, we have
\begin{align*}
LHS_1
\geq C_{3.0} \int_X|\partial (u\tilde u^{p-1})|^2S_\epsilon^{k_\gamma}\omega^n.
\end{align*}
Since $|a|^2\geq \frac{1}{2}|a+b|^2-|b|^2$ for any two forms $a,b$, a further calculation shows that
\begin{align}\label{Laplacian estimate: main term of LHS}
LHS_1\geq C_{3.0} [ \frac{1}{2}\int_X|\partial (u\tilde u^{p-1} S_\epsilon^{\frac{k_\gamma}{2}})|^2\omega^n- \int_Xu^2\tilde u^{2p-2} |\partial S_\epsilon^{\frac{k_\gamma}{2}}|^2\omega^n].
\end{align}
We now make use of the Sobolev inequality to the first term in \eqref{Laplacian estimate: main term of LHS} with $f=u\tilde u^{p-1} S_\epsilon^{\frac{k_\gamma}{2}}$, which states
\begin{align*}
\|f\|_{L^{2\chi}(\omega)}\leq C_S (\|\partial f\|_{L^{2}(\omega)}+\| f\|_{L^{2}(\omega)}),
\end{align*}
that is
\begin{align*}
\int_X|\partial (u\tilde u^{p-1} S_\epsilon^{\frac{k_\gamma}{2}})|^2\omega^n
&\geq C_S^{-1} ( \int_X|u\tilde u^{p-1} S_\epsilon^{\frac{k_\gamma}{2}}|^{2\chi}\omega^n)^{\chi^{-1}}
- \int_Xu^2\tilde u^{2p-2} S_\epsilon^{k_\gamma}\omega^n.
\end{align*}
Note that the power of the weight increases from $k_\gamma$ to $k_\gamma\chi$.
Substituting $u=\tilde u-K$ and $\tilde\mu=S_\epsilon^{k_\gamma \chi}\omega^n$ in the main term, we get
\begin{align*}
& \int_X|u\tilde u^{p-1} S_\epsilon^{\frac{k_\gamma}{2}}|^{2\chi}\omega^n
= \int_X|\tilde u-K |^{2\chi}\tilde u^{(p-1)2\chi}\tilde\mu\\
&\geq C(n)[\int_X\tilde u^{2p\chi}\tilde\mu
-K^{2\chi}\int_X\tilde u^{2(p-1)\chi}\tilde\mu].
\end{align*}
With the help of Young's inequality
\begin{align*}
\tilde u^{2(p-1)\chi}\leq\frac{p-1}{p} \tilde u^{2p\chi}+1\leq \tilde u^{2p\chi}+1,
\end{align*}
choosing $0<K<1$ such that $1-K^{2\chi}\geq \frac{1}{2}$, we obtain
\begin{align*}
\int_X|u\tilde u^{p-1} S_\epsilon^{\frac{k_\gamma}{2}}|^{2\chi}\omega^n
\geq C(n)[ \frac{1}{2}\int_X\tilde u^{2p\chi}\tilde\mu-K^{2\chi}]\geq \frac{C(n)}{2}[\int_X\tilde u^{2p\chi}\tilde\mu-1].
\end{align*}
Using $|\partial S_\epsilon|^2\leq C_{2.4}S_\epsilon$, we can estimate the second term in \eqref{Laplacian estimate: main term of LHS},
\begin{align}\label{Laplacian estimate: trouble term}
\int_Xu^2 \tilde u^{2p-2} |\partial S_\epsilon^{\frac{k_\gamma}{2}}|^2\omega^n
&= \frac{k_\gamma^2}{4} \int_Xu^2 \tilde u^{2p-2} S_\epsilon^{k_\gamma-2}|\partial S_\epsilon|^2\omega^n\notag\\
&\leq C_{3.1} \int_X u^2 \tilde u^{2p-2} S_\epsilon^{k_\gamma-1} \omega^n.
\end{align}
At last, we add these inequalities together to see that
\begin{align*}
LHS_1\geq C_{3.2} \{&C_S^{-1} \left( \int_X\tilde u^{2p\chi}\tilde\mu-1 \right)^{\chi^{-1}}
- \int_Xu^2 \tilde u^{2p-2} S_\epsilon^{k_\gamma}\omega^n\\
&-C_{3.1} \int_Xu^2 \tilde u^{2p-2}S_\epsilon^{k_\gamma-1} \omega^n\}.
\end{align*}
Inserting this inequality into the integral inequality \eqref{Laplacian estimate: integral inequality}, we obtain the rough iteration inequality \eqref{Laplacian estimate: Rough iteration inequality}.
\end{proof}
\begin{rem}\label{Laplacian estimate: lose weight}
We observe that $1$ is subtracted from the weight of $S_\epsilon$ in the second term of $RHS_2$ \eqref{Laplacian estimate: RHS2 defn}, which causes difficulties presented in the weighted inequality, Proposition \ref{Weighted inequality}. We will solve this problem by making use of the inverse weighted inequalities, Proposition \ref{inverse weighted inequalities}.
\end{rem}
\begin{rem}
When $\beta=1$ and $\gamma=\sigma=0$, the trouble term \eqref{Laplacian estimate: trouble term} vanishes.
\end{rem}
We end this step by proving the auxiliary inequality used above.
\begin{lem}\label{Laplacian estimate: key trick}
We write $p=p_1+p_2$ with $p_1,p_2\geq 0$.
\begin{align*}
\int_X|\partial \tilde u^p|^2S_\epsilon^{k_\gamma}\omega^n\geq \int_X|\partial (u^{p_1}\tilde u^{p_2})|^2S_\epsilon^{k_\gamma}\omega^n.
\end{align*}
\end{lem}
\begin{proof}
It is a direct computation
\begin{align*}
&\partial (u^{p_1}\tilde u^{p_2})
=\partial u^{p_1}\tilde u^{p_2}+u^{p_1}\partial(\tilde u^{p_2})
=p_1 u^{p_1-1}\partial u\tilde u^{p_2}+u^{p_1}p_2\tilde u^{p_2-1}\partial \tilde u\\
&=\partial\tilde u u ^{p_1-1} \tilde u^{p_2-1}[p_1\tilde u+p_2 u]
\leq p \partial\tilde u \tilde u^{p-1}=\partial(\tilde u^p),
\end{align*}
where we use $\partial u=\partial\tilde u$ and $u\leq \tilde u$.
\end{proof}
\subsection{Step 4: weighted inequality}\label{Step 4}
We compare the left term $\|\tilde u\|^{2p}_{L^{2p\chi}(\tilde\mu)}$ of the rough iteration inequality, Proposition \ref{Rough iteration inequality prop}, with the right terms in ${RHS_1}$ and ${RHS_2}$, which are of the form
\begin{align*}
\int_X\tilde u^{2p} S_\epsilon^{\gamma+\sigma-k'}\omega_\varphi^n.
\end{align*}
\begin{prop}[Weighted inequality]\label{Weighted inequality}
Assume that $n\geq 2$ and $ k'<1$.
Then there exists $1<a<\chi=\frac{n}{n-1}$ such that
\begin{align*}
\int_X\tilde u^{2p} S_\epsilon^{\gamma+\sigma-k'}\omega_\varphi^n
\leq C\int_X\tilde u^{2p} S_\epsilon^{k_\gamma-k'}\omega^n
\leq C_{4.1} \|\tilde u\|^{2p}_{L^{2pa}(\tilde\mu)}
\end{align*}
where $C_{4.1}=\| S_\epsilon^{k_\gamma-k_\gamma\chi-k'}\|_{L^{c}(\tilde\mu)}$ is finite for some $c>n$.
\end{prop}
\begin{proof}
From $\tilde\mu=S_\epsilon^{k_\gamma \chi}\omega^n=S_\epsilon^{(\gamma+\beta-1+\sigma) \chi}\omega^n$, we compute
\begin{align*}
&\int_X\tilde u^{2p} S_\epsilon^{k_\gamma-k'}\omega^n
=\int_X \tilde u^{2p}S_\epsilon^{k_\gamma-k_\gamma\chi-k'} \tilde\mu.
\end{align*}
By the generalisation of H\"older's inequality with $\frac{1}{a}+\frac{1}{c}=1$, this term is dominated by
\begin{align*}
\|\tilde u\|^{2p}_{L^{2pa}(\tilde \mu)}(\int_X S_\epsilon^{(k_\gamma-k_\gamma\chi-k')c} \tilde\mu)^{\frac{1}{c}}.
\end{align*}
In order to make sure that the last integral is finite, it suffices to require $2(k_\gamma-k_\gamma\chi-k')c+2k_\gamma\chi+2n>0$, which is equivalent to
\begin{align*}
c<n\frac{k_\gamma+n-1}{k_\gamma+k'(n-1)}:=c_0.
\end{align*}
Since $k'<1$, we have $c_0>n$. Then, we could choose $c$ between $n$ and $c_0$ such that $a<\frac{n}{n-1}$.
\end{proof}
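For the reader's convenience, the equivalence used above can be checked directly. When the denominator $k_\gamma+k'(n-1)$ is positive (otherwise the last integral is finite for every $c$),
\begin{align*}
c_0=n\frac{k_\gamma+n-1}{k_\gamma+k'(n-1)}>n
\Longleftrightarrow k_\gamma+n-1>k_\gamma+k'(n-1)
\Longleftrightarrow k'<1.
\end{align*}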
\subsection{Step 5: inverse weighted inequality}\label{Step 3: inverse weighted inequalities}
We bound each term in $RHS_1$ \eqref{Laplacian estimate: RHS1 defn} and $RHS_2$ \eqref{Laplacian estimate: RHS2 defn},
\begin{align*}
RHS_1&=\int_X \tilde u^{2p}[u+1+u^{\frac{1}{2}}S_\epsilon^{\frac{\gamma}{2}}+u^{\frac{1}{2}}S_\epsilon^{\frac{\gamma-\sigma_D^1}{2}}
+u^{\frac{1}{2}} S_\epsilon^{\frac{\gamma-1}{2}}
] \omega_\varphi^n,\\
RHS_2&=\int_X (u^2\tilde u^{2p-2}S_\epsilon^{\gamma+\sigma}
+C_{3.1}u^2 \tilde u^{2p-2}S_\epsilon^{\gamma+\sigma-1} ) \omega_\varphi^n,
\end{align*}
by applying Young's inequality repeatedly, with the help of the positive term
\begin{align*}
p\,LHS_2=p\int_X\tilde u^{2p}u^{\frac{n}{n-1}}S_\epsilon^{-\frac{\gamma+\beta-1}{n-1}}\omega_\varphi^n.
\end{align*}
According to Proposition \ref{Weighted inequality}, the second term in $RHS_2$ is the trouble term.
\begin{prop}[Inverse weighted inequality]\label{inverse weighted inequalities}
Assume that the parameters $\sigma$ and $\gamma$ satisfy $\sigma+\gamma<1$ and
\begin{equation}\label{Laplacian estimate: RHS2 parameters condition}
\left\{
\begin{aligned}
&\sigma=\gamma=0,\text{ when }\beta>n;\\
&\frac{1}{2}\geq\sigma>\frac{n-\beta}{n-1},\quad \gamma=0,\text{ when } \frac{n+1}{2}< \beta\leq n;\\
&\sigma<\frac{1}{n+1},\quad \gamma>(1-\sigma)\frac{n-1}{n},\text{ when } 1\leq \beta\leq \frac{n+1}{2}.
\end{aligned}
\right.
\end{equation}
Then there exists an exponent $k'<1$ such that
\begin{align*}
\|\tilde u\|^{2p}_{L^{2p\chi}(\tilde\mu)}
\leq C_5 [p^3 \int_X \tilde u^{2p} S_\epsilon^{\gamma+\sigma-k'} \omega_\varphi^n+1].
\end{align*}
\end{prop}
\begin{proof}
The proof of the inverse weighted inequality is divided into \lemref{Laplacian estimate: RHS1 good} for $RHS_1$, \lemref{Laplacian estimate: RHS2 bound} for $RHS_2$, and \lemref{Laplacian estimate: criteria} for verifying the criteria. Adding the resulting inequalities for $RHS_1$ and $RHS_2$, we have
\begin{align*}
p^2 RHS_1 + RHS_2&\leq
\tau pLHS_2+ p^3 C(\tau) \int_X \tilde u^{2p} S_\epsilon^{\gamma+\sigma-\max\{k_2',k_5'\}} \omega_\varphi^n\\
&+\tau pLHS_2+ \frac{C(\tau)}{p} \int_X \tilde u^{2p}S_\epsilon^{\gamma+\sigma-k'_7} \omega_\varphi^n.
\end{align*}
We set a new $k'$ to be $\max\{k_2',k_5',k_7'\}$.
Inserting this inequality to the rough iteration inequality \eqref{Laplacian estimate: Rough iteration inequality} and choosing sufficiently small $\tau$, we therefore obtain the asserted inequality.
\end{proof}
\begin{lem}\label{Laplacian estimate: RHS1 good}
Assume that
\begin{align}\label{Laplacian estimate: RHS1 good condition}
\sigma<\frac{\beta}{n+1},\quad \gamma+\sigma<1.
\end{align} Let $k'=\max\{1+\sigma-\frac{\beta}{n+1},\gamma+\sigma\}$. Then
\begin{align*}
RHS_1
\leq \frac{
\tau}{p}LHS_2+ p C(\tau) \int_X \tilde u^{2p} S_\epsilon^{\gamma+\sigma-\max\{k_2',k_5'\}} \omega_\varphi^n.
\end{align*}
The exponents $k'_2,k_5'$ are given in the following proof, see \eqref{Laplacian estimate: RHS1 exponents k2} and \eqref{Laplacian estimate: RHS1 exponents}, respectively.
\end{lem}
\begin{proof}
We now establish the estimates for the five terms in $RHS_1$.
The 1st one is decomposed as
\begin{align*}
&\int_X \tilde u^{2p} u \omega_\varphi^n
=\int_X (\tilde u^{2p}u^{\frac{n}{n-1}}S_\epsilon^{-\frac{\gamma+\beta-1}{n-1}})^{\frac{1}{a_1}}\tilde u^{2p\frac{1}{b_1}}u^{1-\frac{n}{n-1}\frac{1}{a_1}}S_\epsilon^{\frac{\gamma+\beta-1}{n-1}\frac{1}{a_1}} \omega_\varphi^n.
\end{align*}
Applying Young's inequality with small $\tau$, we obtain
\begin{align*}
\int_X \tilde u^{2p} u \omega_\varphi^n
\leq \frac{\tau}{ 4p }LHS_2+ p C \int_X \tilde u^{2p} u^{(1-\frac{n}{n-1}\frac{1}{a_1})b_1}S_\epsilon^{\frac{\gamma+\beta-1}{n-1}\frac{b_1}{a_1}} \omega_\varphi^n.
\end{align*}
The conjugate exponents we choose are
$a_1=\frac{n}{n-1}$ and $b_1=n.$
Accordingly, the exponent of $u$ is zero and
\begin{align*}
k_1:=\frac{\gamma+\beta-1}{n-1}\frac{b_1}{a_1}=\gamma+\beta-1.
\end{align*}
The estimate of the 3rd term proceeds in the same way,
\begin{align*}
\int_X \tilde u^{2p} u^{\frac{1}{2}}S_\epsilon^{\frac{\gamma}{2}} \omega_\varphi^n
&=\int_X (\tilde u^{2p}u^{\frac{n}{n-1}}S_\epsilon^{-\frac{\gamma+\beta-1}{n-1}})^{\frac{1}{a_2}}\tilde u^{2p\frac{1}{b_2}}u^{\frac{1}{2}-\frac{n}{n-1}\frac{1}{a_2}}S_\epsilon^{\frac{\gamma}{2}+\frac{\gamma+\beta-1}{n-1}\frac{1}{a_2}} \omega_\varphi^n\\
&\leq \frac{\tau}{ 4p}LHS_2+ p C \int_X \tilde u^{2p} u^{(\frac{1}{2}-\frac{n}{n-1}\frac{1}{a_2})b_2}S_\epsilon^{(\frac{\gamma}{2}+\frac{\gamma+\beta-1}{n-1}\frac{1}{a_2})b_2} \omega_\varphi^n.
\end{align*}
The exponent $a_2$ is set to be $\frac{2n}{n-1}$. Hence, the exponent of $u$ vanishes. Moreover, $b_2=\frac{2n}{n+1}$ and
\begin{align*}
k_3:=(\frac{\gamma}{2}+\frac{\gamma+\beta-1}{n-1}\frac{1}{a_2})b_2
= \gamma+\frac{\beta-1}{n+1}.
\end{align*}
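For completeness, the exponent of the weight for the 3rd term is computed as follows, with $a_2=\frac{2n}{n-1}$ and $b_2=\frac{2n}{n+1}$:
\begin{align*}
\Big(\frac{\gamma}{2}+\frac{\gamma+\beta-1}{n-1}\cdot\frac{n-1}{2n}\Big)\frac{2n}{n+1}
=\frac{\gamma n+\gamma+\beta-1}{n+1}
=\gamma+\frac{\beta-1}{n+1}.
\end{align*}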
The 4th and 5th terms are treated by Young's inequality with small $\tau$, as well. The estimate for the 4th term is
\begin{align*}
&\int_X \tilde u^{2p} u^{\frac{1}{2}}S_\epsilon^{\frac{\gamma-\sigma_D^1}{2}} \omega_\varphi^n
\leq \frac{\tau}{4 p}LHS_2+ p C \int_X \tilde u^{2p} u^{(\frac{1}{2}-\frac{n}{n-1}\frac{1}{a_2})b_2}S_\epsilon^{k_4} \omega^n
\end{align*}
and the exponent is
\begin{align*}
k_4:=(\frac{\gamma-\sigma_D^1}{2}+\frac{\gamma+\beta-1}{n-1}\frac{1}{a_2})b_2
= \gamma+\frac{\beta-1-n\sigma_D^1}{n+1}.
\end{align*}
While, the estimate for the 5th term is
\begin{align*}
&\int_X \tilde u^{2p} u^{\frac{1}{2}}S_\epsilon^{\frac{\gamma-1}{2}} \omega_\varphi^n
\leq \frac{\tau}{ 4p}LHS_2+ p C \int_X \tilde u^{2p} u^{(\frac{1}{2}-\frac{n}{n-1}\frac{1}{a_2})b_2}S_\epsilon^{k_5} \omega^n,
\end{align*}
with the exponent satisfying
\begin{align*}
k_5:=(\frac{\gamma-1}{2}+\frac{\gamma+\beta-1}{n-1}\frac{1}{a_2})b_2= \gamma+\frac{\beta-1-n}{n+1}.
\end{align*}
We let $k_i'$ satisfy $k_i=\gamma+\sigma-k_i'$. Then we summarize the exponents from the above estimates to see that
\begin{align}\label{Laplacian estimate: RHS1 exponents}
&k_1'=\gamma+\sigma-(\gamma+\beta-1)=\sigma-(\beta-1),\notag\\
&k_3'=\gamma+\sigma-( \gamma+\frac{\beta-1}{n+1})=\sigma-\frac{\beta-1}{n+1},\notag\\
&k_4'=\gamma+\sigma-(\gamma+\frac{\beta-1-n\sigma_D^1}{n+1})
=\sigma-\frac{\beta-1-n\sigma_D^1}{n+1},\notag\\
&k_5'=\gamma+\sigma-(\gamma+\frac{\beta-1-n}{n+1})=\sigma-\frac{\beta-1-n}{n+1}.
\end{align}
Since $\sigma_D^1\leq 1$, we observe that $k_5'$ is the largest one among these four exponents.
We further compute that
\begin{align*}
k_5'-1=\sigma-\frac{\beta}{n+1},
\end{align*}
which is negative under the hypothesis of our lemma, i.e. $\sigma<\frac{\beta}{n+1}$.
The 2nd term is simply rewritten as
\begin{align*}
\int_X\tilde u^{2p}\omega_\varphi^n=
\int_X\tilde u^{2p}S_\epsilon^{\gamma+\sigma-k_2'}\omega_\varphi^n,
\end{align*}
where we choose the exponent
\begin{align}\label{Laplacian estimate: RHS1 exponents k2}
k_2':=\gamma+\sigma<1.
\end{align}
Therefore, we set the exponent to be $\max\{k_2',k_5'\}$ and obtain the required inequality for $RHS_1$.
\end{proof}
We then derive the estimates for the terms in $RHS_2$.
\begin{lem}\label{Laplacian estimate: RHS2 bound}
Assume the following condition holds
\begin{align}\label{Laplacian estimate: RHS2 condition}
\gamma>1-\frac{\beta}{n}-\sigma(1-\frac{1}{n}):=\gamma_0.
\end{align} Then
\begin{align*}
RHS_2
\leq \tau pLHS_2+ \frac{C(\tau)}{p} \int_X \tilde u^{2p}S_\epsilon^{\gamma+\sigma-k'_7} \omega_\varphi^n,
\end{align*}
where $k'_7<1$ is given in \eqref{Laplacian estimate: k7}.
\end{lem}
\begin{proof}The first term in $RHS_2$ is a good term. On the other hand, the trouble term is treated with the help of $LHS_2$,
\begin{align*}
&RHS_2^2:=\int_X u^2 \tilde u^{2p-2}S_\epsilon^{\gamma+\sigma-1} \omega_\varphi^n\\
&=\int_X (\tilde u^{2p}u^{\frac{n}{n-1}}S_\epsilon^{-\frac{\gamma+\beta-1}{n-1}})^{\frac{1}{a_7}}
\tilde u^{2p\frac{1}{b_7}-2} u^{2-\frac{n}{n-1}\frac{1}{a_7}}S_\epsilon^{\gamma+\sigma-1+\frac{\gamma+\beta-1}{n-1}\frac{1}{a_7}} \omega_\varphi^n,
\end{align*}
by applying an argument analogous to the proof of $RHS_1$.
By Young's inequality, we get
\begin{align*}
RHS_2^2
\leq \tau LHS_2+ C \int_X \tilde u^{2p-2b_7} u^{2b_7-\frac{n}{n-1}\frac{b_7}{a_7}}S_\epsilon^{k_7} \omega_\varphi^n.
\end{align*}
Using $\tilde u\geq K$ and choosing $a_7=\frac{n}{2(n-1)}$,
\begin{align*}
RHS_2^2\leq \tau LHS_2+ C \int_X \tilde u^{2p}S_\epsilon^{k_7} \omega_\varphi^n.
\end{align*}
The exponent
\begin{align*}
k_7:=[\gamma+\sigma-1+\frac{\gamma+\beta-1}{n-1}\frac{1}{a_7}]b_7.
\end{align*}
Direct computation shows that
\begin{align}\label{Laplacian estimate: k7}
k'_7:=\gamma+\sigma-k_7=\frac{-\gamma n-\sigma(n-1)+a_7(n-1)-\beta+1}{(a_7-1)(n-1)}.
\end{align}
Thus the conclusion $k'_7<1$ holds, under the condition \eqref{Laplacian estimate: RHS2 condition}.
\end{proof}
At last, we examine the conditions \eqref{Laplacian estimate: RHS1 good condition} and \eqref{Laplacian estimate: RHS2 condition}.
\begin{lem}\label{Laplacian estimate: criteria}
Assume that the parameters $\sigma$ and $\gamma$ satisfy $\sigma+\gamma<1$ and \eqref{Laplacian estimate: RHS2 parameters condition}.
Then the conditions \eqref{Laplacian estimate: RHS1 good condition} and \eqref{Laplacian estimate: RHS2 condition} hold.
\end{lem}
\begin{proof}
We check that, when $\beta>n$ and $\sigma=\gamma=0$,
the condition \eqref{Laplacian estimate: RHS1 good condition} is satisfied, i.e.
\begin{align*}
\sigma=0<\frac{n}{n+1}<\frac{\beta}{n+1}.
\end{align*}
Meanwhile, the condition \eqref{Laplacian estimate: RHS2 condition} is satisfied, too; that is,
\begin{align*}
\gamma_0=1-\frac{\beta}{n}-\sigma(1-\frac{1}{n})=1-\frac{\beta}{n}<0=\gamma.
\end{align*}
Similarly, the second criterion implies that
\begin{align*}
\frac{\beta}{n+1}>\frac{1}{2}\geq \sigma.
\end{align*}and
\begin{align*}
\gamma_0=1-\frac{\beta}{n}-\sigma(1-\frac{1}{n})<1-\frac{\beta}{n}-\frac{n-\beta}{n-1}(1-\frac{1}{n})=0=\gamma.
\end{align*}
The third criterion also implies that
\begin{align*}
\frac{\beta}{n+1}\geq\frac{1}{n+1}> \sigma.
\end{align*}and
\begin{align*}
\gamma_0=1-\frac{\beta}{n}-\sigma(1-\frac{1}{n})\leq (1-\sigma)\frac{n-1}{n}<\gamma.
\end{align*}
Therefore, any one of the three criteria in \eqref{Laplacian estimate: RHS2 parameters condition} guarantees both conditions \eqref{Laplacian estimate: RHS1 good condition} and \eqref{Laplacian estimate: RHS2 condition}.
\end{proof}
\begin{rem}\label{rough W2p estimate}
Using Young's inequality similarly to the proof above, we could also obtain
a $W^{2,p}$ estimate for $\mathop{\rm tr}\nolimits_\omega\om_\varphi$. However, this bound relies on the bound of $\partial F$. Alternatively, we will obtain a more precise $W^{2,p}$ estimate in \thmref{w2pestimates degenerate Singular equation} in Section \ref{W2p estimate}, without any condition on $\partial F$.
\end{rem}
\subsection{Step 6: iteration}\label{Step 6: iteration}
Combining the weighted inequality, Proposition \ref{Weighted inequality}, with the inverse weighted inequality, Proposition \ref{inverse weighted inequalities}, we obtain the following.
\begin{prop}[Iteration inequality]\label{Iteration inequality 1}
Assume that the parameters $\sigma$ and $\gamma$ satisfy $\sigma+\gamma<1$ and \eqref{Laplacian estimate: RHS2 parameters condition}. Then it holds
\begin{align}
\|\tilde u\|^{2p}_{L^{2p\chi}(\tilde\mu)}
\leq C_6 [p^3 \|\tilde u\|^{2p}_{L^{2pa}(\tilde\mu)}+1].
\end{align}
\end{prop}
We normalise the measure $\tilde\mu$ to have total mass one. The norm $\|\tilde u\|_{L^{2p\chi}(\tilde\mu)}$ is then increasing in $p$.
We assume $\|\tilde u\|_{L^{2p_0\chi}(\tilde\mu)}\geq 1$ for some $p_0\geq 1$, otherwise it is done.
We develop the iterating process from the iteration inequality
\begin{align}\label{Laplacian estimate: iteration}
&\|\tilde u\|_{L^{2p\frac{n}{n-1}}(\tilde\mu)}
\leq p^{\frac{3}{2}p^{-1}} C_7^{\frac{1}{2}p^{-1}} \|\tilde u\|_{L^{2pa}(\tilde\mu)}.
\end{align}
Setting
\begin{align*}
\chi_a:=\frac{\frac{n}{n-1}}{a}>1,\quad p=\chi_a^i,\quad i=0,1,2,\cdots
\end{align*}
and iterating \eqref{Laplacian estimate: iteration} with $p=\chi_a^m$,
\begin{align*}
&\|\tilde u\|_{L^{2\chi_a^m\frac{n}{n-1}}(\tilde\mu)}
\leq \chi_a^{\frac{3}{2}m\chi_a^{-m}} C_{7}^{\frac{1}{2}\chi_a^{-m}} \|\tilde u\|_{L^{2\chi_a^ma}(\tilde\mu)},
\end{align*}
which is
\begin{align*}
=\chi_a^{\frac{3}{2}m\chi_a^{-m}} C^{\frac{1}{2}\chi_a^{-m}} _{7} \|\tilde u\|_{L^{2\chi_a^{m-1}\frac{n}{n-1}}(\tilde\mu)}.
\end{align*}
We next apply \eqref{Laplacian estimate: iteration} again with $p=\chi_a^{m-1}$,
\begin{align*}
&\leq \chi_a^{\frac{3}{2}m\chi_a^{-m}+\frac{3}{2}(m-1)\chi_a^{-(m-1)}}C_{7}^{\frac{1}{2}[\chi_a^{-m}+\chi_a^{-(m-1)}]} \|\tilde u\|_{L^{2\chi_a^{m-1}a}(\tilde\mu)}.
\end{align*}
We choose $i_0$ such that $\tilde p_0=\chi_a^{i_0}\geq p_0$.
Repeating the argument above, we arrive at
\begin{align*}
&\leq \chi_a^{\frac{3}{2}\sum_{i=i_0}^m i\chi_a^{-i}}C_{7}^{\frac{1}{2}\sum_{i=i_0}^m{\chi_a^{-i}}}\|\tilde u\|_{L^{2a\tilde p_0}(\tilde\mu)}.
\end{align*}
Since these two series $\sum_{i=i_0}^\infty i\chi_a^{-i}$ and $\sum_{i=i_0}^\infty{\chi_a^{-i}}$ are convergent, we take $m\rightarrow \infty$ and conclude that
\begin{align*}
&\|\tilde u\|_{L^{\infty}}\leq C \|\tilde u\|_{L^{2a\tilde p_0}(\tilde\mu)}\leq C[ \|u\|_{L^{2a\tilde p_0}(\tilde\mu)}+\|K\|_{L^{2a\tilde p_0}(\tilde\mu)}].
\end{align*}
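For the reader's convenience, the two series are standard: writing $x:=\chi_a^{-1}\in(0,1)$,
\begin{align*}
\sum_{i=i_0}^\infty \chi_a^{-i}\leq\sum_{i=0}^\infty x^{i}=\frac{1}{1-x},
\qquad
\sum_{i=i_0}^\infty i\chi_a^{-i}\leq\sum_{i=0}^\infty i x^{i}=\frac{x}{(1-x)^2}.
\end{align*}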
In order to obtain the uniform bound of $\mathop{\rm tr}\nolimits_\omega\om_{\varphi}\cdot S_\epsilon^{\gamma}$, we apply the $L^\infty$ bound of $\varphi$ and the $L^{2a\tilde p_0}$ bound of $\mathop{\rm tr}\nolimits_\omega\om_\varphi$ from Definition \ref{a priori estimates approximation}; it remains to compare the exponent of the weight in the integral
\begin{align*}
\int_X e^{-C_12a\tilde p_0\varphi}(\mathop{\rm tr}\nolimits_\omega\om_{\varphi})^{2a\tilde p_0}S_\epsilon^{2a\tilde p_0\gamma+(\gamma+\beta-1+\sigma)\frac{n}{n-1}}\omega^n
\end{align*} with the exponent $\sigma^2_D$ in \eqref{almost admissible w2p sigmaD},
\begin{align*}
[(2a\tilde p_0+1)\gamma+\beta-1+\sigma]\frac{n}{n-1} \geq \sigma^2_D=(\beta-1)\frac{n-2}{n-1+(2a\tilde p_0)^{-1}}.
\end{align*}
In conclusion, we obtain the Laplacian estimate
\begin{align*}
v_\epsilon:=\mathop{\rm tr}\nolimits_\omega\om_{\varphi_\epsilon}\leq C S_\epsilon^{-\gamma}.
\end{align*}
Moreover, $\gamma=0$ when $\beta>\frac{n+1}{2}$.
\subsection{Laplacian estimate for degenerate KE equation}\label{Log Kahler Einstein metric}
We apply the weighted integration method developed for the degenerate scalar curvature equation to the degenerate KE problem, which provides an alternative proof of Yau's Laplacian estimate for the approximate degenerate KE equation \eqref{critical pt Ding approximation},
\begin{align}\label{critical pt Ding approximation estimates}
\omega^n_{\varphi_\epsilon}
= e^{h_\omega+\mathfrak h_\epsilon-\lambda\varphi_\epsilon +c_\epsilon} \omega^n.
\end{align}
Comparing with the approximate degenerate scalar curvature equation \eqref{Degenerate cscK 1 approximation}, we have $\theta=\lambda\omega$, $R=\underline S_\beta=\lambda n$ and
\begin{align*}
F_\epsilon=-\lambda\varphi_\epsilon,\quad \tilde f_\epsilon=-h_\omega-\mathfrak h_\epsilon -c_\epsilon, \quad \tilde F_\epsilon=F_\epsilon-\tilde f_\epsilon.
\end{align*}
We also have
\begin{align}\label{KE tri F}
\triangle F_\epsilon=-\lambda\triangle \varphi_\epsilon=-\lambda(\mathop{\rm tr}\nolimits_\omega\om_{\varphi_\epsilon}-n).
\end{align}
\begin{thm}\label{KE Laplacian}Suppose $\varphi_\epsilon$ is a solution to the approximate KE equation \eqref{critical pt Ding approximation estimates}.
Then there exists a uniform constant $C$ such that
\begin{align}\label{Laplacian estimate}
\mathop{\rm tr}\nolimits_\omega\om_{\varphi_\epsilon}\leq C \text{ on } X
\end{align}
where the uniform constant $C$ depends on
\begin{align*}
\|\varphi_\epsilon\|_\infty,\quad \|h_\omega\|_\infty,\quad\inf_X\triangle h_\omega, \quad
\inf_{i\neq j}R_{i\bar i j\bar j}(\omega), \quad C_S(\omega),\quad \Theta_D,
\end{align*}
as well as on $\lambda$, $\beta$, $c_\epsilon$ and $n$.
\end{thm}
\begin{proof}
Substituting \eqref{KE tri F} into the nonlinear term \eqref{Laplacian estimate: integration cor}, we have
\begin{align*}
\mathcal N=\lambda\int_Xe^{-C_1\varphi}\tilde u^{2p}(v-n) S_\epsilon^{\gamma}\omega_\varphi^n
=\lambda\int_X\tilde u^{2p}u\omega_\varphi^n-n\lambda\int_Xe^{-C_1\varphi}\tilde u^{2p} S_\epsilon^{\gamma}\omega_\varphi^n.
\end{align*}
The integral inequality is reduced immediately to
\begin{align}\label{Laplacian estimate: integral inequality pre prop KE}
&\frac{2C_{1.6}}{p}LHS_1
+ C_{1.7} LHS_2
\leq (\lambda+C_{1.5})\int_X\tilde u^{2p+1}\omega_\varphi^n.
\end{align}
After modifying the constants, it becomes
\begin{align}\label{Laplacian estimate: integral inequality KE}
&p^{-1}LHS_1+ LHS_2
\leq C_2 RHS_1:=C_2 \int_X\tilde u^{2p+1}\omega_\varphi^n.
\end{align}
Following the argument in Proposition \ref{Rough iteration inequality prop}, we have the rough iteration inequality
\begin{align}\label{Laplacian estimate: RHSLHS KE}
\|\tilde u\|^{2p}_{L^{2p\chi}(\tilde\mu)}+p LHS_2
\leq C_{3}(p RHS_1 + RHS_2+1).
\end{align}
Examining the estimates from \lemref{Laplacian estimate: RHS1 good} for $RHS_1$ and \lemref{Laplacian estimate: RHS2 bound} for $RHS_2$, we deduce the inverse weighted inequalities
\begin{align*}
\|\tilde u\|^{2p}_{L^{2p\chi}(\tilde\mu)}
\leq C_5 [p \int_X \tilde u^{2p} S_\epsilon^{\gamma+\sigma-k'} \omega_\varphi^n+1]\text{ for some }k'<1,
\end{align*}
under the conditions
\begin{equation*}
\left\{
\begin{aligned}
&\sigma-(\beta-1)<1;\\
&\sigma+\gamma<1;\\
&\gamma>1-\frac{\beta}{n}-\sigma(1-\frac{1}{n}),
\end{aligned}
\right.
\end{equation*}
which are automatically satisfied under the criteria
\begin{equation}\label{Laplacian estimate: RHS2 parameters condition ke}
\gamma=0,\quad 1>\sigma>\frac{n-\beta}{n-1}.
\end{equation}
Therefore, we can apply the weighted inequality, Proposition \ref{Weighted inequality}, to derive the iteration inequality. Then we employ the iteration techniques of Section \ref{Step 6: iteration} to bound the $L^\infty$ norm of $$\tilde u=e^{-C_1\varphi} \mathop{\rm tr}\nolimits_\omega\om_{\varphi_\epsilon} +K$$ in terms of the $W^{2,p}$-estimate, which can be obtained from \thmref{w2pestimates degenerate Singular equation} in a similar way, or from the argument in Remark \ref{rough W2p estimate}.
\end{proof}
\section{Singular cscK metrics}\label{Singular cscK metrics}
In \cite{MR4020314}, we introduced singular cscK metrics and proved several existence results, focusing on the case $0<\beta<1$. In this section, we consider $\beta>1$ and obtain the $L^\infty$ estimate, the gradient estimate and the $W^{2,p}$ estimate for singular cscK metrics.
We start from the basic setup of the singular scalar curvature equation we introduced in \cite{MR4020314}.
\begin{defn}
A real $(1,1)$-cohomology class $\Omega$ is called
\textit{big}, if it contains a \textit{K\"ahler current}, which is a closed positive $(1,1)$-current $T$ satisfying $T\geq t\omega_K$ for some $t>0$. A big class $\Omega$ is defined to be \textit{semi-positive}, if it admits a smooth closed $(1,1)$-form representative.
\end{defn}
Recall that $\omega $ is a K\"ahler metric on $X$. We let $\omega_{sr}$ be a smooth representative in the big and semi-positive class $\Omega$.
\subsection{Perturbed K\"ahler metrics}\label{Perturbed Kahler metrics}
One way to modify $\omega_{sr}$ into a K\"ahler metric is to perturb it by adding another K\"ahler metric,
\begin{align*}
\omega_t :=\omega_{sr}+t\cdot \omega\in \Omega_t:=\Omega+t[\omega],\quad\text{for all }t>0.
\end{align*}
The other way is to apply Kodaira's Lemma: there exist a sufficiently small number $a_0>0$ and an effective divisor $E$ such that $\Omega-a_0 [E]$ is ample and
\begin{align}
\omega_{K}:=\omega_{sr}+ i\partial\bar\partial \phi_E>0,\quad \phi_E:=a_0\log |s_E|^2_{h_E}
\end{align}
is a K\"ahler metric. Here, $h_E$ is a smooth Hermitian metric on the line bundle associated to $E$ and $s_E$ is the defining section of $E$.
We also write
\begin{align*}
\tilde\omega_t:=\omega_K+ t\omega.
\end{align*}
\begin{lem}\label{metrics equivalence}
The three K\"ahler metrics $\omega,\omega_K, \tilde\omega_t$ are all equivalent,
\begin{align*}
\omega_K\leq\tilde\omega_t\leq\omega_K+\omega,\quad C_K^{-1}\omega\leq\omega_K\leq C_K\omega.
\end{align*}
\end{lem}
With the help of these three K\"ahler metrics, we are able to measure the $(1,1)$-form $\theta$.
As in Lemma 5.6 of \cite{arXiv:1803.09506}, we define the bound of $\theta$, which is independent of $t$, using $\tilde\omega_t$ as the background metric.
\begin{defn}\label{L infty estimates theta defn}
We write
\begin{align}\label{L infty estimates theta}
C_l\cdot \tilde\omega_t\leq \theta\leq C_u\cdot\tilde\omega_t,
\end{align}
where
$
C_l:=\min\{0,\inf_{(X,\tilde\omega_t)}\theta\},\quad C_u:=\max\{0,\sup_{(X,\tilde\omega_t)}\theta\}.
$
\end{defn}
\begin{lem}
The given $(1,1)$-form $\theta$ has the lower bound
\begin{align}\label{Cl}
\theta\geq C_l \omega_{\theta_{t,\epsilon}}-i\partial\bar\partial\phi_l,\quad \phi_l:=C_l(\varphi_{\theta_{t,\epsilon}}-\phi_E)
\end{align}
and the upper bound
\begin{align}\label{Cu}
\theta\leq C_u \omega_{\theta_{t,\epsilon}}-i\partial\bar\partial\phi_u,\quad \phi_u:=C_u(\varphi_{\theta_{t,\epsilon}}-\phi_E).
\end{align}
\end{lem}
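The proof is a one-line rewriting, which we sketch. Since $\omega_{\theta_{t,\epsilon}}=\tilde\omega_t+i\partial\bar\partial(\varphi_{\theta_{t,\epsilon}}-\phi_E)$, the lower bound in \eqref{L infty estimates theta} yields
\begin{align*}
\theta\geq C_l\,\tilde\omega_t
=C_l\,\omega_{\theta_{t,\epsilon}}-i\partial\bar\partial\, C_l(\varphi_{\theta_{t,\epsilon}}-\phi_E)
=C_l\,\omega_{\theta_{t,\epsilon}}-i\partial\bar\partial\phi_l,
\end{align*}
which is \eqref{Cl}; the upper bound \eqref{Cu} follows in the same way from $\theta\leq C_u\tilde\omega_t$.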
\begin{rem}
In particular, if the big and semi-positive class $\Omega$ is proportional to $C_1(X,D)$, i.e. $\Omega=\lambda C_1(X,D)$ with $\lambda=\pm 1$, then we have
\begin{align*}
C_l=0, \text{ when } \lambda=1, \text{ and } C_u=0, \text{ when }\lambda=-1.
\end{align*}
In fact, we even have $\theta=\lambda \omega_{sr}$.
\end{rem}
\subsection{Reference metrics}\label{Reference metrics}
\begin{defn}[Reference metric]\label{Reference metric singular}
The reference metric $\omega_\theta$ is defined to be the solution of the following singular Monge-Amp\`ere equation
\begin{align}\label{lift reference metric log Fano app limit}
\omega_\theta^n:=(\omega_{sr}+i\partial\bar\partial\varphi_{\theta})^n=|s|_h^{2\beta-2}e^{h_{\theta}}\omega^n
\end{align}
where $h_\theta$ is defined in \eqref{h0}.
\end{defn}
\subsubsection{$t$-perturbation}
Replacing $\omega_{sr}$ by the K\"ahler metric $\omega_t$, we introduce the following perturbed metric.
\begin{defn}The $t$-perturbed reference metric $\omega_{\theta_t}\in \Omega_t$ is defined to be a solution to the following degenerate Monge-Amp\`ere equation
\begin{align}\label{lift reference metric log Fano app both 2}
\omega_{\theta_t}^n:=(\omega_{t}+i\partial\bar\partial\varphi_{\theta_t})^n=|s|_h^{2\beta-2} e^{h_\theta+c_t}\omega^n,
\end{align}
under the condition
\begin{align*}
Vol(\Omega_t)=\int_X|s|_h^{2\beta-2} e^{h_\theta+c_{t}}\omega^n.
\end{align*}
\end{defn}
Using the Poincar\'e-Lelong formula \eqref{PL} and the Ricci curvature of $\omega$ from \eqref{h0}, we compute the Ricci curvature of $\omega_{\theta_{t}}$.
\begin{lem}\label{Ric om theta t}
The Ricci curvature equation for \eqref{lift reference metric log Fano app both 2} reads
\begin{align*}
Ric(\omega_{\theta_{t}})=\theta+2\pi(1-\beta)[D].
\end{align*}
\end{lem}
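We sketch the computation, which parallels the proof of \lemref{Ric om theta t eps} below. Taking $-i\partial\bar\partial\log$ on \eqref{lift reference metric log Fano app both 2} and using \eqref{h0} together with the Poincar\'e-Lelong formula \eqref{PL}, in the sense of currents,
\begin{align*}
Ric(\omega_{\theta_{t}})
&=-(\beta-1)i\partial\bar\partial\log|s|_h^2-i\partial\bar\partial h_\theta+Ric(\omega)\\
&=-(\beta-1)i\partial\bar\partial\log|s|_h^2+\theta+(1-\beta)\Theta_D
=\theta+2\pi(1-\beta)[D].
\end{align*}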
\subsubsection{$(t,\epsilon)$-approximation}
Similar to Section \ref{Approximation}, we further approximate \eqref{lift reference metric log Fano app both 2} by a family of smooth equations for $\omega_{\theta_{t,\epsilon}}\in \Omega_t$.
\begin{defn}We define the $(t,\epsilon)$-approximate reference metric $$\omega_{\theta_{t,\epsilon}}:=\omega_{t}+i\partial\bar\partial\varphi_{\theta_{t,\epsilon}}$$ to satisfy the equation
\begin{align}\label{lift reference metric log Fano app both 3}
\omega_{\theta_{t,\epsilon}}^n=S_\epsilon^{\beta-1} e^{h_{\theta}+c_{t,\epsilon}}\omega^n.
\end{align}
The constant $c_{t,\epsilon}$ is determined by the normalised volume condition
\begin{align*}
Vol(\Omega_t)=\int_XS_\epsilon^{\beta-1}e^{h_{\theta}+c_{t,\epsilon}}\omega^n.
\end{align*}
\end{defn}
The volume is uniformly bounded, independent of $t$ and $\epsilon$. In the following, when we say a constant or an estimate is uniform, we mean that it is independent of $t$ and $\epsilon$.
\begin{lem}\label{Ric om theta t eps}
The Ricci curvature equation for \eqref{lift reference metric log Fano app both 3} reads
\begin{align*}
Ric(\omega_{\theta_{t,\epsilon}})=\theta+(1-\beta)\{2\pi[D]-i\partial\bar\partial(\log|s|_h^2-\log S_\epsilon)\}.
\end{align*}
\end{lem}
\begin{proof}
Taking $-i\partial\bar\partial\log$ on \eqref{lift reference metric log Fano app both 3}, we get
\begin{align*}
Ric(\omega_{\theta_{t,\epsilon}})=-(\beta-1)i\partial\bar\partial\log S_\epsilon-i\partial\bar\partial h_\theta+Ric(\omega).
\end{align*}
Then we have by \eqref{h0} that
\begin{align*}
Ric(\omega_{\theta_{t,\epsilon}})=-(\beta-1)i\partial\bar\partial\log S_\epsilon+\theta+(1-\beta)\Theta_D.
\end{align*}
The conclusion thus follows from \eqref{PL}.
\end{proof}
\subsubsection{Estimation of the reference metric}
We write
\begin{align*}
\tilde\varphi_{\theta_{t,\epsilon}}:=\varphi_{\theta_{t,\epsilon}}-\phi_E.
\end{align*}
Accordingly, we see that
\begin{align*}
\tilde\omega_t=\omega_t+i\partial\bar\partial\phi_E,\quad \omega_{\theta_{t,\epsilon}}=\tilde \omega_t+i\partial\bar\partial\tilde\varphi_{\theta_{t,\epsilon}}.
\end{align*}
By substitution, we rewrite the $(t,\epsilon)$-approximation \eqref{lift reference metric log Fano app both 3} of the reference metric as follows.
\begin{lem}
\begin{align}\label{lift reference metric log Fano app both 2 Phi}
\omega_{\theta_{t,\epsilon}}^n=(\tilde \omega_t+i\partial\bar\partial \tilde\varphi_{\theta_{t,\epsilon}})^n=e^{-f_{t,\epsilon}} \tilde \omega_t^n,
\end{align}
where
\begin{align}\label{Singular cscK metrics: f}
-f_{t,\epsilon}:=(\beta-1)\log S_\epsilon+h_\theta+c_{t,\epsilon}+\log\frac{\omega^n}{\tilde\omega_t^n}.
\end{align}
\end{lem}
\begin{lem}\label{nef tilde f}
There exists a uniform constant $C$ such that
\begin{align*}
&f_{t,\epsilon}\geq C, \quad e^{- f_{t,\epsilon}}\leq C,\quad
|\partial f|^2\leq C[1+(\beta-1)^2S^{-1}_\epsilon]\\
& -C[(\beta-1)S_\epsilon^{-1}+1]\omega\leq i\partial\bar\partial f_{t,\epsilon}\leq C \omega.
\end{align*}
\end{lem}
\begin{proof}
Since the expression of $e^{- f_{t,\epsilon}}$ is
\begin{align*}
e^{- f_{t,\epsilon}}=S_\epsilon^{\beta-1} e^{h_\theta+c_{t,\epsilon}}\frac{\omega^n}{\tilde\omega_t^n},
\end{align*}
we obtain its upper bound from $\beta\geq 1$.
Moreover, its derivatives are estimated by
\begin{align*}
|\partial f|^2&=|\partial [(\beta-1)\log S_\epsilon+h_\theta+c_{t,\epsilon}+\log\frac{\omega^n}{\tilde\omega_t^n}]|^2\\
&\leq C[(\beta-1)^2|\partial |s|_h^2|^2S_\epsilon^{-2}+1]
\leq C[(\beta-1)^2S_\epsilon^{-1}+1]
\end{align*}
and
\begin{align*}
i\partial\bar\partial f_{t,\epsilon}=-i\partial\bar\partial [(\beta-1)\log S_\epsilon+h_\theta+c_{t,\epsilon}+\log\frac{\omega^n}{\tilde\omega_t^n}].
\end{align*}
Thus this lemma is a consequence of \lemref{h eps}.
\end{proof}
\begin{lem}\label{big nef approximate reference metric bound}Suppose that $\varphi_{\theta_{t,\epsilon}}$ is a solution to the approximate equation \eqref{lift reference metric log Fano app both 3}.
Then there exists a uniform constant $C$ independent of $t,\epsilon$ such that
\begin{align*}
\|\varphi_{\theta_{t,\epsilon}}\|_\infty\leq C, \quad
C^{-1} S_\epsilon^{\beta-1} |s_E|^{a(n-1)}_{h_E}\cdot\omega
\leq \omega_{\theta_{t,\epsilon}}
\leq C |s_E|^{-a}_{h_E}\cdot\omega.
\end{align*}
Moreover, the sequence $\varphi_{\theta_{t,\epsilon}}$ converges in $L^1$ to $\varphi_\theta$, which is the unique solution to the equation \eqref{lift reference metric log Fano app limit}.
The solution $\varphi_\theta\in C^0(X)\cap C^{\infty}(X\setminus (E\cup D))$.
\end{lem}
\begin{proof}
We outline the proof for readers' convenience.
The $L^\infty$ estimate is obtained by applying Theorem 2.1 and Proposition 3.1 in \cite{MR2505296}, since $$(|s|_h^2+\epsilon)^{\beta-1} e^{h_{\theta}+c_{t,\epsilon}}$$ is uniformly bounded in $L^p(\omega^n_{sr})$ for $p>1$.
For the Laplacian estimate, we denote
\begin{align*}
v=\mathop{\rm tr}\nolimits_{\tilde \omega_t}\omega_{\theta_{t,\epsilon}},\quad w=e^{-C_1 \tilde\varphi_{\theta_{t,\epsilon}}} v,
\end{align*}
and obtain from \eqref{Laplacian estimate: laplacian w} that
\begin{align}\label{Yau computation C1 semi}
e^{C_1 \tilde\varphi_{\theta_{t,\epsilon}}}\triangle_\varphi w
&\geq v^{1+\frac{1}{n-1}} e^{-\frac{\tilde F_{\theta_{t,\epsilon}}}{n-1}}+\triangle \tilde F_{\theta_{t,\epsilon}}-C_1 n v.
\end{align}
The constant $C_1$ is taken to be $C_{1.1}+1$, where $C_{1.1}$ is the lower bound of the bisectional curvature of $\tilde \omega_t$.
We write $\tilde F_{\theta_{t,\epsilon}}=-f_{t,\epsilon}$ and use \lemref{nef tilde f} to show that there exists a uniform constant $C$, independent of $t$ and $\epsilon$ such that
\begin{align*}
\tilde F_{\theta_{t,\epsilon}}\leq C,\quad \triangle \tilde F_{\theta_{t,\epsilon}}\geq -C.
\end{align*}
Then the maximum principle argument applies. At the maximum point $p$ of $w$, $v(p)$ is bounded above by \eqref{Yau computation C1 semi}. Meanwhile, at any point $x\in X$, $w(x)\leq w(p)$, which means
\begin{align*}
v(x)\leq e^{C_1( \tilde\varphi_{\theta_{t,\epsilon}}(x)- \tilde\varphi_{\theta_{t,\epsilon}}(p))} v(p)
\leq e^{C_1(\varphi_{\theta_{t,\epsilon}}(x)- \tilde\varphi_{\theta_{t,\epsilon}}(p))} |s_E|^{-C_1 a_0}_{h_E}(x).
\end{align*}
Moreover, $\triangle_{\tilde \omega_t}\phi_E$ is uniformly bounded, independent of $t,\epsilon$.
Inserting these into
\begin{align*}
\triangle_{\tilde \omega_t}\varphi_{\theta_{t,\epsilon}}=\mathop{\rm tr}\nolimits_{\tilde \omega_t}\omega_{\theta_{t,\epsilon}}-n+\triangle_{\tilde \omega_t}\phi_E =v-n+\triangle_{\tilde \omega_t}\phi_E,
\end{align*}
we obtain the Laplacian estimate
\begin{align*}
\triangle_{\tilde \omega_t}\varphi_{\theta_{t,\epsilon}} \leq C |s_E|^{-a}_{h_E}
\end{align*}
by taking $a=C_1 a_0.$ The metric bound is thus obtained from the following \lemref{Laplacian to metric bound}, namely
\begin{align*}
C^{-1} S_\epsilon^{\beta-1}|s_E|^{a(n-1)}_{h_E}\tilde \omega_t\leq\omega_{\theta_{t,\epsilon}} \leq C|s_E|^{-a}_{h_E} \tilde \omega_t
\end{align*}
together with the equivalence of the reference metrics \lemref{metrics equivalence}.
\end{proof}
\begin{lem}\label{Laplacian to metric bound}
Assume we have the Laplacian estimate
\begin{align*}
\triangle_{\tilde \omega_t}\varphi_{\theta_{t,\epsilon}} \leq A.
\end{align*}
Then the metric bound holds
\begin{align*}
C S_\epsilon^{\beta-1} (n+A)^{-(n-1)}\tilde \omega_t \leq \omega_{\theta_{t,\epsilon}}\leq (n+A) \tilde \omega_t.
\end{align*}
\end{lem}
\begin{proof}
The upper bound of $ \omega_{\theta_{t,\epsilon}}$ is obtained from
\begin{align*}
\mathop{\rm tr}\nolimits_{\tilde \omega_t}\omega_{\theta_{t,\epsilon}}=n+\triangle_{\tilde \omega_t}\varphi_{\theta_{t,\epsilon}}\leq n+A.
\end{align*}
The lower bound follows directly from the fundamental inequality and the equation \eqref{lift reference metric log Fano app both 3} of the volume ratio,
\begin{align*}
\mathop{\rm tr}\nolimits_{\omega_{\theta_{t,\epsilon}}}{\tilde \omega_t}\leq(\frac{\omega_{\theta_{t,\epsilon}}^n}{\tilde \omega_t^n})^{-1}(\mathop{\rm tr}\nolimits_{\tilde \omega_t}\omega_{\theta_{t,\epsilon}})^{n-1} \leq CS_\epsilon^{1-\beta} (n+A)^{n-1}.
\end{align*}
\end{proof}
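The fundamental inequality used in the last display can be verified pointwise, as follows. Diagonalising $\omega_{\theta_{t,\epsilon}}$ with respect to $\tilde\omega_t$ at a point, with eigenvalues $\lambda_1,\cdots,\lambda_n>0$, we have $\prod_{j}\lambda_j=\omega_{\theta_{t,\epsilon}}^n/\tilde\omega_t^n$ and $\sum_j\lambda_j=\mathop{\rm tr}\nolimits_{\tilde\omega_t}\omega_{\theta_{t,\epsilon}}$, while each monomial $\prod_{j\neq i}\lambda_j$ appears, with coefficient at least one, in the expansion of $(\sum_j\lambda_j)^{n-1}$; hence
\begin{align*}
\mathop{\rm tr}\nolimits_{\omega_{\theta_{t,\epsilon}}}{\tilde\omega_t}
=\sum_{i=1}^n\frac{1}{\lambda_i}
=\frac{\sum_{i=1}^n\prod_{j\neq i}\lambda_j}{\prod_{j=1}^n\lambda_j}
\leq\Big(\frac{\omega_{\theta_{t,\epsilon}}^n}{\tilde\omega_t^n}\Big)^{-1}\Big(\mathop{\rm tr}\nolimits_{\tilde\omega_t}\omega_{\theta_{t,\epsilon}}\Big)^{n-1}.
\end{align*}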
\subsection{Singular scalar curvature equation}
\begin{defn}\label{Singular cscK eps defn}
Let $\Omega$ be a big and semi-positive class and let $\omega_{sr}$ be a smooth representative in $\Omega$. The singular scalar curvature equation in $\Omega$ is defined to be
\begin{equation}\label{Singular cscK eps}
\omega_\varphi^n=(\omega_{sr}+i\partial\bar\partial\varphi)^n=e^F \omega_{\theta}^n,\quad
\triangle_{\varphi} F=\mathop{\rm tr}\nolimits_{\varphi}\theta-R.
\end{equation}
The reference metric $\omega_\theta$ is introduced in \eqref{lift reference metric log Fano app limit} and $R$ is a real-valued function.
In particular, when considering the singular cscK equation, we have
\begin{align*}
R=\ul S_\beta=\frac{nC_1(X,D)\Omega^{n-1}}{\Omega^{n}}.
\end{align*}
\end{defn}
Inserting the expression of $\omega_\theta$, i.e. $\omega_\theta^n=e^{-f}\omega^n$, into \eqref{Singular cscK eps}, we get
\begin{lem}
\begin{equation*}
(\omega_{sr}+i\partial\bar\partial\varphi)^n=e^{\tilde F} \omega^n,\quad
\triangle_{\varphi} \tilde F=\mathop{\rm tr}\nolimits_{\varphi}(\theta-i\partial\bar\partial f)-R,
\end{equation*}
where
$\tilde F=F-f,\quad f=-(\beta-1)\log|s|_h^2-h_\theta.$
\end{lem}
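The lemma follows from a direct substitution. Since $\omega_\theta^n=e^{-f}\omega^n$, the first equation of \eqref{Singular cscK eps} becomes $\omega_\varphi^n=e^{F}e^{-f}\omega^n=e^{\tilde F}\omega^n$, while applying $\triangle_\varphi$ to $\tilde F=F-f$ gives
\begin{align*}
\triangle_{\varphi}\tilde F
=\triangle_{\varphi}F-\mathop{\rm tr}\nolimits_{\varphi}(i\partial\bar\partial f)
=\mathop{\rm tr}\nolimits_{\varphi}(\theta-i\partial\bar\partial f)-R.
\end{align*}
The expression of $f$ is read off from \eqref{lift reference metric log Fano app limit}, namely $e^{-f}=|s|_h^{2\beta-2}e^{h_\theta}$.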
\subsection{Approximate singular scalar curvature equation}
We define an analogue of the $t$-perturbation and the $(t,\epsilon)$-approximation of the singular scalar curvature equation.
In general, we consider the perturbed equations.
\begin{defn}\label{Singular cscK t eps defn}The perturbed singular scalar curvature equation is defined as
\begin{equation*}
\omega_{\varphi_t}^n=(\omega_{t}+i\partial\bar\partial\varphi_t)^n=e^{F_t} \omega_{\theta_t}^n,\quad
\triangle_{\varphi_t} F_t=\mathop{\rm tr}\nolimits_{\varphi_t}\theta-R.
\end{equation*}
Meanwhile, the approximate singular scalar curvature equation is set to be
\begin{equation}\label{Singular cscK t eps}
\omega_{\varphi_{t,\epsilon}}^n=(\omega_t+i\partial\bar\partial\varphi_{t,\epsilon})^n=e^{F_{t,\epsilon}} \omega_{\theta_{t,\epsilon}}^n,\quad
\triangle_{\varphi_{t,\epsilon}} F_{t,\epsilon}=\mathop{\rm tr}\nolimits_{\varphi_{t,\epsilon}}\theta-R.
\end{equation}
\end{defn}
\begin{rem}
The function $R$ also needs to be perturbed. Since its perturbation is bounded and does not affect the estimates we will derive, we keep the same notation in the following sections for convenience.
\end{rem}
\begin{defn}
We approximate the singular cscK equation by
\begin{equation*}
\omega_{\varphi_t}^n=(\omega_{t}+i\partial\bar\partial\varphi_t)^n=e^{F_t} \omega_{\theta_t}^n,\quad
\triangle_{\varphi_t} F_t=\mathop{\rm tr}\nolimits_{\varphi_t}\theta-R_{t}
\end{equation*}
with the constant to be
\begin{equation}\label{approximate average scalar}
R_{t}=\frac{nC_1(X,D)(\Omega+t[\omega])^{n-1}}{(\Omega+t[\omega])^{n}}.
\end{equation}
\end{defn}
By the formulas of $Ric( \omega_{\theta_t})$ from \lemref{Ric om theta t} and $Ric(\omega_{\theta_{t,\epsilon}})$ from \lemref{Ric om theta t eps}, the scalar curvatures of both approximations are given below.
\begin{lem}On the regular part $M=X\setminus D$, we have
\begin{align*}
S(\omega_{\varphi_{t}})=R,\quad
S(\omega_{\varphi_{t,\epsilon}})=R+(1-\beta)\triangle_{\varphi_{t,\epsilon}}(\log S_\epsilon-\log|s|_h^2).
\end{align*}
\end{lem}
\begin{proof}
They are obtained directly by inserting the Ricci curvature equations $Ric(\omega_{\theta_{t}})=\theta+2\pi(1-\beta)[D]$ and
\begin{align*}
Ric(\omega_{\theta_{t,\epsilon}})=\theta+(1-\beta)\{2\pi[D]-i\partial\bar\partial(\log|s|_h^2-\log S_\epsilon)\}.
\end{align*}
into the scalar curvature equations
\begin{align*}
S(\omega_{\varphi_{t}})=-\triangle_{\varphi_t} F_t+\mathop{\rm tr}\nolimits_{\varphi_t}Ric(\omega_{\theta_t}),\quad
S(\omega_{\varphi_{t,\epsilon}})=-\triangle_{\varphi_{t,\epsilon}}F_{t,\epsilon}+\mathop{\rm tr}\nolimits_{\varphi_{t,\epsilon}}Ric(\omega_{\theta_{t,\epsilon}}).
\end{align*}
\end{proof}
We will derive a priori estimates for the smooth solutions to the approximate equation \eqref{Singular cscK t eps}. We set
\begin{align*}
& \tilde\varphi_{{t,\epsilon}}:=\varphi_{{t,\epsilon}}-\phi_E, \quad\tilde F_{t,\epsilon}:=F_{t,\epsilon}-f_{t,\epsilon}.\end{align*}
Then it follows that $\omega_{\varphi_{t,\epsilon}}=\omega_t+i\partial\bar\partial \varphi_{t,\epsilon}=\tilde\omega_t+i\partial\bar\partial\tilde\varphi_{t,\epsilon}$. Hence, \eqref{Singular cscK t eps} is rewritten for the pair $(\tilde\varphi_{t,\epsilon}, \tilde F_{t,\epsilon})$ as the following equations with respect to the smooth K\"ahler metric $\tilde\omega_t$.
\begin{lem}
\begin{equation}\label{Singular cscK t eps tilde}
(\tilde\omega_t+i\partial\bar\partial \tilde\varphi_{t,\epsilon})^n=e^{\tilde F_{t,\epsilon}} \tilde\omega_t^n,\quad
\triangle_{\varphi_{t,\epsilon}} \tilde F{_{t,\epsilon}}=\mathop{\rm tr}\nolimits_{\varphi_{t,\epsilon}}(\theta-i\partial\bar\partial f_{t,\epsilon})-R.
\end{equation}
\end{lem}
According to \lemref{nef tilde f}, we see that $f_{t,\epsilon}$ has a uniform lower bound and $i\partial\bar\partial f_{t,\epsilon}$ has a uniform upper bound when $\beta>1$. This differs from the cone metric case $0<\beta<1$ and the smooth case $\beta=1$, where $f_{t,\epsilon}$ has a uniform upper bound and $i\partial\bar\partial f_{t,\epsilon}$ has a uniform lower bound.
The a priori estimates in \cite{MR4301557} were extended to cone metrics in a big and semi-positive class in \cite{arXiv:1803.09506}. In the following sections, namely Section \ref{Linfty estimate}, Section \ref{Gradient estimate of vphi} and Section \ref{W2p estimate}, we obtain estimates for the approximate singular scalar curvature equation \eqref{Singular cscK t eps} with $\beta>1$.
\begin{defn}[Almost admissible solution for singular equations]\label{a priori estimates approximation singular}
We say $\varphi_\epsilon$ is an \textit{almost admissible solution} to the approximate singular scalar curvature equation \eqref{Singular cscK t eps}, if there are uniform constants, independent of $t,\epsilon$, such that the following estimates hold.
\begin{itemize}
\item $L^\infty$-estimates in \thmref{L infty estimates Singular equation}:
\begin{align*}
&\|\varphi_{t,\epsilon}\|_\infty\leq C,\quad \|e^{F_{t,\epsilon}}\|_{p;\tilde\omega_t^n},\quad \|e^{\tilde F_{t,\epsilon}}\|_{p;\tilde\omega_t^n} \leq C(p),\quad p\geq 1 ;\\
&\sup_X(F_{t,\epsilon}-\sigma_s\phi_E),\quad-\inf_{X}[F_{t,\epsilon}-\sigma_i\phi_E]\leq C;\\
& \sigma_s:=C_l-\tau,\quad \sigma_i:=C_u+\tau,\quad \forall\tau>0.
\end{align*}
\item gradient estimate of $\varphi$ in \thmref{gradient estimate}:
\begin{align*}
|s_E|^{2a_0 \sigma^1_E } S_\epsilon^{\sigma^1_D}|\partial\tilde\varphi_{t,\epsilon}|^2_{\tilde\omega_t}\leq C, \quad \sigma^1_D\geq 1
\end{align*}
where the singular exponent $\sigma^1_E$ satisfies \eqref{Gradient estimate: sigmaE} and \eqref{Gradient estimate: sigmaE 2};
\item $W^{2,p}$-estimate in \thmref{w2pestimates degenerate Singular equation}:
\begin{align*}
\int_X (\mathop{\rm tr}\nolimits_{\tilde\omega_t}\omega_\varphi)^{p} |s_E|^{\sigma^2_E}_{h_E} S_\epsilon^{\sigma^2_D} \tilde\omega_t^n\leq C(p),\quad\forall p\geq 1
\end{align*}
where $\sigma^2_D>(\beta-1)\frac{n-2-2np^{-1}}{n-1+p^{-1}}$ and $\sigma^2_E$ is given in \eqref{w2pestimates sigma E}.
\end{itemize}
\end{defn}
\begin{thm}\label{almost admissible singular}
The family of solutions to the approximate singular scalar curvature equation \eqref{Singular cscK t eps} with bounded entropy is almost admissible.
\end{thm}
When $\Omega$ is K\"ahler, the singular equation reduces to the degenerate equation \eqref{Degenerate cscK}.
\begin{thm}\label{almost admissible degenerate}
Assume $\{\varphi_{\epsilon}\}$ is a family of solutions to the approximate degenerate scalar curvature equation \eqref{Degenerate cscK approximation} with bounded entropy. Then $\varphi_{\epsilon}$ is almost admissible, Definition \ref{a priori estimates approximation}.
\end{thm}
\subsection{Singular metrics with prescribed scalar curvature}
In this section, we prove an existence theorem for singular metrics with prescribed scalar curvature. The idea is to construct a solution to the singular equation by taking the limit of solutions to the approximate equation. The proof is an adaptation of Theorem 4.40 in our previous article \cite{arXiv:1803.09506}.
We recall the definition of a singular canonical metric with prescribed scalar curvature, see \cite[Definition 4.39]{arXiv:1803.09506}, which is motivated by the study of canonical metrics on connected normal complex projective varieties.
\begin{defn}[Singular metrics with prescribed scalar curvature]\label{Singular metric with prescribed scalar curvature}
In a big cohomology class $[\omega_{sr}]$, we say $\omega:=\omega_{sr}+i\partial\bar\partial\varphi$ is a singular metric with prescribed scalar curvature, if $\varphi\in PSH(\omega_{sr})$ is an $L^1$-limit of a sequence of K\"ahler potentials solving the approximate scalar curvature equation \eqref{Singular cscK t eps}.
In addition, we say the singular metric $\omega_\varphi$ is bounded, if $\varphi\in\mathcal E^1(\omega_{sr})\cap L^\infty$.
When $\Omega$ is K\"ahler, we name it the degenerate metric instead.
\end{defn}
\subsection{Estimation of approximate solutions}
We need to consider the existence of the solution to the approximate singular scalar curvature equation \eqref{Singular cscK t eps}.
Restricting to the case of log KE metrics, we have seen the smooth approximation in Proposition \ref{smooth approximation KE}. For cscK cone metrics, $0<\beta\leq 1$, the smooth approximation is shown in \cite[Proposition 4.37]{arXiv:1803.09506}. We collect the results below. We say a real $(1,1)$-cohomology class $\Omega$ is
\textit{nef}, if the cohomology class $\Omega_t:=\Omega+t[\omega]$ is K\"ahler for all $t>0$.
\begin{prop}Let $\Omega$ be a big and nef class on a K\"ahler manifold $(X,\omega)$, whose automorphism group $Aut(M)$ is trivial. Suppose that $\Omega$ satisfies the cohomology condition
\begin{align*}
\left\{
\begin{array}{lcl}
&&0\leq \eta<\frac{n+1}{n}\alpha_\beta,\quad
C_1(X,D)<\eta\Omega,\\
&& (-n\frac{C_1(X,D)\cdot\Omega^{n-1}}{\Omega^{n}}+\eta)\Omega+(n-1)C_1(X,D)>0.
\end{array}
\right.
\end{align*}
Then $\Omega$ has the cscK approximation property, which precisely asserts that, for small positive $t$, we have
\begin{enumerate}
\item
the log $K$-energy is $J$-proper in $\Omega_t$;
\item there exists a cscK cone metric $\omega_{t}$ in $\Omega_t$;
\item the cscK cone metric $\omega_{t}$ admits a smooth approximation $\omega_{t,\epsilon}$ in $\Omega_{t}$.
\end{enumerate}
\end{prop}
\begin{ques}
We may ask the question whether the approximate degenerate cscK equation \eqref{Degenerate cscK approximation} has a smooth solution, if the log $K$-energy \eqref{log K energy} is proper.
\end{ques}
This differs from the case $0<\beta\leq 1$, where the approximate log $K$-energy $\nu^\epsilon_\beta$ dominates the log $K$-energy $\nu_\beta$. There, the properness of $\nu^\epsilon_\beta$ follows from that of $\nu_\beta$ and then implies the existence of the approximate solutions by \cite{MR4301558}.
\subsection{Convergence and regularity}\label{Convergence and regularity}
Now we further consider the convergence of the family $\{\varphi_{t,\epsilon}\}$ of smooth approximate solutions to \eqref{Singular cscK t eps}, and the regularity of the limit.
We normalise the family so that $\sup_X\varphi_{t,\epsilon}=0$. According to Hartogs' lemma, a sequence of $\omega_{sr}$-psh functions has an $L^1$-convergent subsequence.
\begin{lem}
The family $\{\varphi_{t,\epsilon}\}$ converges to $\varphi$ in $L^1$, as $t,\epsilon\rightarrow 0$. Moreover, $\varphi$ is an $\omega_{sr}$-psh function with $\sup_X\varphi=0$.
\end{lem}
\begin{lem}
Assuming the uniform bounds of $\|\varphi_{t,\epsilon}\|_\infty$ and $\|e^{F_{t,\epsilon}}\|_{p;\tilde\omega_t^n}$, the limit $\varphi$ belongs to $\mathcal E^1(X,\omega_{sr})\cap L^\infty(X)$.
\end{lem}
\begin{proof}
The proof is included in the proof of \cite[Proposition 4.41]{arXiv:1803.09506}.
\end{proof}
In conclusion, we apply the gradient and $W^{2,p}$ estimates in \thmref{almost admissible singular} to obtain
\begin{thm}\label{existence singular}
Assume that the family $\{\varphi_{t,\epsilon}\}$ consists of almost admissible solutions to the approximate singular scalar curvature equation \eqref{Singular cscK t eps}. Then there exists a bounded singular metric $\omega_\varphi$ with prescribed scalar curvature; moreover, $\omega_\varphi$ satisfies the gradient estimate and the $W^{2,p}$-estimate stated in \thmref{almost admissible singular}.
\end{thm}
The convergence behaviour is improved for the degenerate metrics.
\begin{thm}\label{existence degenerate}
Assume that $\Omega$ is K\"ahler and that the family $\{\varphi_{\epsilon}\}$, consisting of solutions to the approximate degenerate scalar curvature equation \eqref{Degenerate cscK approximation}, has bounded entropy. Then there exists a degenerate metric $\omega_\varphi$ with prescribed scalar curvature, which is bounded and satisfies the gradient estimate and the $W^{2,p}$-estimate stated in \thmref{almost admissible degenerate}. Moreover, $\omega_\varphi$ is smooth and satisfies the degenerate scalar curvature equation \eqref{Degenerate cscK defn} outside $D$.
Furthermore, if $\{\varphi_{\epsilon}\}$ has bounded gradients of the volume ratios $\|\partial F_\epsilon\|_{\varphi_\epsilon}$, then the degenerate metric $\omega_\varphi$ satisfies the Laplacian estimate; namely, it is admissible when $\beta>\frac{n+1}{2}$, and $\gamma$-admissible for any $\gamma>0$ when $1<\beta< \frac{n+1}{2}$.
\end{thm}
\begin{proof}
The first part of the theorem is a direct corollary of \thmref{existence singular}. The smooth convergence outside $D$ is obtained by using the local Laplacian estimate in Section 6 of the arXiv version of \cite{MR4301557}. By applying the Evans-Krylov estimates and the bootstrap method, the solution is smooth on the regular part $X\setminus D$ and satisfies the equation there.
The global Laplacian estimates for degenerate metrics are obtained from \thmref{cscK Laplacian estimate}.
\end{proof}
In Section \ref{A priori estimates for approximate degenerate cscK equations}, we develop an integration method with weights for the general degenerate scalar curvature equation \eqref{Degenerate cscK approximation}.
In particular, we apply our theorems to the degenerate K\"ahler-Einstein metrics.
The integration method we develop here provides an alternative way to obtain the Laplacian estimate for the approximate degenerate K\"ahler-Einstein equation \eqref{critical pt Ding approximation}, as shown in Section \ref{Log Kahler Einstein metric}, while Yau's proof of the Laplacian estimate applies the maximum principle.
\begin{cor}\label{existence log KE}
When $\lambda\leq 0$ and $\beta> 1$, there exists a family of smooth approximate K\"ahler-Einstein metrics \eqref{critical pt Ding approximation}, which converges to an admissible degenerate K\"ahler-Einstein metric.
\end{cor}
\section{$L^\infty$-estimate}\label{Linfty estimate}
This section is devoted to obtaining the $L^\infty$-estimate for the singular scalar curvature equation \eqref{Singular cscK t eps}, for both $\varphi$ and $F$.
\begin{thm}[$L^\infty$-estimate for singular equation]\label{L infty estimates Singular equation}
Assume $\varphi_{t,\epsilon}$ is a solution of the approximate singular scalar curvature equation \eqref{Singular cscK t eps}. Then we have the following estimates.
\begin{enumerate}
\item For any $p\geq 1$, there exists a constant $A_1$ such that
\begin{align*}
\|\varphi_{t,\epsilon}\|_\infty,\quad \sup_X[F_{t,\epsilon}-\sigma_s \phi_E],\quad \|e^{F_{t,\epsilon}}\|_{p;\tilde\omega_t^n},\quad \|e^{\tilde F_{t,\epsilon}}\|_{p;\tilde\omega_t^n}\leq A_1.
\end{align*}
The constant $\sigma_s$ equals $C_l-\tau$, and $A_1$ depends on the entropy
$
E^\beta_{t,\epsilon}=\frac{1}{V}\int_X F_{t,\epsilon}\omega_{\varphi_{t,\epsilon}}^n,
$
the alpha invariant $\alpha(\Omega_1)$, $\|e^{C_l\phi_E}\|_{L^{p_0}(\tilde \omega_t^n)}$ for some $p_0\geq 1$ and
\begin{align*}
\inf_X f_{t,\epsilon},\quad C_l=\inf_{(X,\tilde\omega_t)}\theta,\quad\sup_X R,\quad n,\quad p.
\end{align*}
\item The volume ratio also admits the lower bound
\begin{align*}
\inf_{X}[F_{t,\epsilon}-\sigma_i\phi_E]\geq A_2, \quad \forall\tau>0.
\end{align*}
The constant $\sigma_i$ equals $C_u+\tau$, and $A_2$ depends on $\sup_XF_{t,\epsilon}$ and
\begin{align*}
\inf_X f_{t,\epsilon}, \quad C_u=\sup_{(X,\tilde\omega_{t})}\theta,\quad \inf_X R,\quad n.
\end{align*}
\end{enumerate}
\end{thm}
Furthermore, since $\tilde F_{t,\epsilon}$ equals $F_{t,\epsilon}+(\beta-1)\log S_\epsilon$ up to a smooth function, the $L^\infty$-estimate of $F_{t,\epsilon}$ gives the volume ratio bound.
\begin{cor}[Volume ratio]
\begin{align}\label{Gradient estimate: volume ratio bound}
C^{-1} e^{\sigma_i\phi_E} S_\epsilon^{\beta-1}\tilde\omega^n_t \leq \omega^n_{\varphi_{t,\epsilon}}=e^{\tilde F_{t,\epsilon}}\tilde\omega^n_t\leq C e^{\sigma_s\phi_E} S_\epsilon^{\beta-1}\tilde\omega^n_t.
\end{align}
\end{cor}
Before we start the proof, we state the corresponding estimate for the degenerate equation, where we further assume that $\Omega=[\omega_K]$ is K\"ahler, so that the singular weight vanishes, i.e. $\sigma_E=0$.
\begin{thm}[$L^\infty$-estimate for degenerate equation]\label{L infty estimates degenerate equation}
Assume $\varphi_{\epsilon}$ is a solution of the approximate degenerate scalar curvature equation \eqref{Degenerate cscK approximation}.
Then for any $p\geq 1$, there exists a constant $A_0$ such that
\begin{align*}
\|\varphi_{\epsilon}\|_\infty,\quad \|F_{\epsilon}\|_\infty,\quad \|e^{F_{\epsilon}}\|_{p;\omega^n},\quad \|e^{\tilde F_{\epsilon}}\|_{p;\omega^n}\leq A_0.
\end{align*}
\end{thm}
Now we start the proof of \thmref{L infty estimates Singular equation}.
To proceed further, we omit the indices $(t,\epsilon)$ in \eqref{Singular cscK t eps} for convenience, that is,
\begin{equation}\label{Singular cscK t eps 3 short}
\omega^n_\varphi=(\omega_t+i\partial\bar\partial\varphi)^n=e^{F} \omega_{\theta}^n,\quad
\triangle_{\varphi} F=\mathop{\rm tr}\nolimits_{\varphi}\theta-R.
\end{equation}
\subsection{General estimation}
In this section, we extend Chen-Cheng \cite{MR4301557} to the singular setting. We first summarise the machinery for obtaining the $L^\infty$-estimates in Proposition \ref{Linfty estimate v prop} and Proposition \ref{Estimation of I prop}. Then we apply this systematic method to conclude $L^\infty$-estimates under various conditions on $\theta$ in Section \ref{Applcations}.
We let $\varphi_a$ be an auxiliary function defined later in \eqref{vphi a} and set
\begin{align}\label{Linfty estimate u}
w:=b_0F+b_1\varphi+b_2\varphi_a.
\end{align}
The singular scalar curvature equation \eqref{Singular cscK t eps 3 short} gives the following identity.
\begin{lem}\label{Linfty estimate tri w}
We write $A_R:=-b_0R+b_1n+b_2\mathop{\rm tr}\nolimits_\varphi\omega_{\varphi_a}$. Then
\begin{align*}
\triangle_\varphi w&=b_0(\mathop{\rm tr}\nolimits_{\varphi}\theta-R)+b_1(n-\mathop{\rm tr}\nolimits_\varphi\omega_t)+b_2(\mathop{\rm tr}\nolimits_\varphi\omega_{\varphi_a}-\mathop{\rm tr}\nolimits_\varphi\omega_t)\\
&=b_0\mathop{\rm tr}\nolimits_{\varphi}\theta-(b_1+b_2)\mathop{\rm tr}\nolimits_\varphi\omega_t+A_R.
\end{align*}
\end{lem}
\begin{rem}\label{Linfty estimate b1}
There are two ways to deal with $\omega_t$, one is $\omega_t=\tilde\omega_t-i\partial\bar\partial\phi_E$, and the other one is $\omega_t=\omega_{sr}+t\omega$. Thus, we have
\begin{align*}
\triangle_\varphi w&=b_0\mathop{\rm tr}\nolimits_{\varphi}\theta-(b_1+b_2)\mathop{\rm tr}\nolimits_\varphi\tilde\omega_t+(b_1+b_2)\triangle_\varphi\phi_E+A_R\\
&=b_0\mathop{\rm tr}\nolimits_{\varphi}\theta-(b_1+b_2)\mathop{\rm tr}\nolimits_\varphi(\omega_{sr}+t\omega)+A_R.
\end{align*}
The constant $b_1$ will be chosen to be negative such that $-(b_1+b_2)$ is positive. Accordingly, we see that
$\triangle_\varphi w\geq b_0\mathop{\rm tr}\nolimits_{\varphi}\theta-t(b_1+b_2)\mathop{\rm tr}\nolimits_\varphi\omega+A_R.$
\end{rem}
We introduce a weight function $H$ and set
\begin{align*}
u:=w-H
\end{align*} and utilise various conditions on the $(1,1)$-form $\theta$, aiming to obtain from the identity in \lemref{Linfty estimate tri w} a differential inequality of the form
\begin{align}\label{Linfty estimate tri u}
\triangle_\varphi u\geq A_\theta\mathop{\rm tr}\nolimits_\varphi\tilde\omega_t+A_R.
\end{align} Here, we choose $b_1$ such that $A_\theta>0$.
Then the maximum principle is applied near the maximum of $u$ to conclude the estimate of $u$.
Given a point $z\in X$ and a ball $B_d(z)\subset X$, we let $\eta$ be the local cutoff function on $B_d(z)$ with respect to the metric $\tilde\omega_t$, such that $\eta(z)=1$ and $\eta=1-b_3$ outside the half ball $B_{\frac{d}{2}}(z)$.
Then the local cutoff function satisfies the standard estimate
\begin{align}\label{cutoff}
\triangle_\varphi\log\eta\geq -A_{b_3} \mathop{\rm tr}\nolimits_\varphi\tilde\omega_t, \quad A_{b_3}:=[\frac{2b_3}{d(1-b_3)}]^2+\frac{4b_3}{d^2(1-b_3)}.
\end{align}
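For completeness, we sketch how \eqref{cutoff} follows, under the standard assumption that $\eta$ can be chosen with $|\partial\eta|_{\tilde\omega_t}\leq \frac{2b_3}{d}$ and $i\partial\bar\partial\eta\geq -\frac{4b_3}{d^2}\tilde\omega_t$. Since $\eta\geq 1-b_3$,
\begin{align*}
\triangle_\varphi\log\eta=\frac{\triangle_\varphi\eta}{\eta}-\frac{|\partial\eta|^2_\varphi}{\eta^2}
\geq -\frac{4b_3}{d^2(1-b_3)}\mathop{\rm tr}\nolimits_\varphi\tilde\omega_t-\Big[\frac{2b_3}{d(1-b_3)}\Big]^2\mathop{\rm tr}\nolimits_\varphi\tilde\omega_t=-A_{b_3}\mathop{\rm tr}\nolimits_\varphi\tilde\omega_t,
\end{align*}
where we used the elementary bound $|\partial\eta|^2_\varphi\leq|\partial\eta|^2_{\tilde\omega_t}\mathop{\rm tr}\nolimits_\varphi\tilde\omega_t$.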
\begin{prop}\label{Linfty estimate v prop}
Assume that $u$ satisfies \eqref{Linfty estimate tri u}.
We define
\begin{align}\label{Linfty estimate v}
v:=b_4 u=b_4 (b_0 F-H+b_1\varphi+b_2\varphi_a)
\end{align} and set $\tilde d=8d^{-2}$ and $0<b_3<<1$ satisfy
\begin{align}\label{Linfty estimate Atheta b3}
b_3= \frac{A_\theta}{\tilde d \cdot b_4^{-1}+A_\theta}\text{ such that }
A_{b_3}\leq \tilde d \frac{b_3}{1-b_3}= b_4 A_\theta.
\end{align} Then the following estimates hold.
\begin{enumerate}[(i)]
\item\label{Linfty estimate v prop 1}
The upper bound of $v$ is $b_4^{-1}\sup_X v\leq C(n)d I^{\frac{1}{2n}}$ where
\begin{align}\label{I}
I:=b_3^{-2n} \int_{A_R\leq 0} (A_R)_-^{2n} e^{2n v+2(F-f)}\tilde \omega_t^n .
\end{align}
\item\label{Linfty estimate v prop 3} The upper bound of $b_0F-H$ is
\begin{align}\label{Linfty estimate F H}
b_0F-H\leq C(n)d I^{\frac{1}{2n}}-b_2\varphi_a.
\end{align}
\item\label{Linfty estimate v prop 4}
Assume the exponent $p\geq 1$ and the constant $b_2$ satisfies
\begin{align}\label{Linfty estimate b2}
b_2 p\leq \alpha(\Omega_1).
\end{align}
Then it holds
\begin{align}\label{integral F auxiliary}
\|e^{b_0F-H}\|^p_{L^p(\omega^n)}
\leq e^{p \cdot b_4^{-1}\sup_X v}\int_X e^{-\alpha\varphi_a}\omega_1^n.
\end{align}
\item \label{Linfty estimate eF}
When $b_0=1$,
assume $e^H\in L^{p_H^+}(\tilde\omega_t^n)$ for some $p_H^+\geq 1$. Then for any $\tilde p\leq \frac{pp_H^+}{p+p_H^+}$, we have $\|e^{\tilde F}\|_{L^{\tilde p}(\tilde\omega_t^n)}\leq
e^{-\inf_X f}\|e^{F}\|_{L^{\tilde p}(\tilde\omega_t^n)}$ and
\begin{align*}
\|e^{F}\|_{L^{\tilde p}(\tilde\omega_t^n)}\leq
\|e^{F-H}\|_{L^p(\tilde\omega_t^n)}\|e^{H}\|_{L^{p_H^+}(\tilde\omega_t^n)},
\end{align*}
which implies $\|\varphi\|_\infty,\|\varphi_a\|_\infty\leq C$. Consequently, \eqref{Linfty estimate F H} becomes
\begin{align*}
F\leq H+C(n)d I^{\frac{1}{2n}}-b_2\|\varphi_a\|_\infty.
\end{align*}
\item\label{Linfty estimate v prop 2}
When $b_0=1$, we assume $e^{-H}\in L^{p_H}(\tilde \omega_t^n)
$ and $b_4$ satisfies
\begin{align}\label{Linfty estimate b4}
0<b_4\leq \min\{(-4nb_1)^{-1}\alpha(\Omega_1),(4n)^{-1}p_H\}.
\end{align}
Then the estimate of $I$ is given in Proposition \ref{Estimation of I prop}.
\end{enumerate}
\end{prop}
\begin{proof}
Now we combine the inequality for $u$ and the cutoff function $\eta$.
Inserting \eqref{Linfty estimate tri u} and \eqref{cutoff} into $\triangle_\varphi (v+\log\eta)$, we get
\begin{align*}
&\triangle_\varphi (v+\log\eta)\geq (b_4 A_\theta-A_{b_3})\mathop{\rm tr}\nolimits_\varphi\tilde\omega_t+b_4 A_R \geq b_4 A_R.
\end{align*}
Furthermore, it implies
\begin{align*}
\triangle_\varphi (e^{v}\eta)\geq b_4 A_R e^{v}\eta.
\end{align*}
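Indeed, this is the standard exponential trick; we include the one-line computation for completeness:
\begin{align*}
\triangle_\varphi (e^{v}\eta)=e^{v}\eta\big[\triangle_\varphi(v+\log\eta)+|\partial(v+\log\eta)|^2_\varphi\big]\geq e^{v}\eta\,\triangle_\varphi(v+\log\eta)\geq b_4 A_R e^{v}\eta.
\end{align*}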
Therefore, applying the Aleksandrov maximum principle to this differential inequality, we obtain the estimate of $v$ in \eqref{Linfty estimate v prop 1}.
Before we obtain the estimate of $I$ in \eqref{Linfty estimate v prop 2}, which will be given in Proposition \ref{Estimation of I prop}, we derive the upper bound of $b_0F-H$ and an $L^p$ bound of $e^{b_0F-H}$.
Since $\varphi, \varphi_a$ are $\omega_t$-psh functions, we may subtract a constant so that $\varphi,\varphi_a\leq 0$. Hence, $b_1\varphi\geq 0$ and the formula \eqref{Linfty estimate v} of $v$ gives
\begin{align*}
b_0F-H\leq b_4^{-1}\sup_Xv-b_2\varphi_a.
\end{align*}
Thus the upper bound \eqref{Linfty estimate F H} of $b_0F-H$ in \eqref{Linfty estimate v prop 3} is obtained by inserting \eqref{I} to the inequality above.
Noting that $\omega_1=\omega_{sr}+\omega$ and $\omega_{sr}\geq0$, we have $\omega\leq \omega_1$, and $\varphi_a$ is also psh with respect to $\omega_1=\omega_{sr}+\omega$.
Accordingly, we apply the $\alpha$-invariant of $\Omega_1=[\omega_1]$ to conclude the $L^p(\omega^n)$ bound of $e^{b_0F-H}$ in \eqref{Linfty estimate v prop 4}.
The proof of \eqref{Linfty estimate eF} is given as follows.
Since $\omega$ is equivalent to $\tilde\omega_t$, i.e. $\tilde\omega_t\leq (C_K+t)\omega$, we may replace the $L^p(\omega^n)$ norm by the $L^p(\tilde\omega_t^n)$ norm,
$$\|e^{b_0F-H}\|_{L^p(\tilde\omega_t^n)}\leq (C_K+t)\|e^{b_0F-H}\|_{L^p(\omega^n)}.$$
Under the hypothesis $e^H\in L^{p_H^+}(\tilde\omega_t^n)$, the estimate of
$\|e^{F}\|_{L^{\tilde p}(\tilde\omega_t^n)}$ is obtained by applying the H\"older inequality to \eqref{integral F auxiliary}. Moreover, the upper bound of $e^{-f}$ from \lemref{nef tilde f} implies the $L^{\tilde p}(\tilde\omega_t^n)$ bound of $e^{\tilde F}$. By the equation
$
\omega_\varphi^n=e^{\tilde F}\tilde\omega_t^n,
$ the bound $\|e^{\tilde F}\|_{L^{\tilde p}(\tilde\omega_t^n)}$ further
implies the uniform bound of $\varphi$, due to \lemref{big nef approximate reference metric bound}.
The auxiliary function $\varphi_a$ is defined to be a solution to the following approximation of the singular Monge-Amp\`ere equation
\begin{align}\label{vphi a}
\omega_{\varphi_a}^n=E^{-1 }\omega_\theta^n e^F\sqrt{F^2+1},\quad E=V^{-1}\int_Xe^F\sqrt{F^2+1}\omega_\theta^n.
\end{align}
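We note that the constant $E$ in \eqref{vphi a} is precisely the normalisation making the total volumes match: integrating the definition,
\begin{align*}
\int_X\omega_{\varphi_a}^n=E^{-1}\int_Xe^F\sqrt{F^2+1}\,\omega_\theta^n=E^{-1}\cdot VE=V,
\end{align*}
where $V$ is the total volume appearing in the definition of $E$; this is the compatibility condition needed to solve the Monge-Amp\`ere equation \eqref{vphi a}.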
Similarly, the volume element of the auxiliary function $\omega_{\varphi_a}$ is also $L^p$, by applying the bound of $\|e^{F}\|_{L^{\tilde p}(\tilde\omega_t^n)}$. For the same reason, $\varphi_a$ is also uniformly bounded.
Inserting the resulting estimate of $\varphi_a$ into \eqref{Linfty estimate F H}, we have the upper bound of $F-H$.
\end{proof}
It remains to estimate $I$, namely \eqref{Linfty estimate v prop 2} in Proposition \ref{Linfty estimate v prop}.
\begin{prop}[Estimation of $I$]\label{Estimation of I prop}
In general, we have
\begin{align}\label{Linfty estimate rough I bound}
I\leq e^{-2\inf_X f} \int_{A_R\leq 0} |b_3^{-1}(b_0R-b_1n)|^{2n} e^{-2nb_4 H}e^{2nb_4b_1\varphi}e^{(2nb_4b_0+2)F} \tilde \omega_t^n
\end{align}
and on the integration domain $\{x\in X\,\vert\, A_R\leq 0\}$ of $I$,
\begin{align}\label{F and entropy}
F\leq (\frac{|b_0R-b_1n|}{b_2n})^n (E^\beta_{t,\epsilon}+2e^{-1}+1).
\end{align}
Assume further that $|b_0R-b_1n|$ is bounded,
$
e^{-H}\in L^{p_H}(\tilde \omega_t^n)
$,
and $b_4$ satisfies \eqref{Linfty estimate b4}.
Then
\begin{align}\label{Linfty estimate b4 cor}
I^2\leq C^2(C_K+1)^n \|e^{-H}\|^{p_H}_{L^{p_H}(\tilde\omega_t^n)} \int_{X} e^{-\alpha(\Omega_1)\varphi} \omega_1^n,
\end{align}
where $C=e^{-2\inf_X f} e^{(2nb_4+2)\sup_{A_R\leq0}F} \|b_3^{-1}(b_0R-b_1n)\|^{2n}_\infty$.
\end{prop}
\begin{proof}
We use the lower bound of $f$ from \lemref{nef tilde f} in \eqref{I},
\begin{align*}
I\leq e^{-2\inf_X f} \int_{A_R\leq 0} (A_R)_-^{2n} e^{2n v}e^{2F}\tilde \omega_t^n.
\end{align*}
Then we deal with each factor in the integrand.
Since $\varphi_a$ is an $\omega_t$-psh function, we have $\varphi_a\leq 0$. Thus \eqref{Linfty estimate v} tells us
\begin{align*}
e^{2nv}\leq e^{2nb_4 b_0F} e^{-2nb_4 H}e^{2nb_4b_1\varphi}.
\end{align*}
Using the expression of $A_R$ in \lemref{Linfty estimate tri w}, we get
$0\geq A_R\geq -b_0R+b_1n.$
So, we estimate
\begin{align*}
(A_R)_-^{2n}\leq| b_0R-b_1n|^{2n}.
\end{align*}
Inserting these estimates into the expression of $I$, we obtain \eqref{Linfty estimate rough I bound}.
The bound of $F$ is obtained in terms of the entropy $E^\beta_{t,\epsilon}$, by using the auxiliary function $\varphi_a$.
Since $A_R\leq 0$, applying the geometric mean inequality to the expression of $A_R$ in \lemref{Linfty estimate tri w}, we have
\begin{align*}
b_2 n( \frac{\omega^n_\varphi}{\omega^n_{\varphi_a}})^{\frac{-1}{n}}\leq b_2\mathop{\rm tr}\nolimits_\varphi\omega_{\varphi_a}\leq b_0R-b_1n.
\end{align*}
Substituting \eqref{vphi a} into the inequality above, we have
\begin{align*}
F\leq \sqrt{F^2+1}\leq (\frac{b_0R-b_1n}{b_2n})^n E.
\end{align*}
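Here, the volume ratio is computed directly from the equation \eqref{Singular cscK t eps 3 short} and the definition \eqref{vphi a},
\begin{align*}
\frac{\omega^n_{\varphi_a}}{\omega^n_\varphi}=\frac{E^{-1}e^F\sqrt{F^2+1}\,\omega_\theta^n}{e^F\omega_\theta^n}=E^{-1}\sqrt{F^2+1},
\end{align*}
so raising the previous inequality to the $n$-th power gives $\sqrt{F^2+1}\leq(\frac{b_0R-b_1n}{b_2n})^n E$.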
At last, we use the fact that $E$ is bounded by the entropy $E^\beta_{t,\epsilon}$, by Lemma 5.4 in \cite{arXiv:1803.09506}, to conclude \eqref{F and entropy}.
When $2nb_4b_0+2\geq 0$, we let $C=e^{-2\inf_X f} b_3^{-2n} e^{(2nb_4b_0+2)\sup_X F}$ and get
\begin{align*}
I\leq C \int_{A_R\leq 0} (b_0R-b_1n)^{2n} e^{-2nb_4 H}e^{2nb_4b_1\varphi}\tilde \omega_t^n.
\end{align*}
By H\"older inequality, it is further bounded by
\begin{align*}
C\|b_0R-b_1n\|_\infty^{2n} \int_X e^{-4nb_4 H}\tilde \omega_t^n\int_{X} e^{4nb_4b_1\varphi}\tilde \omega_t^n.
\end{align*} with $p_R> 2n$.
The integral $ \int_{X} e^{-4nb_4 H}\tilde \omega_t^n$ is finite, when $4nb_4\leq p_H$.
As \lemref{integral F auxiliary}, we use $\tilde\omega_t\leq \omega_K+\omega\leq (C_K+1)\omega$ by \lemref{metrics equivalence} and $\omega\leq \omega_1=\omega_{sr}+\omega$ by semi-positivity of $\omega_{sr}$.
So, the integral
\begin{align*}
\int_Xe^{4nb_4b_1\varphi}\tilde \omega_t^n\leq (C_K+1)^n\int_Xe^{4nb_4b_1\varphi} \omega_1^n
\end{align*} is also bounded, once $-4nb_4b_1\leq \alpha(\Omega_1)$.
\end{proof}
\begin{rem}
In the proof, we could instead use a bound on
$ \int_X|b_0R-b_1n|^{p_R}\tilde \omega_t^n $ for some $p_R>2n$.
\end{rem}
\subsection{Applications}\label{Applcations}
Now we are ready to make use of different properties on $\theta$ to estimate $\varphi$ and $F$, with the help of Proposition \ref{Linfty estimate v prop} and its corollaries.
We clarify the steps in practice. Firstly, we derive \eqref{Linfty estimate tri u} to write down the formulas of $A_\theta$, $A_R$ and $H$. Secondly, we require $A_\theta$ to be strictly positive, which determines the value of $b_1$; meanwhile, $b_2$ is chosen as in \eqref{Linfty estimate b2}, depending on the auxiliary function $\varphi_a$. Thirdly, we use the expression of $H$ to verify both conditions $\|e^H\|_{L^{p_H^+}(\tilde\omega_t^n)}<\infty$ and $\|e^{-H}\|_{L^{p_H}(\tilde \omega_t^n)}<\infty$ in \eqref{Linfty estimate eF} of Proposition \ref{Linfty estimate v prop} and in Proposition \ref{Estimation of I prop}, respectively. Consequently, $b_4$ is determined from $b_1$ and $p_H$ in \eqref{Linfty estimate b4}. At last, we obtain the value of $b_3$ from \eqref{Linfty estimate Atheta b3} and compute $I$ in Proposition \ref{Estimation of I prop}.
Now we start our applications.
As we observed in \cite{arXiv:1803.09506}, particular properties of the given $(1,1)$-form $\theta\in C_1(X,D)$ lead to various differential inequalities. The most general bound on $\theta$ is the one in Definition \ref{L infty estimates theta defn},
\begin{align*}
C_l\cdot \tilde\omega_t\leq \theta\leq C_u\cdot\tilde\omega_t.
\end{align*}
\begin{prop}\label{L infty estimates f sup vphi prop}
Suppose that $e^{C_l\phi_E}\in L^{p_0}(\tilde \omega_t^n)$ for some $p_0\geq 1$.
Then there exists a constant $C$ such that for all $p>p_0$, $\tau>0$,
\begin{align}\label{L infty estimates f sup vphi}
\|e^{F}\|_{L^{p}(\tilde\omega_t^n)}, \quad
\|e^{\tilde F}\|_{L^{p}(\tilde\omega_t^n)},\quad
\|\varphi\|_\infty, \quad \sup_X(F-\sigma_s\phi_E)\leq C,
\end{align}
where $\sigma_s=C_l-\tau$.
The constant $C$ depends on $E^\beta_{t,\epsilon}$, $C_l$, $\alpha(\Omega_1)$, $\sup_X R$, $\inf_X f$, $\sup_X\phi_E,n,p$.
\end{prop}
\begin{proof}
We let $b_0=1$ and insert the lower bound \eqref{Cl} of $\theta$, i.e. $\theta\geq C_l\tilde\omega_t$, together with $\omega_t=\tilde\omega_t-i\partial\bar\partial\phi_E$ to \lemref{Linfty estimate tri w},
\begin{align*}
\triangle_\varphi w
\geq C_l\mathop{\rm tr}\nolimits_{\varphi}\tilde\omega_t-(b_1+b_2)(\mathop{\rm tr}\nolimits_\varphi\tilde\omega_t-\triangle_\varphi\phi_E)+A_R.
\end{align*}
Comparing with \eqref{Linfty estimate tri u}, we read from this inequality that
\begin{align*}
H=(b_1+b_2)\phi_E,\quad A_\theta=C_l-(b_1+b_2),\quad A_R=-R+b_1n+b_2\mathop{\rm tr}\nolimits_\varphi\omega_{\varphi_a}.
\end{align*}
Given a fixed $p\geq 1$, by \eqref{Linfty estimate b2}, we further take
$b_2=p^{-1}\alpha(\Omega_1).$ Letting $A_\theta=t_0>0$, we have $b_1=C_l-b_2-t_0$. Also, from \eqref{Linfty estimate Atheta b3}, we get $b_3= \frac{t_0}{8d^{-2}b_4^{-1}+t_0}$.
As a result, we see that $H=(C_l-t_0)\phi_E.$
Clearly, $-H$ is bounded above. Moreover, since $e^{C_l\phi_E}\in L^{p_0}(\tilde \omega_t^n)$ and $e^{-t_0\phi_E}\in L^{p_1}(\tilde \omega_t^n)$ as long as $t_0$ is small enough, we have
\begin{align*}
e^{H}=e^{(C_l-t_0)\phi_E}\in L^{p^+_H}(\tilde \omega_t^n)\text{ for all }p^+_H\leq \frac{p_0p_1}{p_0+p_1}.
\end{align*}
We also choose $b_4\leq (-4nb_1)^{-1}\alpha$ by \eqref{Linfty estimate b4}.
According to Proposition \ref{Estimation of I prop}, one can check that the integral $I$ is finite and that all estimates are independent of $t$ and $\epsilon$.
Therefore, the hypotheses on $H$ in \eqref{Linfty estimate v prop 2} and in \eqref{Linfty estimate eF} of Proposition \ref{Linfty estimate v prop} are satisfied, and we conclude the estimates of
$\|e^{F}\|_{L^{\tilde p}(\tilde\omega_t^n)}$,
$\|e^{\tilde F}\|_{L^{\tilde p}(\tilde\omega_t^n)}$,
$\|\varphi\|_\infty$ and $\sup_X(F-H)\leq C$.
\end{proof}
\begin{prop}\label{L infty estimates f inf prop}
Under the assumption in Proposition \ref{L infty estimates f sup vphi prop}, it holds for any $\tau>0$,
\begin{align}\label{L infty estimates f inf}
\inf_X[F-\sigma_i\phi_E]\geq -C,\quad \sigma_i=C_u+\tau.
\end{align}
The constant $C$ depends on $\|\varphi\|_\infty$, $C_u$, $\inf_X R$, $\inf_X f$, $\sup_X\phi_E$, $n$.
\end{prop}
\begin{proof}
We apply Proposition \ref{Linfty estimate v prop} again, taking
\begin{align*}
w=-F+b_1\varphi,\quad b_0=-1,\quad b_2=0.
\end{align*}
Substituting $\theta\leq C_u\tilde\omega_t$ into \lemref{Linfty estimate tri w}, we obtain
\begin{align*}
\triangle_\varphi w\geq -C_u\mathop{\rm tr}\nolimits_\varphi\tilde\omega_t-b_1\mathop{\rm tr}\nolimits_\varphi\omega_t+A_R, \quad A_R=R+b_1n.
\end{align*}
By $\tilde\omega_t=\omega_t+i\partial\bar\partial\phi_E$, it is reduced to
\begin{align*}
\triangle_\varphi w\geq -C_u\mathop{\rm tr}\nolimits_\varphi\tilde\omega_t-b_1(\mathop{\rm tr}\nolimits_\varphi\tilde\omega_t-\triangle_\varphi\phi_E)+A_R.
\end{align*}
Accordingly, $H=b_1\phi_E$ and $A_\theta=-C_u-b_1$.
We choose $b_1=-C_u-t_0$ such that
\begin{align*}
A_\theta=t_0,\quad H=-(C_u+t_0)\phi_E.
\end{align*}
We see that $e^{-H}$ is bounded above.
From \eqref{Linfty estimate F H}, $-F-H\leq C(n)d I^{\frac{1}{2n}}$.
We verify the estimate of $I$ in Proposition \ref{Estimation of I prop}. We let $b_4=\frac{1}{n}$. Due to \eqref{Linfty estimate Atheta b3}, we have
$b_3=\frac{t_0}{8d^{-2}b_4^{-1}+t_0}$. Then we get
\begin{align}\label{Linfty estimate inf F I}
I\leq e^{-2\inf_X f} \int_{A_R\leq 0} |b_3^{-1}(-R-b_1n)|^{2n} e^{-2 H}e^{-2(C_u+t_0)\varphi} \tilde \omega_t^n,
\end{align}
which is finite, by using the upper bound of $-H=(C_u+t_0)\phi_E$ and $\inf_X\varphi$. Therefore, the upper bound of $-F+(C_u+t_0)\phi_E$ is derived.
\end{proof}
We observe that we could remove $\tau$.
\begin{cor}\label{L infty estimates f sup vphi prop cor}
Assume that $|R-C_ln|\leq A_s t$ for some constant $A_s$. Then we have \eqref{L infty estimates f sup vphi} with $\sigma_s=C_l$.
\end{cor}
\begin{proof}
The proof is identical to that of Proposition \ref{L infty estimates f sup vphi prop}, except that we choose $b_0=1$, $A_\theta=t$ and $H=(C_l-t)\phi_E$. We also take $b_2=t$. Since $t\rightarrow 0$, we have
$ pb_2=pt\leq \alpha(\Omega_1)$, which is the condition \eqref{Linfty estimate b2}.
Then $b_1=C_l-b_2-t=C_l-2t$ and $b_4=\min\{\frac{1}{n},\frac{\alpha}{-4nb_1}\}$. Also,
\begin{align*}
\frac{t}{8d^{-2}b_4^{-1}+1} \leq b_3= \frac{t}{8d^{-2}b_4^{-1}+t}\leq \frac{1}{8d^{-2}b_4^{-1}} .
\end{align*}
The scalar curvature assumption gives
\begin{align*}
|R-b_1n|=|R-(C_l-2t)n|\leq |R-C_ln|+2tn\leq (A_s+2n) t.
\end{align*}
Hence $F\leq (\frac{|R-b_1n|}{b_2n})^n (E^\beta_{t,\epsilon}+2e^{-1}+1)
$ is bounded on the domain $A_R\leq 0$.
We insert these values to \eqref{Linfty estimate rough I bound} in Proposition \ref{Estimation of I prop} to estimate
\begin{align*}
I\leq e^{-2\inf_X f} \int_{A_R\leq 0} |b_3^{-1}(R-b_1n)|^{2n} e^{-2nb_4 H} e^{2nb_4b_1\varphi}e^{(2nb_4+2)F} \tilde \omega_t^n.
\end{align*} By further using the bounds of $e^{-H}$ and $|b_3^{-1}(R-b_1n)|$, we see that $I$ is bounded for the above choice of $b_4$. Consequently, the estimates \eqref{L infty estimates f sup vphi} follow from Proposition \ref{Linfty estimate v prop} and \eqref{Linfty estimate eF} of Proposition \ref{Linfty estimate v prop}. We find that all constants are independent of $t$, so we may further take $t\rightarrow 0$, so that the weight becomes $\sigma_s=C_l$.
\end{proof}
\begin{cor}\label{L infty estimates f inf prop cor}
Assume $|R-C_un|\leq A_s t$ for some constant $A_s$. Then \eqref{L infty estimates f inf} holds with $\sigma_i=C_u$.
\end{cor}
\begin{proof}
We learn from Proposition \ref{L infty estimates f inf prop} that $A_\theta=t$ and $H=-(C_u+t)\phi_E$. We have $b_2=0$ and $b_4=\frac{1}{n}$. Also, $b_1=-C_u-t$ is bounded when $0\leq t\leq 1$ and
\begin{align*}
b_3=\frac{t}{8d^{-2}b_4^{-1}+t}\geq Ct.
\end{align*}
Inserting the upper bounds of $e^{-H}$ and $|b_3^{-1}(-R-b_1n)|$ into \eqref{Linfty estimate inf F I}, we see that the estimate of $I$ is independent of $t$. Therefore, we conclude from Proposition \ref{L infty estimates f inf prop} by letting $t\rightarrow 0$.
\end{proof}
\begin{rem}
For the cscK problem, the averaged scalar curvature $R_t$ in \eqref{approximate average scalar} of the approximate singular cscK metric is close to $\ul S_\beta=\frac{C_1(X,D)\Omega^{n-1}}{\Omega^n}$, namely
\begin{align*}
R_t-\ul S_\beta\leq C t.
\end{align*} So, the assumption in both corollaries means some sort of pinching of the eigenvalues of the representative $\theta\in C_1(X,D)$.
\end{rem}
\section{Gradient estimate of $\varphi$}\label{Gradient estimate of vphi}
In this section, we obtain the gradient estimate for the singular cscK metric, extending the results for non-degenerate cscK metrics \cite{MR4301557}. We use the singular exponent $\sigma_E$ to measure the singularity of the given big and semi-positive class $\Omega$, and the degenerate exponent $\sigma_D$ to reflect how the degeneracy of the singular cscK equation \eqref{Singular cscK t eps} affects the gradient estimate. It is surprising to see that, when the cone angle $\beta> \frac{n+2}{2}$, the degenerate exponent does not appear in the gradient estimate; that is, the gradient estimate remains exactly the same as the one for the non-degenerate metric.
\begin{thm}[{Gradient estimate of $\varphi$}]\label{gradient estimate}
Suppose that $\varphi_\epsilon$ is a solution to the approximate singular scalar curvature equation \eqref{Singular cscK t eps tilde}.
\begin{enumerate}
\item
Assume that $\Omega$ is big and semi-positive. Then there exists a constant $A_3$ such that
\begin{align*}
|s_E|^{2a_0 \sigma_E } S_\epsilon^{\sigma_D}|\partial\psi|^2_{\tilde\omega_t}\leq A_3, \quad \sigma_D\geq 1.
\end{align*}
The singular exponent $\sigma_E$ is sufficiently large, and determined in \eqref{Gradient estimate: sigmaE} and \eqref{Gradient estimate: sigmaE 2}.
\item
If we further assume that $\Omega=[\omega_K]$ is K\"ahler, then $\sigma_E=0$ and the condition on the degenerate exponent $\sigma_D$ is weakened to satisfy
\begin{equation}\label{gradient estimate: degenerate exponent}
\left\{
\begin{aligned}
&\sigma_D=0,\text{ when }\beta>\frac{n+2}{2},\\
&\sigma_D>1-\frac{2\beta}{n+2}\geq 0,\text{ when } \beta\leq\frac{n+2}{2}.
\end{aligned}
\right.
\end{equation}
Then there holds the gradient estimate
\begin{align*}
S_\epsilon^{\sigma_D}|\partial\varphi_\epsilon|_{\omega_K}^2\leq A_4.
\end{align*}
\end{enumerate}
\end{thm}
The precise statements will be given in \thmref{gradient estimate sigma 1} and \thmref{gradient estimate sigma 2}, respectively.
\begin{rem}
In the second conclusion, we see that $\sigma_D$ could be chosen to be zero, when $\beta> \frac{n+2}{2}$.
\end{rem}
\begin{proof}
We denote $\tilde\varphi$ by $\psi$, and all norms in this proof are taken with respect to the K\"ahler metric $\tilde\omega_t$. We will use the approximate singular scalar curvature equation \eqref{Singular cscK t eps tilde} and omit the lower indices for convenience:
\begin{equation}\label{Singular cscK t eps tilde gradient}
(\tilde\omega_t+i\partial\bar\partial \psi)^n=e^{\tilde F} \tilde\omega_t^n,\quad
\triangle_{\psi} \tilde F=\mathop{\rm tr}\nolimits_{\psi}(\theta-i\partial\bar\partial f)-R,
\end{equation}
where $
\tilde F=F-f,\quad f=-(\beta-1)\log S_\epsilon-h_\theta-c_{t,\epsilon}-\log\frac{\omega^n}{\tilde\omega_t^n}.
$
We will divide the proof into several steps in this section.
\end{proof}
\subsection{Differential inequality}
Let $K$ be a positive constant determined later and let $H$ be an auxiliary function depending on $\tilde F$ and $\psi$.
We set
\begin{align*}
&v:=|\partial\psi|^2+K,\quad u:=e^{H} v.
\end{align*}
Then we calculate the following.
\begin{lem}\label{Gradient estimate: Differential identity lem}
\begin{align}\label{Gradient estimate: Differential identity}
u^{-1}\triangle_\psi u&= \triangle_\psi H+|\partial H|^2_\psi
+2\frac{H_i\psi_{i\bar i}\psi_{\bar i}+H_i\psi_{\bar j\bar i}\psi_{j}}{(1+\psi_{i\bar i})v}\notag \\
&+\frac{R_{i\bar i j\bar j}(\tilde\omega_t)\psi_{j}\psi_{\bar j}+|\psi_{ij}|^2+|\psi_{i\bar i}|^2}{(1+\psi_{i\bar i})v}+\frac{2\Re(\tilde F_i\psi_{\bar i})}{v}.
\end{align}
\end{lem}
\begin{proof}
We compute the Laplacian of $\log v$ under the normal coordinates with respect to the metric $\tilde\omega_t$,
\begin{align*}
\triangle_\psi \log v=-|\partial\log v|^2_\psi+\frac{R_{i\bar i j\bar j}(\tilde\omega_t)\psi_{j}\psi_{\bar j}+|\psi_{ij}|^2+|\psi_{i\bar i}|^2}{(1+\psi_{i\bar i}) v}+\frac{2\Re(\tilde F_i\psi_{\bar i})}{v}.
\end{align*}
Applying
$
\triangle_\psi \log u=\triangle_\psi H+\triangle_\psi\log v,
$
we get
\begin{align*}
u^{-1}\triangle_\psi u&=| \partial\log u|^2_\psi+\triangle_\psi \log u\\
&=|\partial\log u|^2_\psi+\triangle_\psi H-|\partial\log v|^2_\psi\\
&+\frac{R_{i\bar i j\bar j}(\tilde\omega_t)\psi_{j}\psi_{\bar j}+|\psi_{ij}|^2+|\psi_{i\bar i}|^2}{(1+\psi_{i\bar i}) v}+\frac{2\Re(\tilde F_i\psi_{\bar i})}{v}.
\end{align*}
We further calculate that
\begin{align*}
|\partial\log u|^2_\psi-|\partial\log v|^2_\psi&=(\partial H,2\partial\log v+\partial H)_\psi=|\partial H|^2_\psi+2(\partial H,\partial v)_\psi v^{-1}\\
&=|\partial H|^2_\psi+2\frac{H_i\psi_{i\bar i}\psi_{\bar i}+H_i\psi_{\bar j\bar i}\psi_{j}}{(1+\psi_{i\bar i})v}.
\end{align*}
In summary, the desired identity is obtained by adding them together.
\end{proof}
The differential inequality is obtained from the identity \eqref{Gradient estimate: Differential identity}, after dropping the positive terms and carefully choosing the weight function $H$.
Firstly, we remove the positive terms in \eqref{Gradient estimate: Differential identity}.
\begin{lem}\label{Gradient estimate: Differential inequality lem}
Let $K\geq 1$ and $-C_{1.1}:=\inf_{X}R_{i\bar ij\bar j}(\tilde\omega_t)$. Then it holds
\begin{align}\label{tri Z}
u^{-1}\triangle_\psi u&\geq \triangle_\psi H+2\Re[(H_i+\tilde F_i)\psi_{\bar i}]v^{-1}
\\
&+[(-C_{1.1}-1)+v^{-1}]\mathop{\rm tr}\nolimits_{\psi}\tilde\omega_t
+(\mathop{\rm tr}\nolimits_{\tilde\omega_t}\omega_\psi-2n)v^{-1}.\notag
\end{align}
\end{lem}
\begin{proof}
By removing the positive terms
\begin{align*}
|\partial H|^2_\psi|\partial\psi|^2+2\frac{H_i\psi_{\bar j\bar i}\psi_{j}}{1+\psi_{i\bar i}}+\frac{|\psi_{ij}|^2}{1+\psi_{i\bar i}}\geq 0,
\end{align*}
and inserting the lower bound of the bisectional curvature
$R_{i\bar i j\bar j}(\tilde\omega_t)\psi_{j}\psi_{\bar j}\geq -C_{1.1} v,$
into \eqref{Gradient estimate: Differential identity}, we get
\begin{align*}
e^{-H}\triangle_\psi u&\geq v\triangle_\psi H+K|\partial H|^2_\psi
+2\frac{H_i\psi_{i\bar i}\psi_{\bar i}}{1+\psi_{i\bar i}}\notag \\
&-C_{1.1}v\mathop{\rm tr}\nolimits_\psi\tilde\omega_t+\frac{|\psi_{i\bar i}|^2}{1+\psi_{i\bar i}}+2\Re(\tilde F_i\psi_{\bar i}).
\end{align*}
Meanwhile, we compute
\begin{align*}
2\frac{H_i\psi_{i\bar i}\psi_{\bar i}}{1+\psi_{i\bar i}}
=2H_i\psi_{\bar i}-2\frac{H_i\psi_{\bar i}}{1+\psi_{i\bar i}} .
\end{align*}
By Young's inequality, it follows
\begin{align*}
-\frac{2H_i\psi_{\bar i}}{1+\psi_{i\bar i}}\geq -|\partial H|^2_\psi -|\partial \psi|^2\mathop{\rm tr}\nolimits_\psi\tilde\omega_t\geq -|\partial H|^2_\psi -v\mathop{\rm tr}\nolimits_\psi\tilde\omega_t.
\end{align*}
Substituting this into the inequality above, we have
\begin{align*}
e^{-H}\triangle_\psi u&\geq v\triangle_\psi H+(K-1)|\partial H|^2_\psi
+(-C_{1.1}-1)v\mathop{\rm tr}\nolimits_\psi\tilde\omega_t\\
&+\frac{|\psi_{i\bar i}|^2}{1+\psi_{i\bar i}}+2\Re[(H_i+\tilde F_i)\psi_{\bar i}].
\end{align*}
Rewriting the positive term
\begin{align*}
&\frac{|\psi_{i\bar i}|^2}{1+\psi_{i\bar i}}=\frac{|\psi_{i\bar i}|^2-1+1}{1+\psi_{i\bar i}}
=\sum_i[\psi_{i\bar i}-1+\frac{1}{1+\psi_{i\bar i}}]\\
&=\mathop{\rm tr}\nolimits_{\tilde\omega_t}\omega_\psi-2n+\mathop{\rm tr}\nolimits_{\psi}\tilde\omega_t
\end{align*} and inserting it back into the inequality, we obtain \eqref{tri Z}.
\end{proof}
Then we continue the proof of Proposition \ref{Gradient estimate: Differential inequality prop}.
In order to deal with the gradient term in \eqref{tri Z}, we further choose
\begin{align*}
B=-\sigma_E\psi+e^{-\psi}, \quad H:=-F+B+\sigma_D\log S_\epsilon.
\end{align*}
Here, we see that
\begin{align*}
e^{-\psi}=e^{-\varphi+\phi_E}=e^{-\varphi} |s_E|^{2a_0}_{h_E},
\end{align*}
which is bounded by using the $L^\infty$-estimate of $\varphi$.
\begin{lem}
\begin{align}\label{Gradient estimate: Differential inequality gradient A}
&2\Re[(H_i+\tilde F_i)\psi_{\bar i}]v^{-1}
\geq 2(-\sigma_E-e^{-\psi})(1-Kv^{-1})+A_w v^{-\frac{1}{2}}
\end{align}
with the weight
\begin{align}\label{Gradient estimate: Aw}
A_w:=
-2C_{2}-2(\beta-1+\sigma_D)|\partial\log S_\epsilon|.
\end{align}
\end{lem}
\begin{proof}
We compute
\begin{align*}
B_i=(-\sigma_E-e^{-\psi})\psi_i\text{ and } H+\tilde F=B-f+\sigma_D\log S_\epsilon
\end{align*} to get
\begin{align*}
2\Re[(H_i+\tilde F_i)\psi_{\bar i}]
= 2(-\sigma_E-e^{-\psi})|\partial\psi|^2+2\Re[-f_i\psi_{\bar i}+\sigma_D(\log S_\epsilon)_i\psi_{\bar i}].
\end{align*}
Inserting $f=-(\beta-1)\log S_\epsilon-h_\theta-c_{t,\epsilon}-\log\frac{\omega^n}{\tilde\omega_t^n}$ into the identity above, we have that
\begin{align*}
-f_i\psi_{\bar i}+\sigma_D(\log S_\epsilon)_i\psi_{\bar i}
&=(h_\theta+\log\frac{\omega^n}{\tilde\omega_t^n})_i\psi_{\bar i}
+(\beta-1+\sigma_D)(\log S_\epsilon)_i\psi_{\bar i}\\
&\geq - [C_{2}+(\beta-1+\sigma_D)|\partial\log S_\epsilon|]\cdot |\partial\psi| ,
\end{align*}
where $C_2$ depends on $\|\partial(h_\theta+\log\frac{\omega^n}{\tilde\omega_t^n}) \|_{L^\infty(\tilde\omega_t)}$.
Consequently, we have proved \eqref{Gradient estimate: Differential inequality gradient A} by using $|\partial\psi|^2=v-K$.
\end{proof}
Meanwhile, we bound the Laplacian term $\triangle_\psi H$ in \eqref{tri Z}.
\begin{lem}
\begin{align}\label{Gradient estimate: Differential inequality laplacian A}
\triangle_\psi H
\geq \tilde A_\theta\mathop{\rm tr}\nolimits_\psi\tilde\omega_t+e^{-\psi}|\partial\psi|^2_\psi+(\inf R-\sigma_En-e^{-\psi}n),
\end{align}
where $C_{1.4}:=\inf_{(X,\tilde\omega_t)} i\partial\bar\partial\log S_\epsilon$ and
\begin{align*}
\tilde A_\theta=-\sup_{(X,\tilde\omega_t)}\theta+\sigma_E+e^{-\psi}+\sigma_D C_{1.4}.
\end{align*}
\end{lem}
\begin{proof}
We take the Laplacian of $B=-\sigma_E\psi+e^{-\psi}$,
\begin{align*}
\triangle_\psi B=-\sigma_E\triangle_\psi\psi+e^{-\psi}|\partial\psi|^2_\psi-e^{-\psi}\triangle_\psi\psi.
\end{align*} Then we calculate the Laplacian of the auxiliary function $H$ using the singular scalar curvature equation \eqref{Singular cscK t eps tilde gradient},
\begin{align*}
&\triangle_\psi H=-\triangle_\psi F+\triangle_\psi B+ \sigma_D\triangle_\psi \log S_\epsilon\\
&=-\mathop{\rm tr}\nolimits_{\psi}\theta+R+e^{-\psi}|\partial\psi|^2_\psi+(-\sigma_E-e^{-\psi})(n-\mathop{\rm tr}\nolimits_\psi\tilde\omega_t)
+ \sigma_D\triangle_\psi \log S_\epsilon,
\end{align*}
which implies the lower bound
\begin{align*}
\triangle_\psi H&\geq[-\sup_{(X,\tilde\omega_t)}\theta+\sigma_E+e^{-\psi}]\mathop{\rm tr}\nolimits_\psi\tilde\omega_t+\inf R+e^{-\psi}|\partial\psi|^2_\psi-(\sigma_E+e^{-\psi})n\\
&+ \sigma_D\triangle_\psi \log S_\epsilon.
\end{align*}
Since $\sigma_D\geq 0$, the asserted inequality follows from the lower bound of $ i\partial\bar\partial \log S_\epsilon$ from \lemref{h eps}, namely
\begin{align*}
\triangle_\psi\log S_\epsilon\geq C_{1.4}\mathop{\rm tr}\nolimits_\psi\tilde\omega_t.
\end{align*}
\end{proof}
We set three bounded quantities to be
\begin{align*}
A_\theta:=\tilde A_\theta-C_{1.1}-1=-\sup_{(X,\tilde\omega_t)}\theta+\sigma_E+e^{-\psi}+\sigma_D C_{1.4}-C_{1.1}-1,
\end{align*}
\begin{align}\label{Gradient estimate: constants A}
& A_s:=\inf R-(n+2)(\sigma_E+e^{-\psi}),\quad A_c:=2K(\sigma_E+e^{-\psi})-2n.
\end{align}
As a result, inserting the functions $A_s,A_c,A_w$, \eqref{Gradient estimate: Differential inequality gradient A} and \eqref{Gradient estimate: Differential inequality laplacian A} back to \eqref{tri Z}, we obtain that
\begin{align*}
u^{-1}\triangle_\psi u&\geq A_\theta \mathop{\rm tr}\nolimits_\psi\tilde\omega_t
+e^{-\psi}|\partial\psi|^2_\psi +A_s
+(\mathop{\rm tr}\nolimits_{\tilde\omega_t}\omega_\psi +A_c)v^{-1} +A_wv^{-\frac{1}{2}} .
\end{align*}
In order to apply \lemref{Gradient estimate: Differential inequality G} to deduce the lower bound of the first two terms, we need to verify the assumption
\begin{align*}
A_\theta\geq C_K e^{-\psi},\quad C_K:=1+\frac{K}{n^2(n-1)}.
\end{align*}
This is satisfied as long as we choose $\sigma_E$ sufficiently large, i.e.
\begin{align}\label{Gradient estimate: sigmaE}
\sigma_E\geq \frac{K}{n^2(n-1)} e^{-\psi}+\sup_{(X,\tilde\omega_t)}\theta-\sigma_D C_{1.4}+C_{1.1}+1.
\end{align}
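Indeed, with this choice the verification is a one-line computation: \eqref{Gradient estimate: sigmaE} gives
\begin{align*}
A_\theta=\sigma_E-\sup_{(X,\tilde\omega_t)}\theta+\sigma_D C_{1.4}-C_{1.1}-1+e^{-\psi}
\geq \frac{K}{n^2(n-1)}e^{-\psi}+e^{-\psi}=C_K e^{-\psi}.
\end{align*}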
Consequently, we obtain the desired differential inequality asserted in the following proposition.
\begin{prop}\label{Gradient estimate: Differential inequality prop}
It holds
\begin{align}\label{Gradient estimate: Differential inequality}
\triangle_\psi u&\geq \frac{1}{n-1}e^{-\frac{H+\tilde F}{n}-\psi} u^{1+\frac{1}{n}}\notag\\
&+A_s u
+e^H(\mathop{\rm tr}\nolimits_{\tilde\omega_t}\omega_\psi+A_c) +A_{w}e^{\frac{H}{2}} u^{\frac{1}{2}}.
\end{align}
\end{prop}
At last, we close this section by proving the required lemma.
\begin{lem}\label{Gradient estimate: Differential inequality G}
For any given function $K$, there holds
\begin{align*}
G:=C_K\mathop{\rm tr}\nolimits_\psi\tilde\omega_t+|\partial\psi|^2_\psi
\geq \frac{1}{n-1}e^{-\frac{\tilde F}{n}} v^{\frac{1}{n}}.
\end{align*}
\end{lem}
\begin{proof}
By the inequality of arithmetic and geometric means and the equation \eqref{Singular cscK t eps tilde}, we have
\begin{align*}
\mathop{\rm tr}\nolimits_\psi\tilde\omega_t\geq (\mathop{\rm tr}\nolimits_{\tilde\omega_t}\omega_\psi\cdot e^{-\tilde F})^{\frac{1}{n-1}}.
\end{align*}
Jensen's inequality for the concave function $(\cdot)^{\frac{1}{n-1}}$ further implies that
\begin{align*}
(\mathop{\rm tr}\nolimits_{\tilde\omega_t}\omega_\psi)^{\frac{1}{n-1}}\geq \frac{1}{n}\sum_i(1+\psi_{i\bar i})^{\frac{1}{n-1}} n^{\frac{1}{n-1}}.
\end{align*}
Inserting them into the expression of $(n-1)G$, we get
\begin{align*}
(n-1)G
&\geq \sum_i[\frac{n-1}{n}(1+\psi_{i\bar i})^{\frac{1}{n-1}}e^{-\frac{\tilde F}{n-1}}n^{\frac{1}{n-1}}+\frac{K}{n^2(1+\psi_{i\bar i})}+\frac{(n-1)|\psi_i|^2}{1+\psi_{i\bar i}}].
\end{align*}
Since $n\geq 2$, we have $n-1\geq \frac{1}{n}$, and it follows that
\begin{align*}
\geq \sum_i[\frac{n-1}{n}(1+\psi_{i\bar i})^{\frac{1}{n-1}}e^{-\frac{\tilde F}{n-1}}+\frac{1}{n}\frac{|\psi_i|^2+n^{-1}K}{1+\psi_{i\bar i}}].
\end{align*}
Using Young's inequality, we thus obtain the lower bound of $(n-1)G$
\begin{align*}
\geq\sum_i [(1+\psi_{i\bar i})^{\frac{1}{n-1}}e^{-\frac{\tilde F}{n-1}}]^{\frac{n-1}{n}} \cdot [\frac{|\psi_i|^2+n^{-1}K}{1+\psi_{i\bar i}}]^{\frac{1}{n}}
= e^{-\frac{\tilde F}{n}} (|\partial\psi|^2+K)^{\frac{1}{n}}.
\end{align*}
\end{proof}
\subsection{Computing weights}
We now examine the coefficients of the inequality \eqref{Gradient estimate: Differential inequality}.
\begin{prop}\label{Gradient estimate: Differential inequality c prop}
Assume that
\begin{align}\label{Gradient estimate: sigmaE 2}
\sigma_E\geq \sigma_i=C_u+\tau.
\end{align} Then there exist nonnegative constants $C,C_1,C_2$ such that
\begin{align}
&\triangle_\psi u\geq C_1[A_m u^{1+\frac{1}{n}}
+e^H\mathop{\rm tr}\nolimits_{\tilde\omega_t}\omega_\psi]-C_2[u+1-A_w e^{\frac{H}{2}} u^{\frac{1}{2}}],\label{Gradient estimate: Differential inequality c}\\
&A_m:=(n-1)^{-1}e^{-\frac{H+\tilde F}{n}-\psi}\geq C|s_E|_{h_E}^{-2a_0(\frac{\sigma_E}{n}+1)} S_\epsilon^{-\frac{\beta-1+\sigma_D}{n} },\label{Gradient estimate: Differential inequality main}\\
&A_w e^{\frac{H}{2}}\geq - C[1+(\beta-1+\sigma_D)S_\epsilon^{-\frac{1}{2}}] S_\epsilon^{\frac{\sigma_D}{2}} |s_E|^{a_0(\sigma_E-\sigma_i)} .\label{Gradient estimate: Differential inequality Aw}
\end{align}
\end{prop}
\begin{proof}
We compute the weights and coefficients in Proposition \ref{Gradient estimate: Differential inequality prop}, including $$A_s, \quad A_c,\quad e^H, \quad e^{-\frac{H+\tilde F}{n}-\psi},\quad A_w.$$
Since $e^{-\psi}$ is bounded, we see that both $A_s$ and $A_c$ in \eqref{Gradient estimate: constants A} are bounded.
Applying the uniform bounds of $\varphi$ and $F$ from \thmref{L infty estimates Singular equation}, namely $$A_1+\sigma_s\phi_E\geq F\geq \sigma_i\phi_E+A_2$$ to $H=-F-\sigma_E\varphi+\sigma_E\phi_E+e^{-\psi}+\sigma_D\log S_\epsilon$, we get
\begin{align}\label{Gradient estimate: Differential inequality eH}
C^{-1} e^{(\sigma_E-\sigma_s)\phi_E} S_\epsilon^{\sigma_D}\leq e^H\leq C e^{(\sigma_E-\sigma_i)\phi_E}S_\epsilon^{\sigma_D}.
\end{align}
Since $\sigma_E\geq \sigma_i$, we know that $e^{(\sigma_E-\sigma_i)\phi_E}$ is bounded above.
Inserting $-F+\tilde F=-f$ and $f=-(\beta-1)\log S_\epsilon-h_\theta-c_{t,\epsilon}-\log\frac{\omega^n}{\tilde\omega_t^n}$ into $-\frac{H+\tilde F}{n}-\psi$, we have
\begin{align*}
&-\frac{H+\tilde F}{n}-\psi
=-\frac{-f-\sigma_E\varphi+\sigma_E\phi_E+e^{-\psi}+\sigma_D\log S_\epsilon}{n}-\varphi+\phi_E\\
&=-\frac{h_\theta+c_{t,\epsilon}+\log\frac{\omega^n}{\tilde\omega_t^n}-(\sigma_E+n)(\varphi-\phi_E)+e^{-\psi}+(\beta-1+\sigma_D)\log S_\epsilon}{n},
\end{align*}
which has the lower bound
\begin{align*}
-\frac{H+\tilde F}{n}-\psi
\geq -C-(\frac{\sigma_E}{n}+1)\phi_E-\frac{\beta-1+\sigma_D}{n}\log S_\epsilon.
\end{align*}
As a result, we obtain the strictly positive lower bound of $A_m$ in \eqref{Gradient estimate: Differential inequality main},
which depends on the upper bound of $|s_E|^2_{h_E}$ and $S_\epsilon$.
By the same proof of \lemref{nef tilde f}, we get
\begin{align*}
A_w=
-2C_{2}-2(\beta-1+\sigma_D)|\partial\log S_\epsilon|
\geq -C[1+(\beta-1+\sigma_D)S_\epsilon^{-\frac{1}{2}}] .
\end{align*}
Thus \eqref{Gradient estimate: Differential inequality Aw} is obtained by using \eqref{Gradient estimate: Differential inequality eH}.
\end{proof}
\subsection{Maximum principle}
We continue the proof of \thmref{gradient estimate}. When $\sigma_D\geq 1$ and $\sigma_E\geq \sigma_i$, Proposition \ref{Gradient estimate: Differential inequality c prop} shows that
$$A_we^{\frac{H}{2}}\geq -C S_\epsilon^{\frac{\sigma_D-1}{2}} |s_E|^{a_0(\sigma_E-\sigma_i)},$$
which is bounded below, near $D$ and $E$.
Therefore, there exist non-negative constants $C_1,C_2$ such that
\begin{align}\label{Gradient estimate: Differential inequality 1}
\triangle_\psi u&\geq C_1 u^{1+\frac{1}{n}}-C_2[u+ u^{\frac{1}{2}}+1].
\end{align}
We are ready to apply the maximum principle to prove the gradient estimate, \thmref{gradient estimate}, when $\sigma_D\geq1$.
\begin{thm}\label{gradient estimate sigma 1}
Assume that $\Omega$ is big and semi-positive.
Suppose that $\varphi_\epsilon$ is a solution to the approximate singular scalar curvature equation \eqref{Singular cscK t eps tilde}.
Then there exists a constant $A_3$ such that
\begin{align*}
|s_E|^{2a_0 (\sigma_E-\sigma_s )} S_\epsilon|\partial\psi|^2_{\tilde\omega_t}\leq A_3.
\end{align*}
Here $\sigma_E$ is determined in \eqref{Gradient estimate: sigmaE} and \eqref{Gradient estimate: sigmaE 2}.
The constant $A_3$ depends on
\begin{align*}
&\|\varphi_\epsilon\|_\infty,\quad \sup_X(F_{t,\epsilon}-\sigma_s\phi_E),\quad\inf_{X}[F_{t,\epsilon}-\sigma_i\phi_E], \\
&\inf_{i\neq j}R_{i\bar i j\bar j}(\tilde\omega_t),\quad\inf_X R,\quad \sup_{(X,\tilde\omega_t)}\theta,\quad \|h_\theta+c_{t,\epsilon}+\log\frac{\omega^n}{\tilde\omega_t^n} \|_{C^1(\tilde\omega_t)},\\
&\sup_X S_\epsilon,\quad \sup_X S_\epsilon^{-\frac{1}{2}}|\partial S_\epsilon|_\omega,\quad \Theta_D,\quad \sup_X \phi_E,\quad \beta,\quad n.
\end{align*}
\end{thm}
\begin{proof}
We proceed with the argument of the maximum principle.
Let $p$ be a maximum point of $Z$. Then the inequality \eqref{Gradient estimate: Differential inequality 1} at $p$ implies that $u(p)$ is uniformly bounded above. Moreover, at any point $x\in X$, $u(x)$ is bounded above by the maximum value $u(p)$, which means
\begin{align*}
|\partial\psi|^2(x)+K
\leq e^{-H(x)}u(p).
\end{align*}
Therefore, using $H=-F-\sigma_E\psi+e^{-\psi}+\sigma_D\log S_\epsilon$ and the upper bound of $F$, $\|\varphi\|_\infty$, $\|e^{-\psi}\|_\infty$, we obtain the gradient estimate
\begin{align*}
|\partial\psi|^2(x)+K&\leq e^{F+\sigma_E\varphi-\sigma_E\phi_E-e^{-\psi}-\sigma_D\log S_\epsilon}u(p)\\
&\leq C e^{\sigma_s\phi_E-\sigma_E\phi_E-\sigma_D\log S_\epsilon}\\
&= C |s_E|^{2(\sigma_s-\sigma_E)a_0} S_\epsilon^{-\sigma_D}.
\end{align*}
At last, the desired weighted gradient estimate is obtained.
\end{proof}
\subsection{Integration method}
In this section, we aim to improve the result to allow $\sigma_D$ to be less than $1$. The weighted integral method is applied to obtain the gradient estimate. In order to explain the ideas clearly, we restrict ourselves to the K\"ahler class $\Omega$.
\begin{thm}\label{gradient estimate sigma 2}
Assume that $\Omega=[\omega_K]$ is K\"ahler.
Suppose that $\varphi_\epsilon$ is a solution to the approximate singular scalar curvature equation \eqref{Singular cscK t eps tilde}. Assume that the degenerate exponent satisfies the condition \eqref{gradient estimate: degenerate exponent}, that is
\begin{align*}
\sigma_D>\max\{1-\frac{2\beta}{n+2},0\}.
\end{align*}
Then there holds the gradient estimate
\begin{align*}
S_\epsilon^{\sigma_D}|\partial\varphi_\epsilon|_{\omega_K}^2\leq A_4.
\end{align*}
The constant $A_4$ has the same dependence as $A_3$ in \thmref{gradient estimate sigma 1} and, in addition, depends on the Sobolev constant of $\tilde\omega_t$.
\end{thm}
\begin{rem}
Note $\tilde\omega_t$ is equivalent to $\omega_K$.
\end{rem}
We first obtain the general integral inequality. Then we apply a weighted analysis, similar to the proof of the Laplacian estimate in Section \ref{A priori estimates for approximate degenerate cscK equations}, to carry out the iteration argument.
\begin{prop}[Integral inequality]\label{Gradient estimate: Integral inequality}
Assume that $\Omega$ is big and semi-positive and $\varphi_\epsilon$ is a solution to the approximate singular scalar curvature equation \eqref{Singular cscK t eps}. We set $\tilde u:=u+K_0$. Then we have
\begin{align*}
& \int_X \tilde u^{p-1} u^{1+\frac{1}{n}}e^{[\sigma_i-(\frac{\sigma_E}{n}+1)]\phi_E} S_\epsilon^{\beta-1-\frac{\beta-1+\sigma_D}{n}}\tilde\omega^n_t\\
&+\int_X\tilde u^{p-2}[ (p-1) |\partial u|_\psi^2+\tilde ue^{(\sigma_E-\sigma_i)\phi_E} S_\epsilon^{\sigma_D} \mathop{\rm tr}\nolimits_{\tilde\omega_t}\omega_\psi ]e^{\sigma_i\phi_E} S_\epsilon^{\beta-1}\tilde\omega^n_t\notag\\
&\leq C \int_X \tilde u^{p-1} [ u + e^{-(\frac{\sigma_E}{n}+1)\phi_E }
+u^{\frac{1}{2}} e^{(\sigma_E-\sigma_i)\phi_E} S_\epsilon^{\frac{\sigma_D-1}{2}} ]e^{\sigma_s\phi_E}S_\epsilon^{\beta-1}\tilde\omega^n_t.\notag
\end{align*}
The dependence of $C$ is the same as that of $A_3$.
\end{prop}
\begin{proof}
Substituting the differential inequality \eqref{Gradient estimate: Differential inequality c} into the identity
\begin{align*}
&(p-1)\int_X \tilde u^{p-2}|\partial u|_\psi^2\omega_\psi^n=\int_X \tilde u^{p-1}(-\triangle_\psi u)\omega_\psi^n,\quad p\geq 2,
\end{align*}
we have the integral inequality
\begin{align*}
& \int_X \tilde u^{p-1} u^{1+\frac{1}{n}}e^{-(\frac{\sigma_E}{n}+1)\phi_E } S_\epsilon^{-\frac{\beta-1+\sigma_D}{n} } \omega_\psi^n\\
&+\int_X\tilde u^{p-2}[(p-1) |\partial u|_\psi^2+\tilde ue^H\mathop{\rm tr}\nolimits_{\tilde\omega_t}\omega_\psi ]\omega_\psi^n\\
&\leq C \int_X \tilde u^{p-1} [ u
+e^{-(\frac{\sigma_E}{n}+1)\phi_E }
+ u^{\frac{1}{2}} e^{(\sigma_E-\sigma_i)\phi_E} S_\epsilon^{\frac{\sigma_D-1}{2}}]\omega_\psi^n.
\end{align*}
Inserting the volume ratio bound \eqref{Gradient estimate: volume ratio bound},
$
\frac{1}{C} e^{\sigma_i\phi_E} S_\epsilon^{\beta-1}\ \leq \frac{\omega^n_{\psi}}{\tilde\omega^n_t}\leq C e^{\sigma_s\phi_E} S_\epsilon^{\beta-1}
$
into the inequality above, we obtain the integral inequality.
\end{proof}
In the following sections, we focus on the degenerate scalar curvature equation in a given K\"ahler class, where $a_0=t=0$ and $\phi_E=0$. Notation introduced in Section \ref{Perturbed Kahler metrics} becomes $$\omega_{sr}=\tilde\omega_{t=0}=\omega_{t=0}=\omega_K,\quad \omega_\psi=\omega_\varphi.$$ The following corollary is obtained immediately by inserting the bound on $e^H$ from \eqref{Gradient estimate: Differential inequality eH} into Proposition \ref{Gradient estimate: Integral inequality}.
\begin{cor}[Integral inequality: degenerate equation]\label{Gradient estimate: Integral inequality degenerate}
Assume that $\Omega$ is K\"ahler. Then the integral inequality in Proposition \ref{Gradient estimate: Integral inequality} reduces to
\begin{align*}
LHS_1+LHS_2\leq C \cdot RHS_1,
\end{align*}
where, we set
\begin{align*}
LHS_1&:=\int_X\tilde u^{p-2}[(p-1)|\partial u|_\psi^2
+\tilde u S_\epsilon^{\sigma_D}\mathop{\rm tr}\nolimits_{\tilde\omega_t}\omega_\psi] \omega_\psi^n,\notag\\
LHS_2&:=\int_X \tilde u^{p-1} u^{1+\frac{1}{n}} S_\epsilon^{-\frac{\beta-1+\sigma_D}{n}} \omega_\psi^n,\\
RHS_1&:= \int_X \tilde u^{p-1} [ u
+1
+u^{\frac{1}{2}} S_\epsilon^{\frac{\sigma_D-1}{2}} ] \omega_\psi^n.
\end{align*}
\end{cor}
\subsection{Rough iteration inequality}
We now deal with the lower bound of $LHS_1$ by applying the Sobolev inequality.
We introduce some notation for convenience, $q:=p-\frac{1}{2}$, $\chi:=\frac{2n}{2n-1}$ and
\begin{align*}
k:=p\sigma_D+\beta-1,\quad
k_\sigma:=\frac{\sigma_D}{2}+\beta-1+\sigma,\quad \tilde\mu:=S^{k_\sigma\chi}_\epsilon\tilde\omega_t^n.
\end{align*}
Clearly, $k_\sigma=k+\sigma-q\sigma_D$ and
\begin{align*}
\|\tilde u\|^{q}_{L^{q\chi}(\tilde\mu)}=[\int_X \tilde u^{q\chi}\tilde\mu ]^{\chi^{-1}}=[\int_X (\tilde u^{p-\frac{1}{2}}S^{k_\sigma}_\epsilon)^{\frac{2n}{2n-1}}\tilde\omega_t^n ]^{\frac{2n-1}{2n}}.
\end{align*}
\begin{prop}[Rough iteration inequality]\label{Gradient estimate: Rough iteration inequality}
It holds
\begin{align*}
LHS_0+\sqrt{q} LHS_2
\leq C (\sqrt{q}RHS_1+RHS_2),
\end{align*}
where, we denote $LHS_0:=[\int_X (u\tilde u^{q-1})^{\chi}\tilde\mu]^{\chi^{-1}}$ and
\begin{align}\label{Gradient estimate: RHS2}
RHS_2:=\int_X u\tilde u^{q-1}S^{\frac{\sigma_D}{2}+\sigma}_\epsilon\omega_\psi^n+k_\sigma\int_X u\tilde u^{q-1} S^{\frac{\sigma_D-1}{2}+\sigma}_\epsilon \omega_\psi^n.
\end{align}
\end{prop}
\begin{proof}
Applying Young's inequality to the $LHS_1$ of the integral inequality, \corref{Gradient estimate: Integral inequality degenerate}, we get
\begin{align*}
LHS_1\geq \sqrt{p-1}\int_X \tilde u^{p-\frac{3}{2}}S_\epsilon^{\frac{\sigma_D}{2}} |\partial u|\omega_\psi^n
=q^{-1}\sqrt{p-1}\int_X |\partial \tilde u^{q}| S_\epsilon^{\frac{\sigma_D}{2}} \omega_\psi^n.
\end{align*}
Making use of the key \lemref{Laplacian estimate: key trick}, we get
\begin{align*}
LHS_1\geq q^{-1}\sqrt{p-1}\int_X |\partial (u\tilde u^{q-1})| S_\epsilon^{\frac{\sigma_D}{2}} \omega_\psi^n.
\end{align*}
The lower bound of the volume ratio $F$ in \eqref{Gradient estimate: volume ratio bound} further gives
\begin{align*}
LHS_1&\geq Cq^{-1}\sqrt{\frac{1}{2}(p-\frac{1}{2})}\int_X |\partial (u\tilde u^{q-1})|S^{\frac{\sigma_D}{2}+\beta-1}_\epsilon\tilde\omega_t^n\\
&\geq Cq^{-\frac{1}{2}}\int_X |\partial(u \tilde u^{q-1})|S^{\frac{\sigma_D}{2}+\beta-1+\sigma}_\epsilon\tilde\omega_t^n\\
&\geq C q^{-\frac{1}{2}}[\int_X |\partial (u\tilde u^{q-1}S^{k_\sigma}_\epsilon)|\tilde\omega_t^n
-\int_X u\tilde u^{q-1} |\partial S^{k_\sigma}_\epsilon| \tilde\omega_t^n].
\end{align*}
The Sobolev imbedding theorem estimates the first part,
\begin{align*}
\int_X |\partial (u\tilde u^{q-1}S^{k_\sigma}_\epsilon)|\tilde\omega_t^n
&\geq C_S^{-1}\left[\int_X (u\tilde u^{q-1}S^{k_\sigma}_\epsilon)^{\chi}\tilde\omega_t^n \right]^{\chi^{-1}}
-\int_X u\tilde u^{q-1}S^{k_\sigma}_\epsilon\tilde\omega_t^n\\
&=C_S^{-1} LHS_0-\int_X u\tilde u^{q-1}S^{k_\sigma}_\epsilon\tilde\omega_t^n.
\end{align*}
Meanwhile, the second part is estimated by
\begin{align*}
-\int_X u\tilde u^{q-1} |\partial S^{k_\sigma}_\epsilon| \tilde\omega_t^n
\geq -k_\sigma\int_X u\tilde u^{q-1} S^{k_\sigma-\frac{1}{2}}_\epsilon \tilde\omega_t^n.
\end{align*}
In summary, we use the volume ratio bound \eqref{Gradient estimate: volume ratio bound} again and combine these inequalities together to get the lower bound
\begin{align*}
LHS_1&\geq C q^{-\frac{1}{2}}\left\{C_S^{-1}LHS_0
-RHS_2\right\}.
\end{align*}
Therefore, the required inequality is obtained by inserting the inequality for $LHS_1$ into the integral inequality \eqref{Gradient estimate: Integral inequality degenerate}, namely
$
LHS_1+LHS_2\leq C \cdot RHS_1.
$
\end{proof}
\begin{cor}\label{Gradient estimate: Rough iteration inequality cor}
It holds
\begin{align*}
\|\tilde u\|^{q}_{L^{q\chi}(\tilde\mu)}+\sqrt{q} LHS_2
\leq C (\sqrt{q}RHS_1+RHS_2+1).
\end{align*}
\end{cor}
\begin{proof}
It follows from $u=\tilde u-K_0$ that
\begin{align*}
LHS^\chi_0=\int_X (u\tilde u^{q-1})^{\chi}\tilde\mu
\geq C(n)[\int_X \tilde u^{q\chi}\tilde\mu-K_0^{\chi}
\int_X \tilde u^{(q-1)\chi}\tilde\mu] .
\end{align*}
By Young's inequality, which states that
\begin{align*}
\tilde u^{(q-1)\chi}\leq \frac{q-1}{q}\tilde u^{q\chi}+\frac{1}{q}\leq \tilde u^{q\chi}+1,
\end{align*}
it follows
\begin{align*}
LHS^\chi_0=\int_X (u\tilde u^{q-1})^{\chi}\tilde\mu
\geq C(n)[(1-K_0^{\chi})\int_X \tilde u^{q\chi}\tilde\mu-K_0^{\chi}\int_X\tilde\mu] .
\end{align*}
Without loss of generality, we normalise $\int_X\tilde\mu=1$.
Then picking small $K_0$ satisfying $K_0^{\chi}\leq \frac{1}{2}$, we get
\begin{align*}
LHS^\chi_0
\geq C(n)[\frac{1}{2}\int_X \tilde u^{q\chi}\tilde\mu-K_0^{\chi}]
\geq \frac{C(n)}{2}[\int_X \tilde u^{q\chi}\tilde\mu-1].
\end{align*}
Inserting this into the rough iteration inequality, Proposition \ref{Gradient estimate: Rough iteration inequality}, we obtain the desired result.
\end{proof}
\subsection{$L^p$ control}
In this section, we derive the $L^p$ estimate of $u$ from the integral inequality, Corollary \ref{Gradient estimate: Integral inequality degenerate}.
We set
\begin{align*}
\widetilde{LHS_2}:=\int_X \tilde u^{p+\frac{1}{n}} S_\epsilon^{-\frac{\beta-1+\sigma_D}{n}} \omega_\psi^n,
\end{align*}
which will be used to bound $RHS_1$.
\begin{prop}[$L^p$ control]\label{Gradient estimate: Lp control prop}
Assume that the degenerate exponent satisfies the condition \eqref{gradient estimate: degenerate exponent}. Then
\begin{align*}
&LHS_2\leq \widetilde{LHS_2} \leq C\cdot RHS_1
\leq C(p), \quad \forall p\geq 2.
\end{align*}
\end{prop}
\begin{proof}
We keep the first term on the left hand side of the integral inequality from \corref{Gradient estimate: Integral inequality degenerate},
\begin{align*}
&LHS_2=\int_X \tilde u^{p-1} u^{1+\frac{1}{n}} S_\epsilon^{-\frac{\beta-1+\sigma_D}{n}} \omega_\psi^n\\&\leq C\cdot RHS_1= C\int_X \tilde u^{p-1} [ u
+1
+u^{\frac{1}{2}} S_\epsilon^{\frac{\sigma_D-1}{2}} ] \omega_\psi^n.\notag
\end{align*}
Replacing $\tilde u$ with $u+K_0$, we get
\begin{align*}
\widetilde{LHS_2}
\leq C(K_0)[LHS_2+\int_X \tilde u^{p-1} S_\epsilon^{-\frac{\beta-1+\sigma_D}{n}} \omega_\psi^n].
\end{align*}
Then Young's inequality with the conjugate exponents $\frac{p+\frac{1}{n}}{p-1}$ and $\frac{p+\frac{1}{n}}{1+\frac{1}{n}}$ gives
\begin{align*}
\int_X \tilde u^{p-1} S_\epsilon^{-\frac{\beta-1+\sigma_D}{n}} \omega_\psi^n
\leq \tau\widetilde{LHS_2}+C(\tau)\int_X S_\epsilon^{-\frac{\beta-1+\sigma_D}{n}} \omega_\psi^n.
\end{align*}
The latter integral is bounded by a constant $C$ independent of $p$, since
\begin{align*}
-\frac{\beta-1+\sigma_D}{n}+2(\beta-1)+2n>0.
\end{align*}
In conclusion, we have shown that
\begin{align}\label{Gradient estimate: Integral inequality degenerate partial}
\widetilde{LHS_2}\leq C(LHS_2+1)\leq C(RHS_1+1).
\end{align}
In order to proceed further, we apply Young's inequality to each term in $RHS_1$.
For the first term in $RHS_1$, we pick the conjugate exponents
$
p_1=\frac{p+n^{-1}}{p},q_1=\frac{p+n^{-1}}{n^{-1}}
$, then
\begin{align*}
\int_X \tilde u^{p}\omega_\psi^n&=\int_X (\tilde u^{p} S_\epsilon^{-\frac{\beta-1+\sigma_D}{n p_1}} )S_\epsilon^{\frac{\beta-1+\sigma_D}{n p_1}} \omega_\psi^n
\leq \tau\widetilde{LHS_2}
+C(p,\tau)\int_X S_\epsilon^{k_1}\omega^n_\psi.\notag
\end{align*}
The exponent $k_1$ is then computed as
\begin{align*}
k_1:=\frac{\beta-1+\sigma_D}{n}\frac{q_1}{p_1}
=(\beta-1+\sigma_D)p\geq 0.
\end{align*}
Similarly, we estimate the second term as
\begin{align*}
\int_X \tilde u^{p-1}\omega_\psi^n&=\int_X ( \tilde u^{p-1} S_\epsilon^{-\frac{\beta-1+\sigma_D}{n p_2}} )S_\epsilon^{\frac{\beta-1+\sigma_D}{np_2}}\omega_\psi^n
\leq \tau\widetilde{LHS_2}
+C(p,\tau)\int_X S_\epsilon^{k_2}\omega^n_\psi
\end{align*}
with the conjugate exponents $p_2=\frac{p+n^{-1}}{p-1}, q_2=\frac{p+n^{-1}}{1+n^{-1}}
$ and
\begin{align*}
k_2&:=\frac{(\beta-1+\sigma_D)(p-1)}{n+1}\geq 0.
\end{align*}
The third term reduces to
\begin{align*}
\int_X \tilde u^{p-\frac{1}{2}}S_\epsilon^{\frac{\sigma_D-1}{2}} \omega_\psi^n
&=\int_X (\tilde u^{p-\frac{1}{2}} S_\epsilon^{-\frac{\beta-1+\sigma_D}{n p_3}} )S_\epsilon^{\frac{\beta-1+\sigma_D}{n p_3}+\frac{\sigma_D-1}{2}} \omega_\psi^n \\
&\leq \tau\widetilde{LHS_2}
+C(p,\tau)\int_X S_\epsilon^{k_3}\omega^n_\psi
\end{align*}
with $p_3=\frac{p+n^{-1}}{p-\frac{1}{2}}, q_3=\frac{p+n^{-1}}{\frac{1}{2}+n^{-1}}
$ and
\begin{align*}
k_3:=(\frac{\beta-1+\sigma_D}{n p_3}+\frac{\sigma_D-1}{2})q_3
=\frac{(\beta-1)(2p-1)-(np+1)}{n+2}+\sigma_D p.
\end{align*}
In order to make the last integrand $S_\epsilon^{k_3}$ integrable, we need
\begin{align*}
k_3+2(\beta-1)+2n>0,
\end{align*}
which is equivalent to
\begin{align*}
[(n+2)\sigma_D+2(\beta-1)] p+(\beta-1)(2n+3)+2n(n+2)>np+1.
\end{align*}
Clearly, it is sufficient to ask $(n+2)\sigma_D+2(\beta-1)\geq n$.
Therefore, we see that the integrability condition holds when $\sigma_D\geq1-\frac{2\beta}{n+2}$.
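Indeed, when $(n+2)\sigma_D+2(\beta-1)\geq n$, the left hand side above is at least $np+(\beta-1)(2n+3)+2n(n+2)$, and, using only $\beta>0$ and $n\geq 2$,
\begin{align*}
(\beta-1)(2n+3)+2n(n+2)>-(2n+3)+2n(n+2)=2n^2+2n-3\geq 9>1,
\end{align*}
so the strict inequality holds for every $p\geq 2$.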
Adding them together, we have obtained that there exists a constant $C(p)$ depending on $p$ such that
\begin{align*}
RHS_1\leq 3 \tau\widetilde{LHS_2}+C(p,\tau) \int_X (S_\epsilon^{k_1}+S_\epsilon^{k_2}+S_\epsilon^{k_3})\omega^n_\psi<C(p).
\end{align*}
Inserting it into \eqref{Gradient estimate: Integral inequality degenerate partial}, we get
\begin{align*}
\widetilde{LHS_2}\leq C[3\tau \widetilde{LHS_2}+C(p,\tau)\int_X (S_\epsilon^{k_1}+S_\epsilon^{k_2}+S_\epsilon^{k_3})\omega^n_\psi+1].
\end{align*}
Taking sufficiently small $\tau$, we obtain the estimate of $\widetilde{LHS_2}$ and $RHS_1.$
\end{proof}
\subsection{Weighted inequality}
In this section, we deal with the term
\begin{align*}
\int_X \tilde u^{q} S_\epsilon^{\frac{\sigma_D}{2}+\sigma-k'}\omega^n_\psi,
\end{align*}
which appears on the right hand side of the rough iteration inequality, Corollary \ref{Gradient estimate: Rough iteration inequality cor}. We determine the range of $k'$ for which this term is bounded by $\|\tilde u\|^{q}_{L^{qa}(\tilde\mu)}$ for some $1<a<\frac{2n}{2n-1}$.
\begin{prop}[Weighted inequality]\label{Gradient estimate: Weighted inequality}
Assume $n\geq 2$ and $ k'<\frac{1}{2}$.
Then there exists $1<a<\frac{2n}{2n-1}$ such that
\begin{align*}
\int_X \tilde u^{q} S_\epsilon^{\frac{\sigma_D}{2}+\sigma-k'}\omega^n_\psi
\leq C_{5.1} \|\tilde u\|^{q}_{L^{qa}(\tilde\mu)}.
\end{align*}
Here $\frac{1}{a}+\frac{1}{c}=1$ and $C_{5.1}=\| S_\epsilon^{k_\sigma-k_\sigma\chi-k'}\|_{L^{c}(\tilde\mu)}$ is finite for some $c>2n$.
\end{prop}
\begin{proof}
With the help of the bound of the volume ratio $\tilde F$, we get
\begin{align*}
\int_X \tilde u^{q} S_\epsilon^{\frac{\sigma_D}{2}+\sigma-k'}\omega^n_\psi
\leq C&\int_X \tilde u^{q}S_\epsilon^{k_\sigma-k'} \tilde\omega_t^n
=C\int_X \tilde u^{q}S_\epsilon^{k_\sigma-k_\sigma\chi-k'} \tilde\mu.
\end{align*}
Applying the generalised H\"older inequality with the conjugate exponents $a,c$, this is bounded by
\begin{align*}
\leq C\|\tilde u\|^{q}_{L^{qa}(\tilde\mu)}(\int_X S_\epsilon^{(k_\sigma-k_\sigma\chi-k')c} \tilde\mu)^{\frac{1}{c}}.
\end{align*}
As we have seen before, the last integral is finite provided
\begin{align*}
2(k_\sigma-k_\sigma\chi-k')c+2k_\sigma\chi+2n>0,
\end{align*} which is equivalent to
\begin{align*}
c<2n\frac{k_\sigma+2^{-1}(2n-1)}{k_\sigma+k'(2n-1)}:=c_0.
\end{align*}
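For completeness, the equivalence is obtained by solving the linear inequality in $c$: since $\chi=\frac{2n}{2n-1}$, we have $k_\sigma-k_\sigma\chi=-\frac{k_\sigma}{2n-1}$, so the condition reads (assuming, as in the relevant range, that $k_\sigma+k'(2n-1)>0$)
\begin{align*}
\Big[\frac{2k_\sigma}{2n-1}+2k'\Big]c<\frac{4nk_\sigma}{2n-1}+2n,
\quad\text{i.e.}\quad
c<\frac{4nk_\sigma+2n(2n-1)}{2k_\sigma+2k'(2n-1)}=2n\,\frac{k_\sigma+\frac{2n-1}{2}}{k_\sigma+k'(2n-1)}.
\end{align*}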
The assumption $k'<\frac{1}{2}$ implies that $c_0>2n$. Consequently, the exponent $c$ is chosen to be between $2n$ and $c_0$ such that $a<\frac{2n}{2n-1}$.
\end{proof}
\subsection{Inverse weighted inequality}
We further estimate the right hand side of the rough iteration inequality, Corollary \ref{Gradient estimate: Rough iteration inequality cor}, which contains two parts, $RHS_1$ and $RHS_2$:
\begin{align*}
RHS_1&= \int_X \tilde u^{p-1} [ u
+1
+u^{\frac{1}{2}} S_\epsilon^{\frac{\sigma_D-1}{2}} ] \omega_\psi^n,\\
RHS_2&=\int_X \tilde u^{q-1} u S^{\frac{\sigma_D}{2}+\sigma}_\epsilon\omega_\psi^n+k_\sigma\int_X \tilde u^{q-1} u S^{\frac{\sigma_D-1}{2}+\sigma}_\epsilon \omega_\psi^n.
\end{align*}
Comparing the weights in the relevant terms on both sides of the rough iteration inequality,
\begin{align*}
-\frac{\beta-1+\sigma_D}{n}, \quad \frac{\sigma_D-1}{2},
\end{align*}
we observe, roughly speaking, that the critical cone angle is given by
\begin{align*}
\frac{\beta-1}{n}=\frac{1}{2}.
\end{align*}
The precise statement is given in the inverse weighted inequality below.
We examine all five integrals in $RHS_1$ and $RHS_2$.
\begin{align*}
& I:=\int_X \tilde u^{p-1} u \omega_\psi^n ,\quad
II:=\int_X \tilde u^{p-1}\omega_\psi^n,\quad
III:=\int_X \tilde u^{p-1} u^{\frac{1}{2}}S_\epsilon^{\frac{\sigma_D-1}{2}}\omega_\psi^n ,\\
&IV:=\int_X\tilde u^{q-1} u S^{\frac{\sigma_D}{2}+\sigma}_\epsilon\omega_\psi^n,\quad
V:=\int_X\tilde u^{q-1} u S_\epsilon^{\frac{\sigma_D-1}{2}+\sigma}\omega_\psi^n.
\end{align*}
\begin{prop}[Inverse weighted inequality]\label{Gradient estimate: Inverse weighted inequality}
Assume that the degenerate exponent satisfies $\sigma_D<1$, $\sigma=0$ and the condition \eqref{gradient estimate: degenerate exponent}, equivalently,
\begin{align}
&\sigma_D=0,\text{ when }\frac{\beta-1}{n}>\frac{1}{2};\label{Gradient estimate: Inverse weighted inequality angle}\\
&\sigma_D>1-\frac{2\beta}{n+2}\geq 0,\text{ when } \frac{\beta-1}{n}\leq\frac{1}{2}.\label{Gradient estimate: Inverse weighted inequality sigmaD}
\end{align}
Then there exists $0<k'<\frac{1}{2}$ such that
\begin{align*}
I,II,III,IV,V
\leq \epsilon \widetilde{LHS_2}
+C(\epsilon, n)\int_X \tilde u^{q}S_\epsilon^{\frac{\sigma_D}{2}+\sigma-k'}\omega_\psi^n.
\end{align*}
\end{prop}
\begin{proof}
We will apply the $\epsilon$-Young inequality and proceed similarly to the proof of Proposition \ref{Gradient estimate: Lp control prop}, using the positive term
\begin{align*}
&LHS_2=\int_X \tilde u^{p-1} u^{1+\frac{1}{n}} S_\epsilon^{-\frac{\beta-1+\sigma_D}{n}} \omega_\psi^n.
\end{align*} But now the constants must be chosen independently of $p$.
To estimate the first integral, we decompose it as
\begin{align*}
I&=\int_X \tilde u^{p-1} u \omega_\psi^n\\
&=\int_X ( \tilde u^{p-1} u^{1+\frac{1}{n}} S_\epsilon^{-\frac{\beta-1+\sigma_D}{n}})^{\frac{1}{a_1}}
\tilde u^{\frac{p-1}{b_1}} u^{1-(1+\frac{1}{n})\frac{1}{a_1}}S_\epsilon^{(\frac{\beta-1+\sigma_D}{n})\frac{1}{a_1}}\omega_\psi^n\\
&\leq \epsilon LHS_2
+C(\epsilon, n)\int_X \tilde u^{p-1}u^{1-\frac{b_1}{a_1 n}}S_\epsilon^{k_1}\omega_\psi^n.
\end{align*}
By the choice of the conjugate exponents $a_1:=\frac{n+2}{n}$ and $b_1:=\frac{n+2}{2}$,
we get $1-\frac{b_1}{a_1 n}=\frac{1}{2}$ and
\begin{align*}
I\leq \epsilon LHS_2+C(\epsilon, n)\int_X \tilde u^{q}S_\epsilon^{k_1}\omega_\psi^n.
\end{align*}
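For the reader's convenience, the arithmetic behind this choice is elementary:
\begin{align*}
\frac{1}{a_1}+\frac{1}{b_1}=\frac{n}{n+2}+\frac{2}{n+2}=1,
\qquad
\frac{b_1}{a_1 n}=\frac{n+2}{2}\cdot\frac{n}{n+2}\cdot\frac{1}{n}=\frac{1}{2}.
\end{align*}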
We further compute
\begin{align*}
k_1&=(\frac{\beta-1+\sigma_D}{n})\frac{b_1}{a_1}=\frac{\beta-1+\sigma_D}{2}.
\end{align*}
In view of the weighted inequality, Proposition \ref{Gradient estimate: Weighted inequality}, we need to impose a condition on
\begin{align*}
k_1'=\frac{\sigma_D}{2}+\sigma-k_1=\frac{-(\beta-1)}{2}+\sigma.
\end{align*}
The second integral is estimated directly, using $\tilde u\geq K_0$:
\begin{align*}
II=\int_X \tilde u^{p-1}\omega_\psi^n\leq \int_X \tilde u^{q} K_0^{-\frac{1}{2}}\omega_\psi^n.
\end{align*}
So, the exponent is
\begin{align*}
k_2'=\frac{\sigma_D}{2}+\sigma.
\end{align*}
The third term $III$ is
\begin{align*}
&\int_X \tilde u^{p-1} u^{\frac{1}{2}} S_\epsilon^{\frac{\sigma_D-1}{2}}\omega_\psi^n\\
&=\int_X ( \tilde u^{p-1} u^{1+\frac{1}{n}} S_\epsilon^{-\frac{\beta-1+\sigma_D}{n}})^{\frac{1}{a_3}} \tilde u^{\frac{p-1}{b_3}} u^{\frac{1}{2}-(1+\frac{1}{n})\frac{1}{a_3}}S_\epsilon^{\frac{\sigma_D-1}{2}+(\frac{\beta-1+\sigma_D}{n})\frac{1}{a_3}}\omega_\psi^n.
\end{align*}
By Young's inequality,
\begin{align*}
III\leq \epsilon LHS_2
+C(\epsilon, n)\int_X \tilde u^{p-1}u^{\frac{b_3}{2}-(1+\frac{1}{n})\frac{b_3}{a_3}}S_\epsilon^{k_3}\omega_\psi^n.
\end{align*}
We choose the exponents $b_3=\frac{3n+4}{2n+4}> 1$ such that
\begin{align*}
\frac{b_3}{2}-(1+\frac{1}{n})\frac{b_3}{a_3}=\frac{1}{4}.
\end{align*}
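Indeed, since $a_3$ and $b_3$ are conjugate, $\frac{b_3}{a_3}=b_3-1=\frac{n}{2n+4}$, and therefore
\begin{align*}
\frac{b_3}{2}-\Big(1+\frac{1}{n}\Big)\frac{b_3}{a_3}
=\frac{3n+4}{2(2n+4)}-\frac{n+1}{n}\cdot\frac{n}{2n+4}
=\frac{3n+4-2(n+1)}{2(2n+4)}=\frac{n+2}{2(2n+4)}=\frac{1}{4}.
\end{align*}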
Accordingly,
\begin{align*}
III
\leq \epsilon LHS_2
+C(\epsilon, n)\int_X \tilde u^{p-\frac{3}{4}}S_\epsilon^{k_3}\omega_\psi^n.
\end{align*}
Since $\tilde u\geq K_0$, we have $\tilde u^{-\frac{1}{4}}\leq K_0^{-\frac{1}{4}}$.
Hence,
\begin{align*}
III\leq \epsilon LHS_2
+C(\epsilon, n, K_0)\int_X \tilde u^{q}S_\epsilon^{k_3}\omega_\psi^n.
\end{align*}
We calculate the exponent $k_3=[\frac{\sigma_D-1}{2}+(\frac{\beta-1+\sigma_D}{n})a_3^{-1}]b_3$, which is
\begin{align*}
k_3=\frac{\sigma_D(na_3+2)-na_3+2(\beta-1)}{2n(a_3-1)}
=(\frac{\sigma_D}{2}+\sigma)-k'_3.
\end{align*}
Consequently, we have
\begin{align*}
k'_3=\frac{-\sigma_D(n+2)-2(\beta-1)+na_3}{2n(a_3-1)}+\sigma.
\end{align*}
We estimate the fourth term,
\begin{align*}
&IV=\int_X \tilde u^{q-1} uS^{\frac{\sigma_D}{2}+\sigma}_\epsilon\omega_\psi^n\leq \int_X \tilde u^{q}S_\epsilon^{k_4}\omega_\psi^n,
\end{align*}
where
\begin{align*}
k_4=\frac{\sigma_D}{2}+\sigma
=(\frac{\sigma_D}{2}+\sigma)-k_4',\quad k'_4=0.
\end{align*}
The fifth term is then estimated by
\begin{align*}
V\leq\epsilon LHS_2+C(\epsilon,n)\int_X \tilde u^{q}S_\epsilon^{k_5}\omega_\psi^n.
\end{align*}
The exponent $k_5=[(\frac{\sigma_D-1}{2}+\sigma)+(\frac{\beta-1+\sigma_D}{n})a_5^{-1}]b_5$ is further reduced to
\begin{align*}
k_5=\frac{\sigma_D(na_5+2)+na_5(2\sigma -1)+2(\beta-1)}{2n(a_5-1)}
=(\frac{\sigma_D}{2}+\sigma)-k_5'
\end{align*}
and
\begin{align*}
k_5'=\frac{-\sigma_D(n+2)-2(\beta-1)+na_5}{2n(a_5-1)}-\frac{\sigma}{a_5-1}.
\end{align*}
Finally, we verify the conditions ensuring that $k_i'<\frac{1}{2}$ for $i=1,2,3,4,5$.
From $\sigma=0$ and $\sigma_D<1$, we have
\begin{align*}
k_1'=\frac{-(\beta-1)}{2}+\sigma<\frac{1}{2},\quad k'_2=\frac{\sigma_D}{2}+\sigma<\frac{1}{2}, \quad
k'_4=0<\frac{1}{2}.
\end{align*}
By using the weight condition \eqref{Gradient estimate: Inverse weighted inequality angle}, namely $\beta-1>\frac{n}{2}$, we have
\begin{align*}
k'_3=\frac{-2(\beta-1)+na_3}{2n(a_3-1)}<\frac{-n+na_3}{2n(a_3-1)}=\frac{1}{2}.
\end{align*}
Meanwhile, under the weight condition \eqref{Gradient estimate: Inverse weighted inequality sigmaD}, i.e. $\sigma_D>1-\frac{2\beta}{n+2}$, we see that
\begin{align*}
k'_3&=\frac{-\sigma_D(n+2)-2(\beta-1)+na_3}{2n(a_3-1)}\\
&<\frac{-[1-\frac{2\beta}{n+2}](n+2)-2(\beta-1)+na_3}{2n(a_3-1)}
=\frac{-n+na_3}{2n(a_3-1)}=\frac{1}{2}.
\end{align*}
The expression of $k'_5$ coincides with that of $k_3'$ when $\sigma=0$. So $$k'_5<\frac{1}{2},$$ provided the weight conditions \eqref{Gradient estimate: Inverse weighted inequality angle} and \eqref{Gradient estimate: Inverse weighted inequality sigmaD} hold, respectively.
In conclusion, simply choosing $k'=\max\{k_1',k_2',k_3',k_4',k_5'\}$, we arrive at the desired inverse weighted inequality.
\end{proof}
\subsection{Iteration inequality}
In this section, we combine the inequalities from the previous sections to obtain the iteration inequality and complete the iteration scheme.
\begin{prop}[Iteration inequality]\label{Gradient estimate: Iteration inequality prop}
\begin{align}\label{Gradient estimate: Iteration inequality}
\|\tilde u\|_{L^{q\chi}(\tilde\mu)}
\leq C q^{\frac{1}{2q}} \|\tilde u\|_{L^{qa}(\tilde \mu)}, \quad 1<a<\chi.
\end{align}
\end{prop}
\begin{proof}
Since $q=p-\frac{1}{2}>1$, we obtain the rough iteration inequality \corref{Gradient estimate: Rough iteration inequality cor}, which states
\begin{align*}
\|\tilde u\|^{q}_{L^{q\chi}(\tilde\mu)}+\sqrt{q}LHS_2
\leq C\sqrt{q} (RHS_1+RHS_2+1).
\end{align*}
Applying the inverse weighted inequality, Proposition \ref{Gradient estimate: Inverse weighted inequality}, we have
\begin{align*}
RHS_1+RHS_2
\leq 4\tau LHS_2
+C(\tau, n)\int_X \tilde u^{q}S_\epsilon^{\frac{\sigma_D}{2}+\sigma-k'}\omega_\psi^n.
\end{align*}
Choosing sufficiently small $\tau$ and inserting it back into the rough iteration inequality, we get
\begin{align*}
\|\tilde u\|^{q}_{L^{q\chi}(\tilde\mu)}
\leq C\sqrt{q}( \int_X \tilde u^{q}S_\epsilon^{\frac{\sigma_D}{2}+\sigma-k'}\omega_\psi^n+1).
\end{align*}
Then the weighted inequality, Proposition \ref{Gradient estimate: Weighted inequality}, implies the desired iteration inequality
\begin{align*}
\|\tilde u\|^{q}_{L^{q\chi}(\tilde\mu)}
\leq C\sqrt{q}( \|\tilde u\|^{q}_{L^{qa}(\tilde\mu)}+1).
\end{align*}
\end{proof}
Finally, we finish the proof of the gradient estimate, \thmref{gradient estimate}. We assume $\|\tilde u\|_{L^{q_0a}(\tilde \mu)} \geq 1$ for some $q_0\geq\frac{3}{2}$ and rewrite the iteration inequality \eqref{Gradient estimate: Iteration inequality} by using $\tilde\chi:=\frac{\chi}{a}>1$,
\begin{align*}
\|\tilde u\|_{L^{q \chi }(\tilde\mu)}
\leq C^{q^{-1}} q^{\frac{1}{2q}} \|\tilde u\|_{L^{qa}(\tilde \mu)} .
\end{align*}
To proceed with the iteration argument, we set $q=\tilde\chi^m$; then we have
\begin{align*}
&\|\tilde u\|_{L^{\tilde\chi^{m}\chi }(\tilde\mu)}
\leq C^{\tilde\chi^{-m}} \tilde\chi^{\frac{m}{2\tilde\chi^m}} \|\tilde u\|_{L^{\tilde\chi^m a}(\tilde \mu)}
= C^{\tilde\chi^{-m}} \tilde\chi^{\frac{m}{2\tilde\chi^m}} \|\tilde u\|_{L^{\tilde\chi^{m-1} \chi}(\tilde \mu)}\\
&\leq C^{\tilde\chi^{-m}+\tilde\chi^{-m+1}} \tilde\chi^{\frac{m}{2\tilde\chi^m}+\frac{m-1}{2\tilde\chi^{m-1}}} \|\tilde u\|_{L^{\tilde\chi^{m-2} a}(\tilde \mu)}\\
&\leq\cdots\leq C^{\sum_{i_0\leq i\leq m} \tilde\chi^{-i}} \tilde\chi^{\sum_{i_0\leq i\leq m}\frac{i}{2\tilde\chi^i}} \|\tilde u\|_{L^{ \tilde\chi^{i_0-1} a}(\tilde \mu)}.
\end{align*}
At the final step, we choose sufficiently large $m$ and let $i_0$ satisfy
\begin{align*}
\tilde q_0:=\tilde\chi^{i_0-1} a\geq q_0.
\end{align*}
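Both coefficient series admit closed forms for $\tilde\chi>1$, which makes their convergence explicit:

```latex
\begin{align*}
\sum_{i\geq 1}\tilde\chi^{-i}=\frac{1}{\tilde\chi-1},
\qquad
\sum_{i\geq 1}\frac{i}{2\tilde\chi^{i}}=\frac{\tilde\chi}{2(\tilde\chi-1)^{2}}.
\end{align*}
```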
Since the two series in the coefficients are convergent, it remains to check the bound of the last integral
$\|\tilde u\|^{\tilde q_0}_{L^{ \tilde q_0}(\tilde \mu)}$, which equals
\begin{align*}
\|\tilde u\|^{\tilde q_0}_{L^{ \tilde q_0}(\tilde \mu)}
=\int_X \tilde u^{\tilde q_0}S_\epsilon^{k_\sigma\chi}\tilde\omega_t^n
=\int_X \tilde u^{\tilde q_0}S_\epsilon^{[\frac{\sigma_D}{2}+\beta-1+\sigma]\frac{2n}{2n-1}}\tilde\omega_t^n.
\end{align*}
In order to estimate the bound of $\|\tilde u\|^{\tilde q_0}_{L^{ \tilde q_0}(\tilde \mu)}$ from the $L^p$ bound of $u$ proved in Proposition \ref{Gradient estimate: Lp control prop}, i.e.
\begin{align*}
\int_X \tilde u^{\tilde q_0+\frac{1}{n}} S_\epsilon^{-\frac{\sigma_D}{n}+\frac{n-1}{n}(\beta-1)} \tilde\omega_t^n \leq C(\tilde q_0),
\end{align*}
we verify that $\tilde u^{\tilde q_0}<\tilde u^{\tilde q_0+\frac{1}{n}} $, $ \sigma=0 $ and the weights
\begin{align*}
\frac{\sigma_D}{2}\frac{2n}{2n-1}>-\frac{\sigma_D}{n}, \quad(\beta-1)\frac{2n}{2n-1}>\frac{n-1}{n}(\beta-1).
\end{align*}
Letting $m\rightarrow\infty$, we thus obtain the gradient estimate of $|\partial\psi|^2$ from $\tilde u=e^H(|\partial\psi|^2+K)+K_0$ and the bound of $e^H$.
\section{$W^{2,p}$-estimate}\label{W2p estimate}
\begin{thm}[$W^{2,p}$-estimate for singular equation]\label{w2pestimates degenerate Singular equation}
For any $p\geq 1$, there exists a constant $A_5$ such that
\begin{align*}
\int_X (\mathop{\rm tr}\nolimits_{\tilde\omega_t}\omega_\varphi)^{p} |s_E|^{\sigma_E}_{h_E} S_\epsilon^{\sigma_D} \tilde\omega_t^n\leq A_5,\quad \sigma_D>(\beta-1)\frac{n-2-2np^{-1}}{n-1+p^{-1}}
\end{align*}
where the singular exponent $\sigma_E$ is defined as follows:
\begin{align}\label{w2pestimates sigma E}
\sigma_E>2a_0\frac{(n-2)\sigma_s+(b_1-\sigma_sp)p(n-1)-2np^{-1}}{n-1+p^{-1}}.
\end{align} and $b_1$ is given in \eqref{W2p estimate AR b1} and \eqref{W2p estimate b1 2}. The constant $A_5$ depends on the $L^\infty$-estimates
\begin{align*}
&\sup_X(F_{t,\epsilon}-\sigma_s\phi_E),\quad\inf_X(F-\sigma_i\phi_E),\quad \|\varphi\|_\infty
\end{align*}
and the quantities of the background metric $\tilde\omega_t$,
\begin{align*}
&-C_{1.1}=\inf_{i\neq j}R_{i\bar i j\bar j}(\tilde\omega_t), \quad \|e^{-f}\|_{p_0,\tilde\omega_t^n},p_0\geq p+1,\quad \sup_{(X,\tilde\omega_t)} i\partial\bar\partial f,\\
&\sup_X\phi_E,\quad
\sup_{(X,(1+\epsilon)\omega_{K})}\theta,\quad\inf_X R,\quad Vol([\tilde\omega_t]),\quad n , \quad p.
\end{align*}
\end{thm}
\begin{rem}
When $n=2$, we have $\sigma_D=0$.
\end{rem}
We obtain the $W^{2,p}$-estimate for the degenerate equation as follows.
\begin{thm}\label{w2pestimates degenerate equation}
Suppose that $\Omega=[\omega_K]$ is K\"ahler. Then the singular exponent $\sigma_E$ vanishes and
\begin{align*}
\int_X (\mathop{\rm tr}\nolimits_{\omega_K}\omega_{\varphi_\epsilon})^{p} S_\epsilon^{\sigma_D} \omega_K^n\leq A_5,\quad \sigma_D:=(\beta-1)\frac{n-2}{n-1+p^{-1}}.
\end{align*}
Moreover, written in terms of the volume element $\omega_{\varphi_\epsilon}^n$,
\begin{align}\label{w2pestimates degenerate equation simple}
\int_X (\mathop{\rm tr}\nolimits_{\omega_K}\omega_{\varphi_\epsilon})^{p} S_\epsilon^{-\frac{\beta-1}{n-1}} \omega_{\varphi_\epsilon}^n\leq A_6.
\end{align}
\end{thm}
In the proof, we omit the indexes as before. We also write $\psi:=\tilde\varphi=\varphi-\phi_E$.
We will use the following notations in this section,
\begin{align}\label{w2p u}
v:=\mathop{\rm tr}\nolimits_{\tilde\omega_t}\omega_\varphi,\quad \tilde v:=v+K,\quad K\geq 0.
\end{align}
The proof is divided into the following steps.
\subsection{Differential inequality}
\begin{lem}\label{w2pestimates tri tilde v}
Let the constant $-C_{1.1}=\inf_{i\neq j}R_{i\bar i j\bar j}(\tilde\omega_t)$. Then
\begin{align*}
\triangle_\varphi \log \tilde v \geq - C_{1.1}\mathop{\rm tr}\nolimits_\varphi\tilde\omega_t+\frac{\tilde\triangle \tilde F}{\tilde v},
\end{align*}
where $\tilde\triangle$ is the Laplacian with respect to the metric $\tilde\omega_t$.
\end{lem}
\begin{proof}
The proof of the Laplacian of $\tilde v=\mathop{\rm tr}\nolimits_{\tilde\omega_t}\omega_\varphi+K$ is slightly different from Yau's computation for $v=\mathop{\rm tr}\nolimits_{\tilde\omega_t}\omega_\varphi$. We include the proof of $\triangle_\varphi \tilde v$ below
and refer to Lemma 3.4 of \cite{MR3405866} and Proposition 2.22 of \cite{MR4020314} for further references.
The Laplacian of $\tilde v$ is given by the identity
\begin{align*}
\triangle_\varphi \tilde v
=g^{i\bar j}g_\varphi^{k\bar l}g_\varphi^{p\bar q}\partial_{\bar l}g_{\varphi p\bar j}
\partial_{k}g_{\varphi i\bar q}-\mathop{\rm tr}\nolimits_{\tilde\omega_t}Ric(\omega_\varphi)
+g_\varphi^{k\bar l}{R^{i\bar j}}_{k\bar l}(\tilde\omega_t)g_{\varphi i\bar j}.
\end{align*}
By the volume ratio $\omega_\varphi^n=e^{\tilde F}\tilde\omega_t^n$, we have
$$Ric(\omega_\varphi)=Ric(\tilde\omega_t)-i\partial\bar\partial\tilde F$$ and then
\begin{align*}
\triangle_\varphi \tilde v
=g^{i\bar j}g_\varphi^{k\bar l}g_\varphi^{p\bar q}\partial_{\bar l}g_{\varphi p\bar j}
\partial_{k}g_{\varphi i\bar q}+ \tilde\triangle \tilde F-S(\tilde\omega_t)
+g_\varphi^{k\bar l}{R^{i\bar j}}_{k\bar l}(\tilde\omega_t)g_{\varphi i\bar j}.
\end{align*}
In normal coordinates, it holds that
\begin{align*}
g^{i\bar j}g_\varphi^{k\bar l}g_\varphi^{p\bar q}\partial_{\bar l}g_{\varphi p\bar j}
\partial_{k}g_{\varphi i\bar q}= \frac{1}{1+\varphi_{k\bar k}}\frac{1}{1+\varphi_{p\bar p}}\varphi_{p\bar i\bar k}\varphi_{i\bar p k}.
\end{align*}
By $1+\varphi_{p\bar p}\leq \sum_{p}(1+\varphi_{p\bar p})=v\leq \tilde v$, we have
\begin{align*}
g^{i\bar j}g_\varphi^{k\bar l}g_\varphi^{p\bar q}\partial_{\bar l}g_{\varphi p\bar j}
\partial_{k}g_{\varphi i\bar q}
\geq v^{-1} |\partial v |^2_\varphi\geq \tilde v^{-1} |\partial \tilde v|^2_\varphi=\tilde v |\partial \log \tilde v|^2_\varphi.
\end{align*}
The term involving curvature reduces to the following inequality, by M. Paun's trick \cite{MR2470619},
\begin{align*}
&-S(\tilde\omega_t)+g_\varphi^{k\bar l}{R^{i\bar j}}_{k\bar l}(\tilde\omega_t)g_{\varphi i\bar j}
=R_{i\bar i k\bar k}(\tilde\omega_t)\sum_{1\leq i,k\leq n}[\frac{1+\varphi_{i\bar i}}{1+\varphi_{k\bar k}}-1]\\
&=R_{i\bar i k\bar k}(\tilde\omega_t)\sum_{1\leq i<k\leq n}[\frac{1+\varphi_{i\bar i}}{1+\varphi_{k\bar k}}-1]
+R_{i\bar i k\bar k}(\tilde\omega_t)\sum_{1\leq k<i\leq n}[\frac{1+\varphi_{i\bar i}}{1+\varphi_{k\bar k}}-1].
\end{align*}
By the symmetry of the Riemannian curvature tensor, it becomes
\begin{align*}
=R_{i\bar i k\bar k}\sum_{1\leq i<k\leq n}[\frac{1+\varphi_{i\bar i}}{1+\varphi_{k\bar k}}+\frac{1+\varphi_{k\bar k}}{1+\varphi_{i\bar i}}-2],
\end{align*}
which is nonnegative and leads to
\begin{align*}
&\geq -C_{1.1}\sum_{1\leq i<k\leq n}[\frac{1+\varphi_{i\bar i}}{1+\varphi_{k\bar k}}+\frac{1+\varphi_{k\bar k}}{1+\varphi_{i\bar i}}-2]\\
&\geq -C_{1.1} \mathop{\rm tr}\nolimits_\varphi\tilde\omega_t \cdot v
\geq -C_{1.1} \mathop{\rm tr}\nolimits_\varphi\tilde\omega_t \cdot \tilde v.
\end{align*}
Inserting all these inequalities to the identity
\begin{align*}
\triangle_\varphi \log \tilde v =\tilde v^{-1}\triangle_\varphi \tilde v-|\partial\log \tilde v|^2_\varphi,
\end{align*}
we obtain the desired inequality for $\triangle_\varphi\log \tilde v$.
\end{proof}
Furthermore, we calculate the Laplacian of $u$, which is $\tilde v$ multiplied by the weight $e^{-H}$, i.e.
\begin{align*}
u=e^{-H} \tilde v.
\end{align*}
Then
$
\triangle_\varphi \log u =-\triangle_\varphi H+\triangle_\varphi \log \tilde v.
$ Combining with \eqref{w2pestimates tri tilde v}, we have
\begin{lem}
$
\triangle_\varphi \log u \geq -\triangle_\varphi H- C_{1.1}\mathop{\rm tr}\nolimits_\varphi\tilde\omega_t+\frac{\tilde\triangle \tilde F}{\tilde v}.
$
\end{lem}
In particular, we define $H$ as
\begin{align}\label{w2p H}
H:=b_0 F+b_1\psi+b_2 f,\quad b_0\geq 0.
\end{align}
\begin{prop}[Differential inequality]\label{W2p estimate: Differential inequality prop}
\begin{align}\label{W2p estimate: Differential inequality}
\triangle_\varphi u\geq A_\theta u \mathop{\rm tr}\nolimits_\varphi\tilde\omega_t+(A_R-b_2 \triangle_\varphi f) u+e^{-H}\tilde\triangle \tilde F,
\end{align}
where we set the constants
\begin{align}\label{W2p estimate AR b1}
A_\theta:=-b_0C_u+b_1- C_{1.1},\quad A_R:=b_0\inf_X R-b_1n<0,
\end{align}
and choose the positive $b_1$ sufficiently large such that $A_\theta\geq1$.
\end{prop}
\begin{proof}
We use the upper bound of $\theta$ \eqref{Cu}, namely $\mathop{\rm tr}\nolimits_\varphi\theta\leq C_u\mathop{\rm tr}\nolimits_{\varphi}\tilde\omega_t$, to compute the Laplacian of the auxiliary function $H$,
\begin{align*}
\triangle_\varphi (-H)&= -b_0(\mathop{\rm tr}\nolimits_\varphi\theta-R)-b_1(n-\mathop{\rm tr}\nolimits_\varphi\tilde\omega_t)-b_2 \triangle_\varphi f\\
&\geq (-b_0C_u+b_1) \mathop{\rm tr}\nolimits_\varphi\tilde\omega_t+b_0R-b_1n-b_2 \triangle_\varphi f.
\end{align*}
Adding the inequalities for $\triangle_\varphi(-H)$ and $\triangle_\varphi \log \tilde v$ together, we arrive at an inequality for $\log u$,
\begin{align*}
\triangle_\varphi \log u\geq A_\theta \mathop{\rm tr}\nolimits_\varphi\tilde\omega_t+(A_R-b_2 \triangle_\varphi f)+\frac{\tilde\triangle \tilde F}{\tilde v}.
\end{align*}
Alternatively, rewritten in terms of $u=e^{-H}\tilde v$, it reduces to the desired inequality for $u$.
\end{proof}
\subsection{Integral inequality}
We integrate the differential inequality \eqref{W2p estimate: Differential inequality} into an integral inequality by multiplying it by $-u^{p-1}$ for any $p\geq 1$ and integrating over $X$ with respect to $\omega^n_\varphi$,
\begin{align}\label{W2p semi positive LHSRHS}
LHS_1+LHS_2
\leq \int_X[-A_R+b_2 \triangle_\varphi f] u^p\omega_\varphi^n+II,
\end{align}
where we denote $II:=-\int_Xu^{p-1} e^{-H}\tilde\triangle \tilde F\,\omega_\varphi^n$,
\begin{align*}
&LHS_1:=(p-1)\int_Xu^{p-2}|\partial u|^2_\varphi\omega_\varphi^n,\quad
\widetilde{LHS_2}:= \int_X A_\theta u^p \mathop{\rm tr}\nolimits_\varphi\tilde\omega_t\omega_\varphi^n.
\end{align*}
Applying the fundamental inequality
$
\mathop{\rm tr}\nolimits_\varphi\tilde\omega_t\geq e^{-\frac{\tilde F}{n-1}} v^{\frac{1}{n-1}},
$ we have
\begin{align*}
\widetilde{LHS_2}\geq LHS_2:= \int_X u^p e^{-\frac{\tilde F}{n-1}} v^{\frac{1}{n-1}}\omega_\varphi^n.
\end{align*}
Now we deal with the second term $II$, which involves $\tilde\triangle\tilde F$.
\begin{prop}\label{W2p II inequality}
Take $b_0=p$, then
\begin{align*}
&\frac{3}{4}LHS_1+\widetilde{LHS_2}\leq- \int_XA_Ru^p\omega_\varphi^n
+RHS_2
\end{align*}
where
\begin{align*}
RHS_2&:=b_2 \int_X\triangle_\varphi f u^p\omega_\varphi^n
+ \frac{b_0+b_2}{b_0-1}\int_Xu^{p-1} e^{-H} \tilde\triangle f \omega_\varphi^n\\
&+\int_X \frac{b_1}{b_0-1} u^{p-1} e^{-H}v \omega_\varphi^n.
\end{align*}
\end{prop}
\begin{proof}
By $\omega_\varphi^n=e^{\tilde F}\tilde \omega_t^n$, we get
$II=-\int_Xu^{p-1} e^{\tilde F-H}\tilde\triangle \tilde F\tilde\omega_t^n,$ which is decomposed to
\begin{align*}
II=II_1+II_2:=\frac{1}{b_0-1}\int_Xu^{p-1} e^{\tilde F-H}\tilde\triangle [(\tilde F-H)+(H-b_0\tilde F)]\tilde\omega_t^n.
\end{align*}
By integration by parts, the first part $II_1$ becomes
\begin{align*}
&=-\frac{p-1}{b_0-1}\int_Xu^{p-2} e^{\tilde F-H}(\partial u,\partial (\tilde F-H))_{\tilde\omega_t}\tilde\omega_t^n\\
&-\frac{1}{b_0-1}\int_Xu^{p-1} e^{\tilde F-H}|\partial (\tilde F-H)|^2_{\tilde\omega_t}\tilde\omega_t^n.
\end{align*}
We choose $b_0>1$ such that the constant in front of the second integral is negative. Then the Cauchy--Schwarz and Young inequalities give us the upper bound of $II_1$,
\begin{align*}
II_1\leq\frac{(p-1)^2}{4(b_0-1)}\int_Xu^{p-3} e^{\tilde F-H}|\partial u|^2_{\tilde\omega_t}\tilde\omega_t^n.
\end{align*}
Using $|\partial u|^2_{\tilde\omega_t}\leq v |\partial u|^2_{\varphi}$ and $u=e^{-H} v$, we deduce that
\begin{align*}
II_1\leq\frac{(p-1)^2}{4(b_0-1)}\int_Xu^{p-2} |\partial u|^2_{\varphi}\omega_\varphi^n=\frac{p-1}{4(b_0-1)}LHS_1.
\end{align*}
In order to estimate $II_2$, we calculate $$H-b_0\tilde F=b_0F+b_1\tilde\varphi-b_0\tilde F+b_2f=(b_0+b_2) f+b_1\tilde\varphi$$ and
\begin{align*}
\tilde\triangle (H-b_0\tilde F)=(b_0+b_2) \tilde\triangle f+b_1 v-b_1 n.
\end{align*}
By substitution into the part $II_2$, we get
\begin{align*}
II_2=\frac{1}{b_0-1}\int_Xu^{p-1} e^{-H} [(b_0+b_2) \tilde\triangle f+b_1 v-b_1 n] \omega_\varphi^n.
\end{align*}
If we further choose $b_0>1$ and $b_1>0$, the negative term can be dropped immediately. Hence, $II_2$ reduces to
\begin{align*}
\leq \frac{1}{b_0-1}\int_Xu^{p-1} e^{-H} [(b_0+b_2) \tilde\triangle f+b_1 v] \omega_\varphi^n.
\end{align*}
Inserting $II_1$ and $II_2$ back into \eqref{W2p semi positive LHSRHS} and choosing $b_0$ depending on $p$ such that $$1-\frac{p-1}{4(b_0-1)}>0,$$ we arrive at the desired weighted inequality.
\end{proof}
In order to estimate $\tilde\triangle f$, we need to use the upper bound of $i\partial\bar\partial f$.
\begin{lem}
If $b_2=0$, then $H=b_0 F+b_1\psi$ and
\begin{align}\label{w2p estimates: upper bound}
LHS_2\leq C\int_X[ u^p+u^{p-1} e^{-H} +u^{p-1} e^{-H} v ] \omega_\varphi^n.
\end{align}
\end{lem}
\begin{proof}
Taking $b_2=0$ in \lemref{W2p II inequality}, we get
\begin{align*}
RHS_2= \frac{b_0}{b_0-1}\int_Xu^{p-1} e^{-H} \tilde\triangle f \omega_\varphi^n
+\frac{b_1}{b_0-1}\int_X u^{p-1} e^{-H}v \omega_\varphi^n.
\end{align*}
We make use of the particular property of $i\partial\bar\partial f$ in the degenerate situation as shown in \lemref{nef tilde f}, i.e. it is bounded above. So, we obtain that
\begin{align*}
RHS_2\leq \frac{b_0 n \sup_{(X,\tilde\omega_t)} i\partial\bar\partial f }{b_0-1}\int_Xu^{p-1} e^{-H} \omega_\varphi^n+ \frac{b_1}{b_0-1}\int_Xu^{p-1} e^{-H} v \omega_\varphi^n.
\end{align*}
\end{proof}
\begin{cor}We further take $K=0$ and rewrite \eqref{w2p estimates: upper bound} in the form
\begin{align}\label{w2p estimates: upper bound cor}
LHS_2=\int_X v^{p+\frac{1}{n-1}} \mu
\leq C\int_X (v^p+v^{p-1}) e^{\frac{\tilde F}{n-1}}\mu
\end{align}
where we denote
\begin{align}\label{W2p mu}
L:=-pH+\tilde F-\frac{\tilde F}{n-1},\quad \mu=e^L\tilde\omega_t^n.
\end{align}
\end{cor}
We further treat the terms on the right hand side.
\begin{prop}
\begin{align}\label{W2p semi positive LHSRHS 1}
\int_X v^{p+\frac{1}{n-1}} \mu
\leq C\int_X v^{p-1} e^h \mu,\quad e^h:=e^{\frac{n}{n-1}\sigma_s\phi_E+\frac{-f}{n-1}},
\end{align}
where the constant $C$ depends on
\begin{align*}
\sup_X(F-\sigma_s\phi_E),\quad A_\theta,\quad A_R,\quad b_0(p),\quad b_1, \quad\sup_{(X,\tilde\omega_t)} i\partial\bar\partial f.
\end{align*}
\end{prop}
\begin{proof}
Applying Young's inequality with $a=\frac{n}{n-1}$ and $b=n$, we have
\begin{align*}
&\int_X v^p e^{\frac{\tilde F}{n-1}} \mu
=\int_X v^{(p+\frac{1}{n-1})\frac{1}{a}} v^{\frac{p}{b}-\frac{1}{(n-1)a}}e^{\frac{\tilde F}{n-1}} \mu\\
&\leq \tau LHS_2+\int_X v^{p-\frac{b}{(n-1)a}}e^{\frac{b\tilde F}{n-1}} \mu
=\tau LHS_2+\int_X v^{p-1}e^{\frac{n\tilde F}{n-1}} \mu.
\end{align*}
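The exponent bookkeeping in the splitting of $v^p$ above is consistent because $a=\frac{n}{n-1}$ and $b=n$ are conjugate exponents:

```latex
\begin{align*}
\frac{1}{a}+\frac{1}{b}=\frac{n-1}{n}+\frac{1}{n}=1,
\qquad
\Big(p+\frac{1}{n-1}\Big)\frac{1}{a}+\frac{p}{b}-\frac{1}{(n-1)a}
=\frac{p(n-1)+1}{n}+\frac{p}{n}-\frac{1}{n}=p,
\qquad
\frac{b}{(n-1)a}=1.
\end{align*}
```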
Inserting into \eqref{w2p estimates: upper bound cor}, we get
\begin{align*}
LHS_2
\leq C[\int_X v^{p-1} \cdot e^{\frac{\tilde F}{n-1}}\mu+\tau LHS_2+\int_X v^{p-1}e^{\frac{n\tilde F}{n-1}} \mu].
\end{align*}
Choosing sufficiently small $\tau$, we obtain
\begin{align*}
LHS_2
\leq C\int_X v^{p-1} ( e^{\frac{\tilde F}{n-1}}+e^{\frac{n\tilde F}{n-1}}) \mu.
\end{align*}
Recall the volume ratio estimate from \eqref{Gradient estimate: volume ratio bound},
$
C^{-1} e^{\sigma_i\phi_E-f} \leq e^{\tilde F}\leq C e^{\sigma_s\phi_E-f}
$ and $e^{-f}$ is of the order $S_\epsilon^{\beta-1}$.
Comparing the weights $e^{\frac{n\tilde F}{n-1}} $ and $e^{\frac{\tilde F}{n-1}} $, we observe that both of them are bounded by $e^h$ up to a constant. Thus we obtain \eqref{W2p semi positive LHSRHS 1}.
\end{proof}
Now we are ready to derive the $W^{2,p}$-estimates from \eqref{W2p semi positive LHSRHS 1}.
We first compute the weight $\mu=e^L$.
\begin{lem}\label{W2p eL}We have
\begin{align*}
&L\leq[pb_1-\sigma_i(pb_0-\frac{n-2}{n-1})]\phi_E-\frac{n-2}{n-1}f+C,\\
&L\geq[pb_1-\sigma_s(pb_0-\frac{n-2}{n-1})]\phi_E-\frac{n-2}{n-1}f-C.
\end{align*}
The constant $C$ depends on $\|\varphi\|_\infty$, $\sup_X(F-\sigma_s\phi_E)$, $\inf_X(F-\sigma_i\phi_E)$.
\end{lem}
\begin{proof}
We compute that
\begin{align*}
L&=-pb_0 F-pb_1\psi+\frac{n-2}{n-1}\tilde F\\
&=(-pb_0+\frac{n-2}{n-1})F-\frac{n-2}{n-1}f-pb_1\varphi+pb_1\phi_E.
\end{align*}
Making use of the bound of $F$ namely,
\begin{align*}
\sigma_i\phi_E-C\leq F\leq \sigma_s\phi_E+C
\end{align*}
and the bound of $\varphi$ from \thmref{L infty estimates Singular equation}, we obtain the bound of $L$.
\end{proof}
Then we estimate $LHS_2$.
\begin{prop}
We choose
\begin{align}\label{W2p estimate b1 2}
b_1>(b_0-\frac{n-2}{p(n-1)})\sigma_i-(1+\frac{1}{p(n-1)})\sigma_s-\frac{2n}{p}.
\end{align}
Then
\begin{align*}
\int_X v^{p+\frac{1}{n-1}} \mu
\leq C.
\end{align*}
\end{prop}
\begin{proof}
We apply the H\"older inequality with $a=\frac{p+\frac{1}{n-1}}{p-1}$ to \eqref{W2p semi positive LHSRHS 1}
\begin{align*}
\int_X v^{p+\frac{1}{n-1}} \mu
\leq C\int_X v^{p-1} e^h \mu
\leq C(\int_X v^{p+\frac{1}{n-1}}\mu)^{\frac{1}{a}}
(\int_X e^{\frac{ah}{a-1}}\mu)^{1-\frac{1}{a}}.
\end{align*}
We now estimate the integral $\int_X e^{\frac{ah}{a-1}}\mu$.
We calculate that $\frac{a}{a-1}=\frac{p(n-1)+1}{n}$. Then we insert $h$ from \eqref{W2p semi positive LHSRHS 1} and the estimate of $\mu$ \eqref{W2p mu} from \lemref{W2p eL} into the last integrand
\begin{align*}
e^{\frac{ah}{a-1}}\mu=e^{\frac{p(n-1)+1}{n-1}\sigma_s\phi_E+\frac{-f}{n-1}\frac{p(n-1)+1}{n}}e^L\tilde\omega_t^n.
\end{align*}
We write
\begin{align*}e^{\frac{ah}{a-1}}\mu\leq Ce^{k_1\phi_E+k_2(-f)}\tilde\omega_t^n
\end{align*} and compare the coefficients
\begin{align*}
&k_1=pb_1-(pb_0-\frac{n-2}{n-1})\sigma_i+(p+\frac{1}{n-1})\sigma_s,\\
&k_2=\frac{p}{n}+\frac{1}{(n-1)n}+\frac{n-2}{n-1}.
\end{align*}
Since $k_2>0$, $e^{k_2(-f)}$ is bounded above. Also, $e^{k_1\phi_E}$ is integrable if $k_1+2n>0$.
\end{proof}
\begin{rem}
We further expand $k_1$,
\begin{align*}
k_1=pb_1-(pb_0-\frac{n-2}{n-1})C_u+(p+\frac{1}{n-1})C_l-(pb_0-\frac{n-3}{n-1}+p)\tau.
\end{align*}
Since $\tau$ could be very small, it is sufficient to ask $$pb_1-(pb_0-\frac{n-2}{n-1})C_u+(p+\frac{1}{n-1})C_l\geq 0.$$
Choosing large $b_1$ as in Proposition \ref{W2p estimate: Differential inequality prop}, for example,
$$
b_1= C_ub_0-\frac{n}{n-1}C_l+C_{1.1}+1,
$$ we also obtain
\begin{align*}
&k_1\geq -\tau (pb_0-\frac{n-3}{n-1}+p).
\end{align*}
Hence, $e^{k_1\phi_E}$ is integrable once $\tau$ is small enough.
\end{rem}
Therefore, we complete the proof of the $W^{2,p}$-estimates as follows.
\begin{proof}[Proof of \thmref{w2pestimates degenerate Singular equation}]
We set $$\tilde L=k_1\phi_E+k_2(-f),$$ where $k_1,k_2$ will be determined in the following argument.
The H\"older inequality with $a=\frac{p+\frac{1}{n-1}}{p}$ gives
\begin{align*}
\int_X v^{p} e^{\tilde L} \tilde\omega^n_t
= \int_X v^{p} e^{\frac{L}{a}} e^{\tilde L-\frac{L}{a}}\tilde\omega^n_t
\leq (\int_X v^{p+\frac{1}{n-1}} e^L \tilde\omega^n_t)^{\frac{1}{a}}(\int_X e^{\frac{a\tilde L-L}{a-1}} \tilde\omega^n_t)^{\frac{a-1}{a}}.
\end{align*}
By \lemref{W2p eL}, we get
\begin{align*}
-L&\leq [-pb_1+\sigma_s(pb_0-\frac{n-2}{n-1})]\phi_E+\frac{n-2}{n-1}f+C.
\end{align*}
In order to guarantee that the last integral is finite, we need
\begin{align*}
&\frac{1}{a-1}[ak_1-pb_1+\sigma_s(pb_0-\frac{n-2}{n-1})]+2n>0,\\
&\frac{1}{a-1}(ak_2-\frac{n-2}{n-1})+2n>0.
\end{align*}
We calculate that $\frac{1}{a-1}=p(n-1)$. These are exactly the hypotheses in \thmref{w2pestimates degenerate Singular equation}.
\end{proof}
\bibliographystyle{abbrv}
|
1,941,325,221,136 | arxiv | \section{Introduction}
Event detection (ED) systems extract events of specific types from the given text.
Traditionally, researchers use pipeline approaches~\cite{ahn-2006-stages} where a trigger identification (TI) system is used to identify event triggers in a sentence and then a trigger classifier (TC) is used to find the event types of extracted triggers.
Such a framework makes the task easy to conduct but ignores the interaction and correlation between the two subtasks, being susceptible to cascading errors.
In the last few years, several neural network-based models were proposed to jointly identify triggers and classify event types from a sentence~\cite{chen-etal-2015-event,nguyen-grishman-2015-event,DBLP:conf/aaai/NguyenG18,liu-etal-2018-jointly,yan-etal-2019-event,cui2020EEGCN,cui2021LHGAT}.
These models have achieved promising performance and proved the effectiveness of solving ED in the joint framework.
But they almost all followed the supervised learning paradigm and depended on large-scale human-annotated datasets, while new event types emerge every day and most of them suffer from a lack of sufficient annotated data.
With such insufficient resources, existing joint models cannot recognize novel event types from only a few samples, i.e., Few-Shot Event Detection (FSED).
\begin{figure}[!t]
\centering
\includegraphics[scale=0.43]{table.pdf}
\caption{An example from FewEvent dataset revealing the trigger discrepancy. ``[$\cdot$]'' marks the event trigger.}
\label{fig:example}
\end{figure}
One intuitive way to solve this problem is to first identify event triggers in the conventional way and then classify the event types based on few-shot learning~\cite{vinyals2016matching,snell2017prototypical,DBLP:conf/cvpr/SungYZXTH18}; these two subtasks can be trained jointly via parameter sharing.
Such an identify-then-classify paradigm~\cite{DBLP:conf/wsdm/DengZKZZC20} seems convincing because TI aims to recognize triggers and does not need to adapt to novel classes, so we only need to solve TC in the few-shot manner.
Unfortunately, our preliminary experiments reveal that TI tends to struggle when recognizing triggers of novel event types, because novel events usually contain completely different triggers that are semantically distinct from those of known events, i.e., the \textbf{trigger discrepancy} problem.
Figure \ref{fig:example} gives an example: the trigger ``e-mail'' occurs only in the event \textit{E-Mail} but not in \textit{Marry}, and the triggers of the two events have disparate contexts.
And experiments on FewEvent (a benchmark dataset for FSED) show that 59.21\% triggers in the test set do not trigger any events in the training set and the F1 score of TI with the SOTA TI model BERT-tagger~\cite{yang-etal-2019-exploring} is only 31.06\%.
Thus, the performance of the identify-then-classify paradigm will be limited by the TI part due to the cascading errors.
In this paper, we present a new unified method to solve FSED.
Specifically, we convert this task to a sequence labeling problem and design a double-part tagging scheme using trigger and event parts to describe the features of each word in a sentence.
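As a concrete illustration of such a scheme (the exact tag inventory is not spelled out here, so the composite tags below — a B/I/O trigger part combined with an event-type part — are a hypothetical instantiation, not the paper's exact label set):

```python
# Hypothetical double-part tags: the trigger part (B/I/O) marks trigger
# boundaries, while the event part names the event type of the trigger.
sentence = ["He", "e-mailed", "his", "brother", "yesterday"]
tags = ["O", "B-E-Mail", "O", "O", "O"]  # "e-mailed" triggers an E-Mail event

# One composite label per word, so trigger identification and event
# classification are decided jointly by a single sequence labeler.
assert len(sentence) == len(tags)
triggers = [(w, t.split("-", 1)[1]) for w, t in zip(sentence, tags) if t != "O"]
print(triggers)  # [('e-mailed', 'E-Mail')]
```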
The key to the sequence labeling framework is to model the dependency between labels.
Conditional Random Field (CRF) is a popular choice to capture such label dependency by learning transition scores over a fixed label space on the training dataset.
Nevertheless, in FSED, CRF cannot be applied directly due to the \textbf{label discrepancy} problem: the label space of the test set does not overlap with that of the training set, since FSED aims to recognize novel event types.
Therefore, the learned transition scores of CRF from the training set do not model the dependency of the novel labels in the test set.
To address the label discrepancy problem, we propose \textbf{P}rototypical \textbf{A}mortized \textbf{C}onditional \textbf{R}andom \textbf{F}ield (PA-CRF), which approximates the transition scores based on the label prototypes~\cite{snell2017prototypical} instead of learning by optimization.
Specifically, we first apply the self-attention mechanism to capture the dependency information between labels and then map the label prototype pairs to the corresponding transition scores.
In this way, PA-CRF can produce label-specific transition scores based on the few supportive samples, which can adapt to arbitrary novel event types.
However, predicting the transition score as a single fixed value amounts to point estimation, which usually requires a large amount of annotated data to be accurate.
Estimated from a handful of samples, the transition scores may suffer from statistical uncertainty due to random fluctuations in the scant data.
To alleviate this issue,
inspired by variational inference~\cite{DBLP:journals/corr/KingmaW13,NEURIPS2018_e1021d43,gordon2018metalearning},
we treat the transition score as the random variable and utilize the Gaussian distribution to approximate its distribution to model the uncertainty.
Thus, our PA-CRF is to estimate the parameters of the Gaussian distribution rather than the transition scores directly, i.e., in the amortized manner~\cite{DBLP:journals/corr/KingmaW13,gordon2018metalearning}.
Probabilistic inference~\cite{gordon2018metalearning} is then employed over the Gaussian distribution to make the inference robust by taking possible perturbations of the transition scores into account, since these perturbations are learned in a way that coherently explains the uncertainty of the samples.
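A minimal sketch of this amortized estimation idea follows. It is an illustrative reconstruction, not the authors' implementation: the pair encoder is reduced to a single linear map (the paper uses self-attention over label prototypes), and all names (`transition_distributions`, `w_mu`, `w_sigma`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def transition_distributions(prototypes, w_mu, w_sigma):
    """Map every ordered pair of label prototypes to the mean and standard
    deviation of a Gaussian over the corresponding transition score."""
    k, d = prototypes.shape
    # All ordered prototype pairs, shape (k*k, 2d).
    pairs = np.concatenate(
        [np.repeat(prototypes, k, axis=0), np.tile(prototypes, (k, 1))], axis=1)
    mu = (pairs @ w_mu).reshape(k, k)                         # transition means
    sigma = np.logaddexp(0.0, pairs @ w_sigma).reshape(k, k)  # softplus keeps std > 0
    return mu, sigma

# Toy setup: 3 labels (e.g. O / B-Event / I-Event) with 4-dim prototypes,
# computed in practice as mean support embeddings (prototypical networks).
prototypes = rng.normal(size=(3, 4))
w_mu, w_sigma = rng.normal(size=8), rng.normal(size=8)
mu, sigma = transition_distributions(prototypes, w_mu, w_sigma)
assert mu.shape == (3, 3) and sigma.shape == (3, 3) and (sigma > 0).all()
```

Because the scores are produced from the support-set prototypes rather than learned as free parameters, the same mapping yields label-specific transitions for any novel event types at test time.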
To summarize, our contributions are as follows:
\begin{itemize}
\item We devise a tagging-based unified model for FSED. To the best of our knowledge, we are the first to solve this task in a unified manner, free from the cascading errors.
\item We propose a novel model, PA-CRF, which estimates the distributions of transition scores for modeling the specific label dependency in the few-shot sequence labeling setting.
\item Experimental results show that our proposed PA-CRF outperforms other competitive baselines on the FewEvent dataset. Further analyses show the effectiveness of our unified model and the limitation of the identify-then-classify models.
\end{itemize}
\section{Related Work}
\paragraph{Few-shot Event Detection}
Event Detection (ED) aims to recognize the specific type of events in a sentence.
In recent years, various neural-based models have been proposed and achieved promising performance in ED~\cite{chen-etal-2015-event,nguyen-grishman-2015-event,DBLP:conf/aaai/NguyenG18,liu-etal-2018-jointly,yan-etal-2019-event,cui2020EEGCN}.
\citeauthor{chen-etal-2015-event}~\shortcite{chen-etal-2015-event} and \citeauthor{nguyen-grishman-2015-event}~\shortcite{nguyen-grishman-2015-event} proposed the convolution architecture to capture the semantic information in the sentence.
\citeauthor{nguyen-etal-2016-joint-event}~\shortcite{nguyen-etal-2016-joint-event} introduced the recurrent neural network to model the sequence contextual information of words.
Recently, GCN-based models~\cite{DBLP:conf/aaai/NguyenG18,liu-etal-2018-jointly,yan-etal-2019-event,cui2020EEGCN} have been proposed to exploit the syntactic dependency information and achieved state-of-the-art performance.
However, all these models are data-hungry, which dramatically limits their usability and deployability in real-world scenarios.
Recently, there has been an increasing research interest in solving event detection in the few-shot scenarios~\cite{DBLP:conf/wsdm/DengZKZZC20,DBLP:conf/pakdd/LaiDN20,DBLP:journals/corr/abs-2006-10093}, by exploiting the Few-Shot Learning ~\cite{vinyals2016matching,snell2017prototypical,DBLP:conf/icml/FinnAL17,DBLP:conf/cvpr/SungYZXTH18,cong2020DaFeC}.
\citeauthor{DBLP:conf/pakdd/LaiDN20}~\shortcite{DBLP:conf/pakdd/LaiDN20} proposed LoLoss, which splits part of the support set into an auxiliary query set to train the model.
\citeauthor{DBLP:journals/corr/abs-2006-10093}~\shortcite{DBLP:journals/corr/abs-2006-10093} introduced two regularization matching losses to improve the performance of models.
These works focus only on few-shot trigger classification, i.e., classifying the event type of an already-annotated trigger according to its context based on few samples.
This setting is unrealistic, as it assumes that triggers of novel events can be predicted in advance by some existing toolkit.
\citeauthor{DBLP:conf/wsdm/DengZKZZC20}~\shortcite{DBLP:conf/wsdm/DengZKZZC20} first proposed the benchmark dataset, \textit{FewEvent}, for FSED and designed the DMBPN based on the dynamic memory networks.
They train a conventional trigger identifier and a few-shot trigger classifier jointly and evaluate the model performance in the identify-then-classify paradigm.
Moreover, our preliminary experiments reveal that the conventional trigger identification model tends to struggle when recognizing triggers of novel event types because of the trigger discrepancy between different event types.
Thus, errors of the trigger identifier might be propagated to the event classification.
Different from the previous identify-then-classify framework, for the first time, we solve Few-Shot Event Detection with two subtasks in a unified manner.
\paragraph{Few-shot Sequence Labeling}
In recent years, several works~\cite{warmprotoz,hou-etal-2020-shot,DBLP:conf/emnlp/YangK20} have been proposed to solve the few-shot named entity recognition using sequence labeling methods.
\citeauthor{warmprotoz}~\shortcite{warmprotoz} applied the vanilla CRF in the few-shot scenario directly.
\citeauthor{hou-etal-2020-shot}~\shortcite{hou-etal-2020-shot} proposed a collapsed dependency transfer mechanism (CDT) into CRF, which learns label dependency patterns of a set of task-agnostic abstract labels and utilizes these patterns as transition scores for novel labels.
\citeauthor{DBLP:conf/emnlp/YangK20}~\shortcite{DBLP:conf/emnlp/YangK20} train their model on the training data in a standard supervised manner and then use the prototypical networks and the CDT for prediction in the inference phase.
Different from these methods learning the transition scores by optimization, we build a network to generate the transition scores based on the label prototypes instead.
In this way, we can generate exact label-specific transition scores of arbitrary novel event types to achieve adaptation ability.
And we further introduce the Gaussian distribution to estimate the data uncertainty.
Experiments prove the effectiveness of our method over the previous methods.
\section{Problem Formulation}
We convert event detection to a sequence labeling task.
Each word is assigned a label that contributes to detecting the events.
Labels consist of two parts: the word position in the trigger and the event type.
We use the ``BI'' (Begin, Inside) signs to represent the position information of a word in the event trigger.
The event type information is obtained from a predefined set of events.
Label ``O'' (Other) means that the corresponding word is independent of the target events.
Thus, the total number of labels is $2N+1$ ($N$ for \textit{B-EventType}, $N$ for \textit{I-EventType}, and an additional \textit{O} label), where $N$ is the number of predefined event types.
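As a concrete illustration of this schema, the label set for $N$ event types can be constructed as follows (a minimal sketch, not taken from any released code; the event type names are invented):

```python
# Build the 2N+1 tagging label set for N event types
# under the "BI + O" schema described above.
def build_label_set(event_types):
    labels = ["O"]  # one shared label for non-trigger words
    for t in event_types:
        labels.append(f"B-{t}")  # trigger-beginning word of event type t
        labels.append(f"I-{t}")  # trigger-inside word of event type t
    return labels

labels = build_label_set(["Attack", "Transport"])  # N = 2
# -> ['O', 'B-Attack', 'I-Attack', 'B-Transport', 'I-Transport'], i.e. 2*2+1 = 5 labels
```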
Furthermore, we formulate the Few-Shot Event Detection in the typical $N$-way-$K$-shot paradigm.
Let $\bm{x} = \{w_1, w_2, \ldots, w_n\}$ denote an $n$-word sequence, and $\bm{y} = \{ y_1, y_2, \ldots, y_n\}$ denote the label sequence of the $\bm{x}$.
Given a \textit{support set} $\mathcal{S} = \{ (\bm{x}^{(i)}, \bm{y}^{(i)}) \}_{i=1}^{N \times K}$ which contains $N$ event types, each with only $K$ instances, FSED aims to predict the labels of an unlabeled \textit{query set} $\mathcal{Q}$ based on the \textit{support set} $\mathcal{S}$.
Formally, an $\{ \mathcal{S}, \mathcal{Q} \}$ pair is called an $N$-way-$K$-shot task $\mathcal{T}$.
There are two datasets consisting of sets of such tasks: $\mathcal{D}_{train} = \{ \mathcal{T}^{(i)} \}_{i=1}^{M_{train}}$ and $\mathcal{D}_{test} = \{ \mathcal{T}^{(i)} \}_{i=1}^{M_{test}}$, where $M_{train}$ and $M_{test}$ denote the number of tasks in the two datasets respectively.
As the name suggests, $\mathcal{D}_{train}$ is used to train models in the training phase while $\mathcal{D}_{test}$ is used for evaluation.
Note that the two datasets have their own event types, i.e., their label spaces are disjoint.
\section{Methodology}
\begin{figure*}[!t]
\centering
\includegraphics[width=1\linewidth]{PA-CRF.pdf}
\caption{Architecture of our proposed PA-CRF. It consists of three modules: a) Emission Module calculates the emission scores for the query instance based on the prototypes derived from the support set. b) Transition Module generates the Gaussian distributed transition scores with respect to prototypes. c) Decoding Module exploits the emission scores and approximated Gaussian distributed transition scores to decode the predicted label sequence with the Monte Carlo Sampling.}
\label{fig:model}
\end{figure*}
\subsection{Overview}
As described above, we formulate FSED as the few-shot sequence labeling task with interdependent labels.
Following the widely used CRF framework, we propose a novel PA-CRF model to model such label dependency in the few-shot setting, and decode the best-predicted label sequence.
Our PA-CRF contains three modules:
1) Emission Module: It first computes the prototype of each label based on the support set, and then calculates the similarity between prototypes and each token in the query set as the emission scores.
2) Transition Module: It exploits the prototypes to generate the parameters of Gaussian distribution of the transition scores for decoding.
3) Decoding Module: Based on the emission scores and Gaussian distributed transition scores, the Decoding Module calculates the probabilities of possible label sequences for the given query set and decodes the predicted label sequence.
Figure \ref{fig:model} gives an illustration of PA-CRF.
We detail each component from the bottom to the top.
\subsection{Emission Module}
The Emission Module assigns the emission scores to each token of sentences in the query set $\mathcal{Q}$ with regard to each label based on the support set $\mathcal{S}$.
\subsubsection{Base Encoder}
Base Encoder aims to embed tokens in both support set $\mathcal{S}$ and query set $\mathcal{Q}$ into real-value embedding vectors to capture the semantic information.
Since BERT~\cite{DBLP:conf/naacl/DevlinCLT19} shows its advanced ability to capture the sequence information and has been widely used in NLP tasks recently, we use it as the backbone.
Given an input word sequence $\bm{x}$, BERT first maps all tokens into hidden embedding representations.
We denote this operation as:
\begin{equation}
\small
\{ \mathbf{h}_1, \mathbf{h}_2, \ldots, \mathbf{h}_n \} = {\rm BERT}(\bm{x})
\end{equation}
where $\mathbf{h}_i \in \mathbb{R}^{d_h}$ refers to the hidden representation of token $w_i$, $d_h$ is the dimension of the hidden representation.
\subsubsection{Prototype Layer}
Prototype Layer is to derive the prototypes of each label from the support set $\mathcal{S}$.
As described in the problem formulation, we use the BIO schema to annotate event triggers, so $N$ event types correspond to $2N+1$ labels.
Thus we obtain $2N+1$ prototypes.
Following the previous work~\cite{snell2017prototypical}, we calculate the prototype of each label by averaging all the word representations with that label in the support set $\mathcal{S}$:
\begin{equation}
\small
\mathbf{c}_i = {\frac{1}{|\mathcal{S}(y_i)|} \sum_{w \in \mathcal{S}(y_i)}} \mathbf{h}, \quad i = 1, 2, \ldots, 2N+1,
\end{equation}
where $\mathbf{c}_i$ denotes the prototype for label $y_i$, $\mathcal{S}(y_i)$ refers to the token set containing all words in the support set $\mathcal{S}$ with label $y_i$, $\mathbf{h}$ represents the corresponding hidden representation of token $w$, and $|\cdot|$ is the number of set elements.
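The prototype computation above can be sketched as follows (an illustrative stand-in: random vectors replace the BERT hidden representations, and the helper name is ours):

```python
import numpy as np

def compute_prototypes(hidden, labels, num_labels):
    """Average the hidden vectors of all support tokens carrying each label."""
    # hidden: (num_tokens, d_h); labels: (num_tokens,) integer label ids
    protos = np.zeros((num_labels, hidden.shape[1]))
    for i in range(num_labels):
        mask = labels == i
        if mask.any():  # guard against labels absent from the support set
            protos[i] = hidden[mask].mean(axis=0)
    return protos

rng = np.random.default_rng(0)
h = rng.normal(size=(6, 4))        # six support tokens, d_h = 4
y = np.array([0, 1, 1, 2, 0, 2])   # their label ids
c = compute_prototypes(h, y, num_labels=3)  # c[i] is the prototype of label i
```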
\subsubsection{Emission Scorer}
Emission Scorer aims to calculate the emission score for each token in the query set $\mathcal{Q}$.
The emission scores are calculated according to the similarities between tokens and prototypes.
The computation of the emission score of the label $y_i$ for the word $w_j$ is defined as:
\begin{equation}
\small
f_{E}(y_i, w_j, \mathcal{S}) = d( \mathbf{c}_i, \mathbf{h}_j),
\end{equation}
where $d(\cdot, \cdot)$ is the similarity function.
In practice, we choose the dot product operation to measure the similarity.
Finally, given a word sequence $\bm{x}$, the emission score of the whole sentence with its corresponding ground-truth label sequence $\bm{y}$ is computed as:
\begin{equation}
\small
{\rm EMIT}(\bm{y}, \bm{x}, \mathcal{S}) = \sum_{i=1}^n f_{E}(y_i, w_i, \mathcal{S}).
\label{eqn:emit}
\end{equation}
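A minimal sketch of the emission scorer using the dot-product similarity chosen above (shapes and names are illustrative):

```python
import numpy as np

def emission_scores(query_hidden, prototypes):
    # query_hidden: (n, d_h); prototypes: (2N+1, d_h)
    # dot-product similarity between every token and every label prototype
    return query_hidden @ prototypes.T  # (n, 2N+1)

rng = np.random.default_rng(1)
H = rng.normal(size=(5, 4))   # a 5-token query sentence
C = rng.normal(size=(3, 4))   # 2N+1 = 3 label prototypes
E = emission_scores(H, C)
# The sentence-level score EMIT sums the per-token emission of its label sequence:
y = [0, 1, 2, 1, 0]
emit = sum(E[i, yi] for i, yi in enumerate(y))
```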
\subsection{Transition Module}
In vanilla CRF, the transition scores are learnable parameters and optimized from large-scale data to model the label dependency.
However, in the few-shot scenarios, the learned transition scores cannot adapt to the novel label set due to the disjoint label space.
To overcome this problem, we use neural networks to generate the transition scores based on the label prototypes instead of learning transition scores by optimization to achieve adaptation ability.
In this case, one remaining problem is that generating transition scores from only a few support instances, which are subject to random data fluctuation, would cause uncertain estimation and result in wrong inference.
To model the uncertainty, we treat the transition score as a random variable and use the Gaussian distribution to approximate its distribution.
Specifically, the Transition Module is to generate the distributional parameters (mean and variance) of transition scores based on the label prototypes.
It consists of two layers: 1) Prototype Interaction Layer and 2) Distribution Approximator.
Details of each layer are listed in the following parts.
\subsubsection{Prototype Interaction Layer}
Since transition scores model the dependency between labels, individual prototypes, which carry little dependency information, are insufficient for generating transition scores.
Thus, we propose a Prototype Interaction Layer which exploits the self-attention mechanism to capture the dependency between labels.
We first calculate the attention scores of each prototype $\bm{c}_i$ with others:
\begin{equation}
\small
\alpha_{ij} = \frac{{\rm exp}(\bm{c}^{(q)}_i \cdot \bm{c}^{(k)}_j)}{\sum_{m=1}^{2N+1} {\rm exp}(\bm{c}^{(q)}_i \cdot \bm{c}^{(k)}_m)},
\end{equation}
where $\bm{c}^{(q)}_i$ and $\bm{c}^{(k)}_i$ are transformed from $\bm{c}_i$ by two linear layers respectively:
\begin{equation}
\small
\begin{aligned}
\bm{c}^{(q)}_i &= W^{(q)} \bm{c}_i + b^{(q)} \\
\bm{c}^{(k)}_i &= W^{(k)} \bm{c}_i + b^{(k)}
\end{aligned}
\end{equation}
Getting the attention scores, the prototype $\tilde{\bm{c}}_i$ with dependency information is calculated as follows:
\begin{equation}
\small
\tilde{\bm{c}}_i = \sum_{j=1}^{2N+1} \alpha_{ij} \bm{c}^{(v)}_j,
\end{equation}
where $\bm{c}^{(v)}_i$ is also transformed linearly from $\bm{c}_i$:
\begin{equation}
\small
\bm{c}^{(v)}_i = W^{(v)} \bm{c}_i + b^{(v)}
\end{equation}
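The interaction layer amounts to single-head self-attention over the $2N+1$ prototypes; here is a sketch with random stand-ins for the learned projections $W^{(q)}, W^{(k)}, W^{(v)}$ (biases omitted for brevity):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # stabilize before exponentiating
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def interact(protos, Wq, Wk, Wv):
    q, k, v = protos @ Wq, protos @ Wk, protos @ Wv
    alpha = softmax(q @ k.T, axis=-1)  # alpha[i, j] over all 2N+1 prototypes
    return alpha @ v                   # dependency-aware prototypes c~_i

rng = np.random.default_rng(2)
C = rng.normal(size=(5, 4))            # 2N+1 = 5 prototypes, dim 4
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
C_tilde = interact(C, Wq, Wk, Wv)
```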
\subsubsection{Distribution Approximator}
This module aims to generate the mean and variance of Gaussian distributions based on the prototypes with dependency information.
We first denote the transition score matrix for all label pairs as $T_r \in \mathbb{R}^{(2N+1) \times (2N+1)}$, and denote the $i$-th row, $j$-th column element of $T_r$ as $[T_r]_{ij}$, which refers to the transition score for label $y_i$ transiting to label $y_j$.
Treating $[T_r]_{ij}$ as a random variable, we use the Gaussian distribution $[\tilde{T_r}]_{ij} \sim \mathcal{N}(\mu_{ij}, \sigma^2_{ij})$ to approximate $[T_r]_{ij}$, where $\mathcal{N} (\cdot, \cdot)$ refers to the Gaussian distribution.
To estimate the mean $\mu_{ij}$ and variance $\sigma^2_{ij}$ of $[\tilde{T_r}]_{ij}$, we concatenate the corresponding prototypes $\tilde{\bm{c}}_{i}$ and $\tilde{\bm{c}}_{j}$ and feed the result into two feed-forward neural networks respectively:
\small
\begin{align}
\mu_{ij} &= W^{(\mu)}[\tilde{\bm{c}}_i \| \tilde{\bm{c}}_j] + b^{(\mu)} \\
\sigma^2_{ij} &= {\rm exp} \left( W^{(\sigma^2)}[\tilde{\bm{c}}_i \| \tilde{\bm{c}}_j] + b^{(\sigma^2)} \right)
\end{align}
\normalsize
where $[\cdot \| \cdot]$ means the concatenation operation.
Given a label sequence $\bm{y}$, the transition score of the whole label sequence is approximated by:
\begin{equation}
\small
{\rm TRANS}(\bm{y}, \tilde{T_r}) = \sum_{i=1}^{n-1} [\tilde{T_r}]_{\mathbb{I}(y_i)\mathbb{I}(y_{i+1})}
\label{eqn:trans}
\end{equation}
where $\mathbb{I}(y_i)$ refers to the label index of $y_i$.
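A sketch of the approximator with random weight stand-ins; note the exp activation guarantees a positive variance, matching the formulas above:

```python
import numpy as np

def transition_params(protos, Wmu, bmu, Wsig, bsig):
    """Map each prototype pair [c_i || c_j] to the mean and variance
    of the Gaussian-distributed transition score from label i to j."""
    L, _ = protos.shape
    mu = np.zeros((L, L))
    var = np.zeros((L, L))
    for i in range(L):
        for j in range(L):
            pair = np.concatenate([protos[i], protos[j]])  # [c_i || c_j]
            mu[i, j] = pair @ Wmu + bmu
            var[i, j] = np.exp(pair @ Wsig + bsig)  # exp keeps variance > 0
    return mu, var

rng = np.random.default_rng(3)
C = rng.normal(size=(5, 4))                    # 2N+1 = 5 prototypes
Wmu, Wsig = rng.normal(size=8), rng.normal(size=8)
mu, var = transition_params(C, Wmu, 0.0, Wsig, 0.0)
```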
\subsection{Decoding Module}
Decoding Module derives the probabilities for a specific label sequence of the query set according to the emission scores and approximated Gaussian distributions of transition scores.
Since the approximated transition score follows a Gaussian distribution rather than being a single value, we denote the probability density function of the approximated transition score matrix as $q(\tilde{T_r} | \mathcal{S})$.
According to the Probabilistic Inference~\cite{gordon2018metalearning}, the probability of label sequence $\bm{y}$ of a word sequence $\bm{x}$ based on the support set $\mathcal{S}$ is calculated as:
\begin{equation}
\small
\begin{split}
P(\bm{y}|\bm{x}, \mathcal{S}) &= \int P(\bm{y}|\bm{x}, \mathcal{S}, \tilde{T_r}) q(\tilde{T_r} | \mathcal{S}) \mathrm{d}\tilde{T_r}
\end{split}
\end{equation}
Following the CRF algorithm, the probability can be calculated based on Equation \ref{eqn:emit} and Equation \ref{eqn:trans}:
\begin{equation}
\small
\begin{split}
& P(\bm{y}|\bm{x}, \mathcal{S}) = \\
& \int \frac{1}{Z} {\rm exp} \left( {\rm EMIT}(\bm{y}, \bm{x}, \mathcal{S}) + {\rm TRANS}(\bm{y}, \tilde{T_r}) \right) q(\tilde{T_r} | \mathcal{S}) \mathrm{d}\tilde{T_r}
\end{split}
\label{eqn:intergral}
\end{equation}
where
\begin{equation}
\small
Z = \sum_{\bm{y'} \in Y} {\rm exp} \left( {\rm EMIT}(\bm{y'}, \bm{x}, \mathcal{S}) + {\rm TRANS}(\bm{y'}, \tilde{T_r}) \right)
\end{equation}
and $Y$ refers to all possible label sequences.
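Although $Y$ contains exponentially many sequences, $Z$ can be computed exactly with the standard CRF forward algorithm in $O(nL^2)$ time, where $L = 2N+1$. A self-contained numerical sketch with illustrative scores:

```python
import numpy as np
from itertools import product

def log_partition(emit, trans):
    """Forward algorithm: log Z for emission scores (n, L) and transitions (L, L)."""
    alpha = emit[0].copy()
    for t in range(1, len(emit)):
        m = alpha[:, None] + trans                 # m[prev, cur]
        # log-sum-exp over the previous label, per current label
        alpha = emit[t] + np.log(np.exp(m - m.max(0)).sum(0)) + m.max(0)
    return np.log(np.exp(alpha - alpha.max()).sum()) + alpha.max()

rng = np.random.default_rng(4)
E, T = rng.normal(size=(3, 2)), rng.normal(size=(2, 2))
# brute-force check: sum over all 2^3 label sequences
brute = np.log(sum(np.exp(sum(E[t, ys[t]] for t in range(3)) +
                          sum(T[ys[t], ys[t + 1]] for t in range(2)))
                   for ys in product(range(2), repeat=3)))
# log_partition(E, T) matches brute up to floating-point error
```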
In the training phase, we use negative log-likelihood loss as our objective function:
\begin{equation}
\small
\mathcal{L} = - \mathop{\mathbb{E}}\limits_{(\bm{x}, \bm{y}) \sim \mathcal{Q}} \left[ {\rm log}(P(\bm{y}|\bm{x}, \mathcal{S})) \right]
\end{equation}
Since the integral in Equation \ref{eqn:intergral} is intractable, in practice we use the Monte Carlo sampling technique~\cite{gordon2018metalearning} to approximate it.
To make the sampling process differentiable for optimization, we employ the reparameterization trick~\cite{DBLP:journals/corr/KingmaW13} for each transition score $[\tilde{T_r}]_{ij}$:
\begin{equation}
\small
[\tilde{T_r}]_{ij} = \mu_{ij} + \epsilon \sigma_{ij}, {\rm where} \ \epsilon \sim \mathcal{N}(0, 1)
\end{equation}
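Numerically, the trick keeps $\mu$ and $\sigma$ outside the random draw, so gradients can flow through them; a quick sketch:

```python
import numpy as np

# Reparameterized sampling of one transition score: all randomness lives
# in eps ~ N(0, 1), while mu and sigma stay differentiable parameters.
rng = np.random.default_rng(5)
mu, sigma = 1.5, 0.3
eps = rng.standard_normal(100_000)
samples = mu + eps * sigma
# empirical mean ~ 1.5 and empirical std ~ 0.3, as expected
```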
In the inference phase, the Viterbi algorithm~\cite{viterbi} is employed to decode the best-predicted label sequence for the query set.
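For completeness, a minimal Viterbi sketch over emission and (sampled) transition scores; it recovers the exact highest-scoring label sequence:

```python
import numpy as np
from itertools import product

def viterbi(emit, trans):
    """Return the argmax label sequence for emissions (n, L), transitions (L, L)."""
    n, L = emit.shape
    score = emit[0].copy()
    back = np.zeros((n, L), dtype=int)
    for t in range(1, n):
        m = score[:, None] + trans       # m[prev, cur]
        back[t] = m.argmax(axis=0)       # best previous label per current label
        score = emit[t] + m.max(axis=0)
    path = [int(score.argmax())]
    for t in range(n - 1, 0, -1):        # backtrack from the last position
        path.append(int(back[t][path[-1]]))
    return path[::-1]

rng = np.random.default_rng(6)
E, T = rng.normal(size=(4, 3)), rng.normal(size=(3, 3))
best = viterbi(E, T)
```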
\section{Experiment}
\subsection{Dataset}
We conduct experiments on the benchmark \textit{FewEvent} dataset introduced in previous work~\cite{DBLP:conf/wsdm/DengZKZZC20}, which is currently the largest few-shot dataset for event detection.
It contains 70,852 instances for 100 event types and each event type owns about 700 instances on average.
Since \citeauthor{DBLP:conf/wsdm/DengZKZZC20}~\shortcite{DBLP:conf/wsdm/DengZKZZC20} do not release their train/dev/test split, we re-split FewEvent in the same ratio as \citeauthor{DBLP:conf/wsdm/DengZKZZC20}~\shortcite{DBLP:conf/wsdm/DengZKZZC20}.
We use 80 event types as the training set, 10 event types as the dev set, and the rest 10 event types as the test set.
More statistics of FewEvent dataset are listed in Appendix~\ref{sec:dataset}.
\subsection{Evaluation}
We follow the evaluation metrics of previous event detection works~\cite{chen-etal-2015-event,liu-etal-2018-jointly,cui2020EEGCN}: an event trigger is marked as correct if and only if both its event type and its offsets in the sentence are correct.
We adopt the standard micro F1 score to evaluate the results and report the averages and standard deviations over 5 randomly initialized runs.
\subsection{Implementation Details}
We employ \textit{BERT-BASE-UNCASED}~\cite{DBLP:conf/naacl/DevlinCLT19} as the base encoder.
The maximum sentence length is set as 128.
Our model is trained using \textit{AdamW} optimizer with the learning rate of 1e-5.
All the hyper-parameters are tuned on the dev set manually.
In the training phase, we follow the widely used episodic training~\cite{vinyals2016matching} in few-shot learning, which mimics the $N$-way-$K$-shot scenario during training. In each episode, we randomly sample $N$ event types from the training set, and for each event type we randomly sample $K$ instances as the support set and another $M$ instances as the query set.
We train our model with 20,000 iterations on the training set and evaluate its performance with 3,000 iterations on the test set following the episodic paradigm.
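The episode sampling described above can be sketched as follows (function and variable names are ours, not from the paper's code):

```python
import random

def sample_episode(data, n_way, k_shot, m_query, rng):
    """data: dict mapping event type -> list of instances."""
    types = rng.sample(sorted(data), n_way)         # draw N event types
    support, query = [], []
    for t in types:
        inst = rng.sample(data[t], k_shot + m_query)
        support += [(x, t) for x in inst[:k_shot]]  # K support per type
        query += [(x, t) for x in inst[k_shot:]]    # M query per type
    return support, query

data = {f"type{i}": list(range(20)) for i in range(10)}
rng = random.Random(0)
S, Q = sample_episode(data, n_way=5, k_shot=5, m_query=2, rng=rng)
# len(S) == 25, len(Q) == 10
```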
We run all experiments using PyTorch 1.5.1 on the Nvidia Tesla T4 GPU, Intel(R) Xeon(R) Silver 4110 CPU with 256GB memory on Red Hat 4.8.3 OS.
\subsection{Baselines}
\begin{table*}[!tb]
\centering
\begin{tabular}{c|l|cccc}
\toprule
Paradigm & Model & 5-Way-5-Shot & 5-Way-10-Shot & 10-Way-5-Shot & 10-Way-10-Shot\\
\midrule
Fine-tuning & PLMEE & 4.43 $\pm$ 0.19 & 4.69 $\pm$ 0.85 & 2.52 $\pm$ 0.28 & 2.76 $\pm$ 0.55 \\
\midrule
\multirow{2}*{Separate} & LoLoss & 30.14 $\pm$ 0.30 & 30.91 $\pm$ 0.29 & 29.33 $\pm$ 0.40 & 30.08 $\pm$ 0.39 \\
& MatchLoss & 29.78 $\pm$ 0.14 & 30.75 $\pm$ 0.15 & 28.75 $\pm$ 0.23 & 29.59 $\pm$ 0.21 \\
\midrule
\multirow{3}*{Multi-task} & LoLoss & 31.51 $\pm$ 1.56 & 31.70 $\pm$ 1.21 & 30.46 $\pm$ 1.38 & 30.32 $\pm$ 0.89 \\
& MatchLoss & 30.44 $\pm$ 0.99 & 30.68 $\pm$ 0.78 & 28.97 $\pm$ 0.61 & 30.05 $\pm$ 0.93 \\
& DMBPN & 37.51 $\pm$ 2.60 & 38.14 $\pm$ 2.32 & 34.21 $\pm$ 1.45 & 35.31 $\pm$ 1.69 \\
\midrule
\multirow{8}*{Unified} & Match & 39.93 $\pm$ 1.67 & 46.02 $\pm$ 1.20 & 30.88 $\pm$ 1.08 & 35.91 $\pm$ 1.19 \\
& Proto & 50.11 $\pm$ 0.77 & 52.97 $\pm$ 0.95 & 43.51 $\pm$ 1.16 & 42.70 $\pm$ 0.98 \\
& Proto-Dot & 58.82 $\pm$ 0.88 & 61.01 $\pm$ 0.23 & 55.04 $\pm$ 1.62 & 58.78 $\pm$ 0.88 \\
& Relation & 28.91 $\pm$ 1.13 & 29.83 $\pm$ 0.78 & 18.49 $\pm$ 1.25 & 21.47 $\pm$ 1.40 \\
\cmidrule{2-6}
& Vanilla CRF & 59.01 $\pm$ 0.81 & 62.21 $\pm$ 1.94 & 56.00 $\pm$ 1.51 & 59.35 $\pm$ 1.09 \\
& CDT & \underline{59.30} $\pm$ 0.23 & \underline{62.77} $\pm$ 0.12 & \underline{56.41} $\pm$ 1.09 & \underline{59.44} $\pm$ 1.83 \\
& StructShot & 57.69 $\pm$ 0.91 & 61.54 $\pm$ 1.23 & 54.54 $\pm$ 0.95 & 57.14 $\pm$ 0.79 \\
\cmidrule{2-6}
& PA-CRF & \textbf{62.25}* $\pm$ 1.42 & \textbf{64.45}* $\pm$ 0.49 & \textbf{58.48}* $\pm$ 0.68 & \textbf{61.64}* $\pm$ 0.81\\
\bottomrule
\end{tabular}
\caption{F1 scores ($10^{-2}$) of different models on the FewEvent test set. Bold marks the highest number among all models, underline marks the second-highest number, and $\pm$ marks the standard deviation. * marks statistically significant improvements over the best baseline with $p < 0.01$ under a bootstrap test.}
\label{tab:main}
\end{table*}
To investigate the effectiveness of our proposed method, we compare it with a range of baselines and state-of-the-art models, which can be categorized into three classes: fine-tuning paradigm, identify-then-classify paradigm and unified paradigm.
\textbf{Fine-tuning paradigm} solves FSED in the standard supervised learning manner, i.e., pre-training on a large-scale dataset and fine-tuning on the handful of target data.
We adopt the state-of-the-art model of standard ED, \textbf{PLMEE}~\cite{yang-etal-2019-exploring}, for FSED directly.
\textbf{Identify-then-classify models} first perform trigger identification (named as TI) and then classify the event types based on the few-shot learning methods (named as FSTC).
We investigate two types of identify-then-classify paradigms: separate and multi-task.
For the separate models, the trigger identifier and few-shot trigger classifier are trained separately without parameter sharing.
We first exploit the state-of-the-art BERT-tagger for the TI task.
It uses BERT~\cite{DBLP:conf/naacl/DevlinCLT19} and a linear layer to tag the trigger in the sentence as a sequence labeling task.
Since TI just aims to recognize the occurrence of the trigger, the label set only contains three labels: \textit{O}, \textit{B-Trigger}, \textit{I-Trigger}.
For the FSTC task, we reimplement two FSTC models: \textbf{LoLoss}~\cite{DBLP:conf/pakdd/LaiDN20}, \textbf{MatchLoss}~\cite{DBLP:journals/corr/abs-2006-10093}.
In the multi-task models, we reimplement \textbf{DMBPN}~\cite{DBLP:conf/wsdm/DengZKZZC20} and replace its encoder with BERT for the fair comparison.
DMBPN combines a conventional trigger identification module and a few-shot trigger classification module by parameter sharing.
But in the inference phase, it detects the event trigger still in the identify-then-classify paradigm.
Additionally, we also provide the multi-task version of the LoLoss and MatchLoss which are trained jointly with BERT-tagger with shared BERT parameters.
\textbf{Unified models} perform few-shot event detection with a single model without task decomposition.
Since we are the first to solve this task in a unified way, there are no previous unified models to compare against.
For a comprehensive evaluation of our proposed PA-CRF model, we therefore construct two groups of variants of PA-CRF as baselines: non-CRF models and CRF-based models.
Non-CRF models use emission scores to predict via softmax and do not take the label dependency into consideration.
We implement four typical few-shot classifiers:
1) \textbf{Match}~\cite{vinyals2016matching} uses cosine function to measure the similarity,
2) \textbf{Proto}~\cite{snell2017prototypical} uses Euclidean Distance as the similarity metric,
3) \textbf{Proto-Dot} uses dot product to compute the similarity,
4) \textbf{Relation}~\cite{DBLP:conf/cvpr/SungYZXTH18} builds a two-layer neural network to measure the similarity.
Since CRF with the capacity of modeling label dependency is widely used in sequence labeling task, we implement three kinds of CRF-based models as our baselines:
1) \textbf{Vanilla CRF}~\cite{warmprotoz}: We adopt the vanilla CRF in the FSED task without considering the adaptation problem.
2) \textbf{CDT}~\cite{hou-etal-2020-shot}: As the SOTA of the few-shot NER task,
we re-implement it according to the official code and adapt it in the FSED task to replace our Transition Module.
3) \textbf{StructShot}~\cite{DBLP:conf/emnlp/YangK20}: It is also a few-shot NER model. It first pre-trains on the training set and utilizes the prototypical networks and the CDT for prediction based on the support set in the inference phase.
For the fair comparison, the emission module of these CRF-based baseline models is the same as our PA-CRF.
\subsection{Main Results}
Table \ref{tab:main} summarizes the results of our PA-CRF against other baseline models on the FewEvent test set.
\textbf{Comparison with fine-tuning model}
It is obvious that PLMEE performs poorly in all four few-shot settings and all few-shot-based methods outperform it by an absolute gap, which powerfully proves that conventional supervised methods are incapable of solving FSED.
\textbf{Comparison with identify-then-classify models}
(1) Most of the unified models (except Relation) outperform all identify-then-classify models, especially PA-CRF with a huge gap of about 30\%, proving the effectiveness of the unified framework.
(2) Compared with the separate paradigm, the multi-task paradigm improves performance, but it still cannot catch up with the unified paradigm.
(3) DMBPN works better than the other two models but still handles FSED poorly due to the limitation of TI.
We will discuss the bottleneck of the identify-then-classify paradigm in Section \ref{sec:bottleneck}.
\textbf{Comparison with unified models}
(1) Over the best non-CRF baseline model Proto-Dot, PA-CRF achieves substantial improvements of 3.43\%, 3.44\%, 3.44\% and 2.86\% on four few-shot scenarios respectively, which confirms the effectiveness and rationality of PA-CRF to model the label dependency.
(2) Vanilla CRF performs better than other non-CRF baseline methods, which demonstrates that CRF is able to improve the performance by modeling the label dependency, even if the learned transition scores do not match the label space of the test set.
(3) Compared to Vanilla CRF, both CDT and StructShot achieve slightly higher F1 scores, indicating the transition scores of abstract BIO labels can improve the model adaptation ability to some extent.
(4) CDT exceeds StructShot since CDT is trained with episodic training, which makes it learn class-agnostic token representations.
(5) PA-CRF outperforms CDT (2.95\%, 1.68\%, 2.07\% and 2.20\% in four few-shot settings respectively) with absolute gaps.
We attribute this to the fact that CDT learns transition scores of abstract labels and thus cannot model the exact dependency of a specific label set, so its adaptation ability is limited.
In contrast, PA-CRF generates the label-specific transition scores based on the label prototype, which can capture the dependency for specific novel event types.
(6) Comparing the four few-shot scenarios, we find that the F1 score increases as $K$ increases, which shows that more support samples provide more information about the event type.
The F1 score decreases as $N$ increases for a fixed shot number, which reveals that a larger way number means more event types to predict and thus increases the difficulty of correct detection.
To summarize, we can draw the conclusion that
(1) The identify-then-classify paradigm is incapable of solving the FSED task.
(2) Compared to the identify-then-classify paradigm, the unified paradigm works more effectively for the FSED task.
(3) Approximating transition scores based on the label prototypes not by optimization, our PA-CRF achieves better adaptation on novel event types.
\subsection{Bottleneck Analysis}
\label{sec:bottleneck}
\begin{table}[!t]
\centering
\begin{tabular}{lccc}
\toprule
Model & TI & FSTC & FSED \\
\midrule
LoLoss & 31.06 & 95.27 & 30.14 \\
DMBPN & 40.64 & 95.44 & 37.51 \\
DMBPN(CDT-TI) & 54.69 & 95.49 & 53.93 \\
PA-CRF & 63.68 & 96.76 & 62.25 \\
\bottomrule
\end{tabular}
\caption{Comparison of PA-CRF and baselines on two subtasks. F1 scores are reported on the FewEvent test set in the 5-way-5-shot setting.}
\label{tab:bottleneck}
\end{table}
To investigate the bottleneck of the identify-then-classify paradigm, we evaluate LoLoss (separate model), DMBPN (multi-task model) and PA-CRF (unified model) on two subtasks: TI and FSTC separately in the 5-way-5-shot setting on the FewEvent test set.
To reduce the influence of the cascading errors, we use the ground truth trigger span for evaluation in the FSTC.
The experimental results are reported in Table \ref{tab:bottleneck}.
From Table \ref{tab:bottleneck}, we find that:
(1) All models achieve more than 95\% F1 score on the FSTC task, indicating that both identify-then-classify and unified models are capable enough of solving the FSTC problem.
(2) On the TI task, the two identify-then-classify baselines achieve only 31.06\% and 40.64\% F1 score respectively, which demonstrates that the conventional TI module has difficulty adapting to novel event triggers.
Hence, due to the cascading errors, the poorly-performed TI module limits the performance of the identify-then-classify models.
(3) PA-CRF achieves 63.68\% F1 score on TI task, which exceeds the two kinds of identify-then-classify models significantly.
Unlike identify-then-classify models, which recognize triggers based on seen triggers, PA-CRF utilizes trigger representations from the support set of the novel event types to identify novel triggers, so our unified model works better on the TI task of FSED.
In conclusion, the conventional trigger identifier cannot identify novel triggers in FSED, and exploiting the support set of novel event types is necessary.
\subsection{Effectiveness Analysis}
To verify the effectiveness of the unified framework, we adapt our best baseline model, CDT, to replace the TI module of DMBPN and solve trigger identification in the few-shot manner.
It identifies triggers based on the emission scores between tokens and label prototypes calculating from the support set and learned abstract transition scores.
In this case, we rename it as DMBPN(CDT-TI) and evaluate it in TI and FSTC subtasks.
Results are also reported based on the 5-way-5-shot setting in Table \ref{tab:bottleneck} and we observe that:
The CDT-TI-based DMBPN achieves 54.69\% on the TI task, exceeding the conventional TI-based models, which shows that solving TI in the few-shot manner by utilizing the support set can reduce the trigger discrepancy to some extent.
Although its FSTC performance is similar to the original DMBPN, owing to the improvement on the TI task its final FSED performance exceeds the original DMBPN by 16.42\%; it is still inferior to PA-CRF by a large margin (8.99\% on the TI task).
Therefore, we draw the conclusion that solving FSED in the unified manner can utilize the correlation between two subtasks to improve the model performance significantly.
\subsection{Ablation Study}
\begin{table}[!tb]
\centering
\begin{tabular}{lcc}
\toprule
Model & 5-Shot & 10-Shot \\
\midrule
PA-CRF & 44.39 & 51.06 \\
\enspace - Distribution Estimation & 43.47 & 49.41 \\
\enspace - Interaction Layer & 41.62 & 45.74 \\
\enspace - Transition Score & 39.83 & 45.07 \\
\bottomrule
\end{tabular}
\caption{Ablation study of PA-CRF in 5-Way settings. F1 scores are reported on the FewEvent dev set.}
\label{tab:ablation}
\end{table}
To study the contribution of each component in our PA-CRF model, we run the ablation study on the FewEvent dev set.
From these ablations (see Table~\ref{tab:ablation}), we find that:
(1) - Distribution Estimation: To study whether the distributional estimation helps performance, we remove it and let the Distribution Approximator generate a single value directly as a point estimate of the transition score.
Inference is then based on the generated transition scores without probabilistic inference.
As a result, the F1 score drops 1.02\% and 1.65\% in two scenarios, respectively.
We attribute these gaps to our proposed Gaussian-based distributional estimation which can model the data fluctuation to relieve the influence of data uncertainty.
(2) - Interaction Layer: To certify that the Prototype Interaction Layer contributes to capturing the information between prototypes, we remove it and evaluate in two scenarios.
We read from Table~\ref{tab:ablation} that F1 scores decrease significantly by 2.77\% and 5.32\% respectively, which indicates that the Prototype Interaction Layer is able to capture the dependency among prototypes.
(3) - Transition Score: To prove the contribution of the label dependency, we remove the Transition Module and only use the emission score for prediction.
Results show that without transition scores, the performance of the model drops dramatically by 4.56\% and 5.99\% respectively, which strongly demonstrates that the transition score can improve the performance of the few-shot sequence labeling task.
Furthermore, we have conducted a case study and an error analysis to validate the strengths of our PA-CRF and explore its weaknesses. Details are given in Appendix~\ref{sec:case_study} and Appendix~\ref{sec:error_study}.
\section{Conclusion}
In this paper, we explore a new viewpoint of solving few-shot event detection in a unified manner.
Specifically, we propose a prototypical amortized conditional random field to generate the transition scores to achieve adaptation ability for novel event types based on the label prototypes.
Furthermore, we present the Gaussian-based distributional estimation to approximate transition scores to relieve the statistical uncertainty of data fluctuation.
Finally, experimental results on the benchmark FewEvent dataset prove the effectiveness of our proposed method.
In the future, we plan to adapt our method to other few-shot sequence labeling tasks such as named entity recognition.
\section*{Acknowledgements}
We would like to thank all reviewers for their insightful comments and suggestions.
This work is supported in part by the Strategic Priority Research Program of Chinese Academy of Sciences (grant No. XDC02040400) and the Youth Innovation Promotion Association of CAS (Grant No. 2021153).
\bibliographystyle{acl_natbib}
\section{Introduction}
In financial markets, asset prices are constantly fluctuating. The strength of volatility is usually expressed in terms of the logarithmic variance\footnote{Variance, standard deviation, or logarithmic standard deviation are also used in other literature. For brevity, this paper will use 'volatility' to denote the 'logarithmic variance' uniformly.} of returns\footnote{i.e., rate of returns.}. Most financial applications, such as risk management, derivatives pricing, portfolio management, etc., require a reasonable estimation of the current volatility. After long-term observations, it has been found that: 1) The volatility embedded in the series does not remain constant but seems to follow another distribution, which leads to the phenomenon of heavy tails of returns; 2) the distribution behind volatility is not fixed, but changes over time and exhibits a certain degree of positive autocorrelation.
A popular model for this phenomenon is Stochastic Volatility (SV)\cite{Andersen2009StochasticV}. It first defines an unobservable stochastic process that expresses the change in volatility, then uses it as the instantaneous variance of returns, and finally generates a sequence of observable values. Suppose the change in volatility is defined via a linear Gaussian model, such as an Autoregressive Moving Average (ARMA) model. In that case, this approach yields understandable results and always converges reliably in computation. However, it also has two shortcomings: 1) by observing actual data, we find that the increment of volatility does not necessarily obey a Gaussian distribution and is also heavy-tailed in most cases; 2) when volatility is estimated, the linear Gaussian model has no closed-form posterior. Therefore, it must rely, at least partly, on sampling to express the volatility distribution in the form of a bunch of particles. This sampling process often consumes much time. Moreover, when it works together with parameter estimation in Expectation Maximization (EM), it is not easy to judge whether the whole process has converged.
To this end, we reconsider this problem from the Dynamic Bayesian Network (DBN) perspective. We note that the usual SV model can be viewed as a two-layer state space model \cite{Bishop_2006}, where the change in volatility is defined in the transition equation and the price change in the observation equation. To make this model easier to compute, we define it as follows: 1) noting that the conjugate prior of the rate parameter of a gamma distribution is also a gamma distribution, we first use a series of inter-connected gamma distributions to represent volatility; 2) through such a direct connection, the obtained volatility increment is not independent but depends on the value of the last step, whereas the increments between every two steps should be independent, similar to a random walk; 3) therefore, at every interval, we let the volatility serve, through the first gamma distribution, as a prior for the precision (=1/variance) of a normal distribution (or equivalently, we insert a 'dummy node' between adjacent steps); 4) the set of all random variables corresponding to normal distributions constitutes the return sequence, which can be observed.
The primary advantage of this model is computational. At each node of this model, the posterior keeps the form of the gamma distribution, which makes the approximate estimation of states relatively fast. It may take multiple scans of the sequence to converge, instead of the two passes needed by the forward-backward approach commonly used in DBNs. However, this is not a problem in practice: because the model parameters cannot be set manually in advance, the state estimation is often performed inside the iterations of the EM algorithm, and even if the state estimation scans the sequence only once per round, it will eventually converge to a (local) optimum together with the model parameters. Moreover, the items that do not change in each scan can be calculated in advance outside the EM loop, further speeding up the process (Sec. \ref{sec:algo}).
The significance of this model goes beyond computation. It can be shown that this construction implements a random walk of volatility. Like the usual Gaussian random walk, its incremental offset is $0$, and the variance can also take values in $(0,\infty)$. However, the kurtosis it can reach lies in $(3,6)$, larger than the $3$ of the Gaussian random walk, which means that, to some extent, it can express the heavy tail of the volatility increment. The heavy tail inside volatility does exist and has been discussed in empirical research \cite{carnero2004persistence}. This supports the view that our model provides at least a different option, one that incorporates this effect inherently rather than through additional random variables.
We have tested the Gam-Chain model in different markets at different resolutions. Experiments show that although the VI method can only obtain an approximate solution for volatility, it can scale a significant part of instruments' returns to the standard normal distribution, benefiting from the heavy tail of the gamma distribution. In comparison, a posterior generated from the lognormal chain is difficult to use directly, for its tail is too thin and thus the normalization effect is poor. Although the VI method has no significant difference in algorithmic complexity compared to MC, it can run faster since it requires no sampling during state estimation but only basic arithmetic operations ($+$, $-$, $\times$ and $\div$). In addition, it is easy to judge convergence because the calculation process is entirely deterministic.
\section{Related Work}
\subsection{Volatility Clustering}
The phenomenon of volatility clustering was first discussed in \cite{mandelbrot1963variation}, which mentioned: "large changes tend to be followed by large changes, of either sign, and small changes tend to be followed by small changes." \cite{Cont2001EmpiricalPO} lists volatility clustering as one of the "stylized facts" and believes that it is common in securities markets in different periods and countries. \cite{lux2000volatility} uses an artificial market to simulate this phenomenon by placing a certain percentage of chartists and fundamentalists. In this market, when the price crosses a certain threshold, the chartist's trading causes volatility to explode, but the fundamentalist gradually redirects it towards stability.
Two popular classes of models have been used to describe this phenomenon: the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model\cite{bollerslev1986generalized} and the SV model\cite{heston1993closed}. What they have in common is that both describe the current volatility as a function of past volatilities. If the function is ARMA and entirely deterministic (i.e., the noise is zero), then it is GARCH; if the function itself is driven by another random process, then it is SV. The latter includes a large class of models, and some popular variants are introduced in \cite{Andersen2009StochasticV}, including discrete and continuous, linear and nonlinear, etc. In terms of estimation methods, GARCH usually uses a two-step MLE procedure, first estimating the residual of returns and then estimating the ARMA coefficients for the volatility, while SV generally needs to be estimated using pseudo-likelihood or Markov chain Monte Carlo (MCMC)\cite{Broto2004EstimationMF}.
The model and its variants discussed in this article (Sec. \ref{sec:gam} and \ref{sec:var}) are probably among the simplest forms of expressing SV, directly assuming that volatility follows a random walk (Eq. \ref{eqn:frame}). The reasons for restricting to this form are: 1) this paper can focus the discussion on the core idea and leave extensions to possible future work; 2) it is probably enough for many applications where overfitting caused by excessive parameters is undesirable\cite{pymc3}.
\subsection{Dynamic Bayesian Network}
SV can be naturally represented as a DBN. Work in this area can be traced back to \cite{Jacquier1994BayesianAO}, which uses MCMC for model estimation. Among these works, we pay special attention to the application of variational methods to this problem. There are mainly two approaches: 1) Keep the form of SV unchanged, use the variational method to obtain an approximate solution, and then perform sampling. For example, \cite{Kleppe2012FittingGS} uses the Laplace approximation (LA) to generate proposal distributions to improve sampling efficiency. Further, it is possible to drop the MC process and directly use nested LA for approximation \cite{Bermudez2021IntegratedNL}. 2) Change the form of SV to make it more suitable for VI. A key element here is the gamma distribution. The variance-gamma distribution can be obtained by directly using the gamma distribution to represent the variance and compounding it with the normal distribution \cite{Madan1990TheVG}. Subsequently, \cite{Gelman2004PriorDF} suggested switching to an inverse gamma distribution (or equivalently, with the precision (=1/variance) of all returns distributed as a gamma) in order to keep the posterior's form unchanged in Bayesian inference. Further, \cite{Langren2015SwitchingTN} uses the inverse gamma process to describe the variance of the variance for better option pricing results. Based on the conjugate relationship between the inverse gamma and the normal, \cite{LenGonzlez2018EfficientBI} samples directly from the posterior to obtain estimates of fluctuations. \cite{Santos2018ABG} and \cite{Rezende2022ANF} change the observation distribution from normal to the Generalized Error Distribution (GED), which still uses the gamma distribution for its precision so that the likelihood can be marginalized and operated on in closed form, and the heavy tail inside the variance can also be expressed.
The difference in this paper is that we directly use the nonlinear gamma chain to express the fluctuation. We no longer resort to other more complex or indirect methods.
\section{Gamma Chain Model}
\subsection{Problem Statement}
Consider a SV model of the form:
\begin{equation}\label{eqn:frame}
\begin{aligned}
\boldsymbol{y}_t-\boldsymbol{y}_{t-1} &\sim N(0,\boldsymbol{u}_t^{-1}), \\
\log \boldsymbol{u}_t-\log \boldsymbol{u}_{t-1} &\sim f^*(\boldsymbol{\theta})
\end{aligned}
\end{equation}
Among them, $\{\boldsymbol{y}_t\}$ is the observation sequence, $N$ is the normal distribution (for the convenience of the variational derivation in Sec. \ref{sec:se}, the variance here is set as the reciprocal of $\boldsymbol{u}_t$), $\{\boldsymbol{u}_t\}$ is the volatility\footnote{Different from above, $\boldsymbol{u}_t$ is actually the precision, whose negative logarithm is the logarithmic variance. This setting is only for convenience and does not affect our final result.} sequence (unobservable), and $f^*$ is a probability distribution function that depends only on its parameter $\boldsymbol{\theta}$ and does not change over time. Obviously, $\{\log\boldsymbol{u}_t\}$ has independent increments, and $\{\Delta\log\boldsymbol{u}_t\}$ is stationary. When
$f^*(\boldsymbol{\theta})\triangleq N(0,S^2)$,
Eq. \ref{eqn:frame} becomes
\begin{equation}\label{eqn:logn}
\begin{aligned}
\boldsymbol{y}_t-\boldsymbol{y}_{t-1} &\sim N(0,\boldsymbol{u}_t^{-1}), \\
\boldsymbol{u}_t &\sim LogN(\log \boldsymbol{u}_{t-1},S^2)
\end{aligned}
\end{equation}
where $LogN$ is the lognormal distribution. Here $\{\log \boldsymbol{u}_t\}$ follows a Gaussian random walk. Note that the variance of $\{\Delta\log \boldsymbol{u}_t\}$ is $S^2$ and the kurtosis is $3$.
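As a concrete reference point, the generative process of Eq. \ref{eqn:logn} can be simulated in a few lines. The sketch below is our own illustration (function and variable names are hypothetical): it draws a precision path via a Gaussian random walk on $\log\boldsymbol{u}_t$, samples returns from it, and checks that the log-precision increments indeed have mean near $0$ and variance near $S^2$.

```python
import math
import random

def simulate_logn_chain(T, S, u0=1.0, seed=0):
    """Sample from the lognormal-chain SV baseline (Eq. logn):
    log u_t follows a Gaussian random walk with step std S, and each
    return increment y_t - y_{t-1} is drawn from N(0, 1/u_t)."""
    rng = random.Random(seed)
    log_u = math.log(u0)
    returns, precisions = [], []
    for _ in range(T):
        log_u += rng.gauss(0.0, S)            # volatility random walk
        u = math.exp(log_u)                   # precision (= 1/variance)
        returns.append(rng.gauss(0.0, 1.0 / math.sqrt(u)))
        precisions.append(u)
    return returns, precisions

rets, precs = simulate_logn_chain(T=20000, S=0.05)
incs = [math.log(b) - math.log(a) for a, b in zip(precs, precs[1:])]
m = sum(incs) / len(incs)
v = sum((x - m) ** 2 for x in incs) / len(incs)
```

With $S=0.05$ the increments of $\log\boldsymbol{u}_t$ are i.i.d. $N(0,S^2)$ by construction, so their sample variance concentrates around $S^2=0.0025$, and their kurtosis is the Gaussian value $3$ referred to above.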
The questions to be studied in this section are: 1) can a new $f^*$ be defined such that the kurtosis of $\{\Delta\log\boldsymbol{u}_t\}$ is greater than $3$? 2) how to quickly estimate $\{\boldsymbol{u}_t\}$ for a given observation $\boldsymbol{y}_{1:T}$; 3) how to estimate the parameter $\boldsymbol{\theta}$.
\subsection{Model Definition}\label{sec:gam}
As a preliminary attempt, we tentatively use a gamma chain directly to express the change in volatility:
\begin{equation}\label{eqn:naive}
\boldsymbol{u}_t \sim Ga(A,\boldsymbol{u}_{t-1})
\end{equation}
where $Ga(A,\boldsymbol{u}_{t-1})$ denotes the gamma distribution with shape $A$ and rate $\boldsymbol{u}_{t-1}$. To align Eq. \ref{eqn:naive} with Eq. \ref{eqn:frame}, we denote $\boldsymbol{w}_t\triangleq \Delta\log(\boldsymbol{u}_t)$; then Eq. \ref{eqn:naive} can be rewritten as (details in \ref{sec:dev_inc})
\begin{equation}\label{eqn:naive2}
p(\boldsymbol{w}_t) =\frac{e^{ -e^{\boldsymbol{w}_t}\boldsymbol{u}_{t-1}^2} \left(e^{\boldsymbol{w}_t}\boldsymbol{u}_{t-1}^2 \right)^A}{\Gamma (A)}
\end{equation}
where $\Gamma$ is the gamma function. Note that the distribution parameters in Eq. \ref{eqn:naive2} are $(A,\boldsymbol{u}_{t-1})$, where $\boldsymbol{u}_{t-1}$ is obviously not time-invariant, so this form does not meet the requirements of $f^*$ in Eq. \ref{eqn:frame}.
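This dependence can be made explicit with the standard identity $\mathbb{E}[\log z]=\psi^{(0)}(A)-\log b$ for $z\sim Ga(A,b)$: under Eq. \ref{eqn:naive}, $\mathbb{E}[\boldsymbol{w}_t]=\psi^{(0)}(A)-2\log\boldsymbol{u}_{t-1}$, which changes with the previous level. A minimal numerical sketch (our own helper names; the digamma function is approximated by a central difference of \texttt{math.lgamma}):

```python
import math

def digamma(x, h=1e-5):
    # numerical digamma: central difference of log-gamma
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def mean_log_increment(A, u_prev):
    """E[w_t] for u_t ~ Ga(A, u_prev), with w_t = log u_t - log u_prev.
    Uses E[log Ga(A, b)] = psi(A) - log b with rate b = u_prev."""
    return digamma(A) - 2.0 * math.log(u_prev)

A = 2.0
m_small = mean_log_increment(A, u_prev=0.5)   # psi(2) + 2 log 2
m_large = mean_log_increment(A, u_prev=2.0)   # psi(2) - 2 log 2
```

The two means differ (about $1.809$ versus $-0.964$), so the increment law of the plain chain is indeed not time-invariant.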
\begin{figure}[htbp]
\centering
\includegraphics[width=0.55\linewidth]{design}
\caption{Compare the lognormal distribution chain (a) and the gamma distribution chain (b). Black represents observations, blue represents fluctuations, and red represents dummy nodes. The solid line represents a closed-form local posterior, and the dashed line represents that the closed-form no longer exists after adding that node.}
\label{fig:design}
\end{figure}
To fix this defect, we insert a gamma-distributed random variable $\boldsymbol{v}_t$ after each $\boldsymbol{u}_t$ (refer to Fig. \ref{fig:design}) to construct a random process as
\begin{equation}\label{eqn:fix}
\boxed{
\begin{aligned}
\boldsymbol{y}_t-\boldsymbol{y}_{t-1} &\sim N(0,\boldsymbol{u}_t^{-1}),\\
\boldsymbol{u}_t &\sim Ga(A,\boldsymbol{v}_{t-1}), \\
\boldsymbol{v}_{t-1} &\sim Ga(A,\boldsymbol{u}_{t-1})
\end{aligned}
}
\end{equation}
We marginalize out $\boldsymbol{v}_{t-1}$ and align the result again with Eq. \ref{eqn:frame} (details in \ref{sec:dev_inc}):
\begin{equation}\label{eqn:fix2}
p(\boldsymbol{w}_t) = \frac{e^{A \boldsymbol{w}_t} \left(e^{\boldsymbol{w}_t}+1\right)^{-2 A} \Gamma (2 A)}{\Gamma (A)^2}.
\end{equation}
$\boldsymbol{u}_{t-1}$ no longer appears here because it has been marginalized out, and only the parameter $A$ remains. Thus we can define a qualified $f^*$ as the right-hand side of Eq. \ref{eqn:fix2}. Next, the variance and kurtosis are calculated to compare the expressivity with Eq. \ref{eqn:logn}. The moment generating function of Eq. \ref{eqn:fix2} is (details in \ref{sec:dev_mgf}):
\begin{equation}\label{eqn:mgf}
\phi(\lambda)=\mathbb{E}[e^{\lambda \boldsymbol{w}_t}]=\frac{\Gamma (A-\lambda) \Gamma (A+\lambda)}{\Gamma (A)^2}.
\end{equation}
Differentiating Eq. \ref{eqn:mgf} and computing the central moments from first to fourth order, the variance can be obtained as
\begin{equation}\label{eqn:var}
V=Var(\boldsymbol{w}_t)=2 \psi^{(1)}(A)
\end{equation}
and the kurtosis as (details in \ref{sec:dev_kurt})
\begin{equation}\label{eqn:kurt}
K=Kurt(\boldsymbol{w}_t)=3 + \frac{\psi^{(3)}(A)}{2 \left[\psi^{(1)}(A)\right]^2}
\end{equation}
where $\psi^{(m)}$ is the polygamma function of order $m$. Note that $\underset{A\to 0}{\text{lim}}V=\infty$ and $\underset{A\to \infty }{\text{lim}}V=0$, so the variance can assume any value in $(0,\infty)$, which is equivalent to the expressive power of Eq. \ref{eqn:logn}. The range of the kurtosis is wider: $\underset{A\to 0}{\text{lim}}K=6$ and $\underset{A\to \infty }{\text{lim}} K=3$. It can be shown that the kurtosis of Eq. \ref{eqn:fix2} lies exactly within the interval $(3,6)$ (details in \ref{sec:proof_kurt}).
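These moment formulas are easy to sanity-check. From Eq. \ref{eqn:fix}, marginalizing the dummy node shows that $\boldsymbol{u}_t/\boldsymbol{u}_{t-1}$ follows a beta-prime $\beta'(A,A)$ law, i.e., $\boldsymbol{w}_t$ is distributed as $\log(X/Y)$ for $X,Y$ i.i.d. $Ga(A,1)$; we use this observation only as a sampling device. The sketch below (our own code; the polygamma functions are approximated by finite differences of \texttt{math.lgamma}) compares Monte Carlo estimates against Eq. \ref{eqn:var} and Eq. \ref{eqn:kurt} for $A=1$:

```python
import math
import random

def polygamma1(x, h=1e-2):
    # psi^(1): second central difference of log-gamma
    return (math.lgamma(x + h) - 2 * math.lgamma(x) + math.lgamma(x - h)) / h**2

def polygamma3(x, h=1e-2):
    # psi^(3): fourth central difference of log-gamma
    return (math.lgamma(x + 2 * h) - 4 * math.lgamma(x + h) + 6 * math.lgamma(x)
            - 4 * math.lgamma(x - h) + math.lgamma(x - 2 * h)) / h**4

A = 1.0
V_theory = 2 * polygamma1(A)                              # Eq. (var)
K_theory = 3 + polygamma3(A) / (2 * polygamma1(A) ** 2)   # Eq. (kurt)

# sample w = log X - log Y with X, Y iid Gamma(A, 1)
rng = random.Random(7)
w = [math.log(rng.gammavariate(A, 1.0)) - math.log(rng.gammavariate(A, 1.0))
     for _ in range(200000)]
mean = sum(w) / len(w)
m2 = sum((x - mean) ** 2 for x in w) / len(w)
m4 = sum((x - mean) ** 4 for x in w) / len(w)
V_mc, K_mc = m2, m4 / m2 ** 2
```

For $A=1$ the increment law reduces to the standard logistic distribution, so $V=2\psi^{(1)}(1)=\pi^2/3\approx 3.29$ and $K=21/5=4.2$, comfortably inside $(3,6)$.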
\subsection{State Estimation}\label{sec:se}
Under the Bayesian perspective, $\boldsymbol{z}\triangleq\{\boldsymbol{u},\boldsymbol{v}\}_{1:T}$ is the set of all state variables. State estimation is to find the posterior $p(\boldsymbol{z}\mid\boldsymbol{y})$ under the given observation $\boldsymbol{y}\triangleq\{\boldsymbol{y}_{1:T}\}$. It should be pointed out that neither the model of Eq. \ref{eqn:logn} nor that of Eq. \ref{eqn:fix} admits an exact posterior in analytical form. However, for Eq. \ref{eqn:fix}, each local conditional $p(\boldsymbol{u}_t\mid\boldsymbol{v}_{t},\boldsymbol{y}_t,\boldsymbol{v}_{t+1})$ and $p(\boldsymbol{v}_t\mid\boldsymbol{u}_{t-1},\boldsymbol{u}_t)$ does have an exact posterior that keeps the form of the gamma distribution. Thus, we can use variational inference to find every local analytical solution and iteratively obtain a global approximate solution.
Based on variational inference \cite{Bishop_2006}, we take as the optimization objective the maximization of the following functional (the evidence lower bound)
\begin{equation}\label{eqn:target}
\mathcal{L}(q)=\int q(\boldsymbol{z})\ln \{ \frac{p(\boldsymbol{z},\boldsymbol{y})}{q(\boldsymbol{z})} \} \text{d} \boldsymbol{z}
\end{equation}
where $q$ is the posterior probability to be solved. Since an exact solution is difficult to find, we further make the mean-field assumption \cite{Bishop_2006}:
\begin{equation}\label{eqn:q}
q(\boldsymbol{z})=\prod_{i=1}^{2 T} q_i(\boldsymbol{z}_i)
\end{equation}
Substituting Eq. \ref{eqn:q} into Eq. \ref{eqn:target}, for each $q_i$ the optimal solution should satisfy
\begin{equation}\label{eqn:q2}
q_i^*(\boldsymbol{z}_i)=\frac{\exp \{ \mathbb{E}_{j\neq i}[\ln p(\boldsymbol{z},\boldsymbol{y})] \} }
{\int \exp \{ \mathbb{E}_{j\neq i}[\ln p(\boldsymbol{z},\boldsymbol{y})] \} \text{d}\boldsymbol{z}_i }
\end{equation}
Then, we substitute Eq. \ref{eqn:frame} and Eq. \ref{eqn:fix} into Eq. \ref{eqn:q2} and simplify. We can see that $q_i^*(\boldsymbol{z}_i) \sim Ga(\boldsymbol{a}_i,\boldsymbol{b}_i)$, with parameters
\begin{equation}\label{eqn:u}
\boldsymbol{a}_t^{(u)},\boldsymbol{b}_t^{(u)}=\left\{\begin{matrix}
A+3/2,& \boldsymbol{a}_{t+1}^{(v)}/\boldsymbol{b}_{t+1}^{(v)}+(\Delta\boldsymbol{y}_{t})^2/2& & for &t=1\\
\\
2 A+1/2,& \boldsymbol{a}_{t}^{(v)}/\boldsymbol{b}_{t}^{(v)}+\boldsymbol{a}_{t+1}^{(v)}/\boldsymbol{b}_{t+1}^{(v)}+(\Delta\boldsymbol{y}_{t})^2/2& & for &t>1
\end{matrix}\right.
\end{equation}
\begin{equation}\label{eqn:v}
\boldsymbol{a}_t^{(v)},\boldsymbol{b}_t^{(v)}=\left\{\begin{matrix}
2 A,& \boldsymbol{a}_{t-1}^{(u)}/\boldsymbol{b}_{t-1}^{(u)}+\boldsymbol{a}_{t}^{(u)}/\boldsymbol{b}_{t}^{(u)}& & for &t<T\\
\\
A,& \boldsymbol{a}_{t-1}^{(u)}/\boldsymbol{b}_{t-1}^{(u)} && for &t=T
\end{matrix}\right.
\end{equation}
At the beginning of the iteration, we set the initial value of all $\{\boldsymbol{a}_t,\boldsymbol{b}_t\}$ to $\{1/2,\boldsymbol{y}_{t}^2/2\}$ (that is, the corresponding posterior of one single observation, where the prior is $Ga(0,0)$). After iteratively updating Eq. \ref{eqn:u} and Eq. \ref{eqn:v}, the desired result is obtained after convergence.
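The coordinate updates above can be sketched directly. The code below is our own re-implementation under the convention that the dummy $\boldsymbol{v}_t$ sits between $\boldsymbol{u}_{t-1}$ and $\boldsymbol{u}_t$ for $t=2..T$; boundary cases are handled by simply dropping the absent neighbour, which may differ slightly from the special cases printed in Eq. \ref{eqn:u} and Eq. \ref{eqn:v}. All names are hypothetical.

```python
import math
import random

def fit_states(dy, A, iters=200, tol=1e-10):
    """Mean-field updates for the Gam-Chain posterior q(u_t), q(v_t),
    each kept in gamma form Ga(a, b). Convention (ours): v_t ~ Ga(A, u_{t-1})
    and u_t ~ Ga(A, v_t) for t = 2..T; u_1 carries no gamma prior."""
    T = len(dy)
    # init from the single-observation posterior under a Ga(0, 0) prior
    au = [0.5] * T
    bu = [max(y * y / 2, 1e-8) for y in dy]
    av = [2 * A] * T            # av[t], bv[t]: dummy between u_{t-1} and u_t
    bv = [1.0] * T
    Eu = lambda t: au[t] / bu[t]
    Ev = lambda t: av[t] / bv[t]
    for _ in range(iters):
        delta = 0.0
        for t in range(1, T):                   # update q(v_t)
            new_b = Eu(t - 1) + Eu(t)
            delta = max(delta, abs(new_b - bv[t]))
            bv[t] = new_b
        for t in range(T):                      # update q(u_t)
            if t == 0:
                a, b = A + 1.5, Ev(1) + dy[t] ** 2 / 2
            elif t < T - 1:
                a, b = 2 * A + 0.5, Ev(t) + Ev(t + 1) + dy[t] ** 2 / 2
            else:                               # trailing boundary (simplified)
                a, b = A + 0.5, Ev(t) + dy[t] ** 2 / 2
            delta = max(delta, abs(a - au[t]), abs(b - bu[t]))
            au[t], bu[t] = a, b
        if delta < tol:
            break
    return au, bu

rng = random.Random(3)
dy = [rng.gauss(0.0, 0.1) for _ in range(50)]   # true precision = 100
au, bu = fit_states(dy, A=2.0)
avg_precision = sum(a / b for a, b in zip(au, bu)) / len(au)
```

On this toy series (constant true precision $100$), the posterior means $\boldsymbol{a}_t/\boldsymbol{b}_t$ settle around the order of the true precision, pooled across neighbours by the chain.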
\subsection{Parameter Estimation}
Below, we estimate the parameter $A$ by the EM algorithm \cite{Bishop_2006}. In each iteration, it maximizes the following objective (M-step) based on the state estimate (E-step) in Sec. \ref{sec:se}
\begin{equation}\label{eqn:qfunc}
\begin{aligned}
\mathcal{Q}(A,A^{\text{(old)}})\triangleq
\mathbb{E}\left[\mathit{l}(A)\mid\boldsymbol{y},A^{\text{(old)}}\right]
\end{aligned}
\end{equation}
where $\mathit{l}(A)\triangleq \sum_{t=1}^T\log p(\boldsymbol{u}_t,\boldsymbol{v}_t, \Delta\boldsymbol{y}_t\mid A )$.
Substituting Eq. \ref{eqn:fix} into Eq. \ref{eqn:qfunc}, we get
\begin{equation}\label{eqn:qq}
\begin{aligned}
\mathcal{Q}(A,A^{\text{(old)}})=&
A \left\{\sum_{t=1}^{T} 2 \mathbb{E}\left[\log \boldsymbol{u}_t\right]+ \mathbb{E}\left[\log\boldsymbol{v}_{t}\right] +\sum_{t=2}^{T} \mathbb{E}\left[ \log \boldsymbol{v}_{t-1}\right]\right\}\\
&-\sum_{t=1}^{T} 2 \log \Gamma (A)\\
&+\text{const}.
\end{aligned}
\end{equation}
Its gradient is
\begin{equation}\label{eqn:grad}
\begin{aligned}
\nabla\mathcal{Q}(A,A^{\text{(old)}})=&
\sum_{t=1}^{T} 2 \mathbb{E}\left[\log \boldsymbol{u}_t\right]+ \mathbb{E}\left[\log\boldsymbol{v}_{t}\right] +\sum_{t=2}^{T} \mathbb{E}\left[ \log \boldsymbol{v}_{t-1}\right]\\
&-\sum_{t=1}^{T} 2 \psi^{(0)}(A)
\end{aligned}
\end{equation}
In addition, for the posterior $z\sim Ga(a,b)$ of any hidden state, we have
\begin{equation}\label{eqn:expec}
\begin{aligned}
\mathbb{E}[z]&=a/b\\
\mathbb{E}[\log z]&=\psi^{(0)}(a)-\log b
\end{aligned}
\end{equation}
We substitute Eq. \ref{eqn:u} and Eq. \ref{eqn:v} into Eq. \ref{eqn:expec} to evaluate the expectations in Eq. \ref{eqn:grad} and obtain the current gradient. At the beginning of each M-step, we set the initial value of $A$ to $1$ and then use gradient ascent to find the $A$ that maximizes Eq. \ref{eqn:qfunc}.
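The M-step gradient is cheap to evaluate once the posterior parameters are fixed. The sketch below (our own code, with toy posterior values that are purely hypothetical) computes $\nabla\mathcal{Q}$ from Eq. \ref{eqn:grad} using the expectations of Eq. \ref{eqn:expec}; since $\psi^{(0)}$ is increasing, the gradient is decreasing in $A$:

```python
import math

def digamma(x, h=1e-5):
    # numerical digamma: central difference of log-gamma
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def e_log(a, b):
    # E[log z] for z ~ Ga(a, b), per Eq. (expec)
    return digamma(a) - math.log(b)

def grad_Q(A, post_u, post_v):
    """Gradient of Eq. (qq) w.r.t. A, following Eq. (grad); the
    expectation terms are constants within one M-step."""
    T = len(post_u)
    s = sum(2 * e_log(a, b) for a, b in post_u)     # 2 E[log u_t]
    s += sum(e_log(a, b) for a, b in post_v)        # E[log v_t], t = 1..T
    s += sum(e_log(a, b) for a, b in post_v[:-1])   # E[log v_{t-1}], t = 2..T
    return s - 2 * T * digamma(A)

post_u = [(2.5, 1.0)] * 5   # hypothetical q(u_t) parameters
post_v = [(4.0, 2.0)] * 5   # hypothetical q(v_t) parameters
g_low, g_high = grad_Q(1.0, post_u, post_v), grad_Q(5.0, post_u, post_v)
```

Here $\nabla\mathcal{Q}(1)\approx 17.9>0$ and $\nabla\mathcal{Q}(5)\approx -3.0<0$, so for these toy posteriors gradient ascent converges to an $A$ between $1$ and $5$.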
\subsection{Algorithm}\label{sec:algo}
\begin{algorithm}[!htbp]
\SetAlgoLined
\While{$A$ has not converged}{\label{step:loop1}
\tcp{E Step}
\For{t=1:T}{
update $\boldsymbol{b}_t^{(u)}$,$\boldsymbol{b}_t^{(v)}$\label{step:s}
}
\For{t=1:T}{
calculate $\mathbb{E}[\boldsymbol{u}_t],\mathbb{E}[\log\boldsymbol{u}_t],\mathbb{E}[\log\boldsymbol{v}_t]$\label{step:e}
}
\BlankLine
\tcp{M Step}
\For{t=1:T}{\label{step:mloop}
calc $\nabla\mathcal{Q}_t(A,A^{\text{(old)}})$\label{step:g}
}\label{step:mloop_end}
$A\gets A^{(old)}+\lambda\sum_t\nabla\mathcal{Q}_t(A,A^{\text{(old)}})$ \tcp{gradient ascent}\label{step:a}
}\label{step:loop1_end}
\caption{The brief procedure of estimation of the states and parameter for Gam-Chain.}
\label{alg:pro}
\end{algorithm}
See Algo. \ref{alg:pro} for the program's main procedure. Denote the running time of Line \ref{step:s} by $s_{\text{a}}$\footnote{'a' represents arithmetic calculation.}, that of Line \ref{step:e} by $e$, and that of Lines \ref{step:g}--\ref{step:a} by $g$. If the loop of Lines \ref{step:loop1}--\ref{step:loop1_end} runs $L$ iterations, the running time of the algorithm is roughly $\mathcal{O}\left(L*T*(s_{\text{a}}+g+e+a)\right)$. During implementation, we should move repeated calculations outside the loop as much as possible. For example, the $(\Delta\boldsymbol{y}_{t})^2/2$ term in Eq. \ref{eqn:u} only needs to be calculated once. For another example, although both Line \ref{step:e} and Line \ref{step:a} contain $\psi^{(0)}(A)$, it can be hoisted up to the outer loop of Lines \ref{step:loop1}--\ref{step:loop1_end}, so its cost does not grow with the sequence length.
For each step: since $g+a$ consists mostly of arithmetic operations, it should not impact performance much; $e$ needs to compute logarithms, which is the same in this algorithm as in Algo. \ref{alg:lcm}. Thus, the key to the speed-up is $s_{\text{a}}$, whose runtime is critical. In Sec. \ref{sec:perf} we will give detailed comparisons.
\subsection{Several Variants}\label{sec:var}
\subsubsection{Gam-Chain/MC}\label{sec:var1}
For the algorithm in Sec. \ref{sec:algo}, we name it Gam-Chain/VI. We can also use MC to estimate the model of Eq. \ref{eqn:fix}. Compared with Gam-Chain/VI, it differs only in E-step, where particle smoothing is used for state estimation\cite{godsill2004monte}.
\begin{algorithm}[!htbp]
\SetAlgoLined
\setcounter{AlgoLine}{2}
\tcp{E Step}
\tcp{forward sampling}
\For{t=1:T}{
\For{i=1:N}{\tcp{number of particles}
$\begin{aligned}
\boldsymbol{w}_{\boldsymbol{u}_t}^{(i)} &\propto \boldsymbol{w}_{\boldsymbol{v}_{t-1}}^{(i)} p(\boldsymbol{y}_t\mid\boldsymbol{u}_{t}) \\
\boldsymbol{w}_{\boldsymbol{v}_t}^{(i)}&=\boldsymbol{w}_{\boldsymbol{u}_t}^{(i)}
\end{aligned}$\tcp{update weights}
}
}
\tcp{backward smoothing}
\For{t=1:T}{
\For{i=1:N}{
$\begin{aligned}
&\boldsymbol{w}_{\boldsymbol{u}_t\mid\boldsymbol{v}_t}^{(i)} \propto \boldsymbol{w}_{\boldsymbol{u}_{t}}^{(i)} p(\widetilde{\boldsymbol{v}}_t\mid\boldsymbol{u}_{t}^{(i)}) \\
&\text{choose }\widetilde{\boldsymbol{v}}_{t-1}\text{ with probability }\boldsymbol{w}_{\boldsymbol{u}_t\mid\boldsymbol{v}_t}^{(i)}\\
&\boldsymbol{w}_{\boldsymbol{v}_t\mid\boldsymbol{u}_{t+1}}^{(i)}\propto \boldsymbol{w}_{\boldsymbol{v}_t}^{(i)}p(\widetilde{\boldsymbol{u}}_{t+1}\mid\boldsymbol{v}_{t}^{(i)})\\
&\text{choose }\widetilde{\boldsymbol{u}}_{t}\text{ with probability }\boldsymbol{w}_{\boldsymbol{v}_t\mid\boldsymbol{u}_{t+1}}^{(i)}
\end{aligned}$
}
}
\tcp{estimate of expectation}
\For{t=1:T}{
$\begin{aligned}\mathbb{E}[\boldsymbol{u}_t]&\approx \textstyle\sum_{i=1}^N \boldsymbol{w}_t^{(i)} \boldsymbol{u}_t^{(i)}\\
\mathbb{E}[\log\boldsymbol{u}_t]&\approx \textstyle\sum_{i=1}^N \boldsymbol{w}_t^{(i)} \log\boldsymbol{u}_t^{(i)}\\
\mathbb{E}[\log\boldsymbol{v}_t]&\approx \textstyle\sum_{i=1}^N \boldsymbol{w}_t^{(i)} \log\boldsymbol{v}_t^{(i)}\end{aligned}$
}
\caption{The MC version of Gam-Chain's state estimation.}
\label{alg:gcm}
\end{algorithm}
The complexity of Algo. \ref{alg:gcm} is $\mathcal{O}\left(L*T*(4*N*s_{\text{g}}+g+e+a)\right)$\footnote{'g' represents the calculation of the gamma function.}. Compared with Algo. \ref{alg:pro}, it differs in the $2*N*s_{\text{g}}$ and $s_{\text{a}}$ terms. For a fairer comparison, when comparing performance in Sec. \ref{sec:perf}, for any algorithm that requires MC we set the number of particles to the minimum value, i.e., $2$; when comparing accuracy in Sec. \ref{sec:acc}, we set the number of particles to a sufficiently large value, i.e., $20$.
\subsubsection{LogN-Chain/MC}
Similar to Gam-Chain/MC, we can also define the MC version of Eq. \ref{eqn:logn} as shown in Algo. \ref{alg:lcm}.
\begin{algorithm}[!htbp]
\SetAlgoLined
\setcounter{AlgoLine}{2}
\tcp{E Step}
\tcp{forward sampling}
\For{t=1:T}{\label{step:e2}
\For{i=1:N}{
$\begin{aligned}
\boldsymbol{w}_{\boldsymbol{u}_t}^{(i)} &\propto \boldsymbol{w}_{\boldsymbol{u}_{t-1}}^{(i)} p(\boldsymbol{y}_t\mid\boldsymbol{u}_{t})
\end{aligned}$\tcp{update weights}
}
}
\tcp{backward smoothing}
\For{t=1:T}{
\For{i=1:N}{
$\begin{aligned} &\boldsymbol{w}_{\boldsymbol{u}_t\mid\boldsymbol{u}_{t+1}}^{(i)}\propto \boldsymbol{w}_{\boldsymbol{u}_t}^{(i)}p(\widetilde{\boldsymbol{u}}_{t+1}\mid\boldsymbol{u}_{t}^{(i)})\\
&\text{choose }\widetilde{\boldsymbol{u}}_{t}\text{ with probability }\boldsymbol{w}_{\boldsymbol{u}_t\mid\boldsymbol{u}_{t+1}}^{(i)}
\end{aligned}$
}
}
\tcp{estimate of expectation}
\For{t=1:T}{
$\begin{aligned}
\mathbb{E}[\log^2\boldsymbol{u}_t]&\approx \textstyle\sum_{i=1}^N \boldsymbol{w}_t^{(i)} \log^2\boldsymbol{u}_t^{(i)}\\
\mathbb{E}[\log\boldsymbol{u}_t\log\boldsymbol{u}_{t-1}]&\approx \textstyle\sum_{i=1}^N\textstyle\sum_{j=1}^N\boldsymbol{w}_t^{(i)}\boldsymbol{w}_{t-1}^{(j)} \log\boldsymbol{u}_t^{(i)}\log\boldsymbol{u}_{t-1}^{(j)}
\end{aligned}$
}
\BlankLine
\tcp{M Step}
$S^2\leftarrow 1/T \left(\textstyle\sum_{i=2}^T \mathbb{E}[\log^2\boldsymbol{u}_t]-2 \mathbb{E}[\log\boldsymbol{u}_t\log\boldsymbol{u}_{t-1}]+\mathbb{E}[\log^2\boldsymbol{u}_{t-1}]\right)$
\caption{The MC version of LogN-Chain's EM process.}
\label{alg:lcm}
\end{algorithm}
Its complexity is the same as Algo. \ref{alg:gcm}; the difference is that only '$\exp$' needs to be evaluated in Line \ref{step:e2} instead of $\Gamma$. However, note that $\Gamma(A)$ is the same in every EM iteration and can be hoisted outside the loop of Lines \ref{step:mloop}--\ref{step:mloop_end} in Algo. \ref{alg:pro}, so there should be no substantial difference in performance between them.
\subsubsection{LogN-Chain/VI}
Although the posterior of Eq. \ref{eqn:logn} has no closed form, it is still possible to approximate it with LA \cite{Kleppe2012FittingGS}.
\begin{algorithm}[!htbp]
\SetAlgoLined
\setcounter{AlgoLine}{2}
\tcp{E Step}
\For{t=1:T}{
$\begin{aligned}\boldsymbol{\mu}_{t}&\leftarrow \frac{S^2}{4}+\frac{\boldsymbol{\mu}_{t-1}}{2}+\frac{\boldsymbol{\mu}_{t+1}}{2}-W\left(\frac{1}{4} S^2 \boldsymbol{y}_t^2 e^{\frac{S^2}{4}+\frac{\boldsymbol{\mu}_{t-1}}{2}+\frac{\boldsymbol{\mu}_{t+1}}{2}}\right)\\
\boldsymbol{\sigma}^2_{t}&\leftarrow\frac{2}{e^{\boldsymbol{\mu}_t} \boldsymbol{y}_t^2+4/S^2}\end{aligned}$\label{step:s4}
}
\tcp{estimate expectations}
\For{t=1:T}{
$\begin{aligned}
\mathbb{E}[\log^2\boldsymbol{u}_t]&=\boldsymbol{\mu}^2_t+\boldsymbol{\sigma}_t^2\\
\mathbb{E}[\log\boldsymbol{u}_t]&=\boldsymbol{\mu}_t
\end{aligned}$
}
\BlankLine
\tcp{M Step}
$S^2\leftarrow 1/T \left(\textstyle\sum_{i=2}^T \boldsymbol{\mu}_t^2+\boldsymbol{\sigma}_t^2-2 \boldsymbol{\mu}_t \boldsymbol{\mu}_{t-1}+\boldsymbol{\mu}_{t-1}^2+\boldsymbol{\sigma}_{t-1}^2\right)$\label{step:m4}
\caption{The VI version of LogN-Chain's EM process, where $W$ in Line \ref{step:s4} is the Lambert $W$ function.}
\label{alg:lca}
\end{algorithm}
Specifically, in the E-step of Algo. \ref{alg:lca}, we use a $LogN(\boldsymbol{\mu}_{\boldsymbol{u}_{t}},\boldsymbol{\sigma}^2_{\boldsymbol{u}_{t}})$ distribution to approximate $p(\boldsymbol{u}_{t}|\boldsymbol{y}_{1:T})$. Based on the mean-field assumption, $\mathbb{E}[\log\boldsymbol{u}_t \log\boldsymbol{u}_{t-1}]=\mathbb{E}[\log\boldsymbol{u}_t]\mathbb{E}[\log\boldsymbol{u}_{t-1}]$. Therefore, the calculation of the M-step can be simplified to the expression at Line \ref{step:m4} in Algo. \ref{alg:lca}. Overall, Algo. \ref{alg:lca} has the same algorithmic complexity as Algo. \ref{alg:pro}, but is much slower due to the use of the Lambert $W$ function, which is far more expensive than arithmetic operations (refer to Sec. \ref{sec:perf} for details).
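Since standard-library Python has no Lambert $W$, the per-node update of Line \ref{step:s4} can be sketched with a small Newton solver for the principal branch; the argument there is always non-negative, for which a start at $\log(1+x)$ converges quickly. All names below are our own:

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch of the Lambert W function for x >= 0,
    solved by Newton iteration on w * exp(w) = x."""
    w = math.log1p(x)                  # reasonable start for x >= 0
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - x) / (e * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def update_mu(mu_prev, mu_next, y, S2):
    # Laplace-approximation mean update from Line (s4) of Algo. (lca)
    c = S2 / 4 + mu_prev / 2 + mu_next / 2
    return c - lambert_w(0.25 * S2 * y * y * math.exp(c))

w1 = lambert_w(1.0)                    # the omega constant, about 0.5671
mu = update_mu(mu_prev=0.0, mu_next=0.0, y=0.05, S2=0.01)
```

Since $W(x)>0$ for $x>0$, the updated mean is always pulled below the prior smoothing term $c$, with small observations barely moving it.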
For the convenience of discussion, when the above four algorithms are mentioned later, they are named C1\footnote{i.e., the 1st combination.} (LogN-Chain/VI), C2 (LogN-Chain/MC), C3 (Gam-Chain/VI), and C4 (Gam-Chain/MC). We pay special attention to \textbf{C3}, as this is the primary method recommended in this paper.
\section{Experiments}
\subsection{Data}
For the datasets, we selected the crypto\footnote{obtained from Binance exchange.}, Nasdaq\footnote{obtained with pandas\_datareader.}, and Forex\footnote{obtained from www.myfxbook.com, quoted in USD.} markets. Among them, the volatility of cryptocurrencies is the most extreme, which is conducive to testing the state estimation capability of the model. Moreover, like Forex, crypto trades 24 hours daily, facilitating comparisons across resolutions. The Nasdaq market is not only huge in volume and high in stock diversity but also has no intra-day price limits, which is consistent with the assumption in Eq. \ref{eqn:frame} that the support of returns is $(-\infty,\infty)$. In terms of time span, we directly selected thousands of periods before the day of the experiment; at the minute and hour levels, the crypto data includes extreme fluctuations such as the LUNA crash, and at the day level, the Nasdaq data includes the COVID crash, both of which are representative.
In data preprocessing, we converted all raw closing prices into log returns, whose properties are shown in Tab. \ref{tab:data}. When there are no transactions, i.e., volume $=0$, the corresponding data points are removed. These empty points are meaningless in business; if they were not eliminated, the returns would not follow one continuous distribution but a mixture of zero and non-zero values, which is inconsistent with the basic assumption in Eq. \ref{eqn:frame}.
\begin{table}[htbp]
\caption{Summary of Datasets. The 'Len.' in the header is the average length of the sequences (with no-transaction blanks removed). Define $\boldsymbol{r}=\Delta \log(\boldsymbol{price})$, $\boldsymbol{v}=\Delta \log(\boldsymbol{r}^2)$. $\sigma_r$ and $\gamma_r$ denote the standard deviation and kurtosis of $\boldsymbol{r}$; $\sigma_{v}$ and $\gamma_{v}$ denote the standard deviation and kurtosis of $\boldsymbol{v}$. The dataset D4 contains the Nasdaq 100 Index constituents. The dataset D5 is a subset of Nasdaq obtained by sorting tradable Nasdaq tickers alphabetically and picking the top 300.}
\centering
\begin{tabular}{@{}lllllllllll@{}}
\toprule
ID & Market & Start T. & End T. & \#Ins. & Freq. & Len. & $\sigma_r$\tiny{($\times 10^{-3}$)} & $\gamma_r$ & $\sigma_{v}$ & $\gamma_{v}$ \\ \midrule
D1 & crypto & 22-05-01 & 22-05-31 & 345 & 1m & 28463 & 4.146 & 82.16 & 1.893 & 4.646 \\
D2 & crypto & 21-01-01 & 22-05-31 & 381 & 1h & 9350 & 20.79 & 91.48 & 2.851 & 3.743 \\
D3 & crypto & 17-08-17 & 22-05-31 & 406 & 1d & 573.1 & 107.6 & 29.77 & 3.069 & 3.893 \\
D4 & nasdaq & 17-01-03 & 22-05-31 & 102 & 1d & 1283.7 & 59.06 & 116.9 & 3.173 & 4.058 \\
D5 & nasdaq & 17-01-03 & 22-05-31 & 300 & 1d & 734.5 & 72.58 & 87.55 & 2.777 & 5.011 \\
D6 & forex & 22-05-20 & 22-05-31 & 27 & 1m & 15840 & 0.294 & 23.41 & 3.123 & 2.937 \\
D7 & forex & 22-01-01 & 22-05-31 & 27 & 1h & 3600 & 1.687 & 14.31 & 3.248 & 4.331 \\
D8 & forex & 20-01-01 & 22-05-31 & 27 & 1d & 881 & 7.409 & 3.857 & 3.154 & 3.783 \\ \bottomrule
\end{tabular}
\label{tab:data}
\end{table}
From Tab. \ref{tab:data}, Nasdaq and Forex are indeed less volatile than cryptocurrencies. This is understandable because the tokens traded in the crypto market are neither stocks, guaranteed by future dividends, nor legal tender, endorsed by national credit. As a loosely defined "proof of stake," their value is often quite uncertain. In addition, although volatilities are unobservable, we can roughly estimate them point-by-point, as shown in the last columns of Tab. \ref{tab:data}. It can be seen that the so-called "kurtosis of variance" is indeed present in the crypto and stock markets ($\gamma_v>3$), and it appears even more pronounced in low-frequency data.
\subsection{Result of State Estimation}
\subsubsection{Distribution of Parameters}\label{sec:para}
This section will examine the distribution of parameters estimated by \textbf{C3} over different datasets. As the only parameter in the model, 'A' uniquely determines the following values: the kurtosis of returns $\gamma_{r}$, the variance of volatilities' increments $\sigma^2_{v}$, and the kurtosis of volatilities' increments $\gamma_{v}$. Regarding the relationship between A and the latter two, it can be seen from Sec. \ref{sec:dev_kurt} that both $\sigma^2_{v}$ and $\gamma_{v}$ are decreasing functions of A.
Next, we will study the relationship between $\gamma_{r}$ and A. Consider integrating out $\boldsymbol{u}_t$ in Eq. \ref{eqn:fix}, and let $\boldsymbol{v}_{t-1}\equiv B$ (i.e., remove the autocorrelation in volatilities), then get
\begin{equation}\label{eqn:tdist}
p(\boldsymbol{y}_t\mid B)=\frac{2^A B^A \left(2 B+\boldsymbol{y}_t^2\right)^{-A-\frac{1}{2}} \Gamma \left(A+\frac{1}{2}\right)}{\sqrt{\pi } \Gamma (A)}
\end{equation}
In fact, this is a non-standardized Student's t-distribution\footnote{Compound probability distribution, https://en.wikipedia.org/wiki/Compound\_probability\_distribution}. Its kurtosis is:
\begin{equation}
\frac{3 \Gamma (A-2) \Gamma (A)}{\Gamma (A-1)^2}\text{ if }A>2
\end{equation}
This formula is also a decreasing function of A. To verify this, we separately trained the model of Eq. \ref{eqn:tdist} and compared its estimates of A with the $\gamma_{r}$ values in Tab. \ref{tab:data}, as shown in Fig. \ref{fig:dist01}.
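As a quick numerical check of Eq. \ref{eqn:tdist} and its kurtosis formula, the sketch below (an illustrative Python snippet, not the paper's C++ implementation) draws from the compound construction $u \sim \text{Inv-Gamma}(A, B)$, $y \mid u \sim N(0, u)$, which reproduces the density in Eq. \ref{eqn:tdist}, and compares the sample kurtosis with the closed form $3\,\Gamma(A-2)\Gamma(A)/\Gamma(A-1)^2$, which simplifies to $3(A-1)/(A-2)$:

```python
import math
import random

def t_kurtosis_closed_form(A):
    # 3 Gamma(A-2) Gamma(A) / Gamma(A-1)^2, valid for A > 2
    return 3.0 * math.gamma(A - 2) * math.gamma(A) / math.gamma(A - 1) ** 2

A, B = 5.0, 1.0
random.seed(0)
n = 200_000
ys = []
for _ in range(n):
    # gammavariate(A, 1) is Gamma(shape A, scale 1); its reciprocal
    # scaled by B is an Inv-Gamma(A, B) draw for the latent variance u
    u = B / random.gammavariate(A, 1.0)
    ys.append(math.sqrt(u) * random.gauss(0.0, 1.0))
m2 = sum(y * y for y in ys) / n
m4 = sum(y ** 4 for y in ys) / n
print(round(m4 / m2 ** 2, 2), t_kurtosis_closed_form(A))  # both near 4 for A = 5
```

For $A=5$ the closed form gives $3\cdot 4/3 = 4$, and the Monte Carlo estimate agrees up to sampling noise; repeating with larger $A$ confirms the decreasing relationship discussed above.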
\begin{figure}[htbp]
\centering
\begin{subfigure}[h]{0.45\textwidth}\label{fig:dist0}
\includegraphics[width=\linewidth]{dista0}
\caption{$\gamma_{r}$}
\end{subfigure}
\begin{subfigure}[h]{0.45\textwidth}\label{fig:dist1}
\includegraphics[width=\linewidth]{dista1}
\caption{A}
\end{subfigure}
\caption{Distributions of $\gamma_{r}$ vs. distributions of A}
\label{fig:dist01}
\end{figure}
In general, the larger the average $\gamma_{r}$ value of a dataset, the smaller the corresponding A, which broadly confirms the inverse relationship between them.
\begin{figure}[htbp]
\centering
\begin{subfigure}[h]{0.45\textwidth}\label{fig:dist2}
\includegraphics[width=\linewidth]{dista2}
\caption{$\sigma^2_{v}$}
\end{subfigure}
\begin{subfigure}[h]{0.45\textwidth}\label{fig:dist3}
\includegraphics[width=\linewidth]{dista3}
\caption{$\gamma_{v}$}
\end{subfigure}
\begin{subfigure}[h]{0.45\textwidth}\label{fig:dist4}
\includegraphics[width=\linewidth]{dista4}
\caption{A}
\end{subfigure}
\caption{Distributions of $\sigma^2_{v}$, $\gamma_{v}$ vs. distributions of A}
\label{fig:dist234}
\end{figure}
Further, we run \textbf{C3} on all datasets and obtain the empirical distribution of A shown in Fig. \ref{fig:dist234}(c). Comparing it with Fig. \ref{fig:dist01}(b), we find that the autocorrelation between volatilities does not strongly impact the model estimation. Comparing Fig. \ref{fig:dist234}(c) with Figs. \ref{fig:dist234}(a) and (b), we cannot yet find a very clear correlation. However, possibly due to the introduction of two layers of noise, the model of Eq. \ref{eqn:fix2} estimates $\boldsymbol{u}_t$ more smoothly, which makes it difficult for A to be estimated as either very large or very small: if A is too large, the $\boldsymbol{u}_t$ will be nearly equal; if it is too small, volatility clustering will vanish. Neither situation is likely in practice; thus, Eq. \ref{eqn:fix2} gives a more concentrated range of estimates than Eq. \ref{eqn:tdist}.
\subsubsection{Residual Test}\label{sec:acc}
The state estimation results of C1-C4 are compared below. If $\boldsymbol{u}_t$ were observable, say $\boldsymbol{u}_t^*$, then the residuals $\left\{\boldsymbol{e}_t\triangleq \boldsymbol{y}_t/\boldsymbol{u}_t^*\right\}$ would follow exactly the standard normal distribution, which we call 'normalization'. However, since $\boldsymbol{u}_t$ is unobservable, we only have the posterior $p(\boldsymbol{u}_t\mid \boldsymbol{y}_{1:T})$; a workaround is to sample from $p(\boldsymbol{u}_t\mid \boldsymbol{y}_{1:T})$ to obtain $\boldsymbol{u}_t^s$ and then generate a residual set $\left\{\boldsymbol{e}_t^s\right\}$. The more accurately $p(\boldsymbol{u}_t\mid\boldsymbol{y}_{1:T})$ is estimated, the higher the probability that $\left\{\boldsymbol{e}_t^s\right\}$ will pass a standard normality test. Here we use the Kolmogorov-Smirnov (KS) test, which, as a nonparametric method, compares the empirical cumulative distribution with the cumulative distribution of $N(0,1)$. This approach is intuitive and mimics the manual inspection we do in a Q-Q plot.
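As an illustration of this residual test (a hypothetical Python sketch, independent of the paper's C++ implementation), the snippet below generates synthetic returns $y_t = \sqrt{u_t}\,z_t$ with $u_t$ standing in for the latent variance, normalizes them, and computes the one-sample KS statistic against $N(0,1)$:

```python
import math
import random

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ks_stat_vs_std_normal(sample):
    # One-sample KS statistic: max gap between the empirical CDF
    # and the N(0,1) CDF, checked on both sides of each jump
    s = sorted(sample)
    n = len(s)
    d = 0.0
    for i, x in enumerate(s):
        cdf = normal_cdf(x)
        d = max(d, abs((i + 1) / n - cdf), abs(cdf - i / n))
    return d

random.seed(0)
n = 2000
u = [math.exp(random.gauss(0.0, 0.5)) for _ in range(n)]   # latent variances
y = [math.sqrt(ui) * random.gauss(0.0, 1.0) for ui in u]   # observed returns
res = [yi / math.sqrt(ui) for yi, ui in zip(y, u)]         # normalized residuals
d = ks_stat_vs_std_normal(res)
print(round(d, 4))  # compare with the 5% critical value 1.36 / sqrt(n)
```

Because the residuals here are normalized with the true variances, the KS statistic stays near the $O(1/\sqrt{n})$ level expected for genuine standard normal data; with an inaccurate posterior for $\boldsymbol{u}_t$, it would be inflated.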
\begin{table}[htbp]
\caption{Percentage of residuals for each instrument which can pass the KS-test.}
\centering
\begin{tabular}{@{}llllll@{}}
\toprule
& $\Delta \boldsymbol{y}_t$ & C1 & C2 & \textbf{C3} & C4 \\ \midrule
D1 & 0 & 0.1246 & \textbf{0.8260} & 0.6869 & 0.7913 \\
D2 & 0 & 0.1312 & 0.9632 & 0.9160 & \textbf{0.9685} \\
D3 & 0.0073 & 0.4778 & 0.9679 & 0.9088 & \textbf{0.9852} \\
D4 & 0 & 0.0196 & 0.8333 & 0.7745 & \textbf{0.8627} \\
D5 & 0 & 0.1652 & \textbf{0.9449} & 0.8601 & 0.8986 \\
D6 & 0.2352 & 0.8823 & \textbf{1} & 0.9411 & \textbf{1} \\
D7 & 0 & 0 & \textbf{0.9411} & 0.8823 & 0.8823 \\
D8 & 0 & 0.7647 & 0.8823 & 0.8235 & \textbf{0.9411} \\ \bottomrule
\end{tabular}
\label{tab:normal}
\end{table}
We run the four algorithms on all datasets and check whether the residuals of each sequence are standard Gaussian, as shown in Tab. \ref{tab:normal}. The MC-based algorithms still achieve the best results, and they are very similar whether the gamma or lognormal distribution is used. For the VI variants, however, there are distinct differences. The Gam-Chain VI yields results 5\%-10\% worse than MC, yet basically acceptable in practice. The simple use of LA, owing to the thin tail of the Gaussian distribution, is always worse than Gam-Chain.
\begin{figure}[htbp]
\centering
\begin{subfigure}[h]{0.48\textwidth}\label{fig:qq1}
\includegraphics[width=\linewidth]{qq1}
\caption{$C1$}
\end{subfigure}
\begin{subfigure}[h]{0.48\textwidth}\label{fig:qq2}
\includegraphics[width=\linewidth]{qq3}
\caption{$C3$}
\end{subfigure}
\caption{QQ-plots of residuals generated by C1 and \textbf{C3}. The sequence is Tesla's day-level data, selected from dataset D4.}
\label{fig:qq}
\end{figure}
We can also make an intuitive comparison between C1 and \textbf{C3} under the VI method. Fig. \ref{fig:qq} gives Q-Q plots of the 'normalized' residuals for a specific sequence under both methods. Comparing the quantiles with the standard normal distribution, we see that C1 underestimates the fluctuation in the tail and overestimates the fluctuation around the mean; in contrast, the difference between \textbf{C3} and the standard normal distribution is much smaller.
\subsection{Performance Comparison}\label{sec:perf}
Next, we will compare the performance of C1-C4 under different sequence lengths\footnote{The environment configuration is as follows. CPU: Intel64 Family 6 Model 142 Stepping 9 GenuineIntel ~2803 Mhz; Memory: 16,223 MB; OS: Win10; Compiler: MSVC 14.16; Additional Dependencies: boost 1.79.0}. The specific instrument chosen has minimal impact on performance; what matters is the sequence length, so we chose a sufficiently long sequence for testing.
\subsubsection{Running Time of Used Functions}
As we described in Sec. \ref{sec:var}, the key to performance is the functions used in the E-step. We tested the running time of the functions used in four algorithms, as shown in Tab. \ref{tab:functime}\footnote{This is implemented in C++. It does not require a virtual machine like Java or Python. It can directly use pointers (addresses) to read array elements, which is very efficient and removes the overhead of address translation. This allows us to have a more precise assessment of algorithm performance.}.
\begin{table}[htbp]
\caption{The elapsed time of each function used. The test method is to generate $10^9$ random numbers from $LogN(0,1)$ and report the average running time per call.}
\centering
\begin{tabular}{@{}lllllllll@{}}
\toprule
$10^{-9}$s & $+/-$ & $\times / \div$ & $e^x$ & $\log(x)$ & $x^y$ & $\Gamma(x)$ & $\psi(x)$ & $W(x)$ \\ \midrule
Time & 3.926 & 7.875 & 26.65 & 29.04 & 67.75 & 109.9 & 561.1 & 426.3 \\ \bottomrule
\end{tabular}
\label{tab:functime}
\end{table}
From the table, it can be inferred that, owing to the different functions required (per the discussion in Sec. \ref{sec:var}), the E-step running times should be ordered \textbf{C3}<C2<C4<C1, while the M-step times are ordered C1<C2<\textbf{C3}<C4. However, the extra M-step time of \textbf{C3} caused by the $\psi(x)$ function is limited, since $\psi(x)$ is evaluated only once per iteration. Therefore, as the sequence length $T$ grows, the cost of $\psi(x)$ in \textbf{C3} is outweighed by its advantage in the E-step, and \textbf{C3} runs faster than any other method.
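For intuition, a rough microbenchmark of analogous elementary and special functions can be reproduced with Python's standard library (absolute numbers will differ substantially from the paper's C++ measurements, and $\psi(x)$ and $W(x)$ are not in the Python stdlib, so they are omitted here):

```python
from timeit import timeit

# Per-call cost of a few of the operations from Tab. (functime);
# setup binds math and a fixed operand so only the op itself is timed
setup = "import math; x = 1.7"
ops = {
    "mul":   "x * x",
    "exp":   "math.exp(x)",
    "log":   "math.log(x)",
    "pow":   "x ** 0.3",
    "gamma": "math.gamma(x)",
}
n = 200_000
times = {name: timeit(stmt, setup=setup, number=n) / n for name, stmt in ops.items()}
for name, t in sorted(times.items(), key=lambda kv: kv[1]):
    print(f"{name:>5}: {t * 1e9:8.1f} ns")
```

On most machines the qualitative ordering matches the table: multiplication is cheapest, transcendental functions cost several times more, and $\Gamma(x)$ is the most expensive of the functions shown.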
\subsubsection{Actual Measurement}
The time consumption of each step is tested below under different sequence lengths. For fairness, we fixed the number of iterations to 1000. (We observed no significant difference in the number of iterations required by C1-C4; moreover, a longer sequence does not necessarily require more iterations, so a fixed value is reasonable.) The test results are shown in Fig. \ref{fig:perf}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\linewidth]{perf}
\caption{Comparison of time consumed. Here BTC/USDT is selected from D1 as the test sequence and truncated to the required sequence length. For the MC method, we set the number of particles to only 2.}
\label{fig:perf}
\end{figure}
The results show that the E-step dominates the growth of the running time as the sequence length increases. To be precise, the running time of \textbf{C3} grows at only about 20\% of the rate of the commonly used C2, which verifies the effectiveness of the proposed scheme. In practice, the number of particles in C2 is not set to 2 but to a larger number, such as 10; the speedup of \textbf{C3} would then be multiplied accordingly. Moreover, its calculation process is deterministic, unlike the MC method, which requires additional iterations to determine whether it has converged.
\section{Conclusion}
This paper presented an alternative scheme for estimating stochastic volatility based on the variational method. It can quickly estimate volatility for a large number of series, and the calculation process is entirely deterministic, so convergence is easy to judge. This is of particular practical value for high-frequency trading. In fact, the VI and MC methods are not mutually exclusive, and our method can provide fast initialization for any MC method. Compared with LA, which also belongs to the VI class, our approximate posterior has a heavier tail and does not require computing the Lambert W function; therefore, it improves both accuracy and performance.
In the future, we will consider introducing a more complex gamma network that can express the autocorrelation effect among volatilities, or using multiple layers of latent gamma variables to achieve a broader range of kurtosis representation. In addition, this scheme has further room for optimization, such as using a Taylor expansion to quickly calculate the digamma function, which would further improve performance.
\section{Introduction}
The theory of stochastic optimal control is based on the assumption that the probability distribution of uncertain variables (e.g., disturbances) is fully known. However, this assumption is often restrictive in practice, because estimating an accurate distribution requires large-scale high-resolution sensor measurements over a long training period or multiple periods.
Situations in which uncertain variables are not directly observed are much more challenging; computational methods, such as filtering or statistical learning techniques, are often used to obtain the (posterior) distribution of the uncertain variables given limited observations. The accuracy of the obtained distribution is often unsatisfactory, as it is subject to the quality of the collected data, computational methods, and prior knowledge regarding the variables. If poor distributional information is employed in constructing a stochastic optimal controller, it does not guarantee optimality and can even cause catastrophic system behaviors~(e.g., \cite{Nilim2005, Samuelson2017}).
To overcome this issue of limited distribution information in stochastic control, we investigate a \emph{distributionally robust control} approach.
This emerging minimax stochastic control method minimizes a cost function of interest, assuming that the distribution of uncertain variables is not completely known, but is contained in a pre-specified \emph{ambiguity set} of probability distributions.
In this paper, we model the ambiguity set as a statistical ball centered at an empirical distribution with a radius measured by the \emph{Wasserstein metric}.
This modeling approach provides a straightforward means to incorporate data samples into distributionally robust control problems.
Our focus is to show that the resulting stochastic control problems have
several salient features
in terms of computational tractability and out-of-sample performance guarantee.
Due to its superior statistical properties,
the Wasserstein ambiguity set has recently received a great deal of attention in distributionally robust optimization~(e.g., \cite{Esfahani2015, Zhao2018, Gao2016, Blanchet2018}), learning~(e.g., \cite{Sinha2018, Chen2018}) and filtering~\cite{Shafieezadeh2018}.
Specifically, the Wasserstein ball contains both continuous and discrete distributions, whereas statistical balls based on a $\phi$-divergence, such as the Kullback-Leibler divergence, centered at a discrete empirical distribution are not sufficiently rich to contain relevant continuous distributions.
Furthermore, the Wasserstein metric addresses the closeness between two points in the support, unlike the $\phi$-divergence.
Because the $\phi$-divergence does not take into account the distance between two support elements, the associated ambiguity set may contain irrelevant distributions~\cite{Gao2016}.
For these reasons, we chose the Wasserstein metric to handle distribution ambiguity,
although several other types of ambiguity sets have been proposed in the context of single-stage optimization
by using moment constraints~(e.g., \cite{Popescu2007, Delage2010, Zymler2013}), confidence sets~(e.g., \cite{Wiesemann2014}), and the $\phi$-divergences~(e.g., \cite{BenTal2013, Jiang2016}).
\subsection{Related Work}
Distributionally robust sequential decision-making problems have been studied in the context of finite Markov decision processes (MDPs) and continuous-state stochastic control.
In the finite MDP setting,
dynamic programming (DP) approaches have been proposed~\cite{Xu2012, Yu2016, Yang2017lcss}.
In~\cite{Xu2012}, moment-based ambiguity sets are used to impose constraints on the moments of distributions, such as mean and covariance.
This approach is further extended to
handle more types of constraints, such as confidence sets and mean absolute deviation~\cite{Yu2016},
by using the lifting technique given in \cite{Wiesemann2014}.
Distributionally robust MDPs with Wasserstein balls are studied in~\cite{Yang2017lcss}, which provides computationally tractable reformulations and useful analytical properties.
Continuous-state distributionally robust control problems can be considered as a class of minimax stochastic control on Borel spaces~\cite{Gonzalez2003}.
In the case of linear dynamics and quadratic cost functions,
\cite{VanParys2016} focuses on linear policies and proposes tractable semidefinite program formulation when moment constraints are imposed.
A DP method is also proposed for moment-based ambiguity sets and applied to probabilistic safety specification problems~\cite{Yang2018}.
On the other hand, \cite{Tzortzis2019} uses a total variation ball to model distribution ambiguity and proposes a modified version of the classical policy iteration algorithm.
Furthermore, a Riccati equation-based approach is also developed in the linear-quadratic regulator setting with the total variation ambiguity set~\cite{Tzortzis2016} and the relative entropy constraint~\cite{Petersen2000}.
\subsection{Contributions}
Departing from the aforementioned control approaches that indirectly use data samples, we consider \emph{continuous-state} distributionally robust control problems with Wasserstein ambiguity sets and develop a dynamic programming method to solve and analyze problems by directly using the data.
The following is a summary of the main contributions of this work.
First, we propose computationally tractable
value and policy iteration algorithms
with explicit estimates of the number of iterations necessary for obtaining an $\epsilon$-optimal policy.
The original Bellman equation involves an infinite-dimensional minimax optimization problem, where the inner maximization problem is over probability measures in the Wasserstein ball.
To alleviate the computational issue without sacrificing optimality, we
reformulate the Bellman operators by
using modern distributionally robust optimization (DRO) techniques based on Kantorovich duality~\cite{Esfahani2015, Gao2016}.
Second, we show that the resulting distributionally robust policy $\pi^\star$ has a probabilistic \emph{out-of-sample performance guarantee} by using the contraction property of associated Bellman operators and a measure concentration inequality.
In other words, when $\pi^\star$ is used,
a probabilistic bound holds on the closed-loop performance evaluated
under a new set of samples that are selected independently of the training data.
We observe that the contraction property of the Bellman operator seamlessly connects a single-stage performance guarantee to its multi-stage counterpart in a manner that is independent of the number of stages.
Third, we consider a Wasserstein penalty problem and derive an explicit expression of the optimal control policy and the worst-case distribution policy, along with a Riccati-type equation in the linear-quadratic setting.
We also show that the resulting control policy converges to the optimal policy of the corresponding linear-quadratic-Gaussian (LQG) problem as the penalty parameter tends to $+\infty$.
The performance and utility of the proposed method are demonstrated through an investment-consumption problem and a power system frequency control problem.
This paper is significantly extended from its preliminary version~\cite{Yang2017cdc}, which models distribution ambiguity by using confidence sets.
Specifically, we consider Wasserstein ambiguity sets and investigate new salient features of the corresponding distributionally robust control framework such as $(i)$ a characterization of the worst-case distribution policy, $(ii)$ an out-of-sample performance guarantee, and $(iii)$ an explicit expression of the solution to linear-quadratic problems.
\subsection{Organization}
In Section~\ref{sec:setting}, we define optimal distributionally robust policies under ambiguous uncertainty and formulate the corresponding distributionally robust stochastic control problem as a dynamic game.
In Section~\ref{sec:sol}, we develop a tractable semi-infinite program formulation of the Bellman equation and characterize one of the worst-case distribution policies by using Kantorovich duality.
In Section~\ref{sec:perf},
we examine
a probabilistic out-of-sample performance guarantee of the distributionally robust policy.
In Section~\ref{sec:pen},
we present the Wasserstein penalty problem and its explicit solution obtained from a Riccati-type solution.
Finally, in Section~\ref{sec:exp}, we provide the results of our numerical experiments.
\subsection{Notation}
Given a Borel space $X$, we denote $\mathcal{P}(X)$ by the set of Borel probability measures on $X$.
In addition, $\mathbb{B}_\xi (X)$ denotes the Banach space of measurable functions $v$ on $X$ with a finite weighted sup-norm, i.e.,
$\| v \|_\xi := \sup_{\bm{x} \in X} (| v(\bm{x}) | / \xi (\bm{x}) ) < \infty$ given a measurable weight function $\xi : X \to \mathbb{R}$.
Let $\mathbb{B}_{lsc}(X)$ be the set of lower semicontinuous functions in $\mathbb{B}_\xi (X)$.
\section{Distributionally Robust Control of Stochastic Systems}
\label{sec:setting}
\subsection{Ambiguity in Stochastic Systems}
Consider a discrete-time stochastic system of the form
\begin{equation}
x_{t+1} = f (x_t, u_t, w_t),
\end{equation}
where $x_t \in \mathcal{X} \subseteq \mathbb{R}^n$ and $u_t \in \mathcal{U} \subseteq \mathbb{R}^m$ denote the system state and control input, respectively. Here, $w_t \in \mathcal{W} \subseteq \mathbb{R}^l$ is a random disturbance. The probability distribution of $w_t$ is denoted by $\mu_t$.
However, in practice, the probability distribution is not fully known
and is difficult to estimate accurately.
We assume that $\mathcal{X}$, $\mathcal{U}$ and $\mathcal{W}$
are Borel subsets of $\mathbb{R}^n$, $\mathbb{R}^m$ and $\mathbb{R}^l$, respectively.
Suppose that $w_t$'s are i.i.d. and that we have access to the sample $\{\hat{w}^{(1)}, \ldots, \hat{w}^{(N)}\}$ of $w_t$.
One of the most straightforward approaches is to use the sample average approximation (SAA) method and solve the corresponding optimal control problem with the empirical distribution.
This SAA-control problem can be formulated as
\begin{equation}\label{saa_opt}
\mbox{\small (SAA-control)} \; \inf_{\pi \in \Pi} \; \mathbb{E}^{\pi}_{w_t \sim \nu_N} \bigg [ \sum_{t=0}^\infty \alpha^t c(x_t, u_t) \mid x_0 = \bm{x} \bigg ],
\end{equation}
where
$\nu_N$ denotes the empirical distribution constructed from the $N$-samples:
\begin{equation}\label{emp_dist}
\nu_N := \frac{1}{N} \sum_{i=1}^N \delta_{\hat{w}^{(i)}}
\end{equation}
with the Dirac delta measure $\delta_{\hat{w}^{(i)}}$ concentrated at $\hat{w}^{(i)}$.
Here, $\alpha \in (0,1)$ is a discount factor, $c: \mathcal{X} \times \mathcal{U} \to \mathbb{R}$ is a stage-wise cost function of interest,
and $\mathbb{E}_{w_t \sim \nu_N}^\pi$ denotes the expected value taken with respect to the probability measure induced by the control policy $\pi$ and the empirical distribution $\nu_N$.
As the number of samples $N$ tends to infinity, the empirical distribution $\nu_N$ well approximates the true distribution; thus, an optimal policy of the SAA-control problem exhibits near-optimal performance.
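As a minimal sketch of the SAA idea (a hypothetical scalar example; the system, policy, and parameters below are illustrative and not from the paper), one can estimate the discounted cost in~\eqref{saa_opt} by simulating the closed loop while drawing each disturbance uniformly from the samples, i.e., from $\nu_N$:

```python
import random

random.seed(1)
# Hypothetical scalar system x_{t+1} = a x_t + b u_t + w_t with a fixed
# linear policy u_t = -k x_t; the unknown disturbance distribution is
# replaced by the empirical distribution nu_N of observed samples w_hat.
a, b, k, alpha = 1.0, 1.0, 0.5, 0.95
w_hat = [random.gauss(0.0, 0.1) for _ in range(50)]  # training samples

def saa_cost(x0, horizon=300, runs=200):
    """Monte Carlo estimate of the discounted quadratic cost under nu_N."""
    total = 0.0
    for _ in range(runs):
        x, cost = x0, 0.0
        for t in range(horizon):
            u = -k * x
            cost += (alpha ** t) * (x * x + u * u)
            x = a * x + b * u + random.choice(w_hat)  # w_t ~ nu_N
        total += cost
    return total / runs

print(round(saa_cost(1.0), 3))
```

The closed loop here is stable ($|a - bk| = 0.5 < 1$), so the discounted cost estimate is finite; the DR-control approach discussed next guards against the case where $\nu_N$ misrepresents the true disturbance distribution.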
Unfortunately, it takes a long simulation period or multiple episodes to obtain a large number of samples. Furthermore, in practice, it is likely that the sample data do not reflect the true distribution due to inaccurate sensor measurements or data corruption by malicious attackers (e.g., hackers).
To resolve these issues in data-driven stochastic control, we propose an optimization method to construct a policy that is robust against errors in the empirical distribution~\eqref{emp_dist}.
More specifically, our policy minimizes the \emph{worst-case} total cost that is calculated under a probability distribution contained in a given set $\mathcal{D} \subset \mathcal{P}(\mathcal{W})$, which is called the \emph{ambiguity set} of probability distributions.
The ambiguity set can be designed to adequately characterize errors in the empirical distribution.
\subsection{Distributionally Robust Policy}
To formulate a concrete distributionally robust control problem, we consider a \emph{Markov (or stochastic) game with complete information} (e.g.,~\cite{Kuenle2000, Gonzalez2003}), which is a class of two-player zero-sum dynamic games: Player~I (controller) determines a policy to minimize the total cost while Player~II (adversary) selects the disturbance distribution $\mu_t$ of $w_t$ from the ambiguity set $\mathcal{D}$ to maximize the same cost value.
Let $H_t$ be the set of \emph{histories} up to stage $t$, whose element is of the form $h_t := (x_0, u_0, \cdots, x_{t-1}, u_{t-1}, x_t)$.\footnote{All the results in this paper are valid with histories of the form $\tilde{h}_t := (x_0, u_0, w_0, \mu_0, \cdots, x_{t-1}, u_{t-1}, w_{t-1}, \mu_{t-1}, x_t)$ that also contains Player II's actions $(\mu_0, \cdots, \mu_{t-1})$; that is because under Assumption~\ref{ass_sc}, without loss of optimality, it suffices to focus on stationary policies that depend only on current state information. We intentionally use the reduced version of histories, as
the realized distributions may not be observable in practice. }
The set of admissible control strategies (for Player I) is given by
$\Pi := \{ \pi := (\pi_0, \pi_1, \ldots) \: | \: \pi_t (\mathcal{U}(x_t) | h_t) = 1 \; \forall h_t \in H_t\}$, where $\pi_t$ is a stochastic kernel from $H_t$ to $\mathbb{R}^m$ and $\mathcal{U}(x_t) \subseteq \mathcal{U}$ is the set of admissible control actions (given that the system state is $x_t$ at stage $t$).
Similarly, the set of Player II's admissible strategies is defined by $\Gamma := \{\gamma := (\gamma_0, \gamma_1, \ldots ) \: | \: \gamma_t (\mathcal{D} | h_t^e) = 1 \; \forall h_t^e \in H_t^e\}$, where
$H_t^e$ is the set of \emph{extended histories} up to stage $t$, whose element is of the form $h_t^e := (x_0, u_0, \mu_0, \cdots, x_{t-1}, u_{t-1}, \mu_{t-1}, x_t, u_t)$ and $\gamma_t$ is a stochastic kernel from $H_t$ to $\mathcal{P}(\mathcal{W})$.
Note that
the ambiguity set $\mathcal{D}$ is the action space of Player II.
Here, we allow Player II to change the distribution of $w_t$ over time.
Thus, the strategy space of Player II is larger than necessary, which gives an advantage to the adversary.
However, later we will show that an optimal policy of Player II is stationary under some assumption (see Proposition~\ref{prop:wd}).
We consider the following infinite-horizon discounted cost function:
\begin{equation} \label{dr_opt}
\begin{split}
J_{\bm{x}}(\pi, \gamma) := \mathbb{E}^{\pi, \gamma} \bigg [
\sum_{t=0}^{\infty} \alpha^t c (x_t, u_t) \mid x_0 = \bm{x}
\bigg ],
\end{split}
\end{equation}
where
$\mathbb{E}^{\pi, \gamma}$ denotes expectation with respect to the probability measure induced by the strategy pair $(\pi, \gamma) \in \Pi \times \Gamma$.
Before defining a concrete stochastic control problem,
we impose the following standard assumption for measurable selection in semicontinuous models~\cite{Gonzalez2003}:
\begin{assumption}\label{ass_sc}
Let $\mathbb{K} := \{
(\bm{x}, \bm{u}) \in \mathcal{X} \times \mathcal{U} \mid
\bm{u} \in \mathcal{U}(\bm{x})
\}$.
\begin{enumerate}
\item The function $c$ is lower semicontinuous on $\mathbb{K}$, and
\[
| c(\bm{x}, \bm{u}) | \leq b \xi (\bm{x}) \quad \forall (\bm{x}, \bm{u}) \in \mathbb{K},
\]
for some constant $b \geq 0$ and continuous function $\xi: \mathcal{X} \to [1, \infty)$ such that ${\xi}' (\bm{x}, \bm{u}) := \int_{\mathcal{W}} \xi(f(\bm{x}, \bm{u}, w)) \bm{\mu} (\mathrm{d}w)$ is continuous on $\mathbb{K}$ for any $\bm{\mu} \in \mathcal{D}$.
In addition, there exists a constant $\beta \in [1, 1/\alpha)$ such that
${\xi}'(\bm{x}, \bm{u}) \leq \beta \xi (\bm{x})$ for all $(\bm{x}, \bm{u}) \in \mathbb{K}$;
\item For each continuous bounded function $\chi: \mathcal{X} \to \mathbb{R}$, the function ${\chi}' (\bm{x}, \bm{u}) := \int_{\mathcal{W}} \chi (f(\bm{x}, \bm{u}, w)) \bm{\mu} (\mathrm{d} w)$ is continuous on $\mathbb{K}$ for any $\bm{\mu} \in \mathcal{D}$;
\item The set $\mathcal{U}(\bm{x})$ is compact for every $\bm{x} \in \mathcal{X}$, and the set-valued mapping $\bm{x} \mapsto \mathcal{U}(\bm{x})$ is upper semicontinuous.
\end{enumerate}
\end{assumption}
The first condition trivially holds when $c$ is bounded. In fact, $\xi$ is a weight function introduced to relax the boundedness assumption.
Assumption~\ref{ass_sc} ensures the existence of an optimal policy $\pi^\star$, which is deterministic and stationary, of a minimax control problem with the cost function~\eqref{dr_opt}~\cite[Theorem 4.1]{Gonzalez2003}.
Furthermore, the corresponding optimal value function lies in $\mathbb{B}_{lsc}(\mathcal{X})$ as discussed later.
We now define the \emph{optimal distributionally robust policies} as follows:
\begin{definition}
A control policy $\pi^\star \in \Pi$ is said to be an \emph{optimal distributionally robust policy} if it satisfies
\begin{equation} \label{eq:defn}
\sup_{\gamma \in \Gamma} \; J_{\bm{x}} (\pi^\star, \gamma) \leq \sup_{\gamma' \in \Gamma} \; J_{\bm{x}} (\pi, \gamma') \quad \forall \pi \in \Pi.
\end{equation}
\end{definition}
In words, an optimal distributionally robust policy achieves the minimal cost under the most adverse policies that select disturbance distributions in the ambiguity set $\mathcal{D}$.
Such a desirable policy can be obtained by solving the following problem:
\begin{equation}\label{dr_prob}
\begin{split}
\mbox{(DR-control)} \quad \inf_{\pi \in \Pi} \sup_{\gamma \in \Gamma} \; J_{\bm{x}}(\pi, \gamma),
\end{split}
\end{equation}
which we call the distributionally robust control (DR-control) problem.
The existence of an optimal policy under Assumption~\ref{ass_sc} will be formalized in Theorem~\ref{thm:ds} in Section~\ref{sec:opt}.
The most important part of this formulation is the inner maximization problem over all disturbance distribution policies in $\Gamma$, which encodes distributional uncertainty through $\mathcal{D}$.
If the ambiguity set is sufficiently large to contain the true distribution, an optimal policy $\pi^\star$ enjoys a performance guarantee in the form of the upper bound $\sup_{\gamma \in \Gamma} J_{\bm{x}} (\pi^\star, \gamma)$.
As \eqref{eq:defn} shows, this guarantee may fail for other control policies, whose worst-case costs can exceed this bound.
\subsection{Wasserstein Ambiguity Set}
To complete the formulation of the DR-control problem, we consider a specific class of ambiguity sets using the Wasserstein metric.
Let $\mathcal{D}$ be a statistical ball centered at the empirical distribution $\nu_N$ defined by \eqref{emp_dist} with radius $\theta > 0$:
\begin{equation}\label{ball}
\mathcal{D} := \{
{\mu} \in \mathcal{P}(\mathcal{W}) \mid W_p({\mu}, \nu_N ) \leq \theta
\}.
\end{equation}
Here, the distance between the two probability distributions is measured by the Wasserstein metric of order $p \in [1, \infty)$,
\begin{equation}\label{wasserstein}
\begin{split}
W_p({\mu}, \nu_N) := \min_{\kappa \in \mathcal{P}(\mathcal{W}^2)}
\bigg\{
&\bigg [\int_{\mathcal{W}^2} d(w, w')^p \: \kappa (\mathrm{d}w, \mathrm{d}w')\bigg ]^{\frac{1}{p}} \mid \Pi^1 \kappa = {\mu}, \Pi^2 \kappa = \nu_N
\bigg\},
\end{split}
\end{equation}
where $d$ is a metric on $\mathcal{W}$, and $\Pi^i \kappa$ denotes the $i$th marginal of $\kappa$ for $i=1, 2$.
The Wasserstein distance between two probability distributions represents the minimum cost of transporting or redistributing mass from one to the other, and the optimization variable $\kappa$ can be interpreted as a transport plan.
The minimization problem to identify an optimal transport plan $\kappa$ in \eqref{wasserstein} is called the \emph{Monge-Kantorovich problem}.
The minimum of this problem can be found by solving the following dual problem:
\[
W_p(\mu, \nu_N)^p = \sup_{\varphi, \psi \in \Phi} \bigg [
\int_{\mathcal{W}} \varphi (w) \: \mu (\mathrm{d} w) +
\int_{\mathcal{W}} \psi (w') \:\nu_N (\mathrm{d} w')
\bigg ],
\]
where $\Phi := \{
(\varphi, \psi) \in L^1 (\mathrm{d} \mu) \times L^1(\mathrm{d} \nu_N) \mid
\varphi (w) + \psi (w') \leq d(w, w')^p \; \forall w, w' \in \mathcal{W}
\}$.
This equivalence is known as the \emph{Kantorovich duality principle}.
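For intuition, when both distributions are finitely supported the Monge-Kantorovich problem in \eqref{wasserstein} is an ordinary linear program. The following sketch is purely illustrative (the function name and the restriction to distributions on the real line are ours, not part of the formal development); it computes $W_p$ with SciPy's linear-programming solver:

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_p(xs, mu, ys, nu, p=1):
    """W_p between two finitely supported distributions on the real
    line, solved directly as the Monge-Kantorovich linear program:
    minimize sum_ij d(x_i, y_j)^p kappa_ij subject to the marginal
    constraints Pi^1 kappa = mu and Pi^2 kappa = nu."""
    n, m = len(xs), len(ys)
    cost = np.abs(np.subtract.outer(xs, ys)) ** p        # d(w, w')^p
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):                       # row i of kappa sums to mu_i
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):                       # column j of kappa sums to nu_j
        A_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([mu, nu])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun ** (1.0 / p)
```

For instance, two unit point masses at $0$ and $2$ are at $W_1$-distance $2$; the optimal transport plan moves all mass across that gap.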
Then, the Wasserstein ball \eqref{ball} can be expressed as follows:
\begin{lemma}\label{lem:ball}
The Wasserstein ambiguity set defined by \eqref{ball} is equivalent to
\begin{equation} \nonumber
\begin{split}
{\mathcal{D}} &= \bigg \{ {\mu} \in \mathcal{P} (\mathcal{W}) \mid
\int_{\mathcal{W}} \varphi (w) \: {\mu} (\mathrm{d} w) \: + \frac{1}{N}\sum_{i=1}^N \inf_{w \in \mathcal{W}} [ d(w, \hat{w}^{(i)})^p - \varphi (w) ] \leq \theta^p \:\: \forall \varphi \in L^1(\mathrm{d}{\mu})
\bigg \}.
\end{split}
\end{equation}
\end{lemma}
A proof of this lemma is given in Appendix~\ref{app:ball}.
Note that the inner minimization problems in the reformulated Wasserstein ball are finite-dimensional, unlike the original Monge-Kantorovich problem, which optimizes over joint probability distributions.
In the following section, we use these Kantorovich duality-based reformulation results from DRO to propose computationally tractable value and policy iteration algorithms.
\section{Dynamic Programming Solution and Analysis}
\label{sec:sol}
Our first goal is to
develop a computationally tractable dynamic programming (DP) solution for the DR-control problem \eqref{dr_opt}.
We begin by characterizing an optimality condition using Bellman's principle of optimality.
\subsection{Bellman's Principle of Optimality}\label{sec:opt}
For any $v \in \mathbb{B}_\xi (\mathcal{X})$, let $T$ be the Bellman operator of the DR-control problem~\eqref{dr_opt}, defined by
\begin{equation}\nonumber
(T v) (\bm{x}) := \inf_{\bm{u} \in \mathcal{U}(\bm{x})} \sup_{\bm{\mu} \in \mathcal{D}} \bigg [ c (\bm{x}, \bm{u})
+ \alpha \int_{\mathcal{W}} v(f (\bm{x}, \bm{u}, w)) \bm{\mu}(\mathrm{d} w) \bigg ]
\end{equation}
for every $\bm{x} \in \mathcal{X}$.
Assumption~\ref{ass_sc} enables us to conduct the contraction analysis with respect to the weighted sup-norm $\| \cdot\|_\xi$ defined by
\[
\| v \|_\xi := \sup_{\bm{x} \in \mathcal{X}} \frac{|v(\bm{x}) |}{\xi(\bm{x})}.
\]
The second and third conditions in Assumption~\ref{ass_sc} play a critical role in preserving the lower semicontinuity of the value function when applying the Bellman operator
as well as
in the existence and optimality of deterministic stationary policies.
Let $\Pi^{DS}$ be the set of deterministic stationary policies, i.e.,
$\Pi^{DS} := \{\pi: \mathcal{X} \to \mathcal{U} \mid \pi (x_t) = u_t \in \mathcal{U}(x_t), \ \mbox{$\pi$ measurable}\}$.
Then, the following lemmas hold:
\begin{lemma}[Contraction and Monotonicity]~\label{lem:contraction}
Suppose that Assumption~\ref{ass_sc} holds.
Then, $Tv \in \mathbb{B}_{lsc}(\mathcal{X})$ for any $v \in \mathbb{B}_{lsc}(\mathcal{X})$.
Furthermore, the Bellman operator $T: \mathbb{B}_{lsc}(\mathcal{X}) \to \mathbb{B}_{lsc}(\mathcal{X})$ is a $\tau$-contraction mapping with respect to $\| \cdot \|_\xi$, where $\tau := \alpha\beta \in (0,1)$\footnote{Here, the constant $\beta \in [1, 1/\alpha)$ is defined in Assumption~\ref{ass_sc}-1).},
i.e.,
\[
\| Tv - Tv' \|_\xi \leq \tau \| v - v' \|_\xi \quad \forall v, v' \in \mathbb{B}_{lsc}(\mathcal{X}).
\]
Furthermore, $T$ is monotone, i.e.,
\[
T v \leq T v' \quad \forall v, v' \in \mathbb{B}_{lsc} (\mathcal{X})\mbox{ s.t. } v \leq v'.
\]
\end{lemma}
\begin{lemma}[Measurable selection]\label{lem:ms}
Suppose that Assumption~\ref{ass_sc} holds.
There exist a measurable function $v^\star \in \mathbb{B}_{lsc}(\mathcal{X})$ and a deterministic stationary policy $\pi^\star \in \Pi^{DS}$ such that
\begin{enumerate}
\item $v^\star$ is the unique function in $\mathbb{B}_{lsc}(\mathcal{X})$ that satisfies the following Bellman equation:
\begin{equation}\label{bellman}
v = Tv;
\end{equation}
\item given any fixed $\bm{x} \in \mathcal{X}$,
\begin{equation} \nonumber
\begin{split}
&v^\star (\bm{x}) = \sup_{\bm{\mu} \in \mathcal{D}} \bigg [
c (\bm{x}, \pi^\star (\bm{x})) + \alpha \int_{\mathcal{W}} v^\star (f (\bm{x}, \pi^\star (\bm{x}), w)) \: \bm{\mu}(\mathrm{d} w)
\bigg ]
\end{split}
\end{equation}
and
$\lim_{t \to \infty} \alpha^t \mathbb{E}^{\pi, \gamma} [ v^\star (x_t) ] = 0$ for all $(\pi, \gamma) \in \Pi \times \Gamma$.
\end{enumerate}
\end{lemma}
These lemmas follow immediately from~\cite[Lemma 4.4 and Theorem 4.1]{Gonzalez2003}.
In fact, for any $v \in \mathbb{B}_{lsc}(\mathcal{X})$, there exists $\hat{\bm{u}} \in \mathcal{U}(\bm{x})$ such that
$(T v)(\bm{x}) = \sup_{\bm{\mu} \in \mathcal{D}} [
c (\bm{x}, \hat{\bm{u}}) + \alpha \int_{\mathcal{W}} v( f( \bm{x}, \hat{\bm{u}}, w)) \: \bm{\mu} (\mathrm{d} w)
]$
for every $\bm{x} \in \mathcal{X}$ under Assumption~\ref{ass_sc} (see \cite[Lemma~3.3]{Gonzalez2003}).\footnote{Thus, the outer minimization problem in the definition of $T$ admits an optimal solution when $v \in \mathbb{B}_{lsc}(\mathcal{X})$, and ``$\inf$" can be replaced by ``$\min$."}
If we let $\pi^\star (\bm{x}) := \hat{\bm{u}}$ for each $\bm{x} \in \mathcal{X}$, then $\pi^\star$ is an optimal distributionally robust policy, which is deterministic and stationary. More specifically, the following principle of optimality holds:
\begin{theorem}[Existence and optimality of deterministic stationary policy]\label{thm:ds}
Suppose that Assumption~\ref{ass_sc} holds.
Then, $(v^\star, \pi^\star) \in \mathbb{B}_{lsc}(\mathcal{X}) \times \Pi^{DS}$ defined in Lemma~\ref{lem:ms} satisfies
\[
v^\star(\bm{x}) = \inf_{\pi \in \Pi} \sup_{\gamma \in \Gamma} \; J_{\bm{x}} (\pi, \gamma) = \sup_{\gamma \in \Gamma} \; J_{\bm{x}} (\pi^\star, \gamma) \quad \forall \bm{x} \in \mathcal{X}.
\]
In words, $v^\star$ is the optimal value function of the DR-control problem~\eqref{dr_opt}, and $\pi^\star$ is an optimal policy, which is
deterministic and stationary.
\end{theorem}
The existence and optimality results are shown in a more general minimax control setting in \cite[Theorem 4.1]{Gonzalez2003}.
\subsection{Value Iteration}
To compute the optimal value function $v^\star$,
we first consider a \emph{value iteration} (VI) approach, $v_{k+1} := T v_k$, where $v_k$ denotes the value function evaluated at the $k$th iteration and $v_0$ is initialized as an arbitrary function in $\mathbb{B}_{lsc}(\mathcal{X})$.
By the contraction property of $T$ (Lemma~\ref{lem:contraction}),
the Banach fixed-point theorem implies that
$v_k$ converges to $v^\star$ pointwise as $k$ tends to $\infty$ under Assumption~\ref{ass_sc}.
However, this approach requires us to solve the infinite-dimensional minimax optimization problem in the Bellman operator for each $\bm{x} \in \mathcal{X}$ in each iteration.
To alleviate this issue,
we reformulate the problem into a computationally tractable form
by using modern Wasserstein DRO~\cite{Esfahani2015, Gao2016}.
\begin{proposition}\label{prop:si}
Suppose that the function $w \mapsto v(f(\bm{x}, \bm{u}, w))$ lies in $L^1 (\mathrm{d} \nu_N)$ for each $(\bm{x}, \bm{u}) \in \mathbb{K}$.
Then, the Bellman operator $T$ can be expressed as
\begin{equation} \label{semi}
\begin{split}
(T v)(\bm{x}) =
\inf_{\bm{u}, \lambda, \ell} \; &\bigg [ \lambda \theta^p + c (\bm{x}, \bm{u}) + \frac{1}{N} \sum_{i=1}^N \ell_i \bigg ]\\
\mbox{s.t.} \; & \alpha v( f(\bm{x}, \bm{u}, w)) - \lambda d( w, \hat{w}^{(i)} )^p \leq \ell_i \; \; \forall w \in \mathcal{W}\\
& \bm{u} \in \mathcal{U}(\bm{x}), \: \lambda \geq 0, \: \ell \in \mathbb{R}^N
\end{split}
\end{equation}
for each $\bm{x} \in \mathcal{X}$,
where the first inequality constraint holds for all $i=1, \ldots, N$.
\end{proposition}
This reformulation can be obtained by using Kantorovich duality on the Wasserstein ambiguity set (Lemma~\ref{lem:ball}).
It is shown in \cite[Theorem 1]{Gao2016} that there is no duality gap.
Note that
the reformulated optimization problem in Proposition~\ref{prop:si} has finite-dimensional decision variables as $\bm{u} \in \mathcal{U}(\bm{x}) \subseteq \mathcal{U} \subseteq \mathbb{R}^m$, $\lambda \in \mathbb{R}$ and $\ell \in \mathbb{R}^N$.
However, the first inequality constraint must hold for all $w$ in the support $\mathcal{W}$, which could be a dense set.
Thus, in general, the reformulated problem is a \emph{semi-infinite program}.
This semi-infinite program can be solved by several existing convergent algorithms, such as discretization and sampling-based methods (see \cite{Reemtsen1991, Hettich1993, Lopez2007, Calafiore2005} and the references therein).
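As a minimal illustration of the discretization approach, the sketch below (our own toy version for scalar disturbances; the dynamics $f$, cost $c$, and value function $v$ are hypothetical inputs supplied by the caller) enforces the semi-infinite constraint of \eqref{semi} only at finitely many grid points of $\mathcal{W}$, so that each $\ell_i$ reduces to a maximum over the grid:

```python
import numpy as np

def bellman_step(x, v, f, c, w_hat, theta, alpha,
                 u_grid, w_grid, lam_grid, p=1):
    """Grid-based relaxation of the semi-infinite program: the
    constraint  alpha*v(f(x,u,w)) - lam*d(w, w_hat_i)^p <= ell_i
    for all w in W  is enforced only at the points of w_grid (a 1-D
    numpy array), so each ell_i is a maximum over the grid.
    Returns the minimizing triple (value, u, lam)."""
    best = (np.inf, None, None)
    for u in u_grid:
        # alpha * v(f(x, u, w)) evaluated on the disturbance grid
        vf = alpha * np.array([v(f(x, u, w)) for w in w_grid])
        for lam in lam_grid:
            # ell_i = max_w [ alpha*v(f(x,u,w)) - lam*d(w, w_hat_i)^p ]
            ell = np.array([np.max(vf - lam * np.abs(w_grid - wi) ** p)
                            for wi in w_hat])
            val = lam * theta ** p + c(x, u) + ell.mean()
            if val < best[0]:
                best = (val, u, lam)
    return best
```

Finer grids over $\mathcal{W}$, $\mathcal{U}(\bm{x})$, and the dual variable $\lambda$ tighten the relaxation at the usual cost of more evaluations per state.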
To interpret this reformulation, we consider the following equivalent integral form:
\begin{equation} \nonumber
\begin{split}
&(T v)(\bm{x}) = \inf_{\bm{u} \in \mathcal{U}(\bm{x}), \lambda\geq 0} \bigg [ \lambda \theta^p + \int_{\mathcal{W}} \sup_{w \in \mathcal{W}} \big [ c(\bm{x}, \bm{u}) + \alpha v (f(\bm{x}, \bm{u}, w)) - \lambda d(w, {w}' )^p \big ] \nu_N (\mathrm{d} w') \bigg ].
\end{split}
\end{equation}
The integrand above can be interpreted as a \emph{regularized cost-to-go} function.
The regularized value is then integrated using the empirical distribution $\nu_N$.
The first term $\lambda \theta^p$, which is nonnegative, is added to compensate for this regularization effect and the optimism induced by the empirical distribution so that the reformulated optimization problem is consistent with the original one.
We define an \emph{$\epsilon$-optimal policy} of \eqref{dr_opt} as $\pi_\epsilon \in \Pi$ that satisfies
\[
\| v^{\pi_\epsilon} - v^\star\|_\xi < \epsilon
\]
for $\epsilon > 0$,
where $v^\pi: \mathcal{X} \to \mathbb{R}$ is the (worst-case) value function of a policy $\pi \in \Pi$, i.e.,
\begin{equation}\label{worst_value}
v^\pi (\bm{x}) := \sup_{\gamma \in \Gamma} J_{\bm{x}} (\pi, \gamma).
\end{equation}
The following VI algorithm can be used to find an $\epsilon$-optimal policy:
\begin{enumerate}
\item Initialize $v_0$ as an arbitrary function in $\mathbb{B}_{lsc} (\mathcal{X})$, and set $k := 0$;
\item For each $\bm{x} \in \mathcal{X}$,
compute
\[
v_{k+1} (\bm{x}):= (T v_k) (\bm{x})
\]
by
solving the semi-infinite program~\eqref{semi} with $v:= v_k$;
\item If the stopping criterion is met, then go to Step 4); Otherwise, set $k \leftarrow k+1$ and go to Step 2);
\item For each $\bm{x} \in \mathcal{X}$, set
\[
\hat{\pi} (\bm{x}) := \hat{\bm{u}},
\]
where $\hat{\bm{u}}$ is an optimal $\bm{u}$ of the semi-infinite program~\eqref{semi} that computes $(Tv_{k}) (\bm{x})$, and stop.
\end{enumerate}
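On a fully discretized model, the VI loop above takes a particularly simple form. The following sketch is a stand-in, not the paper's algorithm: the Wasserstein ambiguity set is replaced by a finite list of candidate transition matrices per action, which turns the inner supremum into a finite maximum:

```python
import numpy as np

def value_iteration(P_cands, cost, alpha, tol=1e-8, max_iter=10000):
    """Robust VI on a finite model: cost[x, a] is the stage cost and
    P_cands[a] is a finite list of candidate transition matrices for
    action a (one per candidate disturbance distribution).  Iterates
    v <- min_a max_P [ cost(., a) + alpha * P v ] until convergence,
    then reads off a greedy policy."""
    n, n_a = cost.shape
    v = np.zeros(n)
    for _ in range(max_iter):
        # q[a, j, x] = cost(x, a) + alpha * (P_j v)(x)
        q = np.array([[cost[:, a] + alpha * P @ v
                       for P in P_cands[a]] for a in range(n_a)])
        Tv = q.max(axis=1).min(axis=0)   # sup over candidates, inf over u
        if np.max(np.abs(Tv - v)) < tol:
            v = Tv
            break
        v = Tv
    policy = q.max(axis=1).argmin(axis=0)
    return v, policy
```

The contraction property guarantees geometric convergence of this loop, exactly as in the abstract analysis.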
Note that
the existence of an optimal $\hat{\bm{u}}$ in Step 4) is guaranteed under Assumption~\ref{ass_sc} by \cite[Lemma 3.3]{Gonzalez2003}.
A typical stopping criterion in VI is $\| v_{k+1} - v_k \|_\xi < \delta$ for some threshold $\delta > 0$.
However, we can also bound, a priori, the number of iterations required to achieve a desired precision $\epsilon > 0$.
Given any $\pi \in \Pi^{DS}$ and $v \in \mathbb{B}_\xi(\mathcal{X})$,
let
\[
(T^\pi v)(\bm{x}) := \sup_{\bm{\mu} \in \mathcal{D}} \bigg [
c (\bm{x}, \pi(\bm{x}))
+ \alpha \int_{\mathcal{W}} v(f (\bm{x}, \pi(\bm{x}), w)) \bm{\mu}(\mathrm{d}w)
\bigg]
\]
for all $\bm{x} \in \mathcal{X}$.
The Bellman operator $T^\pi$ has the following properties:
\begin{lemma}\label{lem:cont2}
Suppose that Assumption~\ref{ass_sc} holds.
Then, given any $\pi \in \Pi^{DS}$, we have $T^\pi v \in \mathbb{B}_{\xi}(\mathcal{X})$ for any $v \in \mathbb{B}_{\xi}(\mathcal{X})$.
Furthermore,
the operator $T^\pi: \mathbb{B}_{\xi} (\mathcal{X}) \to \mathbb{B}_{\xi} (\mathcal{X})$ is a $\tau$-contraction mapping with respect to $\| \cdot \|_\xi$, i.e.,
\[
\|T^\pi v - T^\pi v'\|_\xi \leq \tau \| v - v'\|_\xi \quad \forall v, v' \in \mathbb{B}_{\xi} (\mathcal{X}),
\]
where $\tau :=\alpha\beta \in (0,1)$.
Furthermore, $T^{\pi}$ is monotone, i.e.,
\[
T^{\pi} v \leq T^{\pi} v' \quad \forall v, v' \in \mathbb{B}_\xi (\mathcal{X})\mbox{ s.t. } v \leq v'.
\]
\end{lemma}
\begin{proof}
By Assumption~\ref{ass_sc},
it is clear that $T^\pi v \in \mathbb{B}_\xi (\mathcal{X})$ if $v \in \mathbb{B}_\xi (\mathcal{X})$.
Fix arbitrary $v, v' \in \mathbb{B}_{\xi} (\mathcal{X})$, and an arbitrary $\bm{x} \in \mathcal{X}$.
For any $\epsilon > 0$, there exists $\hat{\bm{\mu}} \in \mathcal{D}$ such that
\[
(T^\pi v)(\bm{x}) - \epsilon < c(\bm{x}, \pi (\bm{x})) + \alpha \int_\mathcal{W} v(f (\bm{x}, \pi(\bm{x}), w)) \hat{\bm{\mu}}(\mathrm{d}w).
\]
Thus, we have
\begin{equation}\nonumber
\begin{split}
(T^\pi v)(\bm{x}) - (T^\pi v')(\bm{x}) - \epsilon
& < \alpha \int_\mathcal{W} [v(f (\bm{x}, \pi(\bm{x}), w)) - v'(f(\bm{x}, \pi(\bm{x}), w))] \hat{\bm{\mu}}(\mathrm{d}w)\\
&\leq \alpha \int_\mathcal{W} \| v - v' \|_\xi \xi (f(\bm{x}, \pi(\bm{x}), w)) \hat{\bm{\mu}}(\mathrm{d} w)\\
&\leq \alpha \| v - v' \|_\xi \beta \xi (\bm{x}),
\end{split}
\end{equation}
where the last inequality holds due to Assumption~\ref{ass_sc}-1).
By switching the role of $v$ and $v'$, we also have
$(T^\pi v')(\bm{x}) - (T^\pi v)(\bm{x}) -\epsilon \leq \alpha \beta \| v - v' \|_\xi \xi (\bm{x})$.
Since the two inequalities hold for any $\bm{x} \in \mathcal{X}$ and $\epsilon > 0$, and $\tau = \alpha \beta$, we conclude that
$\| T^\pi v - T^\pi v' \|_\xi \leq \tau \| v - v'\|_\xi$.
It is straightforward to check that $T^\pi$ is monotone.
\end{proof}
This lemma implies that the value function $v^\pi$ is the unique fixed point of $T^\pi$ in $\mathbb{B}_\xi (\mathcal{X})$.
By using the contraction property of $T^\pi$ and $T$,
we can estimate the number of iterations needed to obtain an $\epsilon$-optimal policy as follows:
\begin{proposition}\label{prop:est1}
Suppose that Assumption~\ref{ass_sc} holds.
Given $\epsilon > 0$, suppose that the total number of iterations, $k$, in the VI algorithm satisfies
\[
k > \frac{\log [(1-\tau)^2\epsilon] - \log ( 2b\tau) }{\log \tau },
\]
where $b \geq 0$ and $\tau \in (0,1)$ are the constants defined in Assumption~\ref{ass_sc} and Lemma~\ref{lem:cont2}, respectively.
Then, $\hat{\pi}$ obtained by the VI algorithm is an $\epsilon$-optimal policy, i.e.,
\[
\| v^{\hat{\pi}} - v^\star \|_\xi < \epsilon.
\]
\end{proposition}
\begin{proof}
By Lemma~\ref{lem:cont2} and Theorem~\ref{thm:ds}, we have
$v^{\hat{\pi}}, v_k, v^\star \in \mathbb{B}_\xi (\mathcal{X})$.
We observe that
\begin{equation} \nonumber
\begin{split}
\| v^{\hat{\pi}} - v^\star \|_\xi &= \| T^{\hat{\pi}} v^{\hat{\pi}} - v^\star \|_\xi\\
&\leq \| T^{\hat{\pi}} v^{\hat{\pi}} - T^{\hat{\pi}} v_k \|_\xi + \| T^{\hat{\pi}} v_k - v^\star \|_\xi\\
&\leq \tau \| v^{\hat{\pi}} - v_k\|_\xi + \| T v_k - T v^\star \|_\xi,
\end{split}
\end{equation}
where the last inequality holds because of Lemma~\ref{lem:cont2},
$T^{\hat{\pi}} v_k = T v_k$ and $v^\star = T v^\star$.
By Lemma~\ref{lem:contraction}, we have
\begin{equation} \label{bound1}
\begin{split}
\| v^{\hat{\pi}} - v^\star \|_\xi &\leq \tau \| v^{\hat{\pi}} - v_k\|_\xi + \tau \| v_k - v^\star \|_\xi\\
&\leq \tau \| v^{\hat{\pi}} - v^\star \|_\xi +2\tau \| v_k - v^\star \|_\xi.
\end{split}
\end{equation}
On the other hand, by \cite[Theorem 4.2 (a)]{Gonzalez2003},
\begin{equation} \label{bound2}
\| v_k - v^\star \|_\xi \leq \frac{b}{1-\tau} \tau^k < \frac{1-\tau}{2\tau}\epsilon,
\end{equation}
where the second inequality holds due to the proposed choice of $k$.
Combining \eqref{bound1} and \eqref{bound2}, we conclude that
$\| v^{\hat{\pi}} - v^\star \|_\xi < \epsilon$.
\end{proof}
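The iteration bound in Proposition~\ref{prop:est1} is explicit and can be evaluated directly; the sketch below (function name ours) returns the smallest integer satisfying it:

```python
import math

def vi_iterations(eps, tau, b):
    """Smallest integer k with
    k > ( log((1-tau)^2 * eps) - log(2*b*tau) ) / log(tau),
    i.e., the VI iteration count guaranteeing an eps-optimal policy."""
    bound = (math.log((1 - tau) ** 2 * eps) - math.log(2 * b * tau)) \
            / math.log(tau)
    return math.floor(bound) + 1
```

As expected from the geometric convergence rate, halving $\epsilon$ increases the required $k$ only by roughly $\log(2)/\log(1/\tau)$ iterations.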
A practical implementation of the VI algorithm requires a finite-state approximation such as a discretization of the state space.
A review on such approximation methods can be found in a recent monograph~\cite{Saldi2018}.
\subsection{Policy Iteration}
\emph{Policy iteration} (PI) is
an alternative way to construct an $\epsilon$-optimal policy.
The PI algorithm can be described as follows:
\begin{enumerate}
\item Initialize $\pi_0$ as an arbitrary policy in $\Pi^{DS}$, and set $k:=0$;
\item (Policy evaluation) Find the fixed point $v^{\pi_k}$ of $T^{\pi_k}$;
\item (Policy improvement) For each $\bm{x} \in \mathcal{X}$, set
\[
{\pi}_{k+1} (\bm{x}) := \tilde{\bm{u}},
\]
where $\tilde{\bm{u}}$ is an optimal $\bm{u}$ of the semi-infinite program~\eqref{semi} that computes $(Tv^{\pi_k}) (\bm{x})$;
\item If the stopping criterion is met, then stop and set $\tilde{\pi} := \pi_{k+1}$. Otherwise, set $k \leftarrow k+1$ and go to Step 2);
\end{enumerate}
Here, the stopping criterion can be chosen as $\| v^{\pi_k} - v^{\pi_{k-1}} \|_\xi < \delta$ for a positive constant $\delta$.
To perform the policy evaluation step (Step 2) in a computationally tractable manner, we reformulate the infinite-dimensional maximization problem in the definition of $T^\pi$ into a finite-dimensional one by using Wasserstein DRO~\cite{Esfahani2015, Gao2016}.
\begin{proposition}\label{prop:finite}
Suppose that Assumption~\ref{ass_sc} holds and that $v \in \mathbb{B}_{\xi}(\mathcal{X})$.
Then, the operator $T^\pi: \mathbb{B}_{\xi} (\mathcal{X}) \to \mathbb{B}_{\xi} (\mathcal{X})$ satisfies
\begin{equation}\nonumber
\begin{split}
(T^\pi v)(\bm{x}) = \sup_{{(w, q)} \in B}
\Big [ c (\bm{x}, \pi(\bm{x})) +
\frac{\alpha}{N} \sum_{i=1}^N \big [ q_1 v(f(\bm{x}, \pi(\bm{x}), \underline{w}^{(i)})) + q_2 v(f(\bm{x}, \pi(\bm{x}), \overline{w}^{(i)})) \big ]\Big ],
\end{split}
\end{equation}
where
${B} := \big \{ (\underline{w}^{(1)}, \ldots, \underline{w}^{(N)}, \overline{w}^{(1)}, \ldots, \overline{w}^{(N)}, q) \in \mathcal{W}^{2N} \times \Delta \mid \frac{1}{N} \sum_{i=1}^N [ q_1 d( \underline{w}^{(i)}, \hat{w}^{(i)} )^p + q_2 d( \overline{w}^{(i)}, \hat{w}^{(i)} )^p ] \leq \theta^p \big \}$.
\end{proposition}
This proposition follows immediately from~\cite[Corollary~2]{Gao2016}.
The optimization variables $\underline{w}^{(1)}, \ldots, \underline{w}^{(N)}$, $\overline{w}^{(1)}, \ldots, \overline{w}^{(N)}$ can be interpreted as
the probability atoms that characterize one of the worst-case distributions.
By the contraction property of $T^{\pi_k}$ (Lemma~\ref{lem:cont2}),
we can find the fixed point $v^{\pi_k}$ of $T^{\pi_k}$ by value iteration.
In other words, we perform $v_{j+1} \leftarrow T^{\pi_k} v_{j}$, $j=0, 1, \ldots$, until convergence. When computing $T^{\pi_k} v_{j}$, we solve the finite-dimensional optimization problem in
Proposition~\ref{prop:finite} with $v := v_j$ to completely remove the infinite-dimensionality issue inherent in the definition of $T^{\pi_k}$.
In the policy improvement step, we use the semi-infinite program formulation of $T$ in Proposition~\ref{prop:si} instead of directly solving the infinite-dimensional minimax optimization problem in the definition of $T$.
It is well known that $\lim_{k\to \infty} \| v^{\pi_k} - v^\star \|_\xi = 0$ under Assumption~\ref{ass_sc} by the monotonicity and contraction properties of $T$ and $T^{\pi_k}$ (Lemmas~\ref{lem:contraction} and \ref{lem:cont2})~\cite[Proposition 2.5.4]{Bertsekas2012}.
However, it is usually difficult to find the exact fixed point $v^{\pi_k}$ of $T^{\pi_k}$ in the policy evaluation step.
Thus, we propose a modified PI algorithm, which is also called \emph{optimistic policy iteration}~\cite{Puterman1978, Bertsekas2012}:
\begin{enumerate}
\item Initialize $\tilde{v}_0$ as an arbitrary function in $\mathbb{B}_{lsc} (\mathcal{X})$ and $\{M_k\}$ as a sequence of positive integers, and set $k:=1$;
\item (Policy improvement) For each $\bm{x} \in \mathcal{X}$, set
\[
{\pi}_{k} (\bm{x}) := \tilde{\bm{u}},
\]
where $\tilde{\bm{u}}$ is an optimal $\bm{u}$ of the semi-infinite program~\eqref{semi} that computes $(T\tilde{v}_{k-1}) (\bm{x})$;
\item (Policy evaluation) Compute
\[
\tilde{v}_{k} := (T^{\pi_{k}})^{M_k} \tilde{v}_{k-1}
\]
by solving the finite-dimensional optimization problems in Proposition~\ref{prop:finite};
\item If the stopping criterion is met, then stop and set $\tilde{\pi} := \pi_{k}$. Otherwise, set $k \leftarrow k+1$ and go to Step 2);
\end{enumerate}
Note that the modified PI algorithm approximately evaluates the performance of a policy $\pi_{k}$ as $\tilde{v}_{k}$ instead of finding the exact fixed point of $T^{\pi_{k}}$.
Concrete choices of the \emph{order sequence} $\{ M_k \}$
are discussed in \cite{Puterman2014}.
However,
for any choice of $\{ M_k \}$,
the modified PI algorithm converges under Assumption~\ref{ass_sc}~\cite{Bertsekas2012}:
\[
\lim_{k \to \infty} \| \tilde{v}_k - v^\star \|_\xi = 0.
\]
As in the case of VI,
we can estimate the number of iterations required for obtaining an $\epsilon$-optimal policy.
\begin{proposition}
Suppose that Assumption~\ref{ass_sc} holds.
Let $r \in \mathbb{R}$ be a positive constant such that
\[
\| \tilde{v}_0 - T \tilde{v}_0 \|_\xi \leq r.
\]
Given $\epsilon > 0$, suppose that the total number of iterations, $k$, in the modified PI algorithm satisfies
\[
k \tau^k < \frac{(1-\tau)^2}{2 r} \epsilon,
\]
where $\tau \in (0,1)$ is the constant defined in Lemma~\ref{lem:cont2}.
Then, $\tilde{\pi} := \pi_k$ obtained by the modified PI algorithm is an $\epsilon$-optimal policy, i.e.,
\[
\| v^{\tilde{\pi}} - v^\star \|_\xi < \epsilon.
\]
\end{proposition}
\begin{proof}
According to Lemma~\ref{lem:cont2} and Theorem~\ref{thm:ds}, we have
$v^{\tilde{\pi}}, \tilde{v}_k, v^\star \in \mathbb{B}_\xi (\mathcal{X})$. By \cite[Lemma 2.5.4]{Bertsekas2012}, we obtain that
\[
\tilde{v}_{k-1} - \frac{k \tau^{k-1}}{1-\tau} r \xi \leq v^\star
\leq \tilde{v}_{k-1} + \frac{\tau^{k-1}}{1-\tau} r \xi,
\]
which implies that
\begin{equation}\label{ineq10}
\| \tilde{v}_{k-1} - v^\star \|_\xi \leq \frac{k \tau^{k-1}}{1-\tau} r.
\end{equation}
On the other hand, $\tilde{\pi} = \pi_k$ is a greedy policy when the value function is chosen as $\tilde{v}_{k-1}$. As in the proof of Proposition~\ref{prop:est1}, we have
$\| v^{\tilde{\pi}} - v^\star \|_\xi \leq \frac{2\tau}{1-\tau} \| \tilde{v}_{k-1} - v^\star \|_\xi$. Thus, by \eqref{ineq10},
\[
\| v^{\tilde{\pi}} - v^\star \|_\xi
\leq \frac{2 k \tau^k}{(1-\tau)^2}r < \epsilon,
\]
where the second inequality holds due to the proposed choice of $k$.
\end{proof}
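Unlike the VI bound, the condition $k \tau^k < \frac{(1-\tau)^2}{2r}\epsilon$ is implicit in $k$; the smallest admissible $k$ can be found by direct search (sketch; function name ours):

```python
def pi_iterations(eps, tau, r):
    """Smallest integer k with  k * tau**k < (1-tau)**2 * eps / (2*r).
    Direct search terminates because k * tau**k eventually decreases
    to zero for any tau in (0, 1)."""
    bound = (1 - tau) ** 2 * eps / (2 * r)
    k = 1
    while k * tau ** k >= bound:
        k += 1
    return k
```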
\subsection{The Worst-Case Distribution Policy}
Given a policy $\pi \in \Pi^{DS}$ (for Player I), the worst-case distribution policy (for Player II) can be found by solving
\[
\sup_{\gamma \in \Gamma} J_{\bm{x}} (\pi, \gamma),
\]
which is an optimal control problem. By the dynamic programming principle, the worst-case value function $v^\pi$, defined by \eqref{worst_value}, is the unique solution to the following Bellman equation:
\[
v^\pi = T^\pi v^\pi
\]
under Assumption~\ref{ass_sc}.
The worst-case value function $v^\pi$ can be computed, for example, via value iteration.
\emph{Given $v^\pi$, how can we characterize the worst-case distribution policy?}
The following proposition indicates that,
if the optimization problem involved in $(T^\pi v^\pi)(\bm{x})$ admits an optimal solution for all $\bm{x} \in \mathcal{X}$, then
there exists an optimal policy for Player II, which is deterministic and stationary, and it generates a finitely-supported worst-case distribution.
\begin{proposition}[Worst-case distribution policy]\label{prop:wd}
Suppose that Assumption~\ref{ass_sc} holds, and that given $\pi \in \Pi^{DS}$
\[
\sup_{\bm{\mu} \in \mathcal{D}} \bigg [ c (\bm{x}, \pi(\bm{x}))
+ \alpha \int_{\mathcal{W}} v^{\pi}(f (\bm{x}, \pi(\bm{x}), w)) \: \bm{\mu}(\mathrm{d}w) \bigg ]
\]
admits an optimal solution for any $\bm{x} \in \mathcal{X}$.
Then,
the deterministic stationary policy $\gamma^\pi: \mathcal{X} \to \mathcal{D}$ defined by
\[
\gamma^\pi (\bm{x}) := \frac{1}{N} \sum_{i = 1}^N \big (q_{\bm{x},1}^{\pi} \delta_{\underline{w}_{\bm{x}}^{\pi, (i)}} + q_{\bm{x},2}^{\pi} \delta_{\overline{w}_{\bm{x}}^{\pi, (i)}} \big )\quad \forall \bm{x} \in \mathcal{X}
\]
is an optimal policy (for Player II) that generates a worst-case distribution for each state $\bm{x} \in \mathcal{X}$, where
$(w_{\bm{x}}^\pi, q_{\bm{x}}^{\pi})$, with $w_{\bm{x}}^\pi:= (\underline{w}_{\bm{x}}^{\pi, (1)}, \ldots, \underline{w}_{\bm{x}}^{\pi, (N)}, \overline{w}_{\bm{x}}^{\pi, (1)}, \ldots, \overline{w}_{\bm{x}}^{\pi, (N)})$, is an optimal solution of the maximization problem in Proposition~\ref{prop:finite} with $v := v^{\pi}$.
\end{proposition}
The existence of an optimal policy, which is deterministic and stationary, follows from the dynamic programming principle when the assumptions in the proposition hold.
Thus, it is sufficient for Player II to use the same worst-case distribution for all stages.
The structure of $\gamma^\pi (\bm{x})$ is obtained by applying \cite[Corollary 1]{Gao2016} to the maximization problem in the proposition.
Note that the worst-case distribution of this form is consistent with the discussion below Proposition~\ref{prop:finite}.
By using \cite[Corollary 2]{Gao2016}, we obtain the following sharper characterization of a worst-case distribution with at most $N+1$ atoms: if the assumptions in Proposition~\ref{prop:wd} hold,
then one of the worst-case distribution policies has the form
\[
\gamma^\pi (\bm{x}) := \frac{1}{N} \sum_{i \neq i_0} \delta_{w_{\bm{x}}^{\pi, (i)}} + \frac{p_0}{N} \delta_{\underline{w}_{\bm{x}}^{\pi, (i_0)}} + \frac{1- p_0}{N} \delta_{\overline{w}_{\bm{x}}^{\pi, (i_0)}},
\]
where $i_0 \in\{1, \ldots, N\}$, $p_0 \in [0,1]$, $\underline{w}_{\bm{x}}^{\pi, (i_0)}, \overline{w}_{\bm{x}}^{\pi, (i_0)} \in \argmin_{w \in \mathcal{W}} \{ \lambda^\star d (w, \hat{w}^{(i_0)})^p - \alpha v (f(\bm{x}, \pi (\bm{x}), w)) \}$, and ${w}_{\bm{x}}^{\pi, (i)} \in\argmin_{w \in \mathcal{W}} \{ \lambda^\star d (w, \hat{w}^{(i)})^p - \alpha v (f(\bm{x}, \pi (\bm{x}), w)) \}$ for all $i \neq i_0$. Here, $\lambda^\star$ is a dual minimizer, which must exist when the worst-case distribution exists~\cite[Corollary 1]{Gao2016}.
It is worth mentioning that Kantorovich duality and DP play a critical role in obtaining
all the results in this section.
Based on the reformulation results and analytical properties of DR-control problems,
we demonstrate their utility in the following sections.
\section{Out-of-Sample Performance Guarantee}
\label{sec:perf}
A potential defect of the SAA-control formulation \eqref{saa_opt} is that
its optimal policy may not perform well if a testing dataset of $w_t$ is different from the training dataset $\{\hat{w}^{(1)}, \ldots, \hat{w}^{(N)}\}$.
This issue occurs even when the testing and training datasets are sampled from the same distribution.
Such a degradation of the optimal decisions in out-of-sample tests is often called the \emph{optimizer's curse} in the literature of decision analysis~\cite{Smith2006}.
We show that an optimal distributionally robust policy can alleviate this issue and provide a guaranteed \emph{out-of-sample performance} if the radius $\theta$ of the Wasserstein ambiguity set is carefully determined.
Let ${\pi}_{\hat{w}}^\star \in \Pi$ denote an optimal distributionally robust policy obtained by using the training dataset $\hat{w} := \{\hat{w}^{(1)}, \ldots, \hat{w}^{(N)}\}$ of $N$ samples.
The out-of-sample performance of $\pi_{\hat{w}}^\star$ is measured as
\begin{equation}\label{osp}
\mathbb{E}^{\pi_{\hat{w}}^\star}_{w_t \sim \mu} \bigg [ \sum_{t=0}^\infty \alpha^t c (x_t, u_t) \mid x_0 = \bm{x} \bigg ],
\end{equation}
which represents the expected total cost under new disturbance samples drawn from $\mu$, independently of the training dataset.
Unfortunately, the out-of-sample performance cannot be precisely computed because the true distribution $\mu$ is unknown.
Thus, instead, we aim at establishing a \emph{probabilistic out-of-sample performance guarantee} of the form:
\begin{equation}\label{ppg}
\begin{split}
\mu^N \bigg \{
\hat{w} \mid \: \mathbb{E}^{\pi_{\hat{w}}^\star}_{w_t \sim \mu} \bigg [ \sum_{t=0}^\infty \alpha^t c (x_t, u_t) \mid &\: x_0 = \bm{x} \bigg ] \leq v_{\hat{w}}^\star (\bm{x}) \; \forall \bm{x} \in \mathcal{X} \bigg \} \geq 1-\beta,
\end{split}
\end{equation}
where $v^\star_{\hat{w}}$ denotes the optimal value function of the DR-control problem with the training dataset $\hat{w} := \{\hat{w}^{(1)}, \ldots, \hat{w}^{(N)}\}$,
and $\beta \in (0,1)$.\footnote{Here, $\hat{w}$, $\pi_{\hat{w}}^\star$ and $v_{\hat{w}}^\star$ are viewed as random objects.}
The inequality states that, with probability at least $(1-\beta)$, the expected cost incurred by $\pi_{\hat{w}}^\star$ is no greater than the optimal value function. Note that both the probability and the expected cost are evaluated with respect to the true distribution $\mu$.
Thus, this inequality provides a probabilistic bound on the performance of $\pi_{\hat{w}}^\star$ evaluated with unseen test samples drawn from $\mu$.
Here, $v_{\hat{w}}^\star$, which depends on $\theta$, plays the role of a certificate for the out-of-sample performance.
Our goal is to identify conditions on the radius $\theta$ under which an optimal distributionally robust policy provides the probabilistic performance guarantee.
We begin by imposing the following assumption on the true distribution $\mu$:
\begin{assumption}[Light tail]\label{lt}
There exists a positive constant $q > p$ such that
\[
\rho := \int_{\mathcal{W}} \exp( \| w \|^q ) \: \mathrm{d} \mu (w) < + \infty.
\]
\end{assumption}
This assumption implies that the tail of $\mu$ decays exponentially.
Under this condition,
the following measure concentration inequality holds:
\begin{theorem}[Measure concentration, Theorem 2 in \cite{Fournier2015}]\label{thm:mc}
Suppose that Assumption \ref{lt} holds.
Let
\[
\nu_N := \frac{1}{N} \sum_{i=1}^N \delta_{\hat{w}^{(i)}}.
\]
Then,
\begin{equation} \nonumber
\begin{split}
&\mu^N \big \{\hat{w} \mid
W_p (\mu, \nu_{N}) \geq \theta
\big \} \leq c_1 \big [ b_1(N, \theta) \mathbf{1}_{\{\theta\leq 1\}} + b_2(N, \theta) \mathbf{1}_{\{\theta > 1\}} \big ],
\end{split}
\end{equation}
where
\[
b_1 (N, \theta) := \left \{
\begin{array}{ll}
\exp (-c_2 N \theta^2) & \mbox{if } p > l/2\\
\exp (-c_2 N (\frac{\theta}{\log(2+1/\theta)})^2 ) & \mbox{if } p = l/2\\
\exp (-c_2 N \theta^{l/p} ) &\mbox{otherwise},
\end{array}
\right.
\]
and
\[
b_2 (N, \theta) :=
\exp ( -c_2 N \theta^{q/p} ).
\]
Here,
$c_1, c_2$ are positive constants depending only on $l$, $q$ and $\rho$.
\end{theorem}
This theorem provides an upper bound on the probability that the true distribution $\mu$ lies outside the Wasserstein ambiguity set $\mathcal{D}$.
The measure concentration inequality provides a systematic means to determine the radius for $\mathcal{D}$ to contain the true distribution $\mu$ with probability no less than $(1-\beta)$.
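To make the objects in Theorem~\ref{thm:mc} concrete: for two equal-size empirical distributions on the real line, the $p$-Wasserstein distance reduces to an average over matched order statistics. The following sketch is purely illustrative (it is not part of the paper's method):

```python
import numpy as np

def wasserstein_p_empirical(x, y, p=1):
    """p-Wasserstein distance between two equal-size 1-D empirical
    distributions: sort both samples and match order statistics."""
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    assert x.shape == y.shape, "equal sample sizes assumed"
    return float(np.mean(np.abs(x - y) ** p) ** (1.0 / p))

# Two 4-point empirical distributions shifted uniformly by 0.5.
x = [0.0, 1.0, 2.0, 3.0]
y = [0.5, 1.5, 2.5, 3.5]
print(wasserstein_p_empirical(x, y, p=1))  # 0.5: every mass point moves by 0.5
print(wasserstein_p_empirical(x, y, p=2))  # also 0.5 (uniform shift)
```

For the multivariate samples $\hat{w}^{(i)} \in \mathbb{R}^l$ considered in the paper, one would instead solve a discrete optimal-transport problem between the two empirical measures.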
As shown in the following theorem,
the contraction property of Bellman operators enables us to extend the single-stage out-of-sample performance guarantee to its multi-stage counterpart with no additional requirement on $\theta$.
\begin{theorem}[Out-of-sample performance guarantee]\label{thm:osg}
Suppose that Assumptions~\ref{ass_sc} and \ref{lt} hold.
Let $\pi_{\hat{w}}^\star$ and $v_{\hat{w}}^\star$ denote an optimal policy and the optimal value function of the DR-control problem \eqref{dr_opt} with
the training dataset $\hat{w} := \{\hat{w}^{(1)}, \ldots, \hat{w}^{(N)}\}$ and
the following Wasserstein ball radius:\footnote{This choice includes the radius proposed in~\cite{Esfahani2015} in the single-stage setting as a special case (when $p = 1$ and $l \neq 2$).}
\begin{equation} \nonumber
\begin{split}
\theta (N, \beta) :=
& \left \{
\begin{array}{ll}
\big [\frac{1}{Nc_2} \log (\frac{c_1}{\beta} ) \big ]^{{p}/{q}} & \mbox{if } N < \frac{1}{c_2} \log (\frac{c_1}{\beta}) \\
\big [\frac{1}{Nc_2} \log (\frac{c_1}{\beta} ) \big]^{{1}/{2}}
& \mbox{if } N \geq \frac{1}{c_2} \log (\frac{c_1}{\beta}) \wedge p>\frac{l}{2}\\
\big [\frac{1}{Nc_2} \log (\frac{c_1}{\beta} ) \big]^{{p}/{l}}
& \mbox{if } N \geq \frac{1}{c_2} \log (\frac{c_1}{\beta}) \wedge p<\frac{l}{2}\\
\bar{\theta}
& \mbox{if } N \geq \frac{(\log 3)^2}{c_2} \log (\frac{c_1}{\beta}) \wedge p=\frac{l}{2},
\end{array}
\right.
\end{split}
\end{equation}
where $\bar{\theta}$ satisfies
$\frac{\bar{\theta}}{\log (2 + 1/\bar{\theta})} = [
\frac{1}{Nc_2} \log (\frac{c_1}{\beta} )
]^{{1/2}}$,
and
$c_1, c_2$ are the positive constants in Theorem~\ref{thm:mc}.\footnote{The constants $c_1$ and $c_2$ in Theorem~\ref{thm:mc}
can be calculated using the proof of Theorem~2 in \cite{Fournier2015}.
However, this calculation is often conservative and thus results in a smaller radius $\theta (N, \beta)$ than necessary.
Bootstrapping and cross-validation methods can be used to reduce the conservativeness in the \emph{a priori} bound $\theta (N, \beta)$, as advocated and demonstrated in \cite{Esfahani2015}.}
Then, the probabilistic out-of-sample performance guarantee \eqref{ppg} holds.
\end{theorem}
\begin{proof}
Using Theorem~\ref{thm:mc}, we can confirm that our choice of $\theta$ provides the following probabilistic guarantee:
\begin{equation}\label{g1}
\mu^N \big \{
\hat{w}\mid W_p(\mu, \nu_{N}) \leq \theta (N, \beta)
\big \}\geq 1-\beta.
\end{equation}
Define an operator $T^\star: \mathbb{B}_{\xi}(\mathcal{X}) \to \mathbb{B}_{\xi}(\mathcal{X})$ as
$(T^\star v)(\bm{x}) := \mathbb{E}_{\mu} [ c (\bm{x}, \pi_{\hat{w}}^\star(\bm{x})) + \alpha v(f(\bm{x}, \pi_{\hat{w}}^\star (\bm{x}), w))]$
for all $\bm{x} \in \mathcal{X}$.
It follows from \eqref{g1} that the following single-stage guarantee holds:
\begin{equation}\label{ssg}
\mu^N \big \{
\hat{w} \mid (T^\star {v_{\hat{w}}^\star})(\bm{x}) \leq (T {v_{\hat{w}}^\star}) (\bm{x})
\big \}\geq 1-\beta
\end{equation}
given any fixed $\bm{x} \in \mathcal{X}$.
It is straightforward to check under Assumption~\ref{ass_sc} that $T^\star$ is a monotone contraction mapping.
We now show that if $\mu \in \mathcal{D}$, then $(T^\star)^k { v_{\hat{w}}^\star} \leq { v_{\hat{w}}^\star}$ for any $k=1, 2, \ldots$ using mathematical induction.
For $k=1$, we have $T^\star { v_{\hat{w}}^\star} \leq T{ v_{\hat{w}}^\star} = { v_{\hat{w}}^\star}$ by the minimax definition of $T$.
Suppose now that the induction hypothesis holds for some $k$.
By the monotonicity of $T^\star$ and the definition of $T$, we have
\[
T^\star((T^\star)^k { v_{\hat{w}}^\star}) \leq T^\star { v_{\hat{w}}^\star} \leq T{ v_{\hat{w}}^\star} = { v_{\hat{w}}^\star},
\]
and thus the induction hypothesis is valid for $k+1$.
We now notice that
\[
\lim_{k\to \infty} ((T^\star)^k { v_{\hat{w}}^\star}) (\bm{x}) = \mathbb{E}_{w_t \sim \mu} \bigg [ \sum_{t=0}^\infty \alpha^t c (x_t, \pi_{\hat{w}}^\star(x_t)) \mid x_0 = \bm{x} \bigg ]
\]
since $T^\star$ is a contraction mapping under Assumption~\ref{ass_sc}.
Therefore, if $\mu \in \mathcal{D}$, then
\[
\mathbb{E}_{w_t \sim \mu} \bigg [ \sum_{t=0}^\infty \alpha^t c (x_t, \pi_{\hat{w}}^\star(x_t)) \mid x_0 = \bm{x} \bigg ]
\leq v_{\hat{w}}^\star (\bm{x}) \quad \forall \bm{x} \in \mathcal{X}.
\]
By \eqref{g1},
the probabilistic performance guarantee holds as desired.
\end{proof}
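The limit used in the proof---repeated application of the policy-evaluation operator $T^\star$ converging to the discounted cost of the fixed policy---can be checked numerically on a toy two-state chain (all numbers below are illustrative and not from the paper):

```python
import numpy as np

alpha = 0.9
# Two-state chain: the fixed policy induces transitions s0 -> s1 -> s1
# with per-stage costs c = [1, 2]; T* is plain policy evaluation here.
P = np.array([[0.0, 1.0], [0.0, 1.0]])   # transition matrix under the policy
c = np.array([1.0, 2.0])

v = np.zeros(2)                           # start from any bounded v
for _ in range(500):
    v = c + alpha * P @ v                 # repeated application of T*

# Closed form: v(s1) = 2/(1-alpha) = 20; v(s0) = 1 + alpha*v(s1) = 19.
print(v)   # approx [19.0, 20.0]
```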
\begin{remark}
Note that the contraction property of $T$ and $T^\star$ plays a critical role in connecting the single-stage performance guarantee~\eqref{ssg} to
the multi-stage guarantee~\eqref{ppg} in a way that is independent of the number of stages.
This is a quite powerful result, because if we have a radius $\theta$ that provides a desirable confidence level $(1-\beta)$ in the single-stage guarantee, we can use the same radius to achieve the same level of confidence in the multi-stage guarantee with no additional requirement.
\end{remark}
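The piecewise radius rule of Theorem~\ref{thm:osg} can be transcribed directly. In the sketch below the constants $c_1, c_2$ are placeholders (in practice they come from \cite{Fournier2015} or are calibrated by bootstrapping, as noted in the footnote), and the $p = l/2$ case is solved for $\bar{\theta}$ by bisection:

```python
import math

def radius(N, beta, p, l, q, c1, c2):
    """Wasserstein radius theta(N, beta) from the out-of-sample guarantee;
    c1, c2 are the measure-concentration constants (placeholders here)."""
    r = math.log(c1 / beta) / (c2 * N)
    if N < math.log(c1 / beta) / c2:
        return r ** (p / q)
    if p > l / 2:
        return r ** 0.5
    if p < l / 2:
        return r ** (p / l)
    # p == l/2 (the theorem uses a slightly larger sample-size threshold
    # here): solve theta / log(2 + 1/theta) = sqrt(r) by bisection.
    target, lo, hi = r ** 0.5, 1e-12, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid / math.log(2.0 + 1.0 / mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# The radius shrinks as the sample size grows (p=1, l=3, q=2 here).
print([round(radius(N, 0.05, 1, 3, 2, 3.0, 1.0), 3) for N in (10, 100, 1000)])
```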
\section{Wasserstein Penalty Problem}
\label{sec:pen}
We now consider a slightly different version of the DR-control problem, which can be considered as a relaxation of \eqref{dr_opt} with a fixed penalty parameter $\lambda > 0$:
\[
\inf_{\pi \in\Pi} \sup_{\gamma \in \Gamma'} \; \mathbb{E}^{\pi, \gamma} \bigg [
\sum_{t=0}^\infty \alpha^t (c (x_t, u_t) - \lambda W_p (\mu_t, \nu_N)^p )
\mid x_0 = \bm{x} \bigg],
\]
where the strategy space $\Gamma' := \{\gamma := (\gamma_0, \gamma_{1}, \ldots) \:|$ $\gamma_t (\mathcal{P}(\mathcal{W}) | h_t^e) = 1 \; \forall h_t^e \in H_t^e\}$ of Player~II no longer depends on a Wasserstein ambiguity set.
Instead of using an explicit ambiguity set $\mathcal{D}$, Player II is penalized by $\lambda W_p(\mu_t, \nu_N)^p$, which can be interpreted as the cost of perturbing the empirical distribution~$\nu_N$.
\subsection{Dynamic Programming}
Under Assumption~\ref{ass_sc},
the Bellman operator ${T}'_\lambda: \mathbb{B}_{\xi}(\mathcal{X}) \to \mathbb{B}_{\xi}(\mathcal{X})$ of the Wasserstein penalty problem is defined by
\begin{equation} \nonumber
\begin{split}
&({T}'_\lambda v)(\bm{x}) := \inf_{\bm{u} \in \mathcal{U}(\bm{x})} \sup_{\bm{\mu} \in \mathcal{P}(\mathcal{W})} \mathbb{E}_{\bm{\mu}}\big [ c(\bm{x}, \bm{u}) - \lambda W_p(\bm{\mu}, \nu_N)^p + \alpha v(f(\bm{x}, \bm{u}, w)) \big ]
\end{split}
\end{equation}
for all $\bm{x} \in \mathcal{X}$.
By using the strong duality result \cite[Theorem 1]{Gao2016},
we have the following equivalent form of $T_\lambda'$:
\begin{proposition}\label{prop:reform}
Suppose that the function $w \mapsto v(f(\bm{x}, \bm{u}, w))$ lies in $L^1 (\mathrm{d} \nu_N)$ for each $(\bm{x}, \bm{u}) \in \mathbb{K}$.
Then,
the Bellman operator ${T}'_\lambda$ can be expressed as
\begin{equation}\nonumber
\begin{split}
({T}'_\lambda v) (\bm{x}) &= \inf_{\bm{u} \in \mathcal{U}(\bm{x})} \bigg [ c (\bm{x}, \bm{u}) + \frac{1}{N} \sum_{i=1}^N \sup_{w' \in \mathcal{W}} [\alpha v (f(\bm{x}, \bm{u}, w')) - \lambda d(\hat{w}^{(i)}, w')^p ] \bigg ]
\end{split}
\end{equation}
for all $\bm{x} \in \mathcal{X}$.
Furthermore, we have
\begin{equation} \nonumber
\begin{split}
(T v)(\bm{x}) &= \inf_{\lambda\geq 0} \; [ (T'_\lambda v) (\bm{x}) + \lambda \theta^p] \quad \forall \bm{x} \in \mathcal{X}.
\end{split}
\end{equation}
\end{proposition}
By the results of \cite{Gonzalez2003} in the general minimax control setting,
the optimal value function $v'$ is the unique fixed point (in $\mathbb{B}_{lsc}(\mathcal{X})$) of $T_\lambda'$ under Assumption~\ref{ass_sc} because $T_\lambda'$ is a contraction.
We can use value iteration to evaluate $v'$ due to the Banach fixed point theorem.
Analogous to Theorem~\ref{thm:ds}, there exists an optimal deterministic stationary policy $\pi'$, where
$\pi'(\bm{x}) \in \argmin_{\bm{u} \in \mathcal{U}(\bm{x})} [c (\bm{x}, \bm{u}) +
\frac{1}{N} \sum_{i=1}^N \sup_{w' \in \mathcal{W}} [\alpha v' (f(\bm{x}, \bm{u}, w')) - \lambda d(\hat{w}^{(i)}, w')^p ]
]$ for all $\bm{x} \in \mathcal{X}$, under Assumption~\ref{ass_sc}.
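As an illustration of one application of $T_\lambda'$, consider scalar dynamics $f(x,u,w) = x + u + w$, stage cost $x^2 + u^2$, a single sample $\hat{w}^{(1)} = 0$, the candidate value function $v(x) = x^2$, and $d(w,w')^p = |w - w'|^2$. For $\lambda > \alpha$, the inner supremum then has the closed form $\alpha \lambda (x+u)^2/(\lambda - \alpha)$, which the grid approximation below reproduces (this toy setup is ours, not the paper's):

```python
import numpy as np

alpha, lam = 0.9, 5.0                 # discount factor and penalty weight
w_hat = np.array([0.0])               # a single disturbance sample (N = 1)
w_grid = np.linspace(-10.0, 10.0, 20001)

def f(x, u, w):                       # toy scalar dynamics (illustrative)
    return x + u + w

def v(x):                             # candidate value function
    return x ** 2

def T_pen(x, u_grid):
    """One application of T'_lam: minimize over a control grid the stage
    cost plus the sample average of the inner sup over w' (grid search)."""
    vals = []
    for u in u_grid:
        inner = np.mean([np.max(alpha * v(f(x, u, w_grid))
                                - lam * (w_grid - wi) ** 2) for wi in w_hat])
        vals.append(x ** 2 + u ** 2 + inner)
    return min(vals)

# Cross-check the inner sup against its closed form at x = 1, u = 0:
x, u = 1.0, 0.0
inner_grid = np.max(alpha * v(f(x, u, w_grid)) - lam * (w_grid - w_hat[0]) ** 2)
inner_exact = alpha * lam * (x + u) ** 2 / (lam - alpha)
print(inner_grid, inner_exact)              # both approx 1.0976
print(T_pen(1.0, np.linspace(-2, 2, 401)))  # approx 1.5233
```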
\subsection{Linear-Quadratic Problem}
We now develop a solution approach, using a Riccati-type equation, to linear-quadratic (LQ) problems with the Wasserstein penalty when
\[
d(w, w')^p := \| w- w'\|^2,
\]
where $\| \cdot \|$ denotes the Euclidean norm on $\mathbb{R}^l$.
Consider a linear system of the form
\begin{equation}\label{lin_sys}
x_{t+1} = Ax_t + B u_t + \Xi w_t,
\end{equation}
where $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, and $\Xi \in \mathbb{R}^{n \times l}$.
We also choose the following
quadratic stage-wise cost function:
\begin{equation}\label{quad_cost}
c(x_t, u_t)= x_t^\top Q x_t + u_t^\top R u_t,
\end{equation}
where $Q = Q^\top \in \mathbb{R}^{n\times n}$ is positive semidefinite, and $R = R^\top \in \mathbb{R}^{m \times m}$ is positive definite.
For the sake of simplicity, we assume that
$\mathbb{E}_{w \sim \nu_N} [ w ] = \frac{1}{N} \sum_{i=1}^N \hat{w}^{(i)} = 0$.
The case of non-zero mean is considered in Appendix~\ref{app:riccati}.
Let $\Sigma := \mathbb{E}_{w \sim \nu_N} [ w w^\top ] = \frac{1}{N} \sum_{i=1}^N \hat{w}^{(i)} (\hat{w}^{(i)})^\top$.
In the LQ setting, we also set $\mathcal{X} := \mathbb{R}^n$, $\mathcal{U}(\bm{x}) \equiv \mathcal{U} := \mathbb{R}^m$, and $\mathcal{W} := \mathbb{R}^l$.
Note that, unlike the standard LQG, the LQ problems with Wasserstein penalty do not assume that the probability distribution of random disturbances is Gaussian. In fact, the main motivation of this distributionally robust LQ formulation is to relax the assumption of Gaussian disturbance distributions in LQG, and to obtain a useful control policy when the true distribution deviates from a Gaussian distribution.
By using DP,
we obtain the following explicit solution of the LQ problem:
\begin{theorem}\label{thm:riccati}
Suppose that
there exists a symmetric positive semidefinite matrix $P \in \mathbb{R}^{n\times n}$ that solves the following equation:
\begin{equation}\label{re_DR}
P = Q + \alpha A^\top P A + \alpha^2 A^\top S A
\end{equation}
with
\begin{equation}\nonumber
\begin{split}
S &:= P \Xi (\lambda I - \alpha \Xi^\top P \Xi)^{-1} \Xi^\top P\\
& - [ I + \alpha \Xi (\lambda I - \alpha \Xi^\top P \Xi)^{-1} \Xi^\top P ]^\top PB
\\
&\times [R + \alpha B^\top \{ P + \alpha P \Xi (\lambda I - \alpha \Xi^\top P \Xi)^{-1} \Xi^\top P \} B ]^{-1}\\
&\times B^\top P [ I + \alpha \Xi (\lambda I - \alpha \Xi^\top P \Xi)^{-1} \Xi^\top P ]
\end{split}
\end{equation}
for a sufficiently large $\lambda$.
Then, ${v}' (\bm{x}) := \bm{x}^\top P \bm{x} + z$
solves the Bellman equation, where $z := \frac{\lambda}{1-\alpha} \mbox{tr}[ \{ \lambda(\lambda I - \alpha \Xi^\top P \Xi)^{-1} - I\} \Sigma]$.
If, in addition, $v'$ is the optimal value function,\footnote{Sufficient conditions for $v'$ to be the optimal value function are provided in \cite{Kim2020}. Under the stabilizability and observability conditions, the algebraic Riccati equation has a unique positive semidefinite solution as well. }
then the unique optimal policy ${\pi'}$ is given by
\[
{\pi'} (\bm{x}) = K \bm{x}\quad \forall \bm{x} \in \mathbb{R}^n,
\]
where
\begin{equation} \nonumber
\begin{split}
K &:= - [R + \alpha B^\top \{ P + \alpha P \Xi (\lambda I - \alpha \Xi^\top P \Xi)^{-1} \Xi^\top P \} B ]^{-1}\\
& \times \alpha B^\top P^\top [ I + \alpha \Xi (\lambda I - \alpha \Xi^\top P \Xi)^{-1} \Xi^\top P ] A.
\end{split}
\end{equation}
Furthermore, if we let
\[
{w}_{\bm{x}}'^{(i)} := (\lambda I - \alpha \Xi^\top P \Xi )^{-1} [\alpha \Xi^\top P (A + BK)\bm{x} + \lambda \hat{w}^{(i)}],
\]
the deterministic stationary policy $\gamma' \in \Gamma'$, defined as
\[
\gamma' (\bm{x}) = \frac{1}{N} \sum_{i=1}^N \delta_{{w}'^{(i)}_{\bm{x}}} \quad \forall \bm{x} \in \mathbb{R}^n,
\]
is an optimal policy for Player II that generates a worst-case distribution for each $\bm{x} \in \mathbb{R}^n$.
\end{theorem}
Its proof is contained in Appendix~\ref{app:riccati}.
We first note that an optimal distributionally robust policy is \emph{linear} in the system state.
Furthermore, the control gain matrix $K$ is independent of the covariance matrix $\Sigma$ as in standard LQG.
The support elements $w_{\bm{x}}'^{(i)}$ of the worst-case distribution are affine in the system state.
More specifically, $w_{\bm{x}}'^{(i)}$ is obtained by scaling the $i$th data sample $\hat{w}^{(i)} \in \mathbb{R}^l$ by the factor of $(\lambda I - \alpha \Xi^\top P \Xi)^{-1} \lambda$ and shifting it by the vector $(\lambda I - \alpha \Xi^\top P \Xi)^{-1} \alpha \Xi^\top P (A+BK) \bm{x}$, which is linear in the system state.
Distributional robustness is controlled by the penalty parameter $\lambda$:
As $\lambda$ increases, the permissible deviation of $\mu_t$ from $\nu_N$ decreases. This is equivalent to decreasing the Wasserstein ball radius $\theta$ in the original DR-control setting.
Thus, by letting $\lambda$ tend to $+\infty$, the optimal distributionally robust policy for the LQ problem converges pointwise to the standard LQ optimal control policy.
\begin{proposition}
Suppose that $(A, B)$ is stabilizable and $(A, C)$ is observable, where $Q = C^\top C$.
Let $\bar{P}$ be the unique symmetric positive definite solution of
the following discrete algebraic Riccati equation:
\begin{equation}\label{re}
\bar{P} = Q + \alpha A^\top \bar{P} A - \alpha^2 A^\top \bar{P} B (R + \alpha B^\top \bar{P} B)^{-1} B^\top \bar{P} A,
\end{equation}
and let
\[
\bar{K} := -\alpha ( R + \alpha B^\top \bar{P} B)^{-1} B^\top \bar{P} A.
\]
Then, for each $\bm{x} \in \mathcal{X}$
\begin{equation}
\begin{split}
\pi'(\bm{x}) &\to \bar{K} \bm{x} \\
w_{\bm{x}}' &\to \hat{w}_{\bm{x}}
\end{split}
\end{equation}
as $\lambda \to \infty$, where $\pi'$ and $w_{\bm{x}}'$ are defined in Theorem~\ref{thm:riccati}.
\end{proposition}
\begin{proof}
Let $P_\lambda$ denote a symmetric positive semidefinite solution of \eqref{re_DR} given any fixed $\lambda \geq \bar{\lambda}$.
As $\lambda$ tends to $+\infty$, the right-hand side of \eqref{re_DR} tends to $Q + \alpha A^\top P_\lambda A - \alpha^2 A^\top P_\lambda B (R + \alpha B^\top P_\lambda B)^{-1} B^\top P_\lambda A$, which corresponds to the right-hand side of \eqref{re} with $\bar{P} = P_\lambda$.
Therefore, $P_\lambda$ solves the algebraic Riccati equation~\eqref{re} as $\lambda \to \infty$.
On the other hand, \eqref{re} admits a unique positive definite solution when $(A, C)$ is observable and $(A, B)$ is stabilizable~(e.g., \cite[Section~2.4]{Lewis2012}).
Thus, $P_\lambda$ converges to $\bar{P}$ as $\lambda \to \infty$.
Likewise, we can show that the feedback gain matrix $K$ and the worst-case distribution's support element $w_{\bm{x}}'^{(i)}$ (defined in Theorem~\ref{thm:riccati}) tend to $\bar{K}$ and $\hat{w}^{(i)}$, respectively, as $\lambda \to \infty$.
Therefore, the result follows.
\end{proof}
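A fixed-point iteration for the Riccati-type equation \eqref{re_DR}, together with the $\lambda \to \infty$ limit of the proposition, can be sketched as follows. Convergence of this naive iteration is assumed rather than proved, and the scalar matrices are purely illustrative:

```python
import numpy as np

def dr_riccati(A, B, Xi, Q, R, alpha, lam, iters=2000):
    """Fixed-point iteration for P = Q + alpha A'PA + alpha^2 A'S(P)A;
    lam must be large enough that lam*I - alpha*Xi'P Xi stays positive."""
    n = A.shape[0]
    P = Q.astype(float).copy()
    def pieces(P):
        M = np.linalg.inv(lam * np.eye(Xi.shape[1]) - alpha * Xi.T @ P @ Xi)
        E = np.eye(n) + alpha * Xi @ M @ Xi.T @ P
        G = np.linalg.inv(R + alpha * B.T @ (P + alpha * P @ Xi @ M @ Xi.T @ P) @ B)
        return M, E, G
    for _ in range(iters):
        M, E, G = pieces(P)
        S = P @ Xi @ M @ Xi.T @ P - E.T @ P @ B @ G @ B.T @ P @ E
        P = Q + alpha * A.T @ P @ A + alpha ** 2 * A.T @ S @ A
    _, E, G = pieces(P)
    K = -alpha * G @ B.T @ P.T @ E @ A   # feedback gain of the theorem
    return P, K

# Scalar illustration: A = B = Xi = Q = R = 1, alpha = 0.9.
A = B = Xi = Q = R = np.eye(1)
alpha = 0.9
P_pen, K_pen = dr_riccati(A, B, Xi, Q, R, alpha, lam=5.0)
P_inf, K_inf = dr_riccati(A, B, Xi, Q, R, alpha, lam=1e8)

# Standard discounted Riccati iteration (the lam -> infinity limit):
P_std = np.eye(1)
for _ in range(2000):
    G = np.linalg.inv(R + alpha * B.T @ P_std @ B)
    P_std = Q + alpha * A.T @ P_std @ A \
            - alpha ** 2 * A.T @ P_std @ B @ G @ B.T @ P_std @ A
G = np.linalg.inv(R + alpha * B.T @ P_std @ B)
K_std = -alpha * G @ B.T @ P_std @ A

# The penalized P exceeds the standard P; the large-lam one matches it.
print(float(P_pen), float(P_inf), float(P_std))
```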
\section{Numerical Experiments} \label{sec:exp}
\subsection{Investment-Consumption Problem}
We first demonstrate the performance and utility of DR-control through an investment-consumption problem~(e.g., \cite{Samuelson1969, Hakansson1970}).
Let $x_t$ be the wealth of an investor at stage $t$.
The investor wishes to decide the amount $u_{1,t}$ to be invested in a risky asset (with an i.i.d. random rate of return, $w_t$) and the amount $u_{2,t}$ to be consumed
at stage $t$.
The remaining amount $(x_t - u_{1,t} - u_{2,t})$ is automatically re-invested into a riskless asset with a deterministic rate of return, $\eta$.
Then, the investor's wealth evolves as
\[
x_{t+1} = \eta (x_t - u_{1,t} - u_{2,t}) + w_t u_{1,t}.
\]
We assume that the control actions $u_{1,t}$ and $u_{2,t}$ satisfy
the following constraints:
\[
u_{1,t} + u_{2,t} \leq x_t, \quad u_{1,t}, u_{2,t} \geq 0 \quad \forall t,
\]
i.e., $\mathcal{U}(\bm{x}) := \{ \bm{u} := (\bm{u}_1, \bm{u}_2) \in \mathbb{R}^2 \mid \bm{u}_1 + \bm{u}_2 \leq \bm{x}, \bm{u} \geq 0\}$.
The cost function is given by the following negative expected utility from consumption:
\[
J(\pi, \gamma) := - \mathbb{E}^{\pi, \gamma} \bigg [
\sum_{t=0}^\infty \alpha^t U(u_{2, t})
\bigg ],
\]
where the utility function $U: \mathbb{R} \to \mathbb{R}$ is selected as $U(c) = c - \zeta c^2$.
The following parameters are used in the numerical simulations: $\zeta = 0.25$, $\alpha = 0.9$, $\eta = 1.02$, and $p = 1$.
The data samples $\{ \hat{w}^{(1)}, \ldots, \hat{w}^{(N)} \}$ of $w_t$ are generated according to the normal distribution $\mathcal{N}(1.08, 0.1^2)$.
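A minimal roll-out of the wealth dynamics under a fixed fractional policy is sketched below. The fractional policy is illustrative only---the paper computes the optimal DR policy via convex optimization, not this heuristic:

```python
import numpy as np

eta, alpha, zeta = 1.02, 0.9, 0.25            # parameters from the text

def U(c):                                     # utility U(c) = c - zeta*c^2
    return c - zeta * c ** 2

def simulate(returns, frac_invest, frac_consume, x0=1.0):
    """Roll out x_{t+1} = eta(x_t - u1 - u2) + w_t u1 under a fixed
    fractional policy and accumulate the discounted utility."""
    x, total = x0, 0.0
    for t, w in enumerate(returns):
        u1, u2 = frac_invest * x, frac_consume * x   # u1 + u2 <= x holds
        total += alpha ** t * U(u2)
        x = eta * (x - u1 - u2) + w * u1
    return total                               # the cost is J = -total

rng = np.random.default_rng(0)
returns = rng.normal(1.08, 0.1, size=200)      # samples of the return w_t
print(round(simulate(returns, 0.3, 0.1), 4))   # realized discounted utility
```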
We numerically approximate the optimal value function $v^\star_{\hat{w}}$ and the corresponding optimal policy $\pi^\star_{\hat{w}}$ on a computational grid
by using the convex optimization approach in~\cite{Yang}.
This method
approximates the Bellman operator by the optimal value of a convex program with a uniform convergence property.
Furthermore, by using an auxiliary optimization variable to assign the contribution of each grid point to the next state, it does not require any explicit interpolation when evaluating the value function and control policy at states other than the grid points.
The numerical experiments were conducted on a Mac with a 4.2 GHz Intel Core i7 and 64GB RAM. The computation time for simulations with different grid sizes and $N=10$ is reported in Table~\ref{tab:0}.
For the rest of the simulations, we used 71 states (with grid spacing 0.02).
\begin{table}[tb]
\small
\centering
\caption{{Computation time (in seconds) for the investment-consumption problem with different grid sizes}\label{tab:0}}
\begin{tabular}{l*{10}{c}}
\# of states & 36 & 71 & 141 & 281 \\
\hline \hline
Time (sec) & 288.69 & 854.61 & 2086.15 & 9350.04 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=5.5in]{Investment}
\caption{Depending on the radius $\theta$ and the number of samples $N$,
(a) the reliability $\mu^N \{ \hat{w} | \mathbb{E}^{\pi_{\hat{w}}^\star}_{w_t \sim \mu} [ \sum_{t=0}^\infty \alpha^t c (x_t, u_t)| x_0 = \bm{x} ] \leq v_{\hat{w}}^\star (\bm{x}) \}$, and (b)
the out-of-sample performance (cost) of $\pi_{\hat{w}}^\star$.}
\label{fig:investment}
\end{center}
\end{figure}
\subsubsection{Out-of-sample performance guarantee}
To demonstrate the out-of-sample performance guarantee of an optimal distributionally robust policy,
we compute the following \emph{reliability} of $\pi_{\hat{w}}^\star$:
\[
\mu^N \bigg \{ \hat{w} \mid \mathbb{E}^{\pi_{\hat{w}}^\star}_{w_t \sim \mu} \bigg [ \sum_{t=0}^\infty \alpha^t c (x_t, u_t) \mid x_0 = \bm{x} \bigg ] \leq v_{\hat{w}}^\star (\bm{x}) \bigg \},
\]
which represents the probability that the expected cost incurred by $\pi_{\hat{w}}^\star$ under the true distribution $\mu$ is no greater than $v_{\hat{w}}^\star (\bm{x})$.
As shown in Fig. \ref{fig:investment} (a), the reliability increases with the Wasserstein ball radius $\theta$ and the number $N$ of samples.
This result is consistent with Theorem~\ref{thm:osg}.
Our numerical experiments also confirm that the same radius $\theta$ can be used to achieve the same level of reliability in both single-stage and multi-stage settings as indicated in the theorem.
Fig. \ref{fig:investment} (b) illustrates the out-of-sample cost~\eqref{osp} of $\pi_{\hat{w}}^\star$ with respect to $\theta$ and $N$.
Interestingly, the out-of-sample cost does not monotonically decrease with $\theta$.\footnote{This observation is consistent with the single-stage case in Section 7.2 of \cite{Esfahani2015}.}
For a too-small radius, the resulting DR-policy is not sufficiently robust to obtain the best out-of-sample performance (i.e., the least out-of-sample cost).
On the other hand, if a too-large Wasserstein ambiguity set is selected,
the resulting DR-policy is overly conservative and thus sacrifices the closed-loop performance.
Thus, there exists an optimal radius (e.g., $0.02$ in the case of $N = 20$) that provides the best out-of-sample performance.
\subsubsection{Comparison to SAA}
To compare DR-control~\eqref{dr_opt} with SAA-control~\eqref{saa_opt},
we first compute the out-of-sample performance of $\pi_{\hat{w}}^\star$ and that of the corresponding optimal SAA policy $\pi_{\hat{w}}^{\tiny \mbox{SAA}}$ obtained by using the same training dataset $\hat{w}$.
The radius is selected as the one that provides the best out-of-sample performance.
As shown in Fig.~\ref{fig:investment_SAA}, the proposed DR-policy achieves 8\% lower out-of-sample cost than the SAA-policy when $N = 10$.
As expected, the gap between the two decreases with the number of samples.
Note that the proposed DR-policy designed even with a small number of samples ($N=10$)
maintains its performance under the test dataset, which is generated independently of the training dataset,
unlike the corresponding SAA-policy.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=2.7in]{SAA_osp}
\caption{The out-of-sample performance (cost) of the optimal SAA policy $\pi_{\hat{w}}^{\tiny \mbox{SAA}}$ ($\circ$) and the optimal distributionally robust policy $\pi_{\hat{w}}^\star$ ($\diamond$) depending on $N$.}
\label{fig:investment_SAA}
\end{center}
\end{figure}
\subsection{Power System Frequency Control Problem}
Consider an electric power transmission system with $N$ buses (and $\bar{n}$ generator buses).
This system may be subject to ambiguous uncertainty generated from variable renewable energy sources such as wind and solar.
For the frequency regulation of this system,
we use the proposed Wasserstein penalty method to control
the mechanical power input of each generator.
Let $\bm{\theta}_i$ and $P_{m, i}$ be the voltage angle (in radians) and the mechanical power input (in per unit), respectively, at generator bus $i$.
The swing equation of this system is then given by
\begin{equation}\label{swing}
M_i \ddot{\bm{\theta}}_i (t) + D_i \dot{\bm{\theta}}_i (t) = P_{m,i} (t) - P_{e, i} (t) \quad \forall i = 1, \ldots, \bar{n},
\end{equation}
where $M_i$ and $D_i$ denote the inertia coefficient (in pu$\cdot$sec$^2$/rad) and the damping coefficient (in pu$\cdot$sec/rad) of the generator at bus $i$.
Here, $P_{e,i}$ is the electrical active power injection (in per unit) at bus $i$ and is given by
$P_{e, i} := \sum_{j=1}^N | V_i| |V_j | (G_{ij} \cos ({\bm \theta}_i - {\bm \theta}_j) + B_{ij} \sin ( {\bm \theta}_i - {\bm \theta}_j) )$,
where $G_{ij}$ and $B_{ij}$ are the conductance and susceptance of the transmission line connecting buses $i$ and $j$, respectively, and $V_i$ is the voltage at bus $i$.
Assuming that all the voltage magnitudes are $1$ per unit, the angle differences $|\bm{\theta}_i - \bm{\theta}_j|$'s are small, and
all the transmission lines are (almost) lossless,
the AC power flow equation can be approximated by the following linearized DC power flow equation:
\begin{equation}\label{dc}
P_{e, i} := \sum_{j=1}^N B_{ij} ({\bm \theta}_i - {\bm \theta}_j) \quad \mbox{or} \quad P_{e} = L \bm{\theta},
\end{equation}
where $P_e := (P_{e,1}, \ldots, P_{e, \bar{n}})$,
$\bm{\theta} := (\bm{\theta}_{1}, \ldots, \bm{\theta}_{\bar{n}})$, and
$L \in \mathbb{R}^{\bar{n} \times \bar{n}}$ is the Kron-reduced Laplacian matrix of this power network.\footnote{The Kron reduction is used to express the system in the reduced dimension $\bar{n}$ by focusing on the interactions of the generator buses~\cite{Bergen1999}.
More precisely, we can obtain the Kron-reduced admittance matrix $Y^{\mbox{\tiny Kron}}$, by eliminating nongenerator bus $k$, as
$Y_{ij}^{\mbox{\tiny Kron}} := Y_{ij} - Y_{ik} Y_{kj} / Y_{kk}$ for all $i, j = 1, \ldots, N$ such that $i, j\neq k$.
The Kron-reduced Laplacian can then be obtained by setting
$L_{ii} := \sum_{k=1, \ldots, \bar{n} : k\neq i} B_{ik}^{\mbox{\tiny Kron}}$ and
$L_{ij} := -B_{ij}^{\mbox{\tiny Kron}}$ for $i \neq j$ , where $B^{\mbox{\tiny Kron}}$ denotes the susceptance of the Kron-reduced admittance matrix~\cite{Dorfler2013}.
}
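The elimination formula in the footnote can be applied bus by bus; a small sketch on an illustrative three-bus path network (not the 39-bus case):

```python
import numpy as np

def kron_reduce(Y, keep):
    """Kron reduction: eliminate the buses not in `keep` one at a time
    via Y_ij <- Y_ij - Y_ik Y_kj / Y_kk (the footnote's formula)."""
    Y = np.asarray(Y, dtype=float).copy()
    elim = [k for k in range(Y.shape[0]) if k not in keep]
    for k in sorted(elim, reverse=True):   # delete from the end so indices stay valid
        Y = Y - np.outer(Y[:, k], Y[k, :]) / Y[k, k]
        Y = np.delete(np.delete(Y, k, axis=0), k, axis=1)
    return Y

# Path network 1 - 2 - 3 with unit-susceptance lines; eliminate bus 2.
L = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
print(kron_reduce(L, keep={0, 2}))
# [[ 0.5 -0.5]
#  [-0.5  0.5]]   (two unit lines in series give susceptance 1/2)
```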
Let $x(t) := ( \bm{\theta}(t)^\top, \dot{\bm{\theta}}(t)^\top )^\top \in \mathbb{R}^{2\bar{n}}$ and $u(t) := P_{m} (t) \in \mathbb{R}^{\bar{n}}$.
By combining \eqref{swing} and \eqref{dc}, we obtain the following state-space model of the power system~(e.g., \cite{Fazelnia2017}):
\[
\dot{x} (t)
=
\begin{bmatrix}
0 & I \\
-M^{-1} L & M^{-1} D
\end{bmatrix}
x(t)
+
\begin{bmatrix}
0\\
M^{-1}
\end{bmatrix}
u(t),
\]
where $M := \mbox{diag}(M_1, \ldots, M_{\bar{n}})$ and
$D := \mbox{diag}(D_1, \ldots, D_{\bar{n}})$.
We discretize this system using zero-order hold on the input and a sampling time of $0.1$ seconds
to
obtain the matrices $A$ and $B$ of the following discrete-time system model~\eqref{lin_sys}:
\[
x_{t+1} = A x_t + B (u_t + w_t)
\]
where $w_{i,t}$ is the random disturbance (in per unit) at bus $i$ at stage $t$.
It can model uncertain power injections generated by solar or wind energy sources.
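The zero-order-hold discretization described above can be carried out with the augmented matrix exponential $\exp\big(\begin{bsmallmatrix} A_c & B_c \\ 0 & 0 \end{bsmallmatrix} \Delta t\big)$, whose top blocks are the discrete-time $A$ and $B$. A sketch (a truncated Taylor series stands in for \texttt{scipy.linalg.expm}, which is the standard choice):

```python
import numpy as np

def expm_series(M, terms=40):
    """Matrix exponential via a truncated Taylor series (adequate for
    the small ||M * dt|| here; scipy.linalg.expm is the usual tool)."""
    E, T = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

def zoh_discretize(Ac, Bc, dt):
    """Zero-order-hold discretization: expm([[Ac, Bc], [0, 0]] * dt)
    packs the discrete A (top-left) and B (top-right)."""
    n, m = Ac.shape[0], Bc.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = Ac, Bc
    Phi = expm_series(M * dt)
    return Phi[:n, :n], Phi[:n, n:]

# Sanity check on a double integrator (one angle/frequency pair), dt = 0.1 s:
Ac = np.array([[0.0, 1.0], [0.0, 0.0]])
Bc = np.array([[0.0], [1.0]])
Ad, Bd = zoh_discretize(Ac, Bc, dt=0.1)
print(Ad)  # [[1.0, 0.1], [0.0, 1.0]]
print(Bd)  # [[0.005], [0.1]]
```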
The state-dependent portion of the quadratic cost function \eqref{quad_cost} is chosen as
\[
x^\top Q x :=
\bm{\theta}^\top [I - \bold{1} \bold{1}^\top/\bar{n} ] \bm{\theta} + \frac{1}{2} \dot{\bm\theta}^\top M \dot{\bm\theta},
\]
where $\bold{1}$ denotes the $\bar{n}$-dimensional vector of all ones, the first term measures the deviation of rotor angles from their average $\bar{\bm{\theta}} := \bold{1}^\top \bm{\theta}/\bar{n}$, and
the second term corresponds to the kinetic energy stored in the electro-mechanical generators~\cite{Dorfler2014}.
The matrix $R$ is chosen to be the $\bar{n}$ by $\bar{n}$ identity matrix.
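The stated cost corresponds to the block-diagonal weight $Q = \mathrm{blkdiag}(I - \bold{1}\bold{1}^\top/\bar{n},\, M/2)$. A quick check (the inertia values are illustrative) that $Q$ is positive semidefinite and that uniform angle shifts are costless:

```python
import numpy as np

n_bar = 10                                   # generator buses (39-bus case)
M = np.diag(np.linspace(2.0, 6.0, n_bar))    # illustrative inertia values

# Q = blkdiag(I - 11'/n, M/2) realizes
# x'Qx = theta'[I - 11'/n]theta + (1/2) thetadot' M thetadot.
ones = np.ones((n_bar, 1))
Q_theta = np.eye(n_bar) - ones @ ones.T / n_bar
Q = np.block([[Q_theta, np.zeros((n_bar, n_bar))],
              [np.zeros((n_bar, n_bar)), 0.5 * M]])

# Q is PSD, and a uniform angle shift theta = c*1 costs nothing:
print(np.linalg.eigvalsh(Q).min() >= -1e-12)         # True
print(np.allclose(Q_theta @ np.ones(n_bar), 0.0))    # True
```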
The IEEE 39-bus New England test case (with 10 generator buses, 29 load buses, and 40 transmission lines) is used to demonstrate
the performance of the proposed LQ control $\pi_{\hat{w}}'$ with Wasserstein penalty.
The initial values of voltage angles $\bm{\theta}(0)$ are determined by solving the (steady-state) power flow problem using MATPOWER~\cite{Zimmerman2011}.
The initial frequency is set to be zero for all buses except bus 1 at which $\dot{\bm{\theta}}_1 (0) := 0.1$ per unit.
We use $\alpha = 0.9$ in all simulations.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=5.5in]{freq_stability}
\caption{The box plot
of frequency deviation $\dot{\bm{\theta}}_{10}$ controlled by (a) the standard LQG control policy $\pi_{\hat{w}}^{\mathrm{\tiny LQG}}$, and (b) the optimal DR-control policy $\pi_{\hat{w}}'$ with Wasserstein penalty, under the worst-case distribution policy. }
\label{fig:freq}
\end{center}
\end{figure}
\begin{table*}[tb]
\small
\centering
\caption{The amount of time (in seconds) required to decrease and maintain the mean frequency deviation less than $1\%$}\label{tab:1}
\begin{tabular}{l*{10}{c}}
Bus & 1 & 2 & 3 & 4 & 5 & 6 & 7 &8 &9 &10\\
\hline \hline
$\pi^{\mathrm{\tiny LQG}}_{\hat{w}}$ & 73.5 & 70.3 & 59.3 & 21.5 & 21.5 & 24.2 & 21.3 & 62.5 & 36.5 &27.7 \\
$\pi'_{\hat{w}}$ & 25.0 & 24.2 & 19.8 & 12.4 & 12.3 & 11.6 & 12.2 & 20.8 & 14.3 & 14.3 \\
\hline
\end{tabular}
\end{table*}
\subsubsection{Worst-case distribution policy}
We first compare the standard LQG control policy $\pi_{\hat{w}}^{\mathrm{\tiny LQG}}$ and
the proposed DR-control policy $\pi_{\hat{w}}'$ with the Wasserstein penalty under the worst-case distribution policy~$\gamma_{\hat{w}}'$ obtained by using the proof of Theorem~\ref{thm:riccati}.
We set $N = 10$ and $\lambda = 0.03$.
The i.i.d. samples $\{\hat{w}^{(i)}\}_{i=1}^N$ are generated according to the normal distribution $\mathcal{N}(0, 0.1^2I)$.
As depicted in Fig.~\ref{fig:freq},\footnote{The central bar on each box indicates the median; the bottom and top edges of the box indicate the 25th and 75th percentiles, respectively; and the `+' symbol represents the outliers.} $\pi_{\hat{w}}'$ is less sensitive than $\pi_{\hat{w}}^{\mathrm{\tiny LQG}}$ to the worst-case distribution policy.\footnote{The frequency deviation at other buses displays a similar behavior.}
In the $[0, 24]$ (seconds) interval, the frequency controlled by $\pi_{\hat{w}}^{\mathrm{\tiny LQG}}$ fluctuates around non-zero values while $\pi_{\hat{w}}'$
maintains the frequency fluctuation centered approximately around zero.
This is because the proposed DR-method takes into account the possibility of nonzero-mean disturbances, while the standard LQG method assumes zero-mean disturbances.
Furthermore, the proposed DR-method suppresses the frequency fluctuation much faster than the standard LQG method: under $\pi_{\hat{w}}'$, the mean frequency deviation averaged across the buses is less than 1\% for any time after 16.7 seconds.
On the other hand, if the standard LQG control is used,
it takes 41.8 seconds to bring
the mean frequency deviation (averaged across the buses) below 1\%.
The detailed results for each bus are reported in Table~\ref{tab:1}.
\subsubsection{Out-of-sample performance guarantee}
We now examine the out-of-sample performance of $\pi_{\hat{w}}'$
and how it depends on the penalty parameter $\lambda$ and the number $N$ of samples.
The i.i.d. samples $\{\hat{w}^{(i)}\}_{i=1}^N$ are generated according to the normal distribution $\mathcal{N}(0, I)$.
Given $\lambda$ and $N$, we define the \emph{reliability} of $\pi_{\hat{w}}'$ as
\[
\mu^N \bigg \{ \hat{w} \mid \mathbb{E}^{\pi_{\hat{w}}'}_{w_t \sim \mu} \bigg [ \sum_{t=0}^\infty \alpha^t c (x_t, u_t) \mid x_0 = \bm{x} \bigg ] \leq v_{\hat{w}}' (\bm{x}) \bigg \}.
\]
As shown in Fig.~\ref{fig:power_reliability},
the reliability decreases with $\lambda$.
This is because when using larger $\lambda$,
the control policy $\pi_{\hat{w}}'$ becomes less robust against the deviation of the empirical distribution from the true distribution.
Increasing $\lambda$ has the effect of decreasing the radius $\theta$ in DR-control.
In addition, the reliability tends to increase as the number $N$ of samples used to design $\pi_{\hat{w}}'$
increases.
This result is consistent with the dependency of the DR-control reliability on the number of samples.
By using this result, we can determine the penalty parameter to attain a desired out-of-sample performance guarantee (or reliability), given the number of samples.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=2.7in]{Power_Reliability}
\caption{The reliability $\mu^N \{ \hat{w} \mid \mathbb{E}^{\pi_{\hat{w}}'}_{w_t \sim \mu} [ \sum_{t=0}^\infty \alpha^t c (x_t, u_t)| x_0 = \bm{x} ] \leq v_{\hat{w}}' (\bm{x}) \}$, in the Wasserstein penalty case, depending on $\lambda$ and $N$.}
\label{fig:power_reliability}
\end{center}
\end{figure}
\section{Conclusions}
In this paper, we considered distributionally robust stochastic control problems with Wasserstein ambiguity sets by directly using the data samples of uncertain variables.
We showed that the proposed framework has several salient features, including $(i)$ computational tractability with error bounds, $(ii)$ an out-of-sample performance guarantee, and $(iii)$ an explicit solution in the LQ setting.
It is worth emphasizing that the Kantorovich duality principle plays a critical role in our DP solution and analysis.
Furthermore, with regard to the out-of-sample performance guarantee,
our analysis provides the unique insight that
the contraction property of the Bellman operators
extends a single-stage guarantee---obtained using a measure concentration inequality---to the corresponding multi-stage guarantee without any degradation in the confidence level.
There are currently more than three thousand confirmed extrasolar planets, many of which were discovered using the \textit{Kepler} space telescope via the transit method. This method has revolutionized our view of planets and the potential for discovering life in the Universe.
Planet transit observations are now so precise that it is possible to characterize the composition and structure of extrasolar planets \citep{Seager2010}. As more powerful telescopes and satellites become available and surveys are launched, it is expected that in the next decade more Earth-like planets will be discovered that will potentially detect the presence of biomarkers in the atmospheres of extrasolar planets \citep{Rauer2014, Ricker2015}.
However, even with all of the progress made in the past decade, there remain a number of challenges. One such challenge is that analyzing planetary transit light curves requires understanding stellar limb darkening, also called the center-to-limb intensity variation (CLIV). The CLIV is the observed change of intensity from the center to the edge of the stellar disk. \cite{Mandel2002} developed an analytic model of a planetary transit assuming a simplified parameterization of stellar CLIV, typically as either a quadratic limb-darkening law or a four-parameter law \citep{Claret2000}.
Representing the CLIV with a simplified limb-darkening law (LDL) has been a reasonable approach for understanding most planetary transit observations, but there are a number of examples where the measured limb darkening disagreed with that predicted from model stellar atmospheres. \cite{Kipping2011a, Kipping2011b} found that limb-darkening parameterizations measured for a sample of \emph{Kepler} transit observations were inconsistent with predictions, raising questions about the physics of stellar atmospheres along with our understanding of planetary transits. \cite{Howarth2011} argued that the differences were the result of the planet's orbit being inclined relative to our line of sight. In that case, the measured limb-darkening parameters differed because the transit observations probed only part of the CLIV whereas the LDLs from model stellar atmospheres are constructed from the entire CLIV. \cite{Howarth2011} was able to resolve those errors for some stars of that sample by fitting limb-darkening coefficients over only part of the CLIV.
In addition to the degeneracy created by the transit inclination,
the representation of the CLIV also impacts attempts to extract
information about the transiting planet's spectrum and composition from
the lightcurve. For example, there have been conflicting claims regarding the composition of the atmosphere of GJ~1214 from transit spectral observations \citep{Croll2011}. Using near-infrared transit spectra, \cite{Croll2011, Gillon2014} and \cite{Caceres2014} determined that the planet's atmosphere must have a small mean-molecular weight, but that result is contested by other observations \citep{Bean2011, Berta2012}.
Similarly, \cite{Hirano2016, Fukui2016} and others report precisions of the order of 1\% for measuring $R_{\rm{p}}/R_\ast$ for planets orbiting F-type stars. \cite{Almenara2015} reported precisions better than 1\% for planets orbiting an evolved metal-poor F-star. These results are very precise yet depend on their assumptions of stellar limb darkening. As such, can we be sure these measurements are accurate?
It is becoming increasingly apparent that the current two-, three- and four-parameter limb-darkening laws are simply inadequate for high precision planetary transit models. We showed \citep[][hereafter Paper 1]{Neilson2016a} that synthetic planetary transit light curves computed directly from model stellar atmosphere CLIV differ from light curves computed from best-fit limb-darkening laws for a solar-like star, where the only difference is the shape of the intensity profile employed. This shows that fitting errors in planetary transit observations do not come only from errors in the limb-darkening parameters but also from the assumption of a specific type of limb-darkening law. These errors range from about 100 to a few hundred parts-per-million and vary as a function of wavelength. Similar results were found independently by \cite{Morello2017}. Hence, assuming a simple limb-darkening law contaminates measurements of extrasolar planet spectra, oblateness and other phenomena.
Limb-darkening laws are not accurate representations of model stellar atmosphere CLIV, particularly near the limb of the star. \cite{Neilson2011} found that, for spherically symmetric model stellar atmospheres, currently favored quadratic limb-darkening laws fit the model CLIV poorly. The \cite{Claret2000} four-parameter law provides a more precise fit, but it is still of limited accuracy near the limb of the star. This result was confirmed for giant and supergiant stars ($\log g \le 3$) \citep{Neilson2013a} as well as for dwarf stars \citep{Neilson2013b}. Specifically, these laws fail for two reasons: the first is the more complex structure of the CLIV that prevents simple limb-darkening laws from fitting the intensity near the limb of the star, and the second is the inability of best-fit limb-darkening laws to accurately reproduce the stellar flux.
These two differences between model CLIV and best-fit limb-darkening laws cause the differences between synthetic planetary transit light curves found in Paper 1. Because the errors in best-fit limb-darkening are a function of stellar properties, it is likely that the errors introduced by assuming a simple limb-darkening parameterization are also a function of stellar properties. In this work, we present computed errors as a function of stellar properties and waveband for dwarf stars. These can be applied to planetary transit observations for the purpose of defining the systematic uncertainties of any fit, as well as determining their impact on additional phenomena such as spectral features and oblateness. In the next section we describe our models and how we measure the differences between synthetic planetary transit light curves computed directly from model CLIV and from limb-darkening laws. In Section~\ref{sec:radius}, we consider the definition of the stellar radius and its impact on our analysis. We present the errors for our model stellar atmosphere grids in Section~\ref{sec:errors} as a function of stellar properties, and we present our results in Sections~\ref{sec:errors_incline} and \ref{sec:correction}. We discuss the impact of these results in terms of the atmospheric extension of a star, i.e., the size of the atmosphere relative to the stellar radius, in Section~\ref{sec:extension}.
\section{Model stellar atmospheres}\label{sec:atmosphere}
Our analysis used the spherically symmetric model stellar
atmospheres from \cite{Neilson2013b}, which were computed using the
\textsc{SAtlas} codes \citep{Lester2008}. These models were computed for stellar masses spanning the range from
$M_\ast = 0.2$ to $1.4~M_\odot$ in steps of $\Delta M_\ast = 0.3~M_\odot$,
effective temperatures $T_{\rm{eff}} = 3500$ to $ 8000~$K in steps of
100~K and surface gravities $\log g = 4$ to $4.75$ in steps of 0.25 dex. This is equivalent to a range of luminosities from about 0.01 to $15~L_\odot$ and radii from $0.3$ to $2~R_\odot$.
For each model the stellar CLIV was computed at 329 wavelengths for one thousand points of $\mu$, where $\mu$ is the cosine of the angle formed by a point on the stellar disk and the disk center. The model atmosphere employed in Paper 1 is part of this grid of models.
The model CLIVs, integrated over the $BVRIJK$, {\it Kepler}- and
{\it CoRot}-wavebands, were used to compute the corresponding best-fit
limb-darkening coefficients for the quadratic limb-darkening law. We use these CLIVs, calculated using the methods described in Paper 1, and the corresponding best-fit coefficients to compute synthetic planetary transit light curves using the analytic prescription developed by \cite{Mandel2002} under the small-planet assumption, defined by
\begin{equation} \label{eq:def_rho}
\rho \equiv \frac{R_{\rm{p}}}{R_\ast} \leq 0.1.
\end{equation}
While the small-planet assumption is not perfect, we have shown that the \emph{difference} between light curves follows the same behavior regardless of planet radius. Furthermore, all we are truly modeling is the difference between CLIV and limb-darkening as a function of $\mu$. We also note that \cite{Morello2017} found similar results using a different prescription for modeling planetary transits.
Using the synthetic planetary transit light curves computed for each model atmosphere using both the CLIV and limb-darkening coefficients, we compute the average difference and the greatest difference for each waveband and model stellar atmosphere for $\rho = 0.1$. The computed average difference between light curves acts as a measure of the systematic error of the fit for properties, such as relative planet radius, limb-darkening coefficients, and, potentially, secondary quantities such as planetary oblateness and star spots.
The computed flux differences are functions of $\rho$, defined in Equation~\ref{eq:def_rho}. To first order the difference can be written as
\begin{equation}\label{eq:diff1}
\Delta f = (I_{\rm{CLIV}} - I_{\rm{LDL}}) \times \rho^2,
\end{equation}
where $I_{\rm{CLIV}}$ and $I_{\rm{LDL}}$ are the intensities from the CLIV and limb-darkening law, respectively. Because of the definition of $\rho$, the average difference scales as the surface area of the planet relative to the star. For example, if one measures $\rho = 0.05 $ and our model assumes $\rho = 0.1$, then the measured average error will be $(0.05/0.1)^2 = 0.25 \times$ the difference measured in this paper for the same stellar properties.
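The quadratic scaling above can be sketched in a short snippet (a minimal illustration; the function name and the 300~ppm example value are ours and are not tabulated quantities from this work):

```python
def scale_error(delta_f_model_ppm, rho_measured, rho_model=0.1):
    """Rescale a tabulated light-curve error, computed for a model
    planet-to-star radius ratio rho_model, to a measured ratio.
    To first order the flux difference scales with the planet's
    relative surface area, i.e. with rho**2 (Equation diff1)."""
    return delta_f_model_ppm * (rho_measured / rho_model) ** 2

# The worked example from the text: rho = 0.05 against a model rho = 0.1
# gives (0.05 / 0.1)**2 = 0.25 times the tabulated difference.
print(scale_error(300.0, 0.05))  # 75.0 ppm
```
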
We also compute the root-mean-square (RMS) flux error as a measure of how well the assumption of a quadratic limb-darkening law fits our more realistic CLIV planetary transit light curve.
\section{Definition of the Stellar Radius} \label{sec:radius}
In a model stellar atmosphere there is no ``edge'' that marks the radius of the star and the transition to empty space. There are several ways to define the stellar radius \citep{Baschek1991}, and we have chosen to use the Rosseland stellar radius, $R_{\rm{Ross}}$, defined as the radius where the Rosseland optical depth, $\tau_{\rm{Ross}}$, has a value of 2/3, because at that radius in the atmosphere a photon has a probability of $\approx 0.5$ of escaping to space without being absorbed. However, there is still some radiation emitted by the star from above this level, and the structure of these levels, and the radiation they emit, are different for our spherical models compared to plane-parallel models. Also, there are other definitions of the stellar radius that are commonly used. One is the limb-darkened radius, $R_{\rm{LD}}$, derived from where the disk visibility observed using optical interferometry goes to zero \citep{Wittkowski2004}, though it should be noted that interferometric visibilities are unreliable for visibilities less than $10^{-4}$ \citep{Baron2014}. To be clear, $R_{\rm{LD}}$ is greater than $R_{\rm{Ross}}$. In the analysis to follow, we will show that the exact definition of $R$ is inconsequential because we are comparing results found using the CLIV directly with results using a LDL representation of the same CLIV, and the definition of $R$ essentially cancels out.
In the next section we explore how the representations of the CLIV differ as a function of stellar properties and inclination.
As in Paper 1, we define the inclination in terms of $\mu$. The conventional definition of the orbital inclination angle, $i$, is the angle between the orbit plane and the plane of the sky, so that $i = 90^\circ$ is an orbit observed edge-on and $i = 0^\circ$ is an orbit observed face-on.
We define a new orbital inclination parameter
\begin{equation}\label{eq:theta0}
\theta_0 \equiv 90^\circ - i
\end{equation}
and scaling
\begin{equation}\label{eq:mu0}
\mu_0 \equiv \sqrt{1 - \left(\frac{a}{R_{\rm{LD}}}\right)^2 \sin^2 \theta_0},
\end{equation}
where $a/R_{\rm{LD}}$ is the normalized separation between the star and the planet. The purpose for these definitions is to allow a more direct connection between light curves as a function of inclination with CLIV and limb-darkening laws computed as a function of $\mu$.
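As a concrete sketch of this geometry, the snippet below converts a conventional inclination to $\mu_0$, assuming the relation $\mu_0 = \sqrt{1 - b^2}$ with impact parameter $b = (a/R_\ast)\cos i$ used in Section~\ref{sec:errors_incline}; the function names are illustrative only:

```python
import math

def impact_parameter(a_over_r, inclination_deg):
    """Impact parameter b = (a/R_*) cos(i) for a circular orbit."""
    return a_over_r * math.cos(math.radians(inclination_deg))

def mu0_from_inclination(a_over_r, inclination_deg):
    """Inclination parameter mu_0, assuming mu_0 = sqrt(1 - b^2);
    a grazing or non-transiting geometry (|b| >= 1) returns 0."""
    b = impact_parameter(a_over_r, inclination_deg)
    if abs(b) >= 1.0:
        return 0.0
    return math.sqrt(1.0 - b * b)

# An edge-on orbit (i = 90 deg) has b = 0 and mu_0 = 1;
# an impact parameter b ~ 0.95 corresponds to mu_0 ~ 0.31.
print(round(mu0_from_inclination(10.0, 90.0), 3))
print(round(mu0_from_inclination(10.0, math.degrees(math.acos(0.095))), 3))
```
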
With the definition of $\rho$ given in Equation~\ref{eq:def_rho}, we need to return to the definition of the star's radius. In particular, how do we use the spherically symmetric model stellar atmosphere CLIV to fit planetary transit observations and measure the planet radius itself? We suggest two possibilities and reject a third.
The first possible solution follows if one uses the spherical model CLIV, or uses a limb-darkening law derived from fitting the spherical model CLIV. In either case, the approach is to fit the observations and then multiply the measured value of $\rho = R_{\rm{p}}/R_{\rm{LD}}$ by the factor $R_{\rm{LD}}/R_{\rm{Ross}}$ to transform $\rho$ to the Rosseland radius. Using the CLIV from the models makes $R_{\rm{LD}}/R_{\rm{Ross}}$ readily available.
The second option is to construct a planetary transit code that forces the edge of the stellar disk to be $R_{\rm{Ross}}$ such that $\mu = 0$ corresponds to the point $R_{\rm{LD}}$. However, this method also requires knowing the ratio between $R_{\rm{Ross}}$ and $R_{\rm{LD}}$, so the first option is preferred as being simpler for computation.
The third option, which we reject, is to clip the CLIV so that the contribution to the CLIV from the extended part of the atmosphere is removed, and then to rescale the CLIV so that $\mu = 0$ corresponds to $R_{\rm{Ross}}$ \citep{Claret2003, Espinoza2016, Claret2017}. This clipping can be done by knowing where the values $R_{\rm{Ross}}$ and $R_{\rm{LD}}$ are in the model that will be clipped, or by assuming that the clipping edge falls at the point in the CLIV where the derivative of the intensity with respect to $\mu$ is greatest. \cite{Aufdenberg2005} has shown that this is approximately the point corresponding to $R_{\rm{Ross}}$.
However, we reject this option because it removes information about the stellar atmosphere and its radiation properties. When we clip the CLIV, we remove information about atmospheric extension and make the CLIV more plane-parallel-like. Furthermore, clipping the CLIV and rescaling the intensity profile will increase the moments of the intensity, in particular the stellar flux. If the stellar flux is increased in a planetary transit fit then the corresponding value of $\rho$ will be smaller. As such, when one clips the CLIV to get a better fit one creates both an inconsistency in the stellar models and biases the fit to smaller values of $\rho$.
Regardless of the method used to incorporate spherically symmetric model stellar atmospheres into fits of transit light curves, the result remains the same. One can either use model knowledge of $R_{\rm{Ross}}/R_{\rm{LD}}$ to improve the analysis or one can continue to use geometrically-unrealistic models or models with inconsistent fluxes due to clipping that will bias any analysis. For the sake of this work, the issue is not of consequence since we will show that the analysis is a \emph{relative} comparison.
\section{Measuring the errors}\label{sec:errors}
\cite{Neilson2013a, Neilson2013b} found that the errors produced by fitting limb-darkening laws to spherically symmetric model stellar atmosphere CLIV varied as a function of atmospheric extension. The extension can be represented as
\begin{equation}\label{eq:atmos_extension}
H_{\rm{p}}/R_\ast \propto T_{\rm{eff}}R_\ast/M_\ast = T_{\rm{eff}}/(gR_\ast)= T_{\rm{eff}}/\sqrt{gM_\ast}
\end{equation}
\citep{Baschek1991, Bessell1991, Neilson2016b}. This extension, also referred to as the stellar mass index (SMI) by \cite{Neilson2016b}, is important because it indicates how the structure of the CLIV changes near the edge of the stellar disk. Because the errors for fitting limb-darkening grow as a function of this extension, we expect the average difference between synthetic light curves also to increase as a function of atmospheric extension.
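For reference, the extension parameter of Equation~\ref{eq:atmos_extension} can be evaluated directly from basic stellar properties. The sketch below assumes solar units and normalizes the effective temperature to a solar reference value of 5777~K; that normalization is our choice for illustration only:

```python
def stellar_mass_index(teff_k, radius_rsun, mass_msun, teff_sun=5777.0):
    """Atmospheric-extension proxy H_p/R_* ~ T_eff R_* / M_*
    (Equation atmos_extension), evaluated in solar units with the
    effective temperature normalized to a solar reference value."""
    return (teff_k / teff_sun) * radius_rsun / mass_msun

# A hot, slightly evolved dwarf is more extended than a solar-type star:
print(stellar_mass_index(8000.0, 2.0, 1.4) > stellar_mass_index(5777.0, 1.0, 1.0))
```
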
Before we explore the dependence of the limb darkening on the parameterization of the atmospheric extension, we first consider, for the case of edge-on inclination, $i=90^\circ$, how the average differences change independently as a function of effective temperature, gravity and stellar mass. Under these conditions we plot the errors for the {\it Kepler}- and $K$-bands, although we have also computed these differences for the $BVRIH$- and \emph{CoRot}-bands.
In Figure~\ref{f1} we plot the average flux difference between the CLIV and the best-fit quadratic limb-darkening law for an entire transit and the greatest difference during the transit as a function of effective temperature. It is notable that these differences trend toward greater values with increasing effective temperatures. Hence, hotter stars with transiting planets will have greater systematic uncertainties, up to 300~ppm for the {\it Kepler}-band and 600~ppm for the $K$-band. This error in flux, $\Delta f = f_{\rm{CLIV}} - f_{\rm{LDL}}$, is also an error in the surface area of the planet relative to the star, which, for the small planet approximation is $\rho^2=0.01$, hence the errors reach about 3\% and 6\% in the {\it Kepler}- and $K$-bands, respectively.
\begin{figure*}[t]
\begin{center}
\plottwo{f1a.eps}{f1b.eps}
\end{center}
\caption{(Left) Average differences between synthetic planetary transit light curves computed using model stellar atmosphere CLIV and using best-fit quadratic limb-darkening laws as a function of effective temperature for the {\it Kepler}-band (top) and $K$-band (bottom). (Right) Same as the left panels but for the RMS difference of the light curves.}\label{f1}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\plottwo{f2a.eps}{f2b.eps}
\end{center}
\caption{(Left) Average differences between synthetic planetary transit light curves computed using model stellar atmosphere CLIV and using best-fit quadratic limb-darkening laws as a function of stellar gravity for the {\it Kepler}-band (top) and $K$-band (bottom). (Right) Same as the left panels but for the RMS difference of the light curves.}\label{f2}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\plottwo{f3a.eps}{f3b.eps}
\end{center}
\caption{(Left) Average differences between synthetic planetary transit light curves computed using model stellar atmosphere CLIV and using best-fit quadratic limb-darkening laws as a function of stellar mass for the {\it Kepler}-band (top) and $K$-band (bottom). (Right) Same as the left panels but for the RMS difference of the light curves.}\label{f3}
\end{figure*}
The errors plotted in Figure~\ref{f1} do have a weak trend with effective temperature, but there is an even more significant spread in the errors, by as much as 200~ppm, at every effective temperature. Because of this spread, we plot the errors as a function of $\log g$ in Figure~\ref{f2}.
The errors show essentially no dependence on surface gravity,
with just a very slight increase for lower gravity model atmospheres.
This weak dependence on gravity is disappointing because the
gravity-jitter relation \citep{Bastien2013, Bastien2014} would provide a
quick and simple connection to the errors if they were more sensitive to
the surface gravity. Figure~\ref{f3} plots the errors as a function of stellar mass, which is a component of the surface gravity, showing that there is more of a trend, with the greatest differences occurring for the smallest stellar masses.
The results of these three plots imply that the predicted errors trend toward greater absolute values for hotter effective temperatures and smaller masses, with at most a weak dependence on surface gravity. To test this we use the definition of atmospheric extension given in Equation~\ref{eq:atmos_extension} expressed in solar units. In Figure~\ref{f4}, we plot the errors versus the atmospheric extension and find there is a trend, though the range of atmospheric extensions is relatively small for these dwarf stars. \cite{Neilson2016b} computed atmospheric extensions for red giant and supergiant model stellar atmospheres that reach a few hundred $R_\odot/M_\odot$. In Figure~\ref{f4}, there appear to be two groups: one with larger scatter that contains most of the models in the sample, and a second, smaller group with the smallest errors. That latter group corresponds to effective temperatures $\le 3700$~K, which likely reflects a shift in the dominant opacities in the model stellar atmospheres.
\begin{figure*}[t]
\begin{center}
\plottwo{f4a.eps}{f4b.eps}
\end{center}
\caption{(Left) Average differences between synthetic planetary transit light curves computed using model stellar atmosphere CLIV and using best-fit quadratic limb-darkening laws as a function of atmospheric extension for the {\it Kepler}-band (top) and $K$-band (bottom). (Right) Same as the left panels but for the RMS difference of the light curves. Points denoted by black circles are for model stellar atmospheres with $T_{\rm{eff}} \le 3700$~K.}\label{f4}
\end{figure*}
The key result from Figure~\ref{f4} is that the errors between the atmosphere's actual CLIV and the limb-darkening law representation of this CLIV grows as a function of atmospheric extension. This result is consistent with the predictions of \cite{Neilson2013b} that the best-fit limb-darkening coefficients fit the CLIV of model atmosphere most poorly when the models have the greatest extension. The difference between planetary transit light curves computed using model CLIV and those computed using best-fit limb-darkening coefficients is tracing the quality of the fit of those best-fit limb-darkening coefficients.
The greatest differences correspond to the greatest atmospheric extensions, hence the hottest and most evolved stars in our sample with $T_{\rm{eff}} \rightarrow 8000$~K and $\log g \rightarrow 4.0$. That is, the greatest differences correspond to evolved main sequence F-type stars. There have been numerous planet transit detections around F-type stars \citep{Gandolfi2012, Smalley2012, Bayliss2013, Huang2015, Fukui2016} and many of those exoplanets appear to be `bloated' hot Jupiters. Understanding the errors introduced by assuming simple limb-darkening laws could resolve some of this `bloating', especially since we found in Paper 1 that those differences increase when we consider orbits that are inclined from edge on.
\section{The errors as a function of orbital inclination} \label{sec:errors_incline}
In this section, we explore how the differences between synthetic planetary transit light curves computed using model stellar atmosphere CLIV and those computed using best-fit limb-darkening coefficients change as a function of orbital inclination. We represent the inclination using $\mu_0$, defined in Equation~\ref{eq:mu0}, with an edge-on orbit having $\mu_0 = 1$ and a face-on orbit having $\mu_0 = 0$. In Paper 1, we found that the differences between light curves can increase with increasing inclination until $\mu_0 \approx 0.3$, which corresponds to $\theta_0 \approx 70^\circ$, $i \approx 20^\circ$ and impact parameter $b \approx 0.95$, $b \equiv (a/R_\ast)\cos i$, where $a/R_\ast$ is the orbital separation relative to the radius of the star. As a result, for most orbits a change in inclination will lead to greater errors, and the maximum differences between light curves depend on the inclination even more.
\begin{figure*}[t]
\begin{center}
\plottwo{f5a.eps}{f5b.eps}
\end{center}
\caption{(Left) Average differences between synthetic planetary transit light curves computed using model stellar atmosphere CLIV and using best-fit quadratic limb-darkening laws as a function of effective temperature for the {\it Kepler}-band (top) and $K$-band (bottom). (Right) Same as the left panels but for the RMS difference of the light curves. The red crosses represent transits with $\mu_0 = 1$, blue stars $\mu_0 = 0.7$ and black open squares $\mu_0 = 0.3$. The spread of the average differences and RMS values for each $\mu_0$ arises from variations in stellar mass and gravity for a given stellar effective temperature.}\label{f5}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\plottwo{f6a.eps}{f6b.eps}
\end{center}
\caption{(Left) Average differences between synthetic planetary transit light curves computed using model stellar atmosphere CLIV and using best-fit quadratic limb-darkening laws as a function of atmospheric extension for the {\it Kepler}-band (top) and $K$-band (bottom). (Right) Same as the left panels but for the RMS difference of the light curves. The red crosses represent transits with $\mu_0 = 1$, blue stars $\mu_0 = 0.7$ and black open squares $\mu_0 = 0.3$. }\label{f6}
\end{figure*}
We test the role of orbital inclination by first plotting the errors as a function of effective temperature in Figure~\ref{f5}. The errors due to assuming limb-darkening laws increase as a function of orbital inclination. In the {\it Kepler}-band, the average difference between light curves shifts from about $-300$~ppm for $\mu_0 = 1$ to $-400$~ppm for $\mu_0 = 0.7$ and to $-600$~ppm for $\mu_0 = 0.3$. In the $K$-band, the effect is even more significant, with differences up to $-700$~ppm and $-1000$~ppm for the hottest stars. As such, we are seeing how strongly the fitted light curve and the assumed limb-darkening law depend on each other \citep{Howarth2011}.
The maximum differences also grow as a function of orbital inclination. As $\mu_0$ decreases from unity to zero, the maximum difference reaches almost 1600~ppm and 2600~ppm in the {\it Kepler}- and $K$-bands, respectively. Those differences correspond to about 16\% and 26\% of the surface area of the assumed planet, hence about 8\% and 13\% of the planet radius since $\delta A/A = 2\,\delta R_{\rm{p}}/R_{\rm{p}}$. For more inclined orbits, we are finding errors that are a significant fraction of the relative planet size.
In Figure~\ref{f6} we show the effect of atmospheric extension
on the difference between the CLIV and the quadratic limb-darkening law.
The results are surprising. In Figure~\ref{f4} we found that the average and maximum differences between synthetic planet transit light curves grow as a function of atmospheric extension. However, for more inclined orbits the average differences increase even more rapidly with atmospheric extension. When $\mu_0 = 0.3$ we find that the average difference reaches almost $-500$~ppm and $-1100$~ppm in the {\it Kepler}- and $K$-bands, respectively, at an atmospheric extension of $\approx 2~R_\odot/M_\odot$. This suggests that stars with planets on inclined orbits will have significant errors even at smaller atmospheric extensions.
These results offer distinct challenges for our understanding of planet transits and secondary effects, such as oblateness, rotation and spots. For instance, in the case of KIC~8462852 \citep{Boyajian2016} the transits have been explained by large families of orbiting comets (or dust clouds) \citep{Bodman2016}. Because that analysis ignores limb darkening in fitting the family of comets, if any of the orbits are inclined then the sizes required for the comets will be significantly wrong. As such, our results show that we must treat limb darkening in planetary transits with greater care and should move from assuming simple parameterizations to using more realistic models of CLIV.
\section{Correcting the planetary radius}\label{sec:correction}
While the average flux difference offers one measure of the error created by assuming a simple limb-darkening law, it does not directly measure the bias in the predicted planetary radius.
To address this, we start by defining the $\chi^2$ from the transit model as
\begin{equation}\label{chi1}
\chi^2 \equiv \sum_z \left[ f_{\rm{CLIV}}(\rho, z) - f_{\rm{LDL}}(\rho, z)\right]^2.
\end{equation}
In Equation~\ref{chi1}, $z$ is the projected separation between the center of the planet and the center of the star normalized by the stellar radius. At the edge of the stellar disk $z = 1$. This definition presents potential challenges for working with CLIV computed from stellar models with atmospheric extension, which we discuss in Section~\ref{sec:extension}.
We note that this assumes the small-planet approximation for simplicity, which will differ slightly from more exact methods. However, this analysis allows us to probe the order of magnitude of the predicted errors.
Because both the CLIV and the best-fit limb-darkening coefficients
use the same planet radius and inclination, this $\chi^2$ should ideally be a minimum. However, it is possible to gain improvement by varying some of the parameters. For instance, varying the limb-darkening coefficients changes the predicted stellar flux, which will change the transit depth and, hence, will lead to a different value of the planetary radius. However, this change in limb-darkening coefficients will compound errors in how we understand the host star. Similarly, varying the inclination creates biases in the measured limb-darkening coefficients that alter a fit in the same direction. For simplicity, we minimize the $\chi^2$ function using just the variation of the planetary radius.
\begin{figure*}[t]
\begin{center}
\plottwo{f1c.eps}{f4c.eps}
\end{center}
\caption{Predicted overestimate of the planetary radius relative to the actual planet size as a function of (Left) effective temperature and (Right) atmospheric extension for the {\it Kepler}- and $K$-bands. Points denoted by black circles in the right plot are for model stellar atmospheres with $T_{\rm{eff}} \le 3700$~K. The difference shown here is solely caused by assuming a quadratic limb-darkening law. }\label{fig:drho}
\end{figure*}
We start by perturbing the radius in the LDL light curve of Equation~\ref{chi1}
\begin{equation}\label{chi2}
\chi^2 \propto \sum_z \left[ f_{\rm{CLIV}}(\rho, z) - f_{\rm{LDL}}(\rho + \delta \rho, z)\right]^2.
\end{equation}
Next we assume the small-planet approximation,
\begin{equation}\label{eq:flux0}
f(\rho,z) = 1 - \rho^2 \frac{I^*}{4\Omega},
\end{equation}
is valid for both the CLIV and LDL light curves. In
Equation~\ref{eq:flux0} $4\Omega$ is the stellar flux and $I^*$ is the amount of flux blocked by the planet as it transits \citep{Mandel2002}. For the purpose of this perturbation, we ignore changes in $I^*$ as a function of planet radius as well as second-order changes in
$\delta \rho$. Therefore
\begin{eqnarray}\label{eq:fldl_drho}
f_\mathrm{LDL}(\rho + \delta \rho, z) & = & 1 - \rho^2 \frac{I^*}{4\Omega}
- 2 \frac{\delta \rho}{\rho} \rho^2 \frac{I^*}{4\Omega} \nonumber \\
& = & f_{\rm{LDL}}(\rho, z) - 2 \frac{\delta \rho}{\rho}\left[1 - f_{\rm{LDL}}(\rho,z)\right].
\end{eqnarray}
We now minimize the $\chi^2$-function, Equation~\ref{chi2}, with
respect to radius to get
\begin{equation}\label{eq:dchi2}
\frac{d\chi^2}{d\rho} = 2 \sum_z \left[ f_{\rm{CLIV}}(\rho, z) - f_{\rm{LDL}}(\rho + \delta \rho, z)\right]\frac{d f_{\rm{LDL}}}{d\rho} = 0.
\end{equation}
Again ignoring changes in $I^*$ as a function of $\rho$, the derivative of Equation~\ref{eq:flux0} gives
\begin{equation}\label{eq:dfldl}
\frac{d f_{\rm{LDL}}}{d\rho} = -2\rho \frac{I^*}{4\Omega} = - \frac{2}{\rho}\left[1 - f_{\rm{LDL}}(\rho,z)\right].
\end{equation}
Using Equations~\ref{eq:fldl_drho} and \ref{eq:dfldl} in
Equation~\ref{eq:dchi2} gives
\begin{eqnarray*}
&&\sum_z \left\{\left[ f_{\rm{CLIV}}(\rho, z) - f_{\rm{LDL}}(\rho, z) + 2\frac{\delta \rho}{\rho}\left(1 - f_{\rm{LDL}}(\rho,z)\right)\right] \right. \\
&&\left. \left[-\frac{2}{\rho}\left(1 - f_{\rm{LDL}}(\rho,z)\right)\right]\right\} = 0.
\end{eqnarray*}
Rearranging and solving for $\delta \rho/\rho$ leads to
\begin{equation}\label{eq:drho}
\frac{\delta \rho}{\rho} = \frac{ \sum_z \left[ (f_{\rm{LDL}} - f_{\rm{CLIV}})(1 - f_{\rm{LDL}}) \right]}{2\sum_z (1- f_{\rm{LDL}})^2}.
\end{equation}
This shows again that the average difference in flux offers a rough measure of the error of the fit that affects the predicted depth of the transit and hence the measured planet radius.
We note that Equation~\ref{eq:drho} appears to be an explicit function of $\rho$ since $f_{\rm{LDL}}$ and $f_{\rm{CLIV}}$ themselves also depend on $\rho$. However, if we insert Equation~\ref{eq:flux0} into Equation~\ref{eq:drho} the relative radius cancels leaving only terms of $I^*/4\Omega$. The amount of flux blocked by the planet, however, is implicitly dependent on the size of the planet, but to first order Equation~\ref{eq:drho} is independent of planet radius. While one could measure these differences using fitting codes, this analysis illustrates how the relative planet radius depends on understanding the limb darkening. Furthermore, we note that this relation implies that the result is independent of stellar radius in that we can replace $\delta \rho/\rho = \delta r_p/r_p$.
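For completeness, Equation~\ref{eq:drho} is straightforward to evaluate numerically from two sampled light curves. The sketch below is ours, assuming both curves are sampled on the same grid of projected separations $z$ and that the in-transit points are the ones summed:

```python
def radius_correction(f_cliv, f_ldl):
    """First-order fractional radius bias delta_rho / rho (Equation
    eq:drho) between a transit computed from the full CLIV and one
    computed from a best-fit limb-darkening law, with both light
    curves sampled on a common grid of projected separations."""
    num = sum((fl - fc) * (1.0 - fl) for fc, fl in zip(f_cliv, f_ldl))
    den = 2.0 * sum((1.0 - fl) ** 2 for fl in f_ldl)
    return num / den

# Toy example: if the limb-darkening-law transit is uniformly deeper by
# eps at every in-transit point, the estimator returns -eps / (2 * depth).
f_ldl = [1.0 - 0.010] * 5   # transit depth of 1%
f_cliv = [1.0 - 0.005] * 5  # CLIV transit shallower by eps = 0.005
print(radius_correction(f_cliv, f_ldl))  # approximately -0.25
```
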
Figure~\ref{fig:drho} shows the relative correction to the planet's radius due to assuming the quadratic limb-darkening law. This correction is the expected overestimation of the planet radius by fitting methods that assume this limb-darkening law. We plot the difference of the planet's radius as a function of effective temperature and atmospheric extension. These differences scale with approximately the same behavior as the plots of RMS($f_{\rm{CLIV}} - f_{\rm{LDL}}$). Furthermore, the differences in the planet's radius are significantly greater than indicated by the average flux difference by almost a factor of twenty in the {\it Kepler}-band and by more than a factor of twenty in the $K$-band. We see again that the correction to the planet's radius is also greatest for stars with the greatest effective temperature and atmospheric extensions.
As a result, we find that planetary radii can be overestimated by up to 7000~ppm in the near-IR and 3500~ppm in the optical {\it Kepler}-band. This overestimation is small relative to the assumed size of the planet, especially when the correction is expressed relative to the measured planet size. Assuming the small-planet approximation with $\rho = 0.1$, then $\delta \rho = 350$~ppm and $700$~ppm in the optical and near-IR, respectively.
The correction for the planetary radius increases with the inclination of the orbit, similar to that seen for the average flux difference. For instance, Figure~\ref{fig:drho-b} shows that the correction increases by an order of magnitude as $\mu_0 \rightarrow 0$. For the most inclined orbit, and assuming the best-fit limb-darkening coefficients for a quadratic limb-darkening law for each model, the radius correction is about 10\% of the actual planet radius for model stellar atmospheres with the greatest atmospheric extension. This correction will be even greater if we assume even simpler limb-darkening laws, such as a linear law, or a uniform-disk model (\emph{i.e.}, no limb darkening). Therefore, care must be taken to choose the appropriate limb-darkening parameterization or model when measuring precise values of extrasolar planet radii.
\begin{figure}[t]
\begin{center}
\plotone{f6c.eps}
\end{center}
\caption{Predicted overestimated value of planetary radius relative to the actual planet size as a function of atmospheric extension for different inclinations in the {\it Kepler}- and $K$-bands. The red crosses represent transits with $\mu_0 = 1$, blue stars $\mu_0 = 0.7$ and black open squares $\mu_0 = 0.3$. }\label{fig:drho-b}
\end{figure}
For interest, we also computed the error in the planetary radius if one assumes a star with no limb darkening, i.e., a uniform-disk model. In that case the planet radius is underestimated, rather than overestimated, by about 2--5\%. This check is consistent with the need for additional free parameters to precisely measure limb darkening and its impact on the exoplanet radius. However, our analysis and past works highlight the fact that the stellar CLIV cannot be accurately represented by simple functions \citep{Neilson2011, Neilson2013a, Neilson2013b}.
\section{Atmospheric extension, stellar radius and transits}\label{sec:extension}
The issue of understanding the stellar radius is important not just for planetary transit fits using spherical model atmospheres, but also for fits using plane-parallel models and limb-darkening laws. The geometry of plane-parallel models contains no information about the stellar radius. As such, it is merely assumed that measurements of stellar radii from asteroseismology or stellar evolution models correspond with the stellar radii in planetary transit fits. Similarly, best-fit limb-darkening laws based on plane-parallel models or fit directly to observations make the same assumption, and it is unclear that this is true. Spherically symmetric model stellar atmospheres explicitly contain information about the stellar radius, and therefore about the extension of the atmosphere. Because of the atmospheric extension, the edge of the star is at a physical radius that is not the Rosseland radius. As such, plane-parallel and spherical model stellar atmospheres should not be expected to give the same results when fit to planetary transit or interferometric observations because they make different assumptions about the structure of the photosphere. The challenges of measuring limb darkening and stellar radii (or angular diameters) have been discussed in detail by numerous authors \citep{Wittkowski2004, Neilson2008, Baron2014, Kervella2017}.
Just as the differences between plane-parallel and spherically symmetric model stellar atmospheres lead to different measurements of stellar radii, they are also fit with different precision by various limb-darkening laws. \cite{Neilson2013a,Neilson2013b} showed that six different commonly used limb-darkening laws fit plane-parallel models with much better precision than spherically symmetric models with the same fundamental parameters. The source of this difference is the point of inflection in spherically symmetric CLIV that is a result of including the physics of atmospheric extension. Therefore, current limb-darkening laws do not fit the effects of atmospheric extension. This result was also found in other works, such as \cite{Claret2003} and \cite{Espinoza2016}. However, these works avoid the challenge of fitting atmospheric extension by clipping the spherically symmetric model CLIV to remove all information about the extension.
\cite{Ligi2016} found uncertainties in measuring angular diameters of exoplanet-host stars to be about 1.9\%. This is much greater than the atmospheric extension of these stars. For instance, the Sun has an extension of the order of 0.1\%, based on the ratio of the pressure scale height in the atmosphere and the solar radius. Therefore, these issues around the definition of stellar radius will not be readily apparent for direct measurements. On the other hand, \cite{Mann2017} measured the relative planet radii for three exoplanets to a precision of $\delta \rho/\rho \approx 2-4\%$, while \cite{Murgas2017} reported precisions of the order 1\% and better. Furthermore, the next generation of interferometric observations promise to measure angular diameters to about 0.5\% precision \citep{Zhao2011}. At these uncertainties, the biases introduced by assuming unphysical limb-darkening laws are becoming important, especially as we attempt to measure spectral properties of exoplanets.
\section{Summary}\label{sec:summary}
In this work, we have taken the CLIVs from the \cite{Neilson2013b} grid of spherically symmetric model stellar atmospheres and the corresponding best-fit limb-darkening coefficients for the quadratic limb-darkening law and computed the differences between synthetic planetary transit light curves using the prescription described by \cite{Neilson2016a}.
We evaluated the error resulting from the use of the limb-darkening laws by computing both the average difference and the greatest difference between the CLIV and LDL transit light curves. These differences were computed as a function of fundamental stellar parameters: effective temperature, gravity and stellar mass along with the inclination of the orbit, which we parameterized as $\mu_0 = \cos (90^\circ - i)$.
The results are striking. Before considering the role of inclination, we found that the average differences between CLIV and limb-darkened transit light curves increased as a function of atmospheric extension, which implies that the average differences are greatest for more evolved F-type stars. When inclination is included, the differences increase significantly and depend only weakly on the atmospheric extension, indicating that the errors are roughly similar for most atmospheric extensions. These negative errors tell us that the relative planetary radii are being overestimated, especially for the F-stars, by as much as 5\% and at least 1\% for an edge-on orbit in the {\it Kepler}-band. \cite{Hirano2016, Fukui2016} and others report precisions of the order of 1\% for measuring $R_{\rm{p}}/R_\ast$ for planets orbiting F-type stars. \cite{Almenara2015} reported precisions better than 1\% for planets orbiting an evolved metal-poor F-star. Given that our models show that $R_{\rm{p}}/R_\ast$ is overestimated, these measurements carry a systematic error of at least 1\% that is not accounted for in the fits.
We note that these errors are for the ideal situation where one knows the inclination and where the limb-darkening coefficients are the most accurately determined. Our analysis does not consider the cases where inclination, limb-darkening coefficients and relative radii are fit simultaneously. In those cases limb-darkening coefficient measurements can deviate significantly from those of model stellar atmospheres \citep{Kipping2011a, Kipping2011b}, implying a strong dependence of the limb darkening on other fitting parameters. As the limb-darkening coefficients deviate, so too will the errors. This may not change the errors much, but it is something that must be explored in greater detail.
One key conclusion of our work is that we need to measure stellar CLIV both precisely and directly. It is becoming clear that our current assumptions of simple limb-darkening laws are just not good enough for understanding the planetary transit observations. Interferometry is proving to be one method for directly inferring stellar CLIV \citep{Baron2014, Armstrong2016, Kervella2017}. We recently showed that we can use interferometric measurements in combination with spectroscopy and spherically symmetric model stellar atmospheres to measure stellar fundamental parameters including stellar masses. That result is based on measurements of atmospheric extension in stars. We suggest that this method will be more robust if combined with planetary transit observations as part of a global fit of stellar and planetary parameters. That work and this one are part of an ongoing research project to test limb-darkening and stellar radius measurements from interferometric observations against state-of-the-art model stellar atmospheres. The results of our current work clearly show that we are reaching the limits of plane-parallel model CLIV and of arbitrary limb-darkening laws that have no physical basis.
From this analysis, we produced corrections of the relative radius of an exoplanet $\delta \rho/\rho$ for a grid of stellar atmosphere models for the wavebands $BVRIHK$ and the {\it Kepler}- and {\it CoRot}-bands that are publicly available. While it is preferable to fit the model CLIV to transit light curves and to shift from measuring limb-darkening coefficients to measuring stellar properties, these correction factors can help improve the precision of planetary transit fits of transit spectra.
\acknowledgements{J.B.L. is grateful for funding from NSERC discovery grants. F.B. acknowledges funding from NSF awards AST-1445935 and AST-1616483. H.R.N. and J.B.L. acknowledge that the University of Toronto operates on traditional land of the Huron-Wendat, the Seneca, and most recently, the Mississaugas of the Credit River. The authors are grateful to have the opportunity to work on this land.}
\bibliographystyle{aa}
\section{Introduction}
The program to develop a theory of renormalized higher ($\geq 2$) powers
of white noise \cite{AcBouk09sg} has led to investigate the connections between
central extensions and renormalization.
This has, in its turn, led to the discovery of the unique (up to isomorphisms)
non trivial central extension of the $1$--dimensional Heisenberg algebra $heis(1)$ and of its identification with the Galilei algebra (cf \cite{1}, \cite{4}).
In the present paper we will work in the $1$--dimensional Schroedinger
representation where $heis (1)$ can be realized as the real linear span of
$\{1,p,q\}$ (central element is the identity $1$) and the operators
$$
(q\varphi)(x):=x\varphi(x)
\quad ; \quad (p\varphi)(x):={1\over i}{\partial \over \partial x}\varphi(x)
\quad ; \quad \varphi \in L^{2} (\mathbb R)
$$
are defined on suitable domains in $L^{2} (\mathbb R)$
and satisfy the Heisenberg commutation relations
$$
[q,p] = i
$$
and all other commutators are zero.
In the same representation $Ceheis (1)$ can be realized as
the real linear span of
$$
\{1,p,q,q^2\}
$$
It has been recently proved that the current algebra over $\mathbb R $
of this algebra admits a non--trivial (suitably defined) Fock representation:
this is equivalent to proving the infinite divisibility of the vacuum
distributions of the operators of the form
$$
\alpha q^2 + \beta q + \gamma p \qquad ;\quad \alpha, \beta, \gamma\in \mathbb R
$$
For each $N\in \mathbb N$, the real Lie algebra with generators
$$
\{1,p, q, \dots, q^N \}
$$
in the following denoted $heis (1, 1,N)$ is a natural generalization of the
Galilei algebra.
The analogy with the usual Heisenberg algebra, i.e. $heis(1,1,1)$,
naturally suggests the following problems:
\begin{enumerate}
\item[1)] To describe the Lie group generated in the Schroedinger representation by $heis (1,1,N)$.
\item[2)] To find an analogue of the Weyl commutation relations for the group in item 1). This is equivalent to describe the $(1,N)$--polynomial extensions of the Heisenberg group.
\item[3)] To determine the vacuum distribution of the nontrivial elements of $heis (1,1,N)$
and their moments.
These elements have the form
$$
up + \sum_{j=0}^N a_j q^j
$$
with $u\neq 0$ and some $a_j\neq 0$ with $j\geq 2$.
\item[4)] To extend the constructions of items $1),2)$ above
to the continuous case, i.e. to describe the current
$*$--Lie algebra of $heis (1,1,N)$ over $\mathbb R$.
\end{enumerate}
The solutions of these problems are discussed in the following.
In particular we show that the group structure emerging from problem 2) is
non trivial and yet controllable (see Theorem \ref{composition law})
thus allowing a solution of problem 3) (see Proposition \ref{struc-sigmaN}).
Problem 4) was solved in \cite{2} for the Galilei algebra ($N=2$)
but the technique used there cannot be applied to the polynomial case.
In section \ref{q-proj-meth} we introduce the {\it q-projection-method}
in order to overcome this difficulty. This reduces the problem to the calculation of the expectation value of functionals of the form $\exp(P(X)+icX)$ where $P$ is a polynomial with real coefficients, $c$ is a real number, and $X$ is a standard Gaussian random variable. If $P$ has degree 2, this gives a different proof of result obtained in \cite{1}. The calculation of the moment in subsection \ref{3.3} and the continuous second quantization versions of the polynomial extensions of the Weyl $C^*$-algebra are new even in the quadratic case.
\section{Polynomial extensions of the Heisenberg algebra: Discrete case}
The one-mode Heisenberg algebra is the $*$-Lie algebra generated
by $b,b^+,1$ with commutation relations
$$[b,b^+]=1,\;[b,1]=[b^+,1]=0$$
and involution
$$(b)^*=b^+$$
The associated position and momentum operators are defined respectively by
$$
q=\frac{b^++b}{\sqrt{2}},\;\;p=\frac{b-b^+}{i\sqrt{2}}
$$
We will use the known fact that, if $\mathcal{L}$ is a Lie algebra
and $a,b\in\mathcal{L}$ are such that
$$
[a,[a,b]]=0
$$
then, for all $w,z\in\mathbb{C}$
\begin{equation}\label{*1}
e^{za}e^{wb}e^{-za}=e^{w(b+z[a,b])}
\end{equation}
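Identity (\ref{*1}) can be tested concretely on nilpotent matrices, where the exponential series terminates and all manipulations are exact. The sketch below is illustrative only (the helper name is our own): it takes $a=E_{12}$ and $b=E_{23}$ in $3\times3$ matrices, so that $[a,b]=E_{13}$ is central and in particular $[a,[a,b]]=0$.

```python
import numpy as np

def expm_nilpotent(M, order=3):
    """Exact exponential of a nilpotent matrix via its terminating series."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, order + 1):
        term = term @ M / k
        out = out + term
    return out

# a = E_{12}, b = E_{23}: [a, b] = E_{13} is central, so [a, [a, b]] = 0
a = np.zeros((3, 3)); a[0, 1] = 1.0
b = np.zeros((3, 3)); b[1, 2] = 1.0
comm = a @ b - b @ a   # equals E_{13}

z, w = 0.7, -1.3
lhs = expm_nilpotent(z * a) @ expm_nilpotent(w * b) @ expm_nilpotent(-z * a)
rhs = expm_nilpotent(w * (b + z * comm))
assert np.allclose(lhs, rhs)
```

Both sides equal $I + wb + wz\,E_{13}$ here, in agreement with (\ref{*1}).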
\begin{proposition}\label{prop2}
For all $u,w\in \mathbb{C}$ ($w\neq0$) and every polynomial $P$ in one indeterminate, we have
\begin{equation}\label{*2}
e^{iwp+iuP'(q)}=e^{iu\frac{P(q+w)-P(q)}{w}}e^{iwp}
\end{equation}
\end{proposition}
\begin{proof}
Let $P$ be a polynomial in one indeterminate and let $w,z\in\mathbb{C}$ with $w\neq0$. From the identity
$$[P(q),p]=iP'(q)$$
it follows that
$$[P(q),[P(q),p]]=0$$
Hence (\ref{*1}) implies that
\begin{eqnarray}\label{pq}
e^{izP(q)}e^{iwp}e^{-izP(q)}=e^{iw(p-zP'(q))}
\end{eqnarray}
But, one has also
\begin{eqnarray}\label{pp}
e^{iwp}e^{-izP(q)}=(e^{iwp}e^{-izP(q)}e^{-iwp})e^{iwp}=e^{-izP(q+w)}e^{iwp}
\end{eqnarray}
The identities (\ref{pq}) and (\ref{pp}) imply
\begin{equation}\label{pqp}
e^{iw(p-zP'(q))}= e^{izP(q)}e^{-izP(q+w)}e^{iwp}
\end{equation}
With the change of variable $u:=-wz$, (\ref{pqp}) becomes (\ref{*2}).
\end{proof}
Since the real vector space of polynomials of degree $\leq N$ with coefficients in $\mathbb R$, denoted in the following
$$
\mathbb R_N[x]:=\{P'\in \mathbb R[x]:\,\,\mbox{ degree } (P')\leq N\}
$$
has dimension $N+1$, in the following we will use the identification
$$
P'(x)=\sum_{k=0}^Na_kx^k\equiv(a_0,a_1,\cdots,a_N)\in\mathbb R^{N+1}
$$
Now, for all $w\in\mathbb{R}$, we define the linear map $T_w$ by $T_0:=id$ for $w=0$ and, for $w\neq0$,
\begin{equation}\label{3.1}
T_w:P'\in\mathbb R_N[x]\rightarrow \frac{1}{w}\int_0^wP'(.+y)dy\in \mathbb R_N[x]
\end{equation}
so that
\begin{equation}\label{3.2}
T_wP'={{P(\cdot+w)-P}\over w}
\end{equation}
where $P$ is any primitive of $P'$.
\subsection{Properties of the maps $T_w$}
\begin{lemma}\label{T_w}
For each $w\in\mathbb R\setminus\{0\}$ the linear map $T_w$ is invertible. Moreover, we have
$$T_w^{-1}=\sum_{k=0}^{N}(-1)^k(T(w))^k$$
where $T(w)$ is a strictly upper triangular (hence nilpotent) matrix whose coefficients, with respect to the monomial basis $\{1,x,\dots,x^N\}$ of $\mathbb R_N[x]$, are given by
\begin{equation}\label{tmn}
t_{mn}(w)=\chi_{\{m<n\}}\frac{n!}{(n+1-m)!m!}w^{n-m},\;\;0\leq m,n\leq N
\end{equation}
where
$$\chi_{\{m<n\}}=\left\{
\begin{array}{cl}
0,& \mbox{if } m\geq n\\
1,& \mbox{if } m<n
\end{array}
\right.
$$
\end{lemma}
\begin{proof}
Let $k\in\{0,1,\cdots,N\}$. If we take
$$
P'(x):=x^k
$$
then, in (\ref{3.2}) we can choose
$$
P(x)={{x^{k+1}}\over{k+1}}
$$
so that
$$
{{P(x+w)-P(x)}\over w}={1\over{w(k+1)}}[(x+w)^{k+1}-x^{k+1}]
$$
Since
\begin{eqnarray*}
(x+w)^{k+1}&=&\sum^{k+1}_{h=0}\pmatrix{k+1\cr h\cr}x^h w^{k+1-h}\\
&=&x^{k+1}+\sum^k_{h=0}\pmatrix{k+1\cr h\cr}x^h w^{k+1-h}
\end{eqnarray*}
it follows that
$$
(x+w)^{k+1}-x^{k+1}=\sum^k_{h=0}\pmatrix{k+1\cr h\cr}x^hw^{k+1-h}
$$
Hence
\begin{equation}\label{Tw1=1}
T_w1=1
\end{equation}
and, for all $k\geq1$
\begin{eqnarray}\label{explicit}
T_w x^k&=&\frac{1}{w(k+1)}\sum_{h=0}^k\pmatrix{k+1\cr h\cr}w^{k+1-h}x^h\nonumber\\
&=&\sum_{h=0}^k\frac{k!}{(k+1-h)!h!}w^{k-h}x^h\nonumber\\
&=&\sum_{h=0}^{k-1}\frac{k!}{(k+1-h)!h!}w^{k-h}x^h+x^k
\end{eqnarray}
Therefore the matrix of $T_w$ in the basis $\{1,\dots,x^N\}$ of $\mathbb{R}_N[x]$, which we continue to denote by $T_w$, has the form $I+T(w)$, where $T(w)$ is a strictly upper triangular (hence nilpotent) matrix whose coefficients, with respect to the basis $\{1,x,x^2,\dots,x^N\}$ of $\mathbb R_N[x]$, are given by
\begin{equation}\label{df-T(w)}
t_{mn}(w)=\chi_{\{m<n\}}\frac{n!}{(n+1-m)!m!}w^{n-m},\;\;0\leq m,n\leq N
\end{equation}
Hence $T_w$ is invertible because $det(T_w)=1$. Moreover, one has
\begin{equation}\label{inv-Tw}
T_w^{-1}=(I+T(w))^{-1}=\sum_{k=0}^{N}(-1)^k(T(w))^k
\end{equation}
\end{proof}
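Lemma \ref{T_w} lends itself to a quick numerical check. The sketch below is illustrative only (`T_nilpotent` is our own helper name): it builds $T_w=I+T(w)$ from (\ref{tmn}) in the monomial basis, verifies the action $T_wx^2=x^2+wx+w^2/3$ obtained from (\ref{explicit}), and confirms the inversion formula (\ref{inv-Tw}).

```python
import numpy as np
from math import comb

def T_nilpotent(w, N):
    """Nilpotent part T(w) of T_w = I + T(w): entries t_{mn}(w) of
    Eq. (tmn), nonzero only for m < n, in the basis {1, x, ..., x^N}."""
    T = np.zeros((N + 1, N + 1))
    for m in range(N + 1):
        for n in range(m + 1, N + 1):
            # n!/((n+1-m)! m!) = comb(n+1, m) / (n+1)
            T[m, n] = comb(n + 1, m) / (n + 1) * w ** (n - m)
    return T

N, w = 4, 0.7
Tw = np.eye(N + 1) + T_nilpotent(w, N)

# action on x^2: ((x+w)^3 - x^3)/(3w) = w^2/3 + w x + x^2
assert np.allclose(Tw[:, 2], [w ** 2 / 3, w, 1, 0, 0])

# inversion formula (inv-Tw): T_w^{-1} = sum_k (-1)^k T(w)^k
Tw_inv = sum((-1) ** k * np.linalg.matrix_power(T_nilpotent(w, N), k)
             for k in range(N + 1))
assert np.allclose(Tw @ Tw_inv, np.eye(N + 1))
```

Since $T(w)^{N+1}=0$, the finite alternating sum reproduces the Neumann series of $(I+T(w))^{-1}$ exactly.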
Let $S$ denote the action of $\mathbb{R}$ on $\mathbb{R}_N[x]$ by translations
$$(S_uP')(x):=P'(x+u),\;x,\,u\in\mathbb{R}$$
Then
$$
T_uP'={1\over u}(S_u-id)P
$$
where $P$ is a primitive of $P'$.
The shift $S_u$ is a linear homogeneous map of $\mathbb R_N[x]$ into itself. Moreover
\begin{eqnarray*}
\sum^N_{k=0}a_k(x+u)^k&=&\sum^N_{k=0}\sum^k_{h=0}a_k\pmatrix{k\cr h\cr}x^h u^{k-h}\\
&=&\sum^N_{h=0}\Big(\sum^N_{k=h}a_k\pmatrix{k\cr h\cr}u^{k-h}\Big)x^h\\
&=&\sum^N_{h=0}(S_uP')_hx^h
\end{eqnarray*}
where $$(S_uP')_h=\sum^N_{k=h}a_k\pmatrix{k\cr h\cr}u^{k-h}$$
Thus, in the monomial basis $\{1,\dots,x^N\}$ of $\mathbb R_N[x]$, the matrix associated to $S_u$ is
\begin{eqnarray}\label{translation}
(S_u)_{hk}=\chi_{\{h\leq k\}}\pmatrix{k\cr h\cr}u^{k-h}
\end{eqnarray}
Note that $S_u$ is invertible with inverse $S_{-u}$ and
$$
S_uS_v=S_{u+v}
$$
Now denote by $\partial$ the derivation operator on $\mathbb{R}_N[x]$, i.e
\begin{equation}\label{d-par}
\partial:\sum_{k=0}^Na_kx^k\rightarrow\sum_{k=1}^Nka_kx^{k-1}
\end{equation}
Then clearly
\begin{equation}\label{ds=sd}
\partial S_u=S_u\partial,\;\forall u\in\mathbb{R}
\end{equation}
Let $\mathbb{R}_N[x]_0$ be the subspace of $\mathbb{R}_N[x]$ of all polynomials vanishing at zero
$$\mathbb{R}_N[x]_0:=\{P'\in \mathbb{R}_N[x]:\;P'(0)=0\}$$
It is clear that
$$\mathbb{R}_N[x]=\mathbb{R}_N[x]_0\oplus\mathbb{R}1$$
so that, thanks to (\ref{Tw1=1}), $T_w$ is uniquely determined by its restriction to $\mathbb{R}_N[x]_0$. Denote
\begin{eqnarray*}
T_w^0&:=&T_w\big|_{\mathbb{R}_N[x]_0}\\
\partial_0&:=&\partial\big|_{\mathbb{R}_N[x]_0}
\end{eqnarray*}
Then (\ref{d-par}) implies that $\partial_0$ is invertible and (\ref{ds=sd}) implies that
\begin{equation}\label{IS=SI}
\partial_0^{-1}S_u=S_u\partial_0^{-1},\;\forall u\in\mathbb{R}
\end{equation}
\begin{lemma}\label{lem2}
The following identities hold:
\begin{enumerate}
\item[1)] $T_wS_u=S_uT_w,\;\forall u,w\in\mathbb{R}$
\item[2)] $T_wT_u=T_uT_w,\;\forall u,w\in\mathbb{R}$
\item[3)] $T_uS_v=(1+\frac{v}{u})T_{u+v}-\frac{v}{u}T_v,\;\forall u,v\in\mathbb{R}$ such that $u\neq0$
\item[4)] $T_uS_{-u}=T_{-u},\;\forall u\in\mathbb{R}$
\item[5)] $T_uT_v=(\frac{1}{v}+\frac{1}{u})T_{u+v}-\frac{1}{u}T_v-\frac{1}{v}T_u, \;\forall u,v\in\mathbb{R}\setminus\{0\}$
\end{enumerate}
\end{lemma}
\begin{proof} In the above notations
$$T_w^0=\frac{1}{w}(S_w-id)\partial_0^{-1}$$
and (\ref{IS=SI}) implies that
$$T_w^0S_u=S_uT_w^0,\;\forall u,w\in\mathbb{R}$$
Since the constant function $1$ is fixed both for $S_u$ and for $T_w$, it follows that
$$T_wS_u=S_uT_w,\;\forall u,w\in\mathbb{R}$$
and therefore also
$$T_wT_u=T_uT_w,\;\forall u,w\in\mathbb{R}$$
which implies that assertions 1) and 2) are satisfied.
Now notice that translations commute with the operation of taking primitives, i.e. if $P$ is a primitive of $P'$, then $\forall v\in\mathbb{R},\;S_vP$ is a primitive of $S_vP'$, that is
\begin{eqnarray*}
T_uS_vP'&=&\frac{1}{u}(S_u-id)S_vP\\
&=&\frac{1}{u}(S_{u+v}-S_v)P\\
&=&\frac{1}{u}(S_{u+v}-id)P+\frac{1}{u}(id-S_v)P\\
&=&\Big(\frac{u+v}{u}T_{u+v}-\frac{v}{u}T_v\Big)P'
\end{eqnarray*}
This proves 3). In particular, choosing $v=-u$, one finds $T_uS_{-u}=T_{-u}$ and 4) holds. Finally, from the identity
$$T_uT_v=\frac{1}{v}T_uS_v-\frac{1}{v}T_u$$
and from 3), the assertion 5) follows.
\end{proof}
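Because $T_w$ and $S_u$ act by explicit matrices in the monomial basis, identities 1)--4) of Lemma \ref{lem2} can be spot-checked numerically. The sketch below is illustrative only; the helper names (`T_full`, `S_shift`) are our own, with entries taken from (\ref{tmn}) and (\ref{translation}).

```python
import numpy as np
from math import comb

def T_full(w, N):
    """Matrix of T_w = I + T(w) in the monomial basis (T_0 = id)."""
    M = np.eye(N + 1)
    for m in range(N + 1):
        for n in range(m + 1, N + 1):
            M[m, n] = comb(n + 1, m) / (n + 1) * w ** (n - m)
    return M

def S_shift(u, N):
    """Matrix of the shift S_u: (S_u)_{hk} = C(k, h) u^{k-h} for h <= k."""
    M = np.zeros((N + 1, N + 1))
    for h in range(N + 1):
        for k in range(h, N + 1):
            M[h, k] = comb(k, h) * u ** (k - h)
    return M

N, u, v, w = 4, 0.6, -1.3, 0.8
Tu, Tv, Tw = T_full(u, N), T_full(v, N), T_full(w, N)
Su, Sv = S_shift(u, N), S_shift(v, N)

assert np.allclose(Tw @ Su, Su @ Tw)          # 1) T_w S_u = S_u T_w
assert np.allclose(Tu @ Tv, Tv @ Tu)          # 2) T_u T_v = T_v T_u
assert np.allclose(Tu @ Sv,                    # 3) T_u S_v = (1+v/u)T_{u+v} - (v/u)T_v
                   (1 + v / u) * T_full(u + v, N) - (v / u) * Tv)
assert np.allclose(Tu @ S_shift(-u, N), T_full(-u, N))   # 4) T_u S_{-u} = T_{-u}
```

Here operator composition corresponds to matrix multiplication on coefficient vectors, so each identity becomes an equality of $(N+1)\times(N+1)$ matrices.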
\subsection{Polynomial extensions of the Heisenberg algebra}
Recall that the $1$--dimensional Heisenberg group, denoted $Heis(1)$,
is the nilpotent Lie group whose underlying manifold is
$\mathbb R\times \mathbb R^2$ and whose group law is given by
$$
(t,z)\circ (t',z') = (t+t'-\sigma (z, z') ,z+z'); \quad
t,t'\in\mathbb R \ ; \ z:=(\alpha,\beta),z':=(\alpha',\beta')\in\mathbb R^2
$$
where $\sigma ( \ \cdot \ , \ \cdot \ ) $ the symplectic form on $\mathbb R^2$
given by
$$
\sigma (z, z') := \alpha\beta'-\alpha'\beta; \quad z:=(\alpha,\beta),z':=(\alpha',\beta')\in\mathbb R^2
$$
In the Schroedinger representation of the real Lie algebra $Heis(1)$
one can define the {\it centrally extended Weyl operators} through the map
\begin{equation}\label{un-rep1}
(t,z)\in\mathbb R\times \mathbb R^2\mapsto
W(t,z) := e^{ i (\sqrt 2 (\beta q-\alpha p) + t1) } = W(z)e^{ it }
\end{equation}
where the Weyl operators are defined by
$$
W(z) :=e^{ i \sqrt 2 (\beta q-\alpha p) }\qquad ; \quad z:=(\alpha,\beta)\in\mathbb R^2
$$
From the Weyl relations
\begin{equation}\label{cl-Weyl-rel}
W(u)W(z) = e^{- i\sigma (u, z) } W( z + u)
\end{equation}
one then deduces that
\begin{equation}\label{un-rep2}
W(t,z)W(t',z') = W((t,z)\circ (t',z'))
\end{equation}
i.e. that the map (\ref{un-rep1}) gives a unitary representation of the group $Heis(1)$.\\
The generators of the centrally extended Weyl operators have the form
\begin{equation}\label{upP'q11}
up+a_1q + a_0 1 =: up+P'(q)
\end{equation}
where $u\in\mathbb R$ and $P$ is a polynomial in one indeterminate
with real coefficients of degree at most $1$. Thus they are exactly
the elements of the one dimensional Heisenberg algebra $heis(1)$.\\
Replacing $P'$ in (\ref{upP'q11}) by a generic polynomial of degree at most $N$
(in one indeterminate with real coefficients), one still obtains a real Lie algebra
because of the identity
$$
[up+P'(q),vp+Q'(q)]=u[p,Q'(q)]-v[p,P'(q)]=-iuQ''(q)+ivP''(q)
$$
and, since $\mathbb R_N[x]$ has dimension $N+1$, this real Lie algebra
has dimension $N+2$ on $\mathbb R$.
\begin{Definition}\label{defi1}
For $N\in\mathbb N$, the real Lie algebra
$$
heis(1,1,N) :=
\big\{up+P'(q)\quad :\quad u\in\mathbb R,P'\in\mathbb R_N[x]\big\}$$
is called {\it the $(1,N)$--polynomial extensions of the $1$--dimensional
Heisenberg algebra}.
\end{Definition}
The notation $(1,N)$ emphasizes that the polynomials involved have degree $1$
in the momentum operator and degree $\leq N$ in the position operator.
With these notations the $(1,N)$--polynomial extensions
of the centrally extended Weyl operators are the operators of the form
\begin{equation}\label{df-1Npol-cent-ext-W-op}
W(u,P'):=e^{i(up+P'(q)) }
\quad ; \ u\in\mathbb R \quad ; \ P'\in \mathbb{R}_N[x]
\end{equation}
By analogy with the $1$--dimensional Heisenberg group, we expect that
the pairs $(u,P')\in \mathbb R \times \mathbb{R}_N[x]$ form a group for
an appropriately defined composition law. The following theorem shows that
this is indeed the case.
\begin{theorem}\label{composition law}
For any $(u,P'),(v,Q')\in \mathbb R \times \mathbb{R}_N[x]$ one has
\begin{equation}\label{1Npol-centr-ext-Weyl-rel}
W(u,P')W(v,Q')= W((u,P')\circ (v,Q'))
\end{equation}
where
\begin{equation}\label{1Npol-comp-law1}
(u,P')\circ(v,Q'):= (u+v,T^{-1}_{u+v}(T_uP'+T_vS_uQ'))
\end{equation}
\end{theorem}
\begin{proof}
From Proposition \ref{prop2}, one has
$$
W(u,P')= e^{iup+iP'(q)}=e^{iT_uP'(q)}e^{iup}
$$
Therefore
\begin{eqnarray*}
W(u,P')W(v,Q')&= &e^{iup+iP'(q)}e^{ivp+iQ'(q)}\\
&=&e^{iT_uP'(q)}e^{iup}e^{iT_vQ'(q)}e^{ivp}\\
&=&e^{iT_uP'(q)}e^{iT_vQ'(q+u)}e^{i(u+v)p}\\
&=&e^{i[T_uP'(q)+T_vQ'(q+u)]}e^{i(u+v)p}\\
&=&e^{i(u+v)p+iT^{-1}_{u+v}(T_uP'(q)+T_vS_uQ'(q))}\\
&=&W\left(u+v, T^{-1}_{u+v}(T_uP'+T_vS_uQ')\right)
\end{eqnarray*}
This proves (\ref{1Npol-centr-ext-Weyl-rel}) and (\ref{1Npol-comp-law1}).
\end{proof}
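Since $T_w$ and $S_u$ are explicit matrices, the group law (\ref{1Npol-comp-law1}) reduces to finite linear algebra on coefficient vectors $(a_0,\dots,a_N)$. The sketch below (helper names are our own) implements it and spot-checks associativity, which is guaranteed by the operator identity (\ref{1Npol-centr-ext-Weyl-rel}).

```python
import numpy as np
from math import comb

def T_full(w, N):
    """Matrix of T_w = I + T(w) in the monomial basis (T_0 = id)."""
    M = np.eye(N + 1)
    for m in range(N + 1):
        for n in range(m + 1, N + 1):
            M[m, n] = comb(n + 1, m) / (n + 1) * w ** (n - m)
    return M

def S_shift(u, N):
    """Matrix of the shift S_u in the monomial basis."""
    M = np.zeros((N + 1, N + 1))
    for h in range(N + 1):
        for k in range(h, N + 1):
            M[h, k] = comb(k, h) * u ** (k - h)
    return M

def compose(g1, g2, N):
    """(u, P') o (v, Q') = (u + v, T_{u+v}^{-1}(T_u P' + T_v S_u Q'))."""
    (u, p), (v, q) = g1, g2
    rhs = T_full(u, N) @ p + T_full(v, N) @ S_shift(u, N) @ q
    return (u + v, np.linalg.solve(T_full(u + v, N), rhs))

# associativity spot-check for N = 3 (arbitrary sample elements)
N = 3
g1 = (0.5, np.array([0.0, 1.0, 0.0, 2.0]))
g2 = (-1.2, np.array([3.0, 0.0, 1.0, 0.0]))
g3 = (0.3, np.array([0.0, 0.0, 0.0, 1.0]))
uL, cL = compose(compose(g1, g2, N), g3, N)
uR, cR = compose(g1, compose(g2, g3, N), N)
assert np.isclose(uL, uR) and np.allclose(cL, cR)
```

For $N=1$ this routine reproduces the explicit law of Proposition \ref{Weyl} below.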
\begin{remark}
The associativity of the group law (\ref{1Npol-comp-law1}) can be directly verified using Lemma \ref{lem2}.
\end{remark}
Our next goal is to determine the $(1,N)$--polynomial extensions of the
Weyl commutation relations. To this goal define the ideal
\begin{equation}\label{df-RNx0}
\mathbb{R}_N[x]_0
:=\{P_0\in\mathbb{R}_N[x] \ : \ P_0(0)=0\}
\end{equation}
Thus the projection map from $\mathbb{R}_N[x]$ onto $\mathbb{R}_N[x]_0$ is given by
\begin{equation}\label{df-proj-RNx0}
P'(x)\in \mathbb{R}_N[x]\mapsto (P')_0(x):= P'(x) - P'(0)\in\mathbb{R}_N[x]_0
\end{equation}
and, in the notation (\ref{df-proj-RNx0}), the $(1,N)$--polynomial extensions of
the centrally extended Weyl operators (\ref{df-1Npol-cent-ext-W-op}) take the form
\begin{equation}\label{df-1Npol-cent-ext-W-op2}
W(u,P')=e^{i(up+P'(q)) }=e^{i(up+(P')_0(q)) }e^{iP'(0) }
\quad ; \ u\in\mathbb R \quad ; \ P'\in \mathbb{R}_N[x]
\end{equation}
The analogy between (\ref{df-1Npol-cent-ext-W-op2}) and (\ref{un-rep1})
naturally suggests the following definition.
\begin{Definition}
The unitary operators
\begin{equation}\label{df-1Npol-ext-W-op}
W(u,P'_0):=e^{i(up+P'_0(q))}
\quad ;\quad u\in\mathbb R \quad ;\quad P'_0\in \mathbb{R}_N[x]_0
\end{equation}
will be called {\it $(1,N)$--polynomially extended Weyl operators}.
\end{Definition}
From (\ref{1Npol-centr-ext-Weyl-rel}) and (\ref{1Npol-comp-law1}) we see that,
if $P'_0,Q'_0\in \mathbb{R}_N[x]_0$, then in the notation (\ref{df-proj-RNx0}) one has
\begin{equation}\label{27a}
W(u,P'_0)W(v,Q'_0) = W\left(u+v, T^{-1}_{u+v}(T_uP_0'+T_vS_uQ_0')\right)=
\end{equation}
$$ = W\left(u+v, (T^{-1}_{u+v}(T_uP_0'+T_vS_uQ_0'))_0\right)
e^{i(T^{-1}_{u+v}(T_uP_0'+T_vS_uQ_0'))(0)}
$$
By analogy with the usual Weyl relations we introduce the notation
\begin{equation}\label{df-sigmaN}
\sigma ((u,P_0'),(v,Q_0')) :=
\left(T^{-1}_{u+v}(T_uP_0'+T_vS_uQ_0')\right)(0)
\end{equation}
i.e. $\sigma ((u,P_0'),(v,Q_0'))$ is the degree zero coefficient of
the polynomial\\ $T_{u+v}^{-1}(T_uP'_0+T_vS_uQ'_0)\in \mathbb{R}_N[x]$.
Notice that the map
$$
((u,P_0'),(v,Q_0'))\equiv ((u,v),(P_0',Q_0'))\mapsto \sigma ((u,P_0'),(v,Q_0'))
$$
is linear in the pair $(P_0',Q_0')$ but polynomial in the pair $(u,v)$.
This is an effect of the duality between $p$, which appears at the first power,
and $q$, which appears in polynomial expressions.\\
In order to prove the $(1,N)$--polynomial analogue of the classical
Weyl relations (\ref{cl-Weyl-rel}) one has to compute the scalar factor (\ref{df-sigmaN}).
We will compute more generally all the coefficients of
$T_{u+v}^{-1}(T_uP'_1+T_vS_uP'_2)$. In view of (\ref{inv-Tw}) this leads to
compute the powers of the matrices $T(w)$ given by (\ref{tmn}).
\begin{lemma}\label{LeM}
Let $k\in\{2,\dots,N\}$ and $w\in\mathbb{R}\setminus \{0\}$.
Then, the matrix of $(T(w))^k$, in the monomial basis $\{1,x,\dots,x^N\}$ of $\mathbb{R}_N[x]$, is given by \\
$\big(T^{[k]}_{ij}(w)\big)_{0\leq i,j\leq N}$, where
\begin{equation}\label{k-power}
T^{[k]}_{ij}(w)=t_{ij}(w)C_{i,j}^{[k]}
\end{equation}
the coefficients $t_{ij}(w)$ are given by (\ref{tmn}) and the $C_{i,j}^{[k]}$ are inductively defined by
\begin{equation}\label{28a}
C_{i,j}^{[k]}=\sum_{h=0}^NC_{i,j}^{(h)}C_{i,h}^{[k-1]}
\end{equation}
where
$$C_{i,j}^{(h)}=\frac{(j+1-i)!}{(h+1-i)!(j+1-h)!}\chi_{\{i<h<j\}}(h)$$
and $C_{i,j}^{[1]}=\chi_{\{i<j\}}$ for all $i,j=0,\dots, N$.
\end{lemma}
\begin{proof}
We prove the lemma by induction. For $k=2$, one has
\begin{eqnarray}\label{tihthj}
t_{ih}(w)t_{hj}(w)&=&\chi_{\{i<h\}}\chi_{\{h<j\}}\frac{1}{h+1}\pmatrix{h+1\cr i\cr}w^{h-i}\frac{1}{j+1}\pmatrix{j+1\cr h\cr}w^{j-h}\nonumber\\
&=&\Big[\chi_{\{i<h<j\}}\frac{j!}{(j+1-i)!i!}w^{j-i}\Big]\frac{(j+1-i)!}{(h+1-i)!(j+1-h)!}\nonumber\\
&=&t_{ij}(w)C^{(h)}_{i,j}
\end{eqnarray}
where
$$C_{i,j}^{(h)}=\frac{(j+1-i)!}{(h+1-i)!(j+1-h)!}\chi_{\{i<h<j\}}$$
It follows that
$$T_{ij}^{[2]}(w)=t_{ij}(w)\sum_{h=0}^NC_{i,j}^{(h)}=t_{ij}(w)C^{[2]}_{i,j}$$
Now, let $2\leq k\leq N-1$ and suppose that (\ref{k-power}) holds true. Then, one has
$$T_{ij}^{[k+1]}(w)=\sum_{h=0}^NT_{ih}^{[k]}(w)t_{hj}(w)$$
Using induction assumption, one gets
\begin{equation}\label{k+1}
T_{ij}^{[k+1]}(w)=\sum_{h=0}^Nt_{ih}(w)t_{hj}(w)C_{i,h}^{[k]}
\end{equation}
Therefore the identities (\ref{28a}), (\ref{tihthj}) and (\ref{k+1}) imply that
$$T_{ij}^{[k+1]}(w)=t_{ij}(w)\sum_{h=0}^NC_{i,j}^{(h)}C_{i,h}^{[k]}=t_{ij}(w)C_{i,j}^{[k+1]}$$
\end{proof}
As a consequence of Theorem \ref{composition law} and Lemma \ref{LeM}, we are able to explicitly compute the scalar factor in the $(1,N)$-polynomial extensions of the Weyl relations (\ref{27a}).
\begin{proposition}\label{struc-sigmaN}
In the notation (\ref{df-sigmaN}), for any $u,v\in\mathbb{R}$
and for any $P_0',Q_0'\in\mathbb{R}_N[x]_0$ given by
$$
P_0'(x)=\alpha_1 x+\dots+\alpha_N x^N,\qquad Q_0'(x)=\beta_1 x+\dots+\beta_N x^N
$$
one has
\begin{eqnarray}\label{sigmaN-explicit}
\sigma ((u,P_0'),(v,Q_0'))&=&\sum_{j=0}^N\Big[\Big(\frac{1}{j+1}(u+v)^j
\sum_{m=0}^N(-1)^mC_{0,j}^{[m]}\Big)\\
&&\;\;\;\;\;\;\Big(\sum_{j\leq h\leq N}\Big\{t_{jh}(u)\alpha_h+t_{jh}(v)\sum_{h\leq k\leq N}(S_u)_{hk}\beta_k\Big\}\Big)\Big] \nonumber
\end{eqnarray}
where the coefficients $(S_u)_{hk}$ are given by (\ref{translation}) and with the convention
$$
C_{0,j}^{[0]}=\delta_{j,0},\;\alpha_0=\beta_0=0,\;t_{jj}(w)=1,\;\mbox{ for all } j=0,\dots,N.$$
\end{proposition}
\begin{proof}
Let $u,v\in\mathbb{R}$ and $P_0',Q_0'\in\mathbb{R}_N[x]_0$ be as in the statement. Then, using Lemmas \ref{T_w} and \ref{LeM} together with identity (\ref{translation}), we obtain
\begin{eqnarray*}
T_{u+v}^{-1}(T_uP_0'+T_vS_uQ'_0)(x)&=&\!\!\!(T_{u+v}^{-1}(T_uP_0'+T_vS_uQ_0'))_0(x)\\
&&\!\!\!\!\!\!\!\!+\sum_{j=0}^N\Big[\Big(\frac{1}{j+1}(u+v)^j\sum_{m=0}^N(-1)^mC_{0,j}^{[m]}\Big)\\
\;\;\;\;\;\;\;\;\;&&\Big(\sum_{j\leq h\leq N}\Big\{t_{jh}(u)\alpha_h+t_{jh}(v)\sum_{h\leq k\leq N}(S_u)_{hk}\beta_k\Big\}\Big)\Big]
\end{eqnarray*}
\end{proof}
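Numerically, the scalar factor is simply the degree-zero coefficient produced by the same matrix computation. The sketch below (helper names are our own) evaluates it and checks that for $N=1$ with $P_0'(x)=ax$, $Q_0'(x)=bx$ it reduces to the antisymmetric phase $(ub-va)/2$ of the ordinary Weyl relations.

```python
import numpy as np
from math import comb

def T_full(w, N):
    """Matrix of T_w = I + T(w) in the monomial basis (T_0 = id)."""
    M = np.eye(N + 1)
    for m in range(N + 1):
        for n in range(m + 1, N + 1):
            M[m, n] = comb(n + 1, m) / (n + 1) * w ** (n - m)
    return M

def S_shift(u, N):
    """Matrix of the shift S_u in the monomial basis."""
    M = np.zeros((N + 1, N + 1))
    for h in range(N + 1):
        for k in range(h, N + 1):
            M[h, k] = comb(k, h) * u ** (k - h)
    return M

def sigma(u, p0, v, q0, N):
    """Scalar of the polynomial Weyl relations: degree-zero
    coefficient of T_{u+v}^{-1}(T_u P_0' + T_v S_u Q_0')."""
    rhs = T_full(u, N) @ p0 + T_full(v, N) @ S_shift(u, N) @ q0
    return np.linalg.solve(T_full(u + v, N), rhs)[0]

# N = 1 reduction: P_0' = a x, Q_0' = b x gives (u b - v a)/2
u, v, a, b = 0.4, 1.1, 2.0, -0.7
assert np.isclose(sigma(u, np.array([0.0, a]), v, np.array([0.0, b]), 1),
                  (u * b - v * a) / 2)
```

For $N\geq2$ the same routine exhibits the polynomial (rather than bilinear) dependence of $\sigma$ on $(u,v)$ noted above.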
\begin{Definition}\label{defi3}
For $N\in\mathbb N$, the real Lie group with manifold
\begin{equation}\label{24a}
\mathbb R \times \mathbb R\times\mathbb{R}_N[x]_0 \equiv \mathbb R \times \mathbb R_N[x]
\end{equation}
and with composition law given by
$$ (u,P')\circ(v,Q'):= (u+v,T^{-1}_{u+v}(T_uP'+T_vS_uQ'))$$
is called {\it the $(1,N)$--polynomial extensions of the $1$--dimensional
Heisenberg group} and denoted $Heis(1,1,N)$.
\end{Definition}
\begin{remark}
The left hand side of (\ref{24a}) emphasizes that the coordinates \\$(t,u,a_1,\dots,a_N)\in\mathbb{R}^{N+2}$ of an element of $Heis(1,1,N)$ are intuitively interpreted as time $t$, momentum $u$ and coordinates of the first $N$ powers of position $(a_1,\dots,a_N)$. Putting $t=a_0$ is equivalent to the identification $\mathbb{R}\times\mathbb{R}_N[x]_0\equiv \mathbb R_N[x].$
\end{remark}
\subsection{The discrete Heisenberg algebra $\{1,p,q\}$}
From Definition \ref{defi1}, it follows that $\{1,p,q\}$ is a set of generators of $heis(1,1,1)$ so that
$$heis(1,1,1)\equiv heis(1)$$
Moreover the composition law associated to $Heis(1)$ is given by the following.
\begin{proposition}\label{Weyl}
Let $P'_1(x)=\alpha_0+\alpha_1x,\;P'_2(x)=\beta_0+\beta_1x$ and let $u,v\in\mathbb{R}$. Then, we have
$$(u,P'_1)\circ(v,P'_2)=(u+v,Q')$$
where
$$Q'(x)=(\alpha_0+\beta_0+\frac{1}{2}u\beta_1-\frac{1}{2}v\alpha_1)+(\alpha_1+\beta_1)x$$
\end{proposition}
\begin{proof}
Let $u,\,w\in\mathbb{R}$. Then, one has
\begin{eqnarray*}
T_w=\left(
\begin{array}{cc}
1 & \frac{1}{2}w\\
0 & 1
\end{array}
\right),\;\;
T_w^{-1}=\left(
\begin{array}{cc}
1 & -\frac{1}{2}w\\
0 & 1
\end{array}
\right)\;\;
S_u=\left(
\begin{array}{cc}
1 & u\\
0 & 1
\end{array}
\right)
\end{eqnarray*}
Let $P'_1(x)=\alpha_0+\alpha_1x,\;P'_2(x)=\beta_0+\beta_1x$. Then, from Theorem \ref{composition law}, the composition law is given by
$$(u,P'_1)\circ(v,P'_2)=(u+v,Q')$$
where
\begin{eqnarray*}
Q'(x)&=&T^{-1}_{u+v}(T_uP'_1+T_vS_uP'_2)(x)\\
&=&(\alpha_0+\beta_0+\frac{1}{2}u\beta_1-\frac{1}{2}v\alpha_1)+(\alpha_1+\beta_1)x
\end{eqnarray*}
\end{proof}
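The $2\times2$ matrix computation in this proof is easily checked by computer algebra. The following sketch (Python with the sympy library assumed available; variable names are ours) reproduces the coefficients of $Q'$.

```python
import sympy as sp

u, v, a0, a1, b0, b1 = sp.symbols('u v alpha0 alpha1 beta0 beta1')

def T(w):
    # matrix of T_w on the coefficient vector (c0, c1) of c0 + c1*x
    return sp.Matrix([[1, w/2], [0, 1]])

def S(w):
    # matrix of the translation S_w : P(x) -> P(x + w)
    return sp.Matrix([[1, w], [0, 1]])

P1 = sp.Matrix([a0, a1])   # coefficients of P1'
P2 = sp.Matrix([b0, b1])   # coefficients of P2'

# Q' = T_{u+v}^{-1}(T_u P1' + T_v S_u P2')
Q = T(u + v).inv() * (T(u) * P1 + T(v) * S(u) * P2)

assert sp.simplify(Q[0] - (a0 + b0 + u*b1/2 - v*a1/2)) == 0
assert sp.simplify(Q[1] - (a1 + b1)) == 0
```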
In particular recall that for all $z=\alpha+i\beta$, the Weyl operator $W(z)$ is defined by
$$W(z)=W(\alpha,\beta):=e^{i\sqrt{2}(\beta q-\alpha p)}$$
Put
$$P_1'(x)=\beta\sqrt{2}x,\;P'_2(x)=\beta'\sqrt{2}x$$
Then from the above proposition, one has
$$(-\sqrt{2}\alpha,P_1')\circ(-\sqrt{2}\alpha',P_2')=(-\sqrt{2}(\alpha+\alpha'),Q')$$
where
$$Q'(x)=(\alpha'\beta-\alpha\beta')+(\beta+\beta')\sqrt{2}x$$
Therefore, for all complex numbers $z=\alpha+i\beta,\;z'=\alpha'+i\beta'$, it follows that
\begin{eqnarray*}
W(z)W(z')=W(\alpha,\beta)W(\alpha',\beta')&=&e^{i(\alpha'\beta-\alpha\beta')}e^{i\sqrt{2}\big((\beta+\beta')q-(\alpha+\alpha')p\big)}\\
&=&e^{-i\Im(\bar{z}z')}W(z+z')
\end{eqnarray*}
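As a quick consistency check of the phase (not part of the original argument), one can verify symbolically that $\alpha'\beta-\alpha\beta'=-\Im(\bar{z}z')$; a sketch with sympy (names chosen here):

```python
import sympy as sp

a, b, ap, bp = sp.symbols('alpha beta alphap betap', real=True)
z = a + sp.I*b
zp = ap + sp.I*bp

# phase produced by the composition law vs. the Weyl-relation phase -Im(conj(z) z')
assert sp.simplify((ap*b - a*bp) + sp.im(sp.conjugate(z)*zp)) == 0
```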
\begin{remark}
The above proposition proves that the composition law given in Definition \ref{defi3} is indeed a generalization of the composition law of the Heisenberg group.
\end{remark}
\subsection{The discrete Galilei algebra $\{1,p,q,q^2\}$}
In the case of $N=2$, the composition law is given by the following.
\begin{proposition}\label{N=2}
Let $P'_1,\,P'_2\in\mathbb{R}_2[x]$, such that $P'_1(x)=\alpha_0+\alpha_1x+\alpha_2x^2$ and $P'_2(x)=\beta_0+\beta_1x+\beta_2x^2$. Then, for all $u,v\in\mathbb{R}$, we have
$$(u,P'_1)\circ(v,P'_2)=(u+v,Q')$$
where $Q'(x)=\gamma+\beta x+\alpha x^2$ with
\begin{eqnarray*}
\gamma&=&\alpha_0+\beta_0-\frac{1}{2}v\alpha_1+\frac{1}{2}u\beta_1+\frac{1}{6}(v-u)v\alpha_2+\frac{1}{6}(u-v)u\beta_2\\
\beta&=&\alpha_1+\beta_1-v\alpha_2+u\beta_2\\
\alpha&=&\alpha_2+\beta_2
\end{eqnarray*}
\end{proposition}
\begin{proof}
From Lemma \ref{T_w}, the matrices of $T_w$ and $T_w^{-1}$, in the monomial basis $\{1,x,x^2\}$ of $\mathbb{R}_2[x]$, are given by
\begin{eqnarray*}
T_w=\left(
\begin{array}{lcc}
1 & \frac{1}{2}w & \frac{1}{3}w^2\\
0& 1& w\\
0& 0 &1
\end{array}
\right),\;\;
T_w^{-1}=\left(
\begin{array}{lcc}
1 & -\frac{1}{2}w& \frac{1}{6}w^2\\
0& 1& -w\\
0& 0 &1
\end{array}
\right)
\end{eqnarray*}
Moreover, identity (\ref{translation}) implies that
$$S_u=\left(
\begin{array}{lcc}
1 & u & u^2\\
0 & 1& 2u\\
0 & 0 &1
\end{array}
\right)
$$
Let $P'_1(x)=\alpha_0+\alpha_1x+\alpha_2x^2$, $P'_2(x)=\beta_0+\beta_1x+\beta_2x^2$ and $u,v\in\mathbb{R}$. Then, using the explicit form of $T_{u+v}^{-1}, \,T_u$ and $S_u$, it is straightforward to show that
$$T^{-1}_{u+v}(T_uP'_1+T_vS_uP'_2)(x)=\gamma+\beta x+\alpha x^2$$
where $\alpha,\beta,\gamma$ are given in the above proposition. Finally, from Theorem \ref{composition law} we can conclude.
\end{proof}
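The $3\times3$ matrix computation behind Proposition \ref{N=2} can likewise be verified mechanically; a sketch using sympy (assumed available; our own variable names):

```python
import sympy as sp

u, v = sp.symbols('u v')
a0, a1, a2 = sp.symbols('alpha0 alpha1 alpha2')
b0, b1, b2 = sp.symbols('beta0 beta1 beta2')

def T(w):   # matrix of T_w in the monomial basis {1, x, x^2}
    return sp.Matrix([[1, w/2, w**2/3], [0, 1, w], [0, 0, 1]])

def S(w):   # matrix of the translation S_w : P(x) -> P(x + w)
    return sp.Matrix([[1, w, w**2], [0, 1, 2*w], [0, 0, 1]])

P1 = sp.Matrix([a0, a1, a2])
P2 = sp.Matrix([b0, b1, b2])

Q = T(u + v).inv() * (T(u) * P1 + T(v) * S(u) * P2)

gamma = a0 + b0 - v*a1/2 + u*b1/2 + (v - u)*v*a2/6 + (u - v)*u*b2/6
assert sp.simplify(Q[0] - gamma) == 0
assert sp.simplify(Q[1] - (a1 + b1 - v*a2 + u*b2)) == 0
assert sp.simplify(Q[2] - (a2 + b2)) == 0
```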
\section{Polynomial extensions of the Heisenberg algebra: Continuous case}
Let $b_t^{+}$ and $b_t$ satisfy the Boson commutation and duality relations
\begin{eqnarray*}
\lbrack b_t,b_s^{+}\rbrack=\delta(t-s)\,\,;\,\,\lbrack b_t^{+},b_s^{+}\rbrack=\lbrack b_t,b_s\rbrack=0\,\,;\,\,(b_s)^*=b_s^{+}
\end{eqnarray*}
\noindent where $t,s\in \mathbb{R}$ and $\delta$ is the Dirac delta function. Define
\begin{eqnarray*}
q_t=\frac{b_t+b_t^{+}}{\sqrt{2}}\,\,;\,\,p_t= \frac{b_t-b_t^{+}}{i\sqrt{2}}
\end{eqnarray*}
\noindent Then
\begin{eqnarray*}
\lbrack q_t,p_s\rbrack= i\,\delta(t-s)\,\,;\,\,\lbrack q^k_t,p_s\rbrack= ik\,q_t^{k-1}\delta(t-s),\;\,k\geq1
\end{eqnarray*}
\begin{eqnarray*}
\lbrack q_t, q_s\rbrack=\lbrack p_t, p_s\rbrack=\lbrack q^k_t, q^k_s\rbrack = \lbrack q^k_t, q_s\rbrack =0
\end{eqnarray*}
\noindent and
\begin{eqnarray*}
(q_s)^*=q_s\,\,;\,\,(q^k_s)^*=q^k_s \,\,;\,\,(p_s)^*=p_s
\end{eqnarray*}
Now, for any real function $f\in L^{2}(\mathbb{R})\cap L^\infty(\mathbb{R})$, we define
$$p(f)=\int_{\mathbb{R}}f(t)p_tdt,\;q^k(f)=\int_{\mathbb{R}}f(t)q_t^kdt,\,\;k\in\mathbb{N}$$
\begin{proposition}\label{prop3}
For all real functions $g, f_1,\dots,f_n\in L^2(\mathbb{R})\cap L^\infty(\mathbb{R})$, we have
$$e^{i\big(p(g)+q^n(f_n)+\dots+q(f_1)\big)}=e^{i\sum_{k=1}^n\sum_{h=0}^k\frac{k!}{(k+1-h)!h!}q^{h}(g^{k-h}f_k)}e^{ip(g)}$$
\end{proposition}
\begin{proof}
Consider real functions $g, f_1,\dots,f_n\in L^2(\mathbb{R})\cap L^\infty(\mathbb{R})$. Put
$$U_t=e^{it\big(p(g)+q^n(f_n)+\dots+q(f_1)\big)}e^{-itp(g)}$$
Then one has
\begin{eqnarray}\label{U(t)}
\partial_tU_t&=&ie^{it\big(p(g)+q^n(f_n)+\dots+q(f_1)\big)}(p(g)+q^n(f_n)+\dots+q(f_1)\big)e^{-itp(g)}\nonumber\\
&&-ie^{it\big(p(g)+q^n(f_n)+\dots+q(f_1)\big)}p(g)e^{-itp(g)}\nonumber\\
&=&ie^{it\big(p(g)+q^n(f_n)+\dots+q(f_1)\big)}(q^n(f_n)+\dots+q(f_1)\big)e^{-itp(g)}\nonumber\\
&=&ie^{it\big(p(g)+q^n(f_n)+\dots+q(f_1)\big)}e^{-itp(g)}\big[e^{itp(g)}(q^n(f_n)+\dots+q(f_1)\big)e^{-itp(g)}\big]\nonumber\\
&=&iU_t\big[e^{itp(g)}(q^n(f_n)+\dots+q(f_1)\big)e^{-itp(g)}\big]
\end{eqnarray}
Note that
$$e^{itp(g)}q^k(f)e^{-itp(g)}=\int_\mathbb{R} e^{itp(g)}q_s^ke^{-itp(g)}f(s)ds$$
Put
$$V_{t,s}=e^{itp(g)}q_se^{-itp(g)}$$
Then, one has
\begin{eqnarray*}
\partial_tV_{t,s}&=&ie^{itp(g)}[p(g),q_s]e^{-itp(g)}\\
&=&ie^{itp(g)}\Big(\int_{\mathbb{R}}g(u)[p_u,q_s]du\Big)e^{-itp(g)}\\
&=&\int_{\mathbb{R}}g(u)\delta(s-u)du=g(s)
\end{eqnarray*}
This gives
$$V_{t,s}=q_s+tg(s)$$
Raising to the $k$-th power, it follows that
$$e^{itp(g)}q_s^ke^{-itp(g)}=(q_s+tg(s))^k$$
Therefore, one gets
\begin{eqnarray}\label{qk}
e^{itp(g)}q^k(f)e^{-itp(g)}&=&\int_\mathbb{R}e^{itp(g)}q_s^ke^{-itp(g)}f(s)ds\nonumber\\
&=&\int_\mathbb{R}(q_s+tg(s))^kf(s)ds\nonumber\\
&=&\sum_{h=0}^k\pmatrix{k\cr h\cr}t^{k-h}\int_\mathbb{R}q_s^{h}(g(s))^{k-h}f(s)ds\nonumber\\
&=&\sum_{h=0}^k\pmatrix{k\cr h\cr}t^{k-h}q^{h}(g^{k-h}f)
\end{eqnarray}
Using identities (\ref{U(t)}) and (\ref{qk}), one gets
\begin{eqnarray*}
\partial_tU_t=iU_t\Big(\sum_{k=1}^n\sum_{h=0}^k\pmatrix{k\cr h\cr}t^{k-h}q^{h}(g^{k-h}f_k)\Big)
\end{eqnarray*}
This implies that
$$U_t=e^{i\sum_{k=1}^n\sum_{h=0}^k\frac{k!t^{k-h+1}}{(k+1-h)!h!}q^{h}(g^{k-h}f_k)}$$
Finally, one gets
$$e^{it\big(p(g)+q^n(f_n)+\dots+q(f_1)\big)}=e^{i\sum_{k=1}^n\sum_{h=0}^k\frac{k!t^{k-h+1}}{(k+1-h)!h!}q^{h}(g^{k-h}f_k)}e^{itp(g)}$$
\end{proof}
As a consequence of the above proposition, we prove the following.
\begin{lemma}\label{ref}
For all real functions $g,f_1,\dots,f_n\in L^2(\mathbb{R})\cap L^\infty(\mathbb{R})$, we have
$$e^{i\big(p(g)+q(f_1)+\dots+q^n(f_n)\big)}=e^{iT_g\big(q(f_1)+\dots+q^n(f_n)\big)}e^{ip(g)}$$
where
\begin{equation}\label{Tgqk}
T_gq^k(f_k)=\sum_{h=0}^k\frac{k!}{(k+1-h)!h!}q^h(g^{k-h}f_k)=\int_{\mathbb{R}}T_{g(s)}(q_s^k)f_k(s)ds
\end{equation}
with $T_{g(s)}=I+T(g(s))$, where, for all $s\in\mathbb R$, $T(g(s))$ is given by Lemma \ref{T_w}.
\end{lemma}
\begin{proof}
From Proposition \ref{prop3}, one has
\begin{eqnarray*}
e^{i\big(p(g)+q(f_1)+\dots+q^n(f_n)\big)}&=&e^{i\sum_{k=1}^n\sum_{h=0}^k\frac{k!}{(k+1-h)!h!}q^{h}(g^{k-h}f_k)}e^{ip(g)}\\
&=&e^{i\sum_{k=1}^nT_gq^k(f_k)}e^{ip(g)}
\end{eqnarray*}
Moreover, one has
$$T_gq^k(f_k)=\int_\mathbb{R}\Big(\sum_{h=0}^k\frac{k!}{(k+1-h)!h!}q_s^hg^{k-h}(s)\Big)f_k(s)ds$$
Now using identity (\ref{explicit}), it follows that
$$\sum_{h=0}^k\frac{k!}{(k+1-h)!h!}q_s^hg^{k-h}(s)=T_{g(s)}(q_s^k)$$
This ends the proof.
\end{proof}
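The coefficients $\frac{k!}{(k+1-h)!h!}$ appearing in $T_g$ come from the expansion $T_w x^k=\frac{(x+w)^{k+1}-x^{k+1}}{(k+1)w}$, consistent with the matrices of Lemma \ref{T_w}; this can be checked for small $k$ with sympy (assumed available):

```python
import sympy as sp

x, w = sp.symbols('x w')

# T_w x^k = ((x+w)^(k+1) - x^(k+1)) / ((k+1) w) expands with the stated coefficients
for k in range(0, 7):
    lhs = sp.cancel(((x + w)**(k + 1) - x**(k + 1)) / ((k + 1) * w))
    rhs = sum(sp.factorial(k) / (sp.factorial(k + 1 - h) * sp.factorial(h))
              * w**(k - h) * x**h for h in range(k + 1))
    assert sp.simplify(lhs - rhs) == 0
```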
\begin{lemma}\label{lem5}
For all real functions $g,f_k\in L^2(\mathbb{R})\cap L^\infty(\mathbb{R})$, we have
$$T_g^{-1}q^k(f_k)=\int_\mathbb{R}T_{g(s)}^{-1}(q^k_s)f_k(s)ds$$
\end{lemma}
\begin{proof}
Let $g,f_k\in L^2(\mathbb{R})\cap L^\infty(\mathbb{R})$ be real functions. Define $F_g$ as follows
$$F_gq^k(f_k)=\int_\mathbb{R}T_{g(s)}^{-1}(q^k_s)f_k(s)ds$$
Then one has
\begin{eqnarray*}
F_g(T_gq^k(f_k))&=&\sum_{h=0}^k\frac{k!}{(k+1-h)!h!}F_gq^h(g^{k-h}f_k)\\
&=&\sum_{h=0}^k\frac{k!}{(k+1-h)!h!}\int_\mathbb{R}T_{g(s)}^{-1}(q^h_s)g^{k-h}(s)f_k(s)ds\\
&=&\int_\mathbb{R}T_{g(s)}^{-1}\Big(\sum_{h=0}^k\frac{k!}{(k+1-h)!h!}q^h_sg^{k-h}(s)\Big)f_k(s)ds\\
&=&\int_\mathbb{R}\big(T_{g(s)}^{-1}T_{g(s)}(q^k_s)\big)f_k(s)ds\\
&=&q^k(f_k)=T_g(F_gq^k(f_k))
\end{eqnarray*}
which proves the required result.
\end{proof}
\begin{theorem}\label{theo2}
Let $g,f_1,\dots,f_n,G,F_1,\dots,F_n\in L^2(\mathbb{R})\cap L^\infty(\mathbb{R})$
be real functions. Then, the composition law associated to the continuous polynomial extensions of the Heisenberg algebra is given as follows
\begin{eqnarray*}
&&(g,q(f_1)+\dots+q^n(f_n))\circ(G,q(F_1)+\dots+q^n(F_n))\\
&&=\big(g+G,T_{g+G}^{-1}\big(T_g(q(f_1)+\dots+q^n(f_n))+T_GS_g(q(F_1)+\dots+q^n(F_n))\big)\big)
\end{eqnarray*}
where for all real function $f\in L^2(\mathbb{R})\cap L^\infty(\mathbb{R})$, we have
\begin{equation}\label{Sgqk}
S_gq^k(f)=\int_\mathbb{R}(q_s+g(s))^kf(s)ds=\int_\mathbb{R}S_{g(s)}(q_s^k)f(s)ds
\end{equation}
where $S_{g(s)}$ denotes the translation operator.
\end{theorem}
\begin{proof}
Consider real functions $g,f_1,\dots,f_n,G,F_1,\dots,F_n\in L^2(\mathbb{R})\cap L^\infty(\mathbb{R})$. Then, from Lemma \ref{ref}, one has
\begin{eqnarray*}
&&e^{i\big(p(g)+q(f_1)+\dots+q^n(f_n)\big)}e^{i\big(p(G)+q(F_1)+\dots+q^n(F_n)\big)}\\
&&=e^{i\big(q(f_1)+\dots+q^n(f_n)\big)}e^{ip(g)}e^{i\big(q(F_1)+\dots+q^n(F_n)\big)}e^{ip(G)}\\
&&=e^{i\big(q(f_1)+\dots+q^n(f_n)\big)}\Big(e^{ip(g)}e^{i\big(q(F_1)+\dots+q^n(F_n)\big)}e^{-ip(g)}\Big)e^{ip(g+G)}\\
&&=e^{i\big(q(f_1)+\dots+q^n(f_n)\big)}e^{i\big(e^{ip(g)}q(F_1)e^{-ip(g)}+\dots+e^{ip(g)}q^n(F_n)e^{-ip(g)}\big)}e^{ip(g+G)}
\end{eqnarray*}
But, from identity (\ref{qk}), one has
$$e^{ip(g)}q^k(F_k)e^{-ip(g)}=\int_\mathbb{R}(q_s+g(s))^kF_k(s)ds=S_gq^k(F_k)$$
This gives
\begin{eqnarray*}
&&e^{i\big(p(g)+q(f_1)+\dots+q^n(f_n)\big)}e^{i\big(p(G)+q(F_1)+\dots+q^n(F_n)\big)}\\
&&=e^{i\big(T_g(q(f_1)+\dots+q^n(f_n))+T_GS_g(q(F_1)+\dots+q^n(F_n))\big)}e^{ip(g+G)}\\
&&=e^{i\big(p(g+G)+T_{g+G}^{-1}\big(T_g(q(f_1)+\dots+q^n(f_n))+T_GS_g(q(F_1)+\dots+q^n(F_n))\big)\big)}
\end{eqnarray*}
which ends the proof of the above theorem.
\end{proof}
\subsection{The continuous Heisenberg algebra $\{1,p(f),q(f)\}$}
For all $f\in L^{2}(\mathbb{R})$, define
$$B(f)=\int_\mathbb{R}b_s\bar{f}(s)ds,\;\,B^+(f)=\int_\mathbb{R}b^+_sf(s)ds$$
Let $f=f_1+if_2,\;g=g_1+ig_2\in L^2(\mathbb{R})$ such that $f_i,g_i,i=1,2,$ are real functions. Then, a straightforward computation shows that
$$B(f)+B^+(f)=\sqrt{2}(q(f_1)+p(f_2))=q(f'_1)+p(f'_2)$$
where for all $i=1,2$, $f'_i=\sqrt{2}f_i$. Recall that the Weyl operator, associated to an element $f=f_1+if_2\in L^{2}(\mathbb{R})$, is defined by
$$W(f)=e^{i(B(f)+B^+(f))}=e^{i\sqrt{2}(q(f_1)+p(f_2))}=e^{i(q(f'_1)+p(f'_2))}$$
Note that from Theorem \ref{theo2}, one has
\begin{equation}\label{W1}
e^{i(q(f'_1)+p(f'_2))}e^{i(q(g'_1)+p(g'_2))}=e^{i\big(p(f'_2+g'_2)+T^{-1}_{f'_2+g'_2}\big(T_{f'_2}q(f'_1)+T_{g'_2}S_{f'_2}q(g'_1)\big)\big)}
\end{equation}
But, one has
\begin{equation}\label{W2}
T_{f'_2}q(f'_1)=\frac{1}{2}\int_\mathbb{R}f'_1(s)f'_2(s)ds+q(f'_1)
\end{equation}
and
\begin{equation}\label{W3}
T_{g'_2}S_{f'_2}q(g'_1)=\int_\mathbb{R}g'_1(s)f'_2(s)ds+\frac{1}{2}\int_\mathbb{R}g'_1(s)g'_2(s)ds+q(g'_1)
\end{equation}
Moreover, recall that the matrix of $T^{-1}_{f'_2(s)+g'_2(s)}$ in the canonical basis $\{1,x\}$ of $\mathbb{R}_1[x]$ is given by
$$\left(
\begin{array}{cc}
1 & -\frac{1}{2}(f'_2(s)+g'_2(s))\\
0 & 1
\end{array}
\right)$$
This proves that
\begin{eqnarray}
T^{-1}_{f'_2+g'_2}q(f'_1)=-\frac{1}{2}\int_\mathbb{R}f'_1(s)f'_2(s)ds-\frac{1}{2}\int_\mathbb{R}f'_1(s)g'_2(s)ds+q(f'_1)\label{W4}\\
T^{-1}_{f'_2+g'_2}q(g'_1)=-\frac{1}{2}\int_\mathbb{R}g'_1(s)f'_2(s)ds-\frac{1}{2}\int_\mathbb{R}g'_1(s)g'_2(s)ds+q(g'_1)\label{W5}
\end{eqnarray}
Therefore, using identities (\ref{W1})--(\ref{W5}), one gets
\begin{equation}\label{W6}
e^{i(q(f'_1)+p(f'_2))}e^{i(q(g'_1)+p(g'_2))}=e^{\frac{i}{2}\big(\langle g'_1,f'_2\rangle-\langle f'_1,g'_2\rangle\big)}e^{i\big(q(f'_1+g'_1)+p(f'_2+g'_2)\big)}
\end{equation}
Finally, by taking $f'_i=\sqrt{2}f_i,\;g'_i=\sqrt{2}g_i$, $i=1,2$, in (\ref{W6}), one obtains the well known Weyl commutation relation
$$W(f)W(g)=e^{-i\Im\langle f,g\rangle}W(f+g)$$
\subsection{The continuous Galilei algebra $\{1,p(f),q(f),q^2(f)\}$}
The composition law associated to the continuous Galilei algebra \\$\{1,p(f),q(f),q^2(f),\;f=\bar{f}\in L^2(\mathbb{R})\cap L^\infty(\mathbb{R})\}$ is given by the following.
\begin{proposition}
For all real functions $g,G,f_1,f_2,F_1,F_2\in L^2(\mathbb R)\cap L^\infty(\mathbb R)$, we have
\begin{eqnarray*}
&&\Big(g, q(f_1)+q^2(f_2)\Big)\circ\Big(G,q(F_1)+q^2(F_2)\Big)\\
&&=\Big(g+G, \frac{1}{2}\langle g,F_1\rangle-\frac{1}{2}\langle G,f_1\rangle+\frac{1}{6}\langle g,(g-G)F_2\rangle+\frac{1}{6}\langle G,(G-g)f_2\rangle\\
&&+q(f_1)+q(F_1)+q(gF_2)-q(Gf_2)+q^2(f_2)+q^2(F_2)\Big)
\end{eqnarray*}
\end{proposition}
\begin{proof}
Recall that the matrices of $T_{g(s)}$, $S_{g(s)}$ and $T_{g(s)+G(s)}^{-1}$ in the monomial basis $\{1,x,x^2\}$ of $\mathbb R_2[x]$ are given by
\begin{eqnarray*}
S_{g(s)}=\left(
\begin{array}{lcc}
1 & g(s) & g(s)^2\\
0 & 1& 2g(s)\\
0 & 0 &1
\end{array}
\right),\;
T_{g(s)}=\left(
\begin{array}{lcc}
1 & \frac{1}{2}g(s) & \frac{1}{3}g(s)^2\\
0& 1& g(s)\\
0& 0 &1
\end{array}
\right),\\
T_{g(s)+G(s)}^{-1}=\left(
\begin{array}{lcc}
1 & -\frac{1}{2}(g(s)+G(s))& \frac{1}{6}(g(s)+G(s))^2\\
0& 1& -(g(s)+G(s))\\
0& 0 &1
\end{array}
\right)
\end{eqnarray*}
Using the explicit form of the above matrices together with identities (\ref{Tgqk}), (\ref{Sgqk}) and Lemma \ref{lem5}, one gets
\begin{eqnarray*}
&&S_g(q(F_1)+q^2(F_2))=\langle g,F_1\rangle+\langle g^2,F_2\rangle+q(F_1)+2q(gF_2)+q^2(F_2)\\
&&T_GS_g(q(F_1)+q^2(F_2))=\langle g,F_1\rangle+\langle g^2,F_2\rangle+\frac{1}{2}\langle G,F_1\rangle+\langle G,gF_2\rangle+\frac{1}{3}\langle G^2,F_2\rangle\\
&&+q(F_1)+2q(gF_2)+q(GF_2)+q^2(F_2)\\
&&T_g(q(f_1)+q^2(f_2))=\frac{1}{2}\langle g,f_1\rangle+\frac{1}{3}\langle g^2,f_2\rangle+q(f_1)+q(gf_2)+q^2(f_2)\\
&&T_{g+G}^{-1}(T_g(q(f_1)+q^2(f_2))+T_GS_g(q(F_1)+q^2(F_2)))=\frac{1}{2}\langle g,F_1\rangle-\frac{1}{2}\langle G,f_1\rangle\\
&&+\frac{1}{6}\langle g,(g-G)F_2\rangle+\frac{1}{6}\langle G,(G-g)f_2\rangle+q(f_1)+q(F_1)+q(gF_2)-q(Gf_2)\\
&&+q^2(f_2)+q^2(F_2)
\end{eqnarray*}
Finally from Theorem \ref{theo2} we can conclude.
\end{proof}
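Pointwise in $s$ (writing $q$ for $q_s$ and treating $g(s),G(s),f_i(s),F_i(s)$ as scalars), the displayed identities reduce to polynomial identities that sympy (assumed available) can confirm; a sketch that avoids inverting $T_{g+G}$ by applying it to the claimed result instead:

```python
import sympy as sp

q, g, G, f1, f2, F1, F2 = sp.symbols('q g G f1 f2 F1 F2')

def S(w, P):
    # translation: P(q) -> P(q + w)
    return sp.expand(P.subs(q, q + w))

def T(w, P):
    # T_w q^k = ((q+w)^(k+1) - q^(k+1)) / ((k+1) w), extended linearly
    out = 0
    for (k,), c in sp.Poly(P, q).terms():
        out += c * sp.cancel(((q + w)**(k + 1) - q**(k + 1)) / ((k + 1) * w))
    return sp.expand(out)

Tg = T(g, f1*q + f2*q**2)
TGSg = T(G, S(g, F1*q + F2*q**2))

# claimed integrand of T_{g+G}^{-1}(T_g(...) + T_G S_g(...))
final = (g*F1/2 - G*f1/2 + g*(g - G)*F2/6 + G*(G - g)*f2/6
         + (f1 + F1 + g*F2 - G*f2)*q + (f2 + F2)*q**2)

# equivalent check: T_{g+G} applied to the claimed result recovers the sum
assert sp.expand(T(g + G, final) - (Tg + TGSg)) == 0
```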
\section{ The q-projection method }\label{q-proj-meth}
Because of the relation (see Proposition \ref{prop2})
\begin{equation}\label{ccr4}
e^{i\alpha(\sqrt{2}p)} e^{i\beta(\sqrt{2}q)}
= e^{i2\alpha\beta}e^{i\beta(\sqrt{2}q)}e^{i\alpha(\sqrt{2}p)}
\end{equation}
the Weyl algebra coincides with the complex linear span of the products
$e^{i\beta(\sqrt{2}q)}e^{i\alpha(\sqrt{2}p)}$. Therefore a state on the Weyl
algebra is completely determined by the expectation values of these products.
In particular the Fock state $\Phi$ is characterized by the property that the
vacuum distribution of the position operator is the standard Gaussian
$$
\sqrt{2}q \ \sim \ \mathcal{N}(0,1)
$$
together with the identity
\begin{equation}\label{decomp4}
e^{i\alpha(\sqrt{2}p)}\Phi=e^{-\alpha^2}e^{-\sqrt{2}\alpha q}\Phi
\end{equation}
which follows from $i\sqrt{2}\alpha p=-\sqrt{2}\alpha q+2\alpha b$. \\
The q-projection method consists in using (\ref{decomp4}) and (\ref{pp})
to reduce the computation of vacuum expectation values of products of
the form $e^{-izP(q)}e^{-iwp}$ to the calculation of a single Gaussian integral.\\
In the following sub--sections we illustrate this method starting from
the simplest examples.
\subsection{Vacuum characteristic functions of observables in $Heis(1,1,1)$}
In this section we show that the q-projection method, applied to $Heis(1)$,
gives the standard result for the spectral measure of the Weyl operators.\\
From (\ref{decomp4}) and the CCR it follows that
$$
e^{i(\alpha(\sqrt{2}p)+\beta(\sqrt{2}q))}\Phi
=e^{i\alpha\beta}e^{-\alpha^2}e^{(i\beta-\alpha)(\sqrt{2} q)}\Phi
$$
from which one obtains
\begin{eqnarray*}
\langle \Phi,e^{i(\alpha(\sqrt{2}p)+\beta(\sqrt{2}q))}\Phi\rangle
&=&e^{i\alpha\beta}e^{-\alpha^2}\langle\Phi,e^{(i\beta-\alpha)(\sqrt{2} q)}\Phi\rangle\\
&=&\frac{e^{-\alpha^2}e^{i\alpha\beta}}{\sqrt{2\pi}}
\int_\mathbb{R}e^{i\beta x-\alpha x}e^{\frac{-x^2}{2}}dx\\
&=&\frac{e^{-\frac{\alpha^2}{2}}e^{i\alpha\beta}}{\sqrt{2\pi}}
\int_\mathbb{R}e^{i\beta x}e^{\frac{-(x+\alpha)^2}{2}}dx\\
&=&e^{-\frac{\alpha^2}{2}}\Big[\frac{1}{\sqrt{2\pi}}
\int_\mathbb{R}e^{i\beta x}e^{-\frac{x^2}{2}}dx\Big]\\
&=&e^{-\frac{\alpha^2}{2}}e^{-\frac{\beta^2}{2}}
\end{eqnarray*}
In particular, for all $z=\alpha+i\beta$, $z'=\alpha'+i\beta'$, one has
\begin{eqnarray*}
\langle W(z)\Phi,W(z')\Phi\rangle&=&\langle \Phi,W(-z)W(z')\Phi\rangle\\
&=&e^{-i\Im(z\bar{z'})}\langle\Phi, e^{i\sqrt{2}\big((\beta'-\beta)q-(\alpha'-\alpha)p\big)}\Phi\rangle\\
&=&e^{-i\Im(z\bar{z'})}e^{-\frac{\|z-z'\|^2}{2}}
\end{eqnarray*}
\subsection{Vacuum characteristic functions of observables in $Heis(1,1,2)$}
In this section we use the q-projection method to give a derivation of the
expression of the vacuum characteristic functions of observables in $\{1,p,q,q^2\}$
different from the one discussed in [] [x.].
\begin{proposition}\label{prop1}
For all, $\alpha,\;\beta,\;\gamma\in\mathbb{R}$, one has
$$\langle \Phi, e^{i\alpha(\sqrt{2}q)^2+i\beta(\sqrt{2}q)+\gamma(\sqrt{2}q)}\Phi\rangle=(1-2i\alpha)^{-\frac{1}{2}}e^{\frac{\gamma^2}{2(1-2i\alpha)}}e^{-\frac{\beta^2}{2(1-2i\alpha)}}e^{i\frac{\beta\gamma}{1-2i\alpha}}$$
\end{proposition}
\begin{proof}
Put
$$\Psi_1(\beta):=\langle \Phi, e^{i\alpha(\sqrt{2}q)^2+i\beta(\sqrt{2}q)+\gamma(\sqrt{2}q)}\Phi\rangle=\mathbb{E}(e^{i\alpha X^2+i\beta X+\gamma X})$$
where $X$ is a standard Gaussian random variable. Then, one has
\begin{eqnarray}\label{beta1}
\Psi_1'(\beta)&=&i\mathbb{E}(Xe^{i\alpha X^2+i\beta X+\gamma X})\\
&=&\frac{i}{\sqrt{2\pi}}\int_\mathbb{R}xe^{i\alpha x^2+i\beta x+\gamma x}e^{-\frac{x^2}{2}}dx\nonumber
\end{eqnarray}
Integrating by parts with
$$u(x)=e^{i\alpha x^2+i\beta x+\gamma x},\;v'(x)=xe^{-\frac{x^2}{2}}$$
so that
$$u'(x)=(2i\alpha x+i\beta +\gamma)e^{i\alpha x^2+i\beta x+\gamma x},\;v(x)=-e^{-\frac{x^2}{2}}$$
This gives
\begin{eqnarray}\label{beta2}
\mathbb{E}(Xe^{i\alpha X^2+i\beta X+\gamma X})=2i\alpha\mathbb{E}(Xe^{i\alpha X^2+i\beta X+\gamma X})+(i\beta+\gamma)\mathbb{E}(e^{i\alpha X^2+i\beta X+\gamma X})
\end{eqnarray}
Therefore, identities (\ref{beta1}) and (\ref{beta2}) imply that
\begin{eqnarray*}
\Psi_1'(\beta)=\frac{i\gamma-\beta}{1-2i\alpha}\Psi_1(\beta)
\end{eqnarray*}
which yields
\begin{equation}\label{Beta}
\Psi_1(\beta)=C(\alpha,\gamma)e^{i\frac{\beta\gamma}{1-2i\alpha}}e^{-\frac{\beta^2}{2(1-2i\alpha)}}
\end{equation}
where
$$C(\alpha,\gamma)=\Psi_1(0)=:\Psi_2(\gamma)=\mathbb{E}(e^{i\alpha X^2+\gamma X})$$
Now, one has
\begin{eqnarray*}
\Psi_2'(\gamma)&=&\mathbb{E}(Xe^{i\alpha X^2+\gamma X})\\
&=&\frac{1}{\sqrt{2\pi}}\int_\mathbb{R}xe^{i\alpha x^2+\gamma x}e^{-\frac{x^2}{2}}dx
\end{eqnarray*}
Integrating by parts with
$$h(x)=e^{i\alpha x^2+\gamma x},\;l'(x)=xe^{-\frac{x^2}{2}}$$
so that
$$h'(x)=(2i\alpha x+\gamma)e^{i\alpha x^2+\gamma x},\;l(x)=-e^{-\frac{x^2}{2}}$$
Then, one obtains
\begin{eqnarray*}
\Psi_2'(\gamma)=2i\alpha\Psi_2'(\gamma)+\gamma\Psi_2(\gamma)
\end{eqnarray*}
This implies that
\begin{equation}\label{Gamma}
\Psi_2(\gamma)=C(\alpha)e^{\frac{\gamma^2}{2(1-2i\alpha)}}
\end{equation}
where
\begin{equation}\label{Alpha}
C(\alpha)=\Psi_2(0)=\mathbb{E}(e^{i\alpha X^2})=(1-2i\alpha)^{-\frac{1}{2}}
\end{equation}
Finally, using identities (\ref{Beta}), (\ref{Gamma}) and (\ref{Alpha}), one obtains
$$\langle \Phi, e^{i\alpha(\sqrt{2}q)^2+i\beta(\sqrt{2}q)+\gamma(\sqrt{2}q)}\Phi\rangle=(1-2i\alpha)^{-\frac{1}{2}}e^{\frac{\gamma^2}{2(1-2i\alpha)}}e^{-\frac{\beta^2}{2(1-2i\alpha)}}e^{i\frac{\beta\gamma}{1-2i\alpha}}$$
\end{proof}
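Proposition \ref{prop1} can also be checked numerically: for sample real values of $\alpha,\beta,\gamma$ (chosen here for illustration), a plain midpoint quadrature of $\mathbb{E}(e^{i\alpha X^2+i\beta X+\gamma X})$ with $X\sim\mathcal{N}(0,1)$ reproduces the closed form, taking the principal branch of $(1-2i\alpha)^{-1/2}$. A Python sketch:

```python
import cmath
import math

def gauss_expect(f, lo=-12.0, hi=12.0, n=200000):
    # E[f(X)] for X ~ N(0,1) by a midpoint Riemann sum (adequate here)
    h = (hi - lo) / n
    s = 0j
    for i in range(n):
        x = lo + (i + 0.5) * h
        s += f(x) * math.exp(-x * x / 2)
    return s * h / math.sqrt(2 * math.pi)

alpha, beta, gamma = 0.3, 0.7, 0.2          # sample parameters
lhs = gauss_expect(lambda x: cmath.exp(1j*alpha*x*x + 1j*beta*x + gamma*x))
d = 1 - 2j*alpha
# closed form of Proposition prop1, principal branch of the square root
rhs = d**-0.5 * cmath.exp(gamma**2/(2*d) - beta**2/(2*d) + 1j*beta*gamma/d)
assert abs(lhs - rhs) < 1e-6
```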
\begin{theorem}\label{characteristic}
For all $A,\,B,\,C\in\mathbb{R}$, one has
\begin{eqnarray*}
\langle \Phi,e^{it(A(\sqrt{2}q)^2+B(\sqrt{2}q)+C(\sqrt{2}p))}\Phi\rangle=(1-2itA)^{-\frac{1}{2}}e^{\frac{4C^2(A^2t^4+2iAt^3)-3|M|^2t^2}{6(1-2iAt)}}
\end{eqnarray*}
where $M=B+iC$.
\end{theorem}
\begin{proof}
We have
$$it(Aq^2+Bq+Cp)=itCp+itP'(q)$$
where $P(X)=\frac{1}{3}AX^3+\frac{1}{2}BX^2$. Then, Proposition \ref{prop2} implies that
\begin{equation}\label{decomp}
e^{it(Aq^2+Bq+Cp)}=e^{it\frac{P(q+tC)-P(q)}{tC}}e^{itCp}
\end{equation}
But, one has
\begin{equation}\label{decomp2}
\frac{P(q+tC)-P(q)}{tC}=Aq^2+(tAC+B)q+\frac{1}{2}tBC+\frac{1}{3}A(tC)^2
\end{equation}
Using (\ref{decomp}) and (\ref{decomp2}), one gets
\begin{eqnarray*}
e^{it(Aq^2+Bq+Cp)}=e^{it\big(\frac{1}{3}A(tC)^2+\frac{1}{2}tBC\big)}e^{it\big(Aq^2+(tAC+B)q\big)}e^{itCp}
\end{eqnarray*}
It follows that
\begin{eqnarray}\label{decomp3}
e^{it(A(\sqrt{2}q)^2+B(\sqrt{2}q)+C(\sqrt{2}p))}=&&e^{it\big(\frac{4}{3}A(tC)^2+tBC\big)}e^{it\big(A(\sqrt{2}q)^2+(2tAC+B)(\sqrt{2}q)\big)}\nonumber\\
&&e^{itC(\sqrt{2}p)}
\end{eqnarray}
Therefore, from (\ref{decomp3}) and (\ref{decomp4}), one has
\begin{eqnarray}\label{decomp5}
e^{it(A(\sqrt{2}q)^2+B(\sqrt{2}q)+C(\sqrt{2}p))}\Phi\!\!\!&=&\!\!\!e^{it\big(\frac{4}{3}AC^2t^2+t(B+iC)C\big)}e^{it\big(A(\sqrt{2}q)^2+(2ACt+(B+iC))(\sqrt{2}q)\big)}\Phi\nonumber\\
&=&e^{it\big(\frac{4}{3}AC^2t^2+tMC\big)}e^{it\big(A(\sqrt{2}q)^2+(2ACt+M)(\sqrt{2}q)\big)}\Phi
\end{eqnarray}
where $M=B+iC$. Now, by taking
\begin{eqnarray*}
\alpha=At,\;\beta=2ACt^2+Bt,\;\gamma=-Ct
\end{eqnarray*}
and using Proposition \ref{prop1}, one gets
\begin{eqnarray}\label{sqrt}
\langle \Phi, e^{it\big(A(\sqrt{2}q)^2+(2ACt+M)(\sqrt{2}q)\big)}\Phi\rangle=&&(1-2iAt)^{-\frac{1}{2}}e^{\frac{C^2t^2}{2(1-2iAt)}}e^{-\frac{(2ACt^2+Bt)^2}{2(1-2iAt)}}\nonumber\\
&&e^{-i\frac{Ct(2ACt^2+Bt)}{1-2iAt}}
\end{eqnarray}
Finally, identities (\ref{decomp5}) and (\ref{sqrt}) imply that
$$\langle \Phi,e^{it(A(\sqrt{2}q)^2+B(\sqrt{2}q)+C(\sqrt{2}p))}\Phi\rangle=(1-2itA)^{-\frac{1}{2}}e^{\frac{4C^2(A^2t^4+2iAt^3)-3|M|^2t^2}{6(1-2iAt)}}$$
This ends the proof.
\end{proof}
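The algebra combining the phase from (\ref{decomp5}) with the exponent supplied by Proposition \ref{prop1} can be verified symbolically; a sympy sketch (library assumed available):

```python
import sympy as sp

A, B, C, t = sp.symbols('A B C t', real=True)
I = sp.I
M = B + I*C
alpha = A*t
beta = 2*A*C*t**2 + B*t
gamma = -C*t
d = 1 - 2*I*alpha

# phase from the decomposition plus the exponent of Proposition prop1
lhs = I*t*(sp.Rational(4, 3)*A*C**2*t**2 + t*M*C) \
      + gamma**2/(2*d) - beta**2/(2*d) + I*beta*gamma/d

# claimed closed form, with |M|^2 = B^2 + C^2
rhs = (4*C**2*(A**2*t**4 + 2*I*A*t**3) - 3*(B**2 + C**2)*t**2) / (6*d)

assert sp.cancel(lhs - rhs) == 0
```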
Using Proposition \ref{N=2} together with Theorem
\ref{characteristic}, we prove the following theorem.
\begin{theorem}
For all $\alpha_i,\beta_i,\gamma_i\in\mathbb{R},\;i=1,2,$ we have
\begin{eqnarray*}
&&\langle e^{it(\alpha_1(\sqrt{2}q)^2+\beta_1(\sqrt{2}q)+\gamma_1(\sqrt{2}p))}\Phi,e^{it(\alpha_2(\sqrt{2}q)^2+\beta_2(\sqrt{2}q)+\gamma_2(\sqrt{2}p))}\Phi\rangle\\
&&=(1-2i(\alpha_2-\alpha_1)t)^{-\frac{1}{2}}
e^{\frac{Lt^4+Z_1t^3+Z_2t^2}{6(1-2i(\alpha_2-\alpha_1)t)}}
\end{eqnarray*}
where
\begin{eqnarray*}
L&=&-4(\alpha_1\gamma_2-\alpha_2\gamma_1)\big[3(\alpha_1\gamma_2-\alpha_2\gamma_1)+2(\alpha_2-\alpha_1)(\gamma_1+\gamma_2)\big] \\
&&+4(\gamma_2-\gamma_1)^2(\alpha_2-\alpha_1)^2\\
Z_1&=&12\big[(\alpha_2-\alpha_1)(\beta_1\gamma_2-\gamma_1\beta_2)-(\beta_2-\beta_1)(\alpha_1\gamma_2-\alpha_2\gamma_1)\big]\\
&&+4i\big[2(\alpha_2-\alpha_1)(\gamma_2-\gamma_1)^2+(\gamma_1+\gamma_2)(\alpha_2\gamma_1-\alpha_1\gamma_2)\big]\\
Z_2&=& -3(\beta_2-\beta_1)^2-3(\gamma_2-\gamma_1)^2+6i(\beta_1\gamma_2-\gamma_1\beta_2)
\end{eqnarray*}
\end{theorem}
\begin{proof}
We have
\begin{eqnarray}\label{eqnn}
&&\langle e^{it(\alpha_1(\sqrt{2}q)^2+\beta_1(\sqrt{2}q)+\gamma_1(\sqrt{2}p))}\Phi,e^{it(\alpha_2(\sqrt{2}q)^2+\beta_2(\sqrt{2}q)+\gamma_2(\sqrt{2}p))}\Phi\rangle\nonumber\\
&&=\langle \Phi,e^{-it(\alpha_1(\sqrt{2}q)^2+\beta_1(\sqrt{2}q)+\gamma_1(\sqrt{2}p))}e^{it(\alpha_2(\sqrt{2}q)^2+\beta_2(\sqrt{2}q)+\gamma_2(\sqrt{2}p))}\Phi\rangle
\end{eqnarray}
Put
\begin{eqnarray*}
P'_1(x)=-t\sqrt{2}\beta_1x-2t\alpha_1x^2\\
P'_2(x)=t\sqrt{2}\beta_2x+2t\alpha_2x^2
\end{eqnarray*}
Then, Proposition \ref{N=2} implies that
\begin{equation}\label{ABC}
(-t\sqrt{2}\gamma_1,P'_1)\circ(t\sqrt{2}\gamma_2,P_2')=(t\sqrt{2}(\gamma_2-\gamma_1),Q')
\end{equation}
where $Q'(x)=C+Bx+Ax^2$ with
\begin{eqnarray*}
C&=&(\beta_1\gamma_2-\gamma_1\beta_2)t^2+\frac{2}{3}(\gamma_1+\gamma_2)(\alpha_2\gamma_1-\alpha_1\gamma_2)t^3\\
B&=&\sqrt{2}(\beta_2-\beta_1)t+2\sqrt{2}(\alpha_1\gamma_2-\alpha_2\gamma_1)t^2\\
A&=&2(\alpha_2-\alpha_1)t
\end{eqnarray*}
Therefore, from identities (\ref{eqnn}) and (\ref{ABC}), one obtains
\begin{eqnarray*}
&&\langle e^{it(\alpha_1(\sqrt{2}q)^2+\beta_1(\sqrt{2}q)+\gamma_1(\sqrt{2}p))}\Phi,e^{it(\alpha_2(\sqrt{2}q)^2+\beta_2(\sqrt{2}q)+\gamma_2(\sqrt{2}p))}\Phi\rangle\\
&&=e^{i\big((\beta_1\gamma_2-\gamma_1\beta_2)t^2+\frac{2}{3}(\gamma_1+\gamma_2)(\alpha_2\gamma_1-\alpha_1\gamma_2)t^3\big)}\\
&&\;\;\;\;\;\;\;\langle\Phi,e^{it\Big((\alpha_2-\alpha_1)(\sqrt{2}q)^2+\big((\beta_2-\beta_1)+2(\alpha_1\gamma_2-\alpha_2\gamma_1)t\big)(\sqrt{2}q)+(\gamma_2-\gamma_1)(\sqrt{2}p)\Big)}\Phi\rangle
\end{eqnarray*}
Now, from Theorem \ref{characteristic}, one gets
\begin{eqnarray}\label{Mm}
&&\langle e^{it(\alpha_1(\sqrt{2}q)^2+\beta_1(\sqrt{2}q)+\gamma_1(\sqrt{2}p))}\Phi,e^{it(\alpha_2(\sqrt{2}q)^2+\beta_2(\sqrt{2}q)+\gamma_2(\sqrt{2}p))}\Phi\rangle\nonumber\\
&&=e^{i\big((\beta_1\gamma_2-\gamma_1\beta_2)t^2+\frac{2}{3}(\gamma_1+\gamma_2)(\alpha_2\gamma_1-\alpha_1\gamma_2)t^3\big)}(1-2i(\alpha_2-\alpha_1)t)^{-\frac{1}{2}}\nonumber\\
&&\;\;\;\,\;e^{\frac{4(\gamma_2-\gamma_1)^2\big((\alpha_2-\alpha_1)^2t^4+2i(\alpha_2-\alpha_1)t^3\big)-3|M|^2t^2}{6(1-2i(\alpha_2-\alpha_1)t)}}
\end{eqnarray}
where
$$M=(\beta_2-\beta_1)+2(\alpha_1\gamma_2-\alpha_2\gamma_1)t+i(\gamma_2-\gamma_1)$$
Finally, using the identity (\ref{Mm}), it is easy to show that
\begin{eqnarray*}
&&\langle e^{it(\alpha_1(\sqrt{2}q)^2+\beta_1(\sqrt{2}q)+\gamma_1(\sqrt{2}p))}\Phi,e^{it(\alpha_2(\sqrt{2}q)^2+\beta_2(\sqrt{2}q)+\gamma_2(\sqrt{2}p))}\Phi\rangle\\
&&=(1-2i(\alpha_2-\alpha_1)t)^{-\frac{1}{2}}e^{\frac{Lt^4+Z_1t^3+Z_2t^2}{6(1-2i(\alpha_2-\alpha_1)t)}}
\end{eqnarray*}
where $L, Z_1,Z_2$ are given above.
\end{proof}
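The identification of $L$, $Z_1$, $Z_2$ amounts to a rational-function identity in $t$, which can be confirmed with sympy (assumed available); note the $-3(\beta_2-\beta_1)^2$ contribution to $Z_2$ coming from the $t$-independent part of $|M|^2$:

```python
import sympy as sp

a1, a2, b1, b2, g1, g2, t = sp.symbols(
    'alpha1 alpha2 beta1 beta2 gamma1 gamma2 t', real=True)
I = sp.I
da = a2 - a1
d = 1 - 2*I*da*t

# |M|^2 with M = (beta2-beta1) + 2(alpha1 gamma2 - alpha2 gamma1) t + i(gamma2-gamma1)
Msq = ((b2 - b1) + 2*(a1*g2 - a2*g1)*t)**2 + (g2 - g1)**2
lhs = I*((b1*g2 - g1*b2)*t**2
         + sp.Rational(2, 3)*(g1 + g2)*(a2*g1 - a1*g2)*t**3) \
      + (4*(g2 - g1)**2*(da**2*t**4 + 2*I*da*t**3) - 3*Msq*t**2)/(6*d)

L = -4*(a1*g2 - a2*g1)*(3*(a1*g2 - a2*g1) + 2*da*(g1 + g2)) \
    + 4*(g2 - g1)**2*da**2
Z1 = 12*(da*(b1*g2 - g1*b2) - (b2 - b1)*(a1*g2 - a2*g1)) \
     + 4*I*(2*da*(g2 - g1)**2 + (g1 + g2)*(a2*g1 - a1*g2))
Z2 = -3*(b2 - b1)**2 - 3*(g2 - g1)**2 + 6*I*(b1*g2 - g1*b2)

assert sp.cancel(lhs - (L*t**4 + Z1*t**3 + Z2*t**2)/(6*d)) == 0
```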
\subsection{Vacuum moments of observables in $\{1,p,q,q^2\}$}\label{3.3}
In this subsection we use the results of the preceding section to deduce
the expression of the vacuum moments of observables in $\{1,p,q,q^2\}$.
The form of these moments was not known and may be used to throw some light
on the still open problem of finding the explicit expression of
the probability distributions of these observables.
\begin{theorem}
Define, for $A,B,C\in\mathbb R$
$$
X:=A(\sqrt{2}q)^2+B(\sqrt{2}q)+C(\sqrt{2}p)
$$
Then
$$
\langle \Phi,X^n\Phi\rangle
=\sum_{i_1+2i_2+\dots+ki_k=n}\frac{2^{n}n!}{i_1!\dots i_k!}w_1^{i_1}\dots w_k^{i_k}
$$
where
$$
w_k=\frac{A^k}{2k}-\frac{A^{k-2}}{4}\gamma\chi_{\{k\geq2\}}-\frac{A^{k-3}}{8}\beta\chi_{\{k\geq3\}}+\frac{A^{k-4}}{16}\alpha\chi_{\{k\geq4\}}
$$
with
$$
\alpha=\frac{2}{3}A^2C^2,\;\beta=\frac{4}{3}AC^2,\;
\gamma=-\frac{1}{2}|M|^2=-\frac{1}{2}(B^2 +C^2)
$$
\end{theorem}
\begin{proof}
Recall that
$$\mathbb{E}(e^{itX})=e^{\varphi(t)}$$
where
$$\varphi(t)=-\frac{1}{2}\ln(1-2iAt)+\frac{\alpha t^4+i\beta t^3+\gamma t^2}{1-2iAt}$$
where $\alpha,\beta$ and $\gamma$ are given above. Hence, one has
\begin{equation}\label{n-moment}
\mathbb{E}(X^n)=\frac{1}{i^n}\Big(\frac{d}{dt}\Big)^n\Big|_{t=0}e^{\varphi(t)}
\end{equation}
Now we introduce the following formula (cf \cite{3})
\begin{eqnarray}\label{bourbaki}
\frac{d^n}{dt^n}e^{\varphi(t)}
=\sum_{i_1+2i_2+\dots+ki_k=n}\frac{n!}{i_1!\dots
i_k!}\Big(\frac{\varphi^{(1)}(t)}{1!}\Big)^{i_1}\dots
\Big(\frac{\varphi^{(k)}(t)}{k!}\Big)^{i_k}e^{\varphi(t)}
\end{eqnarray}
Put
\begin{eqnarray*}
&&\varphi_1(t)=\alpha t^4+i\beta t^3+\gamma t^2,\;\varphi_2(t)=(1-2iAt)^{-1},\\
&&\varphi_3(t)=-\frac{1}{2}\ln(1-2iAt),\;g(t)=\varphi_1(t)\varphi_2(t)
\end{eqnarray*}
Note that
\begin{eqnarray}\label{derivative}
\varphi_2^{(k)}(t)=(2iA)^kk!(1-2iAt)^{-k-1},\varphi_3^{(k)}(t)=\frac{1}{2}(2iA)^k(k-1)!(1-2iAt)^{-k}
\end{eqnarray}
Then, one gets
\begin{eqnarray*}
g^{(k)}(t)=\sum_{h=0}^k\pmatrix{k\cr h\cr}\varphi_1^{(h)}(t)\varphi_2^{(k-h)}(t)
\end{eqnarray*}
Because $\varphi_1^{(h)}(t)=0$ for all $h\geq5$ and $\varphi_1(0)=\varphi_1'(0)=0$, one gets
\begin{eqnarray}\label{convention}
g^{(k)}(0)=\frac{k!}{(k-2)!}\gamma\varphi_2^{(k-2)}(0)+i\frac{k!}{(k-3)!}\beta\varphi_2^{(k-3)}(0)+\frac{k!}{(k-4)!}\alpha\varphi_2^{(k-4)}(0)
\end{eqnarray}
where by convention $\varphi_2^{(i-j)}(0)=0$ if $i<j$. Therefore, identities (\ref{derivative}) and (\ref{convention}) imply that
\begin{eqnarray*}
g^{(k)}(0)=k!(2iA)^{k-2}\gamma\chi_{\{k\geq2\}}+ik!(2iA)^{k-3}\beta\chi_{\{k\geq3\}}+k!(2iA)^{k-4}\alpha\chi_{\{k\geq4\}}
\end{eqnarray*}
Then, for all $k\geq1$, one obtains
\begin{eqnarray*}
\frac{\varphi^{(k)}(0)}{k!}&=&\frac{\varphi_3^{(k)}(0)+g^{(k)}(0)}{k!}\\
&=&(2i)^k\Big(\frac{A^k}{2k}-\frac{A^{k-2}}{4}\gamma\chi_{\{k\geq2\}}-\frac{A^{k-3}}{8}\beta\chi_{\{k\geq3\}}+\frac{A^{k-4}}{16}\alpha\chi_{\{k\geq4\}}\Big)\\
&=&(2i)^kw_k
\end{eqnarray*}
where
$$w_k=\frac{A^k}{2k}-\frac{A^{k-2}}{4}\gamma\chi_{\{k\geq2\}}-\frac{A^{k-3}}{8}\beta\chi_{\{k\geq3\}}+\frac{A^{k-4}}{16}\alpha\chi_{\{k\geq4\}}$$
Thus, from identity (\ref{bourbaki}), one has
\begin{eqnarray*}
\Big(\frac{d}{dt}\Big)^n\Big|_{t=0}e^{\varphi(t)}&=&\sum_{i_1+2i_2+\dots+ki_k=n}\frac{n!}{i_1!\dots i_k!}(2i)^{i_1+2i_2+\dots+ki_k}w_1^{i_1}\dots w_k^{i_k}\\
&=&i^n\sum_{i_1+2i_2+\dots+ki_k=n}\frac{2^nn!}{i_1!\dots i_k!}w_1^{i_1}\dots w_k^{i_k}
\end{eqnarray*}
Finally, by using identity (\ref{n-moment}) the result of the above theorem holds true.
\end{proof}
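As a consistency check of the combinatorial factor $2^n$ in the moment formula, one can compare the partition sum against direct differentiation of $e^{\varphi(t)}$ for small $n$; a sympy sketch (library assumed available), with sample rational values of $A,B,C$ chosen here:

```python
import sympy as sp

t = sp.symbols('t')
I = sp.I
A, B, C = sp.Rational(1, 3), sp.Rational(1, 2), sp.Rational(1, 4)  # sample values
al = sp.Rational(2, 3)*A**2*C**2
be = sp.Rational(4, 3)*A*C**2
ga = -sp.Rational(1, 2)*(B**2 + C**2)
phi = -sp.log(1 - 2*I*A*t)/2 + (al*t**4 + I*be*t**3 + ga*t**2)/(1 - 2*I*A*t)

def w(k):
    v = A**k / (2*k)
    if k >= 2: v -= A**(k - 2)*ga/4
    if k >= 3: v -= A**(k - 3)*be/8
    if k >= 4: v += A**(k - 4)*al/16
    return v

def moment(n):
    # sum over i1 + 2 i2 + ... + n in = n of 2^n n!/(i1!...in!) w1^i1 ... wn^in
    total = sp.Integer(0)
    def rec(k, rem, coef):
        nonlocal total
        if k > n:
            if rem == 0:
                total += coef
            return
        i = 0
        while i * k <= rem:
            rec(k + 1, rem - i * k, coef * w(k)**i / sp.factorial(i))
            i += 1
    rec(1, n, sp.Integer(1))
    return 2**n * sp.factorial(n) * total

for n in range(1, 5):
    direct = sp.simplify(sp.diff(sp.exp(phi), t, n).subs(t, 0) / I**n)
    assert sp.simplify(direct - moment(n)) == 0
```

For instance, $n=2$ recovers $\mathbb{E}(X^2)=3A^2+B^2+C^2$, as a direct operator computation confirms.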
\subsection{Vacuum characteristic functions of observables in $Heis(1,1,N)$}
From (\ref{*2}) and (\ref{decomp4}) we deduce that
\begin{eqnarray*}
\langle \Phi , e^{iwp+iuP'(q)}\Phi \rangle &=&\langle \Phi ,e^{iu\frac{P(q+w)-P(q)}{w}}e^{iwp}\Phi \rangle \\
&=&e^{-w^2/2}\langle \Phi ,e^{iu\frac{P(q+w)-P(q)}{w}}e^{-wq}\Phi \rangle
\end{eqnarray*}
because
\begin{eqnarray*}
e^{i\alpha(\sqrt{2}p)}\Phi=e^{-\alpha^2}e^{-\sqrt{2}\alpha q}\Phi
\end{eqnarray*}
The q-projection method reduces the problem to an integral of the form
\begin{eqnarray*}
\langle \Phi , e^{iQ(q)}\Phi \rangle
\end{eqnarray*}
where $Q$ is a polynomial.
\section{Introduction}
\seclabel{introduction}
Many neural networks are over-parameterized
\citep{dauphin-2013-arXiv-big-neural-networks-waste,denil-2013-NIPS-predicting-parameters-in-deep},
enabling compression of each layer
\cite{denil-2013-NIPS-predicting-parameters-in-deep,wen-2016-arXiv-learning-structured-sparsity,han-2015-ICLR-deep-compression:-compressing}
or of the entire network
\cite{li-2018-ICLR-measuring-the-intrinsic-dimension}.
Some compression approaches enable more efficient computation by
pruning parameters, by factorizing matrices, or via other tricks
\cite{
han-2015-ICLR-deep-compression:-compressing,
NIPS1992_647,
NIPS1989_250,
li-2017-ICLR-pruning-filters-for-efficient,
louizos-2017-arXiv-bayesian-compression-for-deep,
luo-2017-ICCV-thinet-a-filter-level,
molchanov-2017-ICLR-pruning-convolutional-neural,
wen-2016-arXiv-learning-structured-sparsity,
yang-2017-CVPR-designing-energy-efficient-convolutional,
yang-2015-CVPR-deep-fried-convnets}.
Unfortunately, although sparse networks created via pruning often work well, training sparse networks directly often
fails, with the resulting networks underperforming their dense counterparts
\cite{li-2017-ICLR-pruning-filters-for-efficient,han-2015-ICLR-deep-compression:-compressing}.
A recent work by Frankle~\&~Carbin~\cite{frankle-2019-ICLR-the-lottery-ticket-hypothesis}\xspace was thus surprising to many researchers when it presented a simple algorithm
for finding sparse subnetworks within larger networks that \emph{are} trainable from scratch.
Their approach to finding these sparse, performant networks is as follows:
after training a network, set all weights smaller than some threshold to zero, pruning them (similarly to other pruning approaches \cite{NIPS2015_5784,han-2015-ICLR-deep-compression:-compressing,li2016pruning}), rewind the rest of the weights to their initial configuration, and then retrain the network from this starting configuration but with the zero weights frozen (not trained). Using this approach, they obtained two intriguing results.
First, they showed that the pruned networks performed well. Aggressively pruned networks (with 95 percent to 99.5 percent of weights pruned) showed no drop in performance compared to the much larger, unpruned network. Moreover, networks only moderately pruned (with 50 percent to 90 percent of weights pruned) often outperformed their unpruned counterparts.
Second, they showed that these pruned networks train well only if they are rewound to their initial state, including the specific initial weights that were used. Reinitializing the same network topology with new weights causes it to train poorly. As pointed out in \cite{frankle-2019-ICLR-the-lottery-ticket-hypothesis}\xspace, it appears that the specific combination of pruning mask and weights underlying the mask form a more efficient subnetwork found within the larger network, or, as named by the original study, a lucky winning ``Lottery Ticket,'' or LT.
While Frankle~\&~Carbin~\cite{frankle-2019-ICLR-the-lottery-ticket-hypothesis}\xspace clearly demonstrated LT networks to be effective, their work raises many intriguing questions about the underlying mechanics of these subnetworks. What about LT networks causes them to show better performance? Why are the mask and the initial set of weights so tightly coupled, such that re-initializing the network makes it less trainable? Why does simply selecting large weights constitute an effective criterion for choosing a mask?
We attempt to answer these questions by examining the essential steps of the lottery ticket algorithm, described below:
\begin{SCfigure}[20]
\centering
\caption{Different mask criteria can be thought of as segmenting the 2D ($w_i =$ initial weight value, $w_f =$ final weight value) space into regions corresponding to mask values of $1$ vs $0$. The ellipse represents in cartoon form the area occupied by the positively correlated initial and final weights from a given layer. The mask criterion shown, identified by two horizontal lines that separate the whole region into mask-1 (blue) areas and mask-0 (grey) areas, corresponds to the \maskcrit{large\_final} criterion used in \cite{frankle-2019-ICLR-the-lottery-ticket-hypothesis}\xspace: weights with large final magnitude are kept and weights with final values near zero are pruned.
}
\label{fig:kd_large_final_detailed}
\includegraphics[width=0.26\textwidth]{kd_large_final_detailed}
\end{SCfigure}
\begin{enumerate}
\setcounter{enumi}{-1}
\item Initialize a mask $m$ to all ones. Randomly initialize the parameters $w$ of a network \mbox{$f(x; w \odot m)$}
\item Train the parameters $w$ of the network $f(x; w \odot m)$ to completion. Denote the initial weights before training $w_i$ and the final weights after training $w_f$.
\item \emph{Mask Criterion.} Use the mask criterion $M(w_i, w_f)$ to produce a masking score for each currently unmasked weight.
Rank the weights in each layer by their scores, set the mask value for the top $p\%$ to 1, the bottom $(100-p)\%$ to 0, breaking ties randomly. Here $p$ may vary by layer, and we follow the ratios chosen in \cite{frankle-2019-ICLR-the-lottery-ticket-hypothesis}\xspace, summarized in \tabref{arch}.
In \cite{frankle-2019-ICLR-the-lottery-ticket-hypothesis}\xspace the mask selected weights with large final value
corresponding to $M(w_i, w_f) = |w_f|$.
\item \emph{Mask-1 Action.} Take some action with the weights with mask value 1. In \cite{frankle-2019-ICLR-the-lottery-ticket-hypothesis}\xspace these weights were reset to their initial values and marked for training in the next round.
\item \emph{Mask-0 Action.} Take some action with the weights with mask value 0. In \cite{frankle-2019-ICLR-the-lottery-ticket-hypothesis}\xspace these weights were pruned: set to 0 and frozen during any subsequent training.
\item Repeat from 1 if performing iterative pruning.
\end{enumerate}
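As a concrete reference, one round of this procedure (steps 1--4, with the \maskcrit{large\_final} criterion, rewinding as the mask-1 action, and zeroing as the mask-0 action) can be sketched in NumPy. This is a simplified, single-layer, one-shot sketch, and the function name and tie-handling are our own choices:

```python
import numpy as np

def lottery_ticket_round(w_init, w_final, mask, p_keep):
    """One LT round: score unmasked weights by |w_f| (large_final),
    keep the top p_keep fraction, rewind survivors to their initial
    values, and prune the rest to zero."""
    scores = np.abs(w_final)
    scores[mask == 0] = -np.inf              # already-pruned weights stay pruned
    n_alive = int(mask.sum())
    n_keep = int(np.ceil(p_keep * n_alive))
    # threshold at the n_keep-th largest score among alive weights
    flat = np.sort(scores[mask == 1])[::-1]
    thresh = flat[n_keep - 1]
    new_mask = (scores >= thresh).astype(float)   # ties kept together here
    w_next = w_init * new_mask               # rewind kept weights, zero pruned
    return new_mask, w_next

rng = np.random.default_rng(0)
w_i = rng.normal(size=(4, 4))
w_f = w_i + rng.normal(scale=0.5, size=(4, 4))
m1, w1 = lottery_ticket_round(w_i, w_f, np.ones((4, 4)), p_keep=0.5)
```

Iterative pruning simply feeds `m1` and a fresh training run back into the same function.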
In this paper we perform ablation studies along the above three dimensions of variability, considering alternate mask criteria (\secref{masks}), alternate mask-1 actions (\secref{oneaction}), and alternate mask-0 actions (\secref{zeroaction}). These studies in aggregate reveal new insights for why lottery ticket networks work as they do.
Along the way we discover the existence of Supermasks---masks that
produce above-chance performance when applied to untrained networks (\secref{supermask}). We make our code available at \url{https://github.com/uber-research/deconstructing-lottery-tickets}.
\section{Mask criteria}
\seclabel{masks}
We begin our investigation with a study of different \emph{Mask Criteria}, or functions that decide which weights to keep vs. prune.
In this paper, we define the mask for each individual weight as a function of the weight's values both at initialization and after training: $M(w_i, w_f)$. We can visualize this function as a set of decision boundaries in a 2D space as shown in \figref{kd_large_final_detailed}.
In \cite{frankle-2019-ICLR-the-lottery-ticket-hypothesis}\xspace, the mask criterion simply keeps weights with large final magnitude;
we refer to this as the \maskcrit{large\_final} mask,
$M(w_i, w_f) = |w_f|$.
We experiment with mask criteria based on final weights (\maskcrit{large\_final} and \maskcrit{small\_final}), initial weights (\maskcrit{large\_init} and \maskcrit{small\_init}), a combination of the two (\maskcrit{large\_init\_large\_final} and \maskcrit{small\_init\_small\_final}), and how much weights move (\maskcrit{magnitude\_increase} and \maskcrit{movement}). We also include \maskcrit{random} as a control case, which chooses masks randomly. These nine masks are depicted along with their associated equations in \figref{masks}.
Note that the main difference between \maskcrit{magnitude\_increase} and \maskcrit{movement} is that those weights that change sign are more likely to be kept in the \maskcrit{movement} criterion than the \maskcrit{magnitude\_increase} criterion.
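The eight (non-random) scoring functions can be written down directly. In this hedged sketch we set $\alpha = 1$ for simplicity, whereas in the experiments $\alpha$ is adjusted to align percentiles between $|w_i|$ and $|w_f|$; the tiny example illustrates the \maskcrit{movement} vs.\ \maskcrit{magnitude\_increase} difference noted above:

```python
import numpy as np

# Scoring functions M(w_i, w_f); higher score = more likely to be kept.
CRITERIA = {
    "large_final":            lambda wi, wf: np.abs(wf),
    "small_final":            lambda wi, wf: -np.abs(wf),
    "large_init":             lambda wi, wf: np.abs(wi),
    "small_init":             lambda wi, wf: -np.abs(wi),
    "large_init_large_final": lambda wi, wf: np.minimum(np.abs(wf), np.abs(wi)),
    "small_init_small_final": lambda wi, wf: -np.maximum(np.abs(wf), np.abs(wi)),
    "magnitude_increase":     lambda wi, wf: np.abs(wf) - np.abs(wi),
    "movement":               lambda wi, wf: np.abs(wf - wi),
}

# Weight 0 flips sign (0.5 -> -0.1); weight 1 grows in place (-0.5 -> -1.0).
wi = np.array([0.5, -0.5])
wf = np.array([-0.1, -1.0])
mag_inc = CRITERIA["magnitude_increase"](wi, wf)  # sign-flipper scores low
move = CRITERIA["movement"](wi, wf)               # sign-flipper scores high
```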
\newcolumntype{L}{>{\centering\arraybackslash}m{0.095\linewidth}}
\begin{figure}[t]
\vskip 0.15in
\begin{center}
\begin{small}
{\setlength{\tabcolsep}{4pt}
\hspace*{-.75em}\begin{tabular}{LLLLLLLLL}
\toprule
large final &
small final &
large init &
small init &
large init large final &
small init small final &
magnitude increase &
movement &
random \\
$|w_f|$ &
$-|w_f|$ &
$|w_i|$ &
$-|w_i|$ &
\scalebox{.65}{$\min(\alpha|w_f|, |w_i|)$} &
\scalebox{.60}{$-\max(\alpha|w_f|, |w_i|)$} &
\scalebox{.9}{$|w_f|-|w_i|$} &
$|w_f - w_i|$ &
0\\
\includegraphics[width=1.0\linewidth]{kd_large_final} &
\includegraphics[width=1.0\linewidth]{kd_small_final} &
\includegraphics[width=1.0\linewidth]{kd_large_init} &
\includegraphics[width=1.0\linewidth]{kd_small_init} &
\includegraphics[width=1.0\linewidth]{kd_large_init_large_final} &
\includegraphics[width=1.0\linewidth]{kd_small_init_small_final} &
\includegraphics[width=1.0\linewidth]{kd_magnitude_increase} &
\includegraphics[width=1.0\linewidth]{kd_movement} &
\includegraphics[width=1.0\linewidth]{kd_random} \\
\bottomrule
\end{tabular}}
\end{small}
\end{center}
\caption{Mask criteria studied in this section, starting with \maskcrit{large\_final} that was used in \cite{frankle-2019-ICLR-the-lottery-ticket-hypothesis}\xspace. Names we use to refer to the various methods are given along with the formula that projects each $(w_i, w_f)$ pair to a score. Weights with the largest scores (colored regions) are kept, and weights with the smallest scores (gray regions) are pruned. The $x$ axis in each small figure is $w_i$ and the $y$ axis is $w_f$. In two methods, $\alpha$ is adjusted as needed to align percentiles between $w_i$ and $w_f$. When masks are created, ties are broken randomly, so a score of 0 for every weight results in random masks.
}
\vspace*{-.5em}
\figlabel{masks}
\end{figure}
\figp[t]{pruning_methods_condensed}{1}{
Test accuracy at early stopping iteration
of different mask criteria for four networks at various pruning rates. Each line is a different mask criterion, with bands around the best-performing criteria (\maskcrit{large\_final} and \maskcrit{magnitude\_increase}) and the baseline (\maskcrit{random}) depicting the min and max over 5 runs. Stars represent points where \maskcrit{large\_final} or \maskcrit{magnitude\_increase} is significantly above the other at $p < 0.05$. The eight mask criteria form four groups of inverted pairs (each column of the legend represents one such pair) that act as controls for each other. We observe that \maskcrit{large\_final} and \maskcrit{magnitude\_increase} have the best performance, with \maskcrit{magnitude\_increase} attaining slightly higher accuracy on Conv2 and Conv4.
See \figref{pruning_methods_combined_crop} for results on convergence speed.
\vspace*{-.5em}
}
In this section and throughout the remainder of the paper, we follow the experimental framework from \cite{frankle-2019-ICLR-the-lottery-ticket-hypothesis}\xspace and perform iterative pruning experiments on a 3-layer fully-connected network (FC) trained on MNIST \cite{lecun-1998-IEEE-gradient-based-learning-applied} and on three convolutional neural networks (CNNs), Conv2, Conv4, and Conv6 (small CNNs with 2/4/6 convolutional layers, same as used in \cite{frankle-2019-ICLR-the-lottery-ticket-hypothesis}\xspace) trained on CIFAR-10 \cite{krizhevsky-2009-TR-learning-multiple-layers}. For more architecture and training details, see \secref{arch} in Supplementary Information. We hope to expand these experiments to larger datasets and deeper models in future work. In particular, \cite{lth_large} shows that the original LT algorithm as proposed does not generalize to ResNet on ImageNet. It would be valuable to see how well the experiments in this paper generalize to harder problems.
Results of all criteria are shown in \figref{pruning_methods_condensed} for the four networks (FC, Conv2, Conv4, Conv6). The accuracy shown is the test accuracy at an early stopping iteration\footnote{The early stopping criterion we employ in this paper is the iteration of minimum validation loss.} of training.
For all figures in this paper, the line depicts the mean over five runs, and the band (if shown) depicts the min and max obtained over five runs. In some cases the band is omitted for visual clarity.
Note that the first six criteria as depicted in \figref{masks} form three opposing pairs; in each case, we observe that when one member of the pair performs better than the random baseline, the opposing member performs worse than it. Moreover, the \maskcrit{magnitude\_increase} criterion turns out to work just as well as the \maskcrit{large\_final} criterion, and in some cases significantly better\footnote{We run a t-test for each pruning percentage based on a sample of 5 independent runs for each mask criterion.}.
The conclusion so far is that although \maskcrit{large\_final} is a very competitive mask criterion, the LT behavior is not limited to this mask criterion as other mask criteria (\maskcrit{magnitude\_increase}, \maskcrit{large\_init\_large\_final}, \maskcrit{movement}) can also match or exceed the performance of the original network. This partially answers our question about the efficacy of different mask criteria. Still unanswered: why either of the two front-running criteria (\maskcrit{magnitude\_increase}, \maskcrit{large\_final}) should work well in the first place. We uncover those details in the following two sections.
\section{Mask-1 actions: the sign-ificance of initial weights}
\seclabel{oneaction}
Now that we have explored various ways of choosing which weights to keep and prune, we will consider how we should initialize the kept weights. In particular, we want to explore an interesting observation in \cite{frankle-2019-ICLR-the-lottery-ticket-hypothesis}\xspace: pruned, skeletal LT networks train well when rewound to their original initialization, but degrade in performance when randomly reinitialized.
Why does reinitialization cause LT networks to train poorly? Which components of the original initialization are important? To investigate, we keep all other treatments the same as \cite{frankle-2019-ICLR-the-lottery-ticket-hypothesis}\xspace and perform a number of variants in the treatment of 1-masked, trainable weights, in terms of how to reinitialize them before the subnetwork training:
\begin{itemize}
\item ``Reinit'' experiments: reinitialize kept weights based on the original init distribution.
\item ``Reshuffle'' experiments: reinitialize while respecting the original distribution of remaining weights in that layer by reshuffling the kept weights' initial values.
\item ``Constant'' experiments: reinitialize by setting 1-masked weight values to a positive or negative constant; thus every weight on a layer becomes one of three values: $-\alpha$, $0$, or $\alpha$, with $\alpha$ being the standard deviation of each layer's original initialization.
\end{itemize}
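The three treatments, and the sign-preserving variant of each discussed below, can be sketched for a single layer as follows. This is an illustrative sketch with our own names; we use the layer's empirical standard deviation as a stand-in for its initializer's scale:

```python
import numpy as np

def reinit_kept(w_init, mask, mode, keep_sign, rng):
    """Reinitialize the kept (mask == 1) weights of one layer.
    If keep_sign, each new value adopts the sign of its original init."""
    kept = mask == 1
    alpha = w_init.std()                 # stand-in for the layer's init std
    if mode == "reinit":                 # fresh draws from the init distribution
        new = rng.normal(0.0, alpha, size=w_init.shape)
    elif mode == "reshuffle":            # permute the kept initial values
        new = np.zeros_like(w_init)
        new[kept] = rng.permutation(w_init[kept])
    elif mode == "constant":             # every kept weight becomes +/- alpha
        new = np.full_like(w_init, alpha)
    else:
        raise ValueError(mode)
    if keep_sign:
        new = np.abs(new) * np.sign(w_init)
    return np.where(kept, new, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=8)
m = (np.abs(w) > 0.5).astype(float)
out = reinit_kept(w, m, "constant", keep_sign=True, rng=rng)
```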
All of the reinitialization experiments are based on the same original networks and use the \maskcrit{large\_final} mask criterion with iterative pruning. We include the original LT network (``rewind, large final'') and the randomly pruned network (``random'') as baselines for comparison.
We find that none of these three variants alone are able to train as well as the original LT network, shown as dashed lines in \figref{reinit_exp_condensed}.
However, all three variants work better when we ensure that the new values of the kept weights are of the same sign as their original initial values. These are shown as solid color lines in \figref{reinit_exp_condensed}.
Clearly, the common factor in all working variants including the original rewind action is the sign. As long as you keep the sign, reinitialization is not a deal breaker; in fact, even setting all kept weights to a constant value consistently performs well! The significance of the sign suggests, in contrast to \cite{frankle-2019-ICLR-the-lottery-ticket-hypothesis}, that the basin of attraction for an LT network is actually quite large: optimizers work well anywhere in the correct sign quadrant for the weights, but encounter difficulty crossing the zero barrier between signs.
\figp[t]{reinit_exp_condensed}{1}{The effects of various 1-actions for four networks at various pruning rates. All reinitialization experiments use the \maskcrit{large\_final} mask criterion with iterative pruning. Dotted lines represent the three described methods, and solid lines are those three except with each weight having the same sign as its original initialization. Shaded bands around notable runs depict the min and max over 5 runs. Stars represent points where ``rewind (\maskcrit{large\_final})'' or ``constant, init sign'' is significantly above the other at a $p < 0.05$ level; the scarcity of stars indicates little difference in performance between the two. The original \maskcrit{large\_final} and \maskcrit{random} are included as baselines. See \figref{reinit_exps_combined_crop} for results on convergence speed.
\vspace*{-1.5em}
}
\section{Mask-0 actions: masking is training}
\seclabel{zeroaction}
What should we do with weights that are pruned?
This question may seem trivial, as deleting them (equivalently: setting them to zero) is the standard practice. The term ``pruning'' implies the dropping of connections by setting weights to zero, and these weights are thought of as unimportant. However, if the value of zero for the pruned weights is not important to the performance of the network, we should expect that we can set pruned weights to some other value, such as leaving them frozen at their initial values, without hurting the trainability of the network. This turns out not to be the case. We show in this section that the zero values actually matter, that an alternative freezing approach results in better-performing networks, and that masking can be viewed as a form of training.
Typical network pruning procedures \cite{NIPS2015_5784,han-2015-ICLR-deep-compression:-compressing,li2016pruning} perform two actions on pruned weights: set them to zero, and freeze them in subsequent training (equivalent to removing those connections from the network). It is unclear which of these two components leads to the increased performance in LT networks. To separate the two factors, we run a simple experiment: we reproduce the LT iterative pruning experiments in which network weights are masked out in alternating train/mask/rewind cycles, but try an additional treatment: freeze masked weights at their initial values instead of at zero. If zero isn't special, both should perform similarly.
\figref{freeze_init} shows the results for this experiment. We find that networks perform significantly better when weights are frozen specifically at zero than at random initial values. For these networks masked via the LT \maskcrit{large\_final} criterion\footnote{\figref{kd_large_final_motion} illustrates why the \maskcrit{large\_final} criterion biases weights that were moving toward zero during training toward zero in the mask, effectively pushing them further in the direction they were headed.}, zero would seem to be a particularly good value to set pruned weights to. At high levels of pruning, freezing at the initial values may perform better, which makes sense since having a large number of zeros means having lots of dead connections.
So why does zero work better than initial values? One hypothesis is that the mask criterion we use \emph{tends to mask to zero those weights that were headed toward zero anyway}.
To test this hypothesis, we propose another mask-0 action halfway between freezing at zero and freezing at initialization: for any zero-masked weight, freeze it at zero if it moved toward zero over the course of training, and freeze it at its random initial value if it moved away from zero. We show two variants of this experiment in \figref{freeze_init}. In the first variant, we apply it directly as stated to zero-masked weights (to be pruned). We see that by doing so we achieve performance comparable to the original LT networks at low pruning rates and better performance at high pruning rates. In the second variant, we extend this action to one-masked weights as well; that is, we initialize every weight to zero if it moved toward zero during training, regardless of the pruning action applied to it.
We see that performance of Variant 2 is even better than Variant 1, suggesting that this new mask-0 action we found can be a beneficial mask-1 action too. These results support our hypothesis that the benefit derived from freezing values to zero comes from the fact that those values were moving toward zero anyway\footnote{Additional control variants of this experiment can be seen in Supplementary Information \secref{si:zeroaction}.}. This view on masking as training provides a new perspective on 1) why certain mask criteria work well (\maskcrit{large\_final} and \maskcrit{magnitude\_increase} both bias towards setting pruned weights close to their final values in the previous round of training), 2) the important contribution of the value of pruned weights to the overall performance of pruned networks, and 3) the benefit of setting these select weights to zero as a better initialization for the network.
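The three mask-0 treatments compared here are easy to state precisely. The sketch below (function name ours) computes the value at which each pruned weight is frozen, with `toward_zero` being the direction-dependent action proposed above:

```python
import numpy as np

def mask0_values(w_init, w_final, action):
    """Value at which each pruned (mask-0) weight is frozen."""
    if action == "zero":               # standard pruning: freeze at zero
        return np.zeros_like(w_init)
    if action == "init":               # freeze at the random initial value
        return w_init.copy()
    if action == "toward_zero":        # zero only if training shrank the weight
        moved_toward_zero = np.abs(w_final) < np.abs(w_init)
        return np.where(moved_toward_zero, 0.0, w_init)
    raise ValueError(action)

w_i = np.array([0.8, -0.2, 0.1])
w_f = np.array([0.3, -0.9, 0.4])   # weight 0 shrank; weights 1 and 2 grew
frozen = mask0_values(w_i, w_f, "toward_zero")
# weight 0 is frozen at zero, the others at their initial values
```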
\begin{figure}[t]
\begin{center}
\includegraphics[width=.95\linewidth]{freeze_exp_all_with_eggs_crop_export_crop}
\caption{
Performance of network pruning using different treatments of pruned weights (mask-0 actions). Horizontal black lines represent the performance of training the original, full network, averaged over five runs. Solid blue lines represent the original LT algorithm, which freezes pruned weights at zero. Dotted blue lines freeze pruned weights at their initial values. Grey lines show the new proposed 0-action---set to zero if they decreased in magnitude by the end of training, otherwise set to their initialization values. Two variants are shown: 1) new treatment applied to only pruned weights (dashdotted grey lines); 2) new treatment applied to all weights (dashed grey lines).
}
\figlabel{freeze_init}
\end{center}
\vspace*{-1.5em}
\end{figure}
\section{Supermasks}
\seclabel{supermask}
The hypothesis above suggests that for certain mask criteria, like \maskcrit{large\_final}, masking is training: \emph{the masking operation tends to move weights in the direction they would have moved during training}. If so, just how powerful is this training operation?
To answer this, we can start from the beginning---not training the network at all, but simply applying a mask to the randomly initialized network.
It turns out that with a well-chosen mask, an untrained network can already attain a test accuracy far better than chance. This might come as a surprise, because if you use a randomly initialized and untrained network to, say, classify images of handwritten digits from the MNIST dataset, you would expect accuracy to be no better than chance (about 10\%). But now imagine you multiply the network weights by a mask containing only zeros and ones. In this instance, weights are either unchanged or deleted entirely, but the resulting network now achieves nearly 40 percent accuracy at the task! This is strange, but it is exactly what we observe with masks created using the \maskcrit{large\_final} criterion.
In randomly-initialized networks with \maskcrit{large\_final} masks, it is not implausible to have better-than-chance performance since the masks are derived from the training process. The large improvement in performance is still surprising, however, since the only transmission of information from the training back to the initial network is via a zero-one mask based on a simple criterion. We call masks that can produce better-than-chance accuracy without training of the underlying weights ``Supermasks''.
We now turn our attention to finding better Supermasks.
First, we simply gather all masks instantiated in the process of creating the networks shown in \figref{masks}, apply them to the original, randomly initialized networks, and evaluate the accuracy without training the network. Next, compelled by the demonstration in \secref{oneaction} of the importance of signs and in \secref{zeroaction} of keeping large weights, we define a new \maskcrit{large\_final\_same\_sign} mask criterion that selects for weights with large final magnitudes that also maintained the same sign by the end of training. This criterion, as well as the control case of \maskcrit{large\_final\_diff\_sign}, is depicted in \figref{better_than_chance_init_crop}. Performances of Supermasks produced by all $10$ criteria are included in \figref{supermask_combined_small_crop}, compared with two baselines: networks untrained and unmasked (\maskcrit{untrained\_baseline}) and networks fully trained (\maskcrit{trained\_baseline}). For simplicity, we evaluate Supermasks based on one-shot pruning rather than iterative pruning.
\newcolumntype{K}{>{\centering\arraybackslash}m{0.5\linewidth}}
\begin{figure}[t]
\vskip 0.15in
\centering
\begin{minipage}[b]{0.4\hsize}\centering
\includegraphics[width=1\linewidth]{better_than_chance_init_crop}
\end{minipage}
\hspace{3em}
\begin{minipage}[b]{0.25\hsize}\centering
\begin{small}
\begin{tabular}{KK}
\toprule
large final, \vspace*{-.7em} &
large final, \vspace*{-.7em} \\
same sign &
diff sign \\
$\max(0,\frac{w_iw_f}{|w_i|})$ &
$\max(0,\frac{-w_iw_f}{|w_i|})$ \\
\includegraphics[width=1.0\linewidth]{kd_large_final_same_sign} &
\includegraphics[width=1.0\linewidth]{kd_large_final_diff_sign} \\
\bottomrule
\end{tabular}
\end{small}
\end{minipage}
\caption{
\textbf{(left)}\xspace Untrained networks perform at chance (10\% accuracy) on MNIST, if they are randomly initialized, or randomly initialized and randomly masked. However, applying the \maskcrit{large\_final} mask improves the network accuracy beyond the chance level.
\textbf{(right)}\xspace The \maskcrit{large\_final\_same\_sign} mask criterion (left) that tends to produce the best Supermasks. In contrast to the \maskcrit{large\_final} mask in \figref{kd_large_final_detailed}, this criterion masks out the quadrants where the sign of $w_i$ and $w_f$ differ. We include \maskcrit{large\_final\_diff\_sign} (right) as a control.
}
\figlabel{better_than_chance_init_crop}
\vspace*{-1em}
\end{figure}
We see that \maskcrit{large\_final\_same\_sign} significantly outperforms the other mask criteria in terms of accuracy at initialization. We can create networks that obtain a remarkable 80\% test accuracy on MNIST and 24\% on CIFAR-10 without training using this simple mask criterion. Another curious observation is that if we apply the mask to a signed constant (as described in \secref{oneaction}) rather than the actual initial weights, we can produce even higher test accuracy of up to 86\% on MNIST and 41\% on CIFAR-10! Detailed results across network architectures, pruning percentages, and these two treatments, are shown in \figref{supermask_combined_small_crop}.
\figp{supermask_combined_small_crop}{1}{
Comparison of Supermask performance in terms of test accuracy on MNIST and CIFAR-10 classification tasks. Subfigures are across two network structures (top: FC on MNIST, bottom: Conv4 on CIFAR-10), as well as 1-action treatments (left: weights are at their original initialization, right: weights are converted to signed constants).
No training is performed in any network. Within heuristic based Supermasks (excluding \maskcrit{learned\_mask}), the \maskcrit{large\_final\_same\_sign} mask creates the highest performing Supermask by a wide margin. Note that aside from the five independent runs performed to generate uncertainty bands for this plot, every point on this plot is from the same underlying network, just with different masks. See \figref{supermask_combined_full_crop} for performance on all four networks.
\vspace*{-.5em}
}
We find it fascinating that these Supermasks exist and can be found via such simple criteria. As an aside, they also present a method for network compression, since we only need to save a binary mask and a single random seed to reconstruct the full weights of the network.
\vspace*{-.5em}
\subsection{Optimizing the Supermask}
We have shown that Supermasks derived using simple heuristics greatly enhance the performance of the underlying network immediately, with no training involved. In this section we are interested in how far we can push the performance of Supermasks by \emph{training the mask}, instead of training network weights. Similar works in this domain include training networks with binary weights \cite{NIPS2015_5647, courbariaux2016binarized}, or training masks to adapt a base network to multiple tasks \cite{mallya2018piggyback}. Our work differs in that the base network is randomly initialized, never trained, and masks are optimized for the original task.
We do so by creating a trainable mask variable for each layer while freezing all original parameters for that layer at their random initialization values.
For an original weight tensor $w$ and a mask tensor $m$ of the same shape, we have as the effective weight $w' = w_i \odot g(m)$, where $w_i$ denotes the initial values at which the weights are frozen, $\odot$ is element-wise multiplication, and $g$ is a point-wise function that transforms a matrix of continuous values into binary values.
We train the masks with $g(m) = \operatorname{Bern}(S(m))$, where $\operatorname{Bern}(p)$ is the Bernoulli sampler with probability $p$, and $S(m)$ is the sigmoid function.
The Bernoulli sampling adds some stochasticity that helps with training, mitigates the bias of all mask values starting at the same value, and in effect uses the expected value of $S(m)$, which is especially useful when probabilities are close to 0.5.
By training the $m$ matrix with SGD, we obtained up to 95.3\% test accuracy on MNIST and 65.4\% on CIFAR-10.
Results are shown in \figref{supermask_combined_small_crop}, along with all the heuristic-based, unlearned Supermasks. Note that there is no straightforward way to control for the pruning percentage. Instead, we initialize $m$ with larger or smaller magnitudes, which nudges the network toward pruning more or less. This allows us to produce masks with the amounts of pruning (percentages of zeros) ranging from 7\% to 89\%.
Further details about the training can be seen in \secref{si:supermask}.
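A minimal forward pass for this setup might look as follows. This is a NumPy sketch with names of our own choosing; the gradient trick used to backpropagate through the Bernoulli sample (typically a straight-through estimator) is omitted:

```python
import numpy as np

def supermask_forward(x, w_frozen, m, rng):
    """Effective weight w' = w_frozen * Bern(sigmoid(m)); only m is trained,
    while w_frozen stays at its random initialization."""
    s = 1.0 / (1.0 + np.exp(-m))                   # S(m): keep probabilities
    g = (rng.random(m.shape) < s).astype(float)    # Bernoulli sample of the mask
    return x @ (w_frozen * g)

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 2))          # frozen at random init, never trained
m = np.full((3, 2), 50.0)            # huge logits -> sigmoid ~ 1 -> keep all
y = supermask_forward(np.ones((1, 3)), w, m, rng)
```

Driving the logits in `m` strongly negative instead would prune the corresponding weights, which is how the effective pruning percentage is nudged up or down.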
\vspace*{-.5em}
\subsection{Dynamic Weight Rescaling}
One beneficial trick in Supermask training is to dynamically rescale the values of weights based on the sparsity of the network in the current training iteration. For each training iteration and for each layer, we multiply the underlying weights by the ratio of the total number of weights in the layer over the number of ones in the corresponding mask. Dynamic rescaling leads to significant improvements in the performance of the masked networks, which is illustrated in \tabref{supermask-testacc}.
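For one layer with a current mask sample $g$, the rescaling amounts to the following sketch (names ours):

```python
import numpy as np

def dwr(w_frozen, g):
    """Dynamic Weight Rescaling: scale the layer's effective weights by
    (total number of weights) / (number of ones in the current mask)."""
    return (w_frozen * g) * (g.size / g.sum())

g = np.array([[1.0, 0.0], [1.0, 1.0]])   # 3 of 4 weights alive
w = np.ones((2, 2))
scaled = dwr(w, g)                        # each surviving weight becomes 4/3
```

This keeps the expected magnitude of a layer's output roughly constant as the mask sparsity changes from iteration to iteration.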
\tabref{supermask-testacc} summarizes the best test accuracy obtained through different treatments.
The results show a striking improvement of learned Supermasks over heuristic-based ones. Learned Supermasks result in performance close to that of training the full network, which suggests that a network upon initialization already contains powerful subnetworks that work well without training.
\muchlater{Additionally, the learning of Supermask allows identifying a possibly optimal pruning rate for each layer, since each layer is free to learn the distribution of $0$s in $m$ on their own. For instance, in \cite{frankle-2019-ICLR-the-lottery-ticket-hypothesis}\xspace the last layer of each network is designed to be pruned approximately half as much as the other layers, in our setting this ratio is automatically adjusted.}
\begin{table}[t]
\caption{Test accuracy of the best Supermasks with various initialization treatments. Values shown are the max over any prune percentage and averaged over four or more runs. The first two columns show untrained networks with heuristic-based masks, where ``init'' stands for the initial, untrained weights, and ``S.C.'' is the signed constant approach, which replaces each random initial weight with its sign as described in \secref{oneaction}. The next two columns show results for untrained weights overlaid with learned masks; and the two after add the Dynamic Weight Rescaling (DWR) approach. The final column shows the performance of networks with weights trained directly using gradient descent. Bold numbers show the performance of the best Supermask variation.}
\tablabel{supermask-testacc}
\begin{center} \begin{small}
\renewcommand{\arraystretch}{1.0}
\begin{tabular}{lccccccc}
\toprule
& & & & & DWR & DWR & \\
& & & learned & learned & learned & learned & \\
& mask & mask & mask & mask & mask & mask & \\
& $\odot$ & $\odot$ & $\odot$ & $\odot$ & $\odot$ & $\odot$ & trained \\
Network & init & S.C. & init & S.C. & init & S.C. & weights \\
\midrule
MNIST FC & 79.3 & 86.3 & 95.3 & 96.4 & 97.8 & \textbf{98.0} & 97.7 \\
CIFAR Conv2 & 22.3 & 37.4 & 64.4 & \textbf{66.3} & 65.0 & 66.0 & 69.2 \\
CIFAR Conv4 & 23.7 & 39.7 & 65.4 & 66.2 & 71.7 & \textbf{72.5} & 75.4 \\
CIFAR Conv6 & 24.0 & 41.0 & 65.3 & 65.4 & 76.3 & \textbf{76.5} & 78.3 \\
\bottomrule
\end{tabular}
\vspace*{-1.5em}
\end{small} \end{center}
\end{table}
\section{Conclusion}
In this paper, we have studied how three components of LT-style network pruning---mask criterion, treatment of kept weights during retraining (mask-1 action), and treatment of pruned weights during retraining (mask-0 action)---come together to produce sparse and performant subnetworks. We proposed the hypothesis that networks work well when pruned weights are set close to their final values. Building on this hypothesis, we introduced alternative freezing schemes and other mask criteria that meet or exceed current approaches by respecting this basic rule. We also showed that the only element of the original initialization that is crucial to the performance of LT networks is the sign, not the relative magnitude of the weights. Finally, we demonstrated that the masking procedure can be thought of as a training operation, and
consequently we uncovered the existence of Supermasks, which can produce partially working networks without training.
\subsubsection*{Acknowledgments}
The authors would like to acknowledge
Jonathan Frankle,
Joel Lehman,
Zoubin Ghahramani,
Sam Greydanus,
Kevin Guo,
and members of the Deep Collective research group at Uber AI
for combinations of helpful discussion, ideas, feedback on experiments, and comments on early drafts of this work.
\clearpage
\section{Introduction}
\label{sec:intro}
We consider dynamical fluctuations in systems described by Markov chains. The nature of such fluctuations in physical systems
constrains the mathematical models that can be used to describe them. For example, there are well-known relationships between
equilibrium physical systems and detailed balance in Markov models~\cite[Section 5.3.4]{Gardiner2009a}. Away from equilibrium,
fluctuation theorems~\cite{Gallavotti1995a,Jarzynski1997a,Lebowitz1999a,Maes1999a,Crooks2000a} and associated ideas of local
detailed balance~\cite{Lebowitz1999a,Maes2008b} have shown how the entropy production of a system must be accounted for correctly
when modelling physical systems. However, the mathematical structures that determine the probabilities of non-equilibrium
fluctuations are still only partially understood.
We characterise dynamical fluctuations using an approach based on the \emph{Onsager-Machlup (OM) theory}~\cite{Machlup1953a},
which is concerned with fluctuations of macroscopic properties of physical systems (for example, density or energy). Associated
to these fluctuations is a \emph{large-deviation principle} (LDP), which encodes the probability of rare dynamical trajectories.
The classical ideas of OM theory have been extended in recent years, through the \emph{Macroscopic Fluctuation Theory} (MFT) of
Bertini \emph{et al.}~\cite{Bertini2015a}. This theory uses an LDP to describe path probabilities for the density and current in
diffusive systems, on the hydrodynamic scale. At the centre of MFT is a decomposition of the current into two orthogonal terms,
one of which is symmetric under time-reversal, and another which is anti-symmetric. The resulting theory is a general framework
for the analysis of dynamical fluctuations in a large class of non-equilibrium systems. It also connects dynamical fluctuations
with thermodynamic quantities like free energy and entropy production, and with associated non-equilibrium objects like the
quasi-potential (which extends the thermodynamic free energy to non-equilibrium settings).
Here, we show how several features that appear in MFT can be attributed to a general structure that characterises dynamical
fluctuations in microscopic Markov models. That is, the properties of the hydrodynamic (MFT) theory can be traced back to the
properties of the underlying stochastic processes. Our approach builds on recent work by Mielke, Renger and M.~A.~Peletier, in
which the analogue of the OM theory for reversible Markov chains has been described in terms of a \emph{generalised gradient-flow
structure}~\cite{Mielke2014a}. To describe non-equilibrium processes, that theory must be generalised to include irreversible
Markov chains. This can be achieved using the canonical structure of fluctuations discovered by Maes and
Neto{\v{c}}n\'y~\cite{Maes2008a}. Extending their approach, we decompose currents in the system into two parts, and we identify a
kind of orthogonality relationship associated with this decomposition. However, in contrast to the classical OM theory and to
MFT, the large deviation principles that appear in our approach have non-quadratic rate functions, which means that fluxes have
non-linear dependence on their conjugate forces. Thus, the idea of orthogonality between currents needs to be generalised, just as
the notion of gradient flows in macroscopic equilibrium systems can be extended to generalised gradient flows.
The central players in our analysis are the probability density $\rho$ and the probability current $j$. For a given Markov chain,
the relation between these quantities is fully encoded in the master equation, which also fully specifies the dynamical
fluctuations in that model. However, thermodynamic aspects of the system --- the roles of heat, free energy, and entropy
production --- are not apparent in the master equation. Within the Onsager-Machlup theory, these thermodynamic quantities appear
in the action functional for paths, and solutions of the master equation appear as paths of minimal action. Hence, the structure
that we discuss here, and particularly the decomposition of the current into two components, links the dynamical properties of the
system to thermodynamic concepts, both for equilibrium and non-equilibrium systems.
\subsection{Summary}
\label{sec:Summary}
We now sketch the setting considered in this article (precise definitions of the systems of interest and the relevant currents,
densities and forces will be given in Section~\ref{sec:Onsag-Machl-theory-Markov} below).
We introduce a large parameter $\cal N$, which might be the {size of} the system (as in MFT) or a large number of
copies of the system (an ensemble), as considered for Markov chains in~\cite{Maes2008b}. Then let
$(\hat\rho^{\;\!\cal N}_t,\hat\jmath^{\;\!\cal N}_t)_{t\in[0,T]}$ be the (random) path followed by the system's density and current, in the
time interval $[0,T]$. Consider a random initial condition such that
$\mathrm{Prob}( \hat\rho_0^{\;\!\cal N} \approx \rho ) \asymp \exp[-\mathcal{N} I_0(\rho) ]$, asymptotically as $\mathcal N\to\infty$,
for some rate functional $I_0$. Paths that in addition satisfy a continuity equation $\dot\rho + \operatorname{div} j =0$ have the
asymptotic probability
\begin{equation}
\label{equ:pathwise-general}
\mathrm{Prob}\left( (\hat\rho_t^{\;\!\cal N},\hat\jmath_t^{\;\!\cal N})_{t\in[0,T]} \approx (\rho_t, j_t)_{t\in[0,T]}\right)
\asymp \exp\left\{-\mathcal N I_{[0,T]}\left( (\rho_t, j_t)_{t\in[0,T]}\right)\right\}
\end{equation}
with the \emph{rate functional}
\begin{equation}
\label{eqn:mc_rate_functional}
I_{[0,T]}\bigl((\rho_t, j_t)_{t\in[0,T]}\bigr)= I_0(\rho_0) + \frac12\int_0^T \Phi(\rho_t,j_t, F(\rho_t)) \df t ;
\end{equation}
{here $F(\rho_t)$ is a force (see~\eqref{equ:def-aF} below for the precise definition)} and $\Phi$ is what we call the \emph{generalised OM functional},
{which has the general form}
\begin{equation}
\label{eqn:Phi_function}
\Phi(\rho,j,f):= \Psi(\rho,j) - j \cdot f + \Psi^\star(\rho, f),
\end{equation}
where $j\cdot f$ is a dual pairing between {a current $j$ and a force $f$}, while $\Psi$ and $\Psi^\star$ are a pair of functions which satisfy
\begin{equation}
\label{equ:legend}
\Psi^\star(\rho,f) = \sup_j\bigl[ j\cdot f - \Psi(\rho,j)\bigr],\quad\text{and}\quad
\Psi(\rho,j) = \sup_f\bigl[ j\cdot f - \Psi^\star(\rho,f)\bigr],
\end{equation}
{as well as $\Psi^\star(\rho,f)=\Psi^\star(\rho,-f)$ and $\Psi(\rho,j)=\Psi(\rho,-j)$.
Note that~\eqref{equ:legend} means that the two functions satisfy a Legendre duality.
Moreover, these two functions $\Psi$ and $\Psi^\star$ are strictly convex in their second arguments. Here and throughout, $f$ indicates a force, while $F$ is a function whose (density-dependent) value is a force.
The large deviation principle stated in~\eqref{equ:pathwise-general} is somewhat abstract: for example, $\hat\rho_t^{\;\!\cal N}$ might be defined as a density on a discrete space or on $\mathbb{R}^d$, depending on the system of interest. Specific examples will be given below. In addition, all microscopic parameters of the system (particle hopping rates, diffusion constants, etc.) will enter the (system-dependent) functions $\Psi$, $\Psi^\star$ and $F$.
As a preliminary example, we recall} the classical Onsager theory~\cite{Machlup1953a}, in which one considers $n$ currents
$j=(j^\alpha)_{\alpha=1}^n$ and a set of conjugate {applied forces $F=(F^\alpha)_{\alpha=1}^n$. Examples of currents
might be particle flow or heat flow, and the relevant forces might be pressure or temperature gradients. The large parameter
${\cal N}$ corresponds to the size of a macroscopic system. The theory aims to describe the typical (average) response of the
current $j$ to the force $F$, and also the fluctuations of $j$.} In this (simplest) case, the density $\rho$ plays no role,
{so the force $F$ has a fixed value in $\mathbb{R}^n$}. The dual pairing is simply
$j\cdot f = \sum_\alpha j^\alpha f^\alpha$ and $\Psi$ is given by
$\Psi(\rho,j)=\frac12 \sum_{\alpha,\beta} j^\alpha R^{\alpha\beta} j^\beta$, where $R$ is a symmetric $n\times n$ matrix with
elements $R^{\alpha\beta}$. The Legendre dual of $\Psi$ is
$\Psi^\star(\rho,f) =\frac12 \sum_{\alpha,\beta} f^\alpha L^{\alpha\beta} f^\beta$, where $L=R^{-1}$ is the \emph{Onsager matrix},
{whose elements are the linear response coefficients of the system}. One sees that $\Psi$ and $\Psi^\star$ can be
interpreted as squared norms for currents and forces respectively. Denoting this norm by $\| j \|^2_{L^{-1}} := \Psi(\rho,j)$,
one has
\begin{equation}
\Phi(\rho,j,f) = \| j - L f \|^2_{L^{-1}}.
\end{equation}
{On applying an external force $F$, the response of the current $j$} is obtained as the minimum of $\Phi$, so $j=LF$ (that is, $j^\alpha = \sum_\beta
L^{\alpha\beta} F^\beta$). One sees that $\Phi$ measures the deviation of the current $j$ from its expected value $LF$, within an
appropriate norm. {From the LDP~\eqref{equ:pathwise-general}, one sees that the size of this deviation determines the probability of observing a current fluctuation of this size.}
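The quadratic structure of the classical theory can be verified numerically; the following sketch (with an arbitrary positive-definite matrix $R$ standing in for a physical resistance matrix) checks the Legendre pair and the squared-norm form of $\Phi$:

```python
import numpy as np

# Classical quadratic case: Psi(j) = j.R.j/2 and Psi*(f) = f.L.f/2 with
# L = R^{-1} are Legendre duals, and Phi(j, f) = Psi(j) - j.f + Psi*(f)
# equals the squared Psi-norm of (j - L f), vanishing at j = L F.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
R = A @ A.T + 3 * np.eye(3)          # symmetric positive definite
L = np.linalg.inv(R)                 # Onsager matrix

def Psi(j):      return 0.5 * j @ R @ j
def Psi_star(f): return 0.5 * f @ L @ f
def Phi(j, f):   return Psi(j) - j @ f + Psi_star(f)

F = rng.standard_normal(3)
assert np.isclose(Phi(L @ F, F), 0.0)            # minimiser is j = L F

j = rng.standard_normal(3)
assert np.isclose(Phi(j, F), Psi(j - L @ F))     # squared-norm form
```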
In this article, we show in Section~\ref{sec:Onsag-Machl-theory-Markov} that {finite} Markov chains have an LDP rate
functional of the form~\eqref{eqn:Phi_function}, where $\Phi$ (and thus $\Psi^\star$) are \emph{not} quadratic. {In
that case, $\rho$ and $j$ correspond to probability densities and probability currents, while the transition rates of the Markov
chain determine the functions $F$, $\Psi$ and $\Psi^\star$.} Since $\Psi$ and $\Psi^\star$ measure respectively the sizes of
the currents and forces, we interpret them as generalisations of the squared norms that appear in the classical case. The
resulting $\Phi$ is not a squared norm, but it is still a non-negative function that measures the deviation of $j$ from its most
likely value. This leads to nonlinear relations between forces and currents. The MFT theory~\cite{Bertini2015a} also fits in this
framework, as we show in Section~\ref{sec:Connections-to-MFT}: {in that case $\rho,j$ are a particle density and a
particle current. However, there are relationships between the functions $\Phi$ for MFT and for general Markov chains, as we
discuss in Section~\ref{sec:hydro}.}
Hence, the general structure of Eqs.~\eqref{equ:pathwise-general}--\eqref{equ:legend} describes classical OM
theory~\cite{Machlup1953a}, MFT, and {finite} Markov chains. A benefit is that the terms have a physical
interpretation. For a path $(\rho, j)$, the time-reversed path is $(\rho^*_t,j^*_t):=(\rho_{T-t},-j_{T-t})$. Since both $\Psi$ and
$\Psi^\star$ are symmetric in their second argument and thus invariant under time reversal, it holds that
{$\Phi(\rho,j,f) - \Phi(\rho^*,j^*,f) = - 2 j \cdot f$.} This allows us to identify
{$j\cdot F(\rho)$} as a rate of entropy production. In contrast, the term
$\Psi(\rho,j) + \Psi^\star(\rho, {F(\rho)})$ is symmetric under time reversal and encodes the frenesy
(see~\cite{Basu2015a}). Thus, within this general structure, the physical significance of
Equations~\eqref{equ:pathwise-general}--\eqref{equ:legend} is that they connect path probabilities to physical notions such as
force, current, entropy production and breaking of time-reversal symmetry. Furthermore, we introduce in
Section~\ref{sec:Decomp-forc-rate} decompositions of forces and the (path-wise) rate
functional. Section~\ref{sec:Connections-to-MFT} shows that some results of MFT originate from generalised orthogonalities of the
underlying Markov chains derived in Section~\ref{sec:Decomp-forc-rate}. Similar results hold for time-average large deviation
principles, as shown in Section~\ref{sec:LDPs-time-averaged}. In Section~\ref{sec:Cons-struct-OM}, we show how some properties of
MFT can be derived directly from the canonical structure~\eqref{equ:pathwise-general}--\eqref{equ:legend}, independent of the
specific models of interest. Hence these results of MFT have analogues in Markov chains. Finally we briefly summarise our
conclusions in Section~\ref{sec:conc}.
\section{Onsager-Machlup theory for Markov chains}
\label{sec:Onsag-Machl-theory-Markov}
In this section, we collect results on forces and currents in Markov chains and on associated LDPs. In particular, we recall the
setting of~\cite{Maes2008b,Maes2008a}; other references for this section are for example~\cite{Schnakenberg1976a} (for the
definition of forces and currents in Markov chains) and~\cite{Mielke2014a} for LDPs.
\subsection{Setting}
\label{sec:Setting}
We consider an irreducible continuous time Markov chain {$X_t$} on a {finite} state space $V$ with a unique stationary
distribution $\pi$ that satisfies $\pi(x)>0$ for all $x\in V$. The transition rate from state $x$ to state $y$ is denoted with
$r_{xy}$. We assume that $r_{xy}>0$ if and only if $r_{yx}>0$.
{We restrict to finite Markov chains for simplicity: the theory can be extended to countable state Markov chains, but
this requires some additional assumptions. Briefly, one requires that the Markov chain should be positively recurrent and
ergodic (see for instance~\cite{Bertini2015b}), for which it is sufficient that (i) the transition rates are not degenerate:
$\sum_{y\in V}r_{xy}<\infty$ for all $x\in V$, and (ii) for each $x\in V$, almost all trajectories of the Markov chain
started in $x$ do not exhibit infinitely many jumps in finite time (``no explosion''). Second, one has to
invoke a summability condition for the currents considered below (see, e.g., equations~\eqref{equ:master} and~\eqref{eq:div}),
such that in particular the discrete integration by parts (or summation by parts) formula~\eqref{equ:parts} holds. Finally,
note that the cited result for existence and uniqueness of the optimal control potential (the solution
to~\eqref{equ:vhi-balance}) is only valid for finite state Markov chains.}
As usual, we can interpret the state space of the Markov chain as a directed graph with vertices $V$ and edges
$E=\left\{xy \bigm| x,y\in V, r_{xy}>0\right\}$, such that $xy\in E$ if and only if $yx\in E$. Let $\rho$ be a probability measure
on $V$. We define rescaled transition rates with respect to $\pi$ as
\begin{equation}
q_{xy}:=\pi(x)r_{xy},
\end{equation}
so that $\rho(x)r_{xy} = \tfrac{\rho(x)}{\pi(x)}q_{xy}$. With this notation, the \emph{detailed balance} condition
$\pi(x) r_{xy} = \pi(y) r_{yx}$ reads $q_{xy} = q_{yx}$, so this equality holds precisely if the Markov chain is reversible
(i.e.~satisfies detailed balance). In general (not assuming reversibility), since $\pi$ is the invariant measure for the Markov
chain, one has (for all $x$) that
\begin{equation}
\label{equ:q-balance}
\sum_y (q_{xy} - q_{yx} ) = 0 .
\end{equation}
We further define the \emph{free energy} $\mathcal F$ on $V$ to be the \emph{relative entropy} (or \emph{Kullback-Leibler
divergence}) with respect to $\pi$,
\begin{equation}
\label{eqn:free_energy}
\mathcal F(\rho) := \sum_{x} \rho(x) \log \Bigl(\frac{\rho(x)}{\pi(x)}\Bigr).
\end{equation}
The \emph{probability current} $J(\rho)$ is defined as~\cite[Equation~(7.4)]{Schnakenberg1976a}
\begin{equation}
\label{equ:master}
J_{xy}(\rho) := \rho(x)r_{xy}-\rho(y)r_{yx} .
\end{equation}
Moreover, for a general current $j$ such that $j_{xy}=-j_{yx}$, we define the \emph{divergence} as
\begin{equation}
\label{eq:div}
\div j(x) := \sum_{y\in V} j_{xy}.
\end{equation}
We say that $j$ is \emph{divergence free} if $\div j(x) = 0$ for every $x \in V$. The time evolution of the probability density
$\rho$ is then given by the master equation
\begin{equation}
\label{eq:master}
\dot \rho_t = -\div J(\rho_t)
\end{equation}
(which is often stated as $\dot\rho_t = \mathcal L^\dag \rho_t$, with the (forward) generator $\mathcal L^\dag$).
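As a concrete illustration (the rates below are arbitrary, chosen only to give a small irreducible, irreversible chain), the following sketch computes $\pi$, checks the balance condition~\eqref{equ:q-balance}, and verifies that Euler integration of the master equation~\eqref{eq:master} relaxes to $\pi$:

```python
import numpy as np

# Minimal 3-state irreversible chain; r[x, y] is the rate from x to y.
r = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 3.0],
              [1.5, 0.5, 0.0]])

Q = r - np.diag(r.sum(axis=1))       # generator; pi solves pi Q = 0
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()

q = pi[:, None] * r                  # q_xy = pi(x) r_xy
assert np.allclose((q - q.T).sum(axis=1), 0.0)   # q-balance: div J(pi) = 0

# Euler integration of the master equation relaxes any rho towards pi
rho = np.array([1.0, 0.0, 0.0])
for _ in range(20000):
    J = rho[:, None] * r - rho[None, :] * r.T    # current J(rho)
    rho = rho - 1e-3 * J.sum(axis=1)             # rho_dot = -div J(rho)
assert np.allclose(rho, pi, atol=1e-6)
```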
\subsection{Non-linear flux-force relation and the associated functionals \texorpdfstring{$\Psi$ and $\Psi^\star$}{}}
\label{sec:Non-linear-flux}
To apply the theory outlined in Section~\ref{sec:Summary}, the next step is to identify the appropriate forces
{$F(\rho)$ and also a set of mobilities $a(\rho)$}. In this section we define these forces,
following~\cite{Schnakenberg1976a,Maes2008b,Maes2008a}. {This amounts to a reparameterisation of the rates of the
Markov process in terms of physically-relevant variables: an example is given in Section~\ref{sec:ring}.}
To each edge in $E$ we assign a \emph{force} $F$ and a \emph{mobility} $a$, as
\begin{equation}
\label{equ:def-aF}
F_{xy}(\rho) := \log \frac{\rho(x) r_{xy} }{ \rho(y) r_{yx} } \quad\text{and}\quad
a_{xy}(\rho) := 2\sqrt{ \rho(x) r_{xy} \rho(y) r_{yx} }.
\end{equation}
Note that $F_{xy}=-F_{yx}$, while $a_{xy}=a_{yx}$: forces have a direction but the mobility is a symmetric property of each edge.
The fact that $F_{xy}$ depends on the density $\rho$ means that these forces act in the space of probability distributions. This
definition of the force is sometimes also called \emph{affinity}~\cite[Equation~(7.5)]{Schnakenberg1976a}; see
also~\cite{Andrieux2007a}. With this definition, the probability current~\eqref{equ:master} is
\begin{equation}
\label{eq:J-sinh}
J_{xy}(\rho) = a_{xy}(\rho) \sinh \bigl( \tfrac12 F_{xy}(\rho) \bigr) ,
\end{equation}
which may be verified directly from the definition $\sinh(x) = ({\rm e}^x-{\rm e}^{-x})/2$. In contrast to the classical OM
theory, this is a \emph{non-linear} relation between forces and fluxes, although one recovers a linear structure for small forces
(recall the classical theory in Section~\ref{sec:Summary}, for which $j=Lf$).
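The identity~\eqref{eq:J-sinh} can be checked directly: with $a_{xy}=2\sqrt{pq}$ and $\sinh(\tfrac12\log\tfrac pq)=(p-q)/(2\sqrt{pq})$ for $p=\rho(x)r_{xy}$, $q=\rho(y)r_{yx}$, the product is $p-q$. A numerical sketch (arbitrary illustrative rates and density):

```python
import numpy as np

# Check J_xy = a_xy sinh(F_xy / 2) against the master-equation current.
r = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 3.0],
              [1.5, 0.5, 0.0]])
rho = np.array([0.5, 0.3, 0.2])

p = rho[:, None] * r                  # p_xy = rho(x) r_xy
mask = (r > 0) & (r.T > 0)            # edges of the graph
F = np.zeros_like(r); a = np.zeros_like(r)
F[mask] = np.log(p[mask] / p.T[mask])          # force, eq. (def-aF)
a[mask] = 2.0 * np.sqrt(p[mask] * p.T[mask])   # mobility

J = p - p.T                           # J_xy = rho(x) r_xy - rho(y) r_yx
assert np.allclose(a * np.sinh(F / 2.0), J)
```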
Now consider a current $j$ defined on $E$, with {$j_{xy}=-j_{yx}$}, and a general force $f$ that satisfies
$f_{xy}=-f_{yx}$ (which is not in general given by~\eqref{equ:def-aF}). Define a dual pair on $E$ as
\begin{equation}
\label{eqn:dual_pairing}
j\cdot f := \frac 12\sum_{xy} j_{xy}f_{xy},
\end{equation}
where the summation is over all $xy\in E$ (the normalisation $1/2$ appears because each connected pair of states should be counted
only once, but $E$ is a set of directed edges, so it contains both $xy$ and $yx$, which have the same contribution to $j\cdot f$).
We define the discrete gradient {$\nabla g$ by $\nabla^{x,y}g:=g(y)-g(x)$. The discrete gradient and the divergence
defined in~\eqref{eq:div} satisfy a discrete integration by parts formula: for any function $g\colon V\to\mathbb{R}$, since
$j_{xy} = -j_{yx}$, we have
\begin{equation}
\label{equ:parts}
-\sum_{x \in V} g(x) \div j(x) = \frac 12 \sum_{xy} j_{xy} \nabla^{x,y} g = j\cdot \nabla g.
\end{equation}
} We will show in Section~\ref{sec:Large-Devi-Onsag} that there is an OM functional associated with these forces and currents,
which is of the form~\eqref{eqn:Phi_function}. Since $\Psi$ and $\Psi^\star$ are convex and related by a Legendre transformation,
it is sufficient to specify only one of them. The appropriate choice turns out to be
\begin{equation}
\label{eqn:psi_star}
\Psi^\star(\rho,f):=\sum_{xy}a_{xy}(\rho) \bigl(\cosh\bigl( \tfrac12 f_{xy} \bigr)-1\bigr).
\end{equation}
This means that $\Phi(\rho,j,f)$ defined in~\eqref{eqn:Phi_function} is uniquely minimised for the current
$j_{xy} = j^f_{xy}(\rho)$ with
\begin{equation}
{ j^f_{xy}(\rho) = 2(\delta\Psi^\star/\delta f)_{xy} = a_{xy}(\rho) \sinh(f_{xy}/2), }
\label{equ:def-jF}
\end{equation}
as required for consistency with~\eqref{eq:J-sinh}. From~\eqref{equ:legend} and~\eqref{eqn:dual_pairing}, one has also
\begin{equation}
\label{eqn:new_psi}
\Psi(\rho,j)= \frac 12 \sum_{xy} j_{xy}f^j_{xy}(\rho) -
\sum_{xy} a_{xy}(\rho)\bigl(\cosh\bigl(\tfrac12 f^j_{xy}(\rho) \bigr)-1\bigr),
\end{equation}
where
\begin{equation}
{f^j_{xy}(\rho):=2\operatorname{arcsinh}\left({j_{xy}/a_{xy}(\rho)}\right) }
\end{equation}
is the force required to induce the current $j$.
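The Legendre structure can be checked on a single undirected edge. Summing the two directed contributions of the edge gives $\Psi^\star(f)=2a(\cosh(f/2)-1)$ per edge, consistent with the factor of two in~\eqref{equ:def-jF}; a numerical sketch (the mobility $a$ is an arbitrary illustrative value):

```python
import numpy as np

# Per-edge Legendre pair: Psi*(f) = 2a(cosh(f/2) - 1), f^j = 2 arcsinh(j/a),
# Psi(j) = j f^j - Psi*(f^j); Phi(j, f) >= 0 with equality at j = a sinh(f/2).
a = 1.7
Psi_star = lambda f: 2 * a * (np.cosh(f / 2) - 1)
f_of_j   = lambda j: 2 * np.arcsinh(j / a)
Psi      = lambda j: j * f_of_j(j) - Psi_star(f_of_j(j))
Phi      = lambda j, f: Psi(j) - j * f + Psi_star(f)

f = 0.9
j_opt = a * np.sinh(f / 2)               # minimiser of Phi(., f)
assert np.isclose(Phi(j_opt, f), 0.0)

# Phi >= 0 for arbitrary currents, and Psi is the sup over forces
fs = np.linspace(-10, 10, 20001)
for j in np.linspace(-3, 3, 13):
    assert Phi(j, f) >= -1e-12
    assert np.isclose(Psi(j), np.max(j * fs - Psi_star(fs)), atol=1e-4)
```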
Physically, $\Psi^\star(\rho,f)$ is a measure of the strength of the force $f$ and $\Psi(\rho,j)$ is a measure of the magnitude of
the current $j$. Consistent with this interpretation, note that $\Psi$ and $\Psi^\star$ are symmetric in their second arguments.
Moreover, for small forces and currents, $\Psi^\star$ and $\Psi$ are quadratic in their second arguments, and can be interpreted
as generalisations of squared norms of the force and current respectively. {Note that equations~\eqref{eqn:psi_star}
and~\eqref{eqn:new_psi} can alternatively be represented as
\begin{equation}
\Psi(\rho,j)= \sum_{xy} \biggl[ \frac 12j_{xy}f^j_{xy}(\rho) - \sqrt{j_{xy}^2+a_{xy}(\rho)^2} + a_{xy}(\rho)\biggr]
\end{equation}
and
\begin{equation}
\Psi^\star(\rho,f):=\sum_{xy}\biggl[ \sqrt{j^f_{xy}(\rho)^2+a_{xy}(\rho)^2} - a_{xy}(\rho)\biggr].
\end{equation}
}
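The equivalence of the two representations follows from $\cosh = \sqrt{1+\sinh^2}$; a quick per-edge numerical check (arbitrary illustrative values of $a$, $f$, $j$):

```python
import numpy as np

# cosh/sinh and square-root forms of Psi* and Psi agree term by term:
# a(cosh(f/2)-1) = sqrt((a sinh(f/2))^2 + a^2) - a.
a, f, j = 0.8, 1.3, -0.6

jf = a * np.sinh(f / 2)                  # current induced by force f
assert np.isclose(a * (np.cosh(f / 2) - 1), np.sqrt(jf**2 + a**2) - a)

fj = 2 * np.arcsinh(j / a)               # force conjugate to current j
lhs = 0.5 * j * fj - a * (np.cosh(fj / 2) - 1)
rhs = 0.5 * j * fj - np.sqrt(j**2 + a**2) + a
assert np.isclose(lhs, rhs)
```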
\subsection{Large Deviations and the Onsager-Machlup functional}
\label{sec:Large-Devi-Onsag}
As anticipated in Section~\ref{sec:Summary}, the motivation for the definitions of $\Psi$, $\Psi^\star$, and $F$ is that there is a
large deviation principle for these Markov chains, whose rate function is of the form given in~\eqref{eqn:mc_rate_functional}.
This large deviation principle appears when one considers $\cal N$ {independent} copies of the Markov chain.
We denote the $i$-th copy of the Markov chain by $X^i_t$ and define the empirical density for this copy as
$\hat\rho^{\;\!i}_t(x)=\delta_{X^i_t,x}$, where $\delta$ is a Kronecker delta function. Let the times at which the Markov chain
$X^i_t$ has jumps in $[0,T]$ be $t_1^i, t_2^i, \dots, t^i_{K_i}$. Further denote the state just before the $k$-th jump with
$x_{k-1}^i$ (such that the state after the $k$-th jump is $x_{k}^i$). With this, the empirical current is given by
\begin{equation*}
(\hat \jmath_t^{\;\!i})_{xy}
= \sum_{k=1}^{K_i} \bigl(\delta_{x,x_{k-1}^i} \delta_{y,x_k^i} - \delta_{y,x_{k-1}^i} \delta_{x,x_{k}^i}\bigr) \delta\bigl(t-t_k^i\bigr) ,
\end{equation*}
where $\delta(t-t_k)$ denotes a Dirac delta. Note that $(\hat\jmath_t^{\;\! i})_{xy}=-(\hat\jmath_t^{\;\! i})_{yx}$ and the total
probability is conserved, {as} $\sum_x \div \hat\jmath_t^{\;\! i}(x) =0$ {(which holds for any discrete
vector field with $(\hat\jmath_t^{\;\! i})_{xy}=-(\hat\jmath_t^{\;\! i})_{yx}$)}. With a slight abuse of notation we define a
similar empirical {density and current} for the full set of copies as
\begin{equation}
\label{eqn:N_average}
\hat\rho_t^{\;\!\;\!\cal N}:= \frac 1{{\cal N}}\sum_{i=1}^{\cal N} \hat\rho^{\;\!i}_t, \quad\text{and}\quad
\hat\jmath_t^{\;\!\;\!\cal N} := \frac 1{{\cal N}}\sum_{i=1}^{\cal N}
\hat\jmath^{\;\!i}_t.
\end{equation}
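For a single copy, the time average of the empirical current over a long trajectory concentrates on the stationary current $J(\pi)$. A Gillespie-type simulation sketch (rates arbitrary; the tolerance is a loose statistical bound):

```python
import numpy as np

# Simulate one copy of a 3-state chain and accumulate its empirical current.
rng = np.random.default_rng(3)
r = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 3.0],
              [1.5, 0.5, 0.0]])
n, T = 3, 20000.0
x, t = 0, 0.0
j_acc = np.zeros((n, n))              # accumulated antisymmetric current
while t < T:
    rates = r[x]
    tot = rates.sum()
    t += rng.exponential(1.0 / tot)   # waiting time before next jump
    if t >= T:
        break
    y = rng.choice(n, p=rates / tot)  # destination of the jump
    j_acc[x, y] += 1.0
    j_acc[y, x] -= 1.0
    x = y

# stationary current J(pi) for comparison
Q = r - np.diag(r.sum(axis=1))
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w))]); pi = pi / pi.sum()
J = pi[:, None] * r - (pi[:, None] * r).T
assert np.allclose(j_acc / T, J, atol=0.06)
```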
Next, we state the large deviation principle where the OM functional appears. For this, we fix a time interval $[0,T]$ and
consider the large $\mathcal N$ limit. We assume that the $\cal N$ copies at time $t=0$ have initial conditions drawn from the
invariant measure of the process (the generalisation to other initial conditions is straightforward). Then, the probability to
observe a joint density and current $(\rho_t, j_t)_{t\in[0,T]}$ over the time interval $[0,T]$ is in the limit as
$\mathcal N\to\infty$ given by~\eqref{equ:pathwise-general}. That is,
\begin{equation}
\label{eqn:ldp_pathwise_statement}
\mathrm{Prob}\Bigl( (\hat\rho_t^{\;\!\mathcal N},\hat\jmath_t^{\;\!\mathcal N})_{t\in[0,T]} \approx (\rho_t, j_t)_{t\in[0,T]}\Bigr)
\asymp \exp\bigl\{-\mathcal N I_{[0,T]}\bigl( (\rho_t, j_t)_{t\in[0,T]}\bigr)\bigr\}
\end{equation}
with
\begin{equation}
\label{eqn:ldp_pathwise}
I_{[0,T]}\bigl((\rho_t,j_t)_{t\in[0,T]}\bigr) =
\begin{cases}
\mathcal{F}(\rho_0) + \frac 12\int_0^T \Phi(\rho_t,j_t, F(\rho_t)) \df t
& \text{if } \dot\rho_t + \operatorname{div} j_t = 0\\
+\infty & \text{otherwise}
\end{cases}
\end{equation}
Here, $F(\rho)$ is the force defined in~\eqref{equ:def-aF} and the condition $\dot\rho_t + \div j_t = 0$ has to hold for almost
all $t\in[0,T]$. Moreover, $\Phi$ is of the form $\Phi(\rho,j,f) = \Psi(\rho,j) - j\cdot f + \Psi^\star(\rho,f)$ stated
in~\eqref{eqn:Phi_function}, and the relevant functions $\Psi$, $\Psi^\star$ and $\cal F$ are those of~\eqref{eqn:psi_star},
\eqref{eqn:new_psi} and~\eqref{eqn:free_energy}. This LDP was formally derived in~\cite{Maes2008a,Maes2008b}. Since the
quantities defined in~\eqref{eqn:N_average} are simple averages over independent copies of the same Markov chain, this LDP may
also be proven by direct application of Sanov's theorem, which provides an interpretation of $I_{[0,T]}$ as a relative entropy
between path measures; we sketch the derivation in Appendix~\ref{sec:relent}. For finite-state Markov chains,
\eqref{eqn:ldp_pathwise_statement} and~\eqref{eqn:ldp_pathwise} also follow (by contraction) from~\cite[Theorem 4.2]{Renger2017a},
which provides a rigorous proof.
{We emphasise that the arguments $\rho$ and $j$ of the function $\Phi$ correspond to the random variables that appear
in the LDP, while the functions $F$, $\Psi$ and $\Psi^\star$ that appear in $\Phi$ encapsulate the transition rates of the
Markov chain. Thus, by reparameterising the rates $r_{xy}$ in terms of forces $F$ and mobilities $a$, we arrive at a
representation of the rate function which helps to make its properties transparent (convexity, positivity, symmetries such
as~\eqref{equ:gc-finite-time}).}
We note that for reversible Markov chains, the force $F(\rho)$ is a pure gradient $F=\nabla G$ for some potential $G$ (see
Section~\ref{sec:Decomp-forc-rate} below), in which case one may write $j\cdot F=\sum_x \dot\rho(x) G(x)$, which follows from an
integration by parts and application of the continuity equation. In this case, Mielke, M.~A.~Peletier, and
Renger~\cite{Mielke2014a} also identified a slightly different canonical structure to the one presented here, in which the dual
pairing is $\sum_x v(x) G(x)$, for a velocity $v(x)=\dot\rho(x)$ and a potential $G$. The analogues of $\Psi$ and $\Psi^\star$ in
that setting depend on $v$ and $G$ respectively, instead of $j$ and $F$. The setting of~\eqref{eqn:Phi_function}
and~\eqref{equ:legend} is more general, in that the functions $\Psi,\Psi^\star$ for the velocity/potential setting are fully
determined by those for the current/force setting. Also, focusing on the velocity $v$ prevents any analysis of the
divergence-free part of the current, and restricting to potential forces does not generalise in a simple way to irreversible
Markov chains. For this reason, we use the current/force setting in this work.
In a separate development, Maas~\cite{Maas2011a} identified a quadratic cost function for paths (in fact a metric structure) for
which the master equation~\eqref{eq:master} is the minimiser in the case of reversible dynamics. This metric corresponds to the
solution of an optimal mass transfer problem which seems to have no straightforward extension to irreversible systems. Of course,
in the reversible case, the pathwise rate function~\eqref{eqn:ldp_pathwise} has the same minimiser, but is non-quadratic and
therefore does not correspond to a metric structure, so there is no simple geometrical interpretation of~\eqref{eqn:ldp_pathwise}.
It seems that the non-quadratic structure in the rate function is essential in order to capture the large deviations encoded
by~\eqref{eqn:ldp_pathwise_statement}.
\subsection{Time-reversal symmetry, entropy production, and the Gallavotti-Cohen theorem}
\label{sec:Time-reversal-symm}
The rate function for the large-deviation principle~\eqref{eqn:ldp_pathwise_statement} is given by~\eqref{eqn:ldp_pathwise}, which
has been written in terms of forces $F$, currents $j$, and densities $\rho$. To explain why it is useful to write the rate
function in this way, we compare the probability of a path $(\rho_t,j_t)_{t\in[0,T]}$ with that of its time-reversed counterpart
$(\rho_t^*,j_t^*)_{t\in[0,T]}$, where $(\rho^*_t,j^*_t) =(\rho_{T-t},-j_{T-t})$ as before.
In this case, the fact that $\Psi$ and $\Psi^\star$ are both even in their second argument means that {\begin{align}
&\hspace{-30pt}-\frac{1}{{\cal N}} \log\frac{ \mathrm{Prob}\Bigl( (\hat\rho_t^{\;\!\mathcal N},\hat\jmath_t^{\;\!\mathcal
N})_{t\in[0,T]} \approx (\rho_t, j_t)_{t\in[0,T]}\Bigr) } {\mathrm{Prob}\Bigl( (\hat\rho_t^{\;\!\mathcal
N},\hat\jmath_t^{\;\!\mathcal N})_{t\in[0,T]} \approx (\rho_t^*, j_t^*)_{t\in[0,T]}\Bigr)
} \nonumber \\
&\hspace{10pt} \asymp I_{[0,T]}\bigl( (\rho_t, j_t)_{t\in[0,T]}\bigr) - I_{[0,T]}\bigl( (\rho_t^*, j_t^*)_{t\in[0,T]}\bigr)
\nonumber \\
& \hspace{10pt} = \mathcal F(\rho_0) - \mathcal F(\rho_T) - \int_0^T j_t\cdot F(\rho_t)\df t .
\label{equ:gc-finite-time}
\end{align}
} This formula is a (finite-time) statement of the Gallavotti-Cohen fluctuation theorem~\cite{Gallavotti1995a,Lebowitz1999a}: see
also~\cite{Crooks2000a,Maes1999a}. It also provides a connection to physical properties of the system being modelled, via the
theory of stochastic thermodynamics~\cite{Seifert2012a}. The terms involving the free energy $\cal F$ come from the initial
conditions of the forward and reverse paths, while the integral of $j\cdot F$ corresponds to the heat transferred from the system
to its environment during the trajectory~\cite[Eqs.~(18), (20)]{Seifert2012a}. This latter quantity -- which is the time-reversal
antisymmetric part of the pathwise rate function -- is related (by a factor of the environmental temperature) to the entropy
production in the environment~\cite{Maes1999a}. The definition of the force $F$ in~\eqref{equ:def-aF} has been chosen so that the
dual pairing $j\cdot F$ is equal to this rate of heat flow: this means that the forces and currents are conjugate variables, just
as (for example) pressure and volume are conjugate in equilibrium thermodynamics. {See also the example in
Section~\ref{sec:ring} below.}
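Since $\Psi$ and $\Psi^\star$ are even in their second argument, the time-reversal antisymmetry of $\Phi$ can be checked at the level of a single time slice. A numerical sketch (rates and density arbitrary):

```python
import numpy as np

# Check Phi(rho, j, F) - Phi(rho, -j, F) = -2 j.F for an arbitrary
# antisymmetric current j; reversing the current changes the action by
# (twice) the entropy production term.
r = np.array([[0.0, 2.0, 1.0],
              [0.5, 0.0, 3.0],
              [1.5, 0.5, 0.0]])
rho = np.array([0.5, 0.3, 0.2])
p = rho[:, None] * r
mask = r > 0
F = np.zeros_like(r); a = np.zeros_like(r)
F[mask] = np.log(p[mask] / p.T[mask])
a[mask] = 2.0 * np.sqrt(p[mask] * p.T[mask])

def pair(j, f): return 0.5 * (j * f).sum()          # dual pairing j.f
def Psi_star(f): return (a * (np.cosh(f / 2) - 1)).sum()
def Psi(j):
    fj = 2 * np.arcsinh(np.divide(j, a, out=np.zeros_like(j), where=a > 0))
    return pair(j, fj) - Psi_star(fj)
def Phi(j, f): return Psi(j) - pair(j, f) + Psi_star(f)

rng = np.random.default_rng(2)
j = rng.standard_normal((3, 3)); j = j - j.T        # antisymmetric current
assert np.isclose(Phi(j, F) - Phi(-j, F), -2.0 * pair(j, F))
```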
\section{Decomposition of forces and rate functional}
\label{sec:Decomp-forc-rate}
We now introduce a splitting of the force $F(\rho)$ into two parts $F^S(\rho)$ and $F^A$, which are related to the behaviour of
the system under time-reversal, as well as to the splitting of the heat current into ``excess'' and ``housekeeping''
contributions~\cite{Seifert2012a}. We use this splitting to decompose the function $\Phi$ into {three} pieces, which
allows us to compare (for example) the behaviour of reversible and irreversible Markov chains. This splitting also mirrors a
similar construction within Macroscopic Fluctuation Theory~\cite{Bertini2015a}, and this link will be discussed in
Section~\ref{sec:Connections-to-MFT}. Related splittings have been introduced elsewhere;
{see~\cite{Kwon2005a} and~\cite{qian2013} for decompositions of forces in stochastic differential equations}, and~\cite{Carlo2017a} for decompositions of the instantaneous current in interacting particle systems.
\subsection{Splitting of the force according to time-reversal symmetry}
\label{sec:Splitt-force-accord}
We define the \emph{adjoint process} associated with the original Markov chain of interest. The transition rates of the adjoint
process are $r^*_{xy}:=\pi(y)r_{yx}\pi(x)^{-1}$. It is easily verified that the adjoint process has invariant measure $\pi$, so
$q^*_{xy}:=\pi(x)r^*_{xy}=q_{yx}$. Under the assumption that the initial distribution is sampled from the steady state, the
probability to observe a trajectory for the adjoint process coincides with the probability to observe the time-reversed trajectory
for the original process.
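As a quick numerical sanity check (a sketch using an illustrative randomly generated chain, not an example from the text), one can verify the two defining properties stated above: $q^*_{xy}=q_{yx}$, and the invariance of $\pi$ for the adjoint rates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
r = rng.uniform(0.5, 2.0, size=(n, n))   # illustrative rates r_{xy}
np.fill_diagonal(r, 0.0)

# invariant measure pi: left null vector of the generator Q
Q = r - np.diag(r.sum(axis=1))
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w))])
pi /= pi.sum()

# adjoint rates r*_{xy} = pi(y) r_{yx} / pi(x)
r_adj = pi[None, :] * r.T / pi[:, None]

q = pi[:, None] * r                      # q_{xy} = pi(x) r_{xy}
q_adj = pi[:, None] * r_adj

print(np.allclose(q_adj, q.T))           # q*_{xy} = q_{yx}

Q_adj = r_adj - np.diag(r_adj.sum(axis=1))
print(np.allclose(pi @ Q_adj, np.zeros(n)))   # pi is invariant for the adjoint
```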
From the definition of $F(\rho)$ in~\eqref{equ:def-aF}, we can decompose this force as
\begin{equation}
F_{xy}(\rho) = F^S_{xy}(\rho) + F^A_{xy}
\end{equation}
with
\begin{equation}
\label{equ:FrS}
F^S_{xy}(\rho) := -\nabla^{x,y}\log\frac\rho\pi, \qquad F^A_{xy} := \log\frac{q_{xy}}{q_{yx}}.
\end{equation}
With this choice, we note that the equivalent force for the adjoint process
\begin{equation*}
F^*_{xy}(\rho) = \log \frac{\rho(x) r^*_{xy} }{ \rho(y) r^*_{yx} },
\end{equation*}
satisfies $F^*(\rho)=F^S(\rho) - F^A$. So taking the adjoint inverts the sign of $F^A$ (the ``antisymmetric'' force) but leaves
$F^S(\rho)$ unchanged (the ``symmetric'' force). For a reversible Markov chain, the adjoint process coincides with the original
one, and $F^A=0$.
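This decomposition is easy to check numerically; the following sketch (with illustrative random rates and an arbitrary density) verifies $F(\rho)=F^S(\rho)+F^A$ and $F^*(\rho)=F^S(\rho)-F^A$ entrywise.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
r = rng.uniform(0.5, 2.0, size=(n, n)); np.fill_diagonal(r, 0.0)  # illustrative rates
Q = r - np.diag(r.sum(axis=1))
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w))]); pi /= pi.sum()

rho = rng.uniform(0.5, 2.0, n); rho /= rho.sum()   # an arbitrary density
mask = r > 0
q = pi[:, None] * r

F = np.zeros((n, n))
F[mask] = np.log((rho[:, None] * r)[mask] / (rho[:, None] * r).T[mask])
FA = np.zeros((n, n))
FA[mask] = np.log(q[mask] / q.T[mask])             # F^A_{xy} = log(q_{xy}/q_{yx})
u = np.log(rho / pi)
FS = u[:, None] - u[None, :]                       # F^S_{xy} = -grad^{x,y} log(rho/pi)

# force of the adjoint process, built from r*_{xy} = pi(y) r_{yx} / pi(x)
r_adj = pi[None, :] * r.T / pi[:, None]
F_adj = np.zeros((n, n))
F_adj[mask] = np.log((rho[:, None] * r_adj)[mask] / (rho[:, None] * r_adj).T[mask])

print(np.allclose(F, FS + FA))        # F = F^S + F^A
print(np.allclose(F_adj, FS - FA))    # F^* = F^S - F^A
```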
\begin{lemma}
\label{lem:one}
Given $\rho$, with the mobility $a(\rho)$ of~\eqref{equ:def-aF}, the forces $F^S(\rho)$ and $F^A$ satisfy
\begin{equation}
\label{eqn:HJ}
\sum_{xy}\sinh\bigl(F^S_{xy}(\rho)/2\bigr)\;\! a_{xy}(\rho) \sinh\bigl(F^A_{xy}/2\bigr)=0.
\end{equation}
\end{lemma}
\begin{proof}
From the definitions of $F^S(\rho)$, $F^A$, $a_{xy}$ and $\sinh$, one has
\begin{equation*}
a_{xy}(\rho)\sinh( F^S_{xy}(\rho)/2)=\Bigl(\frac{\rho(x)}{\pi(x)}-\frac{\rho(y)}{\pi(y)}\Bigr)\sqrt{q_{xy}q_{yx}}
\end{equation*}
and $\sinh (F^A_{xy}/2) = (q_{xy}q_{yx})^{-1/2}(q_{xy}-q_{yx})/2$. Hence
\begin{multline*}
\qquad\sum_{xy}\sinh\bigl(F^S_{xy}(\rho)/2\bigr)\;\! a_{xy}(\rho) \sinh\bigl(F^A_{xy}/2\bigr)
= \frac 12 \sum_{xy} \Bigl(\frac{\rho(x)}{\pi(x)}-\frac{\rho(y)}{\pi(y)}\Bigr)(q_{xy}-q_{yx})\\
= \sum_x \frac{\rho(x)}{\pi(x)}\sum_y(q_{xy}-q_{yx})=0,\qquad
\end{multline*}
where the last equality uses~\eqref{equ:q-balance}. This establishes~\eqref{eqn:HJ}.
\qed\end{proof}
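The cancellation in~\eqref{eqn:HJ} can also be observed numerically. The sketch below assumes the explicit mobility $a_{xy}(\rho)=2\sqrt{\rho(x)r_{xy}\,\rho(y)r_{yx}}$, which reproduces the two identities used in the proof; rates and density are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
r = rng.uniform(0.5, 2.0, size=(n, n)); np.fill_diagonal(r, 0.0)  # illustrative rates
Q = r - np.diag(r.sum(axis=1))
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w))]); pi /= pi.sum()
rho = rng.uniform(0.5, 2.0, n); rho /= rho.sum()

mask = r > 0
q = pi[:, None] * r
FA = np.zeros((n, n)); FA[mask] = np.log(q[mask] / q.T[mask])
u = np.log(rho / pi); FS = u[:, None] - u[None, :]
a = 2.0 * np.sqrt(rho[:, None] * r * (rho[:, None] * r).T)   # mobility a_{xy}(rho)

total = np.sum(np.sinh(FS / 2) * a * np.sinh(FA / 2))
print(abs(total))   # vanishes up to rounding error
```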
In Section~\ref{sec:Decomp-force-F}, we will reformulate the so-called Hamilton-Jacobi relation of MFT in terms of forces, and
show that this yields an equation analogous to~\eqref{eqn:HJ}.
\subsection{Physical interpretation of \texorpdfstring{$F^S$}{symmetric force} and \texorpdfstring{$F^A$}{antisymmetric force}}
\label{sec:Phys-interpr-FS}
In stochastic thermodynamics, one may identify $F^A_{xy}$ as the \emph{housekeeping heat} (or \emph{adiabatic entropy production})
associated with a single transition from state $x$ to state $y$, see~\cite{Seifert2012a,Esposito2010a}. (Within the Markov chain
formalism, there is some mixing of the notions of force and energy: usually an energy would be a product of a force and a distance
but there is no notion of a distance between states of the Markov chain, so forces and energies have the same units in our
analysis.) Hence $j\cdot F^A$ is the rate of flow of housekeeping heat into the environment. The meaning of the housekeeping
heat is that for irreversible systems, transitions between states involve unavoidable dissipated heat which cannot be transformed
into work (this dissipation is required in order to ``do the housekeeping'').
To obtain the physical interpretation of $F^S$, we also define
\begin{equation}
\label{eqn:free_energy_dissipation}
D(\rho,j) := \frac12 \sum_{xy} j_{xy} \log \frac{\rho(y)\pi(x)}{\rho(x)\pi(y)}.
\end{equation}
For a general path $(\rho_t,j_t)_{t\in[0,T]}$ that satisfies $\dot\rho_t = - \div j_t$, we then compute
\begin{equation}
\label{eq:F-dot}
\frac{d}{dt} \mathcal{F}(\rho_t)
= \sum_x \dot\rho_t(x) \log \frac{\rho_t(x)}{\pi(x)} = \frac 12\sum_{xy} (j_t)_{xy} \nabla^{x,y}\log\frac{\rho_t}\pi
= D(\rho_t,j_t) ,
\end{equation}
where we used~\eqref{eqn:free_energy},~\eqref{equ:parts}. That is, $D(\rho,j)$ is the change in free energy induced by the current
$j$. Moreover it is easy to see that
\begin{equation}
\label{eqn:FsGradF}
F^S_{xy}(\rho) = -\nabla^{x,y} \frac{\delta \mathcal{F}}{\delta\rho} ,
\end{equation}
where $\frac{\delta\mathcal F}{\delta \rho}$ denotes the functional derivative of the free energy $\cal F$ {given
in~\eqref{eqn:free_energy}. (Note that the functional derivative $\delta\mathcal F/\delta \rho$ is simply
$\partial{\cal F}/\partial\rho$ in this case, since $\rho$ is defined on a discrete space. We retain the functional notation to
emphasise the connection to the general setting of Section~\ref{sec:Summary}).} Also, the last identity in~\eqref{eq:F-dot} can
be phrased as
\begin{equation}
\label{eqn:JFS-dissipation}
j\cdot F^S(\rho) = -D(\rho,j).
\end{equation}
The same identity, with an integration by parts, shows that
\begin{equation}
\label{eq:div-free-D-null}
D(\rho, j) = 0 \text{ if $j$ is divergence free.}
\end{equation}
Equation~\eqref{eqn:FsGradF} shows that the symmetric force $F^S$ is {minus} the gradient of the free energy, so the
heat flow associated with the dual pairing of $j$ and $F^S$ is equal to (the negative of) the rate of change of the free energy.
It follows that the right hand side of~\eqref{equ:gc-finite-time} can alternatively be written as
$-\int j\cdot F^A\, \mathrm{d}t$.
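These identities can be tested numerically as well. The sketch below (illustrative chain, arbitrary antisymmetric current $j$, dual pairing taken as $j\cdot f=\frac12\sum_{xy}j_{xy}f_{xy}$) checks~\eqref{eqn:JFS-dissipation}, the vanishing of $D$ on divergence-free currents, and the derivative identity~\eqref{eq:F-dot} by finite differences.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
r = rng.uniform(0.5, 2.0, size=(n, n)); np.fill_diagonal(r, 0.0)  # illustrative rates
Q = r - np.diag(r.sum(axis=1))
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w))]); pi /= pi.sum()
rho = rng.uniform(0.5, 2.0, n); rho /= rho.sum()

u = np.log(rho / pi)
FS = u[:, None] - u[None, :]                     # F^S_{xy}(rho)
m = rng.normal(size=(n, n)); j = m - m.T         # arbitrary antisymmetric current

# D(rho, j) from its definition, and the identity j . F^S = -D
logterm = np.log(rho[None, :] * pi[:, None] / (rho[:, None] * pi[None, :]))
D = 0.5 * np.sum(j * logterm)
print(np.isclose(0.5 * np.sum(j * FS), -D))

# a divergence-free (cycle) current does not change the free energy
cyc = np.zeros((n, n)); cyc[0, 1] = cyc[1, 2] = cyc[2, 0] = 1.0
cyc = cyc - cyc.T
print(np.isclose(0.5 * np.sum(cyc * logterm), 0.0, atol=1e-12))

# D(rho, j) = d/dt F(rho_t) along rho_t with d(rho_t)/dt = -div j
div_j = j.sum(axis=1)                            # (div j)(x) = sum_y j_{xy}
free = lambda p: np.sum(p * np.log(p / pi))
h = 1e-6
dFdt = (free(rho - h * div_j) - free(rho + h * div_j)) / (2 * h)
print(np.isclose(dFdt, D, atol=1e-6))
```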
We also recall from Section~\ref{sec:Non-linear-flux} that the force $F$ acts in the space of probability densities: $F_{xy}$
depends not only on the states $x,y$ but also on the density $\rho$. (Physical forces acting on individual copies of the system
should not depend on $\rho$ since each copy evolves independently, but $F$ includes entropic terms associated with the ensemble of
copies.) To understand this dependence, it is useful to write
$\mathcal{F}(\rho) = -\sum_x \rho(x) \log \pi(x) + \sum_x \rho(x) \log \rho(x)$. We also write the invariant measure in a
Gibbs-Boltzmann form: $\pi(x) = \exp(-U(x))/Z$, where $U(x)$ is the internal energy of state $x$ and $Z=\sum_x \exp(-U(x))$ is a
normalisation constant. Then $-\sum_x \rho(x) \log \pi(x) = \mathbb{E}_\rho(U) + \log Z$ depends on the mean energy of the
system, while $\sum_x \rho(x) \log \rho(x)$ is (the negative of) the mixing entropy, which comes from the many possible
permutations of the copies of the system among the states of the Markov chain. From~\eqref{eqn:FsGradF} one then sees that $F^S$
has two contributions: one term (independent of $\rho$) that comes from the gradient of the energy $U$ and the other (which
depends on $\rho$) comes from the gradient of the entropy. These entropic forces account for the fact that a given empirical
density $\rho^{\;\!{\cal N}}$ can be achieved in many different ways, since individual copies of the system can be permuted among the
different states of the system.
\subsection{Generalised orthogonality for forces}
\label{sec:Decomp-rate-funct}
Recalling the definitions of Section~\ref{sec:Splitt-force-accord}, one sees that the current in the adjoint process satisfies an
analogue of~\eqref{eq:J-sinh}:
\begin{equation}
\label{eqn:def_adj_current}
J^\ast_{xy}(\rho) := a_{xy}(\rho) \sinh \bigl(\tfrac12 F^*_{xy}(\rho)\bigr), \qquad\text{with}\qquad
F^*_{xy}(\rho):= F^S_{xy}(\rho) - F^A_{xy}.
\end{equation}
Comparing with~\eqref{equ:FrS}, one sees that the adjoint process may also be obtained by inverting $F^A$ (while keeping
$F^S(\rho)$ as it is). With $a_{xy}^S(\rho):= a_{xy}(\rho)\cosh(F^A_{xy}/2)$, the symmetric current is defined as
\begin{equation}
\label{eqn:J_S}
J^S_{xy}(\rho) : = a_{xy}^S(\rho)\sinh\bigl( F^S_{xy}(\rho)/2\bigr),
\end{equation}
which satisfies $J^S_{xy}(\rho) = (J_{xy}(\rho) + J^\ast_{xy}(\rho))/2$. It is the same for the process and the adjoint process,
and also coincides with the current for reversible processes (where $q_{xy}=q_{yx}$, or equivalently $F^A=0$). {An
analogous formula can also be obtained for the anti-symmetric current. With
$a^A_{xy}(\rho) := a_{xy}(\rho)\cosh(F^S_{xy}(\rho)/2)=a_{xy}(\pi)\bigl(\frac{\rho(x)}{\pi(x)}+\frac{\rho(y)}{\pi(y)}\bigr)/2$,
the anti-symmetric current is defined as
\begin{equation}
J^A_{xy}(\rho) := a^A_{xy}(\rho)\sinh\bigl(F^A_{xy}/2\bigr).
\end{equation}
It satisfies $J^A_{xy}(\rho) = (J_{xy}(\rho) - J^\ast_{xy}(\rho))/2$.
}
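The relations $J^S(\rho)=(J(\rho)+J^*(\rho))/2$ and $J^A(\rho)=(J(\rho)-J^*(\rho))/2$ follow from the addition formula for $\sinh$; the sketch below confirms them numerically, again assuming the explicit mobility $a_{xy}(\rho)=2\sqrt{\rho(x)r_{xy}\,\rho(y)r_{yx}}$, under which $J(\rho)$ reduces to the familiar net current $\rho(x)r_{xy}-\rho(y)r_{yx}$.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
r = rng.uniform(0.5, 2.0, size=(n, n)); np.fill_diagonal(r, 0.0)  # illustrative rates
Q = r - np.diag(r.sum(axis=1))
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w))]); pi /= pi.sum()
rho = rng.uniform(0.5, 2.0, n); rho /= rho.sum()

mask = r > 0
q = pi[:, None] * r
FA = np.zeros((n, n)); FA[mask] = np.log(q[mask] / q.T[mask])
u = np.log(rho / pi); FS = u[:, None] - u[None, :]
# mobility; note r*_{xy} r*_{yx} = r_{xy} r_{yx}, so a is the same for the adjoint
a = 2.0 * np.sqrt(rho[:, None] * r * (rho[:, None] * r).T)

J     = a * np.sinh((FS + FA) / 2)               # current, force F = F^S + F^A
J_adj = a * np.sinh((FS - FA) / 2)               # adjoint current, force F^S - F^A
JS = a * np.cosh(FA / 2) * np.sinh(FS / 2)       # a^S_{xy} sinh(F^S_{xy}/2)
JA = a * np.cosh(FS / 2) * np.sinh(FA / 2)       # a^A_{xy} sinh(F^A_{xy}/2)

print(np.allclose(JS, (J + J_adj) / 2))
print(np.allclose(JA, (J - J_adj) / 2))
print(np.allclose(J, rho[:, None] * r - (rho[:, None] * r).T))
```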
Let $\Psi_S^\star$ be the symmetric version of $\Psi^\star$ obtained from~\eqref{eqn:psi_star} with $a_{xy}(\rho)$ replaced by
$a^S_{xy}(\rho)$. (The Legendre transform of $\Psi^\star_S$ is similarly denoted $\Psi_S$). This leads to a separation of
$\Psi^\star(\rho,F(\rho))$ into a term corresponding to $F^S(\rho)$ and a term corresponding to $F^A$.
\begin{lemma}
\label{lem:psi_split}
The two forces $F^S(\rho)$ and $F^A$ defined in~\eqref{equ:FrS} satisfy
\begin{equation}
\label{eqn:new_psi_star_2}
\Psi^\star(\rho,F(\rho))
=\Psi_S^\star\bigl(\rho,F^S(\rho)\bigr) + \Psi^\star\bigl(\rho,F^A\bigr).
\end{equation}
\end{lemma}
\begin{proof}
Using $\cosh(x+y) = \cosh(x)\cosh(y) + \sinh(x)\sinh(y)$, Lemma~\ref{lem:one} and the definition of $a_{xy}^S(\rho)$, we obtain
that the left hand side of~\eqref{eqn:new_psi_star_2} is given by
\begin{multline}
\sum_{xy} a_{xy}(\rho)\bigl(\cosh(F_{xy}(\rho)/2)-1\bigr)
= \sum_{xy} a_{xy}(\rho)\bigl(\cosh(F^S_{xy}(\rho)/2)\cosh(F^A_{xy}/2)-1\bigr)\\
= \sum_{xy} a_{xy}^S(\rho)\bigl(\cosh(F^S_{xy}(\rho)/2) - 1\bigr) + \sum_{xy} a_{xy}(\rho)\bigl(\cosh(F^A_{xy}/2) - 1\bigr),
\end{multline}
which coincides with the right hand side of~\eqref{eqn:new_psi_star_2}. \qed
\end{proof}
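Lemma~\ref{lem:psi_split} (and, via the evenness of $\cosh$, Proposition~\ref{prop:orth} below) can be confirmed numerically. The sketch uses the normalisation $\Psi^\star(\rho,f)=\sum_{xy}a_{xy}(\rho)(\cosh(f_{xy}/2)-1)$ appearing in the proof, together with the explicit mobility $a_{xy}(\rho)=2\sqrt{\rho(x)r_{xy}\,\rho(y)r_{yx}}$; any overall factor would rescale all terms identically.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
r = rng.uniform(0.5, 2.0, size=(n, n)); np.fill_diagonal(r, 0.0)  # illustrative rates
Q = r - np.diag(r.sum(axis=1))
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w))]); pi /= pi.sum()
rho = rng.uniform(0.5, 2.0, n); rho /= rho.sum()

mask = r > 0
q = pi[:, None] * r
FA = np.zeros((n, n)); FA[mask] = np.log(q[mask] / q.T[mask])
u = np.log(rho / pi); FS = u[:, None] - u[None, :]
a = 2.0 * np.sqrt(rho[:, None] * r * (rho[:, None] * r).T)
aS = a * np.cosh(FA / 2)                          # modified mobility a^S

psi_star = lambda mob, f: np.sum(mob * (np.cosh(f / 2) - 1.0))

lhs = psi_star(a, FS + FA)                        # Psi*(rho, F(rho))
rhs = psi_star(aS, FS) + psi_star(a, FA)          # Psi*_S(rho,F^S) + Psi*(rho,F^A)
print(np.isclose(lhs, rhs))                       # Lemma psi_split

print(np.isclose(psi_star(a, FS + FA), psi_star(a, FS - FA)))  # generalised orthogonality
```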
The physical interpretation of Lemma~\ref{lem:psi_split} is that the strength of the force $F(\rho)$ can be written as separate
contributions from $F^S(\rho)$ and $F^A$. The following corollary allows us to think of a generalised orthogonality of the forces
$F^S(\rho)$ and $F^A$.
\begin{proposition}[Generalised orthogonality]
\label{prop:orth}
The forces $F^S(\rho)$ and $F^A$ satisfy
\begin{equation}
\label{eqn:orth}
\Psi^\star\bigl(\rho,F^S(\rho)+F^A\bigr) = \Psi^\star\bigl(\rho, F^S(\rho)-F^A\bigr).
\end{equation}
\end{proposition}
\begin{proof}
This follows directly from Lemma~\ref{lem:psi_split} and the symmetry of $\Psi^\star(\rho,\cdot)$.
\qed
\end{proof}
We refer to Proposition~\ref{prop:orth} as a generalised orthogonality between $F^S$ and $F^A$ because $\Psi^\star$ is acting as
a generalisation of a squared norm (see Section~\ref{sec:Summary}), so~\eqref{eqn:orth} can be viewed as a nonlinear generalisation
of $\| F^S + F^A \|^2 = \| F^S - F^A \|^2$, which would be a standard orthogonality between forces.
Moreover, Lemma~\ref{lem:psi_split} can be used to decompose the OM functional as a sum of three terms.
\begin{corollary}
\label{cor:two}
Let $\Phi_S$ be defined as in~\eqref{eqn:Phi_function} with $(\Psi,\Psi^\star)$ replaced by $(\Psi_S,\Psi_S^\star)$, and
$D(\rho,j)$ as defined in~\eqref{eqn:free_energy_dissipation}. Then
\begin{equation}
\label{eqn:lagrangian_2}
\Phi(\rho,j,F(\rho)) =D(\rho,j) + \Phi_S\bigl(\rho,0,F^S(\rho)\bigr) + \Phi\bigl(\rho,j,F^A\bigr).
\end{equation}
\end{corollary}
\begin{proof}
We use the definition of $\Phi$ in~\eqref{eqn:Phi_function} and~\eqref{eqn:JFS-dissipation} together with
Lemma~\ref{lem:psi_split} to decompose $\Phi(\rho,j,F(\rho))$ as
\begin{equation}
\begin{split}
\Phi(\rho,j,F(\rho)) &= D(\rho,j) +\Psi_S^\star\bigl(\rho,F^S(\rho)\bigr) + \Bigl[ \Psi(\rho,j) - j\cdot F^A
+\Psi^\star\bigl(\rho,F^A\bigr)\Bigr] \\
&=D(\rho,j) + \Phi_S\bigl(\rho,0,F^S(\rho)\bigr) + \Phi\bigl(\rho,j,F^A\bigr),
\end{split}
\end{equation}
which proves the claim.
\qed
\end{proof}
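Corollary~\ref{cor:two} is also amenable to a direct numerical check. In the sketch below we assume the dual pairing $j\cdot f=\frac12\sum_{xy}j_{xy}f_{xy}$ and $\Psi^\star(\rho,f)=\sum_{xy}a_{xy}(\rho)(\cosh(f_{xy}/2)-1)$; the Legendre dual is then, edge by edge, $\Psi(\rho,j)=\sum_{xy}\bigl[j_{xy}\operatorname{arcsinh}(j_{xy}/a_{xy})-\sqrt{a_{xy}^2+j_{xy}^2}+a_{xy}\bigr]$. These normalisations are assumptions chosen to be mutually consistent, and may differ from~\eqref{eqn:psi_star} by an overall factor.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
r = rng.uniform(0.5, 2.0, size=(n, n)); np.fill_diagonal(r, 0.0)  # illustrative rates
Q = r - np.diag(r.sum(axis=1))
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w))]); pi /= pi.sum()
rho = rng.uniform(0.5, 2.0, n); rho /= rho.sum()

mask = r > 0
q = pi[:, None] * r
FA = np.zeros((n, n)); FA[mask] = np.log(q[mask] / q.T[mask])
u = np.log(rho / pi); FS = u[:, None] - u[None, :]
a = 2.0 * np.sqrt(rho[:, None] * r * (rho[:, None] * r).T)
aS = a * np.cosh(FA / 2)

dual = lambda j, f: 0.5 * np.sum(j * f)
pstar = lambda mob, f: np.sum(mob * (np.cosh(f / 2) - 1.0))

def psi(mob, j):
    m = mob > 0                     # only edges with positive mobility contribute
    return np.sum(j[m] * np.arcsinh(j[m] / mob[m])
                  - np.sqrt(mob[m] ** 2 + j[m] ** 2) + mob[m])

def phi(mob, j, f):
    return psi(mob, j) - dual(j, f) + pstar(mob, f)

mm = rng.normal(size=(n, n)); j = mm - mm.T       # arbitrary antisymmetric current
F = FS + FA
D = -dual(j, FS)                                  # D(rho,j) = -j . F^S

lhs = phi(a, j, F)
rhs = D + pstar(aS, FS) + phi(a, j, FA)           # Phi_S(rho,0,F^S) = Psi*_S(rho,F^S)
print(np.isclose(lhs, rhs))                       # Corollary cor:two

J = a * np.sinh(F / 2)                            # the typical current
print(abs(phi(a, J, F)))                          # Phi vanishes at (J(rho), F(rho))
```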
Recall from Section~\ref{sec:Summary} that $\Phi$ measures how much the current $j$ deviates from the typical (or most likely)
current $J(\rho)$. One sees from~\eqref{eqn:lagrangian_2} that it can be large for three reasons. The first term is large if the
current is pushing the system up in free energy (because $D$ is the rate of change of free energy induced by the current $j$).
The second term comes from the time-reversal symmetric (gradient) force $F^S(\rho)$, which is pushing the system towards
equilibrium. The third term comes from the time-reversal anti-symmetric force $F^A$; namely, it measures how far the current $j$
is from the value induced by the force $F^A$.
Corollary~\ref{cor:two} also makes it apparent that the free energy $\mathcal F$ is monotonically decreasing for solutions
of~\eqref{eq:master}, which are minimisers of $I_{[0,T]}$.
\begin{corollary}
The free energy $\mathcal F$ is monotonically decreasing along minimisers of the rate function $I_{[0,T]}$. Its rate of change
is given by
\begin{equation}
\frac d{dt} \mathcal F(\rho_t) = -\Psi^\star_S\bigl(\rho_t,F^S(\rho_t)\bigr) - \Phi\bigl(\rho_t,J(\rho_t),F^A\bigr).
\end{equation}
\end{corollary}
\begin{proof}
For minimisers of the rate function one has $\Phi=0$. Hence~\eqref{eq:F-dot} and Corollary~\ref{cor:two} imply that
\begin{equation}
\label{eq:F-dot-2}
\frac d{dt} \mathcal F(\rho_t)
= D\bigl(\rho_t,J(\rho_t)\bigr)
= -\Psi^\star_S\bigl(\rho_t,F^S(\rho_t)\bigr) -
\Phi\bigl(\rho_t,J(\rho_t),F^A\bigr).
\end{equation}
{Both $\Psi^\star_S$ and $\Phi$ are non-negative, so $\cal F$ is indeed monotonically decreasing.}
\qed
\end{proof}
\subsection{{Hamilton-Jacobi like equation} for Markov chains}
\label{sec:HJ-for-MC}
It is also useful to note at this point an additional aspect of the orthogonality relationships presented here, which has
connections to MFT (see Section~\ref{sec:Connections-to-MFT}). We formulate an analogue of the Hamilton-Jacobi equation of MFT,
as follows. Define
\begin{equation}
\mathbb{H}(\rho,\xi) = \frac12\left[ \Psi^\star(\rho,F(\rho) + 2\xi) - \Psi^\star(\rho,F(\rho)) \right] ,
\label{equ:ham-markov}
\end{equation}
which we refer to as an {\emph{extended Hamiltonian}}, for reasons discussed in Section~\ref{sec:Hamilt-Jacobi-equat} below {(see also Section~IV.G of~\cite{Bertini2015a})}.
{
The {\it extended Hamilton-Jacobi equation} for a functional $\mathcal S$ is then (cf.~equation \eqref{eqn:HJ_micro} in Section~\ref{sec:Hamilt-Jacobi-equat}) given by
\begin{equation}
\label{equ:HJ-markov}
\mathbb H\left(\rho,\nabla\frac{\delta \mathcal S}{\delta\rho}\right)=0.
\end{equation}
Note that the free energy $\cal F$ defined in~\eqref{eqn:free_energy} solves \eqref{equ:HJ-markov},}
which follows from Proposition~\ref{prop:orth} (using~\eqref{eqn:FsGradF} and that $\Psi^\star$ is symmetric in its second
argument). In fact (see Proposition~\ref{prop:HJ}), the free energy is the maximal solution to this equation. In MFT, the
analogous variational principle can be useful, as a characterisation of the invariant measure of the process. Here, one has a
similar characterisation of the (non-equilibrium) free energy.
Since~\eqref{equ:HJ-markov} {with $\mathcal S = \cal F$} provides a characterisation of the free energy $\cal F$, which is uniquely determined by the invariant
measure $\pi$ of the process, it follows that~\eqref{equ:HJ-markov} must be equivalent to the condition that $\pi$ satisfies
$\div J(\pi)=0$: recall~\eqref{eq:master}. Writing everything in terms of the rates of the Markov chain and its adjoint,
\eqref{equ:HJ-markov} becomes
\begin{equation*}
\sum_x \rho(x) \sum_{y} [r_{xy} - r^*_{xy}] = 0 ,
\end{equation*}
which must hold for all $\rho$: from the definition of $r^*$ one then has $\sum_y \pi(x)r_{xy}=\sum_y \pi(y)r_{yx}$, which is
indeed satisfied if and only if $\pi$ is invariant (cf. equation~\eqref{equ:q-balance}).
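This equivalence can again be illustrated numerically: with the normalisation of $\Psi^\star$ used in the proof of Lemma~\ref{lem:psi_split} and the explicit mobility $a_{xy}(\rho)=2\sqrt{\rho(x)r_{xy}\,\rho(y)r_{yx}}$ (both assumptions of this sketch), the extended Hamiltonian vanishes at $\xi=\nabla\frac{\delta\mathcal F}{\delta\rho}=-F^S(\rho)$, and so does the displayed sum over rates, for an arbitrary density $\rho$ and illustrative rates.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 5
r = rng.uniform(0.5, 2.0, size=(n, n)); np.fill_diagonal(r, 0.0)  # illustrative rates
Q = r - np.diag(r.sum(axis=1))
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w))]); pi /= pi.sum()
rho = rng.uniform(0.5, 2.0, n); rho /= rho.sum()  # arbitrary density

mask = r > 0
q = pi[:, None] * r
FA = np.zeros((n, n)); FA[mask] = np.log(q[mask] / q.T[mask])
u = np.log(rho / pi); FS = u[:, None] - u[None, :]
a = 2.0 * np.sqrt(rho[:, None] * r * (rho[:, None] * r).T)

pstar = lambda f: np.sum(a * (np.cosh(f / 2) - 1.0))
F = FS + FA
xi = -FS                                         # grad (delta F / delta rho) = grad log(rho/pi)
H = 0.5 * (pstar(F + 2 * xi) - pstar(F))         # extended Hamiltonian
print(abs(H))                                    # ~ 0: the free energy solves the HJ equation

# equivalent statement in terms of the rates of the chain and its adjoint
r_adj = pi[None, :] * r.T / pi[:, None]
print(abs(np.sum(rho[:, None] * (r - r_adj))))   # ~ 0 since pi is invariant
```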
\begin{figure}
\begin{center}\includegraphics[width=5cm]{circle.pdf}\end{center}
\caption{ Illustration of a simple Markov chain with $n=5$ states arranged in a circle. The transition rates
between states are $r_{i,i\pm1}$. If the Markov chain is not reversible, there will be a steady-state probability current
${\cal J}$ corresponding to a net drift of the system around the circle.}
\label{fig:circle}
\end{figure}
{
\subsection{Example: simple ring network}
\label{sec:ring}
To illustrate these abstract ideas, we consider a very simple Markov chain, in which $n$ states are arranged in a circle, see
Fig.~\ref{fig:circle}. So $V=\{1,2,\dots,n\}$ and the only allowed transitions take place between state $x$ and states $x\pm 1$
(to incorporate the circular geometry we interpret $n+1=1$ and $1-1=n$). In physics, such Markov chains arise (for example) as
simple models of nano-machines or motors, where an external energy source might be used to drive circular
motion~\cite{fisher07,vaik14}. Alternatively, such a Markov chain might describe a protein molecule that goes through a cyclic
sequence of conformations, as it catalyses a chemical reaction~\cite{lavorel76}. In both cases, the systems evolve stochastically
because the relevant objects have sizes on the nano-scale, so thermal fluctuations play an important role.
To apply the analysis presented here, the first step is to identify forces and mobilities, as in~\eqref{equ:def-aF}. Let
$R_x = \sqrt{r_{x,x+1} r_{x+1,x}}$. The invariant measure may be identified by solving
$\sum_y \pi(x) r_{xy}= \sum_y \pi(y) r_{yx}$ subject to $\sum_y \pi(y)=1$. Finally, one computes the steady state current
${\cal J} = \pi(x) r_{x,x+1} - \pi(x\!+\!1) r_{x+1,x}$, where the right hand side is independent of $x$ (this follows from the
steady-state condition on $\pi$). The original Markov process has $2n$ parameters, which are the rates $r_{x,x\pm 1}$: these are
completely determined by the $n-1$ independent elements of $\pi$, the $n$ mobilities $(R_x)_{x=1}^n$ and the current $\cal J$.
The idea is that this reparameterisation allows access to the physically important quantities in the system.
From the definitions of $\cal J$ and $R$, it may be verified that
\begin{equation*}
2 \pi(x) r_{x,x+1} = \sqrt{{\cal J}^2 + 4 R_x^2 \pi(x)\pi(x\!+\!1)} + {\cal J},
\end{equation*}
and similarly $2 \pi(x\!+\!1) r_{x+1,x} = \sqrt{{\cal J}^2 + 4 R_x^2 \pi(x)\pi(x\!+\!1)} - {\cal J}.$ Then write
\begin{equation}
\rho(x) r_{x,x+1} = R_x \sqrt{\rho(x)\rho(x+1)} \times \sqrt{ \frac{\rho(x)\pi(x\!+\!1)}{\rho(x\!+\!1)\pi(x)} }
\times \left( \frac{\sqrt{{\cal J}^2 + 4 R_x^2 \pi(x)\pi(x\!+\!1)} + {\cal J}}{ \sqrt{{\cal J}^2 + 4 R_x^2 \pi(x)\pi(x\!+\!1)}
- {\cal J} } \right)^{1/2}.
\end{equation}
In this case, we can identify the three terms as
\begin{equation}
\rho(x) r_{x,x+1}
= \frac12 a_{x,x+1}(\rho) \times \exp(F^S_{x,x+1}(\rho)/2) \times \exp(F^A_{x,x+1}/2) ,
\end{equation}
which allows us to read off the mobility $a$ and the forces $F^S$ and $F^A$. The physical meaning of these quantities may not be
obvious from these definitions, but we show in the following that reparameterising the transition rates in this way reveals
structure in the dynamical fluctuations.
For example, equilibrium models (with detailed balance) can be identified via $F^A_{x,x+1}=0$ (for all $x$). In general
$F^A_{x,x+1}$ is the (steady-state) entropy production associated with a transition from $x$ to $x+1$, see
Section~\ref{sec:Phys-interpr-FS}. The steady state entropy production associated with going once round the circuit is
$\sum_x F^A_{x,x+1}=\log \prod_x (r_{x,x+1}/r_{x+1,x})$, as it must be~\cite{Andrieux2007a}.
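These formulas can be checked numerically on a small ring (the rates below are illustrative): one verifies that $\cal J$ is the same on every bond, the closed-form expression for $2\pi(x)r_{x,x+1}$, and the telescoping of $\sum_x F^A_{x,x+1}$.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 5
rp = rng.uniform(0.5, 2.0, n)       # rates r_{x,x+1} (illustrative)
rm = rng.uniform(0.5, 2.0, n)       # rates r_{x+1,x}

r = np.zeros((n, n))
for x in range(n):
    r[x, (x + 1) % n] = rp[x]
    r[(x + 1) % n, x] = rm[x]

Q = r - np.diag(r.sum(axis=1))
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w))]); pi /= pi.sum()
pi1 = np.roll(pi, -1)               # pi(x+1)

# the steady-state current is the same on every bond
Jss = pi * rp - pi1 * rm
print(np.allclose(Jss, Jss[0]))

# 2 pi(x) r_{x,x+1} = sqrt(J^2 + 4 R_x^2 pi(x) pi(x+1)) + J, and its partner
R = np.sqrt(rp * rm)
root = np.sqrt(Jss[0] ** 2 + 4 * R ** 2 * pi * pi1)
print(np.allclose(2 * pi * rp, root + Jss[0]))
print(np.allclose(2 * pi1 * rm, root - Jss[0]))

# entropy production per cycle: sum_x F^A_{x,x+1} = log prod_x (r_{x,x+1}/r_{x+1,x})
FA = np.log(pi * rp / (pi1 * rm))
print(np.isclose(FA.sum(), np.log(np.prod(rp / rm))))
```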
Now consider the LDP in~\eqref{eqn:ldp_pathwise_statement}. We consider a large number ($\cal N$) of identical nano-scale
devices, each of which is described by an independent copy of the Markov chain. Typically, each device goes around the circle at
random, and the average current is ${\cal J}$ (so each object performs ${\cal J}/n$ cycles per unit time). The LDP describes
properties of the ensemble of devices. If $\cal N$ is large and the distribution of devices over states is $\rho$, then the
(overwhelmingly likely) time evolution of this distribution is $\dot\rho = -\div J(\rho)$, where the current $J$ obeys the simple
formula
\begin{equation}
J_{x,x+1}(\rho) = a_{x,x+1}(\rho) \sinh\left( \tfrac12 [F^S_{x,x+1}(\rho) + F^A_{x,x+1}] \right) ,
\end{equation}
which is~\eqref{eq:J-sinh}, applied to this system. The simplicity of this expression motivates the parametrisation of the
transition rates in terms of forces and mobilities. In addition, if one observes some current $j$ [not necessarily equal to
$J(\rho)$] then the rate of change of free energy of the ensemble can be written compactly as $D(\rho,j) = -j\cdot F^S(\rho)$,
from~\eqref{eqn:JFS-dissipation}. The quantity $j\cdot F^A$ is the rate of dissipation via housekeeping heat (see
Section~\ref{sec:Phys-interpr-FS}). This (physically-motivated) splitting of $j\cdot F=j\cdot (F^S+F^A)$ motivates our
introduction of the two forces $F^S$ and $F^A$. Note that $j \cdot F$ is the rate of heat flow from the system to its environment,
and appears in the fluctuation theorem~\eqref{equ:gc-finite-time}.
Finally we turn to the large deviations of this ensemble of nano-scale objects. There is an
LDP~\eqref{eqn:ldp_pathwise_statement}, whose rate function can be decomposed into three pieces (Corollary~\ref{cor:two}), because
of the generalised orthogonality of the forces $F^S$ and $F^A$ (Lemma~\ref{lem:psi_split}). This splitting of the rate function
is useful because the symmetry properties of the various terms yields bounds on rate functions for some other LDPs obtained from
$\Phi$ by contraction, see Section~\ref{sec:LDPs-time-averaged} below.
}
\section{Connections to MFT}
\label{sec:Connections-to-MFT}
Macroscopic Fluctuation Theory (MFT) is a field theory which describes the mass evolution of particle systems in the
drift-diffusive regime, on the level of hydrodynamics. In this setting, it can be seen as a generalisation of Onsager-Machlup
theory~\cite{Machlup1953a}. For a comprehensive review, we refer to~\cite{Bertini2015a}. This section gives an overview of the
theory, {focussing on} the connections to the results presented in Sections~\ref{sec:Onsag-Machl-theory-Markov}
and~\ref{sec:Decomp-forc-rate}.
{We seek to emphasise two points: first, while the particle currents in MFT and the probability current in Markov
chains are very different objects, they both obey large-deviation principles of the form presented in Section~\ref{sec:Summary}.
This illustrates the broad applicability of this general setting. Second, we note that many of} the particle models for which
MFT gives a macroscopic description are Markov chains on discrete spaces. {Starting from this observation, we argue
in Section~\ref{sec:hydro} that some results that are well-known in MFT originate from properties of these underlying Markov
chains, particularly Proposition~\ref{prop:orth} and~Corollary~\ref{cor:two}.}
\subsection{Setting}
\label{sec:Setting-1}
We consider a large number $N$ of indistinguishable particles, moving on a lattice $\Lambda_L$ (indexed by $L\in{\mathbb{N}}$, such that the
number of sites $|\Lambda_L|$ is strictly increasing with $L$). These particles are described by a Markov chain, so the relevant
forces and currents satisfy the equations derived in Sections~\ref{sec:Onsag-Machl-theory-Markov}
and~\ref{sec:Decomp-forc-rate}. The hydrodynamic limit is obtained by letting $L\to\infty$ such that the total density
$N/|\Lambda_L|$ converges to a fixed number $\bar\rho$. In this limit, the lattice $\Lambda_L$ is rescaled into a domain
$\Lambda\subset \R^d$ and one can characterise the system by a local (mass) density $\rho \colon \Lambda\to[0,\infty)$ together
with a local current $j \colon \Lambda\to\R^d$, which evolve deterministically as a function of
time~\cite{Kipnis1999a,Bertini2015a}. This time evolution depends on some (density-dependent) applied forces
$F(\rho) \colon \Lambda\to\R^d$. The force at $x\in\Lambda$ can be written as
\begin{equation}
\label{equ:Fmft}
{ F(\rho)(x)=-\hat{f}''(\rho(x))\nabla \rho(x) + E(x),}
\end{equation}
where the gradient $\nabla$ denotes a spatial derivative, the function $\hat{f} \colon [0,\infty)\to\mathbb{R}$ is a free energy
density and $E\colon \Lambda\to\R^d$ is a drift. {(The free energy $\hat{f}$ is conventionally denoted by
$f$~\cite{Bertini2015a}; here we use a different notation since $f$ indicates a force in this work.)} With these definitions,
the deterministic currents satisfy the linear relation~\cite{Maes2015a}
\begin{equation}
\label{equ:mft-Jrho}
J(\rho)=\chi(\rho) F(\rho) ,
\end{equation}
which is the hydrodynamic analogue of~\eqref{eq:J-sinh}. Here, $\chi(\rho) \in \mathbb{R}^{d \times d}$ is a (density-dependent)
mobility matrix.
\subsection{Onsager-Machlup functional}
\label{sec:Onsag-Machl-funct}
Within MFT, the system is fully specified once the functions $\hat f,\chi,E$ are given. These three quantities are sufficient to
specify both the deterministic evolution of the most likely path $\rho$, and the fluctuations away from it. We can again define an
OM functional given by
\begin{equation}
\label{eqn:psi_mft}
\Phi_{\mathrm{MFT}}(\rho,j,f) :=\frac 12\int_\Lambda \bigl(j-\chi f\bigr)\cdot\chi^{-1}\bigl(j - \chi f\bigr) \df x.
\end{equation}
To cast this functional in the form~\eqref{eqn:Phi_function}, we define the dual pair $\int_\Lambda (j\cdot f) \df x$, together
with the Legendre duals
\begin{equation}
\label{eq:MFT-Psi}
\Psi_{\mathrm{MFT}}(\rho,j):=\frac 12\int_\Lambda j\cdot\chi^{-1}j \df x \quad\text{and}\quad
\Psi^\star_{\mathrm{MFT}}(\rho,f):=\frac 12\int_\Lambda f\cdot\chi f \df x .
\end{equation}
Given $\rho$ and $f$, we have that $\Phi_{\mathrm{MFT}}$ is uniquely minimised (and equal to zero) for the current $j = \chi(\rho) f$.
\subsection{Large deviation principle}
\label{sec:Large-devi-princ}
{ Within MFT, one considers an empirical density and an empirical current. We emphasise that these
refer to particles, which are interacting and move on the lattice $\Lambda_L$; this is in contrast to the case of Markov chains,
where the copies of the system were non-interacting and one considers a density and current of probability. The averaged number of
particles at site $i\in\Lambda_L$ is denoted by $\hat\rho_t^{\;\!L}(x_i)$, where $x_i$ is the image in the rescaled domain
$\Lambda$ of site $i\in\Lambda_L$, and the} corresponding particle current is given by $\hat\jmath_t^{\;\!L}$ {(cf.~Section~VIII.F~in \cite{Bertini2015a} for details).} Note that both the
particle density $\hat\rho_t^{\;\!L}$ and the particle current $\hat\jmath_t^{\;\!L}$ are random quantities {(see also
Section~\ref{sec:hydro} below).
In keeping with the setting of Section~\ref{sec:Summary}}, we focus on paths
$(\hat\rho_t^{\!\;L},\hat\jmath_t^{\;\!L})_{t\in[0,T]}$ in the limit as $L\to\infty$, where the probability is, analogous
to~\eqref{equ:pathwise-general}, given by
\begin{equation}
\label{eqn:pathwise_ldp_mft}
\mathrm{Prob}\Bigl( (\hat\rho_t^{L},\hat\jmath_t^{\;\!L})_{t\in[0,T]}\approx
(\rho_t, j_t)_{t\in[0,T]}\Bigr) \asymp \exp\bigl\{-|\Lambda_L| I_{[0,T]}^{\mathrm{MFT}}\bigl( (\rho_t, j_t)_{t\in[0,T]}\bigr)\bigr\}.
\end{equation}
{Note that the parameter ${\cal N}$ in~\eqref{equ:pathwise-general},
which is the speed of the LDP, corresponds to the lattice size $|\Lambda_L|$}. For the force $F(\rho)$ defined in~\eqref{equ:Fmft}, the rate functional
in~\eqref{eqn:pathwise_ldp_mft} is given by
\begin{equation}
\label{equ:pathwise-mft}
I_{[0,T]}^{\mathrm{MFT}}\bigl( (\rho_t, j_t)_{t\in[0,T]}\bigr)\! =\!
\begin{cases}
\mathcal V(\rho_0)\! +\! \frac12\! \int_0^T\!\Phi_{\mathrm{MFT}}(\rho_t,j_t,F(\rho_t)) \df t & \text{if }\dot\rho_t\!+\!\div j_t\! =\! 0\\
+\infty & \text{otherwise}.
\end{cases}
\end{equation}
Here $\cal V$ is the \emph{quasipotential}, which plays the role of a non-equilibrium free energy. We may think of $\mathcal V$ as
the macroscopic analogue of the free energy $\mathcal F$ defined in~\eqref{eqn:free_energy}. It is the rate functional for the
process sampled from the invariant measure, which is consistent with the case for Markov chains in~\eqref{eqn:ldp_pathwise}. We
assume that $\mathcal V$ has a unique minimiser $\pi$, which is the steady-state density profile (so $\mathcal V(\pi)=0$).
An important difference between the Markov chain setting and MFT is that the OM functional for Markov chains is non-quadratic,
which is equivalent to a non-linear flux force relation, whereas MFT is restricted to quadratic OM functionals.
Equation~\eqref{eqn:pathwise_ldp_mft} is the basic assumption in MFT~\cite{Bertini2015a}, in the sense that all systems considered
by MFT are assumed to satisfy this pathwise LDP. In fact, both the process and its adjoint are assumed to satisfy such LDPs (with
similar rate functionals, but different forces)~\cite{Bertini2015a}.
\subsection{Decomposition of the force $F$}
\label{sec:Decomp-force-F}
The force $F$ in~\eqref{equ:Fmft} can be written as the sum of a symmetric and an anti-symmetric part,
$F(\rho)=F_S(\rho)+F_A(\rho)$, just as in Section~\ref{sec:Splitt-force-accord}. The force for the adjoint process is given by
$F^\ast(\rho)=F_S(\rho)-F_A(\rho)$. Note that, unlike in the case of Markov chains, $F_A(\rho)$ can here depend on $\rho$. More
precisely, $F_S(\rho) = -\nabla\frac{\delta\mathcal V}{\delta \rho}$ and $F_A(\rho)$ is given implicitly by
$F_A(\rho) = F(\rho)-F_S(\rho)$.
The symmetric and anti-symmetric currents are defined in terms of the forces $F_S(\rho)$ and $F_A(\rho)$ as
$J_S(\rho) := \chi(\rho)F_S(\rho)$ and $J_A(\rho) := \chi(\rho)F_A(\rho)$. An important result in MFT is the so-called
\emph{Hamilton-Jacobi orthogonality}, which states that
\begin{equation}
\label{eqn:HJ-MFT}
\int_\Lambda J_S(\rho)\cdot\chi(\rho)^{-1} J_A(\rho) \df x = 0.
\end{equation}
In terms of the forces $F_S(\rho)$ and $F_A(\rho)$, we can restate~\eqref{eqn:HJ-MFT} as
\begin{equation}
\label{eq:HF-MFT-force}
\int_\Lambda F_S(\rho) \cdot\chi(\rho) F_A(\rho) \df x = 0 .
\end{equation}
The latter is the quadratic version of the orthogonality~\eqref{eqn:HJ} of Lemma~\ref{lem:one}; it is equivalent to
\begin{equation}
\int_\Lambda \bigl(F_S(\rho) + F_A(\rho)\bigr) \cdot\chi(\rho) \bigl(F_S(\rho) + F_A(\rho)\bigr)\df x
= \int_\Lambda \bigl(F_S(\rho) - F_A(\rho)\bigr) \cdot\chi(\rho) \bigl(F_S(\rho) - F_A(\rho)\bigr)\df x,
\end{equation}
or in other words, from~\eqref{eq:MFT-Psi},
\begin{equation}
\label{eqn:generalised_HJ}
\Psi^\star_{\mathrm{MFT}}(\rho,F_S(\rho)+F_A(\rho))=\Psi^\star_{\mathrm{MFT}}(\rho,F_S(\rho)-F_A(\rho)),
\end{equation}
which is the result of Proposition~\ref{prop:orth} in the context of MFT. One can see~\eqref{eqn:orth}, and hence
Proposition~\ref{prop:orth}, as the natural generalisation of the Hamilton-Jacobi orthogonality~\eqref{eqn:HJ-MFT}. Again, the MFT
describes systems on the macroscopic scale, but the result~\eqref{eqn:generalised_HJ} originates from the result~\eqref{eqn:orth},
on the microscopic level.
{
\subsection{Relating Markov chains to MFT: hydrodynamic limits}
\label{sec:hydro}
We have discussed a formal analogy between current/density fluctuations in Markov chains and in MFT: the large deviation
principles~\eqref{eqn:ldp_pathwise_statement} and~\eqref{eqn:pathwise_ldp_mft} refer to different objects and different limits,
but they both fall within the general setting described in Section~\ref{sec:Summary}. We argue here that the similarities between
these two large deviation principles are not coincidental -- they arise naturally when MFT is interpreted as a theory for
hydrodynamic limits of interacting particle systems.
To avoid confusion between particle densities and probability densities, we introduce (only for this section) a different notation
for some properties of discrete Markov chains, which is standard for interacting particle systems. Let $\eta$ represent a state
of the Markov chain (in place of the notation $x$ of Section~\ref{sec:Onsag-Machl-theory-Markov}), and let $\mu$ be a probability
distribution over these states (in place of the notation $\rho$ of Section~\ref{sec:Onsag-Machl-theory-Markov}). Let $\jmath$ be
the probability current.
We illustrate our argument using the weakly asymmetric {simple} exclusion process (WASEP) in one dimension, so the lattice is
$\Lambda_L=\{1,2,\dots,L\}$, and each lattice site contains at most one particle, so $V=\{0,1\}^L$. The lattice has periodic
boundary conditions and the occupancy of site $i$ is $\eta(i)$. Particles hop to the right with rate $L^2$ and to the left with
rate $L^2(1-({ E}/L))$, but in either case only if the destination site is empty. Here $ E$ is a fixed parameter
(an external field); the dependence of the hop rates on $L$ is chosen to ensure a diffusive hydrodynamic limit (as required for
MFT).
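The dynamics just described is straightforward to simulate. The following Python sketch implements a continuous-time (Gillespie-type) simulation of the WASEP with the hop rates above; the parameter values, seed and function names are illustrative choices, not taken from the text.

```python
import random

def simulate_wasep(L=20, n_particles=10, E=1.0, t_max=0.05, seed=7):
    """Continuous-time (Gillespie) simulation of the WASEP on the periodic
    lattice {1,...,L}: a particle hops right with rate L^2 and left with
    rate L^2*(1 - E/L), in either case only if the destination is empty."""
    rng = random.Random(seed)
    eta = [1] * n_particles + [0] * (L - n_particles)
    rng.shuffle(eta)
    rate_right, rate_left = float(L ** 2), L ** 2 * (1.0 - E / L)
    t = 0.0
    while True:
        # enumerate the allowed hops (exclusion rule) with their rates
        moves = []
        for i in range(L):
            if eta[i] == 1 and eta[(i + 1) % L] == 0:
                moves.append((i, (i + 1) % L, rate_right))
            if eta[i] == 1 and eta[(i - 1) % L] == 0:
                moves.append((i, (i - 1) % L, rate_left))
        total = sum(r for _, _, r in moves)
        t += rng.expovariate(total)          # exponential waiting time
        if t > t_max:
            return eta
        u, acc = rng.uniform(0.0, total), 0.0
        for src, dst, r in moves:            # pick a hop proportional to its rate
            acc += r
            if u <= acc:
                eta[src], eta[dst] = 0, 1
                break

eta = simulate_wasep()
```

Note that the diffusive scaling of the rates (both of order $L^2$) means that many hops occur even over short physical times, which is why the hydrodynamic limit is diffusive.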
The spatial domain relevant for MFT is $\Lambda=[0,1]$: site $i\in\Lambda_L$ corresponds to position $i/L\in\Lambda$. For any
probability measure $\mu$ on $V$, one can write a corresponding smoothed particle density $\rho^\epsilon$ on $\Lambda$, as
\begin{equation}
\rho^\epsilon(x) = \frac1L \sum_{\eta\in V} \sum_{i=1}^L \mu(\eta)\, \eta(i)\, \delta^\epsilon(x-(i/L)),
\end{equation}
where $\delta^\epsilon$ is a smoothed delta function (for example a Gaussian with unit weight and width $\epsilon$, or -- more classically
-- a top-hat function of width $\epsilon$, cf.~\cite{Kipnis1999a}). Similarly if there is a probability current $\jmath$ in the
Markov chain, one can write a smoothed particle current as
\begin{equation}
j^\epsilon(x) = \frac1L \sum_{\eta\in V} \sum_{i=1}^L \jmath_{\eta,\eta^{i,i+1}} \delta^\epsilon\left(x-\frac{2i+1}{2L} \right),
\end{equation}
where $\eta^{i,i+1}$ is the configuration obtained from $\eta$ by moving a particle from site $i$ to site $i+1$; if there is no
particle on site $i$ then define $\eta^{i,i+1}=\eta$ so that $\jmath_{\eta,\eta^{i,i+1}}=0$. Physically, $\rho^\epsilon$ is the
average particle density associated to $\mu$, and $j^\epsilon$ is the particle current associated to $\jmath$.
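For a single configuration (that is, taking $\mu$ to be a point mass), the smoothed density $\rho^\epsilon$ can be computed directly. The following Python sketch uses the top-hat mollifier mentioned above; the grid size and the value of $\epsilon$ are illustrative.

```python
def smoothed_density(eta, eps=0.05, n_grid=400):
    """Smoothed particle density on Lambda = [0,1): for one fixed
    configuration eta (mu a point mass), evaluate
        rho^eps(x) = (1/L) * sum_i eta(i) * delta^eps(x - i/L)
    with delta^eps a unit-mass top-hat of width eps (periodic distance)."""
    L = len(eta)

    def delta_eps(y):
        d = abs(y) % 1.0
        d = min(d, 1.0 - d)                  # distance on the circle
        return 1.0 / eps if d <= eps / 2.0 else 0.0

    xs = [k / n_grid for k in range(n_grid)]
    rho = [sum(eta[i] * delta_eps(x - i / L) for i in range(L)) / L
           for x in xs]
    return xs, rho

eta = [1, 1, 0, 0] * 5                       # 10 particles on L = 20 sites
xs, rho = smoothed_density(eta)
mass = sum(rho) / len(rho)                   # Riemann sum of rho^eps over [0,1)
```

Since $\delta^\epsilon$ has unit mass, the integral of $\rho^\epsilon$ recovers the overall particle density $N/L$ (up to discretisation error of the Riemann sum at the top-hat edges).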
As noted above, MFT is concerned with the limit $L\to\infty$. The LDP~\eqref{eqn:ldp_pathwise_statement} is not relevant for that
limit (it applies when one considers many (${\cal N}\to\infty$) independent copies of the Markov chain, with $L$ being finite for
each copy). However, the rate function $I_{[0,T]}$ that appears in~\eqref{eqn:ldp_pathwise_statement} has an alternative physical
interpretation, as the relative entropy between two path measures: see Appendix~\ref{sec:relent}. This relative entropy can be seen as a
property of the WASEP; there is no requirement to invoke many copies of the system. Physically, the relative entropy measures how
different the WASEP is from an alternative Markov process with a given probability and current $(\mu_t,\jmath_t)_{t\in[0,T]}$.
The key point is that in cases where MFT applies, one expects that the rate function $I^{\rm MFT}_{[0,T]}$ can be related to this relative entropy.
{In fact, there is a deeper relation between relative entropies and rate functionals: it can be shown that Large Deviation Principles are equivalent to $\varGamma$-convergence of relative entropy functionals (see \cite{Mariani2012} for details).
Returning to the WASEP, we consider a particle density $(\rho_t,j_t)_{t\in[0,T]}$ that satisfies $\dot\rho_t = -\div j_t$. One can then find (for each $L$)} a time-dependent probability and current $(\mu_t^L,\jmath_t^L)_{t\in[0,T]}$, with $\dot\mu^L_t = -\div \jmath^L_t$, {such that, on taking the limit $\epsilon\to0$ {\it after} $L\to\infty$}, the associated
particle densities $(\rho^\epsilon_t,j^\epsilon_t) \to (\rho_t,j_t)$ {and moreover
\begin{equation}
\label{equ:mft-connect}
\lim_{L\to\infty}\frac{1}{|\Lambda_L|} I_{[0,T]}\bigl( (\mu_t^L, \jmath_t^L)_{t\in[0,T]}\bigr) = I_{[0,T]}^{\mathrm{MFT}}
\bigl( (\rho_t, j_t)_{t\in[0,T]}\bigr).
\end{equation}
In order to find $(\mu_t^L,\jmath_t^L)_{t\in[0,T]}$, one defines a ``controlled'' WASEP (similar to \eqref{equ:tilde-r} in Section~\ref{sec:Optim-contr-theory}), in which the particle hop rates depend on position and time, such that the particle density in the hydrodynamic limit obeys $\dot\rho_t = -\div j_t$.
For interacting particle systems, this ``controlled'' process is usually obtained by adding a time dependent external field to the system that acts on the individual particles. This was first derived for the symmetric SEP in \cite{Kipnis1989} (see also \cite{Benois1995} for a treatment of the zero-range process).
For the WASEP (in a slightly different situation, with open boundaries), a proof of~\eqref{equ:mft-connect} can be found in~\cite{Bertini2009}, Lemma~3.7.
}
Moreover, on decomposing $I^{\rm MFT}_{[0,T]}$ and $I_{[0,T]}$ as in~\eqref{eqn:Phi_function}, the separate functions $\Psi$ and
$\Psi^\star$ obey formulae analogous to~\eqref{equ:mft-connect}: this is the sense in which the structure of the MFT rate function
is inherited from the relative entropy of the Markov chains. The quadratic functions $\Psi$ and $\Psi^\star$ in MFT arise because
the forces that appear in the underlying Markov chains are small (compared to unity), so second-order Taylor
expansions of $\Psi^\star$ and $\Psi$ provide an accurate description in the limit.}
{We will return to this discussion in a later publication. }
\section{LDPs for time-averaged quantities}
\label{sec:LDPs-time-averaged}
So far we have considered large deviation principles for hydrodynamic limits, and for systems consisting of many independent
copies of a single Markov chain. We now show how some of the results derived in Sections~\ref{sec:Onsag-Machl-theory-Markov}
and~\ref{sec:Decomp-forc-rate} also have analogues for large deviations for a single Markov chain, in the large-time limit.
\subsection{Large deviations at level 2.5}
\label{sec:Large-deviations-at}
By analogy with~\eqref{eqn:N_average}, we define the time averaged {empirical measure of a single copy of the Markov chain} $\hat\rho_{[0,T]}$ and the time
averaged empirical current $\hat\jmath_{[0,T]}$ as
\begin{equation}
\label{eqn:defn_time_averages}
\hat\rho_{[0,T]}:=\frac 1T \int_0^T \hat\rho_t \df t
\quad\text{and}\quad
\hat\jmath_{[0,T]}:=\frac 1T\int_0^T\hat\jmath_t \df t
\end{equation}
{ (where we choose $\hat\rho_t=\hat\rho_t^1$ and $\hat\jmath_t = \hat\jmath_t^1$ for the empirical density and current of the single Markov chain, as defined above in Section~\ref{sec:Large-Devi-Onsag}).}
For countable state Markov chains, the quantity $(\hat\rho_{[0,T]},\hat\jmath_{[0,T]})$ satisfies a LDP as $T\to\infty$:
\begin{equation}
\label{equ:l2.5-general}
\mathrm{Prob}\bigl((\hat\rho_{[0,T]}, \hat\jmath_{[0,T]}) \approx (\rho,j)\bigr) \asymp \exp\bigl\{-T I_{2.5}(\rho,j)\bigr\} .
\end{equation}
We refer to such principles as \emph{level 2.5 LDPs}. For countable state Markov chains the rate functional $I_{2.5}(\rho,j)$ was
derived in~\cite{Maes2008b}, and was proven rigorously in~\cite{Bertini2015c,Bertini2015b} for Markov chains in the setting of
Section~\ref{sec:Setting} under some additional conditions (see~\cite{Bertini2015c,Bertini2015b} for the details). We can recast
the rate functional (see~\cite[Theorem 6.1]{Bertini2015c}) as
\begin{equation}
\label{eqn:ldp_2.5}
I_{2.5}(\rho,j) =
\begin{cases}
\frac12 \Phi(\rho,j,F(\rho)) & \text{if }\div j = 0\\
+\infty & \text{otherwise}
\end{cases} ,
\end{equation}
with $\Phi$ again given by~\eqref{eqn:Phi_function}, together with~\eqref{eqn:dual_pairing},~\eqref{eqn:psi_star}
and~\eqref{eqn:new_psi}.
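For small chains the functional $I_{2.5}$ can be evaluated explicitly. The Python sketch below uses the standard Markov-chain forms of $\Psi$, $\Psi^\star$ and the dual pairing, written in an assumed per-ordered-pair normalisation (the paper's conventions are fixed in the equations cited above, which are not displayed in this section); the chosen three-state ring and its rates are illustrative. It checks that the typical current ($j_{xy}=a_{xy}\sinh(F_{xy}/2)$ on every pair) has zero cost, while an atypical circulating current has strictly positive cost.

```python
import math

def I25(rho, j, rates):
    """Level-2.5 rate functional I_2.5(rho, j) = (1/2)*Phi(rho, j, F(rho))
    for a finite irreducible chain, with the assumed per-ordered-pair forms
        a_xy   = 2*sqrt(rho_x r_xy rho_y r_yx),
        F_xy   = log(rho_x r_xy / (rho_y r_yx)),
        contribution = psi(j) - j*F/2 + psi*(F),  where
        psi(j) = j*asinh(j/a) - sqrt(j^2+a^2) + a,  psi*(f) = a*(cosh(f/2)-1).
    Each pair contribution vanishes exactly at j = a*sinh(F/2)."""
    n = len(rho)
    phi = 0.0
    for x in range(n):
        for y in range(n):
            if x == y or rates[x][y] == 0.0:
                continue
            a = 2.0 * math.sqrt(rho[x] * rates[x][y] * rho[y] * rates[y][x])
            F = math.log(rho[x] * rates[x][y] / (rho[y] * rates[y][x]))
            jj = j[x][y]
            phi += (jj * math.asinh(jj / a) - math.hypot(jj, a) + a
                    - jj * F / 2.0 + a * (math.cosh(F / 2.0) - 1.0))
    return 0.5 * phi

# Homogeneous 3-ring: clockwise rate 2, anticlockwise rate 1.
n = 3
rates = [[0.0] * n for _ in range(n)]
for x in range(n):
    rates[x][(x + 1) % n] = 2.0
    rates[x][(x - 1) % n] = 1.0
pi = [1.0 / n] * n                            # stationary by symmetry

def ring_current(c):
    """Divergence-free current circulating at speed c around the ring."""
    j = [[0.0] * n for _ in range(n)]
    for x in range(n):
        j[x][(x + 1) % n] += c
        j[(x + 1) % n][x] -= c
    return j

J_ss = pi[0] * 2.0 - pi[1] * 1.0              # steady-state edge current = 1/3
typical = I25(pi, ring_current(J_ss), rates)
atypical = I25(pi, ring_current(2 * J_ss), rates)
```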
We have stated this LDP for joint fluctuations of the density and the current. For Markov chains, the LDP for the density and the
\emph{flow} is also known as a level-2.5 LDP~\cite{Bertini2015b}, so our general use of the name level-2.5
for~\eqref{equ:l2.5-general} may be non-standard, but it seems reasonable. The rate functional for the density and the current in
\eqref{equ:l2.5-general} can be obtained by contraction from the rate functional for the density and the flow (see Theorem 6.1
in~\cite{Bertini2015c}).
Using the splitting obtained in Section~\ref{sec:Decomp-rate-funct}, we obtain the following representation for the rate
functional on level-2.5.
\begin{proposition}
\label{prop:split0_2.5}
Let $j$ be divergence free. Then the level-2.5 rate functional~\eqref{eqn:ldp_2.5} is given by
\begin{equation}
\label{eqn:split_2.5}
I_{2.5}(\rho,j) =\frac12\Bigl[\Phi_S\bigl(\rho,0,F^S(\rho)\bigr) + \Phi\bigl(\rho,j,F^A\bigr)\Bigr].
\end{equation}
\end{proposition}
\begin{proof}
We note from~\eqref{eq:div-free-D-null} that $D(\rho,j)$ vanishes for divergence free currents $j$. The result then directly
follows from Corollary~\ref{cor:two}. \qed
\end{proof}
\subsection{Large deviations for currents}
\label{sec:Large-devi-curr}
Proposition~\ref{prop:split0_2.5} is connected to recently-derived bounds on rate functions for currents,
{see~\cite{Gingrich2016a,Gingrich2017a,Pietzonka2016,Polettini2016a}}. Indeed, the rate function for current
fluctuations can be obtained by contraction from level-2.5, as
\begin{equation}
\label{equ:Ijj}
{I_{\rm current}(j)}:=\inf_\rho I_{2.5}(\rho,j).
\end{equation}
Then, following~\cite{Gingrich2017a,Polettini2016a}, it may be shown that, for any $\rho,j,f$, and for $\Phi$ as
in~\eqref{eqn:Phi_function} with~\eqref{eqn:dual_pairing} and~\eqref{eqn:psi_star}-\eqref{eqn:new_psi}, one has
\begin{equation}
\label{eqn:gingirch_bound}
\Phi\bigl(\rho,j,f\bigr) \le \sum_{xy} \bigl(j_{xy}-j^{f}_{xy}(\rho)\bigr)^2 b_{xy}(\rho,f)
\end{equation}
with $b_{xy}(\rho,f)=f_{xy}/(4j^f_{xy}(\rho))$ if $f_{xy}\neq0$; otherwise $b_{xy}$ is continuously extended by taking
$b_{xy}(\rho,f)=1/(2a_{xy}(\rho))$. Hence one has the result of~\cite{Gingrich2016a}, that the curvature of the rate function is
controlled by the housekeeping heat $F^A$, as
\begin{equation}
\label{eqn:Ijj-bound}
{I_{\rm current}(j)}\leq I_{2.5}(\pi,j) = \frac 1 2 \Phi\bigl(\pi,j,F^A\bigr) \leq
\frac 1 2 \sum_{xy} \frac{\bigl(j_{xy}-J_{xy}^{\rm ss}\bigr)^2}{4(J_{xy}^{\rm ss})^2} J_{xy}^{\rm ss} F^A_{xy},
\end{equation}
where $J^{\rm ss}:=J(\pi)$ is the steady state current (recall~\eqref{equ:master}), and the ratio $F^A_{xy}/J^{\ss}_{xy}$ must
again be interpreted as $2/a_{xy}(\rho)$ in the case where $F^A_{xy}$ (and hence $J^{\ss}_{xy}$) vanish. The first step
in~\eqref{eqn:Ijj-bound} comes from~\eqref{equ:Ijj}, the second step uses~\eqref{eqn:split_2.5} as well as $\Phi(\pi,0,F^S)=0$,
and the third uses~\eqref{eqn:gingirch_bound}.
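The bound~\eqref{eqn:gingirch_bound} can be checked numerically pair by pair. The following Python sketch is a sanity check under an assumed per-ordered-pair normalisation of $\Phi$ (the paper's conventions are fixed in equations not displayed here): each pair contributes $\phi(j)=j(\mathrm{asinh}(j/a)-f/2)-\sqrt{j^2+a^2}+a\cosh(f/2)$, which vanishes at $j^f=a\sinh(f/2)$, and is compared with the quadratic bound with $b=f/(4j^f)$, including the continuous extension $b=1/(2a)$ at $f=0$.

```python
import math

def phi_edge(j, a, f):
    """Per-(ordered-pair) contribution to Phi(rho, j, f) in the assumed
    normalisation; it is zero and tangent to the bound at j = a*sinh(f/2)."""
    return (j * (math.asinh(j / a) - f / 2.0)
            - math.hypot(j, a) + a * math.cosh(f / 2.0))

def quad_bound(j, a, f):
    """Quadratic upper bound (j - j^f)^2 * b with b = f/(4 j^f), and
    b = 1/(2a) as the continuous extension at f = 0."""
    jf = a * math.sinh(f / 2.0)
    b = f / (4.0 * jf) if f != 0.0 else 1.0 / (2.0 * a)
    return (j - jf) ** 2 * b

violations = 0
for a in (0.5, 1.0, 2.0):
    for f in (0.0, 0.5, 1.0, 2.0, 4.0):
        for k in range(-60, 61):
            j = k / 10.0                      # currents j in [-6, 6]
            if phi_edge(j, a, f) > quad_bound(j, a, f) + 1e-10:
                violations += 1
```

The bound is tangent at the typical current and is tightest for small forces, where the second-order Taylor expansion of the $\cosh$-type structure becomes exact.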
The significance of the splitting~\eqref{eqn:split_2.5} for this result is that $J^{\rm ss}_{xy}F_{xy}^A$ is the rate of flow of
housekeeping heat associated with edge $xy$: the appearance of the housekeeping heat is natural since the bound comes from the
second term in~\eqref{eqn:split_2.5}, which is independent of $F^S$ and depends only on $F^A$.
\subsection{Optimal control theory}
\label{sec:Optim-contr-theory}
It will be useful to introduce ideas of optimal control theory, whose relationship with large deviation theory is discussed
in~\cite{Fleming2006a,Chernyak2014a,Chetrite2015b,Jack2015a}. In parallel with our given transition rates $r_{xy}$ we introduce a
new process, the \emph{controlled process}, where the rates are modified by a \emph{control potential} $\varphi$, as
\begin{equation}
\label{equ:tilde-r}
\tilde{r}_{xy} := r_{xy} \exp((\varphi(y)-\varphi(x))/2).
\end{equation}
For a given probability distribution $\rho$, we seek a potential $\varphi$ such that the controlled process has invariant measure
$\tilde\pi := \rho$. For this we need
\begin{equation*}
\sum_y \left[\rho_x r_{xy} \exp((\varphi(y)-\varphi(x))/2) - \rho_y r_{yx} \exp((\varphi(x)-\varphi(y))/2)\right] = 0,
\end{equation*}
or equivalently
\begin{equation}
\label{equ:vhi-balance}
\div j^{F+\nabla \varphi}(\rho) =
\sum_y a_{xy}(\rho) \sinh\left( (F_{xy}(\rho) + \nabla^{x,y}\varphi)/2\right) = 0.
\end{equation}
We stress that, for any fixed $\rho$,~\eqref{equ:vhi-balance} is equivalent to solving the minimisation problem
\begin{equation}
\label{eqn:minimisation_1}
\inf_{\operatorname{div} j =0}\Phi\bigl(\rho,j,F(\rho)\bigr),
\end{equation}
which is also equivalent to maximisation of the Donsker-Varadhan functional; see, for example, Chapter~IV.4
in~\cite{Hollander2000a}. A proof of the existence and uniqueness of $\varphi$ can be found in~\cite{Maes2012a}. Now assume
that $\varphi$ solves~\eqref{equ:vhi-balance}. The resulting controlled process depends on $\rho$ and has rates $\tilde{r}$ given
by~\eqref{equ:tilde-r}. Throughout this section, we use tildes to indicate properties of the controlled process: all these
quantities depend implicitly on the fixed probability $\rho$. Hence the (time-dependent) measure of the controlled process is
$\tilde\rho$.
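The balance condition~\eqref{equ:vhi-balance} is easy to solve numerically for small chains. One simple approach (an illustrative scheme, not taken from the text) exploits that the divergence at each state is strictly decreasing in $\varphi(x)$, so coordinate-wise bisection with Gauss-Seidel sweeps converges; the three-state rates and target $\rho$ below are arbitrary choices.

```python
import math

def solve_control_potential(rho, rates, sweeps=300):
    """Find phi such that rho is invariant for the controlled rates
        r~_xy = r_xy * exp((phi_y - phi_x)/2),
    i.e. solve div j^{F + grad(phi)}(rho) = 0 at every state.  The
    divergence at x is strictly decreasing in phi_x, so we bisect each
    coordinate in turn (phi_0 = 0 is fixed: phi is only determined up to
    an additive constant)."""
    n = len(rho)
    phi = [0.0] * n

    def divergence(x):
        return sum(rho[x] * rates[x][y] * math.exp((phi[y] - phi[x]) / 2.0)
                   - rho[y] * rates[y][x] * math.exp((phi[x] - phi[y]) / 2.0)
                   for y in range(n) if y != x)

    for _ in range(sweeps):
        for x in range(1, n):
            lo, hi = -50.0, 50.0
            for _ in range(80):
                phi[x] = 0.5 * (lo + hi)
                if divergence(x) > 0.0:   # too much outflow: raise phi_x
                    lo = phi[x]
                else:
                    hi = phi[x]
    return phi

rho = [0.5, 0.3, 0.2]
rates = [[0.0, 2.0, 1.0],
         [1.0, 0.0, 3.0],
         [2.0, 1.0, 0.0]]
phi = solve_control_potential(rho, rates)

# controlled rates; note that r~_xy * r~_yx = r_xy * r_yx
tilde = [[rates[x][y] * math.exp((phi[y] - phi[x]) / 2.0) for y in range(3)]
         for x in range(3)]
max_div = max(abs(sum(rho[x] * tilde[x][y] - rho[y] * tilde[y][x]
                      for y in range(3))) for x in range(3))
```

The final check confirms that $\rho$ is indeed the invariant measure of the controlled process, and that the symmetrised products $\tilde r_{xy}\tilde r_{yx} = r_{xy} r_{yx}$ are preserved, as used in the next paragraph.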
Repeating the analysis of Section~\ref{sec:Setting} and noting that $\tilde{r}_{xy}\tilde{r}_{yx}=r_{xy}r_{yx}$, we find that
$\tilde a_{xy}(\tilde\rho) := 2\sqrt{\tilde\rho(x)\tilde r_{xy}\tilde\rho(y)\tilde r_{yx}} = a_{xy}(\tilde \rho)$. Also, the
force for the controlled process is
\begin{equation}
\label{equ:ftilde}
\tilde{F}(\tilde\rho) = F(\tilde\rho) + \nabla \varphi,
\end{equation}
which may be decomposed as
\begin{equation}
\begin{split}
\tilde{F}^S(\tilde\rho) \;\!&:= F^S(\tilde\rho) + \nabla \log\frac\rho\pi =-\nabla \log \frac {\tilde\rho}{\rho},\\
\tilde{F}^A \;\!& := F(\rho)+\nabla \varphi= F^A - \nabla \log\frac\rho\pi + \nabla\varphi.
\end{split}
\end{equation}
Thus, the symmetric force in the controlled process vanishes when $\tilde\rho= \rho$. The antisymmetric force $\tilde{F}^A$
represents the force observed in the new non-equilibrium steady state $\rho$. If the original process is reversible, then
$\varphi=\log\frac\rho\pi$ so $\tilde{F}^A=F^A=0$.
It is useful to define $\tilde J_{xy}(\tilde\rho) := a_{xy}(\tilde\rho) \sinh (\tilde{F}_{xy}(\tilde\rho)/2)$ and to identify the
steady-state current for the controlled process as
\begin{equation}
\label{equ:Jss}
\tilde{J}^{\ss} := \tilde{J}(\rho) .
\end{equation}
\subsection{Decomposition of rate functions}
\label{sec:Decomp-rate-funct-1}
The ideas of optimal control theory are useful since they facilitate the further decomposition of the level-2.5 rate function into
several contributions.
\begin{lemma}
\label{lemma:split1_2.5}
Suppose that $\rho$ and $j$ are given and that $\div j=0$. Then
\begin{equation}
\label{equ:i2.5-control}
I_{2.5}(\rho,j) =
\frac12 \Bigl[ \Phi\bigl( \rho, \tilde{J}^{\ss} , F(\rho)\bigr) + \Phi\bigl( \rho, j , \tilde{F}^A \bigr) \Bigr],
\end{equation}
where $\tilde{J}^{\rm ss}$ is given by~\eqref{equ:Jss}, evaluated in the optimally controlled process whose steady state is
$\rho$.
\end{lemma}
\begin{proof}
We write
\begin{align}
2I_{2.5}(\rho,j) & = \Psi(\rho,j) - j\cdot F(\rho) + \Psi^\star\bigl(\rho,F(\rho)\bigr)
\nonumber \\
& = [\Psi(\rho,j) - j\cdot\tilde{F}(\rho) + \Psi^\star( \rho,\tilde{F}(\rho)) ]
\nonumber \\
&\quad+ \Psi^\star(\rho,F(\rho)) - \Psi^\star( \rho,\tilde{F}(\rho)) - j\cdot(F(\rho)-\tilde{F}(\rho))
\nonumber \\
& = \Phi\bigl(\rho, j, \tilde{F}(\rho) \bigr)
+ \Psi^\star(\rho,F(\rho)) - \Psi^\star( \rho,\tilde{F}(\rho)) + j\cdot\nabla\varphi
\label{eqn:split-intermediate}
\end{align}
where the first line is~\eqref{eqn:Phi_function} and~\eqref{eqn:ldp_2.5}; the second line is simple rewriting; and the third uses
the definition of $\Phi$ in~\eqref{eqn:Phi_function} and also~\eqref{equ:ftilde} with $\tilde\rho=\rho$.
The current $\tilde{J}(\rho)$ satisfies $\Phi(\rho,\tilde{J}(\rho),\tilde{F}(\rho))=0$ so one has (by definition of $\Phi$) that
$\Psi^\star( \rho,\tilde{F}(\rho))=\tilde{J}(\rho)\cdot\tilde{F}(\rho)-\Psi(\rho,\tilde{J}(\rho))$. Using this relation together
with~\eqref{equ:ftilde} and~\eqref{eqn:split-intermediate}, one has
\begin{equation}
\qquad 2I_{2.5}(\rho,j) = \Phi\bigl(\rho, j, \tilde{F}(\rho) \bigr)
+ \Psi^\star(\rho,F(\rho)) - \tilde{J}(\rho)\cdot F(\rho)
+\Psi(\rho,\tilde{J}(\rho)) -\tilde{J}(\rho)\cdot\nabla\varphi+ j\cdot\nabla\varphi.\qquad
\end{equation}
Finally we note that $\div\tilde{J}(\rho)=0$ (since $\rho$ is the invariant measure for the controlled process) and $\div j=0$ (by
assumption), so integration by parts yields $\tilde{J}(\rho)\cdot\nabla\varphi=0=j\cdot\nabla\varphi$; using once more the
definition of $\Phi$ yields~\eqref{equ:i2.5-control}. \qed
\end{proof}
The physical interpretation of~\eqref{equ:i2.5-control} is as follows. The contribution $\frac 12\Phi( \rho, j , \tilde{F}^A )$ is
a rate functional for observing an empirical current $j$ in the controlled process, while
$\frac 12\Phi( \rho, \tilde{J}^{\ss} , F(\rho) )$ is the rate functional for observing an empirical current $\tilde{J}^{\ss}$ in
the original process. Since $\tilde{J}^{\ss}$ is the (deterministic) probability current for the controlled process, one has that
the more the controlled process differs from the original one, the larger will be $\Phi( \rho, \tilde{J}^{\ss}, F(\rho) )$. Hence
the level-2.5 rate functional is large if the controlled process is very different from the original one, as one might expect. The
rate functional also takes larger values if the empirical current $j$ is very different from the probability current of the
controlled process.
We obtain our final representation for the level-2.5 rate functional, consisting of the sum of three different OM functionals.
\begin{proposition}
\label{prop:final_markov_2.5}
Let $j$ be divergence free. We can represent the level-2.5 rate functional~\eqref{eqn:ldp_2.5} as
\begin{equation}
\label{eqn:final_markov_2.5}
I_{2.5}(\rho,j) = \frac12 \left[ \Phi_S\bigl( \rho, 0, F^S(\rho) \bigr)
+ \Phi\bigl(\rho,\tilde{J}^{\ss}, F^A \bigr) + \Phi\bigl( \rho, j , \tilde{F}^A \bigr) \right].
\end{equation}
\end{proposition}
\begin{proof}
This follows immediately from Lemma~\ref{lemma:split1_2.5}, followed by an application of Corollary~\ref{cor:two} to
$\Phi\bigl(\rho,\tilde{J}^{\ss}, F(\rho) \bigr)$, noting that $D=0$ by~\eqref{eq:div-free-D-null} since $\div \tilde{J}^{\ss}=0$.
\qed
\end{proof}
The three terms in~\eqref{eqn:final_markov_2.5} also appear in Lemma~\ref{lemma:split1_2.5} and Corollary~\ref{cor:two}, and their
interpretations have been discussed in the context of those results. Briefly, we recall that $I_{2.5}(\rho,j)$ sets the
probability of fluctuations in which a non-typical density $\rho$ and current $j$ are sustained over a long time period. The
first term in~\eqref{eqn:final_markov_2.5} reflects the fact that the free-energy gradient $F^S(\rho)$ tends to push $\rho$
towards the steady state $\pi$, so maintaining any non-typical density is unlikely if $F^S(\rho)$ is large. Similarly, the second
term in~\eqref{eqn:final_markov_2.5} reflects the fact that large non-gradient forces $F^A$ also tend to suppress the probability
that $\rho$ maintains its non-typical value. The final term is the only place in which the (divergence-free) current $j$ appears:
it vanishes if the current $j$ is typical within the controlled process (see Corollary~\ref{cor:final_markov_2}, below); otherwise
it reflects the probability cost of maintaining a non-typical circulating current.
\subsection{ Large deviations at level 2}
\label{sec:Large-deviations-at-level-2}
As well as the LDP~\eqref{equ:l2.5-general}, we also consider an (apparently) simpler object, called a \emph{level-2 LDP}, where one
considers the density only. It is formally given by
\begin{equation}
\label{equ:l2-general}
\mathrm{Prob}\left(\hat\rho_{[0,T]} \approx \rho \right) \asymp \exp(-T I_{2}(\rho)).
\end{equation}
The contraction principle for LDPs~\cite[Section~3.6]{Touchette2009a} states that
\begin{equation}
\label{equ:l2-inf}
I_2(\rho) = \inf_{j \;\!:\;\! \div j=0} I_{2.5}(\rho,j).
\end{equation}
The right-hand side of~\eqref{equ:i2.5-control} is uniquely minimised over its second argument by the divergence-free current
$j^{\tilde{F}^A}$, so that the contraction over all divergence-free vector fields $j$ yields the level-2 rate functional
\begin{equation}
\label{equ:i2-control}
I_{2}(\rho) = \frac12 \Phi\bigl( \rho, \tilde{J}^{\ss} , F(\rho) \bigr).
\end{equation}
The same splitting as above finally allows us to write the level 2 rate functional as follows.
\begin{corollary}
\label{cor:final_markov_2}
The level-2 rate functional can be written as the sum
\begin{equation}
\label{eqn:final_markov_2}
I_{2}(\rho) = \frac12 \Bigl[\Phi_S\bigl( \rho, 0, F^S(\rho) \bigr) + \Phi\bigl(\rho,\tilde{J}^{\ss}, F^A\bigr)\Bigr].
\end{equation}
\end{corollary}
\begin{proof}
This follows from~\eqref{equ:l2-inf} and~\eqref{eqn:final_markov_2.5}, since $\Phi\bigl( \rho, j , \tilde{F}^A \bigr)$ has a
minimal value of zero. \qed
\end{proof}
This last identity extends the results obtained in~\cite{Kaiser2017a} on the accelerated convergence to equilibrium for
irreversible processes using LDPs from the macroscopic scale (i.e.~in the regime of MFT) to Markov chains. The level-2 rate
function in~\eqref{eqn:final_markov_2} can be interpreted as a rate of convergence to the steady state. It was shown
in~\cite{Kaiser2017a} that the rate is higher for irreversible processes, as opposed to reversible ones (as the second term
$\Phi(\rho,\tilde{J}^{\ss}, F^A)=0$ for reversible processes). We remark that splitting techniques for irreversible jump processes
have been used to devise efficient MCMC samplers; see for example~\cite{Bernard2011,Ma2016a}.
\subsection{Connection to MFT}
\label{sec:Level-2.5-level-2-in-MFT}
Under the assumption that no dynamical phase transition takes place, the time averaged density
{$\hat\rho_{[0,T]}^{L}:=\frac 1T\int_0^T\hat\rho_t^{L} \df t$} and current
{$\hat\jmath_{[0,T]}^{\;\!L}:=\frac 1T\int_0^T\hat\jmath_t^{\;\!L}\df t$} in MFT {(recall
Section~\ref{sec:Large-devi-princ} for definitions) also satisfy a joint LDP in the limit $L,T\to\infty$: one takes first
$L\to\infty$ and then $T\to\infty$, see \cite[Equ.~(36)]{Kaiser2017a}. The LDP is similar to~\eqref{equ:l2.5-general}:
\begin{equation}
\mathrm{Prob}\bigl((\hat\rho_{[0,T]}^L, \hat\jmath_{[0,T]}^L) \approx (\rho,j)\bigr) \asymp \exp\bigl\{-T |\Lambda_L| I_{\rm joint}^{\mathrm{MFT}}(\rho,j)\bigr\},
\end{equation}
where the rate function is, for a density profile $\rho$ and a current $j$ with $\operatorname{div}j=0$, given by
\begin{equation}
\label{equ:lev2.5-mft}
I_{\rm joint}^{\mathrm{MFT}}(\rho,j) = \frac12 \Phi_{\mathrm{MFT}}(\rho,j,F(\rho)).
\end{equation}
As for Markov chains (see Section~\ref{sec:Large-deviations-at}), $I_{\rm joint}^{\mathrm{MFT}}(\rho,j) =\infty$ if $j$ is not divergence
free. If $\div j=0$ then the rate function can be} written in the form~\cite{Kaiser2017a}
\begin{equation}
\label{eqn:MFT_ldp_2.5}\quad
{ I_{\rm joint}^{\mathrm{MFT}}(\rho,j)} =\frac 14
\int_\Lambda \nabla \frac{\delta \mathcal V}{\delta \rho}\cdot \chi\nabla \frac{\delta \mathcal V}{\delta \rho} \df x
+ \frac 14\int_\Lambda \nabla \varphi \cdot \chi\nabla \varphi \df x
+ \frac14 \int_\Lambda (J_F-j)\cdot\chi^{-1} (J_F-j) \df x,\quad
\end{equation}
such that a contraction {to the density only} yields
\begin{equation}
\label{eqn:MFT_ldp_2}
{ I_{\rm density}^{\mathrm{MFT}}(\rho)} =
\frac 14\int_\Lambda \nabla \frac{\delta \mathcal V}{\delta \rho}\cdot \chi\nabla \frac{\delta \mathcal V}{\delta \rho} \df x
+ \frac 14\int_\Lambda \nabla \varphi \cdot \chi\nabla \varphi \df x.
\end{equation}
The function $\varphi$ in~\eqref{eqn:MFT_ldp_2.5} and~\eqref{eqn:MFT_ldp_2} is obtained by solving
\begin{equation}
\div J_F(\rho) = 0, \qquad J_F(\rho) := \chi \nabla \varphi + J_A(\rho).
\end{equation}
Clearly the solution $\varphi$ depends on $\rho$. In essence, we have reduced the minimisation problem~\eqref{equ:l2-inf} to the
solution of this PDE. Comparing with~\eqref{eqn:final_markov_2.5}, we identify $J_F = \chi\tilde{F}^A$ in the MFT
setting; since $\tilde J^{\ss} = J_F$, this gives $\tilde{J}^{\ss} = \chi \tilde{F}^A$, so $(\tilde{J}^{\ss}-\chi F^A(\rho)) = \chi\nabla\varphi$. We obtain the
following representations for~\eqref{eqn:MFT_ldp_2.5} and~\eqref{eqn:MFT_ldp_2} reminiscent of
Proposition~\ref{prop:final_markov_2.5} and Corollary~\ref{cor:final_markov_2}.
\begin{proposition}
The {rate functional for the joint density and current in MFT, which is given by~\eqref{eqn:MFT_ldp_2.5},} can be
written in terms of the OM functional~\eqref{eqn:psi_mft} as
\begin{equation}
{ I_{\rm joint}^{\mathrm{MFT}}(\rho,j)} =\frac 12\Bigl[\Phi_{\mathrm{MFT}}(\rho,0,F^S(\rho))
+ \Phi_{\mathrm{MFT}}(\rho,\tilde J^{\ss},F^A(\rho)) + \Phi_{\mathrm{MFT}}(\rho,j,\tilde F^A)\Bigr],
\end{equation}
and~\eqref{eqn:MFT_ldp_2}, the {rate functional for the density in MFT}, is given by
\begin{equation}
\label{equ:i2-mft-control}
{ I_{\rm density}^{\mathrm{MFT}}(\rho)} =\frac 12\Bigl[\Phi_{\mathrm{MFT}}(\rho,0,F^S(\rho)) + \Phi_{\mathrm{MFT}}(\rho,\tilde
J^{\ss},F^A(\rho))\Bigr].
\end{equation}
\end{proposition}
This proposition is equivalent to Proposition 5 of~\cite{Kaiser2017a}, but has now been rewritten in the language of optimal
control theory. As discussed in~\cite{Kaiser2017a}, Equation~\eqref{equ:i2-mft-control} quantifies the extent to which breaking
detailed balance accelerates convergence of systems to equilibrium, at the hydrodynamic level. For this work, the key point is
that this result originates from Corollary~\ref{cor:final_markov_2}, which is the equivalent statement for Markov chains (without
taking any hydrodynamic limit).
\section{Consequences of the structure of the OM functional $\Phi$}
\label{sec:Cons-struct-OM}
We have shown that the rate functions for several LDPs in several different contexts depend on functionals $\Phi$ with the general
structure presented in~\eqref{eqn:Phi_function} and~\eqref{equ:legend}. In this section, we show how this structure alone is
sufficient to establish some features that are well-known in MFT. This means that these results within MFT have analogues for
Markov chains. Our derivations mostly follow the standard MFT routes~\cite{Bertini2015a}, but we use a more abstract notation to
emphasise the minimal assumptions that are required.
\subsection{Assumptions}
\label{sec:assump-min}
The following minimal assumptions are easily verified for Markov chains; they are also either assumed or easily proven for MFT.
The results of this section are therefore valid in both settings.
We consider a process described by a time-dependent density $\rho$ and current $j$, with an associated continuity equation
$\dot\rho = -\div j$ and unique steady state $\pi$. We are given a set of ($\rho$-dependent) forces denoted by $F(\rho)$, a dual
pairing $j\cdot f$ between forces and currents, and a function $\Psi(\rho,j)$ which is convex in $j$ and satisfies
$\Psi(\rho,j)=\Psi(\rho,-j)$. With these choices, the functions $\Psi^\star$ and $\Phi$ are fully specified
via~\eqref{eqn:Phi_function} and~\eqref{equ:legend}. We assume that for initial conditions chosen from the invariant measure, the
system satisfies an LDP of the form~\eqref{equ:pathwise-general} with rate function of the form~\eqref{eqn:mc_rate_functional}.
We define an adjoint process for which the probability of a path $(\rho_t,j_t)_{t\in[0,T]}$ is equal to the probability of the
time-reversed path $(\rho^*_t,j^*_t)_{t\in[0,T]}$ in the original process. As above, we define
$(\rho^*_t,j^*_t)=(\rho_{T-t},-j_{T-t})$. We assume that the adjoint process also satisfies an LDP of the
form~\eqref{equ:pathwise-general}, with rate function $I^*_{[0,T]}$. Hence we must have
\begin{equation}
\label{equ:assume-adjoint}
I^*_{[0,T]}\bigl((\rho_t, j_t)_{t\in[0,T]}\bigr) = I_{[0,T]}\bigl((\rho^*_t, j^*_t)_{t\in[0,T]}\bigr) .
\end{equation}
Moreover, we assume that $I^*_{[0,T]}$ may be obtained from $I$ by replacing the force $F(\rho)$ with some adjoint force
$F^*(\rho)$. That is,
\begin{equation}
\label{equ:assume-Fstar}
I^*_{[0,T]}\bigl((\rho_t, j_t)_{t\in[0,T]}\bigr)= I_0(\rho_0) + \frac12\int_0^T \Phi(\rho_t,j_t, F^*(\rho_t)) \df t.
\end{equation}
Here, $I_0$ is the rate function associated with fluctuations of the density $\rho$, for a system in its steady state. That is,
within the steady state, $\mathrm{Prob}(\hat\rho^{\;\!{\cal N}}\approx \rho) \asymp \exp(-{\cal N} I_0(\rho))$. For Markov chains,
$I_0=\cal F$, the free energy; for MFT we have $I_0=\cal V$, the quasipotential. In the following we refer to $I_0$ as the free
energy.
\subsection{Symmetric and anti-symmetric forces}
Define
\begin{equation}
\label{equ:Fs-Fa-gen}
F^S(\rho) := \frac12[ F(\rho) + F^*(\rho) ], \qquad F^A(\rho) := \frac12[ F(\rho) - F^*(\rho) ].
\end{equation}
As the following proposition shows, $F^S$ is connected to the gradient of the free energy (or quasipotential) $I_0$, and the
forces $F^A$ and $F^S$ satisfy a generalised orthogonality (in the sense of Proposition~\ref{prop:orth}). The proof follows
Section II.C of~\cite{Bertini2015a}, but uses only the assumptions of Section~\ref{sec:assump-min}, showing that the result
applies also to Markov chains.
\begin{proposition}
\label{prop:free_eng_balance}
The forces $F^S$ and $F^A$ satisfy
\begin{equation}
\label{equ:Fsgrad}
F^S(\rho) = -\nabla \frac{\delta I_0}{\delta \rho},
\end{equation}
and
\begin{equation}
\label{equ:FsFaOrth}
\Psi^\star\bigl(\rho, F^S(\rho) + F^A\bigr) = \Psi^\star\bigl(\rho, F^S(\rho) - F^A\bigr) .
\end{equation}%
\end{proposition}
\begin{proof}
Combining~\eqref{equ:assume-adjoint} and~\eqref{equ:assume-Fstar}, we obtain (for any path $(\rho_t,j_t)_{t\in[0,T]}$ that obeys
the continuity equation $\dot\rho = -\div j$)
\begin{equation}
\label{equ:pathrev}
I_0(\rho_0) + \frac12 \int_0^T\Phi(\rho_t,j_t,F(\rho_t))\df t
=I_0(\rho_T) + \frac12\int_0^T \Phi(\rho_{T-t},-j_{T-t},F^\ast(\rho_{T-t}))\df t.
\end{equation}
Differentiating with respect to $T$ and using~\eqref{eqn:Phi_function} together with $\Psi(\rho,j)=\Psi(\rho,-j)$
and~\eqref{equ:Fs-Fa-gen}, one has
\begin{equation*}
\dot I_0(\rho) + j\cdot F^S(\rho) + \frac12 \big[ \Psi^\star(\rho,F^*(\rho)) - \Psi^\star(\rho,F(\rho)) \big] = 0 .
\end{equation*}
Using the continuity equation and an integration by parts, one finds $\dot I_0(\rho) =j\cdot \nabla \frac{\delta I_0}{\delta
\rho}$, so that
\begin{equation*}
j\cdot \left[ F^S(\rho) + \nabla \frac{\delta I_0}{\delta \rho} \right]
+ \frac12 \big[ \Psi^\star(\rho,F^*(\rho)) - \Psi^\star(\rho,F(\rho)) \big] = 0 .
\end{equation*}
This equation must hold for all $(\rho,j)$, which means that the two terms in square brackets both vanish separately.
Combining the last equation with~\eqref{equ:Fs-Fa-gen}, we obtain~\eqref{equ:Fsgrad} and~\eqref{equ:FsFaOrth}.
\qed
\end{proof}
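Both statements of the proposition can be verified numerically for a small irreversible chain, realising the adjoint process through its rates $r^*_{xy}=\pi_y r_{yx}/\pi_x$ and using assumed standard Markov-chain forms for $F$ and $\Psi^\star$ (for Markov chains, $I_0=\mathcal F$ so that \eqref{equ:Fsgrad} reads $F^S_{xy}=\log(\rho_x/\pi_x)-\log(\rho_y/\pi_y)$); the chain and the density $\rho$ below are illustrative.

```python
import math

def stationary(rates, dt=0.01, steps=20000):
    """Explicit-Euler iteration of the master equation
        d/dt pi_y = sum_x (pi_x r_xy - pi_y r_yx)
    for a small irreducible chain; after time dt*steps the iterate has
    converged to the invariant measure to machine precision."""
    n = len(rates)
    p = [1.0 / n] * n
    for _ in range(steps):
        q = p[:]
        for x in range(n):
            for y in range(n):
                if x != y:
                    q[y] += dt * (p[x] * rates[x][y] - p[y] * rates[y][x])
        p = q
    return p

rates = [[0.0, 2.0, 1.0],      # an illustrative irreversible 3-state chain
         [1.0, 0.0, 3.0],
         [2.0, 1.0, 0.0]]
n = 3
pi = stationary(rates)
rho = [0.5, 0.3, 0.2]          # an arbitrary probability density

# adjoint (time-reversed) rates: r*_xy = pi_y r_yx / pi_x
adj = [[pi[y] * rates[y][x] / pi[x] if x != y else 0.0 for y in range(n)]
       for x in range(n)]

def force(rho, r):
    """F_xy(rho) = log(rho_x r_xy / (rho_y r_yx))  (assumed standard form)."""
    return [[math.log(rho[x] * r[x][y] / (rho[y] * r[y][x])) if x != y else 0.0
             for y in range(n)] for x in range(n)]

F, Fstar = force(rho, rates), force(rho, adj)
FS = [[0.5 * (F[x][y] + Fstar[x][y]) for y in range(n)] for x in range(n)]
FA = [[0.5 * (F[x][y] - Fstar[x][y]) for y in range(n)] for x in range(n)]

# check of (equ:Fsgrad): F^S_xy = log(rho_x/pi_x) - log(rho_y/pi_y)
errS = max(abs(FS[x][y] - (math.log(rho[x] / pi[x]) - math.log(rho[y] / pi[y])))
           for x in range(n) for y in range(n) if x != y)

def psi_star(f):
    """Psi*(rho, f) = sum_xy a_xy(rho)*(cosh(f_xy/2) - 1), with
    a_xy = 2*sqrt(rho_x r_xy rho_y r_yx)  (assumed standard form)."""
    return sum(2.0 * math.sqrt(rho[x] * rates[x][y] * rho[y] * rates[y][x])
               * (math.cosh(f[x][y] / 2.0) - 1.0)
               for x in range(n) for y in range(n) if x != y)

# check of (equ:FsFaOrth): generalised orthogonality of F^S and F^A
plus = psi_star([[FS[x][y] + FA[x][y] for y in range(n)] for x in range(n)])
minus = psi_star([[FS[x][y] - FA[x][y] for y in range(n)] for x in range(n)])
```

The chain above violates the Kolmogorov cycle condition ($r_{01}r_{12}r_{20}\neq r_{02}r_{21}r_{10}$), so $F^A\neq 0$ and the orthogonality check is non-trivial.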
Proposition~\ref{prop:free_eng_balance} also yields a variational characterisation of $I_0$. The following corollary is analogous
to Equation~(4.8) of~\cite{Bertini2015a}, as is its proof.
\begin{corollary}
\label{cor:var_free}
The free energy $I_0$ satisfies
{
\begin{equation}
\label{eqn:free_energy_var}
I_0(\hat\rho) = \inf {\frac12\int_{-\infty}^0 \Phi(\rho_t,j_t, F(\rho_t)) \df t },
\end{equation}
}
where the infimum is taken over all paths $(\rho_t, j_t)_{t\in(-\infty,0]}$ that satisfy $\dot\rho_t+\div j_t=0$, as well as
$\lim_{t\to-\infty}\rho_t = \pi$ and $\rho_0 = \hat\rho$. Moreover, the optimal path is given by the time reversal of the solution
of the adjoint dynamics $(\rho_t, -J^\ast(\rho_t))_{t\in(-\infty,0]}$.
\end{corollary}
\begin{proof}
We obtain from~\eqref{equ:pathrev} (together with~\eqref{eqn:mc_rate_functional} and~\eqref{equ:assume-adjoint}) that
{
\begin{equation*}
\frac12\int_{-\infty}^0 \Phi(\rho_t,j_t, F(\rho_t)) \df t
= I_0(\hat\rho) + \frac12\int_{-\infty}^0 \Phi(\rho_t,-j_t, F^*(\rho_t)) \df t.
\end{equation*}
}
Taking the infimum on both sides yields~\eqref{eqn:free_energy_var}; indeed the infimum of
{$\frac12\int_{-\infty}^0 \Phi(\rho_t,-j_t, F^*(\rho_t)) \df t$} is $0$, and this infimum is attained uniquely for the
optimal path for~\eqref{eqn:free_energy_var}. To see this, we note that $\Phi(\rho_t,-j_t, F^\ast(\rho_t))$ is uniquely minimised
for $j_t = -J^\ast(\rho_t)$, and $(\rho_t, -J^\ast(\rho_t))_{t\in(-\infty,0]}$ satisfies the conditions above, so the optimal path
is indeed the time-reversal of the solution of the adjoint dynamics. \qed\end{proof}
\subsection{Hamilton-Jacobi like equation for the extended Hamiltonian}
\label{sec:Hamilt-Jacobi-equat}
Another important relationship within MFT is the Hamilton-Jacobi equation~\cite[Equation~(4.13)]{Bertini2015a}. This provides a
characterisation of the quasipotential, as its maximal non-negative solution. The following formulation of that result uses only
the assumptions of Section~\ref{sec:assump-min} and therefore applies also to Markov chains. The functional
\begin{equation}
\mathbb L(\rho,j):=\frac 12 \Phi(\rho,j,F(\rho))
\label{equ:lag}
\end{equation}
can be interpreted as an extended Lagrangian (Note that $\mathbb L(\rho,j)$ should not be interpreted as a Lagrangian in the classical sense, as it depends on density and current $(\rho,j)$, rather than the pair consisting of density and associated velocity $(\rho,\dot\rho)$). {We follow Section~IV.G of~\cite{Bertini2015a}: given a sample path
$(\rho_t,j_t)_{t\in[0,T]}$, define a vector field $A_t=A_0 - \int_0^t j_s \mathrm{d}s$. The initial condition $A_0$ is chosen
so that there is a bijection between the paths $(\rho_t,j_t)_{t\in[0,T]}$ and $(A_t)_{t\in[0,T]}$. For example, in finite
Markov chains, define $\bar\rho$ as a constant density, normalised to unity, and let $A_0=\nabla h$, where $h$ solves
$\div(\nabla h) = (\rho_0-\bar\rho)$, see~\cite{Carlo2017a} for the relevant properties of these vector fields. With this
choice, and using $\dot\rho = -\div j$, one has $\rho_t=\bar\rho + \div A_t$ for all $t$, and one may also write (formally)
$A_t = \div^{-1}(\rho_t-\bar\rho)$. Comparing with~\cite[Section~IV.G]{Bertini2015a}, we write $\rho=\bar\rho+\div A$ instead
of $\rho=\div A$ since for Markov chains one has (for any discrete vector field $A$) that $\sum_x \div A(x)=0$, so it is not
possible to solve $\div A = \rho$ if $\rho$ is normalised to unity (recall that discrete vector fields have by definition
$A_{xy}=-A_{yx}$~\cite{Carlo2017a}).
The fluctuations
of $A$ are therefore determined by the fluctuations of $(\rho,j)$, so the LDP \eqref{equ:pathwise-general} implies a similar LDP for $A$, whose
rate function is
$I^{\rm ex}_{[0,T]}((A_t)_{t\in[0,T]}) = I^{\rm ex}_0(A_0) + \int_0^T \mathbb{L}^{\rm ex}(A_t,\dot A_t)\mathrm{d}t$, where $\mathbb{L}^{\rm ex}$ is {a Lagrangian that depends on $A$ and its time derivative (which we again refer to as extended Lagrangian, cf.~\cite{Bertini2015a}).}
The function $\mathbb{L}$ in (\ref{equ:lag}) is then related to $\mathbb{L}^{\rm ex}$ via the bijection between $(\rho,j)$ and $A$. Considering again the case of Markov chains, since the time
evolution of the system depends only on $\div A$ (which is $\rho-\bar\rho$) and not on $A$ itself, one sees that $\mathbb{L}^{\rm ex}(A,\dot A)$
depends only on $\div A$ and $\dot A$ (which is $-j$). Hence we write, formally,
$\mathbb L(\rho,j) = \mathbb{L}^{\rm ex}(\div^{-1}(\rho-\bar\rho),-j)$, and we recover~\eqref{equ:lag}.
Hence $\mathbb{L}$ is nothing but the extended Lagrangian $\mathbb{L}^{\rm ex}$, written in different variables: for this reason we {refer to $\mathbb{L}$ as an (extended) Lagrangian.}
To arrive at the corresponding {(extended) Hamiltonian}, one should write
$\mathbb{H}^{\rm ex}(A,\xi) = \sup_{\dot A} [ \xi \cdot \dot{A} - \mathbb{L}^{\rm ex}(A_t,\dot A_t) ]$, or equivalently }
\begin{equation}
\mathbb H(\rho,\xi) =\sup_j \bigl( j \cdot \xi - \mathbb L(\rho,j)\bigr),
\end{equation}
where $\xi$ is a conjugate field for the current $j$. We identify $\mathbb{H}$ as the scaled cumulant generating function
associated with the rate function $I_{2.5}(\rho,j)=\mathbb{L}(\rho,j)$~\cite[Section 3.1]{Touchette2009a}. Analysis of rare
fluctuations in terms of the field $\xi$ is often more convenient than direct analysis of the rate
function~\cite{Lebowitz1999a,Lecomte2007a} and is the basis of the ``$s$-ensemble'' method that has recently been exploited in a
number of physical applications (for example~\cite{Garrahan2009a,Jack2015a}). Using~\eqref{eqn:Phi_function}
and~\eqref{equ:legend}, we obtain
\begin{equation}
\label{equ:H-psi*}
\mathbb H(\rho,\xi) =\frac 12\Psi^\star(\rho,F(\rho)+2\xi) - \frac 12\Psi^\star(\rho,F(\rho)).
\end{equation}
(This generalises the definition~\eqref{equ:ham-markov}, which was restricted to Markov chains.)
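As a purely illustrative special case (not needed for what follows), take the quadratic pair $\Psi(\rho,j)=\frac{1}{2\chi}|j|^2$ and $\Psi^\star(\rho,f)=\frac{\chi}{2}|f|^2$ with a constant mobility $\chi>0$, so that $\mathbb L(\rho,j)=\frac{1}{4\chi}|j-\chi F(\rho)|^2$. The supremum over $j$ is then attained at $j=\chi F(\rho)+2\chi\xi$, and
\begin{align*}
\mathbb H(\rho,\xi)=\chi\,\xi\cdot\bigl(F(\rho)+\xi\bigr)=\frac12\Bigl[\frac{\chi}{2}\bigl|F(\rho)+2\xi\bigr|^2-\frac{\chi}{2}\bigl|F(\rho)\bigr|^2\Bigr],
\end{align*}
in agreement with~\eqref{equ:H-psi*}.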
To relate this {extended Hamiltonian} to the free energy (quasipotential), {one can define an \emph{extended Hamilton-Jacobi equation}, which, for a functional
$\cal S$, is given by}
\begin{equation}
\label{eqn:HJ_micro}
\mathbb H\left(\rho,\nabla\frac{\delta \mathcal S}{\delta\rho}\right)=0.
\end{equation}
The relation of this equation to the free energy is given by the following proposition, which mirrors Equation~(4.18)
of~\cite{Bertini2015a}, but now in our generalised setting, so that it applies also to Markov chains.
\begin{proposition}
\label{prop:HJ}
The free energy $I_0$ is the maximal non-negative solution to~\eqref{eqn:HJ_micro} which vanishes at the steady state $\pi$. In
other words, any functional $\mathcal S$ that solves~\eqref{eqn:HJ_micro} and has $\mathcal S(\pi)=0$ also satisfies
$\mathcal S\le I_0$.
\end{proposition}
\begin{proof}
From~\eqref{equ:Fs-Fa-gen},~\eqref{equ:Fsgrad},~\eqref{equ:FsFaOrth} and $\Psi^\star(\rho,F)=\Psi^\star(\rho,-F)$, one has
\begin{equation}
\label{equ:HJ-solve}
\Psi^\star(\rho,F(\rho)+2\nabla\tfrac{\delta I_0}{\delta\rho})=\Psi^\star(\rho,-F^S(\rho)+F^A(\rho))=\Psi^\star(\rho,F(\rho)).
\end{equation}
Thus~\eqref{equ:H-psi*} yields $\mathbb H\bigl(\rho,\nabla\tfrac{\delta I_0}{\delta\rho}\bigr)=0$, so $I_0$ does indeed
solve~\eqref{eqn:HJ_micro}. In addition,~\eqref{equ:HJ-solve} is valid also with $I_0$ replaced by any $\cal S$ that
solves~\eqref{eqn:HJ_micro}; combining this result with~\eqref{eqn:Phi_function} yields
\begin{equation}
\label{equ:Phi-HJ-bound}
\Phi(\rho,j,F(\rho)) = \Phi\left(\rho,j,F(\rho) +2\nabla \frac{\delta\mathcal S}{\delta\rho}\right)
+ 2 j\cdot \nabla \frac{\delta\mathcal S}{\delta\rho}\ge 2 j\cdot \nabla \frac{\delta\mathcal S}{\delta\rho},
\end{equation}
where the second step uses $\Phi\geq0$. Moreover, for any path $(\rho_t,j_t)_{t\in(-\infty,0]}$ with $\dot\rho_t+\div j_t=0$ and
$\lim_{t\to-\infty}\rho_t = \pi$, we have from~\eqref{equ:Phi-HJ-bound} that
\begin{equation}\nonumber
\qquad I_{(-\infty,0]}\bigl((\rho,j)_{t\in(-\infty,0]}\bigr)
= \frac12\int_{-\infty}^0 \Phi(\rho_t,j_t,F(\rho_t)) \df t \ge
\int_{-\infty}^0 j_t\cdot \nabla \frac{\delta\mathcal S}{\delta\rho}(\rho_t) \df t = \mathcal S(\rho_0),\qquad
\end{equation}
where the final equality uses an integration by parts, together with the continuity equation. Finally, taking the infimum over
all paths and using Corollary~\ref{cor:var_free}, one obtains $\mathcal{S}(\rho) \leq I_0(\rho)$, as claimed. \qed\end{proof}
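As an illustration of Proposition~\ref{prop:HJ} (purely for intuition), consider the quadratic case $\Psi^\star(\rho,f)=\frac{\chi}{2}|f|^2$ with constant mobility $\chi>0$, for which~\eqref{equ:H-psi*} gives $\mathbb H(\rho,\xi)=\chi\,\xi\cdot\bigl(F(\rho)+\xi\bigr)$, while~\eqref{equ:Fsgrad} reads $F^S(\rho)=-\nabla\frac{\delta I_0}{\delta\rho}$ and~\eqref{equ:FsFaOrth} reduces to $F^S(\rho)\cdot F^A(\rho)=0$. Then, with $\xi=\nabla\frac{\delta I_0}{\delta\rho}=-F^S(\rho)$,
\begin{equation*}
\mathbb H\Bigl(\rho,\nabla\frac{\delta I_0}{\delta\rho}\Bigr)
= -\chi\,F^S(\rho)\cdot\bigl(F^S(\rho)+F^A(\rho)-F^S(\rho)\bigr)
= -\chi\,F^S(\rho)\cdot F^A(\rho) = 0 ,
\end{equation*}
so the Hamilton-Jacobi equation~\eqref{eqn:HJ_micro} is verified directly in this case.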
{
\subsection{Generalisation of Lemma~\ref{lem:psi_split}}
Before ending, we note that~\eqref{equ:FsFaOrth} is analogous to Proposition~\ref{prop:orth} in the general setting of this
section, but we have not yet proved any analogue of Lemma~\ref{lem:psi_split}. Hence we have not obtained a generalisation of
Corollary~\ref{cor:two}, nor any of its further consequences. To achieve this, one requires a further assumption within the
general framework considered here, which amounts to a splitting of the Hamiltonian. This assumption holds for MFT and for Markov
chains, and is a sufficient condition for a generalised Lemma~\ref{lem:psi_split}.
To state the assumption, we consider a reversible process in which the forces are $F^S(\rho)$. (For Markov chains we should
consider the process with rates $r^S_{xy} = \frac12( r_{xy} + r_{xy}^*)$; for MFT it is the process with $J(\rho)=J^S(\rho)$ and
the same mobility $\chi$ as the original process.) We assume that such a process exists and that its Hamiltonian can be written as
$\mathbb{H}_S(\rho,\xi) = \frac12 [ \Psi^\star_S(\rho,F^S(\rho) + 2\xi) - \Psi_S^\star(\rho,F^S(\rho))]$ for some function
$\Psi^\star_S$ (compare~\eqref{equ:H-psi*} and see Section~\ref{sec:HJ-for-MC} for the case of Markov chains). Also let the
Hamiltonian for the adjoint process be $\mathbb{H}^*(\rho,\xi)$, which is constructed by replacing $F$ by $F^*$
in~\eqref{equ:H-psi*}. Then, one assumes further that
\begin{equation}
\mathbb{H}_S(\rho,\xi) = \tfrac12 [ \mathbb{H}(\rho,\xi) + \mathbb{H}^*(\rho,\xi) ] ,
\end{equation}
which may be verified to hold for Markov chains and for MFT. Writing $\xi=-F^S/2$ and using~\eqref{equ:H-psi*}
with~\eqref{equ:FsFaOrth} and $\Psi^\star(\rho,f) = \Psi^\star(\rho,-f)$, one then obtains
\begin{equation}
\Psi^\star_S(\rho,F^S(\rho)) = \Psi^\star(\rho,F(\rho)) - \Psi^\star(\rho,F^A(\rho)) ,
\end{equation}
which is the promised generalisation of Lemma~\ref{lem:psi_split}.
}
\section{Conclusion}
\label{sec:conc}
In this article, we have presented several results for dynamical fluctuations in Markov chains. The central object in our
discussion has been the function $\Phi$, which plays a number of different roles -- it is the rate function for large deviations
at level 2.5 (Equation~\eqref{eqn:ldp_2.5}), and it also appears in the rate functional for pathwise large deviation principles
(Equation~\eqref{eqn:mc_rate_functional}). These results -- derived originally by Maes and co-workers~\cite{Maes2008a,Maes2008b}
-- originate from the relationship between $\Phi$ and the relative entropy between path measures (Appendix~\ref{sec:relent}). The
canonical (Legendre transform) structure of $\Phi$ (Equation~\eqref{equ:legend}) and its relation to time reversal
(Equation~\eqref{equ:gc-finite-time}) have also been discussed before~\cite{Maes2008a}.
The function $\Phi$ depends on probability currents $j$ and their conjugate forces $f$. Our Proposition~\ref{prop:orth} and
Corollary~\ref{cor:two} show how the rate functions in which $\Phi$ appears have another level of structure, based on the
decomposition of the force $F$ into two pieces $F=F^S+F^A$, according to their behaviour under time-reversal. A similar
decomposition is applied in Macroscopic Fluctuation Theory~\cite{Bertini2015a}: the discussion of
Sections~\ref{sec:LDPs-time-averaged} and~\ref{sec:Cons-struct-OM} shows how several results of that theory -- which applies on
macroscopic (hydrodynamic) scales -- already have analogues for Markov chains, which provide microscopic descriptions of
interacting particle systems. These results -- which concern symmetries, gradient structures and (generalised) orthogonality
relationships -- show how properties of the rate functions are directly connected to physical ideas of free energy, dissipation,
and time-reversal.
Looking forward, we hope that these structures can be exploited both in mathematics and physics. From a mathematical viewpoint,
the canonical structure and generalised orthogonality relationships may provide new routes for scale-bridging calculations, just
as the geometrical structure identified by Maas~\cite{Maas2011a} has been used to develop new proofs of hydrodynamic
limits~\cite{Fathi2016a}. In physics, a common technique is to propose macroscopic descriptions of physical systems based on
symmetries and general principles -- examples in non-equilibrium (active) systems include~\cite{Toner1995a,Wittkowski2014a}.
However, this level of description leaves some ambiguity as to the best definitions of some physical quantities, such as the local
entropy production~\cite{Nardini2017a}. We hope that the structures identified here can be useful in relating such macroscopic
theories to underlying microscopic behaviour.
\paragraph{Acknowledgements}
We thank Freddy Bouchet, Davide Gabrielli, Juan Garrahan, Jan Maas, Michiel Renger and Hugo Touchette
for useful discussions. MK is supported by a scholarship from the EPSRC Centre for Doctoral Training in Statistical Applied
Mathematics at Bath (SAMBa), under the project EP/L015684/1. JZ gratefully acknowledges funding by the EPSRC through project
EP/K027743/1, the Leverhulme Trust (RPG-2013-261) and a Royal Society Wolfson Research Merit Award. {The authors thank the anonymous referees for their careful reading of the manuscript and for many helpful comments and suggestions.}
\section{Introduction}
In recent years, deep learning has been widely deployed in industrial recommender systems. In addition, due to the stringent latency requirement on returning recommendations upon receiving each request from millions or even billions of users (e.g., within hundreds of milliseconds), more and more recommendation models are first trained on the cloud (e.g., with standard learning algorithms or meta-learning algorithms) and then offloaded to mobile devices for real-time inference. Such an on-device learning paradigm leverages the natural advantage of mobile devices being close to users and at data sources, thereby reducing latency and communication overhead, mitigating the cloud-side load, and keeping raw data with sensitive contents (e.g., user behaviors) on local devices.
However, the ubiquitous issue of cross-device data heterogeneity in recommender systems makes the cloud-based global model non-optimal to directly serve each individual user. In particular, different users normally have diverse behavior patterns (e.g., daily active vs. monthly active, and different sequences of browsed, clicked, and purchased goods) and differentiated preferences (e.g., some users like sports-related goods, while others prefer foods). This implies that each individual user's local data distribution tends to deviate from the global data distribution (i.e., a mixture of all the users' data distributions). As a result, the cloud-based model, which is optimized over the global data distribution, may not perform well for every user.
To deal with cross-device data heterogeneity, fine-tuning the cloud-based model on each mobile device with the corresponding user's local samples is a potential solution, considering both effectiveness and efficiency. On the one hand, on-device fine-tuning can adapt to each user's data distribution and achieve extreme model personalization: one personalized recommendation model for one user. On the other hand, given that the size of each individual user's local dataset is small while the cloud-based model is nearly optimal, fine-tuning tends to require only a few model iterations, the overhead of which is affordable for resource-constrained mobile devices.
In this work, we focus on the basic click-through rate (CTR) prediction task in recommender systems, which predicts whether a user will click a candidate item or not given the user's profile and the user's historic behavior sequence, and identify atypical issues in on-device fine-tuning. In particular, when each individual user fine-tunes a CTR prediction model with its local samples, the model update is sparse rather than dense (as in, e.g., most computer vision applications), which was also called ``submodel update'' in~\cite{niu_mobicom20}. The major reason is that a certain user normally interacts only with a small number of items, and his/her local samples involve a small subspace of the full feature space and will update a small part of the full CTR prediction model (e.g., the embedding vectors of a few local items) during fine-tuning. In addition to sparse model update, we also observe a brand-new issue, called ``CTR drift'', which means that each individual user's local CTR (i.e., the ratio of positive samples in the local dataset for fine-tuning) may deviate from the global CTR (i.e., the ratio of positive samples in the global dataset on the cloud for training out the initial model). Specifically, we collect 3-day data of 5 million users from Mobile Taobao, which is the largest online shopping platform in China, and depict in Figure \ref{ctr_stats} the statistics of the local CTRs and the global CTR. We can observe from Figure \ref{ctr_stats} that the local CTRs of most users deviate from the global CTR.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{.//imgs//ctr_distribution.pdf}
\caption{The distribution of Mobile Taobao users' local CTRs (some tail local CTRs are not displayed), which is based on the statistics over 5 million Taobao users’ 3-day data on the homepage recommendation.}
\label{ctr_stats}
\end{figure}
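For concreteness, statistics of the kind shown in Figure~\ref{ctr_stats} can be obtained from raw click logs by simple per-user aggregation. The following Python sketch is purely illustrative (the log format is hypothetical, not the actual Mobile Taobao schema):

```python
from collections import defaultdict

def ctr_statistics(samples):
    """samples: iterable of (user_id, label) pairs, label 1 = click, 0 = no click.
    Returns (local_ctrs, global_ctr)."""
    pos = defaultdict(int)  # per-user click counts
    tot = defaultdict(int)  # per-user impression counts
    for user_id, label in samples:
        pos[user_id] += label
        tot[user_id] += 1
    local_ctrs = {u: pos[u] / tot[u] for u in tot}
    global_ctr = sum(pos.values()) / sum(tot.values())
    return local_ctrs, global_ctr

# Toy log: user "a" clicks 1 of 4 impressions, user "b" clicks 2 of 4.
log = [("a", 1), ("a", 0), ("a", 0), ("a", 0),
       ("b", 1), ("b", 1), ("b", 0), ("b", 0)]
local, glob = ctr_statistics(log)
# local["a"] = 0.25 and local["b"] = 0.5 both deviate from glob = 0.375,
# which is exactly the CTR drift discussed above.
```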
We further demonstrate that the CTR drift deteriorates, rather than benefits, the performance of the CTR prediction model after local sparse fine-tuning. In particular, for a user with his/her local CTR being higher (resp., lower) than the global CTR, the sparsely updated parameters of the fine-tuned model tend to be larger (resp., smaller) than the other parameters, which causes some items to have too high (resp., too low) predicted CTRs and thus disrupts the ranking result of the candidate items. To deal with this problem, we propose a novel label correction method for on-device fine-tuning (``LCFT'' for short), which is effective, efficient, and simple to implement. In detail, LCFT only needs to correct the labels of local samples, such that the local equivalent CTR, defined as the optimal prior CTR that minimizes the training loss over each individual user's dataset, will be equal to the global CTR. Intuitively, the local equivalent CTR determines the magnitude of the updated parameters. Therefore, the label correction procedure can ensure that the updated parameters and the other non-updated parameters will be of equal magnitude. Based on the alignment of the local equivalent CTR and the global CTR, we provide the theoretical choice of the corrected labels for each individual user. Besides this soft, closed-form setting, the corrected labels can also be treated as hyperparameters and searched over in practice.
We summarize the contributions of this work as follows:
\begin{itemize}
\item We study how to effectively and efficiently perform on-device fine-tuning for the CTR prediction model in recommender systems, and identify the bottleneck issue of CTR drift for the first time.
\item To mitigate the negative effect of CTR drift on the local sparse fine-tuning, we propose a novel label correction method LCFT, which requires each user only to change the labels of local samples before fine-tuning. Theoretically, LCFT aligns the local equivalent CTR with the global CTR and further keeps the updated parameters and the other non-updated parameters in the equal magnitude.
\item We extensively evaluate LCFT on five recommendation models over the public MovieLens 20M and Amazon Electronics datasets, as well as an industrial 7-day dataset collected from Mobile Taobao. The offline evaluation results demonstrate that LCFT outperforms cloud-based learning and purely on-device fine-tuning, validating the necessity of label correction.
\item We deploy LCFT in the homepage recommendation of Mobile Taobao for online A/B testing. The online results reveal that LCFT can indeed improve the personalized recommendation performance in terms of both CTR and user activeness. Specifically, LCFT improves CTR by 1.79\% compared with cloud-based learning.
\end{itemize}
\section{Related Work}
\subsection{On-Device Inference and Fine-Tuning}
With the rapid development of mobile devices and the advance of model compression algorithms, how to enable on-device learning emerges as a hot and promising topic.
Much previous work has been devoted to on-device inference, where models are first trained on the cloud and then offloaded to mobile devices for inference. \citet{deepeye} and \citet{deepmon} designed DeepEye and DeepMon to support the on-device inference of computer vision models. Siri, the voice assistant developed by Apple, adopted on-device inference to improve the text-to-speech synthesis process \cite{siri1}. \citet{lv_walle} built an end-to-end, general-purpose, and large-scale production system Walle, having deployed many recommendation, computer vision, and natural language tasks to mobile APPs.
Besides on-device inference, on-device fine-tuning is also strongly motivated to tackle data heterogeneity among users. In particular, it is normally non-optimal to directly use the cloud-based global model, trained by standard algorithms or meta-learning algorithms~\cite{maml, person_maml}, to serve each individual user in real time. The technique of fine-tuning, which was initially proposed in the context of big-model pre-training to quickly adapt to a specific downstream task~\cite{bert, mae}, is convenient and efficient to apply on resource-constrained mobile devices. For example, \citet{deeptype} focused on the next-word prediction task and proposed to leverage on-device fine-tuning for model personalization, thereby improving prediction accuracy. Their focus was on how to reduce overhead by vocabulary and model compression. In contrast, we focus on the CTR prediction task in recommender systems and consider how to address the newly identified CTR drift for unbiased optimization.
\subsection{Cross-Device Learning}
In addition to on-device learning, where the learning tasks on different mobile devices are independent of each other, another line of existing work focused on cross-device learning, where multiple mobile devices perform joint optimization. Federated learning~\cite{McMahan_aistats17, li_mlsys20,cho_corr20, Li_iclr20, Karimireddy_icml20} is the most popular cross-device learning framework, which allows many users with mobile devices to collaboratively train a global model under the coordination of a cloud server without sharing their local data, such that data privacy can be well protected. Some other work studied different collaboration mechanisms between the cloud and multiple mobile devices to achieve model personalization. \citet{yan_kdd22} proposed MPDA, which requires a certain user to retrieve some similar data from other users. From the perspective of domain adaptation, other users' data function as large-scale source domains and are retrieved to augment the user's local data as the target domain. \citet{gu_alibaba21} proposed CoDA, which lets each user train an on-device, two-class classifier for data augmentation.
Complementary to federated learning, which trains out a global model without sharing raw data, on-device fine-tuning can be further applied for model personalization. In contrast to data sharing based work (e.g., \citet{yan_kdd22, gu_alibaba21}), on-device fine-tuning allows each individual user to use only his/her data on local devices, which preserves data privacy and is easy to be deployed in practice.
\subsection{Personalized Recommender Systems}
The basic task of recommender systems is to rank different candidate items according to the metric of CTR. Before deep learning was introduced in recommender systems, logistic regression (LR) was the widely used shallow model. Later, \citet{net_wd} proposed Wide\&Deep, which combines the memorization strength of LR and the generalization ability of deep neural networks. DeepFM~\cite{net_fm} replaced the wide part with a factorization machine. PNN~\cite{net_pnn1,net_pnn2} introduced a product layer before the multi-layer perceptron. Deep interest network (DIN)~\cite{net_din} added an attention layer to extract the relevance of a user's historical behaviors to a candidate item. For on-device recommendation, \citet{net_edgerec} designed EdgeRec, which introduced several types of real-time user behavior context, such as duration of item exposure, scroll speed and scroll duration for item exposure, and deleting reason. EdgeRec has achieved remarkable performance improvement and has been widely deployed in Mobile Taobao.
The key focus of previous work in recommender systems was to design diverse model structures to better represent user data or to introduce more personalized user features. However, it is difficult for a single global recommendation model to be optimal on each individual user's local data, making model personalization a new trend. This work studies exactly this direction through on-device fine-tuning.
\section{Problem Formulation}
There are $u$ users with mobile devices in total, denoted as $\{1,2,\cdots,u\}$. The dataset of user $i$ for local fine-tuning is denoted as $D_i$. Considering data heterogeneity in practice, different users' datasets tend to follow different distributions. In this work, we focus on the CTR prediction task in recommender systems, which is a standard two-class classification problem. For user $i$'s dataset $D_i$, we let $n_i^+$ denote the size of positive samples, let $n_i^-$ denote the size of negative samples, and let $w_i \overset{\triangle}{=} \frac{n_i^+}{n_i^+ + n_i^-}$ denote the ratio of positive samples or the local CTR. The optimization objective of user $i$ is
\begin{align}\label{local_opt}
L_i = \min_{{\bf h}_i} \sum_{(x,y)\in D_i} l\left({\bf h}_i\left(x\right),y\right),
\end{align}
where ${\bf h}_i$ denotes the local recommendation model, $x$ and $y$ denote the input features and the label of a training sample, and $l(\cdot,\cdot)$ denotes the loss function, typically the mean square error or the cross-entropy loss.
\subsection{Cloud-Based Learning for Model Initialization}
The initial model of each user's local fine-tuning normally comes from cloud-based training, which optimizes a global model over all the users' datasets with the objective:
\begin{align}\label{cloud_opt}
L = \min_{{\bf h}}\frac{1}{u} \sum_{i=1}^u \sum_{(x,y)\in D_i} l\left({\bf h}\left(x\right),y\right).
\end{align}
Equation \ref{cloud_opt} reveals that the discrepancy among different $D_i$'s (i.e., cross-device data heterogeneity) limits the performance of the cloud-based global model ${\bf h}$ when serving each user. Formally, the globally optimal model ${\bf h}$ may be non-optimal for each user's dataset $D_i$, and $L\ge \frac{1}{u}\sum_{i=1}^u L_i$.
\subsection{On-Device Fine-Tuning with CTR Drift Problem}
To break the dilemma of cloud-based learning, on-device fine-tuning is a potential solution. In particular, each user $i$ will continue to optimize the global model ${\bf h}$ with its local dataset $D_i$ by minimizing equation \ref{local_opt} independently. In the context of recommender systems, we observe two atypical phenomena in practice: (1) {\bf Sparse Model Update}: each user fine-tunes the deep recommendation model with its local dataset, updating only part of the model parameters rather than the full model. Specifically, for the embedding layer in the CTR prediction model, only the embedding vectors corresponding to the items in each user $i$'s local dataset $D_i$ will be updated; and for the other layers (e.g., MLP with ReLU activation function), only part of the neurons with non-zero inputs will be updated; and (2) {\bf CTR Drift}: each individual user $i$'s local CTR $w_i = \frac{n_i^+}{n_i^+ + n_i^-}$ deviates from the global CTR $w_g \overset{\triangle}{=} \frac{\sum_{i=1}^{u} n_i^+}{\sum_{i=1}^{u} \left(n_i^+ + n_i^-\right)}$, or the ratio of positive samples in the local dataset $D_i$ for on-device fine-tuning differs from the ratio of positive samples in the global dataset $\bigcup_{i=1}^{u} D_i$ for training out the initial model on the cloud. Specifically, by analyzing over 5 million users' data collected from Mobile Taobao in the period of 3 days, we find that the users' local CTRs follow a long-tail distribution as shown in Figure \ref{ctr_stats}, and the local CTRs of most users are far from the global CTR. Further, we find that the issue of CTR drift will seriously degrade the performance of each user's sparse fine-tuning. We give detailed illustrations as follows.
From the perspective of a certain user, if its local CTR is higher (resp., lower) than the global CTR, the updated model parameters tend to be larger (resp., smaller) than those parameters that are not updated. The discrepancy between the updated and the non-updated parameters will lead to bad prediction performance, because the fine-tuned model tends to output higher (resp., lower) CTR predictions for the items involving more updated parameters. As a result, even if the fine-tuned model has a higher test accuracy and a lower test loss, the ranking result of different items for generating final recommendations may become worse. We take an example for more intuitive illustration.
\begin{example}\label{ctrdrift_ex}
The global CTR is 0.5, and user $i$'s local CTR is 0.25. For user $i$, suppose that the true CTRs of three items $I_1$, $I_2$, and $I_3$ are $0.3$, $0.1$, and $0.2$, respectively, and the CTRs of the three items predicted by the cloud-based model are $0.5$, $0.35$, and $0.6$, respectively. Both $I_1$ and $I_3$ involve $m$ samples in the local dataset $D_i$, and are clicked by $0.3m$ and $0.2m$ times, respectively, while $I_2$ does not appear in $D_i$. Assume that by on-device fine-tuning, the local model can precisely predict the CTRs of $I_1$ and $I_3$ while none of the model parameters related to $I_2$ is updated. Then, the CTRs of $I_1$, $I_2$, and $I_3$ predicted by the fine-tuned model become $0.3$, $0.35$, and $0.2$, respectively.
\end{example}
In Example \ref{ctrdrift_ex}, the ranking result of the cloud-based model is $\{I_3, I_1, I_2\}$, whereas the ranking result of the fine-tuned model is $\{I_2, I_1, I_3\}$. Compared with the true ranking result $\{I_1, I_3, I_2\}$, the fine-tuned model correctly ranks $I_1$ and $I_3$, but incorrectly ranks $I_2$ in the first place. We note that the ranking result after on-device fine-tuning even becomes worse, although the fine-tuned model has a lower test loss than the cloud-based model.
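The ranking degradation in Example~\ref{ctrdrift_ex} is easy to verify programmatically; the following sketch (illustrative only) simply re-ranks the three items under each set of predicted CTRs:

```python
def rank(ctrs):
    """Return item names sorted by predicted CTR, highest first."""
    return sorted(ctrs, key=ctrs.get, reverse=True)

true_ctr = {"I1": 0.3, "I2": 0.1, "I3": 0.2}
cloud_pred = {"I1": 0.5, "I2": 0.35, "I3": 0.6}
# Naive fine-tuning fits I1 and I3 to their local click ratios (0.3, 0.2)
# but leaves I2 untouched at the cloud prediction.
finetuned_pred = {"I1": 0.3, "I2": 0.35, "I3": 0.2}

rank(true_ctr)        # ["I1", "I3", "I2"]
rank(cloud_pred)      # ["I3", "I1", "I2"]
rank(finetuned_pred)  # ["I2", "I1", "I3"]: I2 is wrongly promoted to the top
```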
\section{Algorithm Design and Implementation}
To mitigate the effect of CTR drift on the local fine-tuning, we propose a novel label correction method, which is simple in implementation and quite effective and efficient. The key idea is to correct the labels of local samples such that the equivalent CTR with respect to the local optimization objective, formally defined in Definition \ref{eq_ctr}, coincides with the global CTR after label correction. Therefore, the discrepancy between the updated and the non-updated model parameters after on-device fine-tuning can also be mitigated.
\begin{definition}\label{eq_ctr}
User $i$'s equivalent CTR ${y^0_i}^*$ is the prior CTR that minimizes the loss over its training dataset, formally,
\begin{align}
{y^0_i}^* = \arg\min_{y^0_i} \sum_{(x,y) \in D_i} l(y^0_i, y),
\end{align}
where $L_i^0\overset{\triangle}{=} \sum_{(x,y) \in D_i} l(y^0_i, y)$ denotes the training loss by using the prior CTR $y_i^0$ as the predicted labels.
\end{definition}
By Definition \ref{eq_ctr}, we next calculate the equivalent CTR ${y^0_i}^*$ for user $i$. We consider that $l(\cdot,\cdot)$ takes the mean square error or the cross-entropy loss function. We let $\alpha_i$ and $\beta_i$ ($\alpha_i>\beta_i$) denote the corrected labels of a positive sample and a negative sample, respectively. First, for the mean square error, the training loss is
\begin{equation}
\begin{aligned}
L_i^0 =& n_i^+ \left(\alpha_i - y_i^0 \right)^2 + n_i^- \left(\beta_i-y_i^0\right)^2.
\end{aligned}
\end{equation}
To minimize $L_i^0$, we let $\frac{\partial L_i^0}{\partial y_i^0} = 0$ and have
\begin{align}
-2n_i^+(\alpha_i-y_i^0) - 2n_i^-(\beta_i-y_i^0) = 0.
\end{align}
Given the local CTR $w_i = \frac{n_i^+}{n_i^+ + n_i^-}$, we have the equivalent CTR ${y_i^0}^*$ in the formula of the local CTR $w_i$ and the corrected labels $\alpha_i$ and $\beta_i$:
\begin{align}\label{mse_label}
{y_i^0}^* = w_i \alpha_i + (1-w_i) \beta_i.
\end{align}
Similarly, for the cross-entropy loss, the training loss is
\begin{equation}
\begin{aligned}
L_i^0 = -&n_i^+ \left(\alpha_i \ln{y_i^0} + (1-\alpha_i)\ln{(1-y_i^0)} \right) \\
-&n_i^- \left(\beta_i \ln{y_i^0} + (1-\beta_i)\ln{(1-y_i^0)} \right).
\end{aligned}
\end{equation}
We minimize $L_i^0$ by letting $\frac{\partial L_i^0}{\partial y_i^0} = 0$ and have
\begin{align}
n_i^+\left( \frac{\alpha_i}{y_i^0} - \frac{1-\alpha_i}{1-y_i^0} \right)
+ n_i^- \left( \frac{\beta_i}{y_i^0} - \frac{1-\beta_i}{1-y_i^0} \right) = 0.
\end{align}
Given $w_i = \frac{n_i^+}{n_i^+ + n_i^-}$, we have the equivalent CTR
\begin{align}\label{ce_label}
{y_i^0}^* = w_i \alpha_i + (1-w_i) \beta_i,
\end{align}
which is in the same format as for the mean square error.
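As a numerical sanity check (illustrative only, with hypothetical values of $w_i$, $\alpha_i$, $\beta_i$), one can recover the equivalent CTR of Definition~\ref{eq_ctr} by direct minimization over a grid and compare it with the closed form $w_i\alpha_i+(1-w_i)\beta_i$, for both losses:

```python
import math

def mse_loss(y0, w, alpha, beta):
    # per-sample squared-error training loss for a constant prior prediction y0
    return w * (alpha - y0) ** 2 + (1 - w) * (beta - y0) ** 2

def ce_loss(y0, w, alpha, beta):
    # per-sample cross-entropy training loss for a constant prior prediction y0
    return -(w * (alpha * math.log(y0) + (1 - alpha) * math.log(1 - y0))
             + (1 - w) * (beta * math.log(y0) + (1 - beta) * math.log(1 - y0)))

def equivalent_ctr(loss, w, alpha, beta, steps=20001):
    # brute-force minimisation of the training loss over a grid in (0, 1)
    grid = [(k + 0.5) / steps for k in range(steps)]
    return min(grid, key=lambda y0: loss(y0, w, alpha, beta))

w, alpha, beta = 0.25, 2.0, 0.0           # hypothetical local CTR and labels
closed_form = w * alpha + (1 - w) * beta  # = 0.5
for loss in (mse_loss, ce_loss):
    assert abs(equivalent_ctr(loss, w, alpha, beta) - closed_form) < 1e-3
```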
After obtaining user $i$'s equivalent CTR ${y_i^0}^*$ in equation \ref{mse_label} or \ref{ce_label}, we let it to be equal to the global CTR $w_g$:
\begin{align}\label{CTR_l=g}
w_i \alpha_i + (1 - w_i) \beta_i = w_g.
\end{align}
We can then obtain the corrected labels $\alpha_i$ and $\beta_i$ from the global CTR $w_g$ and the local CTR $w_i$. Since equation~\ref{CTR_l=g} is underdetermined, we give two natural solutions, obtained by keeping the label of negative samples unchanged ($\beta_i = 0$) or keeping the label of positive samples unchanged ($\alpha_i = 1$), and we have
\begin{align}\label{sol_label}
\left\{
\begin{aligned}
\alpha_i &= \frac{w_g}{w_i} \\
\beta_i &= 0
\end{aligned}
\right.
\ \ \ \text{or} \ \ \
\left\{
\begin{aligned}
\alpha_i &= 1 \\
\beta_i &= \frac{w_g - w_i}{1-w_i}
\end{aligned}
\right.
.
\end{align}
To intuitively demonstrate the effect of label correction, we still examine Example \ref{ctrdrift_ex} using the first choice of corrected labels\footnote{For the second choice of label correction, the local predicted CTRs of $I_1$ and $I_3$ become 0.53 and 0.47, and the ranking result is also correct.} $\alpha_i = \frac{w_g}{w_i}$ and $\beta_i = 0$. Given $w_i = 0.25$ and $w_g=0.5$ in Example \ref{ctrdrift_ex}, user $i$ will correct the positive label to $\alpha_i = 2$ and keep the negative label unchanged $\beta_i = 0$, which implies that the labels of $I_1$-related and $I_3$-related clicked samples become 2, while the labels of non-clicked samples keep 0. By on-device fine-tuning, the local model learns the optimal CTRs of $I_1$ and $I_3$, which are 0.6 and 0.4, respectively, and the predicted CTR of $I_2$ will remain 0.35. After label correction, the fine-tuned model can correctly rank the candidate items as $\{I_1, I_3, I_2\}$.
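The two solutions in equation \ref{sol_label} can be checked mechanically: for any $w_i$ and $w_g$, both choices satisfy the alignment constraint of equation \ref{CTR_l=g}. A minimal Python sketch, using the values from the example above:

```python
def soft_correction_1(w_local, w_global):
    # Keep negative labels at 0 and scale positive labels.
    return w_global / w_local, 0.0

def soft_correction_2(w_local, w_global):
    # Keep positive labels at 1 and shift negative labels.
    return 1.0, (w_global - w_local) / (1.0 - w_local)

w_i, w_g = 0.25, 0.5                     # values from the example
for correct in (soft_correction_1, soft_correction_2):
    a, b = correct(w_i, w_g)
    # Both choices satisfy w_i * a + (1 - w_i) * b = w_g.
    assert abs(w_i * a + (1 - w_i) * b - w_g) < 1e-12

print(soft_correction_1(w_i, w_g))   # alpha = 2.0, beta = 0.0
print(soft_correction_2(w_i, w_g))   # alpha = 1.0, beta = 1/3
```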
\begin{algorithm}[!t]
\caption{Label Correction Based Fine-Tuning (LCFT)}\label{lcft}
\begin{algorithmic}[1]
\REQUIRE The cloud-based model ${\bf h}$, the global CTR $w_g$;
\FOR{each client $i\in \{1,2,\cdots, u\}$ in parallel}
\STATE Collects training dataset $D_i$ generated on the device and obtains some statistics: the size of positive samples $n_i^+$, the size of negative samples $n_i^-$, and the local CTR $w_i \leftarrow \frac{n_i^+}{n_i^+ + n_i^-}$;
\STATE* {\tt /* Label correction */}
\STATE \colorbox{blue!30}{Gets $\alpha_i,\beta_i$ from the cloud;\ \textbf{(Hard Correction)}}
\IF{$w_i > w_g$}
\STATE \colorbox{red!30}{$\alpha_i \leftarrow \frac{w_g}{w_i}$, $\beta_i\leftarrow 0$;\ \textbf{(Soft Correction 1)}}
\STATE \colorbox{green!30} {$\alpha_i \leftarrow 1$, $\beta_i\leftarrow \frac{w_g - w_i}{1-w_i}$;\ \textbf{(Soft Correction 2)}}
\ELSE
\STATE \colorbox{red!30} {$\alpha_i \leftarrow 1$, $\beta_i\leftarrow \frac{w_g - w_i}{1-w_i}$;\ \textbf{(Soft Correction 1)}}
\STATE \colorbox{green!30}{$\alpha_i \leftarrow \frac{w_g}{w_i}$, $\beta_i\leftarrow 0$;\ \textbf{(Soft Correction 2)}}
\ENDIF
\STATE Corrects the labels of positive and negative samples in $D_i$ to $\alpha_i$ and $\beta_i$, respectively;
\STATE* {\tt /* Fine-tuning */}
\STATE Initializes the local model ${\bf h}_i \leftarrow {\bf h}$;
\STATE Fine-tunes ${\bf h}_i$ over $D_i$ with corrected labels;
\ENDFOR
\end{algorithmic}
\end{algorithm}
We finally present the implementation details of the proposed label correction based fine-tuning method (LCFT) in Algorithm \ref{lcft}. Each user $i$ first collects the training data $D_i$ generated on the mobile device and obtains some statistical information (line 2). Based on the statistics and equation \ref{sol_label}, user $i$ obtains the corrected labels and then performs label correction for $D_i$ (lines 3-11). Finally, each user $i$ performs on-device fine-tuning, i.e., first initializes ${\bf h}_i$ with the cloud-based global model ${\bf h}$ and then fine-tunes ${\bf h}_i$ over $D_i$ with the corrected labels.
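One client's pass of Algorithm \ref{lcft} can be sketched as follows; here \texttt{fine\_tune} and \texttt{model} are placeholders for the actual on-device trainer and the cloud-based model, which are not specified in this sketch:

```python
def lcft_client(samples, w_global, fine_tune, model):
    """One client's pass of Algorithm 1 (sketch).

    `samples` is a list of (features, label) pairs with labels in {0, 1};
    `fine_tune` and `model` are placeholders, not the actual trainer.
    """
    n_pos = sum(1 for _, label in samples if label == 1)
    n_neg = len(samples) - n_pos
    w_local = n_pos / (n_pos + n_neg)        # local CTR
    if w_local > w_global:                   # soft correction 1 branch
        alpha, beta = w_global / w_local, 0.0
    else:
        alpha, beta = 1.0, (w_global - w_local) / (1.0 - w_local)
    # Relabel: positives -> alpha, negatives -> beta.
    corrected = [(x, alpha if y == 1 else beta) for x, y in samples]
    return fine_tune(model, corrected), (alpha, beta)

# Toy demo: local CTR 0.25 vs. global CTR 0.5.
demo = [("x1", 1), ("x2", 0), ("x3", 0), ("x4", 0)]
_, (a, b) = lcft_client(demo, 0.5, lambda m, d: d, model=None)
print(a, b)  # alpha = 1.0, beta = 1/3 (soft correction 1, w_i < w_g case)
```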
In practice, we propose three label correction strategies for online deployment, two soft correction strategies and one hard correction strategy, as highlighted in different colors in Algorithm \ref{lcft}. (1) For the two soft choices of corrected labels in equation \ref{sol_label}, we can determine which specific choice to use based on offline experiments before online deployment. Intuitively, soft correction 1 reduces the gap between the positive and negative labels, makes the loss function smoother, and is suitable for the case where the cloud-based model is close to the locally optimal model. In contrast, soft correction 2 amplifies the gap and is suitable for the case where the cloud-based model is far from the locally optimal model. (2) For the hard correction strategy, we can determine the corrected labels $\alpha_i,\beta_i$ based on offline experiments as well as the local CTR of each user during online fine-tuning. In particular, we can collect user logs and the corresponding item features to build an offline dataset. Based on the offline dataset, we can group the users by their local CTRs (i.e., different user groups have different CTR intervals) and grid-search the optimal choice of label correction for each group. The cloud server then stores the CTR intervals and the corresponding corrected labels. During online deployment, the mobile device sends its latest local CTR, and the cloud server returns the corresponding corrected labels.
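The cloud-side hard-correction lookup can be sketched as an interval table; all interval bounds and label pairs below are hypothetical placeholders for the grid-searched values:

```python
import bisect

# Hypothetical lookup table built offline: upper bounds of the local-CTR
# intervals and, per interval, the grid-searched corrected labels
# (alpha, beta).  All numbers here are placeholders.
ctr_bounds = [0.1, 0.3, 0.6, 1.0]
corrected_labels = [(1.0, 0.02), (2.0, 0.0), (1.0, 0.1), (0.8, 0.0)]

def lookup_labels(local_ctr):
    """Cloud-side handler: map a device's latest local CTR to the
    corrected labels stored for its CTR interval."""
    idx = bisect.bisect_left(ctr_bounds, local_ctr)
    return corrected_labels[idx]

print(lookup_labels(0.25))  # local CTR in (0.1, 0.3] -> (2.0, 0.0)
```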
\section{Offline and Online Evaluations}
We extensively evaluate the proposed label correction design over both public and industrial datasets. We also deploy our design in the recommender system of Mobile Taobao and conduct online A/B testing.
\subsection{Evaluation on Public Datasets}
We first evaluate over two public datasets. The statistics about users, samples, and the global CTRs are shown in Table \ref{data-stats}. The first dataset is {\bf MovieLens 20M}\footnote{https://grouplens.org/datasets/movielens/20m/}~\cite{movielens}, which contains 20,000,263 ratings from 138,493 users for 27,278 movies with 21 categories. The original user ratings of movies range from 0 to 5. We label the samples with the ratings of 4 and 5 to be positive and label the rest to be negative. Regarding the features of each user’s sample, we take the user’s ID, the historical sequence of positively rated movies, the historical sequence of negatively rated movies, as well as a candidate movie to be recommended and its tag genome. Each movie is represented by a unique ID and a category ID. The tag genome of a candidate movie, provided by the dataset, is a 1,128-dimensional vector and comprises the candidate movie’s relevance scores with respect to all tags. For the split of each user’s training and test sets, we take the samples with the timestamps no more than 1,225,642,324 into the training set and take the remaining samples into the test set. We evaluate on the 5,677 users who have both training and testing data. The second dataset is {\bf Amazon Electronics}\footnote{https://jmcauley.ucsd.edu/data/amazon/}, which contains 1,689,188 reviews contributed by 192,403 users for 63,001 products with 1,361 categories. For each user’s sample, the label is whether a candidate (i.e., reviewed or negatively sampled) product is reviewed or not, and the features include the user ID, the historical sequence of reviewed products, and the candidate product. Each product is represented by a product ID and a category ID. Regarding the split of the training and test sets, the samples with the timestamps no more than 1,385,078,400 fall into the training set, while the rest falls into the test set. We evaluate on the 180,342 users who have both training and testing data.
\begin{table}[!t]
\caption{Statistics of two public datasets and one industrial dataset after pre-processing.}\label{data-stats}
\begin{center}
\resizebox{\columnwidth}{!}{
\begin{tabular}{cccc}
\toprule
& \#\ Users & \#\ Samples & Global CTR \\
\cmidrule{2-4}
MovieLens & 5,677 & 3,229,373 & 0.51 \\
Amazon & 180,342 & 6,227,300 & 0.17 \\
Mobile Taobao & 30,682 & 23,655,827 & 0.04 \\
\bottomrule
\end{tabular}
}
\end{center}
\end{table}
\begin{table*}[!t]
\caption{Offline user-level average AUCs of LCFT and the baselines.}
\label{public_aucs}
\centering
\resizebox{1.95\columnwidth}{!}{
\begin{tabular}{cccccccc}
\toprule
Dataset & Model & Cloud & Local & LCFT & Local vs. Cloud & LCFT vs. Cloud & LCFT vs. Local \\
\cmidrule{1-8}
\multirow{5}{*}{MovieLens} & LR & 0.624 & 0.624 & 0.624 & +0.00\% & +0.01\% & +0.01\% \\
& Wide\&Deep & 0.652 & 0.657 & 0.660 & +0.77\% & +1.23\% & +0.46\% \\
& DeepFM & 0.661 & 0.665 & 0.668 & +0.61\% & +1.06\% & +0.45\% \\
& PNN & 0.671 & 0.675 & 0.676 & +0.60\% & +0.75\% & +0.15\% \\
& DIN & 0.681 & 0.684 & 0.686 & +0.44\% & +0.73\% & +0.29\% \\
\cmidrule{1-8}
\multirow{5}{*}{Amazon} & LR & 0.678 & 0.679 & 0.679 & +0.04\% & +0.05\% & +0.00\% \\
& Wide\&Deep & 0.764 & 0.772 & 0.778 & +1.05\% & +1.83\% & +0.78\% \\
& DeepFM & 0.724 & 0.725 & 0.725 & +0.11\% & +0.15\% & +0.03\% \\
& PNN & 0.755 & 0.754 & 0.755 & {\color{green}-0.07\%} & +0.07\% & +0.13\% \\
& DIN & 0.791 & 0.791 & 0.794 & {\color{green}-0.11\%} & +0.38\% & +0.38\% \\
\cmidrule{1-8}
Taobao & EdgeRec & 0.614 & 0.614 & 0.617 & {\color{green}-0.05\%} & +0.49\% & +0.49\% \\
\bottomrule
\end{tabular}
}
\end{table*}
We take five representative CTR prediction models: {\bf LR}, {\bf Wide\&Deep}, {\bf DeepFM}, {\bf PNN}, and {\bf DIN}. We also introduce two baselines for comparison. One baseline is {\bf cloud-based learning} (abbreviated as ``{\bf Cloud}''), which trains the global model over all the users' data and serves to verify the necessity of on-device fine-tuning; the other baseline is {\bf purely local fine-tuning} (abbreviated as ``{\bf Local}''), which directly lets each mobile device fine-tune the cloud-based model over the user's local dataset and serves to verify the necessity of label correction in our design {\bf LCFT}.
Regarding the experimental settings, we choose mini-batch SGD as the optimization algorithm. For cloud-based learning over the MovieLens dataset, we set the batch size to 32 and train 2 epochs with the learning rate starting at 1 and decaying by 0.1 every epoch. For cloud-based learning over the Amazon dataset, we set the batch size to 1,024 and take the same settings for the other hyperparameters. For on-device fine-tuning over the two public datasets, we set the batch size to 32 and let each user train 1 epoch with the learning rate of 0.01 by default. Specifically, for the Amazon dataset, we set the batch size to 16 in the fine-tuning of LR, and set the batch size to 4 and the learning rate to 0.001 in the fine-tuning of PNN. In addition, for offline evaluation, we adopt the standard metric for CTR prediction, the Area Under the ROC Curve (AUC). We use the user-level average AUC~\cite{gauc1,gauc2}, defined as
\begin{align}
AUC_{Avg} = \frac{\sum_{i=1}^u m_i AUC_i}{\sum_{i=1}^u m_i},
\end{align}
where $m_i$ denotes the size of user $i$'s test set, and $AUC_i$ is the AUC over user $i$'s individual test set.
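The user-level average AUC is a direct weighted mean; the per-user AUCs and test-set sizes below are hypothetical and serve only to illustrate the computation:

```python
def user_level_average_auc(test_sizes, aucs):
    """Weighted average of per-user AUCs with weights m_i (test sizes)."""
    return sum(m * a for m, a in zip(test_sizes, aucs)) / sum(test_sizes)

# Hypothetical per-user results for three users.
m = [100, 50, 10]
per_user_auc = [0.70, 0.60, 0.90]
print(user_level_average_auc(m, per_user_auc))  # (70 + 30 + 9) / 160
```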
\begin{figure}[!t]
\centering
\subfigure[Movielens]{
\includegraphics[width=0.472\columnwidth]{.//imgs//movie_gauc_gap.pdf}
}
\subfigure[Amazon]{
\includegraphics[width=0.472\columnwidth]{.//imgs//amazon_gauc_gap.pdf}
}
\caption{The improvement of LCFT over purely local fine-tuning for different CTR drifts.}\label{datasets_gaps}
\end{figure}
\begin{figure}[!t]
\centering
\subfigure[Movielens]{
\includegraphics[width=0.472\columnwidth]{.//imgs//movielens_bar.pdf}
}
\subfigure[Amazon]{
\includegraphics[width=0.472\columnwidth]{.//imgs//amazon_bar.pdf}
}
\caption{Proportions of users whose test AUCs increase, stay the same, or decrease compared with cloud-based learning.}\label{lmt_stats}
\end{figure}
We finally present the evaluation results. We first show the average test AUCs of LCFT and the baselines as well as the improvement in Table \ref{public_aucs}. We can observe that LCFT outperforms all the baselines over five models and two public datasets. In contrast, purely local fine-tuning is even worse than cloud-based training on several tasks due to the CTR drift issue. These results demonstrate the necessity of label correction for effective on-device fine-tuning. We further evaluate the effect of different CTR drifts on the performance of LCFT and plot the results in Figure \ref{datasets_gaps}. The global CTRs in the MovieLens and Amazon experiments are 0.51 and 0.17, respectively. For the experiments over the MovieLens dataset, we divide the local CTRs into 10 intervals, and for each interval, we average the improvements of LCFT over purely local fine-tuning for those users whose local CTRs fall into the interval. For the experiments over the Amazon dataset, the negative samples are generated by sampling, and thus the local CTRs are discrete. We group the users based on their negative sampling ratios, and for each group, we still average the improvements of LCFT for all the users in the group. From Figure \ref{datasets_gaps}, we can observe that as the gap between the local CTR and the global CTR becomes larger (i.e., with higher CTR drift), the improvement of LCFT over the purely local fine-tuning generally becomes more evident, especially in the experiments over the Amazon dataset. Such a key observation reveals that the effectiveness of local fine-tuning indeed suffers from the CTR drift. We also depict in Figure \ref{lmt_stats} the proportions of the users whose test AUCs increase, stay the same, and decrease, thereby demonstrating the advantage of LCFT over the cloud-based learning at the user level. We can see that the proportion of the users whose AUCs increase is significantly higher than the proportion of the users whose AUCs decrease.
\subsection{Evaluation on Mobile Taobao Dataset}
\begin{figure}[!t]
\centering
\includegraphics[width = 0.5\columnwidth]{.//imgs//taobao.jpg}
\vspace{0.5em}
\caption{Homepage recommendation of Mobile Taobao.} \label{tb_app}
\vspace{-0.5em}
\end{figure}
We also collect a practical dataset from Mobile Taobao and conduct offline experiments. In what follows, we introduce the application scenario, the dataset, the experimental setups, and the evaluation results.
The application scenario is the homepage of Mobile Taobao, where different items are ranked based on each user's preferences, as shown in Figure \ref{tb_app}. Items are ranked according to their predicted CTRs, so the basic learning task is CTR prediction. The input data fields include the user feature, the item exposure user action feature, the item page-view user action feature, and the item feature. We take the recommendation model EdgeRec~\cite{net_edgerec}, which has been offloaded to the Mobile Taobao APP for on-device inference. EdgeRec contains an embedding layer, a GRU layer, an attention layer, and fully connected (i.e., MLP) layers. In particular, the GRU layer first encodes the user exposure and behavior sequences, and the target item then attends to the encoded sequences.
The dataset is collected from March 1, 2021 to March 7, 2021, 7 days in total. For the split of each user's training and test sets, we take the first 5 days of samples for training and the last 2 days of samples for testing. We keep a pool of 30,682 users who have more than 256 training samples.
Regarding the fine-tuning settings, we use Adam as the optimization algorithm. We set the batch size to 32 and let each user train 3 epochs with the learning rate of 0.001.
We finally show the user-level average test AUCs in the last row of Table 1. We can observe that the evaluation results on the Mobile Taobao dataset are consistent with those on the public datasets, namely, LCFT outperforms all the baselines. In contrast, the average test AUC of purely local fine-tuning decreases slightly compared with cloud-based learning, which demonstrates the necessity of label correction in industrial application scenarios.
\subsection{Online A/B Testing in Mobile Taobao}
We deploy LCFT on different groups of Mobile Taobao users and report the online A/B testing results from August 1, 2021 to August 6, 2021.
For online A/B testing, we create two non-overlapping buckets, each of which contains roughly 150,000 randomly chosen users, to deploy LCFT and the cloud-based learning model. In particular, the cloud-based model is trained over a 7-day dataset with billions of samples. For the deployment of LCFT, we cluster the users into three groups based on their local CTRs (i.e., the users with low, middle, and high local CTRs). For each user group, the cloud-based global model is fine-tuned using LCFT over the training data from the users in the group and then offloaded to the corresponding mobile devices for real-time inference.
We use two online metrics to evaluate the performance of LCFT and the cloud-based learning. Before giving formal definitions, we first introduce some common abbreviations in recommender systems: ``Clk'' is short for ``click''; ``Exp'' is short for ``exposure''; ``UV'' is short for ``Unique Visitor''. The two metrics are defined as follows: (1) {\text{Clk}}/{\text{Exp}} is in fact CTR and is the optimization objective of the learning task in our application scenario; (2) {\text{Clk}}/{\text{UV}} denotes the average number of clicks per user, which measures user activeness.
\begin{table}[!t]
\caption{Online A/B testing results in Mobile Taobao. Metrics are reported as day-level averages.}
\label{abtest}
\centering
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{cccc}
\toprule
Metric & Cloud & LCFT & LCFT vs. Cloud \\
\midrule
CTR & 3.91\% & 3.98\% & {\bf +1.79\%} \\
\midrule
Clk/{\text{UV}} & 4.60 & 4.64 & {\bf + 1.03\%} \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{figure}[!t]
\centering
\subfigure[CTR]{
\includegraphics[width=0.472\columnwidth]{.//imgs//CTR.pdf}
}
\subfigure[Clk/UV]{
\includegraphics[width=0.472\columnwidth]{.//imgs//Clk.pdf}
}
\caption{Online performance of LCFT and cloud-based learning in Mobile Taobao.}
\label{online_fig}
\end{figure}
We finally plot in Figure \ref{online_fig} the day-level results of the online metrics and summarize in Table \ref{abtest} the day-level averages. Compared with cloud-based learning, LCFT improves CTR and {\text{Clk}}/{\text{UV}} by 1.79\% and 1.03\%, respectively. These results demonstrate that through fine-tuning with label correction, LCFT indeed improves the online recommendation performance in practice.
\section{Conclusion}
In this work, we have proposed an on-device model fine-tuning method with label correction, called LCFT, for the fundamental CTR prediction task in recommender systems. LCFT mitigates the bottleneck issue of CTR drift by letting each user correct the labels of the local samples ahead of fine-tuning, thereby aligning the equivalent CTR with the global CTR in theory. Offline and online evaluation results have validated the necessity of label correction and demonstrated the superiority of LCFT over the mainstream cloud-based learning method without fine-tuning.
A traveling charged particle induces coherent radiation when it passes
near a periodic dielectric structure along the direction of its spatial
periodicity. This radiation, called Smith-Purcell radiation
(SPR) \cite{Smith:P::92:p1069:1953,Shestopalov-SPR-book}, can be a novel
radiation source with several remarkable properties. The most important one is the scalability of the output
frequency; the threshold
frequency below which SPR is kinetically impossible varies in inverse proportion to the magnitude of the period. In addition, the SPR is characterized by the presence of resonances at a series of frequencies, which again vary in inverse proportion to the
period.
Owing to these
properties, it has been recognized that SPR can be a basic mechanism for a compact free-electron laser \cite{Wachtel::50:p49-56:1979}.
Since its first observation, SPR has been studied using mainly
metallic diffraction gratings of one-dimensional
periodicity \cite{Wortman:L:D:M::24:p1150-1153:1981,Gover:D:E::1:p723-728:1984,Shih:S:M:C::7:p345-350:1990,Doucas:M:O:W:K::69:p1761-1764:1992,Ishi:S:T:H:I:T:M:K:F::51:pR5212-R5215:1995}.
In most theoretical analyses
made so far,
the gratings have been treated as perfect conductors to
simplify the treatment of the periodic light scattering \cite{vandenBerg::63:p1588:1973,Haeberle:R:S:M::49:p3340-3352:1994,Shibata:H:I:O:I:N:O:U:T:M:K:F::57:p1061-1074:1998,Brownell:W:D::57:p1075-1080:1998}.
In these systems, Wood's anomaly \cite{Wood::48:p928:1935} in the optical
density of states (ODOS) is responsible for the enhanced signals of SPR,
and thus the relevant frequencies of the resonances are determined in a straightforward manner
using simple kinetics.
Recently,
both theoretical \cite{Pendry:M::50:p5062-5073:1994,deAbajo::61:p5743-5752:2000,Ohtaka:Y::91:p477-483:2001,Yamaguti:I:H:O::66:p195202:2002,deAbajo:B::67:p125108:2003,Ochiai:O::69:p125106:2004,Ochiai:O::69:p125107:2004} and
experimental \cite{Yamamoto:S:Y:S:S:I:O:H:K:M:H:M:Y:O::69:p045601:2004} SPR results have been reported
for photonic crystals (PhCs) used in place of metal gratings. It was found that PhCs induced highly coherent SPR because of their multidimensional periodicity
in their dielectric functions.
The SPR spectrum consists of point-like signals as a function of frequency, which show up each time the evanescent light from the electron excites
a photonic band (PhB) mode of high quality factor.
SPR from a PhC is versatile, because PhCs generally have various
parameters, which can now be reliably designed and changed.
However, when an electron beam of ultra-relativistic velocity was used in combination with a PhC, which was finite in the direction of the electron trajectory, unexpected phenomena that contradicted the conventional
understanding of the SPR were experimentally observed \cite{Horiuchi-unpub}.
Such phenomena have not been observed in the gratings of nearly perfect conductors
and are expected to be
absent even in PhCs when an electron beam of slower velocity is used.
This SPR, called unconventional SPR (uSPR) in this paper, is quite distinct in many ways from the
conventional SPR (cSPR)
and thus is easily identified; most importantly, the uSPR spectrum
sweeps the entire region of frequency-momentum phase space, in
contrast to the cSPR which carries information of the phase space
only along the shifted $v$ lines, to be defined later. In the phase space, the uSPR is characterized by
peculiar resonances arising along curves, which are related more or
less to the dispersion relations of PhBs, not to the shifted $v$
lines, with relatively little intensity variation. Therefore, SPR
in the ultra-relativistic regime of the beam velocity is potentially
useful both as a monochromatic light source and as a probe to
investigate the PhB structure.
We should note here that the Cherenkov effect also can be used to probe
the PhB structure with the angle-resolved electron energy loss
spectroscopy \cite{deAbajo:P:Z:R:W:E::91:p143902:2003,deAbajo:R:Z:E::68:p205105:2003,Luo:I:J:J::299:p368-371:2003}.
The conjecture inferred by the experiment, which this paper seeks to verify, was that
the SPR consists of the cSPR and uSPR in the relativistic regime of electron velocity
and that the uSPR is expected to disappear gradually with decreasing velocity, leaving solely the cSPR component at electron energies typically below a few hundred keV.
Very recently, Kesar {\it et al.} \cite{Kesar:H:K:T::71:p016501:2005} reported a systematic discrepancy
between the calculated SPR spectrum in a finite-size grating
and that of the conventional theory assuming infinite size. Since they
focused on the diffraction grating of a perfect conductor, their
discrepancy is not directly related to ours, which arises in systems involving PhCs with a finite dielectric
constant. As for this point, it is noteworthy that a rigorous theory was developed for a finite-size grating of infinitely thin metallic plates \cite{Shestopalov-SPR-book}.
This paper presents a comprehensive theoretical analysis of the uSPR
for the PhCs composed of a finite number of
cylinders. We use a multiple-scattering method \cite{Ochiai:O::69:p125106:2004} that explicitly takes into account the finite total number of cylinders and treats the multiple Mie scattering among them exactly.
We shall investigate the
properties of the uSPR and the interplay between uSPR and cSPR by
changing various parameters, such as electron velocity, dielectric
constant of PhCs, length of PhCs, thickness of PhCs and angle of SPR emission.
Most of this
paper will focus on the PhCs of dielectric cylinders of circular cross section.
The remarkable
difference of the present work from the previous theoretical works involving PhCs lies in the
fact that we are dealing with the finiteness of the length of PhC in
the direction of the electron beam.
This paper is organized as follows.
In Sec. II, we present the kinetics for both cSPR and
uSPR and discuss what plays a
key role in inducing uSPR.
Section III is devoted to the comparison of the spectrum of uSPR with that of cSPR.
In Sec. IV, we present detailed analyses of uSPR
by changing various parameters. Finally, we summarize the results in
Sec. V.
\section{Kinetics of conventional and unconventional SPRs}
In the following discussion, we focus on a PhC composed
of infinitely long cylinders. The system under study is schematically
illustrated in Fig. \ref{geometry}.
\begin{figure}
\begin{center}
\includegraphics*[width=0.6\textwidth]{fig1.eps}
\end{center}
\caption{\label{geometry}Schematic illustration of the system under
study. An electron travels with constant velocity $v$ and
impact parameter $b$ below the
one-dimensional periodic array of dielectric or metallic cylinders.
The cylinders are arrayed periodically with axes in the $z$
direction with radius $r$, dielectric constant $\varepsilon$
and the lattice constant $a$. The trajectory of the electron
is parallel to the direction of the periodicity. Angles $\theta$ and $\phi$ are the polar angles of the SPR signals.
This PhC has a mirror plane indicated
by dotted line in the right panel.}
\end{figure}
Cylinders are arrayed periodically (lattice constant: $a$) in the $x$
direction with cylinder axes in the $z$ direction.
An electron travels near the PhC in a trajectory
parallel to the $x$ axis with velocity $v$ and
impact parameter $b$.
We obtain the SPR spectrum from this system as a sum of the plane-wave signals generated by the scattering of the evanescent light emitted by the electron. The whole process of multiple scattering among a finite number of cylinders is dealt with compactly by the
multiple-scattering theory of radiation using the vector
cylindrical waves as a basis of representation \cite{Ochiai:O::69:p125106:2004}.
Let us briefly summarize the kinetics in the theory of cSPR. In the theory of cSPR, a finite periodic structure is simulated by a periodic structure of infinite length in the $x$ direction. A
traveling
electron accompanies the radiation field
that is a superposition
of evanescent waves with respect to frequency $\omega$ and
wave number $k_z$ in the $z$ direction \cite{Ohtaka:Y::91:p477-483:2001}.
The wave vector of each evanescent wave is given by
\begin{eqnarray}
& & {\bf K}^\pm=\left({\omega\over v},\pm\Gamma,k_z \right), \quad
\Gamma=\sqrt{\left({\omega\over c}\right)^2
-\left({\omega\over v}\right)^2-k_z^2}, \label{Gamma}
\end{eqnarray}
where $\Gamma$ is purely imaginary because $v < c$.
The imaginary part $|\Gamma|$ determines the spatial decay of the
evanescent wave incident on the PhC.
In what follows, it is important to remember the feature of $k_z=0$
that, in the ultimate limit $v \to c$, $|\Gamma|$ tends to zero.
Since $\omega$ and $k_z$ are conserved quantities in the geometry of
Fig. \ref{geometry},
the evanescent waves with different $\omega$ and $k_z$
are independent in the whole scattering process. Thus, the incident
light of $\omega$ and $k_z$ leaves the PhC, after being scattered, with
the same $\omega$ and $k_z$. Therefore, the SPR signals observed in the $xy$ plane may be analyzed by setting $k_z=0$ everywhere.
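The decay constant in Eq. (\ref{Gamma}) is easy to evaluate numerically. The sketch below (natural units with $c=1$; the velocities are illustrative) shows that $|\Gamma|$ at $k_z=0$ shrinks as $v \to c$, so the incident evanescent wave becomes effectively plane-wave-like:

```python
import cmath

def gamma(omega, v, kz, c=1.0):
    """Transverse wavenumber of Eq. (Gamma): purely imaginary for v < c,
    and |Gamma| sets the spatial decay of the evanescent wave."""
    return cmath.sqrt((omega / c) ** 2 - (omega / v) ** 2 - kz ** 2)

omega = 1.0
for beta in (0.5, 0.9, 0.99, 0.9999):      # beta = v / c
    print(beta, abs(gamma(omega, beta, kz=0.0)))
# |Gamma| decreases toward zero as v -> c: the decay length diverges
# and the incident field approaches a plane wave.
```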
Since we are now dealing with a perfect periodicity extending from $-\infty$ to $\infty$, we obtain the SPR signal in the form of
Bragg-scattered waves summed over the diffraction channels. The channels are specified by the wave vector ${\bf K}_h^\pm $
defined by
\begin{eqnarray}
& &{\bf K}_h^\pm=\left({\omega\over v}-h,\pm\Gamma_h,k_z \right),\label{khpm}
\quad \Gamma_h=\sqrt{\left({\omega\over c}\right)^2-\left({\omega\over v}-h\right)^2-k_z^2}.
\end{eqnarray}
Here, $h=2\pi n/a \; (n: {\rm integer})$ is a reciprocal lattice point
of the PhC in the $x$ direction.
Before the scattering by the PhC, $\omega$ and $k_x$, the $x$ component
of the wavevector of light, satisfy the relation $k_x=\omega/v$.
The line $\omega=v k_x$, called the $v$ line in this paper, lies outside the light cone in the phase space $(k_x, \omega)$.
After the scattering, the $x$ component of the light of channel $h$
becomes $k_x=\omega/v-h$.
The shifted $v$ line defined by this equation
is inside the light cone in a certain frequency
range. In that frequency range we can detect the SPR signal in this channel
at a far-field observation point. The propagating direction of the SPR signal
of $\omega$ is given by
\begin{eqnarray}
{\bf K}_h^\pm =
\frac{\omega}{c}(\cos\theta,\sin\theta\cos\phi,\sin\theta\sin\phi),
\label{polar-angle}
\end{eqnarray}
in the polar coordinates defined in Fig. \ref{geometry}.
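For $k_z=0$, the open diffraction channels and their emission angles follow directly from Eqs. (\ref{khpm}) and (\ref{polar-angle}); the parameter values in the sketch below are illustrative only (natural units, $a=2\pi$ so that $h=n$):

```python
import math

def open_channels(omega, v, a, c=1.0, n_max=10):
    """Open diffraction channels at k_z = 0: reciprocal lattice points
    h = 2*pi*n/a for which Gamma_h is real, together with the polar
    emission angle theta from cos(theta) = (omega/v - h) * c / omega."""
    channels = []
    for n in range(-n_max, n_max + 1):
        kx = omega / v - 2 * math.pi * n / a
        if (omega / c) ** 2 - kx ** 2 > 0:          # Gamma_h real -> open
            channels.append((n, math.degrees(math.acos(kx * c / omega))))
    return channels

# Illustrative parameters: omega = 1.5, v = 0.8 c, a = 2*pi.
print(open_channels(omega=1.5, v=0.8, a=2 * math.pi))
# Channels n = 1, 2, 3 are open for these parameters.
```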
The inside region of the light cone is the leaky region of PhBs, and, accordingly, the ODOS
is nontrivial there. Actually,
ODOS has a sequence of peaks of finite width in the $(k_x,k_z, \omega)$ space. The peak position determines the
dispersion relations $\omega=\omega_n(k_x,k_z)$
of the quasi-guided PhB modes.
Imagine temporarily $k_z=0$, for brevity. The presence of the modes in the $(k_x, \omega)$ space significantly affects the SPR spectrum by causing a sharp resonance
when the dispersion curves of PhBs intersect the shifted $v$ lines. The resonance becomes sharper as the quality
factor of the relevant PhB modes increases \cite{Ohtaka:Y::91:p477-483:2001,Yamaguti:I:H:O::66:p195202:2002}.
Therefore, the SPR from a PhC can have very high quality, which is
intriguing as a new possibility of PhC.
In the above argument,
we assumed the conservation of $k_x$, with an Umklapp allowance taken into account.
In an actual PhC with a finite number $N$ of cylinders, the periodicity or
the translational invariance of the whole system is lost at the sample edges.
One way to take account of the finiteness of $N$ is to treat $k_x$ as defined only approximately with a width of the order of
$\Delta k_x\simeq 2\pi/(Na)$ \cite{Shestopalov-SPR-book}. In this approach,
the shifted $v$ lines are considered to have the finite width,
and the PhB dispersion relation will become detectable within this allowance centering on the shifted $v$ lines of open channels. In reality, however, the uSPR signals appear in the phase space $(k_x, \omega)$ with a much larger distribution than this straightforward $1/N$ blurring \cite{Horiuchi-unpub}.
Let us now consider an ultra-relativistic velocity
$v\simeq c$.
We continue to confine ourselves to the measurement within the $xy$
plane, i.e., $k_z=0$. In this case,
$\Gamma$ is almost zero, and
the evanescent wave incident on the PhC may be regarded practically as a
plane wave with its wavevector directed along the $x$ direction.
\begin{figure}
\begin{center}
\includegraphics*[width=0.6\textwidth]{fig2.eps}
\end{center}
\caption{\label{unconventional}Schematic illustration of the input evanescent light yielding conventional
(left panel) and unconventional (right panel) SPRs. The
conventional SPR is produced when the evanescent wave has an
appreciable decay constant. The incident light enters
the
PhC from below. The unconventional case arises when the incident
evanescent light has a negligible decay
constant and is regarded
as a plane wave entering the
PhC from its left edge. In this case, the evanescent wave is almost
symmetric with respect to the mirror plane, inducing the even-parity selection rule in the PhB excitation. }
\end{figure}
Therefore, the light-scattering problem is quite similar to that of the light transmission and reflection in the $x$ direction
through the periodic array of cylinders, as depicted in
Fig. \ref{unconventional}.
In this situation, it is obvious that the sample edges play a crucial role.
In particular, we know that $k_x$ is no longer a
good quantum number, no matter how large the total number of the cylinders may be.
As in an ordinary light-transmission experiment involving PhCs, we expect the incident wave of frequency $\omega$ to
excite the PhB modes at the crossing points between the
line of constant $\omega$ and the dispersion curves
$\omega=\omega_n(k_x,0)$ of PhBs in the $(k_x,\omega)$
plane. The point is that the wave vector of the PhB thus determined is not related to the value of $k_x$ on the $v$ line of the incident light. Therefore, we should expect SPR signals over the entire $(k_x, \omega)$ space, not necessarily restricted along the shifted $v$ lines. The signals expected off the shifted $v$ lines characterize the uSPR.
Also, we can expect that the transmitted SPR obtained in the
side opposite to the trajectory is almost identical to
the reflected SPR of the trajectory side. Finally, similar to the
ordinary setup of a plane wave transmitting through a two- and three-dimensional PhC \cite{Sakoda-PC-book}, a selection rule must exist
for the symmetry of the PhB modes to be excited. The PhC in our
problem has a symmetry with respect to the mirror reflection $y \to
-y$ ($y=0$ is the plane bisecting the PhC), and in the
ultra-relativistic regime the incident light is nearly independent of $y$. Hence, only the PhB modes of even mirror symmetry are expected to participate in the resonant light scattering. The even-parity selection rule will thus characterize uSPR spectra.
According to this scenario,
a PhB mode manifests itself in the uSPR. Therefore,
the uSPR will have a rather broad band as a function of
frequency. This is in contrast to the cSPR, in which
excited PhB modes give rise to sharp resonance peaks
only along the shifted $v$ lines.
In addition, since $v\simeq c$, the shifted $v$ lines coincide with
the threshold lines for the opening of a new Bragg diffraction
channel. The channel opening often leaves a singular trace due to Wood's anomaly
in the line shape of wave scattering. Thus, on our shifted $v$ lines,
Wood's
anomaly will occur, together with the resonance peaks of the cSPR associated with the PhB excitation.
In this way, the spectra of the c and u SPRs reveal
a quite rich structure when the electron is ultra-relativistic.
So far, we have concentrated on the electron traveling parallel
to the $x$ direction, that is, the direction of the periodicity of the PhC
under consideration.
If the velocity vector ${\bf v}$ of the electron is given by
\begin{eqnarray}
{\bf v}=(v_x,0,v_z)=v(\cos\alpha,0,\sin\alpha),
\end{eqnarray}
the kinematics of the SPR changes accordingly. In particular, the
dominant wave-number component $k_z$ depends on frequency and is given by
$k_z=v_z \omega/v^2$.
Within the conventional theory
the SPR acquires a significant enhancement when the following three
conditions are fulfilled:
\begin{eqnarray}
\omega=v_x(k_x-h)+v_zk_z,\; k_z={v_z\over v^2}\omega,\;
\omega=\omega_n(k_x,k_z),
\end{eqnarray}
where the first equation defines the shifted $v$ line at nonzero $v_z$.
As in the case of $v_z=0\;(\alpha=0^\circ)$, this scenario of the SPR is insufficient
for an ultra-relativistic electron.
In this case the evanescent wave accompanied by the
electron can be
effectively treated as a plane wave propagating parallel to ${\bf v}$.
This highlights the role of the sample edge of the
finite-size PhC, namely, the broken translational invariance, and the photonic band modes on the entire plane of
$k_z=(v_z/v^2)\omega$ are
excited, not necessarily restricted to the shifted $v$ lines.
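As a sketch of this kinematics (illustrative code; units with $c=1$), the first two conditions above can be combined into a closed form for the point $(k_x,k_z)$ on the shifted $v$ line:

```python
import math

def shifted_v_line(omega, v, alpha, h):
    """Point (k_x, k_z) on the shifted v line for a tilted trajectory.

    Combines omega = v_x*(k_x - h) + v_z*k_z with k_z = (v_z/v**2)*omega,
    which yields the closed form k_x = h + omega*v_x/v**2.
    Units: c = 1; h is the reciprocal-lattice shift in units of 2*pi/a.
    (For v_x = 0 the first condition no longer fixes k_x, and the value
    returned is then just one representative point.)
    """
    vx, vz = v * math.cos(alpha), v * math.sin(alpha)
    kz = vz * omega / v**2
    kx = h + omega * vx / v**2
    return kx, kz

# alpha = 0 recovers the in-plane shifted v line k_x = h + omega/v, k_z = 0:
print(shifted_v_line(1.0, 0.5, 0.0, 1))  # (3.0, 0.0)
```

The remaining condition $\omega=\omega_n(k_x,k_z)$ then selects the resonant crossings with the PhB dispersion, which requires the band structure as an input.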
In the limiting case of vanishing $v_x \; (\alpha=90^\circ)$,
the electron travels parallel
to the cylindrical axis. There, no propagating radiation is
generated from the PhC, as long as the cylinders have infinite length
in the $z$ direction.
This is due to the perfect translational invariance along the axis.
In an actual PhC, however, this translational invariance is broken,
yielding a kind of diffraction radiation.
Thus, as $\alpha$ varies from $0^\circ$ to $90^\circ$,
the conventional theory of the SPR, which assumes the translational
invariance in both the $x$ and $z$ directions, predicts a gradual
disappearance of the SPR. On the other hand, the uSPR gives a novel
radiation irrespective of $\alpha$, in which the broken translational
invariance in the $x$ direction is highlighted at small $\alpha$, and
that in the $z$ direction is highlighted around $\alpha=90^\circ$.
To summarize, the two SPR spectra, c and u SPRs, coexist
in the ultra-relativistic regime,
when a PhC sample is used. To analyze the experimental signals on the basis of the photonic band structure, the length of the PhC, i.e., the total number of cylinders, must be finite but large enough. For a finite system to have a band structure comparable
to that obtained for an infinite system,
$N \ge 8$ periods are enough according to our experience. In contrast to the size in the $x$ direction, however,
we are considering a system having small size
in the $y$ direction, such as a PhC made of a monolayer or stacked layers of several monolayers.
The finite size in the $y$ direction needs to be
considered explicitly to obtain the band structure of our PhCs.
\section{Typical example of conventional and unconventional SPRs}
Before making a detailed study of the uSPR properties, we briefly
compare the cSPR spectrum with the uSPR spectrum, using the numerical results for a test system.
We adopt a PhC of a monolayer of periodic array of low-index
cylinders (dielectric constant $\varepsilon=2.05$).
For a radius-to-periodicity ratio $r/a=0.5$,
Fig. \ref{teflon_band} depicts the band structure of the monolayer of an infinite number of cylinders.
\begin{figure}
\begin{center}
\includegraphics*[width=0.6\textwidth]{fig3.eps}
\end{center}
\caption{\label{teflon_band} PhB structure of the monolayer of
low-index cylinders in contact ($\varepsilon=2.05$).
The modes of TE polarization of $k_z=0$ are plotted
as a function of $k_x$.
The PhB modes are classified
according to the parity with respect to the mirror plane bisecting the
monolayer.
The light line $\omega=\pm ck_x$ is indicated by thick solid lines, the shifted $v$ lines of $v=0.5c$ by thin solid lines, and those
of $v=0.99999c$ by dashed lines. The horizontal arrows (six for $v=0.99999c$ and four for $v=0.5c$) are drawn at the intersections between the shifted $v$ lines and the PhB dispersion curves. They correspond to those of Figs. \ref{conv_teflon} and \ref{teflon_v05_Ninf}.
}
\end{figure}
The band structure inside the light cone was obtained by plotting the peak
frequencies of the ODOS, which were calculated as a function of
$k_x$ and $k_z$ \cite{Ohtaka:I:Y::70:p035109:2004}.
The band structure outside the light cone (that of the true-guided
modes with infinite lifetime)
was obtained from the position of the poles of the S matrix, which are found
on the real axis of the complex $\omega$ plane.
In Fig. \ref{teflon_band}, $k_z=0$ is assumed, so that the PhB
modes are decomposed into purely transverse-electric (TE) and
transverse-magnetic (TM)
modes. Only the band structure of the TE modes
is presented, because the incident evanescent wave is
TE-polarized at $k_z=0$.
The PhB modes are further classified by parity with
respect to the mirror plane $y=0$. In Fig. \ref{teflon_band}
the even (odd) parity modes are indicated by red (green) circles.
We should note that the PhB dispersion curves that appear disconnected in Fig. \ref{teflon_band}
are those obtained from the ODOS peaks, which are often too broad for the peak position to be identified.
Let us first consider the cSPR spectra obtained by the conventional
theory based on the assumption of perfect periodicity in the $x$ direction. We used the parameters $v=0.99999c$, $\phi=180^\circ$ and $b=3.33a$ and assumed that
the radiation was observed in the $xy$ plane $(k_z=0)$, as actually encountered in the millimeter-wave SPR experiments carried out recently \cite{Yamamoto:S:Y:S:S:I:O:H:K:M:H:M:Y:O::69:p045601:2004,Horiuchi-unpub}.
Since the periodicity is perfect, the SPR spectrum appears strictly on the
shifted $v$ lines, which are almost parallel to the light line
$\omega=ck_x$.
Figure \ref{conv_teflon} presents the reflected cSPR spectra along the shifted $v$ lines of
$h=1$ and 2 (in units of $2\pi/a$).
\begin{figure}
\begin{center}
\includegraphics*[width=0.6\textwidth]{fig4.eps}
\end{center}
\caption{\label{conv_teflon} Reflected intensity of cSPR on the shifted $v$ lines of $v=0.99999c$
for $\varepsilon=2.05$ and $b=3.33a$.
Perfect periodicity ranging from $x=-\infty$ to $x=\infty$ of the monolayer cylinders is assumed.
The arrows are drawn at the peak positions and agree with those of
Fig. \ref{teflon_band}, which were assigned to the crossing points between the shifted $v$ line and the band structure.}
\end{figure}
The peaks of the cSPR spectrum arise
at the frequency where the shifted $v$ lines $k_x=\omega/v-h$
intersect the PhB structure given in
Fig. \ref{teflon_band}. Several arrows are drawn at the peak positions in
Fig. \ref{conv_teflon} and, to identify each of the peaks, horizontal arrows are added in Fig. \ref{teflon_band} at the corresponding positions in phase space.
Comparing these two figures, we see that the peak lowest in frequency arises from the excitation of an odd-parity PhB mode. Thus, the even-parity selection rule for the excited PhB modes does not hold for the cSPR, though parity somewhat affects the spectral shapes.
Now we turn to the uSPR spectrum. The SPR spectrum, with the finiteness of $N$ considered explicitly,
is obtained over the entire $(k_x, \omega)$ space by summing all the amplitudes of the multiply scattered light from the $N$ cylinders \cite{Ochiai:O::69:p125106:2004}. The numerical result for $N=21$ is given in
Fig. \ref{unconv_teflon},
for the same parameters of $r/a$ and $\varepsilon$ as used
in Fig. \ref{conv_teflon}.
\begin{figure}
\begin{center}
\includegraphics*[width=0.6\textwidth]{fig5_light.eps}
\end{center}
\caption{\label{unconv_teflon}
Reflected SPR intensity map from a finite ($N=21$) monolayer of the
contact cylinders with a low index dielectric constant. The signals appearing off the shifted $v$ lines
characterize the uSPR. Except for $N$, the same parameters as in
Fig. \ref{conv_teflon} were used.
}
\end{figure}
The angle $\theta$- and frequency $\omega$-resolved reflected SPR
intensity is mapped onto the
$(k_x,\omega)$ plane through the relation $k_x=(\omega/c)\cos\theta$.
To be precise, $|f^M(\theta)|^2$ with $-\pi \le \theta \le 0$ defined in
Eq. (33) of Ref. \cite{Ochiai:O::69:p125106:2004} was plotted by using
the above relation.
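The mapping used here can be sketched as follows (illustrative function name; angles in degrees):

```python
import math

def map_to_kx(theta_deg, omega_over_c):
    """Map the observation angle theta onto k_x via k_x = (omega/c)*cos(theta),
    the relation used to replot |f(theta)|**2 on the (k_x, omega) plane."""
    return omega_over_c * math.cos(math.radians(theta_deg))

# Backward emission maps onto the backward light line k_x = -omega/c,
# forward emission onto k_x = +omega/c:
print(map_to_kx(180.0, 1.0))  # -1.0
print(map_to_kx(60.0, 1.0))   # ~0.5
```

Under this mapping, each fixed frequency $\omega$ sweeps the interval $-\omega/c \le k_x \le \omega/c$ as $\theta$ runs over the reflected half-space.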
We observe that the peaks of the SPR intensity are found along\\
(A) the shifted $v$ lines, \\
(B) the curves whose slopes are positive and less than 1, \\
(C) the curves whose slopes are negative, \\
(D) the forward light-line ($\omega=ck_x$), and \\
(E) the flat lines terminated on the backward light-line ($\omega=-ck_x$). \\
For comparison, we superimpose the even-parity PhB structure of TE modes on the intensity map and present the result in
Fig. \ref{unconv_teflon_PB}.
\begin{figure}
\begin{center}
\includegraphics*[width=0.6\textwidth]{fig6_light.eps}
\end{center}
\caption{\label{unconv_teflon_PB}
Figure \ref{unconv_teflon} of the intensity overlaid with
the PhB structure of the even parity (indicated by red circles).
The shifted $v$ lines are indicated by solid lines. }
\end{figure}
From Fig. \ref{unconv_teflon}, we find that type (A) peaks are
rather broad along the shifted $v$ lines as a function of frequency, as
compared to the cSPR spectrum shown in Fig. \ref{conv_teflon}.
A good agreement between the PhB dispersion curves and the high intensity positions of the uSPR
demonstrates that the peaks of type (B)
are attributed to the quasi-guided PhBs of even parity.
Also, no evidence of the excitation of the odd-parity modes is found in
Figs. \ref{unconv_teflon} and \ref{unconv_teflon_PB}.
Thus, the even-parity selection rule of uSPR, predicted in the previous section, indeed holds.
The peaks of type (C) are found, for instance, around
$(k_x a/2\pi,\omega a/2\pi c)=(-0.75,1.25)$, whose origin is also
attributed to excitation of the even-parity PhB modes. The intensity of these peaks
is, however,
small, as compared to that caused by the PhB modes with positive slope.
This seems reasonable, considering that the PhBs of the negative
group velocity will have suppressed
excitation probability at the left edge of the PhC.
The peaks of type (D) are inevitable in finite systems;
the incident wave induces the
quasi-guided waves in the monolayer, which
exit from the right edge, causing
a forward-oriented diffraction there. As a consequence,
broad peaks of the SPR near the forward light-line emerge.
Note that the intensity of the type (D) signal oscillates as a function of
frequency. This is a Fabry-Perot oscillation of the signal intensity with
period decreasing with increasing $N$.
Finally, the signals of type (E) are caused by the presence of pseudo gaps
in the PhB structure.
To see this, we have only to note in Fig. \ref{unconv_teflon_PB} that the flat streaks of type (E) signals appear at the gap positions.
When the frequency of the incident wave lies in a pseudo gap of
the monolayer, the incident wave cannot penetrate deep inside the PhC and is scattered out of the PhC as a type (E) signal,
with a certain angular distribution
centering on the backward direction.
Note that the intense streaks are not related to the band gaps of odd parity.
The above features of the uSPR in finite monolayers will remain
unchanged even for a semi-infinite monolayer, which is made of cylinders of $N=\infty$ but bounded at one end, because what matters in the above discussion is the presence of the left edge of PhC as an entrance surface of a wave propagating in the $x$ direction.
Next, we consider the SPR spectrum obtained from a slower electron in a non-relativistic regime. The calculation is made for $v=0.5c$, which is a
typical value of the electron velocity used in scanning electron
microscopes. The parameters other than $v$ and $b$ are the same as before.
As above, we compare two spectra of $N=\infty$ and $N=21$.
First, the reflected SPR spectrum for $N=\infty$ is given
in Fig. \ref{teflon_v05_Ninf}.
\begin{figure}
\begin{center}
\includegraphics*[width=0.6\textwidth]{fig7.eps}
\end{center}
\caption{\label{teflon_v05_Ninf}
The reflected cSPR intensity from the infinite monolayer $(N=\infty)$ of the
low-index contact cylinders. The parameters $v=0.5c$ and $b=0.2a$ were assumed for
the electron beam. Four arrows are assigned to the peak positions and
coincide with those of Fig. \ref{teflon_band}. }
\end{figure}
The spectra reveal a marked resonance at $\omega a/2\pi c=0.621$.
The line shape of the resonance is asymmetric as a function of frequency.
As indicated by the arrows, each of which agrees precisely with those drawn on
the shifted $v$ line in Fig. \ref{teflon_band},
the cSPR peaks all appear exactly at the
intersection points of the shifted $v$ line of $v=0.5c$ with the PhB
dispersion curves.
The reflected SPR spectrum for $N=21$ is given in Fig. \ref{teflon_v05_N21}, with the superposition of the PhB structure (of $N=\infty$).
\begin{figure}
\begin{center}
\includegraphics*[width=0.6\textwidth]{fig8_light.eps}
\end{center}
\caption{\label{teflon_v05_N21}
Reflected SPR spectrum from a finite ($N=21$) monolayer of
low-index cylinders in contact. The intensity profile is overlaid
with the corresponding PhB
structure of $N=\infty$. The PhB modes with even (odd)
parity are indicated by red (green) circles.
The parameters $v=0.5c$ and $b=0.2a$ were assumed for
the electron beam.}
\end{figure}
We see at once that high intensity SPR appears only on the shifted
$v$ lines, although very weak structures reminiscent of the finiteness of our PhC are still seen off the shifted $v$ line. This is in clear contrast to the ultra-relativistic
spectra, where marked signals of uSPR existed definitely off the
shifted $v$ lines.
The signals on the shifted $v$ line of $h=1$ have a resonance peak at
$(k_xa/2\pi,\omega a/2\pi c)=(0.24,0.62)$.
This frequency is almost identical to that of the resonance obtained for $N=\infty$ shown in Fig. \ref{teflon_v05_Ninf}. Also, we can perceive the asymmetry of the line shape along the shifted $v$ line, as in the
cSPR spectrum of $N=\infty$.
Therefore, we may conclude that, for slower velocities such as $v=0.5c$, the SPR of the finite PhC
can be understood sufficiently well using the theory of cSPR, based on the assumption $N=\infty$. The uSPR signals are suppressed as follows. The light of $v=0.5c$ is literally evanescent with an appreciable decay constant $|\Gamma|$, so that, while passing through the PhC in the $+y$ direction, the incident light decays much and sees only the surface region of cylinders. Accordingly, the picture of a plane wave with wavevector in the $x$ direction no longer holds and the conventional theory of SPR covers all the features.
\section{Properties of the unconventional SPR}
This section presents the properties of
uSPR in detail by changing various parameters.
As explained in the previous sections, the broken translational
invariance due to the finite number of cylinders ($N$) is crucial to the uSPR.
Taking into account that the uSPR must vanish in a system of perfect
translational invariance, it is interesting to investigate the
$N$-dependence of the uSPR in detail.
The number of stacking layers ($N_l$) is also an important factor, because the ODOS,
and thus the PhB structure, depends crucially on $N_l$.
Dielectric constant $\varepsilon$ and radius $r$ of the cylinders are other factors that
significantly influence the PhB structure. However, the effects of
changing $r$ are covered, to some extent, by those of $\varepsilon$.
The impact parameter $b$ is not essential, as seen in the following
expression for the total emission power $W$ of SPR, whose
$b$ dependence is collected into a simple scaling law \cite{Ochiai:O::69:p125106:2004}
\begin{eqnarray}
& & W=\int{{\rm d}\omega {\rm d}k_z\over \pi^2} P_{\rm em}(\omega,k_z), \\
& & P_{\rm em}(\omega,k_z)|_{b}=e^{-2|\Gamma|(b-b_0)}P_{\rm em}(\omega,k_z)|_{b_0},
\end{eqnarray}
where $P_{\rm em}(\omega,k_z)|_{b}$ is the $\omega$- and $k_z$-resolved emission power for an impact parameter $b$ and $b_0$ is a reference impact parameter chosen arbitrarily.
Therefore, uSPR and cSPR change in a straightforward way as $b$ varies, with the underlying physics unaltered.
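The scaling law can be sketched numerically as follows (illustrative function; $P_{\rm em}$ and $|\Gamma|$ are taken here as given inputs):

```python
import math

def rescale_emission(P_b0, gamma_abs, b, b0):
    """Scaling law P_em|_b = exp(-2*|Gamma|*(b - b0)) * P_em|_b0.

    P_b0      : resolved emission power at the reference impact parameter b0.
    gamma_abs : decay constant |Gamma| of the evanescent wave.
    """
    return math.exp(-2.0 * gamma_abs * (b - b0)) * P_b0

# Increasing b only attenuates the spectrum; its shape is unchanged,
# which is why b is not an essential parameter:
print(rescale_emission(1.0, 0.5, 2.0, 1.0))  # exp(-1) ~ 0.3679
```

Because the attenuation factor is independent of $\omega$ and $k_z$ only through $|\Gamma|$, the relative weights of the uSPR and cSPR features are unaffected by $b$.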
In the following
subsections, therefore, five parameters, $v, \varepsilon, N, N_l$, and $\phi$,
are varied in this order to
see how each affects the spectrum.
\subsection{Velocity}
The velocity of the electron beam is a key parameter in the
uSPR. Indeed, as seen in the previous section, the SPR at $v=0.5c$ is understood using the theory of cSPR, while at $v=0.99999c$ the uSPR also plays an
important role. We shall examine how the conventional picture fails
with varying electron velocity.
An obvious but nonessential $v$-dependence is an increase of the SPR intensity due to the $v$ dependence of the decay constant $|\Gamma|$; if the impact parameter
$b$ is fixed, the overall SPR spectrum behaves as $\exp(-2|\Gamma|b)$. To eliminate this trivial $v$-dependence, we have set
\begin{eqnarray}
b=0.01\beta\gamma a, \quad \mbox{with} \quad \beta={v\over c},
\quad \gamma={1\over \sqrt{1-\beta^2}},
\end{eqnarray}
considering $|\Gamma| \propto 1/(\beta\gamma)$ for $k_z=0$.
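This choice of $b$ can be sketched as follows (illustrative code; it also reproduces the Lorentz factors $\gamma=1.4$ and $7.09$ quoted below):

```python
import math

def impact_parameter(beta, a=1.0):
    """b = 0.01*beta*gamma*a and the Lorentz factor gamma = 1/sqrt(1 - beta**2).

    With |Gamma| ~ 1/(beta*gamma) at k_z = 0, this choice makes the overall
    attenuation exp(-2*|Gamma|*b) independent of v.
    """
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return 0.01 * beta * gamma * a, gamma

# gamma ~ 1.40 at v = 0.7c and gamma ~ 7.09 at v = 0.99c:
for beta in (0.7, 0.99):
    b, gamma = impact_parameter(beta)
    print(beta, round(gamma, 2), b)
```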
The reflected intensity maps for the monolayer of $N=21$
are shown in Figs. \ref{v-change} (a) and (b) for $v=0.7c\quad(\gamma=1.4)$
and in (c) and (d) for $0.99c\quad (\gamma=7.09)$,
along with the PhB structure.
\begin{figure*}
\begin{center}
\includegraphics*[width=0.9\textwidth]{fig9_light.eps}
\end{center}
\caption{\label{v-change}
Dependence of the reflected SPR spectra on electron velocity. The result for a finite monolayer ($N=21$) of contact cylinders is shown for $\varepsilon=2.05$. Panels (a) and (b) are the results for $v=0.7c$, and panels (c) and (d) are those for
$v=0.99c$. Panels (b) and (d) are reproductions of (a) and (c), respectively, overlaid with the PhB structure of $N=\infty$.
The PhB modes with even (odd) parity are indicated
by red (green) circles. See text for
the impact parameter $b$ used in the calculation.}
\end{figure*}
Panels (a) and (c) show only the SPR intensity, while panels (b) and (d)
superpose the PhB structure on the same data.
At $v=0.7c$, there is a marked bright line along the shifted
$v$ line of $h=1$. Along the line, the intensity contrast of the SPR is quite
strong at low frequencies. In particular
a point-like resonance is seen at $\omega a/2\pi c\simeq 0.745$.
As panel (b) shows, this resonance arises just at a crossing between the dispersion
curve of an even-parity PhB and the shifted $v$ line of $h=1$.
Therefore, this is a type (A) signal according to the classification of the last section.
We can see flat streaks of strong intensity just at this
frequency. We note that the signal becomes stronger as $k_x$
approaches the backward light line $\omega=-ck_x$.
This feature is common to all the
horizontal streaks appearing at the frequencies of the pseudo gaps. These are signals of type (E) of the uSPR.
We should note that the PhB
mode, which crosses the shifted $v$ lines, has a negative group velocity, and the excited PhB mode propagates in the $-x$ direction. A backward-oriented
diffraction taking place at the left edge of the PhC explains the tendency towards the line $\omega=-ck_x$.
Analogous flat lines exist, for instance, at
$\omega a/2\pi c\simeq 1.09$.
In addition, we see clearly a high-intensity spectrum appearing almost
parallel to the shifted $v$ lines. The curves are in fact coincident with
the dispersion curves of quasi-guided PhB of the even parity.
Therefore, they are type (B) signals. Note that the odd-parity PhB dispersion curves are also visible, with reduced strength as compared to the even-parity PhBs. Altogether, at $v=0.7c$ the cSPR
coexists with the uSPR, and odd-parity PhBs appear in the uSPR spectrum,
though with weaker intensity than even-parity PhBs. Combining this
result with what we have seen in Sec. III for $v=0.5c$ and
$v=0.99999c$,
we may conclude that, as $v$ increases from $v=0.5c$, the uSPR becomes visible
and the even-parity selection rule of uSPR is less stringent at
non-ultra-relativistic velocities.
The result for $v=0.99c$ indeed confirms this conclusion.
At $v=0.99c$, several bright curves arise in Figs. \ref{v-change} (c) and (d) with
little intensity contrast along the PhB dispersion curves.
This is the type (B) signal. We can observe odd-parity excitation of
weak intensity. Therefore, although
the even-parity selection rule is indeed dominant, it is somewhat
relaxed for $v=0.99c$. On the shifted $v$ line, there are signals of
cSPR, as theory predicted for type (A) features in Sec. III.
The breakdown of the even-parity selection rule at these velocities
is explained as follows. With a decrease of $v$, $|\Gamma|$ increases to make the incident evanescent light decay more quickly when passing the monolayer. This increases the asymmetry of the evanescent wave with
respect to the mirror plane and makes
the even-parity selection rule less effective.
The degree of the symmetry of the input wave may be given by the factor
$\exp(-|\Gamma| 2r)$, called here the symmetry factor,
which measures the decay of the evanescent wave while traversing the PhC in the $+y$ direction.
If this factor is unity, the evanescent light seen by the PhC is mirror-symmetric.
At $v=0.99c$, the symmetry factor is 0.408 at $\omega a/2\pi c=1$ and
too small to guarantee strictly the even-parity selection rule.
Therefore, odd-parity PhBs are allowed somewhat as uSPR signals.
The results for the other $v$ are briefly summarized without giving numerical results.
At $v=0.9c$, when the symmetry factor is 0.047, cSPR and uSPR coexist and odd-parity PhBs are seen in the latter.
At $v=0.999c$, the symmetry factor increases to 0.755. The intensity map gradually tends to the case of
$v=0.99999c$ with the symmetry factor 0.972; signals along the
odd-parity PhBs disappear,
leaving behind only the even-parity signals as type (B) signals.
The horizontal bright streaks appear solely in the regions of
the pseudo-gaps of even-parity bands.
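For contact cylinders ($2r=a$) at $k_z=0$, the symmetry factor reduces to $\exp[-2\pi(\omega a/2\pi c)/(\beta\gamma)]$, assuming the standard decay constant $|\Gamma|=\omega/(c\beta\gamma)$ used above; the following sketch reproduces the quoted values up to rounding:

```python
import math

def symmetry_factor(beta, omega_a_over_2pic=1.0, r_over_a=0.5):
    """Symmetry factor exp(-2*|Gamma|*r): decay of the evanescent wave
    across one cylinder diameter (2r = a for contact cylinders).

    Assumes |Gamma|*a = 2*pi*(omega*a/2*pi*c)/(beta*gamma) at k_z = 0,
    consistent with the scaling used above.
    """
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    Gamma_a = 2.0 * math.pi * omega_a_over_2pic / (beta * gamma)
    return math.exp(-2.0 * Gamma_a * r_over_a)

# Values at omega*a/(2*pi*c) = 1:
for beta in (0.9, 0.99, 0.999, 0.99999):
    print(beta, round(symmetry_factor(beta), 3))
# -> 0.048, 0.408, 0.755, 0.972
```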
Finally, we should comment on the case of non-zero $v_z$.
The critical velocity of the electron, above which the uSPR begins to
emerge, does not change much with a non-zero $v_z$. An important point is that
at ultra-relativistic velocities the evanescent wave can effectively be
regarded as a plane wave.
This is controlled not by $v_z$ but by $v$, the magnitude
of the velocity vector. However, the other features of the uSPR change
as discussed in the previous section.
In conclusion, in the frequency region $\omega a/2\pi c \sim 1$, the
uSPR is conspicuous when $v$ exceeds 0.7c, and the even-parity
selection rule holds progressively better as $v$ approaches $c$ from $0.9c$.
\subsection{Dielectric constant}
For a PhC with $r$ and $N$ kept fixed at $r=0.5a$ and $N=21$, let us
examine how the SPR spectrum varies as the dielectric constant
$\varepsilon$ of the cylinders changes in the monolayer.
We select three values of $\varepsilon$,
$\varepsilon=4.41, \quad 1-(\omega_p/\omega)^2$ and
$-\infty$. The first case corresponds to the dielectric constant of fused quartz with $\varepsilon$ nearly twice as large as that used above, the second is the dielectric constant of a Drude
metal with $\omega_p$ the plasma frequency,
and the third is the dielectric constant of a perfect conductor.
To avoid the poor convergence of the cylindrical-wave expansion for the metallic cylinders in contact, we created a narrow opening between the cylinders by setting $r=0.45a$ in the Drude case. We assumed
$\omega_p a/2\pi c=1$, i.e., the plasma wavelength
equals the lattice constant. Calculation is made
for the monolayer system using $v=0.99999c$ and $b=3.33a$, as before.
The reflected SPR intensity maps
are shown in Fig. \ref{spr_eps}, together with the corresponding PhB structure obtained for the $N=\infty$ system.
\begin{figure*}
\begin{center}
\includegraphics*[width=0.9\textwidth]{fig10_light.eps}
\end{center}
\caption{\label{spr_eps}
Reflected SPRs from the monolayers of contact cylinders of various dielectric constants.
Panel (a) shows the result of dielectric cylinders of $\varepsilon=4.41$ and $r=0.5a$, panel
(c) is the result of metallic
cylinders of Drude dielectric constant $\varepsilon
=1-\omega_p^2/\omega^2$ with $\omega_pa/2\pi c=1$,
and panel (e) shows the result of
cylinders of perfect metal, i.e., $r=0.5a$ and
$\varepsilon=-\infty$.
Panels (a) and (c) are reproduced in panels (b) and (d),
respectively, with the corresponding PhB superposed. The same parameters as
in Fig. \ref{conv_teflon} were used for the electron beam.
}
\end{figure*}
Panels (a) and (b) depict the result of dielectric cylinders,
panels (c) and (d) treat the Drude
cylinders, and panel (e) presents the spectrum of the cylinders of
a perfect conductor. Panels (b) and (d) also involve the band structures of the monolayer.
Since $v$ is ultra-relativistic, we plotted only
the even-parity PhB structure. Note that in the perfect-conductor
case of panel (e), the ODOS does not present any peaks except for Wood's anomaly, and the PhB structure is completely absent.
Clearly, the calculated uSPR intensity shown in panels (a) and (b) is well correlated with the PhB structure. Namely, the bright curves of strong SPR
intensity are type (B) signals having a positive slope and tracing very well the even-parity PhB dispersion curves. In addition, we can recognize type (E) signals of the bright flat lines terminated at the backward light
line, which are seen just at the frequencies of the pseudo-gaps. These features agree with what we have seen in Sec. III. Most importantly, the PhB structure with larger dielectric constant is indeed probed by the uSPR.
As for the Drude case shown in panels (c) and (d),
the PhB structure
is composed of many flat bands.
These PhB modes have their origin in the tight-binding
coupling among cylinders of the surface plasmon polaritons (SPP), localized on each cylinder surface \cite{Yannopapas:M:S::60:p5359-5365:1999,vanderLem:M::2:p395-399:2000,Ito:S::6404:p045117:2001}.
The calculated intensity map demonstrates that these SPP
bands are coupled only weakly to the incoming evanescent wave. In contrast, the PhB
around $\omega a/2\pi c=0.5$, which has a modest group
velocity, is strongly coupled to the evanescent wave, yielding a very
strong SPR signal. We thus conclude that uSPR carries information of PhBs
of SPP origin.
Finally, in Fig. \ref{spr_eps}(e)
the strong intensity of the SPR arises solely on the shifted $v$ lines.
This reflects the absence of the PhB structure in the array of perfect-conductor cylinders.
Thus, we can
conclude that the uSPR is peculiar to dielectric and
metallic PhCs with finite dielectric function and is
completely absent in the systems without PhBs.
\subsection{Number of cylinders}
In the numerical results shown so far, the number of the cylinders is
fixed at either $N=21$ or
$N=\infty$. At small $N$, typically less than 8, the PhB structure
is not clearly visible in the intensity map of the SPR on the
$(k_x,\omega)$ plane. On the other hand, at large $N$ the PhB structure
is clearly visible as demonstrated in Figs. \ref{unconv_teflon} and \ref{unconv_teflon_PB}.
In this region, however, the SPR intensity map changes only slightly
with increasing $N$. Nevertheless,
a closer look at the spectral line shapes of the uSPR shows that they
do change with $N$.
To investigate this feature, we consider the SPR spectra at
a fixed solid angle $(\theta,\phi)=(60^\circ,180^\circ)$ as a function of frequency.
Figure \ref{N-dependence} shows the spectra for various $N$.
\begin{figure}
\begin{center}
\includegraphics*[width=0.6\textwidth]{fig11.eps}
\end{center}
\caption{\label{N-dependence} The reflected SPR intensity spectra at
$(\theta,\phi)=(60^\circ,180^\circ)$
from finite-size PhCs of various $N$.
The same parameters as in Fig. \ref{conv_teflon} were used except for $N$. }
\end{figure}
The SPR signals are strongly enhanced around $\omega a/2\pi c=1.195$, which
corresponds to an intersection point between the line of
$k_x-h=(\omega/c)\cos\theta$ (see Eq. (\ref{polar-angle}))
and the PhB structure
$\omega=\omega_n(k_x,0)$. The intersection point is off the shifted $v$
lines and thus is indeed an uSPR signal.
We can clearly observe that as $N$ increases, the peak intensity
grows but appears to saturate at a certain value.
This implies that the radiation intensity of the uSPR per unit length of the
electron trajectory decreases at large $N$ and eventually vanishes
at $N=\infty$.
This property is reasonable because the uSPR is completely forbidden
in the system of perfect translational invariance with $N=\infty$.
On the other hand, we also found that the intensity of the cSPR on the
shifted $v$ lines increases almost linearly with $N$, as expected from
the conventional theory of the SPR.
Therefore, at very large $N$ the cSPR signals will dominate over
the uSPR ones.
However, even at $N=200$ we found that the SPR intensity map does not
differ so much from Fig. \ref{unconv_teflon}, in which the uSPR signals are rather
stronger than the cSPR ones.
Besides, in Fig. \ref{N-dependence} we can clearly observe that the
spectral width of the peak decreases with increasing $N$.
This property reflects better confinement of the radiation energy
for larger $N$. This width should converge to a certain value at
$N=\infty$, which is inversely proportional to the
life-time of the relevant photonic band mode. This is nothing but
the homogeneous broadening of the spectral line width of the uSPR.
\subsection{Number of stacking layers}
So far we have confined ourselves to the monolayer PhC. Now let us stack the identical monolayers periodically in the $y$ direction.
As we increase the number $N_l$ of the stacking layers, the ODOS
reveals a progressively finer structure as a function of frequency.
Each peak of the ODOS corresponds to a quasi-guided PhB mode confined in
the stacked layers. The typical peak-to-peak distance in frequency
is inversely proportional to $N_l$. Moreover, each
peak generally becomes sharper.
If $N_l$ is large enough, the scattering of the evanescent wave in the ultra-relativistic regime
is identical to the transmission and reflection of a TE-polarized plane wave that enters the PhC with its left edge as an entrance surface. The slab PhC in question has a finite thickness $Na$ in the
$x$ direction and has a large extension in the $y$ direction with the entrance surface parallel to the $yz$ plane.
The wave vector component $k_\|$ parallel to the
entrance surface is conserved, and the incident plane wave excites the bulk
PhB
modes having the same $k_\|$. There is no momentum conservation in the $x$ component; in principle the light can excite PhB modes of arbitrary $k_x$. The scattered wave is decomposed into
diffraction channels \cite{Sakoda-PC-book}.
Let us examine the stacked monolayers with a square lattice of the
cylinders.
For uSPR, $(\pm\Gamma,k_z)$ plays the role of $k_\|$
in the above analogy, and thus we may set $k_\| \simeq 0$ in the ultra-relativistic case, provided $k_z=0$.
Accordingly, the incident plane wave has the wave vectors $(k_x,0,0)$ and propagates in
the $\Gamma-X$ direction of the square
lattice. The bulk PhB modes along $\Gamma-X$
are thus excited. We conclude that the uSPR carries information on the bulk PhB modes along $\Gamma-X$, provided both $N$ and $N_l$ are sufficiently large. According to the above
arguments, the SPR intensity is expected to be enhanced
in the forward and backward directions, which correspond to the specular
transmission and reflection. In addition, if $\omega a/2\pi
c >1$, the SPR intensity is expected to be
enhanced also on the curves
\begin{eqnarray}
k_x=\pm\sqrt{\left({\omega\over c}\right)^2-h^2 },
\label{parabola}
\end{eqnarray}
which correspond to the diffraction channels associated with the
reciprocal lattice points $h=2\pi n/a \; (n:{\rm integer})$
along the $k_y$ axis.
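For orientation, the lowest channel $n=1$, i.e. $h=2\pi/a$, can be rewritten in the reduced units used in the figures as
\begin{eqnarray}
\left(\frac{\omega a}{2\pi c}\right)^2
= \left(\frac{k_x a}{2\pi}\right)^2 + 1 ,
\end{eqnarray}
a hyperbola whose bottom lies at $(k_x a/2\pi,\omega a/2\pi c)=(0,1)$.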
Figure \ref{stack} illustrates the reflected SPR intensity
maps and the relevant PhB structure of low-index cylinders in contact. We used $\varepsilon=2.05$, $v=0.99999c$ and $b=3.33a$.
\begin{figure*}
\begin{center}
\includegraphics*[width=0.9\textwidth]{fig12_light.eps}
\end{center}
\caption{\label{stack}
Reflected SPR intensity map of various PhCs of cylinders.
Panel (a) shows the result of the double-layer system of contact
low-index cylinders ($N=21$ and $N_l=2$).
Panel (b) is produced from panel (a) by overlaying the even-parity PhBs. The mirror plane of the parity
lies in between the double-layers.
Panel (c) shows the result for a multi-layer PhC of square lattice of contact
low-index cylinders ($N=8$ and $N_l=20$). Panel (d) is
a reproduction of panel (c), overlaid with the PhB structure along the
$\Gamma-X$ direction. Only the even-parity modes with respect to the
mirror plane relevant to $\Gamma-X$ are shown.
The same parameters as in Fig. \ref{conv_teflon}
were used for the electron beam.
}
\end{figure*}
In Fig. \ref{stack}(a) the spectrum from the
double-layer ($N_l=2$) structure with $N=21$ is shown.
The intensity map overlaid with the PhB structure of
the double layers (but for $N=\infty$) is shown in
panel (b).
As before, we plotted only the even-parity PhB structure. In the double layer, the mirror plane lies midway between the layers. We see that the number of bands is almost
twice that of the monolayer band structure shown in Fig. \ref{teflon_band}. This is reasonable, since the degenerate band structures of the two monolayers split in the double layer.
Obviously there is a very good correlation of the strong signals of uSPR
with the band structure
of the even parity.
Figure \ref{stack}(c) shows the reflected intensity
map of the finite multilayer PhC of $N_l=20$ and $N=8$.
We regard this as a test system simulating a slab-type PhC with a square lattice.
We observe at once a signal of high intensity along a hyperbolic curve
whose bottom is found at $(k_xa/2\pi,\omega a/2\pi c)=(0,1)$. Obviously, this curve
corresponds to Eq. (\ref{parabola}) with $h=1$.
Strong SPR signals other than the hyperbolic curve are found at
$\omega a/2\pi c=0.73$, 0.93, and 1.46. To identify these signals,
Fig. \ref{stack}(c) was overlaid with the even-parity PhB structure along
the $\Gamma-X$ direction of the square lattice. The result is shown
in Fig. \ref{stack}(d).
As can be clearly seen, the strong signals correspond to the anti-crossing
points of the even-parity PhB structure.
The bright curve connected to the strong signal around $\omega a/2\pi
c=0.73$ is found to lie along the PhB dispersion curve.
Thus, we can conclude that
the intensity map of the uSPR
correlates well with the corresponding PhB structure even in the case
of stacked monolayers.
\subsection{Azimuthal angle}
So far, we have considered
the case of $k_z=0$ ($\phi=0^\circ$ and $180^\circ$), that is, we have examined
the radiation emitted within the
$xy$ plane. We here investigate the $\phi$ dependence. For this purpose, we write the
differential cross section of SPR in polar coordinates \cite{Ochiai:O::69:p125106:2004}
\begin{eqnarray}
& &{\partial W\over \partial\omega\partial\Omega}=
{q\sqrt{q^2-k_z^2}\over 4\pi\mu_0\omega}(|f^M(\theta')|^2+|f^N(\theta')|^2), \label{dcs}\\
& &k_z=q\sin\theta\sin\phi,\quad q={\omega\over c},\\
& &\theta'=-i\log\left( {\cos\theta+i\sin\theta\cos\phi \over
\sqrt{1-\sin^2\theta\sin^2\phi}}\right).
\end{eqnarray}
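As a check of the auxiliary angle $\theta'$ (not spelled out above), in-plane emission reduces it to the ordinary polar angle: setting $\phi=0^\circ$,
\begin{eqnarray}
\theta' = -i\log\left(\cos\theta+i\sin\theta\right)
= -i\log e^{i\theta} = \theta ,
\end{eqnarray}
and similarly $\theta'=-\theta$ at $\phi=180^\circ$.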
Obviously, the cross section must have inversion symmetry under the operation
$\phi\to -\phi$, reflecting the inversion symmetry of the
$z$ coordinate with respect to the electron trajectory located at $z=z_0$.
The radiation emission of non-vanishing $k_z$ is generally small
compared with that of $k_z=0$, and
$k_z$ is a
conserved quantity in the scattering by the PhC. Therefore, the $k_z$ dependence of the observed SPR will be controlled dominantly by that of the decaying
exponential $\exp(-|\Gamma| b)$ of the initial light.
This exponential decreases with increasing $|k_z|$, so that
the radiation is dominated by the SPR of $k_z=0$.
Let us consider the radiation emission toward solid angle $(\theta,\phi)$.
In the ultra-relativistic regime it follows that
\begin{eqnarray}
\Gamma=\sqrt{\left(\frac{\omega}{c}\right)^2-\left({\omega\over v}\right)^2-(\frac{\omega}{c}\sin\theta\sin\phi)^2}
\simeq iq\sin\theta|\sin\phi|.
\end{eqnarray}
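The second step uses $1/v^2-1/c^2 = 1/(\gamma v)^2$ with $\gamma=(1-v^2/c^2)^{-1/2}$, so that
\begin{eqnarray}
\Gamma = \sqrt{-\left(\frac{\omega}{\gamma v}\right)^2 - k_z^2}
\simeq i\,|k_z| = iq\sin\theta|\sin\phi| ,
\end{eqnarray}
the approximation being valid whenever $|k_z|\gg \omega/(\gamma v)$, which holds in the ultra-relativistic regime for any fixed $\phi\ne 0^\circ,180^\circ$.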
Thus, the SPR cross section
at a given $\omega$ and
$\phi(\ne 0^\circ,180^\circ)$ is dominated in the forward ($\theta=0^\circ$)
and backward ($\theta=180^\circ$) directions. Similarly, at a given
$\theta$, the SPR cross section is dominated around the
plane perpendicular ($\phi=0^\circ$ and $180^\circ$) to the cylindrical axis.
Figure \ref{tilte}(a) projects the reflected SPR spectrum
Eq. (\ref{dcs}) for
$\phi=165^\circ$ onto the $(k_x,\omega)$ plane,
and Fig. \ref{tilte}(b) projects the reflected SPR spectrum
for $\theta=90^\circ$ onto the $(k_z,\omega)$ plane.
\begin{figure*}
\begin{center}
\includegraphics*[width=0.9\textwidth]{fig13_light.eps}
\end{center}
\caption{\label{tilte}
Reflected SPR intensity off the $xy$ plane. Spectra from the monolayer of contact low-index cylinders are shown. (a)
Intensity in the $(k_x,\omega)$ plane
at $\phi=15^\circ$. (b) Intensity in the $(k_z,\omega)$ plane
at $\theta=90^\circ$. For the electron beam, the same parameters are
used as in Fig. \ref{conv_teflon}.
}
\end{figure*}
Figure \ref{tilte}(a) verifies that the strong intensity
is limited around the forward and
backward light lines, as asserted above.
The intensity contrast along the backward light
line is related to the PhB structure with finite $k_z$.
In Fig. \ref{tilte}(b), the strong SPR intensity is limited around
$k_z=0$. At $k_z=0$, three marked peaks can be found at $\omega=0.75,1,$
and 1.5. They correspond to the crossing points between the bright
curves of Fig. \ref{unconv_teflon}
and the line of $k_x=0$ (i.e., $\theta=90^\circ$).
Thus, we can conclude that SPR is highly directive within the plane
normal to the cylindrical axis.
\section{Summary and discussions}
To summarize, we have presented a theory of uSPR
that arises when an ultra-relativistic electron beam is used to obtain the SPR
from a finite PhC composed of cylinders. The ultra-relativistic electron
is accompanied by an evanescent wave whose spatial decay is almost
negligible at $k_z=0$, so that the evanescent wave can be regarded as a plane wave
propagating in the direction of the trajectory. This yields a
peculiar radiation emission from the PhCs, which cannot be explained by
the conventional theory of the SPR, in which the PhC is treated as infinite.
The spectrum of the uSPR can be used as a probe of the
PhB structure of the quasi-guided modes having the even-parity symmetry
with respect to the relevant mirror plane.
We have also presented properties of the uSPR in detail
by changing several system parameters. We found that
the uSPR coexists with the cSPR at moderate
velocities, typically between $0.7c$ and $0.99c$.
We also found that the uSPR is completely
absent for perfect-conductor cylinders because of the absence of
PhBs. Otherwise, the spectra of the uSPR
correlate with the corresponding PhB structure very well.
We also found that the cross section of the
SPR at an ultra-relativistic velocity
is highly directive within the plane normal to the cylindrical
axis.
It should be emphasized again that the uSPR is
an unexpected phenomenon, in which the conventional theory assuming
infinite periodicity of the PhCs fails to reproduce its features.
On the other hand, the
present theory explains both the cSPR and the uSPR very well in a unified manner.
There are three key ingredients in the uSPR: the presence of PhBs, the broken
translational invariance of the PhC, and the ultra-relativistic velocity of
the electron beam. If any one of the three is missing, the uSPR
cannot be understood correctly.
In actual PhCs various types of disorder or randomness
are inevitable, yielding the inhomogeneous broadening of
spectral line width of the SPR. For instance, a rigid
vibration of constituent cylinders of the PhC gives rise to
the Brillouin scattering. As a result, the broadening of the line width
is given by the frequency of the vibration.
The relative importance of the various disorder factors
depends crucially on the frequency range concerned.
Thus, when we extract the intrinsic SPR signals from PhC,
we should carefully take account of disorder.
From the point of view of radiation emission from a high-energy electron
interacting with a periodic structure, we should comment on the peculiarity
of the uSPR in comparison to the channeling radiation,
or in other words, Kumakhov radiation \cite{Kumakhov-book}. The latter radiation occurs inside
a crystal when a certain condition is satisfied for the incident angle of
the electron with respect to a major crystal direction.
The radiation depends strongly on the
meandering trajectory of the electron trapped around a crystal plane
or a crystal axis, and its spectrum is essentially monochromatic.
On the other hand, the uSPR does not require the meandering of the electron
trajectory. Actually, in our theoretical approach, the trajectory of the electron is assumed to be straight. In addition, the radiation spectrum of the uSPR
is not monochromatic for a fixed trajectory, and the typical frequency range is
inversely proportional to the lattice constant of the PhC under consideration.
Therefore, the uSPR is not categorized into the channeling radiation.
The uSPR is, in some sense, similar to the transition radiation
\cite{Landau-EDCM-book} because the broken
translational invariance along the electron trajectory is crucial
in both radiations. However, there is a marked difference in the
directivity between the transition radiation and the uSPR.
Suppose that an ultra-relativistic
electron passes from vacuum to a dielectric medium. It is better to
focus on the radiation into the vacuum side, because the induced
radiation in the medium is a mixture of the transition radiation and the
Cherenkov radiation.
It was shown that this radiation into the vacuum side is backward-oriented.
On the other hand, as we showed in the paper, such a high
directivity into the backward direction is only possible if the
relevant frequency lies in a pseudo gap of the PhB structure.
Among other electron-induced radiations, the uSPR may have the
closest resemblance to the diffraction radiation regarding the
broken translational invariance and the trajectory which does not pass
through any air/dielectric interfaces.
To further clarify the resemblance, a detailed investigation of the
diffraction radiation in PhC is in order.
\section*{Acknowledgments}
The authors would like to thank N. Horiuchi, J. Inoue, Y. Segawa, Y. Shibata,
K. Ishi, Y. Kondo, H. Miyazaki, and S. Yamaguti
for valuable discussions.
This work was supported by Grant-in-Aid (No. 18656022 for T. O. and
No. 17540290 for K. O.)
for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology.
\end{document}
\section{Introduction}\label{sec:intro}
A standard tool for investigating the geometry of a submanifold in
a semi-Riemannian manifold is the second fundamental
form, or shape tensor (see e.g. O'Neill \cite{ONeill1983}). In this
paper we will discuss some aspects of the second fundamental form
for the case that the ambient semi-Riemannian
manifold has Lorentzian signature $(-,+, \dots, + )$ and that the
submanifold is two-dimensional with Lorentzian signature $(-,+)$.
We call such a submanifold a \emph{timelike surface} for short.
Timelike surfaces are interesting objects not only from a mathematical
point of view but also in view of physics. A four-dimensional Lorentzian
manifold can be interpreted as a spacetime in the sense of general
relativity, and a timelike surface $\Sigma$ in such a manifold can be
interpreted as the worldsheet of an object with one spatial dimension.
It is often helpful to think of $\Sigma$ as being realized by a ``track'',
and of timelike curves in $\Sigma$ as being the worldlines of observers who
are bound to the track like someone sitting in a roller-coaster car, cf.
Abramowicz \cite{Abramowicz1990}.
It is the main goal of this paper to give a classification of timelike
surfaces in terms of their second fundamental form, and to discuss the
physical relevance of this classification in view of the roller-coaster
interpretation. As we want to concentrate on properties of timelike
surfaces which are conformally invariant, we base our classification
on the trace-free part of the second fundamental form. We call a timelike
surface ``generic'' if this trace-free part is non-degenerate, in a
sense specified below, and ``special'' otherwise. It turns out that a
degeneracy can occur in four different ways, giving rise
to four different types of special timelike surfaces. Using the
roller-coaster interpretation, we will see how
each of the four special types can be distinguished from generic
timelike surfaces by three observational features: (i) the visual
appearance of the track, (ii) gyroscopic transport along the track,
and (iii) inertial forces perpendicular to the track. For the latter
we use the definition of inertial forces given in
\cite{FoertschHassePerlick2003}. It can be viewed as an
adaptation to general relativity of Huygens' definition of
centrifugal force, which was based
on the curvature of a track. For the history of this notion see
Abramowicz \cite{Abramowicz1990}.
Our discussion of the physics of timelike surfaces is kinematic, as
opposed to dynamic, in the sense that Einstein's field equation is
not used and that we do not specify an equation of motion for our
timelike surfaces. As outlined above, our main physical motivation
is to give an operational approach to inertial forces and its relation
with the visual appearance of a track and with gyroscopic transport.
However, we would like to mention that our results might also be of
interest in view of applications to strings. The worldsheet of a
(classical) string is a timelike surface, and the second fundamental
form of this surface gives some information on the physical properties
of the string, see e.g. Carter \cite{Carter1995}.
The paper is organized as follows. In Section \ref{sec:timesur}
we introduce orthonormal basis vector fields and
lightlike basis vector fields on timelike surfaces and we
classify such surfaces in terms of their second fundamental form.
The physical interpretation of timelike surfaces, based on the
roller-coaster picture, and its relation to the second fundamental
form is discussed in Section \ref{sec:interpretation}. Among
other things, in this section we consider two quite different
splittings of the total inertial force perpendicular to the track
into three terms and we discuss the invariance properties of
these terms. Generic timelike surfaces are treated in Section
\ref{sec:generic}; we show that the relevant properties of
the second fundamental form are encoded in a characteristic
hyperbola and that every generic timelike surface admits a
distinguished reference frame (timelike vector field). The next four
short sections are devoted to the four non-generic cases and to the
observable features by which each of them differs from the generic
case. Finally, in Section \ref{sec:example} we illustrate our
results with timelike surfaces in the Kerr-Newman spacetime.
\section{Timelike surfaces}\label{sec:timesur}
Let $(M,g)$ be an $n$-dimensional Lorentzian manifold. We assume
that the metric $g$ is of class $C^{\infty}$ and we choose the
signature of $g$ to be $(-, + , \dots, +)$. It is our goal to
investigate the geometry of surfaces (i.e., two-dimensional
$C^{\infty}$-submanifolds) of $M$ that are timelike everywhere,
i.e., the metric pulled back to the surface is supposed to have
signature $(-,+)$. If we fix such a surface $\Sigma$, we may choose
at each point $p \in \Sigma$ an orthonormal basis for the tangent
space $T_p \Sigma$. We assume that this can be done globally on
$\Sigma$ with the basis depending smoothly on the foot-point.
This means that we assume $\Sigma$ to be time-orientable and
space-orientable. This gives us two $C^{\infty}$ vector fields
$n$ and $\tau$ on $\Sigma$ that satisfy
\begin{equation}\label{eqn-defntau}
g(\tau, \tau ) = - g(n,n) = 1 \; , \qquad g(n, \tau ) = 0 \, .
\end{equation}
At each point, $n$ and $\tau$ are unique up to a two-dimensional
Lorentz transformation $(n, \tau ) \longmapsto ( n', \tau ')$.
If we restrict to transformations that preserve the time-orientation,
i.e., if we require that the two timelike vectors $n$ and $n'$
point into the same half of the light cone, any such Lorentz
transformation is of the form
\begin{equation}\label{eq:trafontau}
n' = \frac{n+v \tau}{\sqrt{1-v^2}} \; , \qquad
\tau ' = \frac{v n + \tau}{\sqrt{1-v^2}}
\end{equation}
where the number $v$ gives the velocity of the $n'$-observers
relative to the $n$-observers, in units of the velocity of
light, $-1 < v < 1$. Of course, $v$ may vary from
point to point.
From the orthonormal basis $(n , \tau )$ we may switch to
a lightlike basis $(l_+,l_-)$ via
\begin{equation}\label{eq:defl}
l_{\pm} = n \pm \tau \; .
\end{equation}
Under a Lorentz transformation (\ref{eq:trafontau}) this
lightlike basis transforms according to
\begin{equation}\label{eq:trafol}
l _{\pm} ' = \frac{1 \pm v}{\sqrt{1-v^2}} \, l _{\pm} \; .
\end{equation}
Thus, the directions of $l_+$ and $l_-$ are invariant with
respect to Lorentz transformations. This reflects the obvious
fact that, at each point of the timelike surface $\Sigma$,
there are precisely two lightlike directions tangent to $\Sigma$.
The integral curves of $l_+$ and $l_-$ give two families
of lightlike curves each of which rules the surface $\Sigma$.
We want to call $\Sigma$ a \emph{photon surface} if both families
are geodesic, and we want to call $\Sigma$ a \emph{one-way photon
surface} if one of the two families is geodesic but the other
is not. Clearly, this terminology refers to the fact that in
general relativity a lightlike geodesic is interpreted as
the worldline of a (classical) photon. More generally, one can define
a photon $k$-surface to be a $k$-dimensional submanifold
of a Lorentzian manifold for which every lightlike geodesic
that starts tangent to $\Sigma$ remains tangent to $\Sigma$.
This notion was introduced, for the case $k = n-1$, in a paper
by Claudel, Virbhadra and Ellis \cite{ClaudelVirbhadraEllis2001};
here we are interested in the case $k=2$ which was already
treated in \cite{FoertschHassePerlick2003} and
\cite{Perlick2005}.
Before discussing photon surfaces and one-way photon surfaces,
we will demonstrate that these notions appear
naturally when timelike surfaces are classified with respect
to their second fundamental form. To work this out, we recall
(cf. e.g. O'Neill \cite{ONeill1983}) that the \emph{second fundamental
form}, or \emph{shape tensor field}, $\Pi$ is well-defined for any
nowhere lightlike submanifold of a semi-Riemannian manifold, in
particular for a timelike surface $\Sigma$ of a Lorentzian manifold,
by the equation
\begin{equation}\label{eqn-defPi}
\Pi (u, w) \, = \, P^{\perp} \big( \nabla _u w \big) \,
\end{equation}
where $u$ and $w$ are vector fields on $\Sigma$. Here $\nabla$ is
the Levi-Civita connection of the metric $g$ and $P^{\perp}$ denotes the
orthogonal projection onto the orthocomplement of $\Sigma$,
\begin{equation}\label{eq:defP}
P^{\perp}(Y) = Y - g(\tau,Y) \, \tau + g(n,Y) \, n \, .
\end{equation}
As $\nabla _u w - \nabla _w u = [u,w]$ must be tangent to $\Sigma$,
we can read from (\ref{eqn-defPi}) the well-known fact that $\Pi$
is a symmetric tensor field along $\Sigma$.
With respect to the lightlike basis $(l_+,l_-)$, the second fundamental
form is characterized by its three components
\begin{equation}\label{eq:defPipm}
\Pi _+ = \Pi (l_+,l_+) \, , \qquad \Pi _- = \Pi (l_-,l_-) \, ,
\qquad \Pi _0 = \Pi (l_+, l_-) = \Pi ( l_-,l_+) \, .
\end{equation}
Note that these three vectors lie in the orthocomplement of $\Sigma$,
so they are necessarily spacelike. In a 4-dimensional Lorentzian
manifold, this orthocomplement is two-dimensional, so $\Pi_+, \Pi_-$
and $\Pi_0$ must be linearly dependent. In higher-dimensional
Lorentzian manifolds, however, these three vectors may be
linearly independent.
From (\ref{eq:trafol}) we can read the transformation behaviour of
$\Pi _+, \Pi_-$ and $\Pi _0$ under Lorentz transformations,
\begin{equation}\label{eq:trafoPi}
\Pi _{\pm} ' = \frac{1 \pm v}{1 \mp v} \, \Pi _{\pm} \; , \qquad
\Pi _0 ' = \Pi _0 \, .
\end{equation}
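For completeness, (\ref{eq:trafoPi}) follows by bilinearity of $\Pi$ from (\ref{eq:trafol}):
\begin{equation}
\Pi _{\pm} ' = \Pi ( l_{\pm} ' , l_{\pm} ' )
= \Big( \frac{1 \pm v}{\sqrt{1-v^2}} \Big) ^2 \, \Pi _{\pm}
= \frac{(1 \pm v)^2}{(1-v)(1+v)} \, \Pi _{\pm}
= \frac{1 \pm v}{1 \mp v} \, \Pi _{\pm} \, ,
\end{equation}
while the mixed component picks up the factor $(1+v)(1-v)/(1-v^2) = 1$, leaving $\Pi _0$ unchanged.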
Thus, $\Pi _+$ and $\Pi _-$ are invariant up to multiplication
with a positive factor whereas $\Pi _0$ is invariant. In particular,
the conditions $\Pi _+ =0$ and $\Pi _- =0$ have an invariant meaning.
Similarly, the statement that $\Pi _+$ and $\Pi _-$ are parallel
(or anti-parallel, respectively) has an invariant meaning.
Note that
$\Pi _0$ is related to the trace of the second fundamental form by
\begin{equation}\label{eq:tracePi}
\Pi _0 = - \text{trace} ( \Pi ) \, .
\end{equation}
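This is immediate from the definitions: with $n = \tfrac{1}{2}(l_+ + l_-)$ and $\tau = \tfrac{1}{2}(l_+ - l_-)$,
\begin{equation}
\Pi _0 = \Pi (n + \tau , \, n - \tau ) = \Pi (n,n) - \Pi (\tau , \tau )
= - \, \text{trace} ( \Pi ) \, ,
\end{equation}
where the trace is taken with the induced metric of signature $(-,+)$, i.e. $\text{trace} ( \Pi ) = \Pi (\tau , \tau ) - \Pi (n,n)$.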
We now introduce the following terminology.
\begin{definition}\label{def:generic}
A timelike surface $\Sigma$ is called \emph{generic} (at $p$) if
$\Pi _+$ and $\Pi _-$ are linearly independent (at $p$). Otherwise
it is called \emph{special} (at $p$).
\end{definition}
Clearly, the class of special timelike surfaces can
be subdivided into four subclasses, according to the
following definition.
\begin{definition}\label{def:specialsub}
A timelike surface $\Sigma$ is called
\begin{itemize}
\item[(a)]
\emph{special of the first kind} (at $p$) if $\Pi _+$ and
$\Pi _- $ are both non-zero and parallel, $\Pi _- = \alpha
\Pi _+$ with $\alpha >0$ (at $p$);
\item[(b)]
\emph{special of the second kind} (at $p$) if $\Pi _+$ and
$\Pi _- $ are both non-zero and anti-parallel, $\Pi _- = \alpha
\Pi _+$ with $\alpha <0$ (at $p$);
\item[(c)]
\emph{special of the third kind} (at $p$) if one of the
vectors $\Pi _+$ and $\Pi _- $ is zero and the other is non-zero
(at $p$);
\item[(d)]
\emph{special of the fourth kind} (at $p$) if both $\Pi _+$ and
$\Pi _- $ are zero (at $p$).
\end{itemize}
\end{definition}
Photon surfaces are timelike surfaces that are special of the fourth
kind, whereas one-way photon surfaces are timelike surfaces that are
special of the third kind.
It is obvious from (\ref{eq:trafoPi}) that the property of being generic
or special of the $N$th kind is independent of the chosen orthonormal
frame. Moreover, it is preserved under conformal transformations. If we
multiply the metric $g$ with a conformal factor $e^{2f}$, where
$f$ is a function on $M$, and rescale the basis vectors accordingly,
\begin{equation}\label{eq:conformal}
\tilde{g} = e^{2f} g \, , \quad
\tilde{n} = e^{-f} n \, , \quad
\tilde{\tau} = e^{-f} \tau \, ,
\end{equation}
$\Pi _+$ and $\Pi_-$ are unchanged, whereas $\Pi _0$ transforms
inhomogeneously,
\begin{equation}\label{eq:conformalPi}
\tilde{\Pi} {}_+ = \Pi _+ \, , \quad
\tilde{\Pi} {}_- = \Pi _- \, , \quad
\tilde{\Pi} {}_0 = \Pi _0 + 2 P^{\perp} (U) \, ,
\end{equation}
where $df = g(U, \, \cdot \, )$. Thus, it is always possible to make
$\Pi _0$ equal to zero by a conformal transformation. This is
the reason why we based our classification on $\Pi _+$ and $\Pi _-$
alone.
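A sketch of where (\ref{eq:conformalPi}) comes from (our reconstruction; overall positive factors from the rescaled basis are irrelevant for the classification): under $\tilde{g} = e^{2f} g$ the Levi-Civita connection changes as
\begin{equation}
\tilde{\nabla} _u w = \nabla _u w + df(u) \, w + df(w) \, u - g(u,w) \, U \, .
\end{equation}
For $u,w$ tangent to $\Sigma$ the first two correction terms are tangent to $\Sigma$ and are annihilated by $P^{\perp}$; since $g(l_{\pm},l_{\pm}) = 0$ while $g(l_+,l_-) = -2$, only the mixed component acquires the extra term $2 P^{\perp}(U)$.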
We will now review the physical interpretation connected with
$\Pi _+$ and $\Pi _-$, and then discuss the different types
of timelike surfaces one by one.
\section{Physical interpretation}\label{sec:interpretation}
If $M$ is 4-dimensional, $(M,g)$ can be interpreted as a spacetime
in the sense of general relativity. As indicated already in the
introduction, we may interpret each timelike surface $\Sigma$
as the worldsheet of a track and each timelike curve in $\Sigma$
as the worldline of an observer who sits in a roller-coaster car
that is bound to the track. We want to
discuss three types of ``experiments'' such an observer can carry
out, viz. (i) sending and receiving light rays, (ii) measuring the
precession of gyroscopes, and (iii) measuring inertial accelerations.
All three types of experiments turn out to be closely
related to the second fundamental form.
In general relativity light rays (i.e. worldlines of classical
photons) are to be identified with lightlike geodesics. If an
observer at one point of the track receives a light ray from
an observer at some other point of the track, the corresponding
lightlike geodesic will, in general, not arrive tangentially
to the track. Thus, the observer who receives the light ray
will get the visual impression that the track is curved.
Photon surfaces are characterized by the property that such
a light ray always arrives tangentially to the track, i.e.,
a photon surface is the worldsheet of a track that appears
straight. In the case of a one-way photon surface this is true
only when looking in one direction (``forward''), but
not when looking in the other direction (``backward'').
If we have chosen an orthonormal basis $(n, \tau )$ on $\Sigma$,
we can interpret the integral curves of $n$ as observers
distributed along the track described by $\Sigma$. If we
want to give an interpretation to $\tau$, we may think of
each of these observers holding a rod in the direction of
the track. We want to investigate whether $\tau$ can be
realized as the axis of a gyroscope that is free to
follow its inertia. According to general relativity, this
is true if and only if $\tau$ remains Fermi-Walker parallel
to itself along each integral curve of $n$ (see e. g. Misner,
Thorne and Wheeler \cite{MisnerThorneWheeler1973},
Sect. 40.7 ), i.e., if and only if $\nabla _n \tau$ is a
linear combination of $n$ and $\tau$. This is true if
and only if $\Pi (n, \tau ) = 0$, which can be rewritten,
in terms of the lightlike vector fields (\ref{eq:defl}),
as $\Pi ( l_+ + l_- , l_+ - l_- ) = 0$. Using the
notation from (\ref{eq:defPipm}), we find that
\begin{equation}\label{eq:gyro}
\Pi _+ = \Pi _-
\end{equation}
is the necessary and sufficient condition for $\tau$
being Fermi-Walker parallel along $n$. If (\ref{eq:gyro})
holds along an integral curve of $n$, a gyroscope carried
by the respective observer will remain parallel to the
track if it is so initially. Note that (\ref{eq:gyro})
is preserved under Lorentz transformations
(\ref{eq:trafoPi}) if and only if $\Pi _+ = \Pi _- = 0$.
Having chosen an orthonormal basis $(n , \tau )$ on $\Sigma$,
we can write any timelike curve in $\Sigma$ as the integral
curve of a vector field $n'$ that is related to $n$ by a
Lorentz transformation according to (\ref{eq:trafontau}),
with a relative velocity $v$ that depends on the foot-point.
We want to calculate the vector
$
\Pi ( n' , n' ) = P^{\perp} \big( \nabla _{n'} n' \big) \; .
$
Using (\ref{eq:defl}), (\ref{eq:trafontau}) and
(\ref{eq:defPipm}), we find
\begin{equation}\label{eq:Pinnv}
\Pi (n',n') = \frac{1}{2} \, \Pi _0 \, + \,
\frac{1}{4} \, \frac{1+v}{1-v} \, \Pi _+ \, + \,
\frac{1}{4} \, \frac{1-v}{1+v} \, \Pi _- \, .
\end{equation}
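Explicitly, writing $n = \tfrac{1}{2}(l_+ + l_-)$ and $\tau = \tfrac{1}{2}(l_+ - l_-)$, Eq. (\ref{eq:trafontau}) gives
\begin{equation}
n' = \frac{(1+v) \, l_+ + (1-v) \, l_-}{2 \sqrt{1-v^2}} \, ,
\end{equation}
and expanding $\Pi (n',n')$ by bilinearity reproduces (\ref{eq:Pinnv}), the cross term contributing $2 (1+v)(1-v) / \big( 4(1-v^2) \big) \, \Pi _0 = \tfrac{1}{2} \Pi _0$.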
According to general relativity, the vector $\nabla _{n'} n'$
gives the acceleration of an observer traveling on an integral
curve of $n'$, measured relatively to a freely falling object.
If we think of this observer as sitting in a roller-coaster car
bound to the track modeled by $\Sigma$, the vector $- \Pi (n',n')$
gives the acceleration perpendicular to the track of a freely
falling particle relative to the car. This relative acceleration
is what an observer on a roller-coaster feels in his or her
stomach, because the stomach wants to follow its inertia
and move in free fall, whereas the frame of the observer's
body cannot follow this motion as it is strapped to the car.
For this reason, $- \Pi (n',n')$ is to be interpreted as the
(relativistic) \emph{inertial acceleration} of the $n'$-observers.
Multiplication with the mass gives the (relativistic) \emph{inertial
force} on these observers. Following \cite{FoertschHassePerlick2003},
we can decompose the vector $- \Pi (n',n')$ into gravitational
acceleration $a _{\text{grav}}$, Coriolis acceleration $a_{\text{Cor}}$
and centrifugal acceleration $a _{\text{cent}}$ by rearranging the
right-hand side of (\ref{eq:Pinnv}) according to the following rule.
$a _{\text{grav}}$ comprises all terms which are independent of
$v$, $a_{\text{Cor}}$ comprises all terms of odd powers of $v$,
and $a _{\text{cent}}$ comprises all terms of even powers of $v$,
\begin{equation}\label{eq:defgCc}
\Pi (n',n') =
\underset{- a_{\text{grav}}}{\underbrace{
\frac{1}{2} \Pi _0 + \frac{1}{4}
( \Pi _+ + \Pi _-)}}
+
\underset{-a_{\text{Cor}}}{\underbrace{
\frac{v}{1-v^2} \frac{1}{2} ( \Pi_+ - \Pi _- )}}
+
\underset{- a _{\text{cent}}}{\underbrace{
\frac{v^2}{1-v^2} \frac{1}{2} ( \Pi _+ + \Pi _- )}}
\; .
\end{equation}
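One checks that the three terms indeed add up to (\ref{eq:Pinnv}): the coefficient of $\Pi _+$ is
\begin{equation}
\frac{1}{4} + \frac{v}{2(1-v^2)} + \frac{v^2}{2(1-v^2)}
= \frac{(1-v^2) + 2v + 2v^2}{4(1-v^2)}
= \frac{(1+v)^2}{4(1-v^2)}
= \frac{1}{4} \, \frac{1+v}{1-v} \, ,
\end{equation}
and the coefficient of $\Pi _-$ follows by $v \to -v$.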
(In \cite{FoertschHassePerlick2003} we found it convenient to work
with the corresponding covectors $A_{\text{grav}} = g \big (
a_{\text{grav}} , \, \cdot \, \big)$, etc.) This definition of
gravitational, Coriolis and centrifugal accelerations with
respect to a timelike surface has the advantage that it is
unambiguous in an arbitrary Lorentzian manifold and that it
corresponds, as closely as possible, with the traditional
non-relativistic notions.
(For alternative definitions of inertial accelerations in
arbitrary general-relativistic spacetimes see, e.g.
Abramowicz, Nurowski and Wex \cite{AbramowiczNurowskiWex1993}
or Jonsson \cite{Jonsson2006}.)
Whereas the decomposition (\ref{eq:defgCc}) depends on $n$, the
splitting (\ref{eq:Pinnv}) of the inertial acceleration into three
terms is invariant under Lorentz transformations. This follows from
the transformation properties (\ref{eq:trafoPi}). For that and for
some other calculations in this paper it is convenient to substitute
\begin{equation}\label{eq:eta}
\frac{1}{\sqrt{1-v^2}} = \cosh \eta
\quad \text{and} \quad
\frac{v}{\sqrt{1-v^2}}
= \sinh \eta \, .
\end{equation}
Then the first equation in (\ref{eq:trafoPi}) reads
$\Pi _{\pm} ' = e^{\pm 2 \eta} \, \Pi _{\pm}$. Similarly to the decomposition
(\ref{eq:defgCc}) one can, owing to the different dependencies on
$v$, operationally separate the three terms of the sum in
(\ref{eq:Pinnv}) by measuring the inertial acceleration for different
velocities.
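This operational separation can be illustrated numerically. In the following sketch (an illustration only, not part of the text's formalism) the measured inertial acceleration is modelled, following the $v$-dependence in (\ref{eq:defgCc}), as $a(v) = A + B\,v/(1-v^2) + C\,v^2/(1-v^2)$ with fixed vectors $A$, $B$, $C$ in the orthocomplement; measurements at three distinct velocities then recover the three terms by solving a linear system.

```python
import numpy as np

# Illustrative "true" components: A = -a_grav, B = -a_Cor, C = -a_cent,
# modelled as vectors in a 2-dimensional Euclidean orthocomplement.
rng = np.random.default_rng(0)
A, B, C = rng.normal(size=(3, 2))

def a(v):
    # inertial acceleration measured by the observer with velocity v
    return A + B * v / (1 - v**2) + C * v**2 / (1 - v**2)

vs = np.array([0.0, 0.3, -0.5])        # three measurement velocities
# design matrix: coefficients of A, B, C at each velocity
M = np.array([[1.0, v / (1 - v**2), v**2 / (1 - v**2)] for v in vs])
meas = np.array([a(v) for v in vs])    # shape (3, 2)

A_rec, B_rec, C_rec = np.linalg.solve(M, meas)
assert np.allclose([A_rec, B_rec, C_rec], [A, B, C])
```

Any three distinct velocities work, since the resulting linear system is non-degenerate.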
\section{Generic timelike surfaces}\label{sec:generic}
In this section we consider a timelike surface which is generic at all points.
We can then characterize the second fundamental form at each point by the
three non-zero vectors
\begin{equation}\label{eq: Piinv}
\Pi _0 \, , \qquad I_+ = \sqrt{g(\Pi _- , \Pi _- )} \, \Pi _+ \, ,
\qquad I_- = \sqrt{g(\Pi_+, \Pi _+ )} \, \Pi _-
\end{equation}
which, according to (\ref{eq:trafoPi}), are invariant with respect
to Lorentz transformations. Moreover, the two linearly independent
vectors $I _+$ and $I _-$ are conformally invariant, see
(\ref{eq:conformalPi}).
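The stated invariance of $I_+$ and $I_-$ is easily checked numerically. The sketch below (illustrative values only) represents $\Pi_{\pm}$ as vectors in the Euclidean orthocomplement and applies the transformation $\Pi_{\pm}' = e^{\pm 2\eta}\,\Pi_{\pm}$ from (\ref{eq:trafoPi}); the norm prefactor scales inversely to the vector it multiplies, so $I_{\pm}$ is unchanged.

```python
import numpy as np

# Pi_+ and Pi_- as vectors in the Euclidean orthocomplement
rng = np.random.default_rng(1)
Pi_p, Pi_m = rng.normal(size=(2, 3))
eta = 0.7  # rapidity of the boost

# transformation law Pi'_pm = exp(+-2 eta) Pi_pm
Pi_p_b = np.exp(2 * eta) * Pi_p
Pi_m_b = np.exp(-2 * eta) * Pi_m

# I_+ = |Pi_-| Pi_+ and I_- = |Pi_+| Pi_- before and after the boost
I_p, I_m = np.linalg.norm(Pi_m) * Pi_p, np.linalg.norm(Pi_p) * Pi_m
I_p_b = np.linalg.norm(Pi_m_b) * Pi_p_b
I_m_b = np.linalg.norm(Pi_p_b) * Pi_m_b

assert np.allclose(I_p, I_p_b) and np.allclose(I_m, I_m_b)
```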
We now fix an orthonormal basis $(n, \tau)$. Then we find all future-oriented
vector fields $n'$ with $g(n',n')=-1$ by a Lorentz transformation
(\ref{eq:trafontau}). If $v$ runs from $-1$ to 1, at each point $p \in \Sigma$
the vector $\Pi (n',n')$ runs through a hyperbola, according to
(\ref{eq:Pinnv}), see Figure \ref{fig:hyper}. We call this the
\emph{characteristic hyperbola} of the second fundamental form at $p$.
The characteristic hyperbola lies in the orthocomplement $P^{\perp} (T_p M)$
of the tangent space $T_p \Sigma$, which is an $(n-2)$-dimensional
Euclidean vector space.
The asymptotes of the characteristic hyperbola are spanned by the
linearly independent vectors $\Pi _+$ and $\Pi _-$ (or, what is the
same, by the invariant vectors $I_+$ and $I_-$). The characteristic
hyperbola is invariant with respect to
Lorentz transformations, see (\ref{eq:trafoPi}), whereas a conformal
transformation produces a translation of the characteristic hyperbola,
see (\ref{eq:conformalPi}).
\begin{figure}[h]
\psfrag{O}{O}
\psfrag{P}{\hspace{-0.4cm} $\tfrac{1}{2} \Pi _0$}
\psfrag{Q}{\hspace{-0.4cm} $\Pi _+$}
\psfrag{R}{\hspace{-0.5cm} $\Pi _-$}
\psfrag{1}{1}
\psfrag{2}{2}
\centerline{\epsfig{figure=hyper.eps,width=6.4cm,angle=15}}
\vspace{-1cm}
\caption{Characteristic hyperbola of the second
fundamental form at a point $p$ of $\Sigma$.}\label{fig:hyper}
\end{figure}
The points on the characteristic hyperbola are in one-to-one correspondence
with future-oriented vectors normalized to $-1$ at $p$. Clearly, the
vertex of the hyperbola, indicated by 1 in Figure \ref{fig:hyper}, defines
a \emph{distinguished observer field} on every generic timelike surface.
From (\ref{eq:Pinnv}) we find that the arrow from the origin to the vertex of the
hyperbola is given by the vector
\begin{equation}\label{eq:turning}
\frac{1}{2} \, \Pi _0 \, + \,
\frac{1}{4} \;
\sqrt[4]{\frac{g(\Pi _- , \Pi _- )}{g( \Pi _+ , \Pi _+ )}}
\; \Pi _+
\, + \,
\frac{1}{4} \;
\sqrt[4]{\frac{g(\Pi _+ , \Pi _+ )}{g( \Pi _- , \Pi _- )}}
\; \Pi _- \;
\end{equation}
which is Lorentz invariant, by (\ref{eq:trafoPi}).
If we choose the distinguished observer field for our $n$, we have in the
orthonormal basis $(n, \tau)$
\begin{equation}\label{eq:defdis}
g(\Pi _+ , \Pi _+ ) = g ( \Pi _- , \Pi _- ) \; .
\end{equation}
This property characterizes the distinguished observer field uniquely.
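Numerically, the distinguished observer field can be found directly from (\ref{eq:defdis}): with $\Pi_{\pm}' = e^{\pm 2\eta}\,\Pi_{\pm}$, the condition $g(\Pi_+',\Pi_+') = g(\Pi_-',\Pi_-')$ fixes the rapidity uniquely, $e^{8\eta} = g(\Pi_-,\Pi_-)/g(\Pi_+,\Pi_+)$. A short sketch (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(2)
Pi_p, Pi_m = rng.normal(size=(2, 3))   # generic case: both non-zero

# unique rapidity solving exp(8 eta) = |Pi_-|^2 / |Pi_+|^2
eta = np.log(np.dot(Pi_m, Pi_m) / np.dot(Pi_p, Pi_p)) / 8.0

Pi_p_b = np.exp(2 * eta) * Pi_p
Pi_m_b = np.exp(-2 * eta) * Pi_m
# the boosted frame satisfies the defining condition (eq:defdis)
assert np.isclose(np.dot(Pi_p_b, Pi_p_b), np.dot(Pi_m_b, Pi_m_b))
```

Any other rapidity spoils the equality, in accordance with the uniqueness claim.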
The distinguished observer field can be determined as the solution of
an eigenvalue problem. Using the invariant vectors $I_{\pm}$ from
(\ref{eq: Piinv}), we can introduce the real-valued bilinear form
\begin{equation}\label{eq:defpi}
\pi (u,w) \, = \, \frac{1}{2} \, g \big( I_+ + I_- , \Pi (u,w) \big)
\end{equation}
where $u$ and $w$ are tangent to $\Sigma$.
If $n$ is the distinguished observer field, the
basis vectors $n$ and $\tau$ satisfy the eigenvalue equations
\begin{equation}\label{eq:eigen}
\pi (n , \, \cdot \, ) = \lambda _1 \, g( n, \, \cdot \, ) \; , \qquad
\pi (\tau , \, \cdot \, ) = \lambda _2 \, g( \tau, \, \cdot \, ) \; .
\end{equation}
To prove this we observe that, for an arbitrary orthonormal frame
$(n, \tau)$,
\begin{equation}\label{eq:proofpi}
\pi ( n , \tau ) \, = \, \frac{1}{8} \,
\big( \, \sqrt{g(\Pi _- , \Pi _- )} - \sqrt{g(\Pi _+ , \Pi _+ )} \, \big) \,
\big( \, \sqrt{g(\Pi _- , \Pi _- ) \, g(\Pi _+ , \Pi _+ )} -
g(\Pi _+ , \Pi _- ) \, \big) \, .
\end{equation}
Clearly, (\ref{eq:eigen}) holds if and only if $\pi (n, \tau ) = 0$.
As $\Pi _+$ and $\Pi _-$ are linearly independent, the last bracket in
(\ref{eq:proofpi}) is different from zero. So (\ref{eq:eigen}) is,
indeed, equivalent to (\ref{eq:defdis}).
A generic timelike surface $\Sigma$ is the worldsheet of a track that
appears curved to the eye of any observer, because $\Pi _+$ and $\Pi _-$
are non-zero. As (\ref{eq:gyro}) cannot be satisfied, a gyroscope
carried along the track cannot stay parallel to the track. With respect
to any observer on the track, Coriolis and centrifugal acceleration are
linearly independent. The distinguished observer field is characterized
by producing a symmetry between the backward and the forward directions.
Finally, we note that the vertex is not the only point on the
characteristic hyperbola that is distinguished. As an alternative,
we may consider the point which is closest to the origin, denoted by 2
in Figure \ref{fig:hyper}. This gives us a second distinguished observer
field. Physically, it is characterized by the fact that the total
inertial acceleration perpendicular to the track becomes minimal. In
contrast to the (first) distinguished observer field, the second
distinguished observer field is not necessarily unique; there may
be one or two such observer fields, corresponding
to the fact that a circle can be tangent to a hyperbola in one or
two points. More importantly, the second distinguished observer
field is not invariant under conformal transformations; this
follows from our earlier observation that a conformal transformation
corresponds to a translation of the characteristic hyperbola. As in
this article we focus on conformally invariant properties, the
second distinguished observer field is of less interest to us.
\section{Special timelike surfaces of the first kind}\label{sec:first}
If $\Sigma$ is special of the first kind, the angle between the two
asymptotes in Figure~\ref{fig:hyper} is zero. Thus, the characteristic
hyperbola degenerates into a straight line which is run through twice,
with a turning point at the tip of the arrow (\ref{eq:turning}).
This turning point corresponds to the distinguished observer field
which is still determined by (\ref{eq:defdis}). However, now it
satisfies even the stronger condition $\Pi _+ = \Pi _-$.
The distinguished observer field is no longer characterized by the
eigenvalue equations (\ref{eq:eigen}) because these equations are
now satisfied by \emph{any} orthonormal basis $(n , \tau )$. Note,
however, that now (and only in this case) the distinguished
observer field satisfies the ``strong'' eigenvalue
equation $\Pi (n, \, \cdot \, ) = \Lambda \otimes g(n, \, \cdot \, )$,
with an ``eigenvalue'' $\Lambda \, \in \, P^{\perp} (T \Sigma )$.
If $\Sigma$ is special of the first kind everywhere, a track modeled
by $\Sigma$ appears curved to the eye of any observer, because $\Pi _+$
and $\Pi _-$ are non-zero. The distinguished observer field satisfies
(\ref{eq:gyro}) which means that a gyroscope carried by a distinguished
observer remains parallel to the track if it was so initially. For all
other observers this is not true. If we write (\ref{eq:defgCc}) for
the case that $n$ is the distinguished observer field, we read that
the Coriolis acceleration is zero for all $v$ whereas the centrifugal
acceleration is non-zero for all $v \neq 0$.
\section{Special timelike surfaces of the second kind}\label{sec:second}
If $\Sigma$ is special of the second kind, the angle between the two
asymptotes in Figure \ref{fig:hyper} is $180^{\circ}$. Thus, the characteristic
hyperbola degenerates into a straight line which extends from infinity to
infinity, passing through the tip of the arrow $\frac{1}{2} \Pi _0$.
This point corresponds to the distinguished observer field
which is still determined by (\ref{eq:defdis}). However,
now it satisfies the stronger condition $\Pi _+ = - \Pi _-$.
The distinguished observer field cannot be characterized by the
eigenvalue equations (\ref{eq:eigen}) because the bilinear form
$\pi$ is identically zero.
If $\Sigma$ is special of the second kind everywhere, a track modeled
by $\Sigma$ appears curved to the eye of any observer on the track,
as $\Pi _+$ and $\Pi _-$ are non-zero. Condition (\ref{eq:gyro})
cannot be satisfied, so a gyroscope does not remain
parallel to the track if it was so initially, for any observer. If
we write (\ref{eq:defgCc}) for the case that $n$ is the distinguished
observer field, we read that the centrifugal acceleration is zero for
all $v$ whereas the Coriolis acceleration is non-zero for all $v \neq 0$.
The vanishing of the centrifugal force is a measurable property by
which the distinguished observer field is uniquely determined.
\section{Special timelike surfaces of the third kind}\label{sec:third}
Recall that $\Sigma$ is a one-way photon surface if it
is special of the third kind everywhere.
A one-way photon surface is a timelike surface $\Sigma$ that
is ruled by one family of lightlike geodesics, whereas the
other family of lightlike curves in $\Sigma$ is non-geodesic.
This implies that, if $\Sigma$ is the worldsheet of a track,
the track visually appears straight in one direction but
curved in the other.
For a one-way photon surface one of the two vectors that span the
asymptotes in Figure \ref{fig:hyper} becomes zero. Thus, the
characteristic hyperbola degenerates into a straight line which is
run through once, beginning at infinity and then asymptotically
approaching the tip of the arrow $\frac{1}{2} \Pi _0$. There
is no distinguished observer field because one side of equation
(\ref{eq:defdis}) is nonzero and the other is zero for all
orthonormal bases. By the same token, (\ref{eq:gyro}) is never
satisfied because the vector on one side of this equation is always
zero whereas that on the other side is never zero. Hence,
it is impossible to carry a gyroscope along the track
modelled by $\Sigma$ in such a way that its axis stays
parallel to the track.
From (\ref{eq:defgCc}) we read that, on a one-way photon surface,
the Coriolis acceleration $a_{\text{Cor}}$ and the centrifugal
acceleration $a_{\text{cent}}$ are always parallel. Also, we
read that both $a_{\text{Cor}}$ and $a_{\text{cent}}$ are
necessarily non-zero for $v \neq 0$.
One-way photon surfaces can be easily constructed, on any Lorentzian
manifold, in the following way. Choose at each
point of a timelike curve a lightlike vector, smoothly depending
on the foot-point. With each of these lightlike vectors as the
initial condition, solve the geodesic equation. The resulting
lightlike geodesics generate a smooth timelike surface in the
neighborhood of the timelike curve. Generically, this is a
one-way photon surface. (In special cases it may be a photon
surface.)
\section{Special timelike surfaces of the fourth kind}\label{sec:fourth}
We now turn to photon surfaces, i.e. to timelike surfaces which are
everywhere special of the fourth kind. The worldsheet of a track is
a photon surface if and only if it is ruled by two families of
lightlike geodesics. As already outlined, this implies that the
track appears straight to the eye of any observer on the track.
For a photon surface the characteristic hyperbola degenerates into
a single point, situated at the tip of the arrow $\frac{1}{2} \Pi _0$.
This fact clearly indicates that all observer fields have equal
rights, i.e., there is no distinguished observer field.
If $\Sigma$ is special of the fourth kind at a point $p$, of the three
components $\Pi _+$, $\Pi _-$ and $\Pi _0$ only the last one
is different from zero at $p$. Thus,
$
\Pi (u,w) = - \frac{1}{2} g(u,w) \Pi _0 \;
$
at $p$, i.e., the second fundamental form $\Pi$ is a
multiple of the first fundamental form $g$. Points
where this happens are called \emph{umbilic points}.
A submanifold is called \emph{totally umbilic} if
all of its points are umbilic. Thus, a timelike surface
is a photon surface if and only if it is totally umbilic.
For a more detailed discussion of totally umbilic
submanifolds of semi-Riemannian manifolds see
\cite{Perlick2005}.
The defining property $\Pi _+ = \Pi _- = 0$ of photon
surfaces implies that (\ref{eq:gyro}) is satisfied
for all orthonormal bases $(n, \tau )$. Hence, a
gyroscope that is initially tangent to the track
modelled by $\Sigma$ remains tangent to the track
forever, independent of its motion along the track.
This property characterizes photon
surfaces uniquely.
From (\ref{eq:defgCc}) we read that for a photon
surface Coriolis acceleration $a_{\text{Cor}}$ and
centrifugal acceleration $a_{\text{cent}}$ vanish
identically. Again, this property characterizes
photon surfaces uniquely.
The most obvious example for a photon surface is a
timelike plane in Minkowski spacetime. A less trivial
example is the timelike surface $\vartheta = \pi /2$,
$r=3m$ in Schwarzschild spacetime. Inertial forces
and gyroscopic transport on this circular track were
discussed in several papers by Marek
Abramowicz with various co-authors, see e.g.
\cite{AbramowiczCarterLasota1988} and
\cite{Abramowicz1990}.
The existence of a photon surface requires a non-trivial
integrability condition, so it is not guaranteed
in arbitrary Lorentzian manifolds, see
\cite{FoertschHassePerlick2003}. In the same paper we
have given several methods of how to construct photon
surfaces. Also, we have determined all photon surfaces
in conformally flat Lorentzian manifolds, and some
examples of photon surfaces in Schwarzschild and
Goedel spacetimes.
\section{Example: Timelike surfaces in Kerr-Newman spacetime}\label{sec:example}
As an example, let $g$ be the Kerr-Newman metric in Boyer-Lindquist
coordinates (see, e.g., Misner, Thorne and Wheeler
\cite{MisnerThorneWheeler1973}, p.877)
\begin{equation}\label{eq:kerr}
g = - \frac{\Delta}{\rho ^2} \, \big( \, dt \, - \,
a \, \mathrm{sin} ^2 \vartheta \, d \varphi \big) ^2 \, + \,
\frac{\mathrm{sin} ^2 \vartheta}{\rho ^2} \, \big(
(r^2 + a^2) \, d \varphi \, - \, a \, dt \, \big) ^2 \, + \,
\frac{\rho ^2}{\Delta} \, dr^2 \, + \, \rho ^2 \, d \vartheta ^2 \, ,
\end{equation}
where $\rho$ and $\Delta$ are defined by
\begin{equation}\label{eq:rhodelta}
\rho ^2 = r^2 + a^2 \, {\mathrm{cos}} ^2 \vartheta
\quad \text{and} \quad
\Delta = r^2 - 2mr + a^2 + q^2 \, ,
\end{equation}
and $m$, $q$ and $a$ are real constants. We shall assume that
\begin{equation}\label{eq:ma}
0 \, < \, m \: , \quad 0 \, \le \, a \: , \quad \sqrt{a^2 + q ^2} \, \le \, m \, .
\end{equation}
In this case, the Kerr-Newman metric describes the spacetime around a rotating
black hole with mass $m$, charge $q$, and specific angular momentum $a$. The
Kerr-Newman metric (\ref{eq:kerr}) contains the Kerr metric ($q=0$), the
Reissner-Nordstr{\"o}m metric ($a=0$) and the Schwarzschild metric ($q=0$ and $a=0$)
as special cases which are all discussed, in great detail, in Chandrasekhar
\cite{Chandrasekhar1983}.
By (\ref{eq:ma}), the equation $\Delta = 0$ has two real roots,
\begin{equation}\label{eq:hor}
r_{\pm} = m \pm \sqrt{ m^2 - a^2 - q ^2} \, ,
\end{equation}
which determine the two horizons. We shall restrict to the region
\begin{equation}\label{eq:M+}
M : \quad r_+ < r < \infty \, ,
\end{equation}
which is called the {\em domain of outer communication\/} of the Kerr-Newman
black hole.
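As a quick numerical sanity check (not part of the text), one can verify that the radii (\ref{eq:hor}) are indeed the zeros of $\Delta$ for sample parameter values satisfying (\ref{eq:ma}):

```python
import numpy as np

m, a, q = 1.0, 0.6, 0.5          # sample values with a^2 + q^2 <= m^2
assert a**2 + q**2 <= m**2

disc = np.sqrt(m**2 - a**2 - q**2)
for r in (m + disc, m - disc):   # the two horizon radii r_+ and r_-
    assert np.isclose(r**2 - 2*m*r + a**2 + q**2, 0.0)
```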
For $0 < \vartheta < \pi$ and $r_+ < r < \infty$,
let $\Sigma _{\vartheta, r}$ denote the set of all points in $M$ where $\vartheta$
and $r$ take the respective values. Clearly, $\Sigma _{\vartheta , r}$ is a
smooth two-dimensional timelike submanifold of $M$ homeomorphic to the
cylinder $\mathbb{R} \times S^1$, parametrized by the coordinates $t$ and
$\varphi$. We may interpret $\Sigma _{\vartheta , r}$ as the worldsheet of
a circular track around the rotation axis of the black hole. We want to investigate
for which values of $\vartheta$ and $r$
the timelike surface $\Sigma _{\vartheta, r}$ is special.
We choose the orthonormal basis
\begin{equation}\label{eq:E}
n = \frac{1}{\rho \, \sqrt{\Delta}} \Big( (r^2 + a^2) \partial _t
+ a \partial _{\varphi} \Big) \: , \qquad
\tau = \frac{1}{\rho \, {\mathrm{sin}} \, \vartheta} \,
\big( \partial _{\varphi} + a \, {\mathrm{sin}} ^2 \vartheta \, \partial _t \big) \: .
\end{equation}
By a straight-forward calculation, we find the components $\Pi _{\pm}$
of the second fundamental form with respect to the lightlike basis
$l_{\pm} = n \pm \tau$,
\begin{gather}\label{eq:PipmKN2}
\Pi _{\pm} \, = \, - \, \big( \, 2a^2 \, \text{sin}^2 \, \vartheta
+ \rho^2 \pm 2a\sqrt{\Delta} \text{sin} \, \vartheta \, \big)
\,
\frac{\text{cos} \, \vartheta}{\rho^4 \text{sin} \, \vartheta} \,
\partial _{\vartheta}
\\
\nonumber
\, - \,
\big( 2r \Delta - (r-m) \rho ^2 \pm 2ar \sqrt{\Delta} \, \text{sin} \,
\vartheta \big) \, \frac{1}{\rho^3 } \, \partial _r \; .
\end{gather}
$\Pi _+$ and $\Pi _-$ are linearly dependent if and only if $\vartheta = \pi /2$. Thus,
our circular track gives a special timelike surface if and only if it is in the equatorial plane.
The equation $\Pi _{\pm} =0$ is equivalent to $\vartheta = \pi /2$ and
\begin{equation}\label{eq:crit}
2 \, r \, \Delta - (r-m) \, r^2
\pm 2 \, a \, r \, \sqrt{\Delta} \, =0 \; .
\end{equation}
For each sign, there is precisely one real solution $r_{\pm}^{\mathrm{ph}}$ to
this equation in the domain of outer communication. Thus, among our timelike
surfaces $\Sigma _{\vartheta , r}$ there are precisely two one-way photon
surfaces. They correspond to the well known co-rotating and counter-rotating
circular photon paths in the Kerr-Newman metric. In the Reissner-Nordstr{\"o}m
case, $a=0$, they coincide and give a photon surface at
$r= \frac{3m}{2} + \sqrt{\frac{9 m^2}{4} - 2 q^2}$ (cf., e.g.,
Chandrasekhar \cite{Chandrasekhar1983}, p.218).
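These statements can be cross-checked numerically. The following sketch (an illustration; the sign convention for co- versus counter-rotation is not fixed here) solves (\ref{eq:crit}) on the domain of outer communication by bisection and reproduces the Schwarzschild value $r = 3m$ as well as the Reissner-Nordstr{\"o}m radius quoted above:

```python
import numpy as np

def Delta(r, m, a, q):
    return r**2 - 2*m*r + a**2 + q**2

def crit(r, sign, m, a, q):
    # left-hand side of (eq:crit)
    return (2*r*Delta(r, m, a, q) - (r - m)*r**2
            + sign * 2*a*r*np.sqrt(Delta(r, m, a, q)))

def photon_radius(sign, m, a, q, tol=1e-12):
    # bisection on (r_+, 10m): crit < 0 just outside the horizon
    # (where Delta -> 0) and crit > 0 for large r
    lo = m + np.sqrt(m**2 - a**2 - q**2) + 1e-9
    hi = 10*m
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if crit(lo, sign, m, a, q) * crit(mid, sign, m, a, q) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

# Schwarzschild limit: (eq:crit) reduces to r^2 (r - 3m) = 0
assert np.isclose(photon_radius(+1, 1.0, 0.0, 0.0), 3.0, atol=1e-6)

# Reissner-Nordstroem: both signs give r = 3m/2 + sqrt(9m^2/4 - 2q^2)
m, q = 1.0, 0.5
r_RN = 1.5*m + np.sqrt(2.25*m**2 - 2*q**2)
assert np.isclose(photon_radius(+1, m, 0.0, q), r_RN, atol=1e-6)
assert np.isclose(photon_radius(-1, m, 0.0, q), r_RN, atol=1e-6)
```

For generic Kerr-Newman parameters (e.g. $m=1$, $a=0.6$, $q=0.3$) the two signs give two distinct radii $r_{\pm}^{\mathrm{ph}}$, corresponding to the two circular photon orbits.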
For $\vartheta = \pi /2$ and $r > r_{+}^{\mathrm{ph}}$ or $r < r_{-}^{\mathrm{ph}}$,
the timelike surface $\Sigma _{\vartheta , r}$ is special of the first kind, for
$\vartheta = \pi /2$ and $r_{-}^{\mathrm{ph}} < r < r_{+}^{\mathrm{ph}}$ it is
special of the second kind.
\section{Copyright}
All papers submitted for publication by AAAI Press must be accompanied by a valid signed copyright form. They must also contain the AAAI copyright notice at the bottom of the first page of the paper. There are no exceptions to these requirements. If you fail to provide us with a signed copyright form or disable the copyright notice, we will be unable to publish your paper. There are \textbf{no exceptions} to this policy. You will find a PDF version of the AAAI copyright form in the AAAI AuthorKit. Please see the specific instructions for your conference for submission details.
\section{Formatting Requirements in Brief}
We need source and PDF files that can be used in a variety of ways and can be output on a variety of devices. The design and appearance of the paper is strictly governed by the aaai style file (aaai22.sty).
\textbf{You must not make any changes to the aaai style file, nor use any commands, packages, style files, or macros within your own paper that alter that design, including, but not limited to spacing, floats, margins, fonts, font size, and appearance.} AAAI imposes requirements on your source and PDF files that must be followed. Most of these requirements are based on our efforts to standardize conference manuscript properties and layout. All papers submitted to AAAI for publication will be recompiled for standardization purposes. Consequently, every paper submission must comply with the following requirements:
\begin{itemize}
\item Your .tex file must compile in PDF\LaTeX{} (you may not include .ps or .eps figure files).
\item All fonts must be embedded in the PDF file --- including your figures.
\item Modifications to the style file, whether directly or via commands in your document may not ever be made, most especially when made in an effort to avoid extra page charges or make your paper fit in a specific number of pages.
\item No type 3 fonts may be used (even in illustrations).
\item You may not alter the spacing above and below captions, figures, headings, and subheadings.
\item You may not alter the font sizes of text elements, footnotes, heading elements, captions, or title information (for references and mathematics, please see the limited exceptions provided herein).
\item You may not alter the line spacing of text.
\item Your title must follow Title Case capitalization rules (not sentence case).
\item Your .tex file must include completed metadata to pass-through to the PDF (see PDFINFO below).
\item \LaTeX{} documents must use the Times or Nimbus font package (you may not use Computer Modern for the text of your paper).
\item No \LaTeX{} 209 documents may be used or submitted.
\item Your source must not require use of fonts for non-Roman alphabets within the text itself. If your paper includes symbols in other languages (such as, but not limited to, Arabic, Chinese, Hebrew, Japanese, Thai, Russian and other Cyrillic languages), you must restrict their use to bit-mapped figures. Fonts that require non-English language support (CID and Identity-H) must be converted to outlines or 300 dpi bitmap or removed from the document (even if they are in a graphics file embedded in the document).
\item Two-column format in AAAI style is required for all papers.
\item The paper size for final submission must be US letter without exception.
\item The source file must exactly match the PDF.
\item The document margins may not be exceeded (no overfull boxes).
\item The number of pages and the file size must be as specified for your event.
\item No document may be password protected.
\item Neither the PDFs nor the source may contain any embedded links or bookmarks (no hyperref or navigator packages).
\item Your source and PDF must not have any page numbers, footers, or headers (no pagestyle commands).
\item Your PDF must be compatible with Acrobat 5 or higher.
\item Your \LaTeX{} source file (excluding references) must consist of a \textbf{single} file (use of the ``input'' command is not allowed).
\item Your graphics must be sized appropriately outside of \LaTeX{} (do not use the ``clip'' or ``trim'' commands).
\end{itemize}
If you do not follow these requirements, your paper will be returned to you to correct the deficiencies.
\section{What Files to Submit}
You must submit the following items to ensure that your paper is published:
\begin{itemize}
\item A fully-compliant PDF file that includes PDF metadata.
\item Your \LaTeX{} source file submitted as a \textbf{single} .tex file (do not use the ``input'' command to include sections of your paper --- every section must be in the single source file). (The only allowable exception is the .bib file, which should be included separately.)
\item The bibliography (.bib) file(s).
\item Your source must compile on our system, which includes only standard \LaTeX{} 2020 TeXLive support files.
\item Only the graphics files used in compiling paper.
\item The \LaTeX{}-generated files (e.g. .aux, .bbl file, PDF, etc.).
\end{itemize}
Your \LaTeX{} source will be reviewed and recompiled on our system (if it does not compile, your paper will be returned to you). \textbf{Do not submit your source in multiple text files.} Your single \LaTeX{} source file must include all your text, your bibliography (formatted using aaai22.bst), and any custom macros.
Your files should work without any supporting files (other than the program itself) on any computer with a standard \LaTeX{} distribution.
\textbf{Do not send files that are not actually used in the paper.} We don't want you to send us any files not needed for compiling your paper, including, for example, this instructions file, unused graphics files, style files, additional material sent for the purpose of the paper review, and so forth.
\textbf{Obsolete style files.} The commands for some common packages (such as some used for algorithms), may have changed. Please be certain that you are not compiling your paper using old or obsolete style files.
\textbf{Final Archive.} Place your PDF and source files in a single archive which should be compressed using .zip. The final file size may not exceed 10 MB.
Name your source file with the last (family) name of the first author, even if that is not you.
\section{Using \LaTeX{} to Format Your Paper}
The latest version of the AAAI style file is available on AAAI's website. Download this file and place it in the \TeX\ search path. Placing it in the same directory as the paper should also work. You must download the latest version of the complete AAAI Author Kit so that you will have the latest instruction set and style file.
\subsection{Document Preamble}
In the \LaTeX{} source for your paper, you \textbf{must} place the following lines as shown in the example in this subsection. This command set-up is for three authors. Add or subtract author and address lines as necessary, and uncomment the portions that apply to you. In most instances, this is all you need to do to format your paper in the Times font. The helvet package will cause Helvetica to be used for sans serif. These files are part of the PSNFSS2e package, which is freely available from many Internet sites (and is often part of a standard installation).
Leave the setcounter for section number depth commented out and set at 0 unless you want to add section numbers to your paper. If you do add section numbers, you must uncomment this line and change the number to 1 (for section numbers), or 2 (for section and subsection numbers). The style file will not work properly with numbering of subsubsections, so do not use a number higher than 2.
\subsubsection{The Following Must Appear in Your Preamble}
\begin{quote}
\begin{scriptsize}\begin{verbatim}
\def\year{2022}\relax
\documentclass[letterpaper]{article}
\usepackage{aaai22}
\usepackage{times}
\usepackage{helvet}
\usepackage{courier}
\usepackage[hyphens]{url}
\usepackage{graphicx}
\urlstyle{rm}
\def\UrlFont{\rm}
\usepackage{graphicx}
\usepackage{natbib}
\usepackage{caption}
\DeclareCaptionStyle{ruled}%
{labelfont=normalfont,labelsep=colon,strut=off}
\frenchspacing
\setlength{\pdfpagewidth}{8.5in}
\setlength{\pdfpageheight}{11in}
\pdfinfo{
/Title (AAAI Press Formatting Instructions for Authors
Using LaTeX -- A Guide)
/Author (AAAI Press Staff, Pater Patel Schneider,
Sunil Issar, J. Scott Penberthy, George Ferguson,
Hans Guesgen, Francisco Cruz, Marc Pujol-Gonzalez)
/TemplateVersion (2022.1)
}
\end{verbatim}\end{scriptsize}
\end{quote}
\subsection{Preparing Your Paper}
After the preamble above, you should prepare your paper as follows:
\begin{quote}
\begin{scriptsize}\begin{verbatim}
\begin{document}
\maketitle
\begin{abstract}
\end{abstract}\end{verbatim}\end{scriptsize}
\end{quote}
\noindent You should then continue with the body of your paper. Your paper must conclude with the references.
\section{Supplementary Material}
\paragraph{Dataset statistics}
Table \ref{tab:dataset_statistics} shows the statistics of the three datasets ICEWS14, ICEWS18, and ICEWS0515. $|\mathcal{X}|$ denotes the cardinality of a set $\mathcal{X}$.
\begin{table}[h]
\centering
\small
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|c c c c c c}
\toprule
Dataset & $|\mathcal{G}_{\mathrm{train}}|$ & $|\mathcal{G}_{\mathrm{valid}}|$ & $|\mathcal{G}_{\mathrm{test}}|$ & $|\mathcal{E}|$ & $|\mathcal{R}|$ & $|\mathcal{T}|$\\
\midrule
ICEWS14 & 63,685 & 13,823 & 13,222 & 7,128 & 230 & 365\\
ICEWS18 & 373,018 & 45,995 & 49,545 & 23,033 & 256 & 304\\
ICEWS0515 & 322,958 & 69,224 & 69,147 & 10,488 & 251 & 4,017\\
\bottomrule
\end{tabular}}
\caption{Dataset statistics with daily time resolution for all three ICEWS datasets.}
\label{tab:dataset_statistics}
\end{table}
\paragraph{Experimental details}
All experiments were conducted on a Linux machine with 16 CPU cores and 32 GB RAM.
The set of tested hyperparameter ranges and best parameter values for TLogic are displayed in Table \ref{tab:hyperparameters}. Due to memory constraints, the time window $w$ for ICEWS18 is set to 200 and for ICEWS0515 to 1000.
The best hyperparameter values are chosen based on the MRR on the validation set.
Due to the small variance of our approach, the shown results are based on one algorithm run. A random seed of 12 is fixed for the rule learning component to obtain reproducible results.
\begin{table}[h]
\centering
\small
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|c c}
\toprule
Hyperparameter & Range & Best\\
\midrule
Number of walks $n$ & $\{10, 25, 50, 100, 200\}$ & 200\\
\midrule
Transition distribution $d$ & $\{\text{unif}, \exp\}$ & $\exp$\\
\midrule
Rule lengths $\mathcal{L}$ & $\{\{1\}, \{2\}, \{3\}, \{1,2,3\}\}$ & $\{1,2,3\}$\\
\midrule
Time window $w$ & $\{30, 90, 150, 210, 270, \infty\}$ & $\infty$\\
\midrule
Minimum candidates $k$ & $\{10, 20\}$ & 20\\
\midrule
$\alpha$ (score function $f$) & $\{0, 0.25, 0.5, 0.75, 1\}$ & 0.5\\
\midrule
$\lambda$ (score function $f$) & $\{0.01, 0.1, 0.5, 1\}$ & 0.1\\
\bottomrule
\end{tabular}}
\caption{Hyperparameter ranges and best parameter values.}
\label{tab:hyperparameters}
\end{table}
All results in the appendix refer to the validation set of ICEWS14. However, the observations are similar for the test set and the other two datasets. All experiments use the best set of hyperparameters, where only the analyzed parameters are modified.
\paragraph{Object distribution baseline}
We apply a simple object distribution baseline when there are no rules for the query relation or no matching body groundings in the graph. This baseline is only added for completeness and does not improve the results in a significant way.
The proportion of cases where there are no rules for the test query relation is 15/26,444 = 0.00056 for ICEWS14, 21/99,090 = 0.00021 for ICEWS18, and 9/138,294 = 0.00007 for ICEWS0515.
The proportion of cases where there are no matching body groundings is 880/26,444 = 0.0333 for ICEWS14, 2,535/99,090 = 0.0256 for ICEWS18, and 2,375/138,294 = 0.0172 for ICEWS0515.
\paragraph{Number of walks and transition distribution}
Table \ref{tab:num_walks_transition} shows the results for different choices of numbers of walks and transition distributions.
The performance on all metrics increases with the number of walks, and the exponentially weighted transition distribution consistently outperforms uniform sampling.
\begin{table}[h]
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{c c|c c c c}
\toprule
Walks & Transition & MRR & h@1 & h@3 & h@10\\
\midrule
10 & Unif & 0.3818 & 0.2983 & 0.4307 & 0.5404\\
10 & Exp & 0.3906 & 0.3054 & 0.4408 & 0.5530\\
\midrule \relax
25 & Unif & 0.4098 & 0.3196 & 0.4614 & 0.5803\\
25 & Exp & 0.4175 & 0.3270 & 0.4710 & 0.5875\\
\midrule \relax
50 & Unif & 0.4219 & 0.3307 & 0.4754 & 0.5947\\
50 & Exp & 0.4294 & 0.3375 & 0.4837 & 0.6024\\
\midrule \relax
100 & Unif & 0.4266 & 0.3315 & 0.4817 & 0.6057\\
100 & Exp & 0.4324 & 0.3397 & 0.4861 & 0.6092\\
\midrule \relax
200 & Unif & 0.4312 & 0.3366 & 0.4851 & 0.6114\\
200 & Exp & 0.4373 & 0.3434 & 0.4916 & 0.6161\\
\bottomrule
\end{tabular}}
\caption{Results for different choices of numbers of walks and transition distributions.}
\label{tab:num_walks_transition}
\end{table}
\paragraph{Rule length}
Table \ref{tab:rule_length} indicates that using rules of all lengths for application results in the best performance. Learning only cyclic rules probably makes it more difficult to find rules of length 2, where the rule body must constitute a path with no recurring entities, leading to fewer rules and body groundings in the graph.
\begin{table}[h]
\centering
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{c|c c c c}
\toprule
Rule length & MRR & h@1 & h@3 & h@10\\
\midrule
1 & 0.4116 & 0.3168 & 0.4708 & 0.5909\\
\midrule
2 & 0.1563 & 0.0648 & 0.1776 & 0.3597\\
\midrule
3 & 0.4097 & 0.3213 & 0.4594 & 0.5778\\
\midrule
1,2,3 & 0.4373 & 0.3434 & 0.4916 & 0.6161\\
\bottomrule
\end{tabular}}
\caption{Results for different choices of rule lengths.}
\label{tab:rule_length}
\end{table}
\paragraph{Time window}
Generally, the larger the time window, the better the performance (see Table \ref{tab:time_window}). If taking all previous timestamps leads to excessive memory usage, the time window should be decreased.
\begin{table}[h]
\centering
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{c|c c c c}
\toprule
Time window & MRR & h@1 & h@3 & h@10\\
\midrule
30 & 0.3842 & 0.3080 & 0.4294 & 0.5281\\
\midrule
90 & 0.4137 & 0.3287 & 0.4627 & 0.5750\\
\midrule
150 & 0.4254 & 0.3368 & 0.4766 & 0.5950\\
\midrule
210 & 0.4311 & 0.3403 & 0.4835 & 0.6035\\
\midrule
270 & 0.4356 & 0.3426 & 0.4892 & 0.6131\\
\midrule
$\infty$ & 0.4373 & 0.3434 & 0.4916 & 0.6161\\
\bottomrule
\end{tabular}}
\caption{Results for different choices of time windows.}
\label{tab:time_window}
\end{table}
\paragraph{Score function}
Using the best hyperparameter values for $\alpha$ and $\lambda$, Table \ref{tab:score_function} shows the results when only the rules' confidences are used for scoring (first row), when only the exponential component is used (second row), and for the combined score function (last row). The combination yields the best overall performance. The optimal balance between the two terms, however, depends on the application and the prioritized metric.
\begin{table}[h]
\centering
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{c c|c c c c}
\toprule
$\alpha$ & $\lambda$ & MRR & h@1 & h@3 & h@10\\
\midrule
$1$ & arbitrary & 0.3869 & 0.2806 & 0.4444 & 0.5918\\
\midrule
$0$ & 0.1 & 0.4077 & 0.3515 & 0.4820 & 0.6051\\
\midrule
$0.5$ & 0.1 & 0.4373 & 0.3434 & 0.4916 & 0.6161\\
\bottomrule
\end{tabular}}
\caption{Results for different parameter values in the score function $f$.}
\label{tab:score_function}
\end{table}
\paragraph{Rule learning}
Figures \ref{fig:number_rules_1} and \ref{fig:number_rules_2} show the number of rules learned under the two transition distributions. The total number of learned rules is similar for the uniform and the exponential distribution, but there is a large difference for rules of lengths 1 and 3.
The exponential distribution leads to more successful longer walks and thus more longer rules, while the uniform distribution leads to a better exploration of the neighborhood around the start node for shorter walks.
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\columnwidth]{number_rules_1.png}
\caption{Total number of learned rules and number of rules for length 1.}
\label{fig:number_rules_1}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\columnwidth]{number_rules_2.png}
\caption{Number of rules for lengths 2 and 3.}
\label{fig:number_rules_2}
\end{figure}
\paragraph{Training and inference time}
The rule learning and rule application times are shown in Figures \ref{fig:learning_time} and \ref{fig:application_time} as a function of the number of extracted temporal walks during learning.
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\columnwidth]{learning_time.png}
\caption{Rule learning time.}
\label{fig:learning_time}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=0.8\columnwidth]{application_time.png}
\caption{Rule application time.}
\label{fig:application_time}
\end{figure}
The worst-case time complexity for learning rules of length $l$ is $\mathcal{O}(|\mathcal{R}|nlDb)$, where $n$ is the number of walks, $D$ the maximum node degree in the training set, and $b$ the number of body samples for estimating the confidence.
The worst-case time complexity for inference is given by $\mathcal{O}(|\mathcal{G}| + |\mathcal{TR}_{r^q}|D^L|\mathcal{E}|\log(k))$, where $L$ is the maximum rule length in $\mathcal{TR}_{r^q}$ and $k$ the minimum number of candidates.
More detailed steps of the algorithms for understanding these complexity estimations are given by Algorithm \ref{alg:rule_learning_detailed} and Algorithm \ref{alg:rule_application_detailed}.
\begin{figure*}[h]
\centering
\includegraphics[width=1.8\columnwidth]{framework.png}
\caption{Overall framework.}
\label{fig:framework}
\end{figure*}
\begin{algorithm*}[h]
\caption{Rule learning (detailed)}
\label{alg:rule_learning_detailed}
\textbf{Input}: Temporal knowledge graph $\mathcal{G}$.\\
\textbf{Parameters}: Rule lengths $\mathcal{L} \subset \mathbb{N}$, number of temporal random walks $n \in \mathbb{N}$, transition distribution $d \in \{\mathrm{unif}, \exp\}$.\\
\textbf{Output}: Temporal logical rules $\mathcal{TR}$.
\begin{algorithmic}[1]
\FOR{relation $r \in \mathcal{R}$}
\FOR{$l \in \mathcal{L}$}
\STATE{$\mathcal{TR}_r^l \leftarrow \emptyset$}
\FOR{$i \in [n]$}
\STATE {\textbf{According to transition distribution $d$, sample a temporal random walk $W$ of length \mbox{$l+1$} with $t_{l+1} > t_l$.}
\hfill $\triangleright$~See~\eqref{eq:temporal_walk}.\\
Sample uniformly a start edge $(e_s, r, e_o, t)$ with edge type $r$.}
\FOR{step $m \in \{2, \dots, l+1\}$}
\STATE{Retrieve adjacent edges of current object node.}
\IF{$m = 2$}
\STATE{Filter out all edges with timestamps greater than or equal to the current timestamp.}
\ELSE
\STATE{Filter out all edges with timestamps greater than the current timestamp.\\
Filter out the inverse edge of the previously sampled edge.}
\ENDIF
\IF{$m = l+1$}
\STATE{Retrieve all filtered edges that connect the current object to the source of the walk.}
\ENDIF
\STATE{Sample the next edge from the filtered edges according to distribution $d$.\\
\textbf{break} if there are no feasible edges because of temporal or cyclic constraints.}
\ENDFOR
\STATE{\textbf{Transform walk $W$ to the corresponding temporal logical rule $R$.} \hfill $\triangleright$~See~\eqref{eq:temporal_rule}.\\
Save information about the head relation and body relations.\\
Define variable constraints for recurring entities.}
\STATE{\textbf{Estimate the confidence of rule $R$.} \\
Sample $b$ body groundings. For each step $m \in \{1, \dots, l\}$, filter the edges for the correct body relation as well as for the timestamps required to fulfill the temporal constraints.\\
For successful body groundings, check the variable constraints.\\
For each unique body, check if the rule head exists in the graph.\\
Calculate rule support / body support.}
\STATE{$\mathcal{TR}_r^l \leftarrow \mathcal{TR}_r^l \cup \{(R, \text{conf}(R))\}$}
\ENDFOR
\ENDFOR
\STATE{$\mathcal{TR}_r \leftarrow \cup_{l \in \mathcal{L}} \mathcal{TR}_r^l$}
\ENDFOR
\STATE{$\mathcal{TR} \leftarrow \cup_{r \in \mathcal{R}} \mathcal{TR}_r$}
\STATE \textbf{return} $\mathcal{TR}$
\end{algorithmic}
\end{algorithm*}
\begin{algorithm*}[h]
\caption{Rule application (detailed)}
\label{alg:rule_application_detailed}
\textbf{Input}: Test query $q = (e^q, r^q, ?, t^q)$, temporal logical rules $\mathcal{TR}$, temporal knowledge graph $\mathcal{G}$.\\
\textbf{Parameters}: Time window $w \in \mathbb{N} \cup \{\infty\}$, minimum number of candidates $k$, score function $f$.\\
\textbf{Output}: Answer candidates $\mathcal{C}$.
\begin{algorithmic}[1]
\STATE{$\mathcal{C} \leftarrow \emptyset$
$\triangleright$ Apply the rules in $\mathcal{TR}$ by decreasing confidence.}
\STATE{\textbf{Retrieve subgraph $\mathcal{SG} \subset \mathcal{G}$ with timestamps $t \in [t^q-w, t^q)$.}\\
$\triangleright$ Only done if the timestamp changes. The queries in the test set are sorted by timestamp.\\
Retrieve edges with timestamps $t \in [t^q-w, t^q)$.\\
Store edges for each relation in a dictionary.}
\IF{$\mathcal{TR}_{r^q} \neq \emptyset$ }
\FOR{rule $R \in \mathcal{TR}_{r^q}$}
\STATE{\textbf{Find all body groundings of $R$ in $\mathcal{SG}$.}\\
Retrieve edges that could constitute walks matching the rule's body. First, retrieve edges whose subject matches $e^q$ and whose relation matches the first relation in the rule body. Then, retrieve edges whose subject matches one of the current targets and whose relation matches the next relation in the rule body.\\
Generate complete walks by joining edges whose source entity matches the target entity of the previous edge.\\
Delete all walks that do not comply with the time constraints.\\
Check variable constraints, and delete the walks that do not comply with the variable constraints.}
\STATE{Retrieve candidates $\mathcal{C}(R)$ from the target entities of the walks.}
\FOR{$c \in \mathcal{C}(R)$}
\STATE{Calculate score $f(R,c)$. \hfill $\triangleright$ See~\eqref{eq:score_function}.}
\STATE{$\mathcal{C} \leftarrow \mathcal{C} \cup \{(c, f(R,c))\}$}
\ENDFOR
\IF{$|\{c \mid \exists R: (c, f(R,c)) \in \mathcal{C}\}| \geq k$}
\STATE{\textbf{break}}
\ENDIF
\ENDFOR
\ENDIF
\STATE \textbf{return} $\mathcal{C}$
\end{algorithmic}
\end{algorithm*}
\section{Introduction}
Knowledge graphs (KGs) structure factual information in the form of triples $(e_s, r ,e_o)$, where $e_s$ and $e_o$ correspond to entities in the real world and $r$ to a binary relation, e.\,g., \textit{(Anna, born in, Paris)}.
This knowledge representation leads to an interpretation as a directed multigraph, where entities are identified with nodes and relations with edge types. Each edge $(e_s, r ,e_o)$ in the KG encodes an observed fact, where the source node $e_s$ corresponds to the subject entity, the target node $e_o$ to the object entity, and the edge type $r$ to the predicate of the factual statement.
Some real-world information also includes a temporal dimension, e.\,g., the event \textit{(Anna, born in, Paris)} happened on a specific date. To model the large amount of available event data that induce complex interactions between entities over time, temporal knowledge graphs (tKGs) have been introduced. Temporal KGs extend the triples to quadruples $(e_s, r, e_o, t)$ to integrate a timestamp or time range $t$, where $t$ indicates the time validity of the static event $(e_s, r, e_o)$, e.\,g., \mbox{\textit{(Angela Merkel, visit, China, 2014/07/04)}}. Figure~\ref{fig:tkg_example} visualizes a subgraph from the dataset ICEWS14 as an example of a tKG. In this work, we focus on tKGs where each edge is equipped with a single timestamp.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{tkg_example.png}
\caption{A subgraph from the dataset ICEWS14 with the entities \textit{Angela Merkel, Barack Obama, France}, and \textit{China}. The timestamps are displayed in the format yy/mm/dd. The dotted blue line represents the correct answer to the query \textit{(Angela Merkel, consult, ?, 2014/08/09)}. Previous interactions between \textit{Angela Merkel} and \textit{Barack Obama} can be interpreted as an explanation for the prediction.}
\label{fig:tkg_example}
\end{figure}
One of the common tasks on KGs is link prediction, which finds application in areas such as recommender systems~\cite{nectr.hildebrandt.2019}, knowledge base completion~\cite{convkb.nguyen.2018}, and drug repurposing~\cite{polo.liu.2021}.
Taking the additional temporal dimension into account, it is of special interest to forecast events for future timestamps based on past information. Notable real-world applications that rely on accurate event forecasting are, e.\,g., clinical decision support, supply chain management, and extreme events modeling.
In this work, we address link forecasting on tKGs, where we consider queries $(e_s, r, ?, t)$ for a timestamp $t$ that has not been seen during training.
Several embedding-based methods have been introduced for tKGs to solve link prediction and forecasting (link prediction with future timestamps), e.\,g., TTransE~\cite{ttranse.leblay.2018}, TNTComplEx~\cite{tntcomplex.lacroix.2020}, and RE-Net~\cite{renet.jin.2019}.
The underlying principle is to project the entities and relations into a low-dimensional vector space while preserving the topology and temporal dynamics of the tKG. These methods can learn the complex patterns that lead to an event but often lack transparency and interpretability.
To increase the transparency and trustworthiness of the solutions, human-understandable explanations are necessary, which can be provided by logical rules.
However, the manual creation of rules is often difficult due to the complex nature of events. Domain experts cannot articulate the conditions for the occurrence of an event sufficiently formally to express this knowledge as rules, which leads to a problem termed the knowledge acquisition bottleneck. Generally, symbolic methods that make use of logical rules tend to suffer from scalability issues, which makes them impractical for application to large real-world datasets.
We propose TLogic that automatically mines cyclic temporal logical rules by extracting temporal random walks from the graph. We achieve both high predictive performance and time-consistent explanations in the form of temporal rules, which conform to the observation that the occurrence of an event is usually triggered by previous events.
The main contributions of this work are summarized as follows:
\begin{itemize}
\item We introduce TLogic, a novel symbolic framework based on temporal random walks in temporal knowledge graphs. It is the first approach that directly learns temporal logical rules from tKGs and applies these rules to the link forecasting task.
\item Our approach provides explicit and human-readable explanations in the form of temporal logical rules and is scalable to large datasets.
\item We conduct experiments on three benchmark datasets (ICEWS14, ICEWS18, and ICEWS0515) and show better overall performance compared with state-of-the-art baselines.
\item We demonstrate the effectiveness of our method in the inductive setting where our learned rules are transferred to a related dataset with a common vocabulary.
\end{itemize}
\section{Related Work}
Subsymbolic machine learning methods, e.\,g., embedding-based algorithms, have achieved success for the link prediction task on static KGs. Well-known methods include RESCAL~\cite{rescal.nickel.2011}, TransE~\cite{transe.bordes.2013}, DistMult~\cite{distmult.yang.2015}, and ComplEx~\cite{complex.trouillon.2016} as well as the graph convolutional approaches R-GCN~\cite{rgcn.schlichtkrull.2018} and CompGCN~\cite{compgcn.vashishth.2020}.
Several approaches have been recently proposed to handle tKGs, such as TTransE~\cite{ttranse.leblay.2018}, TA-DistMult~\cite{ta-distmult-transe.garcia-duran.2018}, DE-SimplE~\cite{de-simple.goel.2020}, TNTComplEx~\cite{tntcomplex.lacroix.2020}, CyGNet~\cite{cygnet.zhu.2021}, RE-Net~\cite{renet.jin.2019}, and xERTE~\cite{xerte.han.2021}. The main idea of these methods is to explicitly learn embeddings for timestamps or to integrate temporal information into the entity or relation embeddings.
However, the black-box property of embeddings makes it difficult for humans to understand the predictions. Moreover, approaches with shallow embeddings are not suitable for an inductive setting with previously unseen entities, relations, or timestamps.
From the above methods, only CyGNet, RE-Net, and xERTE are designed for the forecasting task. xERTE is also able to provide explanations by extracting relevant subgraphs around the query subject.
Symbolic approaches for link prediction on KGs like AMIE+~\cite{amie+.galarraga.2015} and AnyBURL~\cite{anyburl.meilicke.2019} mine logical rules from the dataset, which are then applied to predict new links. StreamLearner~\cite{streamlearner.omran.2019} is one of the first methods for learning temporal rules. It employs a static rule learner to generate rules, which are then generalized to the temporal domain. However, they only consider a rather restricted set of temporal rules, where all body atoms have the same timestamp.
Another class of approaches is based on random walks in the graph, where the walks can support an interpretable explanation for the predictions. For example, AnyBURL samples random walks for generating rules. The methods dynnode2vec~\cite{dynnode2vec.mahdavi.2018} and change2vec~\cite{change2vec.bian.2019} alternately extract random walks on tKG snapshots and learn parameters for node embeddings, but they do not capture temporal patterns within the random walks. \citet{ctdne.nguyen.2018} extend the concept of random walks to temporal random walks on continuous-time dynamic networks for learning node embeddings, where the sequence of edges in the walk only moves forward in time.
\section{Preliminaries}
Let $[n] := \{1, 2, \dots, n\}$.
\paragraph{Temporal knowledge graph}
Let $\mathcal{E}$ denote the set of entities, $\mathcal{R}$ the set of relations, and $\mathcal{T}$ the set of timestamps.
A \textit{temporal knowledge graph} (tKG) is a collection of facts $\mathcal{G} \subset \mathcal{E} \times \mathcal{R} \times \mathcal{E} \times \mathcal{T}$, where each fact is represented by a quadruple $(e_s, r, e_o, t)$.
The quadruple $(e_s, r, e_o, t)$ is also called link or edge, and it indicates a connection between the subject entity $e_s \in \mathcal{E}$ and the object entity $e_o \in \mathcal{E}$ via the relation $r \in \mathcal{R}$.
The timestamp $t \in \mathcal{T}$ implies the occurrence of the event $(e_s, r, e_o)$ at time $t$, where $t$ can be measured in units such as hour, day, and year.
For two timestamps $t$ and $\hat{t}$, we denote the fact that $t$ occurs earlier than $\hat{t}$ by $t < \hat{t}$. If, additionally, $t$ may represent the same time as $\hat{t}$, we write $t \leq \hat{t}$.
We define for each edge $(e_s, r, e_o, t)$ an inverse edge $(e_o, r^{-1}, e_s, t)$ that interchanges the positions of the subject and object entity to allow the random walker to move along the edge in both directions. The relation $r^{-1} \in \mathcal{R}$ is called the inverse relation of $r$.
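The inverse-edge augmentation can be sketched as follows. This is a minimal illustration, not the paper's implementation; edges are `(subject, relation, object, timestamp)` tuples, and marking inverse relations with a `^-1` suffix is an illustrative convention of ours.

```python
def add_inverse_edges(quadruples):
    """Augment a tKG edge list with inverse edges (e_o, r^-1, e_s, t),
    so a random walker can traverse every edge in both directions."""
    def invert(r):
        # applying the inversion twice recovers the original relation
        return r[:-3] if r.endswith("^-1") else r + "^-1"

    augmented = list(quadruples)
    for (e_s, r, e_o, t) in quadruples:
        augmented.append((e_o, invert(r), e_s, t))
    return augmented
```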
\paragraph{Link forecasting}
The goal of the \textit{link forecasting} task is to predict new links for future timestamps. Given a query with a previously unseen timestamp $(e_s, r, ?, t)$, we want to identify a ranked list of object candidates that are most likely to complete the query. For subject prediction, we formulate the query as $(e_o, r^{-1}, ?, t)$.
\paragraph{Temporal random walk}
A \textit{non-increasing temporal random walk} $W$ of length $l \in \mathbb{N}$ from entity $e_{l+1} \in \mathcal{E}$ to entity $e_1 \in \mathcal{E}$ in the tKG $\mathcal{G}$ is defined as a sequence of edges
{\fontsize{9}{10}
\begin{align}
\begin{split}
((e_{l+1}, r_l, e_l, t_l)&, (e_l, r_{l-1}, e_{l-1}, t_{l-1}), \dots, (e_2, r_1, e_1, t_1))\\
& \text{with}\;t_l \geq t_{l-1} \geq \dots \geq t_1,
\label{eq:temporal_random_walk}
\end{split}
\end{align}}
where $(e_{i+1}, r_i, e_i, t_i) \in \mathcal{G}$ for $i \in [l]$.
A non-increasing temporal random walk complies with time constraints so that the edges are traversed only backward in time, where it is also possible to walk along edges with the same timestamp.
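The backward-in-time constraint of this definition can be illustrated with a small sampler. This sketch uses uniform transitions and our own function names; the framework described later additionally excludes the inverse of the previously traversed edge and supports an exponentially weighted transition distribution.

```python
import random
from collections import defaultdict

def sample_backward_walk(quadruples, length, rng=None):
    """Sample a non-increasing temporal random walk of the given length.

    Each step moves along an edge whose timestamp is <= the current one,
    so the walk only moves backward in time (equal timestamps allowed).
    Returns None if the walker gets stuck. Uniform transitions only.
    """
    rng = rng or random.Random(0)
    by_subject = defaultdict(list)
    for q in quadruples:
        by_subject[q[0]].append(q)

    walk = [rng.choice(quadruples)]  # uniformly sampled start edge
    for _ in range(length - 1):
        _, _, obj, t = walk[-1]
        feasible = [q for q in by_subject[obj] if q[3] <= t]
        if not feasible:
            return None
        walk.append(rng.choice(feasible))
    return walk
```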
\paragraph{Temporal logical rule}
Let $E_i$ and $T_i$ for $i \in [l+1]$ be variables that represent entities and timestamps, respectively. Further, let $r_1, r_2, \dots, r_l, r_h \in \mathcal{R}$ be fixed.
A \textit{cyclic temporal logical rule} $R$ of length $l \in \mathbb{N}$ is defined as
{\fontsize{9}{10}
\begin{equation*}
((E_1, r_h, E_{l+1}, T_{l+1}) \leftarrow \wedge_{i=1}^l (E_i, r_i, E_{i+1}, T_i))
\label{eq:logical_rule}
\end{equation*}}
with the temporal constraints
{\fontsize{9}{10}
\begin{equation}
T_1 \leq T_2 \leq \dots \leq T_l < T_{l+1}.
\label{eq:rule_time_constraints}
\end{equation}}
The left-hand side of $R$ is called the rule head, with $r_h$ being the head relation, while the right-hand side is called the rule body, which is represented by a conjunction of body atoms $(E_i, r_i, E_{i+1}, T_i)$. The rule is called cyclic because the rule head and the rule body constitute two different walks connecting the same two variables $E_1$ and $E_{l+1}$.
A temporal rule implies that if the rule body holds with the temporal constraints given by~\eqref{eq:rule_time_constraints}, then the rule head is true as well for a future timestamp $T_{l+1}$.
The replacement of the variables $E_i$ and $T_i$ by constant terms is called grounding or instantiation.
For example, a grounding of the temporal rule
{\fontsize{9}{10}
$$
((E_1, \textit{consult}, E_2, T_2) \leftarrow (E_1, \textit{discuss by telephone}, E_2, T_1))
$$}
is given by the edges \textit{(Angela Merkel, discuss by telephone, Barack Obama, 2014/07/22)} and \textit{(Angela Merkel, consult, Barack Obama, 2014/08/09)} in Figure~\ref{fig:tkg_example}.
Let rule grounding refer to the replacement of the variables in the entire rule and body grounding refer to the replacement of the variables only in the body, where all groundings must comply with the temporal constraints in \eqref{eq:rule_time_constraints}.
In many domains, logical rules are frequently violated so that confidence values are determined to estimate the probability of a rule's correctness.
We adapt the standard confidence to take timestamp values into account. Let $(r_1, r_2, \dots, r_l, r_h)$ be the relations in a rule $R$. The body support is defined as the number of body groundings,
i.\,e., the number of tuples $(e_1, \dots, e_{l+1}, t_1, \dots, t_l)$ such that $(e_i, r_i, e_{i+1}, t_i) \in \mathcal{G}$ for $i \in [l]$ and $t_i \leq t_{i+1}$ for $i \in [l-1]$.
The rule support is defined as the number of body groundings such that there exists a timestamp $t_{l+1} > t_l$ with $(e_1, r_h, e_{l+1}, t_{l+1}) \in \mathcal{G}$.
The confidence of the rule $R$, denoted by conf($R$), can then be obtained by dividing the rule support by the body support.
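For a length-1 rule, the body and rule support counts above can be computed exhaustively, which makes the confidence definition concrete. This sketch is our own illustration (the paper estimates the supports by sampling body groundings instead of enumerating them):

```python
def confidence_length1(quadruples, r_body, r_head):
    """conf(R) for the length-1 temporal rule
    (E1, r_head, E2, T2) <- (E1, r_body, E2, T1) with T1 < T2.

    Body support: number of distinct body groundings (e1, e2, t1).
    Rule support: body groundings with a later matching head edge.
    """
    bodies = {(s, o, t) for (s, r, o, t) in quadruples if r == r_body}
    head_times = {}
    for (s, r, o, t) in quadruples:
        if r == r_head:
            head_times.setdefault((s, o), []).append(t)
    rule_support = sum(
        1 for (s, o, t1) in bodies
        if any(t2 > t1 for t2 in head_times.get((s, o), []))
    )
    return rule_support / len(bodies) if bodies else 0.0
```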
\section{Our Framework}
We introduce TLogic, a rule-based link forecasting framework for tKGs. TLogic first extracts temporal walks from the graph and then lifts these walks to a more abstract, semantic level to obtain temporal rules that generalize to new data.
The application of these rules generates answer candidates, for which the body groundings in the graph serve as explicit and human-readable explanations.
Our framework consists of the components rule learning and rule application. The pseudocode for rule learning is shown in Algorithm~\ref{alg:rule_learning} and for rule application in Algorithm~\ref{alg:rule_application}.
\subsection{Rule Learning}
\begin{algorithm}[t]
\caption{Rule learning}
\label{alg:rule_learning}
\textbf{Input}: Temporal knowledge graph $\mathcal{G}$.\\
\textbf{Parameters}: Rule lengths $\mathcal{L} \subset \mathbb{N}$, number of temporal random walks $n \in \mathbb{N}$, transition distribution $d \in \{\mathrm{unif}, \exp\}$.\\
\textbf{Output}: Temporal logical rules $\mathcal{TR}$.
\begin{algorithmic}[1]
\FOR{relation $r \in \mathcal{R}$}
\FOR{$l \in \mathcal{L}$}
\STATE{$\mathcal{TR}_r^l \leftarrow \emptyset$}
\FOR{$i \in [n]$}
\STATE {According to transition distribution $d$, sample a temporal random walk $W$ of length \mbox{$l+1$} with $t_{l+1} > t_l$.
\hfill $\triangleright$~See~\eqref{eq:temporal_walk}.}
\STATE{Transform walk $W$ to the corresponding temporal logical rule $R$. \hfill $\triangleright$~See~\eqref{eq:temporal_rule}.}
\STATE{Estimate the confidence of rule $R$.
}
\STATE{$\mathcal{TR}_r^l \leftarrow \mathcal{TR}_r^l \cup \{(R, \text{conf}(R))\}$}
\ENDFOR
\ENDFOR
\STATE{$\mathcal{TR}_r \leftarrow \cup_{l \in \mathcal{L}} \mathcal{TR}_r^l$}
\ENDFOR
\STATE{$\mathcal{TR} \leftarrow \cup_{r \in \mathcal{R}} \mathcal{TR}_r$}
\STATE \textbf{return} $\mathcal{TR}$
\end{algorithmic}
\end{algorithm}
As the first step of rule learning, temporal walks are extracted from the tKG $\mathcal{G}$.
For a rule of length $l$, a walk of length $l+1$ is sampled, where the additional step corresponds to the rule head.
Let $r_h$ be a fixed relation for which we want to learn rules. For the first sampling step $m=1$, we sample an edge $(e_1, r_h, e_{l+1}, t_{l+1})$, which will serve as the rule head, uniformly from all edges with relation type $r_h$. A temporal random walker then iteratively samples edges adjacent to the current object until a walk of length $l+1$ is obtained.
For sampling step $m \in \{2, \dots, l+1\}$, let $(e_s, \tilde{r}, e_o, t)$ denote the previously sampled edge and $\mathcal{A}(m,e_o,t)$ the set of feasible edges for the next transition. To fulfill the temporal constraints in \eqref{eq:temporal_random_walk} and \eqref{eq:rule_time_constraints}, we define
{\fontsize{9}{10}
\begin{align*}
&\mathcal{A}(m, e_o, t) := \\
&\begin{cases}
\{(e_o, r, e, \hat{t}) \mid (e_o, r, e, \hat{t}) \in \mathcal{G},\; \hat{t} < t\} &\;\text{if}\; m = 2,\\
\{(e_o, r, e, \hat{t}) \mid (e_o, r, e, \hat{t}) \in \tilde{\mathcal{G}},\; \hat{t} \leq t\} &\;\text{if}\; m \in \{3, \dots, l\},\\
\{(e_o, r, e_1, \hat{t}) \mid (e_o, r, e_1, \hat{t}) \in \tilde{\mathcal{G}},\; \hat{t} \leq t\} &\;\text{if}\; m = l+1,
\end{cases}
\end{align*}}
where $\tilde{\mathcal{G}} := \mathcal{G}\setminus \{(e_o, \tilde{r}^{-1}, e_s, t)\}$ excludes the inverse edge to avoid redundant rules. For obtaining cyclic walks, we sample in the last step $m = l+1$ an edge that connects the walk to the first entity $e_1$ if such edges exist.
Otherwise, we discard the walk and sample a new one.
The transition distribution for sampling the next edge can either be uniform or exponentially weighted.
We define an index mapping $\hat{m} := (l+1) - (m-2)$ to be consistent with the indices in \eqref{eq:temporal_random_walk}. Then, the exponentially weighted probability for choosing edge $u \in \mathcal{A}\left(m,e_{\hat{m}}, t_{\hat{m}}\right)$ for $m \in \{2, \dots, l+1\}$ is given by
{\fontsize{9}{10}
\begin{equation}
\mathbb{P}(u; m, e_{\hat{m}}, t_{\hat{m}}) = \frac{\exp(t_u - t_{\hat{m}})}{\sum\limits_{\hat{u} \in \mathcal{A}\left(m, e_{\hat{m}}, t_{\hat{m}}\right)}\exp(t_{\hat{u}} - t_{\hat{m}})}
\label{eq:exp_distribution}
\end{equation}}
where $t_u$ denotes the timestamp of edge $u$.
The exponential weighting favors edges with timestamps that are closer to the timestamp of the previous edge and probably more relevant for prediction.
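The exponentially weighted transition probabilities can be computed as in the following sketch, which directly evaluates \eqref{eq:exp_distribution}; the edge representation and function name are ours. Since all feasible timestamps $t_u$ satisfy $t_u \leq t_{\hat{m}}$, the unnormalized weights lie in $(0,1]$, and more recent edges receive higher probability.

```python
import math

def transition_probs(candidate_edges, t_prev):
    """Probabilities P(u) ∝ exp(t_u - t_prev) over feasible next edges,
    where t_prev is the timestamp of the previously sampled edge.
    Edges closer in time to the previous edge are weighted higher."""
    weights = [math.exp(t - t_prev) for (_, _, _, t) in candidate_edges]
    total = sum(weights)
    return [w / total for w in weights]
```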
The resulting temporal walk $W$ is given by
{\fontsize{9}{10}
\begin{equation}
((e_1, r_h, e_{l+1}, t_{l+1}), (e_{l+1}, r_l, e_l, t_l), \dots, (e_2, r_1, e_1, t_1)).
\label{eq:temporal_walk}
\end{equation}}
$W$ can then be transformed to a temporal rule $R$ by replacing the entities and timestamps with variables. While the first edge in $W$ becomes the rule head $(E_1, r_h, E_{l+1}, T_{l+1})$, the other edges are mapped to body atoms, where each edge $(e_{i+1}, r_i, e_i, t_i)$ is converted to the body atom $(E_i, r_i^{-1}, E_{i+1}, T_i)$. The final rule $R$ is denoted by
{\fontsize{9}{10}
\begin{equation}
((E_1, r_h, E_{l+1}, T_{l+1}) \leftarrow \wedge_{i=1}^l (E_i, r_i^{-1}, E_{i+1}, T_i)).
\label{eq:temporal_rule}
\end{equation}}
In addition, we impose the temporal consistency constraints $T_1 \leq T_2 \leq \dots \leq T_l < T_{l+1}$.
The entities $(e_1, \dots, e_{l+1})$ in $W$ do not need to be distinct since a pair of entities can have many interactions at different points in time. For example, Angela Merkel made several visits to China in 2014, which could constitute important information for the prediction. Repetitive occurrences of the same entity in $W$ are replaced with the same random variable in $R$ to maintain this knowledge.
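The lifting of a walk to a rule, including the inversion of body edges and the reuse of one variable per recurring entity, can be sketched as follows. This is our own minimal illustration: variables are numbered by first appearance rather than by the canonical $E_1, \dots, E_{l+1}$ ordering of \eqref{eq:temporal_rule}, which is equivalent up to renaming.

```python
def walk_to_rule(walk):
    """Lift a temporal walk to a rule by variabilizing entities.

    walk[0] is the head edge (e_1, r_h, e_{l+1}, t_{l+1}); the remaining
    edges form the body, traversed backward in time. Each body edge
    (e_{i+1}, r_i, e_i, t_i) becomes the atom (E_i, r_i^-1, E_{i+1}, T_i).
    Recurring entities map to the same variable (variable constraints)."""
    var = {}
    def v(entity):
        # assign a fresh variable on first occurrence, reuse it afterward
        return var.setdefault(entity, f"E{len(var) + 1}")

    e1, r_h, e_last, _ = walk[0]
    head = (v(e1), r_h, v(e_last), f"T{len(walk)}")
    body = []
    for i, (subj, r, obj, _) in enumerate(reversed(walk[1:]), start=1):
        body.append((v(obj), r + "^-1", v(subj), f"T{i}"))
    return head, body
```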
For the confidence estimation of $R$, we sample from the graph a fixed number of body groundings, which have to match the body relations and the variable constraints mentioned in the last paragraph while satisfying the condition from~\eqref{eq:rule_time_constraints}.
The number of unique bodies serves as the body support. The rule support is determined by counting the number of bodies for which an edge with relation type $r_h$ exists that connects $e_1$ and $e_{l+1}$ from the body. Moreover, the timestamp of this edge has to be greater than all body timestamps to fulfill~\eqref{eq:rule_time_constraints}.
For every relation $r \in \mathcal{R}$, we sample $n \in \mathbb{N}$ temporal walks for a set of prespecified lengths $\mathcal{L} \subset \mathbb{N}$. The set $\mathcal{TR}_r^l$ stands for all rules of length $l$ with head relation $r$ with their corresponding confidences. All rules for relation $r$ are included in $\mathcal{TR}_r := \cup_{l \in \mathcal{L}} \mathcal{TR}_r^l$, and the complete set of learned temporal rules is given by $\mathcal{TR} := \cup_{r \in \mathcal{R}} \mathcal{TR}_r$.
It is possible to learn rules only for a single relation or a set of specific relations of interest.
Explicitly learning rules for all relations is especially effective for rare relations that would otherwise only be sampled with a small probability.
The learned rules are not specific to the graph from which they have been extracted; they can also be employed in an inductive setting, where the rules are transferred to related datasets that share a common vocabulary for straightforward application.
\subsection{Rule Application}
\begin{algorithm}[t]
\caption{Rule application}
\label{alg:rule_application}
\textbf{Input}: Test query $q = (e^q, r^q, ?, t^q)$, temporal logical rules $\mathcal{TR}$, temporal knowledge graph $\mathcal{G}$.\\
\textbf{Parameters}: Time window $w \in \mathbb{N} \cup \{\infty\}$, minimum number of candidates $k$, score function $f$.\\
\textbf{Output}: Answer candidates $\mathcal{C}$.
\begin{algorithmic}[1]
\STATE{$\mathcal{C} \leftarrow \emptyset$
$\triangleright$ Apply the rules in $\mathcal{TR}$ by decreasing confidence.}
\IF{$\mathcal{TR}_{r^q} \neq \emptyset$ }
\FOR{rule $R \in \mathcal{TR}_{r^q}$}
\STATE{Find all body groundings of $R$ in $\mathcal{SG} \subset \mathcal{G}$, where $\mathcal{SG}$ consists of the edges within the time window $[t^q-w, t^q)$.}
\STATE{Retrieve candidates $\mathcal{C}(R)$ from the target entities of the body groundings.}
\FOR{$c \in \mathcal{C}(R)$}
\STATE{Calculate score $f(R,c)$. \hfill $\triangleright$ See~\eqref{eq:score_function}.}
\STATE{$\mathcal{C} \leftarrow \mathcal{C} \cup \{(c, f(R,c))\}$}
\ENDFOR
\IF{$|\{c \mid \exists R: (c, f(R,c)) \in \mathcal{C}\}| \geq k$}
\STATE{\textbf{break}}
\ENDIF
\ENDFOR
\ENDIF
\STATE \textbf{return} $\mathcal{C}$
\end{algorithmic}
\end{algorithm}
The learned temporal rules $\mathcal{TR}$ are applied to answer queries of the form $q = (e^q, r^q, ?, t^q)$. The answer candidates are retrieved from the target entities of body groundings in the tKG $\mathcal{G}$.
If there exist no rules $\mathcal{TR}_{r^q}$ for the query relation $r^q$, or if there are no matching body groundings in the graph, then no answers are predicted for the given query.
To apply the rules on relevant data, a subgraph $\mathcal{SG} \subset \mathcal{G}$ dependent on a time window $w \in \mathbb{N} \cup \{\infty\}$ is retrieved. For $w \in \mathbb{N}$, the subgraph $\mathcal{SG}$ contains all edges from $\mathcal{G}$ that have timestamps $t \in [t^q-w, t^q)$. If $w = \infty$, then all edges with timestamps prior to the query timestamp $t^q$ are used for rule application, i.\,e., $\mathcal{SG}$ consists of all facts with $t \in [t_{\mathrm{min}}, t^q)$, where $t_{\mathrm{min}}$ is the minimum timestamp in the graph $\mathcal{G}$.
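The subgraph retrieval reduces to a timestamp filter, sketched below with our own naming; `float("inf")` stands in for $w = \infty$, in which case all edges strictly before the query timestamp are kept.

```python
def window_subgraph(quadruples, t_query, w=float("inf")):
    """Edges of the tKG with timestamps in [t_query - w, t_query).
    With w = inf, this keeps every edge before the query timestamp."""
    lower = t_query - w
    return [q for q in quadruples if lower <= q[3] < t_query]
```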
We apply the rules $\mathcal{TR}_{r^q}$ by decreasing confidence, where each rule $R$ generates a set of answer candidates $\mathcal{C}(R)$.
Each candidate $c \in \mathcal{C}(R)$ is then scored by a function $f: \mathcal{TR}_{r^q} \times \mathcal{E} \rightarrow [0,1]$ that reflects the probability of the candidate being the correct answer to the query.
Let $\mathcal{B}(R,c)$ be the set of body groundings of rule $R$ that start at entity $e^q$ and end at entity $c$.
We choose as score function $f$ a convex combination of the rule's confidence and a function that takes the time difference $t^q - t_1(\mathcal{B}(R,c))$ as input, where $t_1(\mathcal{B}(R,c))$ denotes the earliest timestamp $t_1$ in the body. If several body groundings exist, we take from all possible $t_1$ values the one that is closest to $t^q$.
For candidate $c \in \mathcal{C}(R)$, the score function is defined as
{\fontsize{9}{10}
\begin{equation}
f(R,c) = a \cdot \mathrm{conf}(R) + (1-a) \cdot \exp(-\lambda (t^q - t_1(\mathcal{B}(R,c))))
\label{eq:score_function}
\end{equation}}
with $\lambda > 0$ and $a \in [0,1]$.
The intuition for this choice of $f$ is that candidates generated by high-confidence rules should receive a higher score. Adding a dependency on the timeframe of the rule grounding is based on the observation that the existence of the edges in a rule becomes increasingly probable with decreasing time difference between the edges.
We choose the exponential distribution since it is commonly used to model interarrival times of events. The time difference $t^q - t_1(\mathcal{B}(R,c))$ is always non-negative for a future timestamp value $t^q$, and with the assumption that there exists a fixed mean, the exponential distribution is also the maximum entropy distribution for such a time difference variable. The exponential distribution is rescaled so that both summands are in the range $[0,1]$.
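As a concrete illustration, the score in \eqref{eq:score_function} can be computed as follows (a sketch; the default values for $a$ and $\lambda$ are illustrative, not the values used in the experiments):

```python
import math

def score(conf_R, t_q, t1_values, a=0.5, lam=0.1):
    """f(R, c) = a * conf(R) + (1 - a) * exp(-lam * (t_q - t1)).

    t1_values: earliest body timestamps t1 of all groundings that
    generated candidate c; the value closest to t_q is used.
    """
    t1 = max(t1_values)  # all t1 < t_q, so the largest is closest to t_q
    return a * conf_R + (1 - a) * math.exp(-lam * (t_q - t1))
```

Both summands lie in $[0,1]$, so the score itself is a value in $[0,1]$.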
All candidates are saved with their scores as $(c, f(R,c))$ in $\mathcal{C}$.
We stop the rule application when the number of different answer candidates $|\{c \mid \exists R: (c, f(R,c)) \in \mathcal{C}\}|$ is at least $k$ so that there is no need to go through all rules.
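Expressed imperatively, the application loop above (rules sorted by decreasing confidence, with the early-stopping criterion just described) amounts to the following sketch; `find_candidates` and `score` stand in for the body-grounding search and the score function, and all names are illustrative:

```python
def apply_rules(rules, find_candidates, score, k_min):
    """Apply rules in order of decreasing confidence and stop once at
    least k_min distinct answer candidates have been collected.

    rules: rules for the query relation, sorted by decreasing confidence.
    find_candidates(R): candidates from the body groundings of rule R.
    score(R, c): the score f(R, c).
    """
    scored = []
    for R in rules:
        for c in find_candidates(R):
            scored.append((c, score(R, c)))
        if len({c for c, _ in scored}) >= k_min:
            break
    return scored
```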
\subsection{Candidate Ranking}
For the ranking of the answer candidates, all scores of each candidate $c$ are aggregated through a noisy-OR calculation, which produces the final score
\begin{equation}
1 - \prod\nolimits_{\{s \mid (c, s) \in \mathcal{C}\}} (1 - s).
\label{eq:score_aggregation}
\end{equation}
The idea is to aggregate the scores to produce a probability, where candidates implied by more rules should have a higher score.
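A minimal implementation of this noisy-OR aggregation over all scores collected for one candidate:

```python
def aggregate(scores):
    """Noisy-OR aggregation 1 - prod(1 - s) over the scores s of one
    candidate, cf. the final-score formula above."""
    remainder = 1.0
    for s in scores:
        remainder *= 1.0 - s
    return 1.0 - remainder
```

A candidate supported by several rules accumulates evidence: its aggregated score is strictly larger than the score from any single rule (as long as the scores are in $(0,1)$).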
In case there are no rules for the query relation $r^q$, or if there are no matching body groundings in the graph, it might still be interesting to retrieve possible answer candidates.
In the experiments, we apply a simple baseline where the scores for the candidates are obtained from the overall object distribution in the training data if $r^q$ is a new relation. If $r^q$ already exists in the training set, we take the object distribution of the edges with relation type $r^q$.
\section{Experiments}
\subsection{Datasets}
We conduct experiments on the dataset Integrated Crisis Early Warning System\footnote{\url{https://dataverse.harvard.edu/dataverse/icews}} (ICEWS), which contains information about international events and is a commonly used benchmark dataset for link prediction on tKGs.
We choose the subsets ICEWS14, ICEWS18, and ICEWS0515, which include data from the years 2014, 2018, and 2005 to 2015, respectively.
Since we consider link forecasting, each dataset is split into training, validation, and test set so that the timestamps in the training set occur earlier than the timestamps in the validation set, which again occur earlier than the timestamps in the test set.
To ensure a fair comparison, we use the split provided by~\citet{xerte.han.2021}\footnote{\url{https://github.com/TemporalKGTeam/xERTE}}.
The statistics of the datasets are summarized in the supplementary material.
\subsection{Experimental Setup}
For each test instance $(e_s^q, r^q, e_o^q, t^q)$, we generate a list of candidates for both object prediction $(e_s^q, r^q, ?, t^q)$ and subject prediction $(e_o^q, (r^q)^{-1}, ?, t^q)$. The candidates are ranked by decreasing scores, which are calculated according to \eqref{eq:score_aggregation}.
The confidence for each rule is estimated by sampling $500$ body groundings and counting the number of times the rule head holds. We learn rules of the lengths 1, 2, and 3, and for application, we only consider the rules with a minimum confidence of $0.01$ and minimum body support of $2$.
We compute the mean reciprocal rank (MRR) and hits@$k$ for $k \in \{1, 3, 10\}$, which are standard metrics for link prediction on KGs. For a rank $x \in \mathbb{N}$, the reciprocal rank is defined as $\frac{1}{x}$, and the MRR is the average of all reciprocal ranks of the correct query answers across all queries.
The metric hits@$k$ (h@$k$) indicates the proportion of queries for which the correct entity appears under the top $k$ candidates.
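Given the ranks of the correct answers, both metrics are straightforward to compute (a sketch with illustrative names):

```python
def mrr(ranks):
    """Mean reciprocal rank of the correct answers over all queries."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at(ranks, k):
    """Proportion of queries whose correct entity is among the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)
```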
Similar to~\citet{xerte.han.2021}, we perform time-aware filtering where all correct entities at the query timestamp except for the true query object are filtered out from the answers. In comparison to the alternative setting that filters out all other objects that appear together with the query subject and relation at any timestamp, time-aware filtering yields a more realistic performance estimate.
\paragraph{Baseline methods}
We compare TLogic\footnote{Code available at \url{https://github.com/liu-yushan/TLogic.}} with the state-of-the-art baselines for static link prediction
DistMult~\cite{distmult.yang.2015}, ComplEx~\cite{complex.trouillon.2016}, and AnyBURL~\cite{anyburl.meilicke.2019, anyburl.meilicke.2020} as well as for temporal link prediction
TTransE~\cite{ttranse.leblay.2018}, TA-DistMult~\cite{ta-distmult-transe.garcia-duran.2018}, DE-SimplE~\cite{de-simple.goel.2020}, TNTComplEx~\cite{tntcomplex.lacroix.2020}, CyGNet~\cite{cygnet.zhu.2021}, RE-Net~\cite{renet.jin.2019}, and xERTE~\cite{xerte.han.2021}.
All baseline results except for the results on AnyBURL are from~\citet{xerte.han.2021}. AnyBURL samples paths based on reinforcement learning and generalizes them to rules, where the rule space also includes, e.\,g., acyclic rules and rules with constants. A non-temporal variant of TLogic would sample paths randomly and only learn cyclic rules, which would presumably yield worse performance than AnyBURL. Therefore, we choose AnyBURL as a baseline to assess the effectiveness of adding temporal constraints.
\subsection{Results}
\begin{table*}[t]
\centering
\small
\begin{tabular}{l|cccc|cccc|cccc}
\toprule
Dataset & \multicolumn{4}{|c}{\textbf{ICEWS14}} & \multicolumn{4}{|c}{\textbf{ICEWS18}} & \multicolumn{4}{|c}{\textbf{ICEWS0515}}\\
\midrule
Model & MRR & h@1 & h@3 & h@10 & MRR & h@1 & h@3 & h@10 & MRR & h@1 & h@3 & h@10 \\
\midrule
DistMult & 0.2767 & 0.1816 & 0.3115 & 0.4696 & 0.1017 & 0.0452 & 0.1033 & 0.2125 & 0.2873 & 0.1933 & 0.3219 & 0.4754\\
ComplEx & 0.3084 & 0.2151 & 0.3448 & 0.4958 & 0.2101 & 0.1187 & 0.2347 & 0.3987 & 0.3169 & 0.2144 & 0.3574 & 0.5204\\
AnyBURL & 0.2967 & 0.2126 & 0.3333 & 0.4673 & 0.2277 & 0.1510 & 0.2544 & 0.3891 & 0.3205 & 0.2372 & 0.3545 & 0.5046\\
\midrule
TTransE & 0.1343 & 0.0311 & 0.1732 & 0.3455 & 0.0831 & 0.0192 & 0.0856 & 0.2189 & 0.1571 & 0.0500 & 0.1972 & 0.3802\\
TA-DistMult & 0.2647 & 0.1709 & 0.3022 & 0.4541 & 0.1675 & 0.0861 & 0.1841 & 0.3359 & 0.2431 & 0.1458 & 0.2792 & 0.4421\\
DE-SimplE & 0.3267 & 0.2443 & 0.3569 & 0.4911 & 0.1930 & 0.1153 & 0.2186 & 0.3480 & 0.3502 & 0.2591 & 0.3899 & 0.5275\\
TNTComplEx & 0.3212 & 0.2335 & 0.3603 & 0.4913 & 0.2123 & 0.1328 & 0.2402 & 0.3691 & 0.2754 & 0.1952 & 0.3080 & 0.4286\\
CyGNet & 0.3273 & 0.2369 & 0.3631 & 0.5067 & 0.2493 & 0.1590 & 0.2828 & 0.4261 & 0.3497 & 0.2567 & 0.3909 & 0.5294\\
RE-Net & 0.3828 & 0.2868 & 0.4134 & 0.5452 & 0.2881 & 0.1905 & 0.3244 & 0.4751 & 0.4297 & 0.3126 & 0.4685 & 0.6347\\
xERTE & 0.4079 & 0.3270 & 0.4567 & 0.5730 & 0.2931 & \textbf{0.2103} & 0.3351 & 0.4648 & 0.4662 & \textbf{0.3784} & 0.5231 & 0.6392\\
\midrule
TLogic & \textbf{0.4304} & \textbf{0.3356} & \textbf{0.4827} & \textbf{0.6123} & \textbf{0.2982} & 0.2054 & \textbf{0.3395} & \textbf{0.4853} & \textbf{0.4697} & 0.3621 & \textbf{0.5313} & \textbf{0.6743}\\
\bottomrule
\end{tabular}
\caption{Results of link forecasting on the datasets ICEWS14, ICEWS18, and ICEWS0515. All metrics are time-aware filtered. The best results among all models are displayed in bold.}
\label{tab:results}
\end{table*}
The results of the experiments are displayed in Table~\ref{tab:results}. TLogic outperforms all baseline methods with respect to the metrics MRR, hits@3, and hits@10. Only xERTE performs better than TLogic for hits@1 on the datasets ICEWS18 and ICEWS0515.
Besides a list of possible answer candidates with corresponding scores, TLogic can also provide temporal rules and body groundings in the form of walks from the graph that support the predictions.
Table~\ref{tab:ex_rules} presents three exemplary rules with high confidences that were learned from ICEWS14.
For the query \textit{(Angela Merkel, consult, ?, 2014/08/09)}, two walks are shown in Table~\ref{tab:ex_rules}, which serve as time-consistent explanations for the correct answer \textit{Barack Obama}.
\begin{table*}[t]
\centering
\small
\begin{tabular}{c|c|c}
\toprule
Confidence & Head & Body\\
\midrule
0.963 & $(E_1, \textit{demonstrate or rally}, E_2, T_4)$ & $(E_1, \textit{riot}, E_2, T_1)$ $\wedge$ $(E_2, \textit{make statement}, E_1, T_2)$ $\wedge$ $(E_1, \textit{riot}, E_2, T_3)$\\
\midrule
0.818 & $(E_1, \textit{share information}, E_2, T_2)$ & $(E_1, \textit{express intent to ease sanctions}^{-1}, E_2, T_1)$\\
\midrule
0.750 & $(E_1, \textit{provide military aid}, E_3, T_3)$ & $(E_1, \textit{provide military aid}, E_2, T_1)$ $\wedge$ $(E_2, \textit{intend to protect}^{-1}, E_3, T_2)$\\
\midrule
0.570 & \textit{(Merkel, consult, Obama, 14/08/09)} & \textit{(Merkel, discuss by telephone, Obama, 14/07/22)}\\
\midrule
0.500 & \textit{(Merkel, consult, Obama, 14/08/09)} & \textit{(Merkel, express intent to meet, Obama, 14/05/02)}\\ & & $\wedge$ $(\textit{Obama}, \textit{consult}^{-1}$, \textit{Merkel, 14/07/18)} $\wedge$ $(\textit{Merkel}, \textit{consult}^{-1}$, \textit{Obama, 14/07/29)}\\
\bottomrule
\end{tabular}
\caption{Three exemplary rules from the dataset ICEWS14 and two walks for the query \textit{(Angela Merkel, consult, ?, 2014/08/09)} that lead to the correct answer \textit{Barack Obama}. The timestamps are displayed in the format yy/mm/dd.}
\label{tab:ex_rules}
\end{table*}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{mrr.png}
\caption{MRR performance on the validation set of ICEWS14. The transition distribution is either uniform or exponentially weighted.}
\label{fig:mrr}
\end{figure}
\paragraph{Inductive setting}
One advantage of our learned logical rules is that they are applicable to any new dataset as long as the new dataset covers common relations. This might be relevant for cases where new entities appear. For example, Donald Trump, who served as president of the United States from 2017 to 2021, is included in the dataset ICEWS18 but not in ICEWS14. The logical rules are not tied to particular entities and would still be applicable, while embedding-based methods have difficulties operating in this challenging setting. The models would need to be retrained to obtain embeddings for the new entities, where existing embeddings might also need to be adapted to the different time range.
For the two rule-based methods AnyBURL and TLogic, we apply the rules learned on the training set of ICEWS0515 (with timestamps from 2005/01/01 to 2012/08/06) to the test set of ICEWS14 as well as the rules learned on the training set of ICEWS14 to the test set of ICEWS18 (see Table~\ref{tab:inductive_setting}).
The performance of TLogic in the inductive setting is close to the results in Table~\ref{tab:results} for all metrics, while for AnyBURL, the results on ICEWS18 in particular drop significantly. The temporal information encoded by TLogic appears to be essential for achieving correct predictions in the inductive setting.
ICEWS14 has only 7,128 entities, while ICEWS18 contains 23,033 entities. The results confirm that temporal rules from TLogic can even be transferred to a dataset with a large number of new entities and timestamps and lead to a strong performance.
\subsection{Analysis}
The results in this section are obtained on the dataset ICEWS14, but the findings are similar for the other two datasets.
More detailed results can be found in the supplementary material.
\begin{table*}[t]
\centering
\small
\begin{tabular}{c c|c|c c c c c}
\toprule
$\mathcal{G}_{\mathrm{train}}$ & $\mathcal{G}_{\mathrm{test}}$ & Model & MRR & h@1 & h@3 & h@10\\
\midrule
\multirow{2}{*}{ICEWS0515} & \multirow{2}{*}{ICEWS14} & AnyBURL & 0.2664 & 0.1800 & 0.3024 & 0.4477\\
& & TLogic & 0.4253 & 0.3291 & 0.4780 & 0.6122\\
\midrule
\multirow{2}{*}{ICEWS14} & \multirow{2}{*}{ICEWS18} & AnyBURL & 0.1546 & 0.0907 & 0.1685 & 0.2958\\
& & TLogic & 0.2915 & 0.1987 & 0.3330 & 0.4795\\
\bottomrule
\end{tabular}
\caption{Inductive setting where rules learned on $\mathcal{G}_{\mathrm{train}}$ are transferred and applied to $\mathcal{G}_{\mathrm{test}}$.}
\label{tab:inductive_setting}
\end{table*}
\paragraph{Number of walks}
Figure~\ref{fig:mrr} shows the MRR performance on the validation set of ICEWS14 for different numbers of walks that were extracted during rule learning. We observe a performance increase with a growing number of walks. However, the gains saturate between 100 and 200 walks, where only rather small improvements remain attainable.
\paragraph{Transition distribution}
We test two transition distributions for the extraction of temporal walks: uniform and exponentially weighted according to~\eqref{eq:exp_distribution}. The rationale behind using an exponentially weighted distribution is the observation that related events tend to happen within a short timeframe.
The distribution of the first edge is always uniform so as not to restrict the variety of the obtained walks.
Overall, the performance of the exponential distribution consistently exceeds the uniform setting with respect to the MRR (see Figure~\ref{fig:mrr}).
We observe that the exponential distribution leads to more rules of length 3 than the uniform setting (11,718 compared to 8,550 rules for 200 walks), while it is the opposite for rules of length 1 (7,858 compared to 11,019 rules). The exponential setting leads to more successful longer walks because the timestamp differences between subsequent edges tend to be smaller. It is less likely that there are no feasible transitions anymore because of temporal constraints. The uniform setting, however, leads to a better exploration of the neighborhood around the start node for shorter walks.
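The backward transition step can be sketched as follows (an illustrative sketch, not the authors' implementation; `lam` is a hypothetical decay rate, and setting it to zero recovers the uniform distribution):

```python
import math
import random

def sample_next_edge(candidates, t_prev, lam=0.1, rng=random):
    """Sample the next edge of a backward temporal walk.

    candidates: (edge, t) pairs with t < t_prev, i.e. edges that respect
    the temporal constraints.  Edges whose timestamp is close to the
    current timestamp t_prev receive exponentially larger weight.
    """
    weights = [math.exp(-lam * (t_prev - t)) for _, t in candidates]
    u = rng.random() * sum(weights)
    acc = 0.0
    for (edge, t), w in zip(candidates, weights):
        acc += w
        if u <= acc:
            return edge
    return candidates[-1][0]  # guard against floating-point round-off
```

With a large decay rate, the walk almost always continues through the edge closest in time, which explains why the exponential setting produces more successful long walks.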
\paragraph{Rule length}
We learn rules of lengths 1, 2, and 3. Using all rules for application results in the best performance (MRR on the validation set: 0.4373), followed by rules of only length 1 (0.4116), 3 (0.4097), and 2 (0.1563). The reason why rules of length 3 perform better than length 2 is that the temporal walks are allowed to transition back and forth between the same entities. Since we only learn cyclic rules, a rule body of length 2 must constitute a path with no recurring entities, resulting in fewer rules and rule groundings in the graph. Interestingly, simple rules of length 1 already yield very good performance.
\paragraph{Time window}
For rule application, we define a time window for retrieving the relevant data. The performance increases with the size of the time window, even though relevant events tend to be close to the query timestamp. The second summand of the score function $f$ in \eqref{eq:score_function} takes the time difference between the query timestamp $t^q$ and the earliest body timestamp $t_1(\mathcal{B}(R,c))$ into account. In this case, earlier events with a large timestamp difference receive a lesser weight, while generally, as much information as possible is beneficial for prediction.
\paragraph{Score function}
We define the score function $f$ in \eqref{eq:score_function} as a convex combination of the rule's confidence and a function that depends on the time difference $t^q - t_1(\mathcal{B}(R,c))$.
The performance of only using the confidence (MRR: 0.3869) or only using the exponential function (0.4077) is worse than the combination (0.4373), which means that both the information from the rules' confidences and the time differences are important for prediction.
\paragraph{Variance}
The variance in the performance due to different rules obtained from the rule learning component is quite small.
Running the same model with the best hyperparameter settings for five different seeds results in a standard deviation of 0.0012 for the MRR.
The rule application component is deterministic and always leads to the same candidates with corresponding scores for the same hyperparameter setting.
\paragraph{Training and inference time}
The worst-case time complexity for learning rules of length $l$ is $\mathcal{O}(|\mathcal{R}|nlDb)$, where $n$ is the number of walks, $D$ the maximum node degree in the training set, and $b$ the number of body samples for estimating the confidence.
The worst-case time complexity for inference is given by $\mathcal{O}(|\mathcal{G}| + |\mathcal{TR}_{r^q}|D^L|\mathcal{E}|\log(k))$, where $L$ is the maximum rule length in $\mathcal{TR}_{r^q}$ and $k$ the minimum number of candidates. For large graphs with high node degrees, it is possible to reduce the complexity to $ \mathcal{O}\left(|\mathcal{G}| +\vert\mathcal{TR}_{r^q}\vert KLD|\mathcal{E}| \log(k) \right)$ by only keeping a maximum of $K$ candidate walks during rule application.
Both training and application can be parallelized since the rule learning for each relation and the rule application for each test query are independent.
Rule learning with 200 walks and the exponentially weighted transition distribution for rule lengths $\{1,2,3\}$ takes 180 sec for ICEWS14 on a machine with 8 CPUs, while application on the validation set takes 2000 sec, with $w=\infty$ and $k = 20$. For comparison, the best-performing baseline xERTE already needs 5000 sec to train a single epoch on the same machine (reaching an MRR of 0.3953), while testing on the validation set takes 700 sec.
\section{Conclusion}
We have proposed TLogic, the first symbolic framework that directly learns temporal logical rules from temporal knowledge graphs and applies these rules for link forecasting.
The framework generates answers by applying rules to observed events prior to the query timestamp and scores the answer candidates depending on the rules' confidences and time differences.
Experiments on three datasets indicate that TLogic achieves better overall performance compared to state-of-the-art baselines.
In addition, our approach also provides time-consistent, explicit, and human-readable explanations for the predictions in the form of temporal logical rules.
As future work, it would be interesting to
integrate acyclic rules, which could also contain relevant information and might boost the performance for rules of length 2.
Furthermore, the simple sampling mechanism for temporal walks could be replaced by a more sophisticated approach, which is able to effectively identify the most promising walks.
\section*{Acknowledgement}
This work has been supported by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) as part of the project RAKI under grant number 01MD19012C and by the German Federal Ministry of Education and Research (BMBF) under grant number 01IS18036A. The authors of this work take full responsibility for its content.
\section{\textbf{Introduction\label{sec1}}}
\noindent For $n\geq1,$ let $X_{1},X_{2},...,X_{n}$ be $n$ independent copies
of a non-negative random variable (rv) $X,$ defined over some probability
space $\left( \Omega,\mathcal{A},\mathbb{P}\right) ,$ with cumulative
distribution function (cdf) $F.\ $We assume that the distribution tail $1-F$
is regularly varying at infinity,\ with index $\left( -1/\gamma_{1}\right)
,$ notation: $1-F\in\mathcal{RV}_{\left( -1/\gamma_{1}\right) }.$ That is,
\begin{equation}
\lim_{t\rightarrow\infty}\frac{1-F\left( tx\right) }{1-F\left( t\right)
}=x^{-1/\gamma_{1}},\text{ for any }x>0, \label{first-condition}
\end{equation}
where $\gamma_{1}>0,$ called shape parameter or tail index or extreme value
index (EVI), is a very crucial parameter in the analysis of extremes. It
governs the thickness of the distribution right tail: the heavier the tail,
the larger $\gamma_{1}.$ Its estimation has got a great deal of interest for
complete samples, as one might see in the textbook of \cite{BeGeS04}.\ In this
paper, we focus on the most celebrated estimator of $\gamma_{1},$ that was
proposed by \cite{Hill75}:
\[
\widehat{\gamma}_{1}^{H}=\widehat{\gamma}_{1}^{H}\left( k\right) :=\frac
{1}{k}
{\displaystyle\sum\limits_{i=1}^{k}}
\log X_{n-i+1,n}-\log X_{n-k,n},
\]
where $X_{1,n}\leq...\leq X_{n,n}$ are the order statistics pertaining to the
sample $\left( X_{1},...,X_{n}\right) $ and $k=k_{n}$ is an integer sequence
satisfying
\begin{equation}
1<k<n,\text{ }k\rightarrow\infty\text{ and }k/n\rightarrow0\text{ as
}n\rightarrow\infty. \label{K}
\end{equation}
The consistency of $\widehat{\gamma}_{1}^{H}$ was proved by \cite{Mas82} by
only assuming the regular variation condition $\left( \ref{first-condition}
\right) $ while its asymptotic normality was established\ under a suitable
extra assumption, known as the second-order regular variation condition (see
\citeauthor{deHS96}, \citeyear{deHS96} and \citeauthor{deHF06},
\citeyear[page 117]{deHF06}).\medskip
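For concreteness, the Hill estimator can be computed directly from the order statistics (a minimal sketch in a few lines of code; names are ours):

```python
import math

def hill(sample, k):
    """Hill estimator: mean log-excess of the k upper order statistics
    X_{n-k+1,n}, ..., X_{n,n} over the (n-k)-th order statistic X_{n-k,n}."""
    x = sorted(sample)
    n = len(x)
    return sum(math.log(v) for v in x[n - k:]) / k - math.log(x[n - k - 1])
```

On data drawn from a strict Pareto quantile function with tail index $\gamma_{1}$, the estimate is close to $\gamma_{1}$ for moderate $k$.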
\noindent In the analysis of lifetime, reliability or insurance data, the
observations are usually randomly censored. In other words, in many real
situations the variable of interest $X$ is not always available. An
appropriate way to model this matter, is to introduce a non-negative rv $Y,$
called censoring rv, independent of $X$ and then to consider the rv
$Z:=\min\left( X,Y\right) $ and the indicator variable $\delta
:=\mathbf{1}\left( X\leq Y\right) ,$ which determines whether or not $X$ has
been observed. The cdf's of $Y$ and $Z$ will be denoted by $G$ and $H$
respectively. The analysis of extreme values of randomly censored data is a
new research topic to which \cite{ReTo97} made a very brief reference, in
Section 6.1, as a first step but with no asymptotic results. Considering
Hall's model \cite{Hall82}, \cite{BeGDFF07} proposed estimators for the EVI
and high quantiles and discussed their asymptotic properties, when the data
are censored by a deterministic threshold. More recently, \cite{EnFG08}
adapted various EVI estimators to the case where data are censored by a
random threshold, and proposed a unified method to establish their asymptotic
normality by imposing some assumptions that are rather unusual to the context
of extreme value theory. The obtained estimators are then used in the
estimation of extreme quantiles under random censorship. \cite{GN11} also made
a contribution to this field by providing a detailed simulation study and
applying the estimation procedures on some survival data sets.\medskip
\noindent We start by a reminder of the definition of the adapted Hill
estimator, of the tail index $\gamma_{1},$ under random censorship.\ The tail
of the censoring distribution is assumed to be regularly varying too, that is
$1-G\in\mathcal{RV}_{\left( -1/\gamma_{2}\right) },$ for some $\gamma
_{2}>0.$\ By virtue of the independence of $X$ and $Y,$ we have $1-H\left(
x\right) =\left( 1-F\left( x\right) \right) \left( 1-G\left( x\right)
\right) $ and therefore $1-H\in\mathcal{RV}_{\left( -1/\gamma\right)
,$\ with $\gamma:=\gamma_{1}\gamma_{2}/\left( \gamma_{1}+\gamma_{2}\right)
.$ Let $\left\{ \left( Z_{i},\delta_{i}\right) ,\text{ }1\leq i\leq
n\right\} $ be a sample from the couple of rv's $\left( Z,\delta\right) $
and $Z_{1,n}\leq...\leq Z_{n,n}$\ represent the order statistics pertaining to
$\left( Z_{1},...,Z_{n}\right) .$ If we denote the concomitant of the $i$th
order statistic by $\delta_{\left[ i:n\right] }$ (i.e. $\delta_{\left[
i:n\right] }=\delta_{j}$ if $Z_{i,n}=Z_{j}),$\ then the adapted Hill
estimator of the tail index $\gamma_{1}$ is defined by
\begin{equation}
\widehat{\gamma}_{1}^{\left( H,c\right) }:=\frac{\widehat{\gamma}^{H}
}{\widehat{p}}, \label{gamma}
\end{equation}
where
\begin{equation}
\widehat{\gamma}^{H}:=\frac{1}{k}\sum\limits_{i=1}^{k}\log Z_{n-i+1,n}-\log
Z_{n-k,n} \label{gammaZ}
\end{equation}
and
\begin{equation}
\widehat{p}:=\frac{1}{k}
{\displaystyle\sum\limits_{i=1}^{k}}
\delta_{\left[ n-i+1:n\right] }, \label{pchapeau}
\end{equation}
with $k=k_{n}$ satisfying $\left( \ref{K}\right) .$ Roughly speaking, the
adapted Hill estimator is the ratio of the classical Hill
estimator to the proportion of non-censored data among the $k$ upper order statistics.\medskip
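A minimal sketch of $\widehat{\gamma}_{1}^{\left( H,c\right) }$ computed from the observed pairs $(Z_{i},\delta_{i})$ (illustrative code, not from the paper). On simulated strict-Pareto data with $\gamma_{1}=0.5$ and $\gamma_{2}=1$ (so that $p=\gamma_{2}/(\gamma_{1}+\gamma_{2})=2/3$), the estimate lands close to the true $\gamma_{1}$ for moderate $n$ and $k$:

```python
import math

def adapted_hill(z, delta, k):
    """Adapted Hill estimator: the Hill estimator of the Z-sample divided
    by the proportion of non-censored observations among the k upper
    order statistics (cf. the definitions of gamma-hat and p-hat above).

    z: observed values Z_i = min(X_i, Y_i); delta: censoring indicators.
    """
    n = len(z)
    order = sorted(range(n), key=lambda i: z[i])
    top = order[n - k:]                     # indices of the k largest Z's
    gamma_hat = sum(math.log(z[i]) for i in top) / k \
        - math.log(z[order[n - k - 1]])
    p_hat = sum(delta[i] for i in top) / k  # proportion of uncensored extremes
    return gamma_hat / p_hat
```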
\noindent To derive the asymptotic normality of $\widehat{\gamma}_{1}^{\left(
H,c\right) },$ we will adopt a new approach which is different from that of
\cite{EnFG08}. We notice that the asymptotic normality of extreme value theory
based estimators is achieved in the second-order framework
\citep[see][]{deHS96}. Thus, it seems quite natural to suppose that cdf's $F,$
$G$ and $H$ satisfy the well-known second-order condition of regular
variation. That is, we assume that there exist a constant $\tau_{j}<0$ and a
function $A_{j},$ $j=1,2$ not changing sign near infinity, such that for any
$x>0$
\begin{equation}
\begin{array}
[c]{c}
\underset{t\rightarrow\infty}{\lim}\dfrac{\overline{F}\left( tx\right)
/\overline{F}\left( t\right) -x^{-1/\gamma_{1}}}{A_{1}\left( t\right)
}=x^{-1/\gamma_{1}}\dfrac{x^{\tau_{1}}-1}{\tau_{1}},\medskip\\
\underset{t\rightarrow\infty}{\lim}\dfrac{\overline{G}\left( tx\right)
/\overline{G}\left( t\right) -x^{-1/\gamma_{2}}}{A_{2}\left( t\right)
}=x^{-1/\gamma_{2}}\dfrac{x^{\tau_{2}}-1}{\tau_{2}},
\end{array}
\label{second-order}
\end{equation}
where $\overline{S}\left( x\right) :=S\left( \infty\right) -S\left(
x\right) ,$ for any $S.$ For convenience, the same condition on cdf $H$ will
be expressed in terms of its quantile function $H^{-1}\left( s\right)
:=\inf\left\{ x:H\left( x\right) \geq s\right\} ,$ $0<s<1.$ There exist a
constant $\tau_{3}<0$ and a function $A_{3}$ not changing sign near zero, such
that for any $x>0$
\begin{equation}
\underset{t\downarrow0}{\lim}\dfrac{H^{-1}\left( 1-tx\right) /H^{-1}\left(
1-t\right) -x^{-\gamma}}{A_{3}\left( t\right) }=x^{-\gamma}\dfrac
{x^{\tau_{3}}-1}{\tau_{3}}. \label{second-order H}
\end{equation}
\noindent Actually what interests us most is the Gaussian approximation to the
distribution of the adapted estimator $\widehat{\gamma}_{1}^{\left(
H,c\right) },$ similar to that obtained for Hill's estimator $\widehat
{\gamma}_{1}^{H}$ in the case of complete data. Indeed, if $\left(
\ref{second-order}\right) $ holds for $F,$ then, for an integer sequence $k$
satisfying $\left( \ref{K}\right) $ with $\sqrt{n/k}A_{1}\left( n/k\right)
\rightarrow0,$ we have as $n\rightarrow\infty,$
\[
\sqrt{k}\left( \widehat{\gamma}_{1}^{H}-\gamma_{1}\right) =\gamma_{1}
\sqrt{\frac{n}{k}}\int_{0}^{1}s^{-1}\widetilde{B}_{n}\left( 1-\frac{k}
{n}s\right) ds-\gamma_{1}\sqrt{\frac{n}{k}}\widetilde{B}_{n}\left(
1-\frac{k}{n}\right) +o_{p}\left( 1\right) ,
\]
where $\left\{ \widetilde{B}_{n}\left( s\right) ;\text{ }0\leq
s\leq1\right\} $ is a sequence of Brownian bridges (see for instance
\citeauthor{CsMa85}, \citeyear{CsMa85} and \citeauthor{deHF06},
\citeyear[page 163]{deHF06}). In other words, $\sqrt{k}\left( \widehat
{\gamma}_{1}^{H}-\gamma_{1}\right) $ converges in
distribution to a centred Gaussian rv with variance $\gamma_{1}^{2}.$ The
Gaussian approximation above enables to solve many problems with regards to
the asymptotic behavior of several statistics of heavy-tailed distributions,
such as the estimators of: the mean (\citeauthor{Peng-2001},
\citeyear{Peng-2001} and \citeyear{Peng-2004}; \citeauthor{BMNY-2013},
\citeyear{BMNY-2013}), the excess-of-loss reinsurance premium
\citep{NMM-2007}, the distortion risk measures (\citeauthor{NM-2009},
\citeyear{NM-2009} and \citeauthor{BMNZ}, \citeyear{BMNZ}), the Zenga index
\citep{GPZ} and the goodness-of-fit functionals as well
\citep{KP-2008}.\medskip
\noindent The rest of the paper is organized as follows.\ In Section
\ref{sec2}, we state our main result which consists in a Gaussian
approximation to $\widehat{\gamma}_{1}^{\left( H,c\right) }$ only by
assuming the second-order conditions of regular variation $\left(
\ref{second-order}\right) $ and $\left( \ref{second-order H}\right) .$ More
precisely, we will show that there exists a sequence of Brownian bridges
$\left\{ B_{n}\left( s\right) ;\text{ }0\leq s\leq1\right\} $ defined on
$\left( \Omega,\mathcal{A},\mathbb{P}\right) ,$ such that as $n\rightarrow
\infty,$
\[
\sqrt{k}\left( \widehat{\gamma}_{1}^{\left( H,c\right) }-\gamma_{1}\right)
=\Psi\left( B_{n}\right) +o_{p}\left( 1\right) ,
\]
for some functional $\Psi$ to be defined in such a way that $\Psi\left(
B_{n}\right) $ is normal with mean $0$ and variance $p\gamma_{1}^{2}.$
Section \ref{sec3} is devoted to an application of the main result as we
derive the asymptotic normality of an excess-of-loss reinsurance premium
estimator. The proofs are postponed to Section \ref{sec4} and some results,
that are instrumental to our needs, are gathered in the Appendix.
\section{\textbf{Main result\label{sec2}}}
\noindent In addition to the Gaussian approximation of $\sqrt{k}\left(
\widehat{\gamma}_{1}^{\left( H,c\right) }-\gamma_{1}\right) ,$ our main
result (stated in Theorem \ref{Theorem1}) consists in the asymptotic
representations, with Gaussian processes, of two other useful statistics,
namely $\sqrt{k}\left( \widehat{p}-p\right) $ and $\sqrt{k}\left(
\frac{Z_{n-k:n}}{H^{-1}\left( 1-k/n\right) }-1\right) .$ The functions
defined below are crucial to our needs
\begin{equation}
H^{0}\left( z\right) :=\mathbb{P}\left( Z\leq z,\delta=0\right) =\int
_{0}^{z}\overline{F}\left( y\right) dG\left( y\right) \label{H0}
\end{equation}
and
\begin{equation}
H^{1}\left( z\right) :=\mathbb{P}\left( Z\leq z,\delta=1\right) =\int
_{0}^{z}\overline{G}\left( y\right) dF\left( y\right) . \label{H1}
\end{equation}
Throughout the paper, we use the notation
\[
h=h_{n}:=H^{-1}\left( 1-k/n\right) ,\text{ }\theta:=H^{1}\left(
\infty\right) \text{ and }p=1-q:=\gamma/\gamma_{1},
\]
and, for two sequences of rv's, we write $V_{n}^{\left( 1\right) }
=o_{p}\left( V_{n}^{\left( 2\right) }\right) $\ and$\ V_{n}
^{\left( 1\right) }\approx V_{n}^{\left( 2\right) }$\ to say that, as
$n\rightarrow\infty,$\ $V_{n}^{\left( 1\right) }/V_{n}^{\left( 2\right)
}\rightarrow0$\ in probability and $V_{n}^{\left( 1\right) }=V_{n}^{\left(
2\right) }\left( 1+o_{p}\left( 1\right) \right) $\ respectively.
\begin{theorem}
\textbf{\label{Theorem1}}Assume that the second-order conditions
$(\ref{second-order})$ and $(\ref{second-order H})$ hold. Let $k=k_{n}$ be an
integer sequence satisfying, in addition to $(\ref{K}),$ $\sqrt{k}A_{j}\left(
h\right) \rightarrow0,$ for $j=1,2$ and $\sqrt{k}A_{3}\left( k/n\right)
\rightarrow\lambda<\infty$ as $n\rightarrow\infty.$ Then there exists a
sequence of Brownian bridges $\left\{ B_{n}\left( s\right) ;\text{ }0\leq
s\leq1\right\} $ such that, as $n\rightarrow\infty,$
\[
\sqrt{k}\left( \frac{Z_{n-k:n}}{h}-1\right) =\gamma\sqrt{\frac{n}{k}
}\mathbb{B}_{n}^{\ast}\left( \frac{k}{n}\right) +o_{p}\left( 1\right) ,
\]
\[
\sqrt{k}\left( \widehat{p}-p\right) =\sqrt{\frac{n}{k}}\left(
q\mathbb{B}_{n}\left( \frac{k}{n}\right) -p\widetilde{\mathbb{B}}_{n}\left(
\frac{k}{n}\right) \right) +o_{p}\left( 1\right)
\]
and
\[
\sqrt{k}\left( \widehat{\gamma}_{1}^{\left( H,c\right) }-\gamma_{1}\right)
=\gamma_{1}\sqrt{\frac{n}{k}}\int_{0}^{1}s^{-1}\mathbb{B}_{n}^{\ast}\left(
\frac{k}{n}s\right) ds-\frac{\gamma_{1}}{p}\sqrt{\frac{n}{k}}\mathbb{B}
_{n}\left( \frac{k}{n}\right) +o_{p}\left( 1\right) ,
\]
where
\[
\mathbb{B}_{n}\left( s\right) :=B_{n}\left( \theta\right) -B_{n}\left(
\theta-ps\right) ,\text{\ }\widetilde{\mathbb{B}}_{n}\left( s\right)
:=-B_{n}\left( 1-qs\right)
\]
and
\[
\mathbb{B}_{n}^{\ast}\left( s\right) :=\mathbb{B}_{n}\left( s\right)
+\widetilde{\mathbb{B}}_{n}\left( s\right) ,0<s<1,
\]
are sequences of centred Gaussian processes.
\end{theorem}
\begin{corollary}
\label{Cor1}Under the assumptions of Theorem $\ref{Theorem1},$ we have
\[
\sqrt{k}\left( \widehat{\gamma}_{1}^{\left( H,c\right) }-\gamma_{1}\right)
\overset{d}{\rightarrow}\mathcal{N}\left( 0,p\gamma_{1}^{2}\right) ,\text{
as }n\rightarrow\infty.
\]
$\mathcal{N}\left( 0,a^{2}\right) $ designates the centred normal
distribution with variance $a^{2}.$
\end{corollary}
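As a purely illustrative aside (not part of the formal development), the
following Python sketch computes the censored Hill estimator
$\widehat{\gamma}_{1}^{\left( H,c\right) }=\widehat{\gamma}^{H}/\widehat{p},$
with $\widehat{p}$ the proportion of uncensored observations among the $k$
largest $Z$'s, together with the normal confidence interval implied by the
variance of Corollary \ref{Cor1}. The simulated Pareto model, the sample
sizes and all variable names are assumptions made for the example only.

```python
import numpy as np

def censored_hill(z, delta, k):
    """Censored Hill estimator gamma1_hat = gamma_hat^H / p_hat and the
    normal confidence interval implied by Corollary 1's variance p*gamma1^2.
    z     : observed values Z_i = min(X_i, Y_i)
    delta : censoring indicators (1 = uncensored)
    k     : number of top order statistics used
    """
    n = len(z)
    order = np.argsort(z)                    # sorts Z_{1:n} <= ... <= Z_{n:n}
    zs, ds = z[order], delta[order]
    # Hill estimator of gamma based on the k upper order statistics of Z
    gamma_h = np.mean(np.log(zs[n - k:]) - np.log(zs[n - k - 1]))
    # p_hat = (n/k) * Hbar_n^1(Z_{n-k:n}): proportion of uncensored
    # observations among the k largest Z's
    p_hat = np.mean(ds[n - k:])
    gamma1 = gamma_h / p_hat
    # 95% interval from the asymptotic variance stated in Corollary 1
    half = 1.96 * np.sqrt(p_hat) * gamma1 / np.sqrt(k)
    return gamma1, (gamma1 - half, gamma1 + half)

# Illustration: X Pareto with gamma1 = 0.4, censored by an independent
# Pareto Y with gamma2 = 0.6, so that p = gamma2/(gamma1+gamma2) = 0.6.
rng = np.random.default_rng(0)
n, k = 20000, 1000
x = rng.uniform(size=n) ** (-0.4)            # P(X > t) = t^(-1/0.4)
y = rng.uniform(size=n) ** (-0.6)
z, delta = np.minimum(x, y), (x <= y).astype(float)
g1, ci = censored_hill(z, delta, k)          # g1 should be close to 0.4
```

For exact Pareto tails the Hill estimator is unbiased, so the point estimate
concentrates around the true value $\gamma_{1}=0.4$ here.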
\noindent To the best of our knowledge, this is the first time that $\sqrt
{k}\left( \widehat{\gamma}_{1}^{\left( H,c\right) }-\gamma_{1}\right) $ is
expressed in terms of Gaussian processes. This asymptotic representation will
be very useful in many applications of extreme value theory under random
censoring, as the following example shows.
\section{\textbf{Application: Excess-of-loss reinsurance premium estimation
\label{sec3}}}
\noindent In this section, we apply Theorem $\ref{Theorem1}$ to derive the
asymptotic normality of an estimator of the excess-of-loss reinsurance premium
obtained with censored data. The choice of this example is motivated mainly by
two reasons. The first one is that the area of reinsurance is by far the most
important field of application of extreme value theory. The second is that
data sets with censored extreme observations often occur in insurance. The aim
of reinsurance, where emphasis lies on modelling extreme events, is to protect
an insurance company, called ceding company, against losses caused by
excessively large claims and/or a surprisingly high number of moderate claims.
Nice discussions on the use of extreme value theory in the actuarial world
(especially in the reinsurance industry) can be found, for instance, in
\cite{EKM-1997}, a major textbook on the subject, and \cite{BeTV94}.\medskip
\noindent Let $X_{1},...,X_{n}$ $\left( n\geq1\right) $ be $n$ individual
claim amounts of an insured loss $X$ with finite mean. In the excess-of-loss
reinsurance treaty, the ceding company covers claims that do not exceed a
(high) number $R\geq0,$ called retention level, while the reinsurer pays the
part $(X_{i}-R)_{+}:=\max\left( 0,X_{i}-R\right) $ of each claim exceeding
$R.$ Applying Wang's premium calculation principle, with a distortion equal to
the identical function \citep{W96}, to this reinsurance policy yields the
following expression for the net premium for the layer from $R$ to infinity:
\[
\Pi(R):=\mathbf{E}\left[ (X-R)_{+}\right] =\int_{R}^{\infty}\overline
{F}\left( x\right) dx.
\]
Taking $h$ as a retention level, we have
\[
\Pi_{n}=\Pi(h)=h\overline{F}\left( h\right) \int_{1}^{\infty}\frac
{\overline{F}\left( hx\right) }{\overline{F}\left( h\right) }dx.
\]
After noticing that the finite mean assumption yields that $\gamma_{1}<1,$ we
use the first-order regular variation condition $\left( \ref{first-condition}
\right) $ together with Potter's inequalities, to get
\[
\Pi_{n}\sim\frac{\gamma_{1}}{1-\gamma_{1}}h\overline{F}\left( h\right)
,\text{ as }n\rightarrow\infty,\text{ }0<\gamma_{1}<1.
\]
Let
\[
F_{n}\left( x\right) :=1-
{\displaystyle\prod\limits_{Z_{i:n}\leq x}^{n}}
\left[ 1-\dfrac{\delta_{\left[ i:n\right] }}{n-i+1}\right]
\]
be the well-known Kaplan-Meier estimator \citep{KM} of cdf $F.$ Then, by
replacing $\gamma_{1},$ $h$ and $\overline{F}\left( h\right) $ by their
respective estimates $\widehat{\gamma}_{1}^{\left( H,c\right) },$
$Z_{n-k:n}$ and
\[
1-F_{n}(Z_{n-k:n})=
{\textstyle\prod_{i=1}^{n-k}}
\left( 1-\delta_{\left[ i:n\right] }/\left( n-i+1\right) \right) ,
\]
we define our estimator of $\Pi_{n}$ as follows:
\begin{equation}
\widehat{\Pi}_{n}:=\frac{\widehat{\gamma}_{1}^{\left( H,c\right) }
}{1-\widehat{\gamma}_{1}^{\left( H,c\right) }}Z_{n-k:n}
{\displaystyle\prod_{i=1}^{n-k}}
\left( 1-\frac{\delta_{\left[ i:n\right] }}{n-i+1}\right) .
\label{Pi-chapeau}
\end{equation}
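To make the estimator concrete, here is a minimal Python sketch of
$(\ref{Pi-chapeau})$ on simulated data, with Pareto claims censored by an
independent Pareto variable. The data-generating choices, the sample sizes
and the variable names are assumptions made for the illustration;
$\widehat{\gamma}_{1}^{\left( H,c\right) }$ is computed as
$\widehat{\gamma}^{H}/\widehat{p}$ with $\widehat{p}$ the proportion of
uncensored observations among the $k$ largest $Z$'s.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 20000, 1000
gamma1, gamma2 = 0.4, 0.6                 # tail indices of X and of the censor Y
x = rng.uniform(size=n) ** (-gamma1)      # P(X > t) = t^(-1/gamma1)
y = rng.uniform(size=n) ** (-gamma2)
z, delta = np.minimum(x, y), (x <= y).astype(float)

order = np.argsort(z)
zs, ds = z[order], delta[order]           # ds[i] is the concomitant delta_[i+1:n]
z_nk = zs[n - k - 1]                      # Z_{n-k:n}
# censored Hill estimator gamma1_hat = gamma_hat^H / p_hat
gamma_h = np.mean(np.log(zs[n - k:]) - np.log(z_nk))
g1 = gamma_h / np.mean(ds[n - k:])
# Kaplan-Meier tail product over the n-k smallest observations
i = np.arange(1, n - k + 1)
km_tail = np.prod(1.0 - ds[: n - k] / (n - i + 1.0))
pi_hat = g1 / (1.0 - g1) * z_nk * km_tail # estimator (Pi-chapeau)

# exact premium Pi(h) for this Pareto model, with h = H^{-1}(1 - k/n)
gamma = gamma1 * gamma2 / (gamma1 + gamma2)   # tail index of Z
h = (k / n) ** (-gamma)
pi_true = gamma1 / (1.0 - gamma1) * h ** (1.0 - 1.0 / gamma1)
```

In this model $\overline{F}\left( x\right) =x^{-1/\gamma_{1}},$ so
$\Pi(h)=\frac{\gamma_{1}}{1-\gamma_{1}}h^{1-1/\gamma_{1}}$ exactly, and
`pi_hat` fluctuates around `pi_true` at the $\sqrt{k}$ rate.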
The asymptotic normality of $\widehat{\Pi}_{n}$ is established in the
following theorem.
\begin{theorem}
\textbf{\label{Theorem2}}Assume that the assumptions of Theorem
$\ref{Theorem1}$ hold with $\gamma_{1}<1$ and that both cdf's $F$ and $G$ are
absolutely continuous, then
\begin{align*}
\frac{\sqrt{k}\left( \widehat{\Pi}_{n}-\Pi_{n}\right) }{h\overline{F}\left(
h\right) } & =-\frac{p\gamma_{1}^{2}}{1-\gamma_{1}}\sqrt{\frac{n}{k}
}\mathbb{B}_{n}^{\ast}\left( \frac{k}{n}\right) \\
& +\frac{\gamma_{1}}{\left( 1-\gamma_{1}\right) ^{2}}\left\{ \sqrt
{\frac{n}{k}}\int_{0}^{1}s^{-1}\mathbb{B}_{n}^{\ast}\left( \frac{k}
{n}s\right) ds-p^{-1}\sqrt{\frac{n}{k}}\mathbb{B}_{n}\left( \frac{k}
{n}\right) \right\} +o_{p}\left( 1\right) ,
\end{align*}
where $\mathbb{B}_{n}$ and $\mathbb{B}_{n}^{\ast}$ are those defined in
Theorem $\ref{Theorem1}.$
\end{theorem}
\begin{corollary}
\label{Cor2}Under the assumptions of Theorem $\ref{Theorem2},$ we have
\[
\frac{\sqrt{k}\left( \widehat{\Pi}_{n}-\Pi_{n}\right) }{h\overline{F}\left(
h\right) }\overset{d}{\rightarrow}\mathcal{N}\left( 0,\sigma_{\Pi}
^{2}\right) ,\text{ as }n\rightarrow\infty,
\]
where
\[
\sigma_{\Pi}^{2}:=\frac{p\gamma_{1}^{2}}{\left( 1-\gamma_{1}\right) ^{2}
}\left[ p\gamma_{1}^{2}+\frac{1}{\left( 1-\gamma_{1}\right) ^{2}}\right]
,\text{ for }\gamma_{1}<1.
\]
\end{corollary}
\section{\textbf{Proofs\label{sec4}}}
\noindent We begin with a brief introduction to some uniform empirical
processes under random censoring. The empirical counterparts of $H^{j}$
$\left( j=0,1\right) $ are defined, for $z\geq0,$ by
\[
H_{n}^{j}\left( z\right) :=\#\left\{ i:1\leq i\leq n,\text{ }Z_{i}\leq
z,\delta_{i}=j\right\} /n,\text{ }j=0,1.
\]
In the sequel, we will use the following two empirical processes
\[
\sqrt{n}\left( \overline{H}_{n}^{j}\left( z\right) -\overline{H}^{j}\left(
z\right) \right) ,\text{ }j=0,1;\text{ }z>0,
\]
which may be represented, almost surely, by a uniform empirical process.
Indeed, let us define, for each $i=1,...,n,$ the following rv
\[
U_{i}:=\delta_{i}H^{1}\left( Z_{i}\right) +\left( 1-\delta_{i}\right)
\left( \theta+H^{0}\left( Z_{i}\right) \right) .
\]
From \cite{EnKo92}, the rv's $U_{1},...,U_{n}$ are iid $(0,1)$-uniform. The
empirical cdf and the uniform empirical process based upon $U_{1},...,U_{n}$
are respectively denoted by
\[
\mathbb{U}_{n}\left( s\right) :=\#\left\{ i:1\leq i\leq n,\text{ }U_{i}\leq
s\right\} /n\text{ and }\alpha_{n}\left( s\right) :=\sqrt{n}\left(
\mathbb{U}_{n}\left( s\right) -s\right) ,\text{ }0\leq s\leq1.
\]
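This uniformity can be checked by simulation in a model where $H^{0}$ and
$H^{1}$ are explicit; the check below is illustrative only and plays no role
in the proofs. Assuming $F$ and $G$ both standard exponential, we have
$\overline{F}\left( y\right) =\overline{G}\left( y\right) =e^{-y},$ hence
$H^{1}\left( z\right) =H^{0}\left( z\right) =\left( 1-e^{-2z}\right) /2$
and $\theta=1/2.$

```python
import numpy as np

# With F and G both standard exponential, Fbar(y) = Gbar(y) = exp(-y),
# so H^1(z) = H^0(z) = (1 - exp(-2z))/2 and theta = H^1(inf) = 1/2.
rng = np.random.default_rng(1)
n = 50000
x, y = rng.exponential(size=n), rng.exponential(size=n)
z, delta = np.minimum(x, y), (x <= y)
h1 = 0.5 * (1.0 - np.exp(-2.0 * z))       # H^1(Z_i) = H^0(Z_i) in this model
theta = 0.5
# U_i = delta_i H^1(Z_i) + (1 - delta_i)(theta + H^0(Z_i))
u = np.where(delta, h1, theta + h1)
# u should behave like an iid (0,1)-uniform sample:
# mean close to 1/2 and variance close to 1/12
```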
\cite{DeEn96} state that almost surely
\[
H_{n}^{0}\left( z\right) =\mathbb{U}_{n}\left( H^{0}\left( z\right)
+\theta\right) -\mathbb{U}_{n}\left( \theta\right) ,\text{ for }
0<H^{0}\left( z\right) <1-\theta,
\]
and
\[
H_{n}^{1}\left( z\right) =\mathbb{U}_{n}\left( H^{1}\left( z\right)
\right) ,\text{ for }0<H^{1}\left( z\right) <\theta.
\]
It is easy to verify that almost surely
\begin{equation}
\sqrt{n}\left( \overline{H}_{n}^{1}\left( z\right) -\overline{H}^{1}\left(
z\right) \right) =\alpha_{n}\left( \theta\right) -\alpha_{n}\left(
\theta-\overline{H}^{1}\left( z\right) \right) ,\text{ for }0<\overline
{H}^{1}\left( z\right) <\theta, \label{rep-H1}
\end{equation}
and
\begin{equation}
\sqrt{n}\left( \overline{H}_{n}^{0}\left( z\right) -\overline{H}^{0}\left(
z\right) \right) =-\alpha_{n}\left( 1-\overline{H}^{0}\left( z\right)
\right) ,\text{ for }0<\overline{H}^{0}\left( z\right) <1-\theta.
\label{rep-H0}
\end{equation}
Our methodology strongly relies on the well-known Gaussian approximation given
by \cite{CsCsHM86}: on the probability space $\left( \Omega,\mathcal{A}
,\mathbb{P}\right) ,$ there exists a sequence of Brownian bridges $\left\{
B_{n}\left( s\right) ;\text{ }0\leq s\leq1\right\} $ such that for every
$0\leq\xi<1/4$
\begin{equation}
\sup_{\frac{1}{n}\leq s\leq1-\frac{1}{n}}\frac{\left\vert \alpha_{n}\left(
s\right) -B_{n}\left( s\right) \right\vert }{\left( s\left( 1-s\right)
\right) ^{1/2-\xi}}=O_{p}\left( n^{-\xi}\right) ,\text{ as }n\rightarrow
\infty. \label{approx}
\end{equation}
The following processes will be crucial to our needs
\begin{equation}
\beta_{n}\left( z\right) :=\sqrt{\frac{n}{k}}\left\{ \alpha_{n}\left(
\theta\right) -\alpha_{n}\left( \theta-\overline{H}^{1}\left(
zZ_{n-k:n}\right) \right) \right\} ,\text{ for }0<\overline{H}^{1}\left(
z\right) <\theta\label{betan}
\end{equation}
and
\begin{equation}
\widetilde{\beta}_{n}\left( z\right) :=-\sqrt{\frac{n}{k}}\alpha_{n}\left(
1-\overline{H}^{0}\left( zZ_{n-k:n}\right) \right) ,\text{ for }
0<\overline{H}^{0}\left( z\right) <1-\theta. \label{beta-tild}
\end{equation}
\subsection{Proof of Theorem \ref{Theorem1}}
\noindent First, observe that
\[
\frac{Z_{n-k:n}}{h}=\frac{H^{-1}\left( H\left( Z_{n-k:n}\right) \right)
}{H^{-1}\left( H_{n}\left( Z_{n-k:n}\right) \right) }.
\]
Let $x_{n}:=\overline{H}\left( Z_{n-k:n}\right) /\overline{H}_{n}\left(
Z_{n-k:n}\right) $ and $t_{n}:=\overline{H}_{n}\left( Z_{n-k:n}\right)
=k/n.$ By using the second-order regular variation condition $\left(
\ref{second-order H}\right) $ we get
\[
\frac{H^{-1}\left( H\left( Z_{n-k:n}\right) \right) }{H^{-1}\left(
H_{n}\left( Z_{n-k:n}\right) \right) }-x_{n}^{-\gamma}\approx A_{3}\left(
k/n\right) x_{n}^{-\gamma}\dfrac{x_{n}^{\tau_{3}}-1}{\tau_{3}}.
\]
Since $x_{n}\approx1,$ it follows that $x_{n}^{-\gamma}\dfrac{x_{n}^{\tau_{3}
}-1}{\tau_{3}}$ tends in probability to zero. This means that
\[
\frac{H^{-1}\left( H\left( Z_{n-k:n}\right) \right) }{H^{-1}\left(
H_{n}\left( Z_{n-k:n}\right) \right) }=\left( \frac{\overline{H}\left(
Z_{n-k:n}\right) }{\overline{H}_{n}\left( Z_{n-k:n}\right) }\right)
^{-\gamma}+o_{p}\left( A_{3}\left( k/n\right) \right) .
\]
Using the mean value theorem, we get
\[
\left( \frac{\overline{H}\left( Z_{n-k:n}\right) }{\overline{H}_{n}\left(
Z_{n-k:n}\right) }\right) ^{-\gamma}-1=-\gamma c_{n}\left( \frac
{\overline{H}\left( Z_{n-k:n}\right) }{\overline{H}_{n}\left(
Z_{n-k:n}\right) }-1\right) ,
\]
where $c_{n}$ is a sequence of rv's lying between $1$ and $\left(
\overline{H}\left( Z_{n-k:n}\right) /\overline{H}_{n}\left( Z_{n-k:n}
\right) \right) ^{-\gamma-1}.$ Since $c_{n}\approx1,$ then
\[
\left( \frac{\overline{H}\left( Z_{n-k:n}\right) }{\overline{H}_{n}\left(
Z_{n-k:n}\right) }\right) ^{-\gamma}-1\approx-\gamma\left( \frac
{\overline{H}\left( Z_{n-k:n}\right) }{\overline{H}_{n}\left(
Z_{n-k:n}\right) }-1\right) .
\]
By assumption we have $\sqrt{k}A_{3}\left( k/n\right) \rightarrow
\lambda<\infty,$ then
\[
\sqrt{k}\left( \frac{Z_{n-k:n}}{h}-1\right) =-\gamma\sqrt{k}\left(
\frac{\overline{H}\left( Z_{n-k:n}\right) }{\overline{H}_{n}\left(
Z_{n-k:n}\right) }-1\right) +o_{p}\left( 1\right) .
\]
We have $\overline{H}_{n}\left( Z_{n-k:n}\right) =k/n,$ then
\[
\sqrt{k}\left( \frac{Z_{n-k:n}}{h}-1\right) =\gamma\sqrt{k}\frac{n}
{k}\left( \overline{H}_{n}\left( Z_{n-k:n}\right) -\overline{H}\left(
Z_{n-k:n}\right) \right) +o_{p}\left( 1\right) ,
\]
which may be decomposed into
\[
\gamma\sqrt{k}\frac{n}{k}\left( \left( \overline{H}_{n}^{1}\left(
Z_{n-k:n}\right) -\overline{H}^{1}\left( Z_{n-k:n}\right) \right) +\left(
\overline{H}_{n}^{0}\left( Z_{n-k:n}\right) -\overline{H}^{0}\left(
Z_{n-k:n}\right) \right) \right) +o_{p}\left( 1\right) .
\]
Using $\left( \ref{betan}\right) $ and $\left( \ref{beta-tild}\right) $
with $z=1,$ leads to
\begin{equation}
\sqrt{k}\left( \frac{Z_{n-k:n}}{h}-1\right) =\gamma\left( \beta_{n}\left(
1\right) +\widetilde{\beta}_{n}\left( 1\right) \right) +o_{p}\left(
1\right) . \label{Zh}
\end{equation}
Now, we apply assertions $\left( i\right) $ and $\left( ii\right) $ of
Lemma \ref{Lem2} to complete the proof of the first result of the
theorem.\medskip
\noindent For the second result of the theorem, observe that
\[
\widehat{p}=\frac{n}{k}\overline{H}_{n}^{1}\left( Z_{n-k:n}\right) ,
\]
then consider the following decomposition
\begin{align}
\widehat{p}-p & =\frac{n}{k}\left( \overline{H}_{n}^{1}\left(
Z_{n-k:n}\right) -\overline{H}^{1}\left( Z_{n-k:n}\right) \right)
\label{phate-p}\\
& +\frac{n}{k}\left( \overline{H}^{1}\left( Z_{n-k:n}\right) -\overline
{H}^{1}\left( h\right) \right) +\left( \frac{n}{k}\overline{H}^{1}\left(
h\right) -p\right) .\nonumber
\end{align}
Notice that from $\left( \ref{betan}\right) ,$ almost surely, we have
\begin{equation}
\frac{n}{k}\left( \overline{H}_{n}^{1}\left( Z_{n-k:n}\right) -\overline
{H}^{1}\left( Z_{n-k:n}\right) \right) =\frac{1}{\sqrt{k}}\beta_{n}\left(
1\right) . \label{beta-p}
\end{equation}
The second term in the right-hand side of $\left( \ref{phate-p}\right) $ may
be written as
\begin{equation}
\frac{n}{k}\left( \overline{H}^{1}\left( Z_{n-k:n}\right) -\overline{H}
^{1}\left( h\right) \right) =\frac{n}{k}\overline{H}^{1}\left( h\right)
\left( \frac{\overline{H}^{1}\left( Z_{n-k:n}\right) }{\overline{H}
^{1}\left( h\right) }-1\right) . \label{prod}
\end{equation}
Making use of Lemma \ref{Lem1}, with $z=1$ and $z=Z_{n-k:n}/h,$ we
respectively get as $n\rightarrow\infty$
\begin{equation}
\frac{n}{k}\overline{H}^{1}\left( h\right) =p+O\left( A\left( h\right)
\right) \text{ and }\frac{n}{k}\overline{H}^{1}\left( Z_{n-k:n}\right)
=p\left( \frac{Z_{n-k:n}}{h}\right) ^{-1/\gamma}+O_{p}\left( A\left(
h\right) \right) , \label{H1-a}
\end{equation}
where $A\left( h\right) ,$ defined later on in Lemma \ref{Lem1}, is a
sequence tending to zero as $n\rightarrow\infty.$ It follows that
\[
\frac{\overline{H}^{1}\left( Z_{n-k:n}\right) }{\overline{H}^{1}\left(
h\right) }-1=\left( \frac{p}{p+O_{p}\left( A\left( h\right) \right)
}\right) \left( \left( Z_{n-k:n}/h\right) ^{-1/\gamma}-1\right)
+\frac{O_{p}\left( A\left( h\right) \right) }{p+O_{p}\left( A\left(
h\right) \right) }.
\]
More simply, since $A\left( h\right) =o\left( 1\right) ,$ we have
\[
\frac{p}{p+O_{p}\left( A\left( h\right) \right) }=1+o_{p}\left( 1\right)
\text{ and }\frac{O_{p}\left( A\left( h\right) \right) }{p+O_{p}\left(
A\left( h\right) \right) }=O_{p}\left( A\left( h\right) \right) .
\]
Therefore
\[
\frac{\overline{H}^{1}\left( Z_{n-k:n}\right) }{\overline{H}^{1}\left(
h\right) }-1=\left( 1+o_{p}\left( 1\right) \right) \left( \left(
Z_{n-k:n}/h\right) ^{-1/\gamma}-1\right) +O_{p}\left( A\left( h\right)
\right) .
\]
Recalling $\left( \ref{prod}\right) $ and using $\overline{H}^{1}\left(
h\right) $ from $\left( \ref{H1-a}\right) ,$ we get
\[
\frac{n}{k}\left( \overline{H}^{1}\left( Z_{n-k:n}\right) -\overline{H}
^{1}\left( h\right) \right) =p\left( \left( \frac{Z_{n-k:n}}{h}\right)
^{-1/\gamma}-1\right) \left( 1+o_{p}\left( 1\right) \right) +O_{p}\left(
A\left( h\right) \right) .
\]
By applying the mean value theorem and using the fact that $Z_{n-k:n}
/h\approx1,$ we readily verify that
\[
\left( \frac{Z_{n-k:n}}{h}\right) ^{-1/\gamma}-1\approx-\frac{1}{\gamma
}\left( \frac{Z_{n-k:n}}{h}-1\right) .
\]
Hence
\begin{equation}
\frac{n}{k}\left( \overline{H}^{1}\left( Z_{n-k:n}\right) -\overline{H}
^{1}\left( h\right) \right) =-\frac{p}{\gamma}\left( \frac{Z_{n-k:n}}
{h}-1\right) \left( 1+o_{p}\left( 1\right) \right) +O_{p}\left( A\left(
h\right) \right) . \label{D1}
\end{equation}
From the assumptions on the functions $A_{1}$ and $A_{2},$ we have $\sqrt
{k}A\left( h\right) \rightarrow0.$ By combining $\left( \ref{Zh}\right) $
and $\left( \ref{D1}\right) ,$ we obtain
\begin{equation}
\sqrt{k}\frac{n}{k}\left( \overline{H}^{1}\left( Z_{n-k:n}\right)
-\overline{H}^{1}\left( h\right) \right) =-p\left( \beta_{n}\left(
1\right) +\widetilde{\beta}_{n}\left( 1\right) \right) +o_{p}\left(
1\right) . \label{Z}
\end{equation}
For the third term in the right-hand side of $\left( \ref{phate-p}\right) ,$
we use conditions $\left( \ref{second-order}\right) ,$ as in the proof of
Lemma \ref{Lem1}, to have
\begin{equation}
\sqrt{k}\left( \frac{n}{k}\overline{H}^{1}\left( h\right) -p\right)
\sim\frac{pq}{\gamma_{1}}\left( \frac{\sqrt{k}A_{1}\left( h\right)
}{1-p\tau_{1}}+\frac{q\sqrt{k}A_{2}\left( h\right) }{1-q\tau_{2}}\right) ,
\label{mup}
\end{equation}
which tends to $0$ as $n\rightarrow\infty$ because, by assumption, $\sqrt
{k}A_{j}\left( h\right) $ goes to $0,$ $j=1,2.$ Substituting results
$\left( \ref{beta-p}\right) ,$ $\left( \ref{Z}\right) $ and $\left(
\ref{mup}\right) $ in decomposition $\left( \ref{phate-p}\right) ,$ yields
\begin{equation}
\sqrt{k}\left( \widehat{p}-p\right) =q\beta_{n}\left( 1\right)
-p\widetilde{\beta}_{n}\left( 1\right) +o_{p}\left( 1\right) .
\label{p-hate}
\end{equation}
The final form of the second result of the theorem is then obtained by
applying assertions $\left( i\right) $ and $\left( ii\right) $ of Lemma
\ref{Lem2}.\medskip
\noindent Finally, we focus on the third result of the theorem. It is clear
that we have the following decomposition
\begin{equation}
\sqrt{k}\left( \widehat{\gamma}_{1}^{\left( H,c\right) }-\gamma_{1}\right)
=\frac{1}{\widehat{p}}\sqrt{k}\left( \widehat{\gamma}^{H}-\gamma\right)
-\frac{\gamma_{1}}{\widehat{p}}\sqrt{k}\left( \widehat{p}-p\right) .
\label{decomp}
\end{equation}
Recall that one way to define Hill's estimator $\widehat{\gamma}^{H}$ is to
use the limit
\[
\gamma=\lim_{t\rightarrow\infty}\int_{t}^{\infty}z^{-1}\overline{H}\left(
z\right) /\overline{H}(t)dz.
\]
Then, by replacing $\overline{H}$ by $\overline{H}_{n}$ and letting
$t=Z_{n-k:n},$ we write
\[
\widehat{\gamma}^{H}=\frac{n}{k}\int_{Z_{n-k:n}}^{\infty}z^{-1}\overline
{H}_{n}(z)dz.
\]
For details, see for instance, \cite[page 69]{deHF06}. Let us consider the
following decomposition $\hat{\gamma}^{H}-\gamma=T_{n1}+T_{n2}+T_{n3},$ where
\[
T_{n1}:=\frac{n}{k}\int_{Z_{n-k:n}}^{\infty}z^{-1}\left( \overline{H}_{n}
^{0}(z)-\overline{H}^{0}(z)+\overline{H}_{n}^{1}(z)-\overline{H}
^{1}(z)\right) dz,
\]
\[
T_{n2}:=\frac{n}{k}\int_{Z_{n-k:n}}^{h}z^{-1}\overline{H}\left( z\right)
dz\text{ and }T_{n3}:=\frac{n}{k}\int_{h}^{\infty}z^{-1}\overline{H}\left(
z\right) dz-\gamma.
\]
We use the integral convention that $\int_{a}^{b}=\int_{\left[ a,b\right) }
$\ as integration is with respect to the measure induced by a right-continuous
function. Making a change of variables in the first term $T_{n1}$ and using
the uniform empirical representations of $\overline{H}_{n}^{0}$ and
$\overline{H}_{n}^{1},$ we get almost surely
\[
\sqrt{k}T_{n1}=\int_{1}^{\infty}z^{-1}\left( \beta_{n}\left( z\right)
+\widetilde{\beta}_{n}\left( z\right) \right) dz.
\]
For the second term $T_{n2},$ we apply the mean value theorem to have
\[
T_{n2}=\frac{\overline{H}\left( z_{n}^{\ast}\right) }{z_{n}^{\ast}}\frac
{n}{k}\left( h-Z_{n-k:n}\right) ,
\]
where $z_{n}^{\ast}$ is a sequence of rv's between $Z_{n-k:n}$ and $h.$\ It is
obvious that $z_{n}^{\ast}\approx h,$ which implies that $\overline{H}\left(
z_{n}^{\ast}\right) \approx k/n.$ It follows that the right-hand side of the
previous equation is $\approx-\left( Z_{n-k:n}/h-1\right) .$ Hence, from
$\left( \ref{Zh}\right) ,$ we hav
\[
\sqrt{k}T_{n2}=-\gamma\left( \beta_{n}\left( 1\right) +\widetilde{\beta
}_{n}\left( 1\right) \right) +o_{p}\left( 1\right) .
\]
Finally, for $T_{n3},$ we use the second-order conditions $\left(
\ref{second-order}\right) $ to get
\begin{equation}
\sqrt{k}T_{n3}\sim p^{2}\frac{\sqrt{k}A_{1}\left( h\right) }{1-p\tau_{1}
}+q^{2}\frac{\sqrt{k}A_{2}\left( h\right) }{1-q\tau_{2}}. \label{Tn4}
\end{equation}
Since by assumption $\sqrt{k}A_{j}\left( h\right) \rightarrow0,$ $j=1,2,$ as
$n\rightarrow\infty,$ then $\sqrt{k}T_{n3}\rightarrow0.$ By arguments similar
to the above, we obtain
\begin{equation}
\sqrt{k}\left( \hat{\gamma}^{H}-\gamma\right) =\int_{1}^{\infty}
z^{-1}\left( \beta_{n}\left( z\right) +\widetilde{\beta}_{n}\left(
z\right) \right) dz-\gamma\left( \beta_{n}\left( 1\right) +\widetilde
{\beta}_{n}\left( 1\right) \right) +o_{p}\left( 1\right) .
\label{rephill}
\end{equation}
Combining $\left( \ref{p-hate}\right) $ and $\left(
\ref{rephill}\right) $ yields
\begin{equation}
\sqrt{k}\left( \widehat{\gamma}_{1}^{\left( H,c\right) }-\gamma_{1}\right)
=\frac{1}{p}\int_{1}^{\infty}z^{-1}\left( \beta_{n}\left( z\right)
+\widetilde{\beta}_{n}\left( z\right) \right) dz-\frac{\gamma}{p^{2}}
\beta_{n}\left( 1\right) +o_{p}\left( 1\right) . \label{rep-hill-censor}
\end{equation}
We complete the proof of the third result of the theorem by using assertions
$\left( i\right) $ and $\left( ii\right) $ of Lemma \ref{Lem2}.
\subsection{Proof of Corollary \ref{Cor1}}
\noindent From the third result of Theorem \ref{Theorem1}, we deduce that
$\sqrt{k}\left( \widehat{\gamma}_{1}^{\left( H,c\right) }-\gamma
_{1}\right) $ is asymptotically centred Gaussian with variance
\[
\sigma^{2}=\gamma_{1}^{2}\lim_{n\rightarrow\infty}\mathbf{E}\left[
\sqrt{\frac{n}{k}}\int_{0}^{1}s^{-1}\mathbb{B}_{n}^{\ast}\left( sk/n\right)
ds-\frac{1}{p}\sqrt{\frac{n}{k}}\mathbb{B}_{n}\left( k/n\right) \right]
^{2}.
\]
We check that the processes $\mathbb{B}_{n}\left( s\right) ,$ $\widetilde
{\mathbb{B}}_{n}\left( s\right) $ and $\mathbb{B}_{n}^{\ast}\left(
s\right) $ satisfy
\[
p^{-1}\mathbf{E}\left[ \mathbb{B}_{n}\left( s\right) \mathbb{B}_{n}\left(
t\right) \right] =\min\left( s,t\right) -pst,\text{ }q^{-1}\mathbf{E}
\left[ \widetilde{\mathbb{B}}_{n}\left( s\right) \widetilde{\mathbb{B}}
_{n}\left( t\right) \right] =\min\left( s,t\right) -qst,
\]
and
\[
p^{-1}\mathbf{E}\left[ \mathbb{B}_{n}\left( s\right) \mathbb{B}_{n}^{\ast
}\left( t\right) \right] =\mathbf{E}\left[ \mathbb{B}_{n}^{\ast}\left(
s\right) \mathbb{B}_{n}^{\ast}\left( t\right) \right] =\min\left(
s,t\right) -st.
\]
Then, by elementary calculation (we omit details), we get $\sigma^{2}
=p\gamma_{1}^{2}.$\hfill$\Box$\medskip
\subsection{Proof of Theorem \ref{Theorem2}}
First, recall that
\[
\Pi_{n}=h\overline{F}\left( h\right) \int_{1}^{\infty}\frac{\overline
{F}\left( hx\right) }{\overline{F}\left( h\right) }dx\text{ and }
\widehat{\Pi}_{n}=Z_{n-k:n}\left( 1-F_{n}\left( Z_{n-k:n}\right) \right)
\frac{\widehat{\gamma}_{1}^{\left( H,c\right) }}{1-\widehat{\gamma}
_{1}^{\left( H,c\right) }}.
\]
Observe that we have the following decomposition
\[
\frac{\widehat{\Pi}_{n}-\Pi_{n}}{h\overline{F}\left( h\right) }=\sum
_{i=1}^{6}S_{ni},
\]
where
\[
S_{n1}:=\frac{\widehat{\gamma}_{1}^{\left( H,c\right) }}{1-\widehat{\gamma
}_{1}^{\left( H,c\right) }}\frac{Z_{n-k:n}}{h}\left\{ \frac{\left(
1-F_{n}\left( Z_{n-k:n}\right) \right) }{\overline{F}\left( h\right)
}-\frac{\overline{F}\left( Z_{n-k:n}\right) }{\overline{F}\left( h\right)
}\right\} ,
\]
\[
S_{n2}:=\frac{\widehat{\gamma}_{1}^{\left( H,c\right) }}{1-\widehat{\gamma
}_{1}^{\left( H,c\right) }}\frac{Z_{n-k:n}}{h}\left\{ \frac{\overline
{F}\left( Z_{n-k:n}\right) }{\overline{F}\left( h\right) }-\left(
\frac{Z_{n-k:n}}{h}\right) ^{-1/\gamma_{1}}\right\} ,
\]
\[
S_{n3}:=\frac{\widehat{\gamma}_{1}^{\left( H,c\right) }}{1-\widehat{\gamma
}_{1}^{\left( H,c\right) }}\frac{Z_{n-k:n}}{h}\left\{ \left(
\frac{Z_{n-k:n}}{h}\right) ^{-1/\gamma_{1}}-1\right\} ,\text{ }S_{n4}
:=\frac{\widehat{\gamma}_{1}^{\left( H,c\right) }}{1-\widehat{\gamma}
_{1}^{\left( H,c\right) }}\left\{ \frac{Z_{n-k:n}}{h}-1\right\} ,
\]
\[
S_{n5}:=\frac{\widehat{\gamma}_{1}^{\left( H,c\right) }}{1-\widehat{\gamma
}_{1}^{\left( H,c\right) }}-\frac{\gamma_{1}}{1-\gamma_{1}}\text{ and
}S_{n6}:=\frac{\gamma_{1}}{1-\gamma_{1}}-\frac{\Pi_{n}}{h\overline{F}\left(
h\right) }.
\]
Since $Z_{n-k:n}\approx h$ and $\widehat{\gamma}_{1}^{\left( H,c\right)
}\approx\gamma_{1},$ then
\begin{equation}
S_{n1}\approx-\frac{\gamma_{1}}{1-\gamma_{1}}\frac{F_{n}\left( Z_{n-k:n}
\right) -F\left( Z_{n-k:n}\right) }{\overline{F}\left( Z_{n-k:n}\right)
}. \label{Sn1}
\end{equation}
In view of Proposition 5 in \cite{Cs96}, we have for any $x\leq Z_{n-k:n},$
\begin{align}
\frac{F_{n}\left( x\right) -F\left( x\right) }{\overline{F}\left(
x\right) } & =\frac{H_{n}^{1}\left( x\right) -H^{1}\left( x\right)
}{\overline{H}\left( x\right) }-\int_{0}^{x}\frac{H_{n}^{1}\left( z\right)
-H^{1}\left( z\right) }{\overline{H}^{2}\left( z\right) }dH\left(
z\right) \label{ratio}\\
& -\int_{0}^{x}\frac{\overline{H}_{n}\left( z\right) -\overline{H}\left(
z\right) }{\overline{H}^{2}\left( z\right) }dH^{1}\left( z\right)
+O_{p}\left( \frac{1}{k}\right) .\nonumber
\end{align}
Notice that
\begin{equation}
\sqrt{n}\left( \overline{H}_{n}\left( z\right) -\overline{H}\left(
z\right) \right) =\sqrt{n}\left( \overline{H}_{n}^{1}\left( z\right)
-\overline{H}^{1}\left( z\right) \right) +\sqrt{n}\left( \overline{H}
_{n}^{0}\left( z\right) -\overline{H}^{0}\left( z\right) \right) ,
\label{Hn}
\end{equation}
and recall that from representations $\left( \ref{rep-H1}\right) $ and
$\left( \ref{rep-H0}\right) ,$ we have
\[
\sqrt{n}\left( H_{n}^{1}\left( x\right) -H^{1}\left( x\right) \right)
=\alpha_{n}\left( \theta-\overline{H}^{1}\left( x\right) \right) ,
\]
\[
\sqrt{n}\left( \overline{H}_{n}^{1}\left( z\right) -\overline{H}^{1}\left(
z\right) \right) =\alpha_{n}\left( \theta\right) -\alpha_{n}\left(
\theta-\overline{H}^{1}\left( z\right) \right) ,
\]
and
\[
\sqrt{n}\left( \overline{H}_{n}^{0}\left( z\right) -\overline{H}^{0}\left(
z\right) \right) =-\alpha_{n}\left( 1-\overline{H}^{0}\left( z\right)
\right) .
\]
It follows, from $\left( \ref{Hn}\right) ,$ that
\[
\sqrt{n}\left( \overline{H}_{n}\left( z\right) -\overline{H}\left(
z\right) \right) =\left( \alpha_{n}\left( \theta\right) -\alpha
_{n}\left( \theta-\overline{H}^{1}\left( z\right) \right) \right)
-\alpha_{n}\left( 1-\overline{H}^{0}\left( z\right) \right) .
\]
By using the above representations in $\left( \ref{ratio}\right) ,$ we
obtain
\begin{align*}
& \sqrt{n}\frac{F_{n}\left( x\right) -F\left( x\right) }{\overline
{F}\left( x\right) }\\
& =\frac{\alpha_{n}\left( \theta-\overline{H}^{1}\left( x\right) \right)
}{\overline{H}\left( x\right) }-\int_{0}^{x}\frac{\alpha_{n}\left(
\theta-\overline{H}^{1}\left( z\right) \right) }{\overline{H}^{2}\left(
z\right) }dH\left( z\right) \\
& -\int_{0}^{x}\frac{\alpha_{n}\left( \theta\right) -\alpha_{n}\left(
\theta-\overline{H}^{1}\left( z\right) \right) -\alpha_{n}\left(
1-\overline{H}^{0}\left( z\right) \right) }{\overline{H}^{2}\left(
z\right) }dH^{1}\left( z\right) +O_{p}\left( \frac{\sqrt{n}}
{k}\right) .
\end{align*}
By writing
\[
\alpha_{n}\left( \theta-\overline{H}^{1}\left( z\right) \right)
=\alpha_{n}\left( \theta\right) -\left( \alpha_{n}\left( \theta\right)
-\alpha_{n}\left( \theta-\overline{H}^{1}\left( z\right) \right) \right)
,
\]
it is easy to check that
\[
\int_{0}^{x}\frac{\alpha_{n}\left( \theta-\overline{H}^{1}\left( z\right)
\right) }{\overline{H}^{2}\left( z\right) }dH\left( z\right)
=\frac{\alpha_{n}\left( \theta\right) }{\overline{H}\left( x\right) }
-\int_{0}^{x}\frac{\alpha_{n}\left( \theta\right) -\alpha_{n}\left(
\theta-\overline{H}^{1}\left( z\right) \right) }{\overline{H}^{2}\left(
z\right) }dH\left( z\right) ,
\]
and therefore
\begin{align*}
& \sqrt{n}\frac{F_{n}\left( x\right) -F\left( x\right) }{\overline
{F}\left( x\right) }\\
& =-\frac{\alpha_{n}\left( \theta\right) -\alpha_{n}\left( \theta
-\overline{H}^{1}\left( x\right) \right) }{\overline{H}\left( x\right)
}+\int_{0}^{x}\frac{\alpha_{n}\left( \theta\right) -\alpha_{n}\left(
\theta-\overline{H}^{1}\left( z\right) \right) }{\overline{H}^{2}\left(
z\right) }dH\left( z\right) \\
& -\int_{0}^{x}\frac{\alpha_{n}\left( \theta\right) -\alpha_{n}\left(
\theta-\overline{H}^{1}\left( z\right) \right) -\alpha_{n}\left(
1-\overline{H}^{0}\left( z\right) \right) }{\overline{H}^{2}\left(
z\right) }dH^{1}\left( z\right) +O_{p}\left( \frac{\sqrt{n}}{k}\right) .
\end{align*}
By multiplying both sides of the previous equation by $\sqrt{k/n},$ then by
using the Gaussian approximation $\left( \ref{approx}\right) ,$ at
$x=Z_{n-k:n},$ we get
\begin{align*}
\sqrt{k}\frac{F_{n}\left( Z_{n-k:n}\right) -F\left( Z_{n-k:n}\right)
}{\overline{F}\left( Z_{n-k:n}\right) } & =-\sqrt{\frac{n}{k}}
\mathbf{B}_{n}\left( Z_{n-k:n}\right) +\sqrt{\frac{k}{n}}\int_{0}
^{Z_{n-k:n}}\frac{\mathbf{B}_{n}\left( z\right) }{\overline{H}^{2}\left(
z\right) }dH\left( z\right) \\
& -\sqrt{\frac{k}{n}}\int_{0}^{Z_{n-k:n}}\frac{\mathbf{B}_{n}^{\ast}\left(
z\right) }{\overline{H}^{2}\left( z\right) }dH^{1}\left( z\right)
+O_{p}\left( \frac{1}{\sqrt{k}}\right) ,
\end{align*}
where $\mathbf{B}_{n}\left( z\right) $ and $\mathbf{B}_{n}^{\ast}\left(
z\right) $ are two Gaussian processes defined by
\begin{equation}
\mathbf{B}_{n}\left( z\right) :=B_{n}\left( \theta\right) -B_{n}\left(
\theta-\overline{H}^{1}\left( z\right) \right) \text{ and }
\mathbf{B}_{n}^{\ast}\left( z\right) :=\mathbf{B}_{n}\left( z\right)
-B_{n}\left( 1-\overline{H}^{0}\left( z\right) \right) . \label{Bn*}
\end{equation}
The assertions of Lemma \ref{Lem3} and the fact that $1/\sqrt{k}\rightarrow0$
yield
\begin{align}
& \sqrt{k}\frac{F_{n}\left( Z_{n-k:n}\right) -F\left( Z_{n-k:n}\right)
}{\overline{F}\left( Z_{n-k:n}\right) }\label{ratio2}\\
& =-\sqrt{\frac{n}{k}}\mathbf{B}_{n}\left( Z_{n-k:n}\right) +\sqrt{\frac
{k}{n}}\int_{0}^{h}\frac{\mathbf{B}_{n}\left( z\right) }{\overline{H}
^{2}\left( z\right) }dH\left( z\right) -\sqrt{\frac{k}{n}}\int_{0}
^{h}\frac{\mathbf{B}_{n}^{\ast}\left( z\right) }{\overline{H}^{2}\left(
z\right) }dH^{1}\left( z\right) +o_{p}\left( 1\right) .\nonumber
\end{align}
Applying the results of Lemma \ref{Lem4} leads to
\[
\sqrt{k}\frac{F_{n}\left( Z_{n-k:n}\right) -F\left( Z_{n-k:n}\right)
}{\overline{F}\left( Z_{n-k:n}\right) }=-p\sqrt{\frac{n}{k}}\mathbb{B}
_{n}^{\ast}\left( \frac{k}{n}\right) +o_{p}\left( 1\right) ,
\]
which in turn implies that
\[
\sqrt{k}S_{n1}\approx\frac{\gamma_{1}p}{1-\gamma_{1}}\sqrt{\frac{n}{k}
}\mathbb{B}_{n}^{\ast}\left( \frac{k}{n}\right) .
\]
In view of the second-order regular variation condition $\left(
\ref{second-order}\right) $ for $\overline{F},$ we have $\sqrt{k}
S_{n2}\approx\frac{\gamma_{1}}{1-\gamma_{1}}\sqrt{k}A_{1}\left( h\right) ,$
which, by assumption, tends to $0.$ As for the term $S_{n3},$ we use Taylor's
expansion and the fact that $Z_{n-k:n}\approx h,$ to get
\[
\sqrt{k}S_{n3}\approx-\frac{1}{1-\gamma_{1}}\sqrt{k}\left( \frac{Z_{n-k:n}
}{h}-1\right) .
\]
By using Theorem $\ref{Theorem1}$ we get
\[
\sqrt{k}S_{n3}\approx-\frac{\gamma}{1-\gamma_{1}}\sqrt{\frac{n}{k}}
\mathbb{B}_{n}^{\ast}\left( \frac{k}{n}\right) .
\]
Similar arguments, applied to $S_{n4},$ yield
\[
\sqrt{k}S_{n4}\approx-\frac{\gamma_{1}\gamma}{1-\gamma_{1}}\sqrt{\frac{n}{k}
}\mathbb{B}_{n}^{\ast}\left( \frac{k}{n}\right) .
\]
In view of the consistency of $\widehat{\gamma}_{1}^{\left( H,c\right) },$
it is easy to verify that
\[
\sqrt{k}S_{n5}\approx\frac{1}{\left( 1-\gamma_{1}\right) ^{2}}\sqrt
{k}\left( \widehat{\gamma}_{1}^{\left( H,c\right) }-\gamma_{1}\right) .
\]
Once again by using Theorem $\ref{Theorem1},$ we get
\[
\sqrt{k}S_{n5}\approx\frac{\gamma_{1}}{\left( 1-\gamma_{1}\right) ^{2}
}\left\{ \sqrt{\frac{n}{k}}\int_{0}^{1}s^{-1}\mathbb{B}_{n}^{\ast}\left(
\frac{k}{n}s\right) ds-\frac{1}{p}\sqrt{\frac{n}{k}}\mathbb{B}_{n}\left(
\frac{k}{n}\right) \right\} .
\]
For the term $S_{n6},$ we write
\[
\sqrt{k}S_{n6}=-\sqrt{k}\int_{1}^{\infty}\left( \frac{\overline{F}\left(
hx\right) }{\overline{F}\left( h\right) }-x^{-1/\gamma_{1}}\right) dx,
\]
and we apply the uniform inequality of regularly varying functions
\citep[see, e.g., Theorem B. 2.18 in][page 383]{deHF06} to show that $\sqrt
{k}S_{n6}\approx-\sqrt{k}A_{1}\left( h\right) \rightarrow0,$ as
$n\rightarrow\infty.$ Finally, combining the above results on all six terms
$S_{ni}$ completes the proof.
\subsection{Proof of Corollary \ref{Cor2}}
\noindent It is clear that $\sqrt{k}\left( \widehat{\Pi}_{n}-\Pi_{n}\right)
/\left( h\overline{F}\left( h\right) \right) $ is an asymptotically
centred Gaussian rv. By using the covariance formulas and after elementary
calculation, we show that its asymptotic variance equals
\[
\frac{p\gamma_{1}^{2}}{\left( 1-\gamma_{1}\right) ^{2}}\left[ p\gamma
_{1}^{2}+\frac{1}{\left( 1-\gamma_{1}\right) ^{2}}\right] .
\]
\noindent\textbf{Concluding notes\medskip}
\noindent The primary objective of the present work is to provide a
Gaussian limiting distribution for the estimator of the shape parameter of a
heavy-tailed distribution under random censorship. Our approach is based on
the approximation of the uniform empirical process by a sequence of Brownian
bridges. The Gaussian representation will be of great use in statistical
inference on quantities related to the tail index in the context of censored
data, such as high quantiles and risk measures. It is noteworthy that for
$p=1$ (which corresponds to the non-censoring case), our main result (assertion
three of Theorem \ref{Theorem1}) perfectly agrees with the Gaussian
approximation of the classical Hill estimator given in Section \ref{sec1}. On
the other hand, the variance we obtain in Corollary \ref{Cor1} is the same as
that given by \cite{EnFG08}.
\section{\textbf{Appendix\label{sec5}}}
\begin{lemma}
\label{Lem1}Assume that conditions $\left( \ref{second-order}\right) $ hold
and let $k:=k_{n}$ be an integer sequence satisfying $(\ref{K}),$ then for
$z\geq1,$ we have
\[
\frac{n}{k}\overline{H}^{1}\left( zh\right) =pz^{-1/\gamma}+O\left(
A\left( h\right) \right) ,\text{ as }n\rightarrow\infty,
\]
where $A\left( h\right) :=A_{1}\left( h\right) +A_{2}\left( h\right)
+A_{1}\left( h\right) A_{2}\left( h\right) .$
\end{lemma}
\begin{proof}
Let $z\geq1$ and recall that $\overline{H}^{1}\left( z\right) =-\int
_{z}^{\infty}\overline{G}\left( x\right) d\overline{F}\left( x\right) .$
It is clear that
\[
\overline{H}^{1}\left( zh\right) =-\int_{z}^{\infty}\overline{G}\left(
xh\right) d\overline{F}\left( xh\right) .
\]
Since $\overline{H}\left( h\right) =\overline{G}\left( h\right)
\overline{F}\left( h\right) ,$ then
\[
\frac{\overline{H}^{1}\left( zh\right) }{\overline{H}\left( h\right)
}=-\int_{z}^{\infty}\frac{\overline{G}\left( xh\right) }{\overline{G}\left(
h\right) }d\frac{\overline{F}\left( xh\right) }{\overline{F}\left(
h\right) }.
\]
It is easy to verify that
\begin{align*}
\frac{\overline{H}^{1}\left( zh\right) }{\overline{H}\left( h\right) } &
=-\int_{z}^{\infty}\left( \frac{\overline{G}\left( xh\right) }{\overline
{G}\left( h\right) }-x^{-1/\gamma_{2}}\right) d\left( \frac{\overline
{F}\left( xh\right) }{\overline{F}\left( h\right) }-x^{-1/\gamma_{1}
}\right) \\
& -\int_{z}^{\infty}\left( \frac{\overline{G}\left( xh\right) }{\overline
{G}\left( h\right) }-x^{-1/\gamma_{2}}\right) dx^{-1/\gamma_{1}}
-\int_{z}^{\infty}x^{-1/\gamma_{2}}d\left( \frac{\overline{F}\left(
xh\right) }{\overline{F}\left( h\right) }-x^{-1/\gamma_{1}}\right) \\
& -\int_{z}^{\infty}x^{-1/\gamma_{2}}dx^{-1/\gamma_{1}}.
\end{align*}
For the purpose of using the second-order regular variation conditions
$\left( \ref{second-order}\right) ,$ we write
\begin{align*}
\frac{\overline{H}^{1}\left( zh\right) }{\overline{H}\left( h\right) } &
=-A_{1}\left( h\right) A_{2}\left( h\right) \int_{z}^{\infty}\frac
{\frac{\overline{G}\left( xh\right) }{\overline{G}\left( h\right)
}-x^{-1/\gamma_{2}}}{A_{2}\left( h\right) }d\frac{\frac{\overline{F}\left(
xh\right) }{\overline{F}\left( h\right) }-x^{-1/\gamma_{1}}}{A_{1}\left(
h\right) }\\
& -A_{2}\left( h\right) \int_{z}^{\infty}\frac{\frac{\overline{G}\left(
xh\right) }{\overline{G}\left( h\right) }-x^{-1/\gamma_{2}}}{A_{2}\left(
h\right) }dx^{-1/\gamma_{1}}-A_{1}\left( h\right) \int_{z}^{\infty
}x^{-1/\gamma_{2}}d\frac{\frac{\overline{F}\left( xh\right) }{\overline
{F}\left( h\right) }-x^{-1/\gamma_{1}}}{A_{1}\left( h\right) }\\
& -\int_{z}^{\infty}x^{-1/\gamma_{2}}dx^{-1/\gamma_{1}}.
\end{align*}
Next, we apply the uniform inequality of regularly varying functions
\citep[see, e.g., Theorem B. 2.18 in][page 383]{deHF06}. For all
$\epsilon,\omega>0,$ there exists $t_{1}$ such that for $hx\geq t_{1}:$
\[
\left\vert \frac{\frac{\overline{F}\left( xh\right) }{\overline{F}\left(
h\right) }-x^{-1/\gamma_{1}}}{A_{1}\left( h\right) }-x^{-1/\gamma_{1}
}\dfrac{x^{\tau_{1}}-1}{\tau_{1}}\right\vert \leq\epsilon x^{-1/\gamma_{1}
}\max\left( x^{\omega},x^{-\omega}\right) .
\]
Likewise, there exists $t_{2}$ such that for $hx\geq t_{2}:$
\[
\left\vert \frac{\frac{\overline{G}\left( xh\right) }{\overline{G}\left(
h\right) }-x^{-1/\gamma_{2}}}{A_{2}\left( h\right) }-x^{-1/\gamma_{2}
}\dfrac{x^{\tau_{2}}-1}{\tau_{2}}\right\vert \leq\epsilon x^{-1/\gamma_{2}
}\max\left( x^{\omega},x^{-\omega}\right) .
\]
\]
Making use of the previous two inequalities and noting that $\overline
{H}\left( h\right) =k/n$ and
\[
-\int_{z}^{\infty}x^{-1/\gamma_{2}}dx^{-1/\gamma_{1}}=pz^{-1/\gamma}
\]
achieve the proof.
\end{proof}
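The closing identity $-\int_{z}^{\infty}x^{-1/\gamma_{2}}dx^{-1/\gamma_{1}}=pz^{-1/\gamma}$ (with $1/\gamma=1/\gamma_{1}+1/\gamma_{2},$ which forces $p=\gamma/\gamma_{1}=\gamma_{2}/(\gamma_{1}+\gamma_{2})$) can be checked numerically. The sketch below is only an illustration; the parameter values are arbitrary choices, not taken from the paper.

```python
import math

def tail_product_integral(gamma1, gamma2, z, upper=1e6, n=200000):
    """Trapezoid evaluation of -int_z^upper x^(-1/gamma2) d(x^(-1/gamma1)),
    i.e. (1/gamma1) * int_z^upper x^(-1/gamma1 - 1/gamma2 - 1) dx,
    computed on a log grid x = exp(t) for numerical stability."""
    a = 1.0/gamma1 + 1.0/gamma2 + 1.0          # power of the integrand
    t0, t1 = math.log(z), math.log(upper)
    h = (t1 - t0)/n
    s = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0
        s += w*math.exp(-(a - 1.0)*(t0 + i*h))  # x^(-a) dx = e^(-(a-1)t) dt
    return s*h/gamma1

# illustrative parameter values (not from the paper)
gamma1, gamma2, z = 0.6, 0.9, 2.0
gamma = 1.0/(1.0/gamma1 + 1.0/gamma2)
p = gamma/gamma1                                # = gamma2/(gamma1 + gamma2)
numeric = tail_product_integral(gamma1, gamma2, z)
closed = p*z**(-1.0/gamma)
```

The truncation at `upper` is harmless here since the integrand decays like $x^{-1-1/\gamma}$.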
\begin{lemma}
\label{Lem2}In addition to the assumptions of Lemma \ref{Lem1}, suppose that
both cdf's $F$ and $G$ are absolutely continuous, then for $z\geq1,$ we have
\[
\begin{tabular}
[c]{l}
$(i)\text{ }\beta_{n}\left( z\right) =\sqrt{\dfrac{n}{k}}\mathbb{B}
_{n}\left( \dfrac{k}{n}z^{-\gamma}\right) +o_{p}\left( 1\right) \medskip
$\\
$(ii)\text{ }\widetilde{\beta}_{n}\left( z\right) =\sqrt{\dfrac{n}{k}}
\widetilde{\mathbb{B}}_{n}\left( \dfrac{k}{n}z^{-\gamma}\right)
+o_{p}\left( 1\right) \medskip$\\
$(iii)\text{ }
{\displaystyle\int_{1}^{\infty}}
z^{-1}\left( \beta_{n}\left( z\right) +\widetilde{\beta}_{n}\left(
z\right) \right) dz=\gamma\sqrt{\dfrac{n}{k}}\int_{0}^{1}s^{-1}
\mathbb{B}_{n}^{\ast}\left( \dfrac{k}{n}s\right) ds+o_{p}\left( 1\right)
.$
\end{tabular}
\]
\end{lemma}
\begin{proof}
Let us begin with assertion $\left( i\right) .$ A straightforward application
of the weak approximation $\left( \ref{approx}\right) $ yields
\[
\beta_{n}\left( z\right) =\sqrt{\frac{n}{k}}\left\{ B_{n}\left(
\theta\right) -B_{n}\left( \theta-\overline{H}^{1}\left( zZ_{n-k:n}\right)
\right) \right\} +o_{p}\left( 1\right) .
\]
Then we have to show that
\[
\sqrt{\frac{n}{k}}\left\{ B_{n}\left( \theta-\overline{H}^{1}\left(
zZ_{n-k:n}\right) \right) -B_{n}\left( \theta-\frac{k}{n}z^{-\gamma
}\right) \right\} =o_{p}\left( 1\right) .
\]
Indeed, let $\left\{ W_{n}\left( t\right) ;0\leq t\leq1\right\} $ be a
sequence of Wiener processes defined on $\left( \Omega,\mathcal{A}
,\mathbb{P}\right) $ so that
\begin{equation}
\left\{ B_{n}\left( t\right) ;0\leq t\leq1\right\} \overset{d}{=}\left\{
W_{n}\left( t\right) -tW_{n}\left( 1\right) ;0\leq t\leq1\right\} .
\label{W}
\end{equation}
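As a side check, the representation above can be exercised by simulation: the bridge $B(t)=W(t)-tW(1)$ should reproduce the Brownian-bridge covariance $\mathbf{E}\left[ B(s)B(t)\right] =\min(s,t)-st.$ The sketch below uses a Gaussian random walk in place of the Wiener process; the time points and sample sizes are illustrative assumptions.

```python
import math
import random

def bridge_cov_estimate(s=0.3, t=0.7, paths=10000, steps=100, seed=11):
    """Monte Carlo check that B(u) = W(u) - u*W(1) has covariance
    E[B(s)B(t)] = min(s,t) - s*t, with W a Gaussian random walk."""
    rng = random.Random(seed)
    dt = 1.0/steps
    i_s, i_t = int(s*steps), int(t*steps)   # grid indices of times s and t
    acc = 0.0
    for _ in range(paths):
        w, w_s, w_t = 0.0, 0.0, 0.0
        for i in range(1, steps + 1):
            w += rng.gauss(0.0, math.sqrt(dt))
            if i == i_s:
                w_s = w
            if i == i_t:
                w_t = w
        acc += (w_s - s*w)*(w_t - t*w)      # B(s)*B(t), since W(1) = w
    return acc/paths

cov = bridge_cov_estimate()                 # target: min(0.3, 0.7) - 0.21 = 0.09
```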
Then without loss of generality, we write
\begin{align*}
& \sqrt{\frac{n}{k}}\left\{ B_{n}\left( \theta-\overline{H}^{1}\left(
zZ_{n-k:n}\right) \right) -B_{n}\left( \theta-\frac{k}{n}z^{-\gamma
}\right) \right\} \\
& =\sqrt{\frac{n}{k}}\left\{ W_{n}\left( \theta-\overline{H}^{1}\left(
zZ_{n-k:n}\right) \right) -W_{n}\left( \theta-\frac{k}{n}z^{-\gamma
}\right) \right\} \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\sqrt{\frac{n}{k}}\left( \frac{k}
{n}z^{-\gamma}-\overline{H}^{1}\left( zZ_{n-k:n}\right) \right)
W_{n}\left( 1\right) .
\end{align*}
Let $z\geq1$ be fixed and recall that $\overline{H}^{1}\left( zZ_{n-k:n}
\right) \approx z^{-\gamma}k/n,$ then it is easy to verify that the second
term of the previous quantity tends to zero (in probability) as $n\rightarrow
\infty.\ $Next we show that the first one also goes to zero in
probability.\ For given $0<\eta<1$ and $0<\varepsilon<1$ small enough, we have
for all large $n$
\[
\mathbb{P}\left( \left\vert \frac{\overline{H}^{1}\left( zZ_{n-k:n}\right)
}{z^{-\gamma}k/n}-1\right\vert >\eta^{2}\frac{\varepsilon^{2}}{4z^{\gamma}
}\right) <\varepsilon/2.
\]
Observe now that
\begin{align*}
& \mathbb{P}\left( \sqrt{\frac{n}{k}}\left\vert W_{n}\left( \theta
-\overline{H}^{1}\left( zZ_{n-k:n}\right) \right) -W_{n}\left(
\theta-\frac{k}{n}z^{-\gamma}\right) \right\vert >\eta\right) \\
& =\mathbb{P}\left( \sqrt{\frac{n}{k}}W_{n}\left( \left\vert \overline
{H}^{1}\left( zZ_{n-k:n}\right) -\frac{k}{n}z^{-\gamma}\right\vert \right)
>\eta\right) \\
& \leq\mathbb{P}\left( \left\vert \frac{\overline{H}^{1}\left(
zZ_{n-k:n}\right) }{z^{-\gamma}k/n}-1\right\vert >\eta^{2}\frac
{\varepsilon^{2}}{4z^{\gamma}}\right) +\mathbb{P}\left( \sup_{0\leq
t\leq\frac{\varepsilon^{2}}{4}\frac{k}{n}}W_{n}\left( t\right) >\eta
\sqrt{k/n}\right) .
\end{align*}
It is clear that the first term of the latter expression tends to zero as
$n\rightarrow\infty.$ On the other hand, since $\left\{ W_{n}\left(
t\right) ;0\leq t\leq1\right\} $ is a martingale then by using the classical
Doob inequality we have, for any $u>0$ and $T>0$
\[
\mathbb{P}\left( \sup_{0\leq t\leq T}W_{n}\left( t\right) >u\right)
\leq\mathbb{P}\left( \sup_{0\leq t\leq T}\left\vert W_{n}\left( t\right)
\right\vert >u\right) \leq\frac{\mathbf{E}\left\vert W_{n}\left( T\right)
\right\vert }{u}\leq\frac{\sqrt{T}}{u}.
\]
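This maximal-inequality bound can be sanity-checked by simulation; the sketch below approximates the Wiener process by a scaled Gaussian random walk (the discretization and the sample values $T=1,$ $u=2$ are illustrative assumptions, not taken from the paper).

```python
import math
import random

def sup_tail_estimate(T=1.0, u=2.0, paths=2000, steps=400, seed=7):
    """Monte Carlo estimate of P(sup_{0<=t<=T} |W(t)| > u) for a Wiener
    process, approximated by a scaled Gaussian random walk."""
    rng = random.Random(seed)
    sd = math.sqrt(T/steps)
    hits = 0
    for _ in range(paths):
        w, sup_abs = 0.0, 0.0
        for _ in range(steps):
            w += rng.gauss(0.0, sd)
            sup_abs = max(sup_abs, abs(w))
        hits += sup_abs > u
    return hits/paths

estimate = sup_tail_estimate()   # well below the Doob-type bound
bound = math.sqrt(1.0)/2.0       # sqrt(T)/u for T = 1, u = 2
```

The bound $\sqrt{T}/u$ is crude (it uses $\mathbf{E}\left\vert W(T)\right\vert \leq\sqrt{T}$), so the simulated probability should sit comfortably beneath it.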
Letting $T=\eta^{2}\frac{\varepsilon^{2}}{4}\frac{k}{n}$ and $u=\eta\sqrt
{k/n},$ yields
\begin{subequations}
\[
\mathbb{P}\left( \sup_{0\leq t\leq\eta^{2}\frac{\varepsilon^{2}}{4}\frac
{k}{n}}W_{n}\left( t\right) >\eta\sqrt{k/n}\right) \leq\varepsilon/2,
\]
which completes the proof of assertion $\left( i\right) .$ The proof of
assertion $\left( ii\right) $ follows by similar arguments.\ For assertion
$(iii),$ let us write
\end{subequations}
\begin{align*}
& \int_{1}^{\infty}z^{-1}\left( \beta_{n}\left( z\right) +\widetilde
{\beta}_{n}\left( z\right) \right) dz\\
& =\sqrt{\frac{n}{k}}\int_{Z_{n-k:n}}^{Z_{n:n}}z^{-1}\left( \alpha
_{n}\left( \theta\right) -\alpha_{n}\left( \theta-\overline{H}^{1}\left(
z\right) \right) -\alpha_{n}\left( 1-\overline{H}^{0}\left( z\right)
\right) \right) dz,
\end{align*}
which may be decomposed into $T_{n1}^{\left( 1\right) }+T_{n1}^{\left(
2\right) }+T_{n1}^{\left( 3\right) }$ where
\[
T_{n1}^{\left( 1\right) }:=\sqrt{\frac{n}{k}}\int_{h}^{H^{-1}\left(
1-1/n\right) }z^{-1}\left( \alpha_{n}\left( \theta\right) -\alpha
_{n}\left( \theta-\overline{H}^{1}\left( z\right) \right) -\alpha
_{n}\left( 1-\overline{H}^{0}\left( z\right) \right) \right) dz,
\]
\[
T_{n1}^{\left( 2\right) }:=\sqrt{\frac{n}{k}}\int_{Z_{n-k:n}}^{h}
z^{-1}\left( \alpha_{n}\left( \theta\right) -\alpha_{n}\left(
\theta-\overline{H}^{1}\left( z\right) \right) -\alpha_{n}\left(
1-\overline{H}^{0}\left( z\right) \right) \right) dz
\]
and
\[
T_{n1}^{\left( 3\right) }:=\sqrt{\frac{n}{k}}\int_{H^{-1}\left(
1-1/n\right) }^{Z_{n:n}}z^{-1}\left( \alpha_{n}\left( \theta\right)
-\alpha_{n}\left( \theta-\overline{H}^{1}\left( z\right) \right)
-\alpha_{n}\left( 1-\overline{H}^{0}\left( z\right) \right) \right) dz.
\]
Once again by using approximation $\left( \ref{approx}\right) ,$ we get
\[
T_{n1}^{\left( 1\right) }=\sqrt{\frac{n}{k}}\int_{1}^{\frac{H^{-1}\left(
1-1/n\right) }{h}}z^{-1}\left( B_{n}\left( \theta\right) -B_{n}\left(
\theta-\overline{H}^{1}\left( hz\right) \right) -B_{n}\left(
1-\overline{H}^{0}\left( hz\right) \right) \right) dz+o_{p}\left(
1\right) .
\]
Since $h/H^{-1}\left( 1-1/n\right) \rightarrow0,$ then by elementary
calculations we show that the latter quantity equals (as $n\rightarrow\infty$)
\[
\sqrt{\frac{n}{k}}\int_{1}^{\infty}z^{-1}\left( B_{n}\left( \theta\right)
-B_{n}\left( \theta-p\frac{k}{n}z^{-\gamma}\right) -B_{n}\left( 1-q\frac
{k}{n}z^{-\gamma}\right) \right) dz+o_{p}\left( 1\right) .
\]
By a change of variables and inverting the integration limits we end up with
\[
\gamma\sqrt{\frac{n}{k}}\int_{0}^{1}s^{-1}\left( B_{n}\left( \theta\right)
-B_{n}\left( \theta-p\frac{k}{n}s\right) -B_{n}\left( 1-q\frac{k}
{n}s\right) \right) ds+o_{p}\left( 1\right) ,
\]
which equals the right-hand side of assertion $\left( iii\right) .$ We have to
show that both $T_{n1}^{\left( 2\right) }$ and $T_{n1}^{\left( 3\right) }$
tend to zero in probability as $n\rightarrow\infty.$ Observe that
\[
\mathbb{P}\left( \left\vert T_{n1}^{\left( 2\right) }\right\vert
>\eta\right) \leq\mathbb{P}\left( I_{n}>\eta\right) +\mathbb{P}\left(
\left\vert \frac{Z_{n-k:n}}{h}-1\right\vert >\varepsilon\right) ,
\]
where
\[
I_{n}:=\sqrt{\frac{n}{k}}\int_{h}^{\left( 1+\varepsilon\right) h}
z^{-1}\left\vert \alpha_{n}\left( \theta\right) -\alpha_{n}\left(
\theta-\overline{H}^{1}\left( z\right) \right) -\alpha_{n}\left(
1-\overline{H}^{0}\left( z\right) \right) \right\vert dz.
\]
We already have $\mathbb{P}\left( \left\vert Z_{n-k:n}/h-1\right\vert
>\varepsilon\right) \rightarrow0;$ it remains to show that $\mathbb{P}\left(
I_{n}>\eta\right) \rightarrow0$ as well. By applying approximation $\left(
\ref{approx}\right) ,$ we get $I_{n}=\widetilde{I}_{n}+o_{p}\left( 1\right)
,$ where
\[
\widetilde{I}_{n}:=\sqrt{\frac{n}{k}}\int_{h}^{\left( 1+\varepsilon\right)
h}z^{-1}\left\vert B_{n}\left( \theta\right) -B_{n}\left( \theta
-\overline{H}^{1}\left( z\right) \right) -B_{n}\left( 1-\overline{H}
^{0}\left( z\right) \right) \right\vert dz.
\]
Next we show that $\mathbb{P}\left( \widetilde{I}_{n}>\eta\right)
\rightarrow0.$ By letting $B_{n}^{\ast}\left( z\right) :=B_{n}\left(
\theta\right) -B_{n}\left( \theta-\overline{H}^{1}\left( z\right) \right)
-B_{n}\left( 1-\overline{H}^{0}\left( z\right) \right) $ we show that
\[
\mathbf{E}\left[ B_{n}^{\ast}\left( x\right) B_{n}^{\ast}\left( y\right)
\right] =\min\left( \overline{H}\left( x\right) ,\overline{H}\left(
y\right) \right) -\overline{H}\left( x\right) \overline{H}\left(
y\right) ,
\]
which implies that $\mathbf{E}\left\vert B_{n}^{\ast}\left( z\right)
\right\vert \leq\sqrt{\overline{H}\left( z\right) }$ and since $\overline
{H}\left( zh\right) \sim\dfrac{k}{n}z^{-1/\gamma},$ then
\[
\mathbf{E}\left\vert \widetilde{I}_{n}\right\vert \leq\sqrt{\frac{n}{k}}
\int_{1}^{1+\varepsilon}z^{-1}\sqrt{\overline{H}\left( zh\right) }
dz\sim2\gamma\left( 1-\left( 1+\varepsilon\right) ^{-1/2\gamma}\right) ,
\]
which tends to zero as $\varepsilon\downarrow0;$ this means that
$\widetilde{I}_{n}\rightarrow0$ in probability. By similar arguments we also
show that $T_{n1}^{\left( 3\right) }\overset{\mathbb{P}}{\rightarrow}0;$
therefore we omit the details.
\end{proof}
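The closed-form limit $2\gamma\left( 1-\left( 1+\varepsilon\right) ^{-1/2\gamma}\right) $ obtained at the end of the preceding proof comes from integrating $z^{-1-1/(2\gamma)}$ over $[1,1+\varepsilon],$ once $\sqrt{\overline{H}\left( zh\right) }\sim\sqrt{k/n}\,z^{-1/(2\gamma)}$ is substituted. This can be verified numerically; the values of $\gamma$ and $\varepsilon$ below are illustrative choices.

```python
import math

def trapezoid(f, a, b, n=100000):
    # simple composite trapezoid rule on [a, b]
    h = (b - a)/n
    s = 0.5*(f(a) + f(b)) + sum(f(a + i*h) for i in range(1, n))
    return s*h

# illustrative values (not from the paper)
gamma, eps = 0.4, 0.3
numeric = trapezoid(lambda z: z**(-1.0 - 1.0/(2.0*gamma)), 1.0, 1.0 + eps)
closed = 2.0*gamma*(1.0 - (1.0 + eps)**(-1.0/(2.0*gamma)))
```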
\begin{lemma}
\label{Lem3}Under the assumptions of Lemma \ref{Lem1} we have
\[
\begin{tabular}
[c]{l}
$\left( i\right) \text{ }\sqrt{\dfrac{k}{n}}
{\displaystyle\int_{h}^{Z_{n-k:n}}}
\dfrac{\mathbf{B}_{n}\left( z\right) }{\overline{H}^{2}\left( z\right)
}dH\left( z\right) =o_{p}\left( 1\right) ,\medskip$\\
$\left( ii\right) \text{ }\sqrt{\dfrac{k}{n}}
{\displaystyle\int_{h}^{Z_{n-k:n}}}
\dfrac{\mathbf{B}_{n}^{\ast}\left( z\right) }{\overline{H}^{2}\left(
z\right) }dH^{1}\left( z\right) =o_{p}\left( 1\right) .$
\end{tabular}
\]
\end{lemma}
\begin{proof}
We begin by proving the first assertion. To this end let us fix $\upsilon>0$
and write
\begin{align*}
& \mathbb{P}\left( \left\vert \sqrt{\frac{k}{n}}\int_{h}^{Z_{n-k:n}}
\mathbf{B}_{n}\left( z\right) \frac{dH\left( z\right) }{\overline{H}
^{2}\left( z\right) }\right\vert >\upsilon\right) \\
& \leq\mathbb{P}\left( \left\vert \frac{Z_{n-k:n}}{h}-1\right\vert
>\upsilon\right) +\mathbb{P}\left( \left\vert \sqrt{\frac{k}{n}}\int
_{h}^{\left( 1+\upsilon\right) h}\mathbf{B}_{n}\left( z\right)
\frac{dH\left( z\right) }{\overline{H}^{2}\left( z\right) }\right\vert
>\upsilon\right) .
\end{align*}
It is clear that the first term of the previous expression tends to zero as
$n\rightarrow\infty.$ Then we have to show that the second one goes to zero as
well.\ Indeed, observe that
\[
\mathbf{E}\left\vert \sqrt{\frac{k}{n}}\int_{h}^{\left( 1+\upsilon\right)
h}\mathbf{B}_{n}\left( z\right) \frac{dH\left( z\right) }{\overline{H}
^{2}\left( z\right) }\right\vert \leq\sqrt{\frac{k}{n}}\int_{h}^{\left(
1+\upsilon\right) h}\mathbf{E}\left\vert \mathbf{B}_{n}\left( z\right)
\right\vert \frac{dH\left( z\right) }{\overline{H}^{2}\left( z\right) }.
\]
Since $\mathbf{E}\left\vert \mathbf{B}_{n}\left( z\right) \right\vert
\leq\sqrt{\overline{H}^{1}\left( z\right) },$ then the right-hand side of
the latter expression is less than or equal to
\[
\sqrt{\frac{k}{n}}\int_{h}^{\left( 1+\upsilon\right) h}\sqrt{\overline
{H}^{1}\left( z\right) }\frac{dH\left( z\right) }{\overline{H}^{2}\left(
z\right) }\leq\sqrt{\frac{k}{n}}\sqrt{\overline{H}^{1}\left( h\right)
}\left[ \frac{1}{\overline{H}\left( \left( 1+\upsilon\right) h\right)
}-\frac{1}{\overline{H}\left( h\right) }\right] ,
\]
which may be rewritten as
\[
\sqrt{\frac{\overline{H}^{1}\left( h\right) }{\overline{H}\left( h\right)
}}\left[ \frac{\overline{H}\left( h\right) }{\overline{H}\left( \left(
1+\upsilon\right) h\right) }-1\right] .
\]
Since $\overline{H}^{1}\left( h\right) \sim p\overline{H}\left( h\right) $
and $\overline{H}\in\mathcal{RV}_{\left( -1/\gamma\right) },$ then the
previous quantity tends to
\[
p^{1/2}\left( \left( 1+\upsilon\right) ^{1/\gamma}-1\right) \text{ as
}n\rightarrow\infty.
\]
Since $\upsilon$ is arbitrary, it may be chosen small enough so that the
latter quantity tends to zero. By similar arguments we also show assertion
$\left( ii\right) ;$ therefore we omit the details.
\end{proof}
\begin{lemma}
\label{Lem4}Under the assumptions of Lemma \ref{Lem1} we have, for $z\geq1$
\[
\begin{tabular}
[c]{l}
$\left( i\right) \text{ }\sqrt{\dfrac{k}{n}}
{\displaystyle\int_{0}^{h}}
\dfrac{\mathbf{B}_{n}\left( z\right) }{\overline{H}^{2}\left( z\right)
}dH\left( z\right) =\sqrt{\dfrac{n}{k}}\mathbb{B}_{n}\left( \dfrac{k}
{n}\right) +o_{p}\left( 1\right) ,\medskip$\\
$\left( ii\right) \text{ }\sqrt{\dfrac{k}{n}}
{\displaystyle\int_{0}^{h}}
\dfrac{\mathbf{B}_{n}^{\ast}\left( z\right) }{\overline{H}^{2}\left(
z\right) }dH^{1}\left( z\right) =p\sqrt{\dfrac{n}{k}}\mathbb{B}_{n}^{\ast
}\left( \dfrac{k}{n}\right) +o_{p}\left( 1\right) ,\medskip$\\
$\left( iii\right) \sqrt{\dfrac{n}{k}}\mathbf{B}_{n}\left( Z_{n-k:n}
\right) =\sqrt{\dfrac{n}{k}}\mathbb{B}_{n}\left( \dfrac{k}{n}\right)
+o_{p}\left( 1\right) .$
\end{tabular}
\]
\end{lemma}
\begin{proof}
We only show assertion $\left( i\right) ,$ since $\left( ii\right) $ and
$\left( iii\right) $ follow by similar arguments.\ Observe that
\[
\int_{0}^{h}\frac{dH\left( z\right) }{\overline{H}^{2}\left( z\right)
}=\frac{1}{\overline{H}\left( h\right) }-1,
\]
and $\overline{H}\left( h\right) =k/n,$ then
\[
\sqrt{\frac{n}{k}}\mathbf{B}_{n}\left( h\right) =\sqrt{\frac{k}{n}}\int
_{0}^{h}\frac{\mathbf{B}_{n}\left( h\right) dH\left( z\right) }
{\overline{H}^{2}\left( z\right) }+\sqrt{\frac{k}{n}}.
\]
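The identity $\int_{0}^{h}dH\left( z\right) /\overline{H}^{2}\left( z\right) =1/\overline{H}\left( h\right) -1$ used above holds for any absolutely continuous cdf; a quick numerical check with the standard exponential cdf $H(z)=1-e^{-z}$ (an illustrative choice, not the distribution of the paper):

```python
import math

def lhs_integral(hval, n=200000):
    # trapezoid for int_0^h H'(z)/Hbar(z)^2 dz with H(z) = 1 - e^{-z},
    # where the integrand simplifies to e^{-z}/e^{-2z} = e^{z}
    f = math.exp
    step = hval/n
    s = 0.5*(f(0.0) + f(hval)) + sum(f(i*step) for i in range(1, n))
    return s*step

hval = 1.5                               # illustrative threshold
numeric = lhs_integral(hval)
closed = 1.0/math.exp(-hval) - 1.0       # 1/Hbar(h) - 1 = e^{h} - 1
```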
Let us write
\[
\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{\mathbf{B}_{n}\left( z\right)
}{\overline{H}^{2}\left( z\right) }dH\left( z\right) -\sqrt{\frac{n}{k}}
\mathbf{B}_{n}\left( h\right) =\sqrt{\frac{k}{n}}\int_{0}^{h}
\frac{\mathbf{B}_{n}\left( z\right) -\mathbf{B}_{n}\left( h\right)
}{\overline{H}^{2}\left( z\right) }dH\left( z\right) +\sqrt{\frac{k}{n}}.
\]
We have
\[
\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{\mathbf{B}_{n}\left( z\right)
-\mathbf{B}_{n}\left( h\right) }{\overline{H}^{2}\left( z\right)
}dH\left( z\right) =\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{B_{n}\left(
\theta-\overline{H}^{1}\left( h\right) \right) -B_{n}\left( \theta
-\overline{H}^{1}\left( z\right) \right) }{\overline{H}^{2}\left(
z\right) }dH\left( z\right) .
\]
It is clear that
\begin{align}
& \sqrt{\frac{k}{n}}\int_{0}^{h}\frac{\mathbf{B}_{n}\left( z\right)
-\mathbf{B}_{n}\left( h\right) }{\overline{H}^{2}\left( z\right)
}dH\left( z\right) \label{diff}\\
& =\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{W_{n}\left( \theta-\overline{H}
^{1}\left( h\right) \right) -W_{n}\left( \theta-\overline{H}^{1}\left(
z\right) \right) }{\overline{H}^{2}\left( z\right) }dH\left( z\right)
\nonumber\\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -W_{n}\left( 1\right)
\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{\overline{H}^{1}\left( z\right)
-\overline{H}^{1}\left( h\right) }{\overline{H}^{2}\left( z\right)
}dH\left( z\right) ,\nonumber
\end{align}
where $\left\{ W_{n}\left( t\right) ,0\leq t\leq1\right\} $ is the
sequence of Wiener processes defined in $\left( \ref{W}\right) .$ Next we
show that both terms of the last expression tend to zero in probability.
Indeed, it is easy to verify
\begin{align*}
& \mathbf{E}\left\vert \sqrt{\frac{k}{n}}\int_{0}^{h}\frac{W_{n}\left(
\theta-\overline{H}^{1}\left( h\right) \right) -W_{n}\left( \theta
-\overline{H}^{1}\left( z\right) \right) }{\overline{H}^{2}\left(
z\right) }dH\left( z\right) \right\vert \\
& \leq\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{\sqrt{\overline{H}^{1}\left(
z\right) -\overline{H}^{1}\left( h\right) }}{\overline{H}^{2}\left(
z\right) }dH\left( z\right) .
\end{align*}
It is clear that
\[
\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{\sqrt{\overline{H}^{1}\left( z\right)
-\overline{H}^{1}\left( h\right) }}{\overline{H}^{2}\left( z\right)
}dH\left( z\right) =\sqrt{\overline{H}\left( h\right) }
{\displaystyle\int_{0}^{h}}
\frac{\sqrt{\overline{H}^{1}\left( z\right) -\overline{H}^{1}\left(
h\right) }}{\overline{H}^{2}\left( z\right) }dH\left( z\right) .
\]
By elementary calculations using L'H\^{o}pital's rule, we infer that the
latter quantity tends to zero as $n\rightarrow\infty.$ Likewise, we also show
that
\[
\sqrt{\frac{k}{n}}\int_{0}^{h}\frac{\overline{H}^{1}\left( z\right)
-\overline{H}^{1}\left( h\right) }{\overline{H}^{2}\left( z\right)
}dH\left( z\right) \rightarrow0,\text{ as }n\rightarrow\infty,
\]
which implies that the right-hand side of equation $\left( \ref{diff}\right) $
goes to zero in probability. It remains to check that
\[
\sqrt{\frac{n}{k}}\mathbf{B}_{n}\left( h\right) =\sqrt{\frac{n}{k}}
\mathbb{B}_{n}\left( \frac{k}{n}\right) +o_{p}\left( 1\right) .
\]
Recalling
\[
\mathbf{B}_{n}\left( h\right) =B_{n}\left( \theta\right) -B_{n}\left(
\theta-\overline{H}^{1}\left( h\right) \right) \text{ and }\mathbb{B}
_{n}\left( \frac{k}{n}\right) =B_{n}\left( \theta\right) -B_{n}\left(
\theta-p\frac{k}{n}\right) ,
\]
we write
\[
\sqrt{\frac{n}{k}}\left( \mathbf{B}_{n}\left( h\right) -\mathbb{B}
_{n}\left( \frac{k}{n}\right) \right) =\sqrt{\frac{n}{k}}\left(
B_{n}\left( \theta-pk/n\right) -B_{n}\left( \theta-\overline{H}^{1}\left(
h\right) \right) \right) .
\]
Then, we have to show that this latter quantity tends to zero in probability. By
writing $B_{n}$ in terms of $W_{n}$ as above, it is easy to verify that
\begin{align*}
& \sqrt{\frac{n}{k}}\mathbf{E}\left\vert B_{n}\left( \theta-pk/n\right)
-B_{n}\left( \theta-\overline{H}^{1}\left( h\right) \right) \right\vert \\
& \leq\sqrt{\frac{n}{k}}\sqrt{\overline{H}^{1}\left( h\right) -pk/n}
+\sqrt{\frac{n}{k}}\left\vert \overline{H}^{1}\left( h\right)
-pk/n\right\vert \\
& =\sqrt{\frac{\overline{H}^{1}\left( h\right) }{k/n}-p}+\sqrt{\frac{k}{n
}\left\vert \frac{\overline{H}^{1}\left( h\right) }{k/n}-p\right\vert ,
\end{align*}
which converges to zero as $n\rightarrow\infty,$ since $\overline{H}
^{1}\left( h\right) \approx pk/n.$ This achieves the proof.
\end{proof}
\newcommand{\col}[2]{\left(\begin{array}{c} #1 \\ #2 \end{array}\right)}
\newcommand{\fl}[1]{ \lfloor #1 \rfloor }
\newcommand{\ket}[1]{\vert #1 \rangle}
\renewcommand{\mod}[1]{\ (\mathrm{mod}\ #1)}
\newcommand{\new}[1]{{\em #1}}
\newcommand{\bs}[1]{\ensuremath{\boldsymbol{#1}}}
\def\mathcal{M}{\mathcal{M}}
\def\mathop{\mathrm Im }{\mathop{\mathrm Im }}
\def\mathop{\mathrm Re }{\mathop{\mathrm Re }}
\def\frac{1}{2}{\frac{1}{2}}
\def\mathbb{Z}{\mathbb{Z}}
\def\mathbb{F}{\mathbb{F}}
\def\mathbb{Q}{\mathbb{Q}}
\def\mathbb{C}{\mathbb{C}}
\def\mathcal{H}{\mathcal{H}}
\def\mathcal{K}{\mathcal{K}}
\def\mathfrak{g}{\mathfrak{g}}
\def\mathcal{L}{\mathcal{L}}
\def\mathbb{R}{\mathbb{R}}
\def\mathcal{N}{\mathcal{N}}
\def\mathbb{P}{\mathbb{P}}
\def\tilde{C}{\tilde{C}}
\def\tilde{\alpha}{\tilde{\alpha}}
\defH-V{H-V}
\deft{t}
\def{\mathrm{Tr}}{{\mathrm{Tr}}}
\def{\mathrm{tr}}{{\mathrm{tr}}}
\def\mathrm{ord}{\mathrm{ord}}
\def\mathcal{O}{\mathcal{O}}
\defG{G}
\def\phi_0{\phi_0}
\def\phi_1{\phi_1}
\def\nu{\nu}
\def\psi_2{\psi_2}
\def\phi_2{\phi_2}
\def\psi_1{\psi_1}
\def\psi_3{\psi_3}
\def\xi{\xi}
\def\phit{\phi_0}
\def\phif{\phi_1}
\def\rho{\rho}
\def\tilde{\F}{\tilde{\mathbb{F}}}
\def{\mathfrak{e}}{{\mathfrak{e}}}
\def{\mathfrak{so}}{{\mathfrak{so}}}
\def{\mathfrak{su}}{{\mathfrak{su}}}
\def{\mathfrak{sp}}{{\mathfrak{sp}}}
\def{\mathfrak{f}}{{\mathfrak{f}}}
\def{\mathfrak{g}}{{\mathfrak{g}}}
\newcommand{\grid}{
\thicklines
\multiput(-200,-200)(0,50){9}{\line(1, 0){400}}
\multiput(-200,-200)(50,0){9}{\line(0, 1){400}}
\thinlines
\multiput(-200,-200)(0,10){41}{\line(1, 0){400}}
\multiput(-200,-200)(10,0){41}{\line(0, 1){400}}
\put(0,0){\circle{5}}
}
\defh^{1, 1}{h^{1, 1}}
\defh^{2, 1}{h^{2, 1}}
\newcommand{\eq}[1]{(\ref{#1})}
\newcommand{\wati}[1]{\footnote{\textbf{WT:\ #1}}}
\newcommand{\gm}[1]{\footnote{\textbf{GM:\ #1}}}
\title{6D F-theory models
and elliptically fibered Calabi-Yau threefolds over semi-toric base surfaces}
\author{
Gabriella Martini and Washington Taylor\\
Center for Theoretical Physics\\
Department of Physics\\
Massachusetts Institute of Technology\\
77 Massachusetts Avenue\\
Cambridge, MA 02139, USA\\[0.7cm]
{\tt gmartini} {\rm at} {\tt mit.edu},
{\tt wati} {\rm at} {\tt mit.edu}
}
\preprint{MIT-CTP-4448}
\abstract{We carry out a systematic study of a class of 6D F-theory
models and associated Calabi-Yau threefolds that are constructed
using base surfaces with a generalization of toric structure. In
particular, we determine all smooth surfaces with a structure
invariant under a single $\mathbb{C}^*$ action (sometimes called
``T-varieties'' in the mathematical literature) that can act as
bases for an elliptic fibration with section of a Calabi-Yau
threefold. We identify 162,404 distinct bases, which include as a
subset the previously studied set of strictly toric bases.
Calabi-Yau threefolds constructed in this fashion include examples
with previously unknown Hodge numbers. There are also bases over
which the generic elliptic fibration has a Mordell-Weil group of
sections with nonzero rank, corresponding to non-Higgsable $U(1)$
factors in the 6D supergravity model; this type of structure does
not arise for generic elliptic fibrations in the purely toric
context.}
\begin{document}
\maketitle
\flushbottom
\section{Introduction}
Since the early days of string theory, much effort has been devoted to
understanding the geometry of string compactifications. Calabi-Yau
threefolds are one of the most central and best-studied classes of
compactification geometries. These manifolds can be used to
compactify ten-dimensional superstring theories to give
four-dimensional supersymmetric theories of gravity and gauge fields
\cite{GSW, Polchinski}. Calabi-Yau
threefolds
that admit an elliptic fibration with section can also be used to compactify F-theory to give
six-dimensional supergravity theories \cite{Vafa-F-theory,
Morrison-Vafa}.
While mathematicians and physicists have used many methods to
construct and study Calabi-Yau threefolds (see \cite{Davies} for a
recent review), one of the main approaches that has been fruitful for
systematically classifying large numbers of Calabi-Yau geometries is
the mathematical framework of \emph{toric geometry} \cite{Fulton}.
The power of toric geometry is that many geometric features of a space
are captured in a simple combinatorial framework that lends itself to
straightforward calculations for many quantities of interest. An
approach was developed by Batyrev \cite{Batyrev} for describing
Calabi-Yau manifolds as hypersurfaces in toric varieties in terms of
the combinatorics of reflexive polytopes. Kreuzer and Skarke have
identified some 473.8 million four-dimensional reflexive polytopes
that can be used to construct Calabi-Yau threefolds in this way
\cite{Kreuzer-Skarke}.
In this paper, following \cite{clusters}, we study a class of
spaces that is more general than the set of toric varieties, but
retains some of the combinatorial simplicity of toric geometry. This
allows us to construct a large class of elliptically fibered
Calabi-Yau threefolds that need not have any realization in a strictly
toric language. In particular, we focus on complex surfaces that
admit at least one $\mathbb{C}^*$ action but not necessarily the action of a
product $\mathbb{C}^*\times \mathbb{C}^*$ needed for a full toric structure. Complex
algebraic varieties of dimension $n$ that admit an effective action of
$(\mathbb{C}^*)^k$ are known in the mathematical literature as
``T-varieties''. In this paper we simply use the term ``$\mathbb{C}^*$-surface''
to denote a surface with a $\mathbb{C}^*$ action, in part to avoid confusion
with the plethora of ``T-'' objects already filling the string theory
literature (``T-duality'', ``T-folds'', ``T-branes'', etc.). A review
of some of the mathematical results and literature on more general
T-varieties can be found in \cite{T-varieties}.
The primary focus of this paper is the systematic classification and
study of all smooth $\mathbb{C}^*$-surfaces that can act as bases for an
elliptic fibration with section where the total space is a Calabi-Yau
threefold. We use these surfaces to compactify F-theory to six
dimensions. The close correspondence between the geometry of the
F-theory compactification surface and the physics of the associated 6D
supergravity theory provides a powerful tool for understanding both
geometry and physics. A general characterization of smooth base
surfaces\footnote{Throughout this paper we use the term ``base
surface'' as shorthand for ``surface that can act as the base of an
elliptically-fibered Calabi-Yau threefold with section''.} (not
necessarily toric or $\mathbb{C}^*$-) was given in \cite{clusters}. The basic
idea is that any base can be classified by the intersection structure
of effective irreducible divisors of self-intersection $-2$ or below.
The intersection structure of the base directly corresponds to the
generic nonabelian gauge group in the ``maximally Higgsed'' 6D
supergravity theory from an F-theory construction. There are strong
constraints on the intersection structures that can arise; these
constraints made possible an explicit enumeration of all toric base
surfaces in \cite{toric}, and the geometry of the associated
Calabi-Yau threefolds was explored in \cite{WT-Hodge}. In this paper
we construct and enumerate the more general class of $\mathbb{C}^*$-bases and
explore the corresponding Calabi-Yau geometries. In particular, we
identify some (apparently) new Calabi-Yau threefolds including some
threefolds with novel properties.
In Section \ref{sec:bases} we describe the class of $\mathbb{C}^*$-base
surfaces. We summarize the results of the complete enumeration of
these bases in Section \ref{sec:enumeration}. The corresponding
Calabi-Yau threefolds, including some models with interesting new
features, are described in Section \ref{sec:features}. Section
\ref{sec:conclusions} contains concluding remarks.
Appendices~\ref{sec:rules}--\ref{sec:appendix-abelian} contain tables
of useful data.
\section{$\mathbb{C}^*$-surfaces and F-theory vacua}
\label{sec:bases}
We begin by briefly summarizing the results of \cite{clusters,
toric}. The methods used here are closely related to those
developed in these papers.
\subsection{6D F-theory models}
A 6D F-theory model is defined by a Calabi-Yau threefold that is an
elliptic fibration (with section) over a complex base surface $B$.
The structure of the resulting 6D supergravity theory is determined by
the geometry of $B$ and the elliptic fibration \cite{Morrison-Vafa}.
In particular, the number of tensor multiplets $T$ in the 6D theory is
related to the topology of the base $B$ through $T = h^{1, 1} (B) -1$.
The elliptic fibration can be described by a Weierstrass
model over $B$
\begin{equation}
y^2 = x^3 + fx + g \,,
\label{eq:Weierstrass}
\end{equation}
where $f, g$ are sections of the line bundles ${\cal O} (-4K),
{\cal O}(-6K),$ with $K$ the
canonical class of $B$. Codimension one vanishing loci of $f,
g$, and the discriminant locus $\Delta = 4f^3+27g^2$, where the
elliptic fibration becomes singular, give rise to vector multiplets
for a nonabelian gauge group in the 6D theory. Codimension two
vanishing loci give matter in the 6D theory; the matter lives in a
representation of the nonabelian gauge group that is determined by the
geometry.
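As a concrete illustration of the discriminant condition, the fibre over a point is singular exactly where $\Delta=4f^{3}+27g^{2}$ vanishes. The minimal sketch below uses arbitrary constant sample coefficients rather than actual sections over a base, purely to show the criterion at work.

```python
def discriminant(f, g):
    # Delta = 4 f^3 + 27 g^2 for the Weierstrass form y^2 = x^3 + f x + g
    return 4*f**3 + 27*g**2

# y^2 = x^3 - 3x + 2 = (x - 1)^2 (x + 2): cubic with a double root,
# hence a nodal (singular) fibre
singular = discriminant(-3, 2)

# y^2 = x^3 - x: three distinct roots, hence a smooth elliptic fibre
smooth = discriminant(-1, 0)
```

In an actual F-theory model, $f$ and $g$ would instead be sections of $\mathcal{O}(-4K)$ and $\mathcal{O}(-6K)$, and $\Delta$ a section of $\mathcal{O}(-12K)$, so the vanishing locus is a divisor in $B$ rather than a point condition.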
For more detailed background regarding the relation of 6D physics to F-theory geometry,
see the reviews \cite{Morrison-TASI, Denef-F-theory, WT-TASI}; a
recent systematic analysis of these 6D models
from the M-theory point of view appears in
\cite{b-Grimm}.
\subsection{General classification of 6D F-theory
base surfaces}
A general approach to systematically classifying base surfaces $B$ for
6D F-theory compactifications was developed in \cite{clusters}.
This approach is based on identifying irreducible components in the
structure of effective divisors on $B$, composed of
intersecting combinations of curves of negative self-intersection over
which the generic elliptic fibration is singular.
Each such irreducible component corresponds to a ``non-Higgsable cluster'' of
gauge algebra summands and (in some cases) charged matter that appears
in the 6D supergravity model arising from an F-theory compactification
on a generic elliptic fibration over $B$; the term ``non-Higgsable
cluster'' (which we sometimes abbreviate as ``cluster'') refers to the
fact that for any such configuration there are no matter fields that
can lead to a Higgsing of the corresponding nonabelian gauge group.
For example, a single irreducible effective curve in $B$ with
self-intersection $-4$ corresponds to an ${\mathfrak{so}}(8)$ term in the gauge
algebra with no charged matter. The simplest example of this occurs
for F-theory on the Hirzebruch surface $\mathbb{F}_4$
\cite{Morrison-Vafa}. There are strong geometric
constraints (discussed further below) on the configurations of curves
that can live in a base supporting an elliptic fibration. The set of
possible non-Higgsable clusters
is thus rather
small. A complete list is given in Table~\ref{t:clusters}, and
depicted in Figure~\ref{f:clusters}. These
clusters are described by configurations of curves having
self-intersection $-2$ or less, with one curve having
self-intersection $-3$ or below.
General configurations of $-2$
curves with no curves of self-intersection $-3$ or below
are also possible. These clusters carry no gauge group, and generally
represent limiting points of bases without these $-2$ curves; for example,
the Hirzebruch surface $\mathbb{F}_2$ contains a single isolated
$-2$ curve, and is a
limit of the surface $\mathbb{F}_0$.
\begin{table}
\begin{center}
\begin{tabular}{| c |
c |c |
}
\hline
Cluster & gauge algebra & $H_{\rm charged}$
\\
\hline
(-12) &${\mathfrak{e}}_8$ & 0 \\
(-8) &${\mathfrak{e}}_7$& 0 \\
(-7) &${\mathfrak{e}}_7$& 28 \\
(-6) &${\mathfrak{e}}_6$& 0 \\
(-5) &${\mathfrak{f}}_4$& 0 \\
(-4) &${\mathfrak{so}}(8) $& 0 \\
(-3, -2, -2) & ${\mathfrak{g}}_2 \oplus {\mathfrak{su}}(2)$& 8\\
(-3, -2) & ${\mathfrak{g}}_2 \oplus {\mathfrak{su}}(2)$ &8 \\
(-3)& ${\mathfrak{su}}(3)$ & 0 \\
(-2, -3, -2) &${\mathfrak{su}}(2) \oplus {\mathfrak{so}}(7) \oplus
{\mathfrak{su}}(2)$&16 \\
(-2, -2, \ldots, -2) & no gauge group & 0 \\
\hline
\end{tabular}
\end{center}
\caption[x]{\footnotesize Allowed ``non-Higgsable clusters'' of
irreducible effective divisors with self-intersection $-2$ or below,
and corresponding contributions to the gauge algebra and matter
content of the 6D theory associated with F-theory compactifications
on a generic elliptic fibration (with section) over a base
containing each cluster.}
\label{t:clusters}
\end{table}
\begin{figure}
\begin{center}
\begin{picture}(200,130)(- 93,- 55)
\thicklines
\put(-175, 25){\line(1,0){50}}
\put(-150, 47){\makebox(0,0){\small $-m \in$}}
\put(-150, 32){\makebox(0,0){\small $\{-3, -4, \ldots, -8, -12\}$}}
\put(-150,-33){\makebox(0,0){\small ${\mathfrak{su}}(3), {\mathfrak{so}}(8), {\mathfrak{f}}_4$}}
\put(-150,-47){\makebox(0,0){\small ${\mathfrak{e}}_6, {\mathfrak{e}}_7, {\mathfrak{e}}_8$}}
\put(-70,55){\line(1,-1){40}}
\put(-30,35){\line(-1,-1){40}}
\put(-50,45){\makebox(0,0){-3}}
\put(-50,5){\makebox(0,0){-2}}
\put(-50,-40){\makebox(0,0){\small ${\mathfrak{g}}_2 \oplus {\mathfrak{su}}(2)$}}
\put(30,70){\line(1,-1){40}}
\put(30,20){\line(1,-1){40}}
\put(70,45){\line(-1,-1){40}}
\put(45,65){\makebox(0,0){-3}}
\put(44,31){\makebox(0,0){-2}}
\put(60, 0){\makebox(0,0){-2}}
\put(50,-40){\makebox(0,0){\small ${\mathfrak{g}}_2 \oplus {\mathfrak{su}}(2)$}}
\put(130,70){\line(1,-1){40}}
\put(130,20){\line(1,-1){40}}
\put(170,45){\line(-1,-1){40}}
\put(145,65){\makebox(0,0){-2}}
\put(144,31){\makebox(0,0){-3}}
\put(160, 0){\makebox(0,0){-2}}
\put(150,-40){\makebox(0,0){\small ${\mathfrak{su}}(2) \oplus {\mathfrak{so}}(7) \oplus {\mathfrak{su}}(2)$}}
\end{picture}
\end{center}
\caption[x]{\footnotesize Clusters of intersecting
curves that must carry a nonabelian gauge group factor. The
gauge algebra for each cluster is noted in the figure; the gauge
algebras and numbers of charged matter hypermultiplets are listed in
Table~\ref{t:clusters}.}
\label{f:clusters}
\end{figure}
The restriction on the types of allowed clusters comes from the
constraint that $f, g$ cannot vanish to degrees $4, 6$ respectively on any curve or
at any point in $B$. If the degree of vanishing is too high on a
curve, there is no way to construct a Calabi-Yau from the elliptic
fibration. If the vanishing is too high at a point, the point must be
blown up to form another base $B'$ that supports a Calabi-Yau. The
vanishing degrees of $f, g$ on a curve can be determined from the
\emph{Zariski decomposition} of $-nK$: using the fact that any effective divisor
$A$ that has a negative intersection with an irreducible effective curve $D$
of negative self-intersection must contain $D$ as a component ($A
\cdot D < 0, D \cdot D < 0 \Rightarrow A = D + C$ with $C$ effective),
the degree of vanishing of $A = -4K, -6K, -12K$ on any irreducible
effective curve or combination of curves can be computed; the degree
of vanishing at a point where two curves intersect is simply the sum
of the degrees of vanishing on the curves. The details of this
computation for general combinations of intersecting divisors are
worked out in \cite{clusters}.
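The Zariski-type computation just described amounts to a simple monotone iteration: raise the vanishing degree $\sigma_i$ on any curve against which the residual divisor $-nK - \sum_j \sigma_j C_j$ still has negative intersection, and repeat until no such curve remains. The following Python sketch is a hypothetical helper (not code from \cite{clusters}); it assumes all curves are smooth rational, so that by adjunction $-nK \cdot C_i = n(2 + C_i \cdot C_i)$.

```python
def vanishing_degrees(self_int, pairs, n):
    """Minimal degrees of vanishing of a section of O(-nK) on each curve
    of a cluster of intersecting rational curves (illustrative sketch of
    the Zariski-type argument).

    self_int -- self-intersections C_i . C_i
    pairs    -- index pairs (i, j) with C_i . C_j = 1
    n        -- 4, 6, or 12, for f, g, and Delta respectively
    """
    k = len(self_int)
    # intersection matrix M[i][j] = C_i . C_j
    M = [[0] * k for _ in range(k)]
    for i in range(k):
        M[i][i] = self_int[i]
    for i, j in pairs:
        M[i][j] = M[j][i] = 1
    # adjunction on a smooth rational curve: -nK . C_i = n (2 + C_i . C_i)
    nKC = [n * (2 + s) for s in self_int]
    sigma = [0] * k
    while True:
        # residual[i] = (-nK - sum_j sigma_j C_j) . C_i
        residual = [nKC[i] - sum(M[i][j] * sigma[j] for j in range(k))
                    for i in range(k)]
        bad = [i for i in range(k) if residual[i] < 0]
        if not bad:
            return sigma
        for i in bad:
            sigma[i] += 1
```

For example, a single $-4$ curve forces $(f, g)$ to vanish to degrees $(2, 3)$, reproducing the ${\mathfrak{so}}(8)$ entry of Table~\ref{t:clusters}.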
All effective irreducible curves of self-intersection $-2$ or below in
the base appear in the clusters described above. Furthermore, again
because of restrictions on the degrees of $f, g$ on curves and intersection
points, only certain combinations of the allowed
clusters can be connected by $-1$ curves in $B$. For example,
if a $-1$ curve intersects
a $-12$
curve, then the $-1$ curve cannot intersect any other curve contained
in a non-Higgsable
cluster except a $-2$ curve contained in a $(-2, -2, -3)$
cluster.
A complete table of the possible combinations
of clusters that can be connected by a $-1$ curve is given in
\cite{clusters}. The part of that table that is relevant for this
paper is reproduced in Appendix~\ref{sec:rules}.
For any base surface $B$, the structure of non-Higgsable clusters
determines the minimal nonabelian gauge group and matter content of
the 6D supergravity theory corresponding to a generic elliptic
fibration with section over $B$. For each distinct base $B$ there can
be a wide range of models with different nonabelian gauge groups,
which can be realized by tuning the Weierstrass model to increase the
degrees of $f, g$ over various divisors. For example, for the
simplest base surface $B = \mathbb{P}^2$, there are thousands of branches of
the theory with different nonabelian gauge group and matter content,
some of which are explored in \cite{0, Braun-0}. Each of these
branches corresponds to a different Calabi-Yau threefold after the
singularities associated with the nonabelian gauge group factors are
resolved. But for each of these models, by maximally Higgsing matter
fields in the supergravity theory the gauge group can be completely
broken and the geometry becomes that of a generic elliptic fibration
over $\mathbb{P}^2$. Focusing on the base surface $B$ dramatically simplifies
the problem of classifying F-theory compactifications and elliptically
fibered Calabi-Yau threefolds, by removing the additional complexity
associated with the details of specific Weierstrass models and
associated fibrations.
The mathematics of minimal surface theory \cite{bhpv, Reid-chapters}
gives a simple picture of how the set of allowed base surfaces $B$ are
connected, unifying the space of 6D supergravity theories that arise
from F-theory. All smooth bases\footnote{aside from the Enriques
surface, which gives rise to a simple 6D model with no gauge group or
matter and is connected in a more complicated way to the branches
associated with other bases.} $B$ for 6D F-theory models can be
constructed by blowing up a finite set of points on one of the minimal
bases $\mathbb{F}_m\ (0 \leq m \leq 12, m \neq 1)$ or $\mathbb{P}^2$ \cite{Grassi}.
The number of distinct topological types for $B$ is finite
\cite{Gross, KMT-II}. In principle, all smooth bases $B$ can be
systematically constructed by successively blowing up points on the
minimal bases allowing only the clusters of irreducible divisors from
Table~\ref{t:clusters}. In \cite{toric}, the complete set of toric
bases was constructed in this fashion, and in this paper we carry out
the analogous construction for $\mathbb{C}^*$-bases.
\subsection{Toric base surfaces}
\label{sec:toric}
\begin{figure}
\setlength{\unitlength}{.8pt}
\begin{center}
\begin{picture}(200,130)(-93, -25)
\put(-235, 60){\line(1,0){80}}
\put(-235, 0){\line(1,0){80}}
\put(-225, 70){\line( 0, -1){80}}
\put( -225,60){\makebox(0,0){\includegraphics[width=0.4cm]{circle.pdf}}}
\put(-165, 70){\line( 0, -1){80}}
\put(-195, 50){\makebox(0,0){$+2$}}
\put(-195, 72){\makebox(0,0){\small{$C_{1}(D_{0})$}}}
\put(-195, -11){\makebox(0,0){\small{$C_{3}(D_{\infty})$}}}
\put(-195, 10){\makebox(0,0){$-2$}}
\put(-217,30){\makebox(0,0){$0$}}
\put(-235,30){\makebox(0,0){\small{$C_{4}$}}}
\put(-176,30){\makebox(0,0){$0$}}
\put(-154,30){\makebox(0,0){\small $C_{2}$}}
\put(-135, 30){\vector( 1, 0){25}}
\put(-80, 60){\line(1,0){75}}
\put(-80, 0){\line(1,0){75}}
\put(-67, 68){\line( -1, -2){23}}
\put( -71,60){\makebox(0,0){\includegraphics[width=0.4cm]{circle.pdf}}}
\put(-67, -8){\line( -1, 2){23}}
\put(-15, 70){\line( 0, -1){80}}
\put(-40, 70){\makebox(0,0){$+1$}}
\put(-40, -10){\makebox(0,0){$-2$}}
\put(-8,30){\makebox(0,0){$0$}}
\put(-93,50){\makebox(0,0){$-1$}}
\put(-93,10){\makebox(0,0){$-1$}}
\put(7, 30){\vector( 1, 0){25}}
\put(70, 60){\line(1,0){66}}
\put(70, 0){\line(1,0){66}}
\put(81, 66){\line( -4, -5){22}}
\put(81, -6){\line( -4, 5){22}}
\put(122,0){\makebox(0,0){\includegraphics[width=0.4cm]{circle.pdf}}}
\put(62, 48){\line( 0, -1){36}}
\put(122, 70){\line( 0, -1){80}}
\put(92, 70){\makebox(0,0){$0$}}
\put(92, -10){\makebox(0,0){$-2$}}
\put(129,30){\makebox(0,0){$0$}}
\put(59,55){\makebox(0,0){$-1$}}
\put(59,4){\makebox(0,0){$-1$}}
\put(49,30){\makebox(0,0){$-2$}}
\put(138, 30){\vector( 1, 0){25}}
\put(200, 60){\line(1,0){62}}
\put(200, 0){\line(1,0){62}}
\put(211, 65){\line( -4, -5){22}}
\put(211, -5){\line( -4, 5){22}}
\put(192, 48){\line( 0, -1){36}}
\put(248, 65){\line( 1, -2){21}}
\put(248, -5){\line( 1, 2){21}}
\put(226, 72){\makebox(0,0){$D_{0}$}}
\put(226, 50){\makebox(0,0){$0$}}
\put(225, 10){\makebox(0,0){$-3$}}
\put(227, -10){\makebox(0,0){$D_{\infty}$}}
\put(189,55){\makebox(0,0){$-1$}}
\put(189,4){\makebox(0,0){$-1$}}
\put(179,30){\makebox(0,0){$-2$}}
\put(268,50){\makebox(0,0){$-1$}}
\put(268,10){\makebox(0,0){$-1$}}
\end{picture}
\end{center}
\caption[x]{\footnotesize Toric base surfaces for 6D F-theory models
produced by blowing up a sequence of points on $\mathbb{F}_2$.}
\label{f:loop}
\end{figure}
The set of toric base surfaces form a subset of the more general class
of $\mathbb{C}^*$-bases that we focus on in this paper.
The structure of toric bases is relatively simple and
provides a useful foundation for
our analysis of $\mathbb{C}^*$-bases.
A toric surface can be described alternatively in terms of the
\new{toric fan} or in terms of the set of toric effective divisors.
The fan for a compact toric surface is defined \cite{Fulton} by a sequence of
vectors $v_1, \ldots, v_k \in N =\mathbb{Z}^2$ defining 1D cones, or {\it
rays}, along with 2D cones spanned by vectors $v_i, v_{i +1}$
(including a 2D cone spanned by $v_k, v_1$; all related conditions
include this periodicity though we do not repeat this explicitly in
each case), and the 0D cone at the origin. The origin represents the
torus $(\mathbb{C}^*)^2$, while the 1D rays represent divisors and the 2D
cones represent points in the compact toric surface. The surface is
smooth if the rays $v_i, v_{i +1}$ defining each 2D cone span a unit
cell in the lattice. The irreducible effective toric divisors are a
set of curves\footnote{Note that the notation and ordering
used here for curves and associated rays in a toric base differs from
that used in \cite{toric}.} $C_i, i = 1, \ldots, k$, associated with the vectors
$v_i$. The divisors have self-intersection $C_i \cdot C_i = n_i$,
where $-n_i v_i = v_{i-1} + v_{i +1}$, and nonvanishing pairwise
intersections $C_i \cdot C_{i +1} = C_k \cdot C_1 = 1$.
These
divisors can be depicted graphically as a loop (See
Figure~\ref{f:loop}). The sets of consecutive divisors of
self-intersection $n_i \leq -2$ in the loop are constrained by the
cluster analysis of \cite{clusters} to contain only the sequences
(with either orientation) $(-3, -2), (-3, -2, -2), (-2, -3, -2)$,
$(-m)$, with $3 \leq m \leq 12$, and $(-2, \ldots, -2)$ with any
number of $-2$ curves.
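These relations allow the self-intersection loop to be read off directly from the rays of the fan. The short Python sketch below is a hypothetical helper; it assumes a smooth compact fan, so that $v_{i-1} + v_{i+1}$ is always an exact integer multiple of $v_i$.

```python
def self_intersections(rays):
    """Self-intersections n_i of the toric divisors C_i of a smooth
    compact toric surface, from the relation v_{i-1} + v_{i+1} = -n_i v_i.
    rays -- the lattice vectors v_i, in cyclic order."""
    k = len(rays)
    out = []
    for i in range(k):
        ax, ay = rays[(i - 1) % k]
        bx, by = rays[(i + 1) % k]
        vx, vy = rays[i]
        sx, sy = ax + bx, ay + by
        # (sx, sy) = -n_i (vx, vy); read n_i off a nonzero component
        n = -sx // vx if vx != 0 else -sy // vy
        out.append(n)
    return out
```

For the Hirzebruch surface $\mathbb{F}_2$, with rays $(0,1), (1,0), (0,-1), (-1,-2)$, this returns the loop $[2, 0, -2, 0]$, matching the sequence $[m, 0, -m, 0]$ quoted below.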
All smooth toric base surfaces (aside from $\mathbb{P}^2$) can be constructed
by starting with a Hirzebruch surface $\mathbb{F}_m$ that is associated with
the divisor self-intersection sequence $[n_1, n_2, n_3, n_4] = [m, 0,
-m, 0]$ ($v_1 = (0,1), v_2 = (1, 0), v_3 = (0, -1), v_4 = (-1, -m)$),
and blowing up a sequence of intersection points between adjacent
divisors. Blowing up the intersection point between divisors $C_i$
and $C_{i +1}$ gives a new toric base with a -1 curve inserted between
these divisors, and self-intersections of the previously intersecting
divisors each reduced by one (See Figure~\ref{f:loop}). Such blow-ups
are the only ones possible that maintain the toric structure by
preserving the action of $(\mathbb{C}^*)^2$ on the base. Each Hirzebruch
surface can be viewed as a $\mathbb{P}^1$ bundle over $\mathbb{P}^1$. The divisors of
self-intersection $\pm m$ can be viewed as sections $\Sigma_\pm$ of
this bundle, which in a local coordinate chart $(z, w) \in\mathbb{C} \times\mathbb{C}$
are at the points $(z,0), (z, \infty)$ in the fibers, while the
divisors of self-intersection $0$ can be viewed as fibers $(0, w)$ and
$(\infty, w)$. All points that can be blown up while maintaining the toric structure are located at the points invariant under the $(\mathbb{C}^*)^2$ action: $(0,
0), (0, \infty), (\infty, 0), (\infty, \infty)$ (or in exceptional
divisors produced when these points are blown up).
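The combinatorial effect of such a toric blow-up on the loop of self-intersection numbers can be stated in a few lines of code (a hypothetical helper mirroring the sequence in Figure~\ref{f:loop}): a $-1$ curve is inserted between the two divisors, and each of their self-intersections drops by one.

```python
def blow_up(loop, i):
    """Blow up the intersection point of consecutive toric divisors
    C_i and C_{i+1} in a cyclic loop of self-intersection numbers.
    Returns a new loop with a -1 curve inserted and both neighbors
    reduced by one (indices taken mod len(loop))."""
    k = len(loop)
    new = list(loop)
    new[i] -= 1
    new[(i + 1) % k] -= 1
    new.insert(i + 1, -1)
    return new
```

Starting from $\mathbb{F}_2 = [2, 0, -2, 0]$ and blowing up the intersection of the first two divisors gives $[1, -1, -1, -2, 0]$, matching the second panel of Figure~\ref{f:loop}.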
In particular, any smooth toric base surface that supports an
elliptically fibered Calabi-Yau threefold
has a description in terms of a
closed loop of divisors containing divisors $D_0, D_\infty$ associated
with the divisors
$C_1, C_3$ in the original Hirzebruch surface (these
are the sections $\Sigma_\pm$) and two linear chains of divisors connecting
$D_0$ to $D_\infty$. These chains are formed from the
divisors $C_2$ and $C_4$ in the original Hirzebruch
surface, along with all exceptional divisors from blowing up points on
these original curves. This is illustrated in Figure~\ref{f:loop}.
Note that some smooth toric base surfaces can be formed in different
ways from blow-ups of Hirzebruch surfaces, so that different divisors
$D_0, D_\infty$
play the role of the sections $\Sigma_\pm$. In enumerating all toric
base surfaces, such duplicates must be eliminated by considering
equivalences of the loops of self-intersection numbers up to rotation
and reflection.
\subsection{$\mathbb{C}^*$-base surfaces}
\label{sec:semi-toric}
A more general class of bases $B$ was described in \cite{toric}.
Generalizing the toric construction by allowing blow-ups at arbitrary
points $(z, 0), (z, \infty)$ can give rise to an arbitrary number of
fibers containing curves of negative self-intersection, while
maintaining the $\mathbb{C}^*$ action ($1 \times\mathbb{C}^*$) that leaves the
sections $\Sigma_\pm$ invariant. The resulting structure is closely
analogous to that of the toric bases described above, except that
there can be more than two chains of intersecting divisors connecting
$D_0, D_\infty$, associated with distinct blown-up fibers of the original
Hirzebruch surface. Graphically, the intersection structure of
effective irreducible divisors for such a base can be depicted as the
pair of horizontal divisors $D_0, D_\infty$ (associated with
$\Sigma_\pm$ in the original Hirzebruch surface), along with an arbitrary number $N$ of chains of divisors
$D_{i, j}, i = 1, \ldots, N$ connecting $D_0, D_\infty$. The $i$th
chain is realized by blowing up a fiber in $\mathbb{F}_m$, and contains
divisors $D_{i, j}, j = 1, \ldots, k_i$, with self-intersections and
adjacent intersections as described above ($D_{i, j} \cdot D_{i, j} =
n_{i, j} < 0$, $D_{i, j} \cdot D_{i, j + 1} = 1$) except that
$D_{i,1} \cdot D_0 = 1, D_{i, k_i} \cdot D_\infty = 1,$ and $D_{i,1}
\cdot D_{i, k_i} = 0$ unless $k_i = 2$. (See
Figure~\ref{f:semi-toric}.) We refer to bases of this form as
\new{$\mathbb{C}^*$-bases}. In this language, the toric bases correspond to
those $\mathbb{C}^*$-bases with $N = 2$ (or fewer) chains. The divisor
intersection structure of a $\mathbb{C}^*$-base is constrained by the set of
allowed clusters found in \cite{clusters} in a similar way to toric
bases, but with additional possibilities associated with the existence
of multiple connections to the divisors $D_0, D_\infty$ that provide
branchings in the intersection structure. For example, if $D_0$ has
self-intersection $D_0 \cdot D_0 = -3$, and is connected to chains
with self-intersections $(n_{1, 1}, n_{1, 2}, \ldots) = (-2, -1,
\ldots)$, $(n_{2, 1}, n_{2, 2}, \ldots) = (-2, -1, \ldots)$, then any
further chains $i > 2$ must satisfy $n_{i,1}= -1, n_{i, 2} \geq -4$.
The complete set of constraints on how divisors can be connected is described in
Appendix~\ref{sec:rules}, including some additional constraints
beyond those described in \cite{clusters}
related to branchings on curves of self-intersection $-n < -1$.
Note that the value of $T = h^{1, 1} (B) -1$ can be determined
directly from the intersection structure of a $\mathbb{C}^*$-base. Each
blow-up adds one to $T$. Each blow-up along a new fiber increases $N$
by one and creates a new $(-1, -1)$ chain with $k_N = 2$, while each
blow-up on an intersection in chain $i$ increases
$k_i$ by 1. Matching with the Hirzebruch surfaces, which have $N = 0,
T = 1$, we have
\begin{equation}
T = h^{1, 1} (B) -1 =(\sum_{i = 1}^{N} k_i) -N + 1 \,.
\label{eq:t-equation}
\end{equation}
\begin{figure}
\begin{center}
\begin{picture}(200,160)(- 100,- 80)
\put(-100, 65){\line(1, 0){198}}
\put(-100, -65){\line(1, 0){198}}
\put(60,75){\makebox(0,0){{\large$D_{0}$}}}
\put(60,-75){\makebox(0,0){{\large$D_{\infty}$}}}
\put(-10,75){\makebox(0,0){{\large-3}}}
\put(-10,-75){\makebox(0,0){{\large-4}}}
\put(-80, 70){\line( -2, -3){28}}
\put(-80, -5){\line( -2, 3){28}}
\put(-80, 5){\line( -2, -3){28}}
\put(-80, -70){\line(-2, 3){28}}
\put(-110,50){\makebox(0,0){{\large$D_{1,1}$}}}
\put(-110,15){\makebox(0,0){{\large$D_{1,2}$}}}
\put(-110,-15){\makebox(0,0){{\large$D_{1,3}$}}}
\put(-110,-50){\makebox(0,0){{\large$D_{1,4}$}}}
\put(-85,48){\makebox(0,0){-2}}
\put(-85,20){\makebox(0,0){-1}}
\put(-85,-22){\makebox(0,0){-3}}
\put(-85,-48){\makebox(0,0){-1}}
\put(5, 70){\line( -2, -3){28}}
\put(5, -5){\line( -2, 3){28}}
\put(5, 5){\line( -2, -3){28}}
\put(5, -70){\line(-2, 3){28}}
\put(-25,50){\makebox(0,0){{\large$D_{2,1}$}}}
\put(-25,16){\makebox(0,0){{\large$D_{2,2}$}}}
\put(-25,-16){\makebox(0,0){{\large$D_{2,3}$}}}
\put(-25,-50){\makebox(0,0){{\large$D_{2,4}$}}}
\put(0,48){\makebox(0,0){-1}}
\put(0,20){\makebox(0,0){-2}}
\put(0,-22){\makebox(0,0){-2}}
\put(-0,-48){\makebox(0,0){-1}}
\put(80, 70){\line( 2, -3){28}}
\put(80, -70){\line(2, 3){28}}
\put(103, 42){\line(0, -1){84}}
\put(80,50){\makebox(0,0){{\large$D_{3,1}$}}}
\put(80,0){\makebox(0,0){{\large$D_{3,2}$}}}
\put(80,-48){\makebox(0,0){{\large$D_{3,3}$}}}
\put(102,55){\makebox(0,0){-1}}
\put(110,0){\makebox(0,0){-2}}
\put(102,-55){\makebox(0,0){-1}}
\end{picture}
\end{center}
\caption[x]{\footnotesize A $\mathbb{C}^*$-base $B$ is characterized by
irreducible effective divisors $D_0, D_\infty$ connected by any
number of linear chains of divisors $D_{i, j}$ with intersections
obeying the cluster rules of \cite{clusters}. All such bases can be
realized as multiple blow-ups of a Hirzebruch surface $\mathbb{F}_m$ that
preserve the action of a single $\mathbb{C}^*$ on $B$.}
\label{f:semi-toric}
\end{figure}
\subsection{Curves of self-intersection -9, -10, and -11}
There is one further issue that arises in systematically classifying
toric and/or $\mathbb{C}^*$-bases. As shown in \cite{clusters}, for
any base containing a curve $C$ of self-intersection $-9, -10,$ or
$-11$, there is a point on $C$ where the Weierstrass functions $f, g$
vanish to degrees $4, 6$, so that point in the base must be blown up
for the base to support an elliptically fibered Calabi-Yau threefold.
If the curve $C$ is a divisor
associated with a section, $D_0$ or $D_\infty$, then blowing up this
point simply adds another $(-1, -1)$ chain to the $\mathbb{C}^*$-surface
(Figure~\ref{f:11-12}).
If $C$ is an element of one of the chains, however, then
blowing up the point on $C$ generally
takes the base out of the class of
$\mathbb{C}^*$- or toric surfaces.
For strictly toric or $\mathbb{C}^*$-surfaces, therefore, we should not
include any bases containing curves of self-intersection -9, -10, or
-11. For several reasons, however, we find it of interest to include
bases with such curves even when the blow-up leaves the context of $\mathbb{C}^*$-surfaces. In particular, we are interested in exploring the widest
range of bases possible that can be systematically analyzed. Thus,
while the bases arising from blowing up -9, -10, or -11 curves on
connecting chains generally take us outside the $\mathbb{C}^*$ context, we
have carried out a complete enumeration of surfaces including these
additional types of curves (which we refer to as ``not-strictly
$\mathbb{C}^*$-bases,'' or ``NSC-bases'' for short), with the understanding that the blown-up
smooth surfaces that support elliptic Calabi-Yau fibrations are no
longer strictly $\mathbb{C}^*$-surfaces. One strong argument in favor of including
these bases in our analysis is that the base with the largest known
value of $h^{1, 1} (B) = 491$, corresponding to the 6D F-theory
compactification with the largest gauge group, is of this type. This
geometry was first identified in \cite{Candelas-pr,
Aspinwall-Morrison-instantons}, and was studied further in
\cite{toric, WT-Hodge}. This geometry arises from a toric
base that contains two $-11$ curves that cannot be associated with
$D_{0, \infty}$, so the blown-up smooth base surface is neither
strictly toric nor $\mathbb{C}^*$. We included toric bases with $-9, -10,
-11$ curves in \cite{toric}. In the next section we discuss how
these are systematically included in the enumeration of $\mathbb{C}^*$-bases in the analysis of this paper.
For (NSC) surfaces that contain -9, -10, or -11 curves, an additional
contribution to $T$ must be added for each blow-up needed to reach a
smooth surface with only allowed clusters ({\it i.e.,} with -12 curves
instead), so the equation for $T$ is modified from \eq{eq:t-equation}
to
\begin{equation}
T = h^{1, 1} (B) -1 =(\sum_{i = 1}^{N} k_i) -N +
c_{11} + 2c_{10} + 3c_{9} +
1 \,,
\label{eq:t-equation-general}
\end{equation}
where $c_k$ is the number of curves with self-intersection $-k$.
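Both counting formulas can be bundled into one short function (a hypothetical helper; with $c_9 = c_{10} = c_{11} = 0$ it reduces to \eq{eq:t-equation}):

```python
def tensor_multiplets(ks, c9=0, c10=0, c11=0):
    """Number of tensor multiplets T = h^{1,1}(B) - 1 from the chain
    data of a C*- or NSC-base (illustrative sketch of the counting
    formula).

    ks           -- chain lengths k_1, ..., k_N
    c9, c10, c11 -- numbers of -9, -10, -11 curves
    """
    N = len(ks)
    return sum(ks) - N + c11 + 2 * c10 + 3 * c9 + 1
```

For the two bases of Figure~\ref{f:11-12} this gives $T = 22 - 11 + 1 + 1 = 13$ and $T = 24 - 12 + 1 = 13$ respectively, consistent with the fact that both count the same smooth blown-up surface.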
\begin{figure}
\setlength{\unitlength}{.83pt}
\begin{center}
\begin{picture}(180,160)(- 100,- 80)
\put(-30, 0){\vector(1,0){30}}
\put(-265, 30){\line(1, 0){225}}
\put(-265, -30){\line(1, 0){225}}
\put(-155,40){\makebox(0,0){0}}
\put(-155,-40){\makebox(0,0){-11}}
\put(-50,-40){\makebox(0,0){{\large$D_{\infty}$}}}
\put(-50,40){\makebox(0,0){{\large$D_{0}$}}}
\put(-255, 32){\line( -1, -3){12}}
\put(-255, -32){\line(-1, 3){12}}
\put(-268,15){\makebox(0,0){-1}}
\put(-268,-15){\makebox(0,0){-1}}
\put(-235, 32){\line( -1, -3){12}}
\put(-235, -32){\line(-1, 3){12}}
\put(-248,15){\makebox(0,0){-1}}
\put(-248,-15){\makebox(0,0){-1}}
\put(-215, 32){\line( -1, -3){12}}
\put(-215, -32){\line(-1, 3){12}}
\put(-228,15){\makebox(0,0){-1}}
\put(-228,-15){\makebox(0,0){-1}}
\put(-195, 32){\line( -1, -3){12}}
\put(-195, -32){\line(-1, 3){12}}
\put(-208,15){\makebox(0,0){-1}}
\put(-208,-15){\makebox(0,0){-1}}
\put(-175, 32){\line( -1, -3){12}}
\put(-175, -32){\line(-1, 3){12}}
\put(-188,15){\makebox(0,0){-1}}
\put(-188,-15){\makebox(0,0){-1}}
\put(-155, 32){\line( -1, -3){12}}
\put(-155, -32){\line(-1, 3){12}}
\put(-168,15){\makebox(0,0){-1}}
\put(-168,-15){\makebox(0,0){-1}}
\put(-135, 32){\line( -1, -3){12}}
\put(-135, -32){\line(-1, 3){12}}
\put(-148,15){\makebox(0,0){-1}}
\put(-148,-15){\makebox(0,0){-1}}
\put(-115, 32){\line( -1, -3){12}}
\put(-115, -32){\line(-1, 3){12}}
\put(-128,15){\makebox(0,0){-1}}
\put(-128,-15){\makebox(0,0){-1}}
\put(-95, 32){\line( -1, -3){12}}
\put(-95, -32){\line(-1, 3){12}}
\put(-108,15){\makebox(0,0){-1}}
\put(-108,-15){\makebox(0,0){-1}}
\put(-75, 32){\line( -1, -3){12}}
\put(-75, -32){\line(-1, 3){12}}
\put(-88,15){\makebox(0,0){-1}}
\put(-88,-15){\makebox(0,0){-1}}
\put(-55, 32){\line( -1, -3){12}}
\put(-55, -32){\line(-1, 3){12}}
\put(-68,15){\makebox(0,0){-1}}
\put(-68,-15){\makebox(0,0){-1}}
\put(-45, 32){\line( 0, -1){64}}
\put(-50,0){\makebox(0,0){0}}
\put(-45,-30){\makebox(0,0){\includegraphics[width=0.4cm]{circle.pdf}}}
\put(15, 30){\line(1, 0){235}}
\put(15, -30){\line(1, 0){235}}
\put(125,40){\makebox(0,0){0}}
\put(125,-40){\makebox(0,0){-12}}
\put(240,-40){\makebox(0,0){{\large$D_{\infty}$}}}
\put(240,40){\makebox(0,0){{\large$D_{0}$}}}
\put(25, 32){\line( -1, -3){12}}
\put(25, -32){\line(-1, 3){12}}
\put(12,15){\makebox(0,0){-1}}
\put(12,-15){\makebox(0,0){-1}}
\put(45, 32){\line( -1, -3){12}}
\put(45, -32){\line(-1, 3){12}}
\put(32,15){\makebox(0,0){-1}}
\put(32,-15){\makebox(0,0){-1}}
\put(65, 32){\line( -1, -3){12}}
\put(65, -32){\line(-1, 3){12}}
\put(52,15){\makebox(0,0){-1}}
\put(52,-15){\makebox(0,0){-1}}
\put(85, 32){\line( -1, -3){12}}
\put(85, -32){\line(-1, 3){12}}
\put(72,15){\makebox(0,0){-1}}
\put(72,-15){\makebox(0,0){-1}}
\put(105, 32){\line( -1, -3){12}}
\put(105, -32){\line(-1, 3){12}}
\put(92,15){\makebox(0,0){-1}}
\put(92,-15){\makebox(0,0){-1}}
\put(125, 32){\line( -1, -3){12}}
\put(125, -32){\line(-1, 3){12}}
\put(112,15){\makebox(0,0){-1}}
\put(112,-15){\makebox(0,0){-1}}
\put(145, 32){\line( -1, -3){12}}
\put(145, -32){\line(-1, 3){12}}
\put(132,15){\makebox(0,0){-1}}
\put(132,-15){\makebox(0,0){-1}}
\put(165, 32){\line( -1, -3){12}}
\put(165, -32){\line(-1, 3){12}}
\put(152,15){\makebox(0,0){-1}}
\put(152,-15){\makebox(0,0){-1}}
\put(185, 32){\line( -1, -3){12}}
\put(185, -32){\line(-1, 3){12}}
\put(172,15){\makebox(0,0){-1}}
\put(172,-15){\makebox(0,0){-1}}
\put(205, 32){\line( -1, -3){12}}
\put(205, -32){\line(-1, 3){12}}
\put(192,15){\makebox(0,0){-1}}
\put(192,-15){\makebox(0,0){-1}}
\put(225, 32){\line( -1, -3){12}}
\put(225, -32){\line(-1, 3){12}}
\put(212,15){\makebox(0,0){-1}}
\put(212,-15){\makebox(0,0){-1}}
\put(245, 32){\line( -1, -3){12}}
\put(245, -32){\line(-1, 3){12}}
\put(232,15){\makebox(0,0){-1}}
\put(232,-15){\makebox(0,0){-1}}
\end{picture}
\end{center}
\caption[x]{\footnotesize The $\mathbb{C}^*$-base with $N = 11, n_0 = 0,
n_\infty = -11$ and 11 $(-1, -1)$ chains has a singular point on $D_\infty$
where the base must be blown up, giving the smooth base with $N = 12,
n_\infty = -12$, and 12 $(-1, -1)$ chains.}
\label{f:11-12}
\end{figure}
\section{Enumeration of bases}
\label{sec:enumeration}
We now summarize the results of a full enumeration of the
set of $\mathbb{C}^*$-base surfaces relevant for F-theory
compactification.
In this section we describe the features relevant for the
corresponding 6D supergravity theories. The following section
(\S\ref{sec:features}) focuses
on aspects of the associated Calabi-Yau geometries.
\subsection{Classification and enumeration of $\mathbb{C}^*$-bases}
Any $\mathbb{C}^*$-surface can be characterized by the self-intersection
numbers $n_0, n_\infty$ of the divisors $D_0, D_\infty$ associated
with the sections $\Sigma_\pm$, the number $N$ of
chains associated with distinct fibers
along which blow-ups have occurred, and the intersection structure of the
divisors connected along each chain, characterized by the integers
$n_{i, j}$ as described above. Thinking of each base $B$ as a blow-up
of $\mathbb{F}_m$, each chain is constructed by first blowing up a
point at the intersection of a fiber $F_i$ in $\mathbb{F}_m$ with either $D_0$
or $D_\infty$, giving a chain of length 2 containing a
pair of intersecting $-1$ curves.
Additional blow-ups can then be performed at the intersections either between pairs of
adjacent divisors along the chain, or between the divisors at the end
of the chain and $D_0$ or $D_\infty$. Each nontrivial chain thus is
associated with a decrease in the self-intersection of $D_0$ or
$D_\infty$ by at least one. Since the minimum values of $n_0,
n_\infty$ are $-12$ (as determined by the allowed non-Higgsable clusters), and their associated intersection numbers for $\Sigma_{\pm}$ on $\mathbb{F}_m$ are
equal and opposite, the maximum number of possible nontrivial chains
is $N = 24$.
Constructions with $N = 2$ or fewer nontrivial chains are realized
already in the toric context described in \cite{toric}; we have
included these cases in the complete analysis here, though the
counting is slightly different due to the treatment of $-9, -10,$ and
$-11$ curves.
To enumerate all possible $\mathbb{C}^*$-bases then, we can proceed by first
constructing all possible nontrivial chains associated with at most 24
blow-ups of points on the ending divisors of each chain (similar constructions were described in \cite{toric, Heckman-mv}). We then
consider all possible combinations of these chains compatible with the
bound on intersection numbers of the sections $D_{0, \infty}$. To
implement this enumeration algorithmically, we consider the set of
partitions of $k$ such that $1 \leq k \leq 24$, which are equivalent
to all valid combinations for the number of blow-ups at the
intersection of distinct fibers with $D_{0}$ or $D_{\infty}$ for any
$\mathbb{F}_{m}$. Then for each $\mathbb{F}_{m}$ and partition $\lambda \vdash k$ such
that $k=k_{1}+\ldots+k_{N}$, we can identify all combinations of
chains associated with $k_{1}, \ldots,k_{N}$ blow-ups on the ending
divisors such that the intersection of these chains with $D_{0}$ and
$D_{\infty}$ collectively satisfy the F-theory rules contained in
Table~\ref{t:clusters} and Table~\ref{t:NHCs}. Doing so for every
achievable combination of $n_{0}, n_{\infty}$ with $\lambda \vdash k$
gives a systematic method for enumerating all valid
$\mathbb{C}^*$-bases.
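The outer loop of this enumeration is simply an iteration over integer partitions, which can be sketched as follows (a hypothetical skeleton; the per-chain construction and the intersection rules of Table~\ref{t:clusters} and Table~\ref{t:NHCs} are not implemented here):

```python
def partitions(k, max_part=None):
    """Partitions of k into nonincreasing positive parts; each part
    gives the number of blow-ups k_i used to build one nontrivial chain."""
    if max_part is None:
        max_part = k
    if k == 0:
        yield ()
        return
    for first in range(min(k, max_part), 0, -1):
        for rest in partitions(k - first, first):
            yield (first,) + rest

# every partition lambda |- k with 1 <= k <= 24 that the enumeration
# must consider, since at most 24 blow-ups can occur on D_0, D_infty
all_lambdas = [lam for k in range(1, 25) for lam in partitions(k)]
```

For each $\lambda \vdash k$ one would then attach chains of the corresponding lengths to $D_0$ and $D_\infty$ and keep only the combinations allowed by the cluster rules.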
We have carried out the complete enumeration of all strictly
$\mathbb{C}^*$-bases ({\it i.e.,} including no -9, -10, -11 curves) for
all values of $N$ from 0 to 24. We have also enumerated those bases
which are not strictly $\mathbb{C}^*$ (or toric), due to -9, -10, or -11
curves on the fiber chains. We have not considered bases with -9, -10,
or -11 curves for $D_0$ or $D_\infty$, since blowing up these curves
gives a $\mathbb{C}^*$-base with larger $N$, so including these would simply
amount to overcounting. The cases $N = 0, 1, 2$ represent toric
bases. We have not included toric bases with -9, -10, -11 curves on
fiber chains when there is an equivalent toric base where such a curve
can be mapped to $D_0$ or $D_\infty$ by a rotation of the loop of
toric divisors. These cases are already counted elsewhere in the set
of $\mathbb{C}^*$ bases, as discussed in more detail in the following section.
\subsection{Distribution of bases}
\label{sec:distribution}
The number of distinct bases, uniquely determined by $N, n_0,
n_\infty$, and the intersection configuration of the curves on the $N$
chains (up to the symmetries associated with permutations of the
chains and
the simultaneous reflection of all chains
combined with $n_0 \leftrightarrow n_\infty$), is tabulated for each $N$ from 0
to 24
in
Table~\ref{t:table}.
Note that in cases where the base is toric and could have either $N =
1$ or $N = 2$ (this can occur when there is a curve of
self-intersection 0 that can either appear as a fiber or as one of $D_{0}, D_{\infty}$) we have counted the base as having the minimal value, $N
= 1$.
We find a total of
126,469
smooth
$\mathbb{C}^*$-bases that are acceptable for 6D F-theory
compactifications.
There are an additional 35,935 bases that come from $\mathbb{C}^*$-bases with -9, -10, and -11 curves in the fiber chains, for a total of
162,404 allowed bases.
We use this larger set for the statistical analyses in the remainder
of this paper.
The largest fraction
(nearly 38\%) of these bases come from the case with
3 non-trivial chains, and the number of
allowed bases decreases as the number of non-trivial chains increases
above 3.
\begin{table}
\label{t:table}
\begin{center}
\begin{tabular}{| c |
c |c | c| c| c|
}
\hline
\hline
\# Fibers $N$ & $\mathbb{C}^*$-bases & $\mathbb{C}^*+$ NSC bases & Max $T$ & Min $T$ & Peak $T$
\\
\hline
$\mathbb{P}^2$ & 1 & 1 & 0 & 0 & 0\\
\hline
0 & 10 & 10 & 1 & 1 & 1\\
1 & 9,383 & 14,183 & 193 & 2 & 18 \\
2 & 25,474 & 28,733 & 182 & 3 & 21\\
3 & 44,930 & 61,329 & 171 & 4 & 21\\
4 & 20,980 & 27,134 & 160 & 5 & 21\\
5 & 11,027 & 13,811 & 149 & 6 & 21\\
6 & 6,137 & 7,462 & 138 & 7 & 25\\
7 & 3,485 & 4,133 & 127 & 8 & 25\\
8 & 2,034 & 2,356 &116 & 9 & 25\\
9 & 1,190 & 1,329 &105 & 10 & 25\\
10 & 709 & 768 & 94 & 11 & 25 \\
11 & 423 & 449 & 83 & 12 & 25\\
12 & 262 & 273 & 72 & 13 & 25 \\
13 & 159 & 164 & 61 & 14 & 25\\
14 & 101 & 104 & 61 & 15 & 25\\
15 & 62 & 63 & 39 & 16 & 25\\
16 & 40 & 40 & 30 & 17 & 25 \\
17 & 24 & 24 & 27 & 18 & 25\\
18 & 16 & 16 & 26 & 19 & 25\\
19 & 9 & 9 & 25 & 20 & 25 \\
20 & 6 & 6 & 25 & 21 & 25\\
21 & 3 & 3 &25 & 25 & 25\\
22 & 2 & 2 & 25 & 25 & 25 \\
23 & 1 & 1 & 25 & 25 & 25\\
24 & 1 & 1 & 25 & 25 & 25\\
\hline
total & 126,469 & 162,404 & 193 & 0& 25\\
\hline
\hline
\end{tabular}
\end{center}
\caption[x]{\footnotesize The number of distinct $\mathbb{C}^*$-bases, with
two divisors $D_{0}$ and $D_{\infty}$ associated with sections
$\Sigma_{\pm}$ connected by $N$ chains of curves of negative
self-intersection. Cases $N = 0, 1, 2$ correspond to toric bases.
$\mathbb{P}^2$ is listed separately. Toric bases with a single 0-curve that
can either be a section $(N = 2)$ or a fiber $(N = 1)$ are listed as
$N = 1$ bases. NSC refers to bases that are not strictly
$\mathbb{C}^*$-surfaces in that they are described by $\mathbb{C}^*$-surfaces with
-9, -10, or -11 curves in the fiber chains whose blow-up takes the
base outside the set of $\mathbb{C}^*$-surfaces. The bases described using
toric geometry in \cite{toric} all appear in this table, though some
are not strictly toric and appear in column 3 and/or in rows with $N
> 2$ due to additional fibers produced when blowing up $-9, -10,
-11$ curves. Peak $T$ refers to the value of $T$ that occurs for
the greatest number of bases at each $N$. Both NSC and strictly $\mathbb{C}^*$-surfaces were used to determine the maximum, minimum, and peak $T$ values.}
\end{table}
As mentioned above, the tabulation of bases given in
Table~\ref{t:table} differs from that of \cite{toric} in the way that
toric bases with $-9, -10,$ and $-11$ curves are treated. In
\cite{toric}, 61,539 bases were identified based on toric structures
that in some cases included $-9, -10,$ or $-11$ curves. These bases
include the 34,868 strictly toric bases listed for $\mathbb{P}^2$ and $N=0,1$ and $2$
in column 2 of Table~\ref{t:table}, another 8,059 bases that arise
from toric bases with $-9, -10,$ or $-11$ curves on the fiber chains,
included in column 3 of the table, and another 18,612 bases that are
included in rows $N > 2$ and correspond to $\mathbb{C}^*$-bases (with or
without $-9, -10,$ or $-11$ curves on the fiber chains). Bases in the
last category arise from
toric bases with $-9, -10, -11$ curves on the sections $D_0, D_\infty$
that give extra nontrivial fibers when the necessary points are blown
up. To summarize, the analysis here includes all the bases
constructed in \cite{toric}, but discriminates more precisely based on
the detailed structure of these bases.
The parameter $T$ (number of tensor multiplets) serves as the most significant distinguishing
characteristic of 6D supergravity theories, and is related to the
simplest topological feature $h^{1, 1}(B)$ of the base surface $B$. Since each nontrivial fiber involves at least one blow-up, and $T = 1$ for all $\mathbb{F}_{m}$, we have $T \geq N+1$.
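As a check, the bound $T \geq N + 1$ can be compared against the Min $T$ column of Table~\ref{t:table} (values transcribed by hand here):

```python
# Minimum value of T at each N, transcribed from Table 1 in the text.
min_T = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9,
         9: 10, 10: 11, 11: 12, 12: 13, 13: 14, 14: 15, 15: 16,
         16: 17, 17: 18, 18: 19, 19: 20, 20: 21, 21: 25, 22: 25,
         23: 25, 24: 25}

# T = 1 for every Hirzebruch surface and each nontrivial fiber chain
# costs at least one blow-up, hence T >= N + 1 (saturated for N <= 20).
assert all(T >= N + 1 for N, T in min_T.items())
```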
The value of $T= h^{1, 1}(B) -1$ for the $\mathbb{C}^*$-bases with $N > 2$
ranges from $T=4$ to $T = 171$. The largest $T$ for any known base is
$T = 193$; this is an NSC base with only one nontrivial fiber ($N = 1$).
Heuristic arguments were given in \cite{toric} that no other
base can have $T > 193$; as discussed in the next section this
conclusion is supported by the results of this paper.
Note that for $N$ from 1 to 13, the maximum value of $T$ drops by
precisely 11 for each increment in $N$. This can be understood from
the appearance of specific maximal sequences containing $-12$
curves, as discussed in \S\ref{sec:groups}.
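Concretely, the pattern amounts to $T_{\max}(N) = 193 - 11\,(N-1)$ for $1 \leq N \leq 13$; a quick check against the Max $T$ column of Table~\ref{t:table} (values transcribed here):

```python
# Maximum value of T at each N for 1 <= N <= 13, from Table 1.
max_T = [193, 182, 171, 160, 149, 138, 127, 116, 105, 94, 83, 72, 61]

# Each extra chain trades away one maximal e8-unit sequence from the
# longest chain, lowering the achievable T by exactly 11.
assert all(T == 193 - 11 * (N - 1)
           for N, T in enumerate(max_T, start=1))
```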
The number of different bases that appear at each $T$ is plotted in
Figure~\ref{f:t-plot}. The number of bases peaks at $T = 25$.
The distribution of bases also peaks at $T = 25$ for every specific $N > 5$, out to $N = 21,
22, 23$ and $24$, where all possible $\mathbb{C}^*$-bases have $T = 25$. One
feature of many of the bases with $T = 25$ is that they contain
precisely two $(-12)$ clusters giving ${\mathfrak{e}}_8$ gauge summands, and no
other non-Higgsable clusters. The
primary difference between these bases is that they have different
numbers of nontrivial chains and contain different combinations of
intersecting $-2$ curves. As mentioned above, clusters of $-2$ curves
without curves of lower self-intersection generally arise at special
points in moduli spaces of bases without such clusters. Indeed, many
of the $T = 25$ bases with gauge algebra ${\mathfrak{e}}_8\oplus {\mathfrak{e}}_8$ are limits of the same complex geometry. We discuss this issue further
in \S\ref{sec:redundancies}, in the context of the full Calabi-Yau
threefold geometry of the elliptic fibration over $B$. In a similar
fashion, many bases have $T = 21$ and contain one $(-12)$ cluster and
a single $(-8)$ cluster associated with a ${\mathfrak{e}}_7$ algebra summand.
The number of $\mathbb{C}^*$-bases drops off rapidly between about $T = 30$
and $T = 60$. Similar behavior for the toric subset of the bases was
noted in \cite{toric}. In addition to the primary peak at $T = 25$,
there are smaller peaks in the distribution of bases starting at
$T=50$ and appearing at intervals of eleven, so that they are visible at $T=61,\, 72,\, 83,\, 94$, {\it
etc.} This feature is also visible when we consider the $N$-chain
cases separately for $3 \leq N \leq 14$ (see Figure~\ref{f:nt-plot}).
Again, the increment by 11 is related to ${\mathfrak{e}}_8$ chain sequences,
though the geometry seems less restricted as $T$
increases.
\begin{figure}
\begin{center}
\includegraphics[width = 12.0cm]{Tdist_allN_fig.pdf}
\end{center}
\vspace{-1 cm}
\caption[x]{\footnotesize Number of
distinct $\mathbb{C}^*$-bases as a function of the number of tensor
multiplets $T$.}
\label{f:t-plot}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width = 12.0cm]{Tdist_N3-4_fig.pdf}
\end{center}
\vspace{-1 cm}
\caption[x]{\footnotesize Number of distinct $\mathbb{C}^*$-bases with $N=3$ Fibers (upper blue data) and $N=4$ Fibers (lower purple data) for different numbers $T$ of tensor multiplets.}
\label{f:nt-plot}
\end{figure}
It may seem surprising that the number of possible bases drops off so
rapidly with $N$. Naively, one might imagine a set of $K$ relatively
short chains that could be combined arbitrarily in roughly $K^N/N!$
ways, which could grow rapidly with increasing $N$ for even modest
values of $K$. The fact that the number of bases decreases rapidly
for increasing $N$ indicates that in fact, not many chains can be
combined arbitrarily. These constraints come in part from the
intersection rules that drastically reduce the possibilities of which
chains can simultaneously intersect one of the sections, and in part
from the increased number of blow-ups at points on the sections that
are needed to build complicated chains. The relatively controlled
dependence on $N$ of the number of models suggests that going beyond
the class of $\mathbb{C}^*$-surfaces to completely generic base surfaces may also give
a reasonably controlled number of possible bases.
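The naive combinatorial growth mentioned above can be made explicit with a toy computation ($K = 10$ is an arbitrary illustrative choice, not a value derived from the classification):

```python
from math import factorial

# K is a hypothetical number of available short chain types.
K = 10
naive = [K**N / factorial(N) for N in range(1, 10)]

# The naive estimate K^N / N! grows monotonically as long as N < K,
# in contrast with the observed decrease in the number of bases.
assert all(b > a for a, b in zip(naive, naive[1:]))
```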
\subsection{Gauge groups and chain structure}
\label{sec:groups}
We have investigated a number of aspects of the class of $\mathbb{C}^*$-bases that we have identified. One of the main conclusions of this
investigation is that the structure of the $\mathbb{C}^*$-bases with large
$T$ is very similar to that of toric bases with large $T$. For toric
bases with large $T$, the intersection structure of divisors with
negative self-intersection is largely based on long chains dominated
by sequences of maximal units with gauge algebra ${\mathfrak{e}}_8 \oplus{\mathfrak{f}}_4
\oplus 2 ({\mathfrak{g}}_2 \oplus{\mathfrak{su}} (2))$ (See Figure~\ref{f:e8-sequence}).
The same is true of $\mathbb{C}^*$-bases.
The number of gauge algebra summands of these types are plotted in
Figure~\ref{f:gauge-factors} against $T$, and grow linearly in a
fashion very similar to that in the toric case.
The linear sequences of 12 curves giving the gauge algebra
contributions
${\mathfrak{e}}_8 \oplus{\mathfrak{f}}_4
\oplus 2 ({\mathfrak{g}}_2 \oplus{\mathfrak{su}} (2))$
were
described in \cite{clusters}; we refer to these as ``${\mathfrak{e}}_8$ units''
for simplicity, as they appear frequently in the intersection
structure of bases with large $T$. These units are maximal in the sense that they cannot be blown up in any way consistent with the existence of an elliptic fibration.
\begin{figure}
\begin{center}
\begin{picture}(200,100)(- 100,- 150)
\multiput(-145,-100)(50,0){6}{\line(1,-1){40}}
\thicklines
\multiput(-170,-140)(50,0){7}{\line(1,1){40}}
\multiput(-95,-100)(150,0){2}{\line(1,-1){40}}
\multiput(-160,-115)(300,0){2}{\makebox(0,0){$-12$}}
\multiput(-135,-125)(250,0){2}{\makebox(0,0){$-1$}}
\multiput(-110,-115)(200,0){2}{\makebox(0,0){$-2$}}
\multiput(-85,-125)(150,0){2}{\makebox(0,0){$-2$}}
\multiput(-60,-115)(100,0){2}{\makebox(0,0){$-3$}}
\multiput(-35,-125)(50,0){2}{\makebox(0,0){$-1$}}
\multiput(-10,-115)(300,0){1}{\makebox(0,0){$-5$}}
\put(-175,-120){\makebox(0,0){$\cdots$}}
\put(175,-120){\makebox(0,0){$\cdots$}}
\end{picture}
\end{center}
\caption[x]{\footnotesize Bases having large values of $h^{1, 1} (B) =
  T + 1$ have an intersection structure dominated by multiple
repetitions of a characteristic sequence of intersecting divisors
associated with non-Higgsable gauge algebra ${\mathfrak{e}}_8 \oplus{\mathfrak{f}}_4
\oplus 2 ({\mathfrak{g}}_2 \oplus{\mathfrak{su}} (2))$.}
\label{f:e8-sequence}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width = 12.0cm]{gaugeplot_allN.pdf}
\end{center}
\vspace{-1 cm}
\caption[x]{\footnotesize Average number of gauge algebra summands as a function of $T$ for summands (ordered from top at $T=171$) ${\mathfrak{g}}_2 \oplus{\mathfrak{su}} (2)$, ${\mathfrak{e}}_8,$ and ${\mathfrak{f}}_4$.}
\label{f:gauge-factors}
\end{figure}
A detailed analysis of the configurations at large $T$ makes this
correspondence clear.
As discussed in \cite{clusters, toric}, the base with $T = 193$ is an NSC base with $N = 1$, where the single fiber chain is essentially
16 ${\mathfrak{e}}_8$ units ending in $-12$ curves at $D_0, D_\infty$; the two
$-12$ curves one ${\mathfrak{e}}_8$ unit away from the ends of the fiber are actually $-11$
curves in the toric base, which is what makes this base not strictly toric or
$\mathbb{C}^*$ (and therefore places it in the third column of Table~\ref{t:table}).
The $N = 2$ base with $T = 182$ is identical except that the long
fiber chain contains only 15 ${\mathfrak{e}}_8$ units (again with $-11$
curves one ${\mathfrak{e}}_8$ unit away from each end), and the second chain is a simple $(-1,
-1)$ chain arising from a single blow-up on one of the sections. The
unique $N = 3$ $\mathbb{C}^*$-base with $T
= 171$ is again similar, with $n_0 = n_\infty = -12$, two $(-1, -1)$
chains, and the longer chain being a sequence of 14 ${\mathfrak{e}}_8$ units, again
with two $-11$ curves in the $\mathbb{C}^*$-base.
Other bases with large $T$ are dominated in a
similar fashion by one or two very long chains, with the other chains
generally being of type $(-1, -1)$ or other very short chains.
For example, there is only one base with $N = 3$ that has two fiber
chains
of length $> 50$ (see Figure~\ref{f:chains-34}). For this base
there are two long chains of length 53 that combine to form a loop
  of 9 ${\mathfrak{e}}_8$ units, with a $-12$ curve as $D_0$ and the opposite $-5$ curve
  as $D_\infty$ (and the usual pair of $-11$ curves near
the end on each side), and the third fiber chain is of the form
$(-1, -1)$. Over all $N = 3$ bases, when the shortest fiber chain has
length $> 3$ then the next shortest is of length $23$ or less.
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{longest_second_345.pdf}
\end{center}
\caption[x]{\footnotesize Comparison of the lengths of the longest
and second longest chains associated with distinct fibers for bases with $N = 3$ (blue circles), $N = 4$ (purple squares), and $N=5$ (green stars). }
\label{f:chains-34}
\end{figure}
This analysis of large $T$ $\mathbb{C}^*$-bases strengthens the conclusion
argued heuristically in \cite{toric} that no base can
have $T > 193$. In particular, the $\mathbb{C}^*$-bases allow two new types
of topological structure to the intersection diagram: loops and
branches. Neither of these types of structure allows for new
constructions that qualitatively change the nature of the bases at
large $T$. In general, bases with large $T$ have a long linear
sequence of ${\mathfrak{e}}_8$ chains, with additional loops rapidly decreasing
the maximum value of $T$. There are no new features associated with
branches that suggest any mechanism for constructing bases with large
$T$ and more complicated intersection structures.
In fact, the known base with the largest $T$ corresponds to simply a
single long ${\mathfrak{e}}_8$ chain, with no branching or loops at all. The
only way to modify this structure topologically is to add branchings
and loops, and the results that we have found here show that adding up
to two branchings and an arbitrary number of loops does not give any
way of increasing $T$; rather, as the topological complexity increases
the upper bound on the possible values of $T$ decreases.
Thus,
while we still do
not have a proof that $T = 193$ is the maximum possible for a smooth
base $B$, the absence of new structure in the $\mathbb{C}^*$-base models seems
to make it unlikely that generic bases with even more complicated
branching and looping structures could have larger values of $T$,
even without the $\mathbb{C}^*$ restriction.
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{longest_shortest_345.pdf}
\end{center}
\caption[x]{\footnotesize Comparison of the lengths of the longest
and shortest chains associated with distinct fibers for bases with $N = 3$ (blue circles), $N = 4$ (purple squares), and $N=5$ (green stars). }
\label{f:chains-34-b}
\end{figure}
\section{Calabi-Yau geometry}
\label{sec:features}
Over each F-theory base it is possible to construct an elliptically
fibered Calabi-Yau geometry. Resolving singularities associated with
the nonabelian groups living on the discriminant locus gives a smooth
Calabi-Yau threefold. While tuning the moduli over any base can
increase the degree of vanishing of the discriminant locus and hence
enhance the nonabelian gauge group, which corresponds to a change in the
associated smooth Calabi-Yau threefold, we focus here
primarily on the simplest
threefold over each base, corresponding to the maximally Higgsed 6D
supergravity model.
Each Calabi-Yau threefold $X$ has topological invariants given by the
Hodge numbers $h^{1, 1} (X),\, h^{2, 1} (X)$. A large class of Calabi-Yau
threefolds that are realized as hypersurfaces in toric varieties were
identified by Kreuzer and Skarke \cite{Kreuzer-Skarke}, who produced a
comprehensive database of 474 million Calabi-Yau constructions of this
type. These examples include threefolds with 30,108 distinct pairs of
Hodge numbers. The Hodge numbers associated with allowed toric bases
(including some NSC bases) were computed in \cite{WT-Hodge},
and it was shown that
the distinctive upper boundary of the ``shield region'' spanned by the
Kreuzer and Skarke Hodge data is associated with a trajectory of
blow-ups of the Hirzebruch surface $\mathbb{F}_{12}$. In particular, it was
proven that the maximum possible value of $h^{2, 1} (X)$ for \emph{any}
elliptically-fibered Calabi-Yau threefold (with section) is given by
\begin{equation}
h^{2, 1} (X) \leq 491 \,,
\end{equation}
independent of whether the base is toric or not. In this section we
carry out an analysis of the Hodge numbers for the more general class
of $\mathbb{C}^*$-bases, allowing us to identify some new Calabi-Yau
threefolds with
interesting features.
\subsection{Calabi-Yau geometry of elliptic fibrations over $\mathbb{C}^*$-bases}
The Hodge numbers of the smooth Calabi-Yau threefold $X$
associated with a generic elliptic fibration over the base $B$ are
related to the intersection structure of the base and to the gauge
group and matter content of the associated 6D supergravity theory.
The Hodge number $h^{1, 1} (X)$ is related to the structure of the
base by the Shioda-Tate-Wazir formula \cite{stw}, which in the
language of the 6D supergravity theory states \cite{Morrison-Vafa}
\begin{equation}
h^{1, 1} (X) = h^{1, 1} (B) + {\rm rank} (G) + 1
= T + 2 + {\rm rank} (G_{\rm nonabelian}) + V_{\rm abelian} \,,
\label{eq:h11}
\end{equation}
where $G$ is the full (abelian + nonabelian) gauge group of the 6D
theory. While the nonabelian group is determined as described above
from the singularity structure of the discriminant locus, the rank of
the abelian group corresponds to the rank of the
\emph{Mordell-Weil} group of sections of the fibration.
In general, the rank of the Mordell-Weil group is difficult to compute
mathematically (see, {\it e.g.}, \cite{Hulek-Kloosterman}).
The
Mordell-Weil group is a global feature of the elliptic fibration,
corresponding in the physics context to the number of
$U(1)$ factors in the 6D supergravity theory; the global aspect of
this structure is what makes computation of the group a particularly
challenging problem. Recent
progress on understanding abelian factors in general F-theory
constructions was made in
\cite{Grimm-Weigand,
Park-Taylor, Morrison-Park, Park-abelian, Mayrhofer:2012zy,
Braun:2013yti, bmpw, Cvetic-Klevers, Braun:2013nqa, bmpw-2, Cvetic-Klevers-2, Braun-fate, DPS,
mt-sections}.
The Hodge number $h^{2, 1} (X)$ gives the number of
complex structure moduli of the threefold $X$.
This set of moduli represents all but one of the neutral scalar fields
in the 6D gravity theory; the remaining scalar field is associated
with the overall K\"ahler modulus on the base $B$. Thus,
\begin{equation}
h^{2, 1} (X) = H_{\rm neutral} -1 \,.
\end{equation}
The number of neutral hypermultiplets in the theory on a $\mathbb{C}^*$-base $B$ can be computed from the intersection structure on $B$
by analyzing the generic Weierstrass form
over that base, as we describe below in Section~\ref{sec:neutral}. The
total number of scalar fields is also related through
the 6D gravitational anomaly cancellation condition to the number of
vector multiplets $V$
in the theory \cite{gsw-6, Sagnotti}
\begin{equation}
H_{\rm neutral} + H_{\rm charged}-V = 273-29T \,.
\label{eq:anomaly}
\end{equation}
Since the number of charged hypermultiplets can be computed by adding
the contributions from Table~\ref{t:clusters} for each non-Higgsable
cluster in $B$, and the numbers of neutral hypermultiplets, tensors,
and nonabelian vectors are also computable from the intersection
data on $B$, this gives us a way of computing the total number of
abelian vector multiplets $V_{\rm abelian}$. Using this in
\eq{eq:h11}, we can thus compute both Hodge numbers $h^{1, 1} (X),\, h^{2, 1} (X)$ in a
systematic way for any $\mathbb{C}^*$-base.
\subsection{Counting neutral hypermultiplets}
\label{sec:neutral}
Computing the number of neutral hypermultiplets over a given base $B$
can be done in a systematic fashion by following the sequence of
blow-ups needed to reach $B$ starting from a Hirzebruch surface
$\mathbb{F}_m$.
In the case where $B$
is a toric surface, this analysis is particularly
simple and was described in \cite{toric}.
For a toric base, the number of neutral hypers is related to
the number of free parameters $W$ in the Weierstrass model for $B$
through
\begin{equation}
H_{\rm neutral} = W-w_{\rm aut} + N_{-2} \;\;\;\;\;
{\rm (toric)}\,,
\label{eq:hyper-counting}
\end{equation}
where $w_{\rm aut}$ is the dimension of the automorphism group of $B$,
and $N_{-2}$ is the number of curves of self-intersection $-2$ in
$B$ that do not live in clusters carrying a gauge group. Basically,
automorphisms of $B$ correspond to Weierstrass monomials that do not
represent physical degrees of freedom, while $-2$ curves represent
moduli that have been tuned to a special point, as discussed above.
As shown in \cite{toric}, each curve of self-intersection $k \geq
0$ in the
toric fan contributes $k +1$ to the dimension of the automorphism
group, with two additional universal
automorphisms corresponding to the toric
structure. For toric surfaces, the number of Weierstrass monomials in
$f$ is simply given by the set of points $m$ in the lattice dual to
the toric fan that satisfy $\langle m, v\rangle \geq -4$ for all $v$
in the set of rays generating the fan, and a similar condition with
$\langle m, v\rangle \geq -6$ for monomials in $g$. As discussed in
\cite{toric}, there is a simple geometric picture of this in the toric
context.
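A brute-force version of this lattice-point count can be sketched as follows, using one common choice of rays for the fan of $\mathbb{F}_0 = \mathbb{P}^1 \times \mathbb{P}^1$ (ray conventions vary between references):

```python
def count_monomials(rays, degree):
    """Count lattice points m with <m, v> >= -degree for all rays v.

    Brute force over a bounding box; sufficient for small complete fans.
    """
    R = 4 * degree  # generous bounding box for a complete 2D fan
    return sum(
        1
        for a in range(-R, R + 1)
        for b in range(-R, R + 1)
        if all(a * vx + b * vy >= -degree for vx, vy in rays)
    )

# Rays of the fan of F_0 = P^1 x P^1 (one standard convention).
f0_rays = [(1, 0), (-1, 0), (0, 1), (0, -1)]
W_f = count_monomials(f0_rays, 4)  # monomials in f
W_g = count_monomials(f0_rays, 6)  # monomials in g
```

This reproduces the $9 \times 9 = 81$ monomials in $f$ and $13 \times 13 = 169$ in $g$ quoted for $\mathbb{F}_0$ in the example below.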
More generally, we can start on $\mathbb{F}_m$ with a given number of
monomials $W$ in the Weierstrass degrees of freedom of $f, g$ and
explicitly tune the moduli in $f, g$ to blow up the base at the
desired set of points to realize any base $B$. We can then compute
the number of remaining Weierstrass moduli and apply
\eq{eq:hyper-counting}
(with one slight modification in the counting
of $N_{-2}$ as discussed further below). This procedure can be
carried out in a clear fashion when the base $B$ is a
$\mathbb{C}^*$-surface. The simplification in the $\mathbb{C}^*$ case comes from the
fact that we can treat each chain arising from a blown-up fiber on
$\mathbb{F}_m$ in a parallel fashion to the toric case. In the toric case we
are blowing up points on only two fibers, which we can take to be at
the points $z = 0, \infty$ in a local toric
coordinate chart $(z, w) \in\mathbb{C} \times\mathbb{C}$ as discussed in Section
\ref{sec:toric}. Consider the constraints on Weierstrass monomials
coming from blow-ups along the fiber at $z = 0$. Each such constraint
corresponds to imposing a vanishing condition on a set of monomial
coefficients in an expansion $f = \sum_{n, m}c_{n,m}z^nw^m$, and similarly
for $g$. The set of constraints imposed depends upon the sequence of
blow-ups in the fiber, and has a convenient geometric description in
the language of the toric fan and dual space. For a $\mathbb{C}^*$-base
with $N$ chains located at $z = z_1, \ldots, z_N$, we simply impose
the appropriate set of constraints on the monomial coefficients in an
expansion $f = \sum_{n, m}c^{(i)}_{n,m} (z-z_i)^nw^m$ for the $i$th
chain. This
gives a set of linear conditions on the coefficients in $f, g$.
The
dimension of the space of independent solutions to these linear
constraints can then be used to compute the number of independent
Weierstrass monomials $W$, in an analogous fashion to
(\ref{eq:hyper-counting}). As in the toric case, when one of the
sections $D_0, D_\infty$ has self-intersection $k \geq 0$ it
contributes $k +1$ to the dimension of the automorphism group. For a
$\mathbb{C}^*$-base with $N = 3$ or more blown up fibers,
the first two fibers can be fixed at $z = 0, \infty$, using up two
automorphisms originally associated with the fibers of
self-intersection 0 in the toric base.
Blowing up the third fiber corresponds to fixing another point in
$\mathbb{C}^*$, which can for example be chosen to be $z = 1$; this uses up one
of the remaining automorphisms, leaving only one of the two universal
automorphisms present in all toric bases.
For each of the $N -3$ fibers beyond the third, blowing up an
additional point involves choosing a value $z_i$ which is itself a
Weierstrass parameter though it does not correspond to a monomial.
Thus, the formula (\ref{eq:hyper-counting}) must be augmented by $N
-3$ when more than 3 fibers are blown up. The complete formula for
the number of neutral hypermultiplets is given at the end of this
section, in \eq{eq:hyper-counting-general}.
It may be helpful to illustrate this method with an example.
Consider first the toric base $B$ with the following parameters
\begin{eqnarray}
N = 2, & & \; n_0 = -2, n_\infty = -1\\
{\rm chain}\ 1: & & (-1, -3, -1, -2)\\
{\rm chain}\ 2: & & (-1, -1)
\end{eqnarray}
This base has one $(-3)$ cluster, so the nonabelian gauge algebra
is ${\mathfrak{su}} (3)$.
This base can be described as $\mathbb{F}_0$ with four points blown up, three
on the fiber $z = 0$ and one on the fiber at $z = \infty$. A graphic
description of the toric fan and monomials in the dual lattice is
given in Figure~\ref{f:example-monomials}.
\begin{figure}
\begin{center}
\centering
\begin{picture}(200,130)(- 100,- 70)
\put(-105, 0){\makebox(0,0){\includegraphics[width=8cm]{example-a.pdf}}}
\put(105, 0){\makebox(0,0){\includegraphics[width=5.5cm]{example-b.pdf}}}
\end{picture}
\end{center}
\caption[x]{\footnotesize An example of counting monomials in a toric
  base $B$. The base $\mathbb{F}_0$ has 250 Weierstrass coefficients: $9 \times 9
  = 81$ in $f$ (solid dots) and $13 \times 13 = 169$ in $g$ (round circles)
in the dual lattice. These are all of the dots and circles in the
diagram on the left.
Each blow-up corresponds to adding a vector to
the fan (above right), which removes some monomials from $f$ and
$g$. For example, blowing up the point corresponding to the red ray
in the diagram on the right removes all monomials below and to the
left of the red line in the left-hand diagram, blowing up on the blue
ray removes all points above and to the left of the blue line, {\it etc.}.
The base $B$ has 136 monomials, 44 in $f$ and 92 in $g$, corresponding
to the dots and circles in the left-hand diagram that lie above the red
and green lines and to the right of the purple line.}
\label{f:example-monomials}
\end{figure}
The monomials in this toric model can be computed using the methods of
\cite{toric}. The monomials in $f$ in the Weierstrass model on
$\mathbb{F}_0$ are of the form $c_{n, m}z^nw^m, 0 \leq n, m \leq 8$, and
similarly for $g= \sum_{n, m = 0}^{12}d_{n, m}z^nw^m$. Blowing up the
points on $\mathbb{F}_0$ imposes conditions on the coefficients $c, d$. For
example, blowing up the point $z = w = 0$ (corresponding to adding the
ray in red in the diagram on the right-hand side of Figure~\ref{f:example-monomials})
imposes the conditions that
$c_{n, m} = 0$ for all $n + m < 4$ and $d_{n, m} = 0$ for $n + m < 6$.
This removes 31 Weierstrass moduli (and reduces the automorphism
group by 2 by turning two $0$-curves into $-1$-curves), giving a
change in the number of neutral hypermultiplets $\Delta H_{\rm
neutral} = -29$, matching \eq{eq:anomaly} with $\Delta T = 1$.
This corresponds to removing all the monomials on the lower-left
corner of the left-hand diagram in Figure~\ref{f:example-monomials}. The analogous constraints imposed by blowing
up the other points on the fiber $z = 0$ as well as the point $z =
\infty, w = 0$ are depicted in an analogous fashion in the Figure.
Together, these constraints reduce the number of Weierstrass
monomials to $W = 136$. This matches with \eq{eq:hyper-counting}
and \eq{eq:anomaly}, with $N_{-2}= w_{\rm aut} = 2$ and $V= 8$
vector multiplets from the ${\mathfrak{su}}(3)$ factor in the gauge algebra.
Thus, we have determined the Hodge numbers of the resolved generic
elliptic fibration over $B$
\begin{equation}
h^{1, 1} (X) = 9, \;\;\;\;\;h^{2, 1} (X) =135 \,.
\end{equation}
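The counting in this example can be checked numerically against \eq{eq:hyper-counting}, \eq{eq:anomaly}, and \eq{eq:h11}; all input values below are transcribed from the text.

```python
# Toric example B: F_0 blown up at four points, so T = 1 + 4 = 5.
T = 5
W, w_aut, N_minus_2 = 136, 2, 2   # monomials, automorphisms, -2 curves
V, H_charged = 8, 0               # su(3) on the -3 curve, no charged matter
rank_nonabelian, V_abelian = 2, 0

# Blow-up at z = w = 0 kills c_{n,m} with n+m < 4 and d_{n,m} with n+m < 6.
n_f = sum(1 for n in range(9) for m in range(9) if n + m < 4)
n_g = sum(1 for n in range(13) for m in range(13) if n + m < 6)
assert n_f + n_g == 31            # the 31 removed Weierstrass moduli

H_neutral = W - w_aut + N_minus_2                  # eq. (hyper-counting)
assert H_neutral + H_charged - V == 273 - 29 * T   # gravitational anomaly
h11 = T + 2 + rank_nonabelian + V_abelian          # Shioda-Tate-Wazir
h21 = H_neutral - 1
```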
\begin{figure}
\setlength{\unitlength}{.83pt}
\begin{center}
\begin{picture}(180,160)(- 100,- 80)
\put(-35, 0){\vector(1,0){25}}
\put(-265, 60){\line(1, 0){225}}
\put(-265, -60){\line(1, 0){225}}
\put(-155,70){\makebox(0,0){-2}}
\put(-155,-70){\makebox(0,0){-1}}
\put(-55,-70){\makebox(0,0){{\large$D_{\infty}$}}}
\put(-55,70){\makebox(0,0){{\large$D_{0}$}}}
\put(-255, 62){\line( -1, -3){12}}
\put(-255, -2){\line(-1, 3){12}}
\put(-255, 2){\line( -1, -3){12}}
\put(-255, -62){\line( -1, 3){12}}
\put(-277,14){\makebox(0,0){{\large$D_{1,2}$}}}
\put(-250,14){\makebox(0,0){-3}}
\put(-277,48){\makebox(0,0){{\large$D_{1,1}$}}}
\put(-250,48){\makebox(0,0){-1}}
\put(-277,-14){\makebox(0,0){{\large$D_{1,3}$}}}
\put(-250,-14){\makebox(0,0){-1}}
\put(-277,-48){\makebox(0,0){{\large$D_{1,4}$}}}
\put(-250,-48){\makebox(0,0){-2}}
\put(-165, 5){\line( 1, -3){23}}
\put(-165, -5){\line(1, 3){23}}
\put(-170,35){\makebox(0,0){{\large$D_{2,1}$}}}
\put(-140,33){\makebox(0,0){-1}}
\put(-170,-35){\makebox(0,0){{\large$D_{2,2}$}}}
\put(-140,-33){\makebox(0,0){-1}}
\put(-45, 62){\line( 0, -1){128}}
\put(-50,0){\makebox(0,0){0}}
\put(-45,60){\makebox(0,0){\includegraphics[width=0.4cm]{circle.pdf}}}
\put(15, 60){\line(1, 0){235}}
\put(15, -60){\line(1, 0){235}}
\put(125,70){\makebox(0,0){-3}}
\put(125,-70){\makebox(0,0){-1}}
\put(235,-70){\makebox(0,0){{\large$D_{\infty}$}}}
\put(235,70){\makebox(0,0){{\large$D_{0}$}}}
\put(25, 62){\line( -1, -3){12}}
\put(25, -2){\line(-1, 3){12}}
\put(25, 2){\line( -1, -3){12}}
\put(25, -62){\line( -1, 3){12}}
\put(3,14){\makebox(0,0){{\large$D_{1,2}$}}}
\put(30,14){\makebox(0,0){-3}}
\put(3,48){\makebox(0,0){{\large$D_{1,1}$}}}
\put(30,48){\makebox(0,0){-1}}
\put(3,-14){\makebox(0,0){{\large$D_{1,3}$}}}
\put(30,-14){\makebox(0,0){-1}}
\put(3,-48){\makebox(0,0){{\large$D_{1,4}$}}}
\put(30,-48){\makebox(0,0){-2}}
\put(115, 5){\line( 1, -3){23}}
\put(115, -5){\line(1, 3){23}}
\put(110,35){\makebox(0,0){{\large$D_{2,1}$}}}
\put(140,33){\makebox(0,0){-1}}
\put(110,-35){\makebox(0,0){{\large$D_{2,2}$}}}
\put(140,-33){\makebox(0,0){-1}}
\put(225, 5){\line( 1, -3){23}}
\put(225, -5){\line(1, 3){23}}
\put(220,35){\makebox(0,0){{\large$D_{3,1}$}}}
\put(250,33){\makebox(0,0){-1}}
\put(220,-35){\makebox(0,0){{\large$D_{3,2}$}}}
\put(250,-33){\makebox(0,0){-1}}
\end{picture}
\end{center}
\caption[x]{\footnotesize An example of a $\mathbb{C}^*$-surface $B'$
given by
blowing up the toric surface $B$ from Figure~\ref{f:example-monomials}
at a
point.}
\label{f:st-example}
\end{figure}
Now, consider blowing up a point on a third fiber at $z = 1$ to form
the $\mathbb{C}^*$-base $B'$ shown in Figure~\ref{f:st-example}. Blowing up at
the point $(z, w) = (1, 0)$ imposes the condition that $f$ and $g$
must vanish to degrees 4 and 6 in $(z-1)$ and $w$. Consider for
example $f$. From the geometry depicted in
Figure~\ref{f:example-monomials} it is clear that the only
undetermined coefficients of $f$ at leading orders in $w$ are
\begin{equation}
f = c_{4, 0} z^4 + c_{3, 1} z^3w + c_{4, 1} z^4w + c_{5,1} z^5w
+ \sum_{n = 2}^{6} c_{n,2} z^nw^2 + \cdots \,.
\end{equation}
The condition that this vanishes to degree 4 in $(z-1), w$
forces all coefficients $c_{n, 0}$ and $c_{n,1}$ to vanish, imposes
two constraints at $m = 2$, and one constraint at $m = 3$, totaling 7 new constraints. Similarly, coefficients of $g$ experience
14 further constraints, for a total of 21 constraints. This reduces
the number of Weierstrass monomials to $W = 115$. The equations
\eq{eq:hyper-counting} and \eq{eq:anomaly} still apply, where now
$w_{\rm aut} = 1$ since one of the toric automorphisms is broken by
the reduction to $\mathbb{C}^*$ structure, and $N_{-2} = 1$ because the
blow-up at $z = 1, w = 0$ changes $n_0$ from $-2$ to $-3$.
With one new $-3$ curve the number of vector multiplets becomes $16$,
and the resolved generic elliptically fibered Calabi-Yau $X'$ over $B'$ has
Hodge numbers
\begin{equation}
h^{1, 1} (X') = 12, \;\;\;\;\;h^{2, 1} (X') = 114 \,.
\end{equation}
In this way we can determine the Hodge numbers of the Calabi-Yau
threefolds associated with generic elliptic fibrations over
all $\mathbb{C}^*$-bases. When the additional fibers added are more
complicated, the conditions on $f, g$ at $z = z_i$ can be determined
by simply translating the conditions at $z = 0$ from the toric
picture.
The one remaining subtlety in this general picture is that for general
$\mathbb{C}^*$-bases some combinations of $-2$ curves must be treated
specially.\footnote{Thanks to David Morrison for discussions on this
point.} In particular, for
certain configurations of intersecting $-2$ curves associated with
Kodaira-type surface singularities there is a linear combination that
describes a degenerate genus one curve \cite{bhpv}. In these cases, the extra
deformation directions associated with the $-2$ curves are not
independent and the contribution from $N_{-2}$ to
\eq{eq:hyper-counting} is reduced by 1. The $-2$ curve configurations
of these types that appear in $\mathbb{C}^*$-bases are shown in
Figure~\ref{f:2-curves}.
These $-2$ curve configurations can be identified as those where an integral
linear combination of the $-2$ curves is a divisor with vanishing
self-intersection. For this to occur, the
weighting of any $-2$ curve $C_i$ must be 1/2 the total of the weightings of
the $-2$ curves that intersect $C_i$. Some simple combinatorics
shows that the configurations in Figure~\ref{f:2-curves} are the only
possible geometries satisfying this condition that have a single $-2$
curve that intersects more than two others.\footnote{One other class
of configurations satisfies this condition, the
type $I_b^*$ singularity, with two $-2$ curves
each intersecting three $-2$ curves and connected by a single chain
of $-2$ curves, but this cannot appear in a $\mathbb{C}^*$-base since the
chain associated with each fiber must contain at least one $-1$
curve, and in this case the chain connecting the two $-2$ curves with
triple branching would itself contain only $-2$ curves.}
Thus, for $\mathbb{C}^*$-bases that are not toric ($N \geq 3$), the formula
\eq{eq:hyper-counting}
is replaced by
\begin{equation}
H_{\rm neutral} = W-w_{\rm aut} + (N -3) +
N_{-2}
-G_1 \;\;\;\;\;
(\mathbb{C}^*)\,,
\label{eq:hyper-counting-general}
\end{equation}
where $w_{\rm aut} = 1 + {\rm max} (0, 1 + n_0, 1 + n_\infty)$,
$N_{-2}$ is the number of $-2$ curves,
and
$G_1$ is the number of $-2$ configurations of the types shown in
Figure~\ref{f:2-curves}.
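As a quick consistency check of this counting, \eq{eq:hyper-counting-general} can be evaluated directly for the two examples that appear in this section (an illustrative Python sketch only; all input values are taken from the text, and $h^{2,1} = H_{\rm neutral} - 1$ as in the worked examples):

```python
# Minimal check of the neutral hypermultiplet counting formula for
# C*-bases with N >= 3 (eq. hyper-counting-general in the text):
#   H_neutral = W - w_aut + (N - 3) + N_{-2} - G_1
def h_neutral(W, w_aut, N, N_minus_2, G_1):
    return W - w_aut + (N - 3) + N_minus_2 - G_1

# Blown-up base B' from the earlier example: W = 115, generic
# automorphism dimension w_aut = 1, N = 3, one -2 curve, no
# degenerate -2 configurations; h^{2,1}(X') = H_neutral - 1 = 114.
print(h_neutral(115, 1, 3, 1, 0))  # 115

# Base of eq. (mw-example) later in the text: W = 7, w_aut = 1,
# N = 3, nine -2 curves, one IV* configuration (G_1 = 1),
# giving h^{2,1} = 13.
print(h_neutral(7, 1, 3, 9, 1))    # 14
```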
\begin{figure}
\begin{center}
\begin{picture}(200,200)(- 100,- 200)
\thicklines
\put(-150, -40){\line(1, 0){120}}
\put(-85,-65){\makebox(0,0){(a) $I_0^* (\hat{D}_4)$}}
\put(-85,-50){\makebox(0,0){2}}
\put(-145, -45){\line(1, 4){13}}
\put(-148, -15){\makebox(0,0){1}}
\put(-35, -45){\line(1, 4){13}}
\put(-38, -15){\makebox(0,0){1}}
\put(-110, -45){\line(1, 4){13}}
\put(-113, -15){\makebox(0,0){1}}
\put(-70, -45){\line(1, 4){13}}
\put(-73, -15){\makebox(0,0){1}}
\put(30, -40){\line(1, 0){120}}
\put(95,-65){\makebox(0,0){(b) $IV^* (\hat{E}_6)$}}
\put(95,-50){\makebox(0,0){3}}
\put(35, -45){\line(1, 4){13}}
\put(35, 45){\line(1, -4){13}}
\put(32, -15){\makebox(0,0){2}}
\put(32, 20){\makebox(0,0){1}}
\put(90, -45){\line(1, 4){13}}
\put(90, 45){\line(1, -4){13}}
\put(87, -15){\makebox(0,0){2}}
\put(87, 20){\makebox(0,0){1}}
\put(145, -45){\line(1, 4){13}}
\put(145, 45){\line(1, -4){13}}
\put(142, -15){\makebox(0,0){2}}
\put(142, 20){\makebox(0,0){1}}
\put(-150, -180){\line(1, 0){150}}
\put(-70,-205){\makebox(0,0){(c) $III^* (\hat{E}_7)$}}
\put(-70,-190){\makebox(0,0){4}}
\put(-145, -185){\line(1, 4){10}}
\put(-148, -165){\makebox(0,0){2}}
\put(-75, -185){\line(1, 4){10}}
\put(-75, -125){\line(1, 4){10}}
\put(-75, -114){\line(1, -4){10}}
\put(-78, -165){\makebox(0,0){3}}
\put(-78, -135){\makebox(0,0){2}}
\put(-78, -100){\makebox(0,0){1}}
\put(-15, -185){\line(1, 4){10}}
\put(-15, -125){\line(1, 4){10}}
\put(-15, -114){\line(1, -4){10}}
\put(-18, -165){\makebox(0,0){ 3}}
\put(-18, -135){\makebox(0,0){2}}
\put(-18, -100){\makebox(0,0){1}}
\put(30, -180){\line(1, 0){150}}
\put(110,-205){\makebox(0,0){(d) $II^* (\hat{E}_8)$}}
\put(110,-190){\makebox(0,0){6}}
\put(35, -185){\line(1, 4){10}}
\put(32, -165){\makebox(0,0){3}}
\put(105, -185){\line(1, 4){13}}
\put(105, -95){\line(1, -4){13}}
\put(102, -155){\makebox(0,0){4}}
\put(102, -120){\makebox(0,0){2}}
\put(165, -185){\line(1, 4){8}}
\put(162, -168){\makebox(0,0){5}}
\put(165, -130){\line(1, -4){8}}
\put(162, -150){\makebox(0,0){4}}
\put(165, -140){\line(1, 4){8}}
\put(162, -120){\makebox(0,0){3}}
\put(165, -85){\line(1, -4){8}}
\put(162, -102){\makebox(0,0){2}}
\put(165, -95){\line(1, 4){8}}
\put(162, -75){\makebox(0,0){1}}
\end{picture}
\end{center}
\caption[x]{\footnotesize Configurations of $-2$ curves associated
with Kodaira-type surface singularities corresponding to degenerate
elliptic fibers. For these configurations, the number of fixed
moduli associated with $-2$ curves is reduced by one.
The numbers given are the weightings needed to give an elliptic curve
with vanishing self-intersection.
Labels correspond to Kodaira singularity type and associated Dynkin diagram.}
\label{f:2-curves}
\end{figure}
\subsection{Distribution of Hodge numbers}
\begin{figure}
\centering
\begin{picture}(200,240)(- 100,- 120)
\put(0,0){\makebox(0,0){\includegraphics[width=12cm]{hodgeoverlay.pdf}}}
\put(160,-98){\makebox(0,0){ $h^{1, 1}$}}
\put( -150,90){\makebox(0,0){ $h^{2, 1}$}}
\end{picture}
\caption[x]{\footnotesize Plot of Hodge numbers comparing $\mathbb{C}^*$
(shown in dark blue, largest dots),
Kreuzer-Skarke (slate blue, medium dots), and toric (green, smallest
dots).
Hodge numbers for $\mathbb{C}^*$-bases that are not also Hodge numbers for
toric (or NS-toric) bases are generally those with small $h^{1, 1},
h^{2, 1}$ with a small number of exceptions having larger Hodge
numbers, including 6 examples that are not found in the Kreuzer-Skarke
database.
}
\label{f:Hodge}
\end{figure}
We have computed the Hodge numbers for the generic threefolds over all
162,404 $\mathbb{C}^*$-bases, including those not strictly $\mathbb{C}^*$-bases coming
from blown up $\mathbb{C}^*$-bases with $-9, -10,$ and $-11$ curves on the
fiber chains. We find a total of 7,868 distinct pairs of Hodge numbers
$h^{1, 1}, h^{2, 1}$, including 344 Hodge number
combinations not found in
\cite{WT-Hodge} from the set of generalized toric bases. These Hodge
numbers are plotted and compared to the Kreuzer-Skarke database
\cite{Kreuzer-Skarke} and the Hodge numbers of threefolds associated
with toric bases in Figure~\ref{f:Hodge}. The
number of models as a function of the sum $h^{1, 1} + h^{2, 1}$ is
plotted in Figure~\ref{f:Hodge-h}.
Most of the new Hodge numbers that appear for $\mathbb{C}^*$-bases and not for
toric bases are in the
region of small Hodge numbers far from the boundary.
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{hodgesum.pdf}
\end{center}
\caption[x]{\footnotesize Distribution of threefolds coming from
generic elliptic fibrations over $\mathbb{C}^*$-bases as a function of the
sum of Hodge numbers.}
\label{f:Hodge-h}
\end{figure}
Note that just as for toric bases, as discussed in \cite{toric},
the set of Calabi-Yau manifolds that can be constructed over any given
$\mathbb{C}^*$-base $B$ can be very large. By tuning parameters in the
Weierstrass model over any given base, so that $f$ and $g$ vanish on
certain divisors to degrees less than $4, 6$, theories with many
different nonabelian gauge group factors can be constructed. Each
such construction gives a different Calabi-Yau threefold after the
singularities in the elliptic fibration are resolved. In this way,
the number of Hodge numbers associated with Calabi-Yau threefolds
fibered over any given $\mathbb{C}^*$-base can be quite large.
Explicit examples of such tunings that give Hodge numbers near the
boundary of the ``shield'' for threefolds over toric bases are
described in \cite{WT-Hodge, WT-Sam}. Similar constructions over
$\mathbb{C}^*$-bases would give a vast range of different Calabi-Yau threefold
constructions.
\subsection{Redundancies from $-2$ clusters}
\label{sec:redundancies}
One striking feature of the distribution of Hodge numbers is that
there are certain Hodge number combinations that are realized by the
threefolds associated with a large number of distinct $\mathbb{C}^*$-bases.
As the most extreme example, there are 1,861
different $\mathbb{C}^*$-bases with Hodge numbers $(43, 43)$. Many of these are in
fact just different realizations of the same Calabi-Yau threefold.
One principal source of these kinds of redundancies arises from the
appearance of clusters of $-2$ curves in the base that do not carry a
gauge group. As discussed previously, such $-2$ curves indicate
that a modulus of the geometry has been tuned. In general, we expect
that any base containing a cluster of $-2$ curves is just a special
limit of another base without such clusters. Thus, we can consider
the subset of $\mathbb{C}^*$-bases that do not contain any clusters of only
$-2$ curves. This reduces the number of bases to 68,798.
A graph of
the distribution of the numbers of bases without $-2$ clusters as a
function of $T$ is given in Figure~\ref{f:distribution-no2}.
\begin{figure}
\includegraphics[width=12cm]{Tdist_no2.pdf}
\caption[x]{\footnotesize The number of $\mathbb{C}^*$-bases associated
with different values of $T = h^{1, 1} (B) -1$, when only bases without clusters of
$-2$ curves that do not carry a gauge group are considered.}
\label{f:distribution-no2}
\end{figure}
The removal of $-2$ clusters removes a great deal of redundancy in
the list of threefolds.
In particular, in the reduced set of bases the extreme jump in the
distribution at $T = 25$ goes away. This is associated with the
removal of a large number of threefolds with Hodge numbers
$(43, 43)$. Looking at the detailed data shows that many of these
$(43, 43)$
models have a closely related structure.
There are 1,575 $\mathbb{C}^*$-bases that have the following
features:
\begin{itemize}
\item Two $(-12)$ clusters and a gauge algebra ${\mathfrak{e}}_8
\oplus{\mathfrak{e}}_8$
\item $T = 25$
\item $h^{1, 1} = 43, h^{2, 1} = 43$ \,.
\end{itemize}
In general, these bases are characterized by ${\mathfrak{e}}_8$ factors on the
sections ($n_{0, \infty} = -12$), and a set of chains containing
various combinations of $-2$ curves. Indeed, it is clear that there
are many ways to construct such $\mathbb{C}^*$-bases, by starting with a
given $\mathbb{F}_m$ and blowing up only points at $w = 0, \infty$. Such
$\mathbb{C}^*$-bases will always have chains of the form $(-1, -2, -2,
\ldots, -2, -1)$. If precisely 24 points are blown up, giving the
necessary factors on the sections, then any combination of chain
lengths satisfying
\begin{equation}
\sum_{i = 1}^{N} (k_i-1) = 24 = T-1
\end{equation}
will have the desired properties. The number of such bases is just
the number of ways of partitioning the 24 blow-ups into a sum of
integers, $p (24) = 1,575$, precisely the number of bases found with
the above features. Of these partitionings, 13 have toric
descriptions (partitions into one or two integers). All 1575
$\mathbb{C}^*$-bases in this set can be understood as limit points of the
same geometry, and are associated with the same smooth Calabi-Yau
threefold. This Calabi-Yau threefold, which has been encountered
previously in the literature (see {\it e.g.} \cite{Candelas-pr}),
seems to have a particularly high degree of symmetry and may be
interesting for other reasons.
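The partition count quoted above is easy to reproduce; a short dynamic-programming computation (purely illustrative) of the integer partition function gives:

```python
# Number of integer partitions p(n): count the ways to write n as a
# sum of positive integers (order irrelevant), matching the count of
# chain-length choices for the 24 blow-ups described above.
def partitions(n):
    p = [1] + [0] * n             # p[0] = 1 (the empty partition)
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p[n]

print(partitions(24))  # 1575, the number of (43, 43) bases found

# Partitions into one or two parts, i.e. the bases with toric
# descriptions: the single one-part partition plus 24 // 2 two-part ones.
print(1 + 24 // 2)     # 13
```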
Although many of the bases with clusters of $-2$ curves have the same
Hodge numbers as bases without such clusters, this is not universally
true. There are also 3,788 $\mathbb{C}^*$-bases with
$-2$ clusters that have no corresponding $\mathbb{C}^*$-base without such
clusters. An example is given by base $B$ with
\begin{eqnarray}
N = 4, & & \; n_0 = -2, n_\infty = -4\nonumber \\
{\rm chain}\ 1: & & (-2, -1, -4, -1, -3, -1)\nonumber \\
{\rm chains}\ 2-4: & & (-1, -1) \nonumber
\end{eqnarray}
These bases may be
limits of other base surfaces that are fine as complex surfaces but do
not have a description as a $\mathbb{C}^*$-surface.
While, as discussed above, bases with
$-2$ clusters generally correspond to limits of bases without such
clusters, and this amounts to a redundancy for {\em generic} elliptic
fibrations, keeping track of this information is often useful.
In particular, Weierstrass models over a base with $-2$ clusters can
be tuned to realize smooth resolved Calabi-Yau threefolds that cannot
be realized by tuned Weierstrass models over the more generic bases
without the $-2$ clusters. Thus, bases with $-2$ clusters must be
considered separately in any systematic or complete analysis of a set
of Calabi-Yau threefolds; examples of this arise in
\cite{WT-Sam}.
In terms of the physical F-theory models that can be constructed from
these bases, the different $-2$ configurations and the distinct
configurations of tuned gauge groups that can be realized over them
correspond to distinct classes of 6D supergravity theories with
distinct gauge group, matter, and dyonic string lattice structure.
Thus, in a full consideration of 6D supergravity theories, it would be
necessary to include all of the distinct base choices with Hodge
numbers (43, 43) as initial points for tuned models with different
sets of Hodge numbers.
\subsection{Calabi-Yau threefolds with new Hodge numbers}
\label{sec:Hodge}
Of the roughly 160,000
$\mathbb{C}^*$-bases, we have found precisely 6 that give rise to generic
elliptic fibrations with Hodge numbers that are not found in the
Kreuzer-Skarke database. The simplest such base has
Hodge numbers
$h^{1, 1} = 56, h^{2, 1} = 2$, and the structure
\vspace*{-0.05in}
\begin{eqnarray}
N = 3, & &
n_0 = -5, n_\infty = -6, \label{eq:new-1}
\\
{\rm chain}\ 1: & & (-1, -3, -1, -3, -1) \nonumber\\
{\rm chain}\ 2: & & (-1, -3, -2, -1, -5, -1, -3, -1)\nonumber\\
{\rm chain}\ 3: & & (-1, -3, -2, -2, -1, -6, -1, -3, -1) \nonumber
\end{eqnarray}
The other 5 bases that give new Calabi-Yau threefolds are listed in
Appendix \ref{sec:new-threefolds}.
\subsection{Calabi-Yau threefolds with nontrivial Mordell-Weil rank}
\label{sec:Mordell-Weil}
There are 13 $\mathbb{C}^*$-bases in which the rank of the Mordell-Weil group
is nonzero. This is determined, as described above, by using the
monomial count and the anomaly equation to independently determine
$H_{\rm neutral}$ and $H_{\rm neutral} -V_{\rm abelian}$
for each of the $\mathbb{C}^*$-bases. The bases where these two quantities
differ are those
for which the Mordell-Weil rank is nonzero.
The elliptically fibered Calabi-Yau threefolds over these
bases have multiple linearly independent sections in a group of rank
$r$. This gives rise to $r$ abelian $U(1)$ gauge fields in the
corresponding 6D supergravity theory.
For these 13 bases, therefore, the Mordell-Weil rank is forced to be
nontrivial even for a completely generic elliptic fibration. This is
different from the situation for toric bases, where a complete
analysis of all toric bases using the anomaly condition
\eq{eq:anomaly} confirmed that there are no toric bases over which the
generic elliptic fibration has a nontrivial Mordell-Weil rank.
An example of a base with nonzero Mordell-Weil rank is given by
the following base with
Hodge numbers
$h^{1, 1} = 25, h^{2, 1} = 13$:
\vspace*{-0.05in}
\begin{eqnarray}
N = 3, & &
n_0 = -1, n_\infty = -2, \label{eq:mw-example}
\\
{\rm chain}\ 1: & & (-2, -1, -2) \nonumber\\
{\rm chain}\ 2: & & (-4, -1, -2, -2, -2)\nonumber\\
{\rm chain}\ 3: & & (-4, -1, -2, -2, -2) \nonumber
\end{eqnarray}
The Hodge number $h^{1, 1}$ can be computed from this base using
\eq{eq:h11}, and $T = 11$ from \eq{eq:t-equation}, where ${\cal G}
={\mathfrak{so}}(8)\oplus{\mathfrak{so}} (8)$ from the two $-4$ curves, so $h^{1, 1} (X) = T + 2
+ 8 + r = 21 + r$, where $r$ is the rank of the Mordell-Weil group.
Using the method of \S\ref{sec:neutral}, the number of Weierstrass
monomials can be computed to be $W = 7$. The dimension of the
automorphism group is the generic $w_{\rm aut} = 1$, and $N_{-2} = 9$
is the number of $-2$ curves, with
$G_1 = 1$ as described at the
end of \S\ref{sec:neutral} since the $-2$ curves connected to
$D_\infty$ have the $IV^* (\hat{E}_6)$ form from Figure~\ref{f:2-curves}.
It follows then from
\eq{eq:hyper-counting-general} that $H_{\rm neutral} = 14$, so $h^{2, 1} =
13$. Comparing with \eq{eq:anomaly}, we have
\begin{equation}
V= H_{\rm neutral} + 29T-273 = 60 = 56 + r\,.
\end{equation}
Thus, this base has a generic elliptic fibration with Mordell-Weil
rank $r = 4$, and $h^{1, 1} = 25$.
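The arithmetic in this example is short enough to spell out explicitly (a sketch; the only inputs beyond the text are $\dim {\mathfrak{so}}(8) = 28$ and ${\rm rank}\,{\mathfrak{so}}(8) = 4$):

```python
# Arithmetic behind the Mordell-Weil example: the gravitational
# anomaly condition V = H_neutral + 29 T - 273, with nonabelian
# gauge algebra so(8) + so(8).
T, H_neutral = 11, 14
V = H_neutral + 29 * T - 273
print(V)                 # 60

V_nonabelian = 2 * 28    # dim(so(8)) + dim(so(8)) = 56
r = V - V_nonabelian     # Mordell-Weil rank
print(r)                 # 4

h11 = T + 2 + 2 * 4 + r  # h^{1,1}(X) = T + 2 + rank(G) + r
print(h11)               # 25
```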
The 13 examples of bases with Mordell-Weil groups having nonzero
rank are listed in Appendix~\ref{sec:appendix-abelian}.
The appearance of such bases, while perhaps surprising, is not
completely unprecedented. It is known that the Schoen manifold, a
class of elliptically fibered Calabi-Yau threefolds constructed from a
fiber product of rational elliptic surfaces, generically has
Mordell-Weil rank 9 \cite{Shoen} (see {\it e.g.} \cite{bopr} for a
physics application of this). In fact, all the $\mathbb{C}^*$-bases we have
identified that give Calabi-Yau threefolds with enhanced Mordell-Weil
rank can be related to special limits and blow-ups of the Schoen
manifold\footnote{Thanks to David Morrison for discussions on this
point.}; this connection will be described in further detail
elsewhere \cite{mpt}.
For the bases with nonzero Mordell-Weil rank, an explicit description
of the Weierstrass model can be used to make the extra sections
manifest.
As an example, consider the base
\begin{eqnarray}
N = 4, & &
n_0 = -6, n_\infty = - 6, \label{eq:mw-example-2}
\\
{\rm chain}\ 1: & & (-1, -3, -1, -3, -1) \nonumber\\
{\rm chain}\ 2: & & (-1, -3, -1, -3, -1) \nonumber\\
{\rm chain}\ 3: & & (-1, -3, -1, -3, -1) \nonumber\\
{\rm chain}\ 4: & & (-1, -3, -1, -3, -1) \nonumber
\end{eqnarray}
This base has $T = 17$, $h^{1, 1}= 51, h^{2, 1}= 3$, and Mordell-Weil
rank $r = 4$. An explicit computation of the monomials in the
Weierstrass model gives (placing the extra two fibers at $z = 1, 2$)
\begin{eqnarray}
f & = & Aw^4z^2(z-1)^2(z-2)^2 \label{eq:11-Weierstrass}\\
g & = & Bw^4z^6(z-1)^6(z-2)^6 + Cw^6z^3(z-1)^3(z-2)^3 + Dw^8 \nonumber \,,
\end{eqnarray}
where $A, B, C, D$ are free complex constants ($W = 4$). A nontrivial
section can be associated with a factorization of
the Weierstrass equation $y^2 = x^3 + fx + g$ into the form
\cite{Morrison-Park}
\begin{equation}
(y-\alpha) (y + \alpha) = (x-\lambda) (x^2 + \lambda x-\mu)
\end{equation}
where
\begin{eqnarray}
f & = & -\mu -\lambda^2\\
g & = & \lambda \mu + \alpha^2 \,.
\end{eqnarray}
The Weierstrass coefficients from (\ref{eq:11-Weierstrass}) can take
this form if
\begin{eqnarray}
\lambda & = & aw^2z(z-1)(z-2)\\
\mu & = & bw^4z^2(z-1)^2(z-2)^2\\
\alpha & = & cw^2z^3(z-1)^3(z-2)^3 + dw^4
\end{eqnarray}
where $a, b, c, d$ satisfy
\begin{eqnarray}
A & = & -a^2 -b\\
B & = & c^2\\
C & = & ab + 2cd\\
D & = & d^2\,.
\end{eqnarray}
For given generic values of $A$-$D$ there are 12 solutions for
$a$-$d$. This can be seen by noting that the equations for $B, D$
each have two solutions for $c, d$, and the other two equations
combine to form a cubic for $a$, which has 3 independent solutions.
These twelve solutions represent 4 independent generators of the
Mordell-Weil group. Solutions with $\alpha \rightarrow -\alpha$
correspond to sections $s, -s$ that add to 0. The three solutions for
the cubic also contain one linear dependence in the space of sections,
so that the total number of independent sections is 4, matching the
computed Mordell-Weil rank.
A similar computation can be carried out for the other bases
with enhanced Mordell-Weil rank, though the details are more
complicated and computing the number of independent sections can be
more difficult in other cases. A more detailed analysis of these
models with enhanced Mordell-Weil rank is left to future work.
\section{Conclusions}
\label{sec:conclusions}
In this paper we have initiated a systematic study of a class of
geometries for the bases of elliptically fibered Calabi-Yau threefolds
that goes beyond the framework of toric geometry widely used in
previous work.
We have systematically constructed all smooth surfaces that admit a
single $\mathbb{C}^*$ action and can arise as bases of a Calabi-Yau threefold,
and we have analyzed the properties of these geometries. The 162,404
bases we have explicitly constructed include all $\mathbb{C}^*$-bases and also
a more general class built from $\mathbb{C}^*$-bases containing $-9, -10,$ and
$-11$ curves on which points must be blown up to form $-12$ curves in
bases that do not have a $\mathbb{C}^*$ action.
The bases we have considered can be used for compactification of
F-theory to six dimensions. We have found that the physical
properties of the resulting six-dimensional supergravity theories are
similar in nature and in distribution
to compactifications on toric bases that were
studied earlier. In particular, the $\mathbb{C}^*$-bases with relatively
large numbers $T$ of tensor multiplets give theories with gauge
algebras that are dominated by summands of the form ${\mathfrak{e}}_8 \oplus{\mathfrak{f}}_4
\oplus 2 ({\mathfrak{g}}_2 \oplus{\mathfrak{su}} (2))$, from the same types of chains that
give this structure in the toric case. The largest value of $T$ for
theories with $\mathbb{C}^*$-bases that are not actually toric is $171$, lower
than the largest known value of $T = 193$ that occurs for a toric base
(in both cases, these are examples of bases containing $-11$ curves
that must be blown up to give bases outside the $\mathbb{C}^*$/toric
framework). The overall ``shield'' structure of the set of Hodge
numbers computed by Kreuzer and Skarke based on toric constructions,
the boundary of this region explained in \cite{toric}, and geometric
patterns identified in \cite{toric, Candelas-cs} are essentially
unchanged with the addition of the large number of more general
non-toric $\mathbb{C}^*$-base configurations.
The absence of branching or loop structures in the
$\mathbb{C}^*$-bases that make possible higher values of $T$ supports the
conclusion that $T \leq 193$ is an absolute bound across all bases, as
was argued heuristically in \cite{toric}.
One interesting result of this analysis is that the number of
additional bases added by extending the toric construction to include
the more general class of bases admitting a $\mathbb{C}^*$ action does not
produce a wild increase in the number of possible bases. As more
points are blown up, the set of geometries is generally controlled by
linear structures along the fiber, and the possibility of branching
does not lead to a combinatorial explosion in intersecting divisor
structures. This suggests that a systematic classification of all
smooth bases for elliptically fibered Calabi-Yau threefolds, even
without a single $\mathbb{C}^*$ action, may be computationally tractable.
Such a classification would be quite challenging, since the
intersection structure can become quite complicated -- particularly
for $T \geq 9$ where there can be an infinite number of distinct $-1$
curves on the bases, such as occurs for dP$_9$. Nonetheless, by using the
method of non-Higgsable clusters to characterize possible geometries
it may be possible to get a handle on this problem. We leave this as
a challenge for future work.
We have analyzed the Hodge structure of the smooth Calabi-Yau
threefolds associated with generic elliptic fibrations over all the
$\mathbb{C}^*$-bases. While the Hodge numbers are all within the boundaries
defined by toric bases and the Kreuzer-Skarke ``shield'' shape, we
have identified a number of Calabi-Yau threefolds with novel
properties. There are 6 new threefolds that have Hodge numbers
that we believe have not been previously identified, as well as 13
threefolds in which the Mordell-Weil rank of the elliptic fibration is
nontrivial, corresponding to non-Higgsable $U(1)$ factors in the
corresponding 6D supergravity theory.
The basic approach to constructing a more general class of base
manifolds for elliptic fibrations and F-theory using spaces with
reduced numbers of $\mathbb{C}^*$ actions should be possible for base
threefolds and fourfolds
as well, giving rise to geometries for compactification of F-theory
and M-theory to dimensions from 5 to 2. The analysis of these
constructions becomes significantly more complicated in lower
dimensions, however. In particular, for F-theory compactifications to
four dimensions on elliptically-fibered Calabi-Yau fourfolds with
threefold bases, the analysis of bases with a $\mathbb{C}^*\times\mathbb{C}^*$ or
$\mathbb{C}^*$ action would be much more difficult than for the class of
$\mathbb{C}^*$-surfaces considered here.
One difficulty in analyzing such constructions is the absence of a
clear analogue to the anomaly condition \eq{eq:anomaly} for 6D
theories. The absence of such a condition makes it harder to compute
the Mordell-Weil group in a general 4D case, and hence more difficult to
precisely identify the Hodge numbers of the elliptically-fibered
fourfold.
While some progress was made in \cite{Grimm-Taylor} in identifying 4D
parallels to the 6D anomaly condition and related structures, more
work is needed to have a systematic approach to analyzing F-theory
base spaces with reduced toric symmetry in the 4D case.
\vspace*{0.1in}
{\bf Acknowledgements}: We would like to thank Lara Anderson, Philip
Candelas, Mboyo Esole, Ilarion Melnikov, David Morrison, Daniel Park,
Tony Pantev, and Yinan Wang for helpful discussions. This research was supported
by the DOE under contract \#DE-FC02-94ER40818, and was also supported
in part by the National Science Foundation under Grant
No. PHY-1066293. WT would like to thank the Aspen Center for Physics
for hospitality during part of this work.
\section{Introduction}
The primary intent of the ongoing nuclear collision programmes at
the Relativistic Heavy Ion Collider (RHIC) and Large Hadron Collider (LHC) energies
is to create a new state of matter called Quark Gluon Plasma (QGP).
The bulk properties of this state are governed by the light quarks ($q$) and gluons ($g$).
The heavy quarks [HQs\,$\equiv$ charm (c) and beauty (b)]
are considered as efficient tools to probe the early
state of the system~(see \cite{hfr,Rapp:2008tf,Averbeck:2013oga} for a review)
mainly for the following reasons:
(i) HQs are produced at a very early stage;
(ii) The probability of creation and annihilation of the
HQs during the evolution of the fireball is small.
Hence the HQs can witness the entire evolution of
the system as they are created early and
survive the full evolution without being annihilated and/or created.
The data on the suppression of the charm quark at large momentum
($R_{AA}(p_T)$) ~\cite{stare,phenixelat,phenixe,alice}
and their elliptic flow ($v_2$)~\cite{phenixelat,alicev2} have been analyzed
by several authors (see {\it e.g.} \cite{hfr,Rapp:2008tf,Averbeck:2013oga} and references therein)
to characterize the system formed in HIC at RHIC and LHC collisions.
It has been argued that the thermalization times of
HQs are larger than those of the light quarks and gluons~\cite{shuryak,prlja}
by a factor $\sim M/T$~\cite{moore}, where $M$ is the mass of the HQs and $T$
is the temperature of the medium. Therefore, the evolution of
the HQs in the thermal bath of light quarks and gluons can be
treated within the ambit of Brownian motion, although a detailed investigation
of this problem within the framework of Boltzmann equation has revealed that
the evolution of charm quarks as a Brownian particle requires some corrections~\cite{fs}
(see also ~\cite{Molnar:2006ci,gossiauxv2,gre,you,Uphoff:2014cba,Song:2015sfa}).
Several theoretical attempts have been made
to study the evolution of the HQs within the framework of
Fokker Plank equation
~\cite{hfr,moore,rappv2,hvh,hiranov2,cmko,Das,alberico,jeon,bass,rappprl,ali,hees,qun,lambda,Juan_RAA,Das:2015ana}.
However, the role of the pre-equilibrium phase has been ignored in these works. An attempt
has been made in the present work to study this role. This is important because the
HQs are produced in the hard collisions of the colliding partons of the nuclei and
they inevitably interact with the pre-equilibrium phase of the bulk matter.
The effect of the pre-equilibrium phase might be more significant for low-energy nuclear
collisions: for example, in the case of Au+Au collisions at RHIC energy
simulations show that equilibration is achieved approximately within $1$ fm/c,
while the lifetime of the QGP phase turns out to be about $5$ fm/c \cite{Ruggieri:2013bda,Ruggieri:2013ova};
hence the lifetime of the out-of-equilibrium phase is approximately $20\%$ of the
total lifetime of the QGP.
For LHC collision conditions, the lifetime of the QGP phase is
about 10 fm/c and the duration of the pre-equilibrium phase is
about 0.4 fm/c. Although this may suggest a dwindling role of the pre-equilibrium phase
for collisions at higher energies, estimates of drag and diffusion coefficients of HQs
done in this work indicate non-negligible contributions
of the pre-equilibrium phase.
The motivation of this work is to estimate the drag and diffusion coefficients of the HQs
interacting elastically with the non-equilibrated gluons constituting
the medium.
The paper is organized as follows. In the next section we discuss the formalism
used to evaluate the drag and diffusion coefficients of the heavy quarks in the
pre-equilibrium stage. In Section III we summarize the
out-of-equilibrium initial conditions we implement in the calculations.
Section IV is devoted to presenting the results and
Section V contains the summary and discussions.
\section{Formalism}
The Boltzmann Transport Equation (BTE) describing the evolution of the HQs in the pre-equilibrated gluonic system can be written as:
\bea
\left[\frac{\partial}{\partial t}+\frac{\vec{p}}{E}\cdot\frac{\partial}{\partial \vec{x}}
+\vec{F}\cdot\frac{\partial}{{\partial \vec{p}}}
\right] f(\vec{x},\vec{p},t)=\left[\frac{\partial f}{\partial t}\right]_{collisions}~.
\label{boltzmann}
\eea
For binary interactions the collision integral appearing in the right-hand
side of the BTE can be written as:
\be
\left[\frac{\partial f}{\partial t}\right]_{collisions}= \int d^{3}\vec{k}[w(\vec{p}+\vec{k},\vec{k})
f(\vec{p}+\vec{k})-w(\vec{p},\vec{k})f(\vec{p})].
\label{BTE}
\ee
where $w(\vec{p},\vec{k})$ is the collision rate, which encodes the change of the HQ momentum
from $\vec{p}$ to $\vec{p}-\vec{k}$, and is given by
\be
w(\vec{p},\vec{k})=g_G\int\frac{d^3q}{(2\pi)^3}f^\prime(q)v\sigma_{p,q\rightarrow p-k,q+k}
\label{eq:omega1}
\ee
where $f^\prime$ is the phase space distribution of the particles in the bulk,
$v$ is the relative velocity between the two collision partners,
$\sigma$ denotes the cross section and $g_G$ is the statistical
degeneracy of the particles in the bulk.
In the soft-scattering approximation~\cite{BS}, the integro-differential
Eq.~\ref{BTE} can be reduced to the Fokker Planck equation:
\be
\frac{\partial f}{\partial t}= \frac{\partial}{\partial p_{i}}\left[A_{i}(\vec{p})f+\frac{\partial}{\partial p_{j}}[B_{ij}
(\vec{p})f]\right]~~,
\label{landaukeq}
\ee
where the kernels are defined as
\be
A_{i}= \int d^{3}\vec{k}w(\vec{p},\vec{k})k_{i}~~,
\label{eqdrag}
\ee
and
\be
B_{ij}= \frac{1}{2} \int d^{3}\vec{k}w(\vec{p},\vec{k})k_{i}k_{j}~.
\label{eqdiff}
\ee
In the limit $|\vec{p}|\rightarrow 0$, $A_i\rightarrow \gamma p_i$
and $B_{ij}\rightarrow D\delta_{ij}$ where $\gamma$ and $D$
are the drag and diffusion coefficients
respectively.
We notice that, under the assumption of soft collisions, the non-linear integro-differential equation, Eq.~(\ref{boltzmann}),
reduces to a much simpler linear partial differential equation, Eq.~(\ref{landaukeq}),
provided the function $f^\prime (q)$ is known.
Now we evaluate $\gamma$ and $D$ for HQs interacting elastically
with the bulk particles in the pre-equilibrium phase that appears in HICs before thermalization.
The $\gamma$ for the process $HQ(p_1) + g(p_2) \rightarrow HQ(p_3) + g(p_4)$ ($p_i$'s are
the respective momenta of the colliding particles)
can be expressed in terms of $A_i$ ~\cite{BS} (see also ~\cite{DKS,vc,Berrehrah:2014tva}) as:
\begin{equation}
\gamma=p_iA_i/p^2~,
\end{equation}
where $A_i$ is given by
\bea
A_i&=&\frac{1}{2E_{p_1}} \int \frac{d^3p_2}{(2\pi)^3E_{p_2}} \int \frac{d^3p_3}{(2\pi)^3E_{p_3}}
\int \frac{d^3p_4}{(2\pi)^3E_{p_4}} \nonumber \\
&&\frac{1}{g_{HQ}}
\sum {\overline {|{\cal{M}}|^2}} (2\pi)^4 \delta^4(p_1+p_2-p_3-p_4)
\nonumber \\
&&{f}(p_2)\{1\pm f(p_4)\}[(p_1-p_3)_i] \equiv \langle \langle
(p_1-p_3)_i\rangle \rangle . \nonumber \\
\label{eq1}
\eea
where $g_{HQ}$ denotes the statistical degeneracy of the HQ,
$f(p_2)$ is the momentum distribution of the incident particles,
$1 \pm f(p_4)$ is the
momentum distribution (with Bose enhancement or Pauli suppression)
of the final state particles in the bath and
${\overline {|{\cal{M}}|^2}}$ represents the square modulus of the spin averaged
invariant amplitude for the elastic process, evaluated here using pQCD.
The drag coefficient in Eq.~(\ref{eq1}) is the measure of the average of the momentum transfer, $p_1-p_3$,
weighted by the interaction through $\overline{|{\cal{M}}|^2}$.
Similarly the momentum diffusion coefficient, $D$, can be defined as:
\begin{equation}
D=\frac{1}{4}\left[\langle \langle p_3^2 \rangle \rangle -
\frac{\langle \langle (p_1\cdot p_3)^2 \rangle \rangle }{p_1^2}\right]
\label{eq3}~,
\end{equation}
and it represents the averaged square momentum transfer (variance) through the interaction process mentioned above.
The following general expression has been used to
evaluate the drag and diffusion coefficients numerically with an appropriate choice of $Z(p)$,
\bea
&&\ll Z(p)\gg=\frac{1}{512\pi^4} \frac{1}{E_{p_1}} \int_{0}^{\infty}
\frac{p_2^2 dp_2\, d(\cos\chi)}{E_{p_2}}
\nonumber \\
&&~~~f(p_2)\{ 1\pm f(p_4)\}\frac{\lambda^{\frac{1}{2}}(s,m_{p_1}^2,m_{p_2}^2)}{\sqrt{s}}
\int_{-1}^{1} d(\cos\theta_{c.m.})
\nonumber \\
&&~~~~~~~~~~~~~~~~~~
\frac{1}{g_{HQ}} \sum {\overline {|{\cal{M}}|^2}} \int_{0}^{2\pi} d\phi_{c.m.} Z(p) \nonumber \\
\label{transport}
\eea
where $\lambda(s,m_{p_1}^2,m_{p_2}^2)=\{s-(m_{p_1}+m_{p_2})^2\}\{s-(m_{p_1}-m_{p_2})^2\}$.
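As a quick numerical sanity check (a standalone sketch, not part of the paper's numerics), the factorized triangle function above can be compared with its familiar expanded polynomial form:

```python
import math

def kallen_factorized(s, m1, m2):
    # lambda(s, m1^2, m2^2) = {s - (m1+m2)^2}{s - (m1-m2)^2}, as in the text
    return (s - (m1 + m2)**2) * (s - (m1 - m2)**2)

def kallen_expanded(s, m1, m2):
    # equivalent polynomial form: s^2 + m1^4 + m2^4 - 2(s m1^2 + s m2^2 + m1^2 m2^2)
    a, b = m1**2, m2**2
    return s**2 + a**2 + b**2 - 2.0 * (s * a + s * b + a * b)

# example: a charm quark (illustrative mass 1.3 GeV) scattering off a massless gluon
s = 10.0  # GeV^2
lam = kallen_factorized(s, 1.3, 0.0)
# lambda^(1/2)/sqrt(s) is the flux factor entering the transport integral
flux = math.sqrt(lam) / math.sqrt(s)
```

For a massless collision partner the flux factor reduces to $(s-m_1^2)/\sqrt{s}$, as the code reproduces.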
\section{Initial conditions}
In most of the earlier works the distribution functions
in Eq.~(\ref{transport}) are taken as
equilibrium distributions, {\it i.e.} Fermi-Dirac for quarks and
anti-quarks and Bose-Einstein for gluons.
The transport coefficients then correspond to
the motion of the HQs in a thermalised medium assumed to be formed
within the time scale $\sim 1$ fm/c.
In this article instead, we will evaluate the drag and diffusion coefficients of HQ propagating through a
non-thermal gluonic system.
We achieve this by substituting the distribution functions
in Eq.~(\ref{transport}) by pre-equilibrium distribution of gluons to be specified later.
We choose the normalization of the non-equilibrium distributions
in such a way that the gluon densities for the chosen distribution
and for the thermal distribution at the initial temperature coincide.
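This matching condition is straightforward to implement numerically. The sketch below is illustrative only: the Gaussian shape and the value of $Q_s$ are placeholders, not the CYM/KLN spectra used in the text. It fixes the overall constant of a non-equilibrium distribution so that its gluon number density equals the thermal Bose-Einstein density at $T_i=0.34$ GeV:

```python
import math

T_I = 0.34   # GeV, RHIC initial temperature (from the text)
G_G = 16     # gluon degeneracy
Q_S = 1.0    # GeV, illustrative saturation scale (assumption, not from the text)

def density(f, pmax=20.0, n=20000):
    """n = g_G * int d^3p/(2pi)^3 f(p) for an isotropic f, by the midpoint rule."""
    dp = pmax / n
    total = 0.0
    for i in range(n):
        p = (i + 0.5) * dp
        total += p * p * f(p) * dp
    return G_G * total / (2.0 * math.pi**2)   # 4*pi/(2*pi)^3 = 1/(2*pi^2)

# thermal gluon density, numerically and in closed form g_G * zeta(3) * T^3 / pi^2
n_thermal = density(lambda p: 1.0 / (math.exp(p / T_I) - 1.0))
n_closed = G_G * 1.2020569 * T_I**3 / math.pi**2

# toy out-of-equilibrium shape (illustrative Gaussian, NOT the CYM/KLN spectrum)
shape = lambda p: math.exp(-p * p / Q_S**2)
norm = n_thermal / density(shape)   # normalization that matches the gluon number
```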
The initial temperature, $T_i$ is taken as
$0.34$ GeV for RHIC and $0.51$ GeV for LHC collision conditions.
These initial temperatures are chosen from simulations~\cite{Ruggieri:2013bda,Ruggieri:2013ova,Niemi:2011ix}
done to reproduce $v_2$ and spectra of light hadrons, as well as the $R_{AA}$ and
$v_2$ of heavy mesons~\cite{fs,Das:2015ana}.
We compare the drag and diffusion coefficients of HQs in the pre-equilibrium era with those
in the thermalized era at the initial temperatures just mentioned.
We now specify the out-of-equilibrium gluon distribution used in this work for evaluating HQs drag and diffusion
coefficients.
According to the general understanding of the dynamics of the pre-equilibrium stage,
the initial strong gluon fields (the glasma) shatter into gluon quanta on a time scale
of the order of the inverse of the saturation scale $Q_s$;
therefore any model of the pre-equilibrium stage will include a $Q_s$.
The first one we consider
is the classical Yang-Mills (CYM) gluon spectrum (see~\cite{Schenke:2013dpa} for details),
which assumes the initial gluon fields can be expanded in terms
of massless gluonic excitations.
Beside CYM we also consider the model known as the factorized KLN
model~\cite{Drescher:2006ca,Hirano:2009ah}, which includes the saturation scale
in an effective way through the unintegrated gluon distribution function which,
for a nucleus $A$ participating in an $A+B$ collision, reads:
\begin{equation}
\phi_A\left(x_A,\bm k_T^2,\bm x_\perp\right) =
\frac{1}{\alpha_s(Q_s^2)}\frac{Q_s^2}{\max(Q_s^2,\bm k_T^2)}~;
\end{equation}
a similar equation holds for nucleus $B$.
The momentum space gluon distribution is then given by
\begin{eqnarray}
\frac{dN}{dy d\bm p_T}&=&
\frac{{\cal N}}{p_T^2}\int d^2 x_T\int_0^{p_T}d\bm k_T
\alpha_s(Q^2)\nonumber\\
&&\times\phi_A\left(x_A,\frac{(\bm k_T + \bm p_T)^2}{4},\bm x_\perp\right)\nonumber\\
&&\times\phi_B\left(x_B,\frac{(\bm k_T - \bm p_T)^2}{4},\bm x_\perp\right)~,
\label{eq:3}
\end{eqnarray}
where ${\cal N}$ is an overall constant which is fixed by the multiplicity.
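A minimal numerical sketch of this building block follows, with a fixed coupling, a uniform transverse profile, the $x$-dependence dropped, the overall constant set to one, and an illustrative $Q_s^2$ — all simplifying assumptions not made in the text:

```python
import math

ALPHA_S = 0.3   # fixed coupling, for illustration only (the model runs alpha_s)
Q_S2 = 4.0      # GeV^2, illustrative saturation scale squared

def phi(kt2, qs2=Q_S2, alpha_s=ALPHA_S):
    # unintegrated gluon distribution: flat below Q_s^2, falling like 1/k_T^2 above
    return (1.0 / alpha_s) * qs2 / max(qs2, kt2)

def dN_dyd2pt(pt, n_kt=400):
    """Sketch of the p_T spectrum: k_T integral of alpha_s * phi_A * phi_B / p_T^2."""
    total, dkt = 0.0, pt / n_kt
    for i in range(n_kt):
        kt = (i + 0.5) * dkt
        total += dkt * ALPHA_S * phi((kt + pt)**2 / 4.0) * phi((kt - pt)**2 / 4.0)
    return total / pt**2
```

The spectrum is flat-coupling $\sim 1/(\alpha_s p_T)$ well below $Q_s$ and falls steeply above it, which is the qualitative "hard tail" behavior referred to later in the comparison with CYM.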
\begin{figure}[t!]
\begin{center}
\includegraphics[width=17pc,clip=true]{drag.eps}\hspace{2pc}~
\includegraphics[width=17pc,clip=true]{diffc.eps}\hspace{2pc}
\caption{Drag coefficient (left panel) and diffusion coefficient (right panel)
as a function of momentum for charm quark at RHIC energy.
The results corresponding to thermal gluons (kinetic equilibrium) and thermal quarks and
gluons (chemical equilibrium) are evaluated at a temperature 340 MeV.
}
\label{fig1}
\end{center}
\end{figure}
\section{Results and discussions}
We have evaluated
the drag and diffusion coefficients of HQs propagating through a system of out-of-equilibrium
gluons formed at the very early era of HIC at the RHIC and LHC
energies. The magnitudes of the transport coefficients evaluated in the pre-equilibrium era
are compared with those obtained in the equilibrium phase
(both kinetic and chemical) keeping the number of particles fixed
in both the cases as mentioned above.
The temperature dependence given in Ref.~\cite{zantow} has been used to estimate the value
of the strong coupling, $\alpha_s$ at $T=T_i$.
The Debye screening mass, $m_D=\sqrt{8\alpha_s(N_c+N_f)T^2/\pi}$,
for a system at temperature, $T$ with $N_c$ colors and $N_f$ flavors
has been used to shield the infra-red divergence associated with the $t$-channel
scattering amplitude. $m_D$ is estimated at $T=0.34$ GeV for RHIC energy and
at $T=0.51$ GeV for LHC energy.
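For orientation, this Debye mass formula is easy to evaluate; the snippet below uses an illustrative fixed $\alpha_s=0.3$ and $N_c=N_f=3$ (the paper instead takes $\alpha_s(T)$ from Ref.~\cite{zantow}, so these numbers are indicative only):

```python
import math

def m_debye(T, alpha_s, Nc=3, Nf=3):
    # m_D = sqrt(8 alpha_s (Nc + Nf) T^2 / pi), as quoted in the text
    return math.sqrt(8.0 * alpha_s * (Nc + Nf) * T * T / math.pi)

mD_rhic = m_debye(0.34, 0.3)   # T_i at RHIC, illustrative alpha_s
mD_lhc = m_debye(0.51, 0.3)    # T_i at LHC, same alpha_s for comparison
```

Note that at fixed $\alpha_s$ the screening mass simply scales linearly with $T$.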
For the pre-equilibrium and equilibrium systems the same values of $m_D$ and
$\alpha_s$ have been used.
Later in this section we will show results where $m_D$ is estimated
in a self-consistent way with the underlying distribution function.
In the left panel of Fig.~\ref{fig1} we plot the drag coefficient
of the charm quark as a function of momentum in the pre-equilibrium phase for the CYM and KLN
gluon distributions and compare the results with the equilibrated phase (both kinetic and chemical)
at $T=0.34$ GeV.
We find that the magnitude of the drag coefficient in the pre-equilibrium
phase is quite large, indeed comparable to the one in the kinetic equilibrium phase,
indicating
substantial amount of interaction of the charm quarks with the pre-equilibrated gluons.
We notice that $\gamma$ for the case of chemically equilibrated QGP (dotted line)
is smaller than the one of the purely gluonic system (solid line in Fig.~\ref{fig1}).
This can be understood by noting that,
to keep the total number of bath particles fixed, some of the gluons in the purely gluonic system have
to be replaced by quarks in the chemically equilibrated QGP.
Because gluons carry more color states than quarks, the cross section for $cg$ interactions
is larger than that for $cq$, which leads to a larger drag of HQs in a pre-equilibrated gluonic system than
in a chemically equilibrated QGP.
The CYM distribution gives a larger value of the drag coefficient
than the KLN distribution. The KLN provides a harder momentum distribution (compared to CYM)
and hence has a smaller difference with the HQ distribution. Therefore, the
momentum transfer between a gluon (with the KLN distribution) and the HQ is small. As the
drag coefficient is a measure of the momentum transfer weighted by the interaction strength,
the KLN gives rise to a lower drag than the one obtained with the CYM distribution.
The variation of $D$ with momentum for charm quarks has been depicted in
the right panel of Fig.~\ref{fig1}.
We find that the magnitude of the $D$ for CYM initial condition is similar
to the case where the gluons are kinetically equilibrated.
However, the $D$ for the KLN initial condition is larger than for CYM.
As mentioned before, the KLN has a harder momentum distribution than the
CYM, hence the KLN distribution
has a larger variance than CYM. Since the momentum diffusion
is a measure of the variance acquired through interaction with
the bulk, the charm diffusion coefficient is larger for the KLN distribution,
as reflected in the results displayed in Fig.~\ref{fig1}.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=17pc,clip=true]{DbyA.eps}\hspace{2pc}
\caption{$D/\gamma$ as a function of momentum for charm quark at RHIC energy.
The result corresponding to equilibrium cases are evaluated at a temperature 340 MeV.
}
\label{fig300}
\end{center}
\end{figure}
In Fig.~\ref{fig300}, we depict the diffusion to drag ratio, $D/\gamma$, as a function of momentum.
$D/\gamma$ can be used to quantify the deviation of the calculated values from the value
obtained by using the Fluctuation-Dissipation theorem (FDT)
(green line in the figure).
Since the KLN has a harder momentum distribution, the results obtained from the
KLN input deviate more from the FDT.
In this work, the dependence of the drag and diffusion coefficients on the variation of the
phase space distribution is addressed. To make the present study
more consistent, the effect of the phase space distribution on the dynamics through the
Debye screening mass has also been included. It is well known that the Debye mass depends on the
distribution functions as follows:
\be
m_D^2=\pi\alpha_s g_G \int \frac{d^3p}{(2\pi)^3} \frac{1}{p} (N_cf_g+N_ff_q)~.
\label{mdd}
\ee
It is interesting
to see how the results are affected when the equilibrium distribution
in Eq.~\ref{mdd} is replaced by the KLN or CYM distributions.
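As a consistency check of the thermal limit of this prescription (gluons only, $f_q=0$, with the measure $d^3p/(2\pi)^3$ and $g_G=16$, $N_c=3$ assumed), the integral can be evaluated numerically and compared with the closed form $m_D^2=4\pi\alpha_s T^2$; replacing the Bose distribution by the KLN or CYM input would then give the self-consistent $m_D$ used below:

```python
import math

ALPHA_S, G_G, N_C, T = 0.3, 16, 3, 0.34   # illustrative values

def md2_from_distribution(f, pmax=20.0, n=20000):
    """m_D^2 = pi * alpha_s * g_G * N_c * int d^3p/(2pi)^3 f(p)/p  (f_q = 0)."""
    dp = pmax / n
    integral = 0.0
    for i in range(n):
        p = (i + 0.5) * dp
        integral += p * f(p) * dp       # p^2 * (1/p) = p after angular integration
    integral /= 2.0 * math.pi**2        # 4*pi/(2*pi)^3
    return math.pi * ALPHA_S * G_G * N_C * integral

md2_thermal = md2_from_distribution(lambda p: 1.0 / (math.exp(p / T) - 1.0))
md2_closed = 4.0 * math.pi * ALPHA_S * T**2   # pi*alpha_s*g_G*N_c*T^2/12 for g_G=16, N_c=3
```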
\begin{figure}[t!]
\begin{center}
\includegraphics[width=17pc,clip=true]{drag_1.eps}\hspace{2pc}
\includegraphics[width=17pc,clip=true]{diffc_1.eps}\hspace{2pc}
\caption{Drag coefficient (left panel) and diffusion coefficient (right panel)
as a function of momentum for charm quark at RHIC energy.
Debye mass is computed self consistently using Eq.~\ref{mdd}.
The results corresponding to thermal gluons (kinetic equilibrium) and thermal quarks and
gluons (chemical equilibrium) are evaluated at a temperature 340 MeV. }
\label{fig3}
\end{center}
\end{figure}
Replacing the thermal distribution of gluons in Eq.~(\ref{mdd}) by
the KLN and CYM gluon distributions and setting $f_q=0$ for a gluonic system we estimate $m_D$ and use
this to calculate $\gamma$ and $D$.
The momentum variation of $\gamma$ and $D$ are displayed in Fig.~\ref{fig3} for charm quarks.
It is interesting to note that in this case
$\gamma$ is almost unaffected by the bulk distributions, {\it i.e.}
of KLN, CYM and thermal gluons. We find some difference between the aforementioned bulks and the
chemically equilibrated QGP, which is caused by the smaller number of gluons
in the latter case, as discussed earlier.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=17pc,clip=true]{drag_b_1.eps}\hspace{2pc}
\includegraphics[width=17pc,clip=true]{diffc_b_1.eps}\hspace{2pc}
\caption{Drag coefficient (left panel) and diffusion coefficient (right panel)
as a function of momentum for bottom quark at RHIC energy.
Debye mass is computed self-consistently using Eq.~\ref{mdd}.
The results corresponding to thermal gluons (kinetic equilibrium) and thermal quarks and
gluons (chemical equilibrium) are evaluated at a temperature 340 MeV. }
\label{fig5}
\end{center}
\end{figure}
The drag (left panel) and diffusion (right panel)
coefficients for $b$ quarks in the pre-equilibrium phase
have been displayed in Fig.~\ref{fig5}.
We notice that these coefficients are smaller for $b$ than $c$.
However, the qualitative variation of $b$ diffusion coefficient
with momentum is similar to $c$.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=17pc,clip=true]{drag_lhc_1.eps}\hspace{2pc}
\includegraphics[width=17pc,clip=true]{diffc_lhc_1.eps}\hspace{2pc}
\caption{Drag coefficient (left panel) and diffusion coefficient (right panel)
as a function of momentum for charm quark at LHC energy.
Debye mass is computed self-consistently using Eq.~\ref{mdd}.
The results corresponding to thermal gluons (kinetic equilibrium) and thermal quarks and
gluons (chemical equilibrium) are evaluated at a temperature 510 MeV. }
\label{fig7}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=17pc,clip=true]{drag_lhc_b_1.eps}\hspace{2pc}
\includegraphics[width=17pc,clip=true]{diffc_lhc_b_1.eps}\hspace{2pc}
\caption{Drag coefficient (left panel) and diffusion coefficient (right panel)
as a function of momentum for bottom quark at LHC energy.
Debye mass is computed self-consistently using Eq.~\ref{mdd}.
The results corresponding to thermal gluons (kinetic equilibrium) and thermal quarks and
gluons (chemical equilibrium) are evaluated at a temperature 510 MeV. }
\label{fig9}
\end{center}
\end{figure}
In the left panel of Fig.~\ref{fig7}, the momentum variation of the $\gamma$ of the $c$ quark
in the pre-equilibrium phase is depicted for LHC collision conditions.
This result is compared with the one computed for a QGP at temperature T=0.51 GeV.
Here the same values of the coupling for both the
equilibrium and pre-equilibrium systems have been used.
The value of $m_D$ is taken from Eq.~(\ref{mdd}) for all the cases. We
find that the magnitude of the drag
coefficient in the pre-equilibrium phase is similar to that obtained for the kinetic equilibrium case;
however, for a QGP the value of the drag is smaller because of the smaller number of gluons, as mentioned earlier.
In the right panel of Fig.~\ref{fig7} we show the variation of $D$ with $p$ for the $c$ quarks at LHC energy.
Again, the variation of the diffusion coefficient at LHC energy is qualitatively similar to
that at RHIC; as at RHIC, the diffusion at LHC conditions is larger for the KLN distribution.
Similarly the $\gamma$ and $D$ for $b$ quarks
are plotted in Fig.~\ref{fig9} for LHC collision conditions.
Qualitatively the variation of these coefficients with $p$
is similar to RHIC, although the quantitative values
at LHC collision conditions are larger than at RHIC, as expected owing
to the larger density and temperature.
\section{Summary and discussions}
In this work we have studied the momentum variation of the drag and diffusion coefficients of HQs
in the pre-equilibrium era of heavy ion collisions at RHIC and LHC energies.
The momentum distributions of the pre-equilibrated gluons have
been taken from the CYM and KLN formalisms.
This study is motivated by the fact that the effect
of the pre-equilibrium phase on the HQs suppression and elliptic flow
might be relevant for low-energy nuclear collisions.
For example, the simulations of Au+Au collision at RHIC energy
show that the equilibration is achieved approximately within a time scale of $1$ fm/c,
while the lifetime of the QGP phase turns out to be about $5$ fm/c \cite{Ruggieri:2013bda,Ruggieri:2013ova};
hence the system spends about $20\%$ of QGP life-time in the pre-equilibrium phase.
In the case of Pb-Pb collisions at LHC energy the equilibration time
is shorter and the lifetime of the QGP phase is longer, hence in this case we expect
a smaller effect of the pre-equilibrium phase.
We have compared the magnitudes of the transport coefficients computed
for equilibrated and pre-equilibrated system, keeping the number
of particles same in the two cases. We have found that
the magnitude of the transport coefficients in the pre-equilibrium phase is comparable to
the values obtained with a thermalized gluonic system.
Moreover, we have also found that the transport coefficients in the pre-equilibrium era are larger
than the ones obtained for a chemically equilibrated QGP system. This is due to the
fact that, for a fixed number of particles, the number of gluons is smaller
in the equilibrated QGP than in the pre-equilibrated gluonic system, and the $HQ+q$ cross section
is smaller than the $HQ+g$ cross section.
The results obtained in this work may have significant impact on the experimental
observables, for example on heavy mesons $R_{AA}$ and elliptic flow,
as well as on the suppression of single electron spectra originating from the decays
of heavy mesons and their elliptic flow. We will address these aspects in future works.
\vspace{2mm}
\section*{Acknowledgments}
We acknowledge B. Schenke and R. Venugopalan for kindly sending us the data
for the CYM spectrum. S.K.D, M.R and V.G acknowledge the support by the ERC StG under the QGPDyn
Grant n. 259684.
The singularity theorems of Penrose and Hawking~\cite{Penrose:1964wq,Hawking:1970zqf} state that under the assumption of the presence of matter satisfying physically reasonable energy conditions,
the existence of singularities is unavoidable in General Relativity (GR).
However, it is widely believed that such singularities are simply nonphysical objects which are created by classical theories of gravity, and hence they will be resolved if we can obtain complete quantum gravity in our future.
Bardeen~\cite{Bardeen:1968} proposed the first model of asymptotically flat, static and spherically symmetric black holes (BHs) with a regular center. Such a kind of BH is called a regular black hole (RBH) or a non-singular BH.
At first, this BH model was not obtained as an exact solution of the Einstein equation, but thereafter, Ay\'on-Beato and Garc\'ia~\cite{Ayon-Beato:2000mjt} showed that the Bardeen model can be seen as a solution of the Einstein equation coupled with a physical source of a magnetic monopole in nonlinear electrodynamics (NED).
They also found another type of RBH solution which describes a Reissner-Nordstr\"om type spacetime to the Einstein-NED equation~\cite{Ayon-Beato:1998hmi}.
Subsequently, other RBH models were proposed.
For instance, Dymnikova~\cite{Dymnikova:1992ux} proposed a different type of RBH, which coincides with the Schwarzschild spacetime near infinity and behaves like the de Sitter spacetime near a center.
Hayward proposed a static and spherically symmetric RBH model to resolve the BH information-loss paradox~\cite{Hayward:2005gi}.
Furthermore, Fan and Wang~\cite{Fan:2016hvf} found a wide class of asymptotically flat, static and spherically symmetric RBH solution in NED, which generalize the Bardeen BH~\cite{Bardeen:1968} and the Hayward BH~\cite{Hayward:2005gi}.
Other than these RBHs, numerous types of models and solutions have been proposed so far.
The readers can find useful reviews in refs.~\cite{Ansoldi:2008jw,Lemos:2011dq,Maeda:2021jdc}.
\medskip
The observation of particle motion around BHs is useful to test GR and alternative theories of gravity since it enables one to place constraints on the spin and charge parameters of the BH.
So far, many researchers have also studied particle motion around RBHs.
Refs.~\cite{Zhou:2011aa,Garcia:2013zud,Stuchlik:2014qja} investigated circular geodesics in the Bardeen BH and the Ay\'on-Beato-Garc\'ia BH.
The gravitational lens of the Bardeen BH was discussed in~\cite{Eiroa:2010wm}.
Moreover, Ref.~\cite{Stuchlik:2019uvf} studied the photon orbits around the Bardeen BH, and determined the BH shadow.
Ref.~\cite{Gao:2020wjz} studied the periapsis shifts of bound orbits of massive particles moving around Bardeen BHs.
Ref.~\cite{Rayimbaev:2020hjs} investigated the particle motion around a special class of the Fan-Wang BH (FWBH) with Maxwell weak-field limit.
Ref.~\cite{Carballo-Rubio:2022nuj} studied the particle motion around the Hayward BH.
A remarkable aspect of non-linear electrodynamics is that photons do not propagate along the null geodesics of the spacetime geometry but rather of an {\it effective geometry}~\cite{Novello:1999pg,Novello:2000km,Stuchlik:2019uvf,Rayimbaev:2020hjs}.
The propagation of photons has been studied for the Ay\'on-Beato-Garc\'ia spacetime in Ref.~\cite{Novello:2000km}, for the Bardeen spacetime in Ref.~\cite{Stuchlik:2019uvf} and for the Hayward spacetime in Ref.~\cite{Toshmatov:2019gxg}. The photon orbits are also studied in rotating versions of several RBHs of NED~\cite{Kumar:2020ltt}.
In this article, we aim to study the motion of massive/massless neutral particles and photons in the FWBH spacetime and to derive general properties of RBH spacetimes, since this family covers the Bardeen BH and Hayward BH spacetimes as special cases.
\medskip
The rest of the paper is devoted to the analysis of the particle motion around the FWBHs.
In the next section, we review the FWBHs as solutions in NED, then present the metric and the gauge potential with a magnetic monopole or an electric charge, and further give the conditions for the existence of horizon.
In Sec.~\ref{sec:massive}, we discuss the stability of circular orbits for massive particles around the FWBHs, where the particle motion can be reduced to a one-dimensional potential problem.
In Sec.~\ref{sec:massless}, we similarly consider the motion for massless particles, whose potential can be obtained as the divergence limit of an angular momentum.
However, the massless orbits do not correspond to photon orbits in Einstein gravity coupled with NED.
Hence, in Sec.~\ref{sec:photon}, we separately analyze photon orbits around the FWBHs.
In Sec.~\ref{sec:shift}, we compute the periapsis shift of the orbits for massive particles in the weak-field limit.
In Sec.~\ref{sec:conclusion}, we summarize our results and discuss possible generalization.
\section{Review of regular black holes}\label{sec:solution}
Here we review the RBHs of the general Fan-Wang class~\cite{Fan:2016hvf}, which are given as solutions with NED whose action is
\begin{align}
S = \frac{1}{16 \pi G} \int d^4x \sqrt{-g} (R - {\cal L}({\cal F})),\quad {\cal F} := F_{\alpha\beta} F^{\alpha\beta}.
\end{align}
The field equations derived from the Lagrangian density ${\cal L}({\cal F})$ of NED admit electrically charged solutions or magnetically charged solutions, where in particular, ${\cal L}({\cal F})$ for the latter case can be written as
\begin{align}
{\cal L}({\cal F})=\frac{4\mu}{\alpha}\frac{(\alpha{\cal F})^{\frac{\nu+3}{4}}}{ \left(1+(\alpha{\cal F})^{\frac{\nu}{4}} \right)^{\frac{\mu+\nu}{\nu}} },\label{eq:LF-magnetic}
\end{align}
where $\mu,\nu$ and $\alpha$ are free parameters of the theory.
This theory reproduces the usual Maxwell theory in the weak-field limit ${\cal F}\to 0$ only if $\nu=1$, which is the main subject of Ref.~\cite{Rayimbaev:2020hjs}.
The asymptotically flat, static and spherically symmetric BH solution with a magnetic monopole is given by~\cite{Fan:2016hvf}
\begin{eqnarray}
ds^2&=&-f(r)dt^2+\frac{dr^2}{f(r)}+r^2(d\theta^2+\sin^2\theta d\phi^2), \quad f(r)= 1-\frac{2M-2q^3\alpha^{-1}}{r}-\frac{2 \alpha^{-1}q^3 r^{\mu-1}}{(r^\nu+q^\nu)^{\frac{\mu}{\nu}}}, \label{eq:metric0}
\end{eqnarray}
and
\begin{align}
A= \frac{q^2}{\sqrt{2\alpha}} \cos\theta d\phi,\quad {\cal F} = \frac{q^4}{\alpha r^4},\label{eq:F-magnetic}
\end{align}
where $M$ is the ADM mass of the spacetime\footnote{See also a comment~\cite{Toshmatov:2018cks} on the definition of the mass.}. The magnetic charge is given by
\begin{align}
& Q_m := \frac{1}{4\pi} \int F = \frac{q^2}{\sqrt{2\alpha}}.
\end{align}
On the other hand, the gauge potential for an electrically charged BH solution with the same metric~(\ref{eq:metric0}) can be written as
\begin{align}
A = \frac{q^2}{2\alpha}\left(\frac{r^\mu(3r^\nu-(\mu-3)q^\nu)}{(r^\nu+q^\nu)^{\frac{\mu+\nu}{\nu}}}-3\right)dt,\label{eq:A-electric}
\end{align}
where $q$ is now related to the electric charge defined by
\begin{align}
Q_e := \frac{1}{4\pi} \int {\cal L}_{{\cal F}} \star F = \frac{q^2}{\sqrt{2\alpha}}.
\end{align}
The field strength is given by
\begin{align}
{\cal F} = -\frac{\mu ^2 q^{2 \nu +4} r^{2 \mu -2} \left(q^{\nu }+r^{\nu
}\right)^{-\frac{2 \mu }{\nu }-4} \left((\mu -3) q^{\nu }-(\nu +3)
r^{\nu }\right)^2}{2 \alpha ^2}.\label{eq:L-electric}
\end{align}
Unlike the magnetic solution, one cannot obtain the explicit form of the NED Lagrangian, but only its on-shell value for the solution as
\begin{align}
{\cal L}({\cal F})=\frac{ 2 q^{3+\nu} r^{\mu-3}}{\alpha} \frac{(\mu-1)q^\nu-(\nu+1)r^\nu}{(q^\nu+r^\nu)^{2+\frac{\mu}{\nu}}}.
\end{align}
In order to eliminate the singularity at the center $r=0$, one must set
\begin{align}
M = \alpha^{-1} q^3\label{eq:Mtoalpha},
\end{align}
in which the metric function $f(r)$ can be written as
\begin{eqnarray}
f(r) = 1 - \frac{2M r^{\mu-1}}{(r^\nu+q^\nu)^\frac{\mu}{\nu}}. \label{eq:metric}
\end{eqnarray}
This metric reproduces known RBH spacetimes for specific parameter choices:
\begin{itemize}
\item $(\mu,\nu)=(3,2)$ : Bardeen BHs~\cite{Bardeen:1968}
\item $(\mu,\nu)=(3,3)$ : Hayward BHs~\cite{Hayward:2005gi}
\end{itemize}
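These reductions are easy to verify numerically; the sketch below (standalone, with illustrative parameter values) checks that the unified metric function reproduces the Bardeen and Hayward forms and is regular at the center:

```python
def f_fw(r, M, q, mu, nu):
    # unified metric function: f = 1 - 2 M r^(mu-1) / (r^nu + q^nu)^(mu/nu)
    return 1.0 - 2.0 * M * r**(mu - 1) / (r**nu + q**nu)**(mu / nu)

def f_bardeen(r, M, q):
    # Bardeen: f = 1 - 2 M r^2 / (r^2 + q^2)^(3/2)
    return 1.0 - 2.0 * M * r**2 / (r**2 + q**2)**1.5

def f_hayward(r, M, q):
    # Hayward: f = 1 - 2 M r^2 / (r^3 + q^3)
    return 1.0 - 2.0 * M * r**2 / (r**3 + q**3)
```

Note that for $\mu>1$ the function tends to $1$ as $r\to0$, which is the regularity of the center after imposing the mass-charge relation.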
Although the mass and charge are constrained by eq.~(\ref{eq:Mtoalpha}),
we can treat them as independent parameters by adjusting the parameter $\alpha$.
In the following, we use the gravitational radius $r_g$ instead of the mass $M$,
\begin{align}
r_g := 2M.
\end{align}
The equation $f(r)=0$ has two roots $r_-<r_+$ for $r>0$, which correspond to an inner horizon and an outer horizon.
Let us consider the extreme condition ($r_+=r_-$), given by $f(r)=0$ and $f'(r)=0$, which can be solved as
\begin{eqnarray}
r&=&(\mu-1)^{\frac{1}{\nu}}q=r_g \frac{(\mu-1)^{\frac{\mu}{\nu}} }
{\mu^{\frac{\mu}{\nu}}}:=r_{h}^{\rm ex}
,
\\
q&=&r_g\frac{(\mu-1)^{\frac{\mu-1}{\nu}} }
{\mu^{\frac{\mu}{\nu}}}:=q^{\rm ex}.
\end{eqnarray}
Therefore, we have four distinct cases depending on the charge:
\begin{enumerate}
\item a black hole with a single horizon (Schwarzschild BH) : $q=0$
\item a black hole with two horizons : $0<q<q^{\rm ex}$
\item a degenerate horizon : $q=q^{\rm ex}$
\item an overcharged but regular horizonless spacetime: $q>q^{\rm ex}$
\end{enumerate}
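The extremality formulas can be checked numerically; for the Bardeen case $(\mu,\nu)=(3,2)$ they give the familiar value $q^{\rm ex}=4M/(3\sqrt{3})$, and both $f$ and $f'$ vanish at $r_h^{\rm ex}$ (a finite-difference derivative is used for simplicity in this sketch):

```python
def f(r, rg, q, mu, nu):
    # regular metric function with r_g = 2M
    return 1.0 - rg * r**(mu - 1) / (r**nu + q**nu)**(mu / nu)

def fprime(r, rg, q, mu, nu, h=1e-7):
    # central finite difference, adequate for this check
    return (f(r + h, rg, q, mu, nu) - f(r - h, rg, q, mu, nu)) / (2.0 * h)

def extremal(rg, mu, nu):
    # closed-form degenerate-horizon radius and charge from the text
    rh = rg * (mu - 1)**(mu / nu) / mu**(mu / nu)
    q = rg * (mu - 1)**((mu - 1) / nu) / mu**(mu / nu)
    return rh, q

rh, q = extremal(1.0, 3, 2)   # Bardeen case, with r_g = 1
```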
It is straightforward to show that $r_h^{\rm ex}$ monotonically increases with respect to both parameters $\mu$ and $\nu$, while
$q^{\rm ex}$ monotonically decreases with respect to $\mu$ and increases with respect to $\nu$. At large values of $\mu$ and $\nu$, $r_h^{\rm ex}$ and $q^{\rm ex}$ approach the following limits,
\begin{align}
& r^{\rm ex}_h/r_g \to \left\{\begin{array}{cc} e^{-1/\nu}& ( \mu \to \infty) \\ 1&(\nu\to\infty) \end{array}\right. ,\\
& q^{\rm ex}/r_g \to \left\{\begin{array}{cc} (\mu e)^{-1/\nu}& ( \mu \to \infty) \\ 1&(\nu\to\infty) \end{array}\right. .\label{eq:qex-limits}
\end{align}
In particular, $q^{\rm ex}$ tends to be zero for large $\mu$.
For the later use, we also show
\begin{align}
\frac{\partial (\mu^{1/\nu} q^{\rm ex}(\mu,\nu))}{\partial \mu} < 0
\end{align}
for fixed $\nu$. Together with eq.~(\ref{eq:qex-limits}), this determines the range of $q^{\rm ex}$ for a given $\nu$ as
\begin{align}
e^{-1/\nu} \leq \mu^{1/\nu} q^{\rm ex}/r_g \leq \left(2/3\right)^{2/\nu}\quad (\mu \geq 3).
\label{eq:qex-range}
\end{align}
\section{Circular orbits of massive particles}\label{sec:massive}
The particle motion in the FW spacetimes~(\ref{eq:metric}) is determined by the Lagrangian
\begin{align}
{\cal L}_p = \frac{1}{2} g_{\mu\nu} \dot{x}^\mu \dot{x}^\nu
\end{align}
where the dot denotes the derivative with respect to an affine parameter. The motion satisfies the constraint $g_{\mu\nu} \dot{x}^\mu \dot{x}^\nu =-\kappa$, in which $\kappa=1$ is for massive and $\kappa=0$ for massless particles. The spherical symmetry of the spacetime allows us to assume that the motion takes place in the equatorial plane $\theta = \pi/2$ without loss of generality.
Since the metric is independent of $t$ and $\phi$, their conjugates give two constants of motion, the energy and angular momentum
of the particle
\begin{align}
{\cal E} := -\frac{\partial {\cal L}_p}{\partial \dot{t}} = f \dot{t} ,\quad L := \frac{\partial {\cal L}_p}{\partial \dot{\phi}}=r^2 \dot{\phi}.
\label{eq:geodesic-t-phi}
\end{align}
Then, the equation of motion can be written as a one-dimensional motion in the effective potential
\begin{eqnarray}
\dot{r}^2 + U(r) = {\cal E}^2 ,\quad U(r) :=f(r) \left(\kappa+\frac{L^2}{r^2}\right). \label{eq:geodesic-r-U}
\end{eqnarray}
In this section, we consider the motion of massive particles by setting $\kappa=1$.
In particular, we focus on the circular orbit which corresponds to stationary points of $U$,
\begin{align}
U = {\cal E}^2,\quad U_{,r} = 0.\label{eq:cond-circular}
\end{align}
By fixing the charge parameter $q$, the angular momentum $L$ and energy ${\cal E}$ are given by functions of the orbit radius $r_c$ as
\begin{align}
L^2 (r_c)= \frac{r_g r_c^{\mu +2} \left(r_c^{\nu }-(\mu -1) q^{\nu }\right)}{q^{\nu } \left(2r_c \left(q^{\nu
}+r_c^{\nu }\right)^{\mu /\nu }+(\mu -3) r_g r_c^{\mu }\right)+r_c^{\nu } \left(2r_c \left(q^{\nu
}+r_c^{\nu }\right)^{\mu /\nu }-3 r_g r_c^{\mu }\right)},\label{eq:def-L2}
\end{align}
and
\begin{align}
{\cal E}^2(r_c) = \frac{2\left(q^{\nu }+r_c^{\nu }\right)^{1-\frac{\mu }{\nu }} \left(r_c \left(q^{\nu }+r_c^{\nu }\right)^{\mu /\nu }-r_g r_c^{\mu }\right)^2}{r_c \left(q^{\nu } \left(2r_c \left(q^{\nu
}+r_c^{\nu }\right)^{\mu /\nu }+(\mu -3) r_g r_c^{\mu }\right)+r_c^{\nu } \left(2r_c \left(q^{\nu
}+r_c^{\nu }\right)^{\mu /\nu }-3r_g r_c^{\mu }\right)\right)}.
\end{align}
Note that physical circular orbits should also satisfy $L^2 \geq 0$ and ${\cal E}^2 \geq 0$.
The sign of $dL/dr_c$ coincides with that of $U_{,rr}(r_c,L(r_c))$, since
\begin{align}
U_{,rr} = - U_{,rL} \frac{dL}{dr_c} ,\quad -U_{,rL} = \frac{4Lf}{r(r^2+L^2)}>0,
\end{align}
and hence the stable circular orbits are given equivalently by $U_{,rr}>0$ or $dL/dr_c>0$. Similarly, one can show
\begin{align}
\frac{d {\cal E}^2}{dr_c} = \frac{dL}{dr_c} \frac{2 L f}{r^2},
\end{align}
which means that $d{\cal E}^2/dr_c$ also has the same sign as $U_{,rr}$.
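As a check of these relations, the sketch below evaluates $L^2(r_c)$ of eq.~(\ref{eq:def-L2}) and verifies that in the $q\to0$ limit it reduces to the Schwarzschild result $L^2=r_g r^2/(2r-3r_g)$, with the minimum of $L^2$ (the ISCO) at $r=3r_g=6M$; a crude grid scan is used purely for illustration:

```python
def L2(rc, rg, q, mu, nu):
    # squared angular momentum of a circular orbit at r = rc, from the text
    A = (q**nu + rc**nu)**(mu / nu)
    num = rg * rc**(mu + 2) * (rc**nu - (mu - 1) * q**nu)
    den = (q**nu * (2 * rc * A + (mu - 3) * rg * rc**mu)
           + rc**nu * (2 * rc * A - 3 * rg * rc**mu))
    return num / den

RG = 1.0
# near q -> 0 the Schwarzschild result L^2 = rg r^2 / (2r - 3 rg) is recovered
l2_small_q = L2(5.0, RG, 1e-6, 3, 2)
l2_schw = RG * 25.0 / (10.0 - 3.0)

def isco(rg, q, mu, nu, rmin=1.2, rmax=10.0, n=20000):
    # ISCO as the minimum of L^2(rc) over physical (L^2 > 0) circular orbits
    best_r, best = None, float("inf")
    for i in range(n):
        r = rmin + (rmax - rmin) * i / (n - 1)
        val = L2(r, rg, q, mu, nu)
        if 0.0 < val < best:
            best, best_r = val, r
    return best_r
```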
As in the Schwarzschild case, circular orbits
cannot exist sufficiently close to the horizon.
The inner edge of the orbital region is characterized by the innermost stable circular orbit (ISCO), which is given by
\begin{align}
U = {\cal E}^2,\quad U_{,r} = 0,\quad U_{,rr}=0. \label{eq:ISCO-condition-1}
\end{align}
With the discussion above, the condition $U_{,rr}=0$ is equivalent to $dL/dr_c=0$.
However, we find that this gives the ISCO only as long as the horizon exists, i.e. $q\leq q^{\rm ex}$.
For the horizonless case ($q>q^{\rm ex}$), the ISCO is given by the orbit with zero angular momentum
\begin{align}
U = {\cal E}^2,\quad U_{,r}=0, \quad L=0.
\end{align}
This ``orbit'' corresponds to a particle in static equilibrium between the repulsive force of the inner de Sitter-like core and the outer attraction.
The radius for $L=0$ is explicitly given by
\begin{align}
r = r_0(q) := (\mu-1)^{1/\nu} q.\label{eq:ISCO-condition-2}
\end{align}
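Eq.~(\ref{eq:ISCO-condition-2}) admits a quick numerical check: with $L=0$ the stationarity condition $U_{,r}=0$ reduces to $f'(r)=0$. This hedged sketch assumes the Fan-Wang lapse $f(r)=1-r_g r^{\mu-1}/(r^\nu+q^\nu)^{\mu/\nu}$ defined earlier in the paper (outside this excerpt); the parameter values are arbitrary illustrations.

```python
def r0(q, mu, nu):
    """Zero-angular-momentum equilibrium radius, eq. (ISCO-condition-2)."""
    return (mu - 1)**(1.0 / nu) * q

def fprime(r, q, mu, nu, rg=1.0, h=1e-6):
    """Central-difference derivative of the assumed lapse f."""
    f = lambda x: 1.0 - rg * x**(mu - 1) / (x**nu + q**nu)**(mu / nu)
    return (f(r + h) - f(r - h)) / (2*h)
```

For any $(\mu,\nu,q)$, $f'$ should vanish at $r_0(q)=(\mu-1)^{1/\nu}q$ to numerical precision.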
In Fig.~\ref{fig:qrplot31}, we plot the positions of stable and unstable circular orbits for a given charge.
Although we only show the result for $(\mu,\nu)=(3,1)$, other cases are qualitatively the same.
One can see that the ISCO is given by eq.~(\ref{eq:ISCO-condition-1}) for $0\leq q \leq q^{\rm ex}$, and by eq.~(\ref{eq:ISCO-condition-2}) for $q>q^{\rm ex}$.
For $q>q^{\rm ex}$, although the curve given by eq.~(\ref{eq:ISCO-condition-1}) no longer marks the ISCO, it still yields two branches of solutions $r=r_s^\pm(q)$ for a slightly overcharged case, with unstable orbits between the two branches, $r_s^- < r < r_s^+$. This unstable region disappears for $q\geq {q_{\star\star}}$, where the threshold value is defined by $r_{\star\star}:=r_s^+({q_{\star\star}})=r_s^-({q_{\star\star}})$.
Since both ${\cal E}$ and $L$ must be real, circular orbits do not exist if the solution of eq.~(\ref{eq:cond-circular}) gives $L^2<0$ or ${\cal E}^2<0$ (or both). The border of existence is given by the $L=0$ and $L=\infty$ curves. The $L=\infty$ curve corresponds to the circular orbits of massless particles, which are discussed later. This massless curve also has two branches $r=r_\infty^\pm(q)$ bifurcating at $({q_\star},r_\star)$ for a slightly overcharged case, as already observed in the previous study of the $\nu=1$ cases~\cite{Stuchlik2019}. One should note that all these characteristics, in fact, also appear in the Reissner-Nordstr\"om spacetime~\cite{Pugliese:2010ps}.
\begin{figure}[H]
\begin{center}
\includegraphics[width=8.5cm]{figures/qrplot1.pdf}\hspace{4mm}
\includegraphics[width=5cm]{figures/qrplot2.pdf}
\caption{Existence and stability of circular orbits in the Fan-Wang spacetime, plotted in $(q,r)$-space for $(\mu,\nu)=(3,1)$. The region colored light blue has stable circular orbits, while the light red region has only unstable ones. Circular orbits do not exist in the white region. The right figure is a closeup around $q=q^{\rm ex}$. As mentioned later, the blue curve corresponds to massless orbits and splits into the two branches $r^\pm_{\infty}$.\label{fig:qrplot31}}
\end{center}
\end{figure}
As shown in Fig.~\ref{fig:orbit_cases}, the branching points of $r^\pm_\infty(q)$ and $r^\pm_s(q)$, which we denote ${q_\star}$ and ${q_{\star\star}}$ respectively, give the threshold values of $q$ at which the circular orbits change qualitatively.
\begin{figure}[H]
\begin{center}
\includegraphics[width=8cm]{figures/L2E2plot1.pdf}\hspace{0.2cm}
\includegraphics[width=8cm]{figures/L2E2plot2.pdf}\\
\includegraphics[width=8cm]{figures/L2E2plot3.pdf}\hspace{0.2cm}
\includegraphics[width=8cm]{figures/L2E2plot4.pdf}
\caption{The stability of circular orbits compared with the gradients of $L^2$ and ${\cal E}^2$ for each $q$. Orbits exist only if ${\cal E}^2>0$ and $L^2>0$. \label{fig:orbit_cases}}
\end{center}
\end{figure}
We also study the effect of parameters $\mu$ and $\nu$ on circular orbits.
For $\nu=1$, the $\mu$-dependence was studied in the previous work~\cite{Rayimbaev:2020hjs}.
In Fig.~\ref{fig:qrplots3nu}, the $\nu$-dependence of the circular orbits is shown for $\mu=3,4$.
In each case, as $\nu$ grows, the arrangement of circular orbits approaches a certain limiting shape.
In particular, the ISCO curve and the massless curve approach almost straight lines for $q\leq q^{\rm ex}$.
In Fig.~\ref{fig:qrplotsmu1}, the $\mu$-dependence is studied as well. Unlike the $\nu$-dependence, we find that the arrangement of circular orbits is quite insensitive to changes in $\mu$.
\begin{figure}[H]
\begin{center}
\includegraphics[width=13cm]{figures/qrplots_3nu.pdf}
\includegraphics[width=13cm]{figures/qrplots_4nu.pdf}
\caption{Circular orbits for $\mu=3,4$ and $\nu=1,\dots,6$. Curves in the plots correspond to those in fig.~\ref{fig:qrplot31}.\label{fig:qrplots3nu}}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=15cm]{figures/qrplots_mu1.pdf}\\
\includegraphics[width=15cm]{figures/qrplots_mu2.pdf}\\
\includegraphics[width=15cm]{figures/qrplots_mu3.pdf}\\
\caption{Circular orbits for $\nu=1,2,3$ with $\mu=3,10,50$. Curves in the plots correspond to those in fig.~\ref{fig:qrplot31}.\label{fig:qrplotsmu1}}
\end{center}
\end{figure}
The parameter dependence of the ISCO radii around the black hole can be roughly estimated by the value at $q=q^{\rm ex}$, namely $r^{\rm ex}_{\rm ISCO}:=r^+_s(q^{\rm ex})$, as it gives the minimum value of $r_{\rm ISCO}$ for $q \leq q^{\rm ex}$. Fig.~\ref{fig:riscoex} shows that $r^{\rm ex}_{\rm ISCO}$ is a monotonically increasing function of both $\mu$ and $\nu$.
\begin{figure}[H]
\begin{center}
\includegraphics[width=8cm]{figures/riscoex_mu.pdf}
\includegraphics[width=8cm]{figures/riscoex_nu.pdf}
\caption{The $\mu$- and $\nu$-dependence of $r_{\rm ISCO}^{\rm ex}$: $\nu$ is fixed in the left panel and $\mu$ in the right panel. \label{fig:riscoex}}
\end{center}
\end{figure}
\section{Circular orbits of massless particles}\label{sec:massless}
The effective potential for the massless particle is given by setting $\kappa=0$ in eq.~(\ref{eq:geodesic-r-U}),
\begin{eqnarray}
V:= \frac{f}{r^2}.
\end{eqnarray}
The radius of the circular orbit for massless particles is determined by
\begin{eqnarray}
V_{,r}(r_c,q)
=0,
\end{eqnarray}
which is explicitly written as
\begin{align}
3 r_c^\nu - (\mu-3) q^\nu = 2 r_g^{-1} r_c^{1-\mu} ( r_c^\nu+q^\nu) ^\frac{\mu+\nu}{\nu}.\label{eq:massless-Vr-expr}
\end{align}
This is equivalent to the condition~(\ref{eq:cond-circular}) with $L=\infty$, and hence the positions of the circular orbits are given by $r_\infty^\pm(q)$ in Fig.~\ref{fig:qrplot31}.
Differentiating eq.~(\ref{eq:massless-Vr-expr}) along the solution curve $q=q(r_c)$, we obtain
\begin{eqnarray}
V_{,rr}+V_{,rq}\frac{dq}{dr_c}=0.
\end{eqnarray}
This leads to
\begin{eqnarray}
\frac{dq}{dr_c}=\frac{(r_c^\nu+q^\nu)^{\frac{\mu+2\nu}{\nu}}}{\mu r_g^2 q^{\nu-1}(\nu r_g r_c^{\mu+\nu-4}+2r_c^{-3}(r_c^\nu+q^\nu)^\frac{\mu+\nu}{\nu})}V_{,rr},
\end{eqnarray}
where we have used eq.~(\ref{eq:massless-Vr-expr}). From the positivity of the factor multiplying $V_{,rr}$, one can show that $dq/dr_c>0$ and $dq/dr_c<0$ correspond to stable and unstable circular orbits, respectively.
It is easy to see that the massless curve intersects the curve~(\ref{eq:ISCO-condition-1}) at $({q_\star},r_\star)$, where $dq/dr_c=0$.
The curve~(\ref{eq:massless-Vr-expr}) also intersects the horizon curve $f=0$ at the point $(q,r)=(q^{{\rm ex}},r_h^{{\rm ex}})$, since
\begin{eqnarray}
V_{,r}(q^{{\rm ex}},r_h^{{\rm ex}})=\frac{r_h^{\rm ex} f'(q^{{\rm ex}},r_h^{{\rm ex}})-2f(q^{{\rm ex}},r_h^{{\rm ex}})}{(r_h^{{\rm ex}})^3}=0.
\end{eqnarray}
\medskip
Therefore, the massless curve has two branches: $r^+_\infty(q)$ for $r_\star \le r\le 3r_g/2$, corresponding to unstable circular orbits, and $r^-_\infty(q)$ for $r_h^{\rm ex} \le r\le r_\star$, corresponding to stable circular orbits of massless particles.
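The condition~(\ref{eq:massless-Vr-expr}) can also be solved numerically by simple bisection. The sketch below is illustrative only (the helper names are ours): it checks that at $q=0$ the outer branch reduces to the Schwarzschild photon sphere $r=3r_g/2$, and that a small charge moves it inward.

```python
def massless_condition(r, q, mu, nu, rg=1.0):
    # LHS - RHS of eq. (massless-Vr-expr):
    # 3 r^nu - (mu-3) q^nu - (2/rg) r^(1-mu) (r^nu + q^nu)^((mu+nu)/nu)
    return (3*r**nu - (mu - 3)*q**nu
            - (2.0/rg) * r**(1 - mu) * (r**nu + q**nu)**((mu + nu)/nu))

def bisect(fun, a, b, iters=100):
    """Plain bisection; assumes fun changes sign on [a, b]."""
    fa = fun(a)
    for _ in range(iters):
        m = 0.5*(a + b)
        if fa * fun(m) > 0:
            a, fa = m, fun(m)
        else:
            b = m
    return 0.5*(a + b)

# outer branch r_inf^+ for (mu, nu) = (3, 1), in units r_g = 1
r_schw = bisect(lambda r: massless_condition(r, q=0.00, mu=3, nu=1), 1.0, 2.0)
r_chgd = bisect(lambda r: massless_condition(r, q=0.05, mu=3, nu=1), 1.0, 2.0)
```

At $q=0$ this recovers $r=3r_g/2$, while a small charge $q=0.05\,r_g$ shifts the outer massless orbit inward.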
\subsection{$\mu=3$ case}
In general, the condition~(\ref{eq:massless-Vr-expr}) is difficult to solve explicitly. However, we find that the $\mu=3$ case reduces to the problem of finding the intersections of the two graphs $y=F(x)$ and $y=G(x)$,
\begin{eqnarray}
(r_c^\nu+q^\nu)^{\frac{3}{\nu}+1}= \frac{3}{2} r_g r_c^{\nu+2}
\Longleftrightarrow
\left\{
\begin{array}{ll}
&y= F(x):=(x+\tilde q)^{\nu+3}\\
&y=G(x):= (3/2)^\nu x^{\nu+2}
\end{array}
\right. ,
\end{eqnarray}
where $x:=(r_c/r_g)^\nu$ and $\tilde{q}:=(q/r_g)^\nu$.
In particular, the critical values ${q_\star}$ and $r_\star$ are determined by $F(x)=G(x)$ and $F'(x)=G'(x)$, which can be solved analytically as
\begin{eqnarray}
r_\star/r_g=\frac{3}{2} \left(\frac{\nu+2}{\nu+3}\right)^{\frac{\nu+3}{\nu}},\quad
{q_\star}/r_g = \frac{3}{2}\frac{(\nu+2)^{\frac{\nu+2}{\nu}} }{(\nu+3)^{\frac{\nu+3}{\nu}} }.
\end{eqnarray}
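The tangency conditions $F(x)=G(x)$ and $F'(x)=G'(x)$ at these closed-form values can be verified numerically. This is an illustrative sketch (the helper names are ours), working in the variables $x=(r_c/r_g)^\nu$ and $\tilde q=(q/r_g)^\nu$:

```python
def F(x, qt, nu):
    return (x + qt)**(nu + 3)

def G(x, nu):
    return 1.5**nu * x**(nu + 2)

def dF(x, qt, nu):
    return (nu + 3) * (x + qt)**(nu + 2)

def dG(x, nu):
    return 1.5**nu * (nu + 2) * x**(nu + 1)

def tangency_point(nu):
    """x = (r_*/r_g)^nu and q~ = (q_*/r_g)^nu from the closed forms above."""
    x  = 1.5**nu * ((nu + 2)/(nu + 3))**(nu + 3)
    qt = 1.5**nu * (nu + 2)**(nu + 2) / (nu + 3)**(nu + 3)
    return x, qt
```

For every $\nu$, both the values and the slopes of the two graphs should coincide at $(x,\tilde q)$ to machine precision.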
\section{Circular photon orbits}\label{sec:photon}
Under the NED Lagrangian ${\cal L}({\cal F})$, photons do not propagate along null geodesics of the spacetime geometry, but rather along null geodesics of the so-called effective geometry~\cite{Stuchlik:2019uvf,Rayimbaev:2020hjs}.
The eikonal limit for photons leads to the condition
\begin{align}
\tilde{g}_{\mu\nu} k^\mu k^\nu =0,
\end{align}
where the effective metric $\tilde{g}_{\mu\nu}$ is defined through its inverse,
\begin{align}
\tilde{g}^{\mu\nu} = g^{\mu\nu} - \frac{4{\cal L}_{{\cal F}\cF}}{{\cal L}_{\cal F}} F^\mu{}_\alpha F^{\alpha \nu},\label{eq:eff-geom}
\end{align}
where ${\cal L}_{\cal F}$ and ${\cal L}_{{\cal F}\cF}$ are the derivatives of ${\cal L}({\cal F})$ with respect to ${\cal F}$.
In the magnetic case, with ${\cal L}({\cal F})$ and $F_{\mu\nu}$ in eqs.~(\ref{eq:LF-magnetic}) and (\ref{eq:F-magnetic}), the effective metric is written as
\begin{align}
\tilde{g}_{\mu\nu}^{(m)} = {\rm diag}\left(-f,\frac{1}{f},\frac{r^2}{\Phi^{(m)}},\frac{r^2\sin^2\theta}{\Phi^{(m)}}\right),
\end{align}
where $\Phi^{(m)}:= 1+2 {\cal L}_{{\cal F}\cF} {\cal F}/{\cal L}_{\cal F}|_{\rm magnetic}$. On the other hand, the electric solution admits the effective metric of
\begin{align}
\tilde{g}_{\mu\nu}^{(e)} = {\rm diag}\left(-\frac{f}{\Phi^{(e)}},\frac{1}{f \Phi^{(e)}},r^2,r^2\sin^2\theta\right),
\end{align}
where $\Phi^{(e)}:=1+2 {\cal L}_{{\cal F}\cF} {\cal F}/{\cal L}_{\cal F}|_{\rm electric}$ is now given by the electric counterparts in eqs.~(\ref{eq:A-electric}) and (\ref{eq:L-electric}). It is known that these two cases cannot be distinguished by the photon propagation~\cite{Toshmatov:2021fgm}.
This fact can be seen from a duality between NED theories that share the same metric,
\begin{align}
{\cal L}_{\cal F}^2 {\cal F}|_{\rm electric} = -{\cal F}|_{\rm magnetic},
\quad {\cal L}_{\cal F}|_{\rm magnetic} = ({\cal L}_{\cal F})^{-1}|_{\rm electric},
\end{align}
which leads to
\begin{align}
\Phi^{(m)} = \frac{1}{\Phi^{(e)}}.
\end{align}
Therefore, the effective geometries for the electric and magnetic solutions
are related through the conformal transformation\footnote{We do not consider the point $\Phi^{(m)}=0$, where the effective geometry is singular~\cite{Novello:2000km}. The eikonal limit will not be appropriate there.}
\begin{align}
\tilde{g}_{\mu\nu}^{(e)} = \Phi^{(m)}\tilde{g}_{\mu\nu}^{(m)},
\end{align}
and hence the two share the same causal structure.
Thus, although the effective metrics for the electrically and magnetically charged spacetimes
are different, the effective potentials for photons can be shown to be the same, i.e.,
\begin{align}
& \tilde{V} = \frac{f}{r^2 \Phi^{(e)}}= \frac{f}{r^2} \Phi^{(m)} .
\end{align}
For this reason, the photon trajectories coincide in both effective geometries.
With eqs.~(\ref{eq:LF-magnetic}) and (\ref{eq:F-magnetic}), the effective potential becomes
\begin{align}
\tilde{V} = \frac{f}{r^2}\left[\frac{\left(\mu ^2-4 \mu +3\right) q^{2 \nu }-\left(\mu (3 \nu +4)+\nu ^2-4 \nu -6\right) q^{\nu } r^{\nu }+\left(\nu ^2+4 \nu +3\right) r^{2 \nu }}{2 \left(q^{\nu }+r^{\nu
}\right) \left((\nu +3) r^{\nu }-(\mu -3) q^{\nu }\right)}\right].
\end{align}
In Fig.~\ref{fig:photonorbits}, we compare the circular photon orbits with those of massless particles for $(\mu,\nu)=(3,1)$.
We find that an unstable photon orbit always exists outside the massless orbit for any $q$, as seen in the $\nu=1$ case~\cite{Stuchlik:2019uvf}.
For $q>q^{\rm ex}$, unlike the massless orbit, this unstable orbit still appears outside the ISCO radius $r_0(q)$.
Below a certain critical charge, $q < q_{\star,\gamma}$, we also find two further orbits inside the ISCO radius $r_0(q)$. The upper, stable branch exists only in the overcharged case $q^{\rm ex}<q<q_{\star,\gamma}$, while the lower, unstable branch exists both with and without the horizon. The lower branch crosses the horizon at $(q_{c,\gamma},r_{c,\gamma})$.
Remarkably, for $0<q<q_{c,\gamma}$, the lower branch gives a circular orbit between the inner and outer horizons, where no stationary motion is allowed in the spacetime geometry. For $q_{c,\gamma}<q<q^{\rm ex}$, the orbit appears inside the inner horizon.
In Fig.~\ref{fig:Vphotons}, the typical shapes of the effective potential are shown corresponding to the range of $q$.
We also obtain qualitatively the same results for other parameters~(Fig.~\ref{fig:photonorbitsmore}).
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{figures/qrplotM0.pdf}\hspace{0.5cm}
\includegraphics[width=7cm]{figures/qrplotM0up.pdf}
\caption{Circular photon orbits and massless orbits for $(\mu,\nu)=(3,1)$. The right figure is a closeup around $q=q^{\rm ex}$. The $L=0$ line for massive particles is also drawn for reference. \label{fig:photonorbits}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=7.8cm]{figures/Vphoton4.pdf}\quad
\includegraphics[width=7.8cm]{figures/Vphoton3.pdf}\\
\includegraphics[width=7.8cm]{figures/Vphoton2.pdf}\quad
\includegraphics[width=7.8cm]{figures/Vphoton1.pdf}
\caption{The effective potential for photon orbits with $(\mu,\nu)=(3,1)$ and different charges.\label{fig:Vphotons}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=16cm]{figures/qrplotsM.pdf}
\caption{Comparison between photon orbits and massless orbits. The convention is the same as in Fig.~\ref{fig:photonorbits}. \label{fig:photonorbitsmore}}
\end{center}
\end{figure}
\if0
For the electric case, although we do not have the explicit form of ${\cal L}({\cal F})$, we can evaluate the ``on-shell" value of ${\cal L}_{\cal F}$ and ${\cal L}_{{\cal F} {\cal F}}$ for a given orbit radius $r$ by using the on-shell values of ${\cal F}$ and ${\cal L}$ in eqs.~(\ref{eq:F-onshell}) and (\ref{eq:L-onshell}),
\begin{align}
{\cal L}_{\cal F} = \frac{{\cal L}'}{{\cal F}'},\quad {\cal L}_{{\cal F}\cF} = \frac{{\cal L}_{\cal F}'}{{\cal F}'} = \frac{{\cal L}''}{({\cal F}')^2}-\frac{{\cal L}' {\cal F}''}{({\cal F}')^3},
\end{align}
where $'$ denotes the derivative with respect to $r$.
Then, the effective potential is given by
\begin{align}
V = \frac{f}{r^2} \left(1+\frac{2{\cal L}''{\cal F}}{{\cal L}'{\cal F}'}-\frac{2{\cal F}\cF''}{({\cal F}')^2} \right).
\end{align}
\fi
\section{Periapsis shifts}\label{sec:shift}
Next, we consider the periapsis shift of massive particle orbits. In ref.~\cite{Gao:2020wjz}, the shift was studied numerically for orbits close to the horizon. In this article, we instead focus on distant orbits, for which the weak-field limit yields an analytic formula for the shift.
First, we expand the effective potential~(\ref{eq:geodesic-r-U}) at large distances from the horizon, assuming $r \gg r_+ \sim r_g$ and $q/r\ll1$,
\begin{align}
& U = 1 - \frac{r_g}{r} + \frac{L^2+r_g \mu q}{r^2} -\left(r_g L^2 + \frac{r_g \mu (\mu-1)q^2}{2}\right)\frac{1}{r^3}+{\cal O}\left(\frac{1}{r^{4}}\right)\quad &(\nu=1),\label{eq:Uexpand-nu1}\\
& U = 1 - \frac{r_g}{r} + \frac{L^2}{r^2} -\left(r_g L^2 - \frac{r_g \mu q^2}{2}\right)\frac{1}{r^3}+{\cal O}\left(\frac{1}{r^{4}}\right)\quad& (\nu=2),\\
& U = 1 - \frac{r_g}{r} + \frac{L^2}{r^2} -\frac{r_g L^2}{r^3}+{\cal O}\left(\frac{1}{r^{4}}\right) \quad &(\nu \geq 3),
\end{align}
where the dominant correction terms depend on the parameter $\nu$.
Since the leading-order correction for $\nu \geq 3$ coincides with that of the Schwarzschild spacetime, we will not consider those cases further; in the following, we study the $\nu=1$ and $\nu=2$ cases.
Note that, for the expansion to be valid, $q$ must not be too large,
\begin{align}
\frac{L^2}{r_g^2} \gg \frac{\mu q}{r_g}.\label{eq:q-range-PN}
\end{align}
\subsection{$\nu=1$}
In the $\nu=1$ case, the effective potential already differs at the Newtonian order,
\begin{align}
U = 1 - \frac{r_g}{r} + \frac{\alpha^2 \ell^2 r_g^2}{r^2}+{\cal O}\left(\frac{1}{r^3}\right),
\end{align}
where we introduced dimensionless parameters
\begin{align}
\alpha := \sqrt{1 + \frac{\mu r_g q}{L^2}},\quad \ell := \frac{L}{r_g}.
\end{align}
Therefore, the orbit at the Newtonian order can be solved as
\begin{align}
r = \frac{2 \alpha^2 \ell^2 r_g }{1+ e \cos (\alpha \phi)},\quad e := \sqrt{4\alpha^2\ell^2 ({\cal E}^2-1)+1}.
\end{align}
For $\alpha > 1$, this orbit precesses already at the Newtonian order, which apparently results in a retrograde shift, opposite in sign to the well-known Schwarzschild result:
\begin{align}
\delta \phi = \frac{2\pi}{\alpha}-2\pi = 2\pi \left(\frac{1}{\sqrt{1+\frac{\mu q}{r_g \ell^2} }}-1\right)<0.
\end{align}
However, in the BH case $q<q^{\rm ex}$, since the weak-field limit requires $\ell \gg 1$, $\alpha$ must be close to $1$,
\begin{align}
\alpha \simeq 1 + \frac{\mu q}{2r_g \ell^2},
\end{align}
which leads to the expression
\begin{align}
\delta \phi \simeq - \frac{\pi \mu q}{ r_g \ell^2}.
\end{align}
Note that, owing to eq.~(\ref{eq:qex-range}), this approximation is valid independently of the value of $\mu$. The correction is of order $\ell^{-2}$, which is the same order as the post-Newtonian correction from the $L^2/r^3$ term. Hence, we have to take the $L^2/r^{3}$ correction into account, as done for the Reissner-Nordstr\"om spacetime~\cite{Hong:2017dnf},
\begin{align}
\delta \phi \simeq - \frac{\pi \mu q}{ r_g \ell^2} + \frac{3\pi }{2\ell^2}= \frac{\pi}{\ell^2} \left(\frac{3}{2} - \frac{\mu q}{r_g}\right),
\end{align}
where we ignored the second term in the coefficient of $r^{-3}$ in eq.~(\ref{eq:Uexpand-nu1}) as it becomes of ${\cal O}(\ell^{-4})$.
Using eq.~(\ref{eq:qex-range}), we obtain an upper bound on the charge term,
\begin{align}
\mu q/r_g \leq \mu q^{\rm ex}/r_g \leq 4/9,
\end{align}
which indicates that the shift remains prograde, $\delta \phi \geq 19\pi/(18\ell^2)$, even in the extremal limit.
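The numerical content of these estimates is easy to check. The sketch below (parameter values $r_g=1$, $\mu=3$, $q=0.2$, $\ell=30$ are ours, chosen only for illustration) compares the exact Newtonian-order shift $2\pi/\alpha-2\pi$ with its leading term $-\pi\mu q/(r_g\ell^2)$, and verifies the bound $3/2-4/9=19/18$ on the coefficient in $\delta\phi$ with exact rational arithmetic:

```python
import math
from fractions import Fraction

# illustrative values (ours): r_g = 1, mu = 3, q = 0.2, ell = 30
rg, mu, q, ell = 1.0, 3.0, 0.2, 30.0
x = mu * q / (rg * ell**2)                     # small parameter mu*q/(r_g ell^2)
alpha = math.sqrt(1.0 + x)

exact_newtonian = 2*math.pi*(1.0/alpha - 1.0)  # 2*pi/alpha - 2*pi
leading_order   = -math.pi * x                 # leading term -pi*mu*q/(r_g ell^2)

# with mu*q/r_g <= 4/9 (eq. qex-range), the coefficient 3/2 - mu*q/r_g
# in delta_phi is bounded below by 19/18
min_coeff = Fraction(3, 2) - Fraction(4, 9)
```

For these values the leading term agrees with the exact Newtonian-order shift to a relative accuracy of order $\mu q/(r_g\ell^2)$.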
In the overcharged case $q>q^{\rm ex}$, there is no upper bound on $q$, and the shift becomes retrograde for $q > 3r_g/(2 \mu)$.\footnote{If $q$ is regarded simply as a cutoff scale of quantum-gravity origin, it should be of Planckian order, and it would be unphysical to discuss the overcharged case. Here we treat $q$ simply as the charge in NED.}
\subsection{$\nu=2$}
In the $\nu=2$ case, the difference appears only at the post-Newtonian order,
\begin{align}
U = 1-\frac{r_g}{r} + \frac{\ell^2 r_g^2}{r^2} - \frac{(\ell^2-\ell_c^2) r_g^3}{r^3}+{\cal O}\left(\frac{1}{r^4}\right),
\end{align}
where we introduced a dimensionless parameter
\begin{align}
\ell_c := \sqrt{\frac{\mu}{2}} \frac{q}{r_g}.
\end{align}
This shows that the charge $q$ slightly reduces the shift, as in the $\nu=1$ case,
\begin{align}
\delta \phi = \frac{3\pi}{2\ell^2}\left(1-\frac{\ell^2_c}{\ell^2}\right).\label{eq:dphi-nu2}
\end{align}
A caveat is that eq.~(\ref{eq:qex-range}) shows that $\ell_c$ is bounded above in the BH case as $\ell_c \leq\sqrt{2}/3$, and eq.~(\ref{eq:q-range-PN}) then requires $\ell \gg \ell_c$ even in the overcharged case. Although this does not change the conclusion about the sign of the charge effect, it implies that the charge correction is of ${\cal O}(\ell^{-4})$, so one should include the next post-Newtonian order in eq.~(\ref{eq:dphi-nu2}) for a correct estimate.
\section{Summary}\label{sec:conclusion}
In this article, we have investigated the geodesic motion of massive and massless particles and of photons around general Fan-Wang spacetimes. For massive and massless particles, we have found that the characteristics of the motion fall into four cases depending on the strength of the charge: (i) $0\leq q\leq q_{{\rm ex}}$, (ii) $q_{{\rm ex}} < q < {q_\star}$, (iii) $q_\star \leq q < {q_{\star\star}}$, (iv) ${q_{\star\star}} \leq q$. In case (i) the ISCO lies outside the horizon, while in the horizonless cases (ii)--(iv) the ISCO is the zero-angular-momentum orbit at which the repulsion of the de Sitter-like core balances the gravitational attraction.
\medskip
The circular photon orbits are also studied
by examining the null geodesics in the effective geometry.
We found three types of orbits: outer, middle, and inner.
The outer orbit is unstable and exists for any charge $q$.
The middle one is stable and exists only in the range $q^{\rm ex} < q < q_{\star,\gamma}$. The inner one is unstable, joins the middle one at $q_{\star,\gamma}$, and exists for $0<q\leq q_{\star,\gamma}$ both with and without the horizon. Remarkably, the inner unstable orbit appears between the inner and outer horizons for $0<q<q_{c,\gamma}$, where no stationary motion is allowed in the spacetime geometry.
We have also studied the periapsis shift of massive particle orbits.
We have found that the shift is characterized by the parameter $\nu$ as follows:
\begin{enumerate}
\item $\nu=1$ : the shift receives a negative correction, which can reverse the sign of the shift in the overcharged case;
\item $\nu=2$ : the shift receives a negative correction, which remains small in the weak-field limit;
\item $\nu\geq 3$ : the charge effect is negligible compared to the GR effect.
\end{enumerate}
\medskip
We found that the massless particles and photons can move along stable circular orbits for slightly overcharged spacetimes.
Since the existence of stable null circular orbits is known to cause an instability in the spacetime~\cite{Keir:2014oka,Cardoso:2014sna,Cunha:2017qtt},
it would be interesting to pursue the final state of the spacetime with such orbits.
\medskip
Owing to the existence of circular photon orbits there, the optics inside the horizon would be another interesting subject.
Beyond circular orbits, one can also study the motion of particles and photons falling into the event horizon, which may be an interesting problem as well.
\acknowledgments
The authors thank Tomohiro Harada, Ken-ichi Nakao and Hideki Maeda for useful comments and discussion. The authors also thank Daniele Malafarina
for providing useful comments on the photon orbit.
This work is supported by Toyota Technological Institute Fund for Research Promotion A.
RS was supported by JSPS KAKENHI Grant Number JP18K13541.
ST was supported by JSPS KAKENHI Grant Number 21K03560.
\section{Introduction}
The directional signalling capabilities of base stations (BSs) that have multiple transmit antennas enable a variety of techniques~\cite{SymbollevelandMulticast} for simultaneously transmitting independent messages to multiple single-antenna receivers, including dirty paper coding~\cite{TheCapacityRegion}, vector perturbation precoding~\cite{Avectorperturbationtechnique2}, lattice reduction precoding~\cite{latticereductionaided}, Tomlinson-Harashima precoding~\cite{Precodinginmultiantenna}, rate splitting~\cite{RobustTransmissioninDownlink}, per-symbol beamforming~\cite{ConstructiveMultiuserInterference}, and conventional linear beamforming~\cite{ShiftingtheMIMO}. Of these signalling techniques, conventional linear beamforming has the simplest implementation and will be the focus of this paper. In particular, we will consider scenarios in which the users that have been scheduled for transmission specify the quality-of-service (QoS) that they expect to receive. In that setting, the BS designs the set of beamformers to ensure that the signal-to-interference-and-noise ratio (SINR) at each receiver meets the target level that is implicitly specified by that user's QoS requirements. When the BS has perfect knowledge of the channel to each user, the beamformers that minimize the total transmitted power required to achieve the SINR targets can be efficiently found \cite{Jointoptimal,Reference2,Solutionofthemultiuser,OptimalMultiuserTransmit}. However, in practice these channels are estimated and possibly predicted. In time division duplexing (TDD) systems the estimation is typically performed during the training phase on the uplink, whereas in frequency division duplexing (FDD) systems, each receiver estimates its channel and feeds back a quantized version of that estimate to the BS. Since the BS has only estimates of the users' channels, it can only estimate the receivers' SINRs. 
Those estimates are, quite naturally, uncertain and hence there is a possibility that a design performed using the estimated channels will fail to meet the SINR targets when the beamformers are implemented.
A prominent approach to designing a precoder that can control the consequent outage is to postulate a model for the uncertainty in the channel estimates and to seek designs that control the outage probability under that uncertainty model. In some cases the approach involves jointly designing the beamforming directions and the power allocated to these directions (e.g.,~\cite{Optimalpowercontrol,Probabilisticallyconstrained,OutageConstrained,LowComplexityRobustMISO}), while in other cases the beamforming directions are designed based on the channel estimates only, and the uncertainty model is incorporated into the design of the power loading; e.g., \cite{Coordinateupdate,Arobustmaximin,ATractableMethod}. Unfortunately, in most settings the outage constraint has proven to be intractable (an exception is the case in \cite{Coordinateupdate}), and hence the goal has been to develop computationally efficient algorithms that can manage the outage probability. One possible strategy for doing so is to seek ``safe" approximations of the robust optimization problem \cite{Reference4}. When such approximations result in a feasible design problem, the solution is guaranteed to satisfy the constraints of the original problem, but these approximations can be quite conservative; e.g., \cite{OutageConstrained,Probabilisticallyconstrained}. An alternative strategy is to develop approximations of the outage constraint that typically provide good performance, but might not necessarily guarantee that their solution is feasible for the original problem; e.g., \cite{LowComplexityRobustMISO,Optimalpowercontrol}. The approach taken in this paper falls into that class.
The development of the proposed offset-based approach begins with rewriting the SINR constraint as the non-negativity of a random variable. That random variable is a non-convex quadratic function of the uncertainties, in which the quadratic kernel is a quartic function of the beamformers. Then, we approximate the non-negativity constraint on the random variable by the constraint that its mean is larger than a given multiple of its standard deviation. For the case of Gaussian channel uncertainties, the mean and standard deviation are quadratic and quartic functions of the beamformers, respectively. That fact enables the application of semidefinite relaxation techniques to obtain a convex formulation of (a relaxed version of) the approximated problem. While that design technique is quite effective, the computational cost of solving the convex conic program with semidefinite constraints is significant. By making a further approximation that is suitable for systems with reasonably small uncertainties, we obtain a design formulation whose KKT optimality conditions have a simpler structure. That simpler structure facilitates the development of an approximate solution method that only requires the iterative evaluation of closed-form expressions. Further approximations reveal a connection with the low-complexity technique developed in~\cite{LowComplexityRobustMISO}.
An analysis of the computational cost of these precoder design techniques shows that it is the calculation of the beamforming directions that consumes most of the required computational resources, and that when these directions are defined in-advance, the computational load can be significantly reduced. Accordingly, we develop variants of our precoder design algorithms that perform power loading on a set of fixed beamforming directions. These algorithms have low computational costs, and provide performance that is close to that of the optimal power loading algorithm~\cite{Coordinateupdate}. Furthermore, for systems with a large number of antennas (i.e., ``massive MIMO") in which the channel hardens, we develop a variant of our power loading algorithm that has a computational cost that grows only linearly with the number of antennas.
In practice, the BS has limited power available for transmission, and it is possible that the power required to serve the scheduled users with the required outage probabilities may exceed that limit. In some of these scenarios, certain users suffer from a weak channel, or from having their channels closely aligned with those of other users. When that happens, such users consume most of the power transmitted by the BS, which suggests opportunities to reschedule users. On the other hand, some users might be close to the BS and experience a relatively strong channel, a case that suggests opportunities for power saving. The proposed power loading algorithm provides an explicit relationship between the required outage probabilities and the consumed power, which allows us to address these issues. Using this explicit power-outage relationship, we can reduce the required power when the resulting increases in the outage probabilities are tolerable, and we can identify users that consume excessive amounts of power.
The above-mentioned designs are ``fair" in the sense that they seek to provide each user with their specified outage probability. However, the proposed design techniques are quite flexible, and can accommodate other objectives, such as the sum of the outage probabilities. As we will demonstrate, such designs can improve the average performance of the users.
\section{System model}
We consider a scenario in which a BS that has $N_t$ antennas communicates with $K$ single-antenna users over a narrow-band channel. In the linear beamforming transmission case, the transmitted signal can be written as $\mathbf{x}= \sum_{k=1}^K\mathbf{w}_k s_k,$ where $s_k$ is the normalized data symbol intended for user $k$, and $\mathbf{w}_k$ is the associated beamformer vector. For later reference we let $\mathbf{u}_k =\mathbf{w}_k /\|\mathbf{w}_k \| $ denote the beamforming direction for user $k$, and let $\beta_k=\|\mathbf{w}_k \|^2$ denote the power allocated to that direction. Hence, $\mathbf{w}_k= \sqrt{\beta_k}\mathbf{u}_k $. The received signal at user $k$ is modelled as
\begin{equation}\label{rcvd_sig}
y_k= \mathbf{h}_k^H \mathbf{w}_k s_k + \textstyle\sum_{j \neq k}\mathbf{h}_k^H \mathbf{w}_j s_j + n_k,
\end{equation}
where $\mathbf{h}_k^H$ is the vector of complex channel gains between the antennas at the BS and user $k$, and $n_k$ is the additive zero-mean circular complex Gaussian noise at that user.
Under this model, if we let $\sigma_k^2$ denote the noise variance, then the SINR at user $k$ is
\begin{equation}
\text{SINR}_k= \frac{| \mathbf{h}_k^H \mathbf{w}_k|^2}{\sum_{j \neq k} | \mathbf{h}_k^H \mathbf{w}_j|^2 + \sigma_k^2}.
\end{equation}
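The SINR expression above is straightforward to evaluate numerically. The following numpy sketch is for illustration only (it is not part of the design algorithms of this paper), assuming the channel row vectors $\mathbf{h}_k^H$ are stacked as the rows of \texttt{H} and the beamformers $\mathbf{w}_k$ as the columns of \texttt{W}:

```python
import numpy as np

def sinr(H, W, sigma2):
    """SINR_k with channels h_k^H as rows of H and beamformers w_k as columns of W."""
    G = np.abs(H @ W)**2             # G[k, j] = |h_k^H w_j|^2
    desired = np.diag(G)             # signal power for user k
    interference = G.sum(axis=1) - desired
    return desired / (interference + sigma2)
```

For example, with orthonormal channel rows and matched beamformers of power $\beta_k=2$, the interference vanishes and $\text{SINR}_k = 2/\sigma_k^2$.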
The design of a set of beamformers $\{ \mathbf{w}_k\}_{k=1}^K$ so that the SINRs satisfy specified target values (i.e., $\text{SINR}_k\geq \gamma_k$) requires knowledge of the channel vectors $\{\mathbf{h}_k\}_{k=1}^K$. However, the BS has only estimates of $\{\mathbf{h}_k\}_{k=1}^K$, and hence its estimates of the SINRs at the receivers are uncertain. Accordingly, we will incorporate the channel uncertainty model into the design process. In particular, we will consider systems in which the uncertainty can be modelled using the simple additive model,
\begin{equation}\label{uncertainty}
\mathbf{h}_k= \mathbf{h}_{e_k} +\mathbf{e}_k,
\end{equation}
where $\mathbf{h}_{e_k}$ is the BS's estimate of the channel to user $k$, and the uncertainty in that estimate is characterized by the distribution of the elements of $\mathbf{e}_k$.
In this paper, we will focus on scenarios in which $\mathbf{e}_k$ can be modelled as a circular complex Gaussian random vector with mean $\mathbf{m}_k$ and covariance $\mathbf{C}_k$; i.e., $\mathbf{e}_k \backsim \mathcal{CN} (\mathbf{m}_k, \mathbf{C}_k)$. One scenario in which that model is applicable is that of a TDD scheme operating in a slow fading environment, in which the BS estimates the channel on the uplink using a linear estimator and exploits channel reciprocity. When the channel gains are uncorrelated and the BS employs the best linear unbiased estimator (BLUE), $\mathbf{e}_k \backsim \mathcal{CN} (0, \sigma_{e_k}^2\mathbf{I})$, and we will pay particular attention to that case. (Robust beamforming schemes for uncertainty models tailored to the FDD case were developed in \cite{LowComplexityRobustMISO}.)
Now if we let $\delta_k$ denote the maximum tolerable outage probability for user $k$, the generic joint beamforming and power loading problem can be written as
\begin{subequations}\label{outage_min}
\begin{align}
\min_{\substack{\mathbf{w}_k}} \quad &\textstyle\sum_{k=1}^K \mathbf{w}_k^H \mathbf{w}_k \\
\text{subject to} \quad & \text{Prob}(\text{SINR}_k \geq \gamma_k)\geq 1- \delta_k, \quad \forall k. \label{sinr5}
\end{align}
\end{subequations}
This problem is hard to solve due to the intractable probabilistic outage constraint in \eqref{sinr5} even when the uncertainty is Gaussian~\cite{Optimalpowercontrol,Probabilisticallyconstrained,OutageConstrained}.
In order to resolve that intractability, a variety of approximations of the problem in \eqref{outage_min} by problems that are tractable have been proposed \cite{Optimalpowercontrol,Probabilisticallyconstrained,OutageConstrained,LowComplexityRobustMISO}.
In many cases, the class of approximations that is considered is restricted to the class of ``safe'' approximations~\cite{Reference4}.
Such approximations are structured so that they guarantee that any solution of the approximate problem is feasible for the original problem
in \eqref{outage_min}. However, in the downlink beamforming application, such approximations can be quite conservative, in the sense that the
feasible set of the approximate problem is significantly smaller than that of the original problem; cf. \eqref{outage_min}. That can result in
instances of the approximate problem being infeasible when the original problem has a solution, or in beamformer designs that consume significantly more power than necessary. The approximation that we will develop below is not structurally constrained in this way, but it typically performs well in practice. Furthermore, its simple form provides considerable flexibility in its application, and facilitates the development of highly-efficient algorithms.
\section{Principles of the offset-based approach}\label{sect3}
The derivation of the proposed approximation of the outage probability begins by rewriting $\text{SINR}_k \geq \gamma_k$ as $\mathbf{h}_k^H \mathbf{Q}_k \mathbf{h}_k - \sigma_k^2 \geq 0$, where
\begin{equation}
\begin{aligned}
\mathbf{Q}_k &= \mathbf{w}_k \mathbf{w}_k^H/\gamma_k-\textstyle\sum_{j \neq k} \mathbf{w}_j \mathbf{w}_j^H \\
&= \beta_k \mathbf{u}_k \mathbf{u}_k^H/\gamma_k-\textstyle\sum_{j \neq k} \beta_j \mathbf{u}_j \mathbf{u}_j^H.
\end{aligned}
\end{equation}
That is, the probability that $\text{SINR}_k \geq \gamma_k$ is the same as the probability that the term $\mathbf{h}_k^H \mathbf{Q}_k \mathbf{h}_k - \sigma_k^2$ is non-negative. Under the additive uncertainty model in \eqref{uncertainty}, we observe that $\mathbf{h}_k^H \mathbf{Q}_k \mathbf{h}_k - \sigma_k^2$ is an indefinite quadratic function of the uncertainty, $\mathbf{e}_k$. In particular, we can formulate the SINR constraint as follows
\begin{equation}\label{SINR_reformulation}
f_k(\mathbf{e}_k)=\mathbf{h}_{e_k}^H \mathbf{Q}_k \mathbf{h}_{e_k} + 2 \text{Re}(\mathbf{e}_k^H \mathbf{Q}_k \mathbf{h}_{e_k} ) + \mathbf{e}_k^H \mathbf{Q}_k \mathbf{e}_k - \sigma_k^2 \geq 0.
\end{equation}
The key observation that underlies the offset approximation is that for uncertainties $\mathbf{e}_k$ that are reasonably concentrated, if we design the beamforming vectors so that the mean value of $f_k(\mathbf{e}_k)$, denoted by $\mu_{f_k}$, is a significant multiple of its standard deviation, denoted by $\sigma_{f_k}$, then that user will achieve a low outage probability. If we let $r_k$ denote that multiple for the $k$th user, then the resulting approximation of the SINR constraint, $\text{Prob}(\text{SINR}_k \geq \gamma_k)\geq 1- \delta_k$, can be written as
\begin{equation}\label{offset_constr}
\mu_{f_k} \geq r_k \sigma_{f_k}.
\end{equation}
In order to develop an intuitive rationale for that approximation for the outage probability, we observe that when $\mathbf{e}_k$ in \eqref{uncertainty} is Gaussian, $f_k(\mathbf{e}_k)$ has a generalized chi-square distribution \cite{OntheDistributionof}. We also observe that the term that complicates the calculation of the relevant tail probability (i.e., Prob ($f_k(\mathbf{e}_k)<0$)) is the indefinite quadratic term $\mathbf{e}_k^H \mathbf{Q}_k \mathbf{e}_k$ in \eqref{SINR_reformulation}. To have reasonable outage performance, the norm of the channel uncertainty $\mathbf{e}_k$ in \eqref{uncertainty} should be relatively small compared to the norm of the channel; cf., \cite{MIMObroadcast}. In that case, the constant and linear terms in \eqref{SINR_reformulation} will tend to dominate the quadratic term. Furthermore, the distribution of $ \mathbf{e}_k^H \mathbf{Q}_k \mathbf{e}_k$ is ``bell shaped'' since $\mathbf{Q}_k$ generically has one positive and $K-1$ negative eigenvalues. Now if we approximate the quadratic term $\mathbf{e}_k^H \mathbf{Q}_k \mathbf{e}_k$ by a Gaussian term of the same mean and variance, then the distribution of $f_k(\mathbf{e}_k)$ becomes Gaussian and the constraint in \eqref{offset_constr} provides precise control over the tail probability.
In other words, the constraint in \eqref{offset_constr} provides precise control of the tail probability of the Gaussian approximation of $f_k(\mathbf{e}_k)$. These insights, and the guidance that they provide on the choice of $r_k$, are discussed in more detail in Appendix~\ref{r_value_sel}.
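A quick Monte Carlo sanity check of the dominance argument can be sketched as follows. The dimensions, target, and error level are hypothetical, and the orthonormal directions are used purely for illustration; the sketch compares the spread of the linear and quadratic terms of $f_k(\mathbf{e}_k)$:

```python
import numpy as np

rng = np.random.default_rng(2)
Nt, K, gamma, sigma_e = 8, 3, 1.0, 0.05    # illustrative values

H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)
U, _ = np.linalg.qr(H.T)                    # orthonormal directions (illustration only)
beta = np.ones(K)                           # unit power loading

# Q_k for user 1 (index 0)
Q = beta[0] * np.outer(U[:, 0], U[:, 0].conj()) / gamma \
    - sum(beta[j] * np.outer(U[:, j], U[:, j].conj()) for j in range(1, K))
h = H[0]

# Sample the linear term 2 Re(e^H Q h) and the quadratic term e^H Q e of f_k
N = 20000
Es = sigma_e * (rng.standard_normal((N, Nt)) + 1j * rng.standard_normal((N, Nt))) / np.sqrt(2)
lin = 2 * np.real(Es.conj() @ (Q @ h))
quad = np.real(np.einsum('ni,ij,nj->n', Es.conj(), Q, Es))

std_lin, std_quad = lin.std(), quad.std()
print(std_lin / std_quad)                   # large when sigma_e is small
```

For small $\sigma_{e_k}$ the ratio is large, consistent with the claim that the constant and linear terms dominate the indefinite quadratic term.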
To be able to use the offset approximation in \eqref{offset_constr} in a low-complexity design algorithm, we need to obtain expressions for $\mu_{f_k}$ and $ \sigma_{f_k}$ in terms of the design variables $\mathbf{w}_k= \sqrt{\beta_k}\mathbf{u}_k$. As shown in Appendix \ref{mean_var_der}, when $\mathbf{e}_k \sim \mathcal{CN} (\mathbf{m}_k, \mathbf{C}_k)$,
\begin{subequations}\label{mean_eqn_c}
\begin{align}
\mu_{f_k}& = \mathbb{E} \{f_k(\mathbf{e}_k)\} \nonumber \\
&= (\mathbf{h}_{e_k}+\mathbf{m}_k)^H \mathbf{Q}_k (\mathbf{h}_{e_k}+\mathbf{m}_k) - \sigma_k^2 + \mathbf{w}_k^H \mathbf{C}_k \mathbf{w}_k /\gamma_k \nonumber\\
& \qquad -\sum_{j \neq k} \mathbf{w}_j^H \mathbf{C}_k \mathbf{w}_j, \\
\sigma_{f_k}^2 & = \text{var} \{f_k(\mathbf{e}_k)\} \nonumber \\
&= 2 (\mathbf{h}_{e_k}+\mathbf{m}_k)^H \mathbf{C}_k^{1/2} \mathbf{Q}_{k}^2 \mathbf{C}_k^{1/2} (\mathbf{h}_{e_k}+\mathbf{m}_k) \nonumber \\
&\qquad +\text{tr} (\mathbf{C}_k^{1/2} \mathbf{Q}_{k} \mathbf{C}_k^{1/2} )^2.
\end{align}
\end{subequations}
From the perspective of beamformer design, an important observation is that $\mu_{f_k}$ is a non-convex quadratic function of the beamformers $\{\mathbf{w}_k\}_{k=1}^K$, but for fixed beamforming directions $\{\mathbf{u}_k\}_{k=1}^K$ it is a linear function of the power loading $\{\beta_k\}_{k=1}^K$. The variance $\sigma_{f_k}^2$ is a quartic function of the beamformers, and for fixed directions is a non-convex quadratic function of the power loading. In scenarios in which the model $\mathbf{e}_k \sim \mathcal{CN} (0, \sigma_{e_k}^2\mathbf{I})$ is appropriate, these expressions simplify to
\begin{subequations}
\begin{align}
\mu_{f_k} & = \mathbf{h}_{e_k}^H \mathbf{Q}_k \mathbf{h}_{e_k}- \sigma_k^2 + \sigma_{e_k}^2 \Bigl(\beta_k /\gamma_k - \sum_{j \neq k}\beta_j \Bigr), \label{mean_eqn} \\
\sigma_{f_k}^2 & =2 \sigma_{e_k}^2 \mathbf{h}_{e_k}^H \mathbf{Q}_{k}^2 \mathbf{h}_{e_k} +\sigma_{e_k}^4 \text{tr} (\mathbf{Q}_{k}^2). \label{var_rel}
\end{align}
\end{subequations}
We will focus on this simplified case in the following sections.
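The closed-form expressions in \eqref{mean_eqn} and \eqref{var_rel} can be verified against Monte Carlo estimates. The following sketch does so for user 1 with unit power loading, matched-filter directions, and hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(3)
Nt, K, gamma, sigma2, sigma_e = 6, 3, 1.0, 0.1, 0.1   # hypothetical values

H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)
W = H.T / np.linalg.norm(H, axis=1)        # unit-norm matched-filter beamformers
h = H[0]                                    # estimated channel of user 1 (index 0)
Q = np.outer(W[:, 0], W[:, 0].conj()) / gamma \
    - sum(np.outer(W[:, j], W[:, j].conj()) for j in range(1, K))

# Closed-form mean and variance for e ~ CN(0, sigma_e^2 I); here beta_j = 1 for all j
mu = np.real(h.conj() @ Q @ h) - sigma2 + sigma_e**2 * (1 / gamma - (K - 1))
var = 2 * sigma_e**2 * np.real(h.conj() @ Q @ Q @ h) + sigma_e**4 * np.real(np.trace(Q @ Q))

# Monte Carlo estimates of the mean and variance of f_k(e)
N = 200000
E = sigma_e * (rng.standard_normal((N, Nt)) + 1j * rng.standard_normal((N, Nt))) / np.sqrt(2)
V = h + E
f = np.real(np.einsum('ni,ij,nj->n', V.conj(), Q, V)) - sigma2
print(mu, f.mean(), var, f.var())
```

The sample mean and variance agree closely with the closed-form values, which also confirms that the linear and quadratic terms of $f_k(\mathbf{e}_k)$ are uncorrelated under circular Gaussian uncertainty.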
\section{Offset-based robust beamforming}\label{mod_dir}
As discussed above, the robust beamforming problem in \eqref{outage_min} is fundamentally hard to solve due to the intractability of the probabilistic SINR outage constraint in \eqref{sinr5}. If we were to replace that constraint with its offset approximation, $\mu_{f_k} \geq r_k \sigma_{f_k}$, then the problem in \eqref{outage_min} can be closely approximated by
\begin{subequations}\label{outage_min2}
\begin{align}
\min_{\substack{\mathbf{w}_k}} \quad & \textstyle\sum_{k=1}^K \mathbf{w}_k^H \mathbf{w}_k \\
\text{subject to} \quad & \mu_{f_k} \geq r_k \sigma_{f_k}, \quad \forall k. \label{sinr3}
\end{align}
\end{subequations}
Some insight into the behaviour of solutions to \eqref{outage_min2} can be obtained by observing that when all users are assigned the same value of $r_k$, the beamforming vectors are designed so that users with a large SINR variance are provided with a larger SINR mean. Conversely, users with a lower SINR variance are not provided with as large an SINR mean, since they do not need the same degree of protection against the uncertainty.
To develop an algorithm to obtain good solutions to \eqref{outage_min2}, we observe that in \eqref{sinr3} we have the term $\mu_{f_k}$ which is quadratic in $\mathbf{w}_k$, and we also have the term $ \sigma_{f_k}^2= 2 \sigma_{e_k}^2 \mathbf{h}_{e_k}^H \mathbf{Q}_k^2 \mathbf{h}_{e_k} +\sigma_{e_k}^4 \text{tr} (\mathbf{Q}_k^2) $, which includes the square of the matrix $ \mathbf{Q}_k$ and, accordingly, is quartic in $\mathbf{w}_k$. If we make the substitution $\mathbf{W}_k =\mathbf{w}_k \mathbf{w}_k^H$, then the functions in \eqref{sinr3} become linear and quadratic functions of $\mathbf{W}_k$ and the objective becomes linear. As such, the remaining difficulty in the reformulation of the problem is the set of rank-one constraints on $\mathbf{W}_k$. If we relax those constraints we obtain the following semidefinite relaxation of the problem in \eqref{outage_min2}
\begin{subequations}\label{outage_min4}
\begin{align}
\min_{\substack{\mathbf{W}_k, d_{1k}, d_{2k}}} \quad &\text{tr} \Bigl(\textstyle\sum_{k=1}^K \mathbf{W}_k \Bigr) \\
\text{s.t.} \quad & \mathbf{h}_{e_k}^H \mathbf{Q}_k \mathbf{h}_{e_k}- \sigma_k^2 + \sigma_{e_k}^2 \text{tr}(\mathbf{W}_k) /\gamma_k \nonumber \\
& \quad - \sigma_{e_k}^2 \text{tr} \Bigl(\textstyle\sum_{j\neq k} \mathbf{W}_j \Bigr) \geq r_k \| [d_{1k} \; d_{2k} ]\| , \\
& d_{1k} \geq \sqrt{2} \sigma_{e_k} \| \mathbf{h}_{e_k}^H \mathbf{Q}_k \|, \\
& d_{2k} \geq \sigma_{e_k}^2 \| \mathbf{Q}_k \|_F, \\
& \mathbf{W}_k \succeq \mathbf{0}, \quad \forall k,
\end{align}
\end{subequations}
where $\| \cdot \|_F$ represents the Frobenius norm of the matrix.
In this formulation, each SINR constraint in \eqref{sinr3} is replaced by three second-order cone (SOC) constraints. Thus, the problem in \eqref{outage_min4} is a convex conic optimization problem and can be efficiently solved using interior-point methods. Two refined implementations of those methods are easily accessible through the \textsc{Matlab}-based CVX tool \cite{cvx}. In our numerical experience, the rank of the optimal $\mathbf{W}_k$'s in \eqref{outage_min4} has always been one. When that occurs, the semidefinite relaxation is tight and the optimal beamformer vectors $\mathbf{w}_k$ can be directly obtained from the optimal matrices $\mathbf{W}_k$. This phenomenon has been established in some related beamforming problems \cite{Reference2,RobustSINR,UnravelingtheRankOne}, and has been observed numerically in a number of other downlink beamforming problems; e.g., \cite{OutageConstrained}.
\subsection{Low-complexity precoding algorithm}
Although the problem in \eqref{outage_min4} is convex, it contains $3K$ SOC constraints, plus the $K$ semidefinite constraints. As a result, solving \eqref{outage_min4} incurs a significant computational load even for a moderate number of antennas. In this section, we will first show how a mild approximation of the problem in \eqref{outage_min4} leads to an optimization problem with only $K$ SOC constraints. We will then use insights from the KKT conditions of that problem to show that it can be approximately solved using the iterative evaluation of a sequence of closed-form expressions.
The approximation is based on the observation, made above, that in practical downlink systems the uncertainty in the channel estimates must be small in order for the system to support reasonable rates \cite{MIMObroadcast}. In such scenarios, the term in \eqref{sinr3} containing $\sigma_{e_k}^4$ will typically be significantly smaller than the other term. Accordingly, $\sigma_{f_k}^2 \approx 2 \sigma_{e_k}^2 \mathbf{h}_{e_k}^H \mathbf{Q}_k^2 \mathbf{h}_{e_k}$ is a reasonable approximation.
Applying this approximation in the context of the problem in \eqref{outage_min2} we obtain the following approximation of \eqref{sinr3}
\begin{multline}\label{new_sinr_const}
\mathbf{h}_{e_k}^H \mathbf{Q}_k \mathbf{h}_{e_k}- \sigma_k^2 + \sigma_{e_k}^2 \mathbf{w}_k^H \mathbf{w}_k /\gamma_k - \sigma_{e_k}^2 \sum_{j \neq k} \mathbf{w}_j^H \mathbf{w}_j \\ \geq r_k \sqrt{2} \sigma_{e_k} \|\mathbf{h}_{e_k}^H \mathbf{Q}_k\|.
\end{multline}
The semidefinite relaxation of the resulting approximation of the problem in \eqref{outage_min2} can be written as
\begin{subequations}\label{outage_min5}
\begin{align}
\min_{\substack{\mathbf{W}_k}, \mathbf{d}_k} \quad &\text{tr} \Bigl(\textstyle\sum_{k=1}^K \mathbf{W}_k \Bigr) \\
\text{s.t.} \quad & \mathbf{h}_{e_k}^H \mathbf{Q}_k \mathbf{h}_{e_k}- \sigma_k^2 + \sigma_{e_k}^2 \text{tr}(\mathbf{W}_k) /\gamma_k \nonumber \\
& \quad - \sigma_{e_k}^2 \text{tr} \Bigl(\textstyle\sum_{j\neq k} \mathbf{W}_j \Bigr) \geq \| \mathbf{d}_k \|, \label{sinr4} \\
& \mathbf{d}_k = r _k \sqrt{2} \sigma_{e_k} \mathbf{Q}_k \mathbf{h}_{e_k}, \label{dconst} \\
& \mathbf{W}_k \succeq \mathbf{0}, \quad \forall k.
\end{align}
\end{subequations}
We note that the problem in \eqref{outage_min5} is over-parameterized (the vectors $ \mathbf{d}_k$ are not needed), but this over-parameterization will simplify the following analysis.
The problem in \eqref{outage_min5} is another convex conic program, but it has significantly fewer constraints than that in \eqref{outage_min4}; there are $K$ SOC constraints rather than the $3K$ in \eqref{outage_min4}. While it can be solved with less computational effort than \eqref{outage_min4}, the presence of the semidefinite constraints means that considerable effort is still required. To derive a more efficient algorithm, we examine the Lagrangian of \eqref{outage_min5}, assuming that the matrices $\mathbf{W}_k$ are of rank one. If we let $\nu_k$ denote the dual variable for the constraint in \eqref{sinr4}, and $\boldsymbol\psi_{f_k}$ denote the vector of dual variables for the equality constraint in \eqref{dconst}, the Lagrangian can be written as
\begin{multline}
\mathcal{L}(\mathbf{w}_k, \mathbf{d}_k, \nu_k,\boldsymbol\psi_{f_k})= \sum_{k=1}^{K} \mathbf{w}_k^H \mathbf{w}_k -\sum_{k=1}^{K}\nu_k \Bigl(\mathbf{h}_{e_k}^H \mathbf{Q}_k \mathbf{h}_{e_k} - \sigma_k^2 \\ +\sigma_{e_k}^2 \mathbf{w}_k^H \mathbf{w}_k /\gamma_k - \sigma_{e_k}^2 \sum_{j \neq k}\mathbf{w}_j^H \mathbf{w}_j - \| \mathbf{d}_k \| \Bigr) \\ -\sum_{k=1}^{K} \boldsymbol\psi_{f_k}^H (\mathbf{d}_k - r_k \sqrt{2} \sigma_{e_k} \mathbf{Q}_k \mathbf{h}_{e_k}).
\end{multline}
From the KKT conditions of the problem in \eqref{outage_min5}, we can deduce that
\begin{multline}\label{closed_form}
\mathbf{w}_k =\Biggl( \frac{\nu_k}{\gamma_k}\mathbf{h}_{e_k} \mathbf{h}_{e_k}^H-\sum_{j\neq k} \nu_j \mathbf{h}_{e_j} \mathbf{h}_{e_j}^H + \frac{\nu_k \sigma_{e_k}^2}{\gamma_k} \mathbf{I} -\sum_{j\neq k} \nu_j \sigma_{e_j}^2 \mathbf{I} \\ - \frac{ r_k \sqrt{2} \sigma_{e_k} }{\gamma_k} \text{Re} \{\boldsymbol\psi_{f_k} \mathbf{h}_{e_k}^H \} + \sum_{j\neq k} r_j \sqrt{2} \sigma_{e_j} \text{Re} \{\boldsymbol\psi_{f_j} \mathbf{h}_{e_j}^H \} \Biggr)\mathbf{w}_k,
\end{multline}
which is an eigenvector equation for the direction $\mathbf{u}_k$. Using an approach similar to that used in the perfect-CSI case \cite{OptimalMultiuserTransmit}, we can rearrange this equation to obtain the following fixed-point equation for $\nu_k$,
\begin{multline}\label{nu_mod2}
\nu_k^{-1} = \mathbf{h}_{e_k}^H \Biggl( \mathbf{I} + \sum_j \nu_j \mathbf{h}_{e_j} \mathbf{h}_{e_j}^H - \frac{\nu_k \sigma_{e_k}^2}{\gamma_k} \mathbf{I} +\sum_{j\neq k} \nu_j \sigma_{e_j}^2 \mathbf{I} \\ +\frac{ r_k \sqrt{2} \sigma_{e_k} }{\gamma_k} \text{Re} \{\boldsymbol\psi_{f_k} \mathbf{h}_{e_k}^H \} - \sum_{j\neq k} r_j \sqrt{2} \sigma_{e_j} \text{Re} \{\boldsymbol\psi_{f_j} \mathbf{h}_{e_j}^H \} \Biggr)^{-1} \\ \times \mathbf{h}_{e_k} \Bigl(1+\frac{1}{\gamma_k} \Bigr).
\end{multline}
The expressions in \eqref{closed_form} and \eqref{nu_mod2} share a similar structure to those obtained for the corresponding QoS problem in the case of perfect CSI at the BS \cite{OptimalMultiuserTransmit}, but the matrix components of each equation contain four additional terms that are dependent on the variance of the channel estimation error. To exploit this structure and obtain an efficient algorithm for good solutions to \eqref{outage_min5} we observe that if we were given $\{\boldsymbol\psi_{f_k}\}$, then we could solve the fixed-point equations in \eqref{nu_mod2} for $\{ \nu_k \}$, and then we could solve the eigenvector equations in \eqref{closed_form} for the beamforming directions $\{\mathbf{u}_k\}$. The solution could then be completed by performing the appropriate power loading, which will be explained in the following section. Therefore, if we could find a reasonable approximation for the vectors $\boldsymbol\psi_{f_k}$, we would obtain an iterative closed-form solution. To do so, we observe that the variable $\mathbf{d}_k$ in \eqref{dconst} appears in the Lagrangian in the term $ \nu_k \| \mathbf{d}_k \| -\boldsymbol\psi_{f_k}^H \mathbf{d}_k$.
Accordingly, from the stationarity component of the KKT conditions we have that $\| \boldsymbol\psi_{f_k} \| = \nu_k$ and that $\mathbf{d}_k$ and $\boldsymbol\psi_{f_k}$ point in the same direction; i.e., $\mathbf{d}_k/ \| \mathbf{d}_k\| = \boldsymbol\psi_{f_k} /\| \boldsymbol\psi_{f_k} \|$. Hence, we can write
\begin{equation}\label{psi}
\boldsymbol\psi_{f_k}= \nu_k \mathbf{d}_k/ \| \mathbf{d}_k\|.
\end{equation}
Since $\mathbf{d}_k = r _k \sqrt{2} \sigma_{e_k} \mathbf{Q}_k \mathbf{h}_{e_k}$, $\boldsymbol\psi_{f_k}$ explicitly depends on the beamforming directions, which have not yet been determined. However, we observe that if we substitute \eqref{psi} into \eqref{nu_mod2}, the terms involving $\mathbf{d}_k$ are multiplied by the standard deviation of the error, $\sigma_{e_k}$. As we have already argued in the derivation of the approximations that lead to \eqref{outage_min5}, $\sigma_{e_k}$ will be small in effective downlink beamforming schemes, and this suggests that reasonable initial approximations of the directions should yield a good approximation of $\{ \nu_k \}$, and hence a good set of beamforming directions. We suggest the use of the zero-forcing (ZF) directions \cite{Zeroforcingmethods} for the estimated channels, which we will denote by $\mathbf{u}_{z_k}$. When we use that initialization, the initial direction of $\mathbf{d}_k$ will be the same as $\mathbf{u}_{z_k}$, which allows us to rewrite the fixed-point equations in \eqref{nu_mod2} as
\begin{multline}\label{nu_mod}
\nu_k^{-1} = \mathbf{h}_{e_k}^H \Biggl( \mathbf{I} + \sum_j \nu_j \mathbf{h}_{e_j} \mathbf{h}_{e_j}^H - \frac{\nu_k \sigma_{e_k}^2}{\gamma_k} \mathbf{I} +\sum_{j\neq k} \nu_j \sigma_{e_j}^2 \mathbf{I} \\ +\frac{ r_k \sqrt{2} \sigma_{e_k} \nu_k }{\gamma_k} \text{Re} \{\mathbf{u}_{z_k} \mathbf{h}_{e_k}^H \} - \sum_{j\neq k} r_j \sqrt{2} \sigma_{e_j} \nu_j \text{Re} \{\mathbf{u}_{z_j} \mathbf{h}_{e_j}^H \} \Biggr)^{-1} \\ \times \mathbf{h}_{e_k} \Bigl(1+\frac{1}{\gamma_k} \Bigr).
\end{multline}
The derivations outlined above are summarized in the sequence of closed-form operations in Alg. \ref{Alg1}. While the initial approximation can be improved by using the beamformers obtained in step 4 to obtain a refined estimate of the direction of $\mathbf{d}_k$ and returning to step 2 of the algorithm, the simulation results in Section \ref{sec_sim} suggest that the one-shot approach taken in Alg. \ref{Alg1} produces a solution whose performance is quite close to that of the original offset-based design formulation in \eqref{outage_min4}. That suggests that in the scenarios that we have considered, the underlying approximations are working quite well.
\begin{algorithm}
\caption{Iterative closed-form beamformer design}
\label{Alg1}
\begin{algorithmic}[1]
\State Find the ZF directions $\{\mathbf{u}_{z_k}\}$.
\State Find each $\nu_k$ using \eqref{nu_mod}.
\State Find each $\mathbf{u}_k$ using the corresponding variant of \eqref{closed_form}.
\State Apply the power loading developed in Section \ref{per_user_Power_Loading_algorithm}.
\end{algorithmic}
\end{algorithm}
\subsection{Constant-offset algorithm \cite{LowComplexityRobustMISO}}\label{sec_org_offset_max}
As is apparent from the derivation in the previous section, one of the challenges that complicates the closed-form calculations is the quartic dependence of the variances $\sigma_{f_k}^2$ on the beamforming vectors $\mathbf{w}_k$. One way in which these complications can be reduced is to modify the offset approximation in \eqref{offset_constr} so that the mean, $\mu_{f_k}$, is constrained to be greater than a constant; i.e., the SINR constraint is replaced by $$ \mu_{f_k} \geq r_k. $$
If we make the approximation that the channel estimation errors are small enough that the third term on the right hand side of \eqref{mean_eqn} can be neglected, the semidefinite relaxation of the resulting approximation of \eqref{outage_min2} can be written as
\begin{subequations}\label{r_prob}
\begin{align}
\min_{\substack{\mathbf{W}_k}} \quad &\text{tr} \Bigl(\textstyle\sum_{k=1}^K \mathbf{W}_k \Bigr) \\
\text{s.t.} \quad & \mathbf{h}_{e_k}^H \mathbf{Q}_k \mathbf{h}_{e_k}- \sigma_k^2 \geq r_k, \label{sinr6}\\
& \mathbf{W}_k \succeq \mathbf{0}, \quad \forall k.
\end{align}
\end{subequations}
Interestingly, this problem arose previously in the context of a low-complexity solution to the robust beamforming design problem for FDD and TDD systems that use a zero-outage region approach, and the semidefinite relaxation was shown to be tight \cite{LowComplexityRobustMISO}. The zero-outage region approach provides robustness by requiring that the SINR constraints hold for all channels in a neighbourhood of the estimated channel.
The iterative closed-form solution to \eqref{r_prob} has a similar structure to that in Alg. \ref{Alg1}, but given the simpler structure of the problem, the Lagrange multipliers $\boldsymbol\psi_{f_k}$ disappear, and the expressions in \eqref{closed_form} and \eqref{nu_mod2} simplify to
\begin{equation}\label{closed_form2}
\mathbf{w}_k =\Biggl( \frac{\nu_k}{\gamma_k}\mathbf{h}_{e_k} \mathbf{h}_{e_k}^H-\sum_{j\neq k} \nu_j \mathbf{h}_{e_j} \mathbf{h}_{e_j}^H \Biggr)\mathbf{w}_k,
\end{equation}
\begin{equation}\label{nu}
\nu_k^{-1} = \mathbf{h}_{e_k}^H \Bigl(\mathbf{I}+\textstyle\sum_{j} \nu_j \mathbf{h}_{e_j} \mathbf{h}_{e_j}^H \Bigr)^{-1} \mathbf{h}_{e_k} \Bigl(1+\frac{1}{\gamma_k} \Bigr).
\end{equation}
After obtaining the beamforming directions from \eqref{nu} and \eqref{closed_form2}, the power loading in \cite{LowComplexityRobustMISO} is performed based on the fact that the constraints in \eqref{sinr6} are satisfied with equality at optimality. (If this were not the case for constraint $k$, then the power allocated to $\mathbf{w}_k$ could be reduced in a way that will still satisfy all the constraints and provide a lower objective value, contradicting the presumed optimality.) While doing so generates a solution to \eqref{r_prob}, significant performance gains can be obtained when the beamforming directions obtained from \eqref{closed_form2} are combined with the power loading algorithm presented in Section \ref{per_user_Power_Loading_algorithm}.
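The computations in \eqref{nu} and \eqref{closed_form2} can be sketched as follows with hypothetical channel realizations and targets. Note that adding $\nu_k \mathbf{h}_{e_k} \mathbf{h}_{e_k}^H \mathbf{w}_k$ to both sides of \eqref{closed_form2} shows that $\mathbf{u}_k \propto (\mathbf{I}+\sum_j \nu_j \mathbf{h}_{e_j} \mathbf{h}_{e_j}^H)^{-1}\mathbf{h}_{e_k}$, which is the form used below to extract the directions:

```python
import numpy as np

rng = np.random.default_rng(4)
Nt, K = 8, 3
gammas = np.array([1.0, 2.0, 0.5])          # illustrative SINR targets

H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)

# Fixed-point iteration for the dual variables nu_k
nu = np.ones(K)
for _ in range(500):
    M = np.eye(Nt) + sum(nu[j] * np.outer(H[j], H[j].conj()) for j in range(K))
    Minv = np.linalg.inv(M)
    q = np.array([np.real(H[k].conj() @ Minv @ H[k]) for k in range(K)])
    nu_new = 1.0 / ((1 + 1 / gammas) * q)
    if np.max(np.abs(nu_new - nu)) < 1e-12:
        nu = nu_new
        break
    nu = nu_new

# Directions: u_k proportional to (I + sum_j nu_j h_j h_j^H)^{-1} h_k
M = np.eye(Nt) + sum(nu[j] * np.outer(H[j], H[j].conj()) for j in range(K))
U = np.linalg.solve(M, H.T)
U = U / np.linalg.norm(U, axis=0)
```

At the fixed point, each normalized column of `U` satisfies the eigenvector relation in \eqref{closed_form2} with unit eigenvalue, which the iteration reaches to numerical precision.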
\subsection{Complexity analysis and further approximations}\label{sec_approxs}
The problems in \eqref{outage_min4} and \eqref{outage_min5} are convex optimization problems with SOC and semidefinite constraints. General-purpose interior-point methods for such problems require $\mathcal{O}(N_t^6)$ operations per iteration, which represents a significant computational load.
In contrast, the key computational steps in the iterative closed-form approximation, Alg. \ref{Alg1}, are those in \eqref{closed_form}, \eqref{nu_mod} and the calculation of the ZF directions that are used in the initialization. The ZF directions can be obtained in $\mathcal{O}(N_t^2 K)$ operations.
The computational cost of solving \eqref{nu_mod} is dominated by the matrix inversion required for each user, and hence it grows as $\mathcal{O}(N_t^3 K)$. In step 3, we can exploit the factorized matrix structure in \eqref{closed_form}, which allows for an efficient use of the power iteration method; the cost of that step therefore grows as $\mathcal{O}(N_t K^2)$. Accordingly, it is the computation of the Lagrange multipliers in \eqref{nu_mod} that dominates the cost of computing the beamforming directions.
The constant-offset algorithm \cite{LowComplexityRobustMISO} that was reviewed in Section~\ref{sec_org_offset_max} does not require an initial set of directions and the expression for $\nu_k$ is significantly simpler. In particular, the matrix to be inverted is the same for each user, which reduces the number of computations required to $\mathcal{O}(N_t^3)$. Furthermore, additional approximations can be applied to avoid the matrix inversion altogether.
When the channels are nearly orthogonal, as they tend to be in massive MISO channels that ``harden'' as the number of antennas increases \cite{Multipleantennachannelhardening}, then if we let $\alpha_k= \|\mathbf{h}_{e_k}\|^2$, we can write $\textstyle\sum_{j} \nu_j \mathbf{h}_{e_j} \mathbf{h}_{e_j}^H$ in the form of an eigen decomposition $\textstyle\sum_{j} \nu_j \alpha_j \frac{\mathbf{h}_{e_j}}{\sqrt{\alpha_j} } \frac{\mathbf{h}_{e_j}^H}{\sqrt{\alpha_j}}$,
and hence,
$$\mathbf{h}_{e_k}^H \Bigl(\mathbf{I}+\textstyle\sum_{j} \nu_j \alpha_j \frac{\mathbf{h}_{e_j}}{\sqrt{\alpha_j} } \frac{\mathbf{h}_{e_j}^H}{\sqrt{\alpha_j}} \Bigr)^{-1} \mathbf{h}_{e_k} \approx \frac{\alpha_k}{1+\nu_k \alpha_k}.$$
Accordingly, we can approximate \eqref{nu} by $$\nu_k\approx \gamma_k/\alpha_k.$$
To find the channel norms $\alpha_k= \| \mathbf{h}_{e_k} \|^2$ we need only $\mathcal{O}(N_t)$ operations. Hence, that approximation enables us to compute all $\nu_k$s in only $\mathcal{O}(N_t K)$ operations.
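The quality of the approximation $\nu_k\approx \gamma_k/\alpha_k$ can be checked numerically against the fixed point of \eqref{nu}. In the following sketch (hypothetical channels, with a large array so that the channels are nearly orthogonal), the gap between the two is small:

```python
import numpy as np

rng = np.random.default_rng(5)
Nt, K = 128, 4                              # large array: nearly orthogonal channels
gammas = np.full(K, 2.0)

H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)
alpha = np.linalg.norm(H, axis=1) ** 2
nu_approx = gammas / alpha                  # closed-form approximation, O(N_t K) to obtain

# Solve the fixed-point equations in (nu) for comparison
nu = nu_approx.copy()
for _ in range(200):
    M = np.eye(Nt) + sum(nu[j] * np.outer(H[j], H[j].conj()) for j in range(K))
    S = np.linalg.solve(M, H.T)             # columns are M^{-1} h_k
    q = np.real(np.einsum('ki,ik->k', H.conj(), S))
    nu = 1.0 / ((1 + 1 / gammas) * q)

rel_err = np.max(np.abs(nu - nu_approx) / nu)
print(rel_err)                              # small for nearly orthogonal channels
```

For this array size the approximation typically lands within a few percent of the exact fixed point, at a small fraction of the computational cost.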
\section{Offset-based robust power Loading}\label{per_user_Power_Loading_algorithm}
In this section, we will show how to apply the offset-based approach to the power loading problem that remains if the beamforming directions are chosen separately. Examples of choices for those directions include the maximum ratio transmission (MRT), zero-forcing (ZF), or regularized zero-forcing (RZF) directions, which are calculated from the estimated channels, or any of the directions generated by the previously described algorithms. Once the directions are chosen, we can rewrite the problem in \eqref{outage_min2} as
\begin{subequations}\label{outage_min6}
\begin{align}
\min_{\substack{\beta_k}} \ \quad & \sum_{k=1}^K \beta_k \\
\text{subject to} \quad & \mu_{f_k} \geq r_k \sigma_{f_k}, \quad \forall k, \label{sinr2}
\end{align}
\end{subequations}
where for fixed directions $\{\mathbf{u}_k\}$ the expressions for $\mu_{f_k}$ and $\sigma_{f_k}$ in \eqref{mean_eqn} and \eqref{var_rel} simplify to
\begin{subequations}\label{mean_eqn2}
\begin{align}
\mu_{f_k} & = |\mathbf{h}_{e_k}^H \mathbf{u}_k|^2 \beta_k/\gamma_k - \sum_{j \neq k} |\mathbf{h}_{e_k}^H \mathbf{u}_j|^2 \beta_j- \sigma_k^2 \nonumber\\
& \qquad +\sigma_{e_k}^2 \Bigl(\beta_k /\gamma_k - \sum_{j \neq k}\beta_j \Bigr), \\
\sigma_{f_k}^2 & =2 \sigma_{e_k}^2 \mathbf{h}_{e_k}^H \Bigl(\beta_k \mathbf{u}_k \mathbf{u}_k^H/\gamma_k-\sum_{j \neq k}\beta_j \mathbf{u}_j \mathbf{u}_j^H \Bigr)^2 \mathbf{h}_{e_k} \nonumber\\ &\qquad +\sigma_{e_k}^4 \text{tr} \Bigl(\beta_k \mathbf{u}_k \mathbf{u}_k^H/\gamma_k-\sum_{j \neq k}\beta_j \mathbf{u}_j \mathbf{u}_j^H \Bigr)^2. \label{var_rel2}
\end{align}
\end{subequations}
Since $\mu_{f_k} $ is linear in $\{\beta_k\}$ and $\sigma_{f_k}$ is a convex quadratic function of $\{\beta_k\}$, the problem in \eqref{outage_min6} can be rewritten as an SOC programming problem, and an optimal solution can be efficiently obtained using generic interior-point methods. However, to begin to develop a more efficient algorithm that exploits some of the specific features of the problem in \eqref{outage_min6}, we observe that at optimality the constraints in \eqref{sinr2} hold with equality. If this were not the case for constraint $k$, then $\beta_k $ could be reduced in a way that still satisfies the constraints and yet provides a lower objective value, which would contradict the presumed optimality. To use that observation, we note that if the variances $\sigma_{f_k}^2$ are fixed, then the set of equations $\{\mu_{f_k} =r_k \sigma_{f_k}\}$ yields $K$ linear equations in the $K$ design variables $\{\beta_k \}_{k=1}^K$. If we define $\boldsymbol{\beta}=[\beta_1, \beta_2,\dots, \beta_K]^T$, $\boldsymbol{\sigma}_f=[\sigma_{f_1}, \sigma_{f_2},\dots, \sigma_{f_K}]^T$, $\boldsymbol{\sigma}=[\sigma_{1}, \sigma_{2},\dots, \sigma_{K}]^T$, $\mathbf{r}=[r_1, r_2,\dots,r_K]^T$, and the matrix $\mathbf{A}$ such that $\mathbf{[A]}_{ii}= | \mathbf{h}_{e_i}^H {\mathbf{u}}_i |^2/\gamma_i+\sigma_{e_i}^2 /\gamma_i $, and $\mathbf{[A]}_{ij}= - | \mathbf{h}_{e_i}^H {\mathbf{u}}_j |^2 - \sigma_{e_i}^2$, $\forall i \neq j$, then the set of linear equations can be written as
\begin{equation}\label{A_eqn}
\mathbf{A} \boldsymbol{\beta} =\boldsymbol{\sigma}^2+ \boldsymbol\sigma_{f} \odot \mathbf{r},
\end{equation}
in which $\odot$ represents element-by-element multiplication. Once the values of $\{\beta_k \}$ have been found, we can update the value of $\boldsymbol{\sigma}_f$ using \eqref{var_rel2}. That suggests the iterative linearization algorithm for solving \eqref{outage_min6} that is summarized in Alg.~\ref{Alg2}.
\begin{algorithm}
\caption{The power loading algorithm}
\label{Alg2}
\begin{algorithmic}[1]
\State Initialize $\sigma_{f_k}=1$. Compute $\mathbf{A}$ and $\mathbf{A}^{-1}$.
\State Find $\boldsymbol{\beta}$ by solving the set of linear equations in \eqref{A_eqn}.
\State Update each $\sigma_{f_k}$ using \eqref{var_rel2}.
\State Return to 2 until a termination criterion is satisfied.
\end{algorithmic}
\end{algorithm}
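A minimal NumPy sketch of Alg.~\ref{Alg2} for ZF directions is given below; all parameter values are hypothetical. At convergence the constraints $\mu_{f_k} = r_k \sigma_{f_k}$ hold to numerical precision:

```python
import numpy as np

rng = np.random.default_rng(6)
Nt, K = 8, 3
gammas = np.array([1.0, 1.5, 0.8])          # SINR targets (hypothetical)
sigma2 = np.full(K, 0.1)                    # noise powers (hypothetical)
sigma_e = np.full(K, 0.05)                  # per-user error stds (hypothetical)
r = np.full(K, 2.0)                         # common offset multiple

H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)
U = np.linalg.pinv(H.conj())                # ZF directions: h_e_i^H u_j = 0 for i != j
U = U / np.linalg.norm(U, axis=0)

G = np.abs(H.conj() @ U) ** 2               # G[i, j] = |h_e_i^H u_j|^2
A = -(G + sigma_e[:, None] ** 2)            # off-diagonal entries of A
np.fill_diagonal(A, (np.diag(G) + sigma_e ** 2) / gammas)

def sigma_f(beta):
    """sigma_f_k from (var_rel2) for the current power loading."""
    out = np.empty(K)
    for k in range(K):
        Qk = beta[k] * np.outer(U[:, k], U[:, k].conj()) / gammas[k] \
            - sum(beta[j] * np.outer(U[:, j], U[:, j].conj()) for j in range(K) if j != k)
        out[k] = np.sqrt(2 * sigma_e[k]**2 * np.real(H[k].conj() @ Qk @ Qk @ H[k])
                         + sigma_e[k]**4 * np.real(np.trace(Qk @ Qk)))
    return out

# Alternate between the linear solve (A_eqn) and the variance update
sf = np.ones(K)
for _ in range(50):
    beta = np.linalg.solve(A, sigma2 + r * sf)
    sf = sigma_f(beta)
print(beta)
```

Because $\sigma_{e_k}$ is small, the variance update barely perturbs $\boldsymbol\beta$, so the iteration converges in very few passes, consistent with the fixed-point discussion that follows.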
By observing the dependence of $\boldsymbol{\sigma}_f$ on $\boldsymbol{\beta}$ in \eqref{var_rel2}, Alg. \ref{Alg2} can be written as a fixed-point iteration of the form $\boldsymbol{\beta} =\mathbf{A}^{-1} \boldsymbol{\sigma}^2+ \mathbf{A}^{-1} (\boldsymbol\sigma_{f} \odot \mathbf{r})$. The eigenvalues of $\mathbf{A}^{-1}$ determine the convergence properties of this iteration. Since the matrix $\mathbf{A}$ typically has large diagonal elements, representing the signal powers, and smaller off-diagonal elements, representing the interference powers, the eigenvalues of $\mathbf{A}^{-1}$ will typically be less than one. Our numerical experience not only confirms this observation, but also suggests that the number of iterations needed for near-optimal performance is very small. In terms of computational cost, the initialization step in Alg. \ref{Alg2} requires $\mathcal{O}(K^2 N_t)$ operations to compute $\mathbf{A}$ and $\mathcal{O}(K^3)$ operations to compute $\mathbf{A}^{-1}$. In each iteration the computational cost of step 2 is $\mathcal{O}(K^2)$ operations, and the cost of step 3 is $\mathcal{O}(K N_t^2)$ operations.
\subsection{Simplifying the SINR variance calculation}\label{simplified_var_subsect}
The above analysis shows that the only step in Alg. \ref{Alg2} whose computational cost grows faster than linearly in the number of antennas is the computation of $\sigma_{f_k}$. In massive MISO systems, the resulting computational load can be significant. To reduce the required computations, we observe that when the number of antennas is large and the channels are uncorrelated, the inner product between different channels will typically be relatively small. Since the beamforming directions will typically be closely aligned with the channel vectors, the inner product between different beamforming vectors will likely be small as well. This observation suggests removing the cross terms $\mathbf{u}_j^H \mathbf{u}_k, \forall j \neq k$ in \eqref{var_rel2}. That would yield the following approximations
\begin{equation}
\begin{aligned}
\mathbf{h}_{e_k}^H \mathbf{Q}_{k} \mathbf{Q}_{k} \mathbf{h}_{e_k} &= \mathbf{h}_{e_k}^H \Bigl(\beta_k \mathbf{u}_k \mathbf{u}_k^H/\gamma_k-\sum_{j \neq k}\beta_j \mathbf{u}_j \mathbf{u}_j^H \Bigr)^2 \mathbf{h}_{e_k} \\
&\approx |\mathbf{h}_{e_k}^H \mathbf{u}_k|^2 \beta_k^2/\gamma_k^2 + \sum_{j \neq k} |\mathbf{h}_{e_k}^H \mathbf{u}_j|^2 \beta_j^2,
\end{aligned}
\end{equation}
and
\begin{equation}
\begin{aligned}
\text{tr} (\mathbf{Q}_{k}^2)&= \text{tr} \Bigl(\beta_k \mathbf{u}_k \mathbf{u}_k^H/\gamma_k-\sum_{j \neq k}\beta_j \mathbf{u}_j \mathbf{u}_j^H \Bigr)^2 \\
&\approx \text{tr} \Bigl(\beta_k^2 \mathbf{u}_k \mathbf{u}_k^H \mathbf{u}_k \mathbf{u}_k^H /\gamma_k^2+\sum_{j \neq k}\beta_j^2 \mathbf{u}_j \mathbf{u}_j^H \mathbf{u}_j \mathbf{u}_j^H \Bigr) \\
&= \beta_k^2/\gamma_k^2+\sum_{j \neq k}\beta_j^2.
\end{aligned}
\end{equation}
The numerical results presented in Section~\ref{sec_sim} indicate that these approximations result in designs that are very close in performance to those obtained from the original formulations, even when the number of antennas is quite small. Furthermore, since the terms $|\mathbf{h}_{e_k}^H \mathbf{u}_j|^2$ are already computed in the initialization step that constructs the matrix $\mathbf{A}$, these approximations reduce the computational cost of updating $\boldsymbol\sigma_f$ in step 3 of Alg.~\ref{Alg2} from $\mathcal{O}(N_t^2 K)$ to $\mathcal{O}(K^2)$.
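The quality of the cross-term-free approximation of $\text{tr}(\mathbf{Q}_k^2)$ can be probed numerically. In the sketch below, randomly drawn unit-norm directions stand in for actual beamformers; with many antennas the inner products between different columns are small, so the relative error of the diagonal approximation is small.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, K, k = 64, 4, 0                    # many antennas -> nearly orthogonal directions
U = rng.standard_normal((Nt, K))       # surrogate beamforming directions (illustrative)
U /= np.linalg.norm(U, axis=0)         # unit-norm columns
beta = rng.uniform(0.5, 1.5, K)
gamma_k = 4.0                          # SINR target of user k (illustrative value)

# Q_k = beta_k u_k u_k^H / gamma_k - sum_{j != k} beta_j u_j u_j^H
Q = beta[k] / gamma_k * np.outer(U[:, k], U[:, k])
for j in range(K):
    if j != k:
        Q -= beta[j] * np.outer(U[:, j], U[:, j])

exact = np.trace(Q @ Q)                                   # tr(Q_k^2), O(Nt^2) work
approx = beta[k]**2 / gamma_k**2 + sum(
    beta[j]**2 for j in range(K) if j != k)               # cross terms dropped, O(K)
rel_err = abs(exact - approx) / exact
```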
\subsection{User rescheduling}\label{userrescheduling}
One of the fundamental characteristics of the original outage constrained beamformer design problem in \eqref{outage_min2} is that for a certain set of channel estimates the problem may be infeasible. That is, there may be no set of beamformers that can satisfy the outage constraints. Furthermore, even when the problem is feasible, the solution may be impractical in the sense that the minimum transmission power required to satisfy the outage constraints may exceed the capability of the BS. The approximations of the original formulation in \eqref{outage_min4} and \eqref{outage_min5} retain these characteristics, and the power loading problem in \eqref{outage_min6} retains them, too. Fortunately, as we now explain, for systems in which each user specifies the same value for $r$, the structure of a closely-related power loading problem provides insights into which users should be rescheduled in order for the problem in \eqref{outage_min6} to be feasible, and for the solution of the problem to be within the capabilities of the BS. The auxiliary power loading problem that we will consider is that of maximizing a common offset coefficient subject to an explicit power constraint, namely
\begin{subequations}\label{outage_min7}
\begin{align}
\max_{\substack{\beta_k},r} \quad & r \\
\text{subject to} \quad & \textstyle\sum_{k=1}^K \beta_k \leq P_t, \label{pwr_cont} \\
\quad & \mu_{k} \geq r \sigma_{f_k}, \quad \forall k \label{sinr7},
\end{align}
\end{subequations}
where $P_t$ denotes the maximum transmission power of the BS. This problem is always feasible whenever all the estimated channels are different. (The value of $r$ can be decreased until all components of \eqref{sinr7} can be satisfied using a power loading that satisfies \eqref{pwr_cont}.) However, negative values and small positive values of $r$ correspond to cases with high probability of outage. The problem in \eqref{outage_min7} can be solved using an algorithm similar to that in Alg.~\ref{Alg2}. However, at the step analogous to step 2 of Alg.~\ref{Alg2}, we need an additional equation to determine the value for $r$. That equation arises from observing that the power constraint in \eqref{pwr_cont} holds with equality at optimality, and hence, from \eqref{A_eqn} and \eqref{pwr_cont} we have that
$$ r = \frac{P_t- \boldsymbol{1}^T \mathbf{A}^{-1} \boldsymbol\sigma^2}{ \boldsymbol{1}^T\mathbf{A}^{-1} \boldsymbol\sigma_{f}},$$
where $\boldsymbol{1}$ is the vector with all elements equal to one. This equation clearly demonstrates the relationship between the power budget and the robustness. More importantly, it shows that the users that correspond to the largest elements of $\mathbf{A}^{-1} \boldsymbol\sigma^2$ are the ones that play the biggest role in constraining the extent of robustness that can be obtained. That suggests that if the optimal value of $r$ in \eqref{outage_min7} is not large enough to provide the desired robustness level, one or more of those users corresponding to large values of $\mathbf{A}^{-1} \boldsymbol\sigma^2$ should be rescheduled. (We note that the use of good user selection algorithms, e.g., \cite{Ontheoptimalityofmultiantenna}, prior to the design of the beamforming directions will reduce the need to reschedule users, but the inherent capability of the proposed power loading algorithms to perform rescheduling provides significant performance gains when the initial user selection is imperfect.)
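The closed-form expression for $r$ leads directly to a simple procedure for evaluating the achievable common offset and flagging the users that limit it. The sketch below is illustrative; the helper name and the returned quantities are ours, not part of the formal development.

```python
import numpy as np

def max_common_offset(A, sigma2, sigma_f, Pt):
    """Largest common offset r under the power budget Pt.

    Assumes the constraints mu_k >= r * sigma_{f_k} hold with equality, so that
    beta = A^{-1} sigma^2 + r * A^{-1} sigma_f, and that the power constraint
    sum(beta) = Pt is tight at optimality.
    """
    A_inv = np.linalg.inv(A)
    base = A_inv @ sigma2            # power needed with no robustness margin
    slope = A_inv @ sigma_f          # additional power per unit of offset r
    r = (Pt - base.sum()) / slope.sum()
    beta = base + r * slope
    # users with the largest entries of `base` (i.e., A^{-1} sigma^2) are the
    # natural candidates for rescheduling when r is too small
    return r, beta, base
```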
Once the optimal value of the auxiliary problem in \eqref{outage_min7} exceeds the desired value for $r$, the power minimization problem in \eqref{outage_min6} can be solved. Since the distribution of $f_k(\mathbf{e}_{k})$ is dominated by the Gaussian terms, values of $r$ in the range of 2 to 5 would be sufficient to obtain outage probabilities consistent with the expectations of contemporary applications; see Appendix~\ref{r_value_sel}.
\subsection{Average outage}\label{mod_algo}
The design formulations that we have considered up until this point have taken the form of minimization of the transmission power subject to (an approximation of) an outage constraint on each user for the current realizations of the channels. However, as we now illustrate, the proposed design approach is quite flexible and can accommodate other notions of outage.
Let us assume that we have the optimal power loading and offset coefficient for the problem in \eqref{outage_min7}, which provide all the users with essentially the same outage probability. We will denote those values by $\{\beta_k^\star\}_{k=1}^K$ and $r^\star$. Given this solution, the goal of this section is to perturb the value of the offset coefficient for each user so as to minimize the average outage probability over the users, and to adjust the power loading accordingly. To do so, we let $\delta_{r_k}$ denote the perturbation on the $k$th user's offset coefficient; i.e., $r_k=r^\star+\delta_{r_k}$.
As discussed in Section~\ref{sect3}, in typical operating scenarios the distribution of $f_k(\mathbf{e}_k)$ can be accurately approximated by a Gaussian distribution. In that case, the outage probability for a given value of the offset coefficient $r_k$ is simply the value of the complementary cumulative distribution function (CCDF) of the standardized normal distribution, $\mathcal{N} (0, 1)$, at the value of $r_k$. If we let $g(\cdot)$ denote the CDF of the standard normal distribution, the problem of minimizing the outage probability becomes
\begin{subequations}
\begin{align}
\max_{\substack{\beta_k, \delta_{r_k}}} \quad & \textstyle\sum_{k=1}^K g(r^\star+\delta_{r_k}) \\
\text{s.t.} \quad & \textstyle\sum_{k=1}^K \beta_k = P_t,
\end{align}
\end{subequations}
where the condition $\textstyle\sum_{k=1}^K \beta_k = P_t$ ensures that the power used after perturbation will be the same as that used by the solution to \eqref{outage_min7}. That constraint can be shown to be equivalent to the linear constraint $\boldsymbol{1}^T \mathbf{A}^{-1} (\boldsymbol\sigma_{s} \odot \boldsymbol{\delta_{r}} )=0$, where $\boldsymbol{\delta_{r}}$ is the vector containing the scalars $\delta_{r_k}$. Furthermore, the CDF $g(\cdot)$ can be well approximated by a quadratic curve; see Fig.~\ref{fig1}.
\begin{figure}
\begin{center}
\epsfysize= 2.4in
\epsffile{fig2.eps}
\caption{The CDF of the standardized normal distribution, $\mathcal{N} (0, 1)$, denoted $g(r)$, and its least squares quadratic approximation over $r \in [1,3]$.
}\label{fig1}
\end{center}
\end{figure}
With this approximation in place, the problem can be stated as the following convex problem in $\boldsymbol\delta_{r}$
\begin{subequations}\label{pert_eqn}
\begin{align}
\max_{\substack{\boldsymbol{\delta_{r}}}} \quad & \textstyle\sum_{k=1}^K a_0(r^\star+\delta_{r_k})^2+ a_1 (r^\star+\delta_{r_k}) +a_2\\
\text{s.t.} \quad & \boldsymbol{1}^T \mathbf{A}^{-1} (\boldsymbol\sigma_{s} \odot \boldsymbol{\delta_{r}})=0,
\end{align}
\end{subequations}
where $a_0, a_1$ and $a_2$ are the coefficients of the quadratic approximation of $g(r)$.
If we let $\mathbf{b}= (\boldsymbol{1}^T \mathbf{A}^{-1}) \odot \boldsymbol\sigma_{s}$, then using an analysis of the KKT conditions of \eqref{pert_eqn}, we can derive the dual variable $\zeta$ of the equality constraint as
$$\zeta=\frac{-(2 a_0 r^\star +a_1) \mathbf{b}^H \boldsymbol{1}}{\mathbf{b}^H \mathbf{b}},$$
and the required $\boldsymbol{\delta_{r}}$ as
$$\boldsymbol{\delta_{r}}=\frac{-(2 a_0 r^\star + a_1)\boldsymbol{1} -\zeta \mathbf{b} }{2 a_0}.$$
Accordingly, whenever we have the optimal solution $\{\beta_k^\star\}_{k=1}^K$ and $r^\star$ of the problem in \eqref{outage_min7}, we can calculate $\zeta$, and the resulting perturbations of the offset coefficient $\boldsymbol{\delta_{r}}$. The modified offset coefficient vector $\mathbf{r}$ can be updated using $r_k=r^\star+\delta_{r_k}$. The power loading $\{\beta_k\}_{k=1}^K$ is then updated by using the linear equations arising from \eqref{sinr7} holding with equality; i.e., $\boldsymbol{\beta} =\mathbf{A}^{-1} \boldsymbol{\sigma}^2+ \mathbf{A}^{-1} (\boldsymbol\sigma_{f} \odot \mathbf{r})$.
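The perturbation step can be summarized in a few lines of Python. The quadratic coefficients are obtained here by a least-squares fit of the standard normal CDF over $[1,3]$, as in Fig.~\ref{fig1}; the function name and test values are illustrative.

```python
import numpy as np
from math import erfc

# least-squares quadratic fit a0 r^2 + a1 r + a2 of the normal CDF g(r) on [1, 3]
rs = np.linspace(1.0, 3.0, 201)
g_vals = 1.0 - 0.5 * np.array([erfc(x / np.sqrt(2.0)) for x in rs])
a0, a1, a2 = np.polyfit(rs, g_vals, 2)            # a0 < 0 since g is concave on [1, 3]

def offset_perturbation(A, sigma_s, r_star):
    """Closed-form KKT solution for the offset perturbations delta_r."""
    ones = np.ones(A.shape[0])
    b = (ones @ np.linalg.inv(A)) * sigma_s       # b = (1^T A^{-1}) (.) sigma_s
    zeta = -(2 * a0 * r_star + a1) * (b @ ones) / (b @ b)
    return (-(2 * a0 * r_star + a1) * ones - zeta * b) / (2 * a0)
```

By construction the returned $\boldsymbol{\delta_r}$ satisfies the power-preserving constraint $\mathbf{b}^T \boldsymbol{\delta_r} = 0$.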
\section{Simulation results}\label{sec_sim}
In this section, we will provide three sets of numerical results. First, we will provide simulation results that show the validity of the offset-based algorithms and compare the performance of the algorithms presented here to that of zero-outage region algorithms that obtain robustness by ensuring that outage does not occur for uncertainties that lie in a given region. Specifically we will compare with the sphere bounding (SB) algorithm presented in \cite{OutageConstrained}. Second, we will provide comparisons between the performance of the offset-based power loading algorithms proposed in Section \ref{per_user_Power_Loading_algorithm}, the optimal power loading algorithm in \cite{Coordinateupdate}, and the perturbation-based power loading algorithm that seeks to minimize the averaged outage, which was presented in Section~\ref{mod_algo}. In the third set of simulation results, we will demonstrate the performance gains that can be obtained by using the user rescheduling and the power saving described in Section~\ref{userrescheduling}. We will also show the validity of the low-complexity approximations presented in Section~\ref{sec_approxs}.
For the initial simulation setup, we will consider a downlink system in which a BS serves three single-antenna users. We will assume that the BS has four antennas, and the three users are randomly distributed within a radius of 3.2km. The large scale fading is described by a path-loss exponent of 3.52 and log-normal shadow fading with 8dB standard deviation, and the small scale fading is modelled using the standard i.i.d. Rayleigh model. The channel estimation error is assumed to be zero-mean and Gaussian with covariance $\sigma_{e_k}^2 \mathbf{I}$. The receiver noise level is -90dBm, and the SINR target is set to 6dB. A simple channel-strength user selection technique is employed, in which users are served only if $ 100\|\mathbf{h}_{e_k}\|^2/ \sigma_k^2 \geq \gamma_k$, where 100 serves as the implicit total power constraint.
Each of the algorithms that we consider involves a choice of a robustness measure. For the algorithms provided in this paper the robustness measure is the value of the offset coefficient $r_k$. For the sphere bounding algorithm in \cite{OutageConstrained} it is the size of the zero-outage region, and for the power loading algorithm in \cite{Coordinateupdate} it is directly the outage probability. To plot the performance curves, we randomly generate a set of channel realizations and provide the BS with estimates of those channels. Each algorithm is then used to design a set of beamformers that should provide the specified robustness. Using those beamformers we determine whether or not any user in the system with the actual channel realizations is in outage, and we calculate the corresponding transmission power. By repeating this experiment over thousands of channel realizations, we can plot the average outage probability over the users versus the average transmission power for the different algorithms when these algorithms provide a viable solution; by which we mean a solution that satisfies the constraints using a transmitted power that is less than 100. In fairness to all methods, the average is taken over those channel realizations for which all methods produce a viable solution.
In Fig.~\ref{sim1}, assuming $\sigma_{e_k}=0.1$, we plot the average outage probability versus the average total transmitted power for the proposed robust beamforming algorithms in \eqref{outage_min4}, \eqref{outage_min5}, Alg. \ref{Alg1}, and that of a system with the constant-offset directions described in Section~\ref{sec_org_offset_max} and the suggested power loading in Section \ref{per_user_Power_Loading_algorithm}. As benchmarks, we plot the performance of the SB algorithm \cite{OutageConstrained}, and that of a system that employs the ZF directions combined with the power loading in Section \ref{per_user_Power_Loading_algorithm}. In Fig.~\ref{sim2}, we repeat the experiment for $\sigma_{e_k}=0.05$. We observe that the performance gap between the proposed algorithms becomes smaller when the error variance decreases, which justifies the validity of the approximations for small error size. We also note that the performance of the low-complexity robust beamforming algorithm in Alg. \ref{Alg1} is very close to that of the original formulation in \eqref{outage_min4}, and that both algorithms provide better performance than the SB algorithm (which incurs a significantly larger computational load). The relative performance of the ZF-based algorithm with the proposed power loading algorithm in Section~\ref{per_user_Power_Loading_algorithm} depends on the uncertainty size, with comparatively better performance being obtained when the uncertainty size is larger. That observation means that while both the offset-based beamforming directions and power loading contribute to the excellent performance for small uncertainty size, as the uncertainty size increases the role of the offset-based power loading becomes more significant.
The performance of the combination of the original constant-offset directions in Section~\ref{sec_org_offset_max} with the suggested power loading in Section~\ref{per_user_Power_Loading_algorithm} is not quite as good as that of the other offset-based approaches. However, decoupling the design of the beamforming directions and that of the power loading significantly reduces the computational cost (see Section~\ref{sec_approxs}), and greatly increases the flexibility of the design, as explained in Section~\ref{per_user_Power_Loading_algorithm}.
\begin{figure}
\begin{center}
\epsfysize= 2.8in
\epsffile{sim1.eps}
\caption{The average transmitted power against the outage probability for a system with 3 users, 4 BS antennas, $\gamma$ = 6dB, and $\sigma_{e_k}=0.1$.
}\label{sim1}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\epsfysize= 2.8in
\epsffile{sim2.eps}
\caption{The average transmitted power against the outage probability for a system with 3 users, 4 BS antennas, $\gamma$ = 6dB, and $\sigma_{e_k}=0.05$.
}\label{sim2}
\end{center}
\end{figure}
The second set of simulation results examines the performance gap between the proposed power loading algorithms in Section \ref{per_user_Power_Loading_algorithm}, and the power loading algorithm in \cite{Coordinateupdate} when the constant-offset directions are chosen; see Section~\ref{sec_org_offset_max}.
In Fig.~\ref{sim3}, we plot the average outage probability versus the average transmitted power for the power loading algorithm in \cite{Coordinateupdate}, the power loading in Alg. \ref{Alg2}, and the modified power loading in Section~\ref{mod_algo}. (For the latter case, the quadratic approximation used in \eqref{pert_eqn} is the least-squares approximation in Fig.~\ref{fig1}.) While the algorithm in \cite{Coordinateupdate} is optimal in terms of the power required to achieve the specified outage probabilities for each user and for each channel realization, the proposed algorithms provide better average outage probability. This performance is achieved while requiring no more than five iterations in the power loading algorithm in Alg. \ref{Alg2}. As one would expect, the modified power loading algorithm in Section~\ref{mod_algo} provides an even lower average outage probability than that obtained by Alg. \ref{Alg2}.
\begin{figure}
\begin{center}
\epsfysize= 2.8in
\epsffile{sim3.eps}
\caption{The average transmitted power against the outage probability for a system with 3 users, 4 BS antennas, $\gamma$ = 6dB, and $\sigma_{e_k}=0.1$.
}\label{sim3}
\end{center}
\end{figure}
To assess the performance gains that result from the power control capabilities of the proposed power loading algorithms, we plot the outage probability of the problem in \eqref{outage_min7} with the constant-offset directions in Section~\ref{sec_org_offset_max} versus the number of antennas. In this case, we set the total power constraint $P_t=1$, and the number of users to six. We also plot the corresponding results when the approximations for obtaining the directions in Section~\ref{sec_approxs}, and those for obtaining the power loading in Section~\ref{simplified_var_subsect} are applied. In addition, we plot the performance of the proposed user rescheduling scheme (Alg. Sect. \ref{userrescheduling} (a)) and the user rescheduling when combined with the power saving (Alg. Sect. \ref{userrescheduling} (b)). We applied user rescheduling whenever the resulting offset $r$ in \eqref{outage_min7} is smaller than two, and the rescheduled user(s) are considered to be in outage. For the power saving algorithm we upper bound $r$ by 5. We observe from Fig.~\ref{sim4} that the proposed approximations provide almost the same outage performance over the whole range of antenna numbers. We also observe that the user rescheduling technique greatly enhances the outage performance, especially when the number of antennas is relatively low. (When the number of antennas is low, there is a greater probability of the channels not being sufficiently orthogonal.) Fig.~\ref{sim4} shows that the power saving algorithm (Alg. Sect.~\ref{userrescheduling} (b)) provides essentially the same performance as the regular algorithms, but significant power can be saved; the average actual transmitted powers used for that algorithm when the number of antennas is $[20,25,\cdots,60]$ are $[0.74, 0.71, 0.67, 0.65, 0.62, 0.59, 0.56, 0.54, 0.52]$ all of which are significantly smaller than the total power constraint $P_t=1$.
\begin{figure}
\begin{center}
\epsfysize= 2.8in
\epsffile{sim4.eps}
\caption{The outage probability versus the number of antennas for a system with 6 users, $\gamma$ = 6dB, and $\sigma_{e_k}=0.1$.
}\label{sim4}
\end{center}
\end{figure}
\section{Conclusion}
In this paper, a new offset-based approach is proposed for robust downlink beamforming. The approach is based on rewriting the SINR outage constraint as a non-negativity constraint on an indefinite quadratic function of the error in the base station's model of the channel. That non-negativity is then approximated by an offset-based constraint in which the mean of the function is required to be larger than a specific multiple of its standard deviation. This approach enabled the formulation of the robust beamforming design problem as a problem that can be transformed into a convex problem through the process of semidefinite relaxation (SDR). The computational complexity of the SDR problem can be further reduced when the uncertainty size is small, allowing for an iterative closed-form solution. When the beamforming directions are defined in advance, the offset-based approach generates a power loading algorithm that provides excellent performance and unique power control capabilities, while incurring only a small computational cost. The demonstrated performance gains, and the significant differences in computational cost exemplify the advantages of using the offset-based approach instead of the sphere bounding approach. Within the suite of algorithms generated by the offset based approach, the separation of the design into the constant-offset directions (Section~\ref{sec_org_offset_max}) and the proposed power loading (Section~\ref{per_user_Power_Loading_algorithm}) provides a compelling balance between performance, computational cost and design flexibility.
\appendices
\section{Choice of $r_k$} \label{r_value_sel}
From Cantelli's Inequality, which is sometimes referred to as the one-sided Chebyshev inequality, we know that for any random variable $\mathbf{X}$ with mean $\mu_x$ and standard deviation $\sigma_x$,
$$\text{Prob}(\mathbf{X}-\mu_x \leq -r \sigma_x) \leq \frac{1}{1+r^2}.$$
Therefore, if we ensure that $\mu_x \geq r \sigma_x$ then $\text{Prob}(\mathbf{X} \leq 0) \leq \frac{1}{1+r^2}$. Accordingly, if we set $r_k=\sqrt{1/\delta_k-1}$, then the approximation in \eqref{offset_constr} is ``safe'', in the sense that any solution to the corresponding problem in \eqref{outage_min2} is guaranteed to satisfy the original outage constraints in \eqref{outage_min}. However, for the distributions that typically arise in downlink beamforming Cantelli's Inequality is quite loose and the resulting beamformer design is quite conservative. Indeed, as we explained in Section~\ref{sect3}, for small uncertainties the distribution of $f_k(\mathbf{e}_k)$ is close to being Gaussian. If it were in fact Gaussian, then if the beamformers are designed such that $ \mu_{f_k} \geq r_k \sigma_{f_k}$ then the outage probability would be $Q(r_k)=\frac{1}{2} \text{erfc}(\frac{r_k}{\sqrt{2}})$, where $\text{erfc}(\cdot)$ is the complementary error function.
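The gap between the distribution-free guarantee and the Gaussian approximation can be quantified directly. The snippet below compares the offset required by Cantelli's Inequality with the Gaussian outage $Q(r)$; the function names are illustrative.

```python
from math import erfc, sqrt

def r_cantelli(delta):
    """Offset r guaranteeing outage <= delta for ANY distribution (Cantelli)."""
    return sqrt(1.0 / delta - 1.0)

def outage_gaussian(r):
    """Outage probability Q(r) if f_k(e_k) were exactly Gaussian."""
    return 0.5 * erfc(r / sqrt(2.0))

# For a 1% outage target, Cantelli demands r = sqrt(99) ~ 9.95, whereas under
# the Gaussian approximation r ~ 2.33 already suffices, so designs based on
# the distribution-free bound are far more conservative.
```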
\section{Mean and variance derivations} \label{mean_var_der}
A Gaussian random variable $\mathbf{e}_k \backsim \mathcal{CN} (\mathbf{m}_k, \mathbf{C}_k)$ can be represented as $\mathbf{e}_k =\mathbf{m}_k + \mathbf{C}_k^{1/2} \hat{\mathbf{e}}_k$, where $\hat{\mathbf{e}}_k \backsim \mathcal{CN} (0, \mathbf{I})$. Using that representation we can write
\begin{equation}
\begin{aligned}
\mu_{f_k}& = \mathbb{E} \{f_k(\mathbf{e}_k)\} \\
&=(\mathbf{h}_{e_k}+\mathbf{m}_k)^H \mathbf{Q}_k (\mathbf{h}_{e_k}+\mathbf{m}_k) - \sigma_k^2 \\
&\quad +\mathbb{E} \{\hat{\mathbf{e}}_{k}^H \mathbf{C}_k^{1/2} \mathbf{Q}_{k}\mathbf{C}_k^{1/2} \hat{\mathbf{e}}_{k}\}\\
&= (\mathbf{h}_{e_k}+\mathbf{m}_k)^H \mathbf{Q}_k (\mathbf{h}_{e_k}+\mathbf{m}_k) - \sigma_k^2 + \mathbf{w}_k^H \mathbf{C}_k \mathbf{w}_k /\gamma_k \\
&\quad -\sum_{j \neq k} \mathbf{w}_j^H \mathbf{C}_k \mathbf{w}_j.
\end{aligned}
\end{equation}
The variance can be expressed as
\begin{equation}
\begin{aligned}
\sigma_{f_k}^2&=\text{var}\{f_k(\mathbf{e}_{k}) \} \\
&=\text{var}\{ 2 \text{Re}( \hat{\mathbf{e}}_{k}^H \mathbf{C}_k^{1/2} \mathbf{Q}_{k} (\mathbf{h}_{e_k}+\mathbf{m}_k )) \\
&\qquad + \hat{\mathbf{e}}_{k}^H \mathbf{C}_k^{1/2} \mathbf{Q}_{k}\mathbf{C}_k^{1/2} \hat{\mathbf{e}}_{k} \bigr\} \\
&=2 (\mathbf{h}_{e_k}+\mathbf{m}_k)^H \mathbf{C}_k^{1/2} \mathbf{Q}_{k}^2 \mathbf{C}_k^{1/2} (\mathbf{h}_{e_k}+\mathbf{m}_k) \\
&\qquad +\text{var}\{\hat{\mathbf{e}}_{k}^H \mathbf{C}_k^{1/2} \mathbf{Q}_{k}\mathbf{C}_k^{1/2} \hat{\mathbf{e}}_{k}\}+0^* \\
&=2 (\mathbf{h}_{e_k}+\mathbf{m}_k)^H \mathbf{C}_k^{1/2} \mathbf{Q}_{k}^2 \mathbf{C}_k^{1/2} (\mathbf{h}_{e_k}+\mathbf{m}_k) \\
&\qquad + \text{tr} \bigl( ( \mathbf{C}_k^{1/2} \mathbf{Q}_{k} \mathbf{C}_k^{1/2} )^2 \bigr),
\end{aligned}
\end{equation}
where tr denotes the trace function. At the point marked with the asterisk we have used the fact that the expectation of the cross terms is equal to zero. This is true because $\mathbb{E} \{2 \text{Re}( \hat{\mathbf{e}}_{k}^H \mathbf{C}_k^{1/2} \mathbf{Q}_{k} (\mathbf{h}_{e_k}+\mathbf{m}_k )) (\hat{\mathbf{e}}_{k}^H \mathbf{C}_k^{1/2} \mathbf{Q}_{k}\mathbf{C}_k^{1/2} \hat{\mathbf{e}}_{k}) \}$ consists of terms containing either matching or distinct components of the vector $\hat{\mathbf{e}}_{k}$. Since $\hat{\mathbf{e}}_{k}$ has zero mean and independent components, all terms with distinct indices have zero mean, while each term with matching indices is a third-order product of a zero-mean complex Gaussian component, which also has zero mean.
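The expressions above are easy to validate numerically. The following Monte Carlo sketch checks the mean and variance formulas for the special case $\mathbf{m}_k = 0$ and $\mathbf{C}_k = c\,\mathbf{I}$, with a randomly drawn Hermitian $\mathbf{Q}_k$ standing in for an actual beamformer-based matrix and the constant $-\sigma_k^2$ omitted (it only shifts the mean).

```python
import numpy as np

rng = np.random.default_rng(1)
n, c = 4, 0.05
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = (B + B.conj().T) / 2                       # random Hermitian Q_k (illustrative)
h = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # channel estimate h_{e_k}
Chalf = np.sqrt(c) * np.eye(n)                 # C_k^{1/2} for C_k = c I
M = Chalf @ Q @ Chalf

# analytic mean and variance from the derivations above (m_k = 0 case)
mu = (h.conj() @ Q @ h).real + np.trace(M).real
var = 2 * (h.conj() @ Chalf @ Q @ Q @ Chalf @ h).real + np.trace(M @ M).real

# Monte Carlo estimate with e ~ CN(0, c I) and f(e) = (h + e)^H Q (h + e)
N = 200000
E = np.sqrt(c / 2) * (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n)))
vals = np.einsum('ni,ij,nj->n', (h + E).conj(), Q, h + E).real
```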
\section{INTRODUCTION}
Kelly betting is a prescription for optimal resource allocation among a set of gambles which are typically repeated in an independent and identically distributed manner. This type of wagering scheme was first introduced in the seminal paper~\cite{Kelly_1956}. Following this work, many applications and a number of properties of Kelly betting were introduced in the literature over subsequent decades; e.g., see~\cite{Hakansson_1971}-\cite{Algoet_Cover_1988} and~\cite{Maclean_Thorp_Ziemba_2010}. To complete this overview, we also mention more recent work~\cite{Thorp_2006},~\cite{Nekrasov_2014}-\cite{Barmish_Hsieh_2015} and the comprehensive survey~\cite{MacLean_Thorp_Ziemba_2011} covering many of the most important papers.
\vskip 2mm
In its simplest form, the Kelly criterion tells the bettor what is the optimal fraction of capital to wager. As the optimal Kelly fraction increases, various risk measures can become unacceptably large. In this regard, the optimal Kelly fraction is often characterized as too ``aggressive." To avoid this negative, there is a body of literature dealing with so-called ``fractional strategies.'' Such strategies essentially amount to reduction of the~optimal Kelly fraction so that less capital is at risk on each bet; e.g., see~\cite{Maclean_Ziemba_Blazenko_1992}-\cite{Davis_Lleo_2010} and~\cite{Rising_Wyner_2012}.
\vskip 2mm
In contrast to existing literature, the focal point of this paper is to describe scenarios when the Kelly-based theory may actually lead to bets which are too conservative rather than too aggressive. Our results along these lines are captured in the ``Restricted Betting Theorem,'' its corollaries and generalization given in Sections~5 and~6. To motivate these results, in the preceding sections, we formally describe the theoretical framework being considered, explain what is meant by ``data-based Kelly betting'' and provide motivating examples which illustrate how overly conservative betting can result.
\vskip 2mm
With regard to the above, we consider the following scenario: A bettor entertains a sequence of gambles from two different points of view. The first point of view is that of the theoretician who works with a model of the returns as a sequence of independent and identically distributed random variables with a known probability density function. Using the prescription of Kelly for sizing the bet, this bettor arrives at the optimal fraction~$K^*$ of one's wealth which should be wagered on each play. The second point of view is that of the data-based practitioner who makes bets based on an empirically derived probability mass function obtained by drawing samples of the random variable. In this setting, we describe an example which leads to dramatically different bets for the theoretician versus the practitioner. For this example, we see that a data-based practitioner deems the bet to be highly favorable and determines that the optimal betting fraction should be large. However, for this same example, use of the true probability distribution by the theoretician may lead to little or no betting.
\vskip 2mm
The main theoretical result in the paper, the Restricted Betting Theorem, is paraphrased for the simplest case, a scalar random variable, as follows: If~$X_{\min}<0$ and~\mbox{$X_{\max}>0$} are respectively the infimum and supremum of points in the support set~${\cal X}$, the optimal Kelly fraction must lie between~$-1/X_{\max}$ and~$-1/X_{\min}$. For the extreme case when the support of the distribution is unbounded both from above and below, this implies the optimal fraction~$K^* = 0$. That is, the optimum is not to bet at all. More generally, when~$X$ is an $n$-dimensional random vector, the
support set~${\cal X}$ imposes a fundamental restriction on size of the optimal bet fraction~$K$ which is described by
$
h_{\cal X}(-K) \leq 1
$
where~$h_{\cal X}$ is the classical support function used in convex analysis. Following the detailed explanation of this result, the final part of the paper considers the issue of ``betting frequency'' and how it bears upon the difference of bet sizes for the practitioner versus the theoretician. Finally, in the concluding section, some promising directions for future research are described.
\vskip 3.5mm
\section{PROBLEM FORMULATION}
In this paper, to make our points about conservatism, we consider one of the simplest formulations of the problem: The bettor is faced with~$N$ gambles with each individual return governed by an independent and identically distributed~(i.i.d.) random vector~\mbox{$X \in {\bf R}^n$} having probability density function~(PDF) $f_X$.
On the $k$-th bet, fraction~$K_i$ of one's account value~$V(k)$ is bet on the~$i$-th component $X_i(k)$ of $X$. We allow~$K_i < 0$ so that the theory is flexible enough to allow the bettor to take either side of the bet being offered. For example, if~$X_i > 0$ corresponds to a coin flip coming up as heads, the use of~$K_i = 1/2$ corresponds to a bet of~$50\%$ of one's account on heads and $K_i = -1/2$ corresponds to a bet of $50\%$ on tails. As a second example, in the case of the stock market, allowing~$K_i < 0$ corresponds to short selling;~i.e., when~$X_i(k) < 0$, the bettor wins. In the sequel, we take
$$
K = {\left[ {\begin{array}{*{20}{c}}
{{K_1}}&{{K_2}}& \cdots &{{K_n}}
\end{array}} \right]^T}.
$$
Then, based on the discussion above, the investment level for the $i$-th bet at stage $k$ is given in feedback form as~\mbox{$I_i(k) = K_i V(k)$} and the associated account value is given by the equation
$$
V(k+1) = V(k) + \sum_{i=1}^{n} I_i(k) X_i(k)
$$
with initial account value $V(0)>0$.
\vskip 2mm
{\bf Admissible Bet Size}:
In the sequel, we let~${\cal X} \subseteq {\bf R}^{n}$ denote the support of~$X$ and we require that, for all~$x \in \cal X$, an admissible~$K$ satisfies the condition
$$
1+K^T x \geq 0.
$$
The condition above is to assure satisfaction of the survival requirement;~i.e., along any sample path,~$V \ge 0$.
Henceforth, we denote the totality of corresponding constraints above on~$K$ by~${\cal K}$. Now letting~$X(k)$ be the \mbox{$k$-th} outcome of~$X$ for $k=0,1,2,\ldots,N-1$, the dynamics of the account value at stage $k+1$ are described by the recursion
$$
V(k+1) = (1+K^T X(k))V(k).
$$
Then, the Kelly problem is to select~$K \in \cal{K}$ which maximizes the expected value of the logarithmic {\it growth}
$$
g(K) \doteq \frac{1}{N} \mathbb{E} \left[ {\log \left( {\frac{{V(N)}}{{V(0)}}} \right)} \right].
$$
Using the recursion for~$V(k)$ above and the fact that the~$X(k)$ are i.i.d., we see that the expected log-growth function reduces to
\begin{eqnarray*}
g(K)
&=& \frac{1}{N}\mathbb{E}\left[ {\log \left( {\prod\limits_{k = 0}^{N - 1} {\left( {1 + K^T X\left( k \right)} \right)} } \right)} \right]\\ [2pt]
&=& \frac{1}{N} {\sum\limits_{k = 0}^{N - 1} \mathbb{E} [{\log \left( {1 + K^T X\left( k \right)} \; \right)]} } \\[2pt]
&=& {\int_{\mathcal{X}} {\log } (1 + K^T x){f_X}(x)dx}
\end{eqnarray*}
which is readily shown to be a concave function of $K$.
Subsequently, when the constraint~$K \in {\cal K}$ is included, we seek to find the optimal logarithmic growth
$$
g^* \doteq \max_{K \in {\cal K}}g(K)
$$
and we denote a corresponding optimal element by $K^*$.
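As a quick numerical sanity check on the formulation (a sketch of our own, not part of the original development), one can simulate the account recursion for a scalar even-money coin flip with an illustrative fraction $K = 0.2$ and confirm that the realized quantity $\frac{1}{N}\log(V(N)/V(0))$ coincides with the sample average of $\log(1+KX(k))$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar even-money coin flip: X(k) in {-1, +1}; illustrative fraction K = 0.2.
K, N = 0.2, 1000
X = rng.choice([-1.0, 1.0], size=N)

# Account recursion V(k+1) = (1 + K X(k)) V(k) with V(0) = 1.
V = 1.0
for x in X:
    V *= 1.0 + K * x        # survival holds here since 1 + K x > 0

# Realized log-growth over N bets versus the sample average of log(1 + K X(k)).
realized = np.log(V) / N
sample_avg = np.mean(np.log1p(K * X))
```

This is exactly the unrolling of the recursion used in the reduction of $g(K)$ above.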
\vskip 3.5mm
\section{BETTING BASED ON DATA VERSUS THEORY}
When Kelly betting is used in practice, it is typically not the case that a perfect probability density function model~$f_X(x)$ for the random variable~$X$ is available. The practitioner obtains a number of data samples~$x_1,x_2,\ldots,x_m$ for~$X$ and then proceeds along one of two possible paths: The first path involves assuming a functional form for~$f_X(x)$ and then using the data~$x_i$ to estimate the parameters of this distribution, thereby obtaining the associated estimate~$\hat{f}_X(x)$. For example, if one assumes that the samples~$x_i$ come from a normal distribution, the mean~$\hat{\mu}$ and standard deviation~$\hat{\sigma}$ are estimated from the data and one uses the normal distribution~${\cal N}(\hat{\mu},\hat{\sigma})$ in the betting analysis to follow. The second possibility is that no constraints are imposed upon the form of~$f_X$ and one simply works with an empirical approximation~$\hat{f}_X(x)$ for the true PDF~$f_X(x)$. This empirical Probability Mass Function (PMF) is given by the sum of impulses
$$
\hat{f}_X(x) = \frac{1}{m}\sum_{i = 1}^{m} \delta(x -x_i).
$$
In this case, when Kelly betting is considered, $\hat{f}_X(x)$ is used as input to the optimization of~$g(K)$ and a maximizer, call it~$\hat{K}^*$, is used as the betting fraction. For the background probability theory underlying the analysis to follow, the reader is referred to~\cite{Gubner_2006}.
\vskip 2mm
Given the scenario above, the following questions present themselves: If we base our betting fraction~$\hat{K}$ on~$\hat{f}_X$ rather than~$f_X$, how will the optimum~$\hat{K}^*$ compare with the ``true'' optimum~$K^*$? What sample size~$m$ is needed so that the empirically-based optimal performance is acceptably close to the true optimum? Perhaps the simplest possible illustration of these ideas is obtained by considering $X$ to be a scalar corresponding to the outcome of repeated flipping of a biased coin with probability of heads being~$p > 1/2$. Assuming an even-money bet, we take $X=1$ for heads and $X=-1$ for tails. Then, if one has a perfect knowledge of~$p$, it is readily verified that~$g(K)$ is maximized via
$
K^* = 2p-1.
$
On the other hand, if the Kelly bets are being derived from empirical samples~$x_1,x_2,\dots,x_m$ in~$\{-1,1\}$ with~$x_i = 1$ being the return for ``heads,'' then the sample mean
$$
\hat{p} = \frac{1}{m} \sum_{i = 1}^m \max \{ x_i, 0\}
$$
is used as input to the analysis and one obtains
$$
\hat{K}^* = \max\{2\hat{p}-1,0\}
$$
as the optimal betting fraction.
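The estimate above is easy to exercise in simulation. In the sketch below (our own illustration, with arbitrarily chosen values $p = 0.6$ and $m = 100{,}000$), the empirical fraction $\hat{K}^*$ lands close to the true optimum $K^* = 2p - 1 = 0.2$:

```python
import numpy as np

rng = np.random.default_rng(2)

p = 0.6                       # true (unknown to the bettor) heads probability
K_star = 2 * p - 1            # K* = 2p - 1 under perfect knowledge of p

# m empirical samples x_i in {-1, 1}
m = 100_000
x = np.where(rng.random(m) < p, 1.0, -1.0)

p_hat = np.mean(np.maximum(x, 0.0))   # sample estimate of P(heads)
K_hat = max(2 * p_hat - 1, 0.0)       # empirically derived Kelly fraction
```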
\vskip 3.5mm
\section{HOW OVERLY CONSERVATIVE BETS ARISE}
Beginning with an empirically derived PMF as described above, our first objective in this section is as follows: We describe the key ideas driving many scenarios where the Kelly bettor who uses ``pure theory'' in lieu of empirical data may reach a conclusion about the optimal bet size which entirely contradicts common sense real-world considerations. That is, we describe a scenario which demonstrates how formal application of the Kelly theory can lead to a bet size which is far smaller than merited by an analysis of risk versus return. Our second objective is to provide a realistic numerical example showing that the pathology we describe is realizable using realistic data. To this end, we consider a scenario involving samples drawn from a normal distribution.
\vskip 2mm
{\bf Pathology Explained for a Toy Example}: We consider one of the simplest possible Kelly betting problems. It is described by a Bernoulli random variable~$X$ whose PMF is given as follows:~$P(X = 1) = 1-\varepsilon$ and $P(X = -x_0) = \varepsilon$ where~$x_0 \gg 1$, and
$$
0<\varepsilon < \frac{1}{1+x_0}.
$$
For this simple scenario, the Kelly betting problem is easily solved via existing literature. For the sake of completeness, we describe the solution. Indeed, we initially hold~$\varepsilon$ and~$x_0$ fixed and later consider the consequence of varying these parameters. We first compute
\begin{align*}
g(K)
&= \mathbb{E}[\log(1 + KX)]\\
&= (1-\varepsilon)\log(1+K) + \varepsilon\log(1-Kx_0)
\end{align*}
and note that this function is readily maximized with respect to~$K$ using ordinary calculus. Via a lengthy but straightforward calculation, we obtain the optimal Kelly fraction~\mbox{$K = K^*$} with
$$
K^* = \frac{1 - \varepsilon(1+x_0)}{x_0}
$$
which is readily verified to satisfy
$$
0 < K^* < \frac{1}{x_0}.
$$
This is consistent with the observation that~$K \geq 1/x_0$ leads to~$\log(1 -Kx_0) = -\infty$ irrespective of the size of~$\varepsilon$. Now, the key point to note is the following: {\it No matter how small~$\varepsilon$ is, the size of~$K^*$ is limited by~$1/x_0$.} In other words, even when the risk~$\varepsilon$ of losing becomes negligible, for the Kelly bettor using this theoretical model, the size of the bet will be inappropriately small. For example, with~$x_0 = 100$, no matter how small~$\varepsilon$ is, the betting fraction~$K$ can never be more than~$1$\% of the account value.
In summary, when situations arise with common sense dictating that one should wager almost all of one's account, the formal Kelly theory forces the betting fraction to be far too small; i.e., an overly conservative bet results.
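The closed-form optimizer and the bound $K^* < 1/x_0$ are easily confirmed numerically. The sketch below (our own, with illustrative values $\varepsilon = 10^{-4}$ and $x_0 = 100$ not taken from the text) grid-maximizes $g(K)$ and recovers the formula:

```python
import numpy as np

eps, x0 = 1e-4, 100.0                  # small risk, large possible loss
K_star = (1 - eps * (1 + x0)) / x0     # closed-form optimizer from the text

# g(K) = (1 - eps) log(1 + K) + eps log(1 - K x0) on a grid in (0, 1/x0)
ks = np.linspace(1e-6, 1.0 / x0, 10_000, endpoint=False)
g = (1 - eps) * np.log1p(ks) + eps * np.log1p(-ks * x0)
K_grid = ks[np.argmax(g)]
```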
\vskip 2mm
To complete the arguments related to this toy example, we now imagine a ``practitioner'' who is enamored with Kelly theory but distrusting of a theoretical model. Suppose further that empirical data for the random variable~$X$ above is available, perhaps in limited supply. In this case, per the discussion in the preceding section, this bettor collects~$m$ data points, generates an empirical PMF, and then, based on this estimated distribution, determines the optimal bet. What will happen when~$\varepsilon$ is extremely small? Clearly, unless~$m$ is unacceptably large, it is virtually certain that the bettor will see~$x_i = 1$ for~$i = 1,2,\ldots,m$. Hence, the empirically derived PMF for the estimated random variable~$\hat{X}$ is trivially described. Namely,~$\hat{X} = 1$ with probability one, and the resulting expected log-growth maximizer~$\hat{K}^* = 1$ is more consistent with the common sense maxim: ``When conditions are right, bet the farm.''
\vskip 2mm
The arguments above are not intended to be entirely rigorous because the role of the sample size~$m$ has not really been considered. To tighten up the arguments above, we note the following: In practice, there is a limitation on~$m$, say~\mbox{$m \leq M$}, which can arise for various reasons. For example, if~$X(k)$ represents daily returns on a stock, then it would typically be the case that~$m$ is strongly limited because the underlying assumption of independent and identically distributed returns becomes questionable when~$m$ is too large. For example, many traders do not use large~$M$ in the belief that larger~$M$-values require processing of ``old data'' which may not reflect current market conditions. For the case of the random variable~$X$ in the toy example above, we can ask: What is the probability, call it~$p_{bad}$, that the practitioner will see a ``bad'' sample; i.e.,~$x_i = -x_0$ for some~$i \leq M$. For this simple problem, we obtain
\[
p_{bad} = 1- (1-\varepsilon)^M.
\]
Thus, if~$\varepsilon = 0.001$ and~$M=50$, then we obtain~$p_{bad} \approx 0.05$ and if~$\varepsilon = 0.0001$,~$p_{bad} \approx 0.005$. Note that if such a bad sample is ``seen,'' the behavior of the practitioner becomes similar to that of the theoretical Kelly bettor.
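For other parameter choices, $p_{bad}$ is a one-liner to evaluate (a sketch of our own):

```python
# Probability of seeing at least one "bad" sample x_i = -x0 among M draws.
def p_bad(eps, M):
    return 1.0 - (1.0 - eps) ** M

print(p_bad(1e-3, 50))   # ~0.0488
print(p_bad(1e-4, 50))   # ~0.0050
```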
\vskip 2mm
{\bf A More Realistic Example}: To study the issue of conservatism using realistic data, we consider a family of random variables each of which is governed by the normal distribution. Each of these random variables has fixed standard deviation~$\sigma =1$. However, the members of this family are differentiated by their means. We consider means~\mbox{$0 \leq \mu \leq 4$}. For each value of~$\mu$ in this range, we let~$X_{\mu}$ denote the random variable of interest and construct an empirical probability mass function by drawing~$m=1,000,000$ samples. Next, for each~$\mu$, we find the optimal Kelly fraction, call it~$\hat{K}^* = \hat{K}^*(\mu)$; see Figure~\ref{fig.K_vs_mu} where this function is plotted. Looking at the plot, we now argue that this result is consistent with common sense considerations. Indeed, when~$\mu$ is at the low end of the range, it is no surprise to see that~$\hat{K}^*(\mu)$ is small because the probability of~$X_\mu < 0$ is significant. For example, when~$\mu = 1$, the optimum is to wager about~$20\%$ of one's wealth on each bet. Similarly, when~$\mu$ is at the high end of the range, we see that $\hat{K}^*(\mu)$ is large because the probability of~$X_\mu < 0$ becomes small. For example, when~$\mu = 4$, the optimum is to wager about~$90\%$ of one's wealth on each bet because the chance of losing is vanishingly small.
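To make the construction of the curve~$\hat{K}^*(\mu)$ concrete, the following sketch (our own illustration; the seed, grid resolution, and use of a plain grid search are arbitrary choices, and the resulting values vary with the draw) grid-maximizes the empirical log-growth subject to the survival constraint:

```python
import numpy as np

rng = np.random.default_rng(3)

def empirical_kelly(mu, m=1_000_000, grid=100):
    """Grid-maximize the empirical log-growth for m samples from N(mu, 1)."""
    x = rng.normal(mu, 1.0, size=m)
    # Survival (1 + K x_i >= 0 for every sample) confines K to
    # [-1/max(x), -1/min(x)]; since mu > 0 we search the nonnegative part.
    # (Assumes the sample contains a negative outcome, virtually certain here.)
    k_hi = -1.0 / x.min()
    ks = np.linspace(0.0, k_hi, grid, endpoint=False)
    g = np.array([np.log1p(k * x).mean() for k in ks])
    return ks[np.argmax(g)]

k_hat_1 = empirical_kelly(1.0)   # small fraction: losses are likely
k_hat_4 = empirical_kelly(4.0)   # large fraction: losses are rare
```

Note that the upper endpoint of the search interval is dictated by the most negative sample, which is why $\hat{K}^*(\mu)$ grows as the chance of negative outcomes shrinks.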
\vskip 2mm
In the next section, we see that this analysis using real data is entirely at odds with a purely theoretical analysis. In this regard, when the analysis in the section to follow is used to analyze the random variables~$X_\mu$, one ends up with optimal betting fraction~$K^* = 0$; i.e., no betting at all is dictated.
To conclude this section, we note the following: The fact that our data-based analysis above was carried out with fixed~$\sigma$ is not critical to the conclusions we reached. More generally, when $X$ is governed by normal distribution~$\mathcal{N}(\mu, \sigma)$, the Kelly theory suggests no betting regardless of the relative sizes of the mean~$\mu$ and standard deviation~$\sigma$.
\begin{figure}[htbp]
\centering
\graphicspath{{Figs/}}
\setlength{\abovecaptionskip}{0.1 pt plus 0pt minus 0pt}
\includegraphics[width=0.47\textwidth]{Fig_Kopt_vs_mu.eps}
\figcaption{Optimal Kelly Fraction $\hat{K}^*$ Versus $\mu$ }
\label{fig.K_vs_mu}
\end{figure}
\section{RESTRICTED BETTING: THE SCALAR CASE}
In this section, we present an analysis regarding the motivating examples in the preceding section. In rough terms, for a scalar random variable~$X$, we see that the minimum and maximum values of points~$x$ in the support lead to fundamental restrictions on the size of the bet allowed by the Kelly theory --- the larger these values, the smaller the Kelly fraction is forced to be. Moreover, this restriction holds true whether the probability of these maximal deviations is significant or not.
\vskip 2mm
Since the key ideas driving the analysis to follow are most simply understood when $X$ is a scalar random variable, we first consider this case. To begin, suppose~$x_0 < 0$ is a point in the support set of~$X$. Then, to avoid~$g(K) = -\infty$, Kelly theory forces the betting fraction to satisfy~$K \le -1/x_0$. This holds true even when the probability that~$X$ gets close to~$x_0$ is vanishingly small. Analogously, for a point~$x_0 > 0$ in the support, the same reasoning forces~$K \ge -1/x_0$. As a result of this aspect of the theory, many bets which are ``excellent'' from a common sense point of view end up being unduly small. We note that this is consistent with the examples in the preceding section. To summarize, in the Kelly theory, large possible values of~$|X|$, whether rare or not, lead to dramatic restrictions in the bet size.
\vskip 2mm
In the lemma below, we formalize the ideas above. An extreme case of the result occurs when the support of~$X$ is the entire real line; e.g., suppose~$X$ is normally distributed. For such cases, as seen below,~$K = 0$ is forced. That is, no betting is allowed. This result holds true regardless of the relative sizes of the mean~$\mu$ and standard deviation~$\sigma$. We note that this outcome of Kelly theory is clearly at odds with practical considerations. Even when the ratio~$\mu/\sigma$ is very large, synonymous with an excellent bet, the theory nevertheless forces~$K = 0$. The lemma below is a special case of the Restricted Betting Theorem given in the next section. Accordingly, its proof is deferred until then.
\vskip 2mm
{\bf Scalar Betting Lemma}: {\it Let $X$ be a random variable with~$\mathbb{E}[|X|]<\infty$, probability density function~$f_X(x)$ and support~${\cal X}$ with extremes
\[
X_{\min} \doteq \inf \{x: x \in \mathcal{X} \}
\text{ \; and \; }
X_{\max} \doteq \sup \{x: x \in \mathcal{X} \}
\]
satisfying $X_{\min}<0 $ and $ X_{\max} >0$. Then any optimizing Kelly fraction~$K$ maximizing~$g(K)$ satisfies the interval confinement condition
$$
K \in [-1/X_{\max},-1/X_{\min}].
$$
}{\bf Remarks}: $(i)$ Consistent with the remarks prior to the statement of the lemma,~$K^* \leq 0$ when~$X_{\min} = -\infty$ and~$K^* \geq 0$ when~$X_{\max} = +\infty$. Hence, when both extremes are infinite,~$K = 0$ is forced. In other words, the best bet is no bet at all.
\vskip 2mm
$(ii)$ The lemma says that an optimal $K$ must lie in the confinement interval, but we do not expect {\it every} $K$ in the interval to be optimal. Surprisingly, there may even exist values of~$K$ in the confinement interval that are ``infinitely bad,'' i.e.,~$g(K)=-\infty$, as shown in the following example.
\vskip 2mm
{\bf Example}: We provide an example of a random variable~$X$ and a constant~$K > 0$ satisfying the confinement condition above
but having the property that~$g(K) = -\infty.$ Indeed, let~$0 < K < 1$ be arbitrary and held fixed in the calculations to follow. We now consider a random variable~$X$ which is constructed as follows. Let
\begin{eqnarray*}
\theta \doteq \frac{1}{2} + \sum_{k=1}^\infty \frac{1}{k^2}
= \frac{1}{2} + \frac{\pi^2}{6},
\end{eqnarray*}
take $X =x_0 =1$ with probability~$p_0=1/(2 \theta)$, and for~\mbox{$k\ge 1$}, take
$
X = x_k \doteq (e^{-k}-1)/K
$
with probability~$p_k \doteq {1}/({k^2 \theta}).$ Note that the definition of $\theta$ above assures that the $p_k$ define a probability mass function;~i.e.,~$p_k \ge 0$ and~$\sum_{k=0}^\infty p_k =1.$
Now, for this random variable, we have~$X_{\min} = -{1}/{K},$
and~\mbox{$X_{\max} = 1.$}
Furthermore, since~\mbox{$0 < K < 1$}, the interval confinement condition is satisfied. To complete the analysis, it remains to show that~$g(K) = -\infty$. Indeed, we calculate
\begin{align*}
g\left( K \right)
&= \mathbb{E}[\log \left( {1 + KX} \right)]\\
& = \sum\limits_{k = 0}^\infty {\log \left( {1 + K{x_k}} \right){p_k}} \\
& = \log \left( {1 + K{x_0}} \right){p_0} + \sum\limits_{k = 1}^\infty {\log \left( {1 + K{x_k}} \right){p_k}} \\
& = \frac{1}{{2\theta }}\log \left( {1 + K} \right) + \frac{1}{\theta }\sum\limits_{k = 1}^\infty {\frac{1}{{{k^2}}}\log \left( {1 + K{x_k}} \right)} \\
& = \frac{1}{{2\theta }}\log \left( {1 + K} \right) - \frac{1}{\theta }\sum\limits_{k = 1}^\infty {\frac{1}{k}} = - \infty.
\end{align*}
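The divergence above can also be observed numerically. In the sketch below (our own, with the arbitrary choice $K = 1/2$), we exploit the identity $\log(1 + Kx_k) = -k$ to evaluate partial sums of the series, which decrease without bound like a harmonic series:

```python
import numpy as np

K = 0.5                        # any fixed K in (0, 1)
theta = 0.5 + np.pi**2 / 6     # normalizing constant from the construction

def partial_g(N):
    # Partial sum of g(K) = p0*log(1 + K*x0) + sum_{k>=1} p_k*log(1 + K*x_k).
    # Since x_k = (e^{-k} - 1)/K, we have log(1 + K*x_k) = -k exactly, so the
    # k-th term is -k * p_k = -1/(k*theta): the tail is a harmonic series.
    k = np.arange(1, N + 1)
    return np.log(1 + K) / (2 * theta) - np.sum(1.0 / k) / theta

sums = [partial_g(N) for N in (100, 10_000, 1_000_000)]
```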
\vskip 1mm
\section{THE RESTRICTED BETTING THEOREM}
Recalling the interval confinement condition introduced for a scalar random variable, this section provides a generalization of this result which holds for an~\mbox{$n$-dimensional} random vector~$X$ whose support set~${\cal X}$ can be rather arbitrary. This support set is allowed to be unbounded so that we capture the no-betting result given for~$X$ being a scalar. To obtain the theorem below, we make use of the classical support function which is heavily used in convex analysis;~e.g., see~\cite{Rockafellar_1996}.
\vskip 2mm
Indeed, given a set~${\cal X} \subseteq {\bf R}^n$, the {\it support function} on~${\cal X}$ is the mapping~$h: {\bf R}^n \rightarrow {\bf R} \cup \{+\infty \}$ defined as follows: For~$y \in {\bf R}^n$,
$$
h_{\cal X}(y) \doteq \sup_{x \in \cal X} y^Tx.
$$
After establishing the theorem below, we consider a number of special cases to show that there are large classes of Kelly betting problems for which checking for satisfaction of the conditions is highly tractable.
\vskip 2mm
{\bf The Restricted Betting Theorem}: {\it Given an~\mbox{$n$-dimensional} random vector $X$ with PDF~$f_X$, support~${\cal X}$, and~\mbox{$\mathbb{E}[\| X \|] < \infty$}, any optimizing Kelly fraction vector~$K$ satisfies the condition
$$
h_{\cal X}(-K) \leq 1.
$$
Furthermore, whether~${\cal X}$ is convex or not, the set
$$
{\cal K} \doteq \{K: h_{\cal X}(-K) \leq 1\}
$$
is convex and closed.
}
\vskip 2mm
{\bf Proof}: In the arguments to follow, we work with the extended logarithmic function which takes value~$\log(x) = -\infty$ for~$x \le 0$. Proceeding by contradiction, suppose~$K$ is optimal but fails to satisfy the support function condition above. Then
$$
\sup_{x \in \cal X} [-K]^T x > 1.
$$
Equivalently, there exists some~$x^K \in {\cal X}$ such that~$-K^Tx^K > 1.$
Hence $1 + K^Tx^K < 0.$
Now noting that~$1 + K^Tx$ is continuous in~$x$ and that~$x^K$ is in the support, there exists a suitably small neighborhood of~$x^K$, call it~${\cal N}(x^K)$, such that $1 + K^Tx < 0$
for~$x \in {\cal N}(x^K)$ and
$$
P(X \in {\cal N}(x^K)) > 0.
$$
We now claim that the existence of such a neighborhood implies that $g(K) = -\infty$. Indeed, to prove this, we first observe that
\begin{align*}
g(K)
&=\mathbb{E}[ \log(1+K^TX)]\\
&= \int \log(1+K^T x) f_X(x)dx \\
&= \int \limits_{ 1 + K^T x \le 0 }^{} \hskip -4mm {\log (1 + {K^T}x){f_X}\left( x \right)dx} \\
& \;\;\;\;\;\;\;\;\; + \int \limits_{ 1 + {K^T}x > 0 }^{} \hskip -4mm {\log (1 + {K^T}x){f_X}\left( x \right)dx}.
\end{align*}
Using the property of the logarithm that
$$
\log(1+K^T x) \le |K^T x|
$$
for all $x$ satisfying $1+K^T x >0$, we obtain an upper bound for $g(K)$. That is,
{\small \begin{align*}
g(K)
&\le \int \limits_{1 + {K^T}x \le 0}^{} \hskip -4mm {\log (1 + {K^T}x){f_X}\left( x \right)dx} + \int \limits_{ 1 + {K^T}x > 0 }^{} \hskip -4mm {\left| {{K^T}x} \right|{f_X}\left( x \right)dx}\\
&\le \int\limits_{1 + {K^T}x \le 0}^{} \hskip -4mm{\log (1 + {K^T}x){f_X}\left( x \right)dx} + \int_{}^{} {\left\| K \right\|\left\| x \right\|} {f_X}\left( x \right)dx\\
&\le \int\limits_{1 + {K^T}x \leq 0}^{} \hskip -4mm{\log (1 + {K^T}x){f_X}\left( x \right) dx} + \| K \| \; \mathbb{E}[\| X \|].
\end{align*}
}Since $ \mathbb{E}[\| X \|] < \infty$, it suffices to show that the integral above has value $-\infty$. Indeed, beginning with the fact that
$$
P(X \in {\cal N}(x^K)) > 0
$$
and noting that~${\cal N}(x^K) \subseteq \{x: 1+K^T x\le 0\}$, the density function~$f_X$ must assign positive probability to the set~\mbox{$\{x: 1+K^T x \leq 0 \}$}. Furthermore, since~\mbox{$\log(1+K^Tx) = -\infty$} for~$x$ satisfying~\mbox{$1+K^Tx \le 0$}, it follows that
\[
\int \limits_{1 + {K^T}x \le 0}^{} \hskip -4mm{\log (1 + {K^T}x){f_X}\left( x \right)dx} = -\infty
\]and we conclude that
$
g(K) = -\infty
$ as required.
\vskip 2mm
To complete the proof, we establish closedness and convexity of~${\cal K}$ using a rather standard convex analysis argument: Indeed, for each fixed~$x \in {\cal X}$, we define the linear function~$L_x(K) \doteq -K^Tx$ and associated set
$$
{\cal K}_x \doteq \{K: L_x(K) \leq 1\}.
$$
Note that~${\cal K}_x$, being a halfspace, is a closed convex set. Now, using the definition of the support function, it follows that
$$
{\cal K} = \bigcap_{x \in \mathcal{X}}^{} {\cal K}_x.
$$
Hence, since~${\cal K}$ is the intersection of an indexed collection of closed convex sets, it is also closed and convex. $\square$
\vskip 2mm
{\bf Scalar Result as a Special Case}: To see that the Scalar Betting Lemma of the preceding section is a special case of the above, we recall the notation $X_{\min}$ and~$X_{\max}$ and assume~\mbox{$X_{\min} < 0$} and~$X_{\max} > 0$ as in the earlier sections. Now, for~$K > 0$, the support function in the theorem above becomes~\mbox{$h_{\cal X}(-K) = -KX_{\min}$}
and for~$K < 0$, it becomes~$h_{\cal X}(-K) = -KX_{\max}.$
Hence the requirement of the theorem~$h_{\cal X}(-K) \leq 1$ leads to the interval confinement condition of the lemma.
\vskip 2mm
{\bf Hypercube Support Set}: One $n$-dimensional generalization of the scalar situation above is obtained when the convex hull of the support of~$X$, $\mbox{conv}{\cal X}$, is a hypercube. Suppose this hypercube has center~$x^0$ and components~$x_i$ satisfying~\mbox{$|x_i - x^0_i| \leq \delta_i$}
where~$\delta_i > 0$ for~$i = 1,2,\ldots,n$. Then using a basic fact about support functions, see~\cite[p.~269]{Witsenhausen_1980}, that
$
h_{\cal X}(y) = h_{\mbox{conv}{\cal X}}(y)
$
for all~$y \in {\bf R}^n$, a straightforward calculation leads to
$$
h_{\cal X}(-K) = \sum_{i=1}^{n}|K_i|\delta_i -\sum_{i=1}^{n}K_ix_i^0 .
$$
Hence, application of the theorem leads to the requirement that
any optimizing Kelly fraction vector~$K$ satisfies the condition
$$
\sum\limits_{i = 1}^n | {K_i}|{\delta _i} - \sum\limits_{i = 1}^n {{K_i}} x_i^0 \le 1.
$$
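The closed form above is easy to check against brute force, since a linear function attains its supremum over a hypercube at a vertex. The sketch below (our own, with randomly chosen center, half-widths, and fraction vector $K$) compares the two:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)

n = 4
x0 = rng.normal(size=n)                  # hypercube center (arbitrary)
delta = rng.uniform(0.5, 2.0, size=n)    # half-widths delta_i > 0
K = rng.normal(size=n)                   # arbitrary fraction vector

# Closed form: h_X(-K) = sum_i |K_i| delta_i - K^T x^0
h_formula = np.sum(np.abs(K) * delta) - K @ x0

# Brute force: the sup of the linear map x -> -K^T x over the hypercube
# is attained at one of its 2^n vertices.
vertices = [x0 + np.array(s) * delta for s in product([-1.0, 1.0], repeat=n)]
h_brute = max(-K @ v for v in vertices)
```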
\vskip 2mm
{\bf Hypersphere Support Set}: As a final example, suppose the convex hull of the support set~${\cal X}$ is a hypersphere in~${\bf R}^n$ with description
$
\| x - x^0 \| \leq r
$
where the norm above is the Euclidean norm, the center is~$x^0$, and the radius is~$r > 0$. Then, using an argument similar to that used for the hypercube example above, it is easily shown that any optimizer~$K$ must satisfy
$$
r \| K \| - K^Tx^0 \le 1.
$$
We note that the constraint sets
\[
\mathcal{K}_r \doteq \{K: r \|K \| - K^Tx^0 \le 1 \}
\]
are nested. That is, if the radii satisfy~$r_1 \le r_2$, then~$\mathcal{K}_{r_2} \subseteq \mathcal{K}_{r_1}$. In Figure~\ref{fig:HyperSphere}, these sets are depicted for $x^0 = (1/2,1/2)$ and various radii $r_1=1$, $r_2=1.25$, $r_3=2$, $r_4=3$ and~$r_5=5$.
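Both the hypersphere support function formula and the nesting property can be verified directly. In the sketch below (our own, with arbitrary $n = 3$ data), the supremum is attained at $x = x^0 - rK/\|K\|$, and the constraint value grows with $r$:

```python
import numpy as np

rng = np.random.default_rng(5)

n = 3
x0 = rng.normal(size=n)     # hypersphere center (arbitrary)
K = rng.normal(size=n)      # arbitrary fraction vector (nonzero w.p. 1)
r = 1.5

# Closed form: h_X(-K) = r ||K|| - K^T x^0, attained at x* = x0 - r K/||K||.
h_formula = r * np.linalg.norm(K) - K @ x0
x_star = x0 - r * K / np.linalg.norm(K)
h_at_xstar = -K @ x_star

# Nesting: the constraint value r||K|| - K^T x0 is increasing in r,
# so larger radii give smaller constraint sets K_r.
vals = [rr * np.linalg.norm(K) - K @ x0 for rr in (1.0, 1.25, 2.0, 3.0, 5.0)]
```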
\begin{figure}[htbp]
\centering
\graphicspath{{figs/}}
\setlength{\abovecaptionskip}{0.1 pt plus 0pt minus 0pt}
\includegraphics[width=0.40\textwidth]{Fig_HyperSphere_Support.eps}
\figcaption{Constraint Sets $\mathcal{K}_r$ for Optimal Fraction $K$}
\label{fig:HyperSphere}
\end{figure}
\vskip 3.5mm
\section{EXAMPLE INVOLVING HIGH-FREQUENCY} \label{EX:High-freq}
Thus far, our analysis of the restricted betting phenomenon has included no consideration of the frequency with which wagers are being made. In this regard, we imagine the frequency of betting to be so high as to make it seem ``reasonable'' for the theoretician to use a continuous-time stochastic model to determine the optimal betting fraction. The question we consider is as follows: For the high-frequency case with sufficiently many samples being used to construct the empirical distribution, is there still a disparity between theory and practice? That is, is it still the case that the theoretical solution can end up being far too conservative? In~\cite{kuhn_and_luenberger}, an issue of a rather similar flavor is considered in the context of portfolio optimization, and the analysis given there is much more abstract than that given below. Here we consider a concrete example and provide no significant result of general import. Our main objective is to raise issues for future research.
\vskip 2mm
Indeed, we begin with high-frequency historical intra-day tick data for APPLE (ticker AAPL). Each ``tick'' corresponds to a new stock price~$S(k)$ and the time between arrivals of ticks is estimated on average to be about one tenth of a second. This stock-price data is plotted in Figure~\ref{fig:AAPL_Prices} for the period~9:30:00~am to~2:13:47~pm on December~2,~2015. During this period, we have $m = 110,000$ ticks. The first step in our analysis was to
use the time series prices~$S(k)$ to calculate the corresponding returns
$$
X(k) = \frac{S(k+1) - S(k)}{S(k)}.
$$
Given the small time between consecutive ticks, a large percentage of the~$X(k)$ turn out to be zero;~i.e., the price did not change from~$k$ to~$k+1$.
In addition, the smallness of the inter-tick times leads to the remaining probability masses largely concentrated between~$x = -0.0002$ and~\mbox{$x = 0.0002$}
and the data leads to~\mbox{$X_{\min} \approx -0.01 \approx -X_{\max}$}. Thus, the Restricted Betting Theorem forces the approximate bound~\mbox{$-100 \leq K \leq 100$}, which is not really meaningful since brokerage requirements typically impose~$|K| \leq 2$. Based on the empirical data, we plotted~$g(K)$ and obtained the optimal betting fraction~\mbox{$\hat{K}^* \approx 0.824$}. Interestingly, although the price has no obvious ``bullish'' pattern, we see that the theory leads to a rather aggressive bet size which is more than~$80 \%$ of one's wealth.
In contrast, if we assume that the data for this example comes from a discrete-time Geometric Brownian Motion with this same mean and variance, we obtain~$K^* = 0$ by the Restricted Betting Theorem.
\vskip 2mm
It is interesting to note that other methods in the literature which might be used for the same problem lead to optimal~$K$-values which are remarkably close to~$\hat{K}^* \approx 0.824$ obtained above. For example, using estimated mean~\mbox{$\hat{\mu} \approx 1.628 \times 10^{-8}$} and standard deviation~\mbox{$\hat{\sigma} \approx 1.405 \times 10^{-4}$} as the basis for a continuous time Geometric Brownian Motion model, the analysis in~\cite{merton} involves optimizing an expected value which combines a consumption utility with the logarithm of terminal wealth. As the consumption weighting tends to zero, the optimal fraction tends to~\mbox{$ K^* = {\hat{\mu}} /{\hat {\sigma}^2} \approx 0.825.$} The same result is obtained in~\cite{Thorp_2006} using the same expected logarithmic growth criterion and assuming a stochastic process model with bounded returns~$X(k)$ with mean $\hat{\mu}$ and standard deviation~$\hat{\sigma}$.
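The near-coincidence of the two numbers is immediate from the estimates quoted above (a one-line check of our own):

```python
mu_hat = 1.628e-8        # estimated mean quoted in the text
sigma_hat = 1.405e-4     # estimated standard deviation quoted in the text

K_merton = mu_hat / sigma_hat**2   # Merton fraction mu/sigma^2
print(K_merton)                    # ~0.825, close to the empirical 0.824
```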
\begin{figure}[htbp]
\centering
\graphicspath{{figs/}}
\setlength{\abovecaptionskip}{0.1 pt plus 0pt minus 0pt}
\includegraphics[width=0.475 \textwidth]{Fig_AAPL_151202_Tick.eps}
\figcaption{AAPL Tick-by-Tick Price of Trade}
\label{fig:AAPL_Prices}
\end{figure}
\vskip 2mm
\section{CONCLUSION AND FUTURE WORK}
In this paper, we considered a random vector~$X$ and compared the size of Kelly bets which are derived using a purely theoretical probability distribution versus those which are obtained from its empirically-obtained counterpart. In making this comparison, the support set~${\cal X}$ for~$X$ was seen to play a crucial role. As seen in the Restricted Betting Theorem in Section~6, when the logarithmic growth function~$g(K)$ is maximized, this set~${\cal X}$ can lead to ``unreasonable'' restrictions on the optimal betting fraction~$K^*$. By this we mean roughly the following: Possible outcomes for~$X$ which are ``large'' can lead to the possibility that extremely attractive betting opportunities are rejected. On the other hand, when betting is based on an empirically derived distribution for~$X$, it is likely that such rare events will not be reflected in the resulting probability mass function. The bet size which results will be more in line with common sense.
\vskip 2mm
These results open the door to a new line of research which might be appropriately called ``data-driven Kelly betting.'' In such an empirical framework, new problems involving the sample size~$m$ will be of fundamental importance. Given that many betting scenarios involve non-stationary stochastic processes, there is typically a bound~$m \leq M$ which must be respected when deriving the empirical distribution. That is, when the analysis involves sequential betting based on i.i.d. random variables, relying on ``untrustworthy old data'' from far in the past may be inappropriate.
\vskip 2mm
A second important future research direction involves extension of Kelly-based analysis to problems involving the betting frequency. This topic, touched upon in Section~7, does not appear to have been heavily considered in the literature; e.g., see~\cite{kuhn_and_luenberger} for results available to date. In this setting, many new modeling and analysis questions arise involving which betting frequencies are available and how the model of the random variable~$X$ changes as a function of frequency. For example, if even-money coin flips are carried out at some frequency~$f$, the model for~$X$ does not change from bet to bet; i.e., the bet is independent of frequency. On the other hand, if~$X$ corresponds to the return on a stock based on sampling of a continuous-time Brownian motion, appropriate scaling of the mean and variance becomes an important issue as the frequency increases.
\addtolength{\textheight}{-3cm}
\label{sec:intro}
The canonical $\Lambda$CDM scenario has proven to provide an excellent match to observations at high and low redshift, see for instance~\cite{Riess:1998cb,Perlmutter:1998np,Dunkley:2010ge,Hinshaw:2012aka,Ade:2013zuv,
Story:2014hni,Ade:2015xua,Alam:2016hwk,Troxel:2017xyo,Aghanim:2018eyx}. Despite its enormous success, there are some tensions among the values of cosmological parameters inferred from independent datasets~\cite{Freedman:2017yms,DiValentino:2017gzb,DiValentino:2018gcu}. The most famous and persisting one is that related to the value of the Hubble constant $H_0$ as measured from \textit{Planck} Cosmic Microwave Background (CMB) data ($h = (0.6737 \pm 0.0054)$~\cite{Aghanim:2018eyx}) versus the value extracted from Cepheid-calibrated local distance ladder measurements (\textit{R19}, $h=(0.7403 \pm 0.0142)$~\cite{Riess:2019cxk}), referred to as the \textit{$H_0$ tension}, with $h=H_0/(100\,{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1})$~\footnote{In Ref.~\cite{Jackson:2007ug,Verde:2019ivm} the reader can find complete reviews comparing the CMB and local determinations of $H_0$.}. This tension now reaches the $4.4\sigma$ level.
Two main avenues have been followed to solve the $H_0$ tension. The first one is based on the possibility that \textit{Planck} and/or the local distance ladder measurement of $H_0$ suffer from unaccounted systematics~\footnote{See e.g.~\cite{Spergel:2013rxa,Addison:2015wyg,Aghanim:2016sns,
Lattanzi:2016dzq,Huang:2018xle} for studies of possible systematics in the context of \textit{Planck} and e.g.~\cite{Rigault:2013gux,Rigault:2014kaa,Scolnic:2017caz,
Jones:2018vbn,Rigault:2018ffm} in the context of the local distance ladder measurement. Local measurements other than the R19 one exist, but most of them appear to consistently point towards values of $H_0$ significantly higher than the CMB one (see e.g.~\cite{Efstathiou:2013via,Cardona:2016ems,Feeney:2017sgx,Dhawan:2017ywl,
Follin:2017ljs,Gomez-Valent:2018hwc,Birrer:2018vtm,Burns:2018ggj,Jimenez:2019onw,
Collett:2019hrr,Wong:2019kwg,Freedman:2019jwv,Liao:2019qoc,Reid:2019tiq,Jee:2019hah}).}. The second more intriguing possibility is that the $H_0$ tension might be the first sign for physics beyond the concordance $\Lambda$CDM model. The most economical possibilities in this direction involve phantom dark energy (\textit{i.e.} a dark energy component with equation of state $w<-1$) or some form of dark radiation (so as to raise $N_{\rm eff}$ beyond its canonical value of $3.046$)~\cite{DiValentino:2016hlg,Bernal:2016gxb,Vagnozzi:2019ezj}. However, in recent years, a number of other exotic scenarios attempting to address the $H_0$ tension have been examined, including (but not limited to) decaying dark matter (DM), interactions between DM and dark radiation, a small spatial curvature, an early component of dark energy (DE), and modifications to gravity (see e.g.~\cite{Alam:2016wpf,Qing-Guo:2016ykt,Ko:2016uft,Karwal:2016vyq,Chacko:2016kgg,
Zhao:2017cud,Vagnozzi:2017ovm,Agrawal:2017rvu,Benetti:2017gvm,Feng:2017nss,Zhao:2017urm,
DiValentino:2017zyq,Gariazzo:2017pzb,Dirian:2017pwp,Feng:2017mfs,Renk:2017rzu,
Yang:2017alx,Buen-Abad:2017gxg,Raveri:2017jto,DiValentino:2017rcr,DiValentino:2017oaw,
Khosravi:2017hfi,Peirone:2017vcq,Benetti:2017juy,Mortsell:2018mfj,Vagnozzi:2018jhn,
Nunes:2018xbm,Poulin:2018zxs,Kumar:2018yhh,Banihashemi:2018oxo,DEramo:2018vss,
Guo:2018ans,Graef:2018fzu,Yang:2018qmz,Banihashemi:2018has,Aylor:2018drw,
Poulin:2018cxd,Kreisch:2019yzn,Pandey:2019plg,Vattis:2019efj,Colgain:2019pck,
Agrawal:2019lmo,Li:2019san,Yang:2019jwn,Colgain:2019joh,Keeley:2019esp,Li:2019yem,
DiValentino:2019exe,Archidiacono:2019wdp,Desmond:2019ygn,Yang:2019nhz,Nesseris:2019fwr,
Visinelli:2019qqu,Cai:2019bdh,Schoneberg:2019wmt,Pan:2019hac,DiValentino:2019dzu,
Xiao:2019ccl,Panpanich:2019fxq,Knox:2019rjx,Ghosh:2019tab,Escudero:2019gvw,Yan:2019gbw,
Banerjee:2019kgu,Yang:2019uog,Cheng:2019bkh,Sakstein:2019fmf,Liu:2019awo,
Anchordoqui:2019amx,Wang:2019isw,Mazo:2019pzn,Pan:2020zza,Yang:2020zuk,
Lyu:2020lwm,Yang:2020uga} for an incomplete list of recent papers).~\footnote{Other scenarios worth mentioning include the possibility that properly accounting for cosmic variance (due to the fact that a limited sample of the Hubble flow is observed) enlarges the uncertainty of the locally determined $H_0$ to the point that the tension is alleviated~\cite{Marra:2013rba,Wojtak:2013gda,Wu:2017fpr,Camarena:2018nbr,
Bengaly:2018xko}, or that local measurements might be biased by the presence of a local void~\cite{Keenan:2013mfa,Romano:2016utn,Fleury:2016fda,Hoscheit:2018nfl,
Shanks:2018rka} (see however e.g.~\cite{Odderskov:2014hqa,Kenworthy:2019qwq} for criticisms of both these possibilities). From the theoretical side, models of running vacuum, motivated by QFT corrections in curved spacetime, are instead among the most well-motivated solutions to the $H_0$ tension (see for example~\cite{Sola:2017znb,Gomez-Valent:2018nib,Rezaei:2019xwo,Sola:2019jek}).}
From the theoretical perspective, interactions between DM and DE beyond the purely gravitational ones are not forbidden by any fundamental symmetry in nature~\cite{Amendola:2007yx,Micheletti:2009pk,Pavan:2011xn,Bolotin:2013jpa,
Costa:2014pba,Ludwick:2019yso} and could help address the so-called coincidence or \textit{why now?} problem~\cite{Hu:2006ar,Sadjadi:2006qp,delCampo:2008jx,Dutta:2017kch,Dutta:2017fjw}; see e.g.~\cite{Farrar:2003uw,Barrow:2006hia,Amendola:2006dg,He:2008tn,Valiviita:2008iv,
Gavela:2009cy,CalderaCabral:2009ja,Majerotto:2009np,Abdalla:2009mt,Honorez:2010rr,
Clemson:2011an,Pan:2012ki,Salvatelli:2013wra,Yang:2014vza,Yang:2014gza,Nunes:2014qoa,
Faraoni:2014vra,Pan:2014afa,Ferreira:2014cla,Tamanini:2015iia,Li:2015vla,Murgia:2016ccp,
Nunes:2016dlj,Yang:2016evp,Pan:2016ngu,Sharov:2017iue,An:2017kqu,Santos:2017bqm,
Mifsud:2017fsy,Kumar:2017bpv,Guo:2017deu,Pan:2017ent,An:2017crg,Costa:2018aoy,
Wang:2018azy,vonMarttens:2018iav,Yang:2018qec,Martinelli:2019dau,Li:2019loh,
Yang:2019vni,Bachega:2019fki,Yang:2019uzo,Li:2019ajo,
Mukhopadhyay:2019jla,Carneiro:2019rly,Kase:2019veo,Yamanaka:2019aeq,Yamanaka:2019yek} and Ref.~\cite{Wang:2016lxa} for a recent comprehensive review on interacting dark sector models, motivated by the idea of coupled quintessence~\cite{Wetterich:1994bg,Amendola:1999dr,Amendola:1999er,
Mangano:2002gg,Zhang:2005rg,Saridakis:2010mf,Barros:2018efl,
DAmico:2018mnx,Liu:2019ygl}.~\footnote{See also~\cite{Benisty:2017eqh,Benisty:2018qed,Benisty:2018oyy,
Anagnostopoulos:2019myt,Benisty:2019jqz} for examples of models of unified interacting DM-DE.} These models may also be an interesting key to solving some existing cosmological tensions~\cite{Salvatelli:2014zta,Kumar:2016zpg,Xia:2016vnp,Kumar:2017dnp,DiValentino:2017iww,
Yang:2017ccc,Feng:2017usu,Yang:2018ubt,Yang:2018xlt,Yang:2018uae,
Li:2018ydj,Kumar:2019wfs,Pan:2019jqh,
Yang:2019uzo,Pan:2019gop,Benetti:2019lxu}.
We have recently shown that one particular and well-studied interacting DE model is still a viable solution to the $H_0$ tension in light of the 2019 \textit{Planck} CMB data and the local measurement of $H_0$~\cite{DiValentino:2019ffd}. However, our study in~\cite{DiValentino:2019ffd} considered a minimal dark energy scenario, where the interacting DE component is essentially a cosmological constant (see~\cite{Bamba:2012cp} for a recent review on dark energy models). In this work, we allow for more freedom in the DE sector, considering a more generic DE component with an equation of state $w$ not necessarily equal to $-1$. We study in more detail the properties of DE required to solve the $H_0$ tension, analyzing the suitable values of the coupling ($\xi$) and the equation of state ($w$) for the DE component which can ameliorate the Hubble tension. While these two parameters are, in principle, independent, the potential presence of early-time superhorizon instabilities results in their viable parameter spaces being correlated.
The rest of this paper is then organized as follows. Section~\ref{sec:interacting} reviews the basic equations governing the cosmology of extended interacting dark energy models, briefly discussing their stability and initial conditions. The methodology and datasets adopted in our numerical studies are presented in Sec.~\ref{sec:data}, whereas in Sec.~\ref{sec:results} we present our results. We conclude in Sec.~\ref{sec:conclusions}.
\section{Extended interacting dark energy models}
\label{sec:interacting}
Interacting dark energy models (IDE in what follows) are characterized by a modification to the usual conservation equations of the DM and DE energy-momentum tensors $T^{\mu\nu}_c$ and $T^{\mu\nu}_x$ (which would usually read $\nabla_{\nu}T^{\mu\nu}_c=\nabla_{\nu}T^{\mu\nu}_x=0$), which now read~\cite{Valiviita:2008iv,Gavela:2009cy}:
\begin{eqnarray}
\nabla_{\nu}T^{\mu\nu}_c &=& \frac{Qu^{\mu}}{a}\,,\\
\nabla_{\nu}T^{\mu\nu}_x &=& -\frac{Qu^{\mu}}{a}\,,
\label{eq:continuity}
\end{eqnarray}
where $a$ is the scale factor and the DM-DE interaction rate is given by $Q$:
\begin{eqnarray}
Q = \xi{\cal H}\rho_x\,,
\label{eq:coupling}
\end{eqnarray}
where $\xi$ is a dimensionless number quantifying the strength of the DM-DE coupling. From now on, we shall refer to $\xi$ as the DM-DE coupling. Notice that $Q>0$ and $Q<0$ indicate, respectively, energy transfer from DE to DM and vice versa, or a possible decay of DE into DM and vice versa, depending on the details of the underlying model.
At the background level, for a pressureless cold DM component and a DE component with equation of state (EoS) $w$, the evolution of the background DM and DE energy densities are~\cite{Gavela:2009cy}:
\begin{eqnarray}
\label{eq:rhoc}
\rho_c &=& \frac{\rho^0_c}{a^3}+\frac{\rho^0_x}{a^3} \left [ \frac{\xi}{3w+\xi} \left ( 1-a^{-3w-\xi} \right ) \right ]\,, \\
\label{eq:rhox}
\rho_x &=& \frac{\rho^0_x}{a^{3(1+w)+\xi}} \,,
\end{eqnarray}
where $\rho^0_c$ and $\rho^0_x$ are the DM and DE energy densities today, respectively. At the linear perturbation level, and setting the DE speed of sound $c_{s,x}^2=1$, the evolution of the DM and DE density perturbations ($\delta_c$, $\delta_x$) and velocities ($\theta_c$, $\theta_x$) is given by:
\small
\begin{eqnarray}
\label{eq:deltac}
\dot{\delta}_c &=& -\theta_c - \frac{1}{2}\dot{h} +\xi{\cal H}\frac{\rho_x}{\rho_c}(\delta_x-\delta_c)+\xi\frac{\rho_x}{\rho_c} \left ( \frac{kv_T}
{3}+\frac{\dot{h}}{6} \right )\,, \\
\label{eq:thetac}
\dot{\theta}_c &=& -{\cal H}\theta_c\,,\\
\label{eq:deltax}
\dot{\delta}_x &=& -(1+w) \left ( \theta_x+\frac{\dot{h}}{2} \right )-\xi \left ( \frac{kv_T}{3}+\frac{\dot{h}}{6} \right ) \nonumber \\
&&-3{\cal H}(1-w) \left [ \delta_x+\frac{{\cal H}\theta_x}{k^2} \left (3(1+w)+\xi \right ) \right ]\,,\\
\label{eq:thetax}
\dot{\theta}_x &=& 2{\cal H}\theta_x+\frac{k^2}{1+w}\delta_x+2{\cal H}\frac{\xi}{1+w}\theta_x-\xi{\cal H}\frac{\theta_c}{1+w}\,,
\end{eqnarray}
\normalsize
where $h$ is the usual synchronous gauge metric perturbation. In addition, $v_T$ is the center of mass velocity for the total fluid, whose presence is required by gauge invariance considerations~\cite{Gavela:2010tm}:
\begin{eqnarray}
v_T = \frac{\sum_i \rho_i q_i}{\sum_i \left ( \rho_i + P_i \right )}\,,
\label{eq:vt}
\end{eqnarray}
where the index $i$ runs over the various species (whose energy densities and pressures are $\rho_i$ and $P_i$), and $q_i$ is the heat flux of species $i$, given by:
\begin{eqnarray}
q_i = \frac{ \left ( \rho_i + P_i \right ) \theta_i}{kP_i}\,.
\label{eq:qi}
\end{eqnarray}
The initial conditions for the DE perturbations $\delta_x$ and $\theta_x$ also need to be modified to the following~\cite{Gavela:2010tm}:
\begin{eqnarray}
\delta_x^{\rm in}(\eta) &=& \frac{1+w+\xi/3}{12w^2-2w-3w\xi+7\xi-14}\delta_{\gamma}^{\rm in}(\eta)\nonumber \\
& \times & \frac{3}{2}(2\xi-1-w)\,, \\
\theta_x^{\rm in}(\eta) &=& \frac{3}{2}\frac{\eta(1+w+\xi/3)}{2w+3w\xi+14-12w^2-7\xi}\delta_{\gamma}^{\rm in}(\eta)\,,
\label{eq:initialconditions}
\end{eqnarray}
where $\eta= k \tau$.
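As a sanity check on these initial conditions, note that the two expressions share the common prefactor $(1+w+\xi/3)$ divided by (minus) the same polynomial, so that $\theta_x^{\rm in}=-\eta\,\delta_x^{\rm in}/(2\xi-1-w)$. The short sketch below verifies this relation numerically; the input values ($w=-0.95$, $\xi=-0.1$, $\delta_\gamma^{\rm in}=1$, $\eta=0.01$) are purely illustrative assumptions, not fitted quantities.

```python
# Adiabatic initial conditions for the DE perturbations (Gavela et al.-type).
# Illustrative inputs only; not best-fit values from the analysis.

def de_initial_conditions(eta, w, xi, delta_gamma):
    """Return (delta_x_in, theta_x_in) for given eta = k*tau."""
    poly = 12 * w**2 - 2 * w - 3 * w * xi + 7 * xi - 14
    pref = (1 + w + xi / 3) / poly
    delta_x = 1.5 * (2 * xi - 1 - w) * pref * delta_gamma
    # Note: 2w + 3w*xi + 14 - 12w^2 - 7xi = -poly, hence the minus sign below.
    theta_x = -1.5 * eta * pref * delta_gamma
    return delta_x, theta_x

dx, tx = de_initial_conditions(0.01, -0.95, -0.1, 1.0)
```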
Finally, besides affecting the background and perturbation evolution, as well as requiring suitable initial conditions, the presence of a DM-DE coupling may affect the stability of the interacting system. Apart from the gravitational instabilities present when $w=-1$~\cite{Valiviita:2008iv,He:2008si}, there may also be early-time instabilities~\cite{Valiviita:2008iv,He:2008si,Jackson:2009mz,Gavela:2009cy,Gavela:2010tm,Clemson:2011an}, and avoiding them leads to imposing stability conditions on $w$ and $\xi$. Therefore, within the model in question, even though in principle the two parameters $\xi$ and $w$ describing the dark energy physics sector are independent, it turns out that only two distinct classes of models remain possible: essentially, the signs of $\xi$ and $1+w$ have to be opposite. In one class of models $\xi>0$ and $w<-1$ (and thus energy flows from DE to DM), and in the second one $\xi<0$ and $w>-1$ (thus energy transfer occurs from DM to DE).~\footnote{Other possibilities considered in the literature to address these two types of instabilities include an extension of the parametrized post-Friedmann approach to the IDE case~\cite{Li:2014eha,Li:2014cee,Guo:2017hea,Zhang:2017ize,Guo:2018gyo,Dai:2019vif}, as well as considering phenomenological coupling functions $Q$ depending on the DE EoS $w$~\cite{Yang:2017zjs,Yang:2017ccc,Yang:2018euj,Yang:2018xlt}.} Also, as is clear from Eq.~(\ref{eq:rhoc}), even when the aforementioned instability-free prescriptions are considered, one needs to ensure that the DM energy density remains positive by requiring $\xi<-3w$. This is not a problem when $\xi<0$ and $w>-1$, since accelerated expansion requires $w<-1/3$, and therefore $w$ cannot take positive values, meaning that $\xi<0$ automatically implies $\xi<-3w$. For the $\xi>0$ and $w<-1$ case, the condition $\xi<-3w$ is not automatically satisfied, and it needs to be imposed as an extra constraint on the allowed parameter spaces.
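For concreteness, the closed-form background solutions of Eqs.~(\ref{eq:rhoc},\ref{eq:rhox}) together with the stability and positivity conditions just discussed can be encoded in a few lines. This is a minimal sketch with illustrative present-day density values, not part of the actual analysis pipeline.

```python
# Background densities in the interacting model, Eqs. (rho_c, rho_x),
# and the allowed (w, xi) regions discussed in the text.

def rho_c(a, rho_c0, rho_x0, w, xi):
    """Coupled cold DM density (interaction rate Q = xi * H * rho_x)."""
    return rho_c0 / a**3 + (rho_x0 / a**3) * (xi / (3 * w + xi)) * (1 - a**(-3 * w - xi))

def rho_x(a, rho_x0, w, xi):
    """Coupled DE density with constant EoS w."""
    return rho_x0 / a**(3 * (1 + w) + xi)

def is_allowed(w, xi):
    """Signs of xi and (1+w) must be opposite, plus xi < -3w for rho_c > 0."""
    opposite_signs = (xi > 0 and w < -1) or (xi < 0 and w > -1)
    return opposite_signs and xi < -3 * w

# Today (a = 1) the densities reduce to their present values:
print(rho_c(1.0, 0.26, 0.69, -0.9, -0.3))  # -> 0.26
print(is_allowed(-1.1, 3.5))               # violates xi < -3w -> False
```

For $\xi\to 0$ the coupled solutions reduce to the standard uncoupled scalings $\rho_c\propto a^{-3}$ and $\rho_x\propto a^{-3(1+w)}$, as they must.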
\begin{figure*}[th]
\begin{center}
\includegraphics[width=0.49\textwidth]{H0_omegamh2_coupling_pos.pdf}
\includegraphics[width=0.49\textwidth]{H0_omegamh2_coupling_neg.pdf}
\caption{Left (right) panel: Samples from Planck chains in the ($H_0$, $\Omega_m h^2$) plane for the $\xi q$CDM ($\xi p$CDM) model, color-coded by $\xi$.}
\label{fig:tri}
\end{center}
\end{figure*}
\section{Models and datasets}
\label{sec:data}
The parameter space of the IDE model we consider is described by the usual six cosmological parameters of $\Lambda$CDM, complemented by one or two additional parameters depending on whether we allow the dark energy equation of state $w$ to vary freely. We recall that the six parameters of the $\Lambda$CDM model are the baryon and cold DM physical density parameters $\Omega_bh^2$ and $\Omega_ch^2$, the angular size of the sound horizon at decoupling $\theta_s$ (given by the ratio of the sound horizon to the angular diameter distance at decoupling), the optical depth to reionization $\tau$, and the amplitude and tilt of the primordial power spectrum of scalar fluctuations $A_s$ and $n_s$. To these six cosmological parameters, we add the DM-DE coupling $\xi$ and the DE EoS $w$.
The stability issue discussed in Sec.~\ref{sec:interacting} will influence the choice of priors on the cosmological parameters. Ideally, we would want to consider two types of cosmological models: $\Lambda$CDM+$\xi$ (seven parameters) and $\Lambda$CDM+$\xi$+$w$ (eight parameters). Technically speaking, within the baseline $\Lambda$CDM model, the DE EoS would be fixed to $w=-1$. However, as we discussed in Sec.~\ref{sec:interacting}, in the case of IDE models, this leads to gravitational instabilities, which undermine the viability of the model. Therefore, na\"{i}vely considering a baseline $\Lambda$CDM+$\xi$ model would not work, and we instead fix the DE EoS to $w=-0.999$, an approach already adopted in~\cite{Salvatelli:2013wra,DiValentino:2019ffd}. Indeed, for $\Delta w \equiv 1+w$ sufficiently small, Eqs.~(\ref{eq:deltax},\ref{eq:thetax}) essentially capture only the effect of the DM-DE coupling $\xi$, while at the same time the absence of gravitational instabilities is guaranteed. To avoid early-time instabilities, we also require $\xi<0$. We refer to this model as the $\xi\Lambda$CDM or coupled vacuum scenario.
We then extend the baseline coupled vacuum $\xi\Lambda$CDM model by allowing the DE EoS $w$ to vary. To satisfy the stability conditions, see Sec.~\ref{sec:interacting}, we consider two different cases: one where $\xi>0$ and $w<-1$ (which we refer to as $\xi p$CDM model, where the ``\textit{p}'' reflects the fact that the DE EoS lies in the phantom regime), and one where $\xi<0$ and $w>-1$ (which we refer to as $\xi q$CDM model, where the ``\textit{q}'' reflects the fact that the DE EoS lies in the quintessence regime).~\footnote{See e.g.~\cite{Shahalam:2015sja,Shahalam:2017fqt} for concrete examples of construction and dynamical system analyses of coupled quintessence and coupled phantom models.} The three interacting dark energy models we consider in this work, and in particular the values of $w$ and $\xi$ allowed by stability conditions therein, are summarized in Tab.~\ref{tab:models}.
\begin{table}[!b]
\begin{tabular}{|c||c|c|c|}
\hline
\textbf{Model} & DE EoS & DM-DE coupling & Energy flow \\
\hline
\hline
$\xi\Lambda$CDM & $w=-0.999$ & $\xi<0$ & DM$\to$DE \\
\hline
$\xi p$CDM & $w<-1$ & $\xi>0\,,\quad \xi<-3w$ & DE$\to$DM \\
\hline
$\xi q$CDM & $w>-1$ & $\xi<0$ & DM$\to$DE \\
\hline
\end{tabular}
\caption{Summary of the three interacting dark energy models considered in this work. For all three cases, we report the values allowed for the DE EoS $w$ and the DM-DE coupling $\xi$ ensuring that gravitational instabilities, early-time instabilities, and unphysical values for the DM energy density are avoided, as well as the direction of energy flow (DE$\to$DM or DM$\to$DE). For all models, we vary the six usual parameters of the $\Lambda$CDM model.}
\label{tab:models}
\end{table}
Having described the three models we consider in this work, we now proceed to describe the datasets we adopt. We first consider measurements of CMB temperature and polarization anisotropies, as well as their cross-correlations. This dataset is called Planck TT,TE,EE+lowE in~\cite{Aghanim:2018eyx}, whereas we refer to it as \textit{Planck}. We then include the lensing reconstruction power spectrum obtained from the CMB trispectrum analysis~\cite{Aghanim:2018oex}, which we refer to as \textit{lensing}.
In addition to CMB data, we also consider Baryon Acoustic Oscillation (BAO) measurements from the 6dFGS~\cite{Jones:2009yz,Beutler:2011hx}, SDSS-MGS~\cite{York:2000gk,Ross:2014qpa}, and BOSS DR12~\cite{Alam:2016hwk} surveys, and we shall refer to the combination of these BAO measurements as \textit{BAO}. Supernovae Type Ia (SNeIa) distance moduli data from the \textit{Pantheon} sample~\cite{Scolnic:2017caz}, the largest spectroscopically confirmed SNeIa sample, consisting of distance moduli for 1048 SNeIa, are also included in our numerical analyses, and we refer to this dataset as \textit{Pantheon}. We also consider a Gaussian prior on the Hubble constant $H_0=74.03\pm1.42$ km/s/Mpc, as measured by the SH0ES collaboration in~\cite{Riess:2019cxk}, and we refer to it as \textit{R19}.
Finally, we consider a case where we combine all the aforementioned datasets (\textit{Planck}, \textit{lensing}, \textit{BAO}, \textit{Pantheon}, and \textit{R19}). We refer to this dataset combination as \textit{All19}.
We modify the Boltzmann solver \texttt{CAMB}~\cite{Lewis:1999bs} to incorporate the effect of the DM-DE coupling as in Eqs.~(\ref{eq:deltac}-\ref{eq:thetax}). We sample the posterior distribution of the cosmological parameters by making use of Markov Chain Monte Carlo (MCMC) methods, through a suitably modified version of the publicly available MCMC sampler \texttt{CosmoMC}~\cite{Lewis:2002ah,Lewis:2013hha}. We monitor the convergence of the generated MCMC chains through the Gelman-Rubin parameter $R-1$~\cite{Gelman:1992zz}, requiring $R-1<0.01$ for our MCMC chains to be considered converged.
In addition to performing parameter estimation, we also perform a model comparison analysis. In particular, we use our MCMC chains to compute the Bayesian evidence for the three interacting dark energy models ($\xi\Lambda$CDM, $\xi q$CDM, and $\xi p$CDM), given various dataset combinations, using the \texttt{MCEvidence} code~\cite{Heavens:2017afc}. We then compute the natural logarithm of the Bayes factor with respect to $\Lambda$CDM, which we refer to as $\ln B$. With this definition, a value $\ln B>0$ [respectively $\ln B<0$] indicates that the interacting model is preferred [respectively disfavoured] over $\Lambda$CDM. We qualify the strength of the obtained values of $\ln B$ using the modified version of the Jeffreys scale provided in~\cite{Kass:1995loi}. In particular, the preference for the model with higher $\ln B$ is weak for $0 \leq \vert \ln B \vert <1$, positive for $1 \leq \vert \ln B \vert <3$, strong for $3 \leq \vert \ln B \vert <5$, and very strong for $\vert \ln B \vert \geq 5$.
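The thresholds of the modified Jeffreys scale we adopt can be made explicit with a trivial helper function. This is only a convenience sketch for the reader; the evidences themselves are computed with \texttt{MCEvidence}.

```python
def jeffreys_strength(lnB):
    """Qualify the preference strength from |ln B| on the modified
    Jeffreys scale of Kass & Raftery used in the text."""
    b = abs(lnB)
    if b < 1:
        return "weak"
    if b < 3:
        return "positive"
    if b < 5:
        return "strong"
    return "very strong"

print(jeffreys_strength(10.0))  # -> very strong
print(jeffreys_strength(-0.6))  # -> weak
```

The sign of $\ln B$ then determines which model the strength refers to: the interacting model for $\ln B>0$, $\Lambda$CDM for $\ln B<0$.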
\section{Results}
\label{sec:results}
We now discuss the results obtained using the methods and datasets described in Sec.~\ref{sec:data}. We begin by considering the baseline coupled vacuum $\xi\Lambda$CDM model, wherein the DE EoS is fixed to $w=-0.999$ (as a surrogate for the cosmological constant $\Lambda$ for which one has $w=-1$) and $\xi<0$. Then we will describe the $\xi q$CDM model, where $\xi<0$ and $w>-1$, and finally the $\xi p$CDM model, where $\xi>0$ and $w<-1$.
\subsection{Coupled vacuum: $\xi\Lambda$CDM model}
In this section we explore the same model as in Ref.~\cite{DiValentino:2019ffd} but in light of different datasets, notably including also the \textit{BAO} and \textit{Pantheon} measurements of the late-time expansion history. These results are summarized in Tab.~\ref{xi}.
Notice that with Planck CMB data alone, the value of the Hubble constant is much larger than that obtained in the absence of a DM-DE coupling ($H_0=67.27\pm 0.60$~km/s/Mpc), and therefore the $H_0$ tension is strongly alleviated. When combining \emph{Planck} with R19 measurements, the statistical preference for a non-zero coupling $\xi$ exceeds $5\sigma$. These results agree with the ones obtained in~\cite{DiValentino:2019ffd}. The reason for this preference is that in the coupled vacuum $\xi\Lambda$CDM model the energy flows from DM to DE, so that the amount of DM today is smaller. To match the position of the acoustic peaks in the CMB, the quantity $\Omega_c h^2$ should not decrease dramatically, which automatically implies a larger value of $h$, i.e. $H_0$.
An important thing to point out is that $\Omega_ch^2$ is the physical density of cold DM \textit{today}. In the interacting models considered in this work, deviations from $\Lambda$CDM are almost exclusively occurring at late times, which is why the addition of late-time datasets such as \textit{BAO} or \textit{Pantheon} is important. As one can see from Eqs.~(\ref{eq:rhoc},\ref{eq:rhox}), for the region of parameter space considered, the cold DM energy density at the time of last-scattering in the interacting models is essentially the same as that in $\Lambda$CDM, explaining why these models are still able to fit the \textit{Planck} dataset well, as they leave the relative height of the acoustic peaks unchanged.
The addition of low-redshift measurements, such as \textit{BAO} or \textit{Pantheon} SNeIa data, still hints at the presence of a coupling, albeit at a lower statistical significance. For these two datasets, too, the Hubble constant values are larger than those obtained in the case of a pure $\Lambda$CDM scenario ($H_0= 67.66\pm0.42$~km/s/Mpc ($67.48\pm 0.50$~km/s/Mpc) for \textit{Planck}+\textit{BAO} (+\textit{Pantheon})). While in this case the central values of the inferred Hubble parameter are not as high as for the previously discussed case considering CMB data alone (for \textit{Planck}+\textit{BAO} we find $69.4^{+0.9}_{-1.5}$~km/s/Mpc), this value is large enough to bring the $H_0$ tension well below the $3\sigma$ level. In other words, the tension between \textit{Planck}+\textit{BAO} and R19 could be due to a statistical fluctuation in the case of an interacting scenario. Finally, when combining all datasets together (the \textit{All19} combination), we find $H_0=69.9 \pm 0.8$~km/s/Mpc, so that the tension with R19 is reduced to slightly more than $2.5\sigma$.
With regard to the \textit{BAO} dataset, it is important to remind the reader that BAO data are extracted under the assumption of $\Lambda$CDM, and the modified scenario of interacting dark energy could affect the result. However, the residual tension also clearly confirms earlier findings based on the inverse distance ladder approach (e.g.~\cite{Bernal:2016gxb,Feeney:2018mkj,Lemos:2018smw,Taubenberger:2019qna}) that finding late-time solutions to the $H_0$ tension which satisfactorily fit BAO and SNe data is challenging (albeit not impossible).
Finally, we compute $\ln B$ for all the 6 dataset combinations reported in Tab.~\ref{xi}. We confirm the findings of~\cite{DiValentino:2019ffd} that the preference for the coupled vacuum $\xi\Lambda$CDM model is positive when considering the \textit{Planck} dataset alone ($\ln B=1.3$), and very strong when considering the \textit{Planck}+\textit{R19} dataset combination ($\ln B=10.0$). The preference decreases to weak when considering the \textit{Planck}+\textit{lensing} dataset combination ($\ln B=0.9$). On the other hand, including late-time datasets through the \textit{Planck}+\textit{BAO} and \textit{Planck}+\textit{Pantheon} dataset combinations leads to the baseline $\Lambda$CDM model being preferred by Bayesian evidence considerations, with $\ln B=-0.6$ (weak preference) and $\ln B=-1.5$ (positive preference) respectively. Finally, considering the joint \textit{All19} dataset combination we find $\ln B=1.4$, and hence an overall positive preference for the $\xi\Lambda$CDM model. Although such a positive preference is mostly driven by the \textit{R19} dataset, we still find it intriguing given that the late-time \textit{BAO} and \textit{Pantheon} datasets (which strongly constrain late-time deviations from $\Lambda$CDM) were also included, and the resulting value of $H_0$ is such that the $H_0$ tension could be due to a statistical fluctuation in the case of the $\xi\Lambda$CDM model.
\squeezetable
\begin{center}
\begin{table*}
\begin{tabular}{cccccccccccccccc}
\hline\hline
Parameters & Planck & Planck & Planck& Planck & Planck & All19 \\
& &+R19 & +lensing & +BAO & + Pantheon & \\ \hline
$\Omega_b h^2$ & $ 0.0224 \pm 0.0002$ & $ 0.0224\pm0.0002$ & $ 0.0224\pm 0.0002$ & $ 0.0224\pm 0.0001$ & $ 0.0224\pm 0.0002$ & $ 0.0224\pm 0.0001$ \\
$\Omega_c h^2$ & $ <0.105 $ & $ 0.031^{+0.013}_{-0.023}$ & $ <0.108$ & $ 0.095^{+0.022}_{-0.008} $& $ 0.103^{+0.013}_{-0.007} $ & $0.092^{+0.011}_{-0.009}$ \\
$\xi$ & $ -0.54^{+0.12}_{-0.28}$ & $ -0.66^{+0.09}_{-0.13}$ & $ -0.51^{+0.12}_{-0.29}$ & $ -0.22^{+0.21}_{-0.05}$ & $ -0.15^{+0.12}_{-0.06}$ & $-0.24^{+0.09}_{-0.08}$ \\ \hline \hline
$H_0 $[km/s/Mpc] & $ 72.8^{+3.0}_{-1.5}$& $ 74.0^{+1.2}_{-1.0}$ & $ 72.8^{+3.0}_{-1.6}$ & $ 69.4^{+0.9}_{-1.5}$ & $ 68.6^{+0.8}_{-1.0}$ & $69.9 \pm 0.8$\\
$\sigma_8$ & $ 2.27^{+0.40}_{-1.40}$ & $ 2.71^{+0.47}_{-1.30}$ & $ 2.16^{+0.35}_{-1.40}$ & $ 1.05^{+0.03}_{-0.24}$ & $ 0.95^{+0.04}_{-0.12}$ & $1.04^{+0.08}_{-0.13}$\\
$S_8$ & $ 1.30^{+0.17}_{-0.44}$ & $ 1.44^{+0.17}_{-0.34}$ & $ 1.30^{+0.15}_{-0.42} $ & $ 0.93^{+0.03}_{-0.10}$ & $ 0.89^{+0.03}_{-0.05}$ & $0.92^{+0.04}_{-0.06}$\\ \thickhline
$\ln B$ & $1.3$ & $10.0$ & $0.9$ & $-0.6$ & $-1.5$ & $1.4$ \\
Strength & Positive ($\xi\Lambda$CDM) & Very strong ($\xi\Lambda$CDM) & Weak ($\xi\Lambda$CDM) & Weak ($\Lambda$CDM) & Positive ($\Lambda$CDM) & Positive ($\xi\Lambda$CDM) \\
\hline\hline
\end{tabular}
\caption{Constraints on selected cosmological parameters of the $\xi\Lambda$CDM model. Constraints are reported as 68\%~CL intervals, unless they are quoted as upper/lower limits, in which case they represent 95\%~CL upper/lower limits. The horizontal lines separating the final three parameters ($H_0$, $\sigma_8$, and $S_8$) from the above ones highlight the fact that these three parameters are derived. The second-last row, separated from the above ones by a thicker line, reports $\ln B$, the natural logarithm of the Bayes factor computed with respect to $\Lambda$CDM for each of the datasets in question. A positive [respectively negative] value of $\ln B$ indicates that the $\xi\Lambda$CDM [respectively $\Lambda$CDM] model is preferred. The final row quantifies the strength of the preference for either the $\xi\Lambda$CDM model or the $\Lambda$CDM model (as appropriate given the sign of $\ln B$, and indicated in brackets) using the modified Jeffreys scale discussed in the text.}
\label{xi}
\end{table*}
\end{center}
\subsection{Coupled quintessence: $\xi q$CDM model}
The constraints on the \textit{quintessence} coupled model ($\xi q$CDM) are summarized in Tab.~\ref{wq}.
In these models, the energy flows from the DM to the DE sector, and the DM mass-energy density today is considerably reduced as the magnitude of the (negative) coupling $\xi$ increases, see Eq.~(\ref{eq:rhoc}) and the left panel of Fig.~\ref{fig:tri}. This explains why the \textit{Planck}, \textit{Planck}+\textit{R19}, and \textit{Planck}+\textit{lensing} dataset combinations prefer a non-zero value of the coupling at a rather high significance level ($>3\sigma$), as a value $\xi<0$ can accommodate the smaller amount of DM required when $w>-1$. Also in this case, as for the $\xi\Lambda$CDM model, the cold DM energy density at last-scattering is essentially unchanged with respect to $\Lambda$CDM, which is why the model can fit \textit{Planck} data well.
Concerning the $H_0$ tension, even though the value of the Hubble constant obtained from Planck data alone, $H_0=69.8^{+4.0}_{-2.5}$~km/s/Mpc, is larger than in the baseline $\Lambda$CDM model, it is still not as large as in the $\xi\Lambda$CDM model discussed above. This is due to the strong anti-correlation between $w$ and $H_0$, see the left panel of Fig.~\ref{fig:wH0}. This well-known anti-correlation reflects the competing effects of $H_0$ and $w$ on the comoving distance to last-scattering and dominates over the impact of $\xi$, which would instead push $H_0$ to even larger values, as we saw earlier.
When combining CMB with the low-redshift \textit{BAO} and \textit{Pantheon} datasets, a significant preference for a large negative value of $\xi$ intriguingly persists, contrary to the $\xi\Lambda$CDM scenario. Such a preference is driven by the fact that a non-zero coupling $\xi$ will reduce the large value of $\Omega_m$ required if the DE EoS is allowed to vary in the $w>-1$ region. As we saw earlier for the $\xi\Lambda$CDM model, adding low-redshift data decreases the central value of $H_0$, but it also reduces the significance of the Hubble tension between Planck+BAO and R19. Interestingly, we see that in the case of \textit{Planck}+\textit{BAO} and \textit{Planck}+\textit{Pantheon} there is also a preference for $w>-1$ at about three standard deviations. This preference is also suggested by the \textit{Planck}+\textit{R19} dataset. As a matter of fact, in the case of interacting dark energy, quintessence models agree with observations and also reduce the significance of the Hubble tension. When considering the \textit{All19} dataset combination, we find $H_0=69.8 \pm 0.8$~km/s/Mpc, and again, as in the case of the $\xi\Lambda$CDM model, the $H_0$ tension is reduced to slightly more than $2.5\sigma$.
Bayesian evidence considerations, however, overall disfavour the $\xi q$CDM model compared to $\Lambda$CDM. The extra parameter $w$ is what penalizes the $\xi q$CDM model: while the improvement in fit within the $\xi\Lambda$CDM model was sufficient to justify the extra parameter $\xi$, this is no longer the case here, once the two extra parameters $\xi$ and $w$ are taken into account. In fact, except for the \textit{Planck}+\textit{R19} dataset combination, all other dataset combinations (including \textit{Planck} alone) favour $\Lambda$CDM, with strength ranging from weak (\textit{Planck} and \textit{All19}) to positive (\textit{Planck}+\textit{lensing}, \textit{Planck}+\textit{BAO}, and \textit{Planck}+\textit{Pantheon}), with the largest negative value of $\ln B$ being $\ln B=-2.6$ for the \textit{Planck}+\textit{Pantheon} dataset combination.
\squeezetable
\begin{center}
\begin{table*}
\begin{tabular}{cccccccccccccccc}
\hline\hline
Parameters & Planck & Planck & Planck& Planck & Planck & All19 \\
& &+R19 & +lensing & +BAO & + Pantheon \\ \hline
$\Omega_b h^2$ & $ 0.0224 \pm 0.0002$ & $ 0.0224\pm0.0002$ & $ 0.0224\pm 0.0001$ & $ 0.0224\pm 0.0001$ & $ 0.0224\pm 0.0002$ & $0.0224 \pm 0.0001$ \\
$\Omega_c h^2$ & $ <0.099 $ & $ <0.045$ & $ <0.091$ & $ <0.099 $& $ <0.099 $ & $<0.087$\\
$\xi$ & $ -0.63^{+0.06}_{-0.22}$ & $ -0.73^{+0.05}_{-0.10}$ & $ -0.61^{+0.08}_{-0.22}$ & $ -0.59^{+0.09}_{-0.25}$ & $ -0.58^{+0.10}_{-0.26}$ & $-0.59^{+0.10}_{-0.23}$\\
$w$ & $ <-0.69$ & $ -0.95^{+0.01}_{-0.05}$ & $ <-0.71$ & $ -0.84^{+0.09}_{-0.07}$ & $ -0.84^{+0.09}_{-0.05}$ & $-0.87^{+0.08}_{-0.05}$\\ \hline \hline
$H_0 $[km/s/Mpc] & $ 69.8^{+4.0}_{-2.5}$& $ 73.3^{+1.2}_{-1.0}$ & $ 69.9^{+3.7}_{-2.5}$ & $ 68.6\pm1.4$ & $ 68.3\pm1.0$ & $69.8 \pm 0.7$\\
$\sigma_8$ & $ 2.61^{+0.69}_{-1.70}$ & $ 3.43^{+0.94}_{-1.30}$ & $ 2.48^{+0.63}_{-1.60}$ & $ 2.31^{+0.56}_{-1.40}$ & $ 2.21^{+0.46}_{-1.30}$ & $2.3^{+0.5}_{-1.3}$\\
$S_8$ & $ 1.43^{+0.29}_{-0.46}$ & $ 1.63^{+0.31}_{-0.26}$ & $ 1.39^{+0.23}_{-0.44} $ & $ 1.35^{+0.24}_{-0.45}$ & $ 1.33^{+0.20}_{-0.44}$ & $1.34^{+0.19}_{-0.42}$\\ \thickhline
$\ln B$ & $-0.8$ & $7.4$ & $-1.3$ & $-1.8$ & $-2.6$ & $-0.3$ \\
Strength & Weak ($\Lambda$CDM) & Very strong ($\xi q$CDM) & Positive ($\Lambda$CDM) & Positive ($\Lambda$CDM) & Positive ($\Lambda$CDM) & Weak ($\Lambda$CDM) \\
\hline\hline
\end{tabular}
\caption{As in Tab.~\ref{xi}, for the $\xi q$CDM model.}
\label{wq}
\end{table*}
\end{center}
\subsection{Coupled phantom: $\xi p$CDM model}
The last model explored here is the one in which the DE EoS varies within the phantom region, $w<-1$. Therefore, to avoid instabilities, the coupling $\xi$ must be positive. The constraints on this model are shown in Tab.~\ref{wp}.
Notice from the right panels of Fig.~\ref{fig:tri} and Fig.~\ref{fig:wH0} that \textit{(i)} the current value of $\Omega_m h^2$ is slightly larger than in the $\Lambda$CDM case [see also Eq.~(\ref{eq:rhoc})]; and \textit{(ii)} the value of the Hubble constant is always much larger than in canonical $\Lambda$CDM. This is due to the well-known fact that when $w$ is allowed to vary in the phantom region, the parameter $H_0$ must be increased so as not to affect the location of the CMB acoustic peaks. Consequently, we always obtain an upper bound on $\xi$ rather than a preferred region, as the presence of a non-zero coupling $\xi$ drives $\Omega_m h^2$ to values even larger than those obtained when $w$ alone is allowed to vary freely within the $w<-1$ region. Also in this case, as for the $\xi\Lambda$CDM and $\xi q$CDM models, the cold DM energy density at last-scattering is essentially unchanged with respect to $\Lambda$CDM, which is why the model can fit \textit{Planck} data well.
However, the $H_0$ tension is also strongly alleviated in this case, as there is an extreme degeneracy between $w$ and $H_0$ (see the right panel of Fig.~\ref{fig:wH0}), with $H_0=81.3$~km/s/Mpc from Planck-only data. As we saw earlier for the $\xi q$CDM model, the $H_0$-$w$ degeneracy strongly dominates over the $H_0$-$\xi$ one. Hence, within the $\xi p$CDM model, the resolution of the $H_0$ tension comes from the phantom character of the DE component, rather than from the dark sector interaction itself.
When including low-redshift \textit{BAO} and \textit{Pantheon} measurements, the net effect is to bring the mean value of the DE EoS $w$ very close to $-1$. Consequently, the value of $H_0$ also gets closer to its standard mean value within the $\Lambda$CDM case, albeit remaining larger than the latter. In any case, we confirm that the $H_0$ tension is reduced with non-minimal dark energy physics also when low-redshift data are included. When considering the \textit{All19} dataset combination, we find $H_0=69.8 \pm 0.7$~km/s/Mpc, and again as in the case of the $\xi\Lambda$CDM and $\xi q$CDM models the $H_0$ tension is reduced to slightly more than $2.5\sigma$.
As we saw previously with the $\xi q$CDM model, Bayesian evidence considerations overall disfavour the $\xi p$CDM model compared to $\Lambda$CDM, even more so than they did for the $\xi q$CDM model. With the exception of the \textit{Planck}+\textit{R19} dataset combination, all other dataset combinations favour $\Lambda$CDM, with strength ranging from positive (\textit{Planck}, \textit{Planck}+\textit{lensing}, \textit{All19}), to strong (\textit{Planck}+\textit{BAO}), to very strong (\textit{Planck}+\textit{Pantheon}), with the largest negative value of $\ln B$ being $\ln B=-5.2$ for the \textit{Planck}+\textit{Pantheon} dataset combination.
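As a side note, the qualitative ``strength'' labels quoted in our tables can be reproduced from the $\ln B$ values with a simple classifier. The sketch below assumes the modified Jeffreys (Kass-Raftery) thresholds, which is our reading of the scale rather than something stated in the tables themselves; the function name is illustrative.

```python
# Hypothetical sketch: map ln(Bayes factor) values to qualitative strength
# labels. Thresholds follow the modified Jeffreys (Kass-Raftery) scale, which
# is an assumption here. The sign of ln B indicates which model is favoured
# (positive: the interacting model, negative: LambdaCDM).

def evidence_strength(ln_b):
    a = abs(ln_b)
    if a < 1.0:
        label = "weak"
    elif a < 3.0:
        label = "positive"
    elif a < 5.0:
        label = "strong"
    else:
        label = "very strong"
    favoured = "interacting model" if ln_b > 0 else "LCDM"
    return label, favoured

# ln B values for the xi-pCDM model (Planck, +R19, +lensing, +BAO, +Pantheon, All19):
labels = [evidence_strength(x)[0] for x in (-1.3, 5.6, -1.6, -4.5, -5.2, -2.7)]
assert labels == ["positive", "very strong", "positive", "strong", "very strong", "positive"]
```

With these thresholds, the classifier reproduces every strength label quoted for the $\xi p$CDM model in the table below.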
For the sake of comparison, in Tab.~\ref{lcdm} we report constraints on selected parameters of the three interacting dark energy models we have considered and compare them to the constraints obtained assuming $\Lambda$CDM. We do this only for the \textit{Planck} dataset.
Finally, using the full non-Gaussian posterior on $H_0$, we compute the tension with the local measurement of \textit{R19}, quoted in terms of number of $\sigma$s, for all possible combinations of the three interacting dark energy models and six dataset combinations studied in the paper. These numbers are reported in Tab.~\ref{tension}. As we see, the tension is at a level larger than $3\sigma$ only for the \textit{Planck}+\textit{Pantheon} dataset combination for all three models (even for the \textit{Planck}+\textit{BAO} dataset combination the tension always remains below the $2.9\sigma$ level). On the other hand, when considering the \textit{All19} dataset combination, the tension reaches at most the $2.7\sigma$ level, confirming our earlier claim that the residual tension in most cases could almost be justified by a statistical fluctuation.
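Although the numbers in Tab.~\ref{tension} are computed from the full non-Gaussian posteriors, a naive Gaussian estimate already lands close for the \textit{All19} case. The sketch below assumes the \textit{R19} measurement to be $H_0 = 74.03 \pm 1.42$~km/s/Mpc, a value not quoted in this section.

```python
# Naive Gaussian cross-check of the tension quoted in the tension table.
# The R19 value 74.03 +/- 1.42 km/s/Mpc is an assumption (not stated in this
# section); the paper itself uses the full non-Gaussian posterior on H0.
import math

def gaussian_tension(mu1, sig1, mu2, sig2):
    """Number of sigmas between two independent Gaussian measurements."""
    return abs(mu1 - mu2) / math.sqrt(sig1 ** 2 + sig2 ** 2)

# All19 within xi-pCDM: H0 = 69.8 +/- 0.7 km/s/Mpc, against the assumed R19 value.
t = gaussian_tension(69.8, 0.7, 74.03, 1.42)
assert abs(t - 2.7) < 0.1  # consistent with the 2.7 sigma quoted in the text
```

The agreement with the quoted $2.7\sigma$ suggests that, for this dataset combination, the marginalized posterior on $H_0$ is close to Gaussian.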
\squeezetable
\begin{center}
\begin{table*}
\begin{tabular}{cccccccccccccccc}
\hline\hline
Parameters & Planck & Planck & Planck& Planck & Planck & All19 \\
& &+R19 & +lensing & +BAO & + Pantheon \\ \hline
$\Omega_b h^2$ & $ 0.0224 \pm 0.0002$ & $ 0.0224\pm0.0002$ & $ 0.0224\pm 0.0002$ & $ 0.0224\pm 0.0001$ & $ 0.0224\pm 0.00012$ & $0.0224 \pm 0.0001$ \\
$\Omega_c h^2$ & $ 0.132^{+0.005}_{-0.012} $ & $ 0.133^{+0.006}_{-0.012}$ & $ 0.133^{+0.006}_{-0.012}$ & $ 0.134^{+0.007}_{-0.012} $& $ 0.134^{+0.006}_{-0.012} $ & $0.132^{+0.006}_{-0.012}$ \\
$\xi$ & $ <0.248$ & $ <0.277$ & $ <0.258$ & $ <0.295$ & $ <0.295$ & $<0.288$\\
$w$ & $ -1.59^{+0.18}_{-0.33}$ & $ -1.26\pm0.06$ & $ -1.57^{+0.19}_{-0.32}$ & $ -1.10^{+0.07}_{-0.04}$ & $ -1.08^{+0.05}_{-0.04}$ & $-1.12^{+0.05}_{-0.04}$\\ \hline \hline
$H_0 $[km/s/Mpc] & $ >70.4$& $ 74.1\pm1.4$ & $ 85.0^{+10.0}_{-5.0}$ & $ 68.8^{+1.1}_{-1.5}$ & $ 68.3\pm 1.0$ & $69.8 \pm 0.7$\\
$\sigma_8$ & $ 0.88\pm0.08$ & $ 0.80^{+0.06}_{-0.04}$ & $ 0.87\pm0.08$ & $ 0.75\pm0.05$ & $ 0.76^{+0.05}_{-0.04}$ & $0.76^{+0.06}_{-0.04}$\\
$S_8$ & $ 0.74\pm 0.04$ & $ 0.78 \pm 0.03$ & $ 0.74\pm 0.04 $ & $ 0.79\pm 0.03$ & $ 0.80\pm0.03$ & $0.79^{+0.03}_{-0.02}$\\ \thickhline
$\ln B$ & $-1.3$ & $5.6$ & $-1.6$ & $-4.5$ & $-5.2$ & $-2.7$ \\
Strength & Positive ($\Lambda$CDM) & Very strong ($\xi p$CDM) & Positive ($\Lambda$CDM) & Strong ($\Lambda$CDM) & Very strong ($\Lambda$CDM) & Positive ($\Lambda$CDM) \\
\hline\hline
\end{tabular}
\caption{As in Tab.~\ref{xi}, for the $\xi p$CDM model.}
\label{wp}
\end{table*}
\end{center}
\begin{figure*}[th]
\begin{center}
\includegraphics[width=0.49\textwidth]{w_H0_pos.pdf}
\includegraphics[width=0.49\textwidth]{w_H0_neg.pdf}
\caption{Left (right) panel: $68\%$ and $95\%$~CL allowed regions in the ($w, H_0$) plane for the $\xi q$CDM ($\xi p$CDM) model, for \textit{Planck} alone, \textit{Planck}+\textit{BAO}, and \textit{Planck}+\textit{R19}. Note the marginal overlap between the \textit{Planck}+\textit{BAO} and \textit{Planck}+\textit{R19} confidence regions, indicating an easing of the Hubble tension.}
\label{fig:wH0}
\end{center}
\end{figure*}
\squeezetable
\begin{center}
\begin{table*}
\begin{tabular}{ccccccccccccccc}
\hline\hline
Parameters & $\Lambda$CDM & $\xi\Lambda$CDM & $\xi q$CDM & $\xi p$CDM \\ \hline
$\Omega_b h^2$ & $0.0224 \pm 0.0002$ & $0.0224 \pm 0.0002$ & $0.0224 \pm 0.0002$ & $0.0224 \pm 0.0002$ \\
$\Omega_c h^2$ & $0.120 \pm 0.001$ & $<0.105$ & $<0.099$ & $0.132^{+0.005}_{-0.012} $ \\
$\xi$ & $0$ & $-0.54^{+0.12}_{-0.28}$ & $-0.63^{+0.06}_{-0.22}$ & $<0.248$ \\
$w$ & $-1$ & $-0.999$ & $<-0.69$ & $-1.59^{+0.18}_{-0.33}$ \\ \hline \hline
$H_0 $[km/s/Mpc] & $67.3 \pm 0.6$& $72.8^{+3.0}_{-1.5}$ & $69.8^{+4.0}_{-2.5}$ & $>70.4$ \\
$\sigma_8$ & $0.81 \pm 0.01$ & $2.27^{+0.40}_{-1.40}$ & $2.61^{+0.69}_{-1.70}$ & $0.88 \pm 0.08$ \\
$S_8$ & $0.83 \pm 0.02$ & $1.30^{+0.17}_{-0.44}$ & $1.43^{+0.29}_{-0.46}$ & $0.74 \pm 0.04$ \\
\hline\hline
\end{tabular}
\caption{Constraints on selected parameters of the $\Lambda$CDM, $\xi\Lambda$CDM, $\xi q$CDM, and $\xi p$CDM models, using the \textit{Planck} dataset alone. Constraints are reported as 68\%~CL intervals, unless they are quoted as upper/lower limits, in which case they represent 95\%~CL upper/lower limits.}
\label{lcdm}
\end{table*}
\end{center}
\begin{center}
\begin{table*}
\begin{tabular}{ccccccccccccccc}
\hline\hline
Dataset & $\xi\Lambda$CDM & $\xi q$CDM & $\xi p$CDM \\ \hline
\textit{Planck} & $0.4\sigma$ & $1.0\sigma$ & $0.5\sigma$ \\
\textit{Planck}+\textit{R19} & $<0.1\sigma$ & $0.4\sigma$ & $<0.1\sigma$ \\
\textit{Planck}+\textit{lensing} & $0.4\sigma$ & $1.0\sigma$ & $2.1\sigma$ \\
\textit{Planck}+\textit{BAO} & $2.7\sigma$ & $2.7\sigma$ & $2.9\sigma$ \\
\textit{Planck}+\textit{Pantheon} & $3.3\sigma$ & $3.3\sigma$ & $3.3\sigma$ \\
\textit{All19} & $2.5\sigma$ & $2.7\sigma$ & $2.7\sigma$ \\
\hline\hline
\end{tabular}
\caption{Level of tension between the inferred value of $H_0$ and the \textit{R19} local measurements, quoted in terms of number of $\sigma$s, for all possible combinations of the three interacting dark energy models and six dataset combinations studied in the paper.}
\label{tension}
\end{table*}
\end{center}
\section{Conclusions}
\label{sec:conclusions}
In this work, we have re-examined the hotly debated $H_0$ tension in light of the state-of-the-art high- and low-redshift cosmological datasets, within the context of extended dark energy models. In particular, we have considered interacting dark energy scenarios, featuring interactions between dark matter (DM) and dark energy (DE), allowing for more freedom in the dark energy sector compared to our earlier work~\cite{DiValentino:2019ffd}, by not restricting the dark energy equation of state to being that of a cosmological constant. Early-time superhorizon instability considerations impose stability conditions on the DM-DE coupling $\xi$ and the DE EoS $w$, which we have carefully taken into account.
The most important outcome of our studies is the fact that within these non-minimal DE cosmologies, the long-standing $H_0$ tension is alleviated to some extent. For most of the models and dataset combinations considered, we find indications for a non-zero DM-DE coupling, with a significance that varies depending on whether or not we include low-redshift BAO and SNeIa data. When we allow the DE EoS $w$ to change, we find that the $H_0$-$w$ degeneracy strongly dominates over the $H_0$-$\xi$ one. This implies that the $H_0$ tension is more efficiently solved in the coupled phantom $\xi p$CDM model with $\xi>0$ and $w<-1$ rather than in the coupled quintessence $\xi q$CDM model with $\xi<0$ and $w>-1$, due to the phantom character of the DE rather than due to the presence of the DM-DE interaction.
The inclusion of low-redshift BAO and SNe data (whose results the reader can find in the two rightmost columns of Tab.~\ref{xi}, Tab.~\ref{wq}, and Tab.~\ref{wp}) somewhat softens all the previous findings, although it is worth remarking that the $H_0$ tension is still alleviated even in these cases. It is also intriguing to see that within the coupled quintessence $\xi q$CDM model with $\xi<0$ and $w>-1$, the indication for a non-zero DM-DE coupling persists even when low-redshift data are included. Interestingly, evidence for $w>-1$ at three standard deviations is present when BAO or SNeIa data are included.
Bayesian evidence considerations overall appear to disfavour the interacting models considered, although these conclusions depend very much on which of the three models and six dataset combinations one considers. For instance, the $\xi\Lambda$CDM model with 7 parameters appears to fare rather well when compared to $\Lambda$CDM, being favoured against $\Lambda$CDM for all dataset combinations except \textit{Planck}+\textit{BAO} and \textit{Planck}+\textit{Pantheon}. In particular, when combining all datasets together (the \textit{All19} combination), we find an overall positive preference for the $\xi\Lambda$CDM model over $\Lambda$CDM. The situation is much less favourable for the coupled quintessence and coupled phantom models with 8 parameters, which are always disfavoured (even rather strongly) against $\Lambda$CDM (the only exception being when considering the \textit{Planck}+\textit{R19} dataset combination). Overall, we conclude that the $\xi\Lambda$CDM model can still be considered an interesting solution to the $H_0$ tension \textit{even when low-redshift datasets and Bayesian evidence considerations are taken into account}. This is the main result of our paper.
As a word of caution, the full procedure which leads to the BAO constraints carried out by the different collaborations might not necessarily be valid in extended DE models such as the ones explored here. For instance, the BOSS collaboration, in Ref.~\cite{Anderson:2013zyy}, advises caution when using their BAO measurements (both the pre- and post-reconstruction measurements) in more exotic dark energy cosmologies (see also~\cite{Xu:2012hg} for related work exploring similar biases). Hence, BAO constraints themselves might need to be revised in a non-trivial manner when applied to constrain extended dark energy cosmologies. We plan to explore these and related issues in future work.
Overall, our results suggest that non-minimal modifications to the dark energy sector, such as those considered in our work, are still an intriguing route towards addressing the $H_0$ tension. As it is likely that such tension will persist in the near future, we believe that further investigations along this line are worthwhile and warranted.
\begin{acknowledgments}
E.D.V. acknowledges support from the European Research Council in the form of a Consolidator Grant with number 681431. A.M. is supported by TASP, iniziativa specifica INFN. O.M. is supported by the Spanish grants FPA2017-85985-P and SEV-2014-0398 of the MINECO and the European Union's Horizon 2020 research and innovation program under the grant agreements No.690575 and 67489. S.V. is supported by the Isaac Newton Trust and the Kavli Foundation through a Newton-Kavli fellowship, and acknowledges a College Research Associateship at Homerton College, University of Cambridge. This work is based on observations obtained with Planck (www.esa.int/Planck), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada. We acknowledge use of the Planck Legacy Archive.
\end{acknowledgments}
\section*{Appendix}
Because there are strong correlations between certain parameters in all three interacting dark energy models studied, triangular plots showing the joint posteriors between these parameters might be more informative than the tables we presented. The most correlated parameters are $\Omega_ch^2$, $\xi$, $w$ (where applicable), as well as the derived parameters $H_0$ and $\Omega_m$. Here, we show triangular plots of the joint posteriors of these parameters within the three models studied, which clearly highlight the strong correlations at play.
\begin{figure*}[th]
\includegraphics[width=0.9\linewidth]{base_coupling_tri.pdf}
\caption{Triangular plot showing the 2D joint and 1D marginalized posteriors of $\Omega_ch^2$, $\xi$, $H_0$, and $\Omega_m$, obtained assuming the coupled vacuum $\xi\Lambda$CDM model, for the \textit{Planck} (grey contours), \textit{Planck}+\textit{BAO} (red contours), and \textit{Planck}+\textit{R19} (blue contours) dataset combinations. The plot clearly highlights the strong correlations between these parameters.}
\label{fig:base_coupling_tri}
\end{figure*}
\begin{figure*}[th]
\includegraphics[width=0.9\linewidth]{base_wpos_coupling_tri.pdf}
\caption{Triangular plot showing the 2D joint and 1D marginalized posteriors of $\Omega_ch^2$, $\xi$, $w$, $H_0$, and $\Omega_m$, obtained assuming the coupled quintessence $\xi q$CDM model, for the \textit{Planck} (grey contours), \textit{Planck}+\textit{BAO} (red contours), and \textit{Planck}+\textit{R19} (blue contours) dataset combinations. The plot clearly highlights the strong correlations between these parameters.}
\label{fig:base_wpos_coupling_tri}
\end{figure*}
\begin{figure*}[th]
\includegraphics[width=0.9\linewidth]{base_wneg_coupling_tri.pdf}
\caption{Triangular plot showing the 2D joint and 1D marginalized posteriors of $\Omega_ch^2$, $\xi$, $w$, $H_0$, and $\Omega_m$, obtained assuming the coupled phantom $\xi p$CDM model, for the \textit{Planck} (grey contours), \textit{Planck}+\textit{BAO} (red contours), and \textit{Planck}+\textit{R19} (blue contours) dataset combinations. The plot clearly highlights the strong correlations between these parameters.}
\label{fig:base_wneg_coupling_tri}
\end{figure*}
\bibliographystyle{JHEP}
\section{Introduction and Statement of Main Results}
Let $\mu$ be a weight on $\mathbb{R}$, i.e.~a function that is positive almost everywhere and is locally integrable. Then define $L^2(\mathbb{R};\mu)\equiv L^2(\mu)$ to be the space of functions which are square integrable with respect to the measure $\mu(x) dx$, namely
$$
\left\Vert f\right\Vert_{L^2(\mu)}^2\equiv\int_{\mathbb{R}} \left\vert f(x)\right\vert^2 \mu(x)dx.
$$
For an interval $I$, let $\left\langle \mu\right\rangle_{I}\equiv \frac{1}{\vert I\vert}\int_{I} \mu(x)dx$; similarly, set $\mathbb{E}_I^{\mu}(g)\equiv \frac{1}{\mu(I)}\int_{I} g\mu \, dx$.
In \cite{Bloom} Bloom considers the behavior of the commutator
\begin{equation*}
[b, H] : L ^{p} (\lambda) \mapsto L ^{p} (\mu)
\end{equation*}
where $H$ is the Hilbert transform. When the weights $\mu=\lambda\in A_2$ then it is well-known that boundedness is characterized by $b\in BMO$. Bloom however works in the setting of $ \mu \neq \lambda \in A_2$, finding a characterization in terms of a $BMO$ space adapted to the weight $\rho=\left(\frac{\mu}{\lambda}\right)^{\frac{1}{p}}$,
namely
\begin{equation} \label{e:bmo-rho}
\left\Vert b\right\Vert_{BMO_{\rho}}\equiv\sup_{I} \left(\frac{1}{\rho(I)}\int_{I} \left\vert b(x)-\left\langle b\right\rangle_I\right\vert^2 dx\right)^{\frac{1}{2}}.
\end{equation}
Recall that $ \lambda \in A_p$ if and only if the supremum over intervals below is finite.
\begin{equation*}
[ \lambda ] _{A_p} = \sup _{I} \langle \lambda \rangle_I \langle \lambda ^{1-p'} \rangle_I ^{p-1} < \infty .
\end{equation*}
\begin{thm}[Bloom, \cite{Bloom}*{Theorem 4.2}]
\label{t:bloom}
Let $ 1< p < \infty $, $ \mu ,\lambda \in A_p$. Set $ \rho = \left(\frac{\mu}{\lambda}\right) ^{\frac{1}{p}}$. Then,
$$
\left\Vert [b,H]:L^p(\mu)\to L^p(\lambda)\right\Vert\approx \left\Vert b\right\Vert_{BMO_{\rho}}.
$$
\end{thm}
The space $BMO_{\rho}$ coincides with $BMO$ when $\mu=\lambda$, and this case is well-known.
But, the general case is rather delicate, as there are three independent objects in the commutator,
the two weights and the symbol $ b$. It is remarkable that there is a single condition involving all three which characterizes the boundedness of the commutator.
Commutator estimates are interesting in that operator bounds are characterized in terms of function classes.
They generalize Hankel operators, encode weak-factorization results for the Hardy space, and can be used
to derive div-curl estimates. Bloom himself applied his inequality to matrix weights.
As far as we know, many of these topics remain unexplored in the setting of Bloom's
inequality, and we hope to return to these topics in future papers.
Weighted estimates for commutators are complicated, since $ [b,H]$ is essentially the composition of
$ H$ with paraproduct operators, see \eqref{e:expand} below. This makes two weight estimates for
commutators very difficult. But, the key assumption of both weights being in $ A_2$ allows several
proof strategies that are not available in the general two weight case. A key property is the `joint $ A _{\infty }$ property,' namely that one can quantitatively control Carleson sequences of intervals in both measures. Bloom's argument is based upon an interesting sharp function inequality for the upper bound, and involves an \emph{ad hoc} argument in the lower bound.
We give an alternate proof of Theorem \ref{t:bloom} in the case when $p=2$.
This allows us to present the key ideas for a more general result.
There are different equivalent formulations of Bloom's $ BMO _{\rho }$ space, two of which are detailed in Section \ref{s:equiv}.
These formulations are ideal for characterizing certain two weight inequalities for paraproducts in Section \ref{s:paraproducts}.
Then, $ [b, H]$ is a linear combination of compositions of $ H$ with paraproducts, plus
an error term, as detailed in \eqref{e:expand}. The Hilbert transform is bounded on $ L^2$ of an $ A_2$ weight, thus, an upper bound for the commutator follows in Section \ref{s:Hilbert}.
For the lower bound, a standard argument reveals yet another formulation of the $ BMO _{\rho }$ condition in
\eqref{e:NecCon}.
In a subsequent paper the authors will show how Bloom's result can be extended to all Calder\'on-Zygmund operators in arbitrary dimension and when $1<p<\infty$.
As the reader will see, there are four different equivalent definitions of Bloom's $ BMO _{\rho }$ space.
It is hardly clear which is the best condition. Also, the $ A_2$ condition will be appealed to repeatedly.
For both reasons, we do not attempt to track the dependence on the $ A_2$ norms of the two weights.
In particular $ A \lesssim B $ means that there is an absolute constant $ C$, so that
$ A \leq C ([\lambda ] _{A_2} [\mu ] _{A_2}) ^{C} B $.
\section{Equivalences for Bloom's BMO}
\label{s:equiv}
One of the interesting points, implicit in Bloom's work, is that the $ BMO _{\rho }$ space presents itself in different
formulations at different points of the proof. In this section, we make these alternate definitions precise,
and do so in the dyadic setting. Thus, $ \mathcal D$ denotes the standard dyadic grid on $ \mathbb R $, and for
$ I\in \mathcal D$, the Haar function associated to $ I$ is
\begin{equation*}
h _{I} \equiv \lvert I\rvert ^{-1/2} ( - \mathbf 1_{I _{-}} + \mathbf 1_{I _{+}})
\end{equation*}
where $ I _{\pm} $ are the left and right dyadic children of $ I$.
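As a concrete illustration (not part of the argument), the dyadic Haar system can be sampled on a finite grid and its orthonormality checked numerically; the grid size, depth, and function names below are illustrative choices.

```python
# Numerical sketch of the dyadic Haar system on [0,1): each h_I takes the
# value -|I|^{-1/2} on the left child of I and +|I|^{-1/2} on the right child.
# Grid size and dyadic depth are illustrative; names are hypothetical.
import itertools

def haar(n, k, N):
    """Sampled h_I for I = [k 2^{-n}, (k+1) 2^{-n}) on an N-point grid for [0,1)."""
    length = N >> n
    half = length >> 1
    start = k * length
    scale = (2 ** n) ** 0.5          # |I|^{-1/2}, since |I| = 2^{-n}
    h = [0.0] * N
    for j in range(start, start + half):
        h[j] = -scale
    for j in range(start + half, start + length):
        h[j] = scale
    return h

def inner(f, g, N):
    """Riemann-sum inner product in unweighted L^2(0,1)."""
    return sum(a * b for a, b in zip(f, g)) / N

# The system {h_I} is orthonormal: <h_I, h_J> = 1 if I = J, and 0 otherwise.
N = 16
system = [(n, k) for n in range(3) for k in range(2 ** n)]
for p, q in itertools.product(system, system):
    ip = inner(haar(*p, N), haar(*q, N), N)
    assert abs(ip - (1.0 if p == q else 0.0)) < 1e-9
```

The check exercises all three cases: equal intervals, nested intervals, and disjoint intervals.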
For weights $\mu,\lambda\in A_2$, define
\begin{equation}
\label{e:Bloom1}
\mathbf B_{2} [\mu ,\lambda]\equiv \sup _{ K\in\mathcal{D}} \mu^{-1} (K) ^{-1/2}
\left\lVert \sum_{I \;:\; I\subset K} \widehat b (I) \langle \mu^{-1} \rangle_I h_I
\right\rVert _{L ^{2}(\lambda)}.
\end{equation}
Above, $\widehat b (I) = \langle b, h_I \rangle $, and $\langle \cdot,\cdot\rangle $ denotes the usual inner product in \textit{unweighted} $L^2(\mathbb{R})$.
Note that by the boundedness of the square function on $L^2(w)$, \cite{Witwer}, this can equivalently be characterized by:
\begin{equation}
\label{e:Bloom2}
\mathbf B_{2} [\mu ,\lambda] ^2 \simeq \sup _{ K\in\mathcal{D}} \frac 1 {\mu^{-1} (K)} \sum_{I\subset K} \widehat b(I)^2 \left\langle \mu^{-1}\right\rangle_I^2 \left\langle \lambda\right\rangle_I.
\end{equation}
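For intuition, the finite-depth analogue of the testing quantity \eqref{e:Bloom2} can be evaluated numerically for toy discrete weights; the following sketch is illustrative only, with hypothetical function names and grid parameters.

```python
# Finite-depth numerical sketch of the testing condition: the supremum over
# dyadic K of mu^{-1}(K)^{-1} sum_{I subset K} bhat(I)^2 <mu^{-1}>_I^2 <lam>_I.
# Weights and the symbol b are sampled on a uniform N-point grid for [0,1);
# the dyadic depth and grid size are illustrative, not from the paper.

def avg(w, n, k, N):
    """Average <w>_I over the dyadic interval I = [k 2^{-n}, (k+1) 2^{-n})."""
    length = N >> n
    start = k * length
    return sum(w[start:start + length]) / length

def haar_coeff(b, n, k, N):
    """bhat(I) = <b, h_I> with h_I = |I|^{-1/2}(-1_{I_-} + 1_{I_+})."""
    length = N >> n
    half = length >> 1
    start = k * length
    scale = (2 ** n) ** 0.5
    return scale * (sum(b[start + half:start + length])
                    - sum(b[start:start + half])) / N

def bloom_b2_sq(b, mu, lam, N, depth):
    """Supremum over dyadic K (down to the given depth) of the normalized sum."""
    inv_mu = [1.0 / x for x in mu]
    best = 0.0
    for nK in range(depth):
        for kK in range(2 ** nK):
            total = 0.0
            for n in range(nK, depth):
                step = 2 ** (n - nK)
                for k in range(kK * step, (kK + 1) * step):
                    total += (haar_coeff(b, n, k, N) ** 2
                              * avg(inv_mu, n, k, N) ** 2
                              * avg(lam, n, k, N))
            mass = avg(inv_mu, nK, kK, N) * (N >> nK) / N  # mu^{-1}(K)
            best = max(best, total / mass)
    return best
```

For unit weights and $b$ equal to a single Haar function, the quantity reduces to $\widehat b(I)^2 = 1$, a useful sanity check.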
\begin{prop}\label{p:B=B} For $\mu,\lambda\in A_2$ there holds
\begin{equation} \label{e:B=B}
\mathbf B _{2}[\mu, \lambda] \simeq \mathbf B _{2}[\lambda^{-1}, \mu^{-1} ] \simeq \lVert b\rVert _{BMO _{\rho }}
\end{equation}
where $ \lVert b\rVert _{BMO _{\rho }}$ denotes the dyadic variant of the $ BMO _{\rho }$ space.
\end{prop}
\begin{proof}
We prove the first equivalence in \eqref{e:B=B}.
Fix an interval $ I_0$ for which we verify that
\begin{equation*}
\sum_{I \subset I_0} \widehat b (I) ^2 \langle \lambda \rangle_I ^2 \left\langle \mu^{-1}\right\rangle_I \lesssim
\mathbf B _{2} [\mu ,\lambda] ^2 \lambda (I_0).
\end{equation*}
This will show that $ \mathbf B _{2}[\lambda^{-1}, \mu^{-1} ] \lesssim\mathbf B _{2}[\mu, \lambda] $,
and by symmetry the reverse inequality holds.
Construct stopping intervals by taking $ \mathcal S$ to be the maximal subintervals $I\subset I_0$ such that
\begin{equation*}
\langle \lambda \rangle_{I} > C \langle \lambda \rangle_{I_0} \quad \textup{or} \quad
\langle \lambda \rangle_{I} < C ^{-1} \langle \lambda \rangle_{I_0},
\end{equation*}
or the same conditions hold for $ \mu^{-1} $. By the $ A _{\infty } $ properties of $ \mu , \mu ^{-1}, \lambda $ and $ \lambda ^{-1}$, for $C= C_{\mu,\lambda}>1$ sufficiently large, there holds
\begin{equation*}
\sum_{S\in \mathcal S} \lambda (S) < \tfrac 12 \lambda (I_0).
\end{equation*}
The small constant in front implies that we can recurse inside these intervals, and so it remains to bound the
sum over intervals `above' the stopping intervals.
Let $ \mathcal I $ denote those intervals $ I\subset I_0$ which are not contained in any stopping interval. Note that the
$ \mathbf B _{2} [\mu ,\lambda] $ condition implies that
\begin{equation} \label{e:B=>}
\sum_{I' \in \mathcal I}
\widehat b (I') ^2 \lesssim \mathbf B _{2} [\mu ,\lambda] ^2 \frac { \mu ^{-1}(I_0)} { \langle \mu^{-1} \rangle _{I_0} ^2 \langle \lambda \rangle_{I_0}} =\mathbf B _{2} [\mu ,\lambda] ^2 \frac { \lvert I_0\rvert } { \langle \mu^{-1} \rangle _{I_0} \langle \lambda \rangle_{I_0}}
\end{equation}
Therefore,
\begin{align*}
\sum_{I \in \mathcal I} \widehat b (I) ^2 \langle \lambda \rangle_I ^2\left\langle \mu^{-1}\right\rangle_I
& \lesssim
\langle \lambda \rangle _{I_0} ^2 \langle \mu^{-1} \rangle _{I_0} \sum_{I \in \mathcal I} \widehat b (I)^2
\\
& \lesssim \mathbf B _{2} [\mu ,\lambda]^2 \lambda (I_0).
\end{align*}
Hence we have that $\mathbf B_2[\lambda^{-1},\mu^{-1}]\lesssim \mathbf B_{2}[\mu,\lambda]$. This argument is symmetric and so the result follows.
\bigskip
We now show that
$
\lVert b\rVert _{BMO _{\rho }} \lesssim \mathbf B _{2}[\mu ,\lambda]
$, establishing first an intermediate result.
Use the same stopping interval construction as in the previous argument. Then, we have by Cauchy-Schwarz and \eqref{e:B=>},
\begin{align*}
\int _{I_0} \biggl[
\sum_{I\in \mathcal I} \frac {\widehat b (I) ^2 } {\lvert I\rvert } \mathbf 1_{I}
\biggr] ^{1/2} \;dx
& \lesssim \mathbf B _{2} [\mu ,\lambda] \frac {\lvert I_0\rvert } { [\langle \mu^{-1} \rangle _{I_0} \langle \lambda \rangle _{I_0} ] ^{1/2} }
\\&
\lesssim \mathbf B _{2} [\mu ,\lambda] \frac {\lvert I_0\rvert ^2 }
{ [\mu^{-1} (I_0) \lambda (I_0) ] ^{1/2}} & \textup{rewrite}
\\
& \lesssim \mathbf B _{2} [\mu ,\lambda] \frac {\lvert I_0\rvert ^2 } {( \mu^{-1/2} \lambda ^{1/2} )(I_0)}
& \textup{by H\"older's}
\\&\lesssim \mathbf B _{2} [\mu ,\lambda] \rho (I_0) & \textup{$ \rho = (\mu /\lambda ) ^{1/2} \in A_2$. }
\end{align*}
Here, we use the estimate \eqref{e:B=>}, then H\"older's inequality, to get to the product of
the two $ A_2$ weights. The product is again an $ A_2$ weight, which is the last property used.
It follows from this that we have proved a bound for an $ L ^{1}$ BMO condition, namely
\begin{equation} \label{e:1BMO}
\sup _{I_0} \frac 1 {\rho (I_0)}
\int _{I_0} \biggl[
\sum_{I \;:\; I\subset I_0} \frac {\widehat b (I) ^2 } {\lvert I\rvert } \mathbf 1_{I}
\biggr] ^{1/2} \;dx \lesssim \mathbf B _{2} [\mu ,\lambda] .
\end{equation}
Bloom's definition however includes a square inside the integral, see \eqref{e:bmo-rho}.
To show that the condition above is the same as in \eqref{e:bmo-rho}, run another stopping condition,
and again appeal to the fact that $ \rho \in A_2$.
Let $ \mathcal S$ be the maximal intervals $S\subset I_0$ such that
\begin{equation*}
\sum_{I \;:\; S \subset I\subset I_0} \frac {\widehat b (I) ^2 } {\lvert I\rvert } \ge C
\mathbf B _{2} [\mu ,\lambda] ^2 \langle \rho \rangle _{I_0}.
\end{equation*}
For $ C = C _{\rho }$ sufficiently large, there holds
\begin{equation*}
\sum_{S \in \mathcal S} \rho (S) \leq \tfrac 12 \rho (I_0),
\end{equation*}
and so we can recurse on these intervals. Let $ \mathcal I$ be those intervals contained in $ I_0$ but not contained
in any $ S\in \mathcal S$. There holds
\begin{align*}
\int _{I_0} \sum_{I \in \mathcal I} \frac {\widehat b (I) ^2 } {\lvert I\rvert } \mathbf 1_{I}
\;dx
& \lesssim
\mathbf B _{2} [\mu ,\lambda] ^2 \langle \rho \rangle _{I_0} \lvert I_0\rvert \lesssim
\mathbf B _{2} [\mu ,\lambda] ^2 \rho (I_0).
\end{align*}
That implies that $ \lVert b\rVert _{BMO _{\rho }} \lesssim \mathbf B _{2}[\mu ,\lambda] $.
\bigskip
We show that $ \mathbf B _{2}[\mu ,\lambda] \lesssim \lVert b\rVert _{BMO _{\rho }}$.
Fix the interval $ I_0$ on which we will verify the $ \mathbf B _{2}[\mu ,\lambda] $ condition.
We need stopping conditions, so let $ \mathcal S$ be the maximal dyadic intervals $ I\subset I_0$ such that
one of three conditions is met:
\begin{itemize}
\item[(1)] $ \langle \mu^{-1} \rangle _{I} > C \langle \mu^{-1} \rangle _{I_0}$,
\item[(2)] $ \langle \rho \rangle _{I} > C \langle \rho \rangle _{I_0}$, or
\item[(3)]
$
\sum_{I' \;:\; I \subset I'\subset I_0} \widehat b (I') ^2 \lvert I'\rvert ^{-1}
> [C_b \langle \rho \rangle _{I_0} ] ^2 $.
\end{itemize}
For $1\leq j\leq 3$, let $ \mathcal S_j$ be those intervals $ S\in \mathcal S$ which meet the condition $ (j)$. For the first condition, there holds
\begin{equation*}
\sum_{S\in \mathcal S_1} \mu^{-1} (S) \le \tfrac 14 \mu^{-1} (I_0),
\end{equation*}
and so we can recurse on those intervals. For the second condition, there holds
\begin{equation*}
\sum_{S\in \mathcal S_2} \lvert S\rvert \le \epsilon _C \lvert I_0\rvert ,
\end{equation*}
by the $ A _{\infty }$ condition for $ \rho $. Here $ \epsilon _C $ can be made arbitrarily small.
The same condition holds for $ \mathcal S_3$, but this is just the usual John-Nirenberg estimate.
The weight $ \mu^{-1} $ is also $ A_ \infty $, so that for $ \epsilon _C $ sufficiently small, we see that
\begin{equation*}
\sum_{S\in \mathcal S} \mu^{-1} (S) \le \tfrac 12 \mu^{-1} (I_0),
\end{equation*}
and so we can recurse inside this collection. It remains to estimate the sum over $ I \subset I_0$ which
are not contained in an interval $ S\in \mathcal S$. Calling this collection $ \mathcal I$, we have
\begin{align*}
\int _{I_0}
\sum_{I' \in \mathcal I} \widehat b(I')^2 \langle \mu^{-1} \rangle_{I'} ^2 \frac {\mathbf 1_{I'}(x)} {\lvert I'\rvert } \; \lambda(x) dx
&\lesssim \langle \mu^{-1} \rangle _{I_0} ^2 \langle \rho \rangle_{I_0} ^2 \lambda (I_0)
\\
& \lesssim\mu^{-1} (I_0) \frac {\mu^{-1}(I_0) \mu (I_0) \lambda^{-1} (I_0) \lambda (I_0) } {\lvert I_0\rvert ^{4} }
\lesssim \mu^{-1} (I_0).
\end{align*}
Here, we have just used the stopping conditions, then used the easy bound
$ \rho (I_0) ^2 \le \mu (I_0) \lambda^{-1} (I_0) $, and finally
appealed to the $ A_2$ conditions on $ \mu^{-1} $ and $ \lambda$.
\end{proof}
\section{Two Weight Inequalities for Paraproduct Operators}
\label{s:paraproducts}
The `paraproduct' operator with symbol function $b$, and its dual, are defined by
\begin{align*}
\Pi_b&\equiv \sum_{I\in\mathcal{D}} \widehat{b}(I) h_I\otimes \frac{\mathsf{1}_I}{\left\vert I\right\vert},
\\ \textup{and} \qquad
\Pi_b^{\ast}&\equiv \sum_{I\in\mathcal{D}} \widehat{b}(I) \frac{\mathsf{1}_I}{\left\vert I\right\vert}\otimes h_I.
\end{align*}
Note that $\Pi_b^{\ast}$ is the adjoint of the paraproduct on \textit{unweighted} $L^2(\mathbb{R})$. Using the identification $\left(L^2(w)\right)^* \equiv L^2(w^{-1})$, with pairing $\left<f,g\right>$ for all $f \in L^2(w)$ and $g \in L^2(w^{-1})$, we can see that
$$\text{The adjoint of } \Pi_b : L^2(\mu) \rightarrow L^2(\lambda) \text{ is } \Pi_b^* : L^2(\lambda^{-1}) \rightarrow L^2(\mu^{-1}); $$
$$\text{The adjoint of } \Pi^*_b : L^2(\mu) \rightarrow L^2(\lambda) \text{ is } \Pi_b : L^2(\lambda^{-1}) \rightarrow L^2(\mu^{-1}). $$
The characterization of the boundedness of these operators between weighted spaces $L^2(\mu)$ and $L^2(\lambda)$ is as follows.
\begin{thm}
\label{t:paraproduct}
Let $\mu,\lambda\in A_2$. Suppose that $\mathbf B_2[\mu,\lambda]$ and $\mathbf B_2[\lambda^{-1},\mu^{-1}]$ are finite.
Then we have
\begin{eqnarray}
\label{e:Pib1} & & \left\Vert \Pi_b:L^2(\mu)\to L^2(\lambda)\right\Vert = \left\Vert \Pi^*_b:L^2(\lambda^{-1})\to L^2(\mu^{-1})\right\Vert \simeq \mathbf B_2[\mu,\lambda] \\
\label{e:Pib*1} & & \left\Vert \Pi_b^{\ast}:L^2(\mu)\to L^2(\lambda)\right\Vert = \left\Vert \Pi_b:L^2(\lambda^{-1})\to L^2(\mu^{-1})\right\Vert \simeq \mathbf B_2[\lambda^{-1},\mu^{-1}].
\end{eqnarray}
\end{thm}
\begin{proof}[Proof of Sufficiency in Theorem \ref{t:paraproduct}]
Before the proof, recall that for any weight $w\in A_2$ we have:
\begin{equation}\label{e:PPott}
\sum_{I \in \mathcal{D}} |\widehat{f}(I)|^2 \frac{1}{\left<w\right>_I} \lesssim [w]_{A_2} \|f\|^2_{L^2(w^{-1})},
\end{equation}
which can be found in \cite{PetermichlPott}.
Note that since $\mathbf B_2[\mu,\lambda]$ and $\mathbf B_2[\lambda^{-1},\mu^{-1}]$ are finite:
\begin{eqnarray}
\sum_{I\subset J} \widehat b(I)^2\left\langle \lambda\right\rangle_{I} \left\langle \mu^{-1} \right\rangle_{I}^2 & \leq & \mathbf B_2[\mu,\lambda]^2 \mu^{-1}(J)\quad\forall J\in\mathcal{D}\label{e:CET1nec}\\
\sum_{I\subset J} \widehat b(I)^2 \left\langle \mu^{-1}\right\rangle_I \left\langle \lambda\right\rangle_I^2 & \leq & \mathbf B_2[\lambda^{-1},\mu^{-1}]^2\lambda(J) \quad\forall J\in\mathcal{D}\label{e:CET2nec}.
\end{eqnarray}
These conditions imply that certain measures are Carleson, and so we can appeal to the Carleson Embedding Theorem to control terms directly.
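For the reader's convenience, we record the weighted dyadic Carleson Embedding Theorem in the form used below (stated as a sketch; we do not track the constant): for a weight $w$ and non-negative constants $\{a_I\}_{I\in\mathcal{D}}$,

```latex
\begin{equation*}
\sum_{I \;:\; I\subset J} a_I \leq A\, w(J) \quad \forall J\in\mathcal{D}
\qquad\Longrightarrow\qquad
\sum_{I\in\mathcal{D}} a_I \left(\mathbb{E}_I^{w}f\right)^2 \lesssim A \left\Vert f\right\Vert_{L^2(w)}^2 .
\end{equation*}
```

In the first estimate below, this is applied with $w = \mu^{-1}$, $f$ replaced by $f\mu$, and $a_I = \widehat b(I)^2 \left\langle \mu^{-1}\right\rangle_I^2 \left\langle \lambda\right\rangle_I$, which is admissible precisely by \eqref{e:CET1nec}.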
We proceed by duality to analyze the operator $\Pi_b$. Note that for $f\in L^2(\mu)$ and $g\in L^2(\lambda)$ we have
\begin{eqnarray*}
\left\vert \left\langle \Pi_b f,g\right\rangle_{L^2(\lambda)} \right\vert & \leq & \sum_{I\in\mathcal{D}} \left\vert\widehat{b}(I) \left\langle f \right\rangle_I \left\langle g, h_I\right\rangle_{L^2(\lambda)}\right\vert = \sum_{I\in\mathcal{D}} \left\vert \widehat{b}(I) \left\langle \mu^{-1}\right\rangle_{I} \mathbb{E}_{I}^{\mu^{-1}}(f\mu) \left\langle g, h_I\right\rangle_{L^2(\lambda)}\right\vert\\
& = & \sum_{I\in\mathcal{D}} \left\vert\widehat{b}(I) \left\langle \mu^{-1}\right\rangle_{I} \left\langle\lambda\right\rangle_{I}^{\frac{1}{2}} \left\langle\lambda\right\rangle_{I}^{-\frac{1}{2}}\mathbb{E}_{I}^{\mu^{-1}}(f\mu) \left\langle g, h_I\right\rangle_{L^2(\lambda)}\right\vert\\
& \leq & \left(\sum_{I\in\mathcal{D}} \hat{b}(I)^2 \left\langle \mu^{-1}\right\rangle_{I}^2\left\langle \lambda\right\rangle_{I} \mathbb{E}_{I}^{\mu^{-1}}(f\mu)^2 \times \sum_{I\in\mathcal{D}} \frac{\left\vert \widehat{g\lambda}(I)\right\vert^2}{\left\langle \lambda\right\rangle_I}\right)^{\frac{1}{2}}\\
& \leq &
\mathbf B_2[\mu,\lambda]\left\Vert \mu f\right\Vert_{L^2(\mu^{-1})} \left\Vert g\lambda\right\Vert_{L^2(\lambda^{-1})}\\
& = &
\mathbf B_2[\mu,\lambda]\left\Vert f\right\Vert_{L^2(\mu)} \left\Vert g\right\Vert_{L^2(\lambda)}.
\end{eqnarray*}
Here, we have used the Carleson Embedding Theorem to control the term with the averages on $f$, which is applicable by \eqref{e:CET1nec}, and we have used \eqref{e:PPott} to handle the other term. The claimed estimate, \eqref{e:Pib1}, on the norm of $\Pi_b:L^2(\mu)\to L^2(\lambda)$ follows.
We next turn to controlling $\Pi_b^*$ and again resort to duality to estimate the norm. Indeed, we have
\begin{eqnarray*}
\left\vert \left\langle \Pi_b^* f,g\right\rangle_{L^2(\lambda)} \right\vert & \leq & \sum_{I\in\mathcal{D}} \left\vert \widehat{b}(I) \left\langle g\lambda \right\rangle_I \left\langle f, h_I\right\rangle_{L^2}\right\vert = \sum_{I\in\mathcal{D}} \left\vert\widehat{b}(I) \left\langle \mu^{-1}\right\rangle_I^{\frac{1}{2}} \left\langle \lambda\right\rangle_{I} \mathbb{E}_I^{\lambda}(g) \frac{\widehat{f}(I)}{\left\langle \mu^{-1}\right\rangle_I^{\frac{1}{2}}}\right\vert\\
& \leq & \left(\sum_{I\in\mathcal{D}} \hat{b}(I)^2 \left\langle \mu^{-1}\right\rangle_{I}\left\langle \lambda\right\rangle_{I}^{2} \mathbb{E}_{I}^{\lambda}(g)^2
\times
\sum_{I\in\mathcal{D}} \frac{\widehat{f}(I)^2}{\left\langle \mu^{-1}\right\rangle_I}\right)^{\frac{1}{2}}\\
& \leq &
\mathbf B_2[\lambda^{-1},\mu^{-1}]\left\Vert f\right\Vert_{L^2(\mu)} \left\Vert g\right\Vert_{L^2(\lambda)},
\end{eqnarray*}
with the inequality following by the Carleson Embedding Theorem since we are imposing condition \eqref{e:CET2nec} and also using \eqref{e:PPott}. Combining all these estimates, we see that \eqref{e:Pib*1} holds.
\end{proof}
\begin{proof}[Proof of Necessity in Theorem \ref{t:paraproduct}]
Fix an interval $ J$, and choose $ f=\mu^{-1}\mathbf 1_J$. Then $ \lVert f\rVert _{L ^{2} (\mu )} = \mu^{-1} (J) ^{1/2} $, and we have:
\begin{align*}
\left\lVert \sum_{I \;:\; I\subset J}
\widehat b (I) \langle \mu^{-1} \rangle_I h_I
\right\rVert _{L ^{2}(\lambda)}
\le \lVert \Pi _{b} f \rVert _{L ^{2} (\lambda)} \leq \left\Vert \Pi_b:L^2(\mu)\to L^2(\lambda)\right\Vert \mu^{-1} (J) ^{1/2},
\end{align*}
with the last inequality following from the assumed norm boundedness of the paraproduct. Hence, we have that:
\begin{equation} \label{E:PibNec}
\mathbf{B}_2[\mu,\lambda]\leq \left\Vert \Pi_b:L^2(\mu)\to L^2(\lambda)\right\Vert.
\end{equation}
In light of our previous discussion about adjoints, proving the necessity for $\Pi_b$ will address $\Pi_b^*$ as well. Indeed, if $\Pi^*_b : L^2(\mu) \rightarrow L^2(\lambda)$ is bounded, then $\Pi_b : L^2(\lambda^{-1}) \rightarrow L^2(\mu^{-1})$ is bounded, with the same operator norm. From \eqref{E:PibNec}, we then have
$$\mathbf B_2[\lambda^{-1},\mu^{-1}] \leq \left\Vert \Pi_b:L^2(\lambda^{-1})\to L^2(\mu^{-1})\right\Vert =
\left\Vert \Pi^*_b:L^2(\mu)\to L^2(\lambda)\right\Vert.$$
\end{proof}
\section{Proof of Bloom's Theorem, $ p=2$}
\label{s:Hilbert}
For the sufficiency, we use Petermichl's beautiful observation that the Hilbert transform can be recovered through an appropriate average of Haar shifts, \cite{MR1756958}. On the dyadic lattice $\mathcal{D}$ with Haar basis $\{h_I\}_{I\in\mathcal{D}}$, we define $\Sh h_I=\frac{1}{\sqrt{2}}(h_{I_-}-h_{I_+})$, which is Petermichl's Haar shift operator. Then, the Hilbert transform is an average of shift operators, with the average performed over the class of
all dyadic grids. In particular, to prove norm inequalities for the Hilbert transform, it suffices to prove them
for the Haar shift operator, which has proven to be a powerful proof technique.
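Concretely, if $\mathcal{D}^{\omega}$ denote the translated and dilated dyadic grids, with $\Sh^{\omega}$ the shift operator built on $\mathcal{D}^{\omega}$, then Petermichl's identity takes the schematic form
\begin{equation*}
H = c\, \mathbb{E}_{\omega}\, \Sh^{\omega}
\end{equation*}
for a nonzero absolute constant $c$, the expectation taken with respect to the natural probability measure on the grid parameters $\omega$. Consequently, any norm bound proved for $\Sh$ uniformly over dyadic grids passes to $H$, and likewise for the commutators $[b,\Sh^{\omega}]$ and $[b,H]$.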
The commutator with the Haar shift operator has an explicit expansion in terms of the paraproducts and $\Sh $, see \cite{MR1756958} for this decomposition,
\begin{equation}
\label{e:expand}
[b,\Sh]f=\Sh(\Pi_bf)-\Pi_b(\Sh f)+\Sh (\Pi_b^* f)-\Pi_b^*(\Sh f)+\Pi_{\Sh f} b-\Sh(\Pi_f b).
\end{equation}
For any $w\in A_2$, $\left\Vert \Sh:L^2(w)\to L^2(w)\right\Vert\lesssim [w]_{A_2}$, \cite{MR2354322}.
Thus, for the first four terms above, we merely have to control the paraproduct term. But this is done in Theorem \ref{t:paraproduct}.
Thus, we see that:
$$
\left\Vert [b,\Sh]f\right\Vert_{L^2(\lambda)}\lesssim \left(\mathbf B_2[\lambda^{-1},\mu^{-1}]+\mathbf B_2[\mu,\lambda]\right)\left\Vert f\right\Vert_{L^2(\mu)}+\left\Vert \Pi_{\Sh f} b-\Sh(\Pi_f b)\right\Vert_{L^2(\lambda)}.
$$
The last two terms in \eqref{e:expand} have more cancellation than the other four terms. By direct calculation,
$$
\Pi_{\Sh f} b-\Sh(\Pi_f b)=\sum_{I\in\mathcal{D}} \frac{\widehat{b}(I)}{\left\vert I\right\vert^{\frac{1}{2}}} \widehat{f}(I) (h_{I_+}-h_{I_-}).
$$
Then, with $S$ denoting the dyadic square function, we show that:
\begin{eqnarray*}
\left\Vert \Pi_{\Sh f} b-\Sh(\Pi_f b)\right\Vert_{L^2(\lambda)}^2 & \lesssim &
\left\Vert S(\Pi_{\Sh f} b-\Sh(\Pi_f b))\right\Vert_{L^2(\lambda)}^2\\
& \lesssim &
\sum_{I\in\mathcal{D}} \frac{\widehat b(I)^2}{\left\vert I\right\vert} \left\langle \lambda\right\rangle_I \hat{f}(I)^2\\
& = &
\sum_{I\in\mathcal{D}} \frac{\widehat b(I)^2}{\left\vert I\right\vert} \left\langle \lambda\right\rangle_I \left\langle \mu^{-1}\right\rangle_I \frac{\hat{f}(I)^2}{\left\langle \mu^{-1}\right\rangle_I}.
\end{eqnarray*}
Note that \eqref{e:CET1nec} and \eqref{e:CET2nec} imply that:
$$
\sup_{I\in\mathcal{D}} \frac{ \hat{b}(I)^2\left\langle\lambda\right\rangle_I\left\langle\mu^{-1}\right\rangle_I}{\left\vert I\right\vert}\leq \mathbf B_2[\lambda^{-1},\mu^{-1}]\mathbf B_2[\mu,\lambda].
$$
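To see this, retain only the single term $I=J$ in each of \eqref{e:CET1nec} and \eqref{e:CET2nec}. The first, after dividing by $\mu^{-1}(I)=\left\langle \mu^{-1}\right\rangle_I \left\vert I\right\vert$, gives
\begin{equation*}
\frac{ \hat{b}(I)^2\left\langle\lambda\right\rangle_I\left\langle\mu^{-1}\right\rangle_I}{\left\vert I\right\vert}\leq \mathbf B_2[\mu,\lambda]^2,
\end{equation*}
and the second bounds the same quantity by $\mathbf B_2[\lambda^{-1},\mu^{-1}]^2$, after dividing by $\lambda(I)=\left\langle\lambda\right\rangle_I\left\vert I\right\vert$. The geometric mean of the two bounds is the displayed estimate.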
And, so we then have:
\begin{eqnarray*}
\left\Vert \Pi_{\Sh f} b-\Sh(\Pi_f b)\right\Vert_{L^2(\lambda)}^2 & \lesssim &
\mathbf B_2[\lambda^{-1},\mu^{-1}]\mathbf B_2[\mu,\lambda]\sum_{I\in\mathcal{D}} \frac{ \hat{f}(I)^2}{\left\langle \mu^{-1}\right\rangle_I}\\
& \lesssim &
\mathbf B_2[\lambda^{-1},\mu^{-1}]\mathbf B_2[\mu,\lambda] \left\Vert f\right\Vert_{L^2(\mu)}^2.
\end{eqnarray*}
We now turn to the converse result. Assume that there holds
\begin{equation*}
\lVert [ b , H] : L ^{2} (\mu ) \mapsto L ^{2} (\lambda)\rVert <\infty.
\end{equation*}
Using an argument of Coifman--Rochberg--Weiss, \cite{CRW}, we
derive a new necessary condition, and show that it dominates Bloom's condition.
Let $ I$ be an interval centered at the origin, and set $ S_I = \mathbf 1_{I}\, \textup{sgn} (b - \langle b \rangle_I)$. We have
\begin{align*}
\lvert I\rvert \cdot \bigl\lvert b(x)& - \langle b \rangle_I \bigr\rvert \, \mathbf 1_{I}(x)
\\
&=
\int _{I} \frac {b (x) - b (y)} { x-y }(x-y)S_I (x) \mathbf 1_{I} (y) \; dy
\\
&= x S_I (x) \bigl\{[b, H ] \mathbf 1_{I} \bigr\} (x)-S_I (x) \bigl\{[b, H ] (y\mathbf 1_{I}) \bigr\} (x).
\end{align*}
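The first equality rests on two elementary facts, valid for $x\in I$: averaging in $y$ reproduces the oscillation of $b$, and multiplication by $S_I$ produces the absolute value,
\begin{equation*}
\int_I \bigl(b(x)-b(y)\bigr)\,dy = \lvert I\rvert\,\bigl(b(x)-\langle b\rangle_I\bigr), \qquad S_I(x)\,\bigl(b(x)-\langle b\rangle_I\bigr) = \bigl\lvert b(x)-\langle b\rangle_I\bigr\rvert.
\end{equation*}
The second equality expands the factor $(x-y)$ and identifies the kernel $\frac{b(x)-b(y)}{x-y}$ of the commutator $[b,H]$, applied to $\mathbf 1_I$ and to $y\mathbf 1_I$.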
The assumed norm inequality
then implies that
\begin{align}
\lvert I\rvert ^2 \int _{I} \lvert b(x) - \langle b \rangle _I \rvert ^2 \; \lambda(x) dx
\lesssim \lvert I\rvert ^{2} \mu (I) \lVert [ b, H] : L ^{2} (\mu) \mapsto L ^{2} (\lambda)\rVert^2.
\end{align}
The assumption that $ I$ is centered at the origin then allows us to dominate $ \lvert x\rvert \lesssim \left\vert I\right\vert$. The centering is a harmless assumption, and so we deduce the necessary condition
\begin{equation} \label{e:necc1}
\sup _{I} \frac 1 {\mu (I)} \int _{I} \lvert b(x) - \langle b \rangle _I \rvert ^2 \; \lambda(x) dx \lesssim \lVert [ b, H] : L ^{2} (\mu) \mapsto L ^{2} (\lambda)\rVert^2.
\end{equation}
But $ \mu \in A_2$, which implies that:
\begin{equation*}
1\leq \frac {\mu (I)\mu^{-1} (I)} {\lvert I\rvert ^2 } \leq \left[ \mu \right]_{A_2}\quad\forall I\in\mathcal{D}.
\end{equation*}
Hence, we see that
\begin{equation}
\label{e:NecCon}
\sup _{I} \frac {\mu^{-1} (I)} {\lvert I\rvert ^2 }
\int _{I} \lvert b(x) - \langle b \rangle _I \rvert ^2 \; \lambda(x) dx \lesssim
\lVert [ b, H] : L ^{2} (\mu) \mapsto L ^{2} (\lambda)\rVert^2.
\end{equation}
\smallskip
We show that \eqref{e:NecCon} implies that $\mathbf B _{2} [\mu ,\lambda]$ is finite. As we have already shown that this is equivalent to the Bloom condition, it follows that boundedness of the commutator implies that $b$ belongs to the Bloom BMO space. Recall that,
\begin{equation*}
\mathbf B_{2} [\mu ,\lambda]:=
\sup _{K\in\mathcal{D}} \mu^{-1} (K) ^{-1/2}
\left\lVert \sum_{I \;:\; I\subset K}
\widehat b (I) \langle \mu^{-1} \rangle_I h_I
\right\rVert _{L ^{2}(\lambda)}.
\end{equation*}
Fix an interval $ I_0$ on which we need to verify the $\mathbf B_2[\mu,\lambda]$ condition.
Let $ \mathcal S$ be the maximal stopping intervals $ S\subset I_0$ so that $ \langle \mu^{-1} \rangle_S \ge 4 \langle \mu^{-1} \rangle _{I_0}$. By the $ A _{\infty }$ property of $ \mu^{-1} $, it suffices to restrict the sum above to $ I\subset I_0$
with $ I $ not contained in any stopping interval. Let $\mathcal I$ denote this collection of intervals. But then we have
\begin{align*}
\sum_{I \in \mathcal I} \bigl\lvert \widehat b (I)\bigr\rvert ^2 \langle \mu^{-1} \rangle_I^2
\left\langle \lambda\right\rangle_I & \lesssim
\langle \mu^{-1} \rangle_{I_0} ^2
\sum_{I \;:\; I\subset I_0}\left\vert \widehat b (I) \right\vert^2 \frac {\lambda (I)} {\lvert I\rvert }
\\
& \lesssim
\langle \mu^{-1} \rangle_{I_0} ^2 \int _{I_0} \bigl\lvert b(x) - \langle b \rangle_ {I_0}\bigr\rvert ^2 \; \lambda(x) dx
\\
& \lesssim \langle \mu^{-1} \rangle_{I_0} ^2 \frac { \lvert I_0\rvert ^2 } {\mu^{-1} (I_0)} \sup _{I} \frac {\mu^{-1} (I)} {\lvert I\rvert ^2 }
\int _{I} \lvert b - \langle b \rangle _I \rvert ^2 \; \lambda(x) dx\\
& = \mu^{-1} (I_0)\sup _{I} \frac {\mu^{-1} (I)} {\lvert I\rvert ^2 }
\int _{I} \lvert b - \langle b \rangle _I \rvert ^2 \; \lambda(x) dx.
\end{align*}
Therefore, $ \textbf B _{2} [ \mu ,\lambda] $ is bounded by $\lVert [ b, H] : L ^{2} (\mu) \mapsto L ^{2} (\lambda)\rVert$ and so $b\in BMO_{\rho}$, and the proof is complete.
\begin{bibdiv}
\begin{biblist}
\normalsize
\bib{Bloom}{article}{
author={Bloom, S.},
title={A commutator theorem and weighted BMO},
journal={Trans. Amer. Math. Soc.},
volume={292},
date={1985},
number={1},
pages={103--122}
}
\bib{CRW}{article}{
author={Coifman, R. R.},
author={Rochberg, R.},
author={Weiss, Guido},
title={Factorization theorems for Hardy spaces in several variables},
journal={Ann. of Math. (2)},
volume={103},
date={1976},
number={3},
pages={611--635}
}
\bib{PetermichlPott}{article}{
author={Petermichl, S.},
author={Pott, S.},
title={An estimate for weighted Hilbert transform via square functions},
journal={Trans. Amer. Math. Soc.},
volume={354},
date={2002},
number={4},
pages={1699--1703 (electronic)}
}
\bib{MR2354322}{article}{
author={Petermichl, S.},
title={The sharp bound for the Hilbert transform on weighted Lebesgue
spaces in terms of the classical $A_p$ characteristic},
journal={Amer. J. Math.},
volume={129},
date={2007},
number={5},
pages={1355--1375}
}
\bib{MR1756958}{article}{
author={Petermichl, S.},
title={Dyadic shifts and a logarithmic estimate for Hankel operators with
matrix symbol},
journal={C. R. Acad. Sci. Paris S\'er. I Math.},
volume={330},
date={2000},
number={6},
pages={455--460}
}
\bib{Witwer}{article}{
author={Wittwer, Janine},
title={A sharp estimate on the norm of the martingale transform},
journal={Math. Res. Lett.},
volume={7},
date={2000},
number={1},
pages={1--12}
}
\end{biblist}
\end{bibdiv}
\end{document}
\begin{proof}[Proof of Sufficiency in Theorem \ref{t:paraproduct} via Stopping Times]
Let $ f \in L ^{p} (\sigma )$ and $ g\in L ^{p'} (w)$ be supported on an initial interval $ I_0 $.
Build stopping intervals $ \mathcal S$ for these two functions, in the following way.
Initialize $ \mathcal S$ to be $ \{I_0\}$.
In the recursive stage, if $ S\in \mathcal S$ is minimal, take $ \mathcal C_S$, the children of $ S$, to be
the maximal subintervals $ I\subsetneq S$ such that either
\begin{equation} \label{e:stop}
\langle \lvert f\rvert \rangle_I ^{\sigma } > C \langle \lvert f\rvert \rangle_S ^{\sigma },
\quad \textup{or} \quad
\langle \lvert g\rvert \rangle_I ^{w} > C \langle \lvert g\rvert \rangle_S ^{w}.
\end{equation}
Here, $ C= C (\sigma , w)$ is a sufficiently large constant so that $ \mathcal S$ is both $ \sigma $ and $ w$
Carleson. Namely,
\begin{align*}
\sum_{S'\in \mathcal C_S} \sigma (S') &\le \tfrac 12 \sigma (S),
\qquad
\sum_{S'\in \mathcal C_S} w (S') \le \tfrac 12 w (S).
\end{align*}
This depends upon the $ A _{\infty }$ property of $ \sigma $ and $ w$:
If we add $ S'$ to $ \mathcal C _{S}$ due to the $ \sigma $-average of $ f$ being big, these
intervals are automatically small in $ \sigma $-measure. Hence, they are small in Lebesgue measure, and
hence in $ w$-measure as well.
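To make this quantitative: the children selected by the $\sigma$-criterion in \eqref{e:stop} are contained in the set $\{M^{\sigma}(f\mathbf 1_S) > C \langle \lvert f\rvert \rangle_S^{\sigma}\}$, where $M^{\sigma}$ is the $\sigma$-weighted dyadic maximal function, whence by its weak-type $(1,1)$ bound (which holds with constant independent of $\sigma$)
\begin{equation*}
\sum_{\substack{S'\in \mathcal C_S \\ \sigma\textup{-selected}}} \sigma(S') \leq \sigma\bigl( \{ M^{\sigma}(f\mathbf 1_S) > C \langle \lvert f\rvert \rangle_S^{\sigma}\}\bigr) \lesssim C ^{-1}\, \sigma(S).
\end{equation*}
The $w$-selected children are handled symmetrically, with the exchange of measures described above converting smallness in one measure into smallness in the other; taking $C$ sufficiently large yields the stated Carleson bounds.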
With the children $ \mathcal C_S$ constructed, we add them to $ \mathcal S$, and then repeat the inductive step.
Once $ \mathcal S$ is constructed, let $ I ^{s}$ denote the minimal interval $ S\in \mathcal S$ that contains $ I$.
Then, $ \langle \lvert f\rvert \rangle _{I} ^{\sigma } \lesssim \langle \lvert f\rvert \rangle _{I ^{s}} ^{\sigma }$, and likewise for $ g$.
Estimate the inner product by
\begin{align*}
\bigl\lvert \langle \Pi _{b, \sigma } f , g \rangle _{w}\bigr\rvert
&\le \sum_{S\in \mathcal S} \Bigl\lvert \sum_{I \;:\; I ^{s}=S} \widehat b (I) \langle f \rangle_I ^{\sigma }
\langle \sigma \rangle_I \widehat {gw} (I) \Bigr\rvert
\\
&= \sum_{S\in \mathcal S} \langle \lvert f\rvert \rangle_S ^{\sigma }
\Biggl\lvert
\int _{S}
\sum_{I \;:\; I ^{s}=S} \widehat b (I) \varepsilon_I
\langle \sigma \rangle_I h_I \cdot P _{S,w} g \; dx
\Biggr\rvert.
\end{align*}
Above, we use the following notation. First,
\begin{equation*}
P _{S,w} g =\sum_{I \;:\; I ^{s}=S} \langle g, h _I \rangle _{w} h_I
\end{equation*}
is a martingale projection. Second, $ \varepsilon _I$ is defined by
$
\varepsilon _I \langle \lvert f\rvert \rangle_S ^{\sigma } = \langle f \rangle_I ^{\sigma },
$
and so is bounded by a constant.
Now, to estimate the integral inner product above, we write $ dx = w ^{1/p} w^{-1/p} dx$, and
then use H\"older's inequality to see that
\begin{align*}
\Biggl\lvert
\int _{S}
\sum_{I \;:\; I ^{s}=S} \widehat b (I) \varepsilon_I
\langle \sigma \rangle_I h_I &\cdot P _{S,w} g \; dx
\Biggr\rvert
\\& \leq
\Bigl\lVert
\sum_{I \;:\; I ^{s}=S} \widehat b (I) \varepsilon_I
\langle \sigma \rangle_I h_I
\Bigr\rVert_{ L ^{p} (w) }
\lVert P _{S,w} g \rVert _{L ^{p'} (w ^{-p'/p})}.
\end{align*}
We will apply H\"older's inequality in the index $ S\in \mathcal S$, resulting in two terms.
They are controlled this way.
First, since $ w$ is an $ A_p$ weight,
and the (Lebesgue) martingale projections are good operators on $ L ^{p} (w)$, we have
\begin{align*}
\Bigl\lVert
\sum_{I \;:\; I ^{s}=S} \widehat b (I) \varepsilon_I
\langle \sigma \rangle_I h_I
\Bigr\rVert_{L ^{p} (w) }
& \lesssim
\Bigl\lVert
\sum_{I \;:\; I \subset S} \widehat b (I)
\langle \sigma \rangle_I h_I
\Bigr\rVert_{L ^{p} (w) }
\\
& \lesssim \mathbf B_{p} [\sigma ,w] \sigma (S) ^{1/p}.
\end{align*}
Notice that by the $ \sigma $-Carleson property of $ \mathcal S$, we also have
\begin{equation*}
\sum_{S\in \mathcal S} \bigl[ \langle \lvert f\rvert \rangle_S ^{\sigma } \bigr] ^{p} \sigma (S)
\lesssim \lVert f\rVert _{L ^{p} (\sigma )} ^{p}.
\end{equation*}
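This is the standard quasi-orthogonality argument: the Carleson property provides pairwise disjoint sets $E(S) = S \setminus \bigcup_{S'\in\mathcal C_S} S'$ with $\sigma(E(S)) \geq \tfrac12 \sigma(S)$, and $\langle \lvert f\rvert \rangle_S^{\sigma} \leq M^{\sigma} f(x)$ for every $x\in E(S)$, where $M^{\sigma}$ is the $\sigma$-weighted dyadic maximal function. Therefore
\begin{equation*}
\sum_{S\in \mathcal S} \bigl[ \langle \lvert f\rvert \rangle_S^{\sigma}\bigr]^p \sigma(S)
\leq 2 \sum_{S\in\mathcal S} \int_{E(S)} (M^{\sigma} f)^p \; d\sigma
\leq 2 \left\Vert M^{\sigma} f \right\Vert_{L^p(\sigma)}^p \lesssim \left\Vert f \right\Vert_{L^p(\sigma)}^p,
\end{equation*}
by the $L^p(\sigma)$ boundedness of $M^{\sigma}$, whose norm depends only on $p$.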
For the second term, we remark that the stopping criterion on $ g$ is not needed if $ 2\leq p' < \infty $.
In this case, one could simply estimate
\begin{align*}
\sum_{S\in \mathcal S} \lVert P _{S,w} g \rVert _{L ^{p'} (w ^{-p'/p})} ^{p'}
&= \int \sum_{S\in \mathcal S} \lvert P _{S,w} g \rvert ^{p'}
w ^{-p'/p} \; dx
\\
& \le
\int \Bigl[ \sum_{S\in \mathcal S} \lvert P _{S,w} g \rvert ^{2} \Bigr] ^{p'/2}
w ^{-p'/p} \; dx
\\
& \lesssim \int [ \lvert g\rvert w ] ^{p'} w ^{-p'/p} \;dx= \lVert g\rVert _{L ^{p'} (w)} ^{p'} .
\end{align*}
Here, we use the condition $ 2\leq p'< \infty $ to pass to the square function. Critically, one should
observe that $ w ^{-p'/p} \in A _{p'}$, so that we can appeal to the usual Littlewood-Paley inequalities
to deduce the last line.
But, this simpler argument is not available to us if $ 1< p' <2$.
Instead, we use the $ w$-Carleson property of $ \mathcal S$ in the following way.
Let $ \Delta ^{w} _{I}$ be the $ w$-adapted martingale differences, and set
\begin{equation*}
\tilde P _{S,w} g = \langle g \rangle_S ^{w} + \sum_{I \;:\; I ^{s} =S} \Delta ^{w} _{I} g.
\end{equation*}
Then, observe that $ P _{S,w} g = P _{S,w} (\tilde P _{S,w} g )$, but moreover, by the doubling property of
$ w$, we can reverse the stopping criteria in \eqref{e:stop}, so that we have
\begin{equation*}
\lvert \tilde P _{S,w} g \rvert \lesssim \langle \lvert g\rvert \rangle_S ^{w} \mathbf 1_{S}
+ \sum_{S'\in \mathcal C_S} \langle \lvert g\rvert \rangle_S ^{w} \mathbf 1_{S'}.
\end{equation*}
Therefore, we can estimate
\begin{align*}
\sum_{S\in \mathcal S}
\lVert P _{S,w} g \rVert _{L ^{p'} (w ^{-p'/p})} ^{p'}
& =
\sum_{S\in \mathcal S}
\lVert P _{S,w} (\tilde P _{S,w} g)\rVert _{L ^{p'} (w ^{-p'/p})} ^{p'}
\\
& \lesssim
\sum_{S\in \mathcal S}
\lVert \tilde P _{S,w} g \rVert _{L ^{p'} (w)} ^{p'}
\\
& \lesssim
\sum_{S\in \mathcal S} [\langle \lvert g\rvert \rangle_S ^{w} ] ^{p'} w (S)\lesssim \lVert g\rVert _{L ^{p'} (w)} ^{p'}.
\end{align*}
\end{proof}
We now use Theorem \ref{t:paraproduct} to deduce related results for the Hilbert transform.
Recall that for a bounded sequence $\varepsilon=\{\varepsilon_I\}$, $T_\varepsilon\equiv \sum_{I\in\mathcal{D}} \varepsilon_I h_I\otimes h_I$ is the Haar multiplier, and that for any $w\in A_2$ we have $\left\Vert T_{\varepsilon}:L^2(w)\to L^2(w)\right\Vert\lesssim [w]_{A_2}$.
\begin{thm} \label{t:HaarMultiplier}
Let $\mu,\lambda\in A_2$. Suppose that $\mathbf B_2[\mu,\lambda]$ and $\mathbf{B}_2[\lambda^{-1},\mu^{-1}]$ are finite. Then $[b,T_\varepsilon]:L^2(\mu)\to L^2(\lambda)$ with
$$
\left\Vert [b,T_{\varepsilon}]:L^2(\mu)\to L^2(\lambda)\right\Vert\lesssim [\lambda]_{A_2}[\mu]_{A_2} (\mathbf B_2[\mu,\lambda]+\mathbf{B}_2[\lambda^{-1},\mu^{-1}])+[\lambda]_{A_2}^2\mathbf B_2[\mu,\lambda]+[\mu]_{A_2}^2\mathbf{B}_2[\lambda^{-1},\mu^{-1}].
$$
Conversely, if $[b,T_\varepsilon]:L^2(\mu)\to L^2(\lambda)$ is bounded, uniformly over the choice of multiplier sequence $\varepsilon$, then $\mathbf B_2[\mu,\lambda]$ and $\mathbf{B}_2[\lambda^{-1},\mu^{-1}]$ are finite.
\end{thm}
\begin{proof}[Proof of Sufficiency in Theorem \ref{t:HaarMultiplier}]
First, since we are supposing that $\mu,\lambda\in A_2$, it will be possible to `absorb' the Haar multiplier whenever it appears, since we know that $T_{\varepsilon}:L^2(w)\to L^2(w)$ with norm at most a constant multiple of $[w]_{A_2}$, \cite{Witwer}. This means we only need to control the corresponding paraproduct terms.
Now, consider the commutator $[b,T_\varepsilon]$. Expanding $b$ in terms of its paraproduct decomposition we arrive at
$$
[b,T_\varepsilon]=\Pi_b T_\varepsilon+\Pi_b^\ast T_\varepsilon-T_\varepsilon\Pi_b-T_\varepsilon\Pi_b^{\ast},
$$
since $\Pi_{T_{\varepsilon}f}b - T_{\varepsilon}\Pi_f b = 0$. To control the norm of the commutator it suffices to control the norm of each of the four terms.
By the remark above it will suffice to control the two paraproduct operators $\Pi_b$ and $\Pi_b^*$. Coupling \eqref{e:Pib1} and \eqref{e:Pib*1} along with the observation before the proof one easily concludes that:
$$
\left\Vert [b,T_{\varepsilon}]:L^2(\mu)\to L^2(\lambda)\right\Vert\lesssim [\lambda]_{A_2}[\mu]_{A_2} (\mathbf B_2[\mu,\lambda]+\mathbf{B}_2[\lambda^{-1},\mu^{-1}])+[\lambda]_{A_2}^2\mathbf B_2[\mu,\lambda]+[\mu]_{A_2}^2\mathbf{B}_2[\lambda^{-1},\mu^{-1}].
$$
\end{proof}
\begin{proof}[Proof of Necessity in Theorem \ref{t:HaarMultiplier}]
Let us first prove an intermediate result.
Fix an interval $ I_0$, and let $ \mathcal S$ be the stopping intervals for $ I_0$ and $ \sigma $:
The maximal intervals $ I\subset I_0$ such that $ \langle \sigma \rangle_I > C \langle \sigma \rangle _{I_0}$,
where $ C>1$ is a large constant. Let $ \mathcal I$ be the collection of intervals $ I\subset I_0$, not contained
in any stopping interval in $ \mathcal S$. There holds
\begin{equation} \label{e:inter}
\frac {\sigma (I_0) ^{p-1} } {\lvert I_0\rvert ^p }
\int _{I_0} \Bigl [\sum_{I\in \mathcal I} \widehat b (I) ^2 \frac {\mathbf 1_{I}} {\lvert I\rvert }
\Bigr] ^{p/2}\; dw
\lesssim
\lVert [ M _{b}, T ] _{\sigma } \;:\; L ^{p} (\sigma ) \mapsto L ^{p} (w)\rVert ^{p}
\end{equation}
where $ T$ is a Haar multiplier described below.
We prove this estimate, with the aid of a function $ f \in L ^{p} (\sigma )$ chosen to satisfy
these requirements:
\begin{gather*}
\lvert f\rvert \lesssim \sigma ^{-1} \mathbf 1_{I_0 ^{(1)}} , \qquad
f \mathbf 1_{I_0} = \sigma ^{-1} \mathbf 1_{I_0},
\\
\lVert f\rVert _{L ^{p} (\sigma )} ^{p} \lesssim \sigma ^{1-p} (I_0)
\\
\langle f , \mathbf 1_{I_0 ^{(1)}} \rangle _{\sigma } =
\langle f, h _{I_0 ^{(1)}} \rangle _{\sigma } =0 .
\end{gather*}
These conditions are phrased in terms of $ I _{0} ^{(1)}$, the dyadic parent of $ I_0$,
and can be achieved since $ \sigma $ is a doubling weight.
We also take the multiplier sequence associated to $ T$ so that
$ \varepsilon _I=1$ if $ I\in \mathcal I$, and otherwise it is zero. Notice that $ T _{\sigma } f \equiv 0$.
The commutator has the expansion in \eqref{e:=}, two of which have $ T _{\sigma }f $ on the inside,
and hence are zero. The leading term of the remaining two is
\begin{equation*}
T\, \Pi _{b, \sigma } f = \sum_{I\in \mathcal I} \varepsilon _I \widehat b (I) h_I .
\end{equation*}
There are three more terms in \eqref{e:=}. One of these is
\begin{align*}
T \,\Pi _{b, \sigma } ^{\ast} f =
T \sum_{I : I \cap I_0 \neq \emptyset} \langle f , h_I \rangle _{\sigma } \frac {\mathbf 1_{I}} {\lvert I\rvert }
\equiv 0.
\end{align*}
We get to restrict the sum to those intervals that intersect $ I_0$ by choice of $ T$.
Then all the inner products involving $ f$ are zero by choice of $ f$.
Thus, we have
\begin{align*}
\int _{I_0} \Bigl\lvert \sum_{I\in \mathcal I} \widehat b (I) ^2 \frac {\mathbf 1_{I}} {\lvert I\rvert } \Bigr\rvert
^{p/2} \; dw
& \lesssim
\int _{I_0}
\Bigl\lvert \sum_{I\in \mathcal I} \widehat b (I) h_I \Bigr\rvert ^{p} \; dw
\\
& \le \lVert [ M_b , T] _{\sigma } f\rVert _{L ^{p} (w)} ^{p} \lesssim \lVert [ M _{b}, T ] _{\sigma } \rVert ^{p} \, \lVert f\rVert _{L ^{p} (\sigma )} ^{p}
\lesssim \lVert [ M _{b}, T ] _{\sigma } \rVert ^{p} \, \sigma ^{1-p} (I_0).
\end{align*}
Concerning the last term, recall that $ \sigma \in A _{p'}$, so that $ \sigma ^{1-p} \in A _{p}$ is the
dual weight to $ \sigma $, hence using the $ A_p$ characteristic, one has
\begin{align*}
\sigma ^{1-p} (I_0) &= \frac {\sigma ^{1-p} (I_0) \sigma (I) ^{p-1}} {\lvert I\rvert ^{p} }
\cdot \frac {\lvert I\rvert ^{p} } { \sigma (I) ^{p-1}}
\\
& \lesssim \frac {\lvert I\rvert ^{p} } { \sigma (I) ^{p-1}} .
\end{align*}
And, this completes the proof of \eqref{e:inter}.
\medskip
We claim that \eqref{e:inter} implies that
\begin{equation*}
\mathbf B _{p} [ \sigma ,w] \lesssim
\lVert [ M _{b}, T ] _{\sigma } \;:\; L ^{p} (\sigma ) \mapsto L ^{p} (w)\rVert ,
\end{equation*}
and by duality, we can also bound $ \mathbf B _{p'}[w, \sigma ]$.
First, note that by choice of $ \mathcal I$,
there holds
\begin{align*}
\int _{I_0} \Bigl [\sum_{I\in \mathcal I} \widehat b (I) ^2 \langle \sigma \rangle_I ^2
\frac {\mathbf 1_{I}} {\lvert I\rvert }
\Bigr] ^{p/2}\; dw
&
\lesssim \langle \sigma \rangle_ {I_0} ^{p}
\int _{I_0} \Bigl [\sum_{I\in \mathcal I} \widehat b (I) ^2 \frac {\mathbf 1_{I}} {\lvert I\rvert }
\Bigr] ^{p/2}\; dw
\\
& \lesssim \sigma (I_0).
\end{align*}
Now, a simple recursive application of this inequality will complete the proof, since for $ C>1$, the stopping intervals $ \mathcal S$ satisfy
\begin{equation*}
\sum_{S\in \mathcal S} \sigma (S) < \tfrac 12 \sigma (I_0).
\end{equation*}
\end{proof} |
A great deal of the dynamics of maximally supersymmetric gauge theories and string theories can be learned from the derivative expansion of the effective action, in appropriate phases where the low energy description is simple. On the other hand, it is often nontrivial to implement the full constraints of supersymmetry on the dynamics, due to the lack of a convenient superspace formalism that makes 16 or 32 supersymmetries manifest (see \cite{Howe:1980th,Howe:1981xy,Berkovits:1997pj,Cederwall:2001dx,Bossard:2010bd,Chang:2014kma,Chang:2014nwa} however for on-shell superspace and pure spinor superspace approaches). It became clear recently \cite{Brodel:2009hu,Elvang:2010jv,Elvang:2010xn,Wang:2015jna} that on-shell supervertices and scattering amplitudes can be used to organize higher derivative couplings efficiently in maximally supersymmetric theories, and highly nontrivial renormalization theorems of \cite{Sethi:1999qv,Green:1998by} can be argued in a remarkably simple way based on considerations of amplitudes.
In this paper we extend the arguments of \cite{Wang:2015jna} to gauge theories coupled to maximal supergravity, while preserving 16 supersymmetries. Our primary example is an Abelian gauge theory on a 3-brane coupled to ten dimensional type IIB supergravity, though the strategy may be applied to other dimensions as well. We will formulate in detail the brane-bulk superamplitudes, utilizing the super spinor helicity formalism in four dimensions \cite{Elvang:2013cua} as well as in type IIB supergravity \cite{CaronHuot:2010rj,Boels:2012ie}. By considerations of local supervertices, and factorization of nonlocal superamplitudes, we will derive constraints on the higher derivative brane-bulk couplings of the form $F^4$, $RF^2, D^2 RF^2, D^4RF^2, R^2, D^2 R^2$. These amount to a set of non-renormalization theorems, which when combined with $SL(2,\mathbb{Z})$ invariance, determines the $\tau,\bar\tau$ dependence of such couplings completely in the quantum effective action of a D3 brane in type IIB string theory. Some of these results have previously been observed through explicit string theory computations \cite{Bachas:1999um,Green:2000ke,Fotopoulos:2001pt,Fotopoulos:2002wy,Basu:2008gt,Garousi:2010ki,Garousi:2010bm,Garousi:2011ut,Becker:2011ar,Becker:2011bw,Velni:2012sv,Garousi:2012yr}.
We then turn to the question of determining higher dimensional operators that appear in the four dimensional gauge theory obtained by compactifying the six dimensional $(0,2)$ superconformal theory on a torus. While it is unclear whether this theory can be coupled to the ten dimensional type IIB supergravity, we will be able to derive nontrivial constraints and an exact result on the $F^4$ term by interpolating the effective theory in the Coulomb phase, and matching with perturbative double scaled little string theory. Our result clarifies some puzzles that previously existed in the literature.
\section{Brane-Bulk Superamplitudes}
We begin by considering a maximally supersymmetric Abelian gauge multiplet on a 3-brane coupled to type IIB supergravity in ten dimensions.
The super spinor helicity variables of the ten dimensional type IIB supergravity multiplet are $\zeta_{\underline{\A} A}$ and $\eta_A$, where $\underline{\A}=1,\cdots,16$ is an $SO(1,9)$ chiral spinor index, and $A=1,\cdots,8$ is an $SO(8)$ little group chiral spinor index. The spinor helicity variables $\zeta_{\underline{\A} A}$ are constrained via the null momentum $p_m$ by
\ie
\delta_{AB} p_m = \Gamma_m^{\underline{\A}\underline{\B}} \zeta_{\underline{\A} A} \zeta_{\underline{\B} B}.
\fe
A 1-particle state in the type IIB supergravity multiplet is labeled by a monomial in $\eta_A$. For instance, 1 and $\eta^8\equiv {1\over 8!}\epsilon_{A_1\cdots A_8}\eta_{A_1}\cdots \eta_{A_8}$ correspond to the axion-dilaton fields $\tau$ and $\bar\tau$, $\eta_{[A}\eta_{B]}$ and ${1\over 6!}\epsilon_{AB A_1\cdots A_6} \eta_{A_1}\cdots \eta_{A_6}$ correspond to the complexified 2-form fields, and $\eta_{[A}\eta_B \eta_C \eta_{D]}$ contains the graviton and the self-dual 4-form.
The 32 supercharges ${\bf q}_{\underline{\A}}, {\bf \overline q}_{\underline{\A}}$ act on the 1-particle states as \cite{Boels:2012ie}
\ie
{\bf q}_{\underline{\A}} = \zeta_{\underline{\A} A} \eta_A,~~~~ {\bf \overline q}_{\underline{\A}} = \zeta_{\underline{\A} A} {\partial\over \partial \eta_A}.
\fe
The supersymmetry algebra takes the form
\ie
\{ {\bf q}_{\underline{\A}}, {\bf q}_{\underline{\B}} \} = \{ {\bf \overline q}_{\underline{\A}} , {\bf \overline q}_{\underline{\B}} \} = 0,~~~~ \{{\bf q}_{\underline{\A}}, {\bf \overline q}_{\underline{\B}} \} = {1\over 2} p_m \Gamma^m_{\underline{\A}\underline{\B}}.
\fe
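As a consistency check, the nontrivial anticommutator follows from $\{\eta_A, \partial/\partial\eta_B\}=\delta_{AB}$,
\ie
\{{\bf q}_{\underline{\A}}, {\bf \overline q}_{\underline{\B}}\} = \zeta_{\underline{\A} A} \zeta_{\underline{\B} B} \left\{ \eta_A, {\partial\over \partial \eta_B} \right\} = \zeta_{\underline{\A} A} \zeta_{\underline{\B} A} = {1\over 2} p_m \Gamma^m_{\underline{\A}\underline{\B}},
\fe
where the last equality is the completeness relation satisfied by the constrained spinor helicity variables, the converse of the defining constraint above.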
To describe coupling to the brane, let us decompose the supercharges with respect to $SO(1,3)\times SO(6)$, and write
\ie
{\bf q}_{\underline\A} = (q_{\A I}, \widetilde q_{\da}{}^I),~~~~ {\bf \overline q}_{\underline\A} = (\overline{\widetilde q}_{\A I}, \overline{ q}{}_{\da}{}^I).
\fe
Here $\A$ and $\da$ are four dimensional chiral and anti-chiral spinor indices, and the lower and upper index $I$ label the chiral and anti-chiral spinors of $SO(6)$. The coupling to the four dimensional gauge multiplet on the brane will preserve 16 out of the 32 supercharges, which we take to be $q_{\A I}$ and $\overline{ q}{}_{\da}{}^I$.
The four dimensional super spinor helicity variables for the gauge multiplet are $\lambda_\A, \widetilde\lambda_\db, \theta_I$. The null momentum and supercharges of a particle in the multiplet are given by \cite{Elvang:2013cua}
\ie
p_\mu = \sigma_\mu^{\A\db} \lambda_\A \widetilde \lambda_{\db} ,
~~~
q_{\A I} = \lambda_\A \theta_I, ~~~ \overline q_{\db}{}^I = \widetilde \lambda_{\db} {\partial\over \partial \theta_I}.
\fe
The $SO(2)$ little group acts by
\ie\label{sot}
\lambda\to e^{i\A} \lambda, ~~~\widetilde\lambda\to e^{-i\A}\widetilde\lambda, ~~~\theta\to e^{-i\A} \theta.
\fe
Here we adopt a slightly unconventional little group transformation of $\theta_I$, so that $q_{\A I}, \widetilde q_{\db}{}^I$ are invariant under the little group, and can be combined with the supermomenta of the bulk supergravitons in constructing a superamplitude. A 1-particle state in a gauge multiplet is represented by a monomial in $\theta_I$. For instance, 1 and $\theta^4\equiv {1\over 4!}\epsilon^{IJKL}\theta_I\theta_J\theta_K\theta_L$ represent the $-$ and $+$ helicity gauge bosons,\footnote{Note that our sign convention for helicity is the opposite of \cite{Elvang:2013cua}.} while $\theta_I \theta_J$ represent the scalar field $\phi_{[IJ]}$.
In an $n$-point superamplitude that involves particles in the four dimensional gauge multiplet as well as the ten dimensional gravity multiplet, only the four dimensional momentum $P_\mu = \sum_{i=1}^n p_{i\mu}$ and the 16 supercharges $(Q_{\A I},\overline Q_{\db}{}^I) $ are conserved. Here we have defined
\ie
&Q_{\A I} =\sum_{i=1}^n q_{i\A I}= \sum_i \lambda_{i\A} \theta_{iI} + \sum_j \xi_{j\A I A} \eta_{jA},\\
&\overline Q_{\db}{}^I =\sum_{i=1}^n\overline q_{i\db}{}^I= \sum_i \widetilde\lambda_{i\db} {\partial\over \partial \theta_{iI}} + \sum_j \widetilde\xi_{j\db}{}^I{}_A {\partial\over \partial \eta_{jA}},
\fe
where $\xi_{i\A I A}$ and $\widetilde \xi_{i\db}{}^I{}_A$ are given by
the decomposition of the supergravity spinor helicity variable $\zeta_{i\underline{\A} A}$ with respect to $SO(1,3)\times SO(6)\subset SO(1,9)$, namely
\ie
& \zeta_{i\underline{\A} A} = (\xi_{i\A I A},\widetilde \xi_{i\db}{}^I{}_A).
\fe
A typical superamplitude takes the form\footnote{The only exceptions are when the kinematics are constrained in such a way that no nontrivial Lorentz and little group invariants can be formed, such as the 3-graviton amplitude in the bulk, the graviton tadpole on the brane, and the graviton-gauge multiplet coupling on the brane. These will be examined in more detail below.}
\ie\label{ampa}
{\cal A} = \delta^4(P_\mu) \delta^8(Q_{\A I}) {\cal F}(\lambda_i,\widetilde\lambda_i,\theta_i, \zeta_j,\eta_j),
\fe
where
\ie
\D^8(Q_{\A I})\equiv \prod_{\A,I} Q_{\A I},
\fe
and ${\cal F}$ obeys supersymmetry Ward identities \cite{Elvang:2010xn}
\ie
\delta^4(P_\mu) \delta^8(Q_{\A I}) \,\overline Q_{\db}{}^J {\cal F} = 0
\fe
associated with the 8 $\overline Q$ supercharges.
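This can be seen as follows: schematically, the preserved supercharges obey $\{\overline Q_{\db}{}^J, Q_{\A I}\} \propto \delta_I^J\, \sigma^\mu_{\A\db} P_\mu$, so commuting $\overline Q_{\db}{}^J$ past $\delta^8(Q_{\A I})$ produces only terms proportional to the total momentum, which vanish against $\delta^4(P_\mu)$. Thus
\ie
\overline Q_{\db}{}^J \left[ \delta^4(P_\mu) \delta^8(Q_{\A I}) {\cal F} \right] = \delta^4(P_\mu) \delta^8(Q_{\A I})\, \overline Q_{\db}{}^J {\cal F},
\fe
and invariance of the superamplitude under all 16 supercharges reduces to the stated condition on ${\cal F}$.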
If the amplitude $\mathcal{A}$ (\ref{ampa}) obeys supersymmetry Ward identities, then so does its CPT conjugate
\ie
\overline{\cal A} = \delta^4(P_\mu) \overline Q^8 {\cal F}(\lambda_i,\widetilde\lambda_i,{\partial/\partial\theta_i}, \zeta_j,{\partial/\partial\eta_j}) \prod_i \theta_i^4 \prod_j \eta_j^8,
\fe
where $\overline Q^8\equiv \prod_{\da, I} \overline Q_\da{}^I$.
In formulating superamplitudes purely in the gauge theory, it is useful to work with a different representation of the 16 supercharges, by decomposing
\ie\label{splitpreservedQ}
Q_{\A I} =(\mathcal{Q}_{\A a}, \mathcal{ \overline Q}_{\A \dot a}),~~~~{\overline{ Q}}_{\da}{}^I = (\mathcal{\overline{ Q}}{}_{\da a}, { \mathcal{Q}}{}_{\da\dot a}),
\fe
where $(a,\dot a)$ are spinor indices of an $SU(2)\times SU(2)$ subgroup of the $SU(4)$ R-symmetry. We can then represent the supercharges for individual particles through Grassmannian variables $(\psi_a, \widetilde\psi_{\dot a})$ as
\ie
& \mathcal{Q}_{\A a} = \lambda_\A \psi_a,~~~\mathcal{\overline Q}_{\A\dot a} = \lambda_\A {\partial\over \partial \widetilde\psi^{\dot a}},
\\
& \mathcal{\overline{ Q}}{}_{\da a} = \widetilde\lambda_\da {\partial\over \partial\psi^a},~~~ \mathcal{Q}_{\da\dot a} = \widetilde \lambda_\da \widetilde\psi_{\dot a}.
\fe
In this representation, a basis of 1-particle states is given by monomials in $\psi,\widetilde\psi$. The $-$ and $+$ helicity gauge bosons correspond to $\psi^2$ and $\widetilde\psi^2$, whereas the scalars are represented by 1, $\psi^2\widetilde\psi^2$, and $\psi_a\widetilde\psi_{\dot a}$. We can assign $\psi_a$ and $\widetilde\psi_{\dot a}$ to transform under the $SO(2)$ little group with charge $-1$ and $+1$, respectively.
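As a quick consistency check of this representation (up to index conventions), the generators above realize the supersymmetry algebra on each 1-particle state. For instance,
\ie
\{\mathcal{Q}_{\A a}, \mathcal{\overline Q}{}_{\db b}\} = \lambda_\A \widetilde\lambda_\db \left\{\psi_a, {\partial\over \partial\psi^b}\right\} \propto p_{\A\db}\,\epsilon_{ab},~~~\{\mathcal{Q}_{\A a}, \mathcal{Q}_{\db \dot b}\} = \lambda_\A\widetilde\lambda_\db \{\psi_a, \widetilde\psi_{\dot b}\} = 0,
\fe
since $\psi$ and $\widetilde\psi$ are independent Grassmann variables.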
The $\theta$-representation of the superamplitude is convenient for coupling to supergravity, while the $\psi$-representation is convenient for constructing vertices of the gauge theory that solve the supersymmetry Ward identities. The superamplitudes in the $\theta$-representation and in the $\psi$-representation are related by a Grassmannian twistor transform:
\ie
{\cal A}_\theta = \int \prod_i d^2\widetilde\psi_i \, e^{\sum_i \widetilde\psi_i \chi_i} {\cal A}_\psi,
\fe
where we make the identification $\theta_I = (\psi_a, \chi_{\dot a})$, after picking an $SU(2)\times SU(2)$ subgroup of $SU(4)$ R-symmetry.
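For orientation, on a single particle the transform acts (up to signs and normalization) by trading $\widetilde\psi$ monomials for $\chi$ monomials,
\ie
\int d^2\widetilde\psi \, e^{\widetilde\psi_{\dot a}\chi^{\dot a}} \cdot 1 \propto \chi^2, ~~~~ \int d^2\widetilde\psi \, e^{\widetilde\psi_{\dot a}\chi^{\dot a}} \, \widetilde\psi^2 \propto 1,
\fe
so that, for instance, the $+$ helicity gauge boson $\widetilde\psi^2$ maps to the $\chi$-independent component in the $\theta$-representation.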
A typical supervertex constructed in the $\psi$-representation is not manifestly R-symmetry invariant. In a supervertex that involves bulk supergravitons, we can form R-symmetry invariant supervertices by contracting with the spinor helicity variable of the supergraviton, or simply its transverse momentum to the 3-brane, and average over the $SO(6)$ orbit. It is useful to record the non-manifest R-symmetry generators in the $\psi$-representation,
\ie
R_{a\dot b} = \sum_i \psi_{ia} \widetilde\psi_{i \dot b},~~~ R_{\dot a b} = \sum_i {\partial\over \partial \psi_i^a} {\partial\over \partial \widetilde\psi_i^{\dot b}},~~~ R = \sum_i \left(\psi_{ia} {\partial\over \partial \psi_{ia}} + \widetilde\psi_{i\dot a} {\partial\over \partial \widetilde\psi_{i\dot a}} - 2\right).
\fe
\subsection{F-term and D-term Supervertices}
Let us focus on supervertices, namely, local superamplitudes with no poles in momenta. As in maximal supergravity theories, we can write down F-term and D-term supervertices \cite{Wang:2015jna} for brane-bulk coupling. One may attempt to construct a simple class of supervertices in the form (\ref{ampa}) by taking ${\cal F}$ to be independent of the Grassmann variables $\theta_i, \eta_j$, and to depend only on the bosonic spinor helicity variables, subject to $SO(1,3)\times SO(6)$ invariance. When combined with the CPT conjugate vertex, this construction appears to be sufficiently general for purely gravitational F-term vertices. For instance, a supervertex involving 2 bulk supergravitons of the form
\ie
\delta^4(P)\delta^8(Q) = \delta^4(p_1^\parallel+p_2^\parallel) \delta^8(q_1+q_2)
\fe
corresponds to a coupling of the form $R^2+\cdots$ on the brane.
When there are four dimensional gauge multiplet particles involved, however, such simple constructions in the $\theta$-representation of the superamplitude may not give the correct little group scaling. It is sometimes more convenient to start with a supervertex in the $\psi$-representation, average over $SO(6)$, and perform the twistor transform into $\theta$-representation.
For instance, we can write a supervertex that involves $(4+n)$ gauge multiplet particles in the $\psi$-representation, of the form
\ie
\delta^4(P) \delta^8(\mathcal{Q}_\psi)=\delta^4(P) \delta^8(\mathcal{Q}_{\A a}, \mathcal{Q}_{\da \dot a}) = \delta^4(P) \delta^4\left(\sum_{i=1}^{n+4} \lambda_{i\A} \psi_{ia} \right) \delta^4\left(\sum_{i=1}^{n+4} \widetilde\lambda_{i\da} \widetilde\psi_{i\dot a} \right).
\fe
This vertex is not $SO(6)$ invariant; rather, it lies in the lowest weight component of a rank $n$ symmetric traceless tensor representation of the $SO(6)$ R-symmetry. In component fields, it contains couplings of the form $\phi^{i_1}\cdots \phi^{i_{n}} F^4+\cdots$, where $\phi^i$ denotes the 6 scalars, and the traces between $i_k, i_\ell$ are subtracted off.
Indeed, one can verify that for the 4-point superamplitude,
\ie
\int \prod_{i=1}^4 d^2\widetilde\psi_i\, e^{\sum_i \widetilde\psi_i \chi_i} \delta^4(P)\delta^8(\mathcal{Q}_\psi) = \delta^4(P) \delta^8(Q_\theta) {[34]^2\over \langle 12\rangle^2},
\fe
while the analogous twistor transform on $\delta^4(P)\delta^8(\mathcal{Q}_\psi)$ for $n>0$ produces $\delta^4(P)\delta^8(Q_\theta)$ multiplied by an expression of degree $2n$ in $\chi$ that transforms nontrivially under $SO(6)$. However, it is generally more difficult to extend a gauge supervertex constructed in the $\psi$-representation to include couplings to the supergraviton.
As an example, we construct supervertices in the $\psi$-representation which contain $\phi^m\partial^m R^2$ couplings on the brane. These supervertices are naturally related to the $R^2$ vertex by spontaneously broken translation symmetry. To proceed, we first need to extend the $\psi$-representation to the supergraviton states.
Just as we split the 16 preserved supercharges on the brane in \eqref{splitpreservedQ}, we can split the 16 broken supercharges as follows,
\ie
\overline{\widetilde Q}_{\A I} = (\overline{\widetilde\cQ}_{\A a}, {\widetilde\cQ}_{\A \dot a}),~~~ \widetilde Q_\da{}^I = ({\widetilde\cQ} _{\da a}, \overline{\widetilde\cQ}_{\da\dot a}).
\fe
We would like to consider a representation of the supergraviton states such that $(\cQ_{\A a}, \cQ_{\da\dot a}, \widetilde\cQ_{\A\dot a}, {\widetilde\cQ}_{\da a})$ are represented as supermomenta, and the remaining 16 supercharges are represented as superderivatives. This is possible provided that $(\cQ_{\A a}, \cQ_{\da\dot a}, \widetilde\cQ_{\A\dot a}, {\widetilde\cQ}_{\da a})$ anticommute with one another. The anticommutator of $Q_{\A I}$ with $\overline{\widetilde Q}_{\B J}$ contains the transverse momentum $P_{IJ}$. Hence, while ${\cQ}_{\A a}$ anticommutes with $\widetilde\cQ_{\db b}$, it may not anticommute with $\widetilde\cQ_{\B\dot b}$. However, the anticommutator $\{\cQ_{\A a}, \widetilde\cQ_{\B\dot b}\}$ contains only the component $P_{a\dot b}$, which lies in the representation $(2,2)^0$ in the decomposition $6\to (2,2)^0\oplus (1,1)^+\oplus (1,1)^-$ under $SU(2)\times SU(2)\times U(1)\subset SO(6)$. As long as there are no more than two supergravitons in the supervertex, we can always choose the $SO(4)$ subgroup of $SO(6)$ to leave the two transverse momenta of the supergravitons invariant, so that $P_{a\dot b}=0$. With this choice, for each supergraviton the supercharges $(\cQ_{\A a}, \cQ_{\da\dot a}, \widetilde\cQ_{\A\dot a}, {\widetilde\cQ}_{\da a})$ anticommute with one another, and they can be simultaneously represented as supermomenta.
Let us compare this with the standard representation of the supercharges in the 10D type IIB super spinor helicity formalism, for which we can decompose
\ie
\zeta_{\underline{\A} A} = (\zeta_{\A I A}; \zeta_{\da}{}^I{}_A) = (\zeta_{\A a A}, \zeta_{\A \dot a A}; \zeta_{\da a A}, \zeta_{\da\dot a A}).
\fe
By requiring that $P_{a\dot b}=0$, we have
\ie
\epsilon^{\A\B}\zeta_{\A a A}\zeta_{\B \dot b A} = \epsilon^{\da\db} \zeta_{\da a A} \zeta_{\db \dot b A}= 0.
\fe
When this condition is satisfied, we can go to the $\psi$-representation by a Laplace transform on half of the 8 $\eta_A$'s.
A supervertex of the form
\ie
\delta^8(\cQ_{\A a}, \cQ_{\da\dot a}, \widetilde\cQ_{\A\dot a}, \widetilde{\cQ}_{\da a})
\fe
for 2 supergravitons and $m$ D3-brane gauge multiplets is not $SO(6)$ invariant (unless $m=0$). Rather, it lies in the lowest weight component of a set of supervertices that transform in the rank $m$ symmetric traceless representation of $SO(6)$. To form an $SO(6)$ invariant supervertex, we need to contract it with $m$ powers of the total transverse momentum $P_{IJ}$, and average over the $SO(6)$ orbit. In this way, we obtain the desired supervertex, which contains the $\phi^m \partial^m R^2$ coupling.
\subsection{Elementary Vertices}
There are a few ``elementary vertices" that are the basic building blocks of the brane coupling to supergravity, and are not of the form of the F and D-term vertices discussed above. One elementary vertex is the supergravity 3-point vertex (Figure \ref{elementary}), as discussed in \cite{Boels:2012ie}. In the notation of \cite{Wang:2015jna}, it can be written in the form
\ie
{\cal A}_3={g\over (p_+)^4}\delta^{10}(P)\delta^{12}(W),
\fe
where $g$ is the cubic coupling constant, $W$ represents 12 independent components of the supermomentum, specified by the null plane that contains the three external null momenta, and $p_+$ is an overall lightcone momentum as defined in \cite{Wang:2015jna}. The explicit expression of this vertex will not be discussed here, though the cubic vertex is of course crucial in the consideration of factorization of superamplitudes.
\begin{figure}[htb]
\centering
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[scale=1.7]{sugra.pdf}\\
$\mathcal{A}_3$
\end{minipage}
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[scale=1.7 ]{tadpole.pdf}\\
$\mathcal{B}_1$
\end{minipage}
\raisebox{-20pt}{
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[scale=1.7 ]{RF.pdf}\\
$\mathcal{B}_{1,1}$
\end{minipage}
}
\caption{Elementary supervertices. The wiggly line represents a bulk 1-particle state while the straight line represents a brane 1-particle state. The red dot represents the bulk vertex, whereas the blue and green dots are brane vertices.}
\label{elementary}
\end{figure}
The supergraviton tadpole on the brane is a 1-point superamplitude, of the form
\ie
{\cal B}_1 = T \delta^4(P) \Pi^{ABCD}(\zeta)\eta_A\eta_B\eta_C\eta_D,
\fe
where $T$ stands for the tension/charge of the brane, and $\Pi^{ABCD}(\zeta)$ is an anti-symmetric 4-tensor of the $SO(8)$ little group constructed out of the $\zeta_{\A A}$ associated with a (complex) null momentum in the 6-plane transverse to the 3-brane, of homogeneous degree zero in $\zeta$. If we take the transverse momentum to be in a lightcone direction, after double Wick rotation, the little group $SO(8)$ transverse to the lightcone is broken by the 3-brane to $SO(4)\times SO(4)$. We may then decompose $\eta_A = (\eta_{\A a}^+, \eta^-_{\da\dot a})$, where $(\A,\da)$ are spinor indices of the $SO(4)$ along the brane worldvolume, whereas $(a,\dot a)$ are spinor indices of the $SO(4)$ transverse to the brane as well as the null momentum. With respect to the $SO(4)\times SO(4)$, the 16 supercharges $Q_{\A I},\overline Q_{\db}{}^I$ preserved by the 3-brane coupling may be denoted $Q_{\A a}, Q_{\A\dot a}, \overline Q_{\db b}, \overline Q_{\db \dot b}$. $Q_{\A \dot a}$ and $\overline Q_{\db\dot b}$ trivially annihilate the 1-particle state of the supergraviton, $Q_{\A a}\sim \eta^+_{\A a}$, and $\overline Q_{\db\dot b} \sim \partial/\partial \eta^{-\db\dot b}$. The supergraviton tadpole supervertex can then be written as
\ie\label{tps}
{\cal B}_1 = T \delta^4(P) (\eta^+)^4.
\fe
This amplitude contains the graviton tadpole and the charge with respect to the 4-form potential with equal coefficients, reflecting the familiar BPS relation between the tension and the charge of the brane.
The supergraviton-gauge multiplet 2-point vertex $\mathcal{B}_{1,1}$ is another elementary vertex. Here again there is no Lorentz invariant to be formed out of the two external null momenta. Both the transverse and parallel components of the graviton momentum are null. To write this vertex explicitly, we take the graviton transverse momentum to be along a lightlike direction on the $(X^8,X^9)$ plane, and the parallel momentum to be along a lightlike direction on the $(X^0,X^1)$ plane. We will write the null parallel and transverse momenta $p^\parallel,\, p^\perp$ in this frame as
\ie
&p^\parallel = ( p^{\parallel}_{+} , \,p^{\parallel}_{ +},\,0 ,\cdots,0, 0),\\
&p^\perp = (0 ,0,0\cdots, \, i p^{\perp}_{+} ,\, p^{\perp}_{+} ).
\fe
Note that $p^{\parallel}_{+} ,\, p^{\perp}_{+} $ transform under the boosts on the $(X^0,X^1)$ and $(X^8,X^9)$ planes, which will allow us to fix the $p^{\parallel}_{+} ,\, p^{\perp}_{+} $ dependence of the supervertex $\mathcal{B}_{1,1}$.
The ``tiny group" $SO(6)$ that acts on the transverse directions to the null plane spanned by the momenta of the two particles (one on the brane, one in the bulk) rotates $X^2,\cdots,X^7$, which is broken by the 3-brane to $SO(2)\times SO(4)$. The spinor helicity variables are decomposed as
\ie\label{frame}
& \xi_{\A I A} = \Big( \xi_{+a |A} ,\,\xi_{-a |A} ,\,\xi_{+\dot a |A} ,\,\xi_{-\dot a |A}=0 \Big),
\\
& \widetilde\xi_{\da}{}^I{}_A = \Big( \widetilde\xi_{+a |A} ,\, \widetilde\xi_{-a |A} ,\, \widetilde\xi_{+\dot a |A} ,\, \widetilde\xi_{-\dot a |A}=0 \Big),\\
& \lambda_\A = \Big(\lambda_+=\sqrt{p^{\parallel}_{+} },\lambda_-=0\Big),~~~\widetilde\lambda_\da = \Big(\widetilde\lambda_+=\sqrt{p^{\parallel}_{+} },\widetilde\lambda_- = 0\Big).
\fe
We will also split $\theta_I=(\theta_a, \theta_{\dot a})$.
The 16 unbroken supercharges are represented as
\ie
& Q_{+a} = \xi_{+a|A} \eta_A + \lambda_+\theta_a, ~~ Q_{-a} = \xi_{-a|A}\eta_A ,~~Q_{+\dot a} = \xi_{+\dot a|A}\eta_A + \lambda_+ \theta_{\dot a},~~Q_{-\dot a} = 0,
\\
& \overline Q_{+,a} =\widetilde \xi_{+a|A} {\partial\over \partial\eta_A} + \widetilde\lambda_+ {\partial\over \partial \theta^a},~~ \overline Q_{-,a} = \widetilde\xi_{-a|A} {\partial\over \partial\eta_A},
~~ \overline Q_{+,\dot a} = \widetilde\xi_{+\dot a|A} {\partial\over \partial\eta_A} + \widetilde\lambda_+ {\partial\over \partial \theta^{\dot a}},~~\overline Q_{-,\dot a} = 0 .
\fe
The supervertex can be written in this frame as\footnote{This supervertex is very similar to the cubic vertex in the non-Abelian gauge theory, which is absent here because we restrict to the Abelian case.
}
\ie
{\cal B}_{1,1} = \sqrt{Tg} \delta^4(P) {\delta^{6}(Q_{+a}, Q_{-a}, Q_{+\dot a})\over p^{\parallel}_{+} p^{\perp}_{+} }.
\fe
Boost invariance on the $(X^0,X^1)$ plane requires one power of $p^{\parallel}_{+} $ in the denominator. Since the supervertex must scale linearly with the momenta, the remaining factor of $p^{\perp}_{+} $ in the denominator is then determined.
\begin{figure}[htb]
\centering
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[scale=1.8 ]{RR1.pdf}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[scale=1.8 ]{RR2.pdf}
\end{minipage}
\caption{Factorization of the $R^2$ amplitude through elementary vertices. The red dot represents the bulk supergravity vertex whereas the blue and green dots are brane vertices.}
\label{RR}
\end{figure}
The normalization of $\mathcal{B}_{1,1}$ is unambiguously fixed by supersymmetry. Note that there is a unique 2-supergraviton amplitude of the form \cite{Hashimoto:1996bf}
\ie\label{qst}
{\delta^8(Q)\over st},
\fe
at this order in momentum. Here $s=-(p_1+p_2)^2$, $t=(p_1^\perp)^2=(p_2^\perp)^2$. The 2-supergraviton amplitude factorizes through ${\cal B}_1 {\cal A}_3$ and ${\cal B}_{1,1}{\cal B}_{1,1}$ (Figure \ref{RR}), from which the relative coefficients of these two channels are fixed (proportional to $Tg$).
\subsection{Examples of Superamplitudes}
Let us now attempt to construct a 4-point superamplitude that couples one supergraviton to three gauge multiplet particles and scales like $p^3$ (Figure~\ref{fig:RF3}). We will see that such a superamplitude must be nonlocal, and that an independent local supervertex of this form does not exist. This superamplitude should be of the form $\delta^4(P) \delta^8(Q)$ times a rational function that has total degree 2 in $\eta$ and $\theta$,\footnote{If the vertex, at the same derivative order, is $\delta^4(P)\delta^8(Q)$ times a function of $\lambda,\widetilde\lambda$ that is independent of $\eta$ and $\theta$, this function needs to have homogeneous degree $-4$ in the $\lambda$'s and degree 2 in the $\widetilde\lambda$'s in order to reproduce the correct little group scaling. Such a function cannot be a polynomial in the spinor helicity variables and the amplitude would have to be nonlocal. The situation is similar if the function is of degree 4 in $\eta$ and $\theta$, which may be obtained from the CPT conjugate of the previous case. It seems that such amplitudes cannot factorize correctly into lower point supervertices (and they do not exist in string theory).} homogeneous degree $-1$ in the momenta, and must have the little group scaling such that a term $\sim\eta^4\theta_1^2\theta_2^2\theta_3^2$ (representing three scalars coupled to the graviton or the 4-form potential) is little group invariant.
\begin{figure}[htb]
\centering
{
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[scale=2 ]{RF3.pdf}\\
\end{minipage}
}
\caption{A factorization for the $RF^3$ superamplitude for the case of an Abelian gauge multiplet coupled to supergravity.}
\label{fig:RF3}
\end{figure}
To construct this superamplitude, we will pick the supergraviton momentum to be in the $X^9$ direction, and decompose the spinor helicity variables according to $SO(3)\times SO(5)\subset SO(8)$, where the $SO(8)$ that rotates $X^1,\cdots,X^8$ can be identified with the little group of the supergraviton, and the $SO(3)$ and $SO(5)$ rotate $X^1,X^2,X^3$ along the 3-brane and $X^4,\cdots,X^8$ transverse to the 3-brane, respectively. We can write
$\eta_A = \eta_{\A I}$, where $\A$ is an $SO(3)$ spinor index and $I$ an $SO(5)$ spinor index. We can split $\zeta_{\underline{\A} A}$ into $(\zeta_{B A}, \zeta_{\dot B A})$, where $B$ and $\dot B$ are chiral and anti-chiral $SO(8)$ indices. Then the spinor helicity constraint on $\zeta$ is simply that $\zeta_{\dot B A} =0$, and $\zeta_{B A} = \sqrt{p^\perp} \delta_{B A}$. Further decomposing the index $B$ into $SO(3)\times SO(5)$ indices $\B J$, and identifying $A\sim \A I$, we have
\ie
\zeta_{\B J,\A I} = \sqrt{p^\perp} \epsilon_{\B\A} \Omega_{JI},
\fe
where $\Omega_{IJ}$ is the invariant anti-symmetric tensor of $SO(5)\sim Sp(4)$. The supercharges can now be written explicitly (in $SO(3)\times SO(5)$ notation) as
\ie\label{supq}
& Q_{\A I} = \sqrt{p^\perp} \eta_{\A I} + \sum_{i=1}^3 \lambda_{i\A} \theta_{iI},
\\
& \overline Q_{\A I} = \sqrt{p^\perp} {\partial\over \partial \eta^{\A I}} + \sum_{i=1}^3 \widetilde\lambda_{i\A} {\partial\over \partial \theta_i^I}.
\fe
The general superamplitude that solves the supersymmetry Ward identities and has the correct little group and momentum scaling takes the form
\ie\label{paq}
\delta^4(P) \delta^8(Q) \sum_{i,j}
f_{ij}(\lambda_k,\widetilde\lambda_k) \left(\widetilde\lambda_{i\A}\eta^{\A}{}_I - \sqrt{p^\perp} \theta_{iI}\right) \left(\widetilde\lambda_{j\B}\eta^\B{}_J - \sqrt{p^\perp} \theta_{jJ}\right) \Omega^{IJ},
\fe
where $f_{ij}$ is a rational function of $\lambda_{k\A}$ and $\widetilde\lambda_{k\A}$, $k=1,2,3$. Note that since we are working in a frame tied to the supergraviton momentum, $\A$ is an $SO(3)$ index, and we can contract $\lambda_i$ with $\widetilde\lambda_j$, and write for instance $[j i\rangle = \widetilde\lambda_{j\A}\lambda_i^\A$. The little group and momentum scaling demands that $f_{ij}$ has homogeneous degree $-4$ in the $\lambda_k$'s and degree 0 in the $\widetilde\lambda_k$'s.
Due to the $\delta^8(Q)$ factor, we can rewrite (\ref{paq}) as
\ie\label{pqn}
\delta^4(P) \delta^8(Q) (p^\perp)^{-1} \sum_{i,j}
f_{ij}(\lambda,\widetilde\lambda) \Big(\sum_k [ik\rangle \theta_{kI} + p^\perp \theta_{iI}\Big) \Big(\sum_\ell [j\ell \rangle \theta_{\ell J} + p^\perp \theta_{jJ}\Big) \Omega^{IJ}.
\fe
It appears that such an amplitude with the correct little group scaling will necessarily have poles, thereby forbidding a local supervertex.\footnote{Note that we can shift
\ie
f_{ij}(\lambda,\widetilde\lambda) \to f_{ij}(\lambda,\widetilde\lambda) + \lambda_{i\A} g^\A_j + \lambda_{j\A} g_i^\A
\fe
for arbitrary $g_i^\A$ without changing the amplitude (\ref{pqn}). }
The corresponding 4-point disc amplitude on the D3-brane in type IIB string theory has a pole in $(p^\perp)^2$, and no poles at $s=0$, $t=0$, or $u=0$. Here $s=-(p_1+p_2)^2$, $t=-(p_2+p_3)^2$, $u=-(p_3+p_1)^2$, with $s+t+u=(p^\perp)^2$. In particular, there is a coupling $(\partial_i \delta\overline\tau) \phi^i F_-^2 $, which corresponds to the term proportional to $\eta^8\theta_{iI}\theta_{iJ}\Omega^{IJ}$ in (\ref{pqn}). This coupling is represented by
\ie
\eta^8 p^\perp \left( [12]^2\theta_{3I}\theta_{3J} + [23]^2\theta_{1I}\theta_{1J} + [31]^2\theta_{2I}\theta_{2J} \right) \Omega^{IJ}.
\fe
Comparing to (\ref{pqn}), we need
\ie
\sum_{i,j} f_{ij} [i1\rangle [j 1\rangle + 2 p^\perp \sum_i f_{1i} [i1\rangle + (p^\perp)^2 f_{11} = {[23]^2\over (p^\perp)^2}.
\fe
A solution for $f_{ij}$ with the correct little group scaling is
\ie
& f_{11} = {[23]^2 \over (p^\perp)^4}, ~~~ f_{22} = {[31]^2 \over (p^\perp)^4},~~~f_{33} = {[12]^2 \over (p^\perp)^4},
\\
& f_{12} = -{[13][23] \over (p^\perp)^4}, ~~~ f_{23} = -{[21][31] \over (p^\perp)^4},~~~f_{31} = -{[32][12] \over (p^\perp)^4}.
\fe
To see this, we make use of the following Schouten identity for $SU(2)$ spinors,
\ie{}
[23][11\rangle - [13][21\rangle + [12][31\rangle = 0.
\fe
It then follows that
\ie
\sum_k f_{ik} [kj\rangle = 0.
\fe
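For instance, taking $i=j=1$, and using $f_{13}=f_{31}$ together with $[32][12]=-[23][12]$, we find
\ie
(p^\perp)^4 \sum_k f_{1k}[k1\rangle = [23]^2 [11\rangle - [13][23][21\rangle + [23][12][31\rangle = [23]\Big( [23][11\rangle - [13][21\rangle + [12][31\rangle \Big) = 0.
\fe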
Then, the superamplitude can be simplified to
\ie\label{supa}
& \delta^4(P) \delta^8(Q) p^\perp \sum_{i,j} f_{ij} \theta_{iI}\theta_{jJ}\Omega^{IJ}
\\
&= \delta^4(P) {\delta^8(Q)\over (p^\perp)^3} \Big\{ [23]^2(\theta_1^2) +[31]^2(\theta_2^2)+[12]^2(\theta_3^2)
-[13][23] (\theta_1\theta_2) - [21][31] (\theta_2\theta_3) - [32][12] (\theta_3\theta_1) \Big\},
\fe
where $(\theta_i\theta_j)\equiv \theta_{iI}\theta_{jJ}\Omega^{IJ}$.
One can verify that, despite the $(p^\perp)^3$ in the denominator, this amplitude has only a first order pole in $(p^\perp)^2$. For instance, consider the component proportional to $\eta^6\theta_1^4$, which corresponds to an amplitude that couples the 2-form potential $C_2$ in the bulk to one $+$ helicity gauge boson and two $-$ helicity gauge bosons. This term in (\ref{supa}) scales like $\lambda_{1\A}\lambda_{1\B} [23]^2$ in our frame, which agrees with the amplitude constructed out of the $F_+^2F_-^2$ vertex (in the DBI action) and the 2-point $C_2 F_-$ vertex, sewn together by a gauge boson propagator, in our frame, which is infinitely boosted along the momentum direction of the supergraviton. The covariantized form of this term in the superamplitude is proportional to
\ie
\delta^4(P) (\eta^6)_{AB} (\theta_1^4) {\epsilon^{IJKL} (\lambda_1^\A \zeta_{\A IA}) (\lambda_1^\B \zeta_{\B JB}) (\zeta_{\C KC}\zeta^\C{}_{LC}) [23]^2\over (p^\perp)^2}.
\fe
In the case of a non-Abelian gauge multiplet coupled to supergravity, there is a simpler 4-point brane-bulk superamplitude we can write down, of order $p$. The color ordered superamplitude (Figure \ref{nonabelian}) is
\ie
\delta^4(P) {\delta^8(Q) \over \langle 12\rangle \langle 23\rangle \langle 31\rangle} + ({\rm CPT ~conjugate}).
\fe
Note that this expression only has simple poles in $s_{12}$, $s_{23}$, or $s_{13}$. For instance, if we send $\langle 12\rangle\to 0$, the residue is proportional to $(p^\perp)^2$. In particular, this amplitude couples $\delta\overline\tau$ (or $\delta\tau$ from the CPT conjugate term) to three gluons of $-$ (or $+$) helicity, that factorizes through a cubic vertex in the gauge theory and a brane-bulk cubic vertex.
\begin{figure}[htb]
\centering
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[scale=2]{nonabelian.pdf}\\
\end{minipage}
\caption{A factorization for the $RF^3$ superamplitude for the case of a non-Abelian gauge multiplet coupled to supergravity.}
\label{nonabelian}
\end{figure}
As another example, let us investigate a superamplitude that contains the coupling $\delta\tau F_+^2 F_-^2$. We will label the momenta of the four gauge multiplet fields $p_1,\cdots,p_4$. Such an amplitude must take the form
\ie
\delta^4(P) \delta^8(Q) {\cal F}(\lambda_{i\A},\widetilde\lambda_{i\A}),
\fe
where ${\cal F}$ is a rational function of $\lambda_i$ and $\widetilde\lambda_i$, $i=1,2,3,4$, of a total homogeneous degree $-4$ in the $\lambda_i$'s and degree 4 in the $\widetilde\lambda_i$'s.
A local supervertex would require ${\cal F}$ to be a polynomial in $\lambda,\widetilde\lambda$, which is obviously incompatible with the little group and momentum scaling. We thus conclude that there is no local supervertex that gives rise to $\delta\tau F_+^2 F_-^2$ coupling.\footnote{It appears that in string theory there is no such amplitude.}
On the D3-brane in type IIB string theory, there is a nonlocal $\delta\tau F_+^3 F_-$ amplitude. This should be part of a 5-point superamplitude of the form
\ie\label{tauF+F-3}
\delta^4(P) \delta^8(Q) \sum_{i_1,i_2,i_3,i_4}
f^{I_1I_2I_3I_4}_{i_1i_2i_3i_4}(\lambda,\widetilde\lambda) \prod_{s=1}^4 \left(\widetilde\lambda_{i_s\A}\eta^{\A}{}_{I_s} - \sqrt{p^\perp} \theta_{i_sI_s}\right) ,
\fe
where $f^{I_1I_2I_3I_4}_{i_1i_2i_3i_4}(\lambda,\widetilde\lambda)$ is a rational function of homogeneous degree $-4$ in the $\lambda$'s and degree 0 in the $\widetilde\lambda$'s.
This amplitude has a pole in $s_{123}$, $s_{124}$, $s_{134}$, $s_{234}$, and no pole in $s_{ij}$ or in $(p^\perp)^2$. In particular, the components proportional to $\eta^8\theta_4^4$ and to $\theta_1^4\theta_2^4\theta_3^4$ (corresponding to $\delta\overline\tau F_-^3 F_+$ and $\delta\tau F_+^3 F_-$ respectively) should have only a pole in $s_{123}$.
\subsection{Soft Limits and D3-brane Coupling}
So far our considerations of brane-bulk coupling are based on supersymmetry Ward identities and unitarity of scattering amplitudes. In the context of D-branes in string theory, a crucial extra ingredient is the identification of the Abelian gauge multiplet on the brane with the Nambu-Goldstone bosons and fermions associated with the spontaneous breaking of super-Poincar\'e symmetry. The amplitudes then obey a soft theorem for the scalar fields of the gauge multiplet. The soft theorem relates the amplitude ${\cal A}(\phi^{IJ},\cdots)$ with the emission of a Nambu-Goldstone boson $\phi^{IJ}$ in the soft limit to the amplitude ${\cal A}(\cdots)$ without the $\phi^{IJ}$ emission,
\ie
\lim_{p_{\phi}\to 0}{\cal A}(\phi^{IJ},\cdots) = \sqrt{g\over T} \, p^{IJ} {\cal A}(\cdots).
\fe
Here $p^{IJ}$ is the $[IJ]$-component of the total momentum transverse to the 3-brane. The normalization of the soft factor is unambiguously determined by the relation between ${\cal B}_{1,1}$ and the 1-point amplitude ${\cal B}_1$.
Let us consider the 3-point amplitude between a supergraviton and two gauge multiplets. The momenta of the two gauge multiplets and the graviton are $p_1, p_2, p_3$, with $p_1+p_2+p_3^\parallel=0$.
The amplitude takes the form
\ie
{\cal B}_{1,2} = g {\delta^8(Q)\over \langle12\rangle^2}.
\label{B12}
\fe
Expanding in components, we have
\ie
{\cal B}_{1,2} = g\Big( [12]^2 \eta_3^8 + \langle 12\rangle^2 \theta_1^4\theta_2^4 + \cdots \Big),
\fe
where the terms proportional to $\theta_1^4\theta_2^4$ and $\eta_3^8$ give the vertices for $\tau F_+^2$ and $\bar\tau F_-^2$ coupling, respectively. Note that $(p_3^\perp)^2 = -(p_1+p_2)^2 = -2p_1\cdot p_2 = \langle 12\rangle [12]$.
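As a schematic check of the $\langle 12\rangle^2\theta_1^4\theta_2^4$ term (up to sign, with the brane-particle supercharges $q_{i\A I} = \lambda_{i\A}\theta_{iI}$), note that
\ie
\delta^8(Q) \supset \prod_{I} \langle 12\rangle\, \theta_{1I}\theta_{2I} = \langle 12\rangle^4\, \theta_1^4\theta_2^4,
\fe
which upon dividing by $\langle 12\rangle^2$ in (\ref{B12}) reproduces the coefficient above.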
\begin{figure}[htb]
\centering
\includegraphics[scale=2 ]{soft.pdf}
\raisebox{45pt}{$\xrightarrow{p_\phi\rightarrow 0}$}
\includegraphics[scale=2]{soft2.pdf}
\caption{Single soft limit of ${\cal B}_{1,2}$. }
\label{soft}
\end{figure}
${\cal B}_{1,2}$ is related to ${\cal B}_{1,1}$ by taking the soft limit on a scalar $\phi_{IJ}$ on the brane (Figure \ref{soft}). The soft theorem on the Nambu-Goldstone bosons $\phi_{IJ}$ implies that, in the limit $p_1\to 0$,
\ie
\left.{\cal B}_{1,2}\right|_{\theta_{1I}\theta_{1J}} \to \sqrt{g\over T}\, p^{IJ} {\cal B}_{1,1},
\fe
where $p^{IJ}=p_3^{IJ}$ is the $\phi^{IJ}$ component of the transverse momentum. More explicitly, we can write
\ie\label{bsoft}
\left.{\cal B}_{1,2}\right|_{\theta_{1I}\theta_{1J}} = g {\lambda_{1\A}\lambda_{1\B}\over \langle 12\rangle^2 } \, \delta^{6(\A\B)[IJ]}(q_2+q_3),
\fe
where
\ie
\delta^{6(\A\B)[IJ]}(Q) &= {1\over 768}\Big[\epsilon^{II_1I_2I_3}(Q^\A{}_{I_1} Q_{\A_1I_2}Q_{\A_2I_3}) \epsilon^{JJ_1J_2J_3} (Q^\B{}_{J_1} Q^{\A_1}{}_{J_2}Q^{\A_2}{}_{J_3})
\\
&~~~~+ \epsilon^{I_1I_2I_3I_4}(Q^\A{}_{I_1} Q^\B{}_{I_2} Q^{\A_1}{}_{I_3} Q^{\A_2}{}_{I_4}) \epsilon^{IJ J_1J_2} (Q_{\A_1J_1} Q_{\A_2J_2})\Big].
\fe
The RHS of (\ref{bsoft}), after imposing $p_2+p_3^\parallel=0$, is independent of the choice of $\lambda_1$, and is proportional to the 2-point bulk-brane vertex ${\cal B}_{1,1}$.
More specifically, let us choose the frame as in the supervertex $\mathcal{B}_{1,1}$. We take $p_2,\, p_3^\parallel$ to be along a lightlike direction in the $(X^0,X^1)$ plane and $p_3^\perp$ to be along a lightlike direction on the $(X^8,X^9)$ plane. The $SO(6)$ spinor index $I$ decomposes into spinor indices $a,\,\dot a$ of the $SO(4)$ that rotates $X^4,\,X^5,\,X^6,\,X^7$. We pick the transverse momentum of the supergraviton to be along the direction $[IJ]=[ab]$ on the $(X^8,X^9)$ plane (while $[IJ]=[a\dot b]$ would be a direction in the $X^4,\,X^5,\,X^6,\,X^7$ space). The spinor helicity variables in this frame are given by \eqref{frame}. In particular, $\lambda_{2+}=\sqrt{p^\parallel_+}, \lambda_{2-}=0$ and $p_3^{IJ} = p^\perp _+$. Focusing on the $(\alpha,\beta)=(-,-)$ term in \eqref{bsoft}, this is indeed proportional to the supervertex $\mathcal{B}_{1,1}$ in the soft limit in this frame:
\ie
g {\lambda_{1+}\lambda_{1+} \over \langle 12\rangle^2}\delta^{6(--)[ab]}(q_2+q_3)
\propto g {\delta^{6}(Q_{+a}, Q_{-a}, Q_{+\dot a})\over p^{\parallel }_+} = \sqrt{g\over T}\,p^\perp_+\mathcal{B}_{1,1}.
\fe
\subsection{The Brane-Bulk Effective Action}
Let us comment on the notion of effective action for the brane in our consideration of higher derivative couplings. We will be interested in the ``massless open string 1PI" effective action for a D3-brane in type IIB string theory. Namely, we will be considering a quantum effective action through which the full massless open-closed string scattering amplitudes are reproduced by sewing effective vertices through ``disc type" tree diagrams, that is, diagrams that correspond to factorization through either massless open or closed string channels of a disc diagram.
This effective action is subject to two subtleties. The first is the appearance of non-analytic terms. This is familiar already in the massless closed string effective action: in type IIB string theory, there are for instance string 1-loop non-analytic terms at $\A' D^2 R^4$ and $\A'^4 D^8 R^4$ order in the momentum expansion. Often, the higher derivative terms one wishes to constrain do not receive non-analytic contributions in the quantum effective action of string theory. Sometimes, when the non-analytic terms do appear, such as those of the same order in momentum as the $D^2 R F^2$ and $R^2$ terms in the D3-brane effective action, as will be discussed in the next section, their effect is to add a term that is linear in the dilaton (logarithmic in $\tau_2$) to the coefficient of the higher derivative coupling of interest, which is related to a modular anomaly.
If we work with a Wilsonian effective action, take the floating cutoff $\Lambda$ to be very small (compared to string scale) and then consider the momentum expansion, the non-analytic term is absent, and instead of the $\log\tau_2$ contribution, we will have a constant shift of the coefficient of the higher derivative operator (like $D^2 RF^2$ or $R^2$) that depends logarithmically on $\Lambda$. Our analysis of supersymmetry constraints applies straightforwardly in this case (and as we will see, such constant shifts are compatible with supersymmetry). In doing so, however, one loses the exact $SL(2,\mathbb{Z})$ invariance in the effective coupling, and the modular anomaly must be taken into account to recover the $SL(2,\mathbb{Z})$ symmetry.
\begin{figure}[htb]
\centering
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[scale=1.8 ]{loop.pdf}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[scale=1.8 ]{loop2.pdf}
\end{minipage}
\caption{Examples of non-disc type diagrams. The black dots represent (bare) brane-bulk couplings.}
\label{loop}
\end{figure}
The second subtlety has to do with the brane. Note that, in the ``massless open string 1PI" effective action, closed string propagators that connect, say, a pair of discs have already been integrated out. This is because the tree diagrams that involve bulk fields connecting pairs of brane vertices behave like loop diagrams (Figure \ref{loop}), where the transverse momentum of the bulk propagator is integrated over \cite{Goldberger:2001tn,Michel:2014lva}. Therefore, in analyzing tree level unitarity of superamplitudes built out of higher derivative vertices of the effective action, we will consider only the ``disc type" tree diagrams.
\section{Supersymmetry Constraints on Higher Derivative Brane-Bulk Couplings}
Following a similar set of arguments as in \cite{Wang:2015jna}, we will derive non-renormalization theorems on $f_F(\tau,\bar\tau) F^4$ terms that couple the Abelian field strength on the brane to the dilaton-axion of the bulk type IIB supergravity multiplet, and on $f_{RFF}(\tau,\bar\tau) RF^2$ and $f_R(\tau,\bar\tau) R^2$ terms that couple the brane to the bulk dilaton-axion and graviton.
\subsection{$F^4$ Coupling}
Let us suppose that there is a supersymmetric $F^4$ coupling on the brane, whose coefficient $f_F(\tau,\bar\tau)$ depends on the axion-dilaton field $\tau$ in the bulk. Consider a vacuum in which the dilaton-axion field $\tau$ acquires an expectation value $\tau_0$, and denote its fluctuation by $\delta\tau$. Expanding
\ie\label{expff}
f_F(\tau,\bar\tau) F^4 = f_F(\tau_0,\bar\tau_0) F^4 + \partial_\tau f_F(\tau_0,\bar\tau_0)\delta\tau F^4
+ \partial_{\overline\tau} f_F(\tau_0,\bar\tau_0)\delta\overline\tau F^4 + \partial_\tau \partial_{\overline\tau} f_F(\tau_0,\bar\tau_0)\delta\tau\delta\overline\tau F^4 + \cdots,
\fe
one could ask if the coefficient of $\delta \tau F^4$, namely $\partial_\tau f_F$ at $\tau=\tau_0$, is constrained by supersymmetry in terms of lower point vertices. This amounts to asking whether the coupling $\delta\tau F^4$ admits a local supersymmetric completion, as a supervertex. As already argued in the previous section, such a supervertex does not exist. The reason is that the desired supervertex, in $\theta$-representation, must be of the form
\ie
\delta^4(P)\delta^8(Q_\theta) {\cal F}(\lambda_i,\widetilde\lambda_i),
\fe
where ${\cal F}(\lambda_i,\widetilde\lambda_i)$ must have total degree $-4$ in $\lambda_i$, $i=1,\cdots,4$, and degree 4 in $\widetilde\lambda_i$, as constrained by the little group scaling on the massless 1-particle states in four dimensions. Such a rational function will necessarily introduce poles in the Mandelstam variables, and will not serve as a local supervertex.
The situation is in contrast with the 4-point $F^4$ supervertex, which does exist. There, the rational function ${\cal F}$ can be written as $[34]^2/\langle 12\rangle^2$, which due to the special kinematics of 4-point massless amplitudes in four dimensions does not introduce poles in momenta. This is not the case for higher than 4-point amplitudes, where a local supervertex of this form does not exist. Also note that, had there been such a 5-point supervertex, it would give rise to an independent $\delta\tau F_+^2F_-^2$ coupling, whereas in string theory the analogous nonlocal superamplitude on the D3-brane contains an amplitude of the form $\delta\tau F_+^3 F_-$ instead.
Since an independent $\delta\tau F^4$ supervertex does not exist, the coefficient $\partial_\tau f_F$, which is given by the soft limit of a 5-point superamplitude, is fixed by the residues of the 5-point superamplitude at its poles. It must then be fixed by lower point supervertices, namely, by the coefficient of $F^4$. This means that there is a linear relation between $\partial_\tau f_F$ and $f_F$, which takes the form of a first order differential equation on $f_F(\tau,\bar\tau)$. In fact, as noted already below \eqref{tauF+F-3}, the actual 5-point superamplitude that factorizes through an $F^4$ supervertex has degree 12 in $\eta$ and $\theta$ (see Figure \ref{tauF+3F-}), so the $\delta\tau F_+^2 F_-^2$ coupling which has degree 8 in $\eta$ and $\theta$ must not be part of this superamplitude and the first order differential equation simply says that $f_F(\tau,\bar\tau)$ is a constant.
\begin{figure}[htb]
\centering
\includegraphics[scale=2]{tF+3F-.pdf}
\caption{Factorization of the $\D\tau F_+^3 F_-$ amplitude through one $F_+^2F_-^2$ vertex and an $RF^2$ supervertex.}
\label{tauF+3F-}
\end{figure}
This is indeed what we see in the DBI action for a D3-brane in type IIB string theory. In the usual convention, the gauge kinetic term is normalized as $\tau_2 F^2$, and the DBI action contains $\tau_2 F^4$ coupling in string frame, which translates into $\tau_2^2 F^4$ in Einstein frame \cite{Johnson:2000ch}. In the consideration of scattering amplitudes, it is natural to rescale the gauge field by $\tau_2^{-1/2}$, so that the kinetic term is canonically normalized. This is the correct normalization convention in which the expansion (\ref{expff}) applies, and the DBI action corresponds to $f_F(\tau,\bar\tau)=1$. Thus, we conclude that the tree level $F^4$ coupling is exact in the full quantum effective action of type IIB string theory. Note that, rather trivially, this result is consistent with $SL(2,\mathbb{Z})$ invariance. Unlike the $R^4$ coupling in type IIB string theory, however, here the constraint from supersymmetry is stronger, and one need not invoke $SL(2,\mathbb{Z})$ to fix the $F^4$ coefficient.
The above discussion is in contrast to the $F^4$ coupling in the Coulomb phase of a four dimensional gauge theory with sixteen supersymmetries.\footnote{We restrict our discussion to the rank 1 case. The spacetime dimension of the gauge theory is not essential here.} In this case, one may consider the $F^4$ coefficient as a function of the scalar fields on the Coulomb branch moduli space. There are independent supervertices of the form
\ie
\delta^4(P)\delta^8(\mathcal{Q}_\psi)
\fe
in the $\psi$-representation, which contain couplings of the form $\phi^{i_1}\cdots \phi^{i_n} F^4+\cdots$ and transform in the rank $n$ symmetric traceless tensor representation of the $SO(6)$ R-symmetry. As a consequence, through consideration of the factorization of 6-point superamplitudes at a generic point on the Coulomb branch, one derives a second order differential equation asserting that $\Delta_\phi f(\phi)$ is proportional to $f(\phi)$. Comparison with the DBI action then fixes this differential equation to be simply the condition that $f(\phi)$ is a harmonic function. This reproduces the result of \cite{Paban:1998ea,Paban:1998qy}.
\subsection{$RF^2$ Coupling}
The 3-point superamplitude between one supergraviton and two gauge multiplets is particularly simple because there is only one invariant Mandelstam variable, $t=(p_3^\perp)^2 = \langle 12\rangle [12]$, where $p_3$ is the momentum of the supergraviton. A general 3-point superamplitude of this type takes the form (in $\theta$-representation)
\ie\label{asd}
{\cal A}_{1,2} = \delta^4(P) {\delta^8(Q_\theta)\over \langle 12\rangle^2} f(t),~~~f(t) = \sum_{n\geq -1} f_n t^{n+1}.
\fe
Previously, we have considered the term $f_{-1}$ which we called ${\cal B}_{1,2}$ in \eqref{B12}. We have seen that it is not renormalized, and is fixed by the bulk cubic coupling. We will work in units in which this coupling is set to 1. Now let us consider the possibility of having $f_n$ for general $n\geq 0$ as a function of the dilaton-axion $\tau,\bar\tau$.
First, let us ask which independent local supervertices could couple $\delta\tau, \delta\overline\tau$ to $RF^2$. Such a $(3+m)$-point supervertex, with the correct little group scaling in four dimensions, must take the form
\ie
\delta^4(P) \delta^8(Q_\theta) {{\cal P}_{n+1} \over \langle 12\rangle^2},
\fe
where ${\cal P}_{n+1}$ is a function of the spinor helicity variables that scales with momentum like $t^{n+1}$. For $m\geq 1$, the $\langle 12\rangle^2$ in the denominator must be canceled by a factor from the numerator in order for the supervertex to be local (there is no longer the special kinematic constraint as in the case of the 3-point vertex that renders (\ref{asd}) local even for the $f_{-1}$ term). For this, we need $n\geq 1$, so that we can write a local supervertex of the form
\ie\label{fvert}
\delta^4(P)\delta^8(Q_\theta) [12]^2 {\cal P}_{n-1}.
\fe
The 4-point superamplitude for $\tau R F_+ F_-$ cannot factorize through lower point supervertices. It follows that the coefficient $f_0$ in (\ref{asd}) as a function of $\tau,\bar\tau$ is subject to a homogeneous first order differential equation, which simply states that $f_0$ is a constant. Moreover, as we shall see below, $f_0$ is fixed to be identically zero using the tree-level amplitude in type IIB string theory.
Supervertices of the form (\ref{fvert}) are F-term vertices, and give rise to $(\delta\tau)^m D^{2n} RF^2$ coupling.
We would like to constrain $\partial_\tau \partial_{\bar\tau} f_n$ from supersymmetry, by showing that, as the coefficient of a coupling of the form $\delta\tau\delta\overline\tau D^{2n} RF^2$, it cannot be adjusted by introducing a local supervertex. So let us focus on the 5-point supervertices.
When $n\geq 2$, such a coupling may be part of a 5-point D-term supervertex of the form
\ie\label{5ptDtermRF2}
\delta^8(Q) \overline Q^8 {\cal F}(\lambda_i,\widetilde\lambda_i,\theta_i,\zeta_j,\eta_j),
\fe
where ${\cal F}$ is of homogeneous degree $2(n-2)$ in the momenta. For $n=1$, on the other hand, the only available supervertex is the F-term vertex of the form (\ref{fvert}), which gives $(\delta\tau)^2D^2 RF^2$ rather than $\delta\tau\delta\overline\tau D^2 RF^2$ coupling. There appears to be no independent 5-point supervertex for $\delta\tau\delta\overline\tau D^2 RF^2$, and the supersymmetric completion of such a coupling can only be a nonlocal superamplitude. Therefore, $f_1$ is determined by the factorization of the 5-point superamplitude into lower point superamplitudes, involving one or two cubic vertices of the type $f_0$ or $f_1$ (Figure \ref{tautaubarRFF}). Thus, we have relations of the form
\ie\label{tre}
4\tau_2^2\partial_\tau \partial_{\bar\tau} f_1(\tau,\bar\tau) = a f_1 + b f_0^2,
\fe
where $a,b$ are constants that are fixed entirely by tree level unitarity and supersymmetry Ward identities.
\begin{figure}[htb]
\centering
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[scale=1.7 ]{ttbRFF1.pdf}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[scale=1.7]{ttbRFF2.pdf}
\end{minipage}
\caption{Factorization of the $\D\tau \D\bar{\tau} R F_+ F_-$ amplitude through lower-point vertices.}
\label{tautaubarRFF}
\end{figure}
Let us compare this with the disc amplitude on D3-branes in type IIB string theory, where $f(t)$ is given by (in string frame) \cite{Hashimoto:1996bf}
\ie
-2{\Gamma(-2t)\over \Gamma(1-t)^2} = t^{-1} + \zeta(2) t+ 2\zeta(3) t^2 + \cdots,
\fe
which, after going to Einstein frame and rescaling the gauge field so that the gauge kinetic term is canonically normalized, corresponds to
\ie\label{stringtreeRF2}
f_{-1} = 1,~~~ f_0 = 0,~~~ f_1 = \zeta(2) \tau_2,~~~ f_2 = 2\zeta(3) \tau_2^{3/2}, ~~~etc.
\fe
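As a sanity check (ours, not part of the original derivation), the small-$t$ expansion of the Gamma function ratio quoted above follows from $\log\Gamma(1-x)=\gamma x+\sum_{k\geq 2}\zeta(k)x^k/k$, and can also be verified numerically:

```python
import math

# Verify -2*Gamma(-2t)/Gamma(1-t)^2 = 1/t + zeta(2)*t + 2*zeta(3)*t^2 + O(t^3)
# for small t, as quoted for the disc amplitude prefactor f(t).
zeta2 = math.pi ** 2 / 6
zeta3 = 1.2020569031595943  # zeta(3), Apery's constant

def f(t):
    return -2.0 * math.gamma(-2.0 * t) / math.gamma(1.0 - t) ** 2

t = 0.01
series = 1.0 / t + zeta2 * t + 2.0 * zeta3 * t ** 2
print(f(t) - series)  # residual of order t^3
```

The residue of the $t^{-1}$ pole is exactly 1, matching $f_{-1}=1$ above.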
As remarked earlier, $f_0=0$ is an exact result in the full quantum effective action for the D3-brane in type IIB string theory. Comparing with (\ref{tre}), we learn that $f_1(\tau,\bar\tau)$ is a harmonic function on the axion-dilaton target space. Knowing its asymptotics in the large $\tau_2$ limit, we can then determine this function by $SL(2,\mathbb{Z})$ invariance.
There is a subtlety here, having to do with non-analytic terms from the open string 1-loop amplitude, that gives rise to a $\log(\tau_2) D^2 RF^2$ term. As a consequence, $f_1(\tau,\bar\tau)$ is only $SL(2,\mathbb{Z})$ invariant up to an additive modular anomaly. This is similar to the modular anomaly of the $R^2$ coefficient, pointed out in \cite{Bachas:1999um,Basu:2008gt} and to be discussed below.
After taking into account the modular anomaly, $f_1$ is unambiguously fixed to be
\ie
f_1(\tau,\bar\tau) = {1\over 2} Z_1(\tau,\bar\tau) = \zeta(2) \tau_2 -{\pi\over 2} \ln\tau_2 + \pi \sum_{m,n=1}^\infty {1\over n} \left( e^{2\pi i mn\tau}+e^{-2\pi i mn\bar\tau}\right).
\fe
Here we denote the non-holomorphic Eisenstein series by $Z_s=2\zeta(2s)E_s$ \cite{Green:1999pv},
\ie
Z_s=\sum_{(m,n)\neq (0,0)} {\tau_2^s\over |m+n\tau|^{2s}},
\fe
which have the weak coupling expansion (for $s\neq 1$),
\ie
Z_s=&2\zeta(2s)\tau_2^s+2\sqrt{\pi}\tau_2^{1-s}{\Gamma(s-1/2)\zeta(2s-1)\over \Gamma(s)}+{\cal O}(e^{-2\pi \tau_2}).
\fe
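As an illustration (our own numerical check, not part of the original text), the weak coupling expansion of $Z_s$ can be verified directly by truncating the lattice sum, here for $s=3/2$ at $\tau=4i$:

```python
import math

# Truncated lattice sum for Z_s at tau = i*tau2, compared with the
# weak coupling expansion 2*zeta(2s)*tau2^s
#   + 2*sqrt(pi)*Gamma(s-1/2)*zeta(2s-1)/Gamma(s)*tau2^(1-s);
# the truncation error of the double sum is of order 1/M.
def Z_truncated(s, tau2, M=400):
    total = 0.0
    for m in range(-M, M + 1):
        for n in range(-M, M + 1):
            if m == 0 and n == 0:
                continue
            total += tau2 ** s / (m * m + (n * tau2) ** 2) ** s
    return total

s, tau2 = 1.5, 4.0
zeta2, zeta3 = math.pi ** 2 / 6, 1.2020569031595943
expansion = (2 * zeta3 * tau2 ** s
             + 2 * math.sqrt(math.pi) * math.gamma(s - 0.5) * zeta2
             / math.gamma(s) * tau2 ** (1 - s))
print(Z_truncated(s, tau2) - expansion)  # small truncation error
```

The instanton corrections ${\cal O}(e^{-2\pi\tau_2})$ are far below the truncation error at $\tau_2=4$.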
For $n=2$, the candidate 5-point D-term supervertex \eqref{5ptDtermRF2} has an ${\cal F}$ which is of degree $0$ in the momenta. In order to achieve the correct little group scaling for $D^4RF^2$, ${\cal F}$ must be a non-constant function of $[12]/\la12\ra$, which would lead to a nonlocal expression in the absence of special kinematics. Therefore we conclude that there is no independent $\D\tau \D\bar\tau D^4RF^2$ supervertex, which again results in a second order differential equation of the form,
\ie
4\tau_2^2\partial_\tau \partial_{\bar\tau} f_2(\tau,\bar\tau) = a f_2 (\tau,\bar\tau)
\fe
where we have used $f_0=0$. The string tree-level amplitude \eqref{stringtreeRF2} fixes $a=3/4$. Combining with $SL(2,\bZ)$ invariance, we have $f_2=Z_{3/2}$. In particular, the perturbative contributions to $D^4 RF^2$ come only from open string tree-level and two-loop orders.
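The eigenvalue $a=3/4$ simply reflects the action of the Laplacian on a power of $\tau_2$: for a function of $\tau_2$ alone, $4\tau_2^2\partial_\tau\partial_{\bar\tau}f=\tau_2^2 f''(\tau_2)$, which on $\tau_2^s$ gives $s(s-1)\tau_2^s$. A quick finite-difference check (our illustration):

```python
# For f depending on tau2 only, 4*tau2^2*d_tau*d_taubar f = tau2^2*f''(tau2);
# acting on tau2^s this gives s*(s-1)*tau2^s, e.g. a = 3/4 for s = 3/2.
def laplacian_eigenvalue(s, tau2, h=1e-4):
    f = lambda x: x ** s
    fpp = (f(tau2 + h) - 2.0 * f(tau2) + f(tau2 - h)) / h ** 2  # central diff
    return tau2 ** 2 * fpp / f(tau2)

print(laplacian_eigenvalue(1.5, 2.0))  # ~ s*(s-1) = 0.75
```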
\subsection{$R^2$ Coupling on the Brane}
Now we turn to $R^2$ coupling on the 3-brane. The F-term supervertices for $n$-point super-graviton coupling to the brane at four-derivative order are given by
\ie
\delta^4(P_\m)\delta^8(\sum_{i=1}^n Q_{i \A I})= \delta^4(P_\m)\delta^8\left( \sum_{i=1}^n \xi_{i\A I}{}^A \eta_{iA} \right)
\fe
and its CPT conjugate. Since there are no four-dimensional particles involved in this amplitude, there is no little group scaling to worry about. These F-term vertices contain $\delta\tau^{n-2} R^2$ and $\delta\overline\tau^{n-2} R^2$ couplings. The mixed $\delta\tau^n\delta\overline\tau^m D^{2k} R^2$ couplings, as part of a local supervertex, can come from D-term supervertices for $k\geq 2$, but not for $k=0,1$. The $\delta\tau\delta\overline\tau R^2$ coupling can only be the soft limit of a 4-point brane-bulk superamplitude, that factorizes through either an $R^2$ vertex or a $D^2 RF^2$ vertex, along with the elementary vertices (Figure \ref{tautaubarRR}).\footnote{A priori, the 4-point brane-bulk superamplitude could factorize through two $RF^2$ type vertices, giving rise to a source term in the differential constraint proportional to $f_0^2$. However as argued before, $f_0=0$ holds to all orders.} The coefficient of $\delta\tau\delta\overline\tau R^2$ is determined by the residues at these poles, thereby related linearly to $R^2$ and $D^2RF^2$ coefficients. We immediately learn that the coefficient $f_R(\tau,\bar\tau)$ of $R^2$ coupling must obey
\ie\label{rteq}
4\tau_2^2 \partial_\tau \partial_{\overline\tau} f_R(\tau,\bar\tau) = a f_R(\tau,\bar\tau) + b f_1(\tau,\bar\tau),
\fe
where $f_1(\tau,\bar\tau)$ is the coefficient of $D^2 RF^2$.
\begin{figure}[htb]
\centering
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[scale=1.5]{ttbRR.pdf}
\end{minipage} \hfill
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[scale=1.5]{ttbRR2.pdf}
\end{minipage}\hfill
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[scale=1.5]{ttbRR3.pdf}
\end{minipage}
\caption{Potential factorizations of the $\D\tau \D\bar{\tau} R^2$ amplitude through lower-point vertices.}
\label{tautaubarRR}
\end{figure}
Let us compare this relation with the perturbative results in type IIB string theory. In the previous subsection we have fixed $f_1(\tau,\bar\tau)$ to be ${1\over 2} Z_1(\tau,\bar\tau)$. $f_R$ receives the contribution $2\zeta(3) \tau_2$ from the disc amplitude \cite{Hashimoto:1996bf}. This gives a linear relation between $a$ and $b$. Modulo the modular anomaly due to non-analytic terms, $f_1$ is a harmonic function, and so $a f_R+bf_1$ is either zero (which implies that $f_R$ is harmonic) or an eigenfunction of the Laplacian operator with eigenvalue $a$. If $a$ is zero, the equation (\ref{rteq}) is incompatible with the tree level result of $f_1$. If $a$ is nonzero, comparison with the tree level answer then implies that $af_R + bf_1$ cannot have an order $\tau_2$ term, and its perturbative expansion in $\tau_2^{-1}$ only contains non-positive powers of $\tau_2$. On the other hand, writing $a = s(s-1)$, then the eigen-modular function $af_R+bf_1$ must have perturbative terms of order $\tau_2^s$ and $\tau_2^{1-s}$, which would lead to a contradiction unless this function is identically zero. In conclusion, $f_R(\tau,\bar\tau)$ is also a harmonic function, and since it should be a modular function modulo the modular anomaly due to a $\log\tau_2$ term coming from the non-analytic terms in the quantum effective action, it is given by the modular completion of its asymptotic expansion at large $\tau_2$, namely $Z_1(\tau,\bar\tau)$. This proves the conjecture of \cite{Basu:2008gt}.
In a similar way, we can derive the supersymmetry constraint on $D^2 R^2$ coupling. The independent $D^2 R^2$ supervertices are
\ie
\delta^4(P) \delta^8(Q_{1aI}+Q_{2aI}) s_{12}^\perp,\quad \delta^4(P) \delta^8(Q_{1aI}+Q_{2aI}) u_{12},
\fe
where $s_{12}^\perp = - (p_1^\perp + p_2^\perp)^2$ and $u_{12}=-4(p_1^\perp)^2+(p_1^\perp + p_2^\perp)^2$,
$p_i^\perp$ being the component of the momentum of the $i$-th particle perpendicular to the 3-brane. F-term $n$-point supervertices give rise to $\delta\tau^{n-2} D^2 R^2$ and $\delta\overline\tau^{n-2} D^2 R^2$ couplings, but $\delta\tau\delta\overline\tau D^2 R^2$ coupling is not part of a local supervertex, and must be the soft limit of a 4-point superamplitude that factorizes through the $D^2 R^2$ vertex. Note that the first D-term supervertex that contributes to the 4-point amplitude starts at the order of $D^4 R^2$ (Figure \ref{D4R2}), and would not affect the $D^2 R^2$ superamplitude. Thus the independent coefficients $f^s_{R,2}(\tau,\bar\tau)$ and $f^u_{R,2}(\tau,\bar\tau)$ of $D^2 R^2$ supervertex obey a second order differential equation of the form
\ie\label{ffd}
4\tau_2^2 \partial_\tau \partial_{\overline\tau} \begin{pmatrix}
f^s_{R,2}(\tau,\bar\tau) \\ f^u_{R,2}(\tau,\bar\tau)
\end{pmatrix} = M \begin{pmatrix}
f^s_{R,2}(\tau,\bar\tau) \\ f^u_{R,2}(\tau,\bar\tau)
\end{pmatrix},~~M\in {\rm Mat}_{2\times 2}(\bR) .
\fe
By comparing with the $D^2 R^2$ term in the disc and annulus 2-graviton amplitude on a D3-brane in type IIB string theory, which is proportional to $\tau_2^{3/2}u_{12} R^2 (1+{\cal O}(\tau_2^{-2}))$ in Einstein frame\footnote{
The open string annulus diagram involves gauge multiplets in the loop joined by two (bare) brane-bulk supervertices of the type $RF^2$. However, the absence of an $RF^2$ supervertex at order $p^4$ implies that the open string annulus diagram gives no contribution to the two point superamplitude of order $D^2 R^2$.} \cite{Hashimoto:1996bf,Basu:2008gt}, we conclude that $M$ has an eigenvector $\left(\begin{smallmatrix} 0 \\1 \end{smallmatrix}\right)$ with eigenvalue $3/4$. Combined with $SL(2,\mathbb{Z})$ invariance, this allows us to determine $f^u_{R,2}=Z_{3/2}$ up to a nonzero constant coefficient. Now the other independent differential constraint is $4\tau_2^2\partial_{\tau}\partial_{\bar \tau} f^s_{R,2}=a f^s_{R,2}+b f_{R,2}^u$. If $b\neq 0$, the leading contribution to $f^s_{R,2}$ in $\tau_2^{-1}$ must be $\tau_2^{3/2}\log\tau_2$ up to a nonzero constant, but such a non-analytic piece cannot appear at tree level in string perturbation theory. Writing $a=s(s-1)$, $f^s_{R,2}$ is then an eigen-modular function with perturbative terms of order $\tau_2^s$ and $\tau_2^{1-s}$. However, since $f^s_{R,2}$ receives no contribution at order $\tau_2^{3/2}$ (tree) and $\tau_2^{1/2}$ (open string one loop), consistency of string perturbation theory demands that $f^s_{R,2}=0$ identically. To sum up, the $D^2R^2$ coupling on the brane is captured by a single eigen-modular function $f^u_{R,2}=Z_{3/2}(\tau,\bar{\tau})$.
\begin{figure}[htb]
\centering
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[scale=1.7]{ttbRR2.pdf}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[scale=1.7 ]{ttbRR5.pdf}
\end{minipage}
\caption{A factorization channel of the $\delta\tau\delta\bar{\tau} D^4 R^2$ amplitude and a D-term supervertex that contributes at the same order.}
\label{D4R2}
\end{figure}
\section{Torus Compactification of 6D $(0,2)$ SCFT}
Let us consider the six dimensional $A_{N-1}$ $(0,2)$ superconformal theory compactified on a torus of modulus $\tau$, to a four dimensional quantum field theory that may be viewed as the $SU(N)$ ${\cal N}=4$ super-Yang-Mills theory, deformed by higher dimensional operators that preserve 16 supercharges and $SO(5)\subset SO(6)$ R-symmetry. We would like to determine these higher dimensional operators.
\subsection{Harmonicity Condition on the Coulomb Branch Effective Action}
A clear way to address this question is to consider the Coulomb phase of the theory, and study the effective action of Abelian gauge multiplets. We will focus on couplings of the form
\ie
f(\tau,\overline\tau, \phi_i, y) F^4,
\fe
where $\phi_i$, $i=1,\cdots,5$ and $y$ constitute the six scalars $\Phi_i$ in the gauge multiplet, with the $\phi_i$ transforming in the vector representation of $SO(5)$. We may view the compactification as first identifying the 6D $A_1$ $(0,2)$ SCFT compactified on a circle with a 5D gauge theory, which is the 5D maximally supersymmetric $SU(2)$ gauge theory up to D-term deformations, and then further compactifying the 5D gauge theory \cite{Seiberg:1997ax,Douglas:2010iu,Lambert:2010iw}. On the Coulomb branch, the scalar $y$ comes from the Wilson line of the Abelian gauge field, and is circle valued.
It is known from \cite{Paban:1998ea} that the $(\phi_i, y)$ dependence is such that $f(\tau,\overline\tau, \phi_i, y)$ is a harmonic function on the moduli space $\mathbb{R}^5\times S^1$. In the amplitude language, as already explained in section 2, this can be argued as follows. Expanding near a point on the Coulomb branch, the only supervertices of the form $(\delta\phi)^2 F^4$ are in the symmetric traceless representations of the local $SO(6)$ R-symmetry, whereas the R-symmetry singlet $(\delta\phi)^2 F^4$ coupling can only be part of a nonlocal amplitude. Unlike the supergravity case, here the Coulomb branch effective theory would be free without the $F^4$ and higher derivative couplings, and the six point amplitude can only factorize into a pair of $F^4$ or higher order supervertices, and in particular cannot have polar terms at the same order in momenta as $(\delta\phi)^2 F^4$. It follows that the $SO(6)$ singlet $(\delta\phi)^2 F^4$ vertex is absent, which is equivalent to the statement that $f(\tau,\bar\tau,\phi_i, y)$ is annihilated by the Laplacian operator on the Coulomb moduli space.
The $(\tau,\overline\tau)$ dependence of the $F^4$ coupling, on the other hand, does not follow from supersymmetry constraints on the low energy effective theory.
As a side comment, if we start with M-theory on a torus that is a product of two circles of radii $R_{10}$ and $R_9$, wrap M5-branes on the torus times $\mathbb{R}^{1,3}$, reduce to type IIA string theory along the circle of radius $R_{10}$ and T-dualize along the other circle, we obtain D3-branes in type IIB string theory with $\tau=i R_{10}/R_9$, compactified on a circle of radius
\ie
\widetilde R = \ell_s {\ell_{11}^{3\over 2}\over R_9 R_{10}^{1\over 2}}
\fe
that is transverse to the D3-branes. Here $\ell_{11}$ is the 11 dimensional Planck length and $\ell_s$ is the string length. To identify the four dimensional world volume theory with the torus compactification of the $(0,2)$ SCFT requires taking the limit $R_9,R_{10} \gg \ell_{11}$, which implies that $\widetilde R\ll \ell_s$. Thus, it is unclear whether the four dimensional gauge theory in question can be coupled to type IIB supergravity, with $\tau$ identified with the dilaton-axion field.
\subsection{Interpolation through the Little String Theory}
Nonetheless, without consideration of coupling to supergravity, we will be able to determine the function $f(\tau,\overline\tau,\phi_i, y)$ completely (including the $\tau,\overline\tau$ dependence) by an interpolation in the Coulomb phase of the torus compactified $(0,2)$ little string theory, in a similar spirit as in \cite{Lin:2015zea}.
Based on the $SO(5)$ symmetry and the harmonicity of $f(\tau,\overline\tau, \phi_i, y)$, we can put it in the form
\ie\label{crho}
f(\tau,\overline\tau, \phi_i, y) = c(\tau,\bar\tau) + \sum_{n\in\mathbb{Z}} \int_0^{2\pi{\cal R}} dv\, {\rho(\tau,\overline\tau, v)\over \left[ |\phi|^2 + (y-v -2\pi n {\cal R} )^2\right]^2}.
\fe
Here $2\pi {\cal R}$ is the periodicity of the field $y$. The constant term $c(\tau,\overline\tau)$ and the source profile $\rho(\tau,\overline\tau, v)$ are yet to be determined functions. Now let us compare this to the Coulomb branch effective action of the $A_1$ $(0,2)$ little string theory (LST) compactified on a torus, of complex modulus $\tau$ and area $L^2$. The Coulomb moduli space ${\cal M}_{LST}$ is parameterized by the expectation values of four scalars $\phi_i$, $i=1,\cdots,4$, a fifth compact scalar $\phi_5$, and the zero mode of the self-dual 2-form potential $A = {1\over 2} A_{\mu\nu}dx^\mu\wedge dx^\nu$, namely
\ie
y = L^{-1} \int_{T^2} A.
\fe
Here we defined $y$ such that it has a canonically normalized kinetic term, and has periodicity $L^{-1}$($\equiv 2\pi{\cal R}$). The compact scalar $\phi_5$, on the other hand, has periodicity $L/\ell_s^2$.\footnote{This comes from the zero mode of a six dimensional compact scalar of periodicity $1/\ell_s^2$, normalized with canonical kinetic term in four dimensions.}
The torus compactified $(0,2)$ superconformal theory is obtained in the limit $\ell_s\to 0$ while keeping $L$ finite. In this limit $\phi_5$ decompactifies while $y$ retains the periodicity $L^{-1}$.
Far away from the origin on the Coulomb branch, the $(0,2)$ LST can be described by the double scaled little string theory, whose string coupling $g_s$ is related to the expectation values of the scalar fields $\phi_i$ (after compactification to four dimensions) through\footnote{To see this identification, we go back to NS5-branes in type IIA string theory, separated in the transverse $\mathbb{R}^4$ by the displacement $\vec x$. The double scaled little string theory (DSLST) is defined by the limit $|\vec x|\to 0$, holding $g_{eff} = g_s^\infty\ell_s/|x|$ fixed, where $g_s^\infty$ is the asymptotic string coupling before taking the decoupling limit. $g_{eff}$ is then identified with the string coupling at the tip of the cigar in the holographic description of DSLST, which we denote by $g_s$. After further compactifying the DSLST on a torus of area $L^2$ to four dimensions, our normalization convention on the scalar fields $\phi_i$ and $y$ is such that $y$ is identified with $(g_s^\infty \ell_s L)^{-1}$ times the displacement of the 5-branes along the M-theory circle, while $\phi_i$ is identified with $(g_s^\infty \ell_s L)^{-1}x_i$. This then fixes the normalization in the relation between $g_s$ of DSLST and $|\phi|$.}
\ie
g_s = {1\over L\sqrt{ \sum_{i=1}^4 \phi_i^2}}.
\fe
Together with the $SO(4)$ symmetry and the harmonicity condition on $\mathbb{R}^4\times T^2$, the coefficient of the $F^4$ term in the LST should take the form
\ie\label{flst}
&f_{LST}(\tau,\overline\tau, \phi_i, y)=
c(\tau,\overline\tau , L/\ell_s)+
\sum_{n,m\in\mathbb{Z}} \int du dv {\rho(\tau,\overline\tau, L/\ell_s, u, v) \over \left[ \sum_{i=1}^4 \phi_i^2 + (\phi_5 - u - m L/\ell_s^2)^2 + (y - v - n/L)^2 \right]^2},
\fe
where $u, v$ are integrated along the $\phi_5$ and $y$ circles in the moduli space. In the weak coupling limit $g_s\rightarrow0$, and therefore large $|\phi_i|$ with $i=1,\cdots,4$, the $F^4$ term in the Coulomb effective action can be computed reliably from the LST perturbation theory. In particular, in the large $\phi_i$ limit, the leading contribution to $f_{LST}$ comes from the tree level scattering amplitude, which scales like $g_s^2 \sim |\phi|^{-2}$, plus corrections of order $e^{-|\phi|}$.\footnote{Note that, from the DSLST perspective, there are no higher order perturbative contributions to $f_{LST}$, but there are non-perturbative contributions. It would be interesting to recover these non-perturbative terms of order $e^{-|\phi|}$ by a D-instanton computation in the $(0,2)$ DSLST on the torus.} This then fixes the constant term $c(\tau,\overline\tau,L/\ell_s)$ to be zero and
\ie
\int du dv \,\rho(\tau,\overline\tau,L/\ell_s, u,v) = 1,
\fe
which is in particular \textit{independent} of $\tau,\overline\tau$.
In the limit
\ie\label{02limit}
\ell_s\to 0,~~~~L, ~\phi_1,\cdots,\phi_5,~y~{\rm finite},
\fe
the $(0,2)$ LST reduces to the $(0,2)$ superconformal theory, and we should recover $SO(5)$ R-symmetry. In this limit, the $F^4$ coefficient \eqref{flst} becomes a harmonic function on $\mathbb{R}^5\times S^1$, thus the source $\rho$ in (\ref{flst}) should be localized at $u=0$. This argument also determines $c(\tau,\overline\tau)=0$ in (\ref{crho}). Next, if we further take the limit
\ie
L\to 0,~~~~\phi_1,\cdots,\phi_5,~y~{\rm finite},
\fe
we should recover four dimensional ${\cal N}=4$ SYM, where the higher dimensional operators (to be discussed below) are suppressed, with the $SO(6)$ R-symmetry restored.
In this limit, the coefficient $f(\tau,\overline\tau,\phi_i,y )$ for the $F^4$ term becomes a harmonic function on $\mathbb{R}^6$,
so we learn that $\rho$ must be supported at $v=0$ as well. Importantly, as stated below (\ref{flst}), the matching with tree level DSLST amplitudes at large $|\phi|$ fixes the overall normalization of $\rho$ to be independent of $\tau,\bar\tau$, hence $\rho(\tau,\bar\tau,\infty, u,v) = \delta(u)\delta(v)$.
Thus, we determine $f(\tau,\overline\tau,\phi_i, y)$ to be given {\it exactly} by (after rescaling all scalar fields by $L/(2\pi)$)
\ie\label{hfunc}
H(\phi_i, y) = \sum_{n\in\mathbb{Z}} {1\over \left[ |\phi|^2 + (y-2\pi n)^2 \right]^2}
\fe
as the coefficient of $F^4$ in the Coulomb branch of the $A_1$ $(0,2)$ SCFT.\footnote{The periodicity of $y$ is either $4\pi$ or $2\pi$, depending on whether the gauge group is $SU(2)$ or $SO(3)$. Here we are considering the case of $SO(3)$, where there is a single singularity on the moduli space at which the $SO(6)$ R-symmetry is restored. The $SU(2)$ case will be considered in \cite{xiyin}.}
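The harmonicity of $H$ on $\mathbb{R}^5\times S^1$ can be made explicit: each summand is $1/r^4$ with $r$ the six dimensional distance to an image point, and $1/r^{d-2}$ is harmonic in $d$ dimensions away from the source. A finite-difference check in six variables (our illustration, with a single image at the origin):

```python
# Each image term of H is 1/r^4 in the 6 coordinates (phi_1..phi_5, y),
# which is harmonic away from the source since 1/r^{d-2} is harmonic in d=6.
def term(x):  # x = (phi_1, ..., phi_5, y), image point at the origin
    r2 = sum(c * c for c in x)
    return 1.0 / r2 ** 2

def laplacian6(g, x, h=1e-3):
    total = 0.0
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        total += (g(xp) + g(xm) - 2.0 * g(x)) / h ** 2
    return total

x = [1.0, 0.2, -0.3, 0.1, 0.5, 0.4]
print(laplacian6(term, x))  # vanishes up to O(h^2) discretization error
```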
The key to the above argument is that while the dependence on $\tau$ (the complexified coupling constant) of the torus compactified $(0,2)$ theory could \textit{a priori} be arbitrarily complicated, the dependence on $\tau$ (the modulus of the target space torus) of the LST tree level scattering amplitude is completely trivial. By interpolating between the weakly coupled $(0,2)$ LST and the $(0,2)$ superconformal field theory, we determine the $\tau$ dependence of the $F^4$ coefficient of the latter theory.
We have implicitly worked in the convention where the gauge fields have canonically normalized kinetic terms. If we work in the more standard field theory convention where the kinetic term for the gauge field is written as $\tau_2 F^2$, then the $F^4$ term acquires a factor $\tau_2^2$, and so we can write
\ie\label{ones}
f(\tau,\overline\tau,\phi_i, y) = \tau_2^2 H(\phi_i, y).
\fe
Let us compare this with our expectation in the large $\tau_2$ regime, where $F^4$ coupling can be computed from 5D maximal SYM compactified on a circle, by integrating out $W$-bosons that carry Kaluza-Klein momenta at 1-loop. As argued in \cite{Lin:2015zea}, the 5D gauge theory obtained by compactifying the $(0,2)$ SCFT (as opposed to little string theory) does not have ${\rm tr} F^4$ operator at the origin of the Coulomb moduli space, thus the 1-loop result from 5D SYM holds in the large $\tau_2$ regime. This indeed reproduces (\ref{ones}).
Near the origin of the Coulomb branch, expanding in $\phi_i$ and in $y$, the term $n=0$ in (\ref{hfunc}) can be understood as the 1-loop $F^4$ term in the Coulomb effective action of ${\cal N}=4$ SYM. The $n\not=0$ terms, which are analytic in the moduli fields at the origin, can be viewed as $F^4$ and higher dimensional operators that deform the ${\cal N}=4$ SYM at the origin. From the expansion
\ie
\sum_{n\not=0} {1\over \left[ |\phi|^2 + (2\pi n-y)^2 \right]^2}
= {\zeta(4)\over 8\pi^4} + {\zeta(6)\over 16\pi^6} \left(5 y^2-|\phi|^2\right) + {\zeta(8)\over 128\pi^8} \left[ 35 y^4 - 42 y^2|\phi|^2 + 3(|\phi|^2)^2\right] + \cdots
\fe
we can read off the operators at the origin of the moduli space,\footnote{Our result (\ref{opex}) disagrees with the proposal of \cite{Douglas:2010iu}, where a different modular weight was assigned to $f(\tau,\bar\tau)$, and the proposed answer has a subleading perturbative term in $\tau_2^{-1}$. One can directly verify, from the circle compactification of 5D SYM, that there are no higher loop contributions to the $F^4$ term through integrating out KK modes. The higher loop corrections to the effective action only appear at $D^2 F^4$ order and above. Furthermore, by a unitarity cut construction it appears that the $F^4$ term in the Coulomb effective action cannot be contaminated by higher dimensional operators in the 5D gauge theory that come from the compactification of the $(0,2)$ SCFT.}
\ie\label{opex}
{\zeta(4)\over 8\pi^4} \tau_2^2 {\cal O}^{(8)} + { 3 \zeta(6)\over 8\pi^6} \tau_2^2 {\cal O}^{(10)}_{66} + \cdots
\fe
Here ${\cal O}^{(8)}$ is the $1/2$ BPS dimension 8 operator that is the supersymmetric completion of ${\rm tr} F^4$, whereas ${\cal O}^{(10)}_{ij}$ is the $1/2$ BPS dimension 10 operator in the symmetric traceless representation of $SO(6)$ R-symmetry, of the form
\ie
{\cal O}^{(10)}_{ij} = {\rm tr} (\Phi_{(i} \Phi_{j)} F^4) - {1\over 6} \delta_{ij} {\rm tr} (|\Phi|^2 F^4) + \cdots
\fe
Likewise, there is a series of higher dimensional $1/2$ BPS operators that transform in higher rank symmetric traceless representations of the R-symmetry. In fact, these are all the BPS (F-term) operators that are Lorentz invariant in the $SU(2)$ maximally supersymmetric gauge theory. In the higher rank case, i.e. torus compactification of $A_r$ $(0,2)$ SCFT for $r>1$, the 4D gauge theory is also deformed by the $1/4$ BPS dimension 10 double trace operator of the form $D^2 {\rm tr}^2 F^4+\cdots$, and analogous higher dimensional operators in nontrivial representations of R-symmetry. These receive contributions from the circle compactified 5D SYM at two-loop order.
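As a numerical sanity check on the Taylor coefficients quoted above, the truncated lattice sum $\sum_{n\neq 0} \left[|\phi|^2 + (2\pi n - y)^2\right]^{-2}$ can be compared against the first two terms of the expansion. The following standalone Python sketch (with an illustrative truncation at $|n|\le 2000$; the tail decays like $n^{-4}$) is a consistency check only, not part of any derivation:

```python
import math

# Numerical check of the small-(y, |phi|) expansion of the n != 0 part of
# H(phi_i, y):  sum_{n != 0} 1 / [|phi|^2 + (2 pi n - y)^2]^2.

def lattice_tail(y, phi2, nmax=2000):
    """Truncated sum over n = -nmax..-1, 1..nmax; phi2 stands for |phi|^2."""
    s = 0.0
    for n in range(1, nmax + 1):
        s += 1.0 / (phi2 + (2.0 * math.pi * n - y) ** 2) ** 2
        s += 1.0 / (phi2 + (-2.0 * math.pi * n - y) ** 2) ** 2
    return s

ZETA4 = math.pi ** 4 / 90.0    # zeta(4)
ZETA6 = math.pi ** 6 / 945.0   # zeta(6)

def series(y, phi2):
    # Leading terms: zeta(4)/(8 pi^4) + zeta(6)/(16 pi^6) * (5 y^2 - |phi|^2)
    return (ZETA4 / (8.0 * math.pi ** 4)
            + ZETA6 / (16.0 * math.pi ** 6) * (5.0 * y ** 2 - phi2))
```

At the origin the truncated sum reproduces $\zeta(4)/(8\pi^4)=1/720$, and for small $y$, $|\phi|$ it agrees with the quadratic terms of the expansion up to the quartic corrections.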
\bigskip
\section*{Acknowledgments}
We would like to thank Clay C\'ordova and Thomas Dumitrescu for discussions. Y.W. is supported in part by the U.S. Department of Energy under grant Contract Number DE-SC00012567. X.Y. is supported by a Sloan Fellowship and a Simons Investigator Award from the Simons Foundation.
\section{Introduction}
In several studies, the interest lies in drawing inference about the regression parameters of a marginal model for correlated, repeated or clustered multinomial variables with ordinal or nominal response categories while the association structure between the dependent responses is of secondary importance. The lack of a convenient multivariate distribution for multinomial responses and the sensitivity of ordinary maximum likelihood methods to misspecification of the association structure led researchers to modify the GEE method of \cite{Liang1986} in order to account for multinomial responses \citep{Miller1993,Lipsitz1994,Williamson1995,Lumley1996,Heagerty1996,Parsons2006}. These GEE approaches estimate the marginal regression parameter vector by solving the same set of estimating equations as in \cite{Liang1986}, but differ in the way they parametrize and/or estimate $\boldsymbol \alpha$, a parameter vector that is usually defined to describe a ``working'' assumption about the association structure.
\cite{Touloumis2012} showed that the joint existence of the estimated marginal regression parameter vector and $\hat{\boldsymbol \alpha}$ cannot be assured in existing approaches. This is because the parameter space of the proposed parameterizations of the association structure depends on the marginal model specification even in the simple case of bivariate multinomial responses. To address this issue, \cite{Touloumis2012} defined $\boldsymbol \alpha$ as a ``nuisance'' parameter vector that contains the marginalized local odds ratios structure, that is, the local odds ratios as if no covariates were recorded, and they employed the family of association models \citep{Goodman1985} to develop parsimonious and meaningful structures regardless of the response scale. The practical advantage of the local odds ratios GEE approach is that it is applicable to both ordinal and nominal multinomial responses without being restricted by the marginal model specification. Simulations in \cite{Touloumis2012} suggest that the local odds ratios GEE approach captures a significant portion of the underlying correlation structure and that, compared to the independence ``working'' model (i.e., assuming no correlation structure in the GEE methodology), simple local odds ratios structures can yield substantial efficiency gains in estimating the regression vector of the marginal model. Note that low convergence rates for the GEE approaches of \cite{Lumley1996} and \cite{Heagerty1996} did not allow the authors to compare these approaches with the local odds ratios GEE approach, while the GEE approach of \cite{Parsons2006} was excluded from the simulation design because its use is restricted to a cumulative logit marginal model specification.
The \proglang{R} \citep{RCoreTeam2013} package \pkg{multgee} implements the local odds ratios GEE approach and it is available from CRAN at \url{http://CRAN.R-project.org/package=multgee}. To emphasize the importance of reflecting the nature of the response scale on the marginal model specification and on the marginalized local odds ratios structure, two core functions are available in \pkg{multgee}: \code{nomLORgee} which is appropriate for GEE analysis of nominal multinomial responses and \code{ordLORgee} which is appropriate for ordinal multinomial responses. In particular, options for the marginal model specification include a baseline category logit model for nominal response categories and a cumulative link model or an adjacent categories logit model for ordinal response categories. In addition, there are three utility functions that enable the user to: i) Perform goodness-of-fit tests between two nested GEE models (\code{waldts}), ii) select the local odds ratios structure based on the rule of thumb discussed in \cite{Touloumis2012} (\code{intrinsic.pars}), and iii) construct a probability table (to be passed in the core functions) that satisfies a desired local odds ratios structure (\code{matrixLOR}).
To appreciate the features of \pkg{multgee}, we briefly review GEE software for multinomial responses in \proglang{SAS} \citep{SAS} and \proglang{R}. The current version of \proglang{SAS} supports only the independence ``working'' model under a marginal cumulative probit or logit model for ordinal multinomial responses. To the best of our knowledge, \proglang{SAS} macros \citep{Williamson1998,Yu2004} implementing the approach of \cite{Williamson1995} are not publicly available. The \proglang{R} package \pkg{repolr} \citep{Parsons2013} implements the approach of \cite{Parsons2006} but it is restricted to using a cumulative logit model. Another option for ordinal responses is the function \code{ordgee} in the \proglang{R} package \pkg{geepack} \citep{Hojsgaard2006}. This function implements the GEE approach of \cite{Heagerty1996} but it seems to produce unreliable results for multinomial responses. To illustrate this, we simulated independent multinomial responses under a cumulative probit model specification with a single time-stationary covariate for each subject and we employed \code{ordgee} to obtain the GEE estimates from the independence ``working'' model. A description of the generative process can be found in Scenario 1 of \cite{Touloumis2012}, except that we used the values $-3,-1,1$ and $3$ for the four category specific intercepts in order to make the problem more evident. Based on $1000$ simulation runs with sample size $N=500$, we found that the bias of the GEE estimate of $\beta=1$ was $\approx 4.8 \times 10^{28}$, indicating the presence of a bug or, at least, of numerical problems in some situations. Similar problems occurred for the alternative global odds ratios structures in \code{ordgee}. In contrast to existing software, \pkg{multgee} offers a greater variety of GEE models for ordinal responses, implements a GEE model for nominal responses and is not limited to the independence ``working'' model, which might lead to significant efficiency losses.
Further, one can assess the goodness of fit for two or more nested GEE models.
This paper is organized as follows. In Section \ref{GEENotation}, we present the theoretical background of the local odds ratios GEE approach that is necessary for the use of \pkg{multgee}. We introduce the marginal models implemented in \pkg{multgee}, the estimation procedure for the `nuisance' parameter vector $\boldsymbol \alpha$ and the asymptotic theory on which GEE inference is based. We describe the arguments of the core GEE functions (\code{nomLORgee}, \code{ordLORgee}) in Section \ref{Description1} while the utility functions (\code{waldts}, \code{intrinsic.pars}, \code{matrixLOR}) are described in Section \ref{Description2}. In Section \ref{Example}, we illustrate the use of \pkg{multgee} in a longitudinal study with correlated ordinal multinomial responses. We summarize the features of the package and provide a few practical guidelines in Section \ref{Summary}.
\section{Local odds ratios GEE approach} \label{GEENotation}
For notational ease, suppose the data arise from a longitudinal study with no missing observations. However, note that the local odds ratios GEE approach is limited neither to longitudinal studies nor to balanced designs, under the strong assumption that missing observations are missing completely at random \citep{Rubin1976}.
Let $Y_{it}$ be the multinomial response for subject $i$ $(i=1,\ldots,N)$ at time $t$ $(t=1,\ldots,T)$ that takes values in $\{1,2,\ldots,J\}$, $J>2$. Define the response vector for subject $i$
$$\mathbf {Y}_{i}=(Y_{i11},\ldots,Y_{i1(J-1)},Y_{i21},\ldots,Y_{i2(J-1)},\ldots,Y_{iT1},\ldots,Y_{iT(J-1)})^{\top},$$
where $Y_{itj}=1$ if the response for subject $i$ at time $t$ falls at category $j$ and $Y_{itj}=0$ otherwise. Denote by $\mathbf{x}_{it}$ the covariates vector associated with $Y_{it}$, and let $\mathbf x_{i}=(\mathbf x^{\top}_{i1},\ldots,\mathbf x^{\top}_{iT})^{\top}$ be the covariates matrix for subject $i$. Define $\pi_{itj}= \E(Y_{itj}|\mathbf x_i)=\Prob(Y_{itj}=1| \mathbf x_i)=\Prob(Y_{it}=j| \mathbf x_i)$ as the probability of the response category $j$ for subject $i$ at time $t$, and let $\boldsymbol \pi_{i}=(\boldsymbol \pi^{\top}_{i1},\ldots,\boldsymbol \pi^{\top}_{iT})^{\top}$ be the mean vector of $\mathbf Y_i$, where $\boldsymbol{\pi}_{it} = (\pi_{it1},\ldots,\pi_{it(J-1)})^{\top}$. It follows from the above that $Y_{itJ}=1-\sum_{j=1}^{J-1} Y_{itj}$ and $\pi_{itJ}=1-\sum_{j=1}^{J-1} \pi_{itj}$.
\subsection{Marginal models for correlated multinomial responses}
The choice of the marginal model depends on the nature of the response scale. For ordinal multinomial responses, the family of cumulative link models
\begin{equation}
F^{-1}\left[\Prob(Y_{it}\leq j|\mathbf x_i)\right]=\beta_{0j}+ {\boldsymbol \beta}_{\ast}^{\top} \mathbf{x}_{it}
\label{ABMCLM}
\end{equation}
or the adjacent categories logit model
\begin{equation}
\log\left(\frac{\pi_{itj}}{\pi_{it(j+1)}} \right)=\beta_{0j}+ {\boldsymbol \beta}_{\ast}^{\top} \mathbf{x}_{it}
\label{ABMACLM}
\end{equation}
can be used, where $F$ is the cumulative distribution function of a continuous distribution and $\{\beta_{0j}:j=1,\ldots,J-1\}$ are the category specific intercepts. For nominal multinomial responses, the baseline category logit model
\begin{equation}
\log\left(\frac{\pi_{itj}}{\pi_{itJ}}\right)=\beta_{0j}+{\boldsymbol {\beta}}_{j}^{\top} \mathbf{x}_{it}
\label{ABMBCLM}
\end{equation}
can be used, where $\boldsymbol {\beta}_{j}$ is the $j$-th category specific parameter vector.
It is worth mentioning that the linear predictor differs in the above marginal models. First, the category specific intercepts need to satisfy a monotonicity condition $\beta_{01}\leq\beta_{02}\leq \ldots \leq \beta_{0(J-1)}$ only when the family of cumulative link models in (\ref{ABMCLM}) is employed. Second, the regression parameter coefficients of the covariates $\mathbf x_{it}$ are category specific only in the marginal baseline category logit model~(\ref{ABMBCLM}) and not in the ordinal marginal models (\ref{ABMCLM}) and (\ref{ABMACLM}).
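To make the cumulative link parametrization concrete, the following standalone Python sketch (illustrative only, not the package's internals) recovers the category probabilities $\pi_{itj}$ from the cutpoints and linear predictor of the logit version of model (\ref{ABMCLM}):

```python
import math

def cumulative_logit_probs(cutpoints, beta, x):
    """Category probabilities under a marginal cumulative logit model:
    logit P(Y <= j | x) = beta_{0j} + beta' x,  j = 1, ..., J - 1.

    cutpoints : nondecreasing category specific intercepts beta_{0j}
    beta, x   : regression coefficients and covariate values
    """
    eta = sum(b * xi for b, xi in zip(beta, x))
    cum = [1.0 / (1.0 + math.exp(-(a + eta))) for a in cutpoints]  # P(Y <= j)
    cum = [0.0] + cum + [1.0]               # P(Y <= 0) = 0 and P(Y <= J) = 1
    return [cum[j + 1] - cum[j] for j in range(len(cum) - 1)]
```

With monotone cutpoints the differences of consecutive cumulative probabilities are nonnegative and sum to one, so a valid probability vector over the $J$ categories is returned.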
\subsection{Estimation of the marginal regression parameter vector}
To unify the notation, let $\boldsymbol \beta$ be the $p$-variate parameter vector that includes all the regression parameters in (\ref{ABMCLM}), (\ref{ABMACLM}) or (\ref{ABMBCLM}). To obtain $\boldsymbol {\widehat \beta_G}$, a GEE estimator of $\boldsymbol \beta$, \cite{Touloumis2012} solved the estimating equations
\begin{equation}
\mathbf{U}(\boldsymbol \beta,\widehat{\boldsymbol \alpha})=\frac{1}{N}\sum_{i=1}^N \mathbf{D}_i^{\top} \mathbf V^{-1}_{i} (\mathbf {Y}_i-\boldsymbol{\pi}_i)=\mathbf{0}
\label{EEbeta}
\end{equation}
where $\mathbf{D}_i=\partial \boldsymbol{\pi}_i/\partial \boldsymbol{\beta}$ and $\mathbf V_i$ is a $T(J-1) \times T(J-1)$ ``weight'' matrix that depends on $\boldsymbol \beta$ and on $\widehat{\boldsymbol \alpha}$, an estimate of the ``nuisance'' parameter vector $\boldsymbol \alpha$ defined formally in Section \ref{Alpha}. Succinctly, $\mathbf V_i$ is a block matrix that mimics the form of $\COV(\mathbf{Y}_i|\mathbf x_i)$, the true covariance matrix for subject $i$. The $t$-th diagonal block of $\mathbf V_i$ is the covariance matrix of $Y_{it}$ determined by the marginal model. The $(t,t^{\prime})$-th off-diagonal block describes the marginal pseudo-association of $(Y_{it},Y_{it^{\prime}})$, which is a function of the marginal model and of the pseudo-probabilities $\{\Prob(Y_{it}=j,Y_{it^{\prime}}=j^{\prime}|\mathbf x_i):j,j^{\prime}=1,\ldots,J-1\}$ calculated based on $(\widehat{\boldsymbol \alpha},\boldsymbol \beta)$. We should emphasize that $\mathbf V_i$ is a ``weight'' matrix because $\boldsymbol \alpha$ is defined as a ``nuisance'' parameter vector and it is unlikely to describe a valid ``working'' assumption about the association structure for all subjects.
\subsection{Estimation of the nuisance parameter vector and of the weight matrix} \label{Alpha}
Order the $L=T(T-1)/2$ time-pairs with the rightmost element of the pair most rapidly varying as $(1,2),(1,3),\ldots,(T-1,T)$, and let $G$ be the group variable with levels the $L$ ordered pairs. For each time-pair $(t,t^{\prime})$, ignore the covariates and cross-classify the responses across subjects to form a $J \times J$ contingency table such that the row totals correspond to the observed totals at time $t$ and the column totals to the observed totals at time $t^{\prime}$, and let $\theta_{tjt^{\prime}j^{\prime}}$ be the local odds ratio at the cutpoint $(j,j^{\prime})$ based on the expected frequencies $\{f_{tjt^{\prime}j^{\prime}}:j,j^{\prime}=1,\ldots,J\}$. For notational reasons, let $A$ and $B$ be the row and column variables, respectively. Assuming a Poisson sampling scheme for the $L$ $J \times J$ contingency tables, fit the RC-G(1) type model \citep{Becker1989a}
\begin{equation}
\log f_{tjt^{\prime}j^{\prime}}=\lambda+\lambda^{A}_{j}+\lambda^{B}_{j^{\prime}}+\lambda^{G}_{(t,t^{\prime})}+\lambda^{AG}_{j(t,t^{\prime})}+\lambda^{BG}_{j^{\prime}(t,t^{\prime})}+\phi^{(t,t^{\prime})}\mu^{(t,t^{\prime})}_j \mu^{(t,t^{\prime})}_{j^{\prime}},
\label{RCGmodel}
\end{equation}
where $\{\mu^{(t,t^{\prime})}_{j}:j=1,\ldots,J\}$ are the score parameters for the $J$ response categories at the time-pair $(t,t^{\prime})$.
After imposing identifiability constraints on the regression parameters in (\ref{RCGmodel}), the log local odds ratios structure is given by
\begin{equation}
\log \theta_{tjt^{\prime}j^{\prime}}=\phi^{(t,t^{\prime})}\left(\mu^{(t,t^{\prime})}_{j}-\mu^{(t,t^{\prime})}_{j+1}\right)\left(\mu^{(t,t^{\prime})}_{j^{\prime}}-\mu^{(t,t^{\prime})}_{j^{\prime}+1}\right).
\label{RCstructure2}
\end{equation}
At each time-pair, (\ref{RCstructure2}) summarizes the local odds ratios structure in terms of the $J$ score parameters and the intrinsic parameter $\phi^{(t,t^{\prime})}$ that measures the average association of the marginalized contingency table. Since the score parameters do not need to be fixed or monotonic, the local odds ratios structure is applicable to both nominal and ordinal multinomial responses.
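For illustration, the structure in (\ref{RCstructure2}) is straightforward to evaluate once the intrinsic and score parameters of a time-pair are given. The Python sketch below (with hypothetical parameter values) also shows that unit-spaced scores collapse it to a constant, i.e., the uniform structure:

```python
def log_local_odds_ratios(phi, mu):
    """(J-1) x (J-1) table of log theta_{jj'} for one time-pair under the
    RC structure:
        log theta_{jj'} = phi * (mu_j - mu_{j+1}) * (mu_{j'} - mu_{j'+1}).
    phi : intrinsic parameter, mu : list of J score parameters."""
    J = len(mu)
    return [[phi * (mu[j] - mu[j + 1]) * (mu[k] - mu[k + 1])
             for k in range(J - 1)]
            for j in range(J - 1)]
```

With unit-spaced scores $\mu_j=j$ every consecutive difference equals $-1$, so all entries reduce to $\phi$; non-equispaced scores give a genuinely two-dimensional pattern of local odds ratios.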
\cite{Touloumis2012} defined $\boldsymbol \alpha$ as the parameter vector that contains the marginalized local odds ratios structure
$$\boldsymbol \alpha=\left(\theta_{1121},\ldots,\theta_{1(J-1)2(J-1)},\ldots,\theta_{(T-1)1T1},\ldots,\theta_{(T-1)(J-1)T(J-1)}\right)^{\top}$$
where $\theta_{tjt^{\prime}j^{\prime}}$ satisfy (\ref{RCstructure2}). To increase the parsimony of the local odds ratios structures for ordinal responses, they proposed to use common unit-spaced score parameters $\left(\mu^{(t,t^{\prime})}_{j}=j\right)$ and/or common intrinsic parameters $\left(\phi^{(t,t^{\prime})}=\phi\right)$ across time-pairs. For a nominal response scale, they proposed to apply a homogeneity constraint on the score parameters $\left(\mu^{(t,t^{\prime})}_{j}=\mu_{j}\right)$ and use common intrinsic parameters across time-pairs. To estimate $\boldsymbol \alpha$, maximum likelihood methods are employed, treating the $L$ marginalized contingency tables as independent. Technical details and justification of this estimation procedure can be found in \cite{Touloumis2011a} and \cite{Touloumis2012}.
Conditional on the estimated marginalized local odds ratios structure $\widehat{\boldsymbol \alpha}$ and the marginal model specification at times $t$ and $t^{\prime}$, $\{\Prob(Y_{it}=j,Y_{it^{\prime}}=j^{\prime}|\mathbf x_i):t<t^{\prime},j,j^{\prime}=1,\ldots,J-1\}$ are obtained as the unique solution of the iterative proportional fitting (IPF) procedure \citep{Deming1940}. Hence, $\mathbf V_i$ can be readily calculated and the estimating equations in (\ref{EEbeta}) can be solved with respect to $\boldsymbol \beta$.
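The IPF step admits a compact standalone sketch: starting from any table with the target local odds ratios, alternately rescaling rows and columns toward the model-implied marginals converges to the unique solution, because the rescalings leave all local odds ratios unchanged. The implementation below is illustrative only (the package handles this step internally):

```python
def ipf(target, row_marg, col_marg, tol=1e-8, max_iter=200):
    """Iterative proportional fitting: rescale `target` (J x J) to the
    prescribed row and column marginals while preserving its local odds
    ratios. Returns the fitted table as nested lists."""
    J = len(target)
    cur = [row[:] for row in target]
    for _ in range(max_iter):
        for i in range(J):                      # match row marginals
            rs = sum(cur[i])
            for j in range(J):
                cur[i][j] *= row_marg[i] / rs
        for j in range(J):                      # match column marginals
            cs = sum(cur[i][j] for i in range(J))
            for i in range(J):
                cur[i][j] *= col_marg[j] / cs
        # column marginals are now exact; stop when rows also match
        if max(abs(sum(cur[i]) - row_marg[i]) for i in range(J)) < tol:
            break
    return cur
```

A quick check on a $2\times 2$ seed with local odds ratio $2/3$ confirms that the fitted table matches the requested marginals while retaining that odds ratio.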
\subsection{Asymptotic properties of the GEE estimator}
Given $\widehat{\boldsymbol \alpha}$, inference about $\boldsymbol \beta$ is based on the fact that $\sqrt{N}(\boldsymbol {\widehat\beta}_G-\boldsymbol \beta)\sim \mathrm{N}(\mathbf 0,\boldsymbol {\Sigma})$ asymptotically,
where
\begin{equation}
\boldsymbol {\Sigma}=\lim_{N\to\infty} N \boldsymbol {\Sigma}_0^{-1} \boldsymbol {\Sigma}_1 \boldsymbol {\Sigma}_0^{-1},
\label{RobustCovariance}
\end{equation}
$\boldsymbol {\Sigma}_0=\sum_{i=1}^N \mathbf{D}_i^{\top} \mathbf {V}^{-1}_{i} \mathbf{D}_i$ and $\boldsymbol {\Sigma}_1=\sum_{i=1}^N \mathbf{D}_i^{\top} \mathbf {V}^{-1}_{i} \COV(\mathbf{Y}_i|\mathbf x_i) \mathbf {V}^{-1}_{i} \mathbf{D}_i$. For finite sample sizes, $\boldsymbol {\Sigma}$ is estimated by $\widehat{\boldsymbol {\Sigma}}$, obtained by ignoring the limit in (\ref{RobustCovariance}) and replacing $\boldsymbol \beta$ with $\boldsymbol {\widehat \beta}_G$ and $\COV(\mathbf{Y}_i|\mathbf x_i)$ with $(\mathbf {Y}_i-\widehat{\boldsymbol{\pi}}_i)(\mathbf {Y}_i-\widehat{\boldsymbol{\pi}}_i)^{\top}$ in $\boldsymbol {\Sigma}_0$ and $\boldsymbol {\Sigma}_1$. In the literature, $\widehat{\boldsymbol {\Sigma}}/N$ is often termed the ``sandwich'' or ``robust'' covariance matrix of $\boldsymbol {\widehat \beta}_G$.
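In matrix form, the sandwich estimate is simple to assemble from per-subject quantities. The Python/NumPy sketch below (illustrative shapes and names, not the package code) mirrors the definitions of $\boldsymbol{\Sigma}_0$ and $\boldsymbol{\Sigma}_1$ with the residuals $\mathbf r_i = \mathbf Y_i - \widehat{\boldsymbol\pi}_i$ plugged in for $\COV(\mathbf Y_i|\mathbf x_i)$:

```python
import numpy as np

def sandwich_cov(D, Vinv, resid):
    """Robust ('sandwich') covariance, Sigma0^{-1} Sigma1 Sigma0^{-1}, with
        Sigma0 = sum_i D_i' Vinv_i D_i
        Sigma1 = sum_i D_i' Vinv_i r_i r_i' Vinv_i D_i,  r_i = y_i - pi_i.

    D     : list of (T(J-1) x p) derivative matrices D_i
    Vinv  : list of inverted weight matrices V_i^{-1}
    resid : list of residual vectors r_i
    """
    p = D[0].shape[1]
    S0 = np.zeros((p, p))
    S1 = np.zeros((p, p))
    for Di, Vi, ri in zip(D, Vinv, resid):
        bread = Di.T @ Vi                  # D_i' V_i^{-1}
        S0 += bread @ Di
        S1 += bread @ np.outer(ri, ri) @ bread.T
    S0inv = np.linalg.inv(S0)
    return S0inv @ S1 @ S0inv
```

For a single cluster with identity weight matrix the result reduces to $(\mathbf D^{\top}\mathbf D)^{-1}\mathbf D^{\top}\mathbf r\mathbf r^{\top}\mathbf D(\mathbf D^{\top}\mathbf D)^{-1}$, the familiar heteroskedasticity-robust form.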
\section{Description of core functions}\label{Description1}
We describe the arguments of the functions \code{nomLORgee} and \code{ordLORgee}, focusing on the marginal model specification (\code{formula}, \code{link}), data representation (\code{id}, \code{repeated}, \code{data}) and local odds ratios structure specification (\code{LORstr}, \code{LORterm}, \code{homogeneous}, \code{restricted}). For completeness' sake, we also present computational related arguments (\code{LORem}, \code{add}, \code{bstart}, \code{LORgee.control}, \code{ipfp.control}, \code{IM}). The two core functions share the same arguments, except \code{link} and \code{restricted} which are available only in \code{ordLORgee}, and they both create an object of the class \code{LORgee} which admits \code{summary}, \code{coef}, \code{update} and \code{residuals} methods.
\subsection{Marginal model specification}
For ordinal multinomial responses, the \code{link} argument in the function \code{ordLORgee} specifies which of the marginal models (\ref{ABMCLM}) or (\ref{ABMACLM}) will be fitted. The options \code{\textquotedbl{logit}\textquotedbl}, \code{\textquotedbl{probit}\textquotedbl}, \code{\textquotedbl{cauchit}\textquotedbl} or \code{\textquotedbl{cloglog}\textquotedbl} indicate the corresponding cumulative distribution function $F$ in the cumulative link model (\ref{ABMCLM}), while the option \code{\textquotedbl{acl}\textquotedbl} implies that the adjacent categories logit model (\ref{ABMACLM}) is selected. For nominal multinomial responses, the function \code{nomLORgee} fits the baseline category logit model (\ref{ABMBCLM}), and hence the \code{link} argument is not offered.
The \code{formula} (\code{=response~covariates}) argument identifies the multinomial response variable (\code{response}) and specifies the form of the linear predictor (\code{covariates}), assuming that this includes an intercept term. If required, the $J>2$ observed response categories are sorted in an ascending order and then mapped onto $\{1,2,\ldots,J\}$. To account for a covariate \code{x} with a constrained parameter coefficient fixed to 1 in the linear predictor, the term \code{offset(x)} must be inserted on the right hand side of \code{formula}.
\subsection{Data representation}\label{Data Representation}
The \code{id} argument identifies the $N$ subjects by assigning a unique label to each subject. If required, the observed \code{id} labels are sorted in an ascending order and then relabeled as $1,\ldots,N$, respectively.
The \code{repeated} argument identifies the times at which the multinomial responses are recorded by treating the $T$ unique observed times in the same manner as in \code{id}. The purpose of \code{repeated} is dual: To identify the $T$ distinct time points and to construct the full marginalized contingency table for each time-pair by aggregating the relevant/available responses across subjects. The \code{repeated} argument is optional and it can be safely ignored in balanced designs or in unbalanced designs in which if the $t$-th response is missing for a particular subject then all subsequent responses at times $t^{\prime}>t$ are missing for that subject. Otherwise, it is recommended to provide the \code{repeated} argument in order to ensure proper construction of the full marginalized contingency table. To this end, note that if the measurement occasions are not recorded in a numerical mode, then the user should create \code{repeated} by mapping the $T$ distinct measurement occasions onto the set $\{1,\ldots,T\}$ in such a way that the temporal order of the measurement occasions is preserved. For example, if the measurements occasions are recorded as ``before'', ``baseline'', ``after'', then the levels for \code{repeated} should be coded as $1,2$ and $3$, respectively.
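The recoding of non-numeric measurement occasions described above amounts to a simple order-preserving map. A minimal Python sketch, using the ``before''/``baseline''/``after'' labels from the example:

```python
# Map the distinct measurement occasions, listed in temporal order,
# onto 1, ..., T to build the `repeated` variable for a long-format dataset.
occasions = ["before", "baseline", "after"]              # temporal order
code = {occ: t + 1 for t, occ in enumerate(occasions)}   # before->1, ..., after->3

observed = ["baseline", "before", "after", "baseline"]   # one entry per data row
repeated = [code[occ] for occ in observed]
```

Any order-preserving integer coding works; only the relative ordering of the $T$ occasions matters for identifying the time-pairs.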
The dataset is imported via the \code{data} argument in ``long'' format, meaning that each row contains all the information provided by a subject at a given measurement occasion. This implies that \code{data} must include the variables specified in the mandatory arguments \code{formula} and \code{id}, as well as the optional argument \code{repeated} when this is specified by the user. If no \code{data} is provided then the above variables are extracted from the \code{environment} that \code{nomLORgee} and \code{ordLORgee} are called. Currently missing observations, identified by \code{NA} in \code{data}, are ignored.
\subsection{Marginalized local odds ratios structure specification}
The marginalized local odds ratios structure is specified via the \code{LORstr} argument. Table~\ref{tab:LOR} displays the structures proposed by \cite{Touloumis2012}. Currently the default option is the time exchangeability structure (\code{\textquotedbl{time.exch}\textquotedbl}) in \code{nomLORgee} and the category exchangeability structure (\code{\textquotedbl{category.exch}\textquotedbl}) in \code{ordLORgee}. The uniform (\code{\textquotedbl{uniform}\textquotedbl}) and category exchangeability structures are not allowed in \code{nomLORgee} because fixed unit-spaced score parameters are not meaningful for nominal response categories.
The user can also fit the independence ``working'' model (\code{LORstr}=\code{\textquotedbl{independence}\textquotedbl}) or even provide the local odds ratios structure (\code{LORstr=\textquotedbl{fixed}\textquotedbl}) using the \code{LORterm} argument. In this case, an $L \times J^2$ matrix must be constructed such that the $g$-th row contains the vectorized form of a probability table that satisfies the desired local odds ratios structure at the time-pair corresponding to the $g$-th level of $G$.
\cite{Touloumis2011a} discussed two further versions of the \code{\textquotedbl{time.exch}\textquotedbl} and the RC (\code{\textquotedbl{RC}\textquotedbl}) structures based on using: i) Heterogeneous score parameters (\code{homogeneous}=\code{FALSE}) at each time-pair, and/or ii) monotone score parameters (\code{restricted}=\code{TRUE}), an option applicable only for ordinal response categories. However, it is sensible to employ these additional options only when the local odds ratios structures in Table~\ref{tab:LOR} do not seem adequate.
It is important to mention that the user must provide only the arguments required for the specified local odds ratios structure. For example, the arguments \code{homogeneous}, \code{restricted} and \code{LORterm} are ignored when \code{LORstr=\textquotedbl{uniform}\textquotedbl}.
\begin{table}
\centering
\begin{tabular}{cccc}
\hline
\hline
$\log \theta_{tjt^{\prime}j^{\prime}}$ & \code{LORstr} & Functions & Parameters \\
\hline
$\phi$ & \code{\textquotedbl{uniform}\textquotedbl} & \code{ordLORgee} & 1\\
$\phi^{(t,t^{\prime})}$ &\code{\textquotedbl{category.exch}\textquotedbl} & \code{ordLORgee} &L\\
$\phi \left(\mu_{j}-\mu_{j+1}\right)\left(\mu_{j^{\prime}}-\mu_{j^{\prime}+1}\right)$ & \code{\textquotedbl{time.exch}\textquotedbl} & Both & $J-1$ \\
$\phi^{(t,t^{\prime})}\left(\mu^{(t,t^{\prime})}_{j}-\mu^{(t,t^{\prime})}_{j+1}\right)\left(\mu^{(t,t^{\prime})}_{j^{\prime}}-\mu^{(t,t^{\prime})}_{j^{\prime}+1}\right)$ & \code{\textquotedbl{RC}\textquotedbl} & Both & $L(J-1)$ \\
\hline
\end{tabular}
\caption{The main options for the marginalized local odds ratios structures in \pkg{multgee}.}
\label{tab:LOR}
\end{table}
\subsection{Computational details}
The default estimation procedure for the marginalized local odds ratios structure is to fit model (\ref{RCGmodel}) to the full marginalized contingency table (\code{LORem=\textquotedbl{3way}\textquotedbl}) after imposing the desired restrictions on the intrinsic and the score parameters. \cite{Touloumis2011a} noticed that the estimated local odds ratios structure under model (\ref{RCGmodel}) is identical to that obtained by independently fitting a row-column (RC) effects model \citep{Goodman1985} with homogeneous score parameters to each of the $L$ contingency tables. Motivated by this, an alternative estimation procedure (\code{LORem=\textquotedbl{2way}\textquotedbl}) was proposed for the structures \code{\textquotedbl{uniform}\textquotedbl} and \code{\textquotedbl{time.exch}\textquotedbl}. In particular, one can estimate the single parameter of the \code{\textquotedbl{uniform}\textquotedbl} structure as the average of the $L$ intrinsic parameters $\phi^{(t,t^{\prime})}$ obtained by fitting the linear-by-linear association model \citep{Agresti2002} independently to each of the $L$ marginalized contingency tables. For the \code{\textquotedbl{time.exch}\textquotedbl} structure, one can fit $L$ RC effects models with homogeneous (\code{homogeneous=TRUE}) or heterogeneous (\code{homogeneous=FALSE}) score parameters and then estimate the log local odds ratio at each cutpoint $(j,j^{\prime})$ by averaging $\log \hat{\theta}_{tjt^{\prime}j^{\prime}}$ over $t<t^{\prime}$. Regardless of the value of \code{LORem}, the appropriate model for counts is fitted via the function \code{gnm} of the \proglang{R} package \pkg{gnm} \citep{Turner2012}.
In the presence of zero observed counts, a small positive constant can be added (\code{add}) to each cell of the marginalized contingency table to ensure the existence of $\widehat{\boldsymbol \alpha}$. We conjecture that a constant of magnitude $10^{-4}$ will serve this purpose without affecting the strength of the association structure.
A Fisher scoring algorithm is employed to solve the estimating equations (\ref{EEbeta}) as in \cite{Lipsitz1994}. The only difference is that now $\widehat{\boldsymbol{\alpha}}$ is not updated. The default way to obtain the initial value for $\boldsymbol \beta$ is via the function \code{vglm} of the \proglang{R} package \pkg{VGAM} \citep{Yee2010}. Alternatively, the initial value can be provided by the user (\code{bstart}). The Fisher scoring algorithm converges when the elementwise maximum relative change in two consecutive estimates of $\boldsymbol \beta$ is less than or equal to a predefined positive constant $\epsilon$. The \code{control} argument controls the related iterative procedure variables and printing options. The default maximum number of iterations is $15$ and the default tolerance is $\epsilon=0.001$.
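The stopping rule can be stated compactly; one common formalization of the ``elementwise maximum relative change'' criterion (an illustrative sketch, not the package's internal code) is:

```python
def fisher_scoring_converged(beta_old, beta_new, eps=0.001):
    """Stop when the largest elementwise relative change between two
    consecutive estimates of beta falls at or below eps (default 0.001).
    A small floor guards against division by a near-zero coefficient."""
    rel = max(abs(bn - bo) / max(abs(bo), 1e-12)
              for bo, bn in zip(beta_old, beta_new))
    return rel <= eps
```

The same check is applied to the full parameter vector, so a single slowly moving coefficient is enough to keep the algorithm iterating up to the maximum number of iterations.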
Recall that calculation of the ``weight'' matrix $\mathbf V_i$ at given values of $(\boldsymbol \beta,\boldsymbol \alpha)$ relies on the IPF procedure. The \code{ipfp.ctrl} argument controls the related variables. The convergence criterion is the maximum of the absolute difference between the fitted and the target row and column marginals. By default, the tolerance of the IPF procedure is $10^{-6}$ with a maximal number of iterations equal to 200.
The \code{IM} argument defines which of the \proglang{R} functions \code{solve}, \code{qr.solve} or \code{cholesky} will be used to invert matrices in the Fisher scoring algorithm.
\section{Description of utility functions}\label{Description2}
The function \code{waldts} performs a goodness-of-fit test for two nested GEE models based on a Wald test statistic. Let $\mathrm{M_0}$ and $\mathrm{M_1}$ be two nested GEE models with marginal regression parameter vectors $\boldsymbol \beta_0$ and $\boldsymbol \beta_1=(\boldsymbol \beta_0^{\top},\boldsymbol \beta^{\top}_q)^{\top}$, respectively. Define a matrix $\mathbf C$ such that $\mathbf C \boldsymbol \beta_1=\boldsymbol \beta_q$. Here $q$ equals the rank of $\mathbf C$ and the dimension of $\boldsymbol \beta_q$. The hypothesis
$$H_0: \boldsymbol \beta_q=0 \text{ vs } H_1: \boldsymbol \beta_q \neq 0$$
tests the goodness-of-fit of $\mathrm{M_0}$ versus $\mathrm{M_1}$. Based on a Wald type approach, $H_0$ is rejected at significance level $\alpha$ if $(\mathbf C \widehat{\boldsymbol \beta})^{\top} (\mathbf C \widehat{\boldsymbol \Sigma}\mathbf C^{\top}/N)^{-1}(\mathbf C \widehat{\boldsymbol \beta}) \geq X_{q}(\alpha)$, where $\widehat{\boldsymbol \beta}$ and $\widehat{\boldsymbol \Sigma}$ are estimated under model $\mathrm{M_1}$ and $X_{q}(\alpha)$ denotes the $\alpha$ upper quantile of a chi-square distribution with $q$ degrees of freedom.
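The test statistic itself is a small quadratic form. The NumPy sketch below (illustrative only; it takes the estimated covariance matrix of $\widehat{\boldsymbol\beta}$ directly as an input) computes it for a given contrast matrix $\mathbf C$:

```python
import numpy as np

def wald_statistic(beta_hat, cov_beta, C):
    """Wald quadratic form (C b)' [C Cov(b) C']^{-1} (C b) for H0: C beta = 0.
    Compare the result against the upper chi-square quantile with rank(C)
    degrees of freedom.

    beta_hat : estimated regression vector (p,)
    cov_beta : estimated covariance matrix of beta_hat (p x p)
    C        : contrast matrix (q x p)
    """
    C = np.atleast_2d(C)
    Cb = C @ beta_hat
    middle = np.linalg.inv(C @ cov_beta @ C.T)
    return float(Cb @ middle @ Cb)
```

For a single contrast the statistic reduces to the familiar squared $z$-score of the tested coefficient.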
\cite{Touloumis2012} suggested selecting the local odds ratios structure by inspecting the range of the $L$ estimated intrinsic parameters under the \code{\textquotedbl{category.exch}\textquotedbl} structure for ordinal responses, or under the \code{\textquotedbl{RC}\textquotedbl} structure for nominal responses. If the estimated intrinsic parameters do not differ much, then the underlying marginalized local odds ratios structure is likely nearly exchangeable across time-pairs. In this case, the simpler structures \code{\textquotedbl{uniform}\textquotedbl} or \code{\textquotedbl{time.exch}\textquotedbl} should be preferred because they tend to be as efficient as the more complicated ones. The function \code{intrinsic.pars} returns the estimated intrinsic parameter of each time-pair.
The single-argument function \code{matrixLOR} creates a two-way probability table that satisfies a desired local odds ratios structure. This function aims to ease the construction of the \code{LORterm} argument in the core functions \code{nomLORgee} and \code{ordLORgee}.
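For the simplest two-level case, a table with a prescribed local odds ratio can be written down by hand; the sketch below is only an illustrative analogue of what \code{matrixLOR} automates for larger tables (the function name is ours).

```python
import numpy as np

def table_with_odds_ratio(psi):
    """Return a 2x2 probability table whose single local odds ratio is psi:
    cells proportional to [[psi, 1], [1, 1]], normalized to sum to one."""
    t = np.array([[float(psi), 1.0], [1.0, 1.0]])
    return t / t.sum()
```

Normalization cancels in the cross-product ratio, so the odds ratio of the returned table equals \code{psi} exactly.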
\section{Example}\label{Example}
To illustrate the main features of the package \pkg{multgee}, we follow the GEE analysis performed in \cite{Touloumis2012}. The data came from a randomized clinical trial \citep{Lipsitz1994} that aimed to evaluate the effectiveness of the drug Auranofin versus the placebo therapy for the treatment of rheumatoid arthritis. The five-level (1=poor, \ldots, 5=very good) ordinal multinomial response variable was the self-assessment of rheumatoid arthritis recorded at one ($t=1$), three ($t=2$) and five ($t=3$) follow-up months. To acknowledge the ordinal response scale, the marginal cumulative logit model
\begin{align}
\log \left(\frac{\Prob(Y_{it}\leq j|\mathbf x_i)}{1-\Prob(Y_{it}\leq j|\mathbf x_i)}\right)&=\beta_{0j}+\beta_1 I(time_i=3)+\beta_2 I(time_i=5) +\beta_3 trt_i \nonumber\\
&+\beta_4 I(b_i=2)+\beta_5 I(b_i=3)+\beta_6 I(b_i=4)+\beta_7 I(b_i=5)
\label{MarginalModelData}
\end{align}
was fitted, where $i=1,\ldots,301$, $t=1,2,3$, $j=1,2,3,4$ and $I(A)$ is the indicator function for the event $A$. Here $\mathbf x_i$ denotes the covariate matrix for subject $i$ that includes the self-assessment of rheumatoid arthritis at the baseline ($b_i$), the treatment variable ($trt_i$), coded as $(1)$ for the placebo group and $(2)$ for the drug group, and the follow-up time recorded in months ($time_i$).
The GEE analysis is performed in two steps. First, we select the marginalized local odds ratios structure by estimating the intrinsic parameters under the \code{\textquotedbl{category.exch}\textquotedbl} structure
\begin{CodeChunk}
\begin{CodeInput}
R> library("multgee")
R> data("arthritis")
R> head(arthritis)
R> intrinsic.pars(y = y, data = arthritis, id = id, repeated = time,
+ rscale = "ordinal")
\end{CodeInput}
\begin{CodeOutput}
0.6517843 0.9097341 0.9022272
\end{CodeOutput}
\end{CodeChunk}
The range of the estimated intrinsic parameters is small ($\approx 0.26$), which suggests that the underlying marginalized association pattern is nearly constant across time-pairs. Thus, we expect the \code{\textquotedbl{uniform}\textquotedbl} structure to adequately capture the underlying association pattern. Note that we passed the time variable to the \code{repeated} argument because this numerical variable indicates the measurement occasion at which each observation was recorded.
Now we fit the cumulative logit model (\ref{MarginalModelData}) under the \code{\textquotedbl{uniform}\textquotedbl} structure via the function \code{ordLORgee}
\begin{CodeChunk}
\begin{CodeInput}
R> fit <- ordLORgee(formula = y ~ factor(time) + factor(trt) + factor(baseline),
+ link = "logit", id = id, repeated = time, data = arthritis,
+ LORstr = "uniform")
R> summary(fit)
\end{CodeInput}
\begin{CodeOutput}
GEE FOR ORDINAL MULTINOMIAL RESPONSES
version 1.4 modified 2013-12-01
Link : Cumulative logit
Local Odds Ratios:
Structure: uniform
Model: 3way
call:
ordLORgee(formula = y ~ factor(time) + factor(trt) + factor(baseline),
data = arthritis, id = id, repeated = time, link = "logit",
LORstr = "uniform")
Summary of residuals:
Min. 1st Qu. Median Mean 3rd Qu. Max.
-0.5161000 -0.2399000 -0.0749700 0.0000219 -0.0066990 0.9933000
Number of Iterations: 5
Coefficients:
Estimate san.se san.z Pr(>|san.z|)
beta01 -1.84315 0.38929 -4.7346 < 2e-16 ***
beta02 0.26692 0.35013 0.7624 0.44585
beta03 2.23132 0.36625 6.0924 < 2e-16 ***
beta04 4.52542 0.42123 10.7434 < 2e-16 ***
factor(time)3 0.00140 0.12183 0.0115 0.99080
factor(time)5 -0.36172 0.11395 -3.1743 0.00150 **
factor(trt)2 -0.51212 0.16799 -3.0486 0.00230 **
factor(baseline)2 -0.66963 0.38036 -1.7605 0.07832 .
factor(baseline)3 -1.26070 0.35252 -3.5763 0.00035 ***
factor(baseline)4 -2.64373 0.41282 -6.4041 < 2e-16 ***
factor(baseline)5 -3.96613 0.53164 -7.4602 < 2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Local Odds Ratios Estimates:
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12]
[1,] 0.000 0.000 0.000 0.000 2.257 2.257 2.257 2.257 2.257 2.257 2.257 2.257
[2,] 0.000 0.000 0.000 0.000 2.257 2.257 2.257 2.257 2.257 2.257 2.257 2.257
[3,] 0.000 0.000 0.000 0.000 2.257 2.257 2.257 2.257 2.257 2.257 2.257 2.257
[4,] 0.000 0.000 0.000 0.000 2.257 2.257 2.257 2.257 2.257 2.257 2.257 2.257
[5,] 2.257 2.257 2.257 2.257 0.000 0.000 0.000 0.000 2.257 2.257 2.257 2.257
[6,] 2.257 2.257 2.257 2.257 0.000 0.000 0.000 0.000 2.257 2.257 2.257 2.257
[7,] 2.257 2.257 2.257 2.257 0.000 0.000 0.000 0.000 2.257 2.257 2.257 2.257
[8,] 2.257 2.257 2.257 2.257 0.000 0.000 0.000 0.000 2.257 2.257 2.257 2.257
[9,] 2.257 2.257 2.257 2.257 2.257 2.257 2.257 2.257 0.000 0.000 0.000 0.000
[10,] 2.257 2.257 2.257 2.257 2.257 2.257 2.257 2.257 0.000 0.000 0.000 0.000
[11,] 2.257 2.257 2.257 2.257 2.257 2.257 2.257 2.257 0.000 0.000 0.000 0.000
[12,] 2.257 2.257 2.257 2.257 2.257 2.257 2.257 2.257 0.000 0.000 0.000 0.000
pvalue of Null model: <0.0001
\end{CodeOutput}
\end{CodeChunk}
The \code{summary} method summarizes the fit of the GEE model including the GEE estimates, their estimated standard errors based on the ``sandwich'' covariance matrix and the $p$-values from testing the statistical significance of each regression parameter in (\ref{MarginalModelData}). The estimated marginalized local odds ratios structure can be found in a symmetric $T(J-1) \times T(J-1)$ block matrix written symbolically as
$$\begin{bmatrix}
\begin{array}{cccc}
\mathbf 0 &\boldsymbol{\Theta}_{12} &\ldots &\boldsymbol{\Theta}_{1T} \\
\boldsymbol{\Theta}_{21} &\mathbf 0 &\ldots &\boldsymbol{\Theta}_{2T} \\
\ldots &\ldots &\ddots & \ldots \\
\boldsymbol{\Theta}_{T1} &\boldsymbol{\Theta}_{T2} &\ldots &\mathbf 0
\end{array}
\end{bmatrix}.$$
Each block denotes a $(J-1) \times (J-1)$ matrix. The ($j,j^{\prime}$)-th element of the off-diagonal block $\boldsymbol{\Theta}_{tt^{\prime}}$ represents the estimate of $\theta_{tjt^{\prime}j^{\prime}}$. Based on the properties of the local odds ratios, it is easy to see that $\boldsymbol{\Theta}_{tt^{\prime}}=\boldsymbol{\Theta}^{\top}_{t^{\prime}t}$ for $t<t^{\prime}$. Finally, the diagonal blocks are zero to reflect the fact that no local odds ratios are estimated when $t=t^{\prime}$. In our example, $J=5$ and thus each block is a $4 \times 4$ matrix. Since the \code{uniform} structure is selected, all local odds ratios are equal and estimated as $2.257$. Finally, \code{pvalue of Null model} corresponds to the $p$-value of testing the hypothesis that no covariate is significant, i.e., $\beta_1=\beta_2=\beta_3=\beta_4 =\beta_5=\beta_6=\beta_7=0$, based on a Wald test statistic.
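The printed matrix above can be reproduced structurally in a few lines (a sketch, not package code): with $T=3$, $J=5$ and a common estimate of $2.257$, every off-diagonal $4 \times 4$ block is constant and every diagonal block is zero.

```python
import numpy as np

def uniform_lor_matrix(T, J, theta):
    """Block matrix of marginalized local odds ratios under the 'uniform'
    structure: constant off-diagonal blocks, zero diagonal blocks."""
    k = J - 1
    M = np.full((T * k, T * k), float(theta))
    for t in range(T):
        M[t * k:(t + 1) * k, t * k:(t + 1) * k] = 0.0  # no LORs when t = t'
    return M
```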
The goodness-of-fit of model (\ref{MarginalModelData}) can be tested by comparing it to a marginal cumulative logit model that additionally contains the age and gender main effects in the linear predictor
\begin{CodeChunk}
\begin{CodeInput}
R> fit1 <- update(fit, formula = ~. + factor(sex) + age)
R> waldts(fit, fit1)
\end{CodeInput}
\begin{CodeOutput}
Goodness of Fit based on the Wald test
Model under H_0: y ~ factor(time) + factor(trt) + factor(baseline)
Model under H_1: y ~ factor(time) + factor(trt) + factor(baseline) + factor(sex) +
age
Wald Statistic=3.9554, df=2, p-value=0.1384
\end{CodeOutput}
\end{CodeChunk}
\section{Summary and practical guidelines}\label{Summary}
We described the \proglang{R} package \pkg{multgee}, which implements the local odds ratios GEE approach \citep{Touloumis2012} for correlated multinomial responses. Unlike existing GEE software, \pkg{multgee} allows GEE models for ordinal (\code{ordLORgee}) and nominal (\code{nomLORgee}) responses. The available local odds ratios structures (\code{LORstr}) in each function respect the nature of the response scale, preventing the use of ordinal local odds ratios structures (e.g., \code{\textquotedbl{uniform}\textquotedbl}) in \code{nomLORgee}. The fitted GEE model is summarized via the \code{summary} method, while the estimated regression coefficients can be retrieved via the \code{coef} method. The statistical significance of the regression parameters can be assessed via the function \code{waldts}. A strategy similar to that presented in Section \ref{Example} can be adopted to analyze GEE models for correlated nominal multinomial responses.
From a practical point of view, we recommend the \code{\textquotedbl{uniform}\textquotedbl} structure for ordinal responses and the \code{\textquotedbl{time.exch}\textquotedbl} structure for nominal responses, especially when the range of the estimated intrinsic parameters (\code{intrinsic.pars}) is small. Based on our experience, convergence problems might occur as the complexity of the local odds ratios structure increases and/or if the marginalized contingency tables are very sparse. Two possible solutions are to adopt a simpler local odds ratios structure or to increase slightly the value of the constant added to the marginalized contingency tables (\code{add}). However, we believe that users should refrain from using the independence `working' model unless the aforementioned strategies fail to remedy the convergence problems. To decide on the form of the linear predictor, variable selection procedures based on the function \code{waldts} could be employed.
In future versions of \pkg{multgee}, we plan to permit time-dependent intercepts in the marginal models, to increase the range of the marginal models, by including, for example, the family of continuation-ratio models for ordinal responses, and to offer a function for assessing the proportional odds assumption in models (\ref{ABMCLM}) and (\ref{ABMACLM}).
\bibliographystyle{jss}
\section{Introduction\protect\footnote{This paper has been updated from the original version, primarily to include results on BERT \citep{devlin2018bert}. See Appendix~\ref{appendix:camera-ready-changes} for a detailed list of changes.}}
Pretrained word embeddings \citep{mikolov2013distributed,pennington2014glove} are a staple tool for NLP. These models provide continuous representations for word types, typically learned from co-occurrence statistics on unlabeled data, and improve generalization of downstream models across many domains. Recently, a number of models have been proposed for {\em contextualized} word embeddings. Instead of using a single, fixed vector per word type, these models run a pretrained encoder network over the sentence to produce contextual embeddings of each token. The encoder, usually an LSTM \citep{hochreiter1997long} or a Transformer \citep{vaswani2017attention}, can be trained on objectives like machine translation \citep{mccann2017learned} or language modeling \citep{peters2018deep,radford2018improving,howard2018universal,devlin2018bert}, for which large amounts of data are available. The activations of this network--a collection of one vector per token--fit the same interface as conventional word embeddings, and can be used as a drop-in replacement input to any model.
Applied to popular models, this technique has yielded significant improvements to the state-of-the-art on several tasks, including constituency parsing \citep{Kitaev-Klein:2018:SelfAttentiveParser}, semantic role labeling \citep{he2018jointly,strubell2018linguistically}, and coreference \citep{lee2018higher}, and has outperformed competing techniques \citep{kiros2015skip,conneau2017supervised} that produce fixed-length representations for entire sentences.
Our goal in this work is to understand where these contextual representations improve over conventional word embeddings. Recent work has explored many token-level properties of these representations, such as their ability to capture part-of-speech tags \citep{blevins2018hierarchical,belinkov2017evaluating,shi2016does}, morphology \citep{belinkov2017neural,belinkov2017evaluating}, or word-sense disambiguation \citep{peters2018deep}. \citet{peters2018dissecting} extends this to constituent phrases, and present a heuristic for unsupervised pronominal coreference. We expand on this even further and introduce a suite of \textit{edge probing} tasks covering a broad range of syntactic, semantic, local, and long-range phenomena. In particular, we focus on asking what information is encoded at each position, and how well it encodes structural information about that word's role in the sentence. Is this information primarily syntactic in nature, or do the representations also encode higher-level semantic relationships? Is this information local, or do the encoders also capture long-range structure?
\begin{figure}[!t]
\centering
\def0.7\columnwidth{0.7\columnwidth}
\input{figures/edgeprobe-model.pdf_tex}
\caption{Probing model architecture (\S~\ref{sec:model}). All parameters inside the dashed line are fixed, while we train the span pooling and MLP classifiers to extract information from the contextual vectors. The example shown is for semantic role labeling, where $s^{(1)} = [1,2)$ corresponds to the predicate (``eat"), while $s^{(2)} = [2,5)$ is the argument (``strawberry ice cream"), and we predict label \texttt{A1} as positive and others as negative. For entity and constituent labeling, only a single span is used.}
\label{fig:edgeprobe-model}
\end{figure}
We approach these questions with a probing model (Figure \ref{fig:edgeprobe-model}) that sees only the contextual embeddings from a fixed, pretrained encoder. The model can access only embeddings within given spans, such as a predicate-argument pair, and must predict properties, such as semantic roles, which typically require whole-sentence context. We use data derived from traditional structured NLP tasks: tagging, parsing, semantic roles, and coreference. Common corpora such as OntoNotes \citep{weischedel2013ontonotes} provide a wealth of annotations for well-studied concepts which are both linguistically motivated and known to be useful intermediates for high-level language understanding. We refer to our technique as ``edge probing'', as we decompose each structured task into a set of graph edges (\S~\ref{sec:data}) which we can predict independently using a common classifier architecture (\S~\ref{sec:model})\footnote{Our code is publicly available at \url{https://github.com/jsalt18-sentence-repl/jiant}.}.
We probe four popular contextual representation models (\S~\ref{sec:representation-models}): CoVe \citep{mccann2017learned}, ELMo \citep{peters2018deep}, OpenAI GPT \citep{radford2018improving}, and BERT \citep{devlin2018bert}.
We focus on these models because their pretrained weights and code are available, since these are most likely to be used by researchers. We compare to word-level baselines to separate the contribution of context from lexical priors, and experiment with augmented baselines to better understand the role of pretraining and the ability of encoders to capture long-range dependencies.
\section{Edge Probing}
\label{sec:data}
To carry out our experiments,
we define a novel ``edge probing'' framework motivated
by the need for a uniform set of metrics and architectures across tasks.
Our framework is generic, and can be applied to any task that can be represented as a labeled graph anchored to spans in a sentence.
\paragraph{Formulation.}
Formally, we represent a sentence as a list of tokens $T = [t_0, t_1, \ldots, t_n]$, and a labeled edge as $\{s^{(1)}, s^{(2)}, L\}$. We treat $s^{(1)} = [i^{(1)}, j^{(1)})$ and, optionally, $s^{(2)} = [i^{(2)}, j^{(2)})$ as (end-exclusive) spans. For unary edges such as constituent labels, $s^{(2)}$ is omitted. We take $L$ to be a set of zero or more targets from a task-specific label set $\mathcal{L}$.
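Concretely, an edge target can be represented as end-exclusive token spans plus a label set; the class below is an illustrative data container for this format, not the released code's schema.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Edge:
    """One edge-probing target; spans are end-exclusive [i, j) token ranges.
    span2 is omitted (None) for unary edges such as constituent labels."""
    span1: Tuple[int, int]
    labels: List[str]
    span2: Optional[Tuple[int, int]] = None

# The SRL example from Figure 1: predicate "eat", argument "strawberry ice cream".
tokens = ["I", "eat", "strawberry", "ice", "cream"]
edge = Edge(span1=(1, 2), span2=(2, 5), labels=["A1"])
predicate = tokens[edge.span1[0]:edge.span1[1]]
argument = tokens[edge.span2[0]:edge.span2[1]]
```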
To cast all tasks into a common classification model, we focus on the \textit{labeling} versions of each task. Spans (gold mentions, constituents, predicates, etc.) are given as inputs, and the model is trained to predict $L$ as a multi-label target. We note that this is only one component of the common pipelined (or end-to-end) approach to these tasks, and that in general our metrics are not comparable to models that jointly perform span \textit{identification} and labeling. However, since our focus is on analysis rather than application, the labeling version is a better fit for our goals of isolating individual phenomena of interest, and giving a uniform metric -- binary F1 score -- across our probing suite.
\begin{table}[t]
\begin{center}
\input{tables/corpus_examples.tex}
\end{center}
\caption{Example sentence, spans, and target label for each task. O = OntoNotes, W = Winograd.}
\label{tab:examples}
\end{table}
\subsection{Tasks}
Our experiments focus on eight core NLP labeling tasks: part-of-speech, constituents, dependencies, named entities, semantic roles, coreference, semantic proto-roles, and relation classification. The tasks and their respective datasets are described below, and also detailed in Table~\ref{tab:examples} and Appendix~\ref{appendix:dataset-stats}.
\textbf{Part-of-speech tagging (POS)} is the syntactic task of assigning tags such as noun, verb, adjective, etc. to individual tokens. We let $s_1 = [i,i+1)$ be a single token, and seek to predict the POS tag.
\textbf{Constituent labeling} is the more general task concerned with assigning a non-terminal label for a span of tokens within the phrase-structure parse of the sentence: e.g. is the span a noun phrase, a verb phrase, etc. We let $s_1 = [i,j)$ be a known constituent, and seek to predict the constituent label.
\textbf{Dependency labeling} is similar to constituent labeling, except that rather than aiming to position a span of tokens within the phrase structure, dependency labeling seeks to predict the functional relationships of one token relative to another: e.g. is in a modifier-head relationship, a subject-object relationship, etc. We take $s_1 = [i,i+1)$ to be a single token and $s_2 = [j,j+1)$ to be its syntactic head, and seek to predict the dependency relation between tokens $i$ and $j$.
\textbf{Named entity labeling} is the task of predicting the category of an entity referred to by a given span, e.g. does the entity refer to a person, a location, an organization, etc. We let $s_1 = [i,j)$ represent an entity span and seek to predict the entity type.
\textbf{Semantic role labeling (SRL)} is the task of imposing predicate-argument structure onto a natural language sentence: e.g. given a sentence like \textit{``Mary pushed John''}, SRL is concerned with identifying \textit{``Mary''} as the pusher and \textit{``John''} as the pushee. We let $s_1 = [i_1,j_1)$ represent a known predicate and $s_2 = [i_2, j_2)$ represent a known argument of that predicate, and seek to predict the role that the argument $s_2$ fills--e.g. \texttt{ARG0} (agent, the \textit{pusher}) vs. \texttt{ARG1} (patient, the \textit{pushee}).
\textbf{Coreference} is the task of determining whether two spans of tokens (``mentions'') refer to the same entity (or event): e.g. in a given context, do \textit{``Obama''} and \textit{``the former president''} refer to the same person, or do \textit{``New York City''} and \textit{``there''} refer to the same place. We let $s_1$ and $s_2$ represent known mentions, and seek to make a binary prediction of whether they co-refer.
\textbf{Semantic proto-role (SPR)} labeling is the task of annotating fine-grained, non-exclusive semantic attributes, such as \texttt{change\_of\_state} or \texttt{awareness}, over predicate-argument pairs.
E.g. given the sentence \textit{``Mary pushed John''}, whereas SRL is concerned with identifying \textit{``Mary''} as the pusher, SPR is concerned with identifying attributes such as \texttt{awareness} (whether the pusher is \textit{aware} that they are doing the pushing). We let $s_1$ represent a predicate span and $s_2$ a known argument head, and perform a multi-label classification over potential attributes of the predicate-argument relation.
\textbf{Relation Classification (Rel.)} is the task of predicting the real-world relation that holds between two entities, typically given an inventory of symbolic relation types (often from an ontology or database schema). For example, given a sentence like \textit{``Mary is walking to work''}, relation classification is concerned with linking \textit{``Mary''} to \textit{``work''} via the \texttt{Entity-Destination} relation. We let $s_1$ and $s_2$ represent known mentions, and seek to predict the relation type.
\subsection{Datasets}
We use the annotations in the OntoNotes 5.0 corpus \citep{weischedel2013ontonotes} for five of the above eight tasks: POS tags, constituents, named entities, semantic roles, and coreference. In all cases, we simply cast the original annotation into our edge probing format. For POS tagging, we simply extract these labels from the constituency parse data in OntoNotes. For coreference, since OntoNotes only provides annotations for positive examples (pairs of mentions that corefer) we generate negative examples by generating all pairs of mentions that are not explicitly marked as coreferent.
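The negative-example construction for coreference amounts to pairing every mention with every other and labeling a pair positive only when both sit in the same gold cluster; a minimal sketch (helper names are ours):

```python
from itertools import combinations

def coref_pairs(mentions, clusters):
    """All mention pairs with binary labels: 1 if the two mentions share a
    gold cluster, 0 otherwise (the zeros are the generated negatives)."""
    cluster_of = {m: ci for ci, cl in enumerate(clusters) for m in cl}
    pairs = []
    for m1, m2 in combinations(mentions, 2):
        same = (m1 in cluster_of and m2 in cluster_of
                and cluster_of[m1] == cluster_of[m2])
        pairs.append((m1, m2, int(same)))
    return pairs
```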
The OntoNotes corpus does not contain annotations for dependencies, proto-roles, or semantic relations. Thus, for dependencies, we use the English Web Treebank portion of the Universal Dependencies 2.2 release \citep{silveira14gold}. For SPR, we use two datasets, one (SPR1; \citet{teichert2017semantic}) derived from Penn Treebank and one (SPR2; \citet{rudinger2018sprl}) derived from English Web Treebank. For relation classification, we use the SemEval 2010 Task 8 dataset \citep{hendrickx2009semeval}, which consists of sentences sampled from English web text, labeled with a set of 9 directional relation types.
In addition to the OntoNotes coreference examples, we include an extra ``challenge'' coreference dataset based on the Winograd schema \citep{levesque2011winograd}. Winograd schema problems focus on cases of pronoun resolution which are syntactically ambiguous and thus are intended to require subtler semantic inference in order to resolve correctly (see example in Table \ref{tab:examples}). We use the version of the Definite Pronoun Resolution (DPR) dataset \citep{rahman2012resolving} employed by \citet{white2017inference}, which contains balanced positive and negative pairs.
\section{Experimental Set-Up}
\subsection{Probing Model}
\label{sec:model}
Our probing architecture is illustrated in Figure~\ref{fig:edgeprobe-model}. The model is designed to have limited expressive power on its own, so as to focus on what information can be extracted from the contextual embeddings. We take a list of contextual vectors $[e_0, e_1, \ldots, e_n]$ and integer spans $s^{(1)} = [i^{(1)}, j^{(1)})$ and (optionally) $s^{(2)} = [i^{(2)}, j^{(2)})$ as inputs, and use a projection layer followed by the self-attention pooling operator of \citet{lee2017end} to compute fixed-length span representations. Pooling is only within the bounds of a span, e.g. the vectors $[e_i, e_{i+1}, \ldots, e_{j-1}]$, which means that the only information our model can access about the rest of the sentence is that provided by the contextual embeddings.
The span representations are concatenated and fed into a two-layer MLP followed by a sigmoid output layer. We train by minimizing binary cross-entropy against the target label set $L \in \{0,1\}^{|\mathcal{L}|}$. Our code is implemented in PyTorch \citep{paszke2017automatic} using the AllenNLP \citep{gardner2018allennlp} toolkit. For further details on training, see Appendix~\ref{appendix:model-details}.
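The pooling step can be sketched in plain numpy (a simplification of the Lee et al. operator; the scoring vector stands in for learned parameters): attention scores are computed only over tokens inside the span, so no information outside the span leaks in.

```python
import numpy as np

def attn_span_pool(vectors, span, w):
    """Self-attentive pooling over [i, j): score each in-span token with a
    learned vector w, softmax within the span only, return the weighted sum."""
    i, j = span
    sub = vectors[i:j]               # only in-span vectors are visible
    scores = sub @ w
    scores = scores - scores.max()   # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()
    return alpha @ sub
```

With $w = 0$ this reduces to mean pooling over the span, a useful sanity check.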
\subsection{Sentence Representation Models}
\label{sec:representation-models}
We explore four recent contextual encoder models: CoVe, ELMo, OpenAI GPT, and BERT.
Each model takes tokens $[t_0, t_1, \ldots, t_n]$ as input and produces a list of contextual vectors $[e_0, e_1, \ldots, e_n]$.
\textbf{CoVe} \citep{mccann2017learned} uses the top-level activations of a two-layer biLSTM trained on English-German translation, concatenated with 300-dimensional GloVe vectors. The source data consists of 7 million sentences from web crawl, news, and government proceedings (WMT 2017; \citet{WMT:2017}).
\textbf{ELMo} \citep{peters2018deep} is a two-layer bidirectional LSTM language model, built over a context-independent character CNN layer and trained on the Billion Word Benchmark dataset \citep{chelba2014one}, consisting primarily of newswire text. We follow standard usage and take a linear combination of the ELMo layers, using learned task-specific scalars \citep[Equation 1 of ][]{peters2018deep}.
\textbf{GPT} \citep{radford2018improving} is a 12-layer Transformer \citep{vaswani2017attention} encoder trained as a left-to-right language model on the Toronto Books Corpus \citep{zhu2015aligning}. Departing from the original authors, we do not fine-tune the encoder\footnote{We note that there may be information not easily accessible without fine-tuning the encoder weights. This can be easily explored within our framework, e.g. using the techniques of \cite{howard2018universal} or \cite{radford2018improving}. We leave this to future work, and hope that our code release will facilitate such continuations.}.
\textbf{BERT} \citep{devlin2018bert} is a deep Transformer \citep{vaswani2017attention} encoder trained jointly as a masked language model and on next-sentence prediction, trained on the concatenation of the Toronto Books Corpus \citep{zhu2015aligning} and English Wikipedia. As with GPT, we do not fine-tune the encoder weights. We probe the publicly released \texttt{bert-base-uncased} (12-layer) and \texttt{bert-large-uncased} (24-layer) models\footnote{\citet{devlin2018bert} recommend the \texttt{cased} BERT models for named entity \textit{recognition} tasks; however, we find no difference in performance on our entity \textit{labeling} variant and so report all results with \texttt{uncased} models.}.
For BERT and GPT, we compare two methods for yielding contextual vectors for each token: \textbf{\texttt{cat}} where we concatenate the subword embeddings with the activations of the top layer, similar to CoVe, and \textbf{\texttt{mix}} where we take a linear combination of layer activations (including embeddings) using learned task-specific scalars \citep[Equation 1 of ][]{peters2018deep}, similar to ELMo.
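The \texttt{mix} combination is just a softmax-weighted sum of layer activations scaled by a global $\gamma$ (Equation 1 of Peters et al., 2018); a numpy sketch:

```python
import numpy as np

def scalar_mix(layers, s, gamma=1.0):
    """ELMo-style scalar mixing: softmax-normalize task-specific scalars s
    over the layers (embeddings included), then scale the sum by gamma."""
    s = np.asarray(s, dtype=float)
    w = np.exp(s - s.max())
    w = w / w.sum()
    mixed = sum(wk * np.asarray(layer, dtype=float)
                for wk, layer in zip(w, layers))
    return gamma * mixed
```

In the probing setup the scalars $s$ and $\gamma$ are the only encoder-side parameters trained per task, so the probe can reweight layers but not alter them.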
The resulting contextual vectors have dimension $d = 900$ for CoVe, $d = 1024$ for ELMo, and $d = 1536$ (\texttt{cat}) or $d = 768$ (\texttt{mix}) for GPT and BERT-base, and $d = 2048$ (\texttt{cat}) or $d = 1024$ (\texttt{mix}) for BERT-large\footnote{For further details, see Appendix~\ref{appendix:representation-models}.}. The pretrained models expect different tokenizations and input processing. We use a heuristic alignment algorithm based on byte-level Levenshtein distance, explained in detail in Appendix~\ref{appendix:retokenization}, in order to re-map spans from the source data to the tokenization expected by the above models.
\section{Experiments}
\label{sec:experiments}
Again, we want to answer: What do contextual representations encode that conventional word embeddings do not? Our experimental comparisons, described below, are intended to ablate various aspects of contextualized encoders in order to illuminate how the model captures different types of linguistic information.
\paragraph{Lexical Baselines.}\label{sec:lex-baselines}
In order to probe the effect of each \textit{contextual} encoder, we train a version of our probing model directly on the most closely related context-independent word representations. This baseline measures the performance that can be achieved from lexical priors alone, without any access to surrounding words. For CoVe, we compare to the embedding layer of that model, which consists of 300-dimensional GloVe vectors trained on 840 billion tokens of CommonCrawl (web) text. For ELMo, we use the activations of the context-independent character-CNN layer (layer 0) from the full model. For GPT and for BERT, we use the learned subword embeddings from the full model.
\paragraph{Randomized ELMo.}\label{sec:random-elmo} Randomized neural networks have recently \citep{zhang2018} shown surprisingly strong performance on many tasks, suggesting that architecture may play a significant role in learning useful feature functions. To help understand what is actually \textit{learned} during the encoder pretraining, we compare with a version of the ELMo model in which all weights above the lexical layer (layer 0) are replaced with random orthonormal matrices\footnote{This includes both LSTM cell weights and projection matrices between layers. Non-square matrices are orthogonal along the smaller dimension.}.
\paragraph{Word-Level CNN.} To what extent do contextual encoders capture long-range dependencies, versus simply modeling local context? We extend our lexical baseline by introducing a fixed-width convolutional layer on top of the word representations. As comparing to the lexical baseline factors out word-level priors, comparing to this CNN baseline factors out local relationships, such as the presence of nearby function words, and allows us to see the contribution of long-range context to encoder performance. To implement this, we replace the projection layer in our probing model with a fully-connected CNN that sees $\pm 1$ or $\pm 2$ tokens around the center word (i.e. kernel width 3 or 5).
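The CNN baseline can be pictured as follows (weights are random here; in the probe they are trained): each output position depends only on a $\pm 1$ token window, so any headroom the full encoder has over this baseline must come from longer-range context.

```python
import numpy as np

def cnn_baseline(vectors, W, width=3):
    """Fixed-width word-level CNN: position t sees only tokens within
    +/- (width // 2) of t (zero padding at the edges), then a ReLU."""
    n, d = vectors.shape
    half = width // 2
    padded = np.vstack([np.zeros((half, d)), vectors, np.zeros((half, d))])
    windows = np.stack([padded[t:t + width].reshape(-1) for t in range(n)])
    return np.maximum(windows @ W, 0.0)
```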
\section{Results}
\label{sec:results}
Using the above experimental design, we return to the central questions originally posed. That is, what types of syntactic and semantic information does each model encode at each position? And is the information captured primarily local, or do contextualized embeddings encode information about long-range sentential structure?
\begin{table}[t]
\begin{center}
\input{tables/model_comparison.tex}
\end{center}
\caption{Comparison of representation models and their respective lexical baselines. Numbers reported are micro-averaged F1 score on respective test sets. \textbf{Lex.} denotes the lexical baseline (\S~\ref{sec:lex-baselines}) for each model, and bold denotes the best performance on each task. Lines in \textit{italics} are subsets of the targets from a parent task; these are omitted in the macro average. SRL numbers consider core and non-core roles, but ignore references and continuations. Winograd (DPR) results are the average of five runs each using a random sample (without replacement) of 80\% of the training data. 95\% confidence intervals (normal approximation) are approximately $\pm3$ ($\pm6$ with BERT-large) for Winograd, $\pm1$ for SPR1 and SPR2, and $\pm0.5$ or smaller for all other tasks.}
\label{tab:comparison-of-models}
\end{table}
\paragraph{Comparison of representation models.}
We report F1 scores for ELMo, CoVe, GPT, and BERT in Table~\ref{tab:comparison-of-models}.
We observe that ELMo and GPT (with \texttt{mix} features) have comparable performance, with ELMo slightly better on most tasks but the Transformer scoring higher on relation classification and OntoNotes coreference. Both models outperform CoVe by a significant margin (6.3 F1 points on average), meaning that the information in their word representations makes it easier to recover details of sentence structure.
It is important to note that while ELMo, CoVe, and the GPT can be applied to the same problems, they differ in architecture, training objective, and both the quantity and genre of training data (\S~\ref{sec:representation-models}). Furthermore, on all tasks except for Winograd coreference, the lexical representations used by the ELMo and GPT models outperform GloVe vectors (by 5.4 and 2.4 points on average, respectively). This is particularly pronounced on constituent and semantic role labeling, where the model may be benefiting from better handling of morphology by character-level or subword representations.
We observe that using ELMo-style scalar mixing (\texttt{mix}) instead of concatenation improves performance significantly (1-3 F1 points on average) on both deep Transformer models (BERT and GPT). We attribute this to the most relevant information being contained in intermediate layers, which agrees with observations by \citet{blevins2018hierarchical}, \citet{peters2018deep}, and \citet{devlin2018bert}, and with the finding of \citet{peters2018dissecting} that top layers may be overly specialized to perform next-word prediction.
When using scalar mixing (\texttt{mix}), we observe that the BERT-base model outperforms GPT, which has a similar 12-layer Transformer architecture, by approximately 2 F1 points on average. The 24-layer BERT-large model performs better still, besting BERT-base by 1.1 F1 points and ELMo by 2.7 F1 points, a nearly 20\% relative reduction in error on most tasks.
We find that the improvements of the BERT models are not uniform across tasks. In particular, BERT-large improves on ELMo by 7.4 F1 points on OntoNotes coreference, more than a 40\% reduction in error and nearly as high as the improvement of the ELMo encoder over its lexical baseline. We also see a large improvement (7.8 F1 points)\footnote{On average; the DPR dataset has high variance and we observe a mix of runs which score in the mid-50s and the high-60s F1.} on Winograd-style coreference from BERT-large in particular, suggesting that deeper unsupervised models may yield further improvement on difficult semantic tasks.
\paragraph{Genre Effects.} \label{sec:genre}
Our probing suite is drawn mostly from newswire and web text (\S~\ref{sec:data}). This is a good match for the Billion Word Benchmark (BWB) used to train the ELMo model, but a weaker match for the Books Corpus used to train the published GPT model. To control for this, we train a clone of the GPT model on the BWB, using the code and hyperparameters of \citet{radford2018improving}. We find that this model performs only slightly better (+0.15 F1 on average) on our probing suite than the Books Corpus-trained model, but still underperforms ELMo by nearly 1 F1 point.
\paragraph{Encoding of syntactic vs. semantic information.}
By comparing to lexical baselines, we can measure how much the contextual information from a particular encoder improves performance on each task. Note that in all cases, the contextual representation is strictly more expressive, since it includes access to the lexical representations either by concatenation or by scalar mixing.
We observe that ELMo, CoVe, and GPT all follow a similar trend across our suite (Table~\ref{tab:comparison-of-models}), showing the largest gains on tasks which are considered to be largely syntactic, such as dependency and constituent labeling, and smaller gains on tasks which are considered to require more semantic reasoning, such as SPR and Winograd. We observe small absolute improvements (+6.3 and +3.5 for ELMo Full vs. Lex.) on part-of-speech tagging and entity labeling, but note that this is likely due to the strength of word-level priors on these tasks. Relative reduction in error is much higher (+66\% for Part-of-Speech and +44\% for Entities), suggesting that ELMo does encode local type information.
Semantic role labeling benefits greatly from contextual encoders overall, but this is predominantly due to better labeling of core roles (+19.0 F1 for ELMo) which are known to be closely tied to syntax (e.g. \citet{punyakanok2008importance,gildea2002necessity}). The lexical baseline performs similarly on core and non-core roles (74 and 75 F1 for ELMo), but the more semantically-oriented non-core role labels (such as purpose, cause, or negation) see only a smaller improvement from encoded context (+8.8 F1 for ELMo). The semantic proto-role labeling task (SPR1, SPR2) looks at the same type of core predicate-argument pairs but tests for higher-level semantic properties (\S~\ref{sec:data}), which we find to be only weakly captured by the contextual encoder (+1-5 F1 for ELMo).
The SemEval relation classification task is designed to require semantic reasoning, but in this case we see a large improvement from contextual encoders, with ELMo improving by 22 F1 points on the lexical baseline (50\% relative error reduction) and BERT-large improving by another 4.6 points. We attribute this partly to the poor performance (51-58 F1) of lexical priors on this task, and to the fact that many easy relations can be resolved simply by observing key words in the sentence (for example, ``\textit{caused}'' suggests the presence of a \texttt{Cause-Effect} relation). To test this, we augment the lexical baseline with a bag-of-words feature, and find that for relation classification we capture more than 70\% of the headroom from using the full ELMo model.\footnote{For completeness, we repeat the same experiment on the rest of our task suite. The bag-of-words feature captures 20-50\% (depending on encoder) of the full-encoder headroom for entity typing, and much smaller fractions on other tasks.}
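A minimal sketch of this augmented baseline (the pooling choices here are illustrative assumptions, not the exact feature extraction used in our experiments) concatenates the span's lexical feature with an order-free sentence-level feature:

```python
import numpy as np

def lexical_with_bow(word_vecs, span):
    """Hypothetical sketch of the augmented lexical baseline: the probe's
    span feature (here, mean-pooled word vectors over the span) is
    concatenated with a bag-of-words feature -- the mean of every word
    vector in the sentence -- letting the classifier react to trigger
    words like 'caused' anywhere in the sentence, with no word order."""
    lo, hi = span
    span_repr = word_vecs[lo:hi].mean(axis=0)  # local, word-level feature
    bow = word_vecs.mean(axis=0)               # global, unordered feature
    return np.concatenate([span_repr, bow])
```

Because the bag-of-words feature discards all positional information, any headroom it recovers can be attributed to keyword cues rather than structural context.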
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/elmo_baselines}
\caption{Additional baselines for ELMo, evaluated on the test sets. CNN$k$ adds a convolutional layer that sees $\pm k$ tokens to each side of the center word. Lexical is the lexical baseline, equivalent to $k = 0$. Orthonormal is the full ELMo architecture with random orthonormal LSTM and projection weights, but using the pretrained lexical layer. Full (pretrained) is the full ELMo model. Colored bands are 95\% confidence intervals (normal approximation).
}
\label{fig:elmo-baselines}
\end{figure}
\paragraph{Effects of architecture.} \label{sec:effects-of-architecture}
Focusing on the ELMo model, we ask: how much of the model's performance can be attributed to the architecture, rather than knowledge from pretraining? In Figure~\ref{fig:elmo-baselines} we compare to an orthonormal encoder (\S~\ref{sec:random-elmo}) which is structurally identical to ELMo but contains no information in the recurrent weights. It can be thought of as a randomized feature function over the sentence, and provides a baseline for how the architecture itself can encode useful contextual information. We find that the orthonormal encoder improves significantly on the lexical baseline, but that overall the learned weights account for over 70\% of the improvements from full ELMo.
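For intuition, one standard way to draw such weights (an assumed detail; \S~\ref{sec:random-elmo} specifies only that the recurrent and projection weights are random orthonormal) is via QR decomposition of a Gaussian matrix:

```python
import numpy as np

def random_orthonormal(d, seed=0):
    """Draw a random d x d orthonormal matrix by QR-decomposing a
    Gaussian matrix. Sign-fixing with diag(R) makes the draw uniform
    over the orthogonal group. Such weights preserve vector norms, so
    the untrained encoder mixes context without amplifying or
    collapsing the signal -- a randomized feature function."""
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    return q * np.sign(np.diag(r))
```

The norm-preservation property is what makes this a meaningful architecture-only baseline: any improvement over the lexical baseline comes from structured mixing, not from learned knowledge.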
\paragraph{Encoding non-local context.} \label{sec:non-local-context}
\begin{figure}[t]
\centering
\includegraphics[width=.9\linewidth]{figures/score_by_distance_dep-3}
\caption{Dependency labeling F1 score as a function of separating distance between the two spans. Distance 0 denotes adjacent tokens. Colored bands are 95\% confidence intervals (normal approximation). Bars on the bottom show the number of targets (in the development set) with that distance. Lex., CNN1, CNN2, Ortho, and Full are as in Figure~\ref{fig:elmo-baselines}.}
\label{fig:score-by-distance-dep}
\end{figure}
How much information is carried over long distances (several tokens or more) in the sentence? To estimate this, we extend our lexical baseline with a convolutional layer, which allows the probing classifier to use local context. In Figure~\ref{fig:elmo-baselines} we find that adding a CNN of width 3 ($\pm 1$ token) closes 72\% (macro average over tasks) of the gap between the lexical baseline and full ELMo; this extends to 79\% if we use a CNN of width 5 ($\pm 2$ tokens). On nonterminal constituents, we find that the CNN $\pm 2$ model matches ELMo performance, suggesting that while the ELMo encoder propagates a large amount of information about constituents (+15.4 F1 vs. Lex., Table~\ref{tab:comparison-of-models}), most of it is local in nature. We see a similar trend on the other syntactic tasks, with 80-90\% of ELMo performance on dependencies, part-of-speech, and SRL core roles captured by CNN $\pm 2$. Conversely, on more semantic tasks, such as coreference, SRL non-core roles, and SPR, the gap between full ELMo and the CNN baselines is larger. This suggests that while ELMo does not encode these phenomena as efficiently, the improvements it does bring are largely due to long-range information.
We can test this hypothesis by seeing how our probing model performs with distant spans. Figure~\ref{fig:score-by-distance-dep} shows F1 score as a function of the distance (number of tokens) between a token and its head for the dependency labeling task. The CNN models and the orthonormal encoder perform best with nearby spans, but fall off rapidly as token distance increases. The full ELMo model holds up better, with performance dropping only 7 F1 points between $d = 0$ tokens and $d = 8$, suggesting the pretrained encoder does encode useful long-distance dependencies.
\section{Related Work}
\label{sec:related-work}
Recent work has consistently demonstrated the strong empirical performance of contextualized word representations such as CoVe \citep{mccann2017learned}, ULMFit \citep{howard2018universal}, and ELMo \citep{peters2018deep}, with ELMo in particular driving improvements on many downstream tasks \citep{lee2018higher,strubell2018linguistically,Kitaev-Klein:2018:SelfAttentiveParser}.
In response to the impressive results on downstream tasks, a line of work has emerged with the goal of understanding and comparing such pretrained representations. SentEval \citep{conneau2018senteval} and GLUE \citep{wang2018glue} offer suites of application-oriented benchmark tasks, such as sentiment analysis or textual entailment, which combine many types of reasoning and provide valuable aggregate metrics which are indicative of practical performance. A parallel effort, to which this work contributes, seeks to understand what is driving (or hindering) performance gains by using ``probing tasks,'' i.e. tasks which attempt to isolate specific phenomena for the purpose of finer-grained analysis rather than application, as discussed below.
Much work has focused on probing fixed-length sentence encoders, such as InferSent \citep{conneau2017supervised}, specifically their ability to capture surface properties of sentences such as length, word content, and word order \citep{adi2016fine}, as well as a broader set of syntactic features, such as tree depth and tense \citep{conneau2018cram}. Other related work uses perplexity scores to test whether language models learn to encode properties such as subject-verb agreement \citep{linzen2016assessing,gulordava2018colorless,marvin2018targeted,kuncoro2018lstms}.
Often, probing tasks take the form of ``challenge sets'', or test sets which are generated using templates and/or perturbations of existing test sets in order to isolate particular linguistic phenomena, e.g. compositional reasoning \citep{dasgupta2018evaluating,ettinger2018assessing}. This approach is exemplified by the recently-released Diverse Natural Language Collection (DNC) \citep{poliak2018collecting}, which introduces a suite of 11 tasks targeting different semantic phenomena. In the DNC, these tasks are all recast into natural language inference (NLI) format \citep{white2017inference}, i.e. systems must understand the targeted semantic phenomenon in order to make correct inferences about entailment.
\citet{evaluating-fine-grained-semantic-phenomena-in-neural-machine-translation-encoders-using-entailment} used an earlier version of recast NLI to test NMT encoders' ability to understand coreference, SPR, and paraphrastic inference.
Challenge sets which operate on full sentence encodings introduce confounds into the analysis, since sentence representation models must pool word-level representations over the entire sequence. This makes it difficult to infer whether the relevant information is encoded within the span of interest or rather inferred from diffuse information elsewhere in the sentence. One strategy to control for this is the use of minimally-differing sentence pairs \citep{poliak2018collecting,ettinger2018assessing}. An alternative approach, which we adopt in this paper, is to directly probe the token representations for word- and phrase-level properties. This approach has been used previously to show that the representations learned by neural machine translation systems encode token-level properties like part-of-speech, semantic tags, and morphology \citep{shi2016does,belinkov2017neural,belinkov2017evaluating}, as well as pairwise dependency relations \citep{belinkov2018thesis}. \citet{blevins2018hierarchical} goes further to explore how part-of-speech and hierarchical constituent structure are encoded by different pretraining objectives and at different layers of the model. \citet{peters2018dissecting} presents similar results for ELMo and architectural variants.
Compared to existing work, we extend sub-sentence probing to a broader range of syntactic and semantic tasks, including long-range and high-level relations such as predicate-argument structure. Our approach can incorporate existing annotated datasets without the need for templated data generation, and admits fine-grained analysis by label and by metadata such as span distance. We note that some of the tasks we explore overlap with those included in the DNC, in particular, named entities, SPR and Winograd. However, our focus on probing token-level representations directly, rather than pooling over the whole sentence, provides a complementary means for analyzing these representations and diagnosing the particular advantages of contextualized vs. conventional word embeddings.
\section{Conclusion}
We introduce a suite of ``edge probing'' tasks designed to probe the sub-sentential structure of contextualized word embeddings. These tasks are derived from core NLP tasks and encompass a range of syntactic and semantic phenomena. We use these tasks to explore how contextual embeddings improve on their lexical (context-independent) baselines. We focus on four recent models for contextualized word embeddings--CoVe, ELMo, OpenAI GPT, and BERT.
Based on our analysis, we find evidence suggesting the following trends. First, in general, contextualized embeddings improve over their non-contextualized counterparts largely on syntactic tasks (e.g. constituent labeling) in comparison to semantic tasks (e.g. coreference), suggesting that these embeddings encode syntax more so than higher-level semantics. Second, the performance of ELMo cannot be fully explained by a model with access to local context, suggesting that the contextualized representations do encode distant linguistic information, which can help disambiguate longer-range dependency relations and higher-level syntactic structures.
We release our data processing and model code, and hope that this can be a useful tool to facilitate understanding of, and improvements in, contextualized word embedding models.
\subsubsection*{Acknowledgments} %
This work was conducted in part at the 2018 Frederick Jelinek Memorial Summer Workshop on Speech and Language Technologies, and supported by Johns Hopkins University with unrestricted gifts from Amazon, Facebook, Google, Microsoft and Mitsubishi Electric Research Laboratories, as well as a team-specific donation of computing resources from Google. PX, AP, and BVD were supported by DARPA AIDA and LORELEI. Special thanks to Jacob Devlin for providing checkpoints of GPT model trained on the BWB corpus, and to the members of the Google AI Language team for many productive discussions.
\section{Abstract}
\noindent Neutron irradiation progressively changes the properties of zirconium alloys: they harden and their average $c/a$ lattice parameter ratio decreases with fluence \cite{was2017fundamentals,onimus2012radiation,buckley1962properties,carpenter1981irradiation}. The bombardment by neutrons produces point defects, which evolve into dislocation loops that contribute to a non-uniform growth phenomenon called irradiation-induced growth (IIG). To gain insights into these dislocation loops in Zr we studied them using atomistic simulation. We constructed and relaxed dislocation loops of various types. We find that the energies of $\langle a \rangle$ loops{} on different habit planes are similar, but our results indicate that they are most likely to form on the 1st prismatic plane and then reduce their energy by rotating onto the 2nd prismatic plane. By simulating loops of different aspect ratios, we find that, based on energetics alone, the shape of $\langle a \rangle$ loops{} does not depend on character, and that these loops become increasingly elliptical as their size increases. Additionally, we find that interstitial $\langle c/2+p \rangle$ loops{} and vacancy $\langle c \rangle$ loops{} are both energetically feasible and so the possibility of these should be considered in future work. Our findings offer important insights into loop formation and evolution, which are difficult to probe experimentally.
\section{Introduction} \label{sec:intro} %
\noindent Zirconium alloys are used in light water nuclear power reactors in several applications, in particular for nuclear fuel cladding. These reactors operate at temperatures from around 350~K to about 580~K \cite{griffiths1987formation}. At these temperatures $\alpha$-Zr has a hexagonal close-packed (HCP) crystal structure \cite{liu2009experimental}. Recrystallised Zr alloy guide tubes and grids suffer from irradiation-induced growth (IIG), which occurs in three phases: initial, steady and breakaway \cite{holt1986c}.
IIG begins with the initial growth phase, where there is rapid growth up to a fluence of around $10^{25}$~n/m$^2$. After this, the growth curve gradient flattens and there is a long phase where little or no growth occurs. This is known as the steady growth phase. Finally, at around $3 \times 10^{25}$~n/m$^2$, the growth rate again becomes rapid; this is the breakaway growth phase \cite{holt1986c,carpenter1988irradiation}. Breakaway growth is the most detrimental as in this regime the cladding rapidly elongates. IIG constrains the design of fuel assemblies, so an IIG resistant alloy would allow for more freedom in their design. IIG is anisotropic and stochastic; it can cause the fuel rods to buckle, leading to problems such as hot spots \cite{pickman1975interactions}. Note that IIG is one phenomenon that may contribute to fuel rod deformation, but other phenomena, such as pellet cladding interaction, may also contribute \cite{rossiter2012understanding}.
The design of growth resistant alloys would be greatly aided by a comprehensive understanding of the microscopic mechanisms behind IIG.
A promising explanation of IIG provided by Buckley postulates that beyond its initial stage, IIG is primarily caused by irradiation-induced dislocation loops \cite{buckley1962properties}. In the early stages of irradiation, dislocation loops with burgers vector $1/3 \langle1 1 \bar{2} 0\rangle$ are seen \cite{holt1986c,kelly1973characterization}. These are termed $\langle a \rangle$ loops{} and are of either vacancy or interstitial character \cite{kelly1973characterization}. In Zr irradiated (E~$>$~1~MeV) at 668~K, Jostsons et al.~observed that 66$\%$ of the $\langle a \rangle$ loops{} were vacancy in character at a fluence of 6.4~$\times$~10$^{19}$~n~cm$^{-2}$~\cite{jostsons1977nature}. At a higher fluence of 1.8~$\times$~10$^{20}$~n~cm$^{-2}$, they characterised approximately equal numbers of vacancy and interstitial $\langle a \rangle$ loops{}~\cite{jostsons1977nature}. Northwood et al.~irradiated single crystal Zr to 1.8~$\times$~10$^{20}$~n~cm$^{-2}$ (E~$>$~1~MeV) at $\sim$500~K and characterised 44$\%$ of the $\langle a \rangle$ loops{} as vacancy type~\cite{northwood1976dislocation}. $\langle a \rangle$ loops{} inhabit a variety of planes, clustered around the 1st prismatic planes \cite{jostsons1977nature,kelly1973characterization}, as shown in Fig.~\ref{fig:aLoopPlanes}. At higher doses, irradiation-induced dislocation loops with a burgers vector containing a component in the $c$ direction, [0001], appear. Holt and Gilbert believe that these $c$-component loops cause breakaway IIG \cite{holt1986c}. $c$-component loops seen in irradiated Zr have been determined by transmission electron microscopy (TEM) to have a burgers vector of $1/6 \langle2 \bar{2} 0 3\rangle$ and are vacancy in character \cite{griffiths1987formation}. For the remainder of this paper these $c$-component loops will be referred to as $\langle c/2+p \rangle$ loops{} and they inhabit the basal planes \cite{griffiths1987formation}. 
Also of interest are $c$-component loops with burgers vector $\langle 0 0 0 {2}\rangle$, which are termed $\langle c/2 \rangle$ loops{} and those with burgers vector $\langle0 0 0 1\rangle$ that are termed $\langle c \rangle$ loops{}. The reason $\langle c \rangle$ loops{} are of interest is that they were observed in a TEM study by Griffiths \cite{griffiths1987formation}. $\langle c/2 \rangle$ loops{} were observed in an earlier study by Griffiths et al.~\cite{griffiths1984anisotropic} and may be the precursors to $\langle c/2+p \rangle$ loops \cite{hull2001introduction}. Note that the $p$ in $\langle c/2+ p \rangle$ denotes the partial dislocation that transforms $\langle c/2 \rangle$ loops{} to $\langle c/2+p \rangle$ loops{}. The planes inhabited by the loops most pertinent to this study are shown schematically in Fig.~\ref{fig:unitCell}.
\begin{figure}[htbp!]\begin{center}
{\includegraphics[width=\figwidth]{./figures/jostsons1977fig5adaptedTrace.png}}
\newline
\caption{The $\langle a \rangle$ loop{} habit planes in a sample of neutron-irradiated zone-refined Zr. Empty and filled circles represent vacancy loops and interstitial loops respectively. (Minimally adapted from Jostsons et al.~\cite{jostsons1977nature}, with permission from Elsevier.)}
\label{fig:aLoopPlanes}
\end{center}
\end{figure}
\begin{figure}[!ht]\begin{center}
{\includegraphics[width=\figwidth]{./figures/HCPcellSchematic4.pdf}}
\caption{A schematic illustration of the orientation of key crystallographic planes with respect to the hexagonal symmetry of $\alpha$-Zr.} %
\label{fig:unitCell}
\end{center}
\end{figure}
\subsection{Irradiation Induced Growth}
\noindent Jostsons et al.~observed that irradiated Zr single crystals elongated in the $a$-directions and shrank in the $c$-directions \cite{jostsons1977nature}. Mechanical processing of Zr alloy fuel cladding induces a strong texture \cite{baron1990interlaboratories,tucker1984high}, where the majority of the basal plane normals are oriented in the radial direction for Zircaloy-2 fuel tube \cite{fidleris1987overview}. An increase in the lengths of crystal grains in the $a$-direction, combined with their strong orientational order, causes macroscopic growth.
Carpenter et al.~reported behaviour consistent with Buckley's postulated IIG explanation that $\langle a \rangle$ loops{} of interstitial character are a cause of $a$-direction expansion and $\langle c/2+p \rangle$ loops{} are a cause of $c$-direction shrinkage \cite{carpenter1981irradiation,buckley1962properties}.\\
\subsection{Dislocation Loop Shape}
\noindent TEM images of $\langle a \rangle$ loops{} in irradiated Zr alloy samples show that these loops have an elliptical shape \cite{jostsons1977nature,northwood1979characterization}. The major axis of elliptical $\langle a \rangle$ loops{} lies parallel to the $c$-direction and the minor axis lies in the basal plane \cite{jostsons1977nature}. Ellipticities of various $\langle a \rangle$ loops{}, of vacancy and interstitial type, are shown in Fig.~\ref{fig:ellipLoopsPlot}. Interstitial $\langle a \rangle$ loops{} are less elliptical than those of vacancy type \cite{jostsons1977nature}. Conversely, TEM images show $\langle c/2+p \rangle$ loops{} to be circular \cite{harte2017effect}. The elliptical shape of $\langle a \rangle$ loops{} indicates that the dislocation line energy is anisotropic. This could be due to the anisotropic elastic behaviour of $\alpha$-Zr (i.e. a difference in the long-range displacement field), an anisotropy in the dislocation core structure (i.e. a difference in the short-range atomic arrangement), or both. The fact that $\langle c/2+p \rangle$ loops{} are circular is consistent with the symmetry in the basal plane.
\begin{figure}[!ht]\begin{center}
{\includegraphics[width=\figwidth]{./figures/jostons1977fig7cropped.pdf}} %
\caption{The ellipticities of $\langle a \rangle$ loops{} as a function of loop size, where the numbers in parentheses are the quantity of loops observed. In this plot, `2a' is the ellipse major axis and `2b' is the ellipse minor axis. (Reproduced from Jostsons et al.~\cite{jostsons1977nature}, with permission from Elsevier.)}
\label{fig:ellipLoopsPlot}
\end{center}
\end{figure}
Along with the effects of elastic and crystal anisotropy, anisotropic diffusion of point defects may have an influence on loop ellipticity. A computational study by Fuse found that for Zr the stable interstitial site is the basal octahedral and that this diffuses preferentially in the basal plane \cite{fuse1985evaluation}. A review by Frank \cite{frank1988intrinsic} of point defect studies, building on a review of computational work on the subject by Bacon \cite{bacon1988review}, reported this to be the general case for HCP metals with $c/a < \sqrt{8/3}$. Frank's review also stated that the diffusion of vacancies is only weakly anisotropic \cite{frank1988intrinsic}, citing self-diffusion studies by Seeger and G{\"o}sele \cite{seeger1975vacancies} and Peterson \cite{peterson1978self}. Molecular dynamics simulations by Osetsky et al.~\cite{osetsky2002anisotropy} and by Kulikov and Hou \cite{kulikov2005vacancy} found that, at reactor temperatures, interstitial diffusion occurs overwhelmingly in the basal plane.
Woo's 1988 theory utilises the diffusional anisotropy difference (DAD) of point defects to explain the existence of irradiation-induced defects seen in Zr and the associated IIG \cite{woo1988theory}. It postulates that dislocation lines parallel to $[0001]$ capture more interstitials than vacancies, which makes the minor axis shorter for vacancy loops and the minor axis longer for interstitial loops \cite{woo1988theory}. If vacancy and interstitial loops begin as equally elliptical, then the influence of DAD will be to make vacancy loops more elliptical and interstitial loops less elliptical \cite{woo1988theory}. Woo used a TEM study by Brimhall \cite{brimhall1971microstructural} to support the notion that elliptical loops have equal ellipticity without DAD \cite{woo1988theory}. However Brimhall's study only identified one interstitial loop and the study was on irradiated $\alpha$-Ti \cite{brimhall1971microstructural}. As $\alpha$-Ti is non-cubic, with a $c/a$ ratio less than ideal, it too should be affected by DAD \cite{wood1962lattice}. Therefore the question of whether $\langle a \rangle$ loops{} in Zr would be equally elliptical without DAD is unanswered, but is something we can probe by simulating elliptical loops in a model without diffusion effects. Contradicting DAD is a recent Monte Carlo modelling study by Christiaen et al.~that found IIG could be replicated with diffusion behaviour where vacancies diffuse preferentially in the basal plane \cite{christiaen2020influence}. Additionally, Christiaen et al.~reference ab initio studies of Zr that suggest vacancies diffuse more rapidly in the basal plane, although they acknowledge that alloying elements could alter this \cite{christiaen2020influence}.\\
\subsection{Stacking Faults.}
\noindent A faulted dislocation loop contains a stacking fault and this contributes to the formation energy of the loop. Stacking faults important to our present work are those on the basal plane $\{0001\}$, the 1st prismatic plane $\{10\bar{1}0\}$ and the 2nd prismatic plane $\{2\bar{1}\bar{1}0\}$. On the basal plane three types of stacking fault are pertinent: high energy (HE), first intrinsic (I1) and extrinsic (E). One mechanism for a $\langle c/2+ p \rangle$ loop{} to form is for it to begin as a vacancy platelet on the basal plane, formed by agglomerated irradiation-induced vacancies, which collapses to form a HE stacking fault. This gives the HE stacking fault a plane sequence of `ABABBAB' (where the HCP plane sequence is denoted `ABABAB'). The loop grows by absorbing additional vacancies to the point where the HE stacking fault energy becomes so great that the loop shears to produce a lower-energy I1 stacking fault. This shearing is achieved by a partial dislocation with burgers vector $1/3\langle1 \bar{1} 0 0\rangle$ sweeping across the HE stacking fault \cite{hull2001introduction}. The HE stacking fault is bounded by a dislocation line with burgers vector $\bold{b}_{\ttm{HE}} = 1/2[0001]$, and we term this type of loop a $\langle c/2 \rangle$ loop{}. The combination of the dislocation bounding the HE stacking fault with the partial dislocation results in a burgers vector of $1/6\langle2 \bar{2} 0 3\rangle$ via the reaction
\begin{equation}\label{eq:gradient}
\frac{1}{2}[0001] + \frac{1}{3}\langle1 \bar{1} 0 0\rangle =\frac{1}{6}\langle2 \bar{2} 0 3\rangle,
\end{equation}
\noindent which is the burgers vector, $\bold{b}_{\ttm{I1}}$, of the dislocation line bounding the I1 stacking fault. This transition changes a $\langle c/2 \rangle$ loop{} into a $\langle c/2+ p \rangle$ loop{}. For Zr containing an I1 stacking fault, the plane sequence is `ABABCBC', meaning that it is a HCP structure containing a layer with face centred cubic (FCC) structure.
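Taking representative members of each family, the reaction can be verified componentwise in four-index Miller--Bravais notation (illustrative arithmetic only):

```python
from fractions import Fraction as F

# Representative members of each family:
#   b_HE = 1/2[0 0 0 1], b_p = 1/3[1 -1 0 0], b_I1 = 1/6[2 -2 0 3]
b_HE = [F(c, 2) for c in (0, 0, 0, 1)]
b_p = [F(c, 3) for c in (1, -1, 0, 0)]
b_I1 = [F(c, 6) for c in (2, -2, 0, 3)]

# Componentwise sum reproduces the I1-bounding burgers vector
assert [x + y for x, y in zip(b_HE, b_p)] == b_I1
```

Exact fractions avoid floating-point round-off, and the first three indices of each vector sum to zero, as Miller--Bravais notation requires.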
A $c$-component loop with $\bold{b}_{\ttm{HE}} = 1/2[0001]$ but containing an extrinsic stacking fault is also of interest and we term this a $\langle c/2 \rangle_{\ttm{EXT}}$ loop. As an extrinsic fault has lower energy than a HE stacking fault, transformation from a $\langle c/2 \rangle$ loop{} to a $\langle c/2 \rangle_{\ttm{EXT}}$ loop may occur followed by a transition to a $\langle c/2+ p \rangle$ loop{}, rather than a direct transition from a $\langle c/2 \rangle$ loop{} to a $\langle c/2+ p \rangle$ loop{} \cite{hull2001introduction}.
A further type of $c$-component loop potentially exists, the $\langle c \rangle$ loop{}, which is a perfect loop containing no stacking fault. We expect that above some threshold radius, a $\langle c \rangle$ loop{} will have lower energy than a $\langle c/2+ p \rangle$ loop{} containing the same number of vacancies, because although the burgers vector magnitude of the $\langle c \rangle$ loop{} is greater, it is truly unfaulted, whereas the $\langle c/2+ p \rangle$ loop{} still contains an I1 fault. Griffiths and Gilbert observed $\langle c \rangle$ loops{} with TEM, in irradiated and annealed Zircaloy-4 \cite{griffiths1987formation}. Griffiths and Gilbert stated that the formation of $\langle c \rangle$ loops{} occurs via the reaction
\begin{equation}\label{eq:doubleClp}
\frac{1}{6}\langle2 0 \bar{2} \bar{3}\rangle + \frac{1}{6}\langle\bar{2} 0 2 \bar{3}\rangle = [0 0 0 \bar{1}],
\end{equation}
\noindent in which two $\langle c/2+p \rangle$ loops{} coalesce to form a $\langle c \rangle$ loop{} \cite{griffiths1987formation}. $\langle c \rangle$ loops{} have also been observed in irradiated Mg, which like Zr has a HCP crystal structure \cite{xu2017origin}. Despite reports of their existence, $\langle c \rangle$ loops{} have been little studied.
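Choosing representative family members whose prismatic components cancel, this coalescence reaction can likewise be checked componentwise (illustrative arithmetic only):

```python
from fractions import Fraction as F

# Representative members: two <c/2+p> burgers vectors, 1/6[2 0 -2 -3]
# and 1/6[-2 0 2 -3], whose prismatic components cancel on addition,
# leaving the perfect-loop vector [0 0 0 -1].
b1 = [F(c, 6) for c in (2, 0, -2, -3)]
b2 = [F(c, 6) for c in (-2, 0, 2, -3)]

assert [x + y for x, y in zip(b1, b2)] == [0, 0, 0, -1]
```

Only this particular pairing of family members cancels the prismatic components; an arbitrary pair of $\langle c/2+p \rangle$ vectors would leave a residual in-plane component.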
An unfaulting process would occur for an $\langle a \rangle$ loop{}, if it initially forms on a $\{1 \bar{1} 0 0\}$ plane as a faulted prismatic loop with burgers vector $\frac{1}{2}\langle0 1 \bar{1} 0\rangle$. This faulted prismatic $\langle a \rangle$ loop{} could unfault to form a sheared $\langle a \rangle$ loop{} with the observed burgers vector of $\frac{1}{3} \langle1 1 \bar{2} 0\rangle$ \cite{varvenne2014vacancy}. The determination of unfaulting radii for $\langle a \rangle$ loops{} and $\langle c/2 \rangle$ loops{} is important for improved understanding of nascent loops, where we define a nascent loop as one with a diameter less than 10~nm. Varvenne et al.~performed a computational study in which they determined the unfaulting radius for faulted vacancy $\langle a \rangle$ loops{} to be 2.7\ nm and the unfaulting radius for $\langle c/2 \rangle$ loops{} to be 2.4\ nm \cite{varvenne2014vacancy}.\\
\subsection{Dislocation Loop Nucleation.}
\noindent Nucleation of dislocation loops is an area where a great deal of uncertainty remains. It is believed that loop nucleation begins with the agglomeration of irradiation-induced point defects into clusters \cite{averback1987energetic,woo1990diffusion}. The clusters may then absorb more point defects until they collapse to create a dislocation loop \cite{woo2009generation}. Experimentally, $\langle a \rangle$ loops{} are observed early in the irradiation process, whilst $c$-component loops do not appear until higher doses \cite{aHarteThesis}.
Naturally, multiple loops will nucleate and orient themselves in a way that reduces their strain field interaction. As can be seen in Fig.~\ref{fig:aLoopsJostsons}, $\langle a \rangle$ loops{} arrange themselves along the trace of the basal plane. The TEM image seen in Fig.~\ref{fig:aLoopsJostsons} is a 2D image of a Zr crystal oriented with the [$1 \bar{1} 0 0$] normal to the page. Hence the image shows many ($1 \bar{1} 0 0$) plane loops throughout the depth of the TEM foil. Therefore Fig.~\ref{fig:aLoopsJostsons} shows the $\langle a \rangle$ loop{} positions in the image plane along [0001] and $[11\bar{2}0]$, but the positions along [$1\bar{1}00$] cannot be determined and so the degree of ordering along [$1 \bar{1} 0 0$] is unknown. The ordering of loop types has not been entirely established experimentally. Whilst it might be expected that ordered loops alternate between interstitial and vacancy character to minimise the interaction of their strain fields, to our knowledge this has not been confirmed.\ %
\begin{figure}[!ht]\begin{center}
{\includegraphics[width=\figwidth]{./figures/jostsons1977fig1.pdf}} %
\caption{The elliptical shape of $\langle a \rangle$ loops{} can be seen in this TEM image of irradiated zone refined Zr. Ordering of the $\langle a \rangle$ loops{} along the trace of the basal plane is also evident (Reproduced from Jostsons et al.~\cite{jostsons1977nature}, with permission from Elsevier.).}
\label{fig:aLoopsJostsons}
\end{center}
\end{figure}
The computational atomistic simulation study reported in this paper addresses the phenomena outlined above. Whilst TEM studies have done much to reveal details about dislocation loop structure at the nanoscale, computer models allow inspection of individual atoms, with none of the obscuring effects of experimental sample artefacts or limitations to atomic resolution. Experimentally, it is difficult to resolve defects smaller than $\sim$5~nm using TEM \cite{jenkins1999application} and so simulations provide an advantage in this regard. Defect types, sizes and configurations can be controlled in the simulation and their energies calculated. This makes study of defect energetics accessible in a way that would be difficult experimentally. These results are considered within the limitations of the model: the applicability of the potential employed, the use of pure Zr and the limited size of the simulated volume.
\section{Method} %
\noindent To study the phenomena introduced in Section~\ref{sec:intro}, we have made use of atomistic models of bulk $\alpha$-Zr, containing carefully controlled defect populations. The defect types studied are tabulated in Table \ref{loopsTable}.\
The simulation supercells were relaxed, and in some cases annealed, with the LAMMPS molecular modelling software package (\url{http://lammps.sandia.gov}) \cite{plimpton1995fast} using Mendelev \& Ackland's \#3 (M\&A \#3) potential. This was selected because it closely reproduces the stacking fault energies (SFE) of Zr, which was a deficiency of previous Zr potentials \cite{mendelev2007development}. As this study is concerned with dislocation loops, which usually contain a faulted plane, the ability of a potential to capture SFE is crucial. M\&A \#3 also replicates interstitial formation energies well, with the octahedral (O) site being the most stable, followed by the basal octahedral (BO) site. The formation energies are close, with the former being 2.88~eV and the latter 2.90~eV, in good agreement with \textit{ab initio} results in the literature \cite{de2011structure}. More recent \textit{ab initio} results by Peng et al.~suggest that BO is more stable than O (formation energies 2.78~eV and 2.92~eV respectively) \cite{peng2012stability}, although these formation energies remain close to those replicated by M\&A \#3.
The OVITO visualisation software package \cite{stukowski2009visualization} was used to analyse the relaxed supercell configurations. \
\iffalse
To check the suitability of M\&A \#3 for modelling dislocation lines we carried out a number of density functional theory (DFT) benchmarking calculations. DFT calculations were carried out on a series of infinite dislocation dipoles in the $[11\bar{2}0]$ direction and a series of infinite dislocation dipoles in the [0001] direction. The DFT calculations were compared with the equivalent calculations performed with LAMMPS using M\&A \#3. For the $[11\bar{2}0]$ direction dislocations the line energy agreement is unexpectedly good: within a few \% of the DFT results. For the [0001] direction dislocations the agreement is not as good and the difference between the LAMMPS and DFT calculated line energies is $\sim 45\%$.\\
\fi
\vspace{.5cm}
\begin{table*}[t!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Loop type & Loop Character & Habit Plane & Burgers Vector & Burgers Vector \\
 & Studied & & & Magnitude (\AA) \\[1.ex]
\hline
& & & & \\[-1.ex]
${\langle c/2+p \rangle}$ & Vacancy and SIA & $ (0001) $ & $\frac{1}{6} \langle 2 \bar{2} 0 3 \rangle$ & 3.18 \\ [1.ex]
${\langle c/2 \rangle}$ & Vacancy & $(0001) $ & $\langle 0 0 0 2 \rangle$ & 2.57 \\[1.ex]
$\langle c/2 \rangle_{\ttm{EXT}}$ & Vacancy & $(0001) $ & $\langle 0 0 0 2 \rangle$ & 2.57 \\[1.ex]
${\langle c \rangle}$ & Vacancy & $(0001)$ & $\langle 0 0 0 1 \rangle$ & 5.15 \\ [1.ex]
$\langle a \rangle^{\text{1ord}}$ & Vacancy and SIA & $(0 1 \bar{1} 0)$ & $\frac{1}{3} \langle 2 \bar{1} \bar{1} 0 \rangle$ & 3.23 \\[1.ex]
Edge $\langle a \rangle^{\text{1ord}}$ & Vacancy & $(0 1 \bar{1} 0)$ & $\frac{1}{2} \langle 0 {1} \bar{1} 0 \rangle$ & 2.80 \\[1.ex]
$\langle a \rangle^{\text{2ord}}$ & Vacancy and SIA & $(1 \bar{2} 1 0)$ & $\frac{1}{3} \langle 2 \bar{1} \bar{1} 0 \rangle$ & 3.23 \\ [1ex]
\hline
\end{tabular}
\captionsetup{width=.99\textwidth} %
\caption{The various dislocation loops included in this study. A $\langle c/2 \rangle$ loop contains a HE stacking fault and a $\langle c/2 \rangle_{\ttm{EXT}}$ loop contains an extrinsic stacking fault.}%
\label{loopsTable}
\end{center}
\end{table*}
\subsection{Dislocation Loop Construction.}
\noindent Dislocation loops were created by first removing or adding a platelet of atoms, depending on the character of the loop. The surrounding atoms were then displaced according to a model displacement field $\textbf{u}_0(\textbf{r})$.
Lazar and Kirchner \cite{lazar2013dislocation} provide a method to calculate the displacement field of a dislocation loop in an anisotropic medium based on continuum elasticity theory:
\begin{equation}\label{eq:anisLoopDisp}
\bold{u}(\bold{r}) = - \frac{\bold{b} \, \Omega(\bold{r})}{4 \pi} + \bold{u}_{\textrm{elastic}}(\bold{r}).
\end{equation}
\noindent Here the first term is the plastic displacement and the second term is the elastic displacement \cite{lazar2013dislocation}. $\Omega(\bold{r})$ is the solid angle subtended by the loop at point $\bold{r}$ and the loop's Burgers vector is $\bold{b}$. In our study we are considering elliptical dislocation loops and, since the calculation of the solid angle subtended by an ellipse at a general point is non-trivial, we have adopted an alternative, pragmatic approach.
We use a model displacement field $\textbf{u}_0(\textbf{r})$, and then rely on relaxation under the model interatomic forces to displace the atoms into the final relaxed form, $\textbf{u}(\textbf{r})$. We choose the form
\begin{equation}
\textbf{u}_0(\textbf{r}) = \bold{b} ~\alpha(\mu)\, \beta(d),
\end{equation}
where \textbf{r} is the initial position of an atom, \bold{b} is the Burgers vector of the loop and $\alpha(\mu)$ and $\beta(d)$ are two functions that determine the rate of decay of the displacement field in directions normal to and parallel to the habit plane of the loop. These functions are defined in detail in Appendix~\ref{app:dislocationconstruction}.
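As an illustration of this construction, the sketch below applies a displacement of this form to individual atomic positions. The exponential forms chosen here for $\alpha$ and $\beta$, and the width parameters \texttt{w\_normal} and \texttt{w\_plane}, are illustrative stand-ins rather than the functions defined in the appendix:

```python
import numpy as np

def u0(r, centre, b, n_hat, r_loop, w_normal=1.0, w_plane=1.0):
    """Model initial displacement u0(r) = b * alpha(mu) * beta(d).

    alpha decays with the signed distance mu from the habit plane, beta
    with the in-plane distance d beyond the loop edge.  The exponential
    forms and widths are stand-ins for the appendix definitions.
    """
    dr = np.asarray(r, dtype=float) - centre
    mu = dr @ n_hat                            # signed distance from the plane
    d = np.linalg.norm(dr - mu * n_hat)        # in-plane radial distance
    alpha = 0.5 * np.sign(mu) * np.exp(-abs(mu) / w_normal)
    beta = 1.0 if d <= r_loop else np.exp(-(d - r_loop) / w_plane)
    return b * alpha * beta

# Atoms just above/below the habit plane are displaced by nearly +-b/2;
# the field decays towards zero far from the loop.
b = np.array([0.0, 0.0, 2.57])                 # e.g. a <c/2> loop (Angstrom)
n_hat = np.array([0.0, 0.0, 1.0])
near = u0([1.0, 0.0, 0.1], np.zeros(3), b, n_hat, r_loop=10.0)
far = u0([60.0, 0.0, 30.0], np.zeros(3), b, n_hat, r_loop=10.0)
```

Relaxation under the interatomic potential then carries this approximate field into the final field $\textbf{u}(\textbf{r})$.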
\subsection{Energy minimisation and selecting dislocation loop densities.}\label{SelectDislocDens_pap1}
\noindent To arrive at the stable structure for our dislocations, we relax the atomic coordinates using the Polak-Ribiere version of the conjugate gradient (CG) minimisation algorithm with an energy tolerance of $10^{-10}$ (unitless) and a force tolerance of $10^{-6}$~eV/{\AA}. To ensure that we reach a valid final state, we run a series of five minimisations, alternating between relaxing only the atomic positions within a fixed supercell and additionally relaxing the supercell dimensions, using the same CG algorithm and tolerances throughout.
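For concreteness, one cycle of this alternating scheme corresponds, in LAMMPS input terms, to a fragment along the following lines (an illustrative sketch rather than our production input; the iteration caps are placeholders):

```text
# Relax atomic positions at fixed cell, then with the cell free to change.
min_style     cg                              # Polak-Ribiere conjugate gradient
minimize      1.0e-10 1.0e-6 10000 100000     # etol ftol maxiter maxeval

fix           boxrel all box/relax aniso 0.0  # zero target stress on the cell
minimize      1.0e-10 1.0e-6 10000 100000
unfix         boxrel
```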
We apply full periodic boundary conditions to simulate infinite bulk material. We also make use of skewed supercells to avoid the case where the periodic copies of the dislocation loops are stacked directly on top of one another, which would give rise to particularly unrealistic strain interactions. Ascertaining whether loops in reality are offset in this way is difficult. TEM micrographs, such as Fig.~\ref{fig:aLoopsJostsons}, reveal that $\langle a \rangle$ loops{} are ordered in rafts. However, as micrographs are 2-D projections it is not possible to see whether the loops are configured in an ordered manner along $\langle1 \bar{1} 0 0\rangle$. Nonetheless, for loops of like character it is reasonable to assume that loops nucleate in a staggered manner, to minimise their strain interaction, and this arrangement is replicated by the skewing of the supercell.\
Excessive elastic interaction between a loop and its periodic images will lead to erroneously high strain interactions and could give rise to unrealistic structures. Thus we relaxed a test series of loops, surrounded by a varying thickness of bulk Zr padding, producing different effective separation distances, the results of which are shown in Fig.~\ref{fig:aLpSeparation}. Kulikov and Hou \cite{kulikov2005vacancy} took such defect interaction into account in a 2005 study on loop energies in Zr, maximally separating defects across periodic boundaries.\ %
For skewed simulation boxes the formation energy per vacancy, $E_\text{f}$, increases with loop separation because the overlapping strain fields are of dissimilar character. For this reason, whereas the orthogonal box gives a repulsive interaction, the skewed box gives an attractive one. This is illustrated schematically in Fig.~\ref{fig:aLpSeparation}, which shows how like strain fields overlap at relatively short range in the orthogonal box. In contrast, in the skewed cell the strain fields only overlap at longer range. As loop separation increases, in larger simulation supercells, skewing has a reduced effect.
\begin{figure}[htbp!]\begin{center}
{\includegraphics[width=\figwidth]{./figures/EfVsSeparationSchematic3.pdf}} %
\caption{Formation energy per vacancy of a 10~nm $\langle a \rangle$ loop{} as a function of separation from its periodic image. The schematic illustrations show the strain field interactions, with the red shaded area representing tensile strain and the blue compressive strain.}
\label{fig:aLpSeparation}
\end{center}
\end{figure}
Based on the results shown in Fig.~\ref{fig:aLpSeparation} we chose a loop separation of 40~nm as sufficient to avoid undue strain interactions, without making the cells overly large. We note that this separation creates a loop number density of around $10^{21}~$m$^{-3}$, comparable with that observed experimentally, and that a separation of $\sim$50~nm was seen in TEM for c-component loops \cite{harte2017effect}.
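The quoted density follows from one loop per periodic supercell; as a quick check, using the 140~nm $\times$ 140~nm $\times$ 40~nm cell dimensions employed for the c-loop series:

```python
# One dislocation loop per periodic supercell gives the loop number density.
lx, ly, lz = 140e-9, 140e-9, 40e-9     # supercell dimensions (m)
density = 1.0 / (lx * ly * lz)         # loops per m^3, ~1.3e21
```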
\section{Results}\label{sec:results} %
\subsection{Nascent Loop Transformations.}\label{sec:nascLoops}
\subsubsection{Transformations of c-component loops.}\label{cLpTrans}
\noindent
We begin by examining small, nascent loops, below the size easily resolved experimentally. These are of interest because they could reveal details of the nucleation process and the transition from one growth stage to another.
In an atomistic modelling study de Diego et al.~\cite{de2008structure} simulated vacancy clusters on the basal plane that transformed into $\langle c/2 \rangle$ loops{}. This supports the hypothesis that c-component loops begin as a cluster of vacancies, which then collapses to form a platelet. As a $\langle c/2 \rangle$ loop{} absorbs more vacancies there is an increased impetus for it to shear to a lower energy configuration: a $\langle c/2+ p \rangle$ loop{}.
The radius at which this happens, $r^*_{\text{HE}\rightarrow \text{I1}}$, was recently determined by Varvenne et al.~to be 2.4~nm. Varvenne et al.~used the M\&A \#2 potential, which better describes vacancy cluster energies \cite{mendelev2007development,varvenne2014vacancy}, whereas we have adopted the M\&A \#3 potential for an improved representation of stacking faults. We have therefore repeated the calculation of $r^*_{\text{HE}\rightarrow \text{I1}}$ by creating a series of $\langle c/2 \rangle$ loops{} and a series of $\langle c/2+p \rangle$ loops{}, of varying diameter, contained in 140~nm $\times$ 140~nm $\times$ 40~nm supercells that house around 33 million atoms. We relaxed these, analysed the energies and modelled the loop energy, $E(r)$, as the sum of energy contributions from the stacking fault and the bounding dislocation line:
\begin{equation}\label{eq:loopE}
E(r) = \gamma ~\pi (r-\Delta r)^2 + \lambda ~2 \pi (r-\Delta r).
\end{equation}
Here $\gamma$ is the loop stacking fault energy per unit area and $\lambda$ is the energy per unit line length of the bounding dislocation. We treated $\gamma$ and $\lambda$ as fitting constants for the series of simulation results for $E(r)$. $\Delta r$ is introduced to account for a difference between the nominal loop radius, $r$, used in constructing the loops and an effective radius $(r-\Delta r)$, post relaxation, that emerges from the simulation results.
The fitted functions for the $\langle c/2 \rangle$ series and the $\langle c/2+p \rangle$ series are plotted in Fig.~\ref{fig:cLpUnfault}. The intersection of the functions gives $r^*_{\text{HE}\rightarrow \text{I1}} = 3.21$~nm, marked by the dashed line in Fig.~\ref{fig:cLpUnfault}. Table~\ref{sect1.1table} lists $r^*_{\text{HE}\rightarrow \text{I1}}$, together with $\gamma$ and $\lambda$ for $\langle c/2 \rangle$ loops{} and $\langle c/2+p \rangle$ loops{}, alongside values from other studies.
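The fitting and crossover construction can be sketched as follows. The data here are synthetic, generated from illustrative $(\gamma, \lambda, \Delta r)$ values of roughly the fitted magnitudes, converted to eV and nm; the real analysis fits the relaxed supercell energies:

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

def loop_energy(r, gamma, lam, dr):
    """E(r) = gamma*pi*(r - dr)^2 + lam*2*pi*(r - dr); r in nm, E in eV."""
    return gamma * np.pi * (r - dr) ** 2 + lam * 2.0 * np.pi * (r - dr)

# Synthetic <c/2> (HE fault) and <c/2+p> (I1 fault) series; the (gamma,
# lam, dr) values are illustrative, in eV/nm^2, eV/nm and nm.
r = np.linspace(1.0, 10.0, 19)
E_he = loop_energy(r, 2.43, 12.15, 0.2)   # costly fault, cheaper line
E_i1 = loop_energy(r, 0.81, 15.68, 0.2)   # cheap fault, costlier line

p_he, _ = curve_fit(loop_energy, r, E_he, p0=(1.0, 10.0, 0.0))
p_i1, _ = curve_fit(loop_energy, r, E_i1, p0=(1.0, 10.0, 0.0))

# Crossover radius r*: where the two fitted energy curves intersect.
r_star = brentq(lambda x: loop_energy(x, *p_he) - loop_energy(x, *p_i1),
                1.0, 10.0)
```

Beyond `r_star` the I1-faulted loop is the lower-energy configuration, mirroring the construction in Fig.~\ref{fig:cLpUnfault}.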
\begin{figure}[htbp!]\begin{center}
\begin{subfigure}[b]{\figwidth}
\includegraphics[width=\figwidth]{./figures/pap1sec1_cLpUnfaulting.pdf} %
\captionsetup{justification=centering,singlelinecheck=false} %
\caption{}
\end{subfigure}
\begin{subfigure}[b]{\figwidth}
\includegraphics[width=\figwidth]{./figures/pap1sec1_cLpUnfaultingRadius2.pdf} %
\captionsetup{justification=centering,singlelinecheck=false} %
\caption{}
\end{subfigure}
\caption{The formation energy per vacancy as a function of radius for c-loops. $\langle c/2+p \rangle$, $\langle c/2 \rangle_{\text{EXT}}$ and $\langle c/2 \rangle$ loop data are displayed in image (a), along with their fitted functions. Markers are the simulation data and lines the fitted functions. Image (b) shows the crossover radii, $r^*_{\mathrm{HE}\rightarrow \text{I1}}$ and $r^*_{\text{HE}\rightarrow \text{EXT}}$.}
\label{fig:cLpUnfault}
\end{center}
\end{figure}
Our fitted functions for $\langle c/2 \rangle_{\ttm{EXT}}$ and $\langle c/2 + p \rangle$ loops predict that the energy of the former will always be higher than that of the latter. However, Varvenne et al.'s \cite{varvenne2014vacancy} study \emph{does} predict a transition from a $\langle c/2 \rangle_{\ttm{EXT}}$ loop to a $\langle c/2+ p \rangle$ loop{} at a radius of $r^*_{\text{E}\rightarrow \text{I1}}=$ 2.4~nm. M\&A~\#3 is superior for SFEs, as shown in Table \ref{sect1.1table}, and for the transformation to an I1 stacking fault, the SFE is more important than the vacancy binding energies, as the number of vacancies is conserved during this transformation. Additionally, the vacancies are obliterated when the platelet collapses and so can no longer be considered a cluster. This was discussed by Varvenne et al., as they found that with M\&A~\#2 cavities were more stable than vacancy dislocation loops, but this is evidently not the case in reality, as cavities are rarely seen in irradiated Zr \cite{varvenne2014vacancy,griffiths1988review}.
\begin{table*}[t!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
 & This study & \multicolumn{3}{c|}{Varvenne \cite{varvenne2014vacancy}} & Domain \cite{domain2004atomic} \\ [1ex] \cline{3-5}%
& EAM & DFT & EAM & EAM & \textit{Ab Initio} \\ [1ex]
& M\&A \#3 & & M\&A \#2 & M\&A \#3 & \\ [1ex]
\hline
$r^*_{\text{HE}\rightarrow \text{I1}}$~(nm) & 3.70 & & & & \\ [1ex]
$r^*_{\text{HE}\rightarrow \text{E}}$~(nm) & 7.56 & & & & \\ [1ex]
$r^*_{\text{E}\rightarrow \text{I1}}$~(nm) & n/a & 1.4 & 2.4 && \\ [1ex]
$r^*_{\text{I1}\rightarrow \text{Unfaulted}}$~(nm) & 56.1 & & && \\ [1ex]
$\gamma_{\text{I1}}$~(mJ/m$^2$) &129 & 147 & 55 & 99 & 124 \\[1ex] %
$\gamma_{\text{EXT}}$~(mJ/m$^2$) & 333 & 274 & 164 & 297 & 249 \\[1ex] %
$\gamma_{\text{HE}}$~(mJ/m$^2$) &390 &&&&\\[1ex]%
$\gamma_{\text{SIA}}$~(mJ/m$^2$) &140 &&&&\\[1ex]%
$\lambda_{\text{c/2+p}}$~(eV/nm) & 15.68 & & & & \\ [1ex]
$\lambda_{\text{c/2}}$~(eV/nm) & 12.15 &&&& \\ [1ex]
$\lambda_{\text{SIA}}$~(eV/nm) & 14.84 &&&& \\ [1ex]
\hline
\end{tabular}
\caption{Results from this study are tabulated here, along with results from other computational studies. EAM refers to the embedded atom method potential used in the atomistic modelling. The value $r^*_{\text{HE}\rightarrow \text{I1}}$ is the radius at which a $\langle c/2 \rangle$ loop's energy becomes greater than that of an equivalent $\langle c/2+ p \rangle$ loop{}. Similarly, $r^*_{\text{HE}\rightarrow \text{E}}$ is the radius for transformation from a $\langle c/2 \rangle$ loop{} to a $\langle c/2 \rangle_\text{EXT}$, $r^*_{\text{E}\rightarrow \text{I1}}$ is that from a $\langle c/2 \rangle_{\text{EXT}}$ loop to a $\langle c/2 + p \rangle$ and $r^*_{\text{I1}\rightarrow \text{Unfaulted}}$ that from a $\langle c/2+ p \rangle$ loop{} to a $\langle c \rangle$ loop{}. Our results show that at no radius do $\langle c/2 \rangle_{\text{EXT}}$ loops have lower energy than $\langle c/2+p \rangle$ loops{}, and n/a is used to denote this.}
\label{sect1.1table}
\end{center}
\end{table*}
\indent Our analysis implies a transition directly from the HE stacking fault to the I1, but Varvenne et al.~\cite{varvenne2014vacancy} also determined the radius at which the transition from an E stacking fault to an I1 occurs, implying a stacking fault transformation sequence HE $\rightarrow$ E $\rightarrow$ I1. To explore this issue further we conducted a series of 800~K annealing simulations on $\langle c/2 \rangle$ loops{}, with radii from 0.5~nm to 10~nm, containing a HE stacking fault. Each simulation consisted of a 10~ps heating phase to 800~K, then a 20~ps anneal and a 15~ps cooling phase. These were carried out in a simulation box at constant volume with a Nose-Hoover thermostat. We found that for loops of 2.5~nm radius and below, the post-annealing configurations contained defects resembling stacking fault tetrahedra, in which no dislocation line could be discerned. At radii of 3~nm and above the annealed cells contained discernible dislocation loops. Analysis of these revealed that their Burgers vector was $1/6 \langle20\bar{2}3\rangle$ and common neighbour analysis revealed the loop plane to be a single FCC layer. This indicates that the loops with radii of 3~nm and above are $\langle c/2+p \rangle$ loops{}. The 3~nm radius $\langle c/2+ p \rangle$ loop{} was tilted towards the pyramidal plane, which concurs with observations of small, tilted c-component loops in TEM \cite{topping2018effect,griffiths1987neutron,griffiths1993hvem}. The degree of tilting gradually decreased with increasing radius and almost no tilt was present at 10~nm. These annealed loops suggest that the HE stacking fault configuration initially collapses to form a stacking fault tetrahedron, which then becomes a $\langle c/2+ p \rangle$ loop{} following the absorption of more vacancies. This suggests that $\langle c/2 \rangle$ loops{}, containing either a HE or an extrinsic stacking fault, are not stable.
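In LAMMPS terms, this annealing schedule corresponds to a fragment like the following (illustrative; the 1~fs timestep, 0.1~ps thermostat damping and the 300~K start and end temperatures are assumed values not stated above):

```text
timestep   0.001                          # metal units: 1 fs
velocity   all create 300.0 12345

fix  heat all nvt temp 300.0 800.0 0.1    # 10 ps ramp to 800 K
run  10000
unfix heat
fix  hold all nvt temp 800.0 800.0 0.1    # 20 ps anneal at 800 K
run  20000
unfix hold
fix  cool all nvt temp 800.0 300.0 0.1    # 15 ps cooling phase
run  15000
```

The `nvt` fix is the Nose-Hoover thermostat at constant volume, as used in the simulations described above.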
However, Griffiths observed loops with $\textbf{b} = 1/2[0001]$ on the basal plane, but these occurred after electron irradiation, which produces Frenkel pairs \cite{griffiths1984anisotropic}. Neutron and proton irradiation produce collision cascades, which create large clusters for neutron irradiation and sequences of smaller clusters for proton irradiation \cite{was2017fundamentals}. Therefore, the defects that electron irradiation produces may not be comparable to defects produced by these heavier particles. Nonetheless, the observation of basal loops with $\textbf{b} = 1/2[0001]$ indicates that $\langle c/2 \rangle$ loops{} do exist.\
Whilst small discrepancies exist in the determination of the radius at which $\langle c/2+p \rangle$ loops{} become energetically optimal, all the above results indicate that basal plane loops with radii above $\sim4$~nm are $\langle c/2+p \rangle$ loops{}. However, as impurities change SFEs, transformation radii may be altered in real alloys \cite{de2008structure}. This highlights the necessity for future development of Zr alloy empirical potentials.\
\subsubsection{Habit Plane of $\langle a \rangle$ loops.}\label{aLoopPlane}
\noindent Studies by TEM of irradiated zirconium have observed $\langle a \rangle$ loops{} with a variety of loop normals from $[10\bar{1}0]$ to $[11\bar{2}0]$, corresponding to loops on the 1st prismatic ${\{10\bar{1}0\}}$ and 2nd prismatic ${\{11\bar{2}0\}}$ planes respectively \cite{kelly1973characterization,jostsons1977nature}. Although experimental evidence \cite{jostsons1977nature,northwood1979characterization,kelly1973characterization} (and see Fig.~\ref{fig:aLoopPlanes}) has shown the majority of the $\langle a \rangle$ loop{} habit planes clustered around 1st prismatic planes, it is unclear why this preference for $\langle a \rangle$ loops{} to inhabit ${\{10\bar{1}0\}}$ exists. To address this we calculated the energies of $\langle a \rangle$ loops{} of various diameters, for one series on $(0 1 \bar{1} 0)$ and another on $(1\bar{2}10)$. The loop energies were fitted to a function as in Section~\ref{cLpTrans}. Loop series were created for both vacancy and interstitial $\langle a \rangle$ loops{}, and the results are displayed in Fig.~\ref{fig:aLp1prisV2pris} and tabulated in Table~\ref{aLpEtable}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Defect & $\lambda$~(eV/nm) & $\gamma$~(mJ/m$^2$) \\ [1ex] %
\hline
& & \\ [-2.ex]
$\langle a \rangle^{\text{1ord}}_{\text{vac}}$ & 14.9 & 32 \\
Edge $\langle a \rangle^{\text{1ord}}_{\text{vac}}$ & 13.1 & 176 \\
$\langle a \rangle^{\text{2ord}}_{\text{vac}}$ & 15.6 & 39 \\
$\langle a \rangle^{\text{1ord}}_{\text{sia}}$ & 13.7 & 49 \\
$\langle a \rangle^{\text{2ord}}_{\text{sia}}$ & 14.9 & 44 \\ [1ex]
\hline
\end{tabular}
\caption{Fitted dislocation line energy densities, $\lambda$, and stacking fault energy densities, $\gamma$, for the $\langle a \rangle$ loop{} series.}
\label{aLpEtable}
\end{center}
\end{table}
The interplay between the dislocation line energy density $\lambda$ and the stacking fault energy density $\gamma$ in determining which prismatic plane results in the lowest energy for an $\langle a \rangle$ loop{} is complex. This is because the effective area densities of point defects in $\langle a \rangle$ loops{} on the 1st prismatic and 2nd prismatic planes are different. Analysis of $\langle a \rangle$ loops{} on different prismatic planes must thus be in terms of defects accommodated within each loop, rather than loop radii.
The area density of the 1st prismatic plane is less than that of the 2nd prismatic plane, so for $\langle a \rangle$ loops{} containing an equivalent number of defects, the $\langle a \rangle$ loop{} on the 1st prismatic plane will have a larger radius than that on the 2nd, meaning it has greater bounding dislocation line length. However, $\lambda$ also varies with habit plane and the overall dislocation line energy will depend on both $\lambda$ and the circumference. The line energy is anisotropic, but in this case we model only circular loops and treat $\lambda$ as a mean value.
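The geometric part of this argument can be made concrete. Assuming the collapsed platelet is a single atomic layer, the areal atom densities of the two prismatic plane families follow from the bulk atom density and the layer spacings, and fix the radius needed to accommodate a given number of defects (a sketch, using the $\alpha$-Zr lattice parameters):

```python
import math

a, c = 0.3232, 0.5147                         # alpha-Zr lattice parameters (nm)
n_bulk = 4.0 / (math.sqrt(3.0) * a * a * c)   # atoms per nm^3 (2 atoms/cell)

# Atoms per unit area in a single atomic layer of each plane family:
# {1 0 -1 0} layers come in corrugated pairs repeating every sqrt(3)*a/2;
# {1 1 -2 0} layers are evenly spaced at a/2.
rho_1st = n_bulk * (math.sqrt(3.0) * a / 2.0) / 2.0   # = 1/(a*c)
rho_2nd = n_bulk * (a / 2.0)                          # = 2/(sqrt(3)*a*c)

# Radius of a circular loop accommodating N point defects on each plane.
N = 1000
r_1st = math.sqrt(N / (math.pi * rho_1st))
r_2nd = math.sqrt(N / (math.pi * rho_2nd))
```

For a fixed defect count, `r_1st` exceeds `r_2nd` by a factor of $\sqrt{2}/3^{1/4}\approx 1.07$, so the 1st prismatic loop carries a correspondingly longer bounding dislocation line.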
\begin{figure}[]\begin{center}
\begin{subfigure}[b]{\figwidth}
\includegraphics[width=\figwidth]{./figures/pap1_aLpEnergyPRESENTATION.pdf}%
\captionsetup{justification=centering,singlelinecheck=false} %
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.75\figwidth}
\includegraphics[width=0.75\figwidth]{./figures/pap1_aLpEnergy_lowrad.pdf}%
\captionsetup{justification=centering,singlelinecheck=false} %
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.75\figwidth}
\includegraphics[width=0.75\figwidth]{./figures/pap1_aLpEnergy_highrad.pdf}%
\captionsetup{justification=centering,singlelinecheck=false} %
\caption{}
\end{subfigure}
\caption{Formation energy per defect as a function of radius, for SIA and vacancy $\langle a \rangle$ loops inhabiting the 1st and 2nd prismatic planes. Circles represent the simulation results and lines are the model fit. Image (a) shows the series across the full range of radii studied. Only the data points for one set are visible in the image (a) because the points for the other 3 sets effectively lie in the same position, for the scale used. Note that in image (c) the `Vac 1$^{\text{st}}$ ord' line is concealed by the overlying `Vac 2$^{\text{nd}}$ ord' line.}
\label{fig:aLp1prisV2pris}
\end{center}
\end{figure}
As sheared $\langle a \rangle$ loops are unfaulted, we would expect the fitted $\gamma$ values to be zero. However, in all cases $\gamma$ has a small but finite value (around 25\% of $\gamma_\text{I1}$) that we believe is due to minor dis-registries of atoms due to the strain field propagating out from the dislocation core. As $\gamma$ is non-zero, the surface defect energy will dominate the line energy for large loops, but as the $\gamma : \lambda$ ratio is small in comparison to c-component loops, this happens at much larger sizes than was the case for c-component loops. This is demonstrated in Fig.~\ref{fig:aLp1prisV2pris}(b) and (c) which show small and large loop energies respectively. Small $\langle a \rangle^{\text{1ord}}_{\text{vac}}$ loops have higher energy than $\langle a \rangle^{\text{2ord}}_{\text{vac}}$ loops, but at larger radius this difference diminishes and at radii above $\sim 65$~nm the order reverses. At all radii $\langle a \rangle^{\text{1ord}}_{\text{sia}}$ loops have higher energy than $\langle a \rangle^{\text{2ord}}_{\text{sia}}$ loops and this difference increases with radius.\
In all cases, the differences in energy per defect of the various $\langle a \rangle$ loops{} are small, of order a few meV per defect. Hence the thermodynamic driving force available for a loop transformation to occur is small, in comparison to that for c-component loops. This may be the reason for the distribution of loop habit planes seen in Jostsons et al.'s work \cite{jostsons1977nature}. For the size range of the loops that Jostsons et al.~were examining, the 2nd order prismatic habit plane results in the lowest energies, for both interstitial and vacancy $\langle a \rangle$ loops{}. If loops nucleate on the 1st prismatic plane, there may be a reduction in energy from rotating onto the 2nd prismatic, but the thermodynamic driving force may be insufficient for all loops to do so, as obstacles or large transformation energy barriers may impede this rotation. For the loop radii observed in Jostsons et al.'s study of $\langle a \rangle$ loop{} habit planes \cite{jostsons1977nature}, the energy reduction between 1st prismatic and 2nd prismatic habit planes is greater for interstitial loops. This concurs with Jostsons et al.'s data, replicated in Fig.~\ref{fig:aLoopPlanes} in this paper, as more of the interstitial loops are distributed towards the 2nd prismatic plane than for the vacancy loops.
\subsubsection{Unfaulting radius of $\langle a \rangle$ loops{}.} %
\noindent Jostsons et al.~observed $\langle a \rangle$ loops{} with TEM and, whether on the 1st or 2nd prismatic planes, saw them to have a burgers vector of $1/3\langle11\bar{2}0\rangle$ \cite{jostsons1977nature}.
Griffiths assumed that after nucleation, $\langle a \rangle$ loops have $\bold{b}=1/2\langle10\bar{1}0\rangle$ and then unfault via the reaction \cite{griffiths1991microstructure}
\begin{equation}\label{eq:aLpUfault}
\frac{1}{2}\langle10\bar{1}0\rangle + \frac{1}{6}\langle\bar{1}2\bar{1}0\rangle \rightarrow \frac{1}{3}\langle11\bar{2}0\rangle.
\end{equation}
de Diego et al.~gave credence to this process when, in an atomistic modelling study \cite{de2008structure}, they observed the unfaulting of a vacancy dislocation loop inhabiting a 1st prismatic plane after a 150~ps anneal. A defect platelet on the 1st prismatic plane that collapsed into a pure edge $\langle a \rangle$ loop{} would have $\bold{b}=1/2\langle10\bar{1}0\rangle$, and a defect platelet on the 2nd prismatic plane that collapsed into a pure edge $\langle a \rangle$ loop{} would have $\bold{b}=1/3\langle11\bar{2}0\rangle$. As the magnitude of $1/2\langle10\bar{1}0\rangle$ is lower than that of $1/3\langle11\bar{2}0\rangle$, we can expect that $\lambda$ for the former will be lower than for the latter, in accordance with Frank's rule. Line energy dominates the total energy at small radii, hence we presume that $\langle a \rangle$ loops{} nucleate on the 1st prismatic plane.
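The reaction in Eq.~\ref{eq:aLpUfault} and the Frank's-rule comparison can be checked numerically by converting Miller-Bravais directions to Cartesian vectors (a sketch; the magnitudes reproduce the 2.80~\AA{} and 3.23~\AA{} Burgers vector magnitudes listed in Table~\ref{loopsTable}):

```python
import numpy as np

def mb_to_cart(u, v, t, w, a=3.232, c=5.147):
    """Cartesian vector (Angstrom) for the Miller-Bravais direction [u v t w]."""
    a1 = a * np.array([1.0, 0.0, 0.0])
    a2 = a * np.array([-0.5, np.sqrt(3.0) / 2.0, 0.0])
    a3 = a * np.array([-0.5, -np.sqrt(3.0) / 2.0, 0.0])
    cv = c * np.array([0.0, 0.0, 1.0])
    return u * a1 + v * a2 + t * a3 + w * cv

b_edge    = mb_to_cart(1/2, 0, -1/2, 0)      # 1/2[1 0 -1 0], |b| ~ 2.80 A
b_partial = mb_to_cart(-1/6, 1/3, -1/6, 0)   # 1/6[-1 2 -1 0]
b_sheared = mb_to_cart(1/3, 1/3, -2/3, 0)    # 1/3[1 1 -2 0], |b| ~ 3.23 A

# The two vectors on the left of the unfaulting reaction sum to the sheared b.
assert np.allclose(b_edge + b_partial, b_sheared)

# Frank's rule: sum of |b|^2 before vs after.  For this reaction the sum is
# unchanged, so the line-energy criterion is neutral and the driving force
# for unfaulting comes from eliminating the stacking fault.
before = b_edge @ b_edge + b_partial @ b_partial
after = b_sheared @ b_sheared
```

Comparing individual loops, $|1/2\langle10\bar{1}0\rangle| < |1/3\langle11\bar{2}0\rangle|$, consistent with the lower line energy expected for a nucleating edge loop.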
\begin{figure}[htbp!]\begin{center}
\begin{subfigure}[b]{\figwidth}
\centering
\includegraphics[width=\figwidth]{./figures/pap1sec1_aLpVac.pdf}%
\captionsetup{justification=centering,singlelinecheck=false} %
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.8\figwidth}
\includegraphics[width=0.7\figwidth]{./figures/pap1sec1_aLpVacUnfaulting.pdf}%
\captionsetup{justification=centering,singlelinecheck=false} %
\caption{}
\label{fig:aLp1prisEdgeVsSheared_b}
\end{subfigure}
\caption{Formation energy per vacancy as a function of radius for edge $\langle a \rangle^{\text{1ord}}_{\text{vac}}$ loops (\bold{b}$=\frac{1}{2} \langle 0 {1} \bar{1} 0 \rangle$) and sheared $\langle a \rangle^{\text{1ord}}_{\text{vac}}$ loops (\bold{b}$=\frac{1}{3} \langle 2 \bar{1} \bar{1} 0 \rangle$). The unfaulting radius, $r^*_{\langle a \rangle}$, is shown in image (b), which also includes the function for edge $\langle a \rangle^{\text{2ord}}_{\text{vac}}$ loops (\bold{b}$=\frac{1}{3} \langle 2 \bar{1} \bar{1} 0 \rangle$) denoted as `2$^{\text{nd}}$ ord edge' in the legend.}
\label{fig:aLp1prisEdgeVsSheared}
\end{center}
\end{figure}
As a nucleated edge $\langle a \rangle$ loop{} grows by absorbing vacancies, there is an increasing thermodynamic driving force for it to unfault via a shear across the loop plane. Jostsons et al.~observed sheared $\langle a \rangle$ loops{} \cite{jostsons1977nature}, establishing experimentally that $\langle a \rangle$ loops{} can unfault. We investigated the radius at which this shear occurs, $r^*_{\langle a \rangle}$, by simulating two series of $\langle a \rangle$ loops{} with various diameters, inhabiting the $(10\bar{1}0)$ plane. One series had $\textbf{b} = a/2\langle10\bar{1}0\rangle$ and the other had $\textbf{b} = a/3\langle11\bar{2}0\rangle$. As previously, functions were fitted to the loop energy as a function of radius. Fig.~\ref{fig:aLp1prisEdgeVsSheared}a shows that for all but small radii, sheared $\langle a \rangle$ loops{} have the lowest energy. We calculate the radius $r^*_{\langle a \rangle}$, below which pure edge $\langle a \rangle$ loops{} have the lowest energy (see Fig.~\ref{fig:aLp1prisEdgeVsSheared_b}), to be 2.8~nm, which is close to the value of 2.7~nm determined by Varvenne et al.~\cite{varvenne2014vacancy}. The small values of $r^*_{\langle a \rangle}$ determined by our study and by Varvenne et al.\ explain why $\langle a \rangle$ loops{} observed in TEM always have $\textbf{b} = 1/3 \langle11\bar{2}0\rangle$, as resolving loops smaller than $r^*_{\langle a \rangle}$ is difficult.
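The unfaulting radius follows from equating two loop-energy expressions of the circular-loop form $E(r) = 2\pi r \lambda + \pi r^2 \gamma$ (a line term plus a planar-fault term). A minimal sketch, using placeholder values of $\lambda$ and $\gamma$ rather than the fitted values of this work, finds the crossover numerically and checks it against the closed form.

```python
import numpy as np
from scipy.optimize import brentq

def loop_energy(r, lam, gamma):
    """Circular-loop model: line term 2*pi*r*lam plus planar-fault term
    pi*r^2*gamma. r in nm, lam in eV/nm, gamma in eV/nm^2."""
    return 2 * np.pi * r * lam + np.pi * r**2 * gamma

# Placeholder parameters for illustration only (NOT the fitted values of this work):
lam_edge, gam_edge = 12.0, 2.5    # faulted pure-edge loop: smaller |b|, finite fault energy
lam_shear, gam_shear = 15.0, 0.4  # sheared (unfaulted) loop: larger |b|, small residual gamma

# Crossover radius where the two total energies are equal
r_star = brentq(lambda r: loop_energy(r, lam_edge, gam_edge)
                        - loop_energy(r, lam_shear, gam_shear), 0.1, 100.0)

# Closed form from equating the two quadratics:
# r* = 2*(lam_shear - lam_edge) / (gam_edge - gam_shear)
r_star_analytic = 2 * (lam_shear - lam_edge) / (gam_edge - gam_shear)
print(r_star, r_star_analytic)   # both ~2.86 nm for these placeholder values
```

The closed form shows why $r^*_{\langle a \rangle}$ is small: it scales with the (modest) line-energy difference divided by the (large) stacking-fault-energy difference.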
\iffalse %
\subsubsection{section 1.4}
We postulated that vacancies agglomerate into clusters, which later collapse to form dislocation loops. This process is difficult to observe experimentally but molecular dynamics can simulate this. We created a supercell containing randomly situated vacancies at a defect density of ??. This supercell was then subjected to a simulated anneal, by raising it's temperature to XXK over a time period of XXns using a timestep of XXps. It was held at a constant temperature of XXK for XXns and periodic snapshots were taken of the configuration. The vacancies, which began as randomly dispersed formed a loop-like structure after the anneal. The snapshots gave an estimation of how quickly the vacancies coalesced together to form a loop. A selection of these snapshots can be seen in Fig.~\ref{fig:vacAnnealSnapshots}.
\fi
\subsection{Ellipticity of $\langle a \rangle$ Loops.}\label{sec:loopEllip}
\noindent Jostsons et al.~examined $\langle a \rangle$ loops{} with TEM \cite{jostsons1977nature} and found them to be elliptical. The ellipticities they measured are shown in Fig.~\ref{fig:ellipLoopsPlot}. Ellipticity is defined here as $1-a/b$, where $a$ is the minor axis length and $b$ is the major axis length. The ellipticity of vacancy $\langle a \rangle$ loops{} increases with diameter up to around 100~nm, where it plateaus at $\sim 0.4$ \cite{jostsons1977nature}. Interstitial $\langle a \rangle$ loops{} differ in that their ellipticity plateaus earlier, at $\sim 0.1$, for diameters of 20~nm and greater \cite{jostsons1977nature}. An increase in loop ellipticity increases the loop's dislocation line length and should therefore increase its energy. This increase in line length must be offset by anisotropy in the dislocation line energy, arising from the crystallographic and elastic anisotropy of the lattice. Combined with this are possible effects from DAD, which would reduce interstitial $\langle a \rangle$ loop{} ellipticity because preferential diffusion of interstitials along basal planes increases the semi-minor axis, while excess vacancies diffusing in $\langle0001\rangle$ reduce the semi-major axis \cite{woo1988theory}. Woo's 1988 DAD theory predicts the opposite result for vacancy $\langle a \rangle$ loops{}, whose ellipticity is increased \cite{woo1988theory}. Interstitial and vacancy $\langle a \rangle$ loops{} would be equally and moderately elliptical due to line energy anisotropy alone, but DAD combines with this to increase vacancy $\langle a \rangle$ loop{} ellipticity and reduce interstitial $\langle a \rangle$ loop{} ellipticity. Thus, the predictions of Woo's 1988 DAD theory fit the observed $\langle a \rangle$ loop{} ellipticities \cite{woo1988theory}.\
We have investigated the DAD hypothesis by calculating the optimal ellipticities of $\langle a \rangle^{\ttm{1ord}}_{\ttm{vac}}$ loops, $\langle a \rangle^{\ttm{2ord}}_{\ttm{vac}}$ loops, $\langle a \rangle^{\ttm{1ord}}_{\ttm{sia}}$ loops and $\langle a \rangle^{\ttm{2ord}}_{\ttm{sia}}$ loops. For each loop type, we created a set of loops with various diameters, and with various ellipticities at each diameter. For a given diameter, the formation energies, $E_{\textrm{f}}$, as a function of ellipticity ($e$) were fitted to the function:
\begin{equation}\label{eq:elipFunc}
E_{\textrm{f}}(e) = 4 \lambda_\ttm{c} \sqrt{\frac{A}{\pi}} \bigg(\frac{1}{\sqrt{1-e}} + \alpha \sqrt{1-e}\bigg).
\end{equation}
\noindent Here, $\lambda_\ttm{c}$ and $\lambda_\ttm{a}$ are the line energies per unit length for lines parallel to $[0001]$ and $[2\bar{1}\bar{1}0]$ respectively. $A$ is the area of the loop and $\alpha$ is the ratio $\lambda_\ttm{a}/\lambda_\ttm{c}$. The minimum of this fitted function gives the optimum ellipticity for that loop type and diameter, as shown in Fig.~\ref{fig:ellipFitting} for a sample diameter. The optimum ellipticity as a function of loop diameter is shown in Fig.~\ref{fig:ellips}. Values of $\lambda_{\ttm{c}}$, $\lambda_{\ttm{a}}$ and the average line energy $\bar{\lambda}$, for a 100~nm diameter loop are tabulated in Table~\ref{aLpEtableFitted}. Here $\bar{\lambda}$ is defined as $(a\lambda_{\ttm{a}} + b\lambda_{\ttm{c}})/(a + b)$.\
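The minimum of Eq.~\ref{eq:elipFunc} can also be found analytically: setting $\mathrm{d}E_{\textrm{f}}/\mathrm{d}e = 0$ gives $(1-e)^{-3/2} = \alpha(1-e)^{-1/2}$, hence $e^* = 1 - 1/\alpha$, independent of $\lambda_\ttm{c}$ and $A$. A short sketch (ours), using the fitted line energies for 1st order prismatic vacancy loops from Table~\ref{aLpEtableFitted}, checks the numerical minimum against this closed form.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ef(e, lam_c, alpha, area=1.0):
    """Eq. for E_f(e): 4*lam_c*sqrt(A/pi)*(1/sqrt(1-e) + alpha*sqrt(1-e))."""
    return 4 * lam_c * np.sqrt(area / np.pi) * (1 / np.sqrt(1 - e)
                                                + alpha * np.sqrt(1 - e))

# Fitted line energies for 1st order prismatic vacancy <a> loops (Table values)
lam_c, lam_a = 14.3, 15.9
alpha = lam_a / lam_c

res = minimize_scalar(ef, bounds=(0.0, 0.99), args=(lam_c, alpha),
                      method='bounded')
e_opt = res.x

# Analytic minimum: dE_f/de = 0  =>  1 - e = 1/alpha  =>  e* = 1 - 1/alpha
e_analytic = 1 - 1 / alpha
print(e_opt, e_analytic)   # ~0.10 for alpha ~ 1.11
```

Note that $e^* = 1 - 1/\alpha$ reproduces the plateau ellipticity of $\sim$0.1 for $\alpha \approx 1.11$, which is why the optimum ellipticity tracks the line-energy ratio directly.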
\begin{figure}[h!]\begin{center} %
{\includegraphics[width=\figwidth]{./figures/EfVsEllip100nmVac_aLoop.pdf}}%
\caption{
Variation of loop energy with ellipticity for the example of 100~nm diameter vacancy $\langle a \rangle$ loops, inhabiting the 1st order prismatic plane. The solid line shows the fitted function Eq.~\ref{eq:elipFunc}.}
\label{fig:ellipFitting}
\end{center}
\end{figure}
\begin{figure}[htbp!]\begin{center}
{\includegraphics[width=\figwidth]{./figures/ellipVsDiaMasterLabel.pdf}}%
\caption{Ellipticities of experimentally observed and simulated $\langle a \rangle$ loops{}. The experimental values were taken from Jostsons et al.'s 1977 TEM study \cite{jostsons1977nature}. Note that ellipticity for 2nd prismatic $\langle a \rangle$ loops{} is expressed as a function of equivalent 1st prismatic loop diameter, where this equivalence is given by defect number. This provides a fair comparison as the actual 2nd prismatic $\langle a \rangle$ loop{} diameters are smaller than the 1st prismatic diameters by a factor of $\sqrt{3}/2$ for the same number of point defects.}
\label{fig:ellips}
\end{center}
\end{figure}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& & & & \\ [-1.5ex]
Defect & $\lambda_{\ttm{c}}~(\ttm{eV/nm})$ & $\lambda_{\ttm{a}}~(\ttm{eV/nm})$ & $\bar{\lambda}~(\ttm{eV/nm})$ & $\alpha$ \\ [0.8ex] \cline{3-5}%
\hline
$\langle a \rangle^\textrm{1ord}_\textrm{vac}$ & 14.3 & 15.9 & 15.3 & 1.11 \\
$\langle a \rangle^\textrm{2ord}_\textrm{vac}$ & 13.2 & 16.7 & 14.8 & 1.27 \\
$\langle a \rangle^\textrm{1ord}_\textrm{sia}$ & 14.0 & 16.0 & 14.9 & 1.15 \\
$\langle a \rangle^\textrm{2ord}_\textrm{sia}$ & 13.3 & 16.3 & 14.4 & 1.23 \\ [1ex]
\hline
\end{tabular}
\caption{The results of fitting Eq.~\ref{eq:elipFunc} to the $\langle a \rangle$ loop{} energetics results, for the example of a 100~nm diameter loop. $\alpha$ is the ratio $\lambda_{\ttm{a}}/\lambda_{\ttm{c}}$.}%
\label{aLpEtableFitted}
\end{center}
\end{table}
The results for ellipticity show that for $\langle a \rangle^{\ttm{1ord}}_{\ttm{vac}}$ loops the ellipticity is $\sim$0 for the smallest, 5~nm, loop. As the $\langle a \rangle^{\ttm{1ord}}_{\ttm{vac}}$ loop diameter increases, the ellipticity increases roughly linearly to $\sim0.1$ at 110~nm. The ellipticities for $\langle a \rangle^{\ttm{1ord}}_{\ttm{sia}}$ loops follow a similar pattern to those for $\langle a \rangle^{\ttm{1ord}}_{\ttm{vac}}$ loops, demonstrating that ellipticity is not a function of loop character.
The results for the 2nd prismatic plane follow a similar trend, although the ellipticities are higher. For $\langle a \rangle^{\ttm{2ord}}_{\ttm{vac}}$ loops the ellipticity is $\sim0.1$ for the 5~nm loop, increasing linearly to $\sim0.2$ for the 110~nm loop. As with the 1st prismatic $\langle a \rangle$ loops{}, ellipticity does not change with character.\
The ellipticity fitting data in Table~\ref{aLpEtableFitted} shows that in all cases $\lambda_{\ttm{a}}$ is greater than $\lambda_{\ttm{c}}$. The values of $\bar{\lambda}$ compare well with those calculated from the energy fitting of $\langle a \rangle$ loops{}, shown in Table~\ref{aLpEtable}. The differences in these values are partly due to the energy fitting series containing a variety of loop sizes, whereas the values for the ellipticity fitting were taken from a loop with 100~nm diameter.
For vacancy $\langle a \rangle$ loops{}, throughout the experimental data series the ellipticities are greater than those of the simulation series. This agrees with Woo's 1988 theory, which postulates that for vacancy $\langle a \rangle$ loops{} ellipticity will be increased by DAD effects, above that expected due to crystal anisotropy and elastic anisotropy \cite{woo1988theory}. For interstitial $\langle a \rangle$ loops{}, there is again an agreement with Woo's 1988 theory, as the experimental data show an ellipticity plateau at $\sim0.1$, which is lower than the ellipticity of the simulated interstitial loops, which do not include the effects of DAD. Note that at small diameters the $\langle a \rangle^{\ttm{1ord}}_{\ttm{sia}}$ loops have lower ellipticities than the experimental interstitial data. However, the experimental loops are a mix of those on the 1st and 2nd prismatic planes. If the simulated $\langle a \rangle^{\ttm{1ord}}_{\ttm{sia}}$ loops and $\langle a \rangle^{\ttm{2ord}}_{\ttm{sia}}$ loops were similarly mixed, by averaging their ellipticities, the simulated interstitial ellipticities would certainly be higher than the experimental ones. The fact that the simulated and experimental ellipticities are closer at small diameters could be an indication that DAD is more influential for larger loops, as these capture more diffusing defects.\
When stating that, considering only elastic anisotropy, vacancy and interstitial $\langle a \rangle$ loops{} would have equal ellipticity, Woo referenced a 1971 TEM study by Brimhall et al.~as justification \cite{woo1988theory,brimhall1971microstructural}. However, Brimhall et al.'s study was on $\alpha$-Ti, the micrographs showed only a small number of loops, and only one of these was identified as having interstitial character. Thus, Brimhall et al.'s study was of limited statistical validity for assessing the relationship between ellipticity and loop character. Additionally, $\alpha$-Ti is an HCP crystal with a $c$:$a$ ratio less than the ideal value, and so DAD should alter loop ellipticity. Our present work provides independent evidence that vacancy and interstitial $\langle a \rangle$ loops{} have equal ellipticity when only static anisotropy effects are considered, supporting Woo's 1988 theory.
\iffalse %
\dc{put line study back in, to replace this paragraph: We conducted a further study into $\langle a \rangle$ loop{} ellipticity, using infinite dislocation lines of differing separation \cite{hulse2019simulating}. Concurring with our work shown here, this showed that $\lambda_{\ttm{a}}$ was greater than $\lambda_{\ttm{c}}$. This provided additional evidence that ellipticity does not depend on character and that without the influence of DAD, ellipticity depends on the relative formation energy of dislocation line segments in different directions. Additionally, $\alpha$ increased with separation of the dipole and this provides an explanation as to why $\langle a \rangle$ loop{} ellipticity increases with diameter.}
\fi
\subsubsection{Dislocation dipole energies}\label{sec:InfdislocLineDiffSep}
\noindent As shown above, the ellipticity of $\langle a \rangle$ loops{} varies with loop diameter. This may be due to strain interaction from opposing loop segments, making the energy cost of bringing those segments closer together prohibitively expensive. As diameter increases these opposing segments separate, lowering their mutual strain interaction, to the point where the segments on the minor axis can come together to the equilibrium distance dictated by the relative values of $\lambda_{\ttm{c}}$ and $\lambda_{\ttm{a}}$. Hence, this process may give an insight not only into ellipticity but also into the effective range of the interaction between dislocation line strain fields. To probe this, we created a series of infinite $[0001]$ dislocation line dipoles in a thin slab perpendicular to the dislocation line direction and with full periodic boundary conditions. We then varied the dipole separation from 5~nm to 100~nm. A second series was created, with a dipole line direction of $[2\bar{1}\bar{1}0]$. The line segments had $\bold{b}=1/3[\bar{1}2\bar{1}0]$ and together they compose a rectangular approximation to an $\langle a \rangle^{\ttm{1ord}}_{\ttm{vac}}$ loop. We repeated this procedure, using interstitial dipoles (i.e. enclosing a ribbon of interstitial defects) to replicate the line segments of $\langle a \rangle^{\ttm{1ord}}_{\ttm{sia}}$ loops.
\begin{figure}[!ht]\begin{center}
\includegraphics[width=0.9\figwidth]{./figures//lambdaVsSep.pdf} %
\captionsetup{justification=centering,singlelinecheck=false} %
\caption{Image (a) displays $\lambda$ as a function of dipole separation, for vacancy and interstitial dipoles in the \textit{c}-direction $[0001]$ and \textit{a}-direction $[2\bar{1}\bar{1}0]$. Image (b) displays ellipticity as a function of dipole separation, for vacancy and interstitial dipoles.}
\label{fig:dipoleE}
\end{center}
\end{figure}
The formation energies per unit length ($\lambda$) of the various dipoles, as a function of separation, are plotted in Fig.~\ref{fig:dipoleE}. To calculate $\lambda$, we obtained the formation energy of a dipole and subtracted from this the energy of the dipole surface. The dipole surface is the ribbon of defects that separates the dipole lines, and it has a formation energy of its own. Although in theory this surface is unfaulted, in practice it retains a small but finite surface energy. This surface energy is, as with $\langle a \rangle$ loops{}, due to the small disregistry of atoms, and so we used the values of $\gamma$ obtained from our study of $\langle a \rangle$ loops{} to calculate the surface energy for the dipoles.\
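The $\lambda$ extraction described above amounts to subtracting the ribbon's surface energy and dividing by the total dislocation line length. A minimal numerical sketch with entirely hypothetical energies (the real values come from the relaxed atomistic supercells and the fitted $\gamma$):

```python
# Hypothetical numbers for illustration only; the actual energies come from
# the relaxed supercells and the gamma fitted in the <a> loop study.
E_dipole = 1.6e3   # formation energy of the dipole supercell (eV), assumed
gamma = 0.1        # residual surface energy of the defect ribbon (eV/nm^2), assumed
L = 50.0           # dipole line length along the periodic direction (nm)
d = 20.0           # dipole separation (nm)

ribbon_area = L * d               # area of the defect ribbon between the two lines
E_surface = gamma * ribbon_area   # energy attributed to the ribbon, not the lines
lam = (E_dipole - E_surface) / (2 * L)   # two dislocation lines of length L each
print(lam)   # line energy in eV/nm per line
```

With these assumed inputs the recovered $\lambda$ is 15~eV/nm per line, i.e. of the same order as the fitted loop line energies, which is the only point of the sketch.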
For each defect type, $\lambda$ is higher for $[2\bar{1}\bar{1}0]$ than for $[0001]$, as expected. At small separations ($<$20~nm), $\lambda$ increases as separation increases. This is counter to expectations: like dipole lines repel, so increasing separation should lower $\lambda$. We postulate that this occurs because at these small separations the dipole lines' strain fields overlap very strongly. The resulting strain field is no longer a simple superposition of the two separate dislocations' strain fields, and so $\lambda$ has a lower magnitude than expected. Above separations of $\sim$20~nm the dipole lines act as distinct dislocations, and in this regime $\lambda$ decreases as separation increases. In concurrence with our $\langle a \rangle$ loop{} study in Section~\ref{aLoopPlane}, $\lambda$ is lower for interstitial dipoles than for vacancy dipoles.
Fig.~\ref{fig:dipoleE}b shows the ellipticity, as a function of dipole separation, implied by the dipole energy calculations. In this situation the ribbon energy is analogous to the platelet energy and has already been removed, leaving the ellipticity dependent on the line energy discrepancy only. We therefore approximate the ellipticity as $1-1/\alpha$.
The ellipticities for vacancy and interstitial dipoles are similar, beginning at $\sim$0.02 for small separations before rising as separation increases, and reaching $\sim$0.13 at 90~nm for both defect types. The dipoles are on the 1st order prismatic plane and so we compare these results to the $\langle a \rangle^{\ttm{1ord}}$ loop ellipticity results. This comparison is displayed in Fig.~\ref{fig:ellipsLoopsAndDipoles} and shows a clear similarity between the ellipticities from our $\langle a \rangle$ loop{} and dislocation dipole studies. This provides additional evidence that ellipticity does not depend on character and that, without the influence of DAD, ellipticity depends only on the relative formation energy of dislocation line segments in different directions and will be small.
\begin{figure}[htbp!]\begin{center}
{\includegraphics[width=\figwidth]{./figures/ellipVsSepDipoleWithLoopData.pdf}}%
\caption{Ellipticities of dislocation dipoles and $\langle a \rangle^{\ttm{1ord}}$ loops, as a function of separation for the dipoles and loop diameter for the $\langle a \rangle^{\ttm{1ord}}$ loops. For $\langle a \rangle^{\ttm{1ord}}$ loops, data is included from the simulation results from Section~\ref{aLoopPlane} and experimental results from \cite{jostsons1977nature}.}
\label{fig:ellipsLoopsAndDipoles}
\end{center}
\end{figure}
\subsection{Interstitial $\langle c/2+p \rangle$ loops and $\langle c \rangle$ loops.}\label{sec:SIAcLps_doub_cLps} %
\subsubsection{Interstitial $\langle c/2+p \rangle$ Loops.}
\noindent $\langle c/2+p \rangle$ loops present in irradiated Zr are thought to be vacancy in character \cite{onimus2012radiation,griffiths1987formation}. This is central to the IIG theory that attributes growth to the effective transfer of atoms from basal to prismatic planes \cite{holt1988mechanisms}. Holt proposed that the loss of atoms from basal planes occurs because $\langle c/2+p \rangle$ loops{} form on the basal planes and those $\langle c/2+p \rangle$ loops{} are vacancy in character \cite{holt1988mechanisms}. However, little has been done to determine whether interstitial $\langle c/2+p \rangle$ loops{} are feasible. To probe the stability of interstitial $\langle c/2+p \rangle$ loops{} we constructed a series of such loops of various diameters and fitted their formation energies to the same function used for vacancy $\langle c/2+p \rangle$ loops{}. The results and fitted functions are shown in Fig.~\ref{fig:SIAcLps}. These values can be compared with the formation energy per point defect for vacancy $\langle c/2+p \rangle$ loops{}, to give an indication of whether interstitial $\langle c/2+p \rangle$ loops{} are realistically stable. The $\gamma$ and $\lambda$ values pertaining to interstitial $\langle c/2+p \rangle$ loops{} are reported in Table~\ref{sect1.1table}.\
\begin{figure}[htbp!]\begin{center}
{\includegraphics[width=\figwidth]{./figures/pap1_cLpSiaE.pdf}} %
\caption{Formation energy per defect, as a function of radius, for interstitial and vacancy $\langle c/2+p \rangle$ loops.}
\label{fig:SIAcLps}
\end{center}
\end{figure}
Fig.~\ref{fig:SIAcLps} shows that the energies of vacancy $\langle c/2+p \rangle$ loops and interstitial $\langle c/2+p \rangle$ loops are similar, suggesting that interstitial $\langle c/2+p \rangle$ loops are not energetically infeasible. Thus, their absence may be due to the effects of DAD, which inhibits capture of interstitials by basal planar defects because interstitials diffuse preferentially along basal planes where they are more likely to be captured by prismatic planar defects (i.e. $\langle a \rangle$ loops{}) \cite{woo1988theory}. However, because little experimental effort has been made to identify the presence of interstitial $\langle c/2+p \rangle$ loops, future experimental work in this area would be valuable, possibly via X-ray diffraction line profile analysis \cite{topping2018investigating}.
\subsubsection{Transition from vacancy $\langle c/2+p \rangle$ loops{} to $\langle c \rangle$ loops{}.} %
\noindent As $\langle c/2+p \rangle$ loops{} contain an $I_1$ stacking fault, there is a contribution to their formation energy that scales as the square of their radius. However, if two basal layer platelets, rather than one, are removed from the crystal, an unfaulted dislocation loop is created. As this double layered c-component loop has $\textbf{b}=[0001]$, we term it a $\langle c \rangle$ loop{}. The dislocation line energy per unit length for a $\langle c \rangle$ loop{}, $\lambda_{\langle c \rangle}$, is expected to be much greater than that of a $\langle c/2+ p \rangle$ loop{}, $\lambda_{\langle c/2+p \rangle}$, because the Burgers vector magnitude of the former is 62\% greater than that of the latter. Contrasting with this is the SFE contribution, which for the $\langle c/2+ p \rangle$ loop{} is $\gamma_{I1}$ but for $\langle c \rangle$ loops{} is expected to be close to zero. Thus, there will be a critical radius, $r^*_{\langle c \rangle}$, at which a $\langle c \rangle$ loop{} becomes energetically optimal over a $\langle c/2+ p \rangle$ loop{}. To find the value of $r^*_{\langle c \rangle}$ we constructed a series of $\langle c \rangle$ loops{} from 50~nm to 150~nm in a simulation box of constant size. Their energies were fitted, as in Section~\ref{cLpTrans}, using Eq.~\ref{eq:loopE}. This enabled us to find the crossover point of the energy functions for $\langle c/2+p \rangle$ and $\langle c \rangle$ loops{}, which are plotted in Fig.~\ref{fig:doub_cLps}. As the radius of a $\langle c/2+ p \rangle$ loop{} is larger by a factor of $\sqrt{2}$ than that of a $\langle c \rangle$ loop{} containing an equivalent number of defects, the crossover was expressed in terms of defect number; it occurs at around 100,000 vacancies, or a $\langle c/2+ p \rangle$ loop{} radius of 56.1~nm.
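The bookkeeping between radius and defect number can be checked directly: one basal layer of area $A$ contains $A/(\frac{\sqrt{3}}{2}a^2)$ atoms, and a $\langle c \rangle$ loop{} accommodates a given number of vacancies in half the area of a $\langle c/2+ p \rangle$ loop{}. A short sketch (ours, using a nominal Zr lattice parameter) confirms that a 56.1~nm single-layer loop holds roughly $10^5$ vacancies and that the radius ratio is $\sqrt{2}$.

```python
import numpy as np

a = 0.3232  # nominal Zr basal lattice parameter (nm)
atoms_per_area = 1.0 / (np.sqrt(3) / 2 * a**2)   # areal atom density of one basal layer

# A <c/2+p> loop removes ONE basal layer over its area; a <c> loop removes TWO.
def n_vac_single(r):
    """Vacancies in a single-layer <c/2+p> loop of radius r (nm)."""
    return np.pi * r**2 * atoms_per_area

def r_double(n_vac):
    """Radius of a double-layer <c> loop holding n_vac vacancies (nm)."""
    return np.sqrt(n_vac / (2 * np.pi * atoms_per_area))

# Consistency checks against the numbers quoted in the text:
n_at_crossover = n_vac_single(56.1)      # ~1e5 vacancies at r = 56.1 nm
ratio = 56.1 / r_double(n_at_crossover)  # single/double radius ratio
print(n_at_crossover, ratio)             # ratio is sqrt(2)
```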
This crossover radius, whilst large, is not beyond the limit of $\langle c/2+p \rangle$ loops{}, which have been observed in TEM studies of Zr as having larger radii than this \cite{topping2018effect,harte2017effect}. However, Griffiths identified a $\langle c \rangle$ loop{} in Zr via TEM \cite{griffiths1987formation}, suggesting that $\langle c \rangle$ loop{} formation is not impossible. Griffiths stated that the mechanism of formation for $\langle c \rangle$ loops{} is that ``vacancies condense on the existing faulted loops forming a double layer and an unfaulted lattice'' \cite{griffiths1987formation}. It could be that this mechanism does not allow the direct transition from a $\langle c/2+ p \rangle$ loop{} to a $\langle c \rangle$ loop{} and this transformation only occurs when a large $\langle c/2+ p \rangle$ loop{} is in a region rich in vacancies. We postulate that an alternative mechanism could occur here, where two $\langle c/2+p \rangle$ loops{} coalesce to form a $\langle c \rangle$ loop{}. The $\langle c \rangle$ loop{} formation phenomenon is worthy of further investigation.
\begin{figure}[!ht]\begin{center}
\includegraphics[width=\figwidth]{./figures/pap1UnfltRad2.pdf} %
\captionsetup{justification=centering,singlelinecheck=false} %
\caption{Formation energy per vacancy for $\langle c \rangle$ loops, as a function of vacancies in the loop. The data for $\langle c/2+p \rangle$ loops are shown for comparison, as these may be the defect from which $\langle c \rangle$ loops nucleate \cite{griffiths1987formation}. The crossover point where the energy per defect of $\langle c \rangle$ loops becomes lower than that of $\langle c/2+p \rangle$ loops is denoted as $r^*_{\ttm{unfaulted}}$. Data points represent the simulation results and lines are the model fit.}
\label{fig:doub_cLps}
\end{center}
\end{figure}
\subsection{Loop strain fields}\label{sec:strainInter} %
\noindent As briefly explained in Section \ref{sec:loopEllip}, a part of the formation energy of a dislocation loop is due to the interaction of the strain fields of opposing line segments across the centre of the loop. We can consider these opposing segments of the loop as dislocation line dipoles, where the strain interaction is that of one half of the dipole with the strain field of its counterpart.
To study strain fields inside loops, we took the series of loops of varying diameters that were constructed and relaxed in Section \ref{sec:nascLoops}. From these loops we calculated the elastic strain in the loop centre, using the Ovito software package \cite{stukowski2009visualization}.
Fig.~\ref{fig:innerStrain} shows the $[0001]$ normal elastic strain values, $\epsilon_{zz}$, at the $\langle c/2+ p \rangle$ loop{} centre, as a function of loop diameter. These results show that at small diameters, at and below 3~nm, the strain is high, but above 3~nm the strain rapidly decreases. This threshold, shown as a dashed line in Fig.~\ref{fig:innerStrain}, exists because at small diameters the defect becomes a cluster, containing a highly strained region. Above the threshold, the defect is a dislocation loop and as its diameter increases the highly strained regions close to the dislocation line move further away from the loop centre.
\begin{figure}[htbp!]\begin{center}
\includegraphics[width=0.8\figwidth]{./figures/intStrain.pdf}%
\captionsetup{justification=centering,singlelinecheck=false} %
\caption{Internal $\epsilon_{zz}$ values, in the centre of $\langle c/2+p \rangle$ loops as a function of diameter. The dashed line shows the threshold, above which the defects behave as loops. At and below this threshold the defects behave as clusters.}
\label{fig:innerStrain}
\end{center}
\end{figure}
Figure~\ref{fig:constraint} shows maps of $\epsilon_{zz}$ for $\langle c/2+p \rangle$ loops{} of varying sizes. There is a clear trend from the case of small loops (Fig.~\ref{fig:constraint10}), in which the strain is more contained within the loop, to large loops (Fig.~\ref{fig:constraint30}), where the strain field more closely resembles the superposition of the fields of dislocation line segments on opposing sides of the loop. This perhaps provides an explanation for why, with increasing irradiation dose, the $\langle a \rangle$ loops{} increase in number density and decrease in diameter, even though this entails an increase in the formation energy per point defect. The more confined strain fields of smaller loops will reduce the energy of elastic interaction between loops, allowing them to form closer together along their loop normal directions. The increase in energy of the loops per point defect is offset by a reduced interaction energy penalty.
\begin{figure}[!ht]\begin{center}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\linewidth]{./figures/strainConstraint10nmClp1pt5pct.pdf}%
\captionsetup{justification=centering,singlelinecheck=false} %
\caption{10~nm diameter loop.}
\label{fig:constraint10}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\linewidth]{./figures/strainConstraint15nmClp1pt5pct.png}
\captionsetup{justification=centering,singlelinecheck=false} %
\caption{15~nm diameter loop.}
\label{fig:constraint15}
\end{subfigure}
\
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\linewidth]{./figures/strainConstraint20nmClp1pt5pct.png}
\captionsetup{justification=centering,singlelinecheck=false} %
\caption{20~nm diameter loop.}
\label{fig:constraint20}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=\linewidth]{./figures/strainConstraint30nmClp1pt5pct.pdf}
\captionsetup{justification=centering,singlelinecheck=false} %
\caption{30~nm diameter loop.}
\label{fig:constraint30}
\end{subfigure}
\caption{Strain maps showing $\epsilon_{zz}$ for $\langle c/2+p \rangle$ loops of various diameters.}
\label{fig:constraint}
\end{center}
\end{figure}
\noindent The rate at which the strain fields decay outside the loops will determine how close together they will tend to form. TEM observations have seen irradiation-induced dislocation loops to be ordered at high densities, and typical separation distances have been observed. For example, Harte et al.\ observed $\langle c/2+p \rangle$ loops{} that had a $\sim$50~nm separation along $\langle0001\rangle$ \cite{harte2017effect}. %
To explore this ordering, we constructed a series of $\langle c/2+p \rangle$ loops{} in orthogonal supercells, varied their separations along $[0001]$ and calculated the strain at the mid-point between a pair of loops. The results are displayed in Fig.~\ref{fig:extStraincLps}, in which the strain, $\epsilon_{zz}$, reduces rapidly as the loop separation is increased before plateauing at a very small strain when separations are above $\sim$80~nm.
The experimentally determined loop number densities, line densities and diameters \cite{barashev2015theoretical,topping2017investigation,topping2018effect,harte2017effect} give an estimate of the spacing between $\langle c/2+p \rangle$ loops{} of 40--50~nm, which corresponds to strains between $\epsilon_{zz} = 0.422\%$ and $\epsilon_{zz} = 0.207\%$. This range is shown by the dotted lines in Fig.~\ref{fig:extStraincLps}. Clearly there is a complicated interplay here: as loop separation increases, the elastic interaction energy decreases, but the formation energy increases because there is a lower loop density (assuming a fixed number of point defects).
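As a rough consistency check (ours, not part of the original analysis): far from a prismatic loop the elastic field decays approximately like that of an elastic point dipole, i.e.\ as $d^{-3}$. The two midpoint strains quoted above are compatible with such a decay to within a few per cent.

```python
# Far from a dislocation loop the elastic field decays roughly like that of a
# point force dipole, i.e. as 1/d^3. Check whether the two quoted midpoint
# strains are consistent with such a decay (a sanity check, not a fit).
eps_40, eps_50 = 0.422, 0.207   # % strain at 40 nm and 50 nm separation (text)

measured_ratio = eps_40 / eps_50    # ~2.04
cubic_ratio = (50.0 / 40.0) ** 3    # ~1.95 for a pure d^-3 decay
print(measured_ratio, cubic_ratio)
```

The agreement (within $\sim$5\%) suggests the midpoint strain in this separation range is already dominated by the far-field part of the loop fields.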
\begin{figure}[htbp!]\begin{center}
{\includegraphics[width=0.75\figwidth]{./figures/extStrain.pdf}}%
\caption{External $\epsilon_{zz}$ values, in between two $\langle c/2+p \rangle$ loops, as a function of the separation of the loops. The dotted lines show the $\langle c/2+ p \rangle$ loop{} separation range that we estimate here as 40--50~nm.}
\label{fig:extStraincLps}
\end{center}
\end{figure}
\section{Conclusions.}
\noindent Increases in computing power now allow routine simulation of dislocation loop populations at experimentally relevant scales, and we believe that our simulated dislocation loops are larger than those previously simulated. We have been able to study the effects of dislocation lines and loop planar faults in combination, and to include the effects of strain on opposing loop segments. We believe this study is also the first to simulate elliptical loops. These techniques have enabled us to study dislocation loop sizes, configurations and densities in detail, and this has revealed new insights into loop behaviour. The most pertinent of these are:
\begin{itemize}
\item Our calculated value for the critical radius at which a $\langle c/2 \rangle$ loop{} transforms to a $\langle c/2+ p \rangle$ loop{}, $r^*_{\text{HE}\rightarrow \text{I1}}$, is 3.2~nm and this is close to that reported by Varvenne et al.~\cite{varvenne2014vacancy}. Thus, the radius at which we expect the $\text{HE}\rightarrow\text{I1}$ transformation to occur is small and is consistent with the notion that the $\langle c/2 \rangle$ loop{} is the precursor to the $\langle c/2+ p \rangle$ loop{}.
\item The energies of the 1st order sheared and 2nd order edge $\langle a \rangle$ loops{} are very similar, which may be the reason these loops do not inhabit a definite plane, but have a distribution between the two. The results we have presented suggest that $\langle a \rangle$ loops{} form on the 1st prismatic plane as pure edge loops, shear to create 1st order prismatic sheared loops and then rotate to inhabit the 2nd order prismatic plane as pure edge loops. However, their rotation onto the 2nd order prismatic plane may be inhibited by obstacles and there may be insufficient thermodynamic driving force to overcome these, due to the small difference in formation energies between 1st order sheared $\langle a \rangle$ loops{} and 2nd order edge $\langle a \rangle$ loops{}.
\item We have shown that, based on energy considerations alone, ellipticity is the same for interstitial and vacancy $\langle a \rangle$ loops{}. This was stated by Woo, but was not supported by strong evidence \cite{woo1988theory}. Our results support the predictions of Woo's 1988 theory by showing that without the effects of DAD, vacancy $\langle a \rangle$ loop{} ellipticity is lower than that experimentally observed and interstitial $\langle a \rangle$ loop{} ellipticity is higher. The results from our $\langle a \rangle$ loop{} study and our dislocation dipole study agree on this. Ellipticities for $\langle a \rangle$ loops{} on 2nd order prismatic planes are greater than those on 1st order prismatic planes.
\item We confirmed, in the $\langle a \rangle$ loop{} ellipticity study of Section~\ref{sec:loopEllip}, that the energy per line length is greater in $a$-directions than in $c$-directions for dislocations with $\bold{b}=1/3\langle1\bar{2}10\rangle$. This creates the correct energetics for ellipses with the major axis parallel to [0001]. Our study of dislocation dipoles revealed that the relative energy differences between these increase with line separation, explaining why $\langle a \rangle$ loops{} become more elliptical as their diameter increases.
\item Interstitial $\langle c/2+p \rangle$ loops{} are not energetically infeasible, so their existence cannot be ruled out. However, DAD effects may prevent interstitial $\langle c/2+p \rangle$ loops{} from growing. This warrants further experimental study.
\item A $\langle c/2+ p \rangle$ loop{} with radius greater than $\sim$50~nm would lower its energy if it could transform into a $\langle c \rangle$ loop{}. However, the mechanism for this transformation to occur may be restrictive. This aspect of $c$-component loops is ripe for further investigation.
\item Our results show that $\langle a \rangle$ loops{} have lower formation energy per defect than $\langle c/2+p \rangle$ loops{} of the same size. This indicates that $\langle c/2+p \rangle$ loops{} should not occur, but evidently they do. We postulate that many smaller $\langle a \rangle$ loops{} transform into a large $\langle c/2+ p \rangle$ loop{} because the formation energy per defect is lower for a large $\langle c/2+ p \rangle$ loop{} than for the same defects accommodated in many smaller $\langle a \rangle$ loops{}. Experimental results of Topping et al.~\cite{topping2018effect} agree with this notion: for Zircaloy-2 irradiated to 4.7~dpa, $\langle a \rangle$ loops{} had a diameter of $\sim$10~nm and $\langle c/2+p \rangle$ loops{} a diameter of $\sim$100~nm. Our model predicts, for these diameters, that the energy per vacancy will be lower for $\langle c/2+p \rangle$ loops{} than for $\langle a \rangle$ loops{}.
\indent Harte et al.~\cite{harte2017effect} produced Bright Field Scanning Transmission Electron Microscope (BF-STEM) images of proton irradiated Zircaloy-2, shown in Fig.~\ref{fig:BFStemLoops}. These provide additional evidence for the idea that $\langle c/2+p \rangle$ loops{} originate from ordered arrays of $\langle a \rangle$ loops{} as $\langle c/2+p \rangle$ loops{} appear anti-correlated to $\langle a \rangle$ loops{} in the $\langle a \rangle$ loops{}' rafts along the basal plane trace, creating gaps in these rafts.
\begin{figure}[htbp!]\begin{center}
{\includegraphics[width=1.0\figwidth]{./figures/harte2017.png}}
\caption{BF-STEM images of proton irradiated Zircaloy-2, where image a) was taken along $\langle 11\bar{2}0 \rangle$ and for b) \bold{g} = 0002. Therefore, $\langle a \rangle$ loops{} and $\langle c/2+p \rangle$ loops{} are visible in a), but $\langle a \rangle$ loops{} are invisible in b) (Reproduced from Harte et al.~\cite{harte2017effect}, available at https://doi.org/10.1016/j.actamat.2017.03.024, under the terms of the Creative Commons Attribution Licence (CC BY) https://creativecommons.org/licenses/by/4.0/.).}
\label{fig:BFStemLoops}
\end{center}
\end{figure}
\item We have shown that in small loops, the high-strain lobes emanating from the dislocation lines overlap strongly. As the diameter increases the strain extends further through the crystal and the strain at the loop centre decreases. However, as $\langle a \rangle$ loop{} diameter decreases the strain field is highly confined close to the loop and does not interact strongly with that of neighbouring loops. This allows $\langle a \rangle$ loops{} to position themselves closer together, which we postulate is the reason that $\langle a \rangle$ loop{} diameters reduce as irradiation proceeds and more point defects need accommodating in a given volume.
\end{itemize}
\section{Acknowledgements}
\noindent For providing us with sponsorship and support, we express gratitude to EDF and particularly Antoine Ambard of EDF. Additionally, we thank the Engineering and Physical Sciences Research Council for providing us with funding through Doctoral Training Centre in Advanced Metallic Systems grant (EP/G036950/1). CPR was funded by a University Research Fellowship of The Royal Society. Calculations made use of the University of Manchester's Computational Shared Facility.
\section{References}
\bibliographystyle{unsrt}
\section{Introduction}\label{sec:intro}
The diffuse interstellar bands (DIBs) are more than 300 absorption lines in the optical
spectrum that reside in the interstellar medium (\citealt{1934PASP...46..206M}, \citealt{1995ARA&A..33...19H}).
See for example \citet{2008ApJ...680.1256H} for a recent DIB inventory.
DIBs are ubiquitously present throughout the Galaxy and they have been detected also in other galaxies
(\citealt{2002ApJ...576L.117E}, \citealt{2005A&A...429..559S}, \citealt{2006A&A...447..991C}, \citealt{2007A&A...470..941C},
\citealt{2008A&A...485L...9C}, \citealt{2008A&A...480L..13C}, \citealt{2008A&A...492L...5C}).
Not a single carrier has been identified unambiguously yet.
Their relatively large widths argue against atoms and diatomic molecules in the gas phase.
Although their intensity is related to the extinction by dust grains, their (spectral) properties and behaviour
are more consistent with large gas-phase molecules (see also the review by \citealt{2006JMoSp.238....1S}).
In particular the substructure in several DIB profiles indicates that the carrier(s) are large gas phase molecules
(\citealt{1995MNRAS.277L..41S}, \citealt{1996A&A...307L..25E}, \citealt{2004ApJ...611L.113C}).
DIBs respond to the local environmental conditions, in particular to the effective strength of the UV field
(\emph{e.g.}\ \citealt{1997A&A...326..822C}, \citealt{2006A&A...447..991C}).
Their strength variation could reflect the local charge state balance of the carrier molecules
(\citealt{2005A&A...432..515R}, \citealt{Cox2006b}).
Therefore, specific groups of stable UV resistant molecules (such as PAHs, fullerenes and carbon chains) are commonly
postulated carrier candidates (\citealt{1995ARA&A..33...19H}).
Interstellar grains are known to become aligned when situated in a magnetic field which is evidenced by linear and circular
continuum polarisation (\emph{e.g.}\ \citealt{1965ApJ...141.1340S}).
The linear continuum polarisation can be described by the following empirical relation:
\begin{equation}
P(\lambda) = P_{\rm max}\ {\rm exp}(-K\ {\rm ln}^2(\lambda_{\rm max} / \lambda)), \label{eq:serkowski}
\end{equation}
with $K(\lambda_{\rm max})$ = (1.66$\pm$0.09) $\lambda_{\rm max}$ + (0.01$\pm$0.05) (\citealt{1965ApJ...141.1340S},
\citealt{1974AJ.....79..581C}, \citealt{1992ApJ...386..562W}).
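As a quick numerical illustration, the Serkowski law above can be evaluated directly. The minimal Python sketch below uses the HD\,197770 values $P_{\rm max}=3.83$\% and $\lambda_{\rm max}=0.49~\mu$m from Table~\ref{tb:targets}; the function name is ours, and the comparison wavelengths are arbitrary.

```python
import math

def serkowski(lam_um, p_max, lam_max):
    """Serkowski law with K(lam_max) = 1.66 * lam_max + 0.01."""
    K = 1.66 * lam_max + 0.01
    return p_max * math.exp(-K * math.log(lam_max / lam_um) ** 2)

# HD 197770 values (Table 1): P_max = 3.83 %, lambda_max = 0.49 micron.
print(serkowski(0.49, 3.83, 0.49))  # peak value, 3.83 %
print(serkowski(0.75, 3.83, 0.49))  # lower polarisation in the near-IR
```

The polarisation peaks at $\lambda_{\rm max}$ and falls off symmetrically in $\ln\lambda$ on either side of the peak.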
The wavelength dependency of the polarisation is mainly determined by the composition and size of the dust particles.
The polarisation efficiency, $P(\lambda)/A(V)$, is a function of various factors, such as porosity and shape (\citealt{2008JQSRT.109.1527V}).
For example, it has been well established that the silicate features at 9.7 and 18.5~$\mu$m show an excess of polarisation
(\citealt{1998MNRAS.299..743A}, \citealt{2000MNRAS.312..327S}).
Ice features also show polarisation (the 3.1~$\mu$m O-H stretching mode of water ice, \citealt{1996ApJ...461..902H};
features near 4.6--4.7~$\mu$m due to CO and CN-bearing species, \citealt{1996ASPC...97..243C}).
These detections have been taken as evidence for alignment of core/mantle grains in molecular clouds.
Thus, polarisation of the 3.4~$\mu$m C-H feature is expected if it is due to carbonaceous mantles on silicate cores.
However, no polarisation has been detected for the 3.4~$\mu$m feature (\citealt{2006ApJ...651..268C}).
\citet{1999ApJ...512..224A} obtained an upper limit of $\sim$0.06$\pm$0.13\% for $\Delta p$, which is a factor 5 below the predicted value
for $\Delta p_{9.7}/\tau_{3.4}$ of 0.4\%.
Several theoretical and experimental studies predict that also large (ionized) molecules, such as PAHs and fullerenes,
can align, via for example the Barnett effect, under certain physical conditions (see \emph{e.g.}\ \citealt{1990ApJ...356..507B},
\citealt{1992A&A...253..498R}, \citealt{1994MNRAS.268..713L}, \citealt{1997ApJ...478..395W}).
Depending on the polarisation (\emph{i.e.}\ parallel versus perpendicular) of the incident light, changes can be seen
in the electronic absorption spectra of large molecules that have some intrinsic asymmetry.
A summary of different proposed alignment mechanisms is given in {\citet{2007A&A...465..899C}}.
Therefore, the polarisation signal across a DIB profile could provide further constraints on the (molecular) properties of their carriers.
Spectropolarimetry of diffuse bands has been limited to about ten lines-of-sight and only nine individual DIBs, and no
DIB polarisation has yet been detected. Recent studies include those by
\citet{1992ApJ...398L..69A,1995ApJ...448L..49A}, \citet{1996ASPC...97..143S}, and \citet{2007A&A...465..899C}
(but see also references therein for earlier work).
The most recent and comprehensive study involved three sightlines and six DIBs \citep{2007A&A...465..899C}.
This study set the most stringent detection limits, between 0.01 and 0.14\%, for linear and circular polarisation of 6 narrow DIBs.
These values exclude classical grains as carriers of the $\lambda\lambda$~5780, 5797, 6613 and 6284 DIBs.
That the 6379 and 6613~\AA\ DIBs originate from (classical) grains could only be marginally excluded from
previous polarisation measurements.
This lack of line polarisation of DIBs implies that the DIB carriers are not embedded in or attached onto large - silicate - grains
(\emph{i.e.} those that produce optical extinction and polarisation), but might still be related to smaller - carbonaceous - grains,
\emph{i.e.} those that produce the far-UV extinction. The current constraint on the line polarisation is still consistent with a gas
phase carrier for which the polarisation signal could be very weak.
The aim of the present study is to ascertain whether or not the DIB carriers can give rise to an observable polarisation
and what that means for their identity.
There is no {\it a priori} way of knowing which, if any, DIB carriers are related to grains or molecules; therefore,
each DIB could give rise either to the significant line polarisation predicted for grains or to the very weak polarisation expected from molecules.
Note that only a few DIBs exhibit a strong correlation with each other thus indicating that the majority of the DIBs have different,
though possibly physically/chemically related, carriers (\citealt{1997A&A...326..822C}, \citealt{2010ApJ...708.1628M}).
We present new observations for two lines-of-sight previously studied but with an order of magnitude higher sensitivity
and for many more additional individual DIBs not included in spectropolarimetry studies before.
In particular, our study also includes weak DIBs and DIBs in the near-IR.
\begin{table}[th!]
\caption{Target and line-of-sight information (from literature).}
\centering
\begin{tabular}{lll}\hline\hline
ID & \object{HD 197770} & \object{HD 194279} \\ \hline
Ra (J2000)\tablefootmark{a} & 20:43:13.68 & 20:23:18.16 \\
Dec (J2000)\tablefootmark{a} & +57:06:50.4 & +40:45:32.6 \\
Spectral Type\tablefootmark{a} & B2\,III & B1.5\,Ia \\
distance (pc) & $\sim$440\tablefootmark{g} & 1100\tablefootmark{i} \\
$B$ (mag)\tablefootmark{a} & 6.594 & 7.92 \\
$V$ (mag)\tablefootmark{a} & 6.341 & 7.09 \\
\ensuremath{{\mathrm E}_{\mathrm (B-V)}}\ (mag) & 0.58$\pm$0.04\tablefootmark{c}& 1.21 \\
$A_V$ (mag) & 1.61$\pm$0.14\tablefootmark{c} & 3.9 \\
$R_V$ & 2.8\tablefootmark{d}, 2.77$\pm$0.15\tablefootmark{c} & 3.2\tablefootmark{d}, 3.25$\pm$0.03\tablefootmark{e} \\
$P_{\rm max}$ (\%) & 3.83\tablefootmark{b}, 3.81\tablefootmark{f} & 2.77 \\
$\lambda_{\rm max}$ ($\mu$m) & 0.49\tablefootmark{b}, 0.51\tablefootmark{f} & 0.58 \\
$P_{\rm max}/A_V$ (\%/mag) & 2.4 & 0.71 \\
$P_{\rm max}/\tau_V$\tablefootmark{h} & 0.026 & 0.008 \\
\hline
\label{tb:targets}
\end{tabular}
\tablefoot{
\tablefoottext{a}{From Simbad}
\tablefoottext{b}{From \citet{1995ApJ...445..947C}}
\tablefoottext{c}{From \citet{2004ApJ...616..912V}}
\tablefoottext{d}{Computed from $R_V = 5.5 \lambda_{\rm max}$ (\citealt{1975ApJ...196..261S})}
\tablefoottext{e}{From \citet{2003AN....324..219W}}
\tablefoottext{f}{From \citet{1993ApJ...403..722W}}
\tablefoottext{g}{From \citet{1994ApJS...95..419D}}
\tablefoottext{h}{The optical depth $\tau_V = A_V/1.086$.}
\tablefoottext{i}{Spectroscopic parallax distance (\citealt{2002ApJ...567..391M}); see also Sect.~\ref{sec:hd194279}.}
}
\end{table}
\section{Spectropolarimetric observations}\label{sec:observations}
For the present study we obtained new spectropolarimetry data with ESPaDOnS at the Canada-France-Hawaii Telescope (CFHT).
The data were taken on 8-9 July and 21-25 July 2008 under good seeing ($\leq$ 1\arcsec) conditions.
ESPaDOnS is a high-resolution high-efficiency 2-fiber echelle spectrograph with polarising capabilities.
The resolving power of the spectropolarimeter is about 64\,000,
and it covers a wide spectral range, from 3700 to 10480~\AA, with only a few small gaps in the near-infrared.
We selected two reddened targets, HD\,197770 and HD\,194279, for an in-depth study of the interstellar
line polarisation. A summary of line-of-sight properties is provided in Table~\ref{tb:targets}.
Exposure times amounted to 3040 and 5840 seconds for each final Stokes $Q$, $U$ and $V$ spectrum (each consisting of four sub-exposures taken at
different positions of the retarder) for HD\,197770 and HD\,194279, respectively.
The observations were automatically reduced with Upena, which is CFHT's reduction pipeline for ESPaDOnS.
The Upena data reduction system uses Libre-ESpRIT which is a purpose built data reduction software tool (\citealt{1997MNRAS.291..658D}).
We opted for continuum-normalized spectra (since we are looking for line variation) and applied both the heliocentric velocity correction
and the radial velocity correction from telluric lines.
Thus we obtain total intensity Stokes $I$ spectra as well as Stokes $Q/I$ and $U/I$ (linear) and $V/I$ (circular) spectra normalized to a zero mean
(\emph{i.e.}\ the spectra are sensitive to line polarisation only).
The achieved signal-to-noise ratio (S/N) varies across the spectrum, but is about 700 and 1200 in the red range for HD\,194279 and HD\,197770, respectively.
In addition to the shape of the SED, the reduced S/N at the longest wavelengths is also due to the lower efficiency of the instrument
(mainly the detector) and at the shortest wavelengths due to stronger extinction by dust.
\section{Results and discussion}
\subsection{Polarisation and total intensity line profiles}
The polarisation efficiency of an absorption line can be written generally as
(adapted from \citealt{1974psns.coll..916G}, \citealt{1974ApJ...188..517M}, \citealt{1996ASPC...97..143S}):
\begin{equation}
\Delta P(\lambda) = f_{P}(\lambda)\ P(\lambda)\ \Delta\tau(\lambda) / \tau(\lambda)
\end{equation}
where $\tau(\lambda)$ and $P(\lambda)$ are the continuum optical depth and the continuum polarisation.
$\Delta\tau(\lambda)$ (\emph{i.e.} ${\rm ln}\,(I_{\rm c} / I_\lambda)$)
is the observed change in optical depth across the line profile
and $f_{P}(\lambda)$ is the polarisation efficiency factor across the line profile.
$P(\lambda)$ can be computed from Eq.~\ref{eq:serkowski} if $P_{\rm max}$ and $K$ are known.
Therefore, with $\tau_\lambda = A_\lambda/1.086$ we can write the previous as
\begin{equation}
\Delta P(\lambda) = f_{P}(\lambda)\ \Delta\tau(\lambda) \ 1.086\ P(\lambda)/A(\lambda)\label{eq:DeltaP}
\end{equation}
where $f_{P}(\lambda)$ is the unknown polarisation efficiency parameter and $A(\lambda)$ is the extinction curve which
depends on $A_V$ and $R_V$ (see \emph{e.g.} \citealt{1989ApJ...345..245C},
\citealt{1994ApJ...422..158O}, \citealt{1999PASP..111...63F}, \citealt{2007ApJ...663..320F}).
\citet{2008JQSRT.109.1527V} derive $P(\lambda)/A(\lambda) \propto \lambda^\epsilon$, with $\epsilon = 1.41$ for
HD\,197770 (from 1000~\AA\ to 1 $\mu$m), which matches with the power-law index of 1.4 computed for $P(\lambda)/A(\lambda)$
using Eq.~\ref{eq:DeltaP} and the $R_V = 2.8$ extinction curve (see \citealt{2007ApJ...663..320F}).
Applying the same procedure to HD\,194279 gives $\epsilon \sim 1.32$ in the optical.
Thus, the expected polarisation signal $\Delta P$ for a DIB depends on its wavelength.
For example, compared to DIBs near 5500~\AA\ (\emph{e.g.}\ the $V$ band), the near-infrared
DIBs ($\sim$7500~\AA) are predicted, for the same $f_P$, to give rise to a polarisation
signal a factor $\sim$1.5 stronger.
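The factor of $\sim$1.5 quoted above follows directly from the power-law form of $P(\lambda)/A(\lambda)$. A minimal sketch, assuming only $P(\lambda)/A(\lambda) \propto \lambda^{1.41}$ (the HD\,197770 index) and equal $\Delta\tau$ and $f_P$ at both wavelengths:

```python
def signal_ratio(lam1_aa, lam2_aa, epsilon=1.41):
    """Ratio of expected DIB polarisation signals Delta P at two
    wavelengths, for equal optical depth and efficiency f_P, using
    P(lambda)/A(lambda) proportional to lambda**epsilon."""
    return (lam2_aa / lam1_aa) ** epsilon

# Near-IR DIBs (~7500 A) versus DIBs near 5500 A:
print(signal_ratio(5500.0, 7500.0))  # ~1.5, as quoted in the text
```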
For grain-related polarisation the efficiency is predicted to be constant for a given transition,
so the polarisation profile would have the same shape as the optical depth profile.
\citet{1974ApJ...188..517M} estimated for a grain-related carrier that $f_P \approx$ 1.0 -- 1.8.
We can also write: $f_P = \frac{\Delta P}{\Delta \tau_\lambda}\ \frac{\tau}{P}$.
However, we have no a priori information on the polarisability efficiencies of the DIB
carriers, which would scale with the amount of material (\emph{i.e.} column density)
and could be much higher for carriers of the weak DIBs.
The measured equivalent width (or central depth) is proportional to both column density $N$ and oscillator strength $f$.
For example, a weak DIB could result from a small oscillator strength $f$ (or low abundance) of the particular
electronic transition, while the polarisation efficiency $f_P$ could be large relative to the line strength (or abundance).
Or vice versa, a strong DIB could be due to a carrier with a large oscillator strength but with a small
polarisation efficiency; \emph{i.e.}\ the ratio $f_P/f$ is not known for any of the DIB carrier candidates.
Thus, we stress the importance of investigating both strong and weak bands for polarisation features.
Intensity and polarisation spectra of CH, CH$^+$, CN, C$_2$, C$_3$, \ion{Na}{i}, \ion{Ca}{i}, \ion{Ca}{ii}, \ion{K}{i}
are shown in Figures~\ref{fig:atommol_hd197770} and~\ref{fig:atommol_hd194279} (Online).
Measurement of interstellar line strengths are given in Table~\ref{tb:is-lines}.
The polarisation spectra of the (di)atomic absorption lines do not reveal any line polarisation.
To exploit the large spectral range and high quality of the spectra obtained it is possible to explore the line
polarisation of not just the strongest DIBs but also those of moderate and weak strength, at both optical and near-infrared wavelengths.
First we focus on the strongest known diffuse bands. From the survey of HD\,183143 by \citet{1995ARA&A..33...19H} we select
all DIBs with central depths larger than 7\%; however, we omit some of the very broad bands (\emph{i.e.}\ those
at 5778, 6177, and 6533~\AA) as these are too difficult to detect in our echelle spectra.
This cut-off is somewhat arbitrary but at least ensures that all well-studied DIBs are included (Table~\ref{tb:poleff}).
Figures~\ref{fig:spectraIP_hd197770_strongest} and~\ref{fig:spectraIP_hd194279_strongest} show the observed line polarisation computed as follows:
\begin{equation}
\Delta P = \sqrt{\frac{1}{3} [(P_{\rm m}+Q/I)^2 + (P_{\rm m}+U/I)^2 + (P_{\rm m}+V/I)^2]} - P_{\rm m}
\end{equation}
for the 15 strong DIBs at $\lambda\lambda$4428, 5705, 5780, 5797, 6196, 6203, 6269, 6283, 6379, 6613, 6660, 6993, 7224, 8621, and 9577 toward
HD\,197770 and HD\,194279. Individual $V$, $Q$ and $U$ spectra are shown in Figures~\ref{fig:4stokes_hd197770_strongest}
and ~\ref{fig:4stokes_hd194279_strongest} (Online).
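The per-pixel combination of the zero-mean Stokes spectra into $\Delta P$ can be sketched as follows; the continuum polarisation level and the small Stokes excess used here are illustrative inputs, not measured values.

```python
import math

def delta_p(pm, q_i, u_i, v_i):
    """Line polarisation signal from the zero-mean normalized Stokes
    ratios Q/I, U/I, V/I and the continuum polarisation level pm."""
    return math.sqrt(((pm + q_i) ** 2 + (pm + u_i) ** 2
                      + (pm + v_i) ** 2) / 3.0) - pm

# With no line signal in any Stokes parameter, Delta P vanishes:
print(delta_p(0.0383, 0.0, 0.0, 0.0))  # 0 to machine precision
# A small excess in one Stokes parameter gives a positive Delta P:
print(delta_p(0.0383, 1e-3, 0.0, 0.0) > 0.0)
```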
Complementary to previous polarisation studies, and to account for the possibly strong alignment/polarisation signal
of weak(er) DIB carriers (see previous section), we also include a selection of these in this work.
We select sixteen DIBs of moderate and weak strength confirmed by several previous DIB surveys (\citealt{1994A&AS..106...39J},
\citealt{2000A&AS..142..225T}, \citealt{2009ApJ...705...32H}). In addition, we also selected 13 near-infrared DIBs for analysis.
In our selection we avoided DIBs that might be contaminated by either stellar or telluric lines, strong adjacent DIBs, or those that are too weak
to be detected in the most reddened sightline towards HD\,194279.
See Table~\ref{tb:poleff} and Figs.~\ref{fig:spectraIP_weak_hd197770} to~\ref{fig:spectraIP_nir_hd194279} for the list of DIBs and
the corresponding intensity and polarisation $\Delta P$ spectra.
Again, the corresponding $Q$, $U$, and $V$ spectra are shown in Figs.~\ref{fig:4stokes_weak_hd197770} to~\ref{fig:4stokes_nir_hd194279} (Online).
\begin{figure*}[ht!]
\centering
\includegraphics[angle=0,width=18cm,clip]{16365fig1.ps}
\caption{Total normalized intensity $I$ and polarisation $\Delta P$ spectra of 15 strongest DIBs toward HD\,197770.
The $\Delta P$ spectra are scaled 10x and displaced vertically for display.
DIB rest wavelengths from \citet{2008ApJ...680.1256H}, corrected for radial velocity of \ion{K}{i}, are shown as dashed vertical lines. }
\label{fig:spectraIP_hd197770_strongest}
\end{figure*}
\begin{figure*}[ht!]
\centering
\includegraphics[angle=0,width=18cm,clip]{16365fig2.ps}
\caption{Total normalized intensity $I$ and polarisation $\Delta P$ spectra of 15 strongest DIBs toward HD\,194279.
The $\Delta P$ spectra are scaled 10x and displaced vertically for display.
DIB rest wavelengths from \citet{2008ApJ...680.1256H}, corrected for radial velocity of \ion{K}{i}, are shown as dashed vertical lines. }
\label{fig:spectraIP_hd194279_strongest}
\end{figure*}
\begin{table*}[ht!]
\caption{Upper limits on the DIB carrier polarisation efficiency factor $f_P$. See text for details.}
\label{tb:poleff}
\centering
\resizebox{16cm}{!}{
\begin{tabular}{lll|lll|lll}\hline\hline
\multicolumn{9}{c}{Upper limits on the polarisation efficiency factor $f_P$} \\ \hline
DIB\tablefootmark{a} & HD\,197770 & HD\,194279 & DIB & HD\,197770 & HD\,194279 & DIB & HD\,197770 & HD\,194279 \\ \hline
4428.19 &$\sim$0.033 &$\sim$0.070 & 4979.61 & 0.15 & 0.39 & 7045.89 & 0.06 & 0.11 \\
5705.08 & 0.056 & 0.068 & 5844.92 & 0.09 & 0.16 & 7061.05 & 0.03 & 0.05 \\
5780.48 & 0.007 & 0.008 & 5849.81 & 0.02 & 0.03 & 7062.68 & 0.07 & 0.07 \\
5797.06 & 0.005 & 0.010 & 5973.81 & 0.08 & 0.24 & 7069.55 & 0.03 & 0.07 \\
6195.98 & 0.010 & 0.024 & 5975.75 & 0.03 & - & 7077.86 & - & 0.14 \\
6203.05 & 0.019 & 0.024 & 6027.68 & - & 0.24 & 7119.71 & 0.03 & 0.06 \\
6269.85 & 0.015 & 0.017 & 6065.28 & 0.05 & 0.15 & 7375.87 & - & 0.17 \\
6283.84 & 0.017 & 0.006 & 6089.85 & 0.02 & 0.07 & 7385.89 & 0.08 & 0.14 \\
6379.32 & 0.005 & 0.010 & 6113.18 & 0.02 & 0.08 & 7405.71 & 0.10 & 0.25 \\
6613.62 & 0.003 & 0.007 & 6139.98 & 0.04 & 0.13 & 7559.41 & 0.08 & 0.10 \\
6660.71 & 0.010 & 0.033 & 6234.01 & 0.04 & 0.08 & 7832.89 & 0.02 & 0.06 \\
6993.13 & 0.014 & 0.015 & 6244.46 & 0.07 & 0.17 & 7862.43 & 0.07 & 0.23 \\
7224.03 & 0.013 & 0.007 & 6308.80 & - & 0.14 & 8026.25 & 0.04 & 0.03 \\
8620.41 & 0.11 & 0.029 & 6317.86 & - & 0.07 & & & \\
9577.00 & - &$\sim$0.095 & 6425.66 & 0.03 & 0.07 & & & \\
& & & 6456.01 & 0.07 & 0.06 & & & \\
\hline
\end{tabular}
}
\tablefoot{
\tablefoottext{a}{DIB central wavelengths from \citet{2008ApJ...680.1256H} are with respect to \ion{K}{i}.
The central wavelength for the 8620 DIB is from \mbox{\citet{2008A&A...488..969M}} and for the 9577 DIB from \citet{1995ARA&A...33...19H}.
For DIB non-detections the upper limits could not be derived (``-'').}
}
\end{table*}
\subsection{Environmental conditions of the sightlines}
In this section we review the physical conditions of the interstellar medium towards HD\,197770 and HD\,194279.
\subsubsection{HD\,197770}
HD\,197770 is an evolved, spectroscopic, eclipsing binary with two B2 stars (\emph{e.g.}\ \citealt{1996PASP..108..401C}).
It lies on the edge of a large area of molecular clouds and star formation in the Cygnus region, at the edge of Cyg OB7 and Cep OB2
(\citealt{1998AJ....115.2561G}), at a distance of $\sim$440~pc (\citealt{1994ApJS...95..419D}).
The interstellar medium in front of HD\,197770 has also been studied extensively, in particular because it is currently
one of two sightlines (the other being HD\,147933-4) for which a polarisation feature (at a level of 0.4\% and efficiency
$\Delta p / \Delta\tau = 0.0017$)
corresponding to the 2175~\AA\ UV bump has been detected (\citealt{1992ApJ...385L..53C}, \citealt{1994ApJ...427L..47S},
\citealt{1994ApJ...431..783K}, \citealt{1995ASSL..202..271M}, \citealt{1997ApJ...478..395W}) as predicted by \citet{1989IAUS..135..313D}.
\citet{1992ApJ...385L..53C} and \citet{1997ApJ...478..395W} assigned the polarisation of the UV bump to small aligned graphite disks,
while other authors favour silicate grains (\citealt{1994ApJ...431..783K}).
The sightline shows a high continuum linear polarisation of almost 4\% in the optical (see also Table~\ref{tb:targets}).
The dust grains in this interstellar cloud are aligned, where $P_{\rm max} / \tau_V$ is 0.026,
which is close to optimal alignment (0.032; \citealt{1975ApJ...196..261S}).
From optical spectroscopy we observe strong CH and CN absorption lines, but a weak CH$^+$ line.
Column densities of \ion{Ca}{i}, \ion{Fe}{i}, CH, CH$^+$, CN and C$_2$ for the main velocity component at -3~km\,s$^{-1}$\
have been reported by \citet{1994ASPC...58...84H} and are consistent with our values (Table~\ref{tb:is-lines}).
The atomic and molecular line profiles show a single strong narrow component at a heliocentric velocity of -17~km\,s$^{-1}$.
The CN and CH lines are narrow, with FWHM of $\sim$0.07 to 0.09~\AA, while CH$^+$, \ion{Ca}{i}, and \ion{Ca}{ii} are
a little broader, with FWHM of $\sim$0.16 to 0.20~\AA.
Both the 5797 and 5780~\AA\ DIBs are weak, per unit reddening, with respect to the Galactic average.
However, the strength ratio of 5797 over 5780 is relatively high ($W(5797)/W(5780) = 0.58$),
typical of a translucent ($\zeta$-type) diffuse cloud.
This is also indicated by the low CH$^+$/CH ratio of 0.11 (lower than 0.5 is typical for a quiescent medium, no shocks).
Previously, \citet{1993ApJ...403..722W} invoked quiescence for this sightline to efficiently align the UV bump grains.
In summary, this sightline probes a {\it single dense quiescent interstellar cloud}.
\subsubsection{HD\,194279}\label{sec:hd194279}
HD\,194279 (Cyg OB9) is associated with the NGC\,6910 cluster, which has a distance between 1.7 -- 2.1~kpc
(\citealt{1956PDAO...10..201U}, \citealt{1930LicOB..14..154T}).
\citet{2002ApJ...567..391M} report a spectroscopic parallax distance of 1.1~kpc, while \citet{1994Ap.....37..317G} derived
a distance of 740~pc for the Cyg\,OB9 association.
The sightline toward HD\,194279 shows a more complex structure with components at -16.1, -9 and -2.6~km s$^{-1}$.
It shows multiple strong components in both CH and CH$^+$, whereas the CN line is very weak.
The CH$^+$, CH and CN lines have similar FWHM of 0.20, 0.22, and 0.17~\AA, respectively.
To explain high CH$^+$ abundances in the diffuse ISM both shocks and a strong UV field are invoked in cloud models (\citealt{1996A&A...307..271S}).
The C$_2$ transitions are detected, originating from the coldest component only.
It thus appears that HD\,194279 probes several diffuse clouds which in superposition cause significant reddening.
The dust grains in this interstellar cloud are not efficiently aligned as illustrated by a very low value for
$P_{\rm max} / \tau_V = 0.008$, which is far from optimal alignment.
The $W(5797)/W(5780)$ ratio in this line-of-sight is 0.32, which is close to the average Galactic value of $\sim$0.26 (Vos et al. in preparation).
Also the $W$(CH$^+$)/$W$(CH) ratio, 1.36, is intermediate, a sign of slightly enhanced production of CH$^+$ in this sightline.
From the line profiles of the atomic and molecular lines it is clear that this line-of-sight probes multiple diffuse cloud components,
for which the entire sightline gives average Galactic values for DIB strengths and molecular line ratios.
In summary, this sightline probes an {\it average of diffuse interstellar clouds}.
\subsection{DIB polarisation limits}\label{sec:discussion}
Previously, \citet{2007A&A...465..899C} provided linear polarisation 2$\sigma$ detection limits per FWHM of 0.04 -- 0.14\% for
HD\,197770 for the $\lambda\lambda$5780, 5797, 6196, 6283, 6379, and 6613 DIBs.
The corresponding polarisation detection limits per \AA\ (PDLA) are 0.06 to 0.19\%.
Alternatively, the corresponding $f_{\rm max}$ values are 0.31, 0.44, 0.45, 0.18, 0.47, and 0.68, respectively.
Circular line polarisation for these DIBs gave 2$\sigma$ (per 0.1~\AA) limits of 1.0 -- 2.5\% for HD\,197770.
In this work we derive new upper limits on the polarisation efficiency $f_P$ (\emph{i.e.}\ similar to $f_{\rm max}$ in \citealt{2007A&A...465..899C})
for 45 strong, weak and near-infrared DIBs (see Table~\ref{tb:poleff}).
$f_P(\lambda_{\rm peak})$ is computed from Eq.~\ref{eq:DeltaP} adopting
$P(\lambda)/A(\lambda) = P_{\rm max}/A_V$, $\Delta P(\lambda) = 2 \sigma_{\Delta P}$
(the standard deviation per pixel on $\Delta P$), and $\Delta\tau_{\rm peak} = {\rm ln}(1/(1 - {\rm central\ depth}))$.
The new constraints on the level of line polarisation are given in Table~\ref{tb:poleff}.
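The inversion of Eq.~\ref{eq:DeltaP} for the upper limit on $f_P$ can be sketched numerically. The input values below are illustrative only (they are not taken from the tables), chosen to reproduce the order of magnitude of the limits for the strong DIBs toward HD\,197770.

```python
import math

def fp_upper_limit(two_sigma_dp, central_depth, pmax_over_av):
    """Upper limit on the polarisation efficiency factor f_P, inverting
    Delta P = f_P * Delta tau * 1.086 * P/A, with Delta P = 2 sigma and
    Delta tau_peak = ln(1 / (1 - central depth))."""
    delta_tau = math.log(1.0 / (1.0 - central_depth))
    return two_sigma_dp / (delta_tau * 1.086 * pmax_over_av)

# Hypothetical DIB: 20 % central depth, 2 sigma_DP = 0.01 %, and
# P_max/A_V = 2.4 %/mag (the HD 197770 value from Table 1):
print(fp_upper_limit(0.01, 0.20, 2.4))  # ~0.017
```

As expected, a shallower DIB (smaller central depth) yields a weaker (larger) upper limit on $f_P$ for the same noise level.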
In this work we investigate two different types of interstellar clouds that show evidence of interstellar polarisation due to dust grains.
In neither of these two environments do we detect significant polarisation signals, with detection limits on $f_P$ of about 0.02 for the strongest
DIBs toward HD\,197770. For example, for the 8621 DIB we derive $f_P < 0.11$. The strength of this DIB is known to be
strongly correlated to the amount of interstellar dust and has been thus suggested to be related
more directly to grains than other DIBs (\citealt{2007PASP..119.1268W}, \citealt{2008A&A...488..969M}).
The theoretical $f_P$ for a classical grain carrier is a factor 10 higher than this new limit.
The 9577 DIB, attributed to the C$_{60}^+$ fullerene, also does not reveal line polarisation, with $f_P < 0.1$ (towards HD\,194279).
For the weak DIBs we obtain upper limits on $f_P$ of $\sim$0.02 to $\sim$0.2 towards both sightlines.
Again these limits suggest a non-grain related carrier.
The near-infrared DIBs suggest $f_P$ values between 0.02 to 0.10 for these bands (for the HD\,197770 sightline;
factor of two less stringent for the HD\,194279 sightline).
These low levels for the polarisation efficiency exclude typical ``classical'' dust grains as carriers, and thus
strongly reinforce the idea that all DIB carriers are gas-phase molecules.
In particular, polycyclic aromatic hydrocarbons (PAHs) and fullerenes are proposed as candidates for the DIB carriers (see the assessment
by \citealt{1996ApJ...458..621S}).
The presence of these large molecules in the ISM has been confirmed from their mid-infrared emission features in various astrophysical
environments (\citealt{2008IAUS..251..357S}, \citealt{2008ARA&A..46..289T}).
Recently, \citet{2009ApJ...698.1292S} investigated the polarised infrared emission by PAHs upon anisotropic irradiation by UV photons and the
subsequent alignment of the angular momentum and the principal axis of the PAH molecule. Conservation of angular momentum and partial
memory retention of the UV photon source direction lead to partial polarisation (of not more than a few \%) of the infrared emission.
This is an extension of the notion put forward by
\citet{1988prco.book..769L} that infrared emission features resulting from in-plane and out-of-plane modes should have orthogonal polarisation directions.
Additionally, Tolkachev and collaborators have shown from theoretical and experimental work that large molecules can show polarisation
signatures in fluorescent emission excitation lines (\emph{e.g.} \citealt{1994OptSp..77...38T}, \citealt{2009JApSp..76..806T}).
In the case of PAHs, it would be interesting to quantitatively assess the polarisation efficiency associated with the electronic
absorption of these molecules and their ions when aligned in an external magnetic field, and to compare this value to the values of $f_P$ derived from the observations.
Recent advances in quantum chemical calculations of PAH polarizability (\citealt{2007JChPh.127a4107M}) should make it possible
to quantify the polarisation associated with a population of PAH molecules or ions. Studies are ongoing in this direction and will be reported separately.
\section{Conclusion}
The results presented in this study show that:
\begin{enumerate}
\item The Polarisation Detection Limit per \AA\ in \% (\emph{i.e.}\ ${\rm PDLA} = 2\,\sigma_{\Delta P}({\rm per\ pixel}) / \sqrt{1/{\rm pixel\ size\ (\AA)}} \times 100$)
for the DIBs in the red and green spectral range (\emph{i.e.}
between 5700 and 7000~\AA) has typical values between 0.004 and 0.010\%, an order of magnitude improvement with
respect to previous limits.
\item None of the 45 DIBs measured and analysed in this work show unambiguous evidence for line polarisation.
\item For the strongest DIBs towards HD\,197770 the obtained upper limits on the polarisation efficiency factor are at least a
factor 10 smaller (and in some cases more than 300 times smaller) than those expected for classical grains.
\item For all 45 DIBs the derived $f_P$ is significantly less than 1, the lower limit predicted for carriers related to classical grains.
\end{enumerate}
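As a worked illustration of the PDLA definition in item 1 above (a sketch; the $\sigma_{\Delta P}$ and pixel-size values below are invented for illustration and are not taken from the observations):

```python
import math

def pdla(sigma_dp_per_pixel, pixel_size_aa):
    """PDLA in percent: 2 * sigma_{Delta P}(per pixel)
    / sqrt(1 / pixel size (Angstrom)) * 100."""
    return 2.0 * sigma_dp_per_pixel / math.sqrt(1.0 / pixel_size_aa) * 100.0

# e.g. a per-pixel polarisation uncertainty of 3e-4 on a 0.03 A/pixel grid:
print(round(pdla(3e-4, 0.03), 4))   # -> 0.0104
```

Binning over the pixels within one \AA\ reduces the per-pixel uncertainty by the square root of the number of pixels, which is what the division by $\sqrt{1/{\rm pixel\ size}}$ expresses.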
In summary, none of the diffuse bands of varying strengths and widths exhibit a polarised absorption spectrum,
neither for the dense cloud, with efficient grain alignment, in the line-of-sight towards HD\,197770, nor
for the diffuse clouds averaged along the line-of-sight towards HD\,194279.
Thus, it is likely that none of the DIB carriers measured in our study are directly related to grain-like carriers.
This includes the 8621~\AA\ DIB, for which a very good correlation with dust reddening has been observed.
Also, if DIB carriers are indeed related to large gas-phase molecules, it appears that these do not align efficiently in
the diffuse ISM and/or have a low polarisation efficiency.
\begin{acknowledgements}
We thank the CFHT queued service mode observers for help in preparing and executing the observations and help with the subsequent
processing of the spectral data.
P. Ehrenfreund is supported by NASA Grant NNX08AG78G and the NASA Astrobiology Institute (NAI).
This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
We are thankful to the IDL Astronomy Library maintained at the Goddard Space Flight Center \citep{1993ASPC...52..246L}.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
It is safe to say that in the study of the mathematics of origami, flat
origami has received the most attention. To put it simply, a {\em flat origami model} is
one which can be pressed in a book without (in theory) introducing new creases. We say
``in theory" because when one actually folds a flat origami model, slight errors in
folding will often make the model slightly non-flat. In our analysis, however, we
ignore such errors and assume all of our models are perfectly folded.
We also assume that our paper has zero thickness and that our creases have no width.
It is surprising how rich the results are using a purely combinatorial
analysis of flat origami. In this paper we introduce the basics of this approach,
survey the known results, and briefly describe where future work might lie.
First, some basic definitions are in order. A {\em fold}
refers to any folded paper object, independent of the number of folds done
in sequence. The {\em crease
pattern} of a fold is a planar embedding of a graph which represents the creases that
are used in the final folded
object. (This can be thought of as a structural blueprint of the fold.)
Creases come in two types: {\em mountain
creases}, which are convex, and {\em valley creases}, which are concave (see Figure
\ref{hufig1}). Clearly the type of a crease depends on which side of the paper we look at,
and so we assume we are always looking at the same side of the paper.
We also define a {\em mountain-valley (MV) assignment} to be a function mapping the set
of all creases to the set $\{M,V\}$. In other words, we label each crease mountain or
valley. MV assignments that can actually be folded are called {\em valid}, while those which
do not admit a flat folding (i.e. force the paper to self-intersect in some way) are called
{\em invalid}.
\begin{figure}
\centerline{\includegraphics[scale=.5]{hufigure1.eps}}
\caption{Mountain creases, valley creases, and the crease pattern for the flapping bird
with MV assignment shown.}\label{hufig1}
\end{figure}
There are two basic questions on which flat-folding research has focused:
\newcounter{hull1}
\begin{list}
{\arabic{hull1}.}{\usecounter{hull1}
\setlength{\parsep}{0in}\setlength{\itemsep}{0in}}
\item Given a crease pattern, without an MV assignment, can we tell whether it can flat fold?
\item If an MV assignment is given as well, can we tell whether it is valid?
\end{list}
These are also the focus of this survey. We will not discuss the special cases of flat origami
tessellations, origami model design, or other scientific applications.
\section{Classic single vertex results}
We start with the simplest case for flat origami folds. We define a {\em single vertex fold}
to be a crease pattern (no MV assignment) with only one vertex in the interior of the paper and
all crease lines incident to it. Intersections of creases on the boundary of the paper clearly
follow different rules, and nothing of interest has been found to say about them thus far
(except in origami design; see \cite{hulan1}, \cite{hulan2}).
A single vertex fold which is
known to fold flat is called a {\em flat vertex fold}. We present a few basic theorems
relating to necessary and sufficient conditions for flat-foldability of single vertex
folds. These theorems appear in their cited references without proof. While Kawasaki,
Maekawa, and Justin undoubtedly had proofs of their own, the proofs presented below appear in
\cite{huhul1}.
\begin{thm}[Kawasaki \cite{hukaw2}, Justin \cite{hujus1},
\cite{hujus2}]\label{hukj}
Let $v$ be a vertex of degree $2n$ in a single vertex fold
and let
$\alpha_1, ..., \alpha_{2n}$ be the consecutive angles between the creases. Then $v$ is
a flat vertex fold if and only if
\begin{equation}\label{huiso}
\alpha_1-\alpha_2+\alpha_3-\cdots -\alpha_{2n}=0.
\end{equation}
\end{thm}
\noindent{\bf Proof:} Consider a simple closed curve
which winds around the vertex. This curve mimics the path of an ant
walking around the vertex on the surface of the paper after it is folded. We
measure the
angles the ant crosses as positive when traveling to the left and negative when walking to
the right. Arriving at the point where the ant started means that this alternating sum is
zero. The converse is left to the reader; see
\cite{huhul1} for more details. $\Box$
\begin{thm}[Maekawa, Justin \cite{hujus2}, \cite{hukas}]\label{humj}
Let $M$ be the number of mountain creases and $V$ be the number of valley
creases adjacent to a vertex in a single vertex fold. Then $M-V=\pm 2$.
\end{thm}
\noindent{\bf Proof:} (Siwanowicz) If $n$ is the number of creases, then $n=M+V$.
Fold the paper flat and
consider the cross-section
obtained by clipping the area near the vertex from the paper; the cross-section
forms a flat polygon. If we view each interior
$0^\circ$ angle as a valley crease and each interior $360^\circ$ angle as a mountain
crease, then $0V+360M=(n-2)180=(M+V-2)180$, which gives $M-V=-2$. On the other hand,
if we view each
$0^\circ$ angle as a \emph{mountain} crease and each $360^\circ$ angle as a
\emph{valley} crease (this corresponds to flipping the paper over), then
we get $M-V=2$. $\Box$
\vspace{.1in}
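Both theorems translate into one-line numerical checks (a sketch; the function names are ours):

```python
def kawasaki(angles, tol=1e-9):
    """Kawasaki: the alternating sum of the consecutive angles around
    the vertex is zero (and a flat vertex fold has even degree)."""
    alt = sum(a if i % 2 == 0 else -a for i, a in enumerate(angles))
    return len(angles) % 2 == 0 and abs(alt) < tol

def maekawa(mv):
    """Maekawa: mountains minus valleys around the vertex is +-2."""
    return abs(mv.count('M') - mv.count('V')) == 2

print(kawasaki([90, 90, 90, 90]))       # True:  90 - 90 + 90 - 90 = 0
print(kawasaki([100, 80, 90, 90]))      # False: alternating sum is 20
print(maekawa(['M', 'M', 'M', 'V']))    # True:  M - V = 2
```

Note that these are necessary conditions only; as discussed below, they do not by themselves guarantee global flat-foldability.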
In the literature, Theorems \ref{hukj} and \ref{humj} are referred to as Kawasaki's Theorem
and Maekawa's Theorem, respectively.
Justin \cite{hujus3}
refers to equation (\ref{huiso}) as the {\em isometries condition}.
Kawasaki's Theorem is sometimes
stated in the equivalent form that the sum of alternate angles around $v$ equals
$180^\circ$, but this is only true if the vertex is on a flat sheet of paper.
Indeed, notice that the proofs of Kawasaki's and Maekawa's Theorems do not use the fact
that
$\sum \alpha_i = 360^\circ$. Thus these two theorems are also valid for single vertex
folds where $v$ is at the apex of a cone-shaped piece of paper. We will require
this generalization in sections 4 and 5.
Note that while Kawasaki's Theorem assumes that the vertex has even degree, Maekawa's
Theorem does not. Indeed, Maekawa's Theorem can be used to prove this fact. Let
$v$ be a single vertex fold that folds flat and let $n$ be the degree of $v$.
Then $n=M+V=M-V+2V=\pm 2 + 2V$, which is even.
\section{Generalizing Kawasaki's Theorem}
Kawasaki's Theorem gives us a complete description of when a single vertex in a crease
pattern will (locally) fold flat. Figure \ref{hufig2} shows two examples of crease patterns
which satisfy Kawasaki's Theorem at each vertex, but which will not fold flat. The example on
the left is from \cite{huhul1}, and a simple argument shows that no two of the creases $l_1,
l_2, l_3$ can have the same MV parity. Thus no valid MV assignment for
the lines
$l_1, l_2, l_3$ is possible. The example on the right has valid MV assignments, but still
fails to fold flat. The reader is encouraged to copy this crease
pattern and try to fold it flat, which will reveal that some flap of paper will have to
intersect one of the creases. However, if the location of the two
vertices is changed relative to the border of the paper, or if the crease $l$ is made longer,
then the crease pattern {\em will} fold flat.
\begin{figure}
\centerline{\includegraphics[scale=.5]{hufigure2.eps}}
\caption{Two impossible-to-fold-flat folds.}\label{hufig2}
\end{figure}
This illustrates how difficult the question of flat-foldability is for
multiple vertex folds. Indeed, in 1996 Bern and Hayes \cite{huber} proved
that the general question of whether or not a given crease pattern can fold flat is
NP-hard. Thus one would not expect to find easy necessary and sufficient conditions for
general flat-foldability.
We will present two efforts to describe general flat-foldability. The first has to do
with the realization that when we fold flat along a crease, one part of the paper is being
reflected along the crease line to the other side. Let us denote $R(l_i)$ to be the
reflection in the plane, $\mathbb R^2$, along a line $l_i$.
\begin{thm}[Kawasaki \cite{hukaw0}, \cite{hukaw3}, Justin \cite{hujus1},
\cite{hujus3}]\label{hureflect}
Given a multiple vertex fold, let $\gamma$ be any closed, vertex-avoiding curve
drawn on the crease pattern which crosses crease lines $l_1,...$, $l_{n}$, in order. Then,
if the crease pattern can fold flat, we will have
\begin{equation}\label{huref}
R(l_1)R(l_2)\cdots R(l_n)=I
\end{equation}
where $I$ denotes the identity transformation.
\end{thm}
Although a rigorous proof of Theorem \ref{hureflect} does not appear in the literature we
sketch here a proof by induction on the number of vertices. In the base case, we are given a
single vertex fold, and it is a fun exercise to show that condition (\ref{huref}) is
equivalent to equation (\ref{huiso}) in Kawasaki's Theorem (use the fact that the composition
of two reflections is a rotation). The induction step then proceeds by breaking the curve
$\gamma$ containing $k$ vertices into two closed curves, one containing $k-1$ vertices and one
containing a single vertex (the
$k$th).
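The product condition above is easy to verify numerically for a single Kawasaki-valid vertex, taking $\gamma$ to be a small loop crossing the creases $l_1,...,l_{2n}$ in order (a sketch; the degree-8 example vertex is ours):

```python
import numpy as np

def reflection(theta):
    """2x2 matrix of the reflection across the line through the
    origin at angle theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

# Consecutive angles of a vertex satisfying Kawasaki's condition:
angles = [20, 10, 40, 50, 60, 60, 60, 60]
# Crease directions l_1,...,l_8, with l_1 along the x-axis:
thetas = np.radians(np.cumsum([0] + angles[:-1]))

P = np.eye(2)
for t in thetas:                  # compose R(l_1) R(l_2) ... R(l_8)
    P = P @ reflection(t)
print(np.allclose(P, np.eye(2)))  # True
```

Since the composition of two reflections is a rotation, the product collapses to a rotation by twice the alternating sum of the crease directions, which is a multiple of $360^\circ$ exactly when Kawasaki's condition holds.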
The condition (\ref{huref}) is not a sufficient condition for
flat-foldability (the crease patterns in Figure \ref{hufig2} are counterexamples here as
well). In fact, as the induction proof illustrates, Theorem \ref{hureflect}
extends Kawasaki's Theorem to as general a result as possible.
In \cite{hujus3} Justin proposes a necessary and sufficient condition for general
flat-foldability, although as Bern and Hayes predicted, it is not very computationally
feasible. To summarize, let $C$ be a crease pattern for a flat origami model, but for the
moment we are considering the boundary of the paper as part of the graph. If $E$ denotes the
set of edges in $C$ embedded in the plane, then we call $\mu(E)$ the {\em f-net}, which is the
image of all creases and boundary of the paper after the model has been folded. We then call
$\mu^{-1}(\mu(E))$ the {\em s-net}. This is equivalent to
imagining that we fold carbon-sensitive paper, rub all the
crease lines firmly, and then unfold. The result will be the $s$-net.
Justin's idea is as follows: Take all the faces of the $s$-net which get mapped by $\mu$ to the
same region of the $f$-net and assign a {\em superposition order} to them in accordance to
their layering in the final folded model. One can thus try to fold a given crease pattern by
cutting the piece of paper along the creases of the $s$-net, transforming them under
$\mu$, applying the superposition order, and then attempting to glue the paper back together.
Justin describes a set of three intuitive {\em crossing conditions} (see \cite{hujus3}) which
must not happen along the $s$-net creases during the gluing process if the model is to be
flat-foldable -- if this can be done, we say that the {\em non-crossing condition} is
satisfied. Essentially Justin conjectures that a crease pattern folds flat if and only if the
non-crossing condition holds.
Although the spirit of this approach seems to accurately reflect the flat-foldability of
multiple vertex folds, no rigorous proof appears in the literature; it seems that this is an
open problem.
\section{Generalizing Maekawa's Theorem}
To extend Maekawa's Theorem to more than one vertex, we define interior vertices in a flat
multiple vertex fold to be {\em up vertices} and {\em down vertices} if they locally have
$M-V=2$ or
$-2$, respectively. We define a crease line to be an {\em interior} crease if its
endpoints lie in the interior of the paper (as opposed to on the boundary), and
consider any crease line with both endpoints on the boundary of the paper to
actually be two crease lines with an interior vertex of degree 2 separating them.
\begin{thm}[Hull \cite{huhul1}]
Given a multiple vertex flat fold, let $M$ (resp. $V$) denote the number of mountain (resp.
valley) creases, $U$ (resp. $D$) denote the number of up (resp. down) vertices, and $M_i$
(resp. $V_i$) denote the number of interior mountain (resp. valley) creases. Then
$$M-V=2U-2D-M_i+V_i.$$
\end{thm}
Another interesting way to generalize Maekawa's Theorem is to explore restrictions which turn
it into a sufficiency condition. In the case where all of the angles
around a single vertex are equal, an MV assignment with $M-V=\pm 2$ is guaranteed to be valid.
This observation can be generalized to sequences of consecutive equal angles around a
vertex.
Let us denote a single vertex fold by
$v=(\alpha_1,...,\alpha_{2n})$ where the $\alpha_i$ are consecutive angles between the
crease lines. We let
$l_1,..., l_{2n}$ denote the creases adjacent to
$v$ where $\alpha_i$
is the angle between creases $l_i$ and $l_{i+1}$ ($\alpha_{2n}$
is between $l_{2n}$ and $l_1$).
If $l_i,...,l_{i+k}$
are consecutive crease lines in a single vertex fold which have been given a MV
assignment, let $M_{i,...,i+k}$ be the number of mountains and $V_{i,...,i+k}$ be the
number of valleys among these crease lines. We say that a given MV assignment is valid for the
crease lines
$l_i, ..., l_{i+k}$ if the MV assignment can be
followed to fold up these crease lines without forcing the paper to self-intersect. (Unless
these lines include all the creases at the vertex, the result will be a cone.) The necessity
portion of the following result appears in \cite{huhul2}, while sufficiency is new.
\begin{thm}\label{hullth1}
Let $v=(\alpha_1,...,\alpha_{2n})$ be a single vertex fold in either a piece of paper or a
cone, and suppose we have
$\alpha_i= \alpha_{i+1}=\alpha_{i+2}=\cdots =\alpha_{i+k}$ for some
$i$ and $k$. Then a given MV assignment is valid for $l_i,..., l_{i+k+1}$ if
and only if
$$M_{i,...,i+k+1}-V_{i,...,i+k+1} = \left\{\begin{array}{cl}
0 & \mbox{when $k$ is even}\\
\pm 1 & \mbox{when $k$ is odd.}\end{array}\right.$$
\end{thm}
\noindent{\bf Proof:} Necessity follows by applications of Maekawa's
Theorem. If $k$ is even, then the cross-section of the paper around the
creases in question might look as shown in the left of Figure \ref{hufig3}. If we consider only
this sequence of angles and imagine adding a section of paper with angle $\beta$ to
connect the loose ends at the left and right (see Figure \ref{hufig3}, left), then we'll have
a flat-folded cone which must satisfy Maekawa's
Theorem. The angle $\beta$ adds two extra creases, both of which
must be mountains (or valleys). We may assume that the vertex points up, and thus
we subtract two from the result of Maekawa's Theorem to get
$M_{i,...,i+k+1}-V_{i,...,i+k+1}=0$.
\begin{figure}[h]
\centerline{\includegraphics[scale=.5]{hufigure4.eps}}
\caption{Applying Maekawa when $k$ is even (left) and odd (right).}
\label{hufig3}
\end{figure}
If $k$ is odd (Figure \ref{hufig3}, right), then this angle sequence, if considered by
itself, will have the loose ends from angles $\alpha_{i-1}$ and $\alpha_{i+k+1}$ pointing
in the same direction. If we glue these together (extending them if necessary)
then Maekawa's Theorem may be applied. After subtracting (or adding) one to the result of
Maekawa's Theorem because of the extra crease made when gluing the loose flaps, we get
$M_{i,...,i+k+1}-V_{i,...,i+k+1}=\pm 1$.
For sufficiency, we proceed by induction on $k$. The result is trivial for the base cases
$k=0$ (only one angle, and the two neighboring creases will either be M, V or V, M) and
$k=1$ (two angles, and all three possible ways to assign 2 M's and 1 V, or vice-versa, can
be readily checked to be foldable). For arbitrary $k$, we will always be able to find
two adjacent creases $l_{i+j}$ and $l_{i+j+1}$ to which the MV assignment assigns
opposite parity. Let $l_{i+j}$ be M and $l_{i+j+1}$ be V. We make these folds and we
can imagine that $\alpha_{i+j-1}$ and $\alpha_{i+j}$ have been fused into the other layers of
paper, i.e. removed. The value of
$M-V$ will not have changed for the remaining sequence $l_i, ..., l_{i+j-1}, l_{i+j+2}, ...,
l_{i+k}$ of creases, which are flat-foldable by the induction hypothesis. $\Box$
\section{Counting valid MV assignments}
We now turn to the question of counting how many different ways we can fold a flat origami
model. By this we mean, given a crease pattern that is known to fold flat, how many
different valid MV assignments are possible?
We start with the single vertex case. Let $C(\alpha_1,...,\alpha_{2n})$ denote the
number of valid MV assignments possible for the vertex fold $v=(\alpha_1,...,\alpha_{2n})$.
\begin{figure}[h]
\centerline{\includegraphics[scale=.6]{hufigure3.eps}}
\caption{The three scenarios for vertices of degree 4.}\label{hufig4}
\end{figure}
As an example, consider the case where $n=2$ (so we have 4 crease lines at $v$). We compute
$C(\alpha_1,\alpha_2,\alpha_3,\alpha_4)$ using Maekawa's Theorem. Its value will depend on the
type of symmetry the vertex has, and the three possible situations are depicted in Figure
\ref{hufig4}. $C(90,90,90,90)=8$ because any crease could be the
``odd crease out'' and the vertex could be up or down. In Figure
\ref{hufig4} (b) we have only mirror symmetry, and by Theorem \ref{hullth1},
$M_{2,3,4}-V_{2,3,4}=\pm 1$. Thus
$l_2,l_3,l_4$ must have 2 M's and 1 V or vice versa; this determines $l_1$'s parity, giving
$C(\alpha_1,...,\alpha_4)=6$. In Figure \ref{hufig4} (c)
$M_{1,2}-V_{1,2}=0$, so $l_1$ and $l_2$ can be M,V or V,M, and the other two must be
both M or both V, giving $C(\alpha_1,...,\alpha_4)=4$.
The example in Figure \ref{hufig4}(a) represents the case with no restrictions.
This appears whenever all the angles are equal around $v$, giving
$C(\alpha_1,...,\alpha_{2n})$ $= 2{2n\choose n-1}$. The idea in Figure
\ref{hufig4} (c), where we pick the smallest angle we see and let its creases be M,V or
V,M, can be applied inductively to give the lower bound in the following (see
\cite{huhul2} for a full proof):
\begin{thm}\label{hullth2}
Let $v=(\alpha_1,...,\alpha_{2n})$ be the vertex in a flat vertex fold, on either a
flat piece of paper or a cone. Then
$$2^n\leq C(\alpha_1,...,\alpha_{2n})\leq 2{2n\choose n-1}$$
are sharp bounds.
\end{thm}
A formula for $C(\alpha_1,...,\alpha_{2n})$ seems out of reach, but using the
equal-angles-in-a-row concept, recursive formulas exist to compute this quantity in
linear time.
\begin{thm}[Hull, \cite{huhul2}]\label{hullth3}
Let $v=(\alpha_1,...,\alpha_{2n})$ be a flat vertex fold in either a piece of paper
or a cone, and suppose we have
$\alpha_i= \alpha_{i+1}=\alpha_{i+2}=\cdots =\alpha_{i+k}$ and $\alpha_{i-1}>
\alpha_i$ and
$\alpha_{i+k+1}>\alpha_{i+k}$ for some
$i$ and $k$. Then
$$C(\alpha_1,...,\alpha_{2n}) =
{k+2\choose
\frac{k+2}{2}}C(\alpha_1,...,\alpha_{i-2},\alpha_{i-1}-\alpha_i+\alpha_{i+k+1},
\alpha_{i+k+2},..., \alpha_{2n})$$
if $k$ is even, and
$$C(\alpha_1,...,\alpha_{2n}) =
{k+2\choose \frac{k+1}{2}}C(\alpha_1,...,\alpha_{i-1},\alpha_{i+k+1}, ...,
\alpha_{2n})$$
if $k$ is odd.
\end{thm}
Theorem \ref{hullth3} was first stated in \cite{huhul2}, but the basic ideas behind it are
discussed by Justin in \cite{hujus3}.
As an example, consider $C(20,10,40,50,60,60,60,60)$.
Theorem \ref{hullth2} tells us that this quantity lies between 16 and 112. But using Theorem
\ref{hullth3} we see that $C(20,10,40,50,60,60,60,60)=$ ${2\choose 1}C(50, 50, 60,60,60,60)$
$= {2\choose 1}{3\choose 1}C(60,60,60,60)$
$= {2\choose 1}{3\choose 1}2{4\choose 1} =48$.
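A direct implementation of this recursion, together with the all-equal-angles base case, reproduces the computation above (a sketch; the cyclic bookkeeping is ours, and the input angles are assumed to satisfy Kawasaki's Theorem):

```python
from math import comb

def count_mv(angles):
    """Number of valid MV assignments C(a_1,...,a_{2n}) for a flat
    vertex fold whose angles satisfy Kawasaki's Theorem."""
    a = list(angles)
    m = len(a)
    if len(set(a)) == 1:                     # all angles equal
        return 2 * comb(m, m // 2 - 1)
    for i in range(m):
        if a[(i - 1) % m] <= a[i]:
            continue                         # need alpha_{i-1} > alpha_i
        k = 0
        while a[(i + k + 1) % m] == a[i]:
            k += 1                           # extend the run of equal angles
        if a[(i + k + 1) % m] <= a[i]:
            continue                         # need alpha_{i+k+1} > alpha_{i+k}
        # Rotate so the run occupies positions 1..k+1 of the list.
        rot = [a[(i - 1 + j) % m] for j in range(m)]
        if k % 2 == 0:                       # even k: fuse run into neighbors
            fused = rot[0] - rot[1] + rot[k + 2]
            return comb(k + 2, (k + 2) // 2) * count_mv([fused] + rot[k + 3:])
        else:                                # odd k: drop the run entirely
            return comb(k + 2, (k + 1) // 2) * count_mv([rot[0]] + rot[k + 2:])

print(count_mv((20, 10, 40, 50, 60, 60, 60, 60)))   # -> 48
print(count_mv((90, 90, 90, 90)))                   # -> 8
```

Each recursive step removes at least two angles, so the running time is linear in the degree of the vertex.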
Not much is known about counting valid MV assignments for flat multiple vertex folds.
While there are similarities with work done on counting the number of ways to fold up a grid of
postage stamps (see \cite{hukoe}, \cite{hulun1}, \cite{hulun2}), the questions asked are
slightly different. For other work, see \cite{hujus3} and
\cite{huhul2}.
\section{Conclusion}
In conclusion, the results for flat-foldability seem to have almost completely exhausted the
single vertex case. Open problems exist, however, in terms of global flat-foldability, and
very little is known about enumerating valid MV assignments for multiple vertex
crease patterns.
\section{Introduction}
Geophysical forecasting remains an active area of research because of its profound implications for economic planning, disaster management, and adaptation to climate change. Traditionally, forecasts for geophysical applications have relied on the confluence of experimental observations, statistical analyses, and high-performance computing. However, the high-performance computing aspect of forecasting has traditionally been limited to ensemble partial differential equation (PDE)-based forecasts of different weather models (for example, \cite{skamarock2008description,rogers2009ncep,yoo2013diagnosis}). Recently, there has been an abundance of publicly available weather data through modern techniques for remote sensing, experimental observations, and data assimilation within numerical weather predictions. Consequently, many researchers have attempted to utilize data for more effective forecasts of geophysical processes \cite{schneider2017earth,gentine2018could,brenowitz2018prognostic,rasp2018deep,o2018using}. For example, researchers have started building data-driven forecasts by emulating the evolution of the weather \emph{nonintrusively} \cite{chattopadhyay2020predicting,chattopadhyay2019data,chattopadhyay2019analog}. In this approach, forecasts are based on data-driven models by eschewing numerical equations. One rationale for these types of predictions is that equation-based forecasts are inherently limited since they do not capture all the relevant physical processes of the atmosphere or the oceans. More important, the data-driven models are attractive because they promise the possibility of overcoming the traditional limitations of equation-based models based on numerical stability and time to solution. 
Indeed, nonintrusive surrogate models for PDE-based systems have found popularity in many engineering applications \cite{taira2017modal,wang2019non,maulik2020reduced,qian2020lift} because they have been successful in reducing computational simulation campaigns for product design or in complex systems control
\cite{proctor2016dynamic,peitz2019multiobjective,noack2011reduced,rowley2017model}.
We focus on a particularly promising approach for nonintrusive modeling (or forecasting) involving the use of linear dimensionality reduction followed by recurrent neural network time evolution \cite{mohan2018deep,pawar2019deep}. This forecast technique compresses the spatiotemporal field into its dominant principal components by using proper orthogonal decomposition (POD) (also known as principal components analysis) \cite{kosambi2016statistics,berkooz1993proper}. Following this, the coefficients of each component are evolved by using a time series method. In recent literature, long short-term memory networks (LSTMs), a variant of recurrent neural networks, have been used extensively for modeling temporally varying POD coefficients \cite{rahman2019nonintrusive,maulik2020time}. The construction of an LSTM architecture for this purpose is generally based on trial and error, requires human expertise, and consumes significant development time. To address these limitations, we devise an automated way of developing POD-LSTMs using a neural architecture search (NAS) for a real-world geophysical data set, the NOAA Optimum Interpolation Sea-Surface Temperature (SST) data set, which represents a contribution over past POD-LSTM studies that have studied academic data sets alone. In particular, we leverage the NAS framework of DeepHyper \cite{balaprakash2019scalable} to automate the discovery of stacked LSTM architectures that evolve in time POD coefficients for spatiotemporal data sets. DeepHyper is a scalable open-source hyperparameter and NAS package that was previously assessed for automated discovery of fully connected neural networks on tabular data. In this study, we extend DeepHyper's capabilities for discovering stacked LSTM architectures by parameterizing the space of stacked LSTM architectures as a directed acyclic graph. We adopt the scalable infrastructure of DeepHyper using different search methods for POD-LSTM development. 
A schematic that describes our overall approach is shown in Figure \ref{fig:overall_schematic}. The main contributions of this work are as follows.
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.75\textwidth]{figures/SC_Schematic.png}
\caption{Our proposed NAS approach for automated POD-LSTM development. Snapshots of spatiotemporally varying training data are compressed by using proper orthogonal decomposition to generate reduced representations that vary with time. These representations (or coefficients) are used to train stacked LSTMs that can forecast on test data. The POD basis vectors obtained from the training data are retained for reconstruction using the forecast coefficients.
}
\label{fig:overall_schematic}
\end{figure*}
\begin{itemize}
\item We develop an automated NAS approach to generate stacked LSTM architectures for POD-LSTM to forecast the global sea-surface temperature on the NOAA Optimum Interpolation SST data set.
\item We improve the scalability of the NAS approach within DeepHyper by implementing aging evolution, an asynchronous evolutionary algorithm; and we demonstrate its efficacy for developing POD-LSTM.
\item We compare aging evolution with reinforcement learning and random search methods at scale on up to 512 nodes of the Theta supercomputer and show that the proposed approach has \revised{better scaling and node utilization}.
\item We show that automatically obtained POD-LSTMs compare favorably with manually designed variants and baseline machine learning forecast tools.
\end{itemize}
\section{Data set and preprocessing}
In this section we describe the data set and the POD technique we used for data compression.
\subsection{NOAA Optimum Interpolation Sea-Surface Temperature data set}
For our geophysical emulation we utilize the open-source NOAA Optimum Interpolation SST V2 data set.\footnote{Available at https://www.esrl.noaa.gov/psd/} Seasonal fluctuations in this data set cause strong periodic structure, although complex ocean dynamics still lead to rich phenomena. Temperature snapshot data is available on a weekly basis on a one-degree grid. This data set has previously been used in data-driven forecasting and analysis tasks (for instance, see \cite{kutz2016multiresolution,callaham2019robust}) particularly from the point of view of identifying seasonal and long-term trends for ocean temperatures by latitude and longitude. Each ``snapshot'' of data comprises a temperature field in an array of size 360 $\times$ 180 (i.e., the longitudes and latitudes of a one-degree resolution grid), which corresponds to the average sea-surface temperature magnitude for that week. Prior to its utilization for forecasting, a mask is used to remove missing locations in the array that correspond to the land area. The nonzero data points then are flattened to obtain an $\mathbb{R}^Z$-dimensional vector as our final snapshot for a week.
This data is available from October 22, 1981, to June 30, 2018 (i.e., 1,914 snapshots). We utilize the time period of October 22, 1981, to December 31, 1989, for training and validation (427 snapshots). The remaining data set (i.e., 1990 to 2018) is used for testing (1,487 snapshots). Note that this breakdown of the data set into training and testing is commonly used \cite{callaham2019robust} and the 8-year training period captures seasonal as well as subdecade trends in the data set. The training data is utilized to obtain data points given by a window of inputs and a window of outputs corresponding to the desired task of forecasting the future, given observations of the past sea-surface temperatures. Further specifics of the forecasting (i.e., the window of history interpreted as input and the length of the forecast) will be discussed in Section \ref{comp_fore}. These data points are then split into training and validation sets. We note that this forecast is performed non-autoregressively---that is, the data-driven method \emph{is not} utilized for predictions beyond the desired window size. Since this data set is produced by combining local and satellite temperature observations, it represents an attractive forecasting task for data-driven methods.
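The masking, flattening, and chronological split described above can be sketched as follows (the land mask and temperature values here are random stand-ins; the real data set supplies both):

```python
import numpy as np

weeks, nlat, nlon = 1914, 180, 360               # weekly snapshots, 1-degree grid
raw = np.random.rand(weeks, nlat, nlon)          # stand-in SST snapshots
land = np.random.rand(nlat, nlon) < 0.3          # True where there is land
snapshots = raw[:, ~land]                        # drop land points and flatten
                                                 # sea points -> shape (1914, Z)
train, test = snapshots[:427], snapshots[427:]   # 1981-1989 / 1990-2018 split
```

Boolean indexing with the two-dimensional sea mask collapses the latitude and longitude axes into the single $\mathbb{R}^Z$ vector per week used in the rest of the pipeline.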
\subsection{Compression and forecasting}
\label{comp_fore}
Here, we first review the POD technique for the construction of a reduced basis \cite{kosambi2016statistics,berkooz1993proper} for data compression (see \cite{taira2017modal} for POD and its relationship with other dimension-reduction techniques). The POD procedure involves identifying a space that approximates snapshots of a signal optimally with respect to the $L^2-$norm. The process of orthogonal basis ($\boldsymbol{\vartheta}$) generation commences with the collection of snapshots in the \emph{snapshot matrix},
\begin{align}
\mathbf{S} = [\begin{array}{c|c|c|c}{\hat{\mathbf{q}}^{1}_h} & {\hat{\mathbf{q}}^{2}_h} & {\cdots} & {\hat{\mathbf{q}}^{N_{s}}_h}\end{array}] \in \mathbb{R}^{N_{h} \times N_{s}},
\end{align}
where $N_s$ is the number of snapshots and $\hat{\mathbf{q}}^i_h \in \mathbb{R}^{N_h}$ corresponds to an individual $\mathbb{R}^{N_h}$ degree-of-freedom snapshot in time of the discrete solution domain with the mean value removed, namely,
\begin{align}
\begin{gathered}
\hat{\mathbf{q}}^i_h = \mathbf{q}^i_h - \mathbf{\bar{q}}_h, \quad
\mathbf{\bar{q}}_h = \frac{1}{N_s} \sum_{i=1}^{N_s} \mathbf{q}^i_h,
\end{gathered}
\end{align}
where $\bar{\mathbf{q}}_h \in \mathbb{R}^{N_h}$ is the time-averaged solution field. We note that $\mathbf{q}^i_h$ may be assumed to be any multidimensional signal that is subsequently flattened. Within the context of our geophysical data sets, these correspond to the flattened land or sea-surface temperature snapshots. Our POD bases can then be extracted efficiently through the method of snapshots, where we solve the eigenvalue problem on the correlation matrix $\mathbf{C} = \mathbf{S}^T \mathbf{S} \in \mathbb{R}^{N_s \times N_s}$. Then
\begin{align}
\begin{gathered}
\mathbf{C} \mathbf{W} = \mathbf{W} \Lambda,
\end{gathered}
\end{align}
where $\Lambda = \operatorname{diag}\left\{\lambda_{1}, \lambda_{2}, \cdots, \lambda_{N_{s}}\right\} \in \mathbb{R}^{N_{s} \times N_{s}}$ is the diagonal matrix of eigenvalues and $\mathbf{W} \in \mathbb{R}^{N_{s} \times N_{s}}$ is the eigenvector matrix. Our POD basis matrix can then be obtained by
\begin{align}
\begin{gathered}
\boldsymbol{\vartheta} = \mathbf{S} \mathbf{W} \in \mathbb{R}^{N_h \times N_s}.
\end{gathered}
\end{align}
In practice, a reduced basis $\boldsymbol{\psi} \in \mathbb{R}^{N_h \times N_r}$ is built by choosing the first $N_r$ columns of $\boldsymbol{\vartheta}$ for the purpose of efficient reduced-order models, where $N_r \ll N_s$. This reduced basis spans a space given by
\begin{align}
\mathbf{X}^{r}=\operatorname{span}\left\{\boldsymbol{\psi}^{1}, \dots, \boldsymbol{\psi}^{N_r}\right\}.
\end{align}
The coefficients of this reduced basis (which capture the underlying temporal effects) may be extracted as
\begin{align}
\begin{gathered}
\mathbf{A} = \boldsymbol{\psi}^{T} \mathbf{S} \in \mathbb{R}^{N_r \times N_s}.
\end{gathered}
\end{align}
The POD approximation of our solution is then obtained via
\begin{align}
\hat{\mathbf{S}} = [\begin{array}{c|c|c|c}{\tilde{\mathbf{q}}^{1}_h} & {\tilde{\mathbf{q}}^{2}_h} & {\cdots} & {\tilde{\mathbf{q}}^{N_{s}}_h}\end{array}] \approx \boldsymbol{\psi} \mathbf{A} \in \mathbb{R}^{N_h \times N_s},
\end{align}
where $\tilde{\mathbf{q}}_h^i \in \mathbb{R}^{N_h}$ corresponds to the POD approximation to $\hat{\mathbf{q}}_h^i$. The optimal nature of reconstruction may be understood by defining the relative projection error,
\begin{align}
\frac{\sum_{i=1}^{N_{s}}\left\|\hat{\mathbf{q}}^i_h-\tilde{\mathbf{q}}^i_h \right\|_{\mathbb{R}^{N_{h}}}^{2}}{\sum_{i=1}^{N_{s}}\left\|\hat{\mathbf{q}}^i_h\right\|_{\mathbb{R}^{N_{h}}}^{2}}=\frac{\sum_{i=N_r+1}^{N_{s}} \lambda_{i}^{2}}{\sum_{i=1}^{N_{s}} \lambda_{i}^{2}},
\end{align}
which shows that with increasing retention of POD bases, increasing reconstruction accuracy may be obtained.
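As a concrete illustration, the compression pipeline above (snapshot matrix, method of snapshots, reduced basis, coefficients, and reconstruction) can be sketched in a few lines of NumPy. The dimensions here are hypothetical toy values, and the reduced basis is explicitly normalized by $\sqrt{\lambda_i}$ so that $\boldsymbol{\psi}$ has orthonormal columns:

```python
import numpy as np

rng = np.random.default_rng(0)
N_h, N_s, N_r = 200, 40, 5               # hypothetical sizes; the paper uses N_s = 427

Q = rng.standard_normal((N_h, N_s))      # raw snapshots q_h^i as columns
S = Q - Q.mean(axis=1, keepdims=True)    # remove the time-averaged field

# Method of snapshots: eigendecomposition of the small N_s x N_s matrix.
C = S.T @ S
lam, W = np.linalg.eigh(C)
order = np.argsort(lam)[::-1]            # sort eigenpairs, largest first
lam, W = lam[order], W[:, order]

theta = S @ W                            # POD basis (N_h x N_s), unnormalized
psi = theta[:, :N_r] / np.sqrt(lam[:N_r])  # reduced orthonormal basis

A = psi.T @ S                            # temporal coefficients (N_r x N_s)
S_hat = psi @ A                          # rank-N_r POD reconstruction

# The relative projection error is governed by the discarded eigenvalues.
err = np.linalg.norm(S - S_hat) ** 2 / np.linalg.norm(S) ** 2
```

In practice, a truncated SVD of $\mathbf{S}$ yields numerically equivalent bases; the method of snapshots is preferred here because $N_s \ll N_h$.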
Following compression, the overall forecast task may be formulated after recognizing that the rows of $\mathbf{A}$ contain information about the different POD modes and the columns correspond to their variation in time. Therefore, one approach for forecasting is to predict the evolution of the $N_r$ coefficients in time. Once a forecast has been performed, the first $N_r$ bases may be used to reconstruct the snapshot (in the future). A popular approach for this forecasting task is to use data-driven time series methods. This is motivated by the fact that equations for the evolution of coefficients are nontrivial in the reduced basis. We can then generate training data by extracting the coefficients in $\mathbf{A}$ in a windowed-input and windowed-output form to completely define our forecast task. The state at a given time is represented by an $N_r$-dimensional column vector of POD coefficients. We fix the value of $N_r=5$, which captures approximately 92\% of the variance of the data. While a larger value of $N_r$ would lead to improved reconstruction of small-scale features, training a stable data-driven model to predict the lower-energy modes requires additional treatment.
As we demonstrate in Sec. \ref{science-results}, however, setting $N_r=5$ is sufficient to capture the seasonal and long-term trends in sea-surface temperature. Given $N_s$ snapshots of data, we choose every subinterval of width $2K$ as an example, where $K$ snapshots are the input and $K$ snapshots are the output. We utilize a randomly sampled 80\% of examples for training and utilize the remaining 20\% for validation. For our NOAA SST data set, we have $N_s=427$ snapshots, and we choose $K=8$, resulting in 1,111 examples. We note that we avoid any potential data leakage by testing on data points generated in entirely different years of our data (no overlap between training and testing data).
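One plausible stride-one enumeration of such windowed examples can be sketched as follows (the coefficient matrix and dimensions are hypothetical, and this simple sliding window is an assumption on our part; the exact enumeration used to arrive at the reported example count may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
N_r, N_s, K = 5, 427, 8

# Hypothetical coefficient matrix A (N_r x N_s), as produced by compression.
A = rng.standard_normal((N_r, N_s))

# Every contiguous subinterval of width 2K yields one (input, output) example:
# K snapshots of history map to K snapshots of the future.
X, Y = [], []
for t in range(N_s - 2 * K + 1):
    X.append(A[:, t:t + K].T)          # shape (K, N_r): input window
    Y.append(A[:, t + K:t + 2 * K].T)  # shape (K, N_r): output window
X, Y = np.array(X), np.array(Y)

# Random 80/20 split into training and validation examples.
idx = rng.permutation(len(X))
n_train = int(0.8 * len(X))
train_idx, val_idx = idx[:n_train], idx[n_train:]
```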
\section{NAS using DeepHyper}
In this section, we describe the stacked LSTM search space for POD-LSTM and the NAS methods employed to explore it.
\subsection{Stacked LSTM search space}
In DeepHyper, the search space of the neural architecture is represented as a directed acyclic graph. The nodes representing inputs and outputs of the deep neural network are fixed and respectively denoted as $\mathcal{I}_i$ and $\mathcal{O}_j$. All other nodes $\mathcal{N}_k$ are called intermediate nodes, each with a list of operations (choices). Each intermediate node is a constant, a variable, or a skip connection node. A constant node's list contains only one operation, while a variable node's list contains multiple options.
The formation of skip connections between the variable nodes is enabled by a skip-connection variable node. Given three nodes $\mathcal{N}_{k-1}, \mathcal{N}_{k}$, and $\mathcal{N}_{k+1}$, the skip connection node allows for the possible construction of a direct connection between $\mathcal{N}_{k-1}$ and $\mathcal{N}_{k+1}$. This skip connection node will have two operations, zero for no skip connection and identity for skip connection. In a skip connection, the tensor output from $\mathcal{N}_{k-1}$ is passed through a dense layer and a sum operator. Since the skip connections can be formed between variable nodes that have a different number of neurons, the dense layer is used to project the incoming tensor to the right shape when the skip connection is formed. The sum operator adds the two tensors from $\mathcal{N}_{k-1}$ and $\mathcal{N}_k$ and passes the resulting tensor to $\mathcal{N}_{k+1}$. Without loss of generality, the same process can be followed for any number of nodes. For example, in the case of three nodes, two skip-connection nodes will be inserted before an incumbent node. For the stacked LSTM discovery, we define $m$ variable LSTM nodes, where each node is an LSTM layer with a list of numbers of neurons as possible operations. Figure \ref{seach_space_schematic} shows an example LSTM search space with two variable nodes.
We note that the input and output nodes are determined by the shape of our training data and are immutable. We also note that the second dimension of a tensor that is transformed from input to output is kept constant for all experiments. This aligns with the temporal dimension of an LSTM and is not perturbed.
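The projection-and-sum mechanism of a skip connection can be illustrated with a small NumPy sketch (all shapes and weights below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
relu = lambda x: np.maximum(x, 0.0)

# Hypothetical hidden states from two variable nodes of different widths:
# N_{k-1} emits 32 features, N_k emits 64 features (batch of 4).
h_prev = rng.standard_normal((4, 32))
h_curr = rng.standard_normal((4, 64))

# When the skip-connection node selects "identity", the tensor from N_{k-1}
# is projected by a dense layer (no activation) to the width of N_k.
W_proj = 0.1 * rng.standard_normal((32, 64))
skip = h_prev @ W_proj

# The sum operator adds the two tensors; ReLU follows each add operation.
out = relu(h_curr + skip)   # passed on to N_{k+1}
```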
\begin{figure}
\centering
\includegraphics[width=0.36\textwidth]{sspace_part1.pdf}
\caption{Example of a stacked LSTM search space for POD-LSTM with two variable LSTM nodes in blue, $\mathcal{N}_{1}$ and $\mathcal{N}_{3}$. The skip-connection variable nodes are $\mathcal{N}_{2}$, $\mathcal{N}_{4}$, and $\mathcal{N}_{5}$. Dotted lines represent possible skip connections. The last layer is a constant LSTM(5) node to match the output dimension of five.}
\label{seach_space_schematic}
\end{figure}
\subsection{Algorithms for architecture discovery}
We use search methods in DeepHyper to choose from a set of possible integer values at each variable node. At the LSTM variable nodes, this choice decides the number of hidden layer neurons. At the skip connection variable nodes, this choice decides connections to previous layers of the architecture. For intelligently searching this space, we have implemented a recently introduced completely asynchronous evolutionary algorithm called aging evolution (AE) \cite{real2019regularized} within DeepHyper. In addition, DeepHyper supports two search methods for NAS: a parallel version of reinforcement learning (RL) \cite{balaprakash2019scalable, zoph2018learning} based on the proximal policy optimization \cite{schulman2017proximal} and a random search (RS).
\subsubsection{Aging evolution}
AE searches for new architectures by performing mutations without crossovers on existing architectures within a population. At the start of the search, a population of $p$ architectures is initialized randomly, and the fitness metric (for validation accuracy) is recorded.
Following this initialization, samples of size $s$ are drawn randomly without replacement. A mutation is performed on the architecture with the highest accuracy within each sample (the parent) to obtain a new (child) architecture. A mutation corresponds to choosing a different operation for one variable node in the search space. This is achieved by first randomly sampling a variable node and then choosing (again at random) a value for that node excluding the current value. The validation accuracy of the child architecture is recorded. The child then is added to the population by replacing the oldest member of the population. For the purpose of mutation, an architecture is interpreted to be a sequence of integers, and certain probabilistic perturbations to this sequence are performed to obtain new architectures. Over multiple cycles, better architectures are obtained through repeated sampling and mutation. The sampling and mutation operations are inexpensive and can be performed quickly. When AE completes an evaluation, another architecture configuration for training is obtained by performing a mutation of the previously evaluated architectures (stored for the duration of the experiment in memory) and does not require any communication with other compute nodes.
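The aging evolution loop described above can be sketched on a toy search space; the fitness function below is a hypothetical stand-in for the validation accuracy of a trained network:

```python
import collections
import random

random.seed(0)

# Toy search space mirroring the variable nodes: 5 choices, 6 operations each.
N_NODES, N_OPS = 5, 6
p, s = 20, 5   # population size and sample size

def fitness(arch):
    # Hypothetical stand-in for validation accuracy (higher is better).
    return -sum((a - 3) ** 2 for a in arch)

def mutate(arch):
    # Pick one variable node at random and assign a different operation.
    child = list(arch)
    node = random.randrange(N_NODES)
    child[node] = random.choice([op for op in range(N_OPS) if op != child[node]])
    return tuple(child)

# Random initial population; the deque keeps the oldest member at the left.
population = collections.deque(
    tuple(random.randrange(N_OPS) for _ in range(N_NODES)) for _ in range(p)
)

for _ in range(200):
    sample = random.sample(list(population), s)  # draw s without replacement
    parent = max(sample, key=fitness)            # fittest architecture in sample
    population.append(mutate(parent))            # evaluate and add the child
    population.popleft()                         # the oldest member dies

best = max(population, key=fitness)
```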
\subsubsection{Distributed RL method}
RL is a framework where an agent interacts (or multiple agents interact) with an environment by performing actions and collecting rewards and observations from this same environment. In our case, actions correspond to operation choices for variable nodes in the NAS search space. The reward is the accuracy computed on the validation set. The RL method in DeepHyper uses proximal policy optimization \cite{schulman2017proximal} with a loss function of the form
\begin{equation}
J_t(\theta) = \hat{\mathbb{E}}_t\left[ \min\left(r_t(\theta)\hat{A}_t,\; \operatorname{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon)\, \hat{A}_t\right) \right],
\end{equation}
where $r_t(\theta) = \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{\rm old}}(a_t|s_t)}$ is the ratio of action probabilities under the new and old policies; the clip operator restricts the ratio to the interval $[1-\epsilon, 1+\epsilon]$; and $\epsilon \in (0,1)$ is a hyperparameter (typically set to 0.1 or 0.2).
The clipping operation helps stabilize gradient updates. The method adopts the multimaster-multiworker paradigm for parallelization. Each master runs a copy of a policy and value neural network, termed an agent. It generates $b$ architectures and evaluates them in parallel using multiple worker nodes.
The $b$ validation metrics are then collected by each agent to do a distributed computation of gradients.
The agents perform an all-reduce with the mean operator on gradients and use that to update the policy and value neural network. This procedure was chosen instead of asynchronous update because it has been empirically shown to perform better (see, e.g., \cite{heess2017emergence}).
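The clipped surrogate objective can be sketched directly in NumPy (the batch values below are illustrative):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective: elementwise min, then batch average."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.mean(np.minimum(unclipped, clipped))

# Illustrative batch: ratios pi_new/pi_old and advantage estimates.
ratio = np.array([0.5, 1.0, 1.5])
adv = np.array([1.0, 1.0, -1.0])
obj = ppo_clip_objective(ratio, adv)  # 0.0 for this batch
```

Note that the clipping leaves gains from large ratio excursions uncredited while penalties (the third element above) pass through via the min, which is what stabilizes the gradient updates.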
\subsubsection{Random search method}
For comparison of all our methods, we also describe results from a random search algorithm that explores the search space of architectures by randomly assigning operations at each node. This search is embarrassingly parallel: it does not need any internode communication. The lack of an intelligent search, however, leads to architectures that do not improve over time or scale. Such results are demonstrated in our experiments as well.
\section{Experiments}
We used Theta, a 4,392-node, 11.69-petaflop Cray XC40–based supercomputer at the Argonne Leadership Computing Facility. Each node of Theta is a 64-core Intel Knights Landing processor with 16 GB of in-package memory, 192 GB of DDR4 memory, and a 128 GB SSD. The compute nodes are interconnected by an Aries fabric with a file system capacity of 10 PB. The software environment that we used consists of Python 3.6.6, TensorFlow 1.14 \cite{tensorflow2015-whitepaper} and DeepHyper 0.1.7. The NAS API within DeepHyper utilized Keras 2.3.1.
For the stacked LSTM search space, we used 5 LSTM variable nodes ($m=5$). This resulted in the creation of 9 skip connection variable nodes. The possible operations at each of the LSTM variable nodes were set to [Identity, LSTM(16), LSTM(32), LSTM(64), LSTM(80), LSTM(96)], representing an identity layer and LSTM layers with 16, 32, 64, 80, and 96 units, respectively. The dense layers for projection did not have any activation function. After each add operation, the ReLU activation function was applied to the tensor. For this search space, the total number of architectures was 8,605,184.
As a default, we evaluated the three search methods on 128 nodes. For scaling experiments, we used different node counts: 33, 64, 256, and 512. We used 33 (instead of 32) nodes because in RL we set the number of agents to 11 and adapt the number of workers per agent based on the node count as prescribed in \cite{balaprakash2019scalable}. This implies that given any number of compute nodes, 11 are reserved solely for the agents whereas the rest function as workers that are equally distributed to each agent. With 33 nodes, each agent is assigned 2 workers, for a total of 33 compute nodes being utilized. When 64 nodes are used, each agent is allocated 4 workers, for a total of 55 used nodes and 9 unused nodes. Similarly, for 128 compute nodes, each agent is allocated 10 workers, for a total of 121 utilized and 7 unused nodes. For 256 and 512 compute nodes, each agent is provided 22 and 45 workers, resulting in 3 and 6 unused nodes, respectively. The equal division of workers among agents is implemented for simplicity within DeepHyper.
For each node count, each search method was run for 3 hours of wall time.
Each evaluation in all three search methods involved training a generated network and returning the validation metric to the search method. The evaluation used only a single node (no multinode data-parallel training). The mean squared error was used for training the network, and the coefficient of determination ($R^2$) was used as a metric on the validation data set. The AE and RL methods were tasked with optimizing the $R^2$ metric by searching a large space of LSTM architectures. The RS method sampled configurations at random without any feedback. The training hyperparameters were kept the same in all of our experiments: batch size of 64, learning rate of 0.001, and 20 epochs of training with the ADAM optimizer \cite{kingma2014adam}. We set the maximum depth of the stacked LSTM network (an upper bound held constant in all our searches) to 5.
To assess the efficiency of the search, we tracked the averaged reward (i.e., the validation accuracy of our architecture) with respect to wall-clock time. To assess scaling, we recorded the averaged node utilization for each search; to assess the overall benefit, we selected the best architecture found during the search and assessed it on the test data set. For this, we performed posttraining, where the best-found architecture was retrained from scratch for a longer duration and tested on a completely unseen test data set. We note that our metrics (given by the reward and node utilization) were computed by using a moving window average of window size 100.
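A minimal sketch of such a trailing moving-window average is shown below (the handling of the first few points, where fewer than 100 samples exist, is our assumption):

```python
import numpy as np

def moving_average(values, window=100):
    """Trailing moving-window average; shorter windows are used at the start."""
    values = np.asarray(values, dtype=float)
    out = np.empty_like(values)
    for i in range(len(values)):
        out[i] = values[max(0, i - window + 1):i + 1].mean()
    return out

r = moving_average([1.0, 2.0, 3.0, 4.0], window=2)  # [1.0, 1.5, 2.5, 3.5]
```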
\subsection{Comparison of different methods}
Here, we compare the three search methods and show that AE outperforms RL and RS by achieving a higher validation accuracy in a shorter wall-clock time.
We ran AE, RL, and RS on 128 nodes on Theta. For the asynchronous algorithms of AE and RS, all nodes were considered workers because they are able to evaluate architectures independently of any master. For AE, we used a population size of 100 and a sample size of 10 to execute the mutation of architectures asynchronously.
For RS, random configurations are sampled independently of any other nodes. In contrast, RL relies on a synchronous update where the compute nodes are divided into agents and workers. Each agent is given an equal number of workers to evaluate architectures and is provided rewards before the agent neural networks are updated by using the policy gradients. We fixed the number of agents for this and all subsequent experiments to 11. For a 128-compute node experiment with 11 agents, we had 10 workers per agent node, for a total of 110 workers and 11 agents; 7 nodes remained idle.
\begin{figure}
\centering
\includegraphics[width=0.42\textwidth]{128_rw.png}
\caption{Comparison of search trajectories for AE, RL, and RS for 128 compute nodes on Theta. Each search was run for 3 hours of wall time. AE obtains optimal architectures in a much shorter duration. Both RL and AE are an improvement over random search methods.}
\label{fig:128_rw}
\end{figure}
Figure \ref{fig:128_rw} shows the search trajectory of validation $R^2$ of our three search strategies with respect to wall-clock time. We observe that AE reaches a validation $R^2$ value of 0.96 within 50 minutes. On the other hand, RL exhibits strong exploration in the beginning of the search and reaches an $R^2$ value comparable to that of AE only at 160 minutes. RS, without any feedback mechanism, finds architectures with $R^2$ values between 0.93 and 0.94. The results of RS show the importance of having a feedback-based search such as AE and RL.
The superior performance of AE can be attributed to its aging mechanism and the resulting regularization, as discussed in \cite{real2019regularized}. In AE, individuals in the population die quickly; an architecture can stay alive for a long time only through inheritance from parent to child over a number of generations. Each such inheritance entails retraining, and if the retraining accuracy is not high, the lineage is removed from the population. An architecture can therefore persist only when its retraining accuracy is high for multiple generations. Consequently, the aging mechanism helps navigate the training noise in the search process and provides a regularization mechanism. RL lacks such a regularization mechanism, and its slower convergence can be attributed to the synchronous gradient-update mechanism at the inter- and intra-agent levels.
\subsection{Posttraining and science results}
\label{science-results}
To ensure efficient NAS, one commonly solves a smaller problem during the search itself before utilizing the best discovered architecture for the larger training task. This approach helps us explore the large space of neural architectures more efficiently, and it allows us to handle larger data sets if needed. In this study, we utilized fewer epochs of training (20 epochs) during the search and retrained from scratch using a greater number of epochs during posttraining (100 epochs). Here, we show that retraining the best stacked-LSTM architecture obtained from AE results in significant improvement. \revised{This phase is called posttraining to differentiate it from the commonly used augmentation phase of many NAS algorithms, where the best architecture is augmented with additional layers and retrained on the full problem}.
Figure \ref{fig:networks} shows the best architecture found by AE with 128 nodes. One can observe the unusual nature of our network as evidenced by multiple skip connections. We utilized the best architecture found by AE (in terms of validation $R^2$) for posttraining and scientific assessments.
\revised{For posttraining, we used the same hyperparameters as specified in the NAS, with the exception of a longer training duration of 100 epochs (instead of 20 for the search). A final validation $R^2$ value of 0.985 was observed, and this trained architecture was used for our science assessments.} Our architecture search as well as our posttraining utilized a sequence-to-sequence learning task where the historical temperatures (in a sequence) were used to predict a forecast sequence of the same length (i.e., measurements of 8 weeks of sea-surface temperature data were utilized to predict the following 8 weeks). This may also be seen in the output space of the best-found architecture, where the second dimension of the output tensor is the same as that of the input.
\begin{figure}
\centering
\includegraphics[width=0.2\textwidth]{NOAA_Network.pdf}
\caption{Best-found LSTM architecture for the NOAA SST data set using the aging evolution search strategy on 128 compute nodes of Theta for 3 hours of wall time.}
\label{fig:networks}
\end{figure}
Figure \ref{fig:post_training} shows the forecasts for the POD coefficients on both the training and the testing data sets. While the forecasts on the training data (representing temperatures between 1981 and 1989) are very accurate, the forecasts on the test data (from 1990 to 2018) show a gradual increase in errors, with later data points far from the training regime. In this figure, the modes refer to the linear coefficients that premultiply the POD basis vectors prior to reconstruction in the physical space. They may be interpreted as the contributions of the different basis vectors (and correspondingly the different spatial frequency contents) to the evolving flow. One can also observe that the amount of stochasticity increases significantly with the modal number. This increase may be explained by the fact that while seasonal dynamics are cyclical and lead to repeating patterns at the global scale (modes 1, 2, and 3), small-scale fluctuations may be exceedingly stochastic (modes 4 and beyond). We note that there are no external inputs to our data set and that the networks are utilized for forecasting under the assumption that past information is always 100\% accurate. Simply put, the outputs of the LSTM forecast are not reused as inputs for future forecasts; the past is always known a priori.
\revised{We also performed comparisons with data extracted (within the appropriate time range) from the Community Earth System Model (CESM) \cite{kay2015community}. The CESM forecasts represent a state-of-the-art process-based climate modeling system based on coupled numerical PDE evaluations for atmospheric, oceanic, land carbon cycle, and sea-ice component models. These are in addition to diagnostic biogeochemistry calculations for the oceanic ecosystem and the atmospheric carbon dioxide cycle. The CESM forecast data is made publicly available for climatologists because of the large compute costs incurred in its simulation\footnote{http://www.cesm.ucar.edu/projects/community-projects/LENS/data-sets.html}. Figure \ref{fig:post_training} also shows the coefficients of the CESM data projected onto the NOAA POD modes. We note that the POD coefficients of the CESM forecasts tend to pick up trends in the large-scale features (i.e., modes 1 and 2) appropriately but show distinct misalignment with increasing modes. Another important fact is that the CESM model forecasts were performed on a finer grid (320 $\times$ 384 degrees of freedom for oceanic regions alone) and some errors may be due to cubic interpolation onto the remote sensing grid. We also note that the CESM forecasts are based on specifying one initial condition and running climate simulations for multiple coupled geophysical processes for centuries. In contrast, the POD-LSTM methodology detailed in this paper relies on a short (8-week) forecast given \emph{true} observations from the past. Therefore, we stress that the proposed emulation strategy and the long-term climate forecasts using PDEs are designed for different use cases. Although POD-LSTM provides more accurate estimates, CESM may provide better results for longer forecast horizons.}
\revised{To perform a fairer assessment of our proposed framework, we also tested our predictions against forecasts from the Global Hybrid Coordinate Ocean Model (HYCOM)---the current state-of-the-art SST forecast technique.\footnote{https://www.ncdc.noaa.gov/data-access/model-data/model-datasets/navoceano-hycom-glb} In contrast to CESM, HYCOM provides short-term 4-day forecasts at 3-hour time steps, updated daily, and relies on a 1/12 degree grid (i.e., 144 times finer than the NOAA grid). We used aggregate HYCOM data (currently available only between April 5, 2015, and June 24, 2018) for a comparison with our proposed model emulator. The extracted HYCOM data was interpolated onto the NOAA grid coordinates before an assessment of root mean square errors (RMSEs). In particular, we focus on RMSEs in the Eastern Pacific region (between -10 to +10 degrees latitude and 200 to 250 degrees longitude) because of the importance of predicting abrupt SST rises in this region as well as its distance from landmasses that may contribute to interpolation errors. In addition, we provide a weekly breakdown of metrics within this time period and also show comparisons with CESM in Table \ref{RMSE_Table}. The trained POD-LSTM is seen to be competitive with both process-based models. However, we highlight the fact that the slightly larger biases in the CESM and HYCOM data may be an artifact of interpolation from grids of different spatiotemporal resolution. In either case, the POD-LSTM represents a viable opportunity for accelerated forecasting without the traditional constraints of either system of PDEs.}
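The regional RMSE computation on a 1-degree grid can be sketched as follows (the temperature fields below are synthetic; only the Eastern Pacific box mirrors the text):

```python
import numpy as np

# Hypothetical 1-degree global grid matching the NOAA product.
lat = np.arange(-89.5, 90.5, 1.0)   # 180 latitudes
lon = np.arange(0.5, 360.5, 1.0)    # 360 longitudes

rng = np.random.default_rng(3)
truth = rng.standard_normal((lat.size, lon.size))          # synthetic SST field
forecast = truth + 0.1 * rng.standard_normal(truth.shape)  # synthetic forecast

# Eastern Pacific box: -10 to +10 degrees latitude, 200 to 250 degrees longitude.
box = ((lat >= -10) & (lat <= 10))[:, None] & ((lon >= 200) & (lon <= 250))[None, :]

rmse = np.sqrt(np.mean((forecast[box] - truth[box]) ** 2))
```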
\begin{table*}[]
\centering
\caption{RMSE breakdown (in Celsius) for different forecast techniques compared against the NAS-POD-LSTM forecasts between April 5, 2015, and June 24, 2018, in the Eastern Pacific region (between -10 to +10 degrees latitude and 200 to 250 degrees longitude). The proposed emulator matches the accuracy of the process-based models for this particular metric and assessment.}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{} & \multicolumn{8}{c|}{RMSE ($^\circ$Celsius) }\\
\hline
& Week 1 & Week 2 & Week 3 & Week 4 & Week 5 & Week 6 & Week 7 & Week 8 \\ \hline
Predicted & 0.62 & 0.63 & 0.64 & 0.66 & 0.63 & 0.66 & 0.69 & 0.65 \\ \hline
CESM & 1.88 & 1.87 & 1.83 & 1.85 & 1.86 & 1.87 & 1.86 & 1.83 \\ \hline
HYCOM & 0.99 & 0.99 & 1.03 & 1.04 & 1.02 & 1.05 & 1.03 & 1.05 \\ \hline
\end{tabular}
\label{RMSE_Table}
\end{table*}
\revised{An example forecast within the testing range (for the week starting June 14, 2015) is shown in Figure \ref{fig:NOAA_contour}, where the larger structures in the temperature field are captured effectively by the emulator. We note here that the POD-LSTM framework is fundamentally limited by the fact that the spectral content of the predictions can at best match the spectral support of the number of POD modes retained. Thus, we may interpret this to be training and forecasting on a filtered version of the true data set, where the truncation of the POD components leads to limited recovery of high-frequency information. We also observe that CESM and HYCOM predictions show qualitative agreement with the true data at larger scales. We show point estimates for forecasts from the different process-based frameworks at different locations in the Eastern Pacific ocean in Figure \ref{fig:NOAA_probe}, which show good agreement between HYCOM and POD-LSTM. CESM, as expected, is slightly inaccurate for short-term time scales because of its formulation. Both HYCOM and POD-LSTM capture seasonal trends appropriately within our testing period.}
\begin{figure*}
\centering
\mbox{
\subfigure[NOAA SST training data forecast]{\includegraphics[width=0.48\textwidth]{Train_Trajectory.png}}
\subfigure[NOAA SST testing data forecast]{\includegraphics[width=0.48\textwidth]{Test_Trajectory.png}}
} \\
\caption{Training (left) and testing (right) forecasts of the POD coefficients for the NOAA SST data set, displaying the a posteriori performance after surrogate training. Comparisons with CESM show that higher frequencies may be predicted more accurately for short-term forecasts using the proposed method.}
\label{fig:post_training}
\end{figure*}
\begin{figure*}
\centering
\subfigure[NOAA (Truth)]{\includegraphics[width=0.24\textwidth]{NOAA_Field.png}}
\subfigure[HYCOM]{\includegraphics[width=0.24\textwidth]{HYCOM_Field.png}}
\subfigure[CESM]{\includegraphics[width=0.24\textwidth]{CESM_Field.png}}
\subfigure[POD-LSTM]{\includegraphics[width=0.24\textwidth]{Predicted_Field.png}}
\caption{Sample averaged temperature forecasts in degrees Celsius for the week starting June 14, 2015 (within the testing regime for our machine learning framework). Note that CESM forecast horizons span several decades whereas HYCOM provides 4-day short-term forecasts.}
\label{fig:NOAA_contour}
\end{figure*}
\begin{figure*}
\centering
\mbox{
\subfigure[-5 $^{\circ}$ latitude, 210 $^{\circ}$ longitude]{\includegraphics[width=0.32\textwidth]{Probe_0.png}}
\subfigure[+5 $^{\circ}$ latitude, 250 $^{\circ}$ longitude]{\includegraphics[width=0.32\textwidth]{Probe_1.png}}
\subfigure[+10 $^{\circ}$ latitude, 230 $^{\circ}$ longitude]{\includegraphics[width=0.32\textwidth]{Probe_2.png}}
}
\caption{Temporal probes for the temperature at three discrete locations within the Eastern Pacific zone. HYCOM and POD-LSTM are shown to perform equally well while CESM makes slight errors in forecast due to its long-term forecast formulation. The data are plotted for weeks between April 5, 2015, and June 17, 2018.}
\label{fig:NOAA_probe}
\end{figure*}
\subsection{Comparisons with baseline machine learning models}
Here, we compare our automatically generated POD-LSTM network with manually generated POD-LSTM networks and with other classical forecasting methods and show the efficacy of our approach.
The classical forecasting assessments use linear, XGBoost, and random forest models within the non-autoregressive framework with no exogenous inputs; in other words, models are fit that map a historical sequence of temperatures to a forecast of the next sequence. These methods are all deployed within the \texttt{fireTS}\footnote{https://github.com/jxx123/fireTS} package; details may be found in \cite{xie2018benchmark}. Briefly, if our target is given by $\mathbf{a}(t+1), \mathbf{a}(t+2), \hdots, \mathbf{a}(t+K)$, we fit a data-driven regressor using information from $\mathbf{a}(t-1), \mathbf{a}(t-2), \hdots, \mathbf{a}(t-K)$, where $K$ is now interpreted as the autoregression order. \revised{A prediction is made by using the underlying regressor while assuming that the past information always comes from the true measurements. Note that all our baseline regression frameworks were implemented through the scikit-learn package \cite{sklearn} with default configurations}. For our manually designed LSTMs, we developed simple stacked architectures and scanned across the number of hidden layer neurons for each cell. Our LSTMs utilized both one and five hidden layers, demonstrating the challenge of manual model selection. These LSTMs also utilized 100 epochs of training.
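A minimal sketch of this direct, non-autoregressive formulation is given below, with an ordinary least-squares regressor standing in for the \texttt{fireTS} models (the sinusoidal coefficient trajectories and all dimensions are hypothetical):

```python
import numpy as np

N_r, N_s, K = 5, 427, 8

# Synthetic POD-coefficient trajectories: one sinusoid per mode.
t = np.arange(N_s)
A = np.sin(2 * np.pi * t[None, :] / 52.0 * (1 + np.arange(N_r)[:, None]))

# Direct formulation: K past snapshots of all modes (flattened) regress
# the next K snapshots; predictions are never reused as inputs.
X = np.array([A[:, i:i + K].ravel() for i in range(N_s - 2 * K + 1)])
Y = np.array([A[:, i + K:i + 2 * K].ravel() for i in range(N_s - 2 * K + 1)])

n_train = int(0.8 * len(X))
coef, *_ = np.linalg.lstsq(X[:n_train], Y[:n_train], rcond=None)
pred = X[n_train:] @ coef

# Coefficient of determination on the held-out windows.
ss_res = np.sum((Y[n_train:] - pred) ** 2)
ss_tot = np.sum((Y[n_train:] - Y[n_train:].mean(axis=0)) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

On purely periodic synthetic coefficients a linear map recovers the future window almost exactly; the gap between linear and LSTM models in Table \ref{Table_Methods} reflects the aperiodic content of the real SST coefficients.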
Table \ref{Table_Methods} shows the results. The $R^2$ metrics show that the best architecture found by DeepHyper (denoted NAS-POD-LSTM) outperforms the manually designed LSTMs and the classical time-series prediction methods, with the highest test $R^2$ value of 0.876. In particular, the benefit of using LSTMs over their counterparts is easily observed: the LSTMs are markedly more accurate ($R^2>0.73$) than the linear model ($R^2 \approx 0.1$), XGBoost ($R^2 \approx -0.1$), and the random forest regressor ($R^2 \approx 0.0$).
\begin{table*}[]
\centering
\caption{Coefficients of determination ($R^2$) of different data-driven forecasting methods on the NOAA SST data set. Here, training data and validation data are obtained from 1981 to 1989, whereas testing data is obtained from 1990 to 2018. Note that the LSTM metrics are expressed in one-layered/five-layered configuration.}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\textbf{Model} & \textbf{NAS-POD-LSTM} & \textbf{Linear} & \textbf{XGBoost} & \textbf{Random Forest} & \textbf{LSTM-40} & \textbf{LSTM-80} & \textbf{LSTM-120} & \textbf{LSTM-200} \\ \hline
\textbf{1981-1989} & \textbf{0.985} & 0.801 & 0.966 & 0.823 & 0.916/0.944 & 0.931/0.948 & 0.922/0.956 & 0.902/0.963 \\ \hline
\textbf{1990-2018} & \textbf{0.876} & 0.172 & -0.056 & 0.002 & 0.742/0.687 & 0.734/0.687 & 0.746/0.711 & 0.739/0.724 \\ \hline
\end{tabular}
\label{Table_Methods}
\end{table*}
In terms of times to solution, all data-driven models (i.e., models based around the forecast of POD coefficients) provided forecasts for the given time period (1981--2018) almost instantaneously. For instance, the NAS-POD-LSTM model requires approximately 10 seconds to make forecasts in POD coefficient space, from which entire fields can be reconstructed instantaneously by using the linear reconstruction operation. \revised{This can be contrasted with CESM (for the forecast period of 1920--2100), which required 17 million core-hours on Yellowstone, NCAR's high-performance computing resource, for each of the 30 members of the large ensemble. While compute costs in such high detail are not available for HYCOM, this short-term ocean prediction system runs daily at the Navy DoD Supercomputing Resource Center, with daily data typically accessible within 48 hours of the initial run time\footnote{https://www.hycom.org/dataserver}. Benchmarking for the 1/25 degree HYCOM forecasts (twice finer than the reference data used here) indicates the requirement of 800 core-hours per day of forecast on a Cray XC40 system.\footnote{https://www.hycom.org/attachments/066\_talk\_COAPS\_17a.pdf}}
\subsection{Scaling}
Here, we study the scaling behavior of the three search methods. We show that AE scales better than RL and outperforms RS with respect to accuracy. In addition to 128 nodes, we utilized 33, 64, 256, and 512 nodes for the scaling experiments. We analyzed the scaling with respect to node utilization, number of architectures evaluated, and number of high-performing architectures.
\noindent {\bf Node utilization:} Table \ref{Scaling_table} shows the average node utilization of the search methods for different node counts. The metric used in this table is computed as the ratio of the observed effective node utilization of an experiment to the ideal node utilization (for AE, RS, and RL). The observed effective node utilization over 3 hours of wall time is computed by using an area-under-the-curve (AUC) calculation; this area is then divided by the ideal node utilization AUC, so a metric closer to 1 implies optimal utilization of all compute nodes. We note that the trapezoidal integration rule is utilized for the AUC calculation. We observe that the node utilization of AE and that of RS are similar and are above 0.9 for up to 256 nodes; for 512 nodes, the utilization of RS drops to 0.869, whereas that of AE rises to 0.962. \revised{The node utilization for RL is poor, hovering around 0.5 for the different compute node experiments. This can be attributed to two factors: the gradient averaging across agents is synchronous, and each agent needs to wait until all its workers finish their evaluations before computing the agent-specific gradient. In the beginning, each agent generates a wide range of architectures, each of which can have a different training time. The worker nodes for a given agent cannot proceed to the next batch of evaluations and become idle while one or more worker nodes require more time to finish training. This situation has previously been observed in \cite{balaprakash2019scalable} within DeepHyper. Note that AE and RS do not have such utilization bottlenecks.}
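The utilization metric described above can be sketched as follows; the busy-node trace is hypothetical, and only the AUC ratio with trapezoidal integration reflects the text.

```python
import numpy as np

def auc(t, y):
    """Area under the curve y(t) via the trapezoidal rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def utilization_metric(t, busy_nodes, total_nodes):
    """Observed node-utilization AUC divided by the ideal AUC, where the
    ideal schedule keeps all `total_nodes` busy for the whole window."""
    ideal = total_nodes * (t[-1] - t[0])
    return auc(t, busy_nodes) / ideal

# Hypothetical 3-hour trace sampled every minute on 128 nodes,
# with a short ramp-up while the first batch of evaluations launches.
t = np.arange(0.0, 181.0)             # minutes
busy = np.full_like(t, 128.0)
busy[:10] = np.linspace(0.0, 128.0, 10)

m = utilization_metric(t, busy, 128)  # close to 1 => near-optimal utilization
```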
\revised{\noindent {\bf Number of architectures evaluated:} We observe that AE consistently completes more architecture evaluations than RS and RL do for a fixed wall time (see Table \ref{Scaling_table}). The advantage over RL may be explained by the fact that RL requires a synchronization before obtaining rewards for a batch of architectures (also evidenced by lower node utilization). The improvement over RS may be attributed to the AE algorithm prioritizing architectures with fewer trainable parameters---a fact that has been demonstrated for asynchronous algorithms previously \cite{balaprakash2019scalable}. When 33 compute nodes are used, the AE strategy completes 2,093 evaluations compared with only 1,066 by RL and 1,780 by RS. For 64 compute nodes, the total number of evaluations for AE increases significantly, with 4,201 evaluations performed successfully, whereas RL and RS lead to 2,100 and 3,630 evaluations in the same duration, respectively. As mentioned previously, for 128 nodes, AE performs 8,068 evaluations whereas RL and RS evaluate 4,740 and 7,267 architectures, respectively. For 256 nodes, AE, RL, and RS evaluate 18,039, 9,680, and 15,221 architectures, respectively. Similar trends are seen for 512 nodes, with 33,748, 16,335, and 26,559 architectures evaluated by AE, RL, and RS, respectively. We note that the AE strategy evaluates roughly double the number of architectures for all compute node experiments in comparison with RL.}
\revised{\noindent {\bf High-performing architectures discovered:}
Since AE, RL, and RS are randomized search methods, ideally one would need multiple runs with different random seeds to exactly assess the impact of scalability. However, running multiple repetitions is computationally infeasible: for instance, 10 repetitions would require 450 hours (=$10$ runs $\times$ $3$ methods $\times$ $5$ node counts $\times$ $3$ hours) if we ran each job sequentially. Therefore, we analyzed the high-performing architecture metric, defined as the number of unique discovered architectures that have $R^2$ greater than $0.96$. Figure \ref{subfig:thresh_sf_a} shows this metric as a function of time for AE. We can observe that the number of unique architectures obtained by AE grows considerably with greater numbers of compute nodes. In particular, the number of such architectures obtained by AE at 180 minutes using 33 nodes is achieved with 64 nodes in 90 minutes, and the cumulative number of unique architectures at the end of the search is much higher as well. We observe a similar trend as we double the node counts: in 90 minutes, 128 nodes obtain a number of unique architectures similar to that of the entire 64-node run. The 256-node search obtains in 120 minutes a number of unique architectures similar to that of the entire 128-node run, and the 512-node search finds more unique architectures in 90 minutes than the entire 256-node search. These results suggest that the AE algorithm is well suited to discovering unique high-performing neural architectures from a large search space on up to at least 512 KNL nodes.}
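A sketch of how the high-performing-architecture metric can be tallied; the result tuples and architecture identifiers below are illustrative, not DeepHyper's actual data structures.

```python
def high_performers(results, threshold=0.96):
    """Cumulative count of unique architectures exceeding the R^2 threshold.

    `results`: iterable of (finish_time_min, arch_id, r2), possibly with
    repeated arch_ids (the same architecture may be re-evaluated).
    Returns a list of (time, cumulative unique count) change points.
    """
    seen, curve = set(), []
    for t, arch, r2 in sorted(results):
        if r2 > threshold and arch not in seen:
            seen.add(arch)
            curve.append((t, len(seen)))
    return curve

# Toy evaluation log: "a" is re-discovered at t=40 and not double-counted.
demo = [(10, "a", 0.97), (25, "b", 0.95), (40, "a", 0.98), (60, "c", 0.961)]
curve = high_performers(demo)  # -> [(10, 1), (60, 2)]
```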
\revised{
For a thorough comparison of AE with the other algorithms, we also examined the number of unique architectures obtained at the end of all 180-minute searches. The results are shown in Figure \ref{subfig:thresh_sf_b}. We can see that the AE method outperforms the RL and RS strategies comprehensively. Noticeably, the number of unique architectures discovered by RL saturates after 256 nodes, indicating possible issues with scaling these types of synchronous algorithms on KNL nodes. Given these results, we recommend the use of AE for the most efficient exploration of a given search space on KNL architectures.}
\begin{figure*}
\centering
\subfigure[AE-discovered architectures: Temporal breakdown]{\label{subfig:thresh_sf_a}\includegraphics[width=0.42\textwidth]{AE_Scaling.png}}
\subfigure[High-performing architectures ]{\label{subfig:thresh_sf_b}\includegraphics[width=0.42\textwidth]{Threshold_Comp.png}}
\caption{Percentage of architectures with $R^2$ greater than $0.96$ for different compute nodes. The plot on the left shows that the AE search strategy is more effective at obtaining better architectures.}
\label{fig:threshold_comp}
\end{figure*}
\begin{table}
\centering
\caption{Tabulation of node utilization and total number of evaluations for different search strategies on varying numbers of compute nodes of Theta.}
\begin{tabular}{|c|c|c|c||c|c|c|}
\hline
& \multicolumn{3}{c||}{\textbf{Node utilization}} & \multicolumn{3}{c|}{\textbf{Number of evaluations}} \\ \hline
\textbf{No. of nodes} & \textbf{AE} & \textbf{RL} & \textbf{RS} & \textbf{AE} & \textbf{RL} & \textbf{RS} \\ \hline
\textbf{33} & 0.905 & 0.592 & \textbf{0.913} & \textbf{2,093} & 1,066 & 1,780 \\ \hline
\textbf{64} & 0.920 & 0.482 & \textbf{0.927} & \textbf{4,201} & 2,100 & 3,630 \\ \hline
\textbf{128} & 0.918 & 0.527 & \textbf{0.921} & \textbf{8,068} & 4,740 & 7,267 \\ \hline
\textbf{256} & 0.911 & 0.509 & \textbf{0.936} & \textbf{18,039} & 9,680 & 15,221 \\ \hline
\textbf{512} & \textbf{0.962} & 0.541 & 0.869 & \textbf{33,748} & 16,335 & 26,559 \\ \hline
\end{tabular}
\label{Scaling_table}
\end{table}
\subsection{Variability analysis}
Our previous set of experiments shows that AE strikes the right balance between optimal node utilization and reward-driven search by avoiding the synchronization requirements of RL. To ensure that the behavior of AE is repeatable, we performed 10 runs of AE, each with a different random seed, on 128 nodes. The results are shown in Figure \ref{fig:evo_ppo_bounds}. The averaged reward and node utilization curves are shown by their mean value and by shading of two standard deviations obtained from the 10 runs. We also show results from 10 experiments using RL, where the oscillatory node utilization is replicated for different random seeds. We also replicate the slower growth of the reward as observed previously in Fig.\ \ref{fig:128_rw}. Overall, the values reflect the performance observed in the previous run that used AE on 128 nodes.
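The mean and two-standard-deviation band across repeated runs, as plotted in the variability analysis, can be computed as follows; the toy reward traces are illustrative.

```python
import numpy as np

def confidence_band(curves):
    """Mean and two-standard-deviation band across repeated search runs.

    `curves`: array of shape (n_runs, n_times), e.g. reward or node
    utilization sampled at common time points across the 10 runs.
    """
    curves = np.asarray(curves, dtype=float)
    mean = curves.mean(axis=0)
    std = curves.std(axis=0, ddof=1)   # sample standard deviation
    return mean, mean - 2.0 * std, mean + 2.0 * std

# Toy stand-in for three repeated reward traces.
runs = np.array([[0.1, 0.5, 0.9],
                 [0.2, 0.6, 0.8],
                 [0.0, 0.4, 1.0]])
mean, lo, hi = confidence_band(runs)
```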
\begin{figure*}
\centering
\mbox{
\subfigure[Averaged reward - AE]{\includegraphics[width=0.35\textwidth]{Reward_Bounds.png}}
\subfigure[Node utilization - AE]{\includegraphics[width=0.35\textwidth]{NO_Bounds.png}}
} \\
\mbox{
\subfigure[Averaged reward - RL]{\includegraphics[width=0.35\textwidth]{PPO_Reward_Bounds.png}}
\subfigure[Node utilization - RL]{\includegraphics[width=0.35\textwidth]{PPO_NO_Bounds.png}}
}
\caption{Mean and two standard-deviation-based confidence intervals for 10 NAS evaluations for AE (top) and RL (bottom) on 128 nodes. The low variability of AE indicates that the optimal performance of this search algorithm was not fortuitous. The oscillatory behavior in node utilization for RL can be observed across different experiments.}
\label{fig:evo_ppo_bounds}
\end{figure*}
\section{Related work}
POD-based compression coupled with LSTM-based forecasting has been utilized for multiple physical phenomena that suffer from numerical instability and imprecise knowledge of physics. For instance, the POD-LSTM method has been used for building implicit closure models for high-dimensional systems \cite{maulik2020time,rahman2019nonintrusive}, for turbulent flow control \cite{mohan2018deep}, and for exogenous corrections to dynamical systems \cite{ahmed2020long}. Our work is the first deployment of the POD-LSTM method for real-world geophysical data forecasting. There have been recent results from the use of convolutional LSTMs (i.e., using a nonlinear generalization of POD) for forecasting on computational fluid dynamics data \cite{xingjian2015convolutional,mohan2019compressed}. However, we have performed this study using POD because of the spectrally relevant information associated with each coefficient of the forecast. More importantly, previous POD-LSTMs were manually designed, whereas we have demonstrated NAS for automated POD-LSTM development, a significant improvement in the state of the art.
Several studies of automated recurrent network architecture search have been conducted for a diverse set of applications. One of the earliest examples that is similar to our approach was GNARL \cite{angeline1994evolutionary}, which utilized evolutionary programming to search for the connections and weights of a neural network with applications to natural language processing. The approach was deployed serially, however, and relied on a synchronous update strategy where 50\% of the top-performing architectures are retained while the rest are discarded at each cycle. More recently, there has been research into the use of differentiable architecture search \cite{DBLP:journals/corr/abs-1806-09055}, where a continuous relaxation of the discrete search space was used to instantiate a hyper-neural network including all possible models. Based on that, a bilevel stochastic-gradient-based optimization problem was formulated. Nevertheless, this method does not show stable behavior when using skip connections because of unbalanced optimization between different possible operations at variable nodes. Evolutionary NAS has also been utilized to discover the internal configurations of an LSTM cell itself; for instance, in \cite{rawal2018nodes} a tree-based encoding of the LSTM cell components was explored for natural language and music modeling tasks. Other studies have looked into optimizing parameters that control the number of loops between layers of a network during the stochastic-gradient-based learning process \cite{savarese2019learning} for image classification applications. This approach leads to recurrent neural architecture discovery during the learning process itself. In contrast, in our study, architectures are generated and evaluated separately at scale.
Evolutionary-algorithm-based architecture searches have also been deployed at scale, for example in \cite{ororbia2019investigating}, where hybridizations of LSTM cells as well as simpler nodes were studied by using neuroevolution \cite{stanley2002evolving}. Notably, this work was deployed at scale and also dealt with forecasting of real-world data obtained from the aviation and power sectors. Our approach is also able to integrate hybridizations of fully connected, skip, and identity layers in its search space at scale. A recent investigation in \cite{huang2019wenet} also poses the recurrent NAS problem as the determination of weighting parameters for different networks that act as a mixture of experts applied to language modeling tasks. Our work differs from a majority of these investigations by combining the use of scale for architecture discovery for forecasting on a real-world data set for geophysical forecasting tasks.
\section{Conclusion and future work}
We introduced a scalable neural architecture search for the automated development of proper-orthogonal-decomposition-based long short-term memory networks (POD-LSTM) to forecast the NOAA Optimum Interpolation Sea-Surface Temperature data set. We implemented aging evolution (AE), an asynchronous evolutionary algorithm, within DeepHyper, an open-source automated machine learning package, to significantly improve its scalability on Theta, a leadership-class HPC system at Argonne's Leadership Computing Facility. We compared AE with the two search methods already within DeepHyper, distributed reinforcement learning and random search, and showed that AE outperforms the distributed reinforcement learning method with respect to node utilization and \revised{scalability}. In addition, AE achieves architectures with better accuracy in shorter wall-clock time and matches the node utilization \revised{of the completely asynchronous random search}. We compared the best architecture obtained from AE, retrained for a larger number of epochs, with manually designed POD-LSTM variants and with linear and nonlinear forecasting methods. We showed that the automatically designed architecture outperformed all of the baselines with respect to accuracy on the test data, obtaining an $R^2$ value of $0.876$. The automated data-driven POD-LSTM method that we developed has the potential to provide fast emulation of geophysical phenomena and can be leveraged within ensemble forecast problems as well as for real-time data assimilation tasks. Since the POD-LSTM method interprets flow fields solely from the perspective of data, it may be used for forecasting in multiple applications involving spatiotemporally varying behavior.
However, a key point that needs domain-specific intuition is the underlying knowledge of spatial and temporal scales that determines the sampling resolution for snapshots and the dimension of the reduced basis constructed through the POD compression. For example, basic POD-based methods (without task-specific augmentation) fail to capture shocks or contact discontinuities, which are common in engineering applications \cite{taira2020modal}.
Keeping geophysical emulators as our focus, our future work will seek to overcome the limitations of the POD by hybridizing compression and time evolution in one end-to-end architecture search. In addition, we aim to deploy such emulation discovery strategies for larger and more finely resolved data sets for precipitation and temperature forecasting on shorter spatial and temporal scales. The results from this current investigation also are promising for continued investigation into AutoML-enhanced data-driven surrogate discovery for scientific machine learning.
\section*{Acknowledgments}
The authors acknowledge the valuable discussions with Dr. Rao Kotamarthi (Argonne National Laboratory) and Ashesh Chattopadhyay (Rice University) in the preparation of this manuscript. This material is based upon work supported by the U.S. Department of Energy (DOE), Office of Science, Office of Advanced Scientific Computing Research, under Contract DE-AC02-06CH11357. This research was funded in part and used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. RM acknowledges support from the Margaret Butler Fellowship at the Argonne Leadership Computing Facility. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. DOE or the United States Government. Declaration of Interests - None.
\bibliographystyle{plain}
It is important to understand the rheological properties of granular materials, for application in industry and several other fields \cite{Silbert2001, GDRMidi2004, Jop2006}.
The rheology of granular particles, under constant volume, has been intensively studied, and is known to change drastically across the jamming density \cite{Hatano2007, Olsson2007}.
Below the jamming density, the viscosity is proportional to the shear rate \cite{Bagnold1954}, and the kinetic theory is a powerful tool for understanding the rheology of dilute and moderately-dense cases \cite{Brey1998, Garzo1999, Garzo2003, Brilliantov2004, Lutsko2005, Garzo2007_1, Garzo2007_2, Mitarai2007, Chialvo2013}.
On the other hand, above the jamming density, the contacts between the particles become dominant, and the shear stress has a finite value, even for a low shear-rate limit.
Additionally, both above and below the jamming density, critical scaling is reported \cite{Hatano2007, Olsson2007}.
Another important setup for investigating the rheology is the constant pressure condition, where the relationship between the inertial number (dimensionless shear rate) and stress ratio (shear stress divided by the normal stress) has been intensively studied; this relationship is a monotonically increasing function \cite{Cruz2005, Bouzid2013, Bouzid2015, DeGiuli2015, DeGiuli2016, Barker2017}.
The relationship changes, depending on the existence of tangential friction between the particles \cite{Bouzid2013}.
Under certain situations, such as the van der Waals force for fine powders \cite{Israelachvili2011, Castellanos2005}, capillary force for moderately humid particles \cite{Rowlinson1982, Herminghaus2005, Mitarai2005, Mitarai2006, Mitarai2010, Herminghaus2013}, and electromagnetic force for magnetic beads \cite{Forsyth2001}, the cohesiveness between particles is nonnegligible.
The existence of cohesive forces alters the rheology because phase transition or nucleation occurs \cite{Heist1994, Strey1994, Yasuoka1998} and several patterns appear, depending on a set of control parameters \cite{Takada2014, Saitoh2015}.
The dissipation between collisions increases, when the temperature is comparable to the magnitude of the attractive forces \cite{Muller2011, Murphy2015, Takada2016}.
Recently, certain papers have studied the rheology of cohesive granular particles, under a constant volume condition \cite{Gu2014, Chaudhuri2012, Irani2014, Irani2016, Saitoh2015, Takada2017}.
There exists a yield stress, even below the jamming density \cite{Irani2014, Irani2016, Iordanoff2005, Aarons2006}, which does not appear for noncohesive systems.
Irani et al.\ \cite{Irani2014, Irani2016} found a negative peak below the jamming density under constant volume conditions.
This result completely differs from that in noncohesive cases.
On the other hand, many experiments have been performed under a constant pressure condition, which is more realistic than a constant volume condition \cite{Irani2014, Irani2016}.
Therefore, it is important to study the rheology, under a constant pressure condition.
In this paper, we study the role of cohesive interparticle forces on the rheology of model granular systems, under constant pressure conditions, using molecular dynamics (MD) simulations.
The organization of this paper is as follows:
In the next section, we explain our simulation model.
Section \ref{sec:results} is the principal part of this paper, where we present the simulation results.
We then discuss and conclude our results in Secs.\ \ref{sec:discussion} and \ref{sec:conclusion}, respectively.
In appendices \ref{sec:Histogram} and \ref{sec:Kurtosis}, we present the mean velocity distribution and its kurtosis to validate our criterion to distinguish the phases.
\begin{figure}[htbp]
\centering
\includegraphics[width=.6\linewidth]{setup_modified}
\caption{Simulation model.
Bidisperse particles are confined between the top and bottom walls by pressure applied in the $y$-direction.
We apply a shear to the system by moving the walls in the $x$-direction; the upper (lower) wall moves in the positive (negative) $x$-direction.}
\label{fig:setup}
\end{figure}
\section{Simulation model}
In this section, we describe our simulation model.
We consider a two-dimensional system, and prepare 2,000 bidisperse (50:50) frictionless particles with diameters and masses of $d_1(\equiv d)$, $d_2$, and $m_1(\equiv m)$, $m_2$, respectively.
In this paper, the dispersity is fixed as $d_2=1.4d$, and $m_2=1.4^2 m$.
Walls with length $L_x=42d$ are made of smaller particles, aligned in the $x$-direction at an interval of $d$.
To efficiently apply shear, we also align the particles to form triangles with sides, $6d$, as shown in Fig.\ \ref{fig:setup}.
The upper (lower) wall is compressed with a force, $-PL_x$ ($+PL_x$), in the $y$-direction, and moves with a velocity, $V$ ($-V$), in the $x$-direction.
In our system, the system size fluctuates in the $y$-direction.
We define length, $L_y$, as the long-time average of the system size in the $y$-direction, for later discussion.
The interaction between particles is described by the sum of the elastic force from the potential and dissipative forces.
The potential part is given by
\begin{align}
U(r_{ij})= \begin{cases}
\epsilon \left[\left(1-\frac{r_{ij}}{d_{ij}}\right)^2 - 2u^2\right] & \frac{r_{ij}}{d_{ij}} \le 1+u, \\
-\epsilon \left(1+2u - \frac{r_{ij}}{d_{ij}}\right)^2 & 1+u < \frac{r_{ij}}{d_{ij}} \le 1+2u,\\
0 & \frac{r_{ij}}{d_{ij}}>1+2u,
\end{cases}\label{eq:potential}
\end{align}
where $d_{ij}\equiv (d_i+d_j)/2$, $r_{ij}$ is the distance between the $i$-th and $j$-th particles, $\epsilon$ relates to the stiffness of the particles, and $u$ characterizes the well depth and width.
It is to be noted that this potential (\ref{eq:potential}) is used not only for cohesive grains but also for attractive emulsions \cite{Irani2014, Lois2008, Chaudhuri2012}.
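A minimal sketch of the piecewise potential of Eq.\ (\ref{eq:potential}), in units of $\epsilon$ with distance scaled by $d_{ij}$; it verifies numerically that the two branches match at both breakpoints, so the potential (and, by the matching slopes $2\epsilon u/d_{ij}$, the elastic force) is continuous.

```python
# Pair potential of Eq. (1), with x = r_ij / d_ij and energies in units
# of epsilon. The well depth at contact (x = 1) is 2*u**2 below zero.
def pair_potential(x, u):
    if x <= 1.0 + u:                     # repulsive core + inner well
        return (1.0 - x) ** 2 - 2.0 * u ** 2
    if x <= 1.0 + 2.0 * u:               # attractive tail
        return -((1.0 + 2.0 * u - x) ** 2)
    return 0.0                           # out of range: no interaction

u = 2e-2  # illustrative well parameter (the paper scans several values)
for x0 in (1.0 + u, 1.0 + 2.0 * u):
    left = pair_potential(x0 - 1e-9, u)
    right = pair_potential(x0 + 1e-9, u)
    assert abs(left - right) < 1e-8      # continuous at both breakpoints
```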
In addition, we consider the dissipative force, which acts when two particles overlap each other.
The magnitude of the dissipative force also depends on the relative velocity of the two particles.
Its explicit expression is given by
\begin{equation}
\bm{F}^{\rm diss}(\bm{r}_{ij}, \bm{v}_{ij}) = -\zeta \Theta(d_{ij}-r_{ij}) (\bm{v}_{ij} \cdot \hat{\bm{r}}_{ij})\hat{\bm{r}}_{ij},
\label{eq:dissipation}
\end{equation}
where $\zeta$ is the dissipation rate, $\bm{v}_{ij}\equiv \bm{v}_i - \bm{v}_j$ is the relative velocity between the particles, $\hat{\bm{r}}_{ij}$ is a unit vector parallel to $\bm{r}_{ij}=\bm{r}_i - \bm{r}_j$, and $\Theta(x)$ is a step function, where $\Theta(x)=1$ for $x>0$ and $0$ for $x\le 0$.
Then, the force acting on the $i$-th particle is expressed as
\begin{equation}
\bm{F}_i = \sum_{j\neq i} \left[ -\bm\nabla_i U(r_{ij}) + \bm{F}^{\rm diss} (\bm{r}_{ij}, \bm{v}_{ij})\right].
\end{equation}
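The total pairwise force can be sketched as follows; this is an illustrative reimplementation of Eqs.\ (\ref{eq:potential})--(\ref{eq:dissipation}) rather than the authors' MD code, with $\epsilon=1$, $u=2\times 10^{-2}$, and $\zeta^*=2$ as assumed defaults.

```python
import numpy as np

def pair_force(ri, rj, vi, vj, dij, eps=1.0, u=2e-2, zeta=2.0):
    """Force on particle i from particle j: elastic part -grad U plus the
    dissipative force of Eq. (2), active only on overlap (r_ij < d_ij)."""
    rij = ri - rj
    r = np.linalg.norm(rij)
    n = rij / r                          # unit vector \hat{r}_ij
    x = r / dij
    if x <= 1.0 + u:                     # core branch of U
        dUdr = -2.0 * eps * (1.0 - x) / dij
    elif x <= 1.0 + 2.0 * u:             # attractive-tail branch of U
        dUdr = 2.0 * eps * (1.0 + 2.0 * u - x) / dij
    else:
        return np.zeros_like(ri)
    force = -dUdr * n                    # elastic force along \hat{r}_ij
    if r < dij:                          # dissipation only when overlapping
        vij = vi - vj
        force += -zeta * np.dot(vij, n) * n
    return force

# Overlapping pair approaching head-on: repulsion plus normal damping.
f = pair_force(np.array([0.0, 0.0]), np.array([0.9, 0.0]),
               np.array([1.0, 0.0]), np.array([-1.0, 0.0]), dij=1.0)
```

For the overlapping, approaching pair above, both the elastic and dissipative contributions point away from the neighbor, as expected for a repulsive, damped contact.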
In addition, the equation of motion of the walls is selected to be overdamped with a dissipation rate, $\zeta_{\rm wall}=10 \sqrt{m\epsilon}/d$, to accelerate the simulations.
The time evolution of the walls is described by
\begin{align}
-PL_x - \zeta_{\rm wall}\dot y_{\rm upper} &= 0, \\
PL_x - \zeta_{\rm wall}\dot y_{\rm lower} &= 0.
\end{align}
Here, ${y}_{\rm upper}$ (${y}_{\rm lower}$) corresponds to the coordination of the upper (lower) wall in the $y$-direction.
In this paper, all the quantities are nondimensionalized in terms of $m$, $d$, and $\epsilon$.
As in previous studies \cite{Irani2014, Irani2016}, we select the dimensionless dissipation rate as $\zeta^* \equiv \zeta d/\sqrt{m\epsilon}=2$.
In particular, this dissipation rate corresponds to a restitution coefficient of approximately $0.135$.
Certain metallic materials, such as copper or aluminum, have restitution coefficients of approximately this value.
It is to be noted that the choice of the restitution coefficient does not drastically affect the rheology \cite{Cruz2005}.
\section{Results}\label{sec:results}
In this section, we present the results, when three dimensionless parameters are controlled:\ the velocity of the moving walls, $V^*(\equiv V \sqrt{m/\epsilon})$, that determines the shear rate, the normal stress, $P^*(\equiv P d^2/\epsilon)$, and the strength of the attractive potential, $u$ in Eq.\ (\ref{eq:potential}).
\subsection{Phase diagram}\label{sec:phase_diagram}
A set of control parameters specifies the behavior of the system in the steady states.
We determined four steady states: the (i) uniform shear phase, (ii) oscillation phase, (iii) clustering phase, and (iv) shear-banding phase, as described below.
First, in phases (i) and (ii), the long-time averaged velocity profiles along the $y$-direction are linear.
All the particles flow uniformly and uniform shear is applied in phase (i).
Although the long-time averaged behavior of phase (ii) is similar to that of phase (i), the temporal behavior is completely different.
Phase (ii) has large velocity fluctuations in the bulk and backward movements are observed within a short time period.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\linewidth]{slip_modified}
\caption{Time evolution of the integrated mean displacement, $\Delta$ (Eq.\ (\ref{eq:delta})), in the region near $y=L_y/4$ for phase (i) (dashed line) with $u = 2\times 10^{-4}$, $P^* = 10^{-3}$, and $V^* = 10^{-3}$, and phase (ii) (solid line) with $u = 2\times 10^{-1}$, $P^* = 10^{-3}$, and $V^* = 10^{-3}$, where $t^*\equiv t\sqrt{\epsilon/md^2}$ and $\Delta^*\equiv\Delta/d$. }
\label{fig:slip}
\end{figure}
To characterize phase (ii), we focus on the region around $y=L_{y}/4$, between the center of the system and the upper wall, with width $L_y/31$, and calculate the average velocity, $\bar {v}^*_x(y=L_y/4)$, in this region.
We select this region because the long-time averaged velocity is not zero and the effect of the walls is relatively small, in this regime.
We define the integrated mean displacement, $\Delta(t)$, in this region as
\begin{equation}
\Delta(t)\equiv \int_0^t dt^\prime \bar{v}_x(t^\prime).
\label{eq:delta}
\end{equation}
The time evolution of $\Delta(t)$ in phases (i) and (ii) is shown in Fig.\ \ref{fig:slip}.
The oscillation phase (ii) has large fluctuations and the behavior of this phase is clearly different from that of phase (i).
The intermittent decrease of $\Delta(t)$ indicates a backward motion for reducing the deformation energy by the shear.
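Numerically, $\Delta(t)$ of Eq.\ (\ref{eq:delta}) can be evaluated from a sampled mean-velocity signal by cumulative trapezoidal integration; the oscillating signal below is a hypothetical stand-in with a small mean drift and intermittent backward (negative-velocity) intervals.

```python
import numpy as np

def integrated_displacement(t, vbar):
    """Cumulative trapezoidal evaluation of Delta(t) = int_0^t vbar dt'."""
    dt = np.diff(t)
    increments = 0.5 * (vbar[1:] + vbar[:-1]) * dt
    return np.concatenate(([0.0], np.cumsum(increments)))

# Hypothetical signal: drift 1e-3 plus oscillation whose troughs go negative.
t = np.linspace(0.0, 100.0, 1001)
vbar = 1e-3 + 5e-3 * np.sin(2.0 * np.pi * t / 10.0)
delta = integrated_displacement(t, vbar)  # drifts to ~0.1 over t = 100
```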
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\linewidth]{velocity_distribution}
\caption{Distribution function of the velocity fluctuation $\delta v_x^* \equiv v_x^* - \bar{v}_x^*$ for various wall velocities $V$, with $v_x^*\equiv v_x \sqrt{m/\epsilon}$.
Here, an average flow $\bar{v}_x$ is subtracted from the particle velocity $v_x$.
The horizontal axis is the normalized particle velocity divided by the velocity of moving walls, $V^*$.
We use the following parameters ($u = 2\times 10^{-2}, P^* = 10^{-2}$, and $V^* = 2.2\times10^{-1}$ (circles), $2.2\times 10^{-2}$ (squares), $2.2\times10^{-3}$ (triangles)).
We classify $V^* = 2.2\times10^{-1}, 2.2\times 10^{-2}$ as phase (i), and $2.2\times10^{-3}$ as phase (ii). }
\label{fig:particle_velocity}
\end{figure}
To specify the difference between phases (i) and (ii) qualitatively, we focus on the distribution function of the velocity fluctuation of the particles.
Here, we measure this distribution in the region between $y=L_y/4$ and $y=-L_y/4$ to reduce the effect of walls, and subtract the average velocity $\bar{v}_x(y)$ from the particle velocity.
Figure \ref{fig:particle_velocity} exhibits exponential distributions for a wide range of wall velocities $V$; such exponential behavior has been reported for cohesive systems in a previous study \cite{Takada2014}.
Here, the velocity fluctuation becomes larger than the wall velocity as the wall velocity decreases (for instance, $V^*=2.2\times 10^{-3}$ in Fig.\ \ref{fig:particle_velocity}).
This large fluctuation also suggests the existence of the backward motion (phase (ii)) as shown in Fig.\ \ref{fig:slip}.
To distinguish phase (ii) from phase (i), we use the following threshold:
if the probability of $|\delta v_x^*/V^*|>1$ exceeds $0.018$, the state belongs to the oscillation phase (ii).
This value corresponds to the probability that a value deviates by more than three times the standard deviation of the exponential distribution.
The classification of the phases (i) and (ii) is shown later in this subsection.
It is noted that our choice of the criterion is reasonable because we obtain similar phase diagrams when we use a different criterion based on the mean velocity analysis (see Appendix \ref{sec:Histogram}).
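The classification criterion can be sketched as follows; the Laplace (two-sided exponential) samples are synthetic stand-ins for the measured fluctuations of Fig.\ \ref{fig:particle_velocity}, narrow for fast walls and comparable to $V$ for slow walls.

```python
import numpy as np

def classify_phase(dvx, V, threshold=0.018):
    """Label a state from the tail fraction of normalized fluctuations:
    oscillation phase (ii) if P(|dv_x / V| > 1) exceeds the threshold,
    uniform shear phase (i) otherwise."""
    tail = np.mean(np.abs(dvx / V) > 1.0)
    return "oscillation" if tail > threshold else "uniform shear"

rng = np.random.default_rng(1)
V = 1.0
narrow = rng.laplace(scale=0.1, size=20000)  # fast walls: small fluctuations
wide = rng.laplace(scale=1.0, size=20000)    # slow walls: fluctuations ~ V

phase_fast = classify_phase(narrow, V)
phase_slow = classify_phase(wide, V)
```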
\begin{figure}[htbp]
\includegraphics[width=1.0\linewidth]{profile}
\caption{(a) Snapshot and (b) density (circles) and velocity (squares) profiles of the clustering phase ($u = 2\times 10^{-1}, P^* = 10^{-3}$, and $V^* = 10^{-1}$).
Similar plots are shown in (c) and (d) for the shear-banding phase, where we use the following parameters ($u = 2\times 10^{-2}, P^* = 10^{-3}$, and $V^* = 10^{-2}$).
The shaded regions in (b) and (d) exhibit clustering and shear-banding, respectively.}
\label{fig:Profile}
\end{figure}
Next, we describe the other phases.
When the normal stress is low, and the attractive potential becomes dominant with respect to the shear, uniform shear cannot be applied to the system, even in the long-time average.
In this case, there exist two characteristic phases: the (iii) clustering and (iv) shear-banding phases.
Typical snapshots, and the density and velocity profiles of these phases are depicted in Fig.\ \ref{fig:Profile}.
In phase (iii), clusters form in the bulk region and roll over with time (see Fig.\ \ref{fig:Profile}(a)).
Uniform shear cannot be applied due to the existence of clusters, as shown in Fig.\ \ref{fig:Profile}(b).
It is to be noted that the packing fraction also decreases in this region because voids exist near the clusters.
On the other hand, in phase (iv), the packing fraction remains nearly uniform while the shear is localized in a certain region, as shown in Fig.\ \ref{fig:Profile}(d).
Note that this localization occurs not only in the bulk region but also in the region near the walls, depending on the parameters.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1.0\linewidth]{phase_diagram}
\caption{Phase diagrams for (a) $P^*=10^{-2}$, (b) $10^{-3}$.
There are four phases: (i) uniform shear (open circles), (ii) oscillation (filled circles), (iii) clustering (open squares), and (iv) shear-banding (open triangles), where the dimensionless normal stress is $P^*\equiv Pd^2 /\epsilon$ and the dimensionless velocity of the moving walls is $V^*\equiv V\sqrt{m/\epsilon}$.
We plot cross marks for the others.}
\label{fig:phase}
\end{center}
\end{figure}
Based on the above discussion, we present the phase diagrams for the various normal stresses, $P$, in Fig.\ \ref{fig:phase}.
The uniform shear phase is stable when the velocity of the moving walls is high.
On the other hand, the oscillation phase appears when the velocity is low.
When the attractive potential is strong, the oscillation phase appears even in the high-velocity regime, indicating that the attractive force controls the oscillation phase.
The clustering and shear-banding phases emerge only in the region where the attraction dominates over the repulsion.
\begin{figure}[htbp]
\includegraphics[width=1.0\linewidth]{mu-I_rheology_P}
\caption{Plot of the $\mu-I$ rheology for various $u$: $u=2\times10^{-1}$ (circles), $2\times10^{-2}$ (triangles), $2\times10^{-3}$ (squares), $2\times10^{-4}$ (cross marks), and $0$ (reverse triangles), when the normal stress is (a) higher ($P^*=10^{-2}$) and (b) lower ($P^*=10^{-3}$).}
\label{fig:mu_I_rheology_u}
\end{figure}
\subsection{Flow curve}\label{sec:Flow curve}
Next, we present the $\mu-I$ curves for the system.
Here, $\mu(\equiv -\sigma_{xy}/P)$ is the friction coefficient, and $I$ is the inertial number defined by $I\equiv\dot{\gamma}\sqrt{m/(Pd)}$.
We define the shear rate, $\dot{\gamma}$, as the slope of the velocity profile in the region, where the velocity profile is linear in the long-time average.
Hence, we focus only on the (i) uniform shear and (ii) oscillation phases.
The shear stress, $\sigma_{xy}$, is the $xy$ component of the microscopic pressure tensor defined by
\begin{equation}
\sigma_{\alpha \beta} = \frac{1}{L_x L_y}\sum_{i=1}^{N} \left(m_i V_{i,\alpha}V_{i,\beta} + \frac{1}{2}\sum_{j\neq i} r_{ij,\alpha} F_{ij,\beta}\right),
\label{eq:shear stress}
\end{equation}
where $\bm{V}_{i} \equiv \bm{v}_{i}-\bm{U}(y)$ is the deviation from the macroscopic velocity field, $\bm{U}(y)$, at each time.
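For concreteness, Eq.\ (\ref{eq:shear stress}) can be evaluated as in the minimal sketch below; the function and variable names (\texttt{stress\_tensor}, \texttt{force\_pairs}) are ours, not taken from the simulation code:

```python
import numpy as np

def stress_tensor(pos, vel, masses, U_of_y, force_pairs, Lx, Ly):
    """Microscopic pressure tensor of Eq. (shear stress):
    sigma_ab = (1/(Lx*Ly)) * sum_i [ m_i V_ia V_ib
                                     + (1/2) sum_{j!=i} r_ij,a F_ij,b ],
    with V_i = v_i - U(y_i) the deviation from the macroscopic field.
    `force_pairs` lists (i, j, F_ij), F_ij being the force on i from j."""
    sigma = np.zeros((2, 2))
    V = vel.copy()
    V[:, 0] -= U_of_y(pos[:, 1])          # subtract macroscopic velocity
    for m, Vi in zip(masses, V):          # kinetic contribution
        sigma += m * np.outer(Vi, Vi)
    for i, j, Fij in force_pairs:         # virial contribution
        sigma += 0.5 * np.outer(pos[i] - pos[j], Fij)
    return sigma / (Lx * Ly)
```

The friction coefficient and inertial number then follow as $\mu=-\sigma_{xy}/P$ and $I=\dot\gamma\sqrt{m/(Pd)}$.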
Figure \ref{fig:mu_I_rheology_u} shows the $\mu-I$ rheology for cases with (a) higher normal stress ($P^*=10^{-2}$) and (b) lower normal stress ($P^*=10^{-3}$), when the strength of the attraction, $u$, is varied.
First, there exists a yield stress in the low shear-rate limit.
Second, the curves collapse in the high shear-rate regime, independently of the attractive potential, $u$, except for $u=2\times 10^{-1}$.
This collapse indicates that the effect of attraction is negligible in this regime.
On the other hand, in the low shear-rate regime, the effect of attraction is considerable, i.e., the friction coefficient increases as the attraction becomes stronger.
In contrast, the flow curve for the strong attractive potential ($u=2\times 10^{-1}$) is completely different from that of the weak attraction cases.
The friction coefficient is abnormally large, as shown in Figs.\ \ref{fig:mu_I_rheology_u}(a) and (b), and this trend is notable in the low normal-stress cases, where the attraction is dominant.
This is because the system can be supported by the attraction even without a compressive force, and a low normal stress thus gives a high friction coefficient.
This large friction coefficient is also explained from different points of view in a later subsection.
\begin{figure}[htbp]
\includegraphics[width=1.0\linewidth]{mu-mu_c}
\caption{Plot of the $\mu-\mu_\mathrm{c}$ curve for various $u$, when the normal stress is (a) higher ($P^*=10^{-2}$) and (b) lower ($P^*=10^{-3}$).
The symbols are the same as in Fig.\ \ref{fig:mu_I_rheology_u}.
$\mu_{\rm c}$ are the plateau values in the low shear-rate limit, in Fig.\ \ref{fig:mu_I_rheology_u}.
The dashed lines represent Eq. (\ref{eq:mu-muc_I}).}
\label{fig:mu-mu_c_u}
\end{figure}
To investigate the effect of cohesive forces on the flow curve, we first focus on the dynamic part of the friction coefficient, i.\ e., we subtract the plateau value, $\mu_{\rm c}$, from the flow curve shown in Fig.\ \ref{fig:mu_I_rheology_u}, for the case where the attraction is dominant.
Figure \ref{fig:mu-mu_c_u} shows the plot of $\mu-\mu_\mathrm{c}$ against the inertial number, $I$, for various $P$ and $u$.
For all the parameters, these data can be fitted using
\begin{equation}
\mu = \mu_\mathrm{c}+a \sqrt{I}, \label{eq:mu-muc_I}
\end{equation}
with the fitting parameters, $a$ and $\mu_{\rm c}$.
It should be noted that this result coincides with those of the earlier studies \cite{Cruz2005, Bouzid2013, Bouzid2015, DeGiuli2015, DeGiuli2016, Barker2017}, which reported this relationship using frictionless particles without cohesive interactions, under a constant pressure condition.
This implies that the cohesiveness does not affect the exponent of $I$ in Eq.\ (\ref{eq:mu-muc_I}), and affects only the static friction, $\mu_{\rm c}$, and the coefficient, $a$.
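Since Eq.\ (\ref{eq:mu-muc_I}) is linear in $\sqrt{I}$, the fit for $a$ and $\mu_{\rm c}$ reduces to a first-order polynomial fit; the sketch below demonstrates this on synthetic data (the values $\mu_{\rm c}=0.30$ and $a=1.1$ are made up for illustration):

```python
import numpy as np

# Synthetic flow-curve data obeying mu = mu_c + a*sqrt(I) plus small noise.
rng = np.random.default_rng(1)
I = np.logspace(-4, -1, 30)
mu = 0.30 + 1.1 * np.sqrt(I) + rng.normal(0.0, 1e-3, I.size)

# Linear regression in sqrt(I): slope -> a, intercept -> mu_c.
a_fit, mu_c_fit = np.polyfit(np.sqrt(I), mu, 1)
print(f"mu_c = {mu_c_fit:.3f}, a = {a_fit:.3f}")
```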
\begin{figure}[htbp]
\includegraphics[width= 0.85\linewidth]{mu_c-P_modified}
\caption{Plateau value, $\mu_\mathrm{c}$, in the flow curve (Fig.\ \ref{fig:mu_I_rheology_u}) as a function of the normal pressure, $P^*$, for various $u$.
The symbols are the same as in Fig.\ \ref{fig:mu_I_rheology_u}.}
\label{fig:mu_c-P}
\end{figure}
Furthermore, Fig.\ \ref{fig:mu_c-P} shows the static part of the friction coefficient, $\mu_\mathrm{c}$, for various normal stresses and cohesiveness, demonstrating that $\mu_{\rm c}$ is a decreasing function of the normal stress.
In addition, it is to be noted that $\mu_{\rm c}$ tends to be independent of the pressure, when the cohesion becomes weak.
Then, we can conclude that the cohesive force affects the relationship between the static friction coefficient and the normal pressure.
\begin{figure}[htbp]
\includegraphics[width=0.9\linewidth]{sigma-I_P1E-2_modified}
\caption{Normal stress, $\sigma_{xx}^*(\equiv \sigma_{xx}d^2/\epsilon)$, in the $x$-direction versus the inertial number for various $u$, when $P^*=10^{-2}$.
The symbols are the same as in Fig.\ \ref{fig:mu_I_rheology_u}.}
\label{fig:sigmaxx}
\end{figure}
We also plot the normal stress, $\sigma_{xx}$, in the $x$-direction as a function of the inertial number, as shown in Fig.\ \ref{fig:sigmaxx}.
For the weak attraction cases, the normal stress in the $x$-direction is equal to the applied normal stress, $P^*=10^{-2}$.
In contrast, it becomes lower than the normal stress and tends to be negative for the attraction dominant case, $u=2\times 10^{-1}$.
It is to be noted that the normal stress, $\sigma_{yy}$, in the $y$-direction is always positive, even in this regime.
This suggests the existence of an anisotropy in the system, when the attraction is dominant, which is discussed in a later subsection.
\subsection{Packing fraction}\label{sec:Packing fraction}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{phi-I}
\caption{Packing fraction for various $u$, when the normal stress is (a) higher ($P^*=10^{-2}$) and (b) lower ($P^*=10^{-3}$).
The symbols are the same as in Fig.\ \ref{fig:mu_I_rheology_u}.
The dashed lines indicate the jamming density ($\phi_{\rm J}=0.843$) for a purely repulsive system.}
\label{fig:packing fraction}
\end{figure}
Under a constant pressure condition, we cannot control the packing fraction, which is determined by a set of control parameters.
We plot the inertial number dependence of the packing fraction in Fig.\ \ref{fig:packing fraction}.
The packing fraction is nearly independent of the inertial number but depends on the cohesiveness and the pressure.
When the cohesiveness is weak, the packing fraction is nearly equal to the jamming density, as shown by the dashed line in Fig.\ \ref{fig:packing fraction}.
On the other hand, a high packing fraction is realized for $u=2\times10^{-1}$.
Note that high pressure also makes the system denser.
The parameters for this high packing fraction are the same as those for the abnormally large friction coefficient discussed in the previous subsection.
In particular, a large friction coefficient is achieved in the low pressure and strong attraction case.
\subsection{Anisotropy}\label{sec:Anisotropy}
\begin{figure}[htbp]
\includegraphics[width=0.85\linewidth]{aniso}
\caption{Anisotropic coordination number versus the inertial number for the (a) repulsion dominant ($u=2\times 10^{-4}$ and $P^*=10^{-3}$) and (b) attraction dominant ($u=2\times 10^{-1}$ and $P^*=10^{-3}$) cases.
The anisotropies of the coordination number are represented by the direction: the maximum compression part ($Z_{\rm max}$) or the minimum compression part ($Z_{\rm min}$).
The filled and open circles represent the coordination numbers of the maximum compression part in the repulsive and attractive ranges, respectively.
The filled and open triangles indicate the coordination number of the minimum compression part in the repulsive and attractive ranges, respectively.}
\label{fig:coordination number}
\end{figure}
To clarify the reason for the anisotropies of $\sigma_{xx}$ and $\sigma_{yy}$, we focus on the microscopic properties, in particular, the coordination number and interparticle forces.
First, we present the anisotropic coordination numbers in Fig.\ \ref{fig:coordination number}.
Here, we define the coordination number, $Z$, as the mean number of particles interacting with a given particle.
In addition, we divide the coordination number into repulsive ($Z^{\rm rep}$) and attractive ($Z^{\rm att}$) parts, according to Eq.\ (\ref{eq:potential}).
Moreover, in our sheared system, there are two principal axes \cite{Barker2017}: the maximum compressional axis, which corresponds to the direction of $\theta=-\pi/4$, and the minimum compressional axis, which corresponds to the direction of $\theta=\pi/4$.
Here, $\theta$ is defined as the angle from the $x$-axis, counterclockwise.
Therefore, we divide $Z^{\rm rep}$ ($Z^{\rm att}$) into the maximum compression part, $Z^{\rm rep}_{\rm max}$ ($Z^{\rm att}_{\rm max}$) (second and fourth orthants, i.e., $\pi/2\le \theta<\pi$ and $3\pi/2\le \theta<2\pi$), and the minimum compression part, $Z^{\rm rep}_{\rm min}$ ($Z^{\rm att}_{\rm min}$) (first and third orthants, i.e., $0\le \theta<\pi/2$ and $\pi\le \theta<3\pi/2$).
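The orthant decomposition described above can be sketched as follows (the helper name and bond-angle input are ours, for illustration only):

```python
import numpy as np

def split_bonds_by_orthant(theta):
    """Assign bond angles theta (counterclockwise from the x-axis) to the
    maximum-compression part (second and fourth orthants, containing the
    theta = -pi/4 axis) or the minimum-compression part (first and third
    orthants, containing the theta = +pi/4 axis)."""
    quadrant = np.floor(np.mod(theta, 2.0 * np.pi) / (np.pi / 2.0)).astype(int)
    is_max = (quadrant == 1) | (quadrant == 3)   # 2nd & 4th orthants
    return is_max, ~is_max

# Z^rep/att_max,min then follow by averaging the bond counts per particle
# separately over the repulsive and attractive interaction ranges.
```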
The weak cohesion case is shown in Fig.\ \ref{fig:coordination number}(a) ($u = 2\times 10^{-4}, P^* = 10^{-3}$).
Although $Z_{\rm max}^{\rm rep}$ is slightly larger than $Z_{\rm min}^{\rm rep}$, there are hardly any differences in the anisotropic properties, and the coordination number in the attraction range is negligible.
On the other hand, Fig.\ \ref{fig:coordination number}(b) shows the case, where the attraction is dominant.
Here, we can observe anisotropies, such as $Z_{\rm max}^{\rm rep}>Z_{\rm min}^{\rm rep}$ and $Z_{\rm min}^{\rm att}>Z_{\rm max}^{\rm att}$.
These results are as expected because a high value of $Z_{\rm max}^{\rm rep}$ represents repulsion-dominance along the maximum compressional axis, and a high value of $Z_{\rm min}^{\rm att}$ represents attraction-dominance along the minimum compressional axis.
Additionally, we investigate the angular distribution of the interparticle forces to reveal further details on the anisotropy.
These distributions for the weak-cohesion and cohesion-dominant cases are shown in Figs.\ \ref{fig:aniso_force}(a) and (b), respectively.
Here, we present the absolute values of the repulsive and attractive forces for visibility.
As previously mentioned, the maximum compressional axis ($\theta = -\pi / 4$) coincides with the repulsion-dominant region.
On the other hand, for the attraction dominant region, the attractive force becomes maximum at the minimum compressional axis ($\theta = \pi / 4$), as shown in Fig.\ \ref{fig:aniso_force}(b).
\begin{figure}
\includegraphics[width=0.8\linewidth]{aniso_force}
\caption{Angular distribution of the interparticle interactions for the (a) weak cohesion ($u = 2\times 10^{-4}, P^* = 10^{-3}, V^* = 2.2\times10^{-3}$) and (b) strong cohesion ($u = 2\times 10^{-1}, P^* = 10^{-3}, V^* = 2.2\times10^{-2}$) cases.
The solid and dashed lines represent the repulsive and attractive interactions, respectively.
Here, the absolute values are plotted, for the attractive forces.
The arrows indicate the maximum and minimum compression axes ($\sigma_1$ and $\sigma_2$), respectively.
For case (a), it is to be noted that the attractive force is invisible because its contribution is considerably weaker than the repulsive force.}
\label{fig:aniso_force}
\end{figure}
\section{Discussion}\label{sec:discussion}
\begin{figure}
\includegraphics[width=0.8\linewidth]{Fig_Force}
\caption{Schematic of the decomposition of the repulsive and attractive forces in Fig.\ \ref{fig:aniso_force}(b).
}
\label{fig:aniso_force_diss}
\end{figure}
First, we discuss the abnormally high friction coefficient, when the attractive potential is strong and the normal stress is low, as shown in Fig.\ \ref{fig:mu_I_rheology_u}(b).
It is to be noted that the normal stress in the $x$-direction, $\sigma_{xx}$, also becomes large in this regime, simultaneously.
We interpret these phenomena schematically.
Figure \ref{fig:aniso_force_diss} shows the forces acting on a fixed particle.
We decompose the repulsive (attractive) force vector, $\bm F^{\rm rep}$ ($\bm F^{\rm att}$), into $\bm F_x^{\rm rep}$, and $\bm F_y^{\rm rep}$ ($\bm F_x^{\rm att}$ and $\bm F_y^{\rm att}$) in the case of Fig.\ \ref{fig:aniso_force}(b) (see Fig.\ \ref{fig:aniso_force_diss}).
Here, $\bm F_x^{\rm rep}$ and $\bm F_x^{\rm att}$ have opposite signs, causing a decrease in $\sigma_{xx}$, compared to the purely repulsive system, as shown in Fig.\ \ref{fig:sigmaxx}.
On the other hand, $\bm F_y^{\rm rep}$ and $\bm F_y^{\rm att}$ have the same sign and the sum of both increases $\sigma_{xy}$, as per the second term in Eq.\ (\ref{eq:shear stress}).
This is the origin of the abnormally large friction coefficient.
Next, we discuss the high packing fraction presented in Sec.\ \ref{sec:Packing fraction}.
When the normal stress is high and the cohesive force is strong, the packing fraction is approximately $1.6$, which is nearly twice as large as the jamming density for purely repulsive systems.
To clarify the origin of this high packing fraction, we consider the energy balance.
We focus on a particle in the bulk region.
Here, we only consider the region between this particle and a wall (we select the upper wall, for simplicity) because of the axial symmetry with respect to the $x$-axis.
Between this particle and the wall, there exist approximately two and three particles as the first and second nearest neighbors, respectively, as per the contact number discussed in Sec.\ \ref{sec:Anisotropy}.
From the energy balance, the following relationship is approximately satisfied:
\begin{equation}
Pd \cdot \delta \sim \left( 2\cdot \frac{\epsilon}{d^2}\delta^2 - 3\cdot 2\epsilon u^2 \right ) -\left(- 2\cdot 2\epsilon u^2 \right ), \label{eq:energy_balance}
\end{equation}
where $\delta$ is the mean overlap and $2\epsilon/d^2$ corresponds to the linear spring constant of the interaction (see Eq.\ (\ref{eq:potential})).
The left-hand side of Eq.\ (\ref{eq:energy_balance}) corresponds to the work done by the normal stress,
whereas the right-hand side of Eq.\ (\ref{eq:energy_balance}) is the energy difference before and after the deformation of the particles.
Here, the terms in the first bracket are the deformation energy from the first nearest neighbors and the attractive potential energy from the second nearest neighbors after deformation, respectively, and the second term is the attractive potential energy from the first nearest neighbors, before deformation.
From Eq.\ (\ref{eq:energy_balance}), the overlap, $\delta$, can be estimated as
\begin{equation}
\delta \sim \frac{1}{2}\left[\frac{Pd^3}{2\epsilon}+\sqrt{\left(\frac{Pd^3}{2\epsilon}\right)^2+4 u^2 d^2}\right],
\end{equation}
as a function of the normal stress, $P$, and the cohesiveness, $u$.
Using this overlap length, the deformation area of the particle is approximately given by $\Delta S\sim \delta^2 / 2$.
The packing fraction should be the jamming density for the deformed particles, and the simplest estimation is given by
\begin{equation}
\phi_{\rm J}^{\rm eff} = \phi_{\rm J}\times \frac{S}{S-4\Delta S}.\label{eq:phi_eff}
\end{equation}
Here, $S\equiv \pi d^2/4$ is the area of the particle before deformation, and $S/(S-4\Delta S)$ is the deformation ratio of the particle, where the factor $4$ in the denominator on the right-hand side is due to the existence of four nearest neighbors, including particles on the other side.
For $P^*=10^{-3}$ and $u=2\times 10^{-1}$, this density becomes $\phi_{\rm J}^{\rm eff}\sim 0.94$.
Similarly, for $P^*=10^{-3}$ and $u=2\times 10^{-4}$, the density is estimated as $\phi_{\rm J}^{\rm eff}\sim 0.84$.
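These estimates follow directly from Eqs.\ (\ref{eq:energy_balance})--(\ref{eq:phi_eff}); the short sketch below (our notation, setting $d=\epsilon=1$) reproduces the quoted numbers:

```python
import numpy as np

def phi_eff(P_star, u, phi_J=0.843):
    """Effective jamming density from the energy-balance estimate:
    delta = (1/2)[P*/2 + sqrt((P*/2)^2 + 4 u^2)]   (units d = epsilon = 1),
    Delta S = delta^2/2,  S = pi/4,  phi = phi_J * S / (S - 4 Delta S)."""
    x = P_star / 2.0
    delta = 0.5 * (x + np.sqrt(x * x + 4.0 * u * u))
    dS = 0.5 * delta**2
    S = np.pi / 4.0
    return phi_J * S / (S - 4.0 * dS)

print(phi_eff(1e-3, 2e-1))   # ~0.94 (strong cohesion)
print(phi_eff(1e-3, 2e-4))   # ~0.84 (weak cohesion)
```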
These estimated values are consistent with the simulation results observed in Fig.\ \ref{fig:packing fraction}; however, our estimation indicates that the finite deformations are nonnegligible and that the packing fraction is affected by both the pressure and the cohesiveness.
In particular, the existence of the second nearest neighbors plays a crucial role in the large packing fraction.
The system is more stable, when the second nearest neighbors enter the potential well, causing a high packing fraction, as shown in Fig.\ \ref{fig:packing fraction}.
The remaining discrepancy may be due to the geometry of the particles, i.e., the contact number depends on the set of parameters, as shown in Fig.\ \ref{fig:coordination number}.
A quantitative treatment accounting for this geometry is possible, but will be addressed elsewhere.
Third, we found the clustering and shear-banding phases when the normal stress is low, as shown in Fig.\ \ref{fig:Profile}.
In phases (iii) and (iv), the shear is localized in a certain region.
In phase (iii), the existence of a cluster due to strong cohesion reduces the energy, while the rolling of this cluster increases the energy.
On the other hand, in phase (iv), shear-banding is selected instead of clustering.
These phases may be determined to minimize the energy of the system.
The parameter dependence of the phase selection is significant, but this is also left for future work.
Finally, we compare our results with previous works \cite{Irani2014, Irani2016}.
We prepared finite walls and moved these walls to apply a shear to the system, under a constant pressure condition.
However, previous studies involved a constant volume condition with the Lees-Edwards boundary condition \cite{Lees1972}.
They demonstrated that the flow curve was nonmonotonic below the jamming density, whereas it monotonically increased with the shear rate, above the jamming density.
Under a constant pressure condition, we present a monotonically increasing flow curve, irrespective of the density.
This difference may be due to the stability of the voids or droplets.
In previous studies \cite{Irani2014, Irani2016}, these droplets may survive under a certain condition below the jamming density, which is also reported in Ref.\ \cite{Takada2014}.
In contrast, droplets tend to vanish in our system because the normal stress in the $y$-direction may generally inhibit the spatial heterogeneity of the density.
In future, we intend to examine the stabilities of the droplets more carefully.
\section{Conclusion}\label{sec:conclusion}
We performed MD simulations of cohesive granular particles, under a constant pressure condition, and sheared the system by moving the walls.
We considered the effect of cohesion on rheology by adopting the attractive potential.
First, we established four distinct phases, depending on the constant pressure, shear velocity of the moving walls, and the attractive potential.
In the region where uniform shear was applied in the long-time average, we distinguished the uniform shear phase from the oscillation phase by quantitatively investigating the velocity distribution function.
In addition, when the cohesive force was strong, it was difficult to obtain uniform shear in the bulk, even if a long-time average was available.
In such cases, we found the clustering and shear-banding phases.
Through parameter studies, we constructed the phase diagrams.
Next, based on this phase diagram, we plotted the flow curve ($\mu-I$ rheology) in the region, where uniform shear was applied in the long-time average.
The flow curves are monotonically increasing functions of the inertial number.
By subtracting the plateau value, $\mu_{\rm c}$, from the $\mu-I$ rheology, we analyzed the scaling of $\mu-\mu_{\rm c}$ with $I$.
The fitted exponent coincides with the value $1/2$ known for noncohesive systems \cite{Bouzid2013}.
Then, we conclude that the effect of cohesion can be represented by $\mu_{\rm c}$.
In the flow curves, strong cohesion yields a large friction coefficient due to the attractive force.
In this region, there appear anisotropies of the coordination number and angular distribution of the interparticle forces.
These demonstrate that the repulsive forces are maximum in the maximum compressional axis, whereas the attractive forces are maximum in the minimum compressional axis, which is the origin of the large friction coefficient.
\section*{Acknowledgements}
We thank Michio Otsuki, Ryohei Seto, Walter Kob, and Hisao Hayakawa for their useful comments.
This study was supported by the following: JSPS KAKENHI (JP16H06478) in Scientific Research on Innovative Areas ``Science of Slow Earthquakes", ``Exploratory Challenge on Post-K computer" (Frontiers of Basic Science: Challenging the Limits) by the MEXT, Earthquake and Volcano Hazards Observation and Research Program by the MEXT, and the Earthquake Research Institute cooperative research program.
The numerical computation in this study was partially carried out by the computer systems of the Earthquake and Volcano Information Center of the Earthquake Research Institute, University of Tokyo.
\label{sec:intro}
The color magnitude diagram of co-eval stellar populations in the Milky Way can be used to infer the age of its oldest stars. The age can also be estimated for individual stars if their metallicity and the distance to them are known. For resolved stellar populations, however, an independent measurement of the distance is not strictly necessary as
the full morphology of the color-magnitude diagram can, in principle, provide a determination of the absolute age. There is extensive literature on this subject; reviews can be found in e.g., Refs.~\cite{Catelan,Soderblom,Bolte+}.
Historically, the age of the oldest stellar populations in the Milky Way has been measured using the luminosity of the main-sequence turn off point (MSTOP) in the color-magnitude diagram of globular clusters (GCs). Globular clusters are (almost-- more on this below) single stellar populations of stars (see e.g., Ref.~\cite{Bolte+}). It has long been recognized that they are among the most metal poor ($\sim 1 \%$ of the solar metallicity) stellar systems in the Milky Way, and exhibit color-magnitude diagrams characteristic of old ($> 10$ Gyr) stellar populations~\cite{OMalley,Catelan,Bolte+}.
In fact, the first quantitative attempt to compute the age of the globular cluster M3 was made by Haselgrove and Hoyle more than 60 years ago~\cite{Hoyle}. In this work, stellar models were computed on the early Cambridge mainframe computer and its results compared ``by eye" to the observed color-magnitude diagram. A few stellar phases were computed by solving the equations of stellar structure; this output was compared to observations. Their estimated age for M3 is only 50\% off from its current value.\footnote{Their low age estimate is due to the use of an incorrect distance to M3, since the stellar model used deviated just $\sim$ 10\% from current models' prediction of the effective temperature and gravity of stars, with their same, correct assumptions~\cite{Bolte+}.} This was the first true attempt to use computer models to fit resolved stellar populations and thus obtain cosmological parameters: the age of the Universe in this case. Previous estimates of the ages of GCs involved just analytic calculations, which significantly impacted the accuracy of the results, given the complexity of the stellar structure equations (see e.g., Ref.~\cite{Sandage}).
The absolute age of a GC inferred using only the MSTOP luminosity is degenerate with other properties of the GC.
As already shown in the pioneering work of Ref.~\cite{Hoyle}, the uncertainty in the distance to the GC entails the largest contribution to the error budget: a given \% level of relative uncertainty in the distance determination involves roughly the same level of uncertainty in the inference of the age. Other sources of uncertainty are: the metallicity content, the Helium fraction, the dust absorption \cite{Bolte+} and theoretical systematics regarding the physics and modeling of stellar evolution.
However, there is more information enclosed in the full-color magnitude diagram of a GC than that enclosed in its MSTOP.
As first pointed out in Refs.~\cite{JimenezPadoanLF,PadoanJimenezLF}, the full color-magnitude diagram has features that allow for a joint fit of the distance scale and the age (see Appendix \ref{sec:sensitivity} for a visual rendering of this). On the one hand, figure~2 in Ref.~\cite{JimenezPadoanGC} shows how the different portions of the color-magnitude diagram constrain the corresponding physical quantities. On the other, figure~1 in Ref.~\cite{PadoanJimenezLF} and figure~3 in Ref.~\cite{JimenezPadoanGC} show how the luminosity function is not a pure power law but has features that contain information about the different physical parameters of the GC. This technique enabled the estimation of the ages of the GCs M68~\cite{JimenezPadoanLF}, M5 and M55~\cite{JimenezPadoanGC}. Moreover, in principle, exploiting the morphology of the horizontal branch makes it possible to determine the ages of GCs independently of the distance~\cite{JimenezGC96}.
Further, on the observational front, the gathering of Hubble Space Telescope (HST) photometry for a significant sample of galactic GCs has been a game changer. HST has provided very accurate photometry with a very compact point spread function, thus easing the problems of crowding when attempting to extract the color-magnitude diagram for a GC and making it much easier to control contamination from foreground and background field stars.
For these reasons, a precise and robust determination of the age of a GC requires a global fit of all these quantities from the full color-magnitude diagram of the cluster. In order to exploit this information, and due to degeneracies among GC parameters, we need a suitable statistical approach. Bayesian techniques, which have recently become the workhorse of cosmological parameter inference, are of particular interest.
With a view to possibly using the estimated age of the oldest stellar populations in a cosmological context, as a route to constraining the age of the Universe, it is of value to adopt Bayesian techniques in this context too.
There are only a few recent attempts at using Bayesian techniques to fit GCs' color-magnitude diagrams, albeit only using some of their features (see e.g., Ref.~\cite{BayesianGC}). Other attempts to use Bayesian techniques to age-date individual stars from the GAIA catalog can be found in Ref.~\cite{Lund}. A limitation of the methodology presented in Ref.~\cite{BayesianGC} is the large number of parameters needed in the likelihood. In fact, for a GC of $N_{\rm stars}$ stars there are, in principle, $4 \times N_{\rm stars} + 5$ model parameters (effectively $3 \times N_{\rm stars} + 5$), where the variables for each star are: initial stellar mass, photometry, ratio of secondary to primary initial stellar masses (fixed to 0 in Ref.~\cite{BayesianGC}) and cluster membership indicator. In addition, there are 5 (4) additional GC variables, namely: age, metallicity (fixed in the analysis of Ref.~\cite{BayesianGC}), distance modulus, absorption and Helium fraction. For a cluster of 10,000 or more stars, the computational cost of this approach is very high. To overcome this issue, Ref.~\cite{BayesianGC} randomly selected a subsample of 3000 stars, half above and half below the MSTOP of the cluster, ``to ensure a reasonable sample of stars on the sub-giant and red-giant branches". Another difficulty arises from the fact that the cluster membership indicator variable can take only the value of 0 or 1 (i.e., whether a star belongs to the cluster or not). This creates a sample of two populations, referred to as a \emph{finite mixture distribution}~\cite{BayesianGC}.
Capitalizing on the wide availability and potential of current observations, the aim of this paper is to present a Bayesian approach to exploit features in the color-magnitude diagram beyond the MSTOP and determine robustly the absolute age, jointly with all other relevant quantities such as metallicity, distance, dust absorption and abundance of $\alpha$-enhanced elements, of each GC. In addition to statistical errors, we estimate systematic theoretical uncertainties regarding the stellar model. We bypass the computational challenge of the approach explored in Ref. \cite{BayesianGC} by introducing some simplifications and by coarse-graining the information in the GC color-magnitude diagram, which greatly reduces the dimensionality of the problem without significant loss of information.
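As an illustration of the coarse-graining step, the color-magnitude diagram can be binned into a Hess-like grid of star counts, so the likelihood depends on a few thousand bin counts rather than several parameters per star (a generic sketch of the idea, not our exact binning scheme):

```python
import numpy as np

def coarse_grain_cmd(color, mag, color_bins=50, mag_bins=100):
    """Reduce an (N_stars,) color-magnitude diagram to a 2D histogram of
    star counts (a Hess diagram)."""
    counts, c_edges, m_edges = np.histogram2d(color, mag,
                                              bins=[color_bins, mag_bins])
    return counts, c_edges, m_edges

# Toy example: 10,000 stars collapse to a 50x100 grid of counts.
rng = np.random.default_rng(2)
color = rng.normal(0.8, 0.1, 10_000)    # e.g., F606W - F814W
mag = rng.normal(20.0, 2.0, 10_000)     # e.g., m_F606W
counts, _, _ = coarse_grain_cmd(color, mag)
print(counts.shape, int(counts.sum()))
```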
Our paper is organized as follows. In \S~\ref{sec:data} we describe the HST GC data; the stellar model used to fit the data and the calibration of the GC data is shown in \S\ref{sec:calib}. The approach developed to obtain the parameters of GCs is introduced in \S~\ref{sec:inference} where we describe the likelihood adopted and how we explore the posterior with Monte Carlo Markov chains. Results, the age of the oldest GCs and the corresponding inferred age of the Universe are presented in \S~\ref{sec:results}. We expose our conclusions in \S~\ref{sec:summary}. A series of appendixes cover the technical details of our method.
\section{Data and stellar model}
\label{sec:data}
\subsection{Globular cluster catalogs: defining our sample}
We use the HST-ACS catalog of 65 globular clusters~\cite{Sarajedini2007} plus $6$ additional ones from Ref.~\cite{Dotter2010}. Out of 71 clusters, two were removed because of high differential reddening and a lack of red giant branch stars~\cite{BayesianGC}, and one more was removed because of a lack of a reasonable extinction prior from the literature, leaving 68 clusters in total. The data are available in two different Vega filters: F606W and F814W.
In order to clean the data of stars with poorly determined photometry, we use the same prescriptions as in Ref.~\cite{BayesianGC}. First, we remove stars for which photometric errors,\footnote{Each photometric error has been rescaled depending on the number of observations according to the catalog instructions in the {\tt readme} file.} in both filters, fall into the outer 5\% tail of the distribution. Then, we also remove stars in the outer 2.5\% tails of the distributions of X and Y pixel location errors. Indeed, large pixel location errors indicate a non-reliable measurement of the properties of the star.
Similarly, we also expect measurements to be less robust at very low magnitudes. Moreover, the photometric error corresponding to these stars becomes very large, reducing drastically the information content of this part of the color-magnitude diagram.
Hence, for each cluster we define a ``functional'' magnitude interval between the lowest apparent magnitude of the brightest stars and a magnitude cut arbitrarily defined at $m_{F606W} = 26$, to include most of the main sequence stars for every cluster.
Only stars that satisfy all the conditions listed above and belong to the defined functional magnitude interval are considered further.\footnote{A further cut at low magnitudes is introduced in Sec.~\ref{sec:calib}. The cut described here is motivated by the survey limitations; the cut in Sec.~\ref{sec:calib} is to speed up the analysis without removing significant signal.} For readers interested in the number and percentage of stars retained, details are reported in Tab.~\ref{tab:stars_number} of appendix~\ref{app:GCtable}.
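The selection cuts above can be expressed compactly with percentile thresholds. The following is a minimal sketch (array names are hypothetical, and we read the ``both filters'' criterion as applying the 5\% tail cut in each filter):

```python
import numpy as np

def selection_mask(err_f606, err_f814, x_err, y_err, m_f606, m_faint=26.0):
    """Sketch of the quality cuts: drop stars whose photometric error
    (in either filter) lies in the outer 5% tail, whose X/Y pixel-location
    error lies in the outer 2.5% tail, or which are fainter than the
    magnitude cut (all inputs are hypothetical per-star arrays)."""
    keep = (
        (err_f606 <= np.percentile(err_f606, 95.0))
        & (err_f814 <= np.percentile(err_f814, 95.0))
        & (x_err <= np.percentile(x_err, 97.5))
        & (y_err <= np.percentile(y_err, 97.5))
        & (m_f606 <= m_faint)
    )
    return keep
```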
\subsection{Software and stellar models}
For the theoretical modeling of the data, we choose to work with a modified version of the software package \texttt{isochrones}\footnote{\url{https://github.com/timothydmorton/isochrones}, version 1.1-dev.}~\citep{isochrones}. This software reads synthetic photometry files provided by stellar models and then interpolates magnitudes along isochrones (points in the stellar evolutionary tracks at the same age), correcting for absorption, given the input parameters. Even though a new version is currently under development (\texttt{isochrones}2.0) and in the main text of this paper we only use one model, we decided to use a modified version of the previous release, as it enables us to consider different stellar models.
The two stellar models already implemented are \texttt{MIST}~\cite{MIST0,MIST1} and \texttt{DSED}~\cite{dsed}. Each stellar model comprises a set of {\it photometry files} that correspond to (discretized) isochrones in a color magnitude diagram. However, it is important to note that only the photometry files of \texttt{DSED} provide several different abundances (parameterised by [$\alpha$/Fe]) of $\alpha$-enhanced elements beyond the solar abundance. These are elements like O, Ne, Mg, Si, S, Ca and Ti that are created via $\alpha$-particle (helium nucleus) capture;
[$\alpha$/Fe] is fixed to 0 in the photometry files corresponding to the \texttt{MIST} model. This is important as GCs do have non-solar-scaled abundances. As we will show below (see Appendix~\ref{sec:sensitivity}), the abundance [$\alpha$/Fe] is partially (but only partially) degenerate with variations of the GC's age and metallicity, so it must be considered a free parameter in the analysis to avoid biasing the results and to infer the correct statistical uncertainties. Therefore, we treat [$\alpha$/Fe] as an independent parameter and limit our analysis to the \texttt{DSED} model; the ranges in parameter space covered by the \texttt{DSED} model photometry files in \texttt{isochrones} are specified in Tab.~\ref{tab:Table1}.
\begin{table}[]
\centering
\begin{threeparttable}
\begin{tabular}{|c|c|}
\hline
Stellar model & DSED \\ \hline\hline
initial rotation rate $v/v_{crit}$ & 0.0 \\ \hline
Age range & 0.250-15 Gyr \\ \hline
Age sampling & 0.5 Gyr \\ \hline
Number of EEPs per isochrone & $\simeq$ 270 \\ \hline
Metallicity range {[}Fe/H{]} & -2.5 to 0.5 dex\\ \hline
\begin{tabular}[c]{@{}c@{}}Helium fraction configuration\\ \end{tabular} & $Y_{\rm init}$ = 0.245\tnote{\dag}, 0.33, 0.40 \tnote{\ddag} \\ \hline
{[}$\alpha$/Fe{]} & -0.2 to 0.8\tnote{+} \\ \hline
\end{tabular}%
\footnotesize
\begin{tablenotes}
\item[\dag] The varying Helium fraction configurations, $Y$, are defined in photometry files as $Y = Y_{\rm init} + 1.5 Z$ where $Z$ is the metal mass fraction and $Y_{\rm init}$ is the starting value.
\item[\ddag] Fixed Helium fraction configurations $Y=0.33 $ and 0.40 are only available for [Fe/H] $\leq$ 0.
\item[+] For the fixed Helium fraction configurations, only two options [$\alpha$/Fe]$=0$ or $+0.4$ are available.
\end{tablenotes}
\end{threeparttable}
\caption{Properties of the \texttt{DSED} stellar models available in the \texttt{isochrones} package. We refer the reader to the original Ref.~\cite{dsed} for more details.
}
\label{tab:Table1}
\end{table}
The modifications we made to the code include:
\begin{itemize}
\item change of the cubic interpolation process, going from (Mass, Age, Metallicity) to (EEP, Age, Metallicity), where EEPs are equivalent isochrone evolutionary points.\footnote{EEPs were introduced in Refs.~\cite{Simpson70, Bertelli90, Bertelli94}. EEPs are a uniform basis which greatly simplifies the interpolation among evolutionary tracks. Each phase of stellar evolution is represented by a given number of points; each point in one track has a comparable interpretation in another track.} EEPs are provided by \texttt{isochrones}; we only modify the interpolation interface, following the implementation of \texttt{isochrones}2.0,
\item implementation of a standard magnitude correction to account for extinction according to the Fitzpatrick extinction curve (see e.g., Ref.~\cite{Fitz99}) in the selected filters (here HST $F_{606W}$ and $F_{814W}$) and in the V band,
\item interpolation on the [$\alpha$/Fe] parameter.
\end{itemize}
The set of fitted parameters for each GC are age, distance modulus, metallicity, [$\alpha$/Fe] and absorption. Note that there are different photometry files corresponding to different values of metallicity [Fe/H] and Helium fraction, $Y$.\footnote{The Helium fraction $Y$ of a GC is not necessarily identical to the cosmological one. If Population III stars have enriched the medium with Helium, it is the resulting Helium fraction that matters here. Hence, in principle there could be object by object (GC) variations of $Y$. } These, however, are not two fully independent quantities: both quantities are a function of the stellar and (proto)-solar metal mass fraction, denoted by $Z$ and $Z_{\odot}$, respectively. Consequently, they are highly correlated. We are interested in the Age-Metallicity relation, hence for our purposes we can use only one of them, the [Fe/H] fraction\footnote{The metallicity $Z$ is related to [Fe/H] fraction in the usual way: [Fe/H] = 1.024 log(Z) + 1.739, see Ref.~\cite{Bertelli94}.} in our case, as the independent variable. We vary [$\alpha$/Fe] independently of [Fe/H] and $Y$.
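For reference, the conversion in the footnote can be written as a pair of inverse functions; a minimal sketch, assuming the logarithm is base 10 as in Ref.~\cite{Bertelli94}:

```python
import numpy as np

def feh_from_z(z):
    # [Fe/H] = 1.024 log10(Z) + 1.739 (fit from Bertelli et al. 1994)
    return 1.024 * np.log10(z) + 1.739

def z_from_feh(feh):
    # inverse of the relation above
    return 10.0 ** ((feh - 1.739) / 1.024)
```

With this relation a solar-like metal mass fraction $Z \simeq 0.017$ maps to [Fe/H] close to zero, as expected.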
\section{Color-magnitude diagram-based likelihood for globular clusters}
\label{sec:calib}
As mentioned in Sec.~\ref{sec:intro}, the traditional Bayesian analysis of this kind of data sets attempts to model each star independently, which implies a significant computational cost due to the large number of parameters to explore. A common approach is to fit the initial mass of each of the $N_{\rm stars}$ stars in the color-magnitude diagram as an independent parameter (along all other stellar parameters). Then, the posterior is marginalized over all individual star parameters to infer the parameters describing the GC.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{split.png}
\includegraphics[width=0.7\textwidth]{cumul.pdf}
\caption{Top Panel: Illustration, for a typical GC (IC4499), of the initial split of the ``functional'' magnitude interval in two parts (MS below the MSTOP and UB above the MSTOP). The red line corresponds to $m_{\rm MSTOP}$, and the black line marks $m_{\rm cut}$. Points below $m_{\rm cut}$ do not add significant additional information, but significantly slow down and complicate our analysis. This is why they are not considered here. Bottom panel: Cumulative distribution of stars and adopted magnitude cuts for the same cluster.}
\label{split}
\end{figure}
Here we attempt to reduce the high dimensionality of the parameter space using a different approach. While the large number of stars can be a liability in terms of computational cost for traditional Bayesian approaches, we turn it to our advantage, especially in the most populated part of the color-magnitude diagram. For each isochrone of the stellar model, there are a number of equivalent evolutionary points (EEPs)
(see line 5 of Table \ref{tab:Table1}) associated with an initial stellar mass. Each EEP has a counterpart in every isochrone, making it possible to identify specific points in the color-magnitude diagram across different isochrones, e.g., the MSTOP. In other words, the isochrone profile in the color-magnitude diagram is sampled by EEPs (which are ``universal'' across different isochrones) obtained for different adopted values of the parameters of interest. This is the reason why, as is well known, the interpolation between evolution tracks is greatly simplified by interpolating instead directly between EEPs.
Since we are not interested in the initial mass of stars, we do not model each star independently; instead, we exploit the benefits of the EEPs by working directly with them, as provided by the relevant photometry files. This reduces the dimensionality of our analysis to just the five GC parameters described in the previous section.
We divide the ``functional'' magnitude interval into two parts as illustrated in Figure~\ref{split}: the part below the MSTOP, which we refer to as MS for main sequence, and the part above, which we refer to as UB for upper branch. The large spread of colors at low magnitudes introduces a lot of noise, which slows down significantly the convergence of our algorithm without adding, in practice, any useful additional signal. For this reason, on top of the selection cuts described in Sec.~\ref{sec:data}, we apply a potentially more stringent upper magnitude cut. In practice, for the 68 clusters in our catalog we choose an upper cut magnitude value
\begin{equation}
m_{\rm cut} = {\rm min}(m_{\rm MSTOP} + 5\,, 26),
\label{mcut}
\end{equation}
where $m_{\rm MSTOP}$ is the magnitude corresponding to the MSTOP. In fact, for some GCs going 5 magnitudes below the MSTOP would cause us to include noisy data. With this choice we limit the cut for those GCs to $m_{\rm cut} = 26$. Our findings are not sensitive to the details of this cut as long as the noisy, dim part of the color-magnitude diagram is removed and enough EEPs in the main sequence are retained, which is what we ensure here.
\subsection{Main sequence}
We proceed to bin in magnitude the sample of stars belonging to the main sequence; these bins should be thin enough that the isochrone can be approximated as linear within each bin, yet with a number of stars per bin large enough to satisfy the central limit theorem. Given the large number of stars in the MS (as illustrated in the bottom panel of Figure~\ref{split}), these two conditions are fulfilled for all GCs. In practice, we use bins in the $F_{606W}$ magnitude interval for the MS with constant width of 0.2 mag, which yields a maximum of 25 bins and a minimum of 20 for the GCs in our catalog. The number of stars per bin is proportional to the number of stars in the GC and ranges from several hundred to several thousand.
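The binning step can be sketched as follows (a simplified illustration; function and variable names are ours, and the bin grid is anchored at the MSTOP):

```python
import numpy as np

def ms_bin_edges(m_mstop, width=0.2, m_max=26.0):
    """Edges of the 0.2-mag main-sequence bins, from the MSTOP down to
    m_cut = min(m_mstop + 5, 26); a 5-mag span gives the maximum of 25 bins."""
    m_cut = min(m_mstop + 5.0, m_max)
    # small epsilon guards against floating-point truncation of the span
    n_bins = int(np.floor((m_cut - m_mstop) / width + 1e-9))
    return m_mstop + width * np.arange(n_bins + 1)

def stars_per_bin(mags, colors, edges):
    """Group the star colors by magnitude bin."""
    idx = np.digitize(mags, edges) - 1
    return [colors[idx == i] for i in range(len(edges) - 1)]
```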
It is then justified to assume that the scatter in color of stars inside each magnitude bin follows a Gaussian distribution centered on the true underlying isochrone. This simplification (akin to a coarse-graining in the color-magnitude diagram, and thus to a data compression) alone allows us to decrease the effective size of the data set and thus, compared to previous approaches, to reduce the number of model parameters for this part of the analysis: we have 5 parameters and $N_{\rm bins}$ data points. The main peak of the distribution of star positions along the color axis in each bin, indexed by $i$, should be, and is, well approximated by a Gaussian distribution (see Figure~\ref{fitms2} in Appendix~\ref{app:MScalib} for an illustration). Bins where the distribution cannot be fit by a unimodal Gaussian -- a possible sign of multiple populations -- are removed from the analysis. This almost always happens at the faint end of the main sequence (except for three clusters, for which one to two bins are removed), even after the cut from Equation~\eqref{mcut}. To mitigate the residual effect of multiple populations, we use the median of the distribution rather than the Gaussian mean: it allows us to keep up to the maximum of 25 bins. The median value is almost identical to the Gaussian mean, and slightly larger error bars are a reasonable trade-off for robustness to outliers. More details are presented in Appendix~\ref{app:MScalib}. The color at bin center for each magnitude bin, $C^{\rm data}_{i}$, is defined by the median. Since the main sequence in the color-magnitude diagram is not perfectly vertical, we rescale the error as $\sigma^{\rm data}_{i} \approx 1.253\,\sigma_{{\rm EEP},i} \times \cos (\phi_{i})$, where $1.253\,\sigma_{{\rm EEP},i}$ is the standard error of the median and $\phi_{i}$ is the angle between the data orientation and the vertical axis inside bin $i$, as detailed in Appendix~\ref{app:MScalib} (in particular see Figure~\ref{fitms1} in the Appendix).
This correction is very small and always well below 4\%. Figure~\ref{fig:ms_binning} shows an example of this binning for GC IC4499, along with the corresponding $C^{\rm data}$ and $\sigma^{\rm data}$.
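The per-bin summary statistics can then be sketched as follows (our reading of the text: $\sigma_{{\rm EEP},i}$ is taken as the standard error of the mean, so that $1.253\,\sigma_{{\rm EEP},i}$ is the standard error of the median for Gaussian data):

```python
import numpy as np

def bin_color_summary(colors, phi=0.0):
    """Median color in a magnitude bin and its rescaled uncertainty,
    sigma_data ~ 1.253 * sigma * cos(phi); phi is the angle between the
    local data orientation and the vertical axis (illustrative sketch)."""
    colors = np.asarray(colors)
    c_med = np.median(colors)
    sigma_mean = np.std(colors, ddof=1) / np.sqrt(len(colors))
    sigma_med = 1.253 * sigma_mean          # error of the median (Gaussian case)
    return c_med, sigma_med * np.cos(phi)   # cos(phi) correction is < 4%
```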
Assuming that bins are uncorrelated (which given the small observational errors in the star magnitudes is a fair assumption), the logarithm of the likelihood is defined as
\begin{equation}
\mathcal{L}_{MS} = \ln\: L = -\frac{1}{2} \sum_{i = 1}^{N_{\rm bins}} \left(\frac{C^{\rm data}_{i} - C^{\rm th}_i}{\sigma^{\rm data}_{i}}\right)^2
\label{eq: likelihood}
\end{equation}
where $C^{\rm th}_i$ is the theoretical isochrone color interpolated at the center of bin $i$, and $N_{\rm bins}$ is the number of bins considered in the analysis (i.e., after removing the bins with bimodal color distributions).
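The log-likelihood above translates directly into code; a minimal sketch:

```python
import numpy as np

def log_like_ms(c_data, c_th, sigma_data):
    """Gaussian log-likelihood over independent magnitude bins,
    i.e. minus one half of the chi-square in color."""
    r = (np.asarray(c_data) - np.asarray(c_th)) / np.asarray(sigma_data)
    return -0.5 * np.sum(r ** 2)
```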
\begin{figure}
\centering
\includegraphics[scale=0.55]{binning_ms_new.png}
\caption{Binning of the main sequence, illustrated for the GC IC4499. The red dots and black lines represent the central value and standard deviation of the color distribution in each bin, respectively.}
\label{fig:ms_binning}
\end{figure}
\subsection{Upper branch}
\label{sec:ub}
In addition to the main sequence, we consider stars belonging to the Upper Branch (UB), i.e., stars brighter than the MSTOP. We bin the magnitude interval as we did for the main sequence. However, in this case the number of stars is not large enough to support the central limit theorem for small magnitude bins; in addition, we expect the measurement to be highly sensitive to outliers. Therefore, we cannot fit the color distribution with a Gaussian function as done for the MS. Instead, we apply these three prescriptions:
\begin{itemize}
\item Since \texttt{DSED} isochrones do not include stages beyond the tip of the red giant branch --i.e., do not include EEPs belonging to the Horizontal branch and the asymptotic giant branch--, we mask out all the bins which correspond to stars (and EEP) that do not belong to either the sub-giant branch or the red giants.
\item Since the estimation of the mean is easily contaminated by outliers, we use the median color in each bin as an estimate for $C^{\rm data}_i$. In fact, we expect that the color errors follow a Gaussian distribution, and that the outliers are stars that are not part of the GC main sequence or upper branch (our target sample). If we could select only stars that belong to our target sample, they would follow a Gaussian distribution. In practice, using the median down-weights the contribution of outliers to the estimate of the central value of the distribution. Therefore, it provides a good estimate of the mean value of the distribution of the target sample; here we {\it assume} that the resulting distribution matches the target distribution and can be assumed to be Gaussian.
\item We use the error of the median for normal distributions $\sigma_{{\rm med},i} = 1.253 \sigma_{{\rm EEP},i}$, where $\sigma_{{\rm EEP},i}$ is the regular standard deviation in bin $i$.
\end{itemize}
This is illustrated in Figure \ref{fig:rgb}. In this figure, for a representative GC, IC4499, the stars in the color-magnitude diagram are shown as grey points, the excluded bins are shaded, the red points show the $C_i^{\rm data}$, and the error bars show the $\sigma_{{\rm med},i}$.
\begin{figure}
\centering
\includegraphics[scale=0.4]{binning_rgb.pdf}
\caption{Binning of the upper branch for a representative GC IC4499. The grey points are the stars, the horizontal blue lines show the adopted binning. The masked bins are shaded. Each red point represents the median value at bin center. The error bars correspond to $\sigma_{{\rm med},i}$.}
\label{fig:rgb}
\end{figure}
Finally, the likelihood is also taken to be Gaussian as in Eq.~\eqref{eq: likelihood}, with the only differences being that $C^{\rm data}_{i}$ is the median value at bin center and $\sigma_{{\rm med},i}$ the associated error for bin $i$.
We are aware that this choice of Gaussian likelihood is not as well motivated as for the MS. Nevertheless, we note that other systematic uncertainties (see Sec.~\ref{sec:syserr}) are likely larger than the one introduced by this approximation.
\subsection{Multiple populations and magnitude cut}
\label{sec:multiple_pop}
For the sake of simplicity in the analysis, we assume that parameters such as age, metallicity and distance are common to all stars belonging to the GC. Nonetheless, GCs can be more complex and host distinct populations. Multiple populations in GCs are an active research area (see e.g., \cite{ReviewBastian18} for a review). It is important to note that multiple populations do not necessarily have different ages; they may have, e.g., different element abundances. Moreover, the effects of multiple populations are minimized for the filters used to create the catalog ($F_{606W}$ and $F_{814W}$; see Ref.~\cite{ReviewBastian18} and references therein). When we apply our analysis to GCs known to host multiple populations to quantify the effect that this might have on the inferred constraints, we find that having multiple populations introduces an additional widening in the marginalized inferred age, as well as multiple peaks for the metallicities. GCs with multiple populations have a manifestly multi-modal posterior distribution where additional {\it local} maxima may appear. We find that the magnitude cut $m_{\rm cut}$ (see Equation~\eqref{mcut}) we impose helps to reduce the sensitivity to secondary populations, i.e., it suppresses the secondary local maxima but leaves the global maximum unaffected.
This is because it is easier to see multiple populations at the faint end of the MS; at brighter magnitudes, the two populations blend. Nevertheless, the posterior distributions obtained for some GCs are still multi-modal. Masking out bins where the distribution is markedly multimodal further minimizes this effect. Any residual multi-modality is blended with the main maximum and thus effectively contributes to enlarging the errors. The way we deal in practice with the multi-modality of these secondary local maxima is developed further in Sec.~\ref{sec:inference}.
\section{Parameter inference}
\label{sec:inference}
We assume that the two parts (MS and UB) of the ``functional'' magnitude interval considered are independent. The total log-likelihood, ${\cal L}=\ln L$, is then $\mathcal{L} = \mathcal{L}_{\rm MS} + \mathcal{L}_{\rm UB}$.
The parameters that we vary are: age, metallicity [Fe/H], absorption, distance and $\alpha$ enhancement [$\alpha$/Fe].
In order to ensure that we remain inside the interpolation domain of the stellar model, we use uniform priors corresponding to the intersection of the parameter-space volumes of the stellar model (in our case this corresponds to the prior region of \texttt{DSED}; see Table~\ref{tab:Table1}). These are: [1,15] Gyr for the age, [-2.5,0.5] dex for the metallicity, (0,3] for the absorption, (0,$\infty$) for the distance and [-0.2,0.8] for [$\alpha$/Fe].
In addition, we adopt Gaussian priors on the metallicity, distance modulus, absorption and [$\alpha$/Fe], as follows.
For the metallicity, $\alpha$ enhancement and distance, the priors are centered around estimates from the literature for each globular cluster (see Ref.~\citep{Dotter2010}). For 65 clusters the extinction estimates are based on the two catalogs of Refs.~\cite{Harris, Dutra}; however, for three globular clusters (NGC 6121, NGC 6144, NGC 6723) we use instead values from more recent literature (Refs.~\cite{ngc6121,ngc6144,ngc6723}, respectively), since the quality of the fit and the posterior were unacceptable when using the catalog estimates.
We adopt $\sigma_{\rm [Fe/H]} = 0.2$ dex for the width of the Gaussian priors for the metallicity, based on spectroscopic measurements, corresponding to twice the typical errors reported in Ref.~\cite{Bolte+}.\footnote{In principle, this prior could be more stringent, following Ref.~\cite{Carretta}. However, we decide not to do this here, and explore a wider range in metallicity.} The width adopted for the distance modulus prior is $\sigma_{\rm dm} = 0.5$ from Gaia/Hipparcos indirect distances, 2-3 times the typical errors reported in Refs.~\cite{Bolte+,OMalley}. We assume a dispersion on the reddening $\sigma_{E(B-V)} = 0.02$, in agreement with Ref.~\cite{OMalley}, which translates into Gaussian priors on absorption with $\sigma_{\rm abs} = 0.06$ following the Cardelli et al.~\cite{Cardelli} relation. For [$\alpha$/Fe] we adopt a prior of $\sigma_{\alpha} = 0.2$, which is equivalent to the sampling step of the \texttt{DSED} stellar grid.
Unlike the priors on metallicity or distance, which are conservative compared to recent literature, the prior on absorption needs to be restrictive to reduce the degeneracy between age and absorption. Even though it may appear narrow, one should bear in mind that this parameter is usually kept fixed in other analyses in the literature.\footnote{We have also explored relaxing the metallicity prior by increasing the width of the Gaussian by a factor of a few. We find that this more conservative choice does not affect the final results of the inferred age ($t_{\rm GC}$, $t_{\rm U}$), as statistical errors remain below the systematic ones.}
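Schematically, the priors described above combine into a log-prior like the following (a sketch under our assumptions: the parameter ordering, the dictionary keys and the per-cluster prior centers \texttt{centers} are hypothetical):

```python
import numpy as np

# hard (uniform) prior boundaries: age [Gyr], [Fe/H], absorption, distance, [alpha/Fe]
BOUNDS = [(1.0, 15.0), (-2.5, 0.5), (0.0, 3.0), (0.0, np.inf), (-0.2, 0.8)]
# Gaussian prior widths quoted in the text
SIGMA = {"feh": 0.2, "abs": 0.06, "dm": 0.5, "alpha": 0.2}

def log_prior(theta, centers):
    """theta = (age, feh, absorption, dm, alpha); 'centers' holds the
    literature estimates around which the Gaussian priors are centered."""
    for x, (lo, hi) in zip(theta, BOUNDS):
        if not (lo <= x <= hi):
            return -np.inf      # outside the stellar-model domain
    age, feh, absn, dm, alpha = theta
    lp = 0.0
    for key, x in (("feh", feh), ("abs", absn), ("dm", dm), ("alpha", alpha)):
        lp -= 0.5 * ((x - centers[key]) / SIGMA[key]) ** 2
    return lp
```

The total log-posterior is then simply $\mathcal{L}_{\rm MS} + \mathcal{L}_{\rm UB}$ plus this log-prior.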
For some clusters, the posterior distribution is cut by the 15 Gyr age limit imposed by the grid of the stellar models, but even in these cases the peak of the distribution is always well determined and the cut happens at the $\sim 2 \sigma$ level, hence the effect on the results can be kept under control.
Given the nature of the problem (degeneracy between the age, distance and the metallicity), the nature of the data (possible presence of multiple populations), and the nature of the likelihood calibration (we fit, at the same time, the MS and the UB, where, in principle, each might favor a different region of the parameter space and be affected by different degeneracies), we expect that the posterior distribution might be multi-modal. In this case,
the standard \texttt{emcee} sampler may be inefficient.
Existing methodologies to handle multi-modal distributions include slicing the parameter space and combining the results afterwards, or techniques like parallel-tempering Markov chain Monte Carlo, where the chains are run at different temperatures, which makes it easier for the chains to communicate and thus ``move'' between peaks and low-likelihood regions.
The first approach is expensive in terms of computational cost and we found the second one not efficient in our case.
Parallel tempering MCMC will move the ``coldest" chains to a formal global maximum which is however in a non-physical region of parameter space (ages $\gtrsim$ 15 Gyr and very low metallicities [Fe/H] $<$ -2.3 dex). We explain this tendency as follows. At high ages and low metallicities the evolutionary tracks in the color magnitude diagram become very similar to each other (as shown in Figure~\ref{diff1} in Appendix~\ref{sec:sensitivity}). In other words, there is a lot of prior volume to explore, and therefore the chains tend to spend a lot of time there. This is an artifact of the prior probability distribution chosen.
One of the consequences of having multi-modal posterior distributions, with several local maxima of the likelihood and one global maximum, when using the standard affine-invariant \texttt{emcee} sampler is a low acceptance fraction. This is especially significant if the modes are well separated, i.e., if the separation between modes is much larger than the width of the distribution around the maxima. Indeed, only a small fraction of MCMC steps, those close to the likelihood peaks, are accepted. One possibility to bypass this technical difficulty may involve re-parametrization~\cite{KosowskyJim} or non-uniform priors, in addition to using stronger Gaussian priors on the metallicity.
We decided to stick to the standard \texttt{emcee} sampler and increased the number of chains to improve the number of accepted steps. We run 100 chains (or walkers, for \texttt{emcee}) for 5000 steps (several times the autocorrelation length) with a burn-in phase of 500 steps. This setup returns a suitable and stable acceptance rate. For multimodal distributions, the initialization of the chains can be an important factor. We tested two configurations (a tiny Gaussian ball centered on estimates from the literature, see Ref.~\citep{Dotter2010}, and a uniform distribution with boundaries matching the uniform priors). Both gave consistent results, and we kept the second configuration as it is more objective.
We have also made several convergence tests on a subset of clusters varying the number of walkers and increasing the steps of each of them (from 100 to 700 walkers for up
to 100,000 steps) and found that this does not change the results.
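For illustration (the analysis itself uses the affine-invariant \texttt{emcee} ensemble), the burn-in logic can be sketched with a minimal random-walk Metropolis chain on a toy one-dimensional posterior:

```python
import numpy as np

def metropolis(log_prob, x0, n_steps, step=2.0, seed=0):
    """Minimal random-walk Metropolis sampler (toy illustration only;
    the text's analysis uses 100 emcee walkers for 5000 steps)."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_prob(x0)
    chain, accepted = [], 0
    for _ in range(n_steps):
        x_new = x + step * rng.normal()
        lp_new = log_prob(x_new)
        if np.log(rng.uniform()) < lp_new - lp:   # Metropolis acceptance rule
            x, lp = x_new, lp_new
            accepted += 1
        chain.append(x)
    return np.array(chain), accepted / n_steps

# toy Gaussian target; discard a burn-in phase, as in the text's 500-step discard
chain, acc_frac = metropolis(lambda x: -0.5 * x * x, 3.0, 5000)
samples = chain[500:]
```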
We report the error on the parameters as the highest posterior density interval (also sometimes referred to as minimum credible interval) at a given confidence level. Note that for non-symmetric distributions (such as those we have here) these errors are not necessarily symmetric.
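The highest posterior density interval can be computed from MCMC samples as the shortest window containing the requested probability mass; a minimal sketch:

```python
import numpy as np

def hpd_interval(samples, level=0.68):
    """Shortest interval containing a fraction `level` of the samples
    (the minimum credible interval). For skewed posteriors the result
    is asymmetric about the peak, as noted in the text."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    k = int(np.ceil(level * n))            # number of samples inside the interval
    widths = x[k - 1:] - x[: n - k + 1]    # width of every candidate window
    i = int(np.argmin(widths))             # shortest window wins
    return x[i], x[i + k - 1]
```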
\subsection{Systematic uncertainties}
\label{sec:syserr}
In our approach, all the parameters that describe the GCs (age, distance, metallicity, [$\alpha$/Fe] and extinction) are determined directly from the data. While HST photometry does have some remaining systematic uncertainty, this is minute compared to the uncertainty associated with the theoretical stellar model (see below). We estimate the systematic uncertainties in the ages of GCs induced by the theoretical stellar model using the recipe in Table~2 of Ref.~\cite{OMalley}. To our knowledge, this is the most rigorous approach among stellar model-building to estimate the systematic uncertainties using the ``known-unknowns". Inspection of Table~2 in Ref.~\cite{OMalley} shows that the main systematic uncertainty is due to the use of mixing length theory to model convection in the 1D stellar models. The other dominant systematic uncertainty is related to reaction rates and opacities.\footnote{Rotation is another source of systematic uncertainty, as the rotation speed of stars in GCs is unknown. However, the main effect of rotation is to alter the depth of the convection zone. Given that we have explored a wide range of values of the mixing length parameters, the effect of rotation is effectively included in our systematic budget estimation.} Everything else is subdominant, thus the combined effect of these two components captures well the extent of systematic errors.
Mixing-length theory\footnote{In Appendix~\ref{appendix:MLT} we give a brief description of mixing-length theory.} has two parameters: the mixing-length parameter (i.e., roughly how much the convection cells travel before they break up), and the overshoot parameter (how much the convective cell travels beyond the equilibrium condition). Of these two, the second one is unimportant for low-mass stars such as those in GCs. These two parameters dominate the uncertainty in stellar model building; the uncertainties in nuclear reactions are at the \% level.
In principle, changes in the mixing length do not alter the lifetime of the star; see the discussion on page 725 of Ref.~\cite{Jimenez95}. The effect on the inferred age arises from degeneracies with metallicity. In this work the metallicity is strongly constrained, so that, in principle, the effect of mixing length uncertainties could be reduced significantly.
In fact, the mixing length parameter is usually calibrated from fits to the Sun, but astro-seismology of other stars at different evolutionary stages indicates a spread of values between 1.0 and 1.7. Thus, when results from observations of the Sun are extrapolated to stars belonging to GCs, the full spread of mixing length parameter values is adopted to quantify the systematic uncertainties.
However, a better estimation of the systematic uncertainties related to the mixing length parameter is possible. As shown in Ref.~\cite{JimenezGC96}, not only can the morphology of the red giant branch be used to constrain the value of the mixing length, but all the GCs analysed in Ref.~\cite{JimenezGC96} had the same value for the mixing length and showed no star-to-star variation of the mixing length parameter. Therefore, the morphology of the red giant branch is sufficient to constrain the mixing length, once the metallicity is constrained, without the need to rely on the solar calibration. Thus, potentially, for the present study, as the metallicity can be constrained from the lower main sequence as well as the sub-giant branch (see Figure~\ref{diff1}), the upper giant branch could be used to determine the value of the mixing length as done in Ref.~\cite{JimenezGC96}. This approach would require adding the mixing length parameter as an extra free parameter in our analysis; we leave this for future work.
Here instead we prefer to be conservative and use the full range for the mixing length considered in Ref.~\cite{OMalley} (i.e., between 1.0 and 1.7), which is conservative because the study of Ref.~\cite{JimenezGC96} showed that fits to the position of the red giant branch with known metallicity indicate no significant spread in mixing length parameter. These fits recover a value of 1.6, well in agreement with results from calibration to the Sun. To estimate the error in ages due to mixing length variations over the full conservative interval, we use the stellar models of Ref.~\cite{Jimenez95}, and in particular the fitting formulas therein. This yields a 0.3 Gyr age uncertainty.
In addition to this we add an extra 0.2 Gyr to account for uncertainties in reaction rates and opacities, as from Table~2 of Ref.~\cite{OMalley}. In total, we have a 0.5 Gyr uncertainty budget due to systematic effects in stellar modeling.
Note that in the standard MSTOP approach, another systematic uncertainty to account for would be the value of [$\alpha$/Fe], which in general is not known and is assumed to be between $0.2-0.4$. However, in our approach, this is not the case as this is a parameter of the model: its value is directly constrained by the analysis and its uncertainty is therefore already included in our marginalized errors.
\section{Results}
\label{sec:results}
\begin{figure}
\begin{centering}
\includegraphics[width=0.49\columnwidth]{param1.pdf}
\includegraphics[width=0.49\columnwidth]{param2.pdf}
\includegraphics[width=0.49\columnwidth]{param3.pdf}
\includegraphics[width=0.49\columnwidth]{param4.pdf}
\includegraphics[width=0.49\columnwidth]{param5.pdf}
\caption{$68\%$ confidence level marginalised constraints for the five parameters of interest
for each of the GCs in the sample (the GC id, on the $x$-axis, corresponds to the ordering of Table \ref{tab:my-table}). The shaded blue regions represent the boundaries of the uniform prior. There are additional Gaussian priors of $\sigma_{\rm [Fe/H]} = 0.2$ dex for the metallicity, $\sigma_{\rm dm} = 0.5$ on the distance modulus, $\sigma_{\alpha} = 0.2$ for the $\alpha$ enhancement and $\sigma_{\rm abs}=0.06$ on the absorption, centered around values from the literature (see text for details).}
\label{fig:GC1}
\end{centering}
\end{figure}
We apply the methodology presented in previous sections to our catalog of 68 GCs. Two-dimensional marginalized posteriors for all pairs of parameters can be found for
a representative GC in Appendix~\ref{app:fits}.
Figure~\ref{fig:GC1} shows our main results (see also Appendix~\ref{app:GCtable-params} and Tables~\ref{tab:my-table} and \ref{tab:my-table2}). We present marginalized constraints on the absolute age, metallicity, distance, absorption and [$\alpha$/Fe] of each GC assuming the \texttt{DSED} model. The $x$-axis in each panel shows the cluster id, following the same order as in Table~\ref{tab:my-table}. The gray horizontal areas show the hard priors imposed by the domain of the stellar models in parameter space, and the gray vertical band (when present) illustrates the width of the Gaussian prior adopted (see Sec.~\ref{sec:inference}).
We find no correlation between age and distance, absorption or [$\alpha$/Fe]. In particular, the absorption values are low, and their scatter is not correlated with the age. On an individual-cluster basis the constraints on [$\alpha$/Fe] are very weak; however, values of [$\alpha$/Fe]$>0.6$ are typically disfavored. A comparison with the spectroscopic measurements of Dotter et al. \citep{Dotter2010} can be found in Appendix~\ref{app:fits}.
\begin{figure}
\begin{centering}
\includegraphics[width=1.\columnwidth]{omalley_comp_dsed.pdf}
\caption{Direct comparison between our marginalized constraints on the age, distance and metallicity of GCs and the results of Ref.~\cite{OMalley} for the 22 GCs in common. The blue lines indicate the identity. We plot uncertainty bars for both determinations when available. There is excellent agreement for the metallicity determination and reasonable agreement for the distance determination, although our distances (with error bars so small that they are hidden behind the red dots) are on average somewhat smaller than those of Ref.~\cite{OMalley}, by about 200 pc. The age agreement is within the uncertainties, but our ages are slightly older on average. See text for more details.}
\label{fig:GC4}
\end{centering}
\end{figure}
In Figure~\ref{fig:GC4} we compare our inferred constraints with the findings of Ref.~\cite{OMalley} for the 22 GCs in common. It is interesting to note the good agreement obtained for the metallicity estimates of [Fe/H]. Our distances, using information from the color-magnitude diagram and only very weak priors, are in reasonable agreement with those obtained by Ref.~\cite{OMalley}, which rely on external information (GAIA parallaxes and accurate distances to nearby dwarf stars). However, we find a small shift, as our determination of the distances is $\sim 200$ pc smaller on average. This small discrepancy arises because the analysis in Ref.~\cite{OMalley} assumes a fixed extinction value, while we treat extinction as a free parameter to be constrained by the data and marginalized over. For the age determination, the agreement is within the 68\% confidence level uncertainties. From the first panel of Figure~\ref{fig:GC4} it is possible to appreciate that the errors from this study are smaller than those of Ref.~\cite{OMalley}, even though Ref.~\cite{OMalley} uses additional external information not used here. This illustrates the advantage of considering regions of the color-magnitude diagram beyond the main sequence.
The use of the full color-magnitude diagram, along with the adoption of the priors motivated in Sec.~\ref{sec:inference}, enables us to break the age--distance--metallicity degeneracy. In particular, the breaking of the age--metallicity degeneracy is visualized in Appendix~\ref{sec:sensitivity}, where we show how the isochrones and the color-magnitude diagram change in response to variations of these parameters.
\subsection{The age of the oldest GCs}
\label{sec:ageGC}
\begin{figure}
\begin{centering}
\includegraphics[width=\columnwidth, height=0.4\textheight ]{fig7.png}
\caption{Age distribution for globular clusters with different metallicity cuts ([Fe/H] $< -2$, dot-dashed; [Fe/H] $< -1.5$, solid; no cut, dashed). The behavior is consistent with the expected age--metallicity relation. We only display the statistical uncertainty; an additional uncertainty of $0.5$ Gyr at 68\% confidence level needs to be added to account for the systematic uncertainty.}
\label{fig:GCage}
\end{centering}
\end{figure}
On average, the oldest GCs are expected to be the most metal poor. Here we consider two metallicity cuts as a way to select the oldest GCs: [Fe/H]$<-2$, as adopted in Ref.~\cite{JimGC}, leaving 11 clusters, and [Fe/H]$<-1.5$, leaving 38 clusters. We estimate the age distribution $t_{\rm GC}$ for these two samples by multiplying the individual Bayesian posteriors (see Fig.~\ref{fig:GCage}).
For [Fe/H]$<-2$ this yields $t_{\rm GC}= 13.32 ^{+0.15}_{-0.20} {\rm (stat.)} \pm 0.5 {\rm (sys.)}$ Gyr, while
for [Fe/H]$<-1.5$ we obtain $t_{\rm GC}= 13.32 \pm 0.1 {\rm (stat.)} \pm 0.5 {\rm (sys.)}$ Gyr, where the first uncertainty is statistical and the second is the systematic one calculated in Sec.~\ref{sec:syserr}. The results for the two cuts are very consistent; as expected, the additional 27 clusters in the [Fe/H]$<-1.5$ cut reduce the statistical error significantly. We therefore adopt the [Fe/H]$< - 1.5$ cut due to its increased statistical power.
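The combination of per-cluster posteriors into a single age distribution can be sketched numerically. The following minimal illustration is not the paper's pipeline: the Gaussian shapes, means, and widths are placeholders standing in for the actual marginalized posteriors. It multiplies the posteriors on a common grid and reads off the 68\% interval:

```python
import numpy as np

# Common age grid in Gyr on which all posteriors are evaluated.
ages = np.linspace(10.0, 16.0, 2001)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Placeholder per-cluster age posteriors (mean, width in Gyr);
# in the actual analysis these are the MCMC marginalized posteriors.
posteriors = [gaussian(ages, mu, s) for mu, s in
              [(13.2, 0.4), (13.4, 0.5), (13.3, 0.35)]]

# Combined posterior: product of individual posteriors (sum of logs),
# renormalized to unit integral.
log_combined = np.sum([np.log(p + 1e-300) for p in posteriors], axis=0)
combined = np.exp(log_combined - log_combined.max())
combined /= np.trapz(combined, ages)

# Median and 68% credible interval from the cumulative distribution.
cdf = np.cumsum(combined)
cdf /= cdf[-1]
lo, med, hi = np.interp([0.16, 0.5, 0.84], cdf, ages)
print(f"t_GC = {med:.2f} +{hi - med:.2f} -{med - lo:.2f} Gyr (stat.)")
```

As expected, the combined interval is narrower than any individual posterior, which is the effect that drives the reduction of the statistical error when going from 11 to 38 clusters.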
\begin{figure}
\begin{centering}
\includegraphics[width=.8\columnwidth]{ageUfinal2B.pdf}
\caption{Estimate of the age of the Universe from the age of the oldest globular clusters (solid thick black line) including systematic uncertainties (dashed line) added in quadrature to a gaussian fit (with asymmetric variances) of the statistical distribution (dotted line). The thin blue line shows the Planck 2018 posterior for the age of the Universe.}
\label{fig:GCUniverse}
\end{centering}
\end{figure}
\subsection{From globular cluster ages to the age of the Universe}
\label{sec:ageuni}
The age of the oldest stars sets a lower limit for the age of the Universe. These stars and the oldest GCs formed at a redshift $z_{\rm f}$. Hence, it is possible to estimate the age $t_{\rm U}$ of the Universe from the age $t_{\rm GC}$ of the oldest GCs adding a formation time $\Delta t$, corresponding to the look back time at $z_{\rm f}$.
As argued in Ref.~\cite{JimGC}, it is possible to estimate the probability distribution of $\Delta t$ by considering that the first galaxies are found at $z\sim 11$ and a significant number of galaxies are found at $z > 8$. Many of these galaxies contain stellar populations that indicate that star formation started at $z \sim 15-40 $~\cite{2018Natur.557..392H,2020ApJ...888..124S,2019MNRAS.489.3827B}; $z_{\rm f}$ is thus assumed to be $z_{\rm f}\ge11$. Both theoretically \citep{Padoan_Jimenez1997,Trenti2015,Choksi2018,Reina_Campos2019,Kruijssen2019}
and observationally \citep{Forbes2015}, GCs seem to form at $z_{\rm f} >10$. On the other hand, GCs could not have formed before the start of reionization, which is estimated to happen around $z_{\rm f, max}\sim 30$. Ref.~\cite{JimGC} includes a computation of the probability distribution of $\Delta t$ marginalizing over $H_0$, $\Omega_{m,0}$ and $z_{\rm f}$, with $z_{\rm f}$ varying between $z_{\rm f, min} = 11$ and $z_{\rm f, max}$. The resulting distribution depends very weakly on cosmology for reasonable values of the cosmological parameters, and very weakly on the choice of $z_{\rm f, max}$ provided $z_{\rm f, max}> 20$. Here we estimate the full probability distribution of $t_{\rm U}=t_{\rm GC}+\Delta t$ by performing a convolution of the posterior probability distribution for $t_{\rm GC}$ provided in \S~\ref{sec:ageGC} and the probability distribution of $\Delta t$ from Ref.~\cite{JimGC}, for which we provide a fitting formula in Appendix~\ref{appendix:fitDt}.
We find $t_{\rm U}=13.5^{+0.16}_{-0.14} {\rm (stat.)} \pm 0.5 ({\rm sys.})$ Gyr at 68\% confidence level. The resulting posterior distribution for the age of the Universe $t_{\rm U}$ is presented in Figure \ref{fig:GCUniverse}. The solid black line is the result including only statistical errors; the dashed line is obtained by fitting this distribution with two Gaussians with the same maximum but independent variances on the two sides (dotted line) and then adding the systematic error in quadrature. For reference, the thin blue line shows the constraint inferred from CMB observations by Planck, assuming the $\Lambda$CDM model~\cite{Planck18}.
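The convolution and quadrature steps described above can be sketched as follows. This is schematic only: the two-sided Gaussian widths and the $\Delta t$ distribution below are placeholders, not the fitting formula provided in the Appendix.

```python
import numpy as np

# Grid in Gyr; the same grid is used for t_GC and for Delta t.
t = np.linspace(0.0, 20.0, 4001)
dt_grid = t[1] - t[0]

# Placeholder statistical posterior for t_GC: two half-Gaussians with
# the same maximum but independent widths (an asymmetric profile).
mu, sig_lo, sig_hi = 13.32, 0.14, 0.16
p_tgc = np.where(t < mu,
                 np.exp(-0.5 * ((t - mu) / sig_lo) ** 2),
                 np.exp(-0.5 * ((t - mu) / sig_hi) ** 2))

# Placeholder distribution for the formation offset Delta t:
# a narrow peak near 0.2 Gyr stands in for the marginalized result.
p_dt = np.exp(-0.5 * ((t - 0.2) / 0.05) ** 2)

# t_U = t_GC + Delta t: the density of a sum of independent variables
# is the convolution of their densities.
p_tu = np.convolve(p_tgc, p_dt)[:t.size] * dt_grid
p_tu /= np.trapz(p_tu, t)

# Add the 0.5 Gyr systematic in quadrature to each statistical width.
sys_err = 0.5
total_lo = np.hypot(sig_lo, sys_err)
total_hi = np.hypot(sig_hi, sys_err)
print(f"t_U peak ~ {t[np.argmax(p_tu)]:.2f} Gyr, "
      f"+{total_hi:.2f} / -{total_lo:.2f} Gyr (stat.+sys.)")
```

The quadrature step is the reason the final error budget is dominated by the 0.5 Gyr systematic: it is several times larger than either statistical width.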
\section{Summary and Conclusions}
\label{sec:summary}
Resolved stellar populations of GCs provide an excellent data set to constrain the age of the Universe, which in turn is a key parameter in cosmology governing the background expansion history of the Universe. Since the mid 90's, estimates of the ages of GCs have consistently been in the range $12$--$14$ Gyr (see e.g. Ref.~\cite{JimenezGC96}). With current improvements in observational data and stellar modeling, it is possible to decrease the uncertainty on the ages by a factor of 4. Given the high quality of the data obtained by HST and the improvement in the accuracy of stellar models, we have attempted to estimate the physical parameters of GCs, including their age, using as many features as possible in their color-magnitude diagrams.
It is well known that the MSTOP is very sensitive to the GC's age; however, it is also sensitive to distance, metallicity, and other parameters, leading to significant degeneracies in parameter space. These degeneracies can in large part be lifted if other features of the color-magnitude diagram are exploited (see Appendix~\ref{sec:sensitivity}).
In this paper, we have analyzed a sample of 68 ACS/HST globular clusters using most of the information in the color-magnitude diagram: specifically, the main sequence and red giant branch. We have developed a Bayesian approach to perform an analysis of each GC, varying simultaneously their age, distance, metallicity, [$\alpha$/Fe] and reddening adopting physically-motivated priors based on independent measurements of distances, metallicities and extinctions found in recent literature. Our obtained posteriors yield constraints that are fully compatible with previous, and independent, values in the literature.
The average age of the oldest (and most metal poor) GCs is $t_{\rm GC}=13.32 \pm 0.1 {\rm (stat.)} \pm 0.5 {\rm (sys.)}$ Gyr. The systematic errors are due to theoretical stellar model uncertainties, in particular uncertainties in the mixing length, reaction rates and opacities. Systematic errors are now larger than the statistical error, once constraints from several objects are combined. Hence, to make further progress, uncertainties in stellar model-building should be addressed.
This determination can be used to estimate the absolute age of the Universe by taking into account the look back time at the likely redshift of formation of these objects. We find the age of the Universe as determined from stellar objects to be $t_{\rm U}=13.5^{+0.16}_{-0.14} {\rm (stat.)} \pm 0.5 ({\rm sys.})$ Gyr at 68\% confidence level. The statistical error is 1.2\%; the error budget is dominated by systematic uncertainties in the stellar modeling.
The prospect of determining the age of the Universe with an accuracy competitive with current cosmology standards may serve to motivate an effort to reduce uncertainties in stellar-model building. This will be addressed in future work.
The statistical uncertainty in $t_U$ is now sufficiently small to warrant comparison with the model-dependent age inferred from the CMB, which is one of the quantities most accurately inferred from the CMB~\cite{Knox}. Thus, comparing the CMB-derived value to independent astrophysical estimates can yield precious insights into possible new physics, or support the $\Lambda$CDM model. Our determined value of $t_{\rm U}$ is fully compatible with the value inferred from the Planck mission observations assuming the $\Lambda$CDM model.
\acknowledgments
We thank the stellar modelers for making their stellar models publicly available. We thank the anonymous referee for a useful and constructive report. We also thank David Nataf for very useful feedback. This work is supported by MINECO grant PGC2018-098866-B-I00 FEDER, UE. JLB is supported by the Allan C. and Dorothy H. Davis Fellowship.
JLB has been supported by the Spanish MINECO under grant BES-2015-071307, co-funded by the ESF, during part of the development of this project. LV acknowledges support by European Union's Horizon 2020 research and innovation program ERC (BePreSySe, grant agreement 725327).
The work of BDW is supported by the Labex ILP (reference ANR-10-LABX-63) part of the Idex SUPER, received financial state aid managed by the Agence Nationale de la Recherche, as part of the programme Investissements d'avenir under the reference ANR-11-IDEX-0004-02; and by the ANR BIG4 project, grant ANR-16-CE23-0002 of the French Agence Nationale de la Recherche.
The Center for Computational Astrophysics is supported by the Simons Foundation.
1,941,325,221,164 | arxiv | \section{Introduction}
Contact geometers have long been fascinated with the subtle art of distinguishing tight contact structures from overtwisted ones: the border between these two categories is poorly understood. In this paper, we investigate one manifestation of this problem: when does transverse surgery on transverse knots in overtwisted contact structures result in a tight contact structure?
We focus on fibred knots $K \subset M$, where the fibre surface is a (rational) Seifert surface for $K$; to such a knot is naturally associated a contact structure $\xi_K$, turning $K$ into a transverse knot. Both null-homologous fibred knots and rationally fibred knots ({\it ie.\ }rationally null-homologous and fibred) support a unique contact structure up to isotopy (see \cite{Giroux:OBD, BEVHM}).
What draws our attention to such knots is a result of Etnyre and Vela-Vick (\cite{EVV:torsion}), namely, that $\xi_K$ restricted to the complement of $K$ is always tight, even when $(M, \xi_K)$ is overtwisted. Following the guiding principle that ``removing twisting'' of the contact planes increases the chances of being tight, we will investigate the operation of \textit{admissible transverse surgery}: this operation defines a contact structure on $M_r(K)$ that has ``less twisting'' of the contact planes than the original contact structure.
This operation has a hope of succeeding: when a ``sufficient amount'' of twisting is removed, it is known that the result is a tight contact structure (\textit{cf.\@} Corollary~\ref{FDTC > 1 tight}). We will look at what we can say when an arbitrarily small amount of twisting is removed. In this setting, the more negative the surgery coefficient, the less twisting is removed.
To make the above more precise, we say a few words about how the topological operation of surgery interacts with the contact geometry. Topological $r$-surgery on a null-homologous fibred knot $K$ induces a rationally fibred knot $K_r$ in $M_r(K)$, for $r \neq 0$. The contact structure $\xi_{K_r}$ on $M_r(K)$ induced by the fibred knot $K_r$ was determined by Baker, Etnyre, and van Horn-Morris \cite{BEVHM} for $r < 0$, and by the author \cite{Conway} for $r > 0$, to be the contact structure obtained by admissible (resp.\@ inadmissible) transverse $r$-surgery on $K$ in $(M,\xi_K)$. In the cases we will consider, admissible (resp.\@ inadmissible) transverse surgery on $K$ corresponds to negative (resp.\@ positive) contact surgery on a Legendrian approximation of $K$ (this is not true in general; see \cite{BE:transverse}).
The main result of this paper is a characterisation of the Heegaard Floer contact invariant of the contact structure $\xi_{K_r}$, for $r < 0$. In particular, we will be concerned with $\cplus(\xi_{K_r}) \in \HFplus(-M_r(K))$ and its image $\cplusred(\xi_{K_r})$ under the map $\HFplus(-M_r(K)) \to \HFred(-M_r(K))$, defined by Ozsváth and Szabó in \cite{OS:contact}. Among other properties, the non-vanishing of either of these classes implies that the contact structure $\xi_{K_r}$ is tight.
To state the result, we also need to consider the ``mirror'' of $K$ as a fibred knot in $-M$ (that is, $M$ with reversed orientation). To emphasise the ambient manifold, we will denote this knot as $\overline{K} \subset -M$, and the supported contact structure on $-M$ as $\xi_{\overline{K}}$.
\begin{thm}\label{maintheorem}
If $(M_r(K), \xi_{K_r})$ is the result of admissible transverse $r$-surgery on $K \subset (M, \xi_K)$, then:
\begin{enumerate}
\item $c^+(\xi_{K_r}) \neq 0$ for all $r < 0$ if and only if $c^+_{\rm{red}}(\xi_{\overline{K}}) = 0$.
\item $c^+_{\rm{red}}(\xi_{K_r}) \neq 0$ for all $r < 0$ if and only if $c^+(\xi_{\overline{K}}) = 0$.
\end{enumerate}
\end{thm}
This theorem will be proved in slightly more generality (taking surgery on rationally fibred knots into account) as Theorem~\ref{rationalmaintheorem}, and in Corollary~\ref{Rplus(red)} we look at the particular values of $r$ at which the contact invariants change from zero to non-zero.
Ozsváth and Szabó took the first steps toward this result: in \cite{OS:contact}, they showed that when $\HFred(M) = 0$, then $\cplus(\xi_{K_{-1}}) \neq 0$ for any fibred knot $K \subset M$. Since $\HFred(M) = 0$ implies that $\cplusred(\xi_{\overline{K}}) = 0$, our result extends this to $\xi_{K_r}$, for all $r < 0$. Similar results were obtained by Hedden and Plamenevskaya in \cite{HP}, in the context of L-space surgeries. In \cite{HM}, Hedden and Mark investigated $\cplus(\xi_{K_r})$ for $r = -1/n$, in cases where Theorem~\ref{maintheorem} gives no information. It is always true that $\cplus(\xi_{K_{-1/n}})\neq 0$ for sufficiently large positive integers $n$ (see Corollary~\ref{FDTC > 1 tight}), and they determined upper bounds on the minimum such $n$ (\textit{cf.\@} Corollary~\ref{Rplus(red)}).
\begin{example}
For an example of how Theorem~\ref{maintheorem} can obviate the need for detailed calculation of Heegaard Floer groups, we show that the open book $(\Sigma, \phi)$ in Figure~\ref{genus 2 example} supports a tight contact structure with $\cplusred \neq 0$, for any integers $k_1, k_2 \geq 2$. Let $K$ be the binding of this open book, and consider the open book $(\Sigma,\phi^{-1}\circ \tau_{\bd})$ for inadmissible transverse $(-1)$-surgery on $\overline{K}$, where $\tau_{\bd}$ is a boundary parallel Dehn twist. This latter open book is a stabilisation of one considered in \cite[Section~4]{BE:transverse}, which was shown to support a contact structure with $\cplus = 0$. By Theorem~\ref{maintheorem}, we conclude that $(\Sigma,\phi)$ supports a tight contact structure with $\cplusred \neq 0$.
\begin{figure}[htbp]
\begin{center}
\begin{overpic}[scale=2,tics=20]{"genus2example"}
\put(30,66){\large $-$}
\put(80,90){\large $+$}
\put(135,66){\large $-$}
\put(160,90){\large $+$}
\put(215,90){\large $-k_1$}
\put(222,17){\large $-k_2$}
\put(250,60){\large $-$}
\put(288,60){\large $+$}
\end{overpic}
\caption{A genus-2 open book supporting a tight contact structure with $\cplusred \neq 0$, for any integers $k_1, k_2 \geq 2$.}
\label{genus 2 example}
\end{center}
\end{figure}
\end{example}
We present several consequences of Theorem~\ref{maintheorem}. First, we can recover a familiar result of Honda, Kazez, and Matić. The fractional Dehn twist coefficient is a measure of how much a mapping class ``twists'' arcs with endpoints on the boundary components of an open book.
\begin{cor}[\cite{HKM:right2}]\label{FDTC > 1 tight}
Let $(\Sigma, \phi)$ be an open book decomposition, where $\Sigma$ has one boundary component. If the fractional Dehn twist coefficient (FDTC) of $\phi$ around $\bd \Sigma$ is greater than $1$, then the supported contact structure is tight with $\cplusred \neq 0$.
\end{cor}
Their proof involved first constructing a taut foliation, which was then perturbed to a tight contact structure. Our proof does not require passing via taut foliations, nor does it prove their existence (in that sense, we do not recover the full extent of their result).
We call $K \subset M$ an L-space knot if $M_r(K)$ is an L-space for some $r > 0$, \textit{ie.\@} $\big|H_1(M_r(K))\big| = \rank \HFhat(M_r(K))$. In $S^3$, such knots are known to be fibred (\cite{Ni}) and to support the tight contact structure (\cite{Hedden:positivity}). Outside of $S^3$, not all L-space knots are fibred. However, when they \textit{are} fibred, we generalise Hedden's result from \cite{Hedden:positivity}.
\begin{cor}\label{L-space knot}
If $K \subset M$ is a fibred L-space knot, then $\cplus(\xi_K) \neq 0$.
\end{cor}
The support genus of a contact structure is defined in \cite{EO:invariants} to be the minimal genus of an open book supporting the contact structure. Ozsváth, Stipsicz, and Szabó showed in \cite{OSS:planar} that if $\cplusred(\xi) \neq 0$, then $\xi$ is non-planar, \textit{ie.\@} cannot be supported by an open book with planar pages.
\begin{cor}\label{non-planar contact structures}
If $K \subset M$ is fibred, and $\cplus(\xi_{\overline{K}}) = 0$, then $(M_r(K), \xi_{K_r})$ is non-planar for all $r < 0$.
\end{cor}
Similarly, consider the support genus of a Legendrian knot $L \subset (M, \xi)$, \textit{ie.\@} the minimal genus open book supporting $\xi$ such that $L$ sits on a page, defined in \cite{Onaran}. This next corollary generalises a result of Li and Wang (\cite{LW}).
\begin{cor}\label{non-planar Legendrian knots}
If $K \subset M$ is fibred with $\cplus(\xi_{\overline{K}}) = 0$, and $L$ is a Legendrian approximation of $K$, then neither $L$ nor $-L$ sits on the page of a planar open book supporting $\xi_K$, where $-L$ is $L$ with the orientation reversed.
\end{cor}
In particular, no Legendrian approximation of a torus knot (or more generally, any non-trivial fibered strongly quasi-positive knot, by \cite{Hedden:positivity}) with maximal Thurston--Bennequin number sits on the page of a planar open book for $(S^3, \xi_{\rm{std}})$.
We can use the full generality of Theorem~\ref{rationalmaintheorem} to show that sufficiently large inadmissible transverse surgery on a (rationally) null-homologous fibred knot $K \subset (M, \xi_K)$ preserves the non-vanishing of $\cplusred$.
\begin{cor}\label{cplusred preserved}
Let $K \subset M$ be a (rationally) null-homologous fibred knot, and fix a framing of $K$. If $\cplusred(\xi_K) \neq 0$, then $\cplusred(\xi_{K_r}) \neq 0$ for all sufficiently large $r$ (measured with respect to the fixed framing).
\end{cor}
As an amusing application, Theorem~\ref{maintheorem} gives many examples of rationally fibred knots $K$ such that both $K$ and $\overline{K}$ support tight contact structures. The first such examples were discovered by Hedden and Plamenevskaya in \cite[Corollary~2]{HP}. This is a phenomenon that for null-homologous fibred knots only occurs when the monodromy of the fibration is isotopic to the identity. To create these examples, start with a null-homologous fibred genus-$g$ knot $K'$ that supports a tight contact structure $\xi_{K'}$ on $M'$, and such that $\cplusred(\xi_{K'}) = 0$. By \cite[Theorem~1.6]{Conway}, inadmissible transverse $r$-surgery on $K'$ is tight for $r > 2g-1$. Thus, if $K = K'_r$ is the surgery dual knot to $K'$ under this $r$-surgery, then both $K$ and $\overline{K} = \overline{K'}_{-r}$ support tight contact structures, by \cite[Theorem~1.6]{Conway} and Theorem~\ref{maintheorem}, respectively.
Up to this point, we have only considered negative surgery coefficients. This is because admissible transverse surgery is only defined for slopes in an interval $[-\infty, a)$, where $a$ is often hard to pin down. However, when the fibred knot $K \subset M$ supports an overtwisted contact structure $\xi_K$, and the associated monodromy is not right-veering (see \cite{HKM:right1} for details), we can show that admissible transverse surgery is defined for any positive surgery coefficient, and that such a surgery is tight.
\begin{thm}\label{positive surgeries}
Let $K \subset M$ be a null-homologous fibred knot supporting an overtwisted contact structure $\xi_K$. Assume further that the monodromy $\phi_K$ of the open book associated to the fibration is not right-veering. Then we can define admissible transverse $r$-surgery for all $r \in \Q$, and the resulting manifold $(M_r(K),\xi_r)$ satisfies $\cplusred(\xi_r) \neq 0$ for all $r \geq 0$ (in addition to whatever is guaranteed by Theorem~\ref{maintheorem}).
\end{thm}
\begin{remark}\label{admiss vs inadmiss}
For $r > 0$, we stress that $\xi_r$ given in Theorem~\ref{positive surgeries} is not the contact structure $\xi_{K_r}$ on $M_r(K)$ supported by the rationally fibred knot $K_r$. Indeed, $\xi_r$ is the result of admissible transverse surgery on $K$, whereas $\xi_{K_r}$ for $r > 0$ is the result of inadmissible transverse surgery on $K$ (see Theorem~\ref{thm:OBD surgery}).
\end{remark}
We can use Theorems~\ref{rationalmaintheorem}~and~\ref{positive surgeries} to narrow down the possibilities for manifolds that do not support tight contact structures.
\begin{cor}\label{where surgery is not tight}
If $\HFred(M) = 0$ and $K \subset M$ is a null-homologous fibred knot, then whenever $M_r(K)$ does not support a tight contact structure, $\phi_K$ is right-veering and $r > 0$.
For a general $M$ and a null-homologous fibred knot $K \subset M$, there exists at most one $r \in \Q\cup \{\infty\}$ such that both orientations of $M_r(K)$ fail to support a tight contact structure. In addition, at least one of the following is true:
\begin{itemize}
\item $M_r(K)$ supports a tight contact structure in both of its orientations for all $r > 0$.
\item $M_r(K)$ supports a tight contact structure in both of its orientations for all $r < 0$.
\end{itemize}
\end{cor}
This paper is organised as follows: in Section~\ref{sec:background} we recall some Heegaard Floer homology, transverse surgery, and how they interact; in Section~\ref{sec:neg surgeries}, we deal with the proof of Theorem~\ref{maintheorem} and Corollaries~\ref{FDTC > 1 tight}--\ref{cplusred preserved}; in Section~\ref{sec:pos surgeries}, we prove Theorem~\ref{positive surgeries} and Corollary~\ref{where surgery is not tight}; and in Section~\ref{sec:questions}, we look at some further questions, namely, the difference between $\cplus$ and $\chat$, and genus-1 fibred knots.
\subsection*{Acknowledgements}
The author would like to thank John Etnyre for many helpful discussions throughout this project. Additionally, the author has benefited from discussions with and/or comments on an early draft of this paper from David Shea Vela-Vick, Tom Mark, B\"{u}lent Tosun, Matt Hedden, and an anonymous referee. This work was partially supported by NSF Grant DMS-13909073.
\section{Background}
\label{sec:background}
We assume familiarity with the basics of Heegaard Floer Homology and knot Floer Homology (\cite{OS:hf1, OS:hfk}); Legendrian and transverse knots, surgery on Legendrian knots, open book decompositions, and convex surfaces (\cite{Etnyre:knots, Etnyre:contactlectures, Etnyre:OBlectures, DGS:surgery}). Here, we will recall the relevant information.
\subsection{Heegaard Floer Homology}
\label{subsec:HF}
All of the Heegaard Floer groups in this paper will be defined over the field $\F = \Z/2\Z$. As per convention, given a generating set $\{\bm{x}\}$ for $\CFhat(M)$, we consider $\CFinfinity(M)$ as being generated by $\{U^i\bm{x}\,|\,i \in \Z\}$. Then, $\CFminus(M)$ is generated by $\{U^i\bm{x}\,|\,i \in \Z_{> 0}\}$, and $\CFplus(M) = \CFinfinity(M) / \CFminus(M)$. These chain complexes are involved in several short exact sequences, leading to homology long exact sequences.
$$\cdots \xrightarrow{\delta_+} \HFhat(M) \xrightarrow{\widehat{\iota}} \HFplus(M) \xrightarrow{U} \HFplus(M) \xrightarrow{\delta_+} \cdots$$
$$\cdots \xrightarrow{\delta_-} \HFminus(M) \xrightarrow{U} \HFminus(M) \xrightarrow{\pi_-} \HFhat(M) \xrightarrow{\delta_-} \cdots$$
$$\cdots \xrightarrow{\delta_\infty} \HFminus(M) \xrightarrow{\iota_-} \HFinfinity(M) \xrightarrow{\pi_\infty} \HFplus(M) \xrightarrow{\delta_\infty} \cdots$$
For the second short exact sequence, we naturally identify the chain complex generated by $\{U\bm{x}\}$ with $\CFhat(M)$. With this identification, the commutativity of the following diagram can be checked on the chain level.
\begin{center}
\begin{tikzpicture}[node distance=2cm,auto]
\node (UL) {$\HFhat(M)$};
\node (UM) [right of=UL] {\null};
\node (UR) [right of=UM] {$\HFplus(M)$};
\node (BR) [below of=UM] {$\HFminus(M)$.};
\draw[->] (UL) to node {$\widehat\iota$} (UR);
\draw[->] (UR) to node {$\delta_\infty$} (BR);
\draw[->] (UL) to node[below]{$\delta_-$} (BR);
\end{tikzpicture}
\end{center}
We define $\HFred(M)$ to be $\coker \pi_\infty$, or equivalently $\ker \iota_-$; these two definitions are isomorphic via $\delta_\infty$.
The Heegaard Floer chain groups for $-M$ are dual to those for $M$. In particular, we get a natural non-degenerate bilinear pairing (the Kronecker pairing) on $\CFhat(-M) \otimes \CFhat(M)$ that descends to homology:
$$\langle \cdot, \cdot \rangle : \HFhat(-M) \otimes \HFhat(M) \to \F.$$
There is a similar pairing for $\HFpm(-M)$ and $\HFmp(M)$, see \cite{OS:4manifolds1} for details. For our purposes, it is sufficient to note that if $\delta_-(x) \neq 0$, where $x \neq 0 \in \HFhat(-M)$ and $\delta_- : \HFhat(-M) \to \HFplus(-M)$, then there exists some $y \neq 0 \in \HFhat(M)$ such that $\langle x, y \rangle \neq 0$ and $y \in \im \delta_+$, or equivalently, $\widehat\iota(y) = 0 \in \HFplus(M)$.
Putting this together with the fact that $\delta_- = \delta_\infty \circ \widehat\iota$, we get the following characterisation of elements of $\HFred(-M)$, which we will use in the proof of Theorem~\ref{maintheorem}.
\begin{lem}\label{HF duality}
Given an element $x \in \HFhat(-M)$, then $\widehat\iota(x) \in \HFplus(-M)$ descends to a non-zero element of $\HFred(-M)$ if and only if there exists some $y \neq 0 \in \HFhat(M)$ such that $\langle x, y \rangle \neq 0$ and $\widehat\iota(y) = 0 \in \HFplus(M)$.
\end{lem}
\subsection{Knot Heegaard Floer Homology}
\label{subsec:HFK}
Ozsváth and Szabó \cite{OS:hfk} and independently Rasmussen \cite{Rasmussen} define a filtration on $\CFhat(M)$ induced by a (rationally) null-homologous knot $K \subset M$; this filtration $\mathcal{A}$ is called the {\it Alexander grading}. We get an induced bi-filtration on $\CFinfinity(M)$ given by $\mathcal{F}(U^i\bm{x}) = (-i, \mathcal{A}(\bm{x}) - i)$. We will use $\mathcal{C}_K$ to denote the complex $\CFinfinity(M)$ filtered by $K$, and as per convention, $\mathcal{C}_K\{\mbox{relation}(i,j)\}$ will denote the sub- and quotient complexes of $\mathcal{C}_K$ determined by $\mbox{relation}(i,j)$.
\begin{remark}
The Alexander grading depends on a choice of relative Spin$^c$ structure $\mathfrak{s} \in \mathrm{Spin}^{\it c}(M,K)$. However, different choices of $\mathfrak{s}$ give isomorphic chain complexes, and in practice amounts to shifting the Alexander grading by a fixed constant. Since we are only interested in the structure of $\mathcal{C}_K$ and not the particular values of the Alexander grading, we need not worry about this issue. For the remainder of the article, we assume that we have fixed $\mathfrak{s}$, whenever needed. See \cite{OS:hfk, OS:rationalsurgery, HP, Raoux} for more details.
\end{remark}
First note that $\mathcal{C}_K\{i = 0\} = \CFhat(M)$, and $\mathcal{C}_K\{i \geq 0\} = \CFplus(M)$. The homology of quotient complexes of $\mathcal{C}_K\{i = 0\}$ tell us whether $K$ is fibred.
\begin{thm}[\cite{Ni}, \cite{HP}]\label{thm:fibred HFK}
A rationally null-homologous knot $K\subset M$ with irreducible complement is fibred if and only if $H_*(\mathcal{C}_K\{i = 0, j = {\rm bottom}\})\cong\F$, where ${\rm bottom}$ is the smallest value of $j$ such that the homology is non-trivial. If $K$ is integrally null-homologous, then ${\rm bottom} = g(K)$, where $g(K)$ is the genus of the knot.
\end{thm}
Let $K$ be a rationally null-homologous knot with a fixed framing. If $K$ is null-homologous, then we can choose the Seifert framing, and if not, then we can choose the \textit{canonical framing} (see \cite{MT, Raoux}), but this choice will not affect our results.
According to \cite{OS:hfk, OS:rationalsurgery}, we can use $\mathcal{C}_K$ to determine the Heegaard Floer homology of sufficiently large (positive or negative) surgeries on $K$ (with respect to the fixed framing). For our purposes, it will be sufficient to identify the complexes $$\mathcal{C}_{K_n}\{i = 0, j = {\rm bottom}\} \mbox{, }\mathcal{C}_{K_n}\{i = 0\} \mbox{, and } \mathcal{C}_{K_n}\{i \geq 0\},$$ where $K_n$ is the surgery dual knot to $K$ after $n$-surgery, for a large integer $n > 0$. We will also determine $$\mathcal{C}_{K_{-n}}\{i = 0, j = {\rm top}\} \mbox{, } \mathcal{C}_{K_{-n}}\{i = 0\} \mbox{, and } \mathcal{C}_{K_{-n}}\{i \geq 0\},$$ where $K_{-n}$ is the surgery dual knot to $K$ after $-n$-surgery, for a large integer $n > 0$.
Let ${\rm top}$ be the maximum value of $j_0$ such that $H_*\left(\mathcal{C}_K\{i = 0, j = j_0\}\right)$ is non-trivial, and define ${\rm bottom}$ as the minimum such value. Let $\widehat{A} = \mathcal{C}_{K}\{\max(i,j-{\rm top}+1)=0\}$, let $A^+ = \mathcal{C}_{K}\{\max(i,j-{\rm top}+1)\geq0\}$, and let $S = \mathcal{C}_{K}\{i < 0 , j = {\rm top}-1\} = \mathcal{C}_K\{i = -1, j = {\rm top}-1\}$. The last equality holds because for a fixed $i_0$, we can assume that $\mathcal{C}_K\{i = i_0\}$ is non-trivial only for ${\rm bottom} \leq j - i_0 \leq {\rm top}$.
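To illustrate these definitions, consider the right-handed trefoil $K = T_{2,3} \subset S^3$, with $g(K) = 1$. Under one standard model of its knot Floer complex (conventions for signs and gradings vary, so this is only an illustrative sketch), $\mathcal{C}_K$ is generated over $\F[U, U^{-1}]$ by generators $a$, $b$, $c$ at filtration levels $(i,j) = (0,1)$, $(0,0)$, $(0,-1)$ (together with their $U$-translates), and the only differential is $\partial b = Ua + c$. Here ${\rm top} = 1$ and ${\rm bottom} = -1$, and
$$S = \mathcal{C}_K\{i = -1, j = 0\} = \F\langle Ua \rangle, \qquad \widehat{A} = \mathcal{C}_K\{\max(i,j) = 0\} = \F\langle Ua, b, c\rangle.$$
In $\widehat{A}$ the differential $\partial b = Ua + c$ survives, so $H_*(\widehat{A}) \cong \F$, consistent with large positive surgeries on $T_{2,3}$ being L-spaces; likewise $H_*(S) \cong \F$, and $H_*(\mathcal{C}_K\{i = 0, j = -1\}) = \F\langle c \rangle$, as Theorem~\ref{thm:fibred HFK} predicts for a fibred knot.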
\begin{thm}[\cite{OS:hfk, OS:rationalsurgery, HP}]\label{thm:HF positive surgery formula}
Let $K \subset M$ be a rationally null-homologous knot with a fixed framing. For all sufficiently large integers $n$, we have:
\begin{itemize}
\item $\mathcal{C}_{K_n}\{i = 0, j = {\rm bottom}\} \simeq S$,
\item $\mathcal{C}_{K_n}\{i = 0\} = \CFhat(M_n(K)) \simeq \widehat{A}$,
\item $\mathcal{C}_{K_n}\{i \geq 0\} = \CFplus(M_n(K)) \simeq A^+$,
\item $\chat(\xi_{K_n})$ is the image of $H_*(S) \to H_*(\widehat{A})$.
\end{itemize}
\end{thm}
Let $\widehat{A}' = \mathcal{C}_{K}\{\min(i,j - {\rm bottom} - 1)=0\}$, let $A'^+ = \mathcal{C}_{K}\{\min(i,j-{\rm bottom}-1)\geq0\}$, and let $Q' = \mathcal{C}_{K}\{i > 0 , j = {\rm bottom}+1\} = \mathcal{C}_K\{i = 1, j = {\rm bottom} + 1\}$.
\begin{thm}[\cite{OS:hfk, OS:rationalsurgery, HP}]\label{thm:HF negative surgery formula}
Let $K \subset M$ be a rationally null-homologous knot with a fixed framing. For all sufficiently large integers $n$, we have:
\begin{itemize}
\item $\mathcal{C}_{K_{-n}}\{i = 0, j = {\rm top}\} \simeq Q'$,
\item $\mathcal{C}_{K_{-n}}\{i = 0\} = \CFhat(M_{-n}(K)) \simeq \widehat{A}'$,
\item $\mathcal{C}_{K_{-n}}\{i \geq 0\} = \CFplus(M_{-n}(K)) \simeq A'^+$.
\end{itemize}
\end{thm}
\begin{remark}
The proof of \cite[Theorem~4.2]{HP}, on which the first bullet points of both of the theorems above are based, goes through essentially unchanged for rationally null-homologous fibred knots, once care is taken in calculating the values of the Alexander filtration.
\end{remark}
Based on Theorem~\ref{thm:fibred HFK}, we conclude that if $K$ is fibred, $H_*(Q')$ and $H_*(S)$ are singly generated.
\subsection{Transverse Surgery and Open Books}
\label{subsec:transverse surgery}
Consider a contact manifold $(M, \xi)$ containing a transverse knot $K$. Fix a framing on $K$, against which all slopes will be measured. There is a natural analogue of Dehn surgery on transverse knots, called {\it transverse surgery}. This comes in two flavours: {\it admissible} and {\it inadmissible}. See \cite{BE:transverse, Conway} for more details.
Given a transverse knot $K\subset (M, \xi)$, a neighbourhood of $K$ is contactomorphic to $\{r \leq a\} \subset (S^1 \times \R^2, \xi_{\rm{rot}}=\ker(\cos r\, dz + r\sin r\,d\theta))$ for some $a$, where $z$ is the coordinate on $S^1$, and $K$ is identified with the $z$-axis. The slope of the leaves of the foliation induced by the contact planes on the torus $\{r = r_0\}$ is $-\cot r_0 / r_0$. We will abuse notation and refer to the torus $\{r = r_0\}$ (resp.{}\ the solid torus $\{r \leq r_0\}$) as being a torus (resp.{}\ a solid torus) of slope $-\cot r_0 / r_0$.
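This slope can be computed directly from the contact form (here we measure slope as $d\theta/dz$; other conventions will invert this formula): a leaf of the characteristic foliation on $\{r = r_0\}$ is tangent to the kernel of $\cos r_0\, dz + r_0\sin r_0\,d\theta$, so along a leaf
$$\cos r_0\, dz + r_0 \sin r_0\, d\theta = 0, \qquad \mbox{ie.\ } \quad \frac{d\theta}{dz} = -\frac{\cos r_0}{r_0 \sin r_0} = -\frac{\cot r_0}{r_0},$$
recovering the stated slope.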
If $L$ is a Legendrian approximation of $K$, then inside a standard neighbourhood of $L$, we can find a neighbourhood of $K$ where the boundary torus is of slope $p/q$, for any $p/q < tb(L)$. Conversely, if we have a neighbourhood for $K$ where the boundary torus is of slope $p/q$, then there exist Legendrian approximations $L_n$ of $K$ with $tb(L_n) = n$, for any integer $n < p/q$.
To perform {\it admissible transverse surgery}, we take a torus of rational slope $p/q$ inside $\{r \leq a\}$, we remove the interior of the corresponding solid torus of slope $p/q$, and perform a contact cut on the boundary, {\it ie.\ }we quotient the boundary by the $S^1$-action of translation along the leaves of slope $p/q$ (see \cite{Lerman} for details); the contact structure descends to the quotient manifold to give the result of admissible transverse $p/q$-surgery on $K$. To perform {\it inadmissible transverse surgery}, we first remove $\{r < b\}$ for some $b \leq a$. We then glue on the $T^2 \times I$ layer $\{r_0 \leq r \leq b + 2\pi\}$ by identifying $\{r = b + 2\pi\}$ with $\bd\left( M\backslash \{r < b\}\right)$; we choose $r_0$ such that the new boundary $\{r = r_0\}$ is a torus of slope $p/q$. The result of inadmissible transverse $p/q$-surgery on $K$ is the result of performing a contact cut on the new boundary.
A (rationally) fibred link $L \subset M$ induces an open book decomposition of $M$, which supports a unique contact structure $\xi_L$ and in which $L$ is a transverse link (\cite{Giroux:OBD, BEVHM}). The construction of $\xi_L$ gives a natural neighbourhood of each component of $L$ in which we can define admissible transverse surgery up to the slope given by the page (the \textit{page slope}).
\begin{remark}\label{multiple tori}
As in \cite[Remark 2.9]{Conway}, the result of admissible transverse surgery depends in general on the neighbourhood of $K$ chosen to define it. For fibred knots $K \subset (M, \xi_K)$, we will always be using the neighbourhood mentioned above, unless otherwise noted.
\end{remark}
If a link $L\subset M$ is (rationally) fibred, then topological $r$-surgery on a component $K$ of $L$ for any $r$ not equal to the page slope gives a rationally fibred link $L' \subset M'$.
\begin{thm}[\cite{BEVHM}, \!\cite{Conway}]\label{thm:OBD surgery} The contact structure $\xi_{L'}$ on $M'$ is the result of admissible (resp.{}\ inadmissible) transverse $r$-surgery on the knot $K \subset (M, \xi_L)$, when $r$ is less than (resp.{}\ greater than) the page slope. \end{thm}
In \cite[Section~3.3]{Conway}, the author gave an algorithm to exhibit an abstract integral open book supporting $\xi_{L'}$. To do this, we break up an admissible transverse surgery into a sequence of integral admissible transverse surgeries, and an inadmissible transverse surgery is treated as an inadmissible transverse $(f+1/n)$-surgery followed by an admissible transverse surgery (where $f$ is the page slope). The algorithm can be described in terms of the following two moves: (1) adding a boundary parallel Dehn twist about a boundary component of the current open book, and (2) stabilising the open book along a boundary parallel arc. The first move is an integral admissible (resp.{} inadmissible) transverse surgery if we add a positive (resp.{} negative) Dehn twist, and the second move changes the surgery coefficient of subsequent surgeries. See \cite[Section~3.3]{Conway} for the details.
\subsection{Heegaard Floer Contact Elements}
\label{subsec:HF and contact geometry}
As in \cite{OS:contact, HP}, when $K \subset M$ is rationally fibred, the \textit{contact element} $\widehat{c}(\xi_K) \in \HFhat(-M)$ of the supported contact structure is given by the image of
$$H_*(\mathcal{C}_{\overline{K}}\{i = 0, j = {\rm bottom}\}) \to H_*(\mathcal{C}_{\overline{K}}\{i = 0\}) \cong \HFhat(-M),$$ where $\overline{K}$ is the knot $K$ considered as a subset of $-M$. We denote by $\cplus(\xi_K)$ and $\cplusred(\xi_K)$ the elements $\widehat\iota(\chat(\xi_K)) \in \HFplus(-M)$ and $\delta_\infty(\cplus(\xi_K)) \in \HFred(-M)$. By Lemma~\ref{HF duality}, $\cplusred(\xi_K) \neq 0$ if and only if there exists some $y \neq 0 \in \HFhat(M)$ such that $\langle \chat(\xi_K), y \rangle \neq 0$ and $\widehat\iota(y) = 0 \in \HFplus(M)$.
The contact elements interest us because their non-vanishing proves that the contact structure is tight (see \cite{OS:contact}). Furthermore, they behave well under capping off binding components of open books (\cite{Baldwin:cappingoff}) and admissible transverse surgery, as in the following lemma.
\begin{figure}[htbp]
\begin{center}
\begin{overpic}[scale=3,tics=20]{"positivesurgery"}
\put(-15,110){\large $B_1$}
\put(172,19){\large $B_2$}
\put(172,77){\large $B_3$}
\put(172,177){\large $B_k$}
\put(110,20){\large $+1$}
\put(38,110){\large $-n$}
\put(155,110){\large $\vdots$}
\put(155,135){\large $\vdots$}
\put(120,130){\large $\phi''$}
\end{overpic}
\caption{This open book is glued along $B_1$ to do inadmissible transverse $s$-surgery, for $1/n < s < 1/(n-1)$. The monodromy is $\tau_{B_1}^{-n}\tau_{B_2}\phi''$, where the support of $\phi''$ is to the right of the dashed curve.}
\label{fig:positive surgery}
\end{center}
\end{figure}
\begin{lem}\label{lemma:admissible preserves non-vanishing}
Let $K \subset (M, \xi)$ be a transverse knot (not necessarily fibred or null-homologous). If $\cplus(\xi) \neq 0$ (resp.{}\ $\cplusred(\xi) \neq 0$), then the result $(M',\xi')$ of admissible transverse surgery on $K$ satisfies $\cplus(\xi')\neq0$ (resp.{}\ $\cplusred(\xi') \neq 0$).
\end{lem}
\begin{proof}
If $K' \subset (M', \xi')$ is the surgery dual knot to $K$, then some inadmissible transverse $s$-surgery on $K'$ results in $(M, \xi)$, where $s$ is measured with respect to some framing of $K'$. By \cite[Lemma~6.5]{BEVHM}, there exists an open book decomposition for $(M',\xi')$ such that $K'$ is a component of the binding. We then stabilise this open book along arcs parallel to the boundary component corresponding to $K'$ to get an open book $(\Sigma', \phi')$, such that $s > 0$ when measured against the framing induced by $\Sigma'$ on $K'$. By Theorem~\ref{thm:OBD surgery}, we can construct an integral open book for $(M, \xi)$ from $(\Sigma', \phi')$, using the algorithm from \cite[Section~3.3]{Conway}.
Let $0$ be the page slope on $K'$. If $s = 1/n$ for some positive integer $n$, then $(M, \xi)$ is supported by the open book $(\Sigma', \phi'\circ\tau_{B'}^{-n})$, where $\tau_{B'}$ is a Dehn twist about the boundary component $B'$ corresponding to $K'$. By \cite{BEVHM, Baldwin:monoid}, the property that a monodromy supports a contact structure with non-vanishing contact invariant is preserved under composition. Hence, since $(\Sigma', \tau_{B'}^n)$ supports a contact structure with non-vanishing contact invariant and $\phi' = \left(\phi'\circ\tau_{B'}^{-n}\right)\circ\tau_{B'}^n$, the open book $(\Sigma', \phi')$ also supports a contact structure with non-vanishing contact invariant.
If $1/n < s < 1/(n-1)$ (where if $n=1$, we mean just $s > 1$), then by \cite[Section~3.3]{Conway}, $(M, \xi)$ is supported by an open book $(\Sigma' \cup \Sigma'', \phi'\circ\tau^{-n}_{B_1}\tau_{B_2}\phi'')$. Here, the open book $(\Sigma'',\tau_{B_1}^{-n}\tau_{B_2}\phi'')$ has boundary components $B_1, B_2,\ldots, B_k$ (where $k \geq 3$), and $\phi''$ is supported in a neighbourhood of $B_3 \cup \cdots \cup B_k$ (see Figure~\ref{fig:positive surgery}); we glue $\Sigma'$ to $\Sigma''$ by identifying $B_1$ with $B'$ (the binding component corresponding to $K'$). In this open book for $(M, \xi)$, if we cap off $B_3,\ldots,B_k$, we arrive at an open book $(\Sigma', \phi'\circ\tau_{B'}^{-n+1})$.
Since by \cite{Baldwin:cappingoff}, capping off an open book preserves the non-vanishing of the contact class, if $n = 1$, then we are done. If $n > 1$, then $n-1 > 0$, and we have an open book for inadmissible transverse $1/(n-1)$-surgery on $K'$, where the supported contact structure has non-vanishing contact invariant. We finish the proof as in the case $s = 1/(n-1)$.
\end{proof}
\section{Negative Surgeries}
\label{sec:neg surgeries}
In this section, we will state and prove Theorem~\ref{maintheorem} in full generality (as Theorem~\ref{rationalmaintheorem}), and then prove its corollaries.
\begin{thm}\label{rationalmaintheorem}
Let $K \subset M$ be a rationally null-homologous fibred knot with a fixed framing, and let $f$ be the slope on $\bd N(K)$ induced by a page of the fibration. If $(M_r(K), \xi_{K_r})$ is the result of admissible transverse $r$-surgery on $K \subset (M, \xi_K)$, where $r$ is measured with respect to the fixed framing, then:
\begin{enumerate}
\item $c^+(\xi_{K_r}) \neq 0$ for all $r < f$ if and only if $c^+_{\rm{red}}(\xi_{\overline{K}}) = 0$.
\item $c^+_{\rm{red}}(\xi_{K_r}) \neq 0$ for all $r < f$ if and only if $c^+(\xi_{\overline{K}}) = 0$.
\end{enumerate}
\end{thm}
\begin{proof}
To keep notation simple, we will assume $K$ is null-homologous, where the fixed framing is taken to be the Seifert framing (or equivalently, the page slope $f$), and we fix a relative Spin$^c$ structure to measure the Alexander grading on $\mathcal{C}_K$ and $\mathcal{C}_{\overline{K}}$ such that ${\rm top} = g$ and ${\rm bottom} = -g$, where $g = g(K)$ is the genus of $K$. The proof for the more general case is the same, \textit{mutatis mutandis}.
It follows from Lemma~\ref{lemma:admissible preserves non-vanishing} that the vanishing of $\cplus(\xi_{K_r})$ (resp.{}\ $\cplusred(\xi_{K_r})$) for some $r < 0$ implies the same vanishing for $\xi_{K_n}$, where $n$ is a sufficiently negative integer.
Thus, to prove Theorem~\ref{rationalmaintheorem} we only need to show that the theorem is true when $r$ is restricted to be a sufficiently negative integer $n$, using Theorems~\ref{thm:HF positive surgery formula}~and~\ref{thm:HF negative surgery formula}.
\subsection*{Proof of (1)} Since $\cplus(\xi_{K_n}) \in \HFplus(-\left(M_n(K)\right)) = \HFplus(\left(-M\right)_{-n}(\overline{K}))$, we will look at $\mathcal{C}_{\overline{K}}$, and large positive integer surgeries on $\overline{K}$. We start by noting two short exact sequences (where the left map is inclusion), which commute with the obvious vertical inclusions:
\begin{center}
\begin{tikzpicture}[node distance=3cm,auto]
\node (UL) {$\mathcal{C}_{\overline{K}}\{i = 0, j \leq g-1\}$};
\node (ULL) [left of=UL, node distance = 3cm] {$0$};
\node (UM) [right of=UL, node distance=4cm] {$\mathcal{C}_{\overline{K}}\{i = 0\}$};
\node (UR) [right of=UM, node distance=4cm] {$\mathcal{C}_{\overline{K}}\{i = 0, j = g\}$};
\node (URR) [right of=UR, node distance = 3cm] {$0$};
\node (LL) [below of=UL, node distance = 2cm] {$\mathcal{C}_{\overline{K}}\{i = 0, j \leq g-1\}$};
\node (LLL) [left of=LL, node distance = 3cm] {$0$};
\node (LM) [right of=LL, node distance=4cm] {$\mathcal{C}_{\overline{K}}\{i \geq 0\}$};
\node (LR) [right of=LM, node distance=4cm] {$\mathcal{C}_{\overline{K}}\{\max(i,j-g+1) \geq 1\}$};
\node (LRR) [right of=LR, node distance = 3cm] {$0$};
\draw[->] (ULL) to (UL);
\draw[->] (UL) to (UM);
\draw[->] (UM) to (UR);
\draw[->] (UR) to (URR);
\draw[->] (LLL) to (LL);
\draw[->] (LL) to (LM);
\draw[->] (LM) to (LR);
\draw[->] (LR) to (LRR);
\draw[->] (UL) to (LL);
\draw[->] (UM) to (LM);
\draw[->] (UR) to (LR);
\end{tikzpicture}
\end{center}
Denote the left-hand complexes by $D$. We know that $\mathcal{C}_{\overline{K}}\{i = 0 \} = \CFhat(-M)$ and $\mathcal{C}_{\overline{K}}\{i \geq 0\} = \CFplus(-M)$. The two right-hand complexes can be identified via $U$ with $\mathcal{C}_{\overline{K}}\{i = -1, j = g-1\} = S$ and $\mathcal{C}_{\overline{K}}\{\max(i, j - g + 1) \geq 0\} = A^+$. The following commutative diagram comes from the homology long exact sequence associated to the above diagram.
\begin{center}
\begin{tikzpicture}[node distance=3cm,auto]
\node (UL) {$H_*(D)$};
\node (UM) [right of=UL, node distance=3cm] {$\HFhat(-M)$};
\node (UR) [right of=UM, node distance=3cm] {$H_*(S)$};
\node (URR) [right of=UR, node distance = 3cm] {$H_*(D)$};
\node (LL) [below of=UL, node distance = 2cm] {$H_*(D)$};
\node (LM) [right of=LL, node distance=3cm] {$\HFplus(-M)$};
\node (LR) [right of=LM, node distance=3cm] {$H_*(A^+)$};
\node (LRR) [right of=LR, node distance = 3cm] {$H_*(D)$};
\draw[->] (UL) to node{$\widehat\iota_D$} (UM);
\draw[->] (UM) to node{$\widehat\pi$} (UR);
\draw[->] (UR) to node{$\widehat\partial$} (URR);
\draw[->] (LL) to node{$\iota^+_D$} (LM);
\draw[->] (LM) to node{$\pi^+$} (LR);
\draw[->] (LR) to node{$\partial^+$} (LRR);
\draw[->] (UL) to node{$\cong$} (LL);
\draw[->] (UM) to node{$\widehat\iota$} (LM);
\draw[->] (UR) to node{$\iota_S$} (LR);
\draw[->] (URR) to node{$\cong$} (LRR);
\end{tikzpicture}
\end{center}
By Theorem~\ref{thm:fibred HFK}, $H_*(S)$ is generated by a single element $x$. The quotient complex $\mathcal{C}_{\overline{K}}\{i = 0, j = g\}$ is dual to the subcomplex $\mathcal{C}_K\{i = 0, j = -g\}$, and $\chat(\xi_{\overline{K}})$ is the image of $$H_*(\mathcal{C}_K\{i = 0, j = -g\}) \to \HFhat(M).$$ It follows that if $\chat(\xi_{\overline{K}}) \neq 0$, then any element $y \in \widehat\pi^{-1}(x) \subset \HFhat(-M)$ will pair non-trivially with $\chat(\xi_{\overline{K}})$. Thus, based on Theorem~\ref{thm:HF positive surgery formula} and the discussion in Section~\ref{subsec:HF and contact geometry}, we deduce the following facts:
\begin{itemize}
\item $\cplusred(\xi_{\overline{K}}) \neq 0$ if and only if $\widehat\partial(x) = 0$ and there is a $y \in \widehat\pi^{-1}(x)$ such that $\widehat\iota(y) = 0$;
\item $\cplus(\xi_{K_n}) = 0$ for sufficiently negative integers $n$ if and only if $\iota_S(x) = 0$.
\end{itemize}
First assume that $\cplusred(\xi_{\overline{K}}) \neq 0$. As above, we can find some non-zero $y \in \HFhat(-M)$ such that $\widehat\pi(y) = x$ and $\widehat\iota(y) = 0$. But then by commutativity, $\iota_S(x) = \pi^+\widehat\iota(y) = 0$, and so $\cplus(\xi_{K_n}) = 0$.
Now assume that $\cplus(\xi_{K_n}) = 0$. Thus, $\iota_S(x) = 0$. Since $\widehat\partial(x) = \partial^+\iota_S(x) = 0$, we know that $\chat(\xi_{\overline{K}}) \neq 0$. Then, pick a non-zero element $y_0 \in \widehat\pi^{-1}(x)$. If $\widehat\iota(y_0) = 0$, then we're done, so assume that $\widehat\iota(y_0) \neq 0$. Since $\pi^+(\widehat\iota(y_0)) = \iota_S(x) = 0$, we can find some element $d \in H_*(D)$ such that $\iota^+_D(d) = \widehat\iota(y_0)$. But now, $y_0 - \widehat\iota_D(d) \in \widehat\pi^{-1}(x)$, and $\widehat\iota(y_0 - \widehat\iota_D(d)) = \widehat\iota(y_0) - \iota^+_D(d) = 0$. Thus, $\cplusred(\xi_{\overline{K}}) \neq 0$.
\subsection*{Proof of (2)} This time, we will consider $\mathcal{C}_K$, and sufficiently negative integer surgeries on $K$. As in (1), we start by noting two short exact sequences (where the left map is inclusion), which commute with the obvious vertical inclusions:
\begin{center}
\begin{tikzpicture}[node distance=3cm,auto]
\node (UL) {$\mathcal{C}_K\{i = 0, j \geq -g+1\}$};
\node (ULL) [left of=UL, node distance = 3cm] {$0$};
\node (UM) [right of=UL, node distance=5cm] {$\mathcal{C}_K\{\max(i, j + g - 1) = 0\}$};
\node (UR) [right of=UM, node distance=5cm] {$\mathcal{C}_K\{i \geq 1, j = -g+1\}$};
\node (URR) [right of=UR, node distance = 3cm] {$0$};
\node (LL) [below of=UL, node distance = 2cm] {$\mathcal{C}_K\{i = 0, j \geq -g+1\}$};
\node (LLL) [left of=LL, node distance = 3cm] {$0$};
\node (LM) [right of=LL, node distance=5cm] {$\mathcal{C}_K\{\max(i, j + g - 1) \geq 0\}$};
\node (LR) [right of=LM, node distance=5cm] {$\mathcal{C}_K\{i \geq 1\}$};
\node (LRR) [right of=LR, node distance = 3cm] {$0$};
\draw[->] (ULL) to (UL);
\draw[->] (UL) to (UM);
\draw[->] (UM) to (UR);
\draw[->] (UR) to (URR);
\draw[->] (LLL) to (LL);
\draw[->] (LL) to (LM);
\draw[->] (LM) to (LR);
\draw[->] (LR) to (LRR);
\draw[->] (UL) to (LL);
\draw[->] (UM) to (LM);
\draw[->] (UR) to (LR);
\end{tikzpicture}
\end{center}
Denote the left-hand complexes by $D$. The two middle complexes are $\widehat{A}'$ and $A'^+$, and the top-right complex is $Q'$ (recall that $\mathcal{C}_K\{i > 1, j = -g+1\}$ is trivial). The two right-hand complexes can also be identified via $U$ with $\mathcal{C}_K\{i = 0, j = -g\}$ and $\CFplus(M)$. The following commutative diagram comes from the homology long exact sequence associated to the above diagram.
\begin{center}
\begin{tikzpicture}[node distance=3cm,auto]
\node (UL) {$H_*(D)$};
\node (UM) [right of=UL, node distance=3cm] {$H_*(\widehat{A}')$};
\node (UR) [right of=UM, node distance=3cm] {$H_*(Q')$};
\node (URR) [right of=UR, node distance = 3cm] {$H_*(D)$};
\node (LL) [below of=UL, node distance = 2cm] {$H_*(D)$};
\node (LM) [right of=LL, node distance=3cm] {$H_*(A'^+)$};
\node (LR) [right of=LM, node distance=3cm] {$\HFplus(M)$};
\node (LRR) [right of=LR, node distance = 3cm] {$H_*(D)$};
\draw[->] (UL) to node{$\widehat\iota_D$} (UM);
\draw[->] (UM) to node{$\widehat\pi$} (UR);
\draw[->] (UR) to node{$\widehat\partial$} (URR);
\draw[->] (LL) to node{$\iota^+_D$} (LM);
\draw[->] (LM) to node{$\pi^+$} (LR);
\draw[->] (LR) to node{$\partial^+$} (LRR);
\draw[->] (UL) to node{$\cong$} (LL);
\draw[->] (UM) to node{$\iota_A$} (LM);
\draw[->] (UR) to node{$\iota_Q$} (LR);
\draw[->] (URR) to node{$\cong$} (LRR);
\end{tikzpicture}
\end{center}
By Theorem~\ref{thm:fibred HFK}, $H_*(Q')$ is generated by a single element $x$. The complex $Q' = \mathcal{C}_{K_n}\{i = 0, j = {\rm top}\}$ is dual to $S = \mathcal{C}_{\overline{K}_{-n}}\{i = 0, j = {\rm bottom}\}$, and so if $\chat(\xi_{K_n}) \neq 0$, then any element $y \in \widehat\pi^{-1}(x) \subset H_*(\widehat{A}')$ will pair non-trivially with $\chat(\xi_{K_n})$. Also, since $Q'$ can be identified via $U$ with $\mathcal{C}_K\{i = 0, j = -g\}$, the map $\iota_Q$ can be identified with the map $$H_*(\mathcal{C}_K\{i = 0, j = -g\}) \to \HFhat(M) \to \HFplus(M).$$ Thus, based on Theorem~\ref{thm:HF negative surgery formula} and the discussion in Section~\ref{subsec:HF and contact geometry}, we deduce the following facts:
\begin{itemize}
\item $\cplusred(\xi_{K_{n}}) \neq 0$ for sufficiently negative integers $n$ if and only if $\widehat\partial(x) = 0$ and there is a $y \in \widehat\pi^{-1}(x)$ such that $\iota_A(y) = 0$;
\item $\cplus(\xi_{\overline{K}}) = 0$ if and only if $\iota_Q(x) = 0$.
\end{itemize}
The rest of the proof follows exactly as in the proof of (1) above.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{FDTC > 1 tight}]
If $K \subset M$ is the fibred knot giving the open book decomposition $(\Sigma, \phi)$, then the open book decomposition associated to $\overline{K}$ is $(\Sigma,\phi^{-1})$, and if ${\rm FDTC}(\phi) > 1$, then ${\rm FDTC}(\phi^{-1}) < -1$. By \cite{BEVHM}, the contact structure $\xi_{\overline{K}_{-1}}$ is supported by $(\Sigma, \phi^{-1} \circ \tau_{\bd})$, where $\tau_{\bd}$ is a positive Dehn twist around the binding of $\Sigma$. By \cite{KR}, ${\rm FDTC}(\phi^{-1} \circ \tau_{\bd}) = {\rm FDTC}(\phi^{-1}) + 1 < 0$, and thus $\xi_{\overline{K}_{-1}}$ is overtwisted, by \cite{HKM:right1}. Since $\cplus(\xi_{\overline{K}_{-1}}) = 0$, the corollary now follows from Theorem~\ref{rationalmaintheorem}.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{L-space knot}]
If $M_r(K)$ is an L-space for some $r > 0$, then $\left(-M\right)_{-r}(\overline{K})$ is also an L-space for the same $r$. This means that $\cplusred(\xi_{\overline{K}_{-r}})$ must vanish, and so we conclude that $\cplus(\xi_K) \neq 0$, by Theorem~\ref{rationalmaintheorem}.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{non-planar contact structures}]
Since $\cplus(\xi_{\overline{K}}) = 0$, Theorem~\ref{rationalmaintheorem} implies that $\cplusred(\xi_{K_r}) \neq 0$. Thus, $(M_r(K), \xi_{K_r})$ cannot be planar, by \cite{OSS:planar}.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{non-planar Legendrian knots}]
If $L$ is a Legendrian approximation of $K$, then some negative contact surgery on $L$ is equivalent to some admissible transverse surgery on $K$, by \cite{BE:transverse}. We know from \cite[Theorem~5.10]{Onaran} that if $L$ sits on the page of a planar open book for $\xi_K$, then the result of negative contact surgery on $L$ is also planar, which contradicts Corollary~\ref{non-planar contact structures}. Since after putting $-L$ on the page of an open book, we can reverse the orientation to get $L$, we see that $-L$ is also not planar.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cplusred preserved}]
By Theorem~\ref{rationalmaintheorem}, $\cplus(\xi_{\overline{K}_{-r}}) = 0$ for sufficiently large $r$, say for $r > N$. Then, for any $s > N$, we claim that $\cplus(\xi_{K_s}) \neq 0$. Indeed, if it were not the case, then by Theorem~\ref{rationalmaintheorem}, sufficiently negative admissible transverse surgery on $\overline{K}_{-s}$ would have non-vanishing $\cplus$ invariant. However, sufficiently negative admissible transverse surgery on $\overline{K}_{-s}$ gives us $\xi_{\overline{K}_{-s'}}$, for some $N < s' < s$, which we know to have vanishing $\cplus$ invariant.
\end{proof}
Let $K \subset (M, \xi_K)$ be an integrally null-homologous fibred knot, where $\cplus(\xi_K) \neq 0$. Let
$$R^+_{\rm{red}}(K) = \inf \big\{r \in \Q\,\big|\,\cplusred(\xi_{K_r}) \neq 0\big\},$$
and let
$$R^+(K) = \inf \big\{r \in \Q\,\big|\,\cplus(\xi_{K_r}) \neq 0\big\}.$$
If $\cplusred(\xi_K) = 0$, then we define $R^+_{\rm{red}}(K) = \infty$. Then, the same approach as in the proof of Corollary~\ref{cplusred preserved} gives the following.
\begin{cor}\label{Rplus(red)}
Let $K \subset M$ be an integrally null-homologous fibred knot, such that $\cplus(\xi_{\overline{K}}) \neq 0$. Then:
\begin{itemize}
\item $\cplus(\xi_{K_r}) = 0$ for all $-\infty \leq r \leq -R^+_{\rm{red}}(\overline{K})$.
\item $\cplus(\xi_{K_r}) \neq 0$ and $\cplusred(\xi_{K_r}) = 0$ for all $-R^+_{\rm{red}}(\overline{K}) < r \leq -R^+(\overline{K})$.
\item $\cplusred(\xi_{K-r}) \neq 0$ for all $-R^+(\overline{K}) < r < 0$.
\end{itemize}
\end{cor}
\section{Positive Surgeries}
\label{sec:pos surgeries}
We have thus far restricted our attention to negative surgeries since, in general, the existence of admissible transverse $r$-surgery on a fibred transverse knot $K \subset (M, \xi_K)$ is only guaranteed for $r$ less than the page slope. In fact, if $U$ is the unknot in $(S^3,\xi_U)$, then we know that we cannot do admissible transverse $r$-surgery for any $r \geq 0$. In general, however, if we can thicken the standard neighbourhood of $K$, we can define admissible transverse $r$-surgery for some $r \geq 0$.
Recall from Remark~\ref{multiple tori} that the result of admissible transverse surgery may depend on the chosen neighbourhood, and so is not in general well-defined. Thus far, we have chosen a neighbourhood of $K$ that was compatible with the open book decomposition, such that Theorem~\ref{thm:OBD surgery} applied. For positive surgeries, there is no such choice. In particular, if $K_r$ is the surgery dual knot to $K$ in $M_r(K)$, then the contact structure coming from admissible transverse $r$-surgery on $K \subset (M, \xi_K)$ bears no relation to $\xi_{K_r}$ (see Remark~\ref{admiss vs inadmiss}).
For the rest of this section, let $K \subset M$ be a null-homologous transverse knot. If $K$ has a Legendrian approximation $L_n$ with $tb(L_n) = n$, then we can find a standard neighbourhood of $K$ inside a standard neighbourhood of $L_n$ which allows admissible transverse $r$-surgery for any $r < n$.
We will focus on the case where $\xi_K$ is overtwisted, as we can say the most in that situation. Etnyre and van Horn-Morris proved in \cite{EVHM:fibered} that the complement of the fibred transverse knot $K \subset (M,\xi_K)$ is non-loose, {\it ie.\ }the restriction of $\xi_K$ to $M\backslash K$ is tight. Thus $K$ intersects every overtwisted disc in $(M,\xi_K)$. Following \cite{BO:non-loose}, we define the {\it depth} $d(K)$ of a transverse knot $K$ to be the minimum of $\big| K \cap D\big|$ over all overtwisted discs $D$. Since $K$ is non-loose, $d(K) \geq 1$. The depth of a transverse link, a Legendrian knot, and a Legendrian link can be similarly defined.
We first show that we can pass from transverse knots to Legendrian knots without increasing the depth.
\begin{lem}\label{lem:depth transverse to legendrian}
If $K$ is a transverse knot such that $d(K) = 1$, then there exists a Legendrian approximation $L$ of $K$ such that $d(L) = 1$.
\end{lem}
\begin{proof}
We consider a neighbourhood of $\{r = \pi/4, \theta = 0\} \subset (S^1 \times \R^2,\xi_{\rm{rot}} = \ker(\cos r\, dz + r\sin r\,d\theta))$ as a generic model for a neighbourhood of $K$ (where $z \sim z + 1$), and let the overtwisted disc $D = \{z = 0, r \leq \pi\}$. It would seem preferable to consider $\{r = 0\}$, but since that knot passes through the unique elliptic point of the characteristic foliation on $D$, it would not represent a generic model.
The contact planes away from $r = 0$ are spanned by $\frac{\bd}{\bd r}$ and the vector $X(r) = -r\tan r\,\frac{\bd}{\bd z} + \frac{\bd}{\bd \theta}$. We define a Legendrian approximation $L_{-n}$ of $K$ by
$$L_{-n}(t) = \begin{pmatrix}\alpha\, z(t)\\\pi/4 + \epsilon \cos(nt)\\ -\alpha f(nt)\end{pmatrix},$$
in the coordinates $(z,r,\theta)$, for some positive real number $\alpha$, where $$z'(t) = -\frac{n f'(nt)}{\tan\left(\pi/4 + \epsilon \cos(nt)\right)}, \, z(0) = 0.$$ We choose $f : \R \to \R$ to be a smooth function such that
$$f(t) = \begin{cases}\frac{\sin t}{m} & t \in \left(0, \frac{\pi}{2}-\delta\right)\cup\left(\frac{3\pi}{2} + \delta, 2\pi\right), \\ m\sin t & t \in \left(\frac{\pi}{2} + \delta, \frac{3\pi}{2} - \delta\right),\end{cases}$$
where $m$ is a positive integer and $\delta > 0$ is small. We define $f$ to be periodic with period $2\pi$, and we define $f$ on $\left(\frac{\pi}{2} - \delta, \frac{\pi}{2} + \delta\right)\cup\left(\frac{3\pi}{2} - \delta, \frac{3\pi}{2} + \delta\right)$ such that $f$ is smooth.
Then, given any $\epsilon > 0$, choose $n$ to be a sufficiently large positive integer such that $z(2\pi) > 1$ and $\max |f|/z(2\pi) < \arctan \left(4\epsilon/\pi\right)$; choose $m$ to be a sufficiently large positive integer such that there exists some value $z_0$ of $z$ such that $\big| L_{-n} \cap \{z = z_0 \}\big| = 1$. If we now let $\alpha$ be $1/z(2\pi)$, then $L_{-n}$ will be a closed Legendrian in an $\epsilon$-neighbourhood of $K$. Note that $tb(L_{-n}) = -n$, and $\big|L_{-n}\cap\{ z = z_0/z(2\pi), r \leq \pi\}\big| = 1$. Thus, $d(L_{-n}) \leq 1$.
To show that $d(L_{-n}) \neq 0$, note that a transverse push-off of $L_{-n}$ can be made arbitrarily close to $L_{-n}$. Thus, if there exists an overtwisted disc in $(M, \xi)$ that is disjoint from $L_{-n}$, it is also disjoint from a sufficiently small neighbourhood of $L_{-n}$. Then, the transverse push-off $K$ of $L_{-n}$ would also be disjoint from this overtwisted disc. However, $d(K) = 1$, and so this cannot happen.
\end{proof}
The next step is to show that if $d(L) = 1$, then $L$ can be negatively destabilised.
\begin{lem}\label{lem:destabilise along OT disc}
Let $K \subset (M,\xi)$ be a non-loose transverse knot, and let $L_n$ be a Legendrian approximation of $K$ with $tb(L_n) = n$. If $d(K) = d(L_n) = 1$, then $L_n$ can be negatively destabilised to $L_m$ (hence $L_m$ is also a Legendrian approximation of $K$) for any $m \geq n$, where $tb(L_m) = m$ and $d(L_m) = 1$.
\end{lem}
\begin{proof}
Let $T$ be the convex boundary of a regular neighbourhood of $L_n$. Let $\gamma$ be a curve on $T$ isotopic to a meridian of $L_n$. Using the Legendrian Realisation Principle (\cite[Theorem~3.7]{Honda:classification1}), we can isotope $T$ to another convex torus $T'$ such that $\gamma$ is Legendrian with $tb(\gamma) = -1$.
Given an overtwisted disc $D$ such that $\big|L_n \cap D\big| = 1$, isotope $D$ such that its intersection with $T'$ is exactly $\gamma$. Let $A \subset D$ be the annulus with boundary $\bd D \cup \gamma$. Since the twisting of the contact planes along the boundary components of $A$ is non-positive with respect to the framing given by $A$, we can perturb $A$ to be convex. Consider the dividing curves $\Gamma$ on $A$: since $tb(\gamma) = -1$ and $tb(\bd D) = 0$, $\Gamma$ intersects $\gamma$ twice and $\bd D$ zero times. Since $L_n$ is a non-loose Legendrian knot (as $K$ is a non-loose transverse knot), the dividing curves $\Gamma$ can have no contractible components, by Giroux's Criterion (\cite{Giroux:convex}); so $\Gamma$ is an arc parallel to $\gamma$, and gives a bypass.
If $T''$ is the convex torus we get from the bypass $A$ on $T'$, the dividing curves on $T''$ are meridional. The region in between $T'$ and $T''$ is a basic slice, and we can find convex tori with dividing curves of slope $m$ for any $n \leq m$. We define $L_m$ to be a Legendrian divide of a convex torus with dividing curves of slope $m$.
In order to determine whether $L_m$ is a positive or negative destabilisation of $L_n$, consider the convex torus $T'''$, the boundary of a regular neighbourhood of $L_{n-1}$, the negative stabilisation of $L_n$ (we pick a negative stabilisation since it is non-loose). If we glue the basic slice from $T'''$ to $T'$ to the basic slice from $T'$ to $T''$, we get a $T^2 \times I$ with a tight contact structure, and convex boundaries with dividing curves of slope $n-1$ and $\infty$, {\it ie.\ }another basic slice. By the classification in \cite{Honda:classification1} of tight contact structures on basic slices, the signs of the basic slices that we glue together must agree. Thus, since $L_{n-1}$ is a negative stabilisation of $L_n$, so too is $L_n$ an $(m-n)$-fold negative stabilisation of $L_m$.
To show that $d(L_m) \geq 1$, note (as in the proof of Lemma~\ref{lem:depth transverse to legendrian}) that $K$ is a transverse push-off of $L_m$, so $L_m$ must be non-loose. Then, since $\big|L_m \cap D\big| = 1$, $d(L_m) \leq 1$, and so we conclude that $d(L_m) = 1$.
\end{proof}
Finally, we use these destabilisations to define admissible transverse $r$-surgery on $K$, for $r \geq 0$.
\begin{proof}[Proof of Theorem~\ref{positive surgeries}]
Ito and Kawamuro showed in \cite[Corollary~3.4]{IK:coverings} that the binding of an open book $(\Sigma, \phi)$ is a transverse link of depth 1 if and only if the monodromy is not right-veering. With our hypotheses, we conclude that $d(K) = 1$.
Let $(\Sigma, \phi)$ be the open book defined by $K$. By Theorem~\ref{thm:OBD surgery}, an open book for admissible transverse $(-1/n)$-surgery on $K$ is given by $(\Sigma,\phi\circ\tau_{\bd}^n)$, where $\tau_{\bd}$ is a Dehn twist about the binding component of $\Sigma$. By Corollary~\ref{FDTC > 1 tight}, this supports a contact structure $\xi_{K_{-1/n}}$ that satisfies $\cplusred(\xi_{K_{-1/n}}) \neq 0$, for sufficiently large integers $n$.
Since $d(K) = 1$, Lemma~\ref{lem:destabilise along OT disc} guarantees the existence of Legendrian approximations $L_m$ of $K$ with $tb(L_m) = m$, for any $m$. We will use the neighbourhood of $K$ coming from a standard neighbourhood of $L_m$ to define transverse admissible $r$-surgery on $K$, for $r < m$. This neighbourhood also corresponds to a neighbourhood of $K_{-1/n} \subset (M_{-1/n}(K), \xi_{K_{-1/n}})$, and admissible transverse $r$-surgery on $K \subset (M, \xi_K)$ for $-1/n < r < m$ is equivalent to some admissible transverse surgery on $K_{-1/n} \subset (M_{-1/n}(K), \xi_{K_{-1/n}})$ using the corresponding neighbourhood. The theorem then follows from Lemma~\ref{lemma:admissible preserves non-vanishing}.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{where surgery is not tight}]
The first statement, for $\HFred(M) = 0$, follows from Theorems~\ref{maintheorem}~and~\ref{positive surgeries}.
For the second statement of the corollary, consider now a general $M$. The manifold $M_0(K)$ supports a tight contact structure in both its orientations (see for example \cite{HKM:haken}). If $\cplusred(\xi_K)$ and $\cplusred(\xi_{\overline{K}})$ both vanish, then $\cplus(\xi_{K_r})$ and $\cplus(\xi_{\overline{K}_r})$ do not vanish for any $r < 0$, by Theorem~\ref{rationalmaintheorem}. So assume $\cplusred(\xi_K) \neq 0$. By Lemma~\ref{lemma:admissible preserves non-vanishing}, $\cplus(\xi_{K_r}) \neq 0$ for all $r < 0$. By Corollary~\ref{Rplus(red)}, at least one of $\cplus(\xi_{K_r})$ and $\cplus(\xi_{\overline{K}_{-r}})$ is non-vanishing for any $r > 0$, unless $r = R^+(K) = R^+_{\rm red}(K)$.
For the last statement, note that at least one of $\phi_K$ and $\phi_{\overline{K}}$ is not right-veering (unless $M = \#\left(S^1 \times S^2\right)$ and $\phi_K = \rm{id}$, whereupon $\HFred(M) = 0$, and we're done, as above). Assume $\phi_{\overline{K}}$ is not right-veering, so $\xi_{\overline{K}}$ is overtwisted. Then $(M_r(K), \xi_{K_r})$ is tight for all $r < 0$, by Theorem~\ref{rationalmaintheorem}, and $-\left(M_r(K)\right) = (-M)_{-r}(\overline{K})$ supports a tight contact structure for all $r < 0$, by Theorem~\ref{positive surgeries}.
\end{proof}
\begin{figure}[htbp]
\begin{center}
\begin{overpic}[scale=1.3,tics=20]{"figureeightOTboundary"}
\put(100,140){\large $\alpha$}
\put(18,125){\large $\beta$}
\put(185,210){\large $\delta_1$}
\put(187,80){\large $\delta_2$}
\put(120,185){\large $\gamma$}
\end{overpic}
\caption{The open book $(\Sigma,\tau_\alpha^{-1}\tau_\beta\tau_{\delta_2})$ is a positive stabilisation of the open book given by the figure-eight knot $K\subset S^3$. The curve $\delta_1$ is a push-off of $K$, and $\gamma$ is the boundary of an overtwisted disc in $\xi_K$.}
\label{fig:figure eight OBD}
\end{center}
\end{figure}
If the open book $(\Sigma, \phi)$ is a negative stabilisation of another open book (whereupon it is not right-veering), Baker and Onaran explicitly identified an overtwisted disc that intersects $K$ once and has boundary on the page of a stabilisation of $(\Sigma, \phi)$ (see \cite[Theorem~5.2.3]{BO:non-loose}). Given a transverse knot $K$, a Legendrian approximation $L$ of $K$, and an overtwisted disc $D$ such that $\big| L \cap D \big| = 1$, a careful tracking of dividing curves will show that contact $(-1)$-surgery on $\bd D$ results in a contact manifold contactomorphic to the starting one, but where $L$ gets identified with its negative destabilisation. We can use this and Baker and Onaran's identification of $\bd D$ on a page to construct open books for admissible transverse $r$-surgery on $K$ for $r \geq 0$.
\begin{example}
Let $K \subset (S^3,\xi_K)$ be the figure-eight knot. After a positive stabilisation using a boundary-parallel arc, we use \cite[Theorem~5.2.3]{BO:non-loose} to identify $\gamma$ in Figure~\ref{fig:figure eight OBD} as the boundary of an overtwisted disc $D$ which $K$ intersects once.
In the construction of $S^3$ from the open book decomposition $(\Sigma,\tau_\alpha^{-1}\tau_\beta\tau_{\delta_2})$, the boundary component parallel to $\delta_1$ is identified with the figure-eight knot $K$, and the framing on $K$ given by $\Sigma$ is $-1$ (with respect to the Seifert framing). The open book $(\Sigma,\tau_\alpha^{-1}\tau_\beta\tau_\gamma^n\tau_{\delta_2})$, for some $n \in \Z$, also supports $(S^3,\xi_K)$, but now the framing given to $K$ by $\Sigma$ is $n-1$, since composing the monodromy with $\tau_{\gamma}^n$ is equivalent to contact $(-1)$-surgery on $n$ push-offs of $\gamma = \bd D$. Thus, the open book $(\Sigma,\tau_\alpha^{-1}\tau_\beta\tau_\gamma^n\tau_{\delta_1}\tau_{\delta_2})$ supports the tight contact manifold $(S^3_{n-2}(K),\xi_{n-2})$, where $\xi_{n-2}$ is the result of admissible transverse $(n-2)$-surgery on $K$.
\end{example}
\section{Further Questions}
\label{sec:questions}
In this section, we discuss how Theorem~\ref{rationalmaintheorem} might be brought to bear on some open questions.
\subsection{$\cplus$ versus $\chat$}
\label{subsec:cplus versus chat}
Given a contact manifold $(M, \xi)$, we know that $\cplus(\xi)$ vanishes if $\chat(\xi)$ does. The converse, however, remains open: does the vanishing of $\cplus(\xi)$ imply the vanishing of $\chat(\xi)$?
To explore this, we define two invariants of elements of $\HFhat(M)$, using $\CF^{0,i}(M) = \ker U^i \subset \CFplus(M)$, and its homology $\HF^{0,i}(M)$.
\begin{definition}
Given $x \in \HFhat(M)$, let the \textit{vanishing order} $\VO(x)$ of $x$ be the minimum $i$ such that the image of $x$ under $\HFhat(M) \to \HF^{0,i}(M)$ is trivial (and it equals $\infty$ if no such finite $i$ exists). Given $x \neq 0 \in \HFhat(M)$, let the \textit{dual vanishing order} $\DVO(x)$ of $x$ be the minimum of $\VO(y)$ over all $y \in \HFhat(-M)$ that pair non-trivially with $x$ (and it equals $\infty$ if no such $y$ exists).
\end{definition}
Note that $\chat(\xi) \neq 0$ if and only if $\VO(\chat(\xi)) > 0$; $\cplus(\xi) \neq 0$ if and only if $\VO(\chat(\xi)) = \infty$; and $\cplusred(\xi) \neq 0$ if and only if $\VO(\chat(\xi)) = \infty$ and $\DVO(\chat(\xi)) < \infty$.
Let $K \subset M$ be a fibred knot, fix an integer $m \geq 1$, and consider the following short exact sequences, where the bottom-right complex $C = \mathcal{C}_{\overline{K}}\{1 \leq \max(i,j-g+1) \leq m\}\cup\mathcal{C}_{\overline{K}}\{i = m, j = g+m\}$.
\begin{center}
\begin{tikzpicture}[node distance=3cm,auto]
\node (UL) {$\mathcal{C}_{\overline{K}}\{i = 0, j \leq g-1\}$};
\node (ULL) [left of=UL, node distance = 3cm] {$0$};
\node (UM) [right of=UL, node distance=4cm] {$\mathcal{C}_{\overline{K}}\{i = 0\}$};
\node (UR) [right of=UM, node distance=4cm] {$\mathcal{C}_{\overline{K}}\{i = 0, j = g\}$};
\node (URR) [right of=UR, node distance = 3cm] {$0$};
\node (LL) [below of=UL, node distance = 2cm] {$\mathcal{C}_{\overline{K}}\{i = 0, j \leq g-1\}$};
\node (LLL) [left of=LL, node distance = 3cm] {$0$};
\node (LM) [right of=LL, node distance=4cm] {$\mathcal{C}_{\overline{K}}\{0 \leq i \leq m\}$};
\node (LR) [right of=LM, node distance=4cm] {$C$};
\node (LRR) [right of=LR, node distance = 3cm] {$0$};
\draw[->] (ULL) to (UL);
\draw[->] (UL) to (UM);
\draw[->] (UM) to (UR);
\draw[->] (UR) to (URR);
\draw[->] (LLL) to (LL);
\draw[->] (LL) to (LM);
\draw[->] (LM) to (LR);
\draw[->] (LR) to (LRR);
\draw[->] (UL) to (LL);
\draw[->] (UM) to (LM);
\draw[->] (UR) to (LR);
\end{tikzpicture}
\end{center}
Then, the same proof as in Theorem~\ref{rationalmaintheorem}(1), together with the analogous version corresponding to Theorem~\ref{rationalmaintheorem}(2), gives the following.
\begin{thm}\label{vanishing order}
Let $K \subset M$ be a fibred knot. Then for all sufficiently negative integers $n$:
\begin{enumerate}
\item $\DVO(\chat(\xi_{\overline{K}})) - 1 \leq \VO(\chat(\xi_{K_n})) \leq \DVO(\chat(\xi_{\overline{K}}))$.
\item $\DVO(\chat(\xi_{K_n})) - 1 \leq \VO(\chat(\xi_{\overline{K}})) \leq \DVO(\chat(\xi_{K_n}))$.
\end{enumerate}
\end{thm}
\begin{remark}
The inequalities are there because $C$ is not precisely $\ker \left(U|_{A^+}\right)^i$ for any $i$. In fact, with the identification of $A^+$ with $\mathcal{C}_{\overline{K}}\{\max(i,j-g+1)\geq 1\}$, we have $\ker \left(U|_{A^+}\right)^m \subsetneq C \subsetneq \ker \left(U|_{A^+}\right)^{m+1}$.
\end{remark}
If we could find a fibred knot $K$ supporting $(M, \xi_K)$, such that $\cplusred(\xi_K) \neq 0$ and $\DVO(\chat(\xi_K)) > 1$, then Theorem~\ref{vanishing order} would imply that for all sufficiently negative integers $n$, $\cplus(\xi_{\overline{K}_n}) = 0$, but $\chat(\xi_{\overline{K}_n}) \neq 0$.
\begin{question}
Do there exist examples of contact manifolds $(M, \xi)$ with $\DVO(\chat(\xi)) > 1$? Is there any geometric meaning to $\DVO(\chat(\xi))$?
\end{question}
\subsection{Genus-1 Open Books}
\label{subsec:genus1}
As yet no obstruction has been found for a non-planar contact manifold $(M, \xi)$ to be supported by a genus-$1$ open book decomposition. In fact, it is currently unknown whether there exist any contact manifolds whose {\it support genus} ({\it ie.\ }minimal genus of a supporting open book decomposition, defined in \cite{EO:invariants}) is greater than 1. To explore this, we look at properties of genus-1 open books with one boundary component.
Let $K\subset M$ be a fibred knot of genus 1. A result of Honda, Kazez, and Matić (\cite{HKM:contactclass}), and independently Baldwin (\cite{Baldwin:genus1}), shows that $\xi_K$ is tight if and only if $\cplus(\xi_K)$ is non-vanishing. Honda, Kazez, and Matić prove that this is equivalent to the monodromy $\phi_K$ of the open book having {\it fractional Dehn twist coefficient} $\rm{FDTC}(\phi_K)$ greater than 0, or greater than or equal to 0 if $\phi_K$ is a periodic mapping class (see \cite{HKM:contactclass} for details on these terms).
\begin{thm}\label{thm:genus 1}
Let $K \subset M$ be a fibred knot of genus 1, and let $(M_{r}(K),\xi_{K_r})$ be the result of admissible transverse $r$-surgery on $K \subset (M,\xi_K)$, for $r < 0$. The following are equivalent:
\begin{enumerate}
\item $\cplusred(\xi_{\overline{K}}) \neq 0$ and $\DVO(\chat(\xi_{\overline{K}})) = 1$.
\item $\cplus(\xi_{K_r}) = 0$ for some $r < 0$.
\item $\chat(\xi_{K_r}) = 0$ for some $r < 0$.
\item $\cplus(\xi_{K_{-1}}) = 0$.
\item $\chat(\xi_{K_{-1}}) = 0$.
\item $\xi_{K_{-1}}$ is overtwisted.
\item \begin{enumerate}\item $\rm{FDTC}(\phi_{\overline{K}}) > 1$, if $\phi_{\overline{K}}$ is periodic. \item $\rm{FDTC}(\phi_{\overline{K}}) \geq 1$, if $\phi_{\overline{K}}$ is not periodic. \end{enumerate}
\end{enumerate}
\end{thm}
\begin{proof}
$(1) \implies (2)$: This is Theorem~\ref{rationalmaintheorem}.
$(4) \iff (5) \iff (6)$: These follow from the discussion preceding the theorem.
$(5) \implies (3) \implies (2)$: These are trivial.
$(2) \implies (4)$: We show the contrapositive. We claim that inadmissible transverse $\frac{n}{n-1}$-surgery on $K_{-1}$, for $n > 1$ an integer, results in the same contact manifold as admissible transverse $(-n)$-surgery on $K$, namely $(M_{-n}(K),\xi_{K_{-n}})$ ({\it cf.\ }\cite[Section~3.3]{Conway}). Indeed, let $\lambda$ be the $0$-framing on a neighbourhood of $K$, and let $\mu$ be a meridional curve for $K$. Then the page of the open book for $K_{-1}$ also traces the curve $\lambda$ on the boundary of a neighbourhood of $K_{-1}$, but now the meridional curve is $\mu - \lambda$. Inadmissible transverse $\frac{n}{n-1}$-surgery on $K_{-1}$, measured in terms of $\lambda$ and $\mu$, is $$n\cdot (\mu - \lambda) + (n-1)\cdot \lambda = n\cdot \mu - \lambda,$$ which is $(-n)$-surgery on $K$. That the result of inadmissible transverse surgery on $K_{-1}$ can be considered as a single transverse surgery on $K$ follows from the definition of transverse surgery.
By \cite[Theorem~1.6]{Conway}, if $\cplus(\xi_{K_{-1}}) \neq 0$, then inadmissible transverse $r$-surgery on $K_{-1}$ preserves this non-vanishing for all $r > 1$. In particular, $\cplus(\xi_{K_{-n}}) \neq 0$ for all integers $n > 0$. By Lemma~\ref{lemma:admissible preserves non-vanishing}, this implies that $\cplus(\xi_{K_r}) \neq 0$ for all $r < 0$.
$(3) \implies (1)$: The only part of this that does not follow from Theorem~\ref{rationalmaintheorem} is the dual vanishing order of $\chat(\xi_{\overline{K}})$. However, if $\chat(\xi_{K_r}) = 0$ for some $r < 0$, then Lemma~\ref{lemma:admissible preserves non-vanishing} implies that $\chat(\xi_{K_n}) = 0$ for all sufficiently negative integers $n$, \textit{ie.\@} $\VO(\chat(\xi_{K_n})) = 0$. Thus, by Theorem~\ref{vanishing order}, $\DVO(\chat(\xi_{\overline{K}})) \leq 1$. However, since $\cplusred(\xi_{\overline{K}}) \neq 0$, we know that $\DVO(\chat(\xi_{\overline{K}})) \geq 1$, and so therefore it is equal to $1$.
$(4) \iff (7)$: Note that an open book decomposition for $(M_{-1}(K),\xi_{K_{-1}})$ is given by $(\Sigma_{1,1},\phi_K\circ\tau_{\bd})$, where $\Sigma_{1,1}$ is a genus-$1$ surface with one boundary component, and $\tau_{\bd}$ is a positive Dehn twist about a curve parallel to $\bd\Sigma_{1,1}$. Furthermore, $\rm{FDTC}(\phi_K \circ \tau_{\bd}) = \rm{FDTC}(\phi_K) + 1$ and $\rm{FDTC}(\phi_K) = -\rm{FDTC}(\phi_{\overline{K}})$, according to \cite{KR}. Thus, the equivalence follows from the discussion preceding this theorem.
\end{proof}
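The change-of-basis arithmetic used in the proof of $(2) \implies (4)$ above can be checked mechanically. The sketch below (an illustrative helper of our own, not part of the source) rewrites a surgery curve $p\mu' + q\lambda'$ on $K_{-1}$, whose meridian is $\mu' = \mu - \lambda$ and whose longitude is $\lambda' = \lambda$, in the $(\mu, \lambda)$ basis of $K$, confirming that $\frac{n}{n-1}$-surgery on $K_{-1}$ is $(-n)$-surgery on $K$:

```python
from fractions import Fraction

def slope_in_original_basis(p, q):
    """Express the surgery curve p*mu' + q*lambda' on K_{-1} in the
    (mu, lambda) basis of K, where mu' = mu - lambda and lambda' = lambda.
    Returns the surgery coefficient (mu-coefficient over lambda-coefficient)."""
    mu_coeff = p            # p*(mu - lambda) + q*lambda contributes p to mu
    lam_coeff = q - p       # ... and (q - p) to lambda
    return Fraction(mu_coeff, lam_coeff)

# Inadmissible transverse n/(n-1)-surgery on K_{-1} is (-n)-surgery on K:
for n in range(2, 6):
    assert slope_in_original_basis(n, n - 1) == Fraction(-n)
```

This reproduces exactly the displayed computation $n\cdot(\mu - \lambda) + (n-1)\cdot\lambda = n\cdot\mu - \lambda$.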
Consider now a contact manifold $(M, \xi)$, supported by a genus-$1$ open book $(\Sigma, \phi)$, where $\cplusred(\xi) \neq 0$. We can cap off $(\Sigma, \phi)$ to an open book $(\Sigma_{1,1},\phi')$ with a single binding component that supports $(M',\xi')$, where $\cplusred(\xi') \neq 0$, $\DVO(\chat(\xi')) = 1$, and $\rm{FDTC}(\phi') > 1$, by Theorem~\ref{thm:genus 1}.
\begin{question}\label{genus1question}
By \cite[Theorem~1.2]{Baldwin:cappingoff}, we know that $\DVO(\chat(\xi)) \geq \DVO(\chat(\xi'))$. Under what conditions can we say that $\DVO(\chat(\xi')) = 1$ implies $\DVO(\chat(\xi)) = 1$? Under what conditions does $\rm{FDTC}(\phi') > 1$ imply that $\rm{FDTC}(\phi, B) > 1$ at every boundary component $B$ of $\Sigma$?
\end{question}
Even partial answers to Question~\ref{genus1question} could lead to obstructions to $\xi$ being supported by a genus-1 open book.
\section{Introduction}
\vspace{-2mm}
Over the past few decades, two of the most popular object recognition tasks, object detection and semantic segmentation, have received a lot of attention. The goal of object detection is to accurately predict the semantic category and the bounding box location for each object instance, which provides only a coarse localization. Different from object detection, the semantic segmentation task aims to assign pixel-wise labels to each image but provides no indication of the object instances, such as the object instance number and the precise semantic region of any particular instance. In this work, we follow some of the recent works~\cite{hariharan2014simultaneous}~\cite{liu2015multi}~\cite{zhang2015monocular} and attempt to solve a more challenging task, instance-level object segmentation, which predicts the segmentation mask for each instance of each category. We suggest that the next generation of object recognition should provide a richer and more detailed parsing for each image by labeling each object instance with an accurate pixel-wise segmentation mask. This is particularly important for real-world applications such as image captioning, image retrieval, 3-D navigation and driver assistance, where describing a scene with detailed individual instance regions is potentially more informative than describing it roughly with located object detections. However, instance-level object segmentation is very challenging due to high occlusion, diverse shape deformation and appearance patterns, obscured boundaries with respect to other instances, and background clutter in real-world scenes. In addition, the exact instance number of each category varies dramatically from image to image.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.13]{figure/motivation.pdf}
\vspace{-2mm}
\caption{{Exemplar instance-level object segmentation results. For each image, the category-level segmentation results, predicted instance locations for all foreground pixels and instance-level segmentation results are sequentially shown in each row. Different colors indicate different object instances of each category. To better show the predicted instance locations, we plot vectors pointing from each pixel to its corresponding predicted instance center, as shown by the arrows. Note that pixels predicting similar object centers can be directly collected into one instance region. Best viewed in color and scaled up three times.}}
\vspace{-8mm}
\label{fig:motivation}
\end{center}
\end{figure*}
Recently, tremendous advances in semantic segmentation~\cite{BoxSup}~\cite{lin2015efficient}~\cite{long2014fully}~\cite{CRF-RNN} and object detection~\cite{fastercnn}~\cite{redmon2015you}~\cite{stewart2015end} have been made relying on deep convolutional neural networks (DCNN)~\cite{szegedy2014going}~\cite{vgg}. Some previous works have been proposed to address instance-level object segmentation. Unfortunately, none of them has achieved excellent performance in an end-to-end way. In general, these previous methods require complicated pre-processing, such as bottom-up region proposal generation~\cite{MCG}~\cite{uijlings2013selective}~\cite{zitnick2014edge}~\cite{pinheiro2015learning}, or post-processing, such as graphical inference. Specifically, two recent approaches, SDS~\cite{hariharan2014simultaneous} and the one proposed by Chen et al.~\cite{liu2015multi}, use region proposal methods to first generate potential region proposals and then classify these regions. After classification, post-processing such as non-maximum suppression (NMS) or Graph-cut inference is used to refine the regions, eliminate duplicates and rescore these regions. Note that most region proposal techniques~\cite{MCG}~\cite{uijlings2013selective}~\cite{pinheiro2015learning} typically generate thousands of potential regions and take more than one second per image. Additionally, these proposal-based approaches often fail in the presence of strong occlusions. When only small regions are observed and evaluated without awareness of the global context, even a highly accurate classifier can produce many false alarms. Because they depend on region proposal techniques, these common pipelines are often trained in several independent stages. These separate pipelines rely on independent techniques at each stage, and the targets of the stages are significantly different.
For example, the region proposal methods try to maximize region recall while the classification stage optimizes for single-class accuracy.
In this paper, we propose a simple yet effective Proposal-Free Network (PFN) for solving the instance-level segmentation task in an end-to-end way. The motivation of the proposed network is illustrated in Figure~\ref{fig:motivation}. The pixels predicting the same instance locations can be directly clustered into the same object instance region. Moreover, the object boundaries of the occluded objects can be inferred by the difference in the predicted instance locations. For simplicity, we use the term \emph{instance locations} to denote the coordinates of the instance bounding box each pixel belongs to. Inspired by the observation that humans glance at an image and instantly know what and where the objects are in the image, we reformulate the instance-level segmentation task in the proposed network by directly inferring the regions of object instances from the global image context, in which the traditional region proposal generation step is totally disregarded. The proposed PFN framework is shown in Figure~\ref{fig:network}. To solve the semantic instance-level object segmentation task, three sub-tasks are addressed: category-level segmentation, instance location prediction for each pixel, and instance number prediction for each category in the entire image.
First, the convolutional network is fine tuned based on the pre-trained VGG classification net~\cite{vgg} to predict the category-level segmentation. In this way, the domain-specific feature representation on semantic segmentation for each pixel can be learned.
Second, by fine-tuning on the category-level segmentation network, the instance locations for each pixel as well as the instance number of each category are simultaneously predicted by the further updated instance-level segmentation network. In terms of instance locations for each pixel, six location maps, encoding the coordinates of the center, the top-left corner and the bottom-right corner of the bounding box of each instance, are predicted. The predicted coordinates can be complementary to each other and make the algorithm more robust for handling close or occluded instances. To obtain more precise instance location prediction for each pixel, multi-scale prediction streams with individual supervision (i.e. multi-loss) are appended to jointly encode local details from the early, fine layers and the global semantic information from the deep, coarse layers. The feature maps from deep layers often focus on the global structure, but are insensitive to local boundaries and spatial displacement. In contrast, the feature maps from early layers are better at sensing local detailed boundaries. The fusion layer combining multi-scale predictions is utilized before the final prediction layer.
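Within each prediction stream, the pixel coordinates themselves are appended as two extra feature channels (the $130 = 128 + 2$ channel expansion noted in Figure~\ref{fig:framework}). The sketch below is a minimal, pure-Python illustration of that construction; the helper names are ours, and the normalisation of coordinates to $[0,1]$ is an assumption, since the exact coordinate encoding is not specified here.

```python
def coordinate_channels(height, width):
    """Build two H x W maps holding each pixel's x (column) and y (row)
    position, normalised to [0, 1].  (Normalisation is an assumption.)"""
    xs = [[c / (width - 1) for c in range(width)] for _ in range(height)]
    ys = [[r / (height - 1) for _ in range(width)] for r in range(height)]
    return xs, ys

def append_coordinates(channels, height, width):
    """Concatenate the coordinate maps onto a list of H x W feature maps,
    mirroring the 128 -> 130 channel expansion inside a prediction stream."""
    xs, ys = coordinate_channels(height, width)
    return channels + [xs, ys]

# 128 dummy feature channels on a 3 x 4 grid:
feats = [[[0.0] * 4 for _ in range(3)] for _ in range(128)]
out = append_coordinates(feats, 3, 4)
assert len(out) == 130                         # 128 features + x + y
assert out[128][0] == [0.0, 1/3, 2/3, 1.0]     # x channel, first row
assert out[129][2][0] == 1.0                   # y channel, last row
```

Making the absolute position explicit in this way is what allows a convolutional regressor to output instance bounding-box coordinates rather than only translation-invariant features.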
\begin{figure*}
\begin{center}
\includegraphics[scale=0.55]{figure/prediction.pdf}
\vspace{-4mm}
\caption{{The proposal-free network overview. Our network predicts the instance numbers of all categories and the pixel-level information that includes the category-level confidences for each pixel and the coordinates of the instance bounding box each pixel belongs to. The instance location prediction for each pixel involves the coordinates of the center, top-left corner and bottom-right corner of the object instance that a specific pixel belongs to. Any off-the-shelf clustering method can be utilized to generate the ultimate instance-level segmentation results.}}
\label{fig:network}
\vspace{-8mm}
\end{center}
\end{figure*}
Third, the instance numbers of all categories are described with a real number vector and also regressed with Euclidean loss in the instance-level segmentation network. Note that the instance number vector embraces the category-level information (whether the instance of a specific category appears or not) and instance-level information (how many object instances appear for a specific category). Thus, the intermediate feature maps from the category-level segmentation network and the instance-level feature maps after the fusion layer from the instance-level segmentation network are concatenated together, which can be utilized to jointly predict the instance numbers.
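As a minimal illustration of this regression target, the sketch below computes the Euclidean loss between a predicted and a ground-truth instance-number vector. The category indices are hypothetical, and the $\frac{1}{2}$ factor follows a common convention for Euclidean loss layers rather than anything stated here.

```python
def euclidean_loss(pred, target):
    """Euclidean (L2) regression loss between the predicted and ground-truth
    instance-number vectors (one real-valued entry per category)."""
    assert len(pred) == len(target)
    return 0.5 * sum((p - t) ** 2 for p, t in zip(pred, target))

# e.g. a 20-category setting; suppose 2 instances of category 14 and 1 of
# category 11 are present (indices are hypothetical):
target = [0.0] * 20
target[14], target[11] = 2.0, 1.0
pred = [0.0] * 20
pred[14], pred[11] = 1.6, 1.2
assert abs(euclidean_loss(pred, target) - 0.5 * (0.4**2 + 0.2**2)) < 1e-12
```

At test time the real-valued predictions would be rounded to obtain the integer cluster counts used in the post-processing step.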
In the testing stage, the instance numbers and pixel-level information, including category-level confidences and the coordinates of the instance bounding box each pixel belongs to, can together help generate the ultimate instance-level segmentation after clustering. Note that any off-the-shelf clustering method can be used for this simple post-processing, and the predicted instance number specifies the exact cluster number for the corresponding category.
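The clustering step can be sketched as follows. This is a minimal k-means written out in pure Python only for self-containedness; any off-the-shelf clustering method would serve equally well, with the cluster count $k$ for each category fixed by the predicted instance number (for brevity the sketch clusters only the predicted center coordinates, not all six location maps).

```python
def cluster_pixels(pred_centers, k, iters=20):
    """Group the foreground pixels of one category into instances by
    clustering their predicted instance-center coordinates with k-means."""
    # Deterministic init: the first k distinct predicted centers.
    centers = []
    for p in pred_centers:
        if p not in centers:
            centers.append(p)
        if len(centers) == k:
            break

    def nearest(p):
        return min(range(k), key=lambda i: (p[0] - centers[i][0]) ** 2
                                         + (p[1] - centers[i][1]) ** 2)

    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in pred_centers:
            groups[nearest(p)].append(p)
        centers = [(sum(x for x, _ in g) / len(g),
                    sum(y for _, y in g) / len(g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return [nearest(p) for p in pred_centers]

# Toy example: 9 foreground pixels whose predicted centers fall on two
# distinct instance locations; k = 2 comes from the instance-number output.
pred = [(10.0, 10.0)] * 5 + [(40.0, 12.0)] * 4
labels = cluster_pixels(pred, k=2)
assert labels == [0] * 5 + [1] * 4
```

Pixels assigned to the same cluster form one instance mask, which is the intuition behind Figure~\ref{fig:motivation}: occluded instances separate because their pixels predict different centers.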
Comprehensive evaluations and comparisons on the PASCAL VOC 2012 segmentation benchmark well demonstrate that the proposed proposal-free network yields results that significantly surpass all previously published methods. It boosts the current state-of-the-art performance from 46.3\%~\cite{liu2015multi} to 58.7\%. It should be noted that all previous works utilize extra region proposal extraction algorithms to generate region candidates and then feed these candidates into a classification network and complex post-processing steps. Instead, our PFN generates the instance-level segmentation results in a much simpler and more straightforward way.
\vspace{-3mm}
\section{Related Work}
Deep convolutional neural networks (DCNN) have achieved great success in object classification~\cite{szegedy2014going}~\cite{alexnet}~\cite{vgg}~\cite{wei2014cnn}, object detection~\cite{fastercnn}~\cite{redmon2015you}~\cite{babylearning}~\cite{ren2015object} and object segmentation~\cite{long2014fully}~\cite{wcrf}~\cite{BoxSup}~\cite{hariharan2014simultaneous}~\cite{noh2015learning}. In this section, we discuss the most relevant work on object detection, semantic segmentation and instance-level object segmentation.
\textbf{Object Detection.} Object detection aims to localize and recognize every object instance with a bounding box. The detection pipelines~\cite{girshick2014rich}~\cite{fastercnn}~\cite{redmon2015you}~\cite{gidaris2015object} generally start from extracting a set of box proposals from input images and then identify the objects using classifiers or localizers. The box proposals are extracted either by hand-crafted pipelines such as selective search~\cite{uijlings2013selective} and EdgeBox~\cite{zitnick2014edge}, or by designed convolutional neural networks such as deep MultiBox~\cite{erhan2014scalable} or the region proposal network~\cite{fastercnn}. For instance, the region proposal network~\cite{fastercnn} simultaneously predicts object bounds and objectness scores to generate a batch of proposals and then uses Fast R-CNN~\cite{girshick2015fast} for detection. Different from these prior works, Redmon et al.~\cite{redmon2015you} first proposed the You Only Look Once (YOLO) pipeline that predicts bounding boxes and class probabilities directly from full images in one evaluation. Our work shares some similarities with YOLO, in that the region proposal generation is discarded. However, our PFN is based on the intuition that the pixels inferring similar instance locations can be directly collected as a single instance region. The pixel-wise instance locations and the instance number of each category are simultaneously optimized in one network. Finally, the fine-grained segmentation mask of each instance can be produced with our PFN, instead of the coarse outputs depicted by the bounding boxes from YOLO.
\textbf{Semantic Segmentation.} The most recent progress in object segmentation~\cite{long2014fully}~\cite{wcrf}~\cite{BoxSup} was achieved by fine-tuning the pre-trained classification network with the ground-truth category-level masks. For instance, Long et al.~\cite{long2014fully} proposed a fully convolutional network for pixel-wise labeling.
Papandreou et al.~\cite{wcrf} utilized foreground/background segmentation methods to generate segmentation masks, and conditional random field inference is used to refine the segmentation results. Zheng et al.~\cite{CRF-RNN} formulated conditional random fields as recurrent neural networks for dense semantic prediction. Different from the category-level prediction by these previous methods, our PFN targets predicting instance-level object segmentation, which provides more powerful and informative predictions to enable real-world vision applications. Note that these previous pipelines using the pixel-wise cross-entropy loss for semantic segmentation cannot be directly utilized for instance-level segmentation, because the instance number of each category varies significantly across images, and the output size of the prediction maps cannot be constrained to a pre-determined number.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.6]{figure/framework.pdf}
\vspace{-4mm}
\caption{{The detailed network architecture and parameter setting of PFN. First, the category-level segmentation network is fine-tuned based on the pre-trained VGG-16 classification network. The cross-entropy loss is used for optimization. Second, the instance-level segmentation network that simultaneously predicts the instance numbers of all categories and the instance location vector for each pixel is further fine-tuned. The multi-scale prediction streams (with different resolutions and receptive fields) are appended to the intermediate convolutional layers, and are then fused to generate final instance location predictions. During each stream, we incorporate the corresponding coordinates (i.e. x and y dimension) of each pixel as the feature maps in the second convolutional layer with 130 = 128 + 2 channels. The regression loss is used during training. To predict instance numbers, the convolutional feature maps and the instance location maps are concatenated together for inference, and the Euclidean loss is used. The two losses from two targets are jointly optimized for the whole network training.}}
\label{fig:framework}
\vspace{-8mm}
\end{center}
\end{figure*}
\textbf{Instance-level Object Segmentation.} Recently several approaches which tackle the instance-level object segmentation~\cite{hariharan2014simultaneous}~\cite{liu2015multi}~\cite{silberman2014instance}~\cite{zhang2015monocular}~\cite{tighe2014scene} have emerged. Most of the prior works utilize the region proposal methods as the requisite. For example, Hariharan et al.~\cite{hariharan2014simultaneous} classified region proposals using features extracted from both the bounding box and the region foreground with a jointly trained CNN. Similar to~\cite{hariharan2014simultaneous}, Chen et al.~\cite{liu2015multi} proposed to use the category-specific reasoning and shape prediction through exemplars to further refine the results after classifying the proposals.~\cite{silberman2014instance} designed a higher-order loss function to make an optimal cut in the hierarchical segmentation tree based on the region features. Other works have resorted to the object detection task to initialize the instance segmentation and the complex post-processing such as integer quadratic program~\cite{tighe2014scene} and probabilistic model~\cite{yang2012layered} to further determine the instance segments.
These prior works based on region proposals are complicated by several pre-processing and post-processing steps. In addition, combining independent steps is suboptimal because local and global context information cannot be incorporated jointly during inference. In contrast, our PFN directly predicts pixel-wise instance location maps and uses a simple clustering technique to generate instance-level segmentation results. In particular, Zhang et al.~\cite{zhang2015monocular} predicted depth-ordered instance labels of each image patch and then combined the predictions into a final labeling via Markov Random Field inference. However, the number of instances present in each image patch is limited to fewer than 6 (including background), which makes the method not scalable to real-world images with an arbitrary number of object instances. Instead, our network predicts the instance number in a fully data-driven way, which is naturally scalable and easily extended to other instance-level recognition tasks.
\vspace{-3mm}
\section{Proposal-Free Network}
Figure~\ref{fig:framework} shows the detailed network architecture of PFN. The category-level segmentation, the instance locations of each pixel, and the instance numbers of all categories are the three targets of PFN training.
\vspace{-2mm}
\subsection{Category-level Segmentation Prediction}
The proposed PFN is fine-tuned based on the publicly available pre-trained VGG 16-layer classification network~\cite{vgg} for the dense category-level segmentation task. We utilize the ``DeepLab-CRF-LargeFOV" network structure presented in~\cite{DeepLabCRF} as the basis due to its leading accuracy and competitive efficiency. The important convolutional filters are shown in the top row of Figure~\ref{fig:framework}, and the other intermediate convolutional layers can be found in the published model file~\cite{DeepLabCRF}. The receptive field of the ``DeepLab-CRF-LargeFOV" architecture is $224\times 224$ with zero-padding, which enables effective prediction of the subsequent instance locations that require the global image context for reasoning. For category-level segmentation, the 1000-way ImageNet classifier in the last layer of VGG-16 is replaced with $C+1$ confidence maps, where $C$ is the number of categories. The loss function is the sum of the pixel-wise cross-entropy over the confidence maps (down-sampled by a factor of 8 compared to the original image). During testing, fully-connected conditional random fields~\cite{DeepLabCRF}
are employed to generate smoother and more accurate segmentation maps.
This fine-tuned category-level network generates the semantic segmentation masks used by the subsequent instance-level segmentation for each input image. The instance-level network is then fine-tuned based on the category-level network, with the $C+1$ category-level predictions eliminated. Note that we use two separate stages for optimizing category-level segmentation and instance-level segmentation. The intuition is that category-level segmentation prefers predictions that are insensitive to different object instances of a specific category, while instance-level segmentation aims to distinguish between individual instances. Since the motivations of the two targets are significantly different, the convolutional feature maps, especially in the later convolutional layers, cannot be shared. We verify the superiority of sequentially fine-tuning two separate networks for the two tasks in the experiments. In addition, the performance on instance-level segmentation is much better when fine-tuning the instance-level network from the category-level segmentation network rather than from the original VGG-16. This may be because the category-level network provides a better starting point for parameter learning, as the basic segmentation-aware convolutional filters have already been well learned.
\vspace{-3mm}
\subsection{Instance-level Segmentation Prediction}
The instance-level segmentation network takes an image of arbitrary size as input and outputs the corresponding instance locations for all pixels and the instance number of each category.
\textbf{Pixel-wise Instance Location Prediction.} For each image, the instance location vector of a pixel is defined as the bounding box information of the object instance that contains the pixel. An object instance $s$ of a specific category can be identified by its center $(c^x, c^y)$ and the top-left corner $(l^x, l^y)$ and bottom-right corner $(r^x, r^y)$ of its surrounding bounding box, as illustrated in Figure~\ref{fig:network}. Note that using the two corners in addition to the center introduces redundant information. Incorporating this redundancy can be viewed as a form of model combination, which increases the robustness of the algorithm to noise and inaccurate predictions. For each pixel $i$ belonging to the object instance $s$, the ground-truth instance location vector is denoted as $t_{i}^* = (c^x_s/w_s, c^y_s/h_s, l^x_s/w_s, l^y_s/h_s, r^x_s/w_s, r^y_s/h_s)$, where $w_s$ and $h_s$ are the width and height of the object instance $s$, respectively. With these definitions, we minimize an objective function to optimize the instance location, inspired by the one used for Fast R-CNN~\cite{girshick2015fast}. Let $t_i$ denote the predicted location vector and $t_i^*$ the ground-truth location vector for each pixel $i$. The loss function $\ell^o$ is defined as
\vspace{-2mm}
\begin{equation}
\ell^o(t_i, t_i^*) = [k_i^* \geq 1]R(t_i - t_i^*),
\end{equation}
\noindent{where} $k_i^* \in \{0,1,2,\dots, C\}$ is the semantic label of pixel $i$, and $C$ is the number of categories. $R$ is the robust loss function (smooth-$L_1$) of~\cite{girshick2015fast}. The indicator $[k_i^* \geq 1]$ means that the regression loss is activated only for foreground pixels and disabled for background pixels. The reason for using this filtered loss is that predicting instance locations is only meaningful for foreground pixels, which definitely belong to a specific instance.
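As a concrete sketch of how this masked regression loss behaves, the following minimal NumPy implementation (an illustrative reconstruction, not the authors' code; the function names are our own) evaluates $\ell^o$ summed over a batch of pixels:

```python
import numpy as np

def smooth_l1(x):
    """Element-wise smooth-L1 (Huber) penalty R used in Fast R-CNN."""
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x ** 2, ax - 0.5)

def pixel_location_loss(t, t_star, k_star):
    """Regression loss summed over pixels, active only where k* >= 1.

    t, t_star: (N, 6) predicted / ground-truth location vectors per pixel.
    k_star:    (N,) semantic labels, 0 = background.
    """
    fg = (k_star >= 1).astype(t.dtype)            # indicator [k_i^* >= 1]
    per_pixel = smooth_l1(t - t_star).sum(axis=1)  # R(t_i - t_i^*) per pixel
    return float((fg * per_pixel).sum())
```

Background pixels contribute exactly zero, so the gradient flows only through foreground predictions.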
\begin{figure*}
\begin{center}
\includegraphics[scale=0.55]{figure/testing.pdf}
\vspace{-2mm}
\caption{{Exemplar segmentation results obtained by refining the category-level segmentation with the predicted instance numbers. For each image, the classification results inferred from the category-level segmentation and the predicted instance numbers are shown on the left. In the first row, the refining strategy converts the inconsistent predicted labels into background. In the second row, the refining strategy converts the wrongly predicted labels in the category-level segmentation into the ones predicted in the instance number vector. Different colors indicate different object instances. Best viewed in the zoomed-in color PDF file.}}
\label{fig:testing}
\vspace{-8mm}
\end{center}
\end{figure*}
Following the recent results of~\cite{DeepLabCRF}~\cite{long2014fully}, we also utilize multi-scale prediction to increase the instance location prediction accuracy. As illustrated in Figure~\ref{fig:framework}, five multi-scale prediction streams are attached to the input image, the outputs of each of the first three max pooling layers, and the last convolutional layer (fc7) of the category-level segmentation network. For each stream, two convolutional layers (first layer: 128 filters, second layer: 130 filters) and deep supervision (i.e. an individual loss for each stream) are utilized. The spatial padding of each convolutional layer is set so that the spatial resolution of the feature maps is preserved. The multi-scale predictions from the five streams are accordingly down-sampled and then concatenated to generate the fused feature maps (the fusing layer in Figure~\ref{fig:framework}). Then $1\times 1$ convolutional filters are used to generate the final pixel-wise predictions. Note that the multi-scale predictions have different spatial resolutions and are inferred under different receptive fields. In this way, the fine local details (e.g. boundaries and local consistency) captured by early layers with higher resolution and the high-level semantic information from later layers with lower resolution jointly contribute to the final prediction.
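The fusing step described above can be sketched as follows; this is an illustrative NumPy mock-up in which nearest-neighbour down-sampling and a fixed weight matrix stand in for the network's learned stride-based down-sampling and $1\times1$ convolution:

```python
import numpy as np

def downsample(x, factor):
    """Nearest-neighbour down-sampling of an (H, W, C) map -- an illustrative
    stand-in for the network's stride-based down-sampling."""
    return x[::factor, ::factor, :]

def fuse_streams(streams, w):
    """Down-sample every stream to the coarsest spatial resolution, concatenate
    along channels, then apply a 1x1 convolution (a per-pixel linear map w)."""
    h = min(s.shape[0] for s in streams)
    aligned = [downsample(s, s.shape[0] // h) for s in streams]
    fused = np.concatenate(aligned, axis=2)   # (h, w, sum of stream channels)
    return fused @ w                          # 1x1 conv: (h, w, C_out)
```

The learned $1\times1$ filters let the network weight fine-resolution and coarse-resolution evidence per output channel.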
To predict the final instance location maps in each stream, we append the spatial coordinates of all pixels as additional feature maps in the second convolutional layer to help generate the instance location predictions. The intuition is that predicting accurate instance locations for each pixel may be difficult due to the various spatial displacements in each image, while the intrinsic offsets between each pixel position and its corresponding instance locations are much easier to learn. By incorporating the spatial coordinate maps into the feature maps, more accurate location predictions can be obtained, which is also verified in our experiments. Consider the feature maps $x^v$ of the $v$-th convolutional layer, a three-dimensional array of size $h^v \times w^v \times d^v$, where $h^v$ and $w^v$ are the spatial dimensions and $d^v$ is the number of channels. We generate $2$ spatial coordinate maps $[x^o_1, x^o_2]$ of size $h^v \times w^v$, where $x^o_{i_x, i_y,1}$ and $x^o_{i_x, i_y,2}$ at the spatial position $(i_x,i_y)$ of each pixel $i$ are set to $i_x$ and $i_y$, respectively. By concatenating the feature maps $x^v$ and the coordinate maps, the combined feature maps $\hat{x}^v = [x^v, x^o_1, x^o_2]$ of size $h^v \times w^v \times (d^v + 2)$ are obtained. The outputs $x_{i_x, i_y}^{v+1}$ at location $(i_x,i_y)$ in the next layer can be computed as
\vspace{-2mm}
\begin{equation}
x_{i_x, i_y}^{v+1} = f_b(\{\hat{x}^v_{i_x + \delta_{i_x},\, i_y + \delta_{i_y}}\}_{0\leq \delta_{i_x}, \delta_{i_y}\leq b}),
\end{equation}
\noindent{where} $b$ is the kernel size and $f_b$ denotes the convolutional filters. In PFN, $x^{v+1}$ represents the final instance location prediction maps with six channels.
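The coordinate-map concatenation $\hat{x}^v = [x^v, x^o_1, x^o_2]$ takes only a few lines of NumPy; the sketch below is illustrative, and we assume for concreteness that $i_x$ indexes columns and $i_y$ indexes rows:

```python
import numpy as np

def append_coordinate_maps(x):
    """Build x_hat^v = [x^v, x^o_1, x^o_2]: concatenate two spatial coordinate
    maps to the (h, w, d) feature maps, giving shape (h, w, d + 2).

    Assumption (for illustration): i_x indexes columns, i_y indexes rows.
    """
    h, w, _ = x.shape
    i_y, i_x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return np.concatenate([x, i_x[..., None], i_y[..., None]], axis=2)
```

Each output pixel thus carries its own position as two extra channels, so the subsequent convolution can learn position-dependent offsets.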
Suppose we have $M = 5$ multi-scale prediction streams, and each stream is associated with a regression loss $\ell^o_m(\cdot), m\in \{1,2,\dots,M\}$. For each image, the loss for the final prediction maps after fusing is denoted as $\ell^o_{\text{fuse}}(\cdot)$. The overall loss function for predicting pixel-wise instance locations then becomes
\vspace{-3mm}
\begin{equation}
L^o(\mathbf{t}, \mathbf{t^*}) = \sum_{m=1}^M\frac{\sum_{i}\ell^o_m(t_i, t_i^*)}{\Omega} + \frac{\sum_{i}\ell^o_{\text{fuse}}(t_i, t_i^*)}{\Omega},
\end{equation}
\noindent{where} $\mathbf{t} = \{t_i\}$ and $\mathbf{t^*} = \{t_i^*\}$ represent the predicted and ground-truth instance locations of all pixels, respectively, and $\Omega$ denotes the number of foreground pixels in the image. Dividing by $\Omega$ prevents the loss $L^o$ from becoming too large, which could lead to non-convergence.
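Numerically, the overall location loss above reduces to summing the per-stream losses and the fused-stream loss and normalising each by $\Omega$; a minimal sketch (the zero-foreground guard is our own addition, not stated in the text):

```python
def overall_location_loss(stream_losses, fuse_loss, num_fg):
    """L^o: per-stream regression losses plus the fused-stream loss, each
    already summed over pixels, normalised by Omega (foreground pixel count).
    """
    omega = max(num_fg, 1)  # guard against images with no foreground (assumption)
    return (sum(stream_losses) + fuse_loss) / omega
```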
\textbf{Instance Number Prediction.} Another sub-task of PFN is to predict the instance numbers of all categories. The instance numbers, which count the object instances of each category in the input image, naturally contain both category-level and instance-level information. As shown in Figure~\ref{fig:framework}, the feature maps of the last convolutional layer of the previously trained category-level segmentation network and the instance location predictions are combined to form fused feature maps with $1024 + 6$ channels. These fused feature maps are then convolved with $3\times3$ convolutional filters and down-sampled with stride 2 to obtain 128 feature maps. A fully-connected layer with 1024 outputs is then applied to generate the final $C$-dimensional instance number prediction.
Given an input image $I$, we denote the instance number vector over all $C$ categories as $\mathbf{g} = [g_1, g_2, \dots, g_C]$, where $g_c, c \in \{1,2, \dots, C\}$ is the number of object instances of category $c$ appearing in the image. Let $\mathbf{g}$ denote the predicted instance number vector and $\mathbf{g^*}$ the ground-truth instance number vector for each image. The loss function $L^n$ is defined as
\vspace{-3mm}
\begin{equation}
L^n(\mathbf{g}, \mathbf{g^*}) = \frac{1}{C}\sum_{c=1}^{C} ||g_c - g^*_c||^2.
\end{equation}
\textbf{Network Training.} To train the whole instance-level network, the overall loss function $L$ for each image is formulated as
\vspace{-4mm}
\begin{equation}
L(\mathbf{t}, \mathbf{t^*}, \mathbf{g}, \mathbf{g^*}) = \lambda L^o(\mathbf{t}, \mathbf{t^*}) + L^n(\mathbf{g}, \mathbf{g^*}).
\end{equation}
The balancing parameter $\lambda$ is empirically set to 10, which biases the training towards better pixel-wise instance location prediction. In this way, the instance number predictions and pixel-wise instance location predictions are jointly optimized in a unified network. The two targets can benefit each other by learning more robust and discriminative shared convolutional filters. We borrow the convolutional filters of the previously trained category-level network, except for those of the last prediction layer, to initialize the parameters of the instance-level network. We randomly initialize all newly added layers by drawing weights from a zero-mean Gaussian distribution with standard deviation 0.01. The network is trained by back-propagation and stochastic gradient descent (SGD)~\cite{lecun1989backpropagation}.
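A sketch of the joint objective, combining the Euclidean instance-number loss with the location loss weighted by $\lambda = 10$ (illustrative NumPy, not the actual training code):

```python
import numpy as np

def instance_number_loss(g, g_star):
    """L^n: mean squared error over the C per-category instance counts."""
    g = np.asarray(g, dtype=float)
    g_star = np.asarray(g_star, dtype=float)
    return float(np.mean((g - g_star) ** 2))

def joint_loss(loc_loss, g, g_star, lam=10.0):
    """L = lambda * L^o + L^n; lambda = 10 biases training towards accurate
    pixel-wise instance locations."""
    return lam * loc_loss + instance_number_loss(g, g_star)
```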
\begin{figure*}
\begin{center}
\includegraphics[scale=0.8]{figure/cluster.pdf}
\vspace{-4mm}
\caption{{Comparison of segmentation results by constraining the pixel number of each clustered object instance. }}
\vspace{-8mm}
\label{fig:cluster}
\end{center}
\end{figure*}
\vspace{-2mm}
\subsection{Testing Stage}
During testing, we first feed the input image $I$ into the category-level segmentation network to obtain category-level segmentation results, and then pass the input image into the instance-level network to get the instance number vector $\mathbf{g}$ and the pixel-wise instance location predictions $\mathbf{t}$.
Clustering based on the predicted instance locations $\mathbf{t}$ of all pixels is then performed. We cluster the predicted instance locations separately for each category, obtained by filtering $\mathbf{t}$ with the category-level segmentation result $\mathbf{p}$; the predicted instance numbers of all categories $\mathbf{g}$ indicate the expected cluster numbers used for spectral clustering. Normalized spectral clustering~\cite{ng2002spectral} is utilized due to its simplicity and effectiveness. For each category $c$, the similarity matrix $W$ is constructed by calculating the similarities between any pair of pixels belonging to the resulting segmentation mask $\mathbf{p_c}$. Let the spatial coordinate vectors of pixels $i$ and $j$ be $q_i = [i_x, i_y]$ and $q_j = [j_x, j_y]$, respectively. The $t_i$ and $q_i$ vectors are normalized by their corresponding maxima. The Gaussian similarity $w_{i,j}$ for each pair $(i,j)$ is computed as
\vspace{-3mm}
\begin{equation}
w_{i,j} = \exp(\frac{-||t_i - t_j||^2/|t_i|}{2\sigma^2}) + \exp(\frac{-||q_i - q_j||^2/|q_i|}{2\sigma^2}),
\label{eq:sim}
\end{equation}
where $|t_i|$ denotes the feature dimension of the vector $t_i$, which equals 6 (the coordinates of the center, top-left corner and bottom-right corner), and $|q_i|$ denotes the feature dimension of $q_i$, which equals 2. During clustering, we simply connect all pixels in the same segmentation mask of a specific category with positive similarity, because the local neighboring relationships can be captured by the second term in Eqn.~(\ref{eq:sim}). We simply set $\sigma = 0.5$ for all images. To make the clustering results robust to the initialization of seeds in the $\emph{k}$-means step of spectral clustering, we randomly select the seeds twenty times, balancing accuracy and computational cost. The clustering result with the maximal average within-cluster similarity over all clusters is selected as the final result.
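The affinity of Eqn.~(\ref{eq:sim}) can be computed for all pixel pairs at once; the vectorised sketch below is illustrative (it assumes $t$ and $q$ are already normalized as described) and produces the matrix $W$ that would be fed to normalized spectral clustering:

```python
import numpy as np

def similarity_matrix(t, q, sigma=0.5):
    """Affinity matrix W for one category: the location term plus the
    coordinate term of the Gaussian similarity.

    t: (N, 6) normalized instance-location vectors (|t_i| = 6).
    q: (N, 2) normalized pixel coordinates (|q_i| = 2).
    """
    # squared pairwise distances, each divided by the feature dimension
    dt = ((t[:, None, :] - t[None, :, :]) ** 2).sum(-1) / t.shape[1]
    dq = ((q[:, None, :] - q[None, :, :]) ** 2).sum(-1) / q.shape[1]
    return np.exp(-dt / (2 * sigma ** 2)) + np.exp(-dq / (2 * sigma ** 2))
```

Each pixel has self-similarity $2$ (both exponentials equal $1$ on the diagonal), and the matrix is symmetric by construction.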
Note that inconsistent global image category predictions between the instance number vectors and the pixel-wise category-level segmentation are often observed. For example, as illustrated in the first row of Figure~\ref{fig:testing}, the instance number prediction infers 4 person instances and 2 bicycle instances, while the category-level segmentation indicates three categories (i.e. person, bicycle, car) appearing in the image. It is thus necessary to keep the predicted global image categories consistent between the instance number prediction and the pixel-wise segmentation. Note that the instance number prediction task is much simpler than pixel-wise semantic segmentation, which must satisfy dense pixel-wise optimization targets. We can thus use the instance number prediction to refine the produced category-level segmentation.
The object categories from the instance number prediction can be easily obtained by thresholding the instance number vector at $\tau = 0.5$: if the predicted instance number of a specific category $c$ is larger than $\tau$, the category $c$ is regarded as a true label. Specifically, two strategies are adopted. First, if more than one category is predicted to have at least one instance in the image, any pixels assigned to the remaining categories (i.e. those with predicted instance number 0) are re-labeled as background, as illustrated in the first row of Figure~\ref{fig:testing}. Second, if only one category is inferred from the instance number prediction, pixels labeled with other object categories (excluding background pixels) in the semantic segmentation mask are converted to the predicted one, as illustrated in the second row of Figure~\ref{fig:testing}. The refined category-level segmentation masks are then used to generate the instance-level segments.
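The two refinement strategies amount to a simple relabeling of the segmentation mask; a minimal sketch (label $0$ denotes background, and the function name is ours):

```python
import numpy as np

def refine_segmentation(seg, g, tau=0.5):
    """Enforce consistency between the category-level mask and the
    instance-number vector.

    seg: (H, W) integer labels, 0 = background, 1..C = object categories.
    g:   (C,) predicted instance numbers; category c is present if g[c-1] > tau.
    """
    present = [c for c in range(1, len(g) + 1) if g[c - 1] > tau]
    out = seg.copy()
    if len(present) > 1:
        # Strategy 1: pixels of categories predicted absent become background.
        out[~np.isin(out, [0] + present)] = 0
    elif len(present) == 1:
        # Strategy 2: all foreground pixels take the single predicted category.
        out[out > 0] = present[0]
    return out
```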
In addition, the predicted segmentation result is not perfect due to noisy background pixels. The instance locations of pixels belonging to one object are much more likely to form a cluster, while the predictions of background pixels are often quite random, forming very small clusters. Therefore, we discard those clusters whose pixel counts are less than 0.1\% of the pixels in the segmentation mask. Finally, after obtaining the final clustering result for each category, the instance-level object segmentation result is obtained by combining the clustering results of all categories. Example results after constraining the pixel number of each clustered instance region are shown in Figure~\ref{fig:cluster}.
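The small-cluster filter can be sketched as follows (marking discarded pixels with $-1$ is our illustrative convention, not stated in the text):

```python
import numpy as np

def drop_small_clusters(labels, min_frac=0.001):
    """Discard clusters covering fewer than min_frac (0.1%) of the mask's
    pixels; pixels of dropped clusters are marked -1."""
    labels = np.asarray(labels).copy()
    threshold = min_frac * labels.size
    for c in np.unique(labels):
        if (labels == c).sum() < threshold:
            labels[labels == c] = -1
    return labels
```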
\section{Experiments}
\subsection{Experimental Settings}
\textbf{Dataset and Evaluation Metrics.} The proposed PFN is extensively evaluated on the PASCAL VOC 2012 validation segmentation benchmark~\cite{everingham2014pascal}. We compare our method with two state-of-the-art algorithms: SDS~\cite{hariharan2014simultaneous} and the method of Chen et al.~\cite{liu2015multi}. Following the two baselines~\cite{hariharan2014simultaneous}~\cite{liu2015multi}, the segmentation annotations from SBD~\cite{hariharan2011semantic} are used for training the network, {and the 1,449 images in the PASCAL VOC 2012 segmentation validation set are used for evaluation. We cannot report results on the PASCAL VOC 2012 segmentation test set because no instance-level segmentation annotations are provided. In addition, because the VOC 2010 segmentation set is only a subset of VOC 2012 and no baseline has reported results on VOC 2010, we only evaluate our algorithm on the VOC 2012 set.} For fair comparison with state-of-the-art instance-level segmentation methods, the $AP^r$ and $AP^r_{vol}$ metrics are used, following SDS~\cite{hariharan2014simultaneous}. The $AP^r$ metric measures the average precision at 0.5 IoU overlap with the ground-truth segmentation. Hariharan et al.~\cite{liu2015multi} proposed to vary the IoU score from 0.1 to 0.9 to show the performance for different applications. The $AP^r_{vol}$ metric calculates the mean of $AP^r$ over all IoU scores. Note that the two baselines fine-tune their networks based on the AlexNet architecture~\cite{krizhevsky2012imagenet}. For fair comparison, we also report results based on the AlexNet architecture~\cite{krizhevsky2012imagenet}.
\textbf{Training Strategies.} All networks in our experiments are trained and tested using the published DeepLab code~\cite{wcrf}, which is implemented on the publicly available Caffe platform~\cite{jia2014caffe}, on a single NVIDIA GeForce Titan GPU with 6GB memory. We first train the category-level segmentation network, which is then used to initialize our instance-level segmentation network for fine-tuning. For both training stages, we randomly initialize all new layers by drawing weights from a zero-mean Gaussian distribution with standard deviation 0.01. We use a mini-batch size of 8 images, an initial learning rate of 0.001 for pre-trained layers, and 0.01 for newly added layers in all our experiments. We decrease the learning rate to 1/10 of the previous value after 20 epochs and train the two networks, one after the other, for roughly 60 epochs each. The momentum and weight decay are set to 0.9 and 0.0005, respectively. The same training settings are used for all compared network variants.
We evaluate the testing time by averaging the running time over images of the VOC 2012 validation set on an NVIDIA GeForce Titan GPU and an Intel Core i7-4930K CPU @3.40GHZ. Our PFN can process one $300\times 500$ image in about one second. This compares favorably to the current state-of-the-art methods~\cite{hariharan2014simultaneous}~\cite{liu2015multi}, which rely on region proposal pre-processing and complex post-processing steps: ~\cite{hariharan2014simultaneous} takes about 40 seconds, while ~\cite{liu2015multi} is expected to be even more expensive because more complex top-down category-specific reasoning and shape prediction are employed for inference on top of ~\cite{hariharan2014simultaneous}.
\begin{table*} \setlength{\tabcolsep}{0.7pt}
\centering
\caption{Comparison of instance-level segmentation performance with several architectural variants of our network and two state-of-the-arts using $AP^r$ metric over 20 classes at 0.5 IoU on the PASCAL VOC 2012 validation set. All numbers are in \%. }\renewcommand\arraystretch{1.7}\vspace{-3mm}
\begin{tabular}{l|c|cccccccccccccccccccc|c }
\toprule
Settings & Method &\rotatebox{90}{plane}&\rotatebox{90}{bike}&\rotatebox{90}{bird}&\rotatebox{90}{boat}&\rotatebox{90}{bottle}&\rotatebox{90}{bus}&\rotatebox{90}{car}&\rotatebox{90}{cat}&\rotatebox{90}{chair}&\rotatebox{90}{cow}&\rotatebox{90}{table}&\rotatebox{90}{dog}&\rotatebox{90}{horse}&\rotatebox{90}{motor}&\rotatebox{90}{person}&\rotatebox{90}{plant}&\rotatebox{90}{sheep}&\rotatebox{90}{sofa}&\rotatebox{90}{train}&\rotatebox{90}{tv}& average\\
\midrule
\hline
\multirow{2}*{Baselines} & SDS~\cite{hariharan2011semantic} & 58.8 & 0.5 & 60.1 & 34.4 & 29.5 & 60.6 & 40.0 & 73.6 & 6.5 & 52.4 & 31.7 & 62.0 & 49.1 & 45.6 & 47.9 & 22.6 & 43.5 & 26.9 & 66.2 & 66.1 & 43.8\\
& Chen et al.~\cite{liu2015multi} & 63.6 & 0.3 & 61.5 & 43.9 & 33.8 & 67.3 & 46.9 & 74.4 & 8.6 & 52.3 & 31.3 & 63.5 & 48.8 & 47.9 & 48.3 & 26.3 & 40.1 & 33.5 & 66.7 & 67.8 & 46.3\\
\hline
\multirow{1}*{Ours (Alexnet)} & PFN Alexnet & 63.0 & 15.3 & 69.8 & 48.4 & 23.5 & 60.2 & 24.1 & 82.2 & 13.9 & 60.7 & 41.3 & 73.5 & 76.9 & 69.7 & 37.6 & 20.0 & 41.4 & 58.8 & 78.9 & 58.4 & 50.9\\
\hline
\multirow{1}*{Training Strategy} & PFN unified & 72.9 & \textbf{18.1} & 78.8 & 55.4 & 23.2 & 63.6 & 17.8 & 72.1 & 14.7 & 64.1 & 44.5 & 69.5 & 71.5 & 63.3 & 39.1 & 9.5 & 27.9 & 47.7 & 72.1 & 57.0 & 49.1\\
\hline
\multirow{5}*{Location prediction} & PFN offsets(2)& 73.6 & 17.2 & 78.9 & 55.6 & \textbf{29.4} & 61.6 & 31.7 & 77.8 & 13.5 & 59.6 & 38.4 & 70.2 & 67.5 & 66.8 & 43.6 & 9.8 & 41.1 & 43.3 & 75.3 & 65.2 & 51.0\\
& PFN centers(2)& 78.0 & 15.5 & 76.4 & 58.7 & 25.6 & 69.3 & 28.9 & 88.2 & 16.6 & 67.2 & 47.8 & 82.3 & 78.0 & 71.5 & 47.0 & 23.8 & 48.8 & 63.8 & 83.3 & 72.0 & 57.1\\
& PFN centers,w,h(4)& 80.1 & 15.9 & 76.6 & 60.2 & 25.7 & 70.9 & 30.0 & 87.7 & 18.3 & 70.1 & 50.8 & 82.5 & 77.9 & 71.4 & 47.4 & 24.4 & 48.0 & 64.0 & 82.8 & 72.2 & 57.8\\
& PFN centers,topleft(4)& \textbf{80.5} & 15.9 & \textbf{79.2} & \textbf{62.5} & 27.8 & 69.4 & 31.1 & 86.6 & \textbf{18.8} & 73.6 & 50.6 & 81.6 & 77.2 & 71.5 & 47.5 & 22.5 & 47.7 & 63.6 & 83.0 & \textbf{72.5} & 58.2\\
& PFN +topright, bottomleft(10)& 76.9 & 15.1 & 73.9 & 55.8 & 26.0 & 73.7 & 31.1 & 92.1 & 17.6 & 74.0 & 48.1 & 82.0 & \textbf{85.5} & 71.7 & \textbf{48.8} & \textbf{25.2} & \textbf{57.7} & 64.5 & 88.7 & 72.0 & \textbf{59.0}\\
\hline
\multirow{3}*{Network structure}& PFN w/o multiscale& 72.8 & 16.5 & 71.9 & 50.3 & 25.2 & 65.9 & 27.4 & 90.4 & 16.3 & 64.6 & 48.1 & 78.9 & 74.1 & 72.0 & 42.4 & 21.3 & 46.2 & 64.4 & 82.8 & 72.3 & 55.2\\
& PFN w/o coordinate maps& 74.1 & 17.3 & 72.8 & 57.0 & 27.6 & 72.8 & 30.0 & \textbf{92.6} & 17.7 & 69.4 & 48.5 & 81.7 & 80.9 & 72.0 & 47.8 & 24.9 & 50.1 & 62.7 & 87.9 & 72.3 & 58.0\\
& PFN fusing\_summation& 80.6 & 16.5 & 78.8 & 59.4 & 27.2 & 71.0 & 29.9 & 85.1 & 18.1 & \textbf{75.4} & \textbf{52.8} & 80.9 & 76.5 & \textbf{74.1} & 47.7 & 20.8 & 46.7 & \textbf{66.8} & 86.0 & 71.7 & 58.3\\
\hline
\multirow{3}*{Instance number} & PFN w/o category-level& 72.4 & 17.6 & 77.7 & 55.4 & \textbf{29.4} & 63.5 & \textbf{32.0} & 77.8 & 13.3 & 61.2 & 38.5 & 70.8 & 69.5 & 66.1 & 44.3 & 13.1 & 42.6 & 45.6 & 71.9 & 64.5 & 51.4\\
& PFN w/o instance-level& 74.0 & 15.6 & 72.9 & 55.5 & 26.4 & 72.2 & 31.3 & 91.1 & 19.1 & 66.9 & 49.9 & 82.3 & 75.0 & 73.4 & 47.7 & 24.2 & 53.3 & 64.4 & \textbf{91.0} & 72.3 & 57.9\\
& PFN separate\_finetune& 74.1 & 17.3 & 74.3 & 57.2 & 27.6 & 72.8 & 30.2 & \textbf{92.6} & 18.1 & 69.7 & 49.5 & 81.7 & 80.9 & 71.6 & 47.8 & 24.9 & 50.1 & 62.7 & 87.9 & 71.8 & 58.1\\
\hline
\multirow{4}*{Testing strategy}
& PFN w/o coordinates& 72.7 & 16.8 & 72.0 & 51.8 & 24.0 & 67.6 & 27.4 & 90.4 & 16.7 & 64.6 & 48.1 & 78.9 & 74.1 & 71.5 & 42.8 & 22.1 & 45.2 & 64.4 & 82.8 & 72.3 & 55.3\\
& PFN w/o classify+size& 76.2 & 16.1 & 72.9 & 57.9 & 25.3 & 69.5 & 29.4 & 88.3 & 15.8 & 67.1 & 48.1 & 82.0 & 73.8 & 71.8 & 46.1 & 22.3 & 47.8 & 62.7 & 83.7 & 72.3 & 56.4\\
& PFN w/o size& 72.0 & 17.3 & 72.8 & 57.0 & 27.6 & 72.8 & 30.8 & \textbf{92.6} & 17.7 & 64.6 & 48.5 & 81.7 & 80.9 & 73.3 & \textbf{48.8} & 24.9 & 50.1 & 62.7 & 87.9 & 72.3 & 57.8\\
& PFN w/o classify& 71.0 & 15.6 & 72.9 & 55.3 & 25.8 & 70.4 & 31.4 & 91.1 & 17.7 & 66.9 & 48.5 & \textbf{82.7} & 76.4 & 72.3 & 47.2 & 24.2 & 51.4 & 64.4 & \textbf{91.0} & 72.3 & 57.4\\
\hline
Ours (VGG 16) & PFN & 76.4 & 15.6 & 74.2 & 54.1 & 26.3 & \textbf{73.8} & 31.4 & 92.1 & 17.4 & 73.7 & {48.1} & 82.2 & 81.7 & 72.0 & 48.4 & 23.7 & \textbf{57.7} & 64.4 & 88.9 & 72.3 & 58.7\\
\hline
\multirow{3}*{Upperbound}& PFN upperbound\_instnum& 81.6 & 19.0 & 80.0 & 58.1 & 30.0 & 77.0 & 33.9 & 92.9 & 19.8 & 82.6 & 57.2 & 81.3 & 83.0 & 74.4 & 49.6 & 21.6 & 56.2 & 67.8 & 91.2 & 68.9 & 61.3\\
& PFN upperbound\_instloc & 81.8 & 23.6 & 84.6 & 66.4 & 38.2 & 75.3 & 35.3 & 94.9 & 24.8 & 84.2 & 61.7 & 83.9 & 87.2 & 75.2 & 55.6 & 27.3 & 63.9 & 69.3 & 88.9 & 72.5 & 64.7\\
\midrule
\end{tabular}
\label{mpr}
\vspace{-8mm}
\end{table*}
\subsection{Results and Comparisons}
In the first training stage, we train a category-level segmentation network using the same architecture as ``DeepLab-CRF-LargeFOV" in~\cite{wcrf}. Evaluating the pixel-wise segmentation in terms of pixel intersection-over-union (IoU)~\cite{long2014fully} averaged across 21 classes, we achieve 67.53\% on the category-level segmentation task on the PASCAL VOC 2012 validation set~\cite{everingham2014pascal}, only slightly inferior to the $67.64\%$ reported in~\cite{wcrf}.
Table~\ref{mpr} and Table~\ref{mpr_vol} compare the proposed PFN with the two state-of-the-art methods~\cite{hariharan2011semantic}~\cite{liu2015multi} using the $AP^r$ metric at IoU score 0.5 and at 0.6 to 0.9, respectively. We directly use their published results on the PASCAL VOC 2012 validation set for fair comparison. All results of the state-of-the-art methods were reported in~\cite{liu2015multi}, which re-evaluated the performance of~\cite{hariharan2011semantic} on the VOC 2012 validation set. For fair comparison, we also report the results of PFN using the AlexNet architecture~\cite{krizhevsky2012imagenet} as used in the two baselines~\cite{hariharan2011semantic}~\cite{liu2015multi}, i.e. ``PFN Alexnet". Following the strategy presented in~\cite{wcrf}, we convert the fully connected layers of AlexNet to fully convolutional layers; all other settings are the same as for ``PFN". The methods of~\cite{hariharan2011semantic} and~\cite{liu2015multi} achieve $43.8\%$ and $46.3\%$ in the $AP^r$ metric at IoU 0.5. Our ``PFN Alexnet" is significantly superior to both baselines, i.e. 50.9\% vs 43.8\%~\cite{hariharan2011semantic} and 46.3\%~\cite{liu2015multi} in the $AP^r$ metric. Further detailed comparisons in $AP^{r}$ over the 20 classes at IoU scores 0.6 to 0.9 are listed in Table~\ref{mpr_vol}. By utilizing the more powerful VGG-16 network architecture, our PFN substantially improves the performance, outperforming the two baselines by $14.9\%$ over SDS~\cite{hariharan2011semantic} and $12.4\%$ over Chen et al.~\cite{liu2015multi}. PFN also gives large improvements in the $AP^r$ metrics at 0.6 to 0.9 IoU scores, as reported in Table~\ref{mpr_vol}. For example, at the 0.9 IoU score, where high localization accuracy for object instances is strictly required, the two baselines achieve $0.9\%$ for SDS~\cite{hariharan2011semantic} and 2.6\% for~\cite{liu2015multi}, while PFN obtains $15.7\%$.
This verifies the effectiveness of our PFN, even though it does not require extra region proposal extraction as a pre-processing step. The detailed $AP^r$ scores for each class are also listed. In general, our method shows dramatically higher performance than the baselines. In particular, for small object instances (e.g., bird and chair) or object instances with heavy occlusion (e.g., table and sofa), our method achieves a larger gain, e.g. 74.2\% vs 60.1\%~\cite{hariharan2011semantic} and 61.5\%~\cite{liu2015multi} for bird, and 64.4\% vs 26.9\%~\cite{hariharan2011semantic} and 33.5\%~\cite{liu2015multi} for sofa. This demonstrates that our network can effectively handle the internal boundaries between object instances and robustly predict instance-level masks with various appearance patterns or occlusions. In Table~\ref{compare_mpr_vol}, we also report the $AP^r_{vol}$ results of our different architecture variants, which average all $AP^r$ scores at 0.1 to 0.9 IoU. We cannot compare $AP^r_{vol}$ with the baselines, as~\cite{liu2015multi} does not publish these results.
\begin{table*} \setlength{\tabcolsep}{0.7pt}
\centering
\caption{Comparison of instance-level segmentation performance with several architectural variants of our network using $AP^r_{vol}$ metric over 20 classes that averages all $AP^r$ performance from 0.1 to 0.9 IoU scores on the PASCAL VOC 2012 validation set. All numbers are in \%.}\renewcommand\arraystretch{1.7}\vspace{-3mm}
\begin{tabular}{l|c|cccccccccccccccccccc|c }
\toprule
Settings & Method &\rotatebox{90}{plane}&\rotatebox{90}{bike}&\rotatebox{90}{bird}&\rotatebox{90}{boat}&\rotatebox{90}{bottle}&\rotatebox{90}{bus}&\rotatebox{90}{car}&\rotatebox{90}{cat}&\rotatebox{90}{chair}&\rotatebox{90}{cow}&\rotatebox{90}{table}&\rotatebox{90}{dog}&\rotatebox{90}{horse}&\rotatebox{90}{motor}&\rotatebox{90}{person}&\rotatebox{90}{plant}&\rotatebox{90}{sheep}&\rotatebox{90}{sofa}&\rotatebox{90}{train}&\rotatebox{90}{tv}& average\\
\hline
\multirow{1}*{Ours (Alexnet)} & PFN Alexnet & 62.2 & 20.0 & 64.6 & 41.0 & 23.8 & 56.6 & 22.4 & 76.0 & 15.5 & 56.2 & 39.2 & 68.6 & 65.4 & 61.2 & 35.2 & 20.2 & 37.6 & 52.1 & 68.9 & 49.9 & 46.8\\
\hline
\multirow{1}*{Training Strategy} & PFN unified & 71.2 & 22.8 & \textbf{74.1} & 47.3 & 24.2 & 55.1 & 18.5 & 69.8 & 15.4 & 56.2 & 40.1 & 63.7 & 63.0 & 56.2 & 38.1 & 13.2 & 31.5 & 41.6 & 63.8 & 47.1 & 45.6\\
\hline
\multirow{5}*{Location prediction} & PFN offsets(2)& 70.4 & \textbf{23.1} & 73.4 & 46.6 & 31.0 & 55.3 & 29.1 & 74.3 & 15.6 & 54.7 & 35.4 & 64.9 & 60.0 & 57.7 & 41.3 & 13.5& 42.6 & 39.6 & 64.7 & 53.2 & 47.3\\
& PFN centers(2)& 72.5 & 21.0 & 69.6 & 50.0 & 25.0 & 63.3 & 27.2 & 79.2 & 17.1 & 60.8 & 44.3 & 74.9 & 67.8 & 64.3 & 41.0 & 24.3 & 42.5 & 55.6 & 72.8 & 58.0 & 51.6\\
& PFN centers,w,h(4)& \textbf{74.4} & 21.8 & 69.8 & 51.3 & 25.1 & 64.7 & 28.2 & 78.6 & 18.4 & 63.3 & \textbf{46.9} & 74.0 & 67.6 & 64.3 & 41.2 & \textbf{24.7} & 41.6 & 55.4 & 72.6 & 58.4 & 52.1\\
& PFN centers,topleft(4)& 74.3 & 21.8 & 72.8 & \textbf{53.0} & 27.3 & 63.4 & 29.0 & 77.7 & 18.6 & \textbf{65.7} & 46.8 & 73.3 & 67.1 & 65.0 & 41.5 & 23.2 & 40.7 & 54.7 & 72.8 & 58.7 & 52.4\\
& PFN +topright, bottomleft(10)& 71.3 & 20.8 & 66.4 & 48.5 & 26.6 & 65.2 & 27.2 & 83.1 & 17.3 & 64.6 & 45.1 & 74.6 & 70.8 & 64.3 & 41.6 & 23.4 & \textbf{48.8} & 56.4 & 76.1 & 57.9 & \textbf{52.5}\\
\hline
\multirow{3}*{Network structure}& PFN w/o multiscale& 69.6 & 21.3 & 66.7 & 44.1 & 25.8 & 58.4 & 25.3 & 81.9 & 17.4 & 60.6 & 44.5 & 72.8 & 64.7 & 62.4 & 39.5 & 21.6 & 41.6 & \textbf{57.7} & 74.2 & 58.7 & 50.4\\
& PFN w/o coordinate maps& 70.5 & 21.9 & 65.2 & 50.0 & 26.6 & \textbf{67.6} & 27.8 & 83.1 & 17.9 & 61.4 & 45.1 & 73.9 & \textbf{68.8} & 62.8 & 41.9 & 24.1 & 44.8 & 56.4 & 76.5 & 58.7 & 52.3\\
& PFN fusing\_summation& 74.3 & 22.7 & 72.4 & 50.3 & 27.0 & 64.6 & 27.7 & 76.2 & 17.7 & 66.4 & 48.6 & 72.7 & 66.6 & \textbf{67.1} & 41.7 & 21.7 & 39.8 & 57.0 & 75.3 & 57.8 & 52.4\\
\hline
\multirow{3}*{Instance number} & PFN w/o category-level& 69.4 & 22.3 & 71.9 & 46.5 & \textbf{31.5} & 58.0 & \textbf{29.5} & 74.1 & 15.8 & 56.9 & 35.4 & 66.1 & 61.8 & 57.8 & 42.1 & 15.8 & 43.3 & 40.9 & 63.6 & 52.6 & 47.8\\
& PFN w/o instance-level& 69.4 & 21.1 & 65.1 & 48.6 & 25.6 & 64.3 & 27.4 & 81.9 & \textbf{18.7} & 61.1 & 46.0 & \textbf{75.2} & 67.0 & 64.0 & 41.0 & 22.4 & 47.6 & 57.6 & \textbf{77.7} & 58.8 & 52.0\\
& PFN separate\_finetune& 70.5 & 21.9 & 66.3 & 50.2 & 26.6 & \textbf{67.6} & 27.9 & 83.1 & 18.3 & 61.7 & 45.8 & 73.9 & \textbf{68.8} & 62.4 & 42.0 & 24.1 & 44.8 & 56.4 & 76.5 & 58.1 & 52.3\\
\hline
\multirow{4}*{Testing strategy}
& PFN w/o coordinates& 69.9 & 21.7 & 66.7 & 44.4 & 25.3 & 58.9 & 25.3 & 81.9 & 17.9 & 60.6 & 44.5 & 72.8 & 64.7 & 63.3 & 39.5 & 22.1 & 41.9 & \textbf{57.7} & 74.2 & 58.7 & 50.6\\
& PFN w/o classify+size& 70.7 & 21.3 & 66.6 & 49.4 & 24.6 & 63.4 & 27.7 & 79.3 & 16.3 & 60.8 & 44.5 & 74.6 & 64.7 & 64.6 & 40.2 & 22.7 & 41.7 & 54.9 & 73.2 & 58.4 & 51.0\\
& PFN w/o size& 68.8 & 22.0 & 65.3 & 50.0 & 26.6 & 67.0 & 28.5 & 83.1 & 17.9 & 59.3 & 45.1 & 74.3 & \textbf{68.8} & 64.6 & \textbf{42.9} & 23.7 & 44.9 & 56.4 & 77.5 & 58.4 & 52.3\\
& PFN w/o classify& 67.8 & 21.1 & 65.3 & 48.3 & 25.7 & 62.3 & 27.6 & 81.9 & 17.7 & 60.0 & 44.9 & \textbf{75.2} & 67.0 & 62.9 & 41.1 & 22.8 & 45.0 & 57.6 & \textbf{77.7} & \textbf{58.8} & 51.5\\
\hline
Ours (VGG 16) & PFN & 70.8 & 21.1 & 66.7 & 47.6 & 26.7 & 65.3 & 27.5 & \textbf{83.2} & 17.2 & {64.5} & 45.1 & 74.7 & 67.9 & 64.5 & 41.3 & 22.1 & \textbf{48.8} & 56.5 & 76.2 & 58.2 & 52.3\\
\bottomrule
\end{tabular}
\label{compare_mpr_vol}
\vspace{-3mm}
\end{table*}
\vspace{-3mm}
\subsection{Ablation Studies of Our Network}
We further evaluate the effectiveness of six important components of PFN: the training strategy, instance location prediction, network structure, instance number prediction, testing strategy and upperbounds. The performance over all categories of all variants is reported in Table~\ref{mpr} and Table~\ref{compare_mpr_vol}.
\textbf{Training strategy:} Note that our PFN training includes two stages: the category-level segmentation network and the instance-level network. To justify the necessity of using two stages, we evaluate the performance of training a unified network, namely ``PFN unified", that performs category-level segmentation, pixel-wise instance location prediction and instance number prediction in one learning stage. ``PFN unified" is fine-tuned from the VGG-16 pre-trained model, and the three losses for the three sub-tasks are optimized in one network. The category-level prediction is appended to the last convolutional layer within the dashed blue box in Figure~\ref{fig:framework}, and the loss weight for category-level segmentation is set to 1. In our experiments, ``PFN unified" leads to a 9.6\% decrease in average $AP^r$ and a $6.6\%$ decrease in average $AP^r_{vol}$ compared with ``PFN". Intuitively, category-level segmentation aims to be robust across individual object instances of the same category, and thus erases instance-level information during optimization. On the contrary, the instance-level network aims to learn instance-level information for distinguishing object instances with large variance in appearance, view or scale. This comparison verifies that training with two sequential stages leads to better global instance-level segmentation.
\begin{table*} \setlength{\tabcolsep}{2pt}
\centering
\caption{Per-class instance-level segmentation results using $AP^r$ metric over 20 classes at 0.6 to 0.9 (with a step size of 0.1) IoU scores on the VOC PASCAL 2012 validation set. All numbers are in \%.}\renewcommand\arraystretch{1.7}\vspace{-3mm}
\begin{tabular}{l|c|cccccccccccccccccccc|c }
\toprule
IoU score & Method &\rotatebox{90}{plane}&\rotatebox{90}{bike}&\rotatebox{90}{bird}&\rotatebox{90}{boat}&\rotatebox{90}{bottle}&\rotatebox{90}{bus}&\rotatebox{90}{car}&\rotatebox{90}{cat}&\rotatebox{90}{chair}&\rotatebox{90}{cow}&\rotatebox{90}{table}&\rotatebox{90}{dog}&\rotatebox{90}{horse}&\rotatebox{90}{motor}&\rotatebox{90}{person}&\rotatebox{90}{plant}&\rotatebox{90}{sheep}&\rotatebox{90}{sofa}&\rotatebox{90}{train}&\rotatebox{90}{tv}& average\\
\midrule
\multirow{3}*{0.6} & SDS~\cite{hariharan2011semantic} & 43.6 & 0 & 52.8 & 19.5 & 25.7 & 53.2 & 33.1 & 58.1 & 3.7 & 43.8 & 29.8 & 43.5 & 30.7 & 29.3 & 31.8 & 17.5 & 31.4 & 21.2 & 57.7 & \textbf{62.7} & 34.5\\
& Chen et al.~\cite{liu2015multi} & 57.1 & 0.1 & 52.7 & 24.9 & \textbf{27.8} & 62.0 & \textbf{36.0} & 66.8 & 6.4 & 45.5 & 23.3 & 55.3 & 33.8 & 35.8 & 35.6 & \textbf{20.1} & 35.2 & 28.3 & 59.0 & 57.6 & 38.2\\
& PFN Alexnet& 60.3 & 9.9 & 67.4 & 31.8 & 18.7 & 52.9 & 18.6 & 75.6 & 8.1 & 54.6 & 36.0 & 71.3 & 63.3 & 65.6 & 29.3 & 14.8 & 31.2 & 48.5 & 66.9 & 47.3 & 43.6\\
& PFN & \textbf{73.2} & \textbf{11.0} & \textbf{70.9} & \textbf{41.3} & 22.2 & \textbf{66.7} & 26.0 & \textbf{83.4} & \textbf{10.7} & \textbf{65.0} & \textbf{42.4} & \textbf{78.0} & \textbf{69.2} & \textbf{72.0} & \textbf{38.0} & 19.0 & \textbf{46.0} & \textbf{51.8} & \textbf{77.9} & 61.4 & \textbf{51.3}\\
\midrule
\multirow{3}*{0.7} & SDS~\cite{hariharan2011semantic} & 17.8 & 0 & 32.5 & 7.2 & 19.2 & 47.7 & 22.8 & 42.3 & 1.7 & 18.9 & 16.9 & 20.6 & 14.4 & 12.0 & 15.7 & 5.0 & 23.7 & 15.2 & 40.5 & 51.4 & 21.3\\
& Chen et al.~\cite{liu2015multi} & 40.8 & 0.07 & 40.1 & 16.2 & \textbf{19.6} & 56.2 & \textbf{26.5} & 46.1 & 2.6 & 25.2 & 16.4 & 36.0 & 22.1 & 20.0 & 22.6 & 7.7 & 27.5 & 19.5 & 47.7 & 46.7 & 27.0\\
& PFN Alexnet& 56.1 & 5.0 & 59.8 & 25.6 & 12.7 & 50.4 & 15.5 & 69.3 & 3.2 & 42.9 & 24.5 & 63.6 & 58.4 & 54.4 & 21.1 & 7.9 & 26.2 & 39.9 & 59.1 & 37.0 & 36.6\\
& PFN & \textbf{68.5} & \textbf{5.6} & \textbf{60.4} & \textbf{34.8} & 14.9 & \textbf{61.4} & 19.2 & \textbf{78.6} & \textbf{4.2} & \textbf{51.1} & \textbf{28.2} & \textbf{69.6} & \textbf{60.7} & \textbf{60.5} & \textbf{26.5} & \textbf{9.8} & \textbf{35.1} & \textbf{43.9} & \textbf{71.2} & \textbf{45.6} & \textbf{42.5}\\
\midrule
\multirow{3}*{0.8} & SDS~\cite{hariharan2011semantic} & 2.1 & 0 & 8.3 & 4.5 & 11.5 & 32.3 & 9.0 & 17.9 & 0.7 & 4.7 & 9.0 & 6.5 & 1.8 & 4.4 & 3.3 & 1.9 & 7.9 & 10.2 & 12.7 & 24.3 & 8.7\\
& Chen et al.~\cite{liu2015multi} & 10.5 & 0 & 15.7 & 9.8 & \textbf{11.4} & 32.7 & 12.5 & 34.8 & 1.1 & 11.6 & 9.5 & 15.3 & 4.6 & 6.5 & 6.0 & 3.0 & 13.9 & 14.4 & 27.0 & 30.4 & 13.5\\
& PFN Alexnet& 46.6 & 1.5 & 48.0 & 15.0 & 8.6 & 44.3 & 10.7 & 58.6 & 2.0 & 37.4 & 15.5 & 47.8 & 40.0 & 34.9 & 13.5 & 3.8 & 20.2 & 23.4 & 51.7 & 30.7 & 27.7\\
& PFN & \textbf{54.6} & \textbf{1.5} & \textbf{49.5} & \textbf{21.0} & 10.4 & \textbf{50.7} & \textbf{14.2} & \textbf{63.5} & \textbf{2.1} & \textbf{38.3} & \textbf{18.9} & \textbf{51.8} & \textbf{41.2} & \textbf{36.7} & \textbf{16.5} & \textbf{4.2} & \textbf{26.2} & \textbf{25.3} & \textbf{59.6} & \textbf{36.9} & \textbf{31.2}\\
\midrule
\multirow{3}*{0.9} & SDS~\cite{hariharan2011semantic} & 0 & 0 & 0.2 & 0.3 & 2.0 & 3.8 & 0.2 & 0.9 & 0.1 & 0.2 & 1.5 & 0 & 0 & 0 & 0.1 & 0.1 & 0 & 2.3 & 0.2 & 5.8 & 0.9\\
& Chen et al.~\cite{liu2015multi} & 0.6 & 0 & 0.6 & 0.5 & \textbf{4.9} & 9.8 & 1.1 & 8.3 & 0.1 & 1.1 & 1.2 & 1.7 & 0.3 & 0.8 & 0.6 & 0.3 & 0.8 & 7.6 & 4.3 & 6.2 & 2.6\\
& PFN Alexnet& 37.1 & 0.1 & 24.6 & 7.0 & 3.6 & 30.4 & 4.9 & 40.0 & \textbf{0.6} & 23.3 & 2.8 & 28.9 & \textbf{13.6} & 7.8 & 5.0 & 1.1 & 10.9 & 12.2 & 23.8 & 8.1 & 14.3\\
& PFN & \textbf{43.9} & \textbf{0.1} & \textbf{24.5} & \textbf{7.8} & 4.1 & \textbf{32.5} & \textbf{6.3} & \textbf{42.0} & \textbf{0.6} & \textbf{25.7} & \textbf{3.2} & \textbf{31.8} & {13.4} & \textbf{8.1} & \textbf{5.9} & \textbf{1.6} & \textbf{14.8} & \textbf{14.3} & \textbf{25.0} & \textbf{8.5} & \textbf{15.7}\\
\bottomrule
\end{tabular}
\label{mpr_vol}
\vspace{-4mm}
\end{table*}
\textbf{Instance location prediction:} Recall that PFN predicts the spatial coordinates (6 dimensions) of the center, top-left corner and bottom-right corner of the bounding box for each pixel. We also extensively evaluate five other kinds of instance location predictions: 1) ``PFN offsets (2)", which predicts the offsets of each pixel with respect to the center of its object instance; 2) ``PFN centers (2)", where only the coordinates of the object instance center are predicted for each pixel; 3) ``PFN centers, w,h(4)", which predicts the center, width and height of each instance; 4) ``PFN centers, topleft(4)", where the center and top-left corner of each instance are predicted; 5) ``PFN +topright, bottomleft (10)", which additionally predicts the top-right and bottom-left corners of each instance besides those in ``PFN". The performance is obtained by changing the channel number in the final prediction layer accordingly.
``PFN offsets (2)" gives dramatically inferior performance to ``PFN" (51.0\% vs 58.7\% in $AP^r$ and 47.3\% vs 52.3\% in $AP^r_{vol}$). The main reason may be that offsets can take negative values, which may be difficult to optimize. By only predicting the centers of object instances, ``PFN centers (2)" also shows inferior $AP^r$ performance to ``PFN". The reason may be that predicting the top-left and bottom-right corners brings more information about object scales and spatial layouts. ``PFN centers, topleft(4)" and ``PFN centers, w,h(4)" theoretically predict the same information about instance locations, and accordingly achieve similar results in $AP^r$, 57.8\% vs 58.2\%, respectively. It is worth noting that the 6-dimensional predictions of ``PFN" capture redundant information compared to ``PFN centers, topleft(4)" and ``PFN centers, w,h(4)". The superiority of ``PFN" over these two variants (0.5\% and 0.9\%, respectively) can be mainly attributed to the effectiveness of model combination. In our preliminary experiments, the version that predicts the top-left and bottom-right corner coordinates achieves nearly identical performance to ``PFN centers, topleft(4)", because the information captured by centers and top-left corners is equivalent to that captured by top-left and bottom-right corners. Clustering on combined features with additional information can thus be regarded as a form of model combination, which is widely used in object classification challenges~\cite{szegedy2014going}. Moreover, we test the performance of introducing two additional points (the top-right and bottom-left corners) as predictions. Only a slight improvement (0.3\%) of ``PFN +topright, bottomleft (10)" over ``PFN" can be observed, while more parameters and computation memory are required.
We thus only adopt the setting of predicting the center, top-left and bottom-right corners for all our other experiments.
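As an illustration of this adopted 6-dimensional target, the sketch below shows one possible way to build per-pixel location targets (normalized center, top-left and bottom-right corners of the enclosing instance box) from an instance label map. The function name and the normalization by image size are our own illustrative choices, not necessarily the exact implementation used in PFN:

```python
import numpy as np

def pixelwise_location_targets(inst_map, h, w):
    """For each foreground pixel, the normalized center, top-left and
    bottom-right corners of its instance's bounding box -> (h, w, 6)."""
    targets = np.zeros((h, w, 6), dtype=np.float32)
    for inst_id in np.unique(inst_map):
        if inst_id == 0:                       # 0 = background
            continue
        ys, xs = np.nonzero(inst_map == inst_id)
        y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
        cy, cx = (y0 + y1) / 2.0, (x0 + x1) / 2.0
        box = np.array([cx / w, cy / h, x0 / w, y0 / h, x1 / w, y1 / h])
        targets[ys, xs] = box                  # every pixel of the instance
    return targets

inst = np.zeros((8, 8), dtype=int)
inst[1:4, 1:4] = 1                             # one 3x3 instance
t = pixelwise_location_targets(inst, 8, 8)
print(t[2, 2])                                 # identical for all pixels of instance 1
```

Every pixel belonging to the same instance receives the same 6-vector, which is exactly why pixels with identical (or close) predictions can later be grouped into one instance.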
\textbf{Network Structure:} Extensive evaluations of different network structures are also performed. First, the effectiveness of multi-scale prediction is verified. ``PFN w/o multiscale" shows the performance of using only a single prediction layer for the pixel-wise instance locations. The performance decreases by $3.5\%$ in $AP^r$ compared with ``PFN". This significant inferiority demonstrates the effectiveness of the multi-scale fusion that incorporates local fine details and global semantic information into predicting the pixel-wise instance locations.
Note that the spatial coordinates of each pixel are utilized as additional feature maps for predicting the instance locations. The benefit of using spatial coordinates can be seen by comparing ``PFN w/o coordinate maps" with ``PFN", i.e. a 0.7\% difference in $AP^r$. The coordinate maps help the network predict the absolute spatial layouts of object instances more precisely, so that the convolutional filters can focus on learning the relative spatial offsets.
In the fusion layer for predicting pixel-wise instance locations, ``PFN" utilizes a concatenation operation instead of element-wise summation for multi-scale prediction. ``PFN fusing\_summation" shows a 0.4\% decrease in $AP^r$ compared to ``PFN". The $1\times1$ convolutional filters adaptively weigh the contribution of the instance location prediction at each scale, which is more reasonable and experimentally more effective than simple summation.
\textbf{Instance Number Prediction:} We explore other options for predicting the instance numbers of all categories for each image. ``PFN w/o category-level" only utilizes the instance location predictions as the feature maps for predicting instance numbers, and the category-level information is totally ignored. The large gap between ``PFN w/o category-level" and ``PFN" (51.4\% vs 58.7\%) verifies the importance of using category-level information for predicting instance numbers. Because the instance location predictions only capture the total instance number over all categories while category-level information is discarded, the exact instance number for a specific category cannot be inferred. The importance of incorporating instance-level information is also verified by comparing ``PFN w/o instance-level" with ``PFN", 57.9\% vs 58.7\% in $AP^r$. This shows that the instance number prediction can benefit from the pixel-wise instance location prediction, where more fine-grained annotations (pixel-wise instance-level locations) are provided for learning better feature maps.
We also evaluate the performance of sequentially optimizing the instance locations and the instance number instead of using one unified network. ``PFN separate\_finetune" first optimizes the network for predicting pixel-wise instance locations, then fixes the current network parameters and only trains the newly added parameters for instance number prediction. The performance decrease of ``PFN separate\_finetune" compared to ``PFN" (58.1\% vs 58.7\% in $AP^r$) demonstrates the effectiveness of training one unified network: the global information from the instance numbers can be exploited for predicting more accurate instance locations.
\textbf{Testing Strategy:} We also test different strategies for generating the final instance-level segmentations during testing. Note that during spectral clustering, the similarity of two pixels is computed from both the predicted instance locations (6 dimensions) and the two spatial coordinates of each pixel. By eliminating the coordinates from the similarity function, a significant decrease in $AP^r$ can be observed when comparing ``PFN w/o coordinates" with ``PFN", 55.3\% vs 58.7\%. This verifies that the spatial coordinates enhance the local neighboring connections during clustering, which leads to more reasonable and meaningful instance-level segmentation results.
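The clustering step described above can be sketched as follows: each foreground pixel is represented by its predicted instance locations concatenated with its spatial coordinates, and spectral clustering is run on a Gaussian affinity over these features. The minimal numpy-only version below bipartitions pixels with the sign of the second eigenvector of the normalized Laplacian; the actual pipeline would handle an arbitrary, predicted number of clusters, and the synthetic two-instance data here is purely illustrative:

```python
import numpy as np

def spectral_bipartition(feats, sigma=1.0):
    """Split points into 2 clusters: Gaussian affinity on the feature
    vectors, normalized Laplacian, sign of the second eigenvector."""
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    d = W.sum(1)
    L = np.eye(len(feats)) - W / np.sqrt(np.outer(d, d))  # normalized Laplacian
    _, vecs = np.linalg.eigh(L)                            # ascending eigenvalues
    return (vecs[:, 1] > 0).astype(int)                    # Fiedler vector sign

rng = np.random.default_rng(0)
# two instances: pixels share their instance's predicted box, differ in coords
a = np.hstack([np.tile([0.2] * 6, (20, 1)), rng.uniform(0.0, 0.3, (20, 2))])
b = np.hstack([np.tile([0.8] * 6, (20, 1)), rng.uniform(0.7, 1.0, (20, 2))])
labels = spectral_bipartition(np.vstack([a, b]), sigma=0.5)
print(labels)
```

Because pixels of the same instance predict (nearly) the same 6-vector, their affinities are large and the spectral split recovers the two instances.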
Recall that two steps are used for post-processing: refining the segmentation results with the instance number prediction and constraining the cluster size during clustering. We extensively evaluate the effectiveness of these two steps. Comparing the $56.4\%$ $AP^r$ of ``PFN w/o classify+size", which eliminates both steps, with the $58.7\%$ of ``PFN" shows that better performance can be obtained by leveraging the instance number prediction and constraining the cluster size to refine the instance-level segmentation. After eliminating only the cluster-size constraint, a $0.9\%$ decrease can be observed. This demonstrates that constraining the cluster size helps reduce the effect of noisy background pixels to some extent, yielding more robust instance-level segmentation results. On the other hand, the incorporation of the instance number prediction improves $AP^r$ by $1.3\%$, as seen by comparing ``PFN w/o classify" with ``PFN". In particular, significant improvements can be observed for easily confused categories, e.g. 73.7\% vs 66.9\% for ``cow", 57.7\% vs 51.4\% for ``sheep" and 81.7\% vs 76.4\% for ``horse". This demonstrates the effectiveness of using the instance number prediction to refine the pixel-wise segmentation results.
\textbf{Upperbound:} Finally, we also evaluate the limitations of our current algorithm. First, ``PFN upperbound\_instnum" reports the performance of using the ground-truth instance numbers for clustering, with all other experimental settings kept the same. Only a $2.6\%$ improvement in $AP^r$ is obtained: the errors from instance number prediction are already small and have only a small effect on the final instance-level segmentation. Second, the upperbound for instance location prediction is reported as ``PFN upperbound\_instloc" by using the ground-truth instance locations of each pixel as the features for clustering. The large gap between the $64.7\%$ of ``PFN upperbound\_instloc" and the $58.7\%$ of ``PFN" verifies that accurate instance location prediction is critical for good instance-level segmentation. Note that the current category-level segmentation only achieves a $67.53\%$ pixel-wise IoU score, which largely limits the performance of our instance-level segmentation since the clustering is performed on the category-level segmentation. A better category-level segmentation network architecture would therefore help improve the performance of instance-level segmentation under our PFN framework.
\vspace{-4mm}
\subsection{Visual Illustration}
Figure~\ref{fig:visual} visualizes the instance-level segmentation results predicted by our PFN. Note that we cannot visually compare with the two state-of-the-art methods because they generate several region proposals for each instance and can only visualize the top detection results for each image, as described in their papers. In contrast, our method directly produces exact region segments for each object instance, just like the results of category-level segmentation. Different colors indicate different object instances in the instance-level segmentation results. The semantic labels of our instance-level segmentation results are exactly the same as those in the category-level segmentation results. It can be observed that the proposed PFN performs well in predicting object instances under heavy occlusion, large background clutter and complex scenes. For example, the first three rows show results on images with very complex background clutter, several heavily occluded objects and diverse appearance patterns. The predicted instance-level segmentations are highly consistent with the ground-truth annotations, and heavily occluded object instances can also be visually distinguished, such as in the first images of the first and third rows. The fourth row illustrates images with very small object instances, such as birds and potted plants. The fifth row shows examples where the object instances in one image have high appearance similarity and heavy mutual occlusion. The remaining results show more examples under diverse scenarios with very challenging poses, scales, views and occlusion. These visualization results further demonstrate the effectiveness of the proposed PFN.
{We also show some failure cases of our PFN in Figure~\ref{fig:failure}. Instances with heavy occlusion and some small object instances are difficult to identify and segment. In future work, we will make further efforts to tackle these more challenging cases.}
\begin{figure*}
\begin{center}
\includegraphics[scale=1.0]{figure/visualization.pdf}
\caption{{Illustration of instance-level object segmentation results by the proposed PFN. For each image, we show the ground-truth instance-level segmentation, the category-level segmentation and the predicted instance-level segmentation results sequentially. Note that for instance-level segmentation results, different colors only indicate different object instances and do not represent the semantic categories. In terms of category-level segmentation, different colors are used to denote different semantic labels. Best viewed in color.}}
\label{fig:visual}
\end{center}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[scale=0.35]{figure/failurecase.pdf}
\vspace{-3mm}
\caption{{Illustration of failure cases. Our PFN may fail to segment object instances with heavy occlusion (in first row) and small instances (in second row).}}
\vspace{-8mm}
\label{fig:failure}
\end{center}
\end{figure}
\section{ Conclusion and Future Work}
{In this paper, we present an effective proposal-free network for fine-grained instance-level segmentation. Instead of requiring extra region proposal methods, PFN directly predicts the instance location vector for each pixel that belongs to a specific instance, as well as the instance numbers of all categories. The pixels that predict the same or close instance locations can be directly regarded as belonging to the same object instance. During testing, simple spectral clustering is performed on the predicted pixel-wise instance locations to generate the final instance-level segmentation results, and the predicted instance numbers of all categories are employed to indicate the cluster number for each category. Significant improvements over the state-of-the-art methods are achieved by PFN on the PASCAL VOC 2012 segmentation benchmark. Extensive evaluations of the different components of PFN are conducted to validate the effectiveness of our method. Our PFN, without requiring complicated pre-processing or post-processing, is much simpler to implement and has much lower computational cost than previous state-of-the-art methods. In the future, we plan to extend our framework to generic multiple instances in outdoor and indoor scenes, which may have higher degrees of clutter and occlusion.}
\vspace{-2mm}
\bibliographystyle{ieee}
Recent high-throughput technologies bring to the statistical community new types of data that are increasingly large, heterogeneous and complex. Assessing significance in such a context is particularly challenging because of the number of questions that naturally come up. A popular statistical approach is to adjust for multiplicity by controlling the False Discovery Rate (FDR), which is defined as the expected proportion of errors among the items declared significant. Once the amount of possible false discoveries is controlled, the question of increasing the power, that is, the amount of true discoveries, arises naturally. It is well known that power can be increased by clustering the null hypotheses into homogeneous groups. The latter can be derived in several ways:
\begin{itemize}
\item sample size: a first example is the well-studied Adequate Yearly Progress (AYP) data set \citep{rogosa2005accuracy}, which compares the results of mathematics tests between socioeconomically advantaged and disadvantaged students in Californian high schools. As studied by \citet{cai2009simultaneous}, ignoring the sizes of the schools tends to favor large schools among the detections, simply because large schools have more students and not because the effect is stronger. By grouping the schools into small, medium and large schools, more rejections are allowed among the small schools, which increases the overall detection capability. This phenomenon also appears in larger-scale studies, as in GWAS (Genome-Wide Association Studies) by grouping hypotheses according to allelic frequencies \citep{sun2006stratified}, or in microarray experiments by grouping the genes according to DNA copy number status \citep{roquain2009optimal}.
\item spatial structure: some data sets naturally involve a spatial (or temporal) group structure. A typical example is neuroimaging: in \citet{schwartzman2005cross}, diffusion tensor imaging brain scans of 6 normal and 6 dyslexic children are compared on 15443 voxels. By estimating the null densities of the voxels of the front and back halves of the brain, some authors highlight a noteworthy difference, which suggests that analysing the data with two groups of hypotheses is more appropriate, see \citet{efron2008simultaneous} and \citet{cai2009simultaneous}.
\item hierarchical relation: groups can be derived from previous knowledge on hierarchical structure, like pathways for genetic studies, based for example on known ontologies (see e.g. \citet{ashburner2000gene}). Similarly, in clinical trials, the tests are usually grouped in primary and secondary endpoints, see \citet{dmitrienko2003gatekeeping}.
\end{itemize}
In these examples, while ignoring the group structure can lead to overly conservative procedures, this knowledge can easily be incorporated by using weights. This method can be traced back to \citet{holm1979simple}, who presented a sequentially rejective Bonferroni procedure that controls the Family-Wise Error Rate (FWER) and adds weights to the $p$-values. Weights can also be put on the type-I error criterion instead of the $p$-values, as presented in \citet{benjamini1997multiple} with the so-called weighted FDR. \citet{blanchard2008two} generalized the two approaches by weighting both the $p$-values and the criterion, using a finite positive measure to weight the criterion (see also \citet{ramdas2017unified} for recent further generalizations). \citet{genovese2006false} introduced the $p$-value weighted BH procedure (WBH), which has been extensively used afterwards with different choices of weights. \citet{roeder2006using,roeder2009genome} built the weights upon genomic linkage, to favor regions of the genome with strong linkage.
\citet{hu2010false} calibrated the weights by estimating the proportion of true nulls inside each group (a procedure named HZZ here). \citet{zhao2014weighted} went one step further by improving HZZ and BH with weights that maximize the number of rejections at a threshold computed from HZZ and BH. They proposed two procedures, Pro1 and Pro2, shown to control the FDR asymptotically and to be more powerful than BH and HZZ.
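For reference, the $p$-value weighted BH procedure (WBH) of \citet{genovese2006false} mentioned above amounts to applying the usual step-up rule to the weighted $p$-values $p_i/w_i$, with weights averaging to one. A minimal sketch:

```python
import numpy as np

def weighted_bh(pvals, weights, alpha=0.05):
    """Weighted BH: apply the BH step-up rule to the weighted
    p-values p_i / w_i, where the weights are normalized to mean 1."""
    p = np.asarray(pvals, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w * len(w) / w.sum()                    # mean(w) = 1
    q = p / w
    order = np.argsort(q)
    m = len(q)
    thresh = alpha * np.arange(1, m + 1) / m    # BH step-up thresholds
    below = q[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

# with unit weights this reduces to plain BH
p = np.array([0.001, 0.008, 0.039, 0.041, 0.6, 0.9])
print(weighted_bh(p, np.ones(6), alpha=0.05))  # only the first two rejected
```

Putting a larger weight on a hypothesis lowers its weighted $p$-value and makes it easier to reject, at the expense of the others; the question addressed in this paper is how to choose these weights optimally.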
However, the problem of finding optimal weights (in the sense of achieving a maximal average number of rejected false nulls) has only scarcely been considered in the literature. For FWER control and Gaussian test statistics, \citet{wasserman2006weighted} designed oracle and data-driven optimal weights, while \citet{dobriban2015optimal} considered a Gaussian prior on the signal. For FDR control, \citet{roquain2009optimal} designed oracle optimal weights by using the knowledge of the distribution of the hypotheses under the alternative. Unfortunately, this knowledge is not available in practice. This leads to the natural idea of estimating the oracle optimal weights by maximizing the number of rejections. This idea has been followed by \citet{ignatiadis2016data} with a procedure called IHW. While they proved that IHW controls the FDR, its power properties have not been considered. In particular, it is unclear whether maximizing the overall number of rejections is appropriate in order to maximize power.
In this paper, we present a general solution to the problem of optimal data-driven weighting for the BH procedure in the case of grouped null hypotheses. The new class of procedures is called ADDOW (for Adaptive Data-Driven Optimal Weighting). Under mild assumptions, we show that ADDOW asymptotically controls the FDR and has optimal power among all weighted step-up procedures. Interestingly, our study shows that the heterogeneity with respect to the proportion of true nulls should be taken into account in order to attain optimality. This fact seems to have been ignored so far: for instance, we show that IHW is optimal when the true nulls are evenly distributed across groups but that its performance can quickly deteriorate otherwise.
In Section~\ref{section_framework}, we present the mathematical model and assumptions. In Section~\ref{section_weighting}, we define what a weighted step-up procedure is. In Section~\ref{section_addow}, we introduce ADDOW along with a stabilized version, designed to deal with the overfitting problem due to weak signal. Section~\ref{section_results} provides our main theoretical results. Our numerical simulations are presented in Section~\ref{section_simulations}, and we conclude in Section~\ref{section_conclusion} with a discussion. The proofs of the two main theorems are given in Section~\ref{section_proof_thm} and more technical results are deferred to the appendix. Let us underline that an effort has been made to keep the proofs short and concise, while remaining as clear as possible.
In all the paper, the probabilistic space is denoted $\left(\Omega,\mathcal{A},\mathbb{P}\right)$. The notations $\overset{a.s.}{\longrightarrow}$ and $\overset{\mathbb{P}}{\longrightarrow}$ stand for the convergence almost surely and in probability.
A ``+'' symbol is used to indicate that two cases (A) and (B) are simultaneously satisfied: (A)+(B).
\section{Setting}
\label{section_framework}
\subsection{Model}
\label{subsection_model}
We consider the following stylized grouped $p$-value modeling: let $G\geq2$ be the number of groups. In each group $g\in\{1,\dotsc,G\}$, let $\left(H_{g,1}, H_{g,2},\dotsc\right)$ be some binary variables corresponding to the null hypotheses to be tested in this group, with $H_{g,i}=0$ if it is true and $=1$ otherwise. Consider in addition $\left(p_{g,1}, p_{g,2},\dotsc\right)$ some random variables in $[0,1]$ where each $p_{g,i}$ corresponds to the $p$-value testing $H_{g,i}$.
We make the following marginal distributional assumption for $p_{g,i}$: if $H_{g,i}=0$, $p_{g,i}$ follows a uniform distribution on $[0,1]$. We denote by $U:x\mapsto \ind{x>0} \times \min(x,1)$ its cumulative distribution function (c.d.f.). If $H_{g,i}=1$, $p_{g,i}$ follows a common distribution corresponding to c.d.f. $F_g$. In particular, note that the $p$-values are assumed to have the same alternative distribution within each group. We make the mild assumption that $F_g$ is strictly concave on $[0,1]$ (and thus is also continuous on $\mathbb{R}$, see Lemma~\ref{Fgcont}). Furthermore, by concavity, $x\mapsto \frac{F_g(x)-F_g(0)}{x-0}$ has a right limit in 0 that we denote by $f_g(0^+)\in[0,\infty]$, and $x\mapsto \frac{F_g(x)-F_g(1)}{x-1}$ has a left limit in 1 that we denote by $f_g(1^-)\in[0,\infty)$.
Above, we considered an infinite set of hypotheses/$p$-values because our study will be asymptotic in the number of tests $m$. At step $m$, we observe the $p$-values $p_{g,i}$, $g\leq G$, $i\leq m_g$, where the $m_g$ are non-decreasing integer sequences depending on $m$ and such that $\sum_{g=1}^G m_g=m$. Let us emphasize that $G$ is kept fixed throughout the paper while $m$ grows. Denote also $m_{g,1}=\sum_{i=1}^{m_g}H_{g,i}$ the number of false nulls and $m_{g,0}=m_g-m_{g,1}$ the number of true nulls in group $g$. We assume that there exist $\pi_g>0$ and $\pi_{g,0}>0$ such that, for all $g$, $m_g/m\to\pi_g$ and $m_{g,0}/m_g\to\pi_{g,0}$ as $m\to\infty$. For each $g$ we also assume that $ \pi_{g,1}=1- \pi_{g,0}>0$. These assumptions mean that, asymptotically, no group, and no proportion of signal or sparsity, is vanishing. We denote by $\pi_0=\sum_g \pi_g \pi_{g,0}$ the weighted mean of the $\pi_{g,0}$'s and denote the particular case where the nulls are evenly distributed across groups by~\eqref{ED}:
\begin{equation}
\pi_{g,0}=\pi_0 , \:\:1\leq g\leq G.
\label{ED}\tag{ED}
\end{equation}
Let us now specify assumptions on the joint distribution of the $p$-values. While we make no assumption on the $p$-value dependence between two different groups, we assume that the $p$-values are weakly dependent within each group:
\begin{equation}
\frac{1}{m_{g,0}} \sum_{i=1}^{m_g}\ind{p_{g,i}\leq t, H_{g,i}=0} \overset{a.s.}{\longrightarrow} U(t) ,\:\: t \geq0,
\label{wd0}
\end{equation}
and
\begin{equation}
\frac{1}{m_{g,1}} \sum_{i=1}^{m_g}\ind{p_{g,i}\leq t, H_{g,i}=1} \overset{a.s.}{\longrightarrow} F_g(t) ,\:\: t \geq0 .
\label{wd1}
\end{equation}
This assumption is mild and classical, see \citet{storey2004strong}. Note that weak dependence is trivially achieved if the $p$-values are independent.
\subsection{$\pi_{g,0}$ estimation}
\label{subsection_estimation}
For each $g$, let us assume we have at hand an estimator $\hat \pi_{g,0}\in(0,1]$ of $m_{g,0}/m_g$
and assume that $\hat \pi_{g,0}\overset{\mathbb{P}}{\longrightarrow} \bar \pi_{g,0}$ for some $\bar \pi_{g,0}\geq \pi_{g,0}$. Let also $\bar \pi_0=\sum_g \pi_g \bar\pi_{g,0}$.
In our setting, this assumption can be fulfilled by using the estimators introduced in \citet{storey2004strong}:
\begin{equation}
\hat \pi_{g,0}(\lambda)=\frac{1-\frac{1}{m_g}\sum_{i=1}^{m_g}\ind{p_{g,i}\leq\lambda}+\frac{1}{m} }{1-\lambda},
\label{storey}
\end{equation}
for an arbitrary parameter $\lambda\in(0,1)$ (the $\frac{1}{m}$ term is only there to ensure $\hat \pi_{g,0}(\lambda)>0$). It is easy to deduce from~\eqref{wd0} and~\eqref{wd1} that $\frac{1}{m_g}\sum_{i=1}^{m_g}\ind{p_{g,i}\leq\lambda}\overset{a.s.}{\longrightarrow} \pi_{g,0}\lambda+\pi_{g,1}F_g(\lambda)$, which provides our condition:
$$ \hat \pi_{g,0}(\lambda) \overset{a.s.}{\longrightarrow} \pi_{g,0}+\pi_{g,1}\frac{1-F_g(\lambda)}{1-\lambda} \geq \pi_{g,0} .$$
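As an illustration, the estimator in~\eqref{storey} takes only a few lines of code. The sketch below is ours: the Beta alternative and all numerical values are toy choices, not taken from the text, and in practice one may also truncate the estimate at 1.

```python
import numpy as np

def storey_estimator(p, lam, m):
    """Storey-type estimator of equation (storey):
    (1 - empirical c.d.f. of the p-values at lambda + 1/m) / (1 - lambda).
    The 1/m term only ensures strict positivity."""
    p = np.asarray(p)
    return (1.0 - np.mean(p <= lam) + 1.0 / m) / (1.0 - lam)

# Toy single-group mixture: 80% uniform nulls, 20% Beta(0.2, 1) alternatives
# (whose c.d.f. x^0.2 is strictly concave, as required by the model above).
rng = np.random.default_rng(0)
m_g = 100_000
is_null = rng.uniform(size=m_g) < 0.8
p = np.where(is_null, rng.uniform(size=m_g), rng.beta(0.2, 1.0, size=m_g))

est = storey_estimator(p, lam=0.5, m=m_g)
# Limit: pi_{g,0} + pi_{g,1} (1 - F_g(0.5)) / (1 - 0.5) ~= 0.8 + 0.2 * 0.259 ~= 0.852
```

As expected, the estimate overshoots $\pi_{g,0}=0.8$, illustrating the conservativeness $\bar\pi_{g,0}\geq\pi_{g,0}$.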
While $(\bar\pi_{g,0})_g$ is left arbitrary in our setting, some particular cases will be of interest in the sequel. The first is the Evenly Estimation case~\eqref{EE}, where
\begin{equation}
\bar\pi_{g,0}=\bar\pi_0 ,\:\: 1\leq g\leq G.
\label{EE}\tag{EE}
\end{equation}
In that case, our estimators all share the same limit, and thus do not take into account the heterogeneity of the proportion of true nulls. As we will see,~\eqref{EE} is relevant when the proportion of true nulls is homogeneous across groups, that is, when~\eqref{ED} holds. A particular subcase of~\eqref{EE} is the Non Estimation case~\eqref{NE} where:
\begin{equation}
\hat\pi_{g,0}=1
\text{ which implies } \bar\pi_{g,0}=1 ,\:\: 1\leq g\leq G.
\label{NE}\tag{NE}
\end{equation}
In the latter case, the $\pi_{g,0}$ estimation step is simply skipped.
Finally, let us introduce the Consistent Estimation case~\eqref{CE} for which the estimators $\hat \pi_{g,0}$ are assumed to be all consistent:
\begin{equation}
\bar\pi_{g,0}=\pi_{g,0} ,\:\:1\leq g\leq G.
\label{CE}\tag{CE}
\end{equation}
While this corresponds to a favorable situation, this assumption can be met in classical situations, where $f_g(1^-)=0$ and $\lambda=\lambda_m$ tends to 1 slowly enough in definition~\eqref{storey}, see Lemma~\ref{PAindepgauss} in Section~\ref{subsection_prooflemmas}.
The condition $f_g(1^-)=0$ is called "purity" in the literature. It was introduced in \citet{genovese2004stochastic} and then studied in depth, along with the convergence of Storey estimators, in \citet{neuvial2013asymptotic}.
\subsection{Criticality}
\label{subsection_crit}
To study asymptotic FDR control and power, it is convenient to focus only on situations where some rejections are possible (both the power and the FDR converging to 0 otherwise). To this end, \citet{chi2007performance} introduced the notion of criticality: they defined a critical alpha level, denoted $\alpha^*$, for which the BH procedure has no asymptotic power when $\alpha<\alpha^*$ (see also \citet{neuvial2013asymptotic} for a link between criticality and purity).
We extend this notion of criticality to our heterogeneous setting in Section~\ref{subsection_prooflemmas} (see Definition~\ref{def_alpha}) and focus in our results on the supercritical case $\alpha\in(\alpha^*,1)$. Lemma~\ref{alphacrit} states that $\alpha^*<1$, so such an $\alpha$ always exists.
While the formal definition of $\alpha^*$ is deferred to the appendix for the sake of clarity, let us emphasize that it depends on the $(F_g)_g$, $(\pi_g)_g$, $(\pi_{g,0})_g$ and, perhaps less intuitively, on the $(\bar\pi_{g,0})_g$, which means that the choice of the estimators changes the value of $\alpha^*$.
\subsection{Leading example}
\label{subsection_gaussian}
While our framework allows a general choice for $F_g$, a canonical example that we have in mind is the Gaussian one-sided framework where the test statistic in group $g$ follows $\mathcal{N}(0,1)$ under the null, while it follows $\mathcal{N}(\mu_g,1)$ under the alternative, for $G$ unknown parameters $\mu_g>0$.
Classically, this corresponds to considering $p$-values uniform under the null with alternative c.d.f. given by $$F_g(\cdot)=\bar\Phi\left(\bar\Phi^{-1}(\cdot)-\mu_g \right),$$ with derivative $$f_g(\cdot) = \exp\left(\mu_g\left(\bar\Phi^{-1}(\cdot)-\frac{\mu_g}{2} \right) \right),$$ where we denoted $\bar\Phi(z)=\mathbb{P}\left(Z\geq z \right)$ for $Z\sim\mathcal{N}(0,1)$. Hence $F_g$ is strictly concave and this framework fulfills the assumptions of Section~\ref{subsection_model}.
Furthermore, one easily checks that $f_g(0^+)=\infty$ and $f_g(1^-)=0$, which means that this framework is supercritical ($\alpha^*=0$, see Definition~\ref{def_alpha}) and satisfies purity, hence consistent estimation~\eqref{CE} is achievable.
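As a sanity check, these formulas are easy to evaluate numerically. The sketch below uses SciPy ($\bar\Phi$ is `norm.sf` and $\bar\Phi^{-1}$ is `norm.isf`); the chosen value $\mu_g=2$ and the evaluation points are ours.

```python
import numpy as np
from scipy.stats import norm

def F_g(x, mu):
    """Alternative p-value c.d.f.: F_g(x) = Phi_bar(Phi_bar^{-1}(x) - mu)."""
    return norm.sf(norm.isf(x) - mu)

def f_g(x, mu):
    """Its derivative: exp(mu * (Phi_bar^{-1}(x) - mu / 2))."""
    return np.exp(mu * (norm.isf(x) - mu / 2.0))

mu = 2.0
# f_g is decreasing (strict concavity of F_g), blows up at 0+ and vanishes at 1-:
vals = f_g(np.array([1e-8, 0.1, 0.9, 1 - 1e-8]), mu)
```

A finite-difference check confirms that `f_g` is indeed the derivative of `F_g`.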
\subsection{Criteria}
\label{subsection_criterions}
The set of indices corresponding to true nulls is denoted by $\mathcal{H}_0$, that is $(g,i)\in\mathcal{H}_0$ if and only if $H_{g,i}=0$, and we also denote $\mathcal{H}_1=\comp{\mathcal{H}_0}$.
In this paper, we define a multiple testing procedure $R$ as a set of indices that are rejected: $p_{g,i}$ is rejected if and only if $(g,i)\in R$. The False Discovery Proportion (FDP) of $R$, denoted by $\FDP(R)$, is defined as the number of false discoveries divided by the number of rejections if there are any, and 0 otherwise:
$$ \FDP(R)=\frac{\left|R \cap\mathcal{H}_0 \right|}{\left|R\right| \vee1} .$$
We denote $\FDR(R)=\mathbb{E}\left[ \FDP(R) \right]$ the FDR of $R$. Its power, denoted $\Pow(R)$, is defined as the mean number of true positives divided by $m$:
$$\Pow(R)=m^{-1}\mathbb{E}\left[\left|R \cap\mathcal{H}_1 \right|\right] .$$
Note that our power definition is slightly different from the usual one, for which the number of true discoveries is divided by $m_1=\sum_g m_{g,1}$ instead of $m$. This simplifies our expressions (see Section~\ref{subsection_notation}) and has no consequence, because the two definitions differ only by a multiplicative factor converging to $1-\pi_0\in(0,1)$ as $m\to\infty$.
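In code, these two criteria read as follows (a small sketch over boolean index masks of our own making):

```python
import numpy as np

def fdp(R, H0):
    """False discovery proportion |R ∩ H_0| / (|R| ∨ 1); R and H0 are boolean
    arrays over all indices (g, i): True = rejected / true null, respectively."""
    return np.sum(R & H0) / max(np.sum(R), 1)

def emp_power(R, H0):
    """Empirical counterpart of our power criterion: |R ∩ H_1| / m."""
    return np.sum(R & ~H0) / R.size

H0 = np.array([True, True, False, False, False])  # 2 true nulls, 3 false nulls
R = np.array([True, False, True, True, False])    # 3 rejections, 1 of them false
```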
\section{Weighting}
\label{section_weighting}
\subsection{Weighting the BH procedure}
\label{subsection_MWBH}
Say we want to control the FDR at level $\alpha$. Assuming that the $p$-values are arranged in increasing order $p_{(1)}\leq \dotsc \leq p_{(m)} $ with $p_{(0)}=0$, the classical BH procedure consists in rejecting all $p_{g,i}\leq \alpha \frac{\hat k}{m}$ where $\hat k= \max\left\{k\geq0 : p_{(k)}\leq \alpha \frac{ k}{m} \right\}$.
Take a nondecreasing function $h$ defined on $[0,1]$ such that $h(0)=0$ and $h(1)\leq 1$, and denote $\mathcal{I}(h)=\sup\left\{ u\in[0,1] : h(u)\geq u \right\}.$ Some properties of the functional $\mathcal{I}(\cdot)$ are gathered in Lemma~\ref{IG}; in particular, $h\left(\mathcal{I}(h)\right)=\mathcal{I}(h)$. We now reformulate BH with the use of $\mathcal{I}(\cdot)$, which is more convenient when dealing with asymptotics. Doing so, we follow the formalism notably used in \citet{roquain2009optimal} and \citet{neuvial2013asymptotic}. Define the empirical function
$$\widehat G : u \mapsto m^{-1}\sum_{g=1}^G\sum_{i=1}^{m_g}\mathds{1}_{\{ p_{g,i}\leq \alpha u \}},$$
then ${\hat k}=m\times\mathcal{I}( \widehat G )$. This is a particular case of Lemma~\ref{hatuegalIG}.
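The equivalence of the two formulations can be checked numerically. In the sketch below (the toy $p$-values are ours), the crossing version uses that, on the grid $u=k/m$, $\widehat G(k/m)\geq k/m$ amounts to $\#\{p_{g,i}\leq \alpha k/m\}\geq k$:

```python
import numpy as np

def bh_k_stepup(p, alpha):
    """hat k = max{k >= 0 : p_(k) <= alpha k / m} (classical step-up form)."""
    m = len(p)
    ok = np.nonzero(np.sort(p) <= alpha * np.arange(1, m + 1) / m)[0]
    return ok[-1] + 1 if ok.size else 0

def bh_k_crossing(p, alpha):
    """m * I(hat G): last grid point u = k/m where hat G(u) >= u."""
    m = len(p)
    for k in range(m, 0, -1):
        if np.sum(p <= alpha * k / m) >= k:
            return k
    return 0

p = np.array([0.001, 0.008, 0.039, 0.041, 0.042,
              0.06, 0.074, 0.205, 0.212, 0.216])
```

Both functions return $\hat k=6$ for these ten $p$-values at $\alpha=0.1$, echoing the $m=10$, six-rejection situation depicted in Figure~\ref{I_of_hat_G} (our $p$-values are made up, not those of the figure).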
The graphical representation of the two points of view on BH is depicted in Figure~\ref{I_of_hat_G} with $m=10$. The $p$-values are plotted on the right part of the figure along with the function $k\mapsto \alpha k/m$, and we see that the last $p$-value under the line is the sixth one. On the left, the function $\widehat G$ corresponding to these $p$-values is displayed alongside the identity function, with the last crossing point located between the sixth and seventh jumps; thus $\mathcal{I}( \widehat G )=6/m$ and 6 $p$-values are rejected.
\begin{figure}
\centering\includegraphics[width=\linewidth]{illu_I_of_hat_G.pdf}
\caption{\footnotesize The BH procedure applied to a set of 10 $p$-values. Right plot: the $p$-values and the function $k\to\alpha k/m$. Left plot: identity function and $\widehat G$. Each plot shows that 6 $p$-values are rejected.}
\label{I_of_hat_G}
\end{figure}
The weighted BH (WBH) with weight vector $w\in \mathbb{R}^G_+$ is defined by computing
$$\widehat G_w : u \mapsto m^{-1}\sum_{g=1}^G\sum_{i=1}^{m_g}\mathds{1}_{\{ p_{g,i}\leq \alpha u w_g \}}$$
and rejecting all $p_{g,i}\leq \alpha \mathcal{I}\left(\widehat G_w\right) w_g$. We denote it $\WBH(w)$. Note that $w$ is allowed to be random, hence it can be computed from the $p$-values. In particular, $\BH=\WBH(\bm{1})$ where $\bm{1}=(1,\dots,1)\in\mathbb{R}^G_+$.
Following \citet{roquain2009optimal}, to deal with optimal weighting, we need to further generalize WBH into a multi-weighted BH (MWBH) procedure by introducing a weight function $W : [0,1]\to \mathbb{R}^G_+$, which can be random, such that the following function:
\begin{equation}
\widehat G_W : u \mapsto m^{-1}\sum_{g=1}^G\sum_{i=1}^{m_g}\mathds{1}_{\{ p_{g,i}\leq \alpha u W_g(u) \}},
\label{eq_def_gW}
\end{equation}
is nondecreasing. The resulting procedure rejects all the $p$-values such that $p_{g,i}\leq \alpha \hat u_W W_g(\hat u_W)$ and is denoted $\MWBH(W)$ where, for the rest of the paper, we denote
\begin{equation}
\hat u_W=\mathcal{I}\left( \widehat G_W \right),
\label{def_uw}
\end{equation}
and name it the step-up threshold. A different weight vector $W(u)$ is associated with each $u$, hence the "multi"-weighting. Note that the class of MWBH procedures is more general than that of WBH procedures because any weight vector can be seen as a constant weight function.
Note that there is a simple way to compute $\hat u_W$. For each $r$ between 1 and $m$, denote by $p_{g,i}^{[r]}=p_{g,i}/W_g(r/m)$ the $W(r/m)$-weighted $p$-values (with the convention $p_{g,i}/0=\infty$), order them $p_{(1)}^{[r]}\leq\dotsc\leq p_{(m)}^{[r]}$ and set $p_{(0)}^{[r]}=0$. Then $\hat u_W= m^{-1}\max\left\{ r\geq0 : p_{(r)}^{[r]} \leq \alpha \frac{r}{m} \right\}$ (this is Lemma~\ref{hatuegalIG}).
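This recipe translates directly into code (a naive sketch, quadratic in $m$; the toy $p$-values are ours, and with constant unit weights it recovers the BH threshold $\hat k/m$):

```python
import numpy as np

def mwbh_threshold(p_groups, W, alpha):
    """Step-up threshold hat u_W via the r-indexed weighted p-values:
    hat u_W = max{r : p^{[r]}_(r) <= alpha r / m} / m, with p / 0 = +inf."""
    m = sum(len(p) for p in p_groups)
    for r in range(m, 0, -1):
        w = np.asarray(W(r / m), dtype=float)
        q = np.concatenate([p / w[g] if w[g] > 0 else np.full(len(p), np.inf)
                            for g, p in enumerate(p_groups)])
        if np.sort(q)[r - 1] <= alpha * r / m:  # p^{[r]}_(r) <= alpha r / m
            return r / m
    return 0.0

p1 = np.array([0.001, 0.008, 0.039, 0.041, 0.042])
p2 = np.array([0.06, 0.074, 0.205, 0.212, 0.216])
u_bh = mwbh_threshold([p1, p2], lambda u: (1.0, 1.0), alpha=0.1)  # unit weights: BH
```

With unit weights, `u_bh` equals $6/10$, the BH threshold for these $p$-values; putting all the weight on group 1 changes the threshold accordingly.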
As in previous works, in order to achieve a valid FDR control, these procedures should be used with weights that satisfy a specific relation. Here, we introduce the following weight spaces:
\begin{equation}
K^m=\left\{w\in \mathbb{R}^G_+ : \sum_g \frac{m_g}{m} \hat \pi_{g,0} w_g\leq 1\right\},
\label{def_Km}
\end{equation}
\begin{equation}
K^m_{\text{NE}}=\left\{w\in \mathbb{R}^G_+ : \sum_g \frac{m_g}{m} w_g\leq 1\right\}.
\label{def_KmNE}
\end{equation}
Note that $K^m$ may appear unusual because it depends on the estimators $\hat\pi_{g,0}$; however, it is completely known and usable in practice. Note also that $K^m=K^m_{\text{NE}}$ in the~\eqref{NE} case.
Finally, for a weight function $W$ and a threshold $u\in[0,1]$, we denote by $R_{u,W}$ the double indexed procedure rejecting the $p$-values less than or equal to $\alpha u W_g(u)$, that is $R_{u,W}=\{(g,i) : p_{g,i}\leq \alpha u W_g(u) \}$. By~\eqref{eq_def_gW}, note that $\widehat G_W(u)=m^{-1}\left| R_{u,W}\right|$ and $\MWBH(W)=R_{\hat u_W,W}$.
\subsection{Choosing the weights}
\label{subsection_previous}
Take $W$ and $u$, and let $P^{(m)}_W(u)=\Pow\left( R_{u,W}\right)$. We have
\begin{align*}
P^{(m)}_W(u)&=m^{-1}\mathbb{E}\left[ \sum_{g=1}^G\sum_{i=1}^{m_g}\mathds{1}_{\{ p_{g,i}\leq \alpha u W_g(u) , H_{g,i}=1\} } \right]\notag \\
&=\sum_{g=1}^G \frac{m_{g,1}}{m}F_g\left(\alpha u W_g(u)\right).
\end{align*}
Note that these relations are valid only if $W$ and $u$ are deterministic. In particular, they are not valid when used a posteriori with a data-driven weighting and $u=\hat u_W$.
In \citet{roquain2009optimal}, the authors define the oracle optimal weight function $W^{*}_{or}$ as:
\begin{equation}
W^{*}_{or}(u)=\argmax_{w\in K^m_{\text{NE}}} P^{(m)}_w(u) .
\label{def_or}
\end{equation}
Note that they defined $W^{*}_{or}$ only in the~\eqref{NE} case, but their definition easily extends to the general case as above. They proved the existence and uniqueness of $W^{*}_{or}$ in the~\eqref{ED}+\eqref{NE} case and that, asymptotically, $\MWBH(W^{*}_{or})$ controls the FDR at level $\pi_0\alpha$ and has a better power than every $\MWBH(w^{(m)})$, where the $w^{(m)}\in K^m_{\text{NE}}$ are deterministic weight vectors satisfying a convergence criterion.
However, computing $W^{*}_{or}$ requires the knowledge of the $F_g$, not available in practice, so the idea is to estimate $W^{*}_{or}$ with a data driven weight function $\widehat W^*$ and then apply MWBH with this random weight function. For this, consider the functional defined by, for any (deterministic) weight function $W$ and $u\in[0,1]$:
\begin{align}
G^{(m)}_W(u)=\mathbb{E}\left[\widehat G_W(u) \right]
&=\sum_{g=1}^G \left( \frac{m_{g,0}}{m} U(\alpha u W_g(u))+ \frac{m_{g,1}}{m} F_g(\alpha u W_g(u)) \right)\notag\\
&=P^{(m)}_W(u)+\sum_{g=1}^G \frac{m_{g,0}}{m} U(\alpha u W_g(u)).\label{eq_heur}
\end{align}
$G^{(m)}_W(u)$ is the expected proportion of rejections for the procedure rejecting each $p_{g,i}\leq \alpha u W_g(u)$. The intuitive idea is that maximizing $G^{(m)}_W(u)$ is close to maximizing $P^{(m)}_W(u)$. We justify this heuristic as follows: assuming $U$ is the identity function, the right term of~\eqref{eq_heur} becomes $\alpha u\sum_{g} \frac{m_{g,0}}{m} W_g(u)$, which does not depend on the weights if, additionally, $\sum_{g} \frac{m_{g,0}}{m} W_g(u)=1$; this makes $P^{(m)}_W(u)$ the only term depending on $W$. Now, we can evaluate the constraint on $W$ by estimating $\frac{m_{g,0}}{m}=\frac{m_{g}}{m}\frac{m_{g,0}}{m_g}$ by $\frac{m_{g}}{m}\hat\pi_{g,0}$ (which leads to the weight space $K^m$ defined in equation~\eqref{def_Km}), and $G^{(m)}_w(u)$ can be easily estimated by the (unbiased) estimator $\widehat G_w(u)$. As a result, maximizing the latter in $w$ should lead to good weights, not too far from $W^{*}_{or}(u)$.
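For $G=2$, this maximization can be sketched by a brute-force grid search. Since $\widehat G_w(u)$ is nondecreasing in each $w_g$, the estimated constraint $\sum_{g} \frac{m_{g}}{m}\hat\pi_{g,0} w_g\leq1$ can be saturated, so only its boundary needs to be explored (the grid resolution and the toy data below are our own choices):

```python
import numpy as np

def best_weights_2groups(p1, p2, u, alpha, pi0_hat, grid_size=500):
    """Maximize hat G_w(u) over the boundary (m_1/m) pi0_hat[0] w_1
    + (m_2/m) pi0_hat[1] w_2 = 1, by grid search (G = 2 sketch)."""
    m = len(p1) + len(p2)
    c1 = (len(p1) / m) * pi0_hat[0]
    c2 = (len(p2) / m) * pi0_hat[1]
    best_w, best_val = (0.0, 1.0 / c2), -1.0
    for w1 in np.linspace(0.0, 1.0 / c1, grid_size):
        w2 = (1.0 - c1 * w1) / c2
        val = (np.sum(p1 <= alpha * u * w1) + np.sum(p2 <= alpha * u * w2)) / m
        if val > best_val:
            best_val, best_w = val, (w1, w2)
    return best_w, best_val

# Strong signal in group 1, none in group 2: the weight shifts towards group 1.
p1 = np.full(50, 0.001)
p2 = np.linspace(0.5, 0.99, 50)
(w1_star, w2_star), ghat = best_weights_2groups(p1, p2, u=0.5, alpha=0.1,
                                                pi0_hat=(1.0, 1.0))
```

Here the search spends just enough weight on group 1 to reject all of its $p$-values ($\widehat G_w(u)=0.5$), giving the remaining budget to group 2.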
\citet{zhao2014weighted} followed this heuristic by applying a two-stage approach to derive two procedures, named Pro1 and Pro2. Precisely, in the first stage they use the weight vectors $\hat w^{(1)}=(\frac{1}{\hat \pi_0},\dotsc,\frac{1}{\hat \pi_0})$, where $\hat \pi_0=\sum_g \frac{m_g}{m}\hat \pi_{g,0}$, and $\hat w^{(2)}$ defined by $\hat w^{(2)}_g=\frac{\hat \pi_{g,1}}{\hat \pi_{g,0}(1-\hat \pi_0)}$, where $\hat \pi_{g,1}=1-\hat \pi_{g,0}$, and let $\hat u_M=\max(\hat u_{\hat w^{(1)}},\hat u_{\hat w^{(2)}})$. In the second stage, they maximize $\widehat G_w(\hat u_M)$ over $K^m$, which gives rise to the weight vector $\widehat W^*(\hat u_M)$ according to our notation. They then define their procedures as follows: $$\ZZPro1=R_{\hat u_M,\widehat W^*(\hat u_M) },$$
and $$\ZZPro2=\WBH\left({\widehat W^*(\hat u_M)} \right).$$ The caveat of this approach is that the initial thresholding, that is, the definition of $\hat u_M$, seems somewhat arbitrary, which results in sub-optimal procedures, see Corollary~\ref{cor_zz}. As a side remark, $\hat w^{(1)}$ and $\hat w^{(2)}$ are involved in other procedures of the literature: the HZZ procedure of \citet{hu2010false} is $\WBH(\hat w^{(2)})$, and $\WBH(\hat w^{(1)})$ is the classical Adaptive BH procedure (see, e.g., Lemma~2 of~\citet{storey2004strong}), denoted here by ABH.
\citet{ignatiadis2016data} actually used the above heuristic with multi-weighting (although their formulation differs from ours), which consists in maximizing $\widehat G_w(u)$ in $w$ for each $u$. However, their choice of the weight space is only suitable for the case~\eqref{NE} and can make the above heuristic break down, because in general the right term in~\eqref{eq_heur} can still depend on $w$, see Remark~\ref{rk_bh_best}.
In the next section, we take the best of the two approaches to attain power optimality with data-driven weighting. Let us already mention that the crucial point is Lemma~\ref{ASTUCE}, that fully justifies the heuristic (in cases \eqref{CE} and \eqref{ED}+\eqref{EE}).
\begin{rk}
We can compute numerical examples where BH has an asymptotic power larger than that of IHW. For example, if we break \eqref{ED} by taking a small $\pi_{1,0}$ (almost pure signal) and a large $\pi_{2,0}$ (sparse signal), along with a small group and a large one ($\pi_1$ much smaller than $\pi_2$) and strong signal in both groups, IHW slightly favors group 2 whereas the oracle optimal weighting favors group 1. BH does not favor any group and thus has a larger power than IHW. This example is simulated in Section~\ref{subsection_sim_pow} (see Figure~\ref{fig_set2_pow}).
\label{rk_bh_best}
\end{rk}
\section{New procedures}
\label{section_addow}
\subsection{$\ADDOW$ definition}
\label{subsection_def_addow}
We exploit the previous intuition and propose to estimate the oracle optimal weights $W^{*}_{or}$ by maximizing over $w\in K^m$ the empirical counterpart of $G^{(m)}_{w}(u)$, that is, $\widehat G_w(u)$.
\begin{defn}
We call an adaptive data-driven optimal weight function a random function $\widehat W^*:[0,1]\to K^m$ such that for all $u\in[0,1]$:
$$\widehat G_{\widehat W^*}(u)= \underset{w\in K^m}{\max} \widehat G_{ w}(u) .$$
\end{defn}
Such a function always exists because
$\left\{ \widehat G_w(u),\, w \in K^m \right\} \subset \left\{ \frac{k}{m}, \,k\in \llbracket 0, m\rrbracket \right\} $
is finite, but it may not be unique. Hence, in all the following, we fix one such $\widehat W^*$; our results do not depend on this choice. An important fact is that $\widehat G_{\widehat W^*}$ is nondecreasing (see Lemma~\ref{croissance}), so $\hat u_{\widehat W^*}$ exists and the corresponding MWBH procedure is well-defined:
\begin{defn}
The ADDOW procedure is the MWBH procedure using $\widehat W^*$ as the weight function, that is, $\ADDOW=\MWBH\left( \widehat W^* \right)$.
\end{defn}
Note that ADDOW is in fact a class of procedures depending on the estimators $\hat\pi_{g,0}$ through $K^m$. Note also that, in the \eqref{NE} case, ADDOW reduces to IHW.
\begin{rk}
It turns out that ADDOW reduces to a WBH procedure; this comes from part 2 of the proof of Theorem~\ref{thm_pow} and Remark~\ref{MWBH<wbh}. Moreover, to every MWBH procedure corresponds a WBH procedure with a power higher than or equal to it.
\end{rk}
\subsection{Stabilization for weak signal}
\label{subsection_def_stab}
Since ADDOW uses the data both through the $p$-values and the weights, this results in a slight increase of the FDR, as we will see in the simulations (Section~\ref{subsection_sim_fdr}). This effect is close in spirit to the well-known overfitting phenomenon. In our setting, where the signal is strong enough, this drawback is proved to vanish as $m$ grows, see the simulations and Theorem~\ref{thm_fdr}. However, the latter is not true for weak signal: if the data are close to pure noise, performing the weight optimization can lead ADDOW to find signal only by chance, that is, to make false positives. To circumvent this concern, we propose to stabilize ADDOW by using a pre-testing phase, close in spirit to the Kolmogorov-Smirnov (KS) test \citep{kolmogorov1933sulla}, to determine whether the signal is weak or not, and then to apply ADDOW only if the signal is large enough (and just apply BH otherwise).
Formally, we reject the hypothesis that the signal is weak if $Z_m>q_{\beta,m}$, where
$$Z_m=\sqrt{m}\sup_{w\in K^m_{\text{NE}}} \sup_{u\in[0,1]} \left(\widehat G_w(u)-\alpha u\right),$$
and $q_{\beta,m}$ is the $(1-\beta)$-quantile of the distribution of $Z_{0m}$, where $Z_{0m}$ is defined as
\begin{equation}
Z_{0m}=\sqrt{m} \sup_{u\in[0,1]} \left( m^{-1}\sum_{g=1}^G\sum_{i=1}^{m_g}\mathds{1}_{\left\{U_{g,i}\leq \alpha u \widetilde W_g^*(u) \right\}} -\alpha u\right),
\label{def_z0m}
\end{equation}
where the $U_{g,i}$ are uniform variables over $[0,1]$ with, for all $g$, $U_{g,1},\dotsc,U_{g,m_g}$ independent, and
$$\widetilde W^*(u)\in\argmax_{w\in K^m_{\text{NE}}} m^{-1}\sum_{g=1}^G\sum_{i=1}^{m_g}\mathds{1}_{\left\{U_{g,i}\leq \alpha u w_g \right\}} .$$
$Z_{0m}$ can be viewed as a copy of $Z_m$ under the full null model, where the $p$-values are uniform on $[0,1]$ and independent, and without estimating $\pi_{g,0}$. We denote the test rejecting the weak-signal scenario by $\phi_\beta=\mathds{1}_{ \{ Z_m> q_{\beta,m} \}}$. This gives us a stabilized procedure depending on $\beta$ that we call $\sADDOW_\beta$:
\begin{equation}
\sADDOW_\beta =\left\{
\begin{array}{ccc}
\ADDOW & \text{if} & \phi_\beta=1 \\
\BH & \text{if} & \phi_\beta=0
\end{array}
\right.
\label{def_saddow}
\end{equation}
We expect that, in the weak signal case, the stabilized procedure controls the FDR better than $\ADDOW$: in that case, without estimating $\pi_{g,0}$ and if the $p$-values are all independent, the distribution of $Z_m$ is close to that of $Z_{0m}$, and we have the following approximation:
\begin{align*}
\FDR\left( \sADDOW_\beta \right) &= \mathbb{E}\left[ \phi_\beta \FDP\left( \ADDOW \right) +(1-\phi_\beta) \FDP\left( \BH\right) \right]\\
&\leq \mathbb{E}\left[ \phi_\beta +\FDP\left( \BH\right) \right]\\
&\leq \mathbb{P}\left(Z_{m} > q_{\beta,m}\right)+\FDR\left( \BH\right) \\
&\lesssim \mathbb{P}\left(Z_{0m} > q_{\beta,m}\right)+\frac{m_{0}}{m}\alpha \\
&\leq \beta + \frac{m_{0}}{m}\alpha,
\end{align*}
where $\mathbb{P}\left(Z_{0m} > q_{\beta,m}\right)\leq\beta$ by definition of $q_{\beta,m}$ and $m_0=\sum_g m_{g,0}$ is the number of true nulls. If $\beta$ is chosen small enough, control at level $\alpha$ should be achieved. This heuristic will be supported by the simulations in Section~\ref{subsection_sim_fdr}.
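In practice, $q_{\beta,m}$ can be estimated by Monte Carlo. Below is a rough sketch for $G=2$, entirely of our own making: both suprema are taken on finite grids (as in the simulations of Section~\ref{subsection_sim_setting}), and since for fixed $u$ the rejection count is nondecreasing in each $w_g$, only the boundary $\frac{m_1}{m}w_1+\frac{m_2}{m}w_2=1$ of the weight space needs to be explored.

```python
import numpy as np

def z0m_draw(m1, m2, alpha, rng, n_wgrid=15):
    """One realization of Z_{0m} under the full null (independent uniforms),
    for G = 2, with both suprema approximated on coarse finite grids."""
    m = m1 + m2
    u1, u2 = rng.uniform(size=m1), rng.uniform(size=m2)
    best = -np.inf
    for t in np.linspace(0.0, 1.0, n_wgrid):
        w1, w2 = t * m / m1, (1.0 - t) * m / m2  # boundary of the weight space
        for k in range(1, m + 1):
            u = k / m
            ghat = (np.sum(u1 <= alpha * u * w1) + np.sum(u2 <= alpha * u * w2)) / m
            best = max(best, ghat - alpha * u)
    return np.sqrt(m) * best

rng = np.random.default_rng(2)
draws = np.array([z0m_draw(30, 30, alpha=0.5, rng=rng) for _ in range(40)])
q_hat = np.quantile(draws, 1 - 0.05)  # crude estimate of q_{beta,m} for beta = 0.05
```

The grid sizes, sample sizes, and $\alpha$ here are arbitrary toy values; finer grids and many more replications would be used in earnest.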
\section{Results}
\label{section_results}
\subsection{Main results}
\label{subsection_addow_results}
We now present the two main theorems of this paper. Both are asymptotic and justify the use of $\ADDOW$ when $m$ is large. The first states the control of the FDR at a level at most $\alpha$. The second shows that ADDOW has maximum power over all MWBH procedures in the \eqref{CE} case, and also in the \eqref{ED}+\eqref{EE} case. Both are proved in Section~\ref{section_proof_thm}.
\begin{thm} Let us consider the framework defined in Sections~\ref{subsection_model} and~\ref{subsection_estimation}. We have
\begin{equation}
\lim_{m\to\infty} \FDR\left( \ADDOW \right) \leq\alpha .
\label{eq_thm_fdr}
\end{equation}
If $\alpha\leq\bar \pi_0$, we have two more accurate results: in the \eqref{CE} case,
$$\lim_{m\to\infty} \FDR\left( \ADDOW \right) =\alpha, $$
and in the \eqref{ED}+\eqref{EE} case,
$$\lim_{m\to\infty} \FDR\left( \ADDOW \right) =\frac{\pi_0}{\bar \pi_0}\alpha .$$
\label{thm_fdr}
\end{thm}
\begin{thm} Let us consider the framework defined in Sections~\ref{subsection_model} and~\ref{subsection_estimation}, with the additional assumption \eqref{CE} or \eqref{ED}+\eqref{EE}. For any sequence of random weight functions $(\widehat W)_{m\geq1}$, such that $\widehat W:[0,1]\to K^m$ and $\widehat G_{\widehat W}$ is nondecreasing, we have
$$\lim_{m\to\infty} \Pow\left( \ADDOW \right)\geq\limsup_{m\to\infty} \Pow\left( \MWBH\left(\widehat W\right) \right).$$
\label{thm_pow}
\end{thm}
\subsection{Relation to $\IHW$}
\label{subsection_ihw}
Recall that IHW is ADDOW in the \eqref{NE} case, and that \eqref{NE} is a subcase of \eqref{EE}. Hence, as a byproduct, we deduce from Theorems~\ref{thm_fdr} and~\ref{thm_pow} the following result on IHW.
\begin{cor}
Let us consider the framework defined in Sections~\ref{subsection_model} and~\ref{subsection_estimation}, with the additional assumption \eqref{ED}. Then
$$\lim_{m\to\infty} \FDR\left( \IHW \right) ={\pi_0}\alpha ,$$
and for any sequence of random weight functions $(\widehat W)_{m\geq1}$ such that $\widehat W:[0,1]\to K^m_{\text{NE}}$ and $\widehat G_{\widehat W}$ is nondecreasing, we have
$$ \lim_{m\to\infty} \Pow\left( \IHW \right) \geq \limsup_{m\to\infty} \Pow\left( \MWBH\left(\widehat W\right) \right) .$$
\label{cor_ihw}
\end{cor}
While equation~\eqref{eq_thm_fdr} of Theorem~\ref{thm_fdr} covers Theorem 4 of the supplementary material of \citet{ignatiadis2016data} (with a slightly stronger assumption on the smoothness of the $F_g$'s), the FDR controlling result of Corollary~\ref{cor_ihw} gives a slightly sharper bound ($\pi_0\alpha$ instead of $\alpha$) in the~\eqref{ED} case.
The power optimality stated in Corollary~\ref{cor_ihw} is new and was not shown in \citet{ignatiadis2016data}. It thus supports the fact that IHW should be used under the assumption~\eqref{ED} and when $\pi_0$ is close to 1 or not estimated.
\subsection{Comparison to other existing procedures}
\label{subsection_comparison}
For any estimators $\hat\pi_{g,0}\in[0,1]$, any weighting satisfying $\sum_g \frac{m_g}{m} w_g\leq1$ also satisfies $ \sum_g \frac{m_g}{m}\hat\pi_{g,0}w_g\leq1$, that is $K^m_{\text{NE}}\subset K^m$. Hence, any MWBH procedure estimating $\frac{m_{g,0}}{m_g}$ by 1 uses a weight function valued in $K^m$. This immediately yields the following corollary.
\begin{cor}
Let us consider the framework defined in Sections~\ref{subsection_model} and~\ref{subsection_estimation}, with the additional assumption \eqref{CE} or \eqref{ED}+\eqref{EE}. Then
$$\lim_{m\to\infty} \Pow\left( \ADDOW \right)\geq\limsup_{m\to\infty} \Pow\left( R \right) , $$
for any $R\in\{\BH, \IHW \}$.
\end{cor}
The next corollary simply states that ADDOW outperforms many procedures of the "weighting with $\pi_0$ adaptation" literature.
\begin{cor}
Let us consider the framework defined in Sections~\ref{subsection_model} and~\ref{subsection_estimation}, with the additional assumption \eqref{CE} or \eqref{ED}+\eqref{EE}. Then
\begin{equation*}
\lim_{m\to\infty} \Pow\left( \ADDOW \right) \geq \limsup_{m\to\infty} \Pow\left(R \right),
\end{equation*}
for any $R\in\{\ZZPro1, \ZZPro2, \HZZ, \ABH \}$.
\label{cor_zz}
\end{cor}
The results for Pro2, HZZ and ABH follow directly from Theorem~\ref{thm_pow} because these are MWBH procedures. The proof for Pro1 (which is not of the MWBH type) can be found in Section~\ref{subsection_proof_pro1}.
\subsection{Results for the stabilized version}
\label{subsection_saddow_results}
The next theorem shows that, asymptotically, the procedure $\sADDOW_\beta$ behaves as $\ADDOW$. Our result holds even if $\beta=\beta_m\underset{m\to\infty}{\longrightarrow}0$, provided that the convergence is not too fast. It is proved in Section~\ref{subsection_proof_stab}.
\begin{thm}
Let us consider the framework defined in Sections~\ref{subsection_model} and~\ref{subsection_estimation}. Take a sequence $(\beta_m)_{m\geq1}$ such that $\beta_m\geq a\exp\left(-bm^{1-\nu}\right)$ for some $a\in(0,1]$, $b>0$ and $\nu>0$.
Then $\phi_{\beta_m}\to1$ almost surely. In particular, all Theorems and Corollaries of Sections~\ref{subsection_addow_results} and~\ref{subsection_comparison} hold when replacing $\ADDOW$ with $\sADDOW_{\beta_m}$.
\label{thm_stab}
\end{thm}
\section{Numerical experiments}
\label{section_simulations}
\subsection{Simulation setting}
\label{subsection_sim_setting}
In our experiments, in addition to BH, which is not adaptive, three groups of procedures are compared:
\begin{itemize}
\item Group 1: some procedures not adaptive to $\pi_0$ but adaptive to the signal via optimal weights: \begin{itemize}
\item $\MWBH\left(W^{*}_{or}\right)$ where $W^{*}_{or}$ is given by equation~\eqref{def_or} in the~\eqref{NE} case.
\item ADDOW in the~\eqref{NE} case, that is, IHW.
\item $\sADDOW_\beta$ in the~\eqref{NE} case for some value of $\beta$.
\item Pro2 as defined in Section~\ref{subsection_previous} and in the~\eqref{NE} case.
\end{itemize}
\item Group 2: procedures only adaptive to $\pi_{g,0}$ and not to the signal strength, with an oracle adaptation to $\pi_{g,0}$:
\begin{itemize}
\item ABH as defined in Section~\ref{subsection_previous} with $\hat \pi_{g,0}=\pi_{g,0}$.
\item HZZ as defined in Section~\ref{subsection_previous} with $\hat \pi_{g,0}=\pi_{g,0}$.
\end{itemize}
\item Group 3: procedures that combine both adaptive properties, with an oracle adaptation to $\pi_{g,0}$:
\begin{itemize}
\item $\MWBH\left(W^{*}_{or}\right)$ where $W^{*}_{or}$ is given by equation~\eqref{def_or} with $\hat \pi_{g,0}=\pi_{g,0}$.
\item ADDOW with $\hat \pi_{g,0}=\pi_{g,0}$.
\item $\sADDOW_\beta$ with $\hat \pi_{g,0}=\pi_{g,0}$ for some value of $\beta$.
\item Pro2 as defined in Section~\ref{subsection_previous} with $\hat \pi_{g,0}=\pi_{g,0}$.
\end{itemize}
\end{itemize}
We consider the one-sided Gaussian framework described in Section~\ref{subsection_gaussian} for $G=2$ groups. The parameters are thus given by $m_1$, $m_2$, $m_{1,0}$, $m_{2,0}$, $\mu_1$, $\mu_2$, and $\alpha$. For the stabilization, $q_{\beta,m}$ is estimated with realizations of $Z_{0m}$ (as defined in equation~\eqref{def_z0m}), where $Z_{0m}$ and $Z_m$ are computed as suprema over $\{k/m , 1\leq k\leq m\}$ instead of $[0,1]$ for easier computation. Our experiments were performed using the following three scenarios, in which the values of $\mu_1$ and $\mu_2$ are defined according to a parameter $\bar\mu$. Each simulation of each scenario is replicated 1000 times.
\begin{itemize}
\item Scenario 1: $\mu_1=\bar \mu$ and $\mu_2=2\bar \mu$, $\alpha=0.05$, $\beta=0.001$, $m_1=m_2=2000$, $m_{1,0}/m_1=0.7$ and $m_{2,0}/m_2=0.8$. Furthermore the values of $\bar\mu$ are in $\{0.01, 0.02, 0.05\}$ and then from 0.5 to 3 with jumps of size 0.25. Here, $q_{\beta,m}$ is estimated with 10000 realizations of $Z_{0m}$.
\item Scenario 2: $\mu_1=2$ and $\mu_2=\bar \mu$, $\alpha=0.7$, $m_1=1000$ and $m_2=9000$, $m_{1,0}/m_1=0.05$ and $m_{2,0}/m_2=0.85$. Furthermore $\bar\mu\in\{1.7,1.8,1.9,2,2.1,2.2,2.3\}$.
\item Scenario 3: $\mu_1=\bar \mu$ and $\mu_2=2\bar \mu$, $\alpha=0.05$, $\beta=0.05$, $m\in\{100,300,500,1000,2000,5000 \}$, $m_1=m_2=m/2$, $m_{g,0}/m_g=0.8$. Furthermore $\bar\mu\in\{0.01,3\}$. Here, $q_{\beta,m}$ is estimated with 1000 realizations of $Z_{0m}$.
\end{itemize}
\subsection{FDR control}
\label{subsection_sim_fdr}
The FDRs of all the above procedures are compared in Figures~\ref{fig_set1_fdr}, \ref{fig_stabweak} and~\ref{fig_stabstrong}.
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{{FDR_fifthtry_mult_G=2_2000_2000_mubar=0.01_0.02_0.05_0.5_0.75_1_1.25_1.5_1.75_2_2.25_2.5_2.75_3_pi0=0.7_0.8_alpha=0.05_nbsimu=1000_beta=0.001_qrep=10000_lambda=0.5_OrIHW_IHW_NAPro2_sIHW_AOrIHW_AIHW_APro2_ABH_HZZ_sAIHW_points_pi0shown_scaled}.pdf}
\caption{\footnotesize FDR against $\bar\mu$ in scenario 1. Group 1 in black; Group 2 in green; Group 3 in red. The type of procedure is $\MWBH\left(W^{*}_{or}\right)$ (squares); ADDOW (triangles); $\sADDOW_\beta$ (circles); Pro2 (disks); HZZ (diamonds) and finally BH/ABH (crosses). Horizontal lines: $\alpha$ and $\pi_0\alpha$ levels. See Section~\ref{subsection_sim_setting}.}
\label{fig_set1_fdr}
\end{figure}
\begin{figure}
\begin{minipage}[b]{.5\linewidth}
\centering
\includegraphics[scale=0.35]{{489_weak}.pdf}
\subcaption{\footnotesize$\bar\mu=0.01$}\label{fig_stabweak}
\end{minipage}%
\begin{minipage}[b]{.5\linewidth}
\centering
\includegraphics[scale=0.35]{{489_strong}.pdf}
\subcaption{\footnotesize$\bar\mu=3$}\label{fig_stabstrong}
\end{minipage}
\caption{\footnotesize FDR against $m$ in scenario 3. Group 1, $\ADDOW$ (black dots) and $\sADDOW_\beta$ (red triangles). Horizontal lines: $\alpha$ and $\pi_0\alpha$ levels.}\label{fig_stab}
\end{figure}
First, Figure~\ref{fig_stabstrong} shows that the convergence of the FDR already holds for moderate $m$. This supports the theoretical finding of Corollary~\ref{cor_ihw}, namely that the FDR converges to $\pi_0\alpha$ in scenario 3. The figure also shows that when the signal is strong, $\sADDOW_\beta$ behaves like ADDOW, as expected from the definition of $\phi_\beta$. While Figure~\ref{fig_set1_fdr} supports the latter for large signal ($\bar\mu\geq2$), we see that the FDR control of the data-driven weighted procedures (ADDOW, Pro2) can deteriorate as $\bar\mu$ decreases. This is due to an overfitting phenomenon.
As $\bar\mu$ gets smaller, the overfitting seems to increase and the FDR control appears to be violated. Let us underline that this does not contradict our theory, because a small $\bar\mu$ may imply a slower convergence rate while $m$ stays below $5000$ in scenarios 1 and 3. Fortunately, in this regime, Figure~\ref{fig_set1_fdr} and Figure~\ref{fig_stabweak} show that the regularization process correctly addresses the overfitting by ensuring that the FDR control holds. Again, this is expected because $\sADDOW_\beta$ simply reduces to BH in that regime, see equation~\eqref{def_saddow}.
\subsection{Power}
\label{subsection_sim_pow}
Now that the FDR control has been studied, let us compare the procedures in terms of power. First, to better emphasize the benefit of adaptation, the power is rescaled in the following way: we define the normalized difference of power with respect to BH, or DiffPow, by $$\DiffPow(R)=\frac{m}{m_1}\left(\Pow(R)-\Pow(\BH)\right),$$ for any procedure $R$.
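A minimal sketch of this rescaling, assuming the rejection sets of each Monte-Carlo replication are available (the function names and the toy numbers are ours):

```python
def power(rejections, nonnull, m):
    """Pow(R): average over replications of |R ∩ H1| / m,
    where `rejections` is a list of rejected index sets and
    `nonnull` is the set of non-null indices."""
    return sum(len(r & nonnull) for r in rejections) / (m * len(rejections))

def diff_pow(rej_R, rej_BH, nonnull, m):
    """DiffPow(R) = (m / m1) * (Pow(R) - Pow(BH))."""
    m1 = len(nonnull)
    return (m / m1) * (power(rej_R, nonnull, m) - power(rej_BH, nonnull, m))

# toy example: m = 6 hypotheses, non-nulls {3, 4, 5}, two replications
nonnull = {3, 4, 5}
rej_R = [{3, 4, 5}, {3, 4}]   # rejection sets of some procedure R
rej_BH = [{3}, {3}]           # rejection sets of BH
print(diff_pow(rej_R, rej_BH, nonnull, m=6))  # approximately 0.5
```

The $m/m_1$ factor turns the raw power difference into a fraction of the non-null hypotheses, which makes the curves of different scenarios comparable.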
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{{DiffPow_fifthtry_mult_G=2_2000_2000_mubar=0.01_0.02_0.05_0.5_0.75_1_1.25_1.5_1.75_2_2.25_2.5_2.75_3_pi0=0.7_0.8_alpha=0.05_nbsimu=1000_beta=0.001_qrep=10000_lambda=0.5_OrIHW_IHW_NAPro2_sIHW_AOrIHW_AIHW_APro2_ABH_HZZ_sAIHW_points_scaled}.pdf}
\caption{\footnotesize DiffPow against $\bar\mu$ in scenario 1. Same legend as Figure~\ref{fig_set1_fdr}}
\label{fig_set1_pow}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.65\linewidth]{{DiffPow_bhbest2_static_G=2_1000_9000_mubar=1.7_1.8_1.9_2_2.1_2.2_2.3_pi0=0.05_0.85_alpha=0.7_nbsimu=1000_beta=0.025_qrep=1000_lambda=0.5_IHW_AIHW_points_scaled}.pdf}
\caption{\footnotesize DiffPow of ADDOW against $\bar\mu$ in scenario 2. Group 1 in black and Group 3 in red.}
\label{fig_set2_pow}
\end{figure}
Figure~\ref{fig_set1_pow} displays the power of all the procedures defined in Section~\ref{subsection_sim_setting}. We can make several observations:
\begin{itemize}
\item We see a huge difference in behavior between Group 1 and Group 3. Hence, incorporating the knowledge of $\pi_0$ can lead to a large improvement in power.
\item In both groups (that is, in both the~\eqref{NE} and~\eqref{CE} cases) ADDOW achieves the best power, which supports Theorem~\ref{thm_pow}. Additionally, and maybe surprisingly, in both cases Pro2 behaves quite well, with a power close to that of ADDOW despite its theoretical sub-optimality. Hence, it also seems to be a good choice in practice.
\item The comparison between Group 2 and Group 3 shows the benefit of adding the $F_g$ adaptation to the $\pi_0$ adaptation: the fourth group has better power than the third for all signal strengths. There is a zone of moderate signal (around $\bar\mu=1.5$) where the two groups are close. That is the same zone where HZZ becomes better than ABH. We deduce that in that zone the optimal weighting coincides with the uniform $\hat w^{(1)}$ weighting of ABH.
\item The comparison of the DiffPow between Group 1 and Group 2 in Figure~\ref{fig_set1_pow} shows the difference between adapting to the $F_g$'s and adapting to $\pi_0$. Neither method is uniformly better than the other: as the plot shows, it depends on the signal strength. Similarly, neither ABH nor HZZ dominates the other.
\end{itemize}
Finally, let us discuss Figure~\ref{fig_set2_pow}. Here, scenario 2 entails that IHW favors the large and sparse second group of hypotheses, whereas the optimal power is achieved by favoring the small first group, which contains almost only signal, as expected from Remark~\ref{rk_bh_best}. As a WBH procedure with weights $(1,1)$, BH does not favor any group. Hence, IHW has a power smaller than that of BH. This demonstrates the limitation of the heuristic upon which IHW is built, and underlines the necessity of estimating the $\pi_{g,0}$ when nothing suggests that~\eqref{ED} may hold.
\section{Concluding remarks}
\label{section_conclusion}
In this paper we presented a new class of data-driven step-up procedures, ADDOW, that generalizes IHW by incorporating $\pi_{g,0}$ estimators in each group. We showed that, while this procedure asymptotically controls the FDR at the targeted level, it has the best power among all MWBH procedures when the $\pi_0$ estimation can be made consistently. In particular, it dominates all existing procedures of the weighting literature and solves the $p$-value weighting issue in a group-structured multiple testing problem. As a by-product, our work established the optimality of IHW in the case of a homogeneous $\pi_0$ structure. Finally, we proposed a stabilized variant designed to deal with the case where only few discoveries can be made (very small signal strength or high sparsity). Numerical simulations illustrated that these properties also hold in a finite sample framework, provided that the number of tests is large enough.
\paragraph{Assumptions}
Our assumptions are rather mild: basically, we only added the concavity of the $F_g$ to the assumptions of \citet{ignatiadis2016data}. Notably, we dropped the other regularity assumptions on $F_g$ that were made in \citet{roquain2009optimal}, while keeping all the useful properties of $W^*$ in the \eqref{NE} case. Note that the criticality assumption is often made in the literature, see \citet{ignatiadis2016data} (assumption 5 of the supplementary material), \citet{zhao2014weighted} (assumption A.1), or the assumption of Theorem 4 in \citet{hu2010false}. Finally, the weak dependence assumption is used extensively in our paper. An interesting direction would be to extend our results to some strongly dependent cases, for instance by assuming PRDS (positive regression dependence), as some previous work has already studied properties of MWBH procedures under that assumption, see \citet{roquain2008multi}.
\paragraph{Computational aspects}
The actual maximization problem of ADDOW is difficult: it involves mixed-integer linear programming, which may take a long time to solve. A regularized variant may be needed for applications. To this end, one can use the least concave majorant (LCM) instead of the empirical c.d.f. in equation~\eqref{eq_def_gW} (as proposed in modification (E1) of IHW in \citet{ignatiadis2016data}). As we show in Section~\ref{section_proof_thm}, ADDOW can be extended to that case (see especially Section~\ref{subsection_notation}) and our results remain valid for this regularized version of ADDOW.
\paragraph{Toward nonasymptotic results}
An interesting direction for future research is to investigate the convergence rates in our asymptotic results. One possible approach is to use the work of \citet{neuvial2008asymptotic}. However, it would require computing the Hadamard derivative of the functional involved in our analysis, which might be very challenging. Finally, another interesting future work could be to develop versions of ADDOW that ensure a finite sample FDR control property: this certainly requires a different optimization process, which will make the power optimality difficult to maintain.
\section{Proofs of Theorems~\ref{thm_fdr} and~\ref{thm_pow}}
\label{section_proof_thm}
\subsection{{Further generalization}}
\label{subsection_notation}
Define for any $u$ and $W$
$$\widehat H_{W}(u)=m^{-1}\left|R_{u,W} \cap\mathcal{H}_0 \right| =m^{-1}\sum_{g=1}^G \sum_{i=1}^{m_g}\mathds{1}_{\{p_{g,i}\leq \alpha u W_g(u) ,H_{g,i}=0\}} $$
and
$$\widehat P_W(u)=m^{-1}\left|R_{u,W} \cap\mathcal{H}_1 \right| =\widehat G_{W}(u)-\widehat H_{W}(u) ,$$
so that $\FDP\left( R_{u,W}\right)=\frac{\widehat H_{W}(u)}{\widehat G_{W}(u) \vee m^{-1}}$ and $\Pow\left(R_{u,W}\right)=\mathbb{E}\left[ \widehat P_W(u) \right]$ (recall that $\MWBH\left(W\right)=R_{\hat u_W,W}$).
Also define $\widehat D_g(t)=m_g^{-1}\sum_{i=1}^{m_g}\ind{p_{g,i}\leq t}$ so that $\widehat G_{W}(u)=\sum_g \frac{m_g}{m}\widehat D_g(\alpha u W_g(u))$.
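For illustration, $\widehat G_W$ can be evaluated directly from the per-group empirical c.d.f.s. The following hypothetical sketch uses constant weight vectors for simplicity (in general $W_g$ may depend on $u$); the toy numbers are ours.

```python
def D_hat(pvals_g, t):
    """Empirical c.d.f. of the p-values of group g at threshold t."""
    return sum(p <= t for p in pvals_g) / len(pvals_g)

def G_hat(pgroups, w, u, alpha):
    """G_hat_w(u) = sum_g (m_g / m) * D_hat_g(alpha * u * w_g)."""
    m = sum(len(pg) for pg in pgroups)
    return sum(len(pg) / m * D_hat(pg, alpha * u * wg)
               for pg, wg in zip(pgroups, w))

# toy two-group example with a constant weight vector w
pgroups = [[0.001, 0.2, 0.8], [0.04, 0.5, 0.9, 0.95]]
print(G_hat(pgroups, w=[2.0, 0.5], u=0.5, alpha=0.05))  # approximately 1/7
```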
For the sake of generality, $\widehat D_g$ is not the only estimator of $D_g$ (defined in equation~\eqref{def_Dg}) that we will use to prove our results (for example, we can use the LCM of $\widehat D_g$, denoted $\LCM(\widehat D_g)$, see Section~\ref{section_conclusion}). So let us slightly increase the scope of the MWBH class by defining $ \widetilde G_W(u)=\sum_g\frac{m_g}{m}\widetilde D_g(\alpha u W_g(u))$ for any estimator $\widetilde D_g$ such that $\widetilde D_g$ is nondecreasing, $\widetilde D_g(0)=0$, $\widetilde D_g(1)=1$ and $\left\|\widetilde D_g-D_g \right\|\overset{\mathbb{P}}{\longrightarrow}0$, where $\|\cdot\|$ is the sup norm on bounded functions over their domain of definition. Note that at least $(D_g)_g$, $(\widehat D_g)_g$ (by Lemma~\ref{supas}), and $\left(\LCM(\widehat D_g)\right)_g$ (by Lemma~\ref{lm_lcm}) are eligible.
If $W$ is such that $ \widetilde G_W$ is nondecreasing, we then define the generalized MWBH as
\begin{equation}
\GMWBH\left( (\widetilde D_g)_g ,W\right) =R_{\tilde u_{ W}, W} \text{ where }
\tilde u_{W} = \mathcal{I}\left( \widetilde G_{W} \right).\notag
\end{equation}
If $(\widetilde D_g)_g$ is such that we can define, for all $u\in[0,1]$,
\begin{equation}
\widetilde W^*(u)\in\argmax_{w\in K^m}\widetilde G_w(u),
\label{def_wtilde}
\end{equation}
we define the generalized ADDOW by
$$\GADDOW\left( (\widetilde D_g)_g \right) = \GMWBH\left( (\widetilde D_g)_g, \widetilde W^* \right),$$
the latter being well defined because $ \widetilde G_{\widetilde W^*}$ is nondecreasing (by a proof similar to the one of Lemma~\ref{croissance}). Note that for any continuous $\widetilde D_g$, such as $\LCM(\widehat D_g)$ or $D_g$ itself, the arg max in~\eqref{def_wtilde} is non empty and GADDOW can then be defined.
What we show below are more general theorems, valid for any $\GADDOW\left( (\widetilde D_g)_g \right)$. Our proofs combine several technical lemmas, deferred to Sections~\ref{subsection_asymptotical} and~\ref{subsection_technic}, which build on the previous work of \citet{roquain2009optimal,hu2010false,zhao2014weighted}.
\begin{rk}
$\GADDOW\left( (\widetilde D_g)_g \right)$ with $\widetilde D_g=\LCM(\widehat D_g)$ and $\hat\pi_{g,0}=1$ is not exactly the same as IHW with modification (E1).
In our notation, their procedure is $\WBH\left( \widetilde W^*\left( \mathcal{I}\left( \widetilde G_{\widetilde W^*} \right) \right) \right).$
\end{rk}
\subsection{Proof of Theorem~\ref{thm_fdr}}
\label{subsection_proof_thm_fdr}
We have
$$\FDP\left( \GMWBH\left(\left(\widetilde D_g\right)_g, \widetilde W^*\right) \right)=\frac{\widehat H_{\widetilde W^*}(\tilde u)}{\widehat G_{\widetilde W^*}(\tilde u)\vee m^{-1}}\in[0,1],$$
where $\tilde u$ is defined as in~\eqref{def_uwtilde}, so by Lemma~\ref{BOUMH} we deduce that
$$\FDP\left(\GADDOW\left((\widetilde D_g)_g \right)\right)\underset{m\to\infty}{\overset{\mathbb{P}}{\longrightarrow}} \frac{ H^\infty_{W^*}(u^*)}{G^\infty_{W^*}( u^*)}=\frac{ H^\infty_{W^*}(u^*)}{u^*} , $$
and then
$$\lim_{m\to\infty}\FDR\left(\GADDOW\left((\widetilde D_g)_g \right)\right)={u^*}^{-1} H^\infty_{W^*}(u^*),$$
where $G^\infty_{W^*}$, $H^\infty_{W^*}$ and $u^*$ are defined in Section~\ref{subsection_asymptotical}.
If $\alpha \geq\bar \pi_0$, $u^*=1$ by Lemma~\ref{prop_max} and $\alpha u^* W^*_g(u^*)\geq1$ by Lemma~\ref{prop_argmax} so ${u^*}^{-1} H^\infty_{W^*}(u^*)=\pi_0\leq\bar \pi_0\leq\alpha$.
If $\alpha \leq\bar \pi_0$, $\alpha u^* W^*_g(u^*)\leq1$ by Lemma~\ref{prop_argmax} so $U(\alpha u^* W^*_g(u^*))=\alpha u^* W^*_g(u^*)$ for all $g$ and then
\begin{align}
{u^*}^{-1} H^\infty_{W^*}(u^*)&=\alpha\sum_g\pi_g\pi_{g,0}W^*_g(u^*)\notag\\
&\leq\alpha\sum_g\pi_g\bar \pi_{g,0}W^*_g(u^*)=\alpha.\label{egg1}
\end{align}
Moreover if we are in \eqref{CE} case (that is $\bar \pi_{g,0}=\pi_{g,0}$) the inequality above becomes an equality.
Finally if we are in \eqref{ED}+\eqref{EE} case (that is $\pi_{g,0}=\pi_0$ and $\bar \pi_{g,0}=\bar \pi_{0}$) we write
\begin{align}
{u^*}^{-1} H^\infty_{W^*}(u^*)&=\alpha\sum_g\pi_g\pi_{0}W^*_g(u^*)\notag\\
&=\frac{\pi_0}{\bar \pi_0}\alpha\sum_g\pi_g\bar \pi_{0}W^*_g(u^*)\notag\\
&=\frac{\pi_0}{\bar \pi_0}\alpha.\label{egg2}
\end{align}
The equalities in~\eqref{egg1} and~\eqref{egg2} are due to $\sum_g\pi_g\bar \pi_{g,0} W^*_g(u^*)=1$ (by Lemma~\ref{prop_argmax}).
\subsection{Proof of Theorem~\ref{thm_pow}}
\label{subsection_proof_thm_pow}
First, in any case,
$$\widehat P_{\widetilde W^*}(\tilde u)= \widehat G_{\widetilde W^*}(\tilde u)-\widehat H_{\widetilde W^*}(\tilde u)\overset{a.s.}{\longrightarrow} G^\infty_{W^*}(u^*)-H^\infty_{W^*}(u^*)=P^\infty_{W^*}(u^*) $$
by Lemma~\ref{BOUMH}, where $P^\infty_{W^*}$ is defined in Section~\ref{subsection_asymptotical}. Hence the limit of $ \Pow\left( \GADDOW\left((\widetilde D_g)_g \right) \right)$ is $ P^\infty_{W^*}(u^*)$.
For the rest of the proof, we assume we are in case \eqref{CE} or \eqref{ED}+\eqref{EE}, which implies by Lemma~\ref{ASTUCE} that $W^*(u)\in\argmax_{w\in K^\infty}P^\infty_w(u)$ for all $u$, and that $P^\infty_{W^*}$ is nondecreasing. We split the proof into two parts. In the first part we assume that, for all $m$, $\widehat W$ is a weight vector $\hat w \in K^m$, hence not depending on $u$. In the second part we conclude with a general sequence of weight functions.
\paragraph{Part 1}$\widehat W=\hat w \in K^m$ for all $m$. Let $\ell=\limsup \Pow\left(\MWBH\left(\hat w \right) \right)$. Up to extracting a subsequence, we can assume that $\ell=\lim\Esp{\widehat P_{\hat w}(\hat u_{\hat w})}$ and $\hat\pi_{g,0}\overset{a.s.}{\longrightarrow}\bar \pi_{g,0}$ for all $g$.
Define the event
\begin{equation*}
\widetilde \Omega=\left\{
\begin{array}{rcc}
\forall g,\,\hat\pi_{g,0} &\longrightarrow &\bar \pi_{g,0} \\
\sup_{w\in \mathbb{R}_+^G } \left\| \widehat P_w-P^\infty_w \right\| &\longrightarrow &0 \\
\sup_{w\in \mathbb{R}_+^G } \left\| \widehat G_w-G^\infty_w \right\| &\longrightarrow & 0
\end{array}
\right\}
\end{equation*}
then $\Pro{\widetilde\Omega}=1$ (by Lemma~\ref{supas}), $\ell=\lim \Esp{\widehat P_{\hat w}(\hat u_{\hat w})\mathds{1}_{\widetilde\Omega}}$ and, by the reverse Fatou lemma, $\ell\leq\Esp{\limsup \widehat P_{\hat w}(\hat u_{\hat w})\mathds{1}_{\widetilde\Omega}}$.
Now assume that $\widetilde\Omega$ occurs and fix a realization of it; the rest of this part is deterministic. Let $\ell'=\limsup\widehat P_{\hat w}(\hat u_{\hat w})$. The sequences $\left(\frac{m}{m_g\hat\pi_{g,0}}\right)$ converge and are therefore bounded, hence the sequence $(\hat w)$ is also bounded. By compactness, once again up to extracting a subsequence, we can assume that $\ell'=\lim\widehat P_{\hat w}(\hat u_{\hat w})$ and that $\hat w$ converges to some $w^{cv}$. By letting $m\to\infty$ in the relation $\sum\frac{m_g}{m}\hat\pi_{g,0}\hat w_g\leq1$, it appears that $w^{cv}$ belongs to $K^\infty$. Moreover, $\| \widehat G_{\hat w}-G^\infty_{w^{cv}} \| \leq \sup_w \| \widehat G_w-G^\infty_w\| +\| G^\infty_{\hat w} -G^\infty_{w^{cv}} \| \to0$, so by Remark~\ref{weight_vector} $\hat u_{\hat w}\to u^\infty_{w^{cv}}$
and finally
\begin{align*}
\left| \widehat P_{\widehat w}(\hat u_{\widehat w})-P^\infty_{w^{cv}}(u^\infty_{w^{cv}}) \right| &\leq \sup_{w\in \mathbb{R}_+^G } \left\| \widehat P_w-P^\infty_w \right\| +\left|P^\infty_{\widehat w}(\hat u_{\widehat w}) -P^\infty_{w^{cv}}(u^\infty_{w^{cv}}) \right|\\
&\overset{}{\longrightarrow}0,
\end{align*}
by continuity of $F_g$ and because $\omega\in\widetilde\Omega$. So $\ell'=P^\infty_{w^{cv}}(u^\infty_{w^{cv}})\leq P^\infty_{W^*}(u^\infty_{w^{cv}})$ by maximality. Note also that $G^\infty_{w^{cv}}(\cdot)\leq G^\infty_{W^*}(\cdot)$, which implies that $u^\infty_{w^{cv}}\leq u^\infty_{W^*}=u^*$, so $\ell'\leq P^\infty_{W^*}(u^*)$ because $P^\infty_{W^*}$ is nondecreasing. Finally, $\limsup \widehat P_{\hat w}(\hat u_{\hat w})\mathds{1}_{\widetilde\Omega}\leq P^\infty_{W^*}(u^*)$ for any realization of $\widetilde\Omega$; by integrating, we get $\ell\leq P^\infty_{W^*}(u^*)$, which concludes Part 1.
\paragraph{Part 2}Now consider the case where $\widehat W$ is a weight function $u\mapsto \widehat W(u)$. Observe that
$$\hat u_{\widehat W}= \widehat G_{\widehat W}(\hat u_{\widehat W})= \widehat G_{\widehat W(\hat u_{\widehat W})}(\hat u_{\widehat W}), $$
so by definition of $\mathcal{I}(\cdot)$, $\hat u_{\widehat W}\leq \hat u_{\widehat W(\hat u_{\widehat W})}$, and then
$$\widehat P_{\widehat W}(\hat u_{\widehat W})=\widehat P_{\widehat W(\hat u_{\widehat W})}(\hat u_{\widehat W})\leq \widehat P_{\widehat W(\hat u_{\widehat W})}\left(\hat u_{\widehat W(\hat u_{\widehat W})}\right) .$$
As a consequence, $\Pow\left(\MWBH\left(\widehat W\right)\right)\leq \Pow\left(\MWBH\left(\widehat W(\hat u_{\widehat W})\right)\right)$. Finally, apply part 1 to the weight vector sequence $\left(\widehat W(\hat u_{\widehat W})\right)$ to conclude.
\begin{rk}
We just showed that for every MWBH procedure, there is a corresponding WBH procedure with better power. In particular, by defining $\hat u=\hat u_{\widehat W^*}$ the ADDOW threshold, we showed that $\hat u\leq \hat u_{\widehat W^*(\hat u)}$. But $\widehat G_{\widehat W^*}\geq \widehat G_{\hat w}$ and then $\hat u\geq \hat u_{\hat w}$ for any $\hat w$. Hence $\hat u = \hat u_{\widehat W^*(\hat u)}$ and ADDOW is the WBH procedure associated with the weight vector $\widehat W^*(\hat u)$.
\label{MWBH<wbh}
\end{rk}
\section*{Acknowledgments}
I thank my advisors Etienne Roquain and Pierre Neuvial for the constant and deep improvements they enabled in many areas of the paper. I also thank Christophe Giraud and Patricia Reynaud-Bouret for useful discussions. This work has been supported by CNRS (PEPS FaSciDo) and ANR-16-CE40-0019.
\bibliographystyle{imsart-nameyear}
\section{Introduction}
The standard model (SM) of particle physics has been enormously successful in describing all phenomena at the highest attainable energies thus far. Yet, it is widely believed to be only an effective description of a more complete theory which is valid at the highest energy scales. Of particular theoretical interest is supersymmetry (SUSY)~\cite{ref:SUSY0,ref:SUSY1,ref:SUSY2,ref:SUSY3,ref:SUSY4}
which solves the hierarchy problem~\cite{ref:hierarchy1,ref:hierarchy2} of the SM by compensating for each of the fermionic and bosonic degrees of freedom in the SM with a supersymmetric bosonic and fermionic degree of freedom, respectively. The resulting superfields have the same quantum numbers as their SM counterparts, except for spin.
Since no SUSY particle has been observed so far, they must have higher masses than their SM partners, implying that SUSY is a broken symmetry.
At the Large Hadron Collider (LHC) at CERN,
supersymmetric particles, if they exist, are predicted to be produced dominantly via QCD, through the fusion of
two gluons into a pair of gluinos, a pair of squarks, or a gluino and a squark.
The production cross-section for massive squarks or gluinos falls as a power law with the squark or gluino mass, following the available energy $\sqrt{\hat s}$ in the partonic centre-of-mass frame.
The LHC, with a proton-proton centre-of-mass energy $\sqrt{s}$ of 7~\ensuremath{\,\text{Te\hspace{-.08em}V}}\xspace, is a copious source of high-energy partons, which makes it possible to probe squark and gluino masses beyond the limits previously set at LEP and at the Tevatron.
Squarks and gluinos initiate a decay cascade in which quarks are produced, until the lightest supersymmetric particle (LSP) is created.
The dynamics of the cascade depends on the SUSY model under consideration, and in particular on the masses of the supersymmetric particles.
If R-parity is conserved, the LSP is unable to decay into SM particles and is therefore stable.
If, in addition, the LSP is a neutralino, it is weakly interacting and thus escapes detection, leading to missing transverse energy (\ensuremath{E_{\mathrm{T}}^{\text{miss}}}\xspace) in the final state.
Typical hadronic decay modes for gluinos ($\tilde g$) and squarks ($\tilde q$) are $\tilde q \rightarrow q \chi_1^0$ and $\tilde g \rightarrow q q \chi_1^0$.
In these examples, the squark and the gluino directly decay to the lightest neutralino $\chi_1^0$, the gluino doing so via an off-shell squark. As a result, squark pair production usually gives rise to more than two jets, and gluino pair production to more than four jets. The transverse momenta of the jets are driven by the difference in mass between the squark or gluino and the neutralino.
Leptons can appear in the final state, for example if heavy neutralinos ($\tilde \chi_2^0 \rightarrow l^\pm \tilde l^\mp \rightarrow l^\pm l^\mp \chi_1^0 $) or charginos ($\tilde \chi_1^\pm \rightarrow \chi_1^0 W^\pm$) are created in the decay cascades of the squarks or gluinos.
The CMS detector~\cite{ref:CMS} is used to investigate many final states that could arise from the strong production of squarks and gluinos. An effort is made to make these final states independent, so that all analyses can ultimately be easily combined.
Because the presence of leptons is not guaranteed, investigating hadronic final states with jets and high missing transverse energy is the most efficient way to look for SUSY.
Dealing with the huge QCD background is however a challenge.
In CMS, three complementary approaches are followed. The $\alpha_T$ analysis, presented elsewhere~\cite{ref:CMSalphaT} makes use of the $\alpha_T$ variable to completely remove the QCD background from the search region, leaving solely electroweak backgrounds, namely \ensuremath{{t\overline{t}}+jets}\xspace, W+jets\xspace and \ensuremath{{\rm Z}\rightarrow \nu \nu}+jets\xspace.
The jets + \ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace analysis, summarized in Section~\ref{sec:RA2}, consists of looking for an excess of multi-jet events at high \ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace, an approximation of the \ensuremath{E_{\mathrm{T}}^{\text{miss}}}\xspace computed as the opposite of the vector sum of the jet transverse momenta.
This approach is the most efficient of the three, but requires the QCD background to be accurately controlled.
The so-called razor analysis, presented in Section~\ref{sec:Razor} relies on novel variables to reduce the QCD background to a negligible level in the search region, and to predict the background contribution.
The final search sample of this analysis has about 30\% of events in common with the jets+\ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace analysis. While the razor analysis is less efficient than the jets+\ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace analysis, it is less sensitive to the effects of initial state radiation. Requiring leptons in the final state, in addition to jets and missing transverse energy, strongly reduces the standard-model background.
With one isolated lepton, the QCD and \ensuremath{{\rm Z}\rightarrow \nu \nu}+jets\xspace backgrounds get suppressed.
With two opposite-sign leptons~\cite{ref:CMSOppositeSign}, the W+jets\xspace background becomes negligible, and several handles can be used for an accurate estimation of the remaining \ensuremath{{t\overline{t}}}\xspace background from the data. Asking for two same-sign leptons, or for three or more leptons, dramatically suppresses the standard-model background, enabling a very clean search for physics beyond the standard model, such as the production of squarks and gluinos, which can naturally lead to such final states.
In these proceedings, the emphasis is put on the most recent fully hadronic analyses, and several leptonic analyses are briefly summarized. Other important search fields are also being covered by CMS but could not be presented here. For example, in the context of the general gauge-mediated SUSY breaking with the lightest neutralino as the next-to-lightest supersymmetric particle and the gravitino as the lightest, a natural signature for squark or gluino production is the presence of two photons and \ensuremath{E_{\mathrm{T}}^{\text{miss}}}\xspace in the final state~\cite{ref:CMSDiPhoton}.
\section{Fully hadronic searches}
\subsection{Jets + \ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace analysis}
\label{sec:RA2}
The data used in this analysis~\cite{ref:CMSRA2} are collected using triggers requiring a minimal jet activity $\ensuremath{H_{\mathrm{T}}}\xspace^{\rm trig}$, measured as the scalar sum of the transverse momentum of the calorimeter jets reconstructed at trigger level.
The rapid increase in instantaneous luminosity during the 2010 data taking resulted in the threshold on $\ensuremath{H_{\mathrm{T}}}\xspace^{\rm trig}$ being raised from $100$ to $140$ and finally $150 \ensuremath{\,\text{Ge\hspace{-.08em}V}}\xspace$.
The particle flow algorithm~\cite{PFT-09-001,PFT-10-002} identifies and reconstructs all particles produced in the collision, namely charged hadrons, photons, neutral hadrons, muons, and electrons. The resulting list of particles is then used to reconstruct particle jets, compute the \ensuremath{E_{\mathrm{T}}^{\text{miss}}}\xspace, and quantify the lepton isolation.
The event selection starts from a loose validation region. On top of this baseline selection, tighter selection criteria are applied to define the search regions. The baseline selection requirements after trigger amount to selecting events with (i) at least three jets with $\ensuremath{p_{\mathrm{T}}}\xspace> 50\ensuremath{{\,\text{Ge\hspace{-.08em}V\hspace{-0.16em}/\hspace{-0.08em}}c}}\xspace$ and $| \eta | < 2.5$; (ii) $\ensuremath{H_{\mathrm{T}}}\xspace > 300 \ensuremath{\,\text{Ge\hspace{-.08em}V}}\xspace$, where \ensuremath{H_{\mathrm{T}}}\xspace is defined as the scalar sum of the transverse momenta of all the jets having $\ensuremath{p_{\mathrm{T}}}\xspace>50 \ensuremath{{\,\text{Ge\hspace{-.08em}V\hspace{-0.16em}/\hspace{-0.08em}}c}}\xspace$ and $|\eta|<2.5$; (iii) $\ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace> 150\ensuremath{\,\text{Ge\hspace{-.08em}V}}\xspace$, where \ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace is defined as the magnitude of the vectorial sum of the \ensuremath{p_{\mathrm{T}}}\xspace of the jets having, in this case, $\ensuremath{p_{\mathrm{T}}}\xspace>30 \ensuremath{{\,\text{Ge\hspace{-.08em}V\hspace{-0.16em}/\hspace{-0.08em}}c}}\xspace$ and $| \eta | < 5$; (iv) no isolated electron or muon with $\ensuremath{p_{\mathrm{T}}}\xspace > 10\,\ensuremath{{\,\text{Ge\hspace{-.08em}V\hspace{-0.16em}/\hspace{-0.08em}}c}}\xspace$. Additionally, the \ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace vector is required not to be aligned with any of the three leading jets, to reject QCD multi-jet events in which a single mis-measured jet yields high \ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace.
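The \ensuremath{H_{\mathrm{T}}}\xspace and \ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace definitions above can be sketched as follows. This is a toy illustration with jets represented as $(p_{\mathrm{T}}, \eta, \phi)$ tuples, not the CMS reconstruction code; the example events are invented.

```python
import math

def ht(jets):
    """Scalar sum of pT for jets with pT > 50 GeV/c and |eta| < 2.5."""
    return sum(pt for (pt, eta, phi) in jets if pt > 50 and abs(eta) < 2.5)

def htmiss(jets):
    """Magnitude of the negative vector sum of the jet pT,
    for jets with pT > 30 GeV/c and |eta| < 5."""
    sel = [(pt, phi) for (pt, eta, phi) in jets if pt > 30 and abs(eta) < 5]
    px = -sum(pt * math.cos(phi) for pt, phi in sel)
    py = -sum(pt * math.sin(phi) for pt, phi in sel)
    return math.hypot(px, py)

def passes_baseline(jets):
    """Baseline cuts: >= 3 jets, HT > 300 GeV, HTmiss > 150 GeV."""
    njets = sum(1 for (pt, eta, _) in jets if pt > 50 and abs(eta) < 2.5)
    return njets >= 3 and ht(jets) > 300 and htmiss(jets) > 150

# a balanced di-jet event has HTmiss ~ 0; an unbalanced tri-jet does not
dijet = [(200, 0.1, 0.0), (200, -0.2, math.pi)]
trijet = [(400, 0.0, 0.0), (150, 0.5, 2.8), (100, -1.0, 3.6)]
print(htmiss(dijet), passes_baseline(trijet))
```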
Two search regions were defined in this inclusive jets-plus-missing-momentum search. The first search selection, defining the ``high-\ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace search region'', tightens the baseline cuts with an $\ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace > 250\ensuremath{\,\text{Ge\hspace{-.08em}V}}\xspace$ requirement, to search for a generic invisible particle in a low background environment. The second selection adds a $\ensuremath{H_{\mathrm{T}}}\xspace > 500\ensuremath{\,\text{Ge\hspace{-.08em}V}}\xspace$ cut to the baseline selection, yielding the ``high-\ensuremath{H_{\mathrm{T}}}\xspace search region'', sensitive to cascade decays of high-mass new-physics particles where more energy is transferred to visible particles and less to the dark-matter candidate.
The main background contributions in the two search regions are estimated using data-driven techniques.
Due to its huge cross-section, QCD multi-jet production can give rise to high \ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace because of the finite jet energy resolution, or of rare but dramatic mis-measurements of the jet energy induced by various instrumental effects.
The most important instrumental effects were identified in the simulation to be related to missing channels in the ECAL, and to jet punch-through giving rise to multi-TeV fake muons in the particle jets. The simulation was used to design dedicated event filters to remove such events.
The QCD background was estimated using the so-called ``rebalance+smear'' technique. An inclusive multi-jet sample of events is selected. The energy of each jet is first rescaled to obtain a null \ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace using a maximum-likelihood fit taking into account the jet energy resolution in the process. This rescaling produces a seed event from which all sources of \ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace, possibly genuine, have been removed.
The jets are then smeared by a simulated jet energy response distribution. The simulated distribution is corrected for differences between the data and the simulation by factors obtained from di-jet asymmetry measurements.
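As a caricature of this smearing step, the following toy sketch applies a Gaussian response to a perfectly rebalanced seed event and records how often the smeared pseudo-event is promoted to high \ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace. The real procedure uses the measured, non-Gaussian response distribution and a likelihood-based rebalancing; all numbers here are invented.

```python
import math
import random

def smear_event(jet_pts_phis, sigma_rel, rng):
    """Apply a multiplicative Gaussian response to each rebalanced jet
    and return the resulting HTmiss (toy model of the smearing step)."""
    px = py = 0.0
    for pt, phi in jet_pts_phis:
        r = max(0.0, rng.gauss(1.0, sigma_rel))  # toy jet response factor
        px += r * pt * math.cos(phi)
        py += r * pt * math.sin(phi)
    return math.hypot(px, py)

# a perfectly rebalanced (HTmiss = 0) symmetric tri-jet seed event
seed = [(300, 0.0), (300, 2.0 * math.pi / 3.0), (300, 4.0 * math.pi / 3.0)]
rng = random.Random(1)
pseudo = [smear_event(seed, sigma_rel=0.1, rng=rng) for _ in range(10000)]
frac_high = sum(h > 150 for h in pseudo) / len(pseudo)
print(frac_high)  # fraction of pseudo-events promoted to high HTmiss
```

Even with a 10\% resolution, only a tiny fraction of the pseudo-events migrates above the 150\ensuremath{\,\text{Ge\hspace{-.08em}V}}\xspace threshold, which is why the non-Gaussian tails of the measured response are essential for a realistic QCD prediction.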
The other standard-model background events contributing to the search regions feature at least one neutrino in the final state, hence true \ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace.
W+jets\xspace events, where the W possibly comes from a top quark and decays to a lepton and a neutrino, end up in the search region when the lepton from the W decay is not identified in the analysis, either because it is a $\tau$ decaying hadronically, or because it is an electron or muon that is lost (not caught by the lepton veto).
The contribution of this source of background is estimated by selecting from the data a control sample of events with an isolated muon and jets.
To predict the number of events with a lost lepton in the search region, the number of events in this control sample is corrected for lepton reconstruction and identification efficiency by factors measured using $Z$ events in the data, and by acceptance factors from the simulation.
To estimate the number of events in which a tau decays hadronically, the muon in the control sample is replaced by a jet representing the hadronically decaying tau, which is taken into account when applying the search selections.
The uncertainty on the W+jets\xspace background estimation (including \ensuremath{{t\overline{t}}}\xspace) is dominated by the statistical error on the number of events in the control sample.
The last source of background, especially important because it dominates at high \ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace, is \ensuremath{{\rm Z}\rightarrow \nu \nu}+jets\xspace.
As no \ensuremath{{\rm Z}\rightarrow e e}\xspace or \ensuremath{{\rm Z}\rightarrow \mu \mu}\xspace events are observed in the search regions, these processes cannot be used to predict the \ensuremath{{\rm Z}\rightarrow \nu \nu}+jets\xspace background contribution. This contribution is instead estimated using a control sample of isolated $\gamma+$jets events, in which the photon is ignored when applying the search selections. This strategy exploits the fact that at high boson \ensuremath{p_{\mathrm{T}}}\xspace, the Z and $\gamma$ behave in a similar way, apart from electroweak coupling differences, and small residual mass effects.
The number of events in the control sample is corrected by a $Z/\gamma$ cross-section correction factor obtained from the simulation. Several other effects, such as the contamination of the control sample by multi-jet QCD events, or the photon reconstruction and identification efficiencies, are taken into account. The error on this background prediction comes from the statistical error on the number of events in the control sample, and from systematic errors mostly related to the available number of events in the simulated samples and to the estimation of the contamination of the control sample by multi-jet events. Table~\ref{tab:RA2} summarizes the results of the analysis, and shows that no excess of events is found in the data. The limit set on the number of signal events is interpreted in the context of various SUSY models in Section~\ref{sec:ModelInterpretation}.
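The $\gamma+$jets transfer can be summarised by the product of corrections listed above. The sketch below is purely illustrative; every input value is a hypothetical placeholder, not a CMS measurement.

```python
def znunu_prediction(n_gamma, purity, eff_gamma, r_z_over_gamma):
    """Schematic gamma+jets transfer: the photon control yield is purified of
    multi-jet contamination, corrected for photon reconstruction/identification
    efficiency, and scaled by the simulated Z/gamma cross-section ratio."""
    return n_gamma * purity / eff_gamma * r_z_over_gamma

# hypothetical inputs chosen only to show the arithmetic of the method
pred = znunu_prediction(n_gamma=60, purity=0.95, eff_gamma=0.85, r_z_over_gamma=0.45)
```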
\begin{table}[htb]
\begin{center}
\caption{Predicted and observed event yields for the baseline selection, and for the high-\ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace and high-\ensuremath{H_{\mathrm{T}}}\xspace search selections.
The last line reports the 95\% CL limit on the number of signal events given the observed number of events, and the total predicted background.
}
\label{tab:RA2}
{
\begin{tabular}{|l|rr|rr|rr|}
\hline
Background & \multicolumn{2}{c|}{Baseline} & \multicolumn{2}{c|}{High-\ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace} & \multicolumn{2}{c|}{High-\ensuremath{H_{\mathrm{T}}}\xspace} \\
& \multicolumn{2}{c|}{selection} & \multicolumn{2}{c|}{selection} & \multicolumn{2}{c|}{selection} \\\hline
$\ensuremath{{\rm Z}\rightarrow \nu \nu}+jets\xspace$ ($\gamma$+jets method) & 26.3 & $\pm 4.8$ & 7.1 & $\pm 2.2$ & 8.4 & $\pm 2.3$ \\
$W/\ensuremath{{t\overline{t}}}\xspace\to e,\mu$+X & 33.0 & $\pm 8.1 $ & 4.8 & $\pm 1.9 $ & 10.9 & $\pm 3.4$ \\
$W/\ensuremath{{t\overline{t}}}\xspace\to \tau_{\mbox{\tiny hadr}}$+X & 22.3 & $\pm 4.6 $ & 6.7 & $\pm 2.1 $ & 8.5 & $\pm 2.5$ \\
QCD & 29.7 & $\pm 15.2$ & 0.16 & $\pm 0.10$ & 16.0 & $\pm 7.9$ \\
\hline
Total background estimated from data & 111.3 & $\pm 18.5$ & 18.8 & $\pm 3.5$ & 43.8 & $\pm 9.2$ \\
Observed in $36 \pbinv$ of data & 111\hspace{3mm} & & 15\hspace{3mm} & & 40\hspace{3mm} & \\
\hline
95\% C.L. limit on signal events & 40.4 & & 9.6 & & 19.6 & \\
\hline
\end{tabular}
}
\end{center}
\end{table}
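The limits in the last row of Table~\ref{tab:RA2} can be illustrated with a simple Poisson counting experiment. The sketch below uses a plain classical limit that ignores background uncertainty, not the exact procedure used by CMS, so it will not reproduce the quoted numbers.

```python
import math

def poisson_cdf(k, mu):
    """P(N <= k) for a Poisson distribution with mean mu."""
    return math.exp(-mu) * sum(mu ** i / math.factorial(i) for i in range(k + 1))

def upper_limit(n_obs, background, cl=0.95, step=0.01):
    """Smallest signal s such that P(N <= n_obs | b + s) <= 1 - cl: a plain
    counting limit without background uncertainty or the CLs construction."""
    s = 0.0
    while poisson_cdf(n_obs, background + s) > 1.0 - cl:
        s += step
    return s

# e.g. the high-HTmiss selection: 15 observed, 18.8 expected background
limit = upper_limit(n_obs=15, background=18.8)
```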
\vspace{-0.7cm}
\subsection{Razor analysis}
\label{sec:Razor}
This analysis relies on the novel ``razor'' variables~\cite{ref:RazorRogan} to define search regions and predict the background contribution in a data-driven way. For the pair-production of two heavy particles of mass $M_{\tilde q}$, each decaying into a visible part and an invisible part of mass $M_{\tilde\chi}$, the variable $M_R$ provides an approximation of the quantity
$M_\Delta \equiv (M_{\tilde{q}}^{2}-M_{\tilde\chi}^{2})/M_{\tilde{q}}$.
The search consists of looking for a signal peak in the $M_R$ distribution, on top of a steeply falling standard-model background distribution. Cutting on the dimensionless $R$ variable strongly reduces the standard-model background and gives its $M_R$ distribution an easy-to-control exponential shape.
The razor analysis~\cite{ref:CMSRazor} defines a set of physics objects, namely jets, isolated electrons, and isolated muons. All of these objects are used in the computation of $R$ and $M_R$, which proceeds in the following way. The objects are first grouped into two ``mega-jets'' using a hemisphere algorithm. Each mega-jet ideally corresponds to the visible part of the decay products of one of the pair-produced heavy particles. The $R$ and $M_R$ variables are then computed using the 4-momenta of the two mega-jets, and the \ensuremath{E_{\mathrm{T}}^{\text{miss}}}\xspace vector. Depending on the presence of an isolated electron or muon in the final state, the events are classified into three independent ``boxes'', the electron box, the muon box, and the hadronic box. The high $R$, high $M_R$ region of each box constitutes an independent search region. The razor analysis is thus both a fully hadronic and a single lepton analysis. In these proceedings, however, the focus is put on the more efficient fully hadronic sector, for which the low-$M_R$ region of the leptonic boxes is used as a control sample.
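The mega-jet computation can be sketched as follows, assuming the standard razor definitions of $M_R$ and the transverse mass $M_T^R$ from~\cite{ref:RazorRogan} (with $R = M_T^R/M_R$); the four-vectors below are toy inputs.

```python
import math

def razor_variables(j1, j2, met):
    """R and M_R from two mega-jets j = (E, px, py, pz) and met = (mex, mey),
    assuming the standard razor definitions."""
    e1, px1, py1, pz1 = j1
    e2, px2, py2, pz2 = j2
    mex, mey = met
    # M_R: invariant under longitudinal boosts
    m_r = math.sqrt((e1 + e2) ** 2 - (pz1 + pz2) ** 2)
    met_mag = math.hypot(mex, mey)
    pt1, pt2 = math.hypot(px1, py1), math.hypot(px2, py2)
    # transverse mass M_T^R built from the mega-jets and the MET vector
    mtr2 = (met_mag * (pt1 + pt2) - (mex * (px1 + px2) + mey * (py1 + py2))) / 2.0
    return m_r, math.sqrt(mtr2) / m_r

# back-to-back massless mega-jets with no MET give R = 0 (QCD-like topology)
m_r, r = razor_variables((100.0, 100.0, 0.0, 0.0), (100.0, -100.0, 0.0, 0.0), (0.0, 0.0))
```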
\begin{wrapfigure}[16]{r}{3.5in}
\begin{center}
\vspace{-0.7cm}
\includegraphics[width=0.49\textwidth]{HADBOX_R50_log.pdf}
\caption{Distribution of $M_R$ in the data, and background prediction for the ``razor'' analysis, with $R > 0.5$. The search region is defined by the additional requirement $M_R > 500\,\ensuremath{{\,\text{Ge\hspace{-.08em}V\hspace{-0.16em}/\hspace{-0.08em}}c^\text{2}}}\xspace$. }
\label{fig:HADBOX}
\end{center}
\end{wrapfigure}
The event sample was collected using triggers based on the presence of a single electron, a single muon, and on $H_T^{\rm trig}$. The jets are required to have $\ensuremath{p_{\mathrm{T}}}\xspace > 30$\,\ensuremath{{\,\text{Ge\hspace{-.08em}V\hspace{-0.16em}/\hspace{-0.08em}}c}}\xspace and $|\eta|<3$, electrons to have $\ensuremath{p_{\mathrm{T}}}\xspace > 20$\,\ensuremath{{\,\text{Ge\hspace{-.08em}V\hspace{-0.16em}/\hspace{-0.08em}}c}}\xspace and $|\eta|<2.5$, and muons to have $\ensuremath{p_{\mathrm{T}}}\xspace > 20$\,\ensuremath{{\,\text{Ge\hspace{-.08em}V\hspace{-0.16em}/\hspace{-0.08em}}c}}\xspace and $|\eta|<2.1$. The difference in azimuth between the two mega-jets is required to be smaller than 2.8\,rad, to reject di-jet QCD events.
The $M_R$ distribution for the data in the hadronic box, together with the full background prediction, is shown in Fig.~\ref{fig:HADBOX}, for $R>0.5$.
The background prediction is based on the observation that above a given value of $M_R$, all background distributions drop following an exponential function. At low $M_R$, the background shape is mostly driven by the efficiency of the \ensuremath{H_{\mathrm{T}}}\xspace trigger, and by the mass scales of the standard-model processes. For instance, $M_R$ peaks around the mass of the top quark for \ensuremath{{t\overline{t}}}\xspace events.
For the \ensuremath{{t\overline{t}}}\xspace, \ensuremath{{\rm Z}\rightarrow \nu \nu}+jets\xspace, and W+jets\xspace backgrounds, the parameters of the exponential function driving the evolution of the $M_R$ distribution at high $M_R$ are taken from the simulation. In the simulation, and also in Fig.~\ref{fig:HADBOX}, these parameters appear to be roughly equal, indicating a similar behaviour of these background processes in terms of $R$ and $M_R$. These parameters are then corrected by factors compatible with one, extracted from a comparison between data and simulation for W+jets\xspace events in the muon box. The relative normalization of these three sources of background is set according to the inclusive W, Z, and \ensuremath{{t\overline{t}}}\xspace cross-sections measured by CMS~\cite{ref:CMSWZ,ref:CMSttbar}. The normalization of the overall background distribution to the data is performed by measuring lepton boxes event yields, which are then corrected for lepton reconstruction and identification efficiency. A fit is finally performed in the $80 < M_R < 400$\,\ensuremath{{\,\text{Ge\hspace{-.08em}V\hspace{-0.16em}/\hspace{-0.08em}}c^\text{2}}}\xspace region, to obtain the parameters of the \ensuremath{H_{\mathrm{T}}}\xspace trigger turn-on shape and the overall normalization of the QCD background. The shape of the QCD background was obtained using a low-bias, prescaled trigger. The background is predicted by extrapolating the resulting background distribution to the search region, defined as $M_R>500\,\ensuremath{{\,\text{Ge\hspace{-.08em}V\hspace{-0.16em}/\hspace{-0.08em}}c^\text{2}}}\xspace$. In this region, 7 events are observed in the data, and $5.5\pm1.4$ are expected from the background. As no excess is observed, a model-independent upper limit is set on the number of signal events, $s<8.4$. This limit is interpreted in the context of various SUSY models in Section~\ref{sec:ModelInterpretation}.
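The final extrapolation to the search region is the integral of the fitted exponential above the $M_R$ threshold. The sketch below shows this arithmetic with hypothetical fitted parameters, chosen only for illustration.

```python
import math

def yield_above(norm, slope, mr_min):
    """Integral of an exponential shape dN/dM_R = norm * exp(-slope * M_R)
    above mr_min, i.e. norm / slope * exp(-slope * mr_min)."""
    return norm / slope * math.exp(-slope * mr_min)

# hypothetical per-GeV normalisation and slope; mr_min is the 500 GeV/c^2 cut
pred = yield_above(norm=26.6, slope=0.012, mr_min=500.0)
```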
\input modelDependentInterpretation
\section{Leptonic searches}
The single lepton analysis~\cite{ref:CMSSingleLepton} selects events featuring jets, \ensuremath{E_{\mathrm{T}}^{\text{miss}}}\xspace, and a single lepton in the final state.
The presence of the lepton strongly reduces the contribution of the QCD multi-jet and \ensuremath{{\rm Z}\rightarrow \nu \nu}+jets\xspace backgrounds, and provides several handles to build a data-driven prediction of the remaining background contribution from QCD, \ensuremath{{t\overline{t}}}\xspace, and W+jets\xspace. Events containing an additional lepton are vetoed, and handled by the di-lepton and multi-lepton analyses.
The event sample was collected using triggers based on the presence of a single electron or a single muon. The requirement of an \ensuremath{H_{\mathrm{T}}}\xspace trigger was added when the peak luminosity increased beyond $2 \times 10^{32}\,\ensuremath{\,\text{cm}}\xspace^{-2}s^{-1}$. The trigger selection is fully efficient with respect to the baseline selection applied offline, which consists of requiring (i) four jets with $\ensuremath{p_{\mathrm{T}}}\xspace > 30$\,\ensuremath{{\,\text{Ge\hspace{-.08em}V\hspace{-0.16em}/\hspace{-0.08em}}c}}\xspace and $|\eta|<2.4$ with $\ensuremath{H_{\mathrm{T}}}\xspace>500\,\ensuremath{\,\text{Ge\hspace{-.08em}V}}\xspace$; (ii) an isolated lepton, which can be either a muon with $\ensuremath{p_{\mathrm{T}}}\xspace > 20$\,\ensuremath{{\,\text{Ge\hspace{-.08em}V\hspace{-0.16em}/\hspace{-0.08em}}c}}\xspace and $|\eta|<2.1$, or an electron with $\ensuremath{p_{\mathrm{T}}}\xspace > 20$\,\ensuremath{{\,\text{Ge\hspace{-.08em}V\hspace{-0.16em}/\hspace{-0.08em}}c}}\xspace and $|\eta|<2.4$. The search region is defined by an additional cut on the missing transverse energy, $\ensuremath{E_{\mathrm{T}}^{\text{miss}}}\xspace>250\,\ensuremath{\,\text{Ge\hspace{-.08em}V}}\xspace$.
The contribution of the main background processes to the search region, \ensuremath{{t\overline{t}}}\xspace and W+jets\xspace, is estimated using the lepton spectrum method. The foundation of this method is that, when the lepton and neutrino are produced together in a $W$ decay (either in $t\bar t$ or in $W$+jets events), the lepton \ensuremath{p_{\mathrm{T}}}\xspace spectrum is directly related to the neutrino \ensuremath{p_{\mathrm{T}}}\xspace spectrum. The lepton spectrum is used to predict the \ensuremath{E_{\mathrm{T}}^{\text{miss}}}\xspace distribution, after suitable corrections related to the effect of the $W$ polarisation on the lepton and neutrino \ensuremath{p_{\mathrm{T}}}\xspace spectra, and to the lepton acceptance and reconstruction efficiency.
Combining the electron and muon channels, 2 events are observed in the search region, while $3.6 \pm 2.9$ are expected. A 95\% CL model-independent upper limit of 4.1 signal events is calculated. In the cMSSM, for ${\rm tan} \beta=10$, $A_0=0$, and $\mu>0$, gluino and squark masses larger than about 550\,\ensuremath{{\,\text{Ge\hspace{-.08em}V\hspace{-0.16em}/\hspace{-0.08em}}c^\text{2}}}\xspace are excluded.
The same-sign di-lepton analysis requires, in addition to jets and \ensuremath{E_{\mathrm{T}}^{\text{miss}}}\xspace, exactly two isolated leptons of the same sign, which can be electrons, muons, or hadronically decaying taus. The event sample was collected using di-lepton and single-lepton triggers, but also \ensuremath{H_{\mathrm{T}}}\xspace triggers, which provide sensitivity to events with low-\ensuremath{p_{\mathrm{T}}}\xspace electrons and muons. The search selection and the data-driven background estimation techniques employed were chosen according to the trigger in use (lepton or hadron) and the channel ($l_i l_j$, where $l_{i,j} = e, \mu, \tau$). In all search regions, the predicted number of background events is compatible with zero, and no excess is observed. The analysis and the results are described in detail in Ref.~\cite{ref:CMSSameSign}, which also provides lepton efficiency maps that can be used to test a variety of models.
The multi-lepton analysis~\cite{ref:CMSMultiLeptons} selects events with three or more isolated leptons, acquired using single-lepton and di-lepton triggers. The events are sorted in 54 independent samples according to the relative charge of the leptons and their flavour, which can be $e, \mu$, and $\tau$. The three-lepton requirement strongly reduces the standard-model background, and the largest remaining background process is Z+jets\xspace, including Drell-Yan. The remaining background is further suppressed by requiring $\ensuremath{H_{\mathrm{T}}}\xspace>30\,\ensuremath{\,\text{Ge\hspace{-.08em}V}}\xspace$, $\ensuremath{E_{\mathrm{T}}^{\text{miss}}}\xspace>50\,\ensuremath{\,\text{Ge\hspace{-.08em}V}}\xspace$ or a $Z$ veto, depending on the considered final state. No excess is found with respect to the predicted background in the search regions, and limits are set in a variety of models. In particular, in the so-called co-NLSPs scenario (see Ref.~\cite{Ruderman:2010kj} and references therein), squark and gluino masses lower than 830\,\ensuremath{{\,\text{Ge\hspace{-.08em}V\hspace{-0.16em}/\hspace{-0.08em}}c^\text{2}}}\xspace and 1040\,\ensuremath{{\,\text{Ge\hspace{-.08em}V\hspace{-0.16em}/\hspace{-0.08em}}c^\text{2}}}\xspace are excluded.
\section{Conclusion}
Complementary searches for Supersymmetry and other new physics leading to similar final states were conducted at CMS using the 35\,\pbinv of data collected in 2010, in a wide variety of final states. No excess has been observed so far with respect to the expectations from the standard model, and stringent limits were set in various SUSY models.
Data-driven background estimation techniques have been used wherever possible, paving the way towards the analysis of high-luminosity 2011 data.
\section{Acknowledgements}
I would like to thank the members of the CMS collaboration for the excellent performance of the detector and of all the steps culminating in these results, as well as the members of the CERN accelerator departments for the smooth operation of the LHC machine.
\subsection{Model dependent interpretation}
\label{sec:ModelInterpretation}
The results of the fully hadronic (Sections~\ref{sec:RA2} and \ref{sec:Razor}) analyses were interpreted in the context of the constrained MSSM (cMSSM), a truncation of the full parameter space of the MSSM motivated by the minimal supergravity framework for spontaneous soft breaking of supersymmetry. In the cMSSM, the soft breaking parameters are reduced to five: three mass parameters, $m_0$, $m_{1/2}$ and $A_0$ being respectively the universal scalar mass, the universal gaugino mass, and the universal trilinear scalar coupling, as well as ${\rm tan} \beta$, the ratio of the up-type and down-type Higgs vacuum expectation values, and the sign of the supersymmetric Higgs mass parameter $\mu$. Scanning over this parameter space yields models which, while not entirely representative of the complete MSSM, vary widely in their supersymmetric mass spectra and thus in the dominant production channels and decay chains.
After fixing $A_0$, ${\rm tan} \beta$ and the sign of $\mu$, the model independent upper limit $s^*$ on the number of signal events $s$ from each analysis is projected on the $(m_0,m_{1/2})$ plane by excluding the model if $s(m_0,m_{1/2})>s^*$. The various sources of uncertainty on the signal yield and the signal contamination of the control samples are taken into account. Figures~\ref{fig:cMSSM1andSMS}(a) and (b) present the limits set by the jets+\ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace and razor analyses.
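The projection of $s^*$ onto the parameter plane reduces to a simple comparison at each scan point. The sketch below illustrates this with a hypothetical signal-yield grid (the yields and the grid itself are invented; the uncertainty treatment and signal contamination corrections are omitted).

```python
def excluded_points(signal_yield, s_star):
    """Project a model-independent limit s* onto the (m0, m12) plane: a point
    is excluded when its predicted signal yield exceeds s*."""
    return {point for point, s in signal_yield.items() if s > s_star}

# hypothetical signal yields on a tiny (m0, m12) grid, with the razor s* = 8.4
yields = {(100, 300): 25.0, (400, 300): 9.1, (800, 200): 3.2}
excl = excluded_points(yields, s_star=8.4)
```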
\begin{figure}[htb]
\centering
\hspace{-1.3cm}
\subfigure[]{\includegraphics[angle=0,width=0.47\textwidth]{Exclusion_m0_m12_tb10_NoSigHypPseudoData}}
\subfigure[]{\raisebox{0.5cm}{\includegraphics[angle=0,width=0.49\textwidth]{cMSSM_HADBOX_limit_tanB10-vf.pdf}}}\\
\hspace{-1.1cm}
\subfigure[]{\includegraphics[width=0.45\textwidth]{T1_Lim_NoThUnc_logZ_combined.pdf}}\hspace{0.5cm}
\subfigure[]{\includegraphics[width=0.45\textwidth]{T2_Lim_NoThUnc_logZ_combined.pdf}}
\caption{Expected and observed $95\%$ C.L. limits in the cMSSM $(m_0,m_{1/2})$ parameter plane for
(a) the jets+\ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace analysis and (b) the razor analysis. Limits on the di-gluino (c) and di-squark (d) cross-sections in simplified models, obtained by combining the three fully hadronic analyses, namely $\alpha_T$, jets+\ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace and razor.
}
\label{fig:cMSSM1andSMS}
\end{figure}
The expected limits are obtained by taking the median of the background test statistics as the result of the experiment, and the $\pm 1 \sigma$ band by taking the median $\pm1\sigma$.
The results of the fully hadronic analyses were also interpreted in the context of two benchmark simplified models~\cite{Alwall-1}: gluino-LSP production and squark-LSP production.
The former refers to pair-produced gluinos, where each gluino directly decays to two light quarks and the LSP resulting in a four jet plus missing transverse energy final state.
The latter refers to pair-produced squarks, where each squark decays to one jet and the LSP resulting in a two jet plus missing transverse energy final state. Figures~\ref{fig:cMSSM1andSMS}(c) and (d) show the upper limit on the cross-section as a function of the physical masses of the particles involved in each model. In each bin, the upper limits obtained in the $\alpha_T$, the jets+\ensuremath{H_{\mathrm{T}}^{\text{miss}}}\xspace and the razor analyses are considered, and the minimum one is shown. Theoretical uncertainties are not included.
\section{Introduction}
\black{People who are Deaf and Hard-of-Hearing (DHH) consume a large volume of videos from video-sharing platforms (e.g., YouTube) for various reasons, such as entertainment, relaxation, and learning new skills. There is an imperative need for DHH individuals to have accessible video captions to understand video content. As of 2012, over 37 million people in the U.S. have hearing impairments \cite{blackwell2014summary} and often require captioning services to access audio and audio-visual information.}
\black{To caption videos, both human captioning approaches and automatic captioning approaches exist. Video captioning through human effort is time-consuming, and inexperienced users usually spend at least six to eight times the length of the video to create captions from scratch \cite{ClosedCa43:online}.} Individual content creators on YouTube upload their videos, usually once per day or multiple times a week. Most of these content creators do not entirely rely on the income from their uploaded videos for a living, and 96.5\% of YouTube content creators do not even make it above the U.S. poverty line with the income from being a content creator \cite{Success63:online}. Many of the popular YouTubers must still maintain their primary employment elsewhere \cite{WhySomeS82:online}. Such individual content creators tend to have limited time or budget for their videos. Therefore, these particular circumstances of individual content creators may affect caption quality and DHH audiences' experiences when watching videos uploaded by individual content creators. \black{To save time for captioning, some content creators use Automatic Speech Recognition (ASR) technology to caption videos \cite{Useautom75:online}. However, the current ASR technology is error-prone due to the variability and complexity of speech (e.g., background noise, accent), which may make the video content confusing to audiences \cite{hazen2006automatic,benzeghiba2007automatic,glasser2017deaf,o2008automatic,araujo2014context}}.
In the past few years, many content creators and DHH \black{viewers have} started multiple campaigns and events to call for better captions of online videos (Fig. \ref{fig:teaser}), for example, the ``\#NoMoreCraptions'' campaign \cite{1NewMess92:online,ANewCamp15:online} initiated by Rikki Poynter \cite{RIKKIPOY3:online}, a deaf content creator on YouTube. Poynter commented on her inspiration for starting the campaign and complained about the quality of YouTube captions, ``\textit{The lack of proper closed captioning has always inspired me, but I think the final straw was when I saw viewers of big YouTubers add in unnecessary commentary, jokes, etc. to closed captioning on videos and then yelling at the d/Deaf/HOH community when they got called out for it. I just wanted to try to make something big happen. I wanted help. I needed help.}'' \cite{Autogene63:online}. After the ``\#NoMoreCraptions'' campaign was initiated, many content creators and DHH viewers supported it by filming videos to express the importance of captions (e.g., Fig. \ref{fig:teaser}) and posting their perspectives on social platforms (e.g., Twitter \cite{nomorecr8:online}). Although prior research explored DHH audiences' feedback towards captions in general (e.g., \cite{kawas2016improving}), little research has been done to explore the barriers in DHH audiences' experiences and their challenges with video captions created by individual content creators. Furthermore, prior research mostly focused on consumers' perspectives (e.g., DHH audiences) of captions, leaving the perspectives of caption producers (e.g., individual content creators) largely unexplored. To improve DHH audiences' experiences with captions created by individual content creators, we need to understand both DHH audiences' feedback on video captions produced by individual content creators on video-sharing platforms and individual content creators' practices, perceptions, and challenges in captioning.
In our work, we are interested in exploring the following research questions:
\begin{itemize}
\item (RQ1) From the perspective of DHH audiences, what are the experiences with, feedback towards, and potential improvements of the captions provided by individual content creators?
\item (RQ2) From the perspective of individual content creators, what are the existing practices and perceptions of captioning their own videos?
\item (RQ3) From the perspective of individual content creators, what are the challenges and problems involved in captioning their own videos?
\end{itemize}
\black{To explore RQ1, we first conducted semi-structured interviews with 13 DHH YouTube viewers. Based on their feedback, we revealed DHH audiences' practices, perceptions, and challenges in video searching and filtering, their concerns about caption quality and potential improvements (e.g., a small talking head at the corner of the screen makes it hard for DHH audiences to read lips (Fig. \ref{fig:lipreading})), and their reactions and mitigation strategies to poor-caption problems. To understand RQ2 and RQ3, we then conducted a video analysis with 189 non-commercial YouTube videos uploaded by individual content creators and a survey study with 62 individual content creators. We presented the existing captioning methods for individual content creators to caption their videos and their perceived benefits and drawbacks of these methods (e.g., caption errors could potentially demonetize the video). We further explored the content creators' perceptions and challenges of captioning (e.g., back-captioning all previous videos that do not have captions due to knowledge gaps and lack of awareness). Finally, we extended our findings through the discussion of developing a DHH-friendly video recommendation system, minimizing the effort of previewing caption quality, improving and motivating high-quality community captions, presenting different levels of caption details, and opportunities with lip-reading.
Overall, we believe our findings provide an in-depth understanding of captions generated by individual content creators and bridge the gap between DHH audiences and individual content creators on creating, interpreting, and presenting captions of user-generated videos online. Our findings will also shed light for HCI and CSCW researchers on the design of future online-captioning systems (e.g., community-contributed captions) by revealing the practices and challenges of captioning from the perspectives of both caption creators and consumers.}
\section{Background and Related Work}
\subsection{\black{Usefulness of Online Video Captions for DHH Individuals}}
The large volume of videos on social media platforms and online learning environments, such as Massive Open Online Courses (MOOCs), now enables people to obtain comprehensive information online (e.g., \cite{li2021non}). Shiver and Wolfe \cite{shiver2015evaluating} interviewed 20 DHH individuals, all of whom expressed the importance and usefulness of captions when watching online videos. Shiver and Wolfe \cite{shiver2015evaluating} further mentioned that even machine-generated captions that contain errors could help DHH individuals understand video content. Beyond enabling DHH audiences to access video content, captions could also benefit other groups of people, such as second-language learners \cite{collins2013using,garza1991evaluating,markham2001influence,neuman1992captioned,vanderplank1988value,whitney2019captioning} and native hearing audiences \cite{gernsbacher2015video}. Furthermore, having videos captioned could benefit SEO (Search Engine Optimization) and help videos rank higher in search results \cite{WhatIsVi84:online}.
Although Federal Communications Commission (FCC) has enforced online videos to be captioned through legislation \cite{National23:online}, many DHH \black{individuals} have reported and started multiple events and campaigns to call for better captions for online videos created by individual content creators \cite{1NewMess92:online,ANewCamp15:online}. \black{To understand DHH audiences' perceptions on caption qualities, Kawas et al. \cite{kawas2016improving} conducted co-design workshops with DHH individuals and found that their participants struggled with the low-quality captions. Beyond low-quality captions, Tyler et al. \cite{tyler2009effect} found that the rate of caption delivery affects the comprehension of contents. Furthermore, the variability and complexity of human speech may affect the accuracy of captions \cite{kushalnagar2014accessibility}. More specifically, having background noise, different speech rate or speaking styles would affect the quality of the caption, thus affect the comprehension and understanding of captions \cite{o2008automatic}.}
\subsection{\black{Existing Captioning Methods}}
There exist several approaches to captioning videos, including manual captioning, automatic captioning, and crowd-sourced captioning. To caption videos manually, video producers can generate captions through captioning software by syncing a script with time points in a video. For professional captionists, this approach usually takes four to six times the length of the video \cite{ClosedCa43:online}; for an untrained captionist, it might take six to eight times the length of the video \cite{ClosedCa43:online}. Another approach involves paying a professional company for captioning services, which might cost over \$8 per minute \cite{ClosedCa43:online}.
\black{As automatic speech recognition (ASR) algorithms have advanced, some video producers have started using ASR to recognize and transcribe spoken language into readable text \cite{Useautom75:online,wactlar1996intelligent}.
For example, some auto-captioning tools allow individual content creators to generate captions for their uploaded videos without human intervention \cite{fichten2014digital,Useautom75:online}.
However, the variability and complexity of speech often cause issues regarding recognition accuracy, caption latency, and context formalization \cite{araujo2014context,gaur2016effects,kafle2016effect,kafle2019predicting,kushalnagar2014accessibility,o2008automatic,shiver2015evaluating}.
Furthermore, background noise, multi-talker speech, human accent, and disfluent speech may further downgrade the quality of automatic captions \cite{benzeghiba2007automatic,glasser2017deaf,o2008automatic}.}
To make automatic captions work better, prior work explored the approaches such as removing the noise from the environment and changing the appearances of the automatic captions to convey ASR confidence \cite{berke2017deaf} (e.g., alternating the font size \cite{piquard2015qualitative}, font color \cite{shiver2015evaluating}, and underlining \cite{vertanen2008benefits}). Although these features were studied with DHH participants, they have not been fully integrated into video-sharing platforms, so it is unknown what the platform users will think of them in a practical scenario. Therefore, in our work, in addition to echoing some of the findings from the prior works, we reported the practices and perceptions of changing caption appearances on YouTube from the perspective of both individual content creators and DHH audiences.
Online video-sharing platforms, such as YouTube, enabled community contributions that allow video audiences or subscribers to contribute their effort to help caption YouTube videos \cite{Turnonma28:online}. Beyond the ASR approach, researchers explored the potential of leveraging community services or crowd workers for captioning \cite{huang2017leveraging,kushalnagar2012readability,lasecki2012online,lasecki2012real,lasecki2017scribe}. Due to the high cost of professional captionists and the low accuracy of ASR, Lasecki et al. \cite{lasecki2012real} introduced a new approach that allowed groups of non-expert captionists to collectively caption speech, together with an algorithm for merging partial captions in real-time. To further improve the quality of captions created by non-expert captionists, Lasecki and Bigham \cite{lasecki2012online} presented methods that combined captionist quality estimation and caption quality estimation with the overlap between non-expert captionists to provide optimal caption output. Moreover, Lasecki et al. \cite{lasecki2017scribe} further support non-expert captionists by managing the task load, directing different captionists to different portions of the audio stream, and adaptively determining the segment length based on each individual's typing speed.
Furthermore, Huang et al. \cite{huang2017leveraging} implemented BandCaption, a system that combines the ASR with crowd input to provide a cost-efficient captioning approach for online videos and distributed micro-tasks to crowd workers who have different strengths and needs. Huang et al. \cite{huang2017leveraging} conducted studies with people with different backgrounds and showed that different user groups could make complementary contributions based on their strengths and constraints. Although past research explored how to better support community captioning with non-expert captionists and evaluated their methods through user studies, it is unknown: 1) how community captioning was adopted by video-sharing platforms? 2) what are the practices and challenges for individual content creators to leverage community captioning to create the video captions? 3) what is the general quality of captions generated through community captioning in video-sharing platforms? 4) what are the concerns of community captions from the perspective of DHH audiences?
\red{Even though prior research has explored DHH audiences' perceptions of caption quality for videos in general \cite{shiver2015evaluating}, there is a lack of exploration of DHH audiences' experiences and challenges with captions of videos uploaded by individual content creators, whose particular circumstances and backgrounds may affect caption quality. On the other hand, little research has explored the current practical approaches individual content creators use to caption their online videos on video-sharing platforms and the associated challenges. In our work, we first present DHH audiences' feedback on, and suggested improvements to, the captions provided by individual content creators, gathered through semi-structured interviews. Through video analysis and online surveys with YouTube content creators, we then demonstrate existing captioning methods and processes, caption quality and its consequences, and individual content creators' perceptions and general challenges of captioning.}
\section{Method}
\black{We first conducted semi-structured interviews with 13 DHH YouTube viewers to explore RQ1. To understand RQ2 and RQ3, we then conducted a YouTube video analysis with 189 non-commercial YouTube videos uploaded by individual content creators and a follow-up survey with 62 individual content creators. The two studies enable us to explore online video captions from both perspectives of caption creators and DHH consumers. In this section, we describe the methodological details of our studies.}
\subsection{\black{Semi-structured Interview}}
\black{In this section, we first show the study procedure of our interviews with DHH audiences (e.g., recruitment, demographic information, sample interview questions) and then present how we analyzed the data from interviews.}
\begin{table}[ht]
\small
\caption{Interviewees' Demographic Information}
\centering
\begin{tabular}{|p{1.3cm}|p{0.4cm}|p{1cm}|p{5.5cm}|p{3cm}|}
\hline
Participant & Age & Gender & Hearing Impairment Condition & Frequency of Watching YouTube\\ [0.5ex]
\hline
P1 & 50 & Female & Congenitally deaf & Nearly every day\\
\hline
P2 & 26 & Female & Bilateral sensorineural hearing loss since 2014 & Every day\\
\hline
P3 & 22 & Male & Hard of hearing since teenage years & Every day\\
\hline
P4 & 35 & Female & Hard of hearing since teenage years & Every day\\
\hline
P5 & 36 & Male & Hard of hearing for 12 years & Nearly every day\\
\hline
P6 & 25 & Female & Congenitally deaf & At least once a week\\
\hline
P7 & 34 & Female & Congenitally deaf & Nearly every day\\
\hline
P8 & 18 & Female & Congenitally deaf & Every day\\
\hline
P9 & 32 & Female & Congenitally deaf & Every day\\
\hline
P10 & 28 & Female & Congenitally deaf & Every day\\
\hline
P11 & 26 & Female & Congenitally deaf & Every day\\
\hline
P12 & 63 & Male & Congenitally deaf & Every day\\
\hline
P13 & 50 & Female & Congenitally deaf & Every day\\[0.5ex]
\hline
\end{tabular}
\label{table:demographic}
\end{table}
\subsubsection{\black{Study Procedure}}
To understand the quality of captions created by individual content creators from the perspective of DHH audiences, we conducted semi-structured interviews with 13 YouTube audience members who are Deaf or Hard-of-Hearing (Table \ref{table:demographic}). Our interviewees have an average age of 34, ranging from 18 to 63 years old. Nine of them are congenitally deaf, and four acquired hearing loss later in life. Interviewees were recruited through social platforms (e.g., Reddit, Twitter, Facebook). To participate in our interview, an interviewee had to 1) be an audience member of YouTube videos; 2) be deaf or hard-of-hearing; 3) be 18 or above; and 4) be able to read and write in English. The interviews were conducted through Zoom and took around 45--60 minutes per interviewee. In the Zoom interviews, communication was carried out either through typing in the Zoom chat or through speech if the interviewee had automatic speech recognition software on their computer.
In the interview, we first asked DHH participants about their demographic information (e.g., age, gender). We then asked about their experiences of watching online videos created by individual content creators on video-sharing platforms (e.g., searching, filtering, watching, and commenting) and the associated concerns and challenges \black{(e.g., ``have you had experiences of watching a YouTube video that did not have captions once it got uploaded? If yes, please talk about the details.'')}. Afterward, we asked about their perceptions of video captions on YouTube and their reactions to poor-quality captions \black{(e.g., ``what actions would you take if you come across a YouTube video with missing or poor-quality captions?'')}. Finally, we asked about the caption improvements they would recommend for videos created by individual content creators \black{(e.g., ``what elements do you wish to add to YouTube captions?'')}. Interviewees who completed the interview were compensated with \$15 in cash. The whole recruitment and study procedure was approved by the institutional review board (IRB).
\subsubsection{\black{Data Analysis}}
\black{After finishing the interviews with 13 DHH YouTube viewers, we first combined all transcripts from the interviewees into a single folder. Two researchers then downloaded the folder to their local drives and independently performed open coding on the transcripts, focusing on DHH audiences' practices, perceptions, and challenges regarding videos created by individual content creators. During the coding process, both researchers went through the transcripts multiple times.} Then, the coders met and discussed their codes. When there was a conflict, they explained the rationale for their codes to each other and discussed until the conflict was resolved. Eventually, they reached a consensus and consolidated the list of codes. Afterward, they performed affinity diagramming \cite{hartson2012ux} \black{on a Miro board \cite{AnOnline70:online}} to group the codes and identified the themes emerging from the groups of codes. Overall, we established three themes and 11 codes. The results introduced in the findings are organized based on these themes and codes.
\subsection{\black{YouTube Video Analysis + Survey}}
\black{To further understand individual content creators' practices, challenges, and perceptions in captioning their own videos, we conducted a mixed-methods study comprising two phases: 1) a YouTube video analysis---searching, filtering, and analyzing YouTube videos related to captioning practices and challenges from the perspective of individual content creators---and 2) an online survey with YouTube content creators who had captioned their videos---seeking an in-depth understanding of existing captioning practices and challenges from the perspective of individual content creators.}
\begin{table}[ht]
\caption{Searching Keywords}
\centering
\begin{tabular}{|p{8cm}|}
\hline
\textbf{Searching Keywords} \\
\hline
Caption, Craption, Community Caption, Closed Captioning, Captioning, YouTube Caption, YouTube Automatic Caption, Automatic Caption, Self-captioning, Contribute Closed Caption, Content Creator Captioning, Captioning Services, Caption Accessibility, Caption Challenge, Caption Video, Video Accessibility, Captionist, Community Captionist \\
\hline
\textbf{Hashtag Searching Keywords} \\
\hline
\#NoMoreCraptions, \#CaptionYourVideos, \#WithCaptions, \#ClosedCaptions, \#CaptionThis, \#CaptionPlease, \#CaptionVideos, \#AutomaticCaption, \#SaveCommunityCaption, \#ContributeClosedCaptions \\
\hline
\end{tabular}
\label{table:searchterms}
\end{table}
\subsubsection{\black{YouTube Video Analysis---Data Collection}}
\black{Inspired by prior research on leveraging the richness of YouTube video contents to understand accessibility needs \cite{anthony2013analyzing,li2021non}, we conducted a YouTube video analysis to understand captioning practices and challenges from the perspective of individual content creators and find the potential reasons for caption problems that were complained about by our DHH interviewees from the previous section.}
\black{In the video analysis, we looked for videos focused on captioning practices, challenges, and perceptions from individual content creators to uncover potential design implications and knowledge gaps---for example, individual content creators explaining their captioning methods and associated concerns. To search for relevant videos about captioning, three researchers independently combined searches with captioning keywords (e.g., YouTube captions, automatic captions, community captions) and the hashtag searching method \cite{gao2017hashtag,yang2012we} with caption-related hashtags (e.g., \#NoMoreCraptions, \#CaptionYourVideos) (Table \ref{table:searchterms}). To come up with these search keywords, our researchers started with basic keywords (e.g., Caption, YouTube Caption) and gradually included other keyword combinations and hashtags found in candidate video titles or descriptions. For example, we found the hashtag search term `\#CaptionYourVideos' listed in the title of a video we had found by searching ``YouTube captions''. Because each search may generate hundreds of results, we followed the same approach as Komkaite et al. \cite{komkaite2019underneath} and stopped collecting videos once a whole page of search results was irrelevant.}
In total, we initially created a dataset of 248 relevant videos found by July 10th, 2020. We then filtered out videos if they: 1) were commercial videos that were not self-funded by content creators; 2) were presented in a non-English language and did not have English captions; 3) were irrelevant to captions or to videos created by individual content creators, such as captions created by professional captionists for movie companies; or 4) were duplicates. We thereby filtered out 59 videos and created the final dataset of 189 videos (V1 - V189). Among these 189 videos, most were uploaded in 2016 (68), while the others were uploaded in 2019 (31), 2017 (24), 2020 (23), and 2018 (18). The average video length was 354 seconds (ranging from 27 seconds to 1157 seconds).
\subsubsection{\black{YouTube Video Analysis---Data Analysis}}
\black{To code the videos, three researchers first open coded the videos independently. Then, the coders met and discussed their codes. When there was a conflict, they explained the rationale for their codes to each other and discussed until the conflict was resolved. Eventually, they reached a consensus and consolidated the list of codes. Afterward, they performed affinity diagramming \cite{hartson2012ux} to group the codes and identified the themes emerging from the groups of codes. Overall, we established three themes and 11 codes. The results introduced in the findings are organized based on these themes and codes.}
\subsubsection{Survey with YouTube Content Creators}
\begin{table}[ht]
\small
\caption{Survey Respondents Demographic Information}
\centering
\begin{tabular}{|p{1.4cm}|c|c|}
\hline
Category & Detail & Count\\ [0.5ex]
\hline
\multirow{6}{4em}{Age} & 18 - 24 & 22 \\
& 25 - 34 & 23 \\
& 35 - 44 & 8 \\
& 45 - 54 & 4 \\
& 55 - 64 & 3 \\
& 65 or above & 2 \\
\hline
\multirow{5}{4em}{Gender} & Male & 32 \\
& Female & 23 \\
& Non-binary & 4 \\
& Prefer not to disclose & 1 \\
& Prefer to self-describe & 2 \\
\hline
\multirow{10}{4em}{Primary Occupation} & Management & 1 \\
& Business and Financial Operation & 4 \\
& Computer and Mathematical & 6 \\
& Architecture and Engineering & 1 \\
& Life, Physical, and Social Science & 2 \\
& Educational Instruction and Library & 20 \\
& Arts, Design, Entertainment, Sports, and Media & 15 \\
& Healthcare Practitioners and Technical & 1 \\
& Food Preparation and Serving Related & 1 \\
& Sales and Related & 5 \\
& Unemployed or Retired & 5 \\
\hline
\multirow{6}{4em}{Length as a Content Creator (Years)} & Less than One Year & 9 \\
& One to Two Years & 18 \\
& Two to Three Years & 8 \\
& Three to Four Years & 5 \\
& Four to Five Years & 8 \\
& Above Five Years & 14 \\
\hline
\multirow{5}{4em}{Subscribers} & Over 1 million & 0 \\
& 100k - 1 million & 5 \\
& 10k - 100k & 9 \\
& 1k - 10k & 14 \\
& less than 1k & 34 \\
\hline
\end{tabular}
\label{table:surveydemographic}
\end{table}
Inspired by previous research \cite{anthony2013analyzing,komkaite2019underneath}, we further conducted an online survey with YouTube content creators to acquire more detail on content creators' captioning practices and challenges and to address uncertainties from the video analysis. To find content creators with captioning experience, we first contacted the creators of the YouTube videos we collected and posted our recruitment script in the comment sections of those videos. Due to the limited response rate, we further distributed our recruitment script through social platforms (e.g., Reddit, Twitter). To participate in our online survey, a participant had to 1) be an individual content creator on YouTube; 2) have experience creating captions for their uploaded videos; 3) be 18 or above; and 4) be able to read and write in English. The survey was hosted on Qualtrics \cite{Qualtric28:online}. The questionnaire included 22 questions covering demographics (e.g., age, gender, upload frequency, primary occupation), captioning practices (e.g., ``what captioning methods have you used the most? Why?''), perceptions of captioning, and challenges associated with captioning as an individual content creator (e.g., ``Have you had any difficulties when using community-contributed captions? Why?''). \red{To encourage authentic reporting and protect participants' privacy, we asked participants to select a subscriber-count interval instead of providing an exact number of subscribers (i.e., less than 1k, 1k - 10k, 10k - 100k, 100k - 1 million, over 1 million) (Table \ref{table:surveydemographic}) \cite{YouTubeC49:online}}. Survey respondents who completed the online survey were entered into a draw for a \$100 Amazon gift card. In total, we received 62 responses from YouTube content creators (S1 - S62) (Table \ref{table:surveydemographic}).
The whole recruitment and study procedure was approved by the institutional review board (IRB).
\section{Results}
\black{In the results, we first present the findings from the interviews with DHH audiences addressing RQ1. We then present the findings from the YouTube video analysis and the survey with individual content creators to answer RQ2 and RQ3.}
\subsection{\black{DHH Audiences' Practices, Perceptions, and Challenges on Videos Created by Individual Content Creators}}
In this section, we show our findings in three main phases regarding DHH audiences' experiences with videos created by individual content creators: 1) video searching and filtering, 2) feedback on caption quality and improvements, and 3) DHH audiences' reactions and mitigation strategies.
\subsubsection{Video Searching and Filtering}
\label{DHH practice}
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{figures/youtubefilter.JPG}
\caption{YouTube video filter in searching. The system will only display videos with captions if the audience selects the subtitle/CC filter.}
\Description{There is a searching bar at the top of the figure. There are five main categories of filters in the figure, which includes upload date, type, duration, features, and sort by. There is a blue highlight that circles Subtitles/CC under features.}
\label{fig:youtubefilter}
\end{figure}
In our interview, we asked participants to describe their experiences and practices of searching for and filtering videos of interest on video-sharing platforms. We found that they commonly used two approaches: 1) using the search filter to only display videos with captions (Fig. \ref{fig:youtubefilter}) and 2) directly exploring videos from the feed list automatically recommended by video-sharing platforms. Although filtering on captioned videos reduces the search effort for DHH audiences, seven participants mentioned that caption quality varied widely across captioned videos, because the filter only checks whether a video has captions, not how good they are. Therefore, we found that DHH audiences tend to watch videos whose content creators manually added `[CC]', `\#Captioned', or `\#WithCaption' either in the video title or in the video description. \black{\textit{``Having manual tags made me feel that the content creator has the awareness of providing captions, and it makes me trust the content is accessible.''} said P8.} Although these \textbf{manually added caption tags could create additional confidence and trust in caption quality for DHH audiences}, P5 commented that the scarcity of manually added caption tags and the variation in their styles made searching difficult for him:
\begin{quote}
``...The existing Subtitle or CC filter can only tell whether a video is captioned or not. From my own experiences, most of them are crappy. Afterward, I started searching for videos that the video creator manually put CC tags in the title or descriptions. This shows that the video creator considers video captions more seriously. However, I found that only less than 1\% of videos have manually added caption tags, and the styles of those tags really varied on different videos. For example, content creators added self-created captions tags like `[CC]', `cc', `[Captioned]', and `WithCaption'. These added tags gave me a hard time in video searching...''
\end{quote}
To reduce the effort DHH audiences spend manually going through videos to check for proper captions, DHH audiences also stated that video-sharing platforms should include predictions of the caption quality of videos (P2, P13). \black{\textit{``Normally, If I want to make sure a video has proper captions, I have to physically open it and preview it for a while to know if the caption is readable. I do not want to waste my time on watching videos to just make sure it has proper captions.''} said P2.} As quality metrics, prior research has explored Word Error Rate (WER), Term Frequency-Inverse Document Frequency (TF-IDF) \cite{nanjo2005new}, Match Error Rate (MER) \cite{morris2004and}, and Automated-Caption Evaluation (ACE) \cite{kafle2017evaluating}. However, participants reported that most video-sharing platforms \textbf{lack caption quality indications} for their videos. They stated that video-sharing platform designers should focus on choosing the most effective quality metrics and explore how to visualize caption quality indications. P13 explained her practice of checking caption quality and her need for quality indications:
\begin{quote}
``...Every time when I find videos from the feed list, unless it mentioned [CC] in the title, I have to physically open it and check if it has proper captions. This requires me to open the video from the feed list, manually turn on the caption option, and watch the video for a while to know whether I could understand the video content from the captions. If YouTube or other platforms could show whether a video has captions and what is the confidence of the caption quality, it would save my time checking the captions, and I could even add filters to remove certain videos below the quality baseline...''
\end{quote}
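For concreteness, Word Error Rate, the first metric mentioned above, is the word-level Levenshtein (edit) distance between a caption and the reference transcript, normalized by the transcript length. The following is a minimal sketch of that standard definition, not tied to any specific platform's implementation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference
    words, computed with a word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between the first i reference words
    # and the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                      # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j                      # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)
```

For instance, the caption ``the bat sat'' against the reference ``the cat sat'' contains one substitution across three words, i.e., a WER of about 0.33. Note that word-level WER alone cannot capture the punctuation problems participants emphasized, which is one reason prior work proposed alternatives such as ACE.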
In terms of directly exploring videos in the feed list, nine participants mentioned the delay before captions became available on videos that were already published and presented in the recommendation list. \black{According to the literature, manual captioning by non-professional captionists might take six to eight times the video's length \cite{ClosedCa43:online}. \textit{``Many videos were first uploaded and showed on the platform without any captions, because many content creators want their videos uploaded at a certain time to receive more views. When the captions were ready, content creators then added the captions to the videos.''} said P3. The \textbf{caption delay forced DHH audiences to manually save videos of interest for the future} while searching videos; otherwise, the videos disappear from the feed list of video-sharing platforms.}
P12 emphasized why delay is the biggest problem:
\begin{quote}
``...There are two key problems with the caption delay. First, it may disappear from the feed list of the audiences when it gets the caption ready because the feed list usually displays the most recent videos. Second, I often lose patience while waiting for captions...''
\end{quote}
\subsubsection{Feedback on Caption Quality and Improvements}
\label{feedback caption quality}
\begin{figure}
\centering
\includegraphics[width=0.6\columnwidth]{figures/Picture1.png}
\caption{This figure compares the auto-generated caption (the bottom line) with the original transcript (the top two lines).}
\Description{This figure includes two people. There is an original transcript ``I'm gonna take this fist right here and punch you in the face!'' and there is an auto-generated caption as ``undertake. Take describes fractionally touch''}
\label{fig:autocaptionproblem}
\end{figure}
We further asked our participants about their perspectives on video caption quality on YouTube and potential improvements. In this section, we uncover DHH audiences' feedback on caption styles and presentations, appearance control, and other peripheral visual content that may affect the experience with captions. All of our interviewees complained that the majority of videos created by individual content creators either had no captions at all or had poor-quality captions that were useless for understanding the content.
More specifically, we found that spelling, grammar, and punctuation problems are present in both auto-generated captions (e.g., Fig. \ref{fig:autocaptionproblem}) and human-generated captions. Among these problems, eight of our participants emphasized that \textbf{punctuation is more critical to understanding the content of the caption than the other problems.} \black{They mentioned that errors in spelling and grammar would not affect their understanding of the captions too much because they could predict the correct words from the caption context. However, punctuation problems may disrupt comprehension; for example, when one sentence does not fit in a single caption, the second half of the first sentence might be displayed together with the first half of the second sentence (P4, P6, P7).} P4 stated the importance of punctuation:
\begin{quote}
``...Spelling is not that important to me, although I do prefer perfect spelling. I could sort of get the correct word from the content background and my personal experiences on predictions. But punctuation is definitely more important! If I could know where a stop is, it could help me to understand the content of the sentence. Without punctuation, it puts lots of mental effort while watching the videos on YouTube...''
\end{quote}
Beyond common caption errors, four participants reported that some captions use `[]' to abbreviate certain video content. We found this \textbf{use of abbreviation in videos generates concerns about equality and fairness for the DHH population}.
For example, P2, P5, and P6 mentioned their experiences of watching some video captions that put `[Joke]' instead of the actual joke content, which made DHH audiences frustrated when watching the video with their friends and family members. P2 commented:
\begin{quote}
``...I was really frustrated when the caption just put [Joke] when they are actually telling a joke in the audio. It just makes the whole joke very very flat. Especially, it felt embarrassing when I watched a video with my friends, and they all laughed, and I just did not know what happened...''
\end{quote}
Moreover, our participants mentioned that some captions used ``[Music]'' in the caption to represent musical content. To better describe audio content in captions, all of our DHH interviewees suggested adding more detailed information that helps identify the content, such as ``[Rap Music done by XXX]'' rather than just ``[Music].'' Beyond that, DHH audiences also want captions to include visually unidentifiable or hard-to-identify audio information, such as ``cough'', ``laugh'', and ``inhale''. On the other hand, three participants also raised concerns about over-detailed descriptions, which may distract DHH audiences. P9 explained the importance of \textbf{having a certain hierarchy in captions and allowing audiences to control the level of detail}:
\begin{quote}
``...Honestly, I do prefer having detailed captions that describe things that are relevant to the videos. However, I have watched a movie where they added background music lyrics and all character identities in the caption and that really freaked me out. I think there should be a hierarchy in captions based on the importance to understand the video content. And I should be able to control the level of details in captions...''
\end{quote}
In terms of caption presentation, our interviewees also complained about existing presentation styles, such as font size \cite{piquard2015qualitative}, font color \cite{shiver2015evaluating}, and underlining \cite{vertanen2008benefits}, which are pre-defined by individual content creators. Five interviewees expressed \textbf{strong preferences for controlling caption appearance and for clear correlations between appearance changes and their meaning}. For example, P12 mentioned his willingness to change font sizes across devices because he is 63 and often has a hard time with the different caption font sizes on his phone, tablet, and laptop.
\begin{figure}
\centering
\includegraphics[width=0.5\columnwidth]{figures/lipreading1.jpg}
\caption{This figure shows the streamer explaining how to use the recording function on Windows, with a small talking head at the bottom right corner. The streamer's face does not directly face the camera, which makes it hard for DHH audiences to read his lips.}
\Description{This figure shows the streamer explaining how to use the recording function on Windows, with a small talking head at the bottom right corner. The streamer's face does not directly face the camera, which makes it hard for DHH audiences to read his lips.}
\label{fig:lipreading}
\end{figure}
Captions are not the only visual content that helps DHH audiences understand the video content. In our interview, our participants stated the importance of lip-reading to understand the video content. More importantly, our participants complained that some videos on online video platforms only show a \textbf{small talking head at the corners of the screen that made it difficult for viewers to read the speaker's lips}. Moreover, some do not directly face the camera, which makes lip-reading impossible (Fig. \ref{fig:lipreading}). P3 stated the importance of reading lips when watching a video on YouTube:
\begin{quote}
``...I would say I highly rely on lip-reading when watching YouTube videos. It is more peripheral if the caption quality is good, but it is definitely more helpful if the video has poor-quality captions. In some videos, the content creators directly face the camera, which is fine for me to read lips. But, many videos, such as game streaming videos where the content creators usually show their minimized face at a corner of the screen really pissed me off. Some of them even do not show their faces or at a weird camera angle that I cannot even see the mouth. I would recommend the social media platform could provide an option to maximize the lip region and display it on the screen...''
\end{quote}
\subsubsection{DHH Audiences' Reactions and Mitigation Strategies}
Due to caption delay, nine participants complained about the tedious process of manually going through the video list to check whether captions had become available. Thus, six out of 13 interviewees mentioned that they \textbf{typically do not come back to a video for captions if the video was not properly captioned when they first saw it in the feed list}. They would instead watch a different video with similar content. This creates pressure on individual content creators to provide captions on time to prevent the loss of audiences.
In terms of reactions to caption problems, we found that 10 out of 13 DHH interviewees mentioned that they tried leaving a comment initially but ended up with other options (e.g., emailing, direct messaging on Instagram if applicable) or just did not do anything. \textbf{Leaving a caption request in the commenting section might not be an effective approach}, as P11 commented that her comments on requesting captions usually got buried in the long comment list, especially for big YouTubers:
\begin{quote}
``...Initially, when I found videos with bad captions on YouTube, I left comments to let the content creators know my concerns. However, I found the content creators barely check their comments, especially those with thousands of comments. Mine just got buried at the bottom...''
\end{quote}
P4 also mentioned that she resists posting comments to ask for captions because other video audiences piled on her with disrespectful replies, such as ``you are wasting the content creator's time.'' Therefore, our interviewees mentioned that they tend to contact content creators more privately through direct messaging approaches, such as Twitter and Instagram, to request better captions. \black{However, three interviewees further mentioned that the additional effort of sending private messages to individual content creators (e.g., signing up for a new account and sending friend requests) and the potential privacy concerns made them choose a different video instead. Overall, all participants mentioned that \textbf{the process of requesting better captions is troublesome and often fruitless}, and future research should explore new channels for communicating captioning problems with privacy protections. P4 commented on the burden of creating additional accounts for private messages and the slowness of responses:}
\begin{quote}
``...Different YouTubers have their own preferred platforms for personal or business inquiries. If I do not have an account with a certain platform, I have to sign up a new account and send a friend request to the YouTuber in order to send private messages to the person. Honestly, it may still take a long time to have the YouTuber replied to my message or just never replied...''
\end{quote}
In this section, we first revealed DHH audiences' experiences of video searching and filtering (e.g., manually added caption tags could create additional confidence and trust for DHH audiences). We then uncovered DHH audiences' feedback on the quality of captions by individual content creators (e.g., the use of abbreviation in videos generates concerns about equality and fairness for the DHH population). Finally, we showed their reactions and mitigation strategies regarding low-quality captions and the associated concerns (e.g., being attacked with disrespectful replies by other video viewers). The interview thus explored existing practices, challenges, and perceptions of video captions from the perspective of DHH audiences. To better understand the gap between DHH audiences and individual content creators, it is also important to explore individual content creators' practices, perceptions, and challenges in captioning their own videos.
\subsection{Individual Content Creators' Practices, Perceptions, and Challenges in Captioning Their Own Videos}
In this section, we show our findings in three main phases regarding individual content creators' practices, perceptions, and challenges on captioning: 1) caption practices and processes, 2) caption quality and consequences, and 3) individual perceptions and personal challenges on captioning.
\subsubsection{Caption Practices and Processes}
\label{caption practice and processes}
In this section, we introduce the existing practices individual content creators use to caption their videos and emphasize the practical challenges associated with different captioning approaches and processes. From the video analysis and the survey, we found that individual content creators mainly caption their videos through 1) self-captioning: content creators manually create the SRT file with timestamps or use auto-sync functionality to generate the associated timestamps, 2) community-contributed captioning: video audiences and subscribers volunteer and contribute to the captioning process, 3) auto-generated captioning: captions automatically generated through speech recognition algorithms, with or without manual editing by content creators, and 4) third-party captioning services: third-party companies generate the SRT file with captions, which content creators upload later. In our survey, we asked participants to rate their preference for different captioning methods on a 5-point Likert scale (5 as strongly agree with preferring the method, 1 as strongly disagree with preferring the method). We found that our survey respondents most preferred self-captioning (Mean = 3.56, SD = 1.6), automatic captioning with manual editing (Mean = 3.5, SD = 1.47), and community-contributed captioning (Mean = 3.1, SD = 1.6). In contrast, our survey respondents least preferred third-party captioning services (Mean = 2.02, SD = 1.33) and automatic captioning without editing (Mean = 1.92, SD = 1.24). We further describe the practices and challenges of each captioning method below.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{figures/ManualCaptioning.png}
\caption{YouTube's manual captioning interface: content creators can manually input the transcript into the automatically created time periods.}
\Description{This figure shows the YouTube manual captioning interface. On the left side, a specific caption is highlighted; on the right side, a video frame is shown with its timestamp.}
\label{fig:manualcaptioning}
\end{figure}
To caption videos manually, content creators upload an SRT file or use the YouTube captioning interface to input captions (Fig. \ref{fig:manualcaptioning}). In Fig. \ref{fig:manualcaptioning}, content creators can type their subtitles into the automatically generated time periods on YouTube. However, unlike professional video producers, individual content creators mentioned \textbf{the extra effort of reproducing video transcripts for self-captioning} because they often do not prepare a transcript before filming the video. Instead, they usually just follow a general guideline or a checklist when filming videos (V25). In our survey, we found that 43 survey respondents (i.e., individual content creators) who captioned their videos by themselves usually had to transcribe the entire video after filming it. Furthermore, 62 individual content creators mostly agreed that `manually captioning videos is time-demanding' (Mean = 3.92, SD = 1.27, on a 5-point Likert scale). We further asked individual content creators to rate how challenging different steps of self-captioning are on a 5-point Likert scale, 5 as strongly agree and 1 as strongly disagree. We found that the top three challenging steps for self-captioning are typing the caption word by word (Mean = 3.86, SD = 1.32), syncing/tracing the timeline of captions (Mean = 3.72, SD = 1.32), and ensuring the captions are free of errors (Mean = 3.52, SD = 1.34).
YouTube's community caption service leverages audiences and subscribers to volunteer and contribute to the captioning process (e.g., V35) and also to help translate captions into other languages (V29). To get a video captioned through community contributions, content creators usually have to make the video public first; people who would like to contribute can then add captions later. The community captioning interface (Fig. \ref{fig:communitycaptioninterface}) is similar to the self-captioning interface (Fig. \ref{fig:manualcaptioning}), except that it has a button to submit the contribution for an additional approval process by the video owner. From our survey results, we found that 46.8\% of our respondents leverage community caption services for captioning. To request community captions, we found that individual content creators chose to either post a dedicated video (V83), leave the request in the video description, or pin a comment in the video commenting section (V47). In our survey, we asked individual content creators to rate their agreement with different approaches to requesting community-contributed captions on a 5-point Likert scale, 5 as strongly agree and 1 as strongly disagree. We found that the two most preferred approaches are leaving a notice in the video description (Mean = 3.28, SD = 1.39) or in the comments (Mean = 2.98, SD = 1.34).
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{figures/communitycaptioncontribution.png}
\caption{Community caption interface. Community captionists could manually input the transcript and submit it for approval by the individual content creator.}
\Description{This figure shows a community caption interface with captions and their timestamps listed next to a video preview. At the top right, a button labeled ``Submit contribution'' is highlighted.}
\label{fig:communitycaptioninterface}
\end{figure}
For the 53.2\% of our survey respondents who do not use community captioning or stopped using it, 78.1\% complained about the difficulty of finding enough volunteers and the long waiting time for captions to be ready. From the video analysis and survey, we found that \textbf{individual content creators with fewer subscribers tend to wait longer for captions through community-contributed captioning} (V140, V183). In our survey, we asked individual content creators \textit{``how long does it usually take for community-contributed captions to be done?''}. Individual content creators who used community-contributed captions mentioned that it takes 16 - 24 hours on average to get their videos captioned. Analyzing the time cost in two groups (more than 1k subscribers vs. fewer than 1k subscribers), we found that content creators with more than 1k subscribers only need to wait 8 - 16 hours on average, while it usually takes 1 to 3 days for content creators with fewer than 1k subscribers. In V140, the content creator commented:
\begin{quote}
``...I am not a big YouTuber who has millions of subscribers to help with captions. For small YouTubers like me, there are very few audiences who help me to caption my videos, and it usually takes a long time for my videos to get captioned...''
\end{quote}
Because community captionists are usually audiences of a specific channel, they are familiar with that channel's style and are more willing to help caption videos from the same channel. Due to the community caption delay, we found that a single individual content creator often has multiple videos from multiple channels pending captions at the same time (V45). Therefore, they prefer directing community captionists from video to video to balance the pending time of community captions (V90). However, individual content creators found that these \textbf{community captionists are unwilling to help caption videos that they are not familiar with}, which leads to even longer wait times for certain videos from different channels (V185). Due to the `anonymity' of community captioning, content creators often do not have much background understanding of these community captionists' video preferences. Therefore, individual content creators mentioned that different videos from different channels by the same content creator might have drastically different captioning delays through community captioning (V185).
In the previous section, we showed that using auto-generated transcripts and modifying them to reduce speech recognition errors is the second most preferred method among our survey responses. In terms of time cost, our survey respondents mentioned that they usually have to wait an average of four to eight hours for the automatic captions to be ready on YouTube. However, we found that the automatic captioning process on many video-sharing platforms is not a service that content creators can manage or control; the \textbf{auto-captions often appear in the video once ASR is done, without any notification} (S41). Therefore, content creators who have very limited time and rely on modifying automatic captions recommended having video-sharing platforms send a notification message once the automatic captioning is finished, so they could check the caption quality immediately and prevent their audiences from reading error-prone automatically generated captions.
To save time and effort in captioning videos, content creators can pay third-party captioning companies for caption services. On average, this may cost from \$1 to \$5 per minute (V4). Compared with a prior report from 2014 \cite{ClosedCa43:online}, we found that the cost of third-party captioning services has decreased over the years, and some content creators with disabilities can usually negotiate a discount (V3). Nevertheless, 43.5\% of survey respondents still mentioned that they do not have enough budget for third-party captioning services. We found that different approaches have their strengths and weaknesses, and no single option is a clear ``win'' for all groups of individual content creators. As a common practice, we found that \textbf{content creators usually stick to a single captioning approach without changing} (V82).
\subsubsection{Caption Quality and Consequences}
During our video analysis and survey, we found that content creators expressed concerns about the quality of auto-generated captions, community-contributed captions, and captions created by a third party. First, we found that \textbf{caption errors could potentially demonetize a video} (i.e., mark the video as not ad-friendly) by falsely including non-ad-friendly words (e.g., swear words, racist words). From the video analysis, we found that non-ad-friendly videos usually receive a lower placement in YouTube's recommendation system and eventually get fewer views (e.g., V171). In V59, the content creator explained his experience and understanding of how poor captions containing racist words demonetized his video:
\begin{quote}
``...I was very surprised once I got my video yellow labeled (not ad friendly), which means I would basically not receive money from that video. I was pretty sure I did not say anything racist or contained any adult-only information that is not ad-friendly in the video. That video was captioned automatically, and I later found the caption contained racist words. Since then, I have stopped using auto-generated captions...''
\end{quote}
Second, 56\% of our survey respondents leverage community-contributed captions. However, some content creators complained about \textbf{unprofessional captionists sometimes adding their own comments or feelings to the community-contributed captions} (Fig. \ref{fig:commentaryCaption}). For example, promotional ads, irrelevant jokes, useless commentary, or racist speech (e.g., V58) could frustrate both content creators and their audiences. Although content creators need to approve community captions for them to be published, we found that fewer than one in seven of the 56\% of survey respondents who use community-contributed captions spend additional time verifying the captions step by step. Due to limited time, the majority of them directly approve and publish the captions when they are ready, without checking the quality thoroughly.
\begin{figure}
\centering
\includegraphics[width=0.5\columnwidth]{figures/nomorecraptions.jpg}
\caption{The caption includes personal comments from community captionists.}
\Description{This figure has two people, one with black shirt and another one with a jacket. There is a caption in the video frame ``odd noises* (please help me i've become obsessed with these nerds)''}
\label{fig:commentaryCaption}
\end{figure}
\subsubsection{Individual Perceptions and Personal Challenges on Captioning}
In this section, we report on content creators' perceptions, challenges, and awareness regarding captioning. From the video analysis and survey, we found that \textbf{content creators' individual circumstances limited their options for captioning methods}. Among the 62 content creators we surveyed, only four considered being a content creator their primary occupation, and many YouTubers do not have enough money (e.g., V99), enough time (e.g., V124), or both to caption their videos. In V19, the content creator, a college student, explained her captioning situation when auto-generated captions did not work for her because of her accent:
\begin{quote}
``...I know I should caption my videos, but I also have school deadlines, this becomes a dilemma for me. I am still on my student loan that I need to pay. Since my accent is pretty strong, it even took me more effort to modify the auto-generated captions than just manually create the transcript. I tried to post caption requests in my own channel, on Reddit, Facebook groups, but it barely helped...''
\end{quote}
Furthermore, we found that \textbf{content creators with disabilities have strong accessibility needs for captioning and video editing tools}. In our survey, one respondent (S13) has hearing impairments and reported having a hard time using the current system to caption her videos, which forced her to rely on community captions or pay third-party companies for captions. S13 left a comment in the survey about her situation:
\begin{quote}
``...As a content creator with hearing impairments, I face strong difficulties through video editing processes. To caption a video, the only option for me right now is to pay third-party companies to do all the work for me. Although these captioning companies gave me the discount as \$1 per minute, it is still a considerable amount for my budget...''
\end{quote}
Beyond physical barriers, we found that \textbf{poor-quality captions are also caused by a lack of awareness and understanding of captioning among individual content creators}. In our survey, we asked content creators how they initially learned the importance and methods of captioning. We found that 43.5\% of the content creators said they learned by watching tutorial videos, and 24.2\% by voluntarily reading articles about captions. However, this voluntary learning requires content creators to already be aware of video captioning. Only 9.7\% of the content creators learned that they should caption their videos through requests in viewers' comments or direct messages.
From the video analysis and survey, we found that many individual content creators are not aware of the importance of having captions when uploading videos to video-sharing platforms. After content creators learn the importance of captioning, they often put themselves under pressure to back-caption all of their past videos. We learned that this large effort of \textbf{back-captioning (i.e., captioning all previous videos that do not have captions)} made the whole captioning process very frustrating. From our survey, we found that our survey respondents started captioning their videos after uploading 30 - 50 videos. One content creator explained her back-captioning situation, as she only learned about the captioning process after she had already uploaded over 100 videos:
\begin{quote}
``...Personally, I did not know I had to caption my videos initially when I became a YouTuber. After uploading over 100 videos, I accidentally watched Rikki's (a deaf YouTuber) video one day, and I realized the importance of having my videos captioned. I would like to claim my channel as a fully inclusive channel. However, if I only start to caption the videos from now on, some people may say something bad in my channel, such as 'your previous videos were not properly captioned.' It literally took me a whole month to caption all past videos without uploading any new videos...''
\end{quote}
In this section, we uncovered 1) caption practices and processes of individual content creators (e.g., the effort of reproducing video transcripts for self-captioning), 2) individual content creators' feedback on caption quality and the associated consequences (e.g., caption errors could potentially demonetize a video), and 3) individual content creators' perceptions, general challenges, and awareness of captioning (e.g., back-captioning problems). We further discuss our results and potential design implications for video-sharing platform designers.
\begin{table}[h]
\small
\caption{\red{Summary of Discussion Points}}
\centering
\begin{tabular}{|m{3cm}|m{10cm}|}
\hline
\textbf{\red{Discussion Themes}} & \textbf{\red{Detail}}\\
\hline
\red{Designing a DHH-friendly Video Recommendation System} &
\begin{itemize}[left=0cm,topsep=0.2cm]
\item \red{Create systems with a different set of criteria (e.g., caption availability) to filter out videos.}
\item \red{Video recommendations should consider video type (e.g., videos without any sound, video podcasts) when deciding whether to recommend a video that lacks proper captions.}
\item \red{Video recommendation systems should allow the audiences to control what to recommend based on accessibility needs.}
\end{itemize}\\
\hline
\red{Minimizing the Effort on Previewing Caption Quality} & \begin{itemize}[left=0cm,topsep=0.2cm]
\item \red{Leverage audiences' interaction behavior to make caption quality predictions.}
\item \red{Leverage users' ratings to evaluate caption quality.}
\end{itemize}\\
\hline
\red{Improving and motivating High-quality Community Captions} &
\begin{itemize}[left=0cm,topsep=0.2cm]
\item \red{Use machine learning algorithms for specific captioning problems (e.g., irrelevant commentary)}
\item \red{Monetize community captioning and allocate captioning tasks based on captionists' individual backgrounds.}
\end{itemize}\\
\hline
\red{Presenting Different Levels of Caption Details} &
\begin{itemize}[left=0cm,topsep=0.2cm]
\item \red{Detect and extract the abbreviated caption content.}
\item \red{Display non-speech-based caption content and have different levels of caption density.}
\end{itemize}\\
\hline
\red{Opportunities with Lip-reading} &
\begin{itemize}[left=0cm,topsep=0.2cm]
\item \red{Explore lip-reading in the presence of video captions.}
\item \red{Technologies to track lip movements.}
\end{itemize}\\
\hline
\end{tabular}
\label{table:DiscussionTable}
\end{table}
\section{Discussion}
In our results, we answered our research questions by presenting current captioning methods and processes, caption quality and audiences' reactions, caption presentation and improvements (e.g., lip-reading), and challenges of captioning and content creators' perceptions (e.g., back-captioning problems).
In this section, we focus our discussion on 1) designing a DHH-friendly video recommendation system, 2) minimizing the effort on previewing caption quality, 3) improving and motivating high-quality community captions, 4) presenting different levels of caption details, and 5) opportunities with lip-reading \red{(Table \ref{table:DiscussionTable}).}
\subsection{\black{Designing a DHH-friendly Video Recommendation System}}
\red{Video recommendations on YouTube are mainly located in two primary locations---the YouTube home page and the browse page \cite{davidson2010youtube}. Currently, the platform leverages video metadata (e.g., content creator, publish date) and recommends a video primarily based on the audience's preferences and browsing activities (e.g., watch history and search results) \cite{davidson2010youtube}, but such a video might not be captioned (Section \ref{DHH practice}).} As a result, DHH audiences had to save a captionless video in a playlist and check for captions later (Section \ref{DHH practice}). The delay in captions and the effort of repeatedly checking for captions greatly compromised their user experience. To mitigate this problem, we see an opportunity to design a DHH-friendly recommendation system that operates on a different set of criteria and takes accessibility-related metadata into consideration in the recommendation algorithms. For example, the platform could use the availability of captions as a criterion or leverage existing comments and information (e.g., \cite{liu2021reuse,liu2018supporting}) to decide whether or not to recommend a video to a DHH viewer. Furthermore, the time delay in the recommendation should be a function of the characteristics of the target video. For example, a video without any sound could be recommended to DHH audiences right away, while a video podcast should wait until it has been fully captioned. Additionally, we suggest that DHH audiences should have the option to toggle the recommendation delay on/off to fit more customized needs.
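As an illustration of this gating logic, the sketch below decides whether to surface a video to a DHH viewer immediately or hold it back until captions arrive. It is a minimal sketch under our own assumptions: the Video fields (including the distinction between auto-only and human-reviewed captions) are hypothetical metadata, not YouTube's actual API.

```python
from dataclasses import dataclass

@dataclass
class Video:
    # Hypothetical metadata fields; illustrative only, not a real platform API.
    video_id: str
    has_captions: bool
    captions_auto_only: bool   # True if only unreviewed ASR captions exist
    has_audio_speech: bool     # False for silent or music-only videos

def recommend_to_dhh(video: Video, delay_enabled: bool = True) -> bool:
    """Decide whether to recommend a video to a DHH viewer right away."""
    if not video.has_audio_speech:
        return True   # nothing to caption, e.g., a silent video: recommend now
    if video.has_captions and not video.captions_auto_only:
        return True   # human-reviewed captions are available
    # Otherwise delay the recommendation until captions are ready,
    # unless the viewer has toggled the delay off.
    return not delay_enabled
```

Under this sketch, a video podcast without captions would be held back, while the same video would surface immediately once reviewed captions are uploaded or if the viewer disables the delay.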
\subsection{Minimizing the Effort on Previewing Caption Quality}
In our results, we pointed out DHH audiences' existing concern that they might click open a video, only to be disappointed by poor caption quality after watching for a few minutes \black{(Section \ref{DHH practice})}. This leads to a further question: how can we better predict and present the quality of captions before audiences view the video?
\subsubsection{Leverage Interaction Data to Rate Caption Quality}
From our video analysis, we also learned that DHH individuals are not the only ones who need captions: people with auditory processing disorders (V65), non-native speakers (V30), people in situations where sound is not appropriate (e.g., a library) (V64), and young children who are learning new languages (V148) may also need captions to understand video content. Therefore, this large audience may provide rich data on their interactions with the video interface and their reactions to captions, which we can leverage to assess caption quality. For example, once the system detects that captions have been turned on, it could track where audiences pause/replay the video or quit, and predict that the caption at that timestamp may have relatively poor quality. \red{Future systems should further combine existing caption quality prediction metrics (e.g., WER \cite{woodard1982information}, ACE \cite{kafle2017evaluating}) with users' interaction data for better caption quality predictions across different types of videos.}
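One way to operationalize this idea is to aggregate caption-on viewing events and flag time segments where many distinct viewers paused, replayed, or quit. The sketch below is purely illustrative; the event format, bin size, and viewer threshold are our own assumptions, not an existing platform API.

```python
def flag_problem_segments(events, bin_seconds=10, min_viewers=3):
    """Flag caption segments that many caption-on viewers stumbled over.

    events: iterable of (viewer_id, event_type, timestamp_seconds) tuples
    recorded while captions were turned on, where event_type is one of
    'pause', 'replay', or 'quit'.
    Returns the start times (in seconds) of the bins in which at least
    `min_viewers` distinct viewers generated such an event -- a rough
    proxy for a confusing or low-quality caption segment.
    """
    viewers_per_bin = {}
    for viewer, etype, ts in events:
        if etype in ("pause", "replay", "quit"):
            b = int(ts // bin_seconds)
            viewers_per_bin.setdefault(b, set()).add(viewer)
    return sorted(b * bin_seconds
                  for b, viewers in viewers_per_bin.items()
                  if len(viewers) >= min_viewers)
```

The flagged timestamps could then be cross-checked against WER- or ACE-style metrics, or surfaced to the content creator for review.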
\subsubsection{Leverage User Score to Rate Caption Quality}
\black{We are aware that many video-sharing platforms, such as YouTube, already use thumbs up/down interfaces to collect users' ratings of videos \cite{siersdorfer2010useful}}. In future work, a video-sharing platform might be able to receive general feedback on caption quality by prompting audiences who turned on captions to provide a rating. Future audiences who rely on captions to understand video content could then see the overall caption rating from previous audiences before playing the video. \red{Specifically, future work could also explore how the interactive system and user interfaces on social media platforms should be designed to allow audiences to mark poor caption segments and receive quick clarifications and fixes.}
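If such ratings were collected, naively averaging them would make a video with one or two extreme ratings look misleadingly good or bad. A common remedy, sketched below under our own assumptions, is to smooth the mean toward a neutral prior until enough ratings accumulate.

```python
def caption_rating(scores, prior_mean=3.0, prior_weight=5):
    """Smoothed mean of 1-5 star caption ratings.

    With few ratings, the result stays close to the neutral prior_mean;
    as ratings accumulate, it converges to the plain average. The prior
    values here are illustrative choices, not platform defaults.
    """
    return (prior_mean * prior_weight + sum(scores)) / (prior_weight + len(scores))
```

For example, a video with no ratings displays the neutral prior, and five perfect ratings raise it only part of the way toward 5, limiting the impact of a handful of early votes.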
\subsection{\black{Improving and Motivating High-quality Community Captions}}
\black{In our results, we mentioned that content creators complained about the caption quality and time delay of community captions (Section \ref{caption practice and processes}). In this section, we further discuss the opportunities of improving and motivating high-quality community captions through machine learning algorithms for captioning problems and monetized community captions.}
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{figures/communitycaptioncredits.png}
\caption{YouTube community-contributed caption credit list in the description of the video.}
\Description{This figure shows the description of a video. It highlighted Caption author (English) at the bottom left of the figure.}
\label{fig:communitycredit}
\end{figure}
\subsubsection{\black{Machine Learning Algorithms for Captioning Problems}}
Our findings revealed that many individual content creators (46.8\%) leveraged community captions to make their video content accessible to a wider audience, but community captions are not without problems (e.g., delays, irrelevant commentary) (Section \ref{caption practice and processes}). To improve crowd caption accuracy, prior research explored machine learning algorithms that combine crowd captions from non-experts to generate more accurate captions \cite{lasecki2017scribe,lasecki2012real}. However, existing research primarily evaluated these algorithms on accuracy and with paid crowd workers. As a potential direction, specialized machine learning algorithms could be valuable for specific captioning problems, such as reducing irrelevant commentary (Section \ref{caption practice and processes}), by combining the captions from different non-expert captionists. \black{Furthermore, future research should also explore the benefits and challenges of such algorithms on community captioning platforms where most non-expert captionists are unpaid volunteers with different backgrounds.}
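As a toy illustration of combining non-expert captions, the sketch below merges several captionists' versions of the same time segment: it first strips parenthetical asides (a crude stand-in for irrelevant commentary) and then takes a per-word-position majority vote. Real crowd-captioning systems such as Lasecki et al.'s Scribe use far more sophisticated alignment; this sketch only conveys the idea.

```python
import re
from collections import Counter

COMMENTARY = re.compile(r"\([^()]*\)")  # asides like "(please help me ...)"

def merge_segment(candidates):
    """Merge caption strings from several non-expert captionists.

    Strips parenthetical commentary, then, per word position, keeps the
    word that the most captionists agree on. Assumes the candidates cover
    the same time segment; positional voting is a deliberate
    simplification of real word-alignment approaches.
    """
    cleaned = [COMMENTARY.sub("", c).split() for c in candidates]
    # Use the most common caption length as the target length.
    length = Counter(len(words) for words in cleaned).most_common(1)[0][0]
    merged = []
    for i in range(length):
        votes = Counter(words[i] for words in cleaned if len(words) > i)
        merged.append(votes.most_common(1)[0][0])
    return " ".join(merged)
```

A single captionist's typo or inserted joke is outvoted as long as the other submissions agree on the underlying speech.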
\subsubsection{\black{Monetized Community Captions}}
\black{Currently, the only way of motivating community captions is by listing the contributors' IDs in the video descriptions (Fig. \ref{fig:communitycredit}), but this does not help generate high-quality community captions. Some content creators suggested that they wish the platform allowed them to use their advertisement revenue to subsidize caption contributors.} In V52, the content creator mentioned that he would prefer having a single paid community captionist take responsibility for captioning each video. \black{To evaluate the caption quality of different community captioning methods, future research could leverage caption evaluation metrics from existing research \cite{kafle2017evaluating,kafle2019predicting}}. Compared with third-party captioning companies, a community captionist who is personally a viewer of a content creator's channel or related channels could better caption the content, given their understanding of the channel's domain knowledge (e.g., terminology around technological products). \black{This brings future opportunities to explore the benefit of allocating captioning tasks to community captionists based on their individual preferences and knowledge}.
Employing a single paid community captionist may mitigate the delay problem of community captions because the content creator could enable the captionist to start captioning a video before it is published. Furthermore, forming a contractual relationship with the community captionist could reduce occurrences of jokes, commentary, or promotional ads in the captions. However, there might be practical challenges at the platform level in maintaining the connections between community captionists and content creators, which also brings opportunities for developers to design intuitive tools for both groups \cite{liu2019unakite,hsieh2018exploratory}. For example, \black{future work could explore what level of control the YouTuber should grant to the community captionist and how the platform should manage community captionists and distribute their work.} \red{Additionally, it might be worth exploring ways of motivating high-quality community captions beyond monetary rewards, for instance, leveraging a leaderboard mechanism \cite{butler2013effect,sun2015leaderboard} that encourages caption contributors to compete for high user scores.}
\subsection{\red{Presenting Different Levels of Caption Details}}
We touched upon how abbreviations and inadequate presentation of non-speech-based content robbed DHH audiences of an inclusive video-viewing experience (\black{Section \ref{feedback caption quality}}). In this section, we once again bring to readers' attention that DHH audiences desire more details in the caption, but how to address this need remains a future research question.
\subsubsection{\red{Abbreviated Caption Content}}
In our results, we included examples where captions used [Joke] or [Music] to represent the actual joke or music. We also concluded that DHH audiences feel negatively about merely inserting [Joke] in the captions to stand in for the actual content. We found that the key reason behind this problem is that DHH audiences feel they are not treated equally with respect to video content (P13).
We also learned that some content creators chose to use [Swear Word] instead of actually displaying the swear word in the captions, even if they did say that word in the video, to prevent getting demonetized or yellow-flagged. \textit{``I do not want other people to decide what I should know or not from the video, whatever the person said in the speech, just put whatever in the caption,''} said P5. This situation also applies to racist and adult-only content. P9 continued: \textit{``I am an adult, and I do not mind having adult-only content in the caption, whatever is said in the video, just show them in the caption. If it is adult-only, you should add a notification once audiences opened your video to prevent kids from watching it, not modifying the caption.''} \black{Therefore, future research should take accessibility needs into consideration when designing content filters to ensure the equity and inclusion \cite{chadwick2016digital} of video content for DHH audiences.}
\subsubsection{Non-speech-based Caption Content}
For audio-speech information, captionists can ensure DHH audiences get the same level of content by transcribing the speech word by word in the caption. However, some auditory content is hard to transcribe when it is not speech-based. For instance, describing music and background sounds with an appropriate level of detail is a challenge: too few details might throw the audience off, but too many details would take up many lines of text. In fact, from our video analysis and interviews, we found that most of our DHH audiences prefer closed captions that span no more than two lines (P1 - P13, V3), and some complained that too much content might stack up in the captions (V93). \red{Future research could explore different algorithms to identify the features of non-speech sounds (e.g., \cite{sporka2006non,cowling2002recognition})} that are most descriptive to DHH audiences and present them within a proper length limit. Additionally, DHH audiences' demands for levels of caption detail might vary. One should consider giving DHH audiences an option to switch between a `verbose' mode and a `concise' mode.
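The `verbose'/`concise' toggle could work as in the sketch below: given a non-speech description broken into features ordered by importance, the concise mode keeps only the leading features that fit one short caption line. The tag format and the character limit are our own illustrative assumptions.

```python
def render_nonspeech(description_tags, mode="concise", max_chars=40):
    """Render a bracketed non-speech caption, e.g., "[upbeat music, piano]".

    description_tags: features of the sound ordered from most to least
    important. 'verbose' mode keeps everything; 'concise' mode keeps only
    the leading tags that fit within max_chars characters (including the
    surrounding brackets). Purely illustrative values and format.
    """
    if mode == "verbose":
        return "[" + ", ".join(description_tags) + "]"
    kept = []
    for tag in description_tags:
        candidate = ", ".join(kept + [tag])
        if len(candidate) + 2 > max_chars:  # +2 for the brackets
            break
        kept.append(tag)
    return "[" + ", ".join(kept) + "]"
```

Because the tags are ordered by importance, shrinking the limit drops the least descriptive details first rather than truncating mid-phrase.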
\subsection{\black{Opportunities with Lip-reading}}
DHH interviewees stressed the importance of leveraging lip-reading to supplement their understanding of video content in addition to reading captions (Section \ref{feedback caption quality}). \black{In this section, we outline two research directions that can be further pursued with lip-reading.}
\subsubsection{\black{The Role of Lip-reading for Online Videos}}
\black{According to Richard Mayer's theory of dual encoding, multiple sources of visual information could compete for attention in the visual channel, which could potentially result in distraction \cite{mayer2005cognitive}. Therefore, the effect of lip-reading in the presence of video captions remains a research question for further exploration. More specifically, future research could investigate whether lip-reading serves as a complement to captions or a replacement for them, and how DHH audiences allocate their attention between reading captions and reading lips.}
\subsubsection{\black{Make Lips Visible}}
\black{DHH audiences who have grown accustomed to reading lips and captions at the same time might find the viewing experience incomplete if the speaker in the video unintentionally occludes his or her mouth (Section \ref{feedback caption quality}).} For example, many content creators show their face at an angle that makes it difficult for DHH viewers to see the mouth (Fig. \ref{fig:lipreading}). One direct solution to make lips more visible to DHH viewers is to have the speaker wear a camera \cite{li2019fmt,kianpisheh2019face} or to explore earable sensing systems \cite{sun2021teethtap} that track lip movement. However, such an approach would be cumbersome and costly for individual content creators. Another possible solution is to leverage the latest computer vision techniques to reconstruct the speaker's entire face from limited inputs. For example, Elgharib et al. \cite{elgharib2020egocentric} proposed a method that generated the speaker's front face view from only the side face view, which was provided by a low-cost wearable egocentric camera. \black{Future research should explore how DHH audiences would react to the adoption of such accessibility technologies by content creators.}
\section{Limitations}
We acknowledge several limitations of our study that may have implications for how to interpret our results. We recruited both interviewees and survey respondents through social platforms, such as Facebook Groups, Reddit, and Twitter; our sample may therefore not cover people who do not use these social platforms. Furthermore, our survey may only attract those content creators who are willing to take surveys. In our survey study, most respondents are content creators who have fewer than 10k subscribers, and we did not have survey respondents with over 1 million subscribers, who may have different practices and understandings of captioning approaches.
In our video analysis and survey, we only covered videos and content creators who speak English or have English captions. However, captioning practices may vary for content creators who speak other languages, and caption consumption practices might differ across cultures \cite{li2021choose}. \black{In terms of using YouTube videos as a data source, beyond the benefits that we described in the methodology section (e.g., \cite{anthony2013analyzing}), there may also be tradeoffs and risks (e.g., privacy) in using YouTube videos \cite{klobas2018compulsive,myrick2015emotion,spinelli2020youtube}. We look forward to exploring further how the risks and tradeoffs of using YouTube videos affect the DHH population and individual content creators' experiences with captions.}
\section{Conclusion}
In our work, we conducted an interview study with 13 DHH YouTube viewers, a video analysis of 189 videos on YouTube, and a survey study with 62 content creators, to explore the knowledge gap between DHH audiences and individual content creators on captioning. We introduced DHH audiences' practices, perceptions, and challenges in watching captioned videos by individual content creators, such as enhancing lip-reading quality and avoiding abbreviated terms in ``[]''. We also revealed the problems of different captioning methods used by individual content creators (e.g., community captionists are unwilling to caption videos that they are not familiar with) and showed their perceptions of captioning and associated challenges (e.g., back-captioning problems). \black{We extended our findings through the discussion of developing a DHH-friendly video recommendation system, minimizing the effort of previewing caption quality, improving and motivating high-quality community captions, presenting different levels of caption details, and opportunities with lip-reading}. Overall, our work provides an understanding of the existing gaps between individual content creators and the DHH audiences on video captions.
We believe our design suggestions could shed light on the creation, interpretation, and presentation of captions generated by individual video content creators and provide design implications to video-sharing platform designers.
\begin{acks}
This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (RGPIN-2016-06326).
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Results}
The results are summarised in Table~\ref{table1}. We run three different
types of models: a scale independent case (``DGP''), a scale dependent
$k^2$ case (``$f(R)$''), and a true dark energy case fixing $c=0$ and
including dark energy perturbations in the calculations \cite{Zhao:2005vj}.
The time dependence parameter $s$ cannot be constrained by the data and
is marginalized over.
\begin{table}[t]
\begin{tabular}{c|c|c|c|c|c|}
\cline{2-6}
& \multicolumn{2}{c|}{scale indep. ($n=0$)} &
\multicolumn{2}{c|}{scale dep. ($n=2$)} &\multicolumn{1}{c|}{GR: $\mu=1$} \\ \cline{2-6}
& $w=-1$ & $w_0,w_a$ float& $w=-1$ & $w_0,w_a$ float & $w_0,w_a$ float\\ \hline
\multicolumn{1}{|c|} {$c$} &
$<4.0$ &$<4.1$ & $<0.002$& $<0.002$ &$0$\\ \hline
\multicolumn{1}{|c|} {$w_0$} &
$-1$ & $-0.90\pm0.19$& $-1$& $-0.92\pm0.20$ &$-0.91\pm0.19$\\ \hline
\multicolumn{1}{|c|} {$w_a$} &
$0$ & $-0.26\pm0.78$& $0$& $-0.32\pm0.82$ &$-0.27\pm0.78$\\ \hline
\end{tabular}
\caption{Constraints from current data on the MG parameter $c$ (marginalized
over $s$) and the effective dark energy parameters $w_0, w_a$ for the scale
independent, scale dependent MG models and the true dark energy model
($\mu=1$). For $c$ we quote the 95\% CL upper
limit, while for $w_0$ and $w_a$, we quote the median and 68\% CL error.
Fitting for the expansion in terms of $w_0,w_a$ rather than fixing $w=-1$
does not degrade the gravity constraint; fitting for gravity in terms of
$c,s$ does not degrade the expansion constraint.
}
\label{table1}
\end{table}
Figure~\ref{fig:c1D} shows the 1D probability distribution functions
(PDF) for $c$. Recall that $c=0$ is GR, and we see that all cases are
consistent with GR.
For each case we either also fit $s$ or fix it to $s=1$ for the
scale independent case (mimicking DGP) or to $s=4$ for the $k^2$ case
(mimicking a particular $f(R)$).
Note that when we marginalize over $s$ this can actually tighten the
constraints on $c$ because small values of $s$ are then permitted (recall
deviations depend on $a^s$), which strengthens deviations at higher
redshifts.
Figure~\ref{fig:cs2D} contains the 2D joint probability
contours for $c$--$s$ for scale dependent and independent cases. The filled
contours represent when the background expansion is fixed to $\Lambda$CDM;
we see that this does not have a dramatic effect on the results, implying
that there is little covariance between the gravity and expansion
parameters and that simultaneous fitting is not only desirable but practical.
\begin{figure}[t]
\includegraphics[scale=0.157]{1D.eps}
\caption{1D PDF for the MG amplitude $c$ from the latest observational
data. The left panel shows the scale independent case, with $s$ either
fixed to 1 or marginalized over, and the background either fixed to
$\Lambda$CDM or marginalized over $w_0,w_a$.
The right panel shows the analogous curves for the scale dependent ($k^2$)
case.
All PDFs are consistent with $c=0$, corresponding to GR. }
\label{fig:c1D}
\end{figure}
\begin{figure}[t]
\includegraphics[scale=0.155]{2D.eps}
\caption{68\% and 95\% CL joint constraints on the MG parameters.
Filled and unfilled contours represent the cases when the background
cosmology is fixed to $\Lambda$CDM, and the effective dark energy equation
of state parameters $w_0, w_a$ are allowed to vary, respectively. GR
corresponds to $c=0$, when the value of $s$ is moot. The left (right) panel
corresponds to the scale independent (dependent, $k^2$) case.
}
\label{fig:cs2D}
\end{figure}
In Fig.~\ref{fig:w0wa}, we show the reconstructed (effective) $w(z)$ using
the constraints on the expansion parameters for both MG models and for the
true dark energy case, with $c$ and $s$ marginalized over where
appropriate. The consistency of the contours demonstrates that simultaneous
fitting of gravity and expansion does not here substantially degrade
constraints. Recall that the simultaneous fitting
enables avoidance of a possible significant bias if there is any deviation
from $\Lambda$CDM with GR. Current data is consistent with $\Lambda$CDM
cosmology even in the presence of possible gravitational modifications.
\begin{figure}[t]
\includegraphics[scale=0.157]{wz.eps}
\caption{The reconstructed $w(z)$ with 68\% CL error are shown allowing
for modified gravity (marginalized over $c, s$) in the scale independent
(left panel) and scale-dependent $k^2$ (right panel) cases by the filled
bands. The reconstruction for true dark energy, with gravity fixed to GR,
is shown by the dash-dotted curves, the same in each panel. }
\label{fig:w0wa}
\end{figure}
The lack of covariance between the gravity and expansion parameters is
not a general property of all large scale structure observations;
for example \cite{stril} found high correlation between the gravitational
growth index $\gamma$ \cite{groexp} (closely related to $\mu$) and
$w_0, w_a$ when using the galaxy density power spectrum for future data.
This is because the density power spectrum involves the integrated growth
factor, influenced by both expansion and modified gravity, while the
peculiar velocity field involves the growth rate, currently measured only
at modest accuracy, and current weak lensing mostly probes light deflection.
With future data, however, weak lensing will be more sensitive to growth,
and galaxy data will probe both growth and growth rate, so we expect
that as the constraints tighten they will also become more correlated.
Of course both the EOS and MG parameters have covariance
with the matter density $\Omega_m$, thus in principle they are correlated
indirectly. But for our current datasets the correlation is seen to be
very small.
Indeed the constraints on $w(z)=w_0+w_a(1-a)$ are nearly independent of
the MG parameters. A similar behavior was shown in Fig.~2 of
\cite{linroysoc} where the joint confidence contours of $w_0$--$w_a$
were nearly independent of the value of the modified growth index $\gamma$.
The $w(z)$ behavior reconstructed from our best fit shows the usual
``quintom'' \cite{quintom} crossing of $-1$ at $z\sim0.4$. Such a
pivot point is
expected from the strong influence of CMB constraints on the distance to
last scattering agreeing with $\Lambda$CDM; this induces the ``mirage of
$\Lambda$'' \cite{linmirage} where $w_p\equiv w(z\approx0.4)\approx -1$
even in the presence of time variation in EOS, and is not a consequence
of using the $w_0$, $w_a$ form. As other data gain in leverage relative
to this CMB geometric constraint, the crossing may disappear. On the
other hand, the effective EOS in modified gravity models can cross $-1$,
so this quintom behaviour, if confirmed by future data to high confidence
level, might be a smoking gun of modified gravity.
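To make this concrete, the short sketch below (an illustration only, not the analysis code; the parameter values are the marginalized medians for the scale independent case in Table~\ref{table1}) evaluates the parametrization $w(z)=w_0+w_a(1-a)=w_0+w_a\,z/(1+z)$:

```python
# Illustrative sketch (not the analysis pipeline): evaluate the
# w(z) = w0 + wa*(1 - a) = w0 + wa*z/(1+z) parametrization using the
# median values from Table I for the scale independent case.
def w_of_z(z, w0=-0.90, wa=-0.26):
    """Effective dark energy equation of state; scale factor a = 1/(1+z)."""
    return w0 + wa * z / (1.0 + z)

for z in (0.0, 0.4, 1.0, 1089.0):
    print(f"z = {z:6.1f}   w(z) = {w_of_z(z):+.3f}")
```

With these median values $w(z)$ crosses $-1$ at $z=0.625$ (where $z/(1+z)=-(1+w_0)/w_a$); the crossing near $z\sim0.4$ quoted above refers to the full best fit, which need not coincide with the marginalized medians.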
In summary, to test gravity in a stringent manner, we parametrize the consistent set
of gravity field equations through modifying factors in the matter growth
(Poisson) equation and light
deflection (sum of the metric potentials) equation. The Pad{\'e}
approximant form adopted can cover many different modified gravity theories
with scalar degrees of freedom. Note that a simple power law such as
$\mu=1+\mu_s a^s$ cannot properly weight both high and low redshifts and
may bias the results.
Simultaneously with fitting for these modifications we
also allow the background expansion to deviate from a $\Lambda$CDM cosmology
through an effective time varying dark energy equation of state. This is
important
as incorrectly fixing either the gravity side
or the expansion side could strongly bias the conclusions.
We then used the most recent observational data -- supernova distance data
(Union2.1 compilation), CMB (full WMAP-7yr spectra), weak lensing (CFHTLS),
and galaxy peculiar velocity (WiggleZ) -- to fit these and other
cosmological parameters, testing Einstein gravity. The simultaneous
fitting of gravity and expansion can be successfully carried out, with
little degradation in leverage while avoiding possible bias due to fixing
one or the other.
In the scale dependent case, the deviation amplitude is constrained to be
$c\lesssim0.002$ corresponding to the constraint on the Compton wavelength
$\lambda\lesssim250\,h^{-1}$Mpc (95\% CL), while in the scale independent case
cosmological data is not yet precise enough to place strong bounds
($c\lesssim4.1$ implies $\omega_{BD,0}\gtrsim0.12$).
General relativity is a good fit with these recent data. Future data
will allow more stringent limits, and as growth measurements improve the
covariance between gravity and expansion influences should increase,
making simultaneous fitting even more necessary. The function $\Sigma$
entering the light deflection equation will be tested as upcoming large
weak lensing surveys deliver data. Next generation data should greatly
advance our ability to test gravity and uncover the physical origin of
the acceleration of our universe.
We thank the Supernova Cosmology Project for providing the Union2.1 data
before publication. GZ and KK are supported by STFC
grant ST/H002774/1; EL is supported by DOE and by WCU grant R32-2009-000-10130-0. KK is also supported by the ERC and the Leverhulme
trust. DB acknowledges the support of an RCUK Academic Fellowship. HL and XZ are supported in part by the National Natural Science
Foundation of China under Grant Nos. 11033005, 10803001, 10975142
and also the 973 program No. 2010CB833000.
\section{Introduction}
The typical wavelength of gravitational waves from astrophysical compact
objects such as BH (black hole)-BH binaries is in some cases
so long that
wave optics must be used instead of geometrical optics when
discussing gravitational lensing.
More precisely,
if the wavelength becomes comparable or longer than the
Schwarzschild radius of the lens object,
the diffraction effect becomes important and as a result
the magnification factor approaches
unity \cite{Ohanian,Bliokh,Bontz,Thorne,Deguchi}.
Mainly due to the possibility that the wave effects could be
observed by future gravitational wave observations,
several authors \cite{Takahashi:2003ix,Seto:2003iw,Nakamura:1997sw,Yamamoto:2003cd,Baraldo:1999ny,T.Nakamura:1999,Yamamoto:2003wg,Suyama:2005mx,Takahashi:2005sx,Takahashi:2004mc}
have studied wave effects in gravitational lensing in recent years.
In most of the works that studied the gravitational lensing phenomenon
in the framework of wave optics, the lens objects considered are isolated,
ordinary astronomical objects such as galaxies.
Recently Yamamoto and Tsunoda\cite{Yamamoto:2003wg} studied wave
effects in gravitational lensing by an infinite straight cosmic string.
The metric around a cosmic string is completely different from
that around a usual massive object.
Cosmic strings generically arise as solitons in grand unified
theories and could be produced in the early universe as a result
of a symmetry-breaking phase transition\cite{Hindmarsh:1994re,vilenkin}.
If symmetry breaking occurred after inflation,
the strings might survive until the present epoch.
Recently, cosmic strings attract a renewed interest
partly because a variant of their formation mechanism
was proposed in the context of the brane inflation
scenario\cite{Dvali:1998pa,Dvali:1999tq,Burgess:2001fx,Alexander:2001ks,Dvali:2001fw,Jones:2002cv,Shiu:2001sy}.
In this scenario inflation is driven by the attractive force
between parallel D-branes and parallel anti D-branes
in a higher dimensional spacetime.
When those brane-anti-brane pairs collide and annihilate
at the end of inflation,
lower-dimensional D-branes, which behave like monopoles,
cosmic strings or domain walls from the view point of
four-dimensional observers, are formed generically
\cite{Majumdar:2002hy,Dvali:2002fi,Jones:2003da,Dvali:2003zj,Copeland:2003bj}.
For some time,
cosmic strings were a candidate for the seed of structure
formation of our universe, but this possibility was ruled out
by the measurements of the spectrum of
cosmic microwave background (CMB)
anisotropies\cite{Spergel:2003cb,Percival:2002gq}.
The current upper bound on the dimensionless string tension
$G\mu$ is around $10^{-7} \sim 10^{-6}$, which comes from the
observations of CMB\cite{Jeong:2004ut,Pogosian:2003mz,Pogosian:2004ny,Wyman:2005tu}
and/or the pulsar timing \cite{Kaspi:1994hp,Thorsett:1996dr,McHugh:1996hd,Lommen:2002je}.
Although cosmic strings cannot occupy a dominant fraction of the
energy density of the universe,
a non-negligible population is still allowed observationally\cite{Bouchet:2000hd,Rocher:2004my}.
In fact, Sazhin et al.\cite{Sazhin:2005fd,Sazhin:2003cp} reported
that CSL-1,
which is a double image of elliptical galaxies with angular separation
$ 1.9~{\rm arcsec} $, could be the first case of
gravitational lensing
by a cosmic string with $ G\mu \approx 4\times 10^{-7} $.
We study in detail wave effects in the gravitational lensing by an
infinite straight cosmic string.
In Ref.~\cite{Yamamoto:2003wg},
wave propagation around a cosmic string was studied but
they put the waveform around the string by hand.
\footnote{After submitting this paper, we have noticed a paper \cite{Linet:1986db} in which the solutions of the wave equations around the cosmic string are given, though the apparent expressions are different from those given in this paper. In \cite{Linet:1986db} the author estimated the amplitude of the diffracted wave to be suppressed by ${\cal O}(G\mu)$ compared with that corresponding to the geometrical optics. We show that the importance of the diffraction effects are determined by the combination of three parameters, $G\mu$, the distance from the string to the observer and the wavelength and that the relative amplitude of the diffracted wave can be ${\cal O}(1)$ for realistic astrophysical situations.}
Their prescription is correct only in the limit of geometrical optics,
which breaks down when the wavelength becomes longer than a certain
characteristic length.
In this paper,
we present exact solutions of the (scalar) wave equation
in a spacetime with a cosmic string.
We analytically show that our solutions reduce to the results of the
geometrical optics in the short wavelength limit.
We derive a simple analytic formula for the leading-order
corrections to geometrical optics
due to finite-wavelength effects,
and also an expression for the long-wavelength limit.
Interference caused by the lensing remains
due to the diffraction effects even when
only a single image can be seen in the geometrical optics.
This fact increases the lensing probability by cosmic strings.
This paper is organized as follows.
In section II,
we construct a solution of the wave equation on a background spacetime
with an infinite straight cosmic string in the case that
a source of the wave is located infinitely far.
An extension to the case in which a point source is located at a finite
distance is given in Appendix B.
In section III,
we study
the properties of the solution obtained in section II in detail.
In section IV,
we focus on compact binaries as the sources of gravitational waves
and discuss the possible effects due to finiteness of
the lifetime and the frequency evolution of the binaries
on the detection of the gravitational waves which pass near
a cosmic string.
We also give a rough estimate for the event rate of the lensing of
gravitational waves from NS-NS mergers assuming DECIGO/BBO.
Section V is devoted to summary.
\section{A solution of the wave equation around an infinite straight cosmic string}
A solution of Einstein equations around an infinite straight
cosmic string to first order in $ G \mu $ is given
by \cite{Vilenkin:1981zs}
\begin{equation}
ds^2= -dt^2+dr^2+{(1-\Delta)}^2
r^2 d {\theta}^2+dz^2, \label{1.1}
\end{equation}
where $ (r, z, \theta) $ are cylindrical coordinates ($ 0 \le \theta < 2
\pi $) and $2\pi\Delta\approx 8\pi G\mu$
is the deficit angle around
the cosmic string.
Spatial part of the above metric
describes the Euclidean space with a
wedge of angular size $2\pi\Delta $ removed.
Due to the deficit angle around a string,
double images of the source are observed with an angular separation
$\lesssim 2 \pi \Delta $
when a source is located behind the string in the limit of geometrical
optics. In general for a wave with a finite wavelength,
some interference pattern appears.
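As a rough numerical illustration (a sketch under the simplifying assumption that the image splitting is of the order of the deficit angle itself, which holds for a source far behind the string), the deficit angle implied by the CSL-1 tension $G\mu\approx4\times10^{-7}$ quoted in the Introduction is indeed of order arcseconds:

```python
import math

# Deficit angle 2*pi*Delta ~ 8*pi*G*mu for the CSL-1 tension estimate.
# Assumption: the observed image splitting is of the order of the deficit
# angle, which holds when the source lies far behind the string.
G_mu = 4e-7                          # dimensionless string tension
deficit = 8.0 * math.pi * G_mu       # deficit angle in radians
arcsec = math.pi / (180.0 * 3600.0)  # one arcsecond in radians
print(f"deficit angle ~ {deficit / arcsec:.2f} arcsec")
```

The result, about $2.1~{\rm arcsec}$, is consistent with the $1.9~{\rm arcsec}$ separation reported for CSL-1.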
An exact solution of Einstein equations around a
finite-thickness string has
already been obtained \cite{Gott:1984ef},
but we use the metric (\ref{1.1}) as a background since
the string thickness is negligibly small compared with the
Einstein radius, $\approx \pi D\Delta $, where $ D $ is the distance
from the observer to the string.
Throughout the paper,
we consider waves of a massless scalar field instead of
gravitational waves for simplicity, but
the wave equations are essentially the same in these two cases.
An extension to the cosmological setup
is straightforwardly done by adding an overall scale factor.
In that case the time coordinate $t$ is to be understood as
the conformal time. The wave equation remains unchanged
if we consider a conformally coupled field, but it is
modified for the other cases due to curvature scattering.
The correction due to curvature scattering of the Friedmann
universe is suppressed by the square of the ratio
between the wavelength and the Hubble length, which can be
neglected in any situations of our interest.
Our goal of this section is to construct a solution of the
wave equation which corresponds to a plane wave injected
perpendicularly to and scattered by the cosmic string.
This situation occurs if the distance between the source and the
string is infinitely large.
In order to construct such a solution,
we introduce a monochromatic source uniformly extended in
the $z$-direction and localized in $r-\theta$ plane,
\begin{equation}
S=\frac{B}{(1-\Delta)} \delta (r-r_o) \delta (\theta-\pi) e^{-i\omega t}, \label{1.2}
\end{equation}
where $\omega$ is the frequency and
we have introduced $B$,
a constant independent of $\Delta$,
to adjust the overall normalization
when we later take the limit $r_o\to\infty$.
The factor ${(1-\Delta)}^{-1}$ appears because the $\theta$-coordinate
used in the metric (\ref{1.1}) differs from the usual angle
\begin{equation}
\varphi\equiv (1-\Delta)\theta. \label{1.25}
\end{equation}
Here we consider a
uniformly extended source instead
of a point source since the former is easier to handle.
When the limit $r_o\to\infty$ is taken,
the answers are identical in these two cases.
The case with a point-like source at a finite distance
is more complicated.
This case is treated in Appendix B.
Now the wave equation that we are to solve is
\begin{eqnarray}
&& \left( \frac{ {\partial}^2 }{\partial r^2}+\frac{1}{r}
\frac{\partial}{\partial r}+\frac{1
}{(1-\Delta)^{2} r^2} \frac{
{\partial}^2 }{\partial {\theta}^2} +{\omega}^2 \right) \phi
(r,\theta)\cr
&&\qquad\qquad =\frac{B}{1-\Delta} \delta (r-r_o) \delta (\theta-\pi).
\label{1.3}
\end{eqnarray}
Since $\phi (r,-\theta)$ satisfies
the same equation~(\ref{1.3}) as $\phi(r,\theta)$ does,
$\phi(r,\theta)$ is even in $\theta$.
Thus, it can be expanded as
\begin{equation}
\phi(r,\theta)=\sum_{m=0}^{\infty} f_m (r) \cos m \theta. \label{1.4}
\end{equation}
From Eqs.~(\ref{1.3}) and (\ref{1.4}),
the equations for $f_m(r)$ are
\begin{eqnarray}
&& \left( \frac{d^2}{dr^2}+\frac{1}{r} \frac{d}{dr}+{\omega}^2 -\frac{
{\nu}^2_m }{r^2} \right) f_m (r)\cr
&&\qquad ={\epsilon}_m \frac{{(-1)}^m}{1-\Delta}
\frac{B}{2\pi} \delta (r-r_o), \label{1.5}
\end{eqnarray}
where ${\epsilon}_o \equiv 1, {\epsilon}_m \equiv 2 (m \ge 1) $ and ${\nu}_m \equiv {(1-\Delta)}^{-1} m$.
The solution of Eq.~(\ref{1.5}) for $r \neq r_o$ is a linear combination
of Bessel and Hankel functions.
We impose that the wave $\phi$ is regular at $r=0$ and pure out-going
at infinity.
Further, imposing that the wave is continuous at $r=r_o$,
$f_m(r)$ becomes
\begin{eqnarray}
f_m(r)=N_m \left( H^{(1)}_{{\nu}_m}(\omega r_o) J_{ {\nu}_m}(\omega r) \Theta (r_o-r) \right. \nonumber \\
\left. +J_{{\nu}_m}(\omega r_o) H^{(1)}_{{\nu}_m}(\omega r) \Theta (r-r_o) \right), \label{1.6}
\end{eqnarray}
where $\Theta(x)$ is the Heaviside step function.
Substituting Eq.~(\ref{1.6}) into Eq.~(\ref{1.5}),
the normalization factor $N_m$ is determined as
\begin{eqnarray}
N_m&=&\frac{B}{1-\Delta} \frac{ {\epsilon}_m {(-1)}^m}{2\pi\omega} \nonumber \\
&&\times \left[
J_{{\nu}_m}(\omega r_o) H^{(1)'}_{{\nu}_m}(\omega r_o)
-H^{(1)}_{{\nu}_m} (\omega r_o) J'_{{\nu}_m}(\omega r_o)
\right]^{-1}\cr
& =&{Br_o \epsilon_m(-1)^m\over 4i(1-\Delta)},
\label{1.7}
\end{eqnarray}
where $'$ denotes a differentiation with respect to the argument.
From Eqs.~(\ref{1.6}) and (\ref{1.7}) with the aid of the
asymptotic formulae of the Bessel and Hankel functions,
$\phi(r,\theta)$ for $r_o \to \infty$ can be written as
\begin{eqnarray}
\phi(r,\theta)=\frac{-iB}{2\sqrt{2}(1-\Delta)} \sqrt{\frac{r_o}{\pi \omega}}
e^{i\omega r_o-i \frac{\pi}{4}} \nonumber \\
\times \sum_{m=0}^{\infty} {\epsilon}_m i^m
e^{-\frac{i m \pi\Delta }{2(1-\Delta)}}
J_{{\nu}_m}(\omega r) \cos
m\theta. \label{1.8}
\end{eqnarray}
We determine the overall normalization of the source amplitude $B$,
independently of $G\mu$,
so that Eq.~(\ref{1.8}) becomes a plane wave
$e^{i \omega r \cos \theta}$ when $G\mu=0$.
This condition leads to
$B=-2\sqrt{\frac{2\pi \omega}{r_o}}e^{-i\omega r_o-i\pi/4}$.
Then, finally $\phi$ becomes
\begin{equation}
\phi(r,\theta)=\frac{1}{1-\Delta} \sum_{m=0}^{\infty} {\epsilon}_m i^m
e^{- \frac{im \Delta\pi}{2(1-\Delta)}} J_{{\nu}_m}(\omega r) \cos
m\theta. \label{1.9}
\end{equation}
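As a numerical sanity check on Eq.~(\ref{1.9}) (a sketch, assuming SciPy's Bessel function `jv` of real order; the truncation order is chosen by hand), note that for $\Delta=0$ the series must collapse, via the Jacobi-Anger expansion, to the unlensed plane wave $e^{i\omega r\cos\theta}$:

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, real order

def phi_series(xi, theta, Delta, mmax=80):
    """Truncated sum of Eq. (1.9); xi = omega*r, nu_m = m/(1-Delta)."""
    total = 0.0 + 0.0j
    for m in range(mmax + 1):
        eps = 1.0 if m == 0 else 2.0
        nu = m / (1.0 - Delta)
        phase = np.exp(-1j * m * np.pi * Delta / (2.0 * (1.0 - Delta)))
        total += eps * (1j) ** m * phase * jv(nu, xi) * np.cos(m * theta)
    return total / (1.0 - Delta)

# For Delta = 0 this is the Jacobi-Anger expansion of a plane wave.
xi, theta = 10.0, 0.7
err = abs(phi_series(xi, theta, 0.0) - np.exp(1j * xi * np.cos(theta)))
print(f"truncation error at Delta=0: {err:.2e}")
```

For realistic (tiny) $G\mu$, resolving the interference region requires very large $\xi$ and hence very many terms, which is exactly the slow convergence noted below and the motivation for the asymptotic formula of the next section.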
\section{Limiting behaviors of the solution}
\label{sec:behaviorsofsolution}
\subsection{Approximate waveform in the wave zone}
The solution~(\ref{1.9}) describes the waveform
propagating around a cosmic string.
But it is not easy to understand the behavior of the solution
because it is given by a series. In fact, it takes much
time to perform the summation in Eq.~(\ref{1.9}) numerically for
a realistic value of the string tension, say,
$G\mu \lesssim 10^{-6}$, because of
the slow convergence of the series.
In particular it is not manifest whether
the amplification of the solution
in the short wavelength limit coincides with the one which is obtained
by the geometrical optics approximation.
Therefore it will be quite useful if one can derive a simpler
analytic expression.
Here we reduce the formula by assuming that the distance between
the string and the observer is much larger than the wavelength,
\begin{equation}
\xi \equiv \omega r\gg 1,
\end{equation}
which is valid in almost all interesting cases.
Using an integral representation of the Bessel function,
\begin{equation}
J_{\nu}(\xi)=\frac{1}{2i\pi} \int_C dt \
e^{\xi \sinh t-\nu t}, \label{3.1}
\end{equation}
where the contour of the integral $C$ is such
as shown in Fig.~\ref{contour},
Eq.~(\ref{1.9}) can be written as
\begin{eqnarray}
\phi (\xi, \theta)\!\!&=&\!\!-\frac{J_0 (\xi)}{1-\Delta}+\frac{1}{1-\Delta}
\frac{1}{2i\pi} \int_C dt~ e^{\xi \sinh t} \nonumber\\
&\times& \!\!\sum_{m=0}^{\infty} e^{-\frac{mt}{1-\Delta}
+\frac{\pi}{2}mi-\frac{i m \pi\Delta}{2(1-\Delta)}}
(e^{im\theta}+e^{-im\theta}). \label{3.2}
\end{eqnarray}
When $t$ is in the segment of the integration contour $C$
along the imaginary axis,
the summation over $m$ does not converge
because each term in the summation
has unit absolute value.
In order to make the series converge,
we regard the integration contour $C$ as lying not
exactly on the imaginary axis but displaced so that $t$ always has
a positive real part.
For bookkeeping purposes,
we multiply each term in the sum by a factor $e^{-\epsilon m}$
($\epsilon$ is an infinitesimally small positive real number).
Then Eq.~(\ref{3.2}) becomes
\begin{eqnarray}
\phi (\xi, \theta)=-\frac{J_0 (\xi)}{1-\Delta}+\psi (\xi, \theta)+\psi
(\xi, -\theta), \label{3.31}
\end{eqnarray}
where $\psi (\xi,\theta)$ is defined by
\begin{eqnarray}
\psi (\xi,\theta):=\frac{1}{1-\Delta} \frac{1}{2i\pi} \int_C dt~
\frac{e^{\xi \sinh
t}}{1-e^{-\frac{t-t_\ast}{1-\Delta}}},
\label{3.3}
\end{eqnarray}
with
\begin{equation}
t_{\ast}:=-\epsilon+i\frac{\pi}{2}-i\frac{{\alpha}(\theta)}{\sqrt{\xi}},
\label{3.3a}
\end{equation}
and ${\alpha}(\theta):= (\pi \Delta -(1-\Delta) \theta) \sqrt{\xi}$.
Now we find that
all we need to evaluate
is $\psi (\xi,\theta)$ in order
to obtain an approximate formula for $\phi (\xi,\theta)$.
This integral will not be expressed by simple known functions
in general, but the integration can be performed by using
the method of steepest descent in the limit $\xi\gg 1$.
The integrand of Eq.~(\ref{3.3}) has two saddle points
located at $t=t_+\approx i\pi/2$ and
$t=t_-\approx -i\pi/2$ in the vicinity of
the integration contour $C$.
We should also notice that
the integrand has a pole at $t=t_{\ast}$, which is also
infinitesimally close to the contour of the integral $C$.
This pole is located near the saddle point at $t=t_+$
as far as $\Delta$ and $\theta$ are small.
Hence the treatment of the saddle point at $t=t_+$ is
much more delicate than that of the saddle point at $t=t_-$.
We only discuss the saddle point at $t=t_+$, then the case
at $t=t_-$ is a trivial extension.
When $\Re (t)>0, \Im (t)<\frac{\pi}{2}$ or
$\Re (t)<0, \Im (t)>\frac{\pi}{2}$,
which corresponds to shaded regions in Fig.~\ref{contour},
$e^{\xi \sinh t}$ diverges in the limit $\xi \to \infty$.
If ${\alpha}(\theta)>0$,
the pole at $t=t_{\ast}$ is in the bottom-left unshaded region.
In this case we cannot deform the contour to the
direction of the steepest descent at $t=t_+$
without crossing the pole at $t=t_{\ast}$.
The deformed contour which is convenient to apply the method
of the steepest descent is such that is shown
as ${\tilde C}$ in Fig.~\ref{contour}.
When we deform the integration contour from $C$ to $\tilde C$,
there arises an additional contribution corresponding to the residue at
$t=t_\ast$ when ${\alpha}(\theta)>0$.
On the other hand,
if ${\alpha}(\theta) <0$,
the pole is in the top-left shaded region.
In this case, we can deform the contour of the integral to
the direction of the steepest descent
without crossing the pole $t_{\ast}$.
Hence no additional term arises.
From these observations,
we find that it is necessary to evaluate the integral~(\ref{3.3})
separately depending on the signature of ${\alpha}(\theta)$.
Though the calculation itself can be done straightforwardly,
it is somewhat complicated because
the saddle point and the pole are close to each other.
When the pole is located inside the region around the saddle
point that contributes dominantly to the integral, a
simple Gaussian integral does not give a good approximation.
Detailed discussions about this point are given in Appendix A.
Here we only quote the final result which keeps terms up to
$O(1/\sqrt{\xi})$,
\begin{eqnarray}
\psi(\xi,\theta) &\approx &\exp \left( i\xi \cos \frac{
{\alpha}(\theta)}{\sqrt{\xi}} \right) \Theta
({\alpha}(\theta))\cr
&& -\frac{{\sigma}(\theta)}{\sqrt{\pi}}
\exp \left( i\xi+\frac{i {\alpha}(\theta)}{
(1-\Delta) \sqrt{\xi}}-\frac{i}{2}
{\tilde\alpha}^2(\theta) \right)\cr
&&\quad\times {\rm Erfc}
\left( {\sigma}(\theta) \frac{{\tilde\alpha}(\theta) }
{\sqrt{2} } e^{-i
\pi/4} \right) \cr
&& + \frac{1}{\sqrt{2\pi \xi}} \frac{1}{1-\Delta}
\frac{ e^{-i \xi +i
\pi /4} }{ 1- e^{
\frac{i}{1-\Delta}
(\pi-{\alpha}(\theta)/\sqrt{\xi})} },
\label{3.3bb}
\end{eqnarray}
where
\begin{eqnarray}
{\tilde\alpha}(\theta) &:=& i(1-\Delta)
\sqrt{\xi} \left[1-\exp \left(i \frac{ {\alpha}(\theta) }
{(1-\Delta) \sqrt{\xi}} \right) \right],
\label{3.3b1}\\
\sigma(\theta) &:=& {\rm sign}( {\alpha}(\theta) ),
\label{3.3c}
\end{eqnarray}
and
\begin{equation}
{\rm Erfc}(x):= \int_x^{+\infty} dt\, e^{-t^2}.
\label{3.3d}
\end{equation}
We are mostly interested in the cases with $\Delta, \theta\ll 1$.
Then,
we have $\alpha(\theta)/\sqrt{\xi}\ll 1$,
and therefore $\tilde\alpha(\theta)$ reduces to $\alpha(\theta)$.
The second term in Eq.~(\ref{3.3bb}) is the contribution from the
integral around the saddle point at $t=t_+$ along the contour $\tilde
C$. This term is not manifestly suppressed by $1/\sqrt{\xi}$.
As long as $\alpha(\theta)$ is fixed, this term does not vanish
in the limit $\xi\to \infty$.
Of course, if we fix $\Delta$ and $\theta$ first, and take the
limit $\xi\to \infty$, the argument of the error function goes to
$+\infty$ and the function itself vanishes.
However,
$\alpha(\theta)$ vanishes at $\theta=\pi \Delta/(1-\Delta)$.
Hence even for a very large value of $\xi$ there is always
a region of $\theta$ in which this second term cannot be neglected.
However, for $\theta$ in such a region, $\alpha(\theta)$ cannot
be very large.
Therefore,
we can safely drop the second term,
$i{\alpha}(\theta)/[(1-\Delta)\sqrt{\xi}]$, in the exponent.
On the other hand, the last term in Eq.~(\ref{3.3bb}),
which is the contribution from the saddle point at $t=t_-$,
is always suppressed by $1/\sqrt{\xi}$. Hence, this term does
not give any significant contribution for $\xi\gg 1$.
The first term in Eq.~(\ref{3.31})
can be dropped in the same manner for $\xi\gg 1$.
Keeping only the terms which possibly remain
in the limit $\xi\to \infty$, we finally obtain
\begin{eqnarray}
\phi(\xi,\theta) &\approx &\exp \left( i\xi \cos \frac{
{\alpha}(\theta)}{\sqrt{\xi}} \right) \Theta
({\alpha}(\theta))\cr
&& -\frac{{\sigma}(\theta)}{\sqrt{\pi}}
e^{i\xi-\frac{i}{2}
{\alpha}^2(\theta)}
{\rm Erfc}
\left( \frac{ |{\alpha}(\theta)| }
{\sqrt{2} } e^{-i
\pi/4} \right) \cr
&& + (\theta\to -\theta).
\label{3.3b}
\end{eqnarray}
For illustrative purposes,
we compare the estimate given in Eq.~(\ref{3.3bb})
with the exact solution Eq.~(\ref{1.9})
in Fig.~\ref{approx}. They agree quite well for $\xi\gg 1$.
The deficit angle and the observer's direction
are chosen to be $\Delta =0.0025$ and $\theta=0$, respectively.
\begin{figure}[h]
\begin{center}
\includegraphics[width=6.cm,clip]{fig1.eps}
\caption{The black, dotted, and dashed lines are the integration contours
$C$, ${\tilde C}$, and $C_H$,
respectively.
$\pm i \frac{\pi}{2}$ are the saddle points of $e^{\xi \sinh t}$.
}
\label{contour}
\end{center}
\end{figure}
\begin{figure}[h]
\includegraphics[width=8.cm,clip]{fig2.eps}
\caption{
Configuration of the source, the cosmic string and the observer.
{\tt A} and {\tt B} are the positions of the source.
{\tt O} and {\tt P} are the positions of the cosmic string and the observer,
respectively.
In this figure,
the wedge {\tt AOB} is removed, and thus {\tt A} and {\tt B}
must be identified.
}
\label{configuration}
\end{figure}
\begin{figure*}[t]
\includegraphics[width=17.cm,clip]{fig3.eps}
\caption{
Comparison between the exact solution Eq.~(\ref{1.9}) and the
approximate one Eq.~(\ref{3.3bb}).
$\pi \Delta$ is $0.0025$.
The black and dotted lines correspond to the exact solution and
the approximate one,
respectively.
We see that except for small $\xi$ the dotted line overlaps the
black one.
In the right panel,
the relative error is about $10^{-3}$.
}
\label{approx}
\end{figure*}
\subsection{Geometrical optics limit}
The geometrical optics limit corresponds to the limit
$\xi \to \infty$ with $\Delta$ and $\theta$ fixed.
In this limit $|{\alpha}(\pm\theta)|$ also goes to infinity,
and hence the error function terms
in Eq.~(\ref{3.3b}) vanish.
Hence the waveform in the geometrical optics limit,
which we denote as ${\phi}_{\rm go}$, becomes
\begin{eqnarray}
{\phi}_{\rm go}(\xi, \theta) & = &
e^{i\xi \cos (\pi\Delta +\varphi) } \Theta (\pi\Delta +\varphi)
\cr
&& +
e^{i\xi \cos (\pi\Delta -\varphi) }
\Theta (\pi\Delta-\varphi),
\label{go1}
\end{eqnarray}
where $\varphi$ is defined by Eq.~(\ref{1.25}).
Since $\phi$ and hence ${\phi}_{\rm go}$ are even in
$\theta$,
it is sufficient to consider the case with $\theta>0$.
In Fig.~\ref{configuration}, the configuration of
the source, the lens and the observer is drawn in the
coordinates in which the deficit angle $2\pi\Delta$ is manifest, i.e.,
the wedge {\tt AOB} is removed from the spacetime.
Both points {\tt A} and {\tt B} indicate the location of the source.
The lines {\tt OA} and {\tt OB} are to be identified. The angle made by
these two lines is the deficit angle.
The locations of the string and the observer are represented by
{\tt O} and {\tt P}, respectively.
In our current setup
the distance between {\tt O} and {\tt A}
($=r_o$) is taken to be infinite.
When $\varphi > \pi \Delta$, only the source {\tt A} can be
seen from the observer.
This corresponds to the fact that
only the first term remains for $\varphi>\pi \Delta$
in Eq.~(\ref{go1}).
For $\varphi > \pi \Delta$, we have
\begin{equation}
{\phi}_{\rm go}(\xi,\theta)=e^{i\xi \cos (\varphi+\pi\Delta)}.
\label{go2}
\end{equation}
This is a plane wave whose traveling direction is
$\varphi =-\pi \Delta$, which is the direction of
$\overrightarrow{\tt AP}$ in
Fig.~\ref{configuration} in the limit $r_o=|\overrightarrow{\tt AO}|
\to \infty$.
For $|\varphi|< \pi\Delta$,
${\phi}_{\rm go}$ is
\begin{equation}
{\phi}_{\rm go}(\xi,\theta)=e^{i\xi \cos (\varphi-\pi\Delta)}
+e^{i\xi \cos (\varphi+\pi \Delta)}. \label{go3}
\end{equation}
This is the superposition of two plane waves whose traveling
directions are different by the deficit angle $2\pi\Delta$.
Hence amplification of the images and interference
occur for $|\varphi|< \pi\Delta$ as expected.
As we shall explain below,
Eq.~(\ref{go1}) coincides with the one derived under
the geometrical optics.
In geometrical optics,
the waveform is given by \cite{T.Nakamura:1999}
\begin{equation}
\phi_{\rm go} =
\sum_{j}^{} {|u({\vec x_j})|}^{1/2}
\exp [i \omega T({\vec x_j}) -i \pi n_j], \label{go6}
\end{equation}
where $ {\vec x}$ represents a two-dimensional vector
on the lens plane
and $ T({\vec x}) $ represents the sum of the time of flight of the
light ray from the source to the point ${\vec x}$ on the lens plane
and that from the point ${\vec x}$ to the observer.
$ {\vec x_j} $ is a stationary point of $ T({\vec x}) $, and
$ n_j=0,1/2, 1 $ when $ {\vec x_j} $ is a minimum, saddle and
maximum point of $ T({\vec x}) $, respectively.
The amplitude ratio $ {|u({\vec x})|}^{1/2} $ is written as
\begin{equation}
u({\vec x}) = 1/ \det [ {\delta}_{a b}-
{\partial}_a {\partial}_b \psi ({\vec x}) ], \label{go7}
\end{equation}
where $ \psi ({\vec x}) $ in Eq.~(\ref{go7}) is the deflection
potential~\cite{Schneider} which is the integral of the gravitational
potential of the lens along the trajectory between the source and the
observer.
Eq.~(\ref{go6}) states that the waveform is
obtained by summing the amplitude ratios
$ {|u({\vec x_j})|}^{1/2} $ of
the images, each with the phase factor
$ e^{i \omega T({\vec x_j})-i\pi n_j} $.
If the lens is a straight string,
the spacetime is locally flat everywhere except right on
the string.
This means that the deflection potential $\psi({\vec x})$
is zero, and hence the amplitude ratio is unity for
all images~\cite{Schneider}.
Moreover, a trajectory along which the time of flight $T({\vec x})$
takes an extremal value is a geodesic in the conical
space,
and $T({\vec x})$ of any geodesic takes a minimum,
which means $n_j=0$.
There are two geodesics if the observer is in
the shaded region in Fig.~\ref{configuration}.
The time of flight along the trajectory {\tt AP} is
\begin{equation}
T_{\tt A} =\lim_{r_o\to\infty}
|\overrightarrow{\tt AP}|
\approx r_o+r\cos(\pi\Delta+\varphi),
\label{go8}
\end{equation}
where $r \equiv |\overrightarrow{\tt OP}|$.
The time of flight along the trajectory {\tt BP} is obtained by
just replacing $\varphi$ with $-\varphi$.
Hence, substituting (\ref{go8}) into (\ref{go6}),
we find that the waveform in the geometrical optics
is the same as Eq.~(\ref{go3}) except for an overall phase
$e^{ir_o\xi}$.
This factor has been already absorbed in the choice of the
normalization factor $B$ in our formula (\ref{1.9}).
We define the amplification factor
\begin{equation}
F(\xi, \theta)=\frac{ {\phi} (\xi, \theta)}
{ {\phi}_{\rm UL} (\xi, \theta)}, \label{go4}
\end{equation}
where $ \phi_{\rm UL} $ is the unlensed waveform.
Using Eq.~(\ref{go3}), the amplification
factor of ${\phi}_{\rm go}$ for $|\varphi|<\pi \Delta$ is given by
\begin{equation}
F_{{\rm go}}(\xi,\theta)
\approx 2 e^{-i \frac{\xi}{2} { (\pi \Delta)
}^2} \cos (\pi\Delta \xi \varphi), \label{go5}
\end{equation}
where we have assumed $\varphi$ and $\Delta$ are small and dropped
terms higher than quadratic order. It might be more suggestive
to rewrite the above formula into
\begin{equation}
\left|F_{{\rm go}}(\xi,\theta)\right|
\approx 2 \cos (\pi\Delta \omega y),
\end{equation}
where $y= r\sin\varphi$. The separation between successive fringes
when the observer is moved in the $y$-direction is
$\lambda/\pi\Delta$,
where $\lambda$ is the wavelength.
This oscillation is seen in the right panel of Fig.~\ref{approx}.
\subsection{Quasi-geometrical optics approximation}
In the previous subsection,
we have derived the waveform in the limit
$\xi, |{\alpha}(\pm\theta)| \to \infty$, which corresponds to the
geometrical optics approximation.
Here we expand the waveform~(\ref{3.3b}) to the lowest order
in $1/{\alpha}(\pm\theta)$.
This includes the leading order corrections to the geometrical
optics approximation due to the finite wavelength effects.
For the same reason as we explained in the previous subsection,
we assume that $\Delta$ and $\varphi$ are small.
Using the asymptotic formula for the error function Eq.~(\ref{ap.5a}),
the leading order correction due to the finite wavelength, which
we denote as $\delta{\phi}_{\rm qgo}$, is obtained as
\begin{eqnarray}
\delta{\phi}_{\rm qgo}(\xi, \theta)
&=&-\frac{ e^{i\xi+i\pi/4} }{ \sqrt{2\pi}}
\left({1\over \alpha(\theta)}+{1\over \alpha(-\theta)}
\right) \cr
&=&-\frac{ e^{i\xi+i\pi/4} }{ \sqrt{2\pi \xi} }
\frac{2\pi \Delta}{ (\pi \Delta)^2 -{\varphi}^2 }.
\label{qgo1}
\end{eqnarray}
As is expected, the correction blows up for
$|\varphi| \approx \pi\Delta$, where ${\alpha}(\theta)$ or
${\alpha}(-\theta)$ vanishes, irrespective of the value
of $\xi$.
In such cases, we have to
evaluate the error function directly,
going back to Eq.~(\ref{3.3b}).
The expression on the first line in Eq.~(\ref{qgo1}) manifestly
depends only on $\alpha(\pm \theta)$
aside from the common
phase factor $e^{i\xi}$. This feature remains true even if
we consider a small value of $\alpha(\pm \theta)$.
This can be seen by rewriting
Eq.~(\ref{3.3b}) as
\begin{eqnarray}
\phi(\xi,\theta) &\approx
&\frac{e^{i\xi-\frac{i}{2}{\alpha}^2(\theta)}}{\sqrt{\pi}}
{\rm Erfc} \left( \frac{ -{\alpha}(\theta)}{\sqrt{2i}} \right) + (\theta\to -\theta). \label{qgo1.5}
\end{eqnarray}
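This form can be cross-checked numerically. The sketch below (our own check, not part of the derivation) uses SciPy's complex-valued ${\rm erfc}$, related to the ${\rm Erfc}$ of Eq.~(\ref{3.3d}) by ${\rm Erfc}(z)=(\sqrt{\pi}/2)\,{\rm erfc}(z)$, to verify that for large $\alpha$ a single-image term of Eq.~(\ref{qgo1.5}) reduces to the geometrical-optics phase factor plus the $1/\alpha$ correction of Eq.~(\ref{qgo1}):

```python
import numpy as np
from scipy.special import erfc  # accepts complex arguments

def image_term(alpha):
    """Single-image contribution to Eq. (qgo1.5) with e^{i xi} stripped:
    (1/sqrt(pi)) * exp(-i alpha^2/2) * Erfc(-alpha/sqrt(2i))."""
    z = -alpha / np.sqrt(2j)                 # sqrt(2i) = 1 + i
    big_erfc = 0.5 * np.sqrt(np.pi) * erfc(z)
    return np.exp(-0.5j * alpha**2) / np.sqrt(np.pi) * big_erfc

alpha = 50.0
exact = image_term(alpha)
# geometrical optics term + leading 1/alpha correction, cf. Eq. (qgo1)
approx = np.exp(-0.5j * alpha**2) - np.exp(1j * np.pi / 4) / (np.sqrt(2 * np.pi) * alpha)
assert abs(exact - approx) < 1e-4
# the single image tends to unit amplitude in the geometrical optics limit
assert abs(abs(exact) - 1.0) < 0.01
```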
The common phase $e^{i\xi}$
does not affect the absolute magnitude of the wave.
Except for this unimportant overall phase,
the waveform is completely determined by $\alpha(\pm\theta)$.
The geometrical meaning of these parameters $\alpha(\pm\theta)$
is the ratio of two length scales defined on the lens plane.
To explain this,
let us take the picture that a wave is composed of a superposition
of waves which go through various points on the lens plane.
In the geometrical optics limit only the paths passing through
stationary points of $T({\vec x})$, which we call the image points,
contribute to the waveform.
The first length scale is
$r_s=|\alpha(\pm\theta)|/\sqrt{\xi}\times r$
which is defined as the separation between
an image point and the string on the lens plane.
In this picture we expect that
paths whose path length is longer or shorter than the
value at an image point by about one wavelength
will not give a significant contribution because of
the phase cancellation. Namely, only the paths which pass
within a certain radius from an image point need
to be taken into account.
Then such a radius will be given by
$r_F=\sqrt{\lambda r}$, which we call the Fresnel radius.
Namely, a wave with a finite wavelength can
be recognized as an extended beam whose transverse size is
given by $r_F$.
The ratio of these two scales gives $\alpha(\pm\theta)$:
\begin{eqnarray*}
|\alpha(\pm\theta)|={\sqrt{2\pi}r_s\over r_F}.
\end{eqnarray*}
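Indeed, this relation follows directly from the definitions; with $r_s=(|\alpha(\pm\theta)|/\sqrt{\xi})\,r$, $r_F=\sqrt{\lambda r}$ and, for the setup of this section, $\xi=\omega r=2\pi r/\lambda$, one finds in one line
\begin{eqnarray*}
\sqrt{2\pi}\,{r_s\over r_F}
=\sqrt{2\pi}\,{|\alpha(\pm\theta)|\,r\over \sqrt{\xi}\sqrt{\lambda r}}
=\sqrt{2\pi}\,{|\alpha(\pm\theta)|\,\sqrt{r}\over \sqrt{2\pi r/\lambda}\,\sqrt{\lambda}}
=|\alpha(\pm\theta)|.
\end{eqnarray*}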
When $r_s\gg r_F$, i.e., $\alpha(\pm\theta)\gg 1$, the
beam width is smaller than the separation.
In this case the beam image is not shadowed by the string,
and therefore the geometrical optics becomes a good approximation.
When $r_s\lesssim r_F$, i.e.,
\begin{equation}
\alpha(\pm\theta)\lesssim 1, \label{qgo4}
\end{equation}
we cannot see the whole image of the beam, which is
truncated at the location of the string.
Then the diffraction effect becomes important.
The ratio of the beam image eclipsed by the string
determines the phase shift and the amplification of the wave
coming from each image.
If we substitute $|\varphi|\approx 0$ as a typical value,
we obtain a rough criterion that the diffraction effect becomes
important when
\begin{eqnarray}
\lambda \gtrsim 2\pi {(\pi \Delta)}^2 r,
\end{eqnarray}
or $\xi \lesssim {(\pi \Delta)}^{-2}$ in terms of $\xi$.
The same logic applies for a usual compact lens object.
In this case the Fresnel radius does not change but the
typical separation of the image from the lens is given by
the Einstein radius $r_E\approx \sqrt{4GMr}$, where $M$
is the mass of the lens. Then the ratio between $r_E$ and
$r_F$ is given by $r_E/r_F=\sqrt{GM/\lambda}$, which leads
to the usual criterion that the diffraction effect becomes
important when
$\lambda\gtrsim GM$\cite{Ohanian,Bliokh,Bontz,Thorne,Deguchi}.
From the above formula (\ref{qgo1}),
we can read off that the leading order correction scales as
$\sqrt{\lambda/r}$.
This dependence on $\lambda$ and $r$ differs
from the case in which the lens is a normal
localized object, where the leading order correction due to the finite
wavelength is ${\cal O}(\lambda /M)$~\cite{Takahashi:2004mc}.
The condition for the diffraction effect
to be important~(\ref{qgo4}) can also be derived directly from
Eq.~(\ref{qgo1}).
In order for the current expansion to be a good approximation,
$\delta{\phi}_{\rm qgo}$ must be smaller than ${\phi}_{\rm go}$,
which requires $1/|\alpha(\pm\theta)| \ll 1$. The expansion
breaks down when this condition is violated, which is
identical to (\ref{qgo4}).
We plot the absolute value of the amplification factor under
the quasi-geometrical optics approximation as the dashed line
in Fig.~\ref{quasi1}.
We find that the quasi-geometrical optics approximation is a good
approximation for $\xi \gtrsim \Delta^{-2}$.
For $\xi \lesssim \Delta^{-2}$,
the quasi-geometrical optics approximation gives a
larger amplification factor than the exact one.
In the quasi-geometrical optics approximation,
we find from Eqs.~(\ref{go3}) and (\ref{qgo1})
the absolute value of the amplification factor for $\varphi=0$ is
\begin{equation}
|F(\xi,0)| \approx 2 \left[ 1-\sqrt{ \frac{2}{\pi \xi {(\pi\Delta)}^2}
} \cos \left( \frac{\xi}{2} {(\pi\Delta)}^2+\frac{\pi}{4} \right)
\right]^{1/2}. \label{qgo5}
\end{equation}
From this expression,
we find that the position of the first peak of the amplification
factor lies at $\xi \approx 4.25 \times {(\pi\Delta)}^{-2}$,
which can also be verified from Fig.~\ref{quasi1}.
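This peak position can be checked with a short numerical sketch (the variable $u=\xi(\pi\Delta)^2$ and the search window are our choices): maximizing $|F|$ in Eq.~(\ref{qgo5}) amounts to minimizing the cosine term.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# |F(xi,0)|^2 / 4 = 1 - sqrt(2/(pi*u)) * cos(u/2 + pi/4),  u = xi*(pi*Delta)^2.
# |F| is maximal where the cosine term below is most negative.
def cos_term(u):
    return np.sqrt(2.0 / (np.pi * u)) * np.cos(0.5 * u + 0.25 * np.pi)

res = minimize_scalar(cos_term, bounds=(3.0, 6.0), method="bounded")
u_peak = res.x
assert abs(u_peak - 4.25) < 0.05   # first peak at xi ~ 4.25 * (pi*Delta)^{-2}
```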
For $\xi \lesssim \Delta^{-2}$ the present approximation
is not valid, but we know that the amplification factor should
converge to unity in the limit $\xi\to 0$, where $r_F$ is
much larger than $r_s$.
\begin{figure}[t]
\includegraphics[width=7.cm,clip]{fig4.eps}
\caption{
The absolute value of the amplification factor as a function of $\xi$
for $\theta=0$.
The black and dashed lines correspond to Eq.~(\ref{3.3b}) and the
quasi-geometrical optics approximation,
respectively.
The string tension is chosen to be $G\mu=10^{-2}$.
}
\label{quasi1}
\end{figure}
We show in Fig.~\ref{quasi2} the absolute value of the
amplification factor as a function of $\varphi$ for
four cases of $\xi$ around $\Delta^{-2}$.
Top left, top right, bottom left and bottom right panels
correspond to $\xi {(\pi \Delta)}^2=0.5,1,2$ and $4$,
respectively.
The black curves are plots of Eq.~(\ref{3.3b}) and the
dotted ones of the quasi-geometrical optics
approximation.
As is expected, the error
of the quasi-geometrical optics approximation becomes
very large near $\varphi=\pi\Delta$,
where $\alpha(\theta)$ vanishes.
As the value of $\xi$ increases, the angular region in which the
quasi-geometrical optics breaks down is reduced.
Interestingly,
the absolute value of the amplification factor deviates from unity
even for $ \varphi \gtrsim \pi \Delta$ which is not observed
in the geometrical optics limit.
This is a consequence of diffraction of waves:
the amplitude of oscillation of the interference pattern
becomes smaller as $\theta$ becomes larger,
which is a typical diffraction pattern formed
when a wave passes through a single slit.
The broadening of the interference pattern due to the diffraction
effect means that the observers even in the region
$|\varphi|> \pi \Delta$ can detect signatures of the
presence of a cosmic string.
But the deviation of the amplification from unity outside the wedge
$\varphi >\pi \Delta$ is rather small except for the special case
$\xi {(\pi \Delta)}^2 \approx 1$:
for $\xi {(\pi \Delta)}^2\ll 1$ the magnification is inefficient and
for $\xi {(\pi \Delta)}^2\gg 1$ the magnification itself does not occur.
Hence the increase of the event rates of lensing by cosmic strings
compared with the estimate under the geometrical optics approximation
could be important only when the relation $\xi {(\pi \Delta)}^2 \approx 1$
is satisfied.
If we take $D=10^{28} {\rm cm}$ and $\omega=10^{-3} {\rm Hz}$, which is in
the frequency band of LISA (Laser Interferometer Space Antenna)~\cite{lisa},
we find that the typical value of $G\mu$ is $\approx 2\times 10^{-9}$.
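This number can be reproduced by a rough order-of-magnitude sketch; here we assume the relation $\Delta=4G\mu$ between the deficit parameter and the string tension (the deficit angle being $2\pi\Delta=8\pi G\mu$), which gives a few times $10^{-9}$:

```python
import math

# Rough check: the tension for which xi*(pi*Delta)^2 ~ 1 at LISA frequencies.
c = 3.0e10        # speed of light, cm/s
D = 1.0e28        # source-observer distance, cm
omega = 1.0e-3    # angular frequency, 1/s (LISA band)

xi = omega * D / c                # dimensionless frequency, ~3e14
pi_delta = xi ** -0.5             # set xi*(pi*Delta)^2 = 1
Gmu = pi_delta / (4.0 * math.pi)  # assuming Delta = 4*G*mu
assert 1e-9 < Gmu < 1e-8          # a few times 10^-9, as quoted in the text
```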
So far,
we have considered the stringy source rather than a point source.
Extension to a point source can be done in a similar manner to
the case of the stringy source and is treated in Appendix B.
The result is
\begin{eqnarray}
\phi(r,\theta,z)
\approx -{1
\over 4\pi D}
e^{i\omega D}
{\cal F}\left( \frac{\omega rr_o}{D},\theta \right), \label{qgo7}
\end{eqnarray}
where $D=\sqrt{(r+r_o)^2+z^2}$
is the distance between the source and the observer.
${\cal F}$,
which is defined by Eq.~(\ref{b1}),
is related to $\psi$ as
\begin{equation}
{\cal F}(x, \theta)= \big[ e^{-i\xi}\psi (\xi,\theta) \big] {\Big|}_{\xi \to x}+(\theta \to -\theta).
\end{equation}
Hence $\phi$ for the point source is similar to that
for the stringy source.
In particular,
assuming that $\Delta, \varphi \ll 1$, and keeping
terms which could remain for $\omega r, \omega r_o \gg 1$,
we have
\begin{eqnarray}
&{\cal F}&\!\! \left(\frac{\omega rr_o}{D},\theta \right) \approx e^{-\frac{i}{2}\frac{\omega rr_o}{D} {(\pi \Delta-\varphi)}^2 } \cr
&&\times \frac{1}{\sqrt{\pi}} {\rm Erfc} \left( \frac{\varphi-\pi \Delta }{\sqrt{2i}} \sqrt{ \frac{\omega rr_o}{D} } \right)
+(\theta \to -\theta). \label{qgo8}
\end{eqnarray}
\begin{figure*}[t]
\includegraphics[width=17.cm,clip]{fig5.eps}
\caption{
The black and dotted lines correspond to Eq.~(\ref{3.3b}) and the
quasi-geometrical optics approximation,
respectively.
The string tension is chosen to be $G\mu=10^{-3}$.
}
\label{quasi2}
\end{figure*}
\subsection{Simpler derivation of Eq.~(\ref{qgo1.5})}
We have derived an approximate waveform~(\ref{qgo1.5})
which is valid in the wave zone from the exact solution of
the wave equation Eq.~(\ref{1.9}).
Here we show that Eq.~(\ref{qgo1.5}) can be obtained by a
more intuitive and simpler method.
In the path integral formalism \cite{T.Nakamura:1999},
the waveform is given by the sum of the amplitude
$\exp \left( i\omega T(s) \right)$ for all possible
paths which connect the source and the observer.
Here $T(s)$ is the time of flight along the path $s$.
If the cosmic string resides between the source and the
observer,
the waveform will be given by the sum of two terms, one
of which is obtained by the path integral over the paths
which pass through the upper side of the string ($y>0$) in
Fig.~\ref{configuration},
and the other through the lower side of it ($y<0$).
The waveform coming from the former contribution
will be given by
\begin{equation}
A \int_{-\infty}^{\infty} dz_{\tt Q}
\int^{\infty}_0 dy_{\tt Q}\, e^{i\omega
(|\overrightarrow{\tt AQ}|+|\overrightarrow{\tt QP}|)}, \label{sd1}
\end{equation}
where ${\tt Q}=(0,y_{\tt Q},z_{\tt Q})$ is a point on the lens plane specified by
$x=0$.
One can determine the normalization constant $A$ by a little more
detailed analysis, but we do not pursue it further here.
By integrating Eq.~(\ref{sd1}),
we recover the first term in Eq.~(\ref{qgo1.5}).
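Schematically (a sketch, with the quadratic-expansion coefficient read off from the thin-lens geometry and $D\approx r+r_o$), around the image point $y_0$ the time of flight expands as $|\overrightarrow{\tt AQ}|+|\overrightarrow{\tt QP}|\approx D+\frac{D}{2rr_o}(y_{\tt Q}-y_0)^2$, the $z_{\tt Q}$ integral is Gaussian, and
\begin{eqnarray*}
\int_0^\infty dy_{\tt Q}\,
e^{\frac{i\omega D}{2rr_o}(y_{\tt Q}-y_0)^2}
=\sqrt{\frac{2irr_o}{\omega D}}\,
{\rm Erfc}\left(\frac{-\alpha(\theta)}{\sqrt{2i}}\right),
\qquad
\alpha(\theta)=y_0\sqrt{\frac{\omega D}{rr_o}},
\end{eqnarray*}
which has precisely the error-function structure of Eq.~(\ref{qgo1.5}).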
\subsection{Long wavelength limit}
For completeness, we consider the case in which
the wavelength is longer than the distance from the
string, i.e., $\xi \lesssim 1$.
In this limit,
the first few terms in Eq.~(\ref{1.9}) dominate,
and we find
\begin{equation}
\phi (\xi,\theta) \approx \frac{1}{1-\Delta}+i\frac{e^{-\Delta \log 2-i\pi \Delta/2}}{\Gamma (1+\Delta)} {\xi}^{1+\Delta} \cos \theta. \label{3.17}
\end{equation}
In particular,
for $\xi \to 0$ Eq.~(\ref{3.17}) becomes
${(1-\Delta)}^{-1}$ which is larger than unity.
This differs from the cases of gravitational lensing by a normal
compact object, where the amplification becomes unity in the long
wavelength limit.
The reason why the amplification differs from unity even in
the long wavelength limit is that the space has a deficit angle
and hence the structure at the spatial infinity
is different from the usual Euclidean space.
Waves with very long wavelengths do not feel the local
structure of the string.
However, uniform amplification of waves should occur
as a result of total energy flux conservation because
the area of the asymptotic region at a constant distance
from the source is reduced due to the deficit angle.
In this sense such modes feel the existence of a string.
\section{Connections to observations}
\subsection{Compact binary as a source}
In this section,
we consider compact binaries as sources of gravitational waves.
Gravitational waves from compact binaries are clean in the
sense that the waves are almost monochromatic:
the time scale for the frequency to change
is much longer than the orbital period of the binary
except for the phase just before plunge.
Hence interference between two waves coming from both sides
of the cosmic string could be observed by future
detectors.
Since each compact binary has a finite lifetime,
lensing events can be classified roughly into two cases.
If the difference between the times of flight along two geodesics
is larger than the lifetime of the binary,
we will observe two independent waves separately
at different times.
On the other hand,
if the time delay is shorter than the lifetime,
what we observe is the superposition of two waves.
The remaining lifetime of the binary $T_{\rm life}$
when the period of the gravitational waves
measured by an observer is $P_{GW}$ is estimated as
\begin{equation}
T_{\rm life} \approx 9.2 \times 10^{-4} \frac{1}{ {(1+z_S)}^{5/3} }
\frac{ {(1+\eta)}^{1/3} }{\eta} { \left( \frac{P_{GW}}{GM} \right)
}^{5/3} P_{GW}, \label{com1}
\end{equation}
where $\eta$ is the mass ratio of the binary ($\eta \le 1$),
$M$ is the mass of the more massive star in the binary
and $z_S$ is the source redshift.
The time delay $T_{\rm delay}$ is
\begin{equation}
T_{\rm delay} \approx 2 \frac{r r_o}{D}
\varphi \pi \Delta. \label{com2}
\end{equation}
Taking the typical values of parameters as
$r r_o/D =1 {\rm Gpc}$ and
$\varphi=\pi \Delta$,
the condition $T_{\rm life} \gg T_{\rm delay}$ gives the upper bound
on the mass $M$,
\begin{equation}
M \ll 8\times 10^3 \frac{ {(1+\eta)}^{1/5} }{ {\eta}^{3/5} } { \left( \frac{\pi \Delta}{10^{-5}} \right) }^{-6/5} { \left( \frac{P_{GW}}{10^3 {\rm sec}} \right) }^{8/5} M_{\odot}. \label{com3}
\end{equation}
The time scale for the orbital
frequency of the binary to change is the same
order as $T_{\rm life}$.
Hence the condition $T_{\rm life} \gg T_{\rm delay}$ implies
that the frequencies of two waves are almost the same.
The left and right panels in Fig.~\ref{lisa-decigo}, which
correspond to different frequencies of gravitational waves,
show the region where the condition Eq.~(\ref{com3}) is
satisfied for three different values of the string parameter $\Delta$.
The shaded areas represent the parameter regions beyond the
detectors' sensitivities.
In the left and right panels,
the threshold values for detection
in strain amplitude are assumed to be $10^{-20}{\rm Hz}^{-1/2}$ for LISA
and $10^{-23}{\rm Hz}^{-1/2}$ for DECIGO (DECihertz Interferometer Gravitational
wave Observatory)~\cite{skn01}/BBO (Big Bang Observer)~\cite{bbo},
respectively.
We find that both cases $T_{\rm life} \gg T_{\rm delay}$ and
$T_{\rm life} \ll T_{\rm delay}$ can occur both for LISA and
BBO/DECIGO.
\begin{figure*}[t]
\includegraphics[width=17.cm,clip]{fig6.eps}
\caption{
Plots of regions where Eq.~(\ref{com3}) is satisfied for three different
values of the string parameter.
Left and right panels are for $10^{-3} {\rm Hz}$ and $0.1 {\rm Hz}$,
which are the frequency bands in which LISA and DECIGO have their best
sensitivities.
Shaded regions are plotted under the assumptions that the signals satisfy
$S/N>10$, the redshift of the source is $1$, and the observation period
is $3$ years.
}
\label{lisa-decigo}
\end{figure*}
\subsection{Waveform}
We can easily extend our waveform~(\ref{1.9}) to the case that
the frequency of the source changes in time.
Let us write the source as $\frac{1}{1-\Delta} S(t) \delta(r-r_o) \delta
(\theta-\pi) \delta (z)$.
The Fourier transformation of $S(t)$ is defined by
\begin{equation}
S(t) =\int_{-\infty}^{\infty} d\omega ~e^{-i \omega t}S_{\omega}. \label{com4}
\end{equation}
Denoting the solution $\phi(t,\vec{x})$ for a monochromatic source
obtained in the previous sections by ${\phi}_{\omega} (\vec{x})$,
$\phi$ can be written as
\begin{equation}
\phi (t, \vec{x})= \int_0^{\infty} d\omega ~e^{-i\omega t} S_{\omega} {\phi}_{\omega}(\vec{x})+c.c.,
\end{equation}
where we assumed that $S(t)$ is real.
Substituting Eq.~(\ref{qgo7}) into the above expression,
we have
\begin{eqnarray}
\phi (t,\vec{x}) \approx -\frac{1}{4\pi D} \int_0^{\infty} d\omega ~e^{-i\omega (t-D)} {\cal F} \left( \frac{\omega rr_o}{D},\theta \right) S_\omega \nonumber \\
+c.c. \label{com4.1}
\end{eqnarray}
Eq.~(\ref{com4.1}) is a general formula which applies to
any time-dependent source.
Here we consider the special case in which $S(t)$ takes the
form
\begin{equation}
S(t)=\cos \left( \int_0^t dt' ~\Omega(t') \right),
\end{equation}
with
\begin{equation}\Omega(t)=\omega_o+{\dot \omega_o}t,
\end{equation}
where ${\dot \omega_o}/{\omega_o}^2 \ll 1$ and $\omega_o >0$
are assumed. This represents a quasi-monochromatic source with
its frequency slowly changing.
Then $S_\omega$ is
\begin{equation}
S_\omega =\frac{1}{\sqrt{2\pi {\dot \omega_o}} } \left( e^{-i \frac{ {(\omega+\omega_o)}^2 }{2{\dot \omega_o}} +i\pi /4}+e^{i \frac{ {(\omega-\omega_o)}^2 }{2{\dot \omega_o}} -i\pi /4} \right). \label{com4.3}
\end{equation}
Substituting Eqs.~(\ref{qgo8}) and (\ref{com4.3})
into Eq.~(\ref{com4.1}),
and using the method of the steepest descent, we have
\begin{eqnarray}
\phi (t, \vec{x}) &\approx& \frac{1}{4\pi\sqrt{\pi} D}
{\rm Erfc} \left( \frac{ \varphi-\pi \Delta}{\sqrt{2i}} \sqrt{
\frac{\Omega ( T(\varphi) ) rr_o}{D} } \right) \cr
&&
\times e^{-iT(\varphi)
(\omega_o+\frac{1}{2} {\dot \omega_o}T(\varphi))}
+c.c.\cr
&&\qquad
+(\varphi \to -\varphi), \label{com4.4}
\end{eqnarray}
with
\begin{eqnarray}
T(\varphi) = t-D+\frac{rr_o}{2D} {(\pi \Delta-\varphi)}^2. \label{com4.5}
\end{eqnarray}
This represents a superposition of two waves coming from both
sides of the string whose arrival times differ by
$|T(\varphi)-T(-\varphi)| =\frac{2rr_o}{D} \pi \Delta |\varphi|$.
In the following subsections,
we study the waveforms observed in the two
cases
$T_{\rm life}\gg T_{\rm delay}$ and $T_{\rm life}\ll T_{\rm delay}$.
\subsubsection{$T_{\rm life} \gg T_{\rm delay}$}
As we have explained in the preceding subsection,
what we observe is a superposition of two waves
in this case.
Because the relative phase difference of these waves
slowly increases or decreases in time due to the
frequency change of the binary source and the
optical path difference between two geodesics,
we will observe a beat if the
integrated relative phase difference over the observation
time is larger than ${\cal O}(1)$.
The condition that the beat is observed can be derived
as follows.
If we denote the total observation period by $T_{\rm obs}$,
then from Eq.~(\ref{com4.4}) the integrated relative phase difference
is $2\pi \Delta \varphi D {\dot \omega_o} T_{\rm obs}$,
where both $r$ and $r_o$ are assumed to be $O(D)$.
Hence we can observe the beat if
\begin{equation}
T_{\rm obs} \gtrsim \frac{1}{2\pi \Delta \varphi
D {\dot \omega_o}}. \label{com10}
\end{equation}
Because $T_{\rm life}$ is roughly the same as the time
scale for the frequency of the binary to change, i.e.
$T_{\rm life} \sim {\omega_o}/{\dot \omega_o}$,
Eq.~(\ref{com10}) can be written as
\begin{equation}
T_{\rm obs} \gtrsim \frac{T_{\rm life}}
{2\pi \Delta \varphi D\omega_o}. \label{com12}
\end{equation}
If $T_{\rm obs}$ is fixed,
e.g. $T_{\rm obs}\sim 3 {\rm yr}$ for LISA,
Eq.~(\ref{com12}) is written as a lower bound on $M$.
For $T_{\rm obs} =3 {\rm yr}$
and $P_{GW}=10^3 {\rm sec}$,
Eq.~(\ref{com12}) becomes
\begin{equation}
M \gtrsim 2.6 \times \frac{ {(1+\eta)}^{1/5} }{ {\eta}^{3/5} } { \left( \frac{\pi \Delta}{10^{-5}} \right) }^{-6/5} {\left( \frac{P_{GW}}{10^3 {\rm sec}} \right) }^{13/5} M_{\odot}. \label{com13}
\end{equation}
We show in Fig.~\ref{lisa2} the region where Eq.~(\ref{com13})
is satisfied for LISA with $T_{\rm obs} =3 {\rm yr}$.
We find that if $G\mu \lesssim 2.8 \times 10^{-8}$ which is about
one order of magnitude below the current upper bound,
LISA will detect the beat of gravitational waves for all
observable ranges in $(\mu,M)$ space as long as
$T_{\rm life}\gg T_{\rm delay}$
\footnote{
Since the lensing probability is not expected to be high,
we need a large number of events to detect a lensing event.
In such a situation, what gravitational wave detectors can detect
is a superposition of various waves. Hence, the signal will almost
always have a beat even if we ignore the lensing effect.}.
\begin{figure}[t]
\includegraphics[width=8cm,clip]{fig7.eps}
\caption{Plot of the region where Eq.~(\ref{com13}) is satisfied.
The frequency of the gravitational waves is assumed to be
$10^{-3} {\rm Hz}$.}
\label{lisa2}
\end{figure}
\subsubsection{$T_{\rm life} \ll T_{\rm delay}$}
If $T_{\rm delay} \gg T_{\rm life}$,
we observe the waveform of either the first term or the second one in
Eq.~(\ref{com4.4}) at a given time.
We show in Fig.~\ref{oneside} the amplification of the wave corresponding
to the first term in Eq.~(\ref{com4.4}) as a function of
$\varphi-\pi\Delta$
normalized by $1/\sqrt{\omega rr_o/D}$,
which is nothing
but $-\alpha(\theta)$ in the case discussed
in Sec.~\ref{sec:behaviorsofsolution}.
We find that the amplification
approaches zero more slowly for
$\varphi -\pi \Delta>0$ and oscillates around
unity for $\varphi -\pi\Delta <0$. The angular size in which
non-trivial oscillations due to
the diffraction effect
can be observed is given by $1/\sqrt{\omega rr_o/D}$.
Since
$T_{\rm delay}\approx (rr_o/D) \varphi\pi\Delta> T_{\rm life}\gg
\omega^{-1}$ in the present case, we have $(\omega rr_o/D)
(\pi\Delta)^2\agt (\omega rr_o/D)\pi\Delta\varphi\gg 1$.
Therefore this angular size of oscillation is much smaller
than $\pi\Delta$. Hence it will be very difficult to detect
a lensing event in which this diffraction effect is relevant.
\begin{figure*}[t]
\includegraphics[width=17.cm,clip]{fig8.eps}
\caption{
The absolute value of the amplification factor for
$T_{\rm life} \ll T_{\rm delay}$ as a function of $\varphi$.
Left and right panels correspond to
$\xi=0.2 {(\pi \Delta)}^{-2}$ and ${(\pi \Delta)}^{-2}$,
respectively.
}
\label{oneside}
\end{figure*}
\subsection{Estimation of the event rate}
In this section,
we estimate the detection rate of the gravitational
lensing caused by cosmic strings for planned gravitational
wave detectors such as LISA, DECIGO and BBO.
It is well known that string network obeys the scaling solution
where the appearance of the string network at any time looks
alike if it is scaled by the horizon size.
A few dozen long strings stretch across each horizon volume,
together with a large number of string
loops~\cite{Albrecht:1989mk,Bennett:1989yp,Allen:1990tv}.
Since the horizon scale increases in comoving coordinates as
time goes on, the number of long strings would increase if there were
no interaction between them. However, since strings typically
move at relativistic speeds, they frequently intersect with
each other. As a result, reconnection between strings occurs,
reducing the number of long strings which extend over
the horizon scale. During the process of
reduction of the number of long strings
a large number of string loops are formed, but
they shrink and decay via gravitational radiation.
Due to the balance of these two effects,
the number of long strings in a horizon volume
remains almost constant in time.
The reconnection probability $p$ is essentially $1$
for gauge theory solitons~\cite{Matzner}
because reconnection allows the flux inside the string to
take an energetically favorable shortcut.
For F-strings, reconnection is a quantum process and
its probability is roughly estimated as
$ p\sim g^2_s $, where $ g_s $ is the string coupling;
it is predicted in \cite{Jackson:2004zg} that
\begin{equation}
10^{-3} \lesssim p \lesssim 1. \label{es1}
\end{equation}
For D-strings, the reconnection probability might be
$ 0.1 \lesssim p \lesssim 1 $ \cite{Jackson:2004zg}.
If the reconnection probability is less than $1$,
the number of long strings is expected to
be $ p^{-1} $ times larger than that in the case with $p=1$.
Therefore it is expected that in the context of cosmic strings
motivated by superstring theory
the number of long strings in a horizon volume
can be $ 10^3 $ or more.
To estimate the event rate for the gravitational lensing,
here we consider a compact binary
(such as binary neutron stars and/or black holes)
as a source of gravitational waves.
There are large uncertainties about the event rate of MBH (massive black
hole) merger detected by LISA or DECIGO/BBO.
Several authors \cite{Haehnelt:1994wt,Islam:2003nt,Ioka:2005pm}
employed a model in which MBH mergers are associated with the
mergers of host dark matter halos to estimate the event rate
of MBH-MBH mergers.
In this model,
the event rate is dominated by halos with the minimum mass
$M_{\rm min}$ above which halos host a central MBH, and
some scenarios predict that the event rate could reach
$\sim 10^4 $ events/yr.
For DECIGO/BBO, binary neutron star mergers will be observed
at a rate of $\sim 10^5$ events/yr.
The probability of lensing of a single source by an infinite
straight cosmic string, when both are at cosmological distances, is
\begin{equation}
P \simeq 3\times 10^{-6} \left( \frac{\pi \Delta}{10^{-5}} \right). \label{es3}
\end{equation}
Eq.~(\ref{es3}) is derived under the geometrical optics
approximation.
In section III,
we found that the signal of lensing by cosmic strings
(the interference pattern of gravitational waves at
detectors) extends over an angular scale
larger than the deficit angle $2\pi \Delta$
when the diffraction effect is marginally important.
This is well known for gravitational lensing by
ordinary stellar objects \cite{Takahashi:2003ix,Ruffa:1999}.
As we estimated in section III,
the critical distance $D_c$ below which the diffraction effect
becomes important is
\begin{equation}
D_c=50 \left( \frac{P_{GW}}{10^{3} {\rm sec}} \right) { \left( \frac{\pi
\Delta}{10^{-5}} \right) }^{-2} {\rm kpc}.
\label{es4}
\end{equation}
Therefore the probability of lensing by cosmic strings may
be enhanced by the diffraction effect
for $ \pi\Delta \approx 10^{-7} $ in the LISA band ($P_{GW}\approx 10^3~{\rm sec}$)
and for $ \pi\Delta \approx 10^{-8} $ in the DECIGO/BBO band
($P_{GW}\approx 10~{\rm sec}$).
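To make these parameter combinations concrete, Eq.~(\ref{es4}) can be evaluated directly. The short Python sketch below is ours (argument names are illustrative); it simply tabulates the quoted cases.

```python
def critical_distance_kpc(p_gw_sec, pi_delta):
    """Eq. (es4): D_c = 50 (P_GW / 10^3 sec) (pi*Delta / 10^-5)^(-2) kpc,
    the distance below which diffraction becomes important."""
    return 50.0 * (p_gw_sec / 1e3) * (pi_delta / 1e-5) ** (-2)

# LISA band, pi*Delta = 10^-7: D_c = 5 x 10^5 kpc ~ 500 Mpc.
d_lisa = critical_distance_kpc(1e3, 1e-7)
# DECIGO/BBO band, pi*Delta = 10^-8: likewise D_c ~ 5 x 10^5 kpc.
d_decigo = critical_distance_kpc(10.0, 1e-8)
```

In both cases $D_c$ reaches cosmological distances ($\sim 500$~Mpc), which is why the diffraction effect can enhance the lensing probability for these parameter choices.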
Assuming the prospective values of the parameters that determine
the rate of lensing events $\dot n$, we obtain
\begin{equation}
\dot{n} \sim 3 f { \left( \frac{p}{0.1} \right) }^{-1} \left( \frac{\pi
\Delta}{10^{-5}} \right) \left(
\frac{\dot{n}_S}{10^5 {\rm yr}^{-1}} \right) {\rm
yr}^{-1}, \label{es5}
\end{equation}
where $f$($>1$) denotes the numerical factor arising from
the enhancement of the lensing probability due to the diffraction effect,
and $\dot{n}_S=10^5~{\rm yr}^{-1}$ is roughly the upper bound on the total
event rate of neutron star mergers detectable by DECIGO/BBO.
If the event rate is even higher, the number of events
becomes comparable to or larger than the number of frequency bins.
Then we will not be able to distinguish individual events, and
the indistinguishable signals become confusion noise.
In the case of LISA, this bound on $\dot n_S$ is even lower.
Unfortunately,
a large number of lensing events by cosmic strings
can be expected only for marginally large $\pi\Delta(\approx G\mu/4)$
with a small reconnection probability $p$.
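Eq.~(\ref{es5}) is simple enough to evaluate directly. The sketch below is ours (argument names are illustrative); it shows how the rate scales with each parameter.

```python
def lensing_rate_per_yr(f_enh, p, pi_delta, ndot_source_per_yr):
    """Eq. (es5): lensing event rate in yr^-1. f_enh (> 1) is the
    diffraction enhancement factor and p the reconnection probability;
    the rate scales as 1/p through the number of long strings."""
    return (3.0 * f_enh / (p / 0.1)
            * (pi_delta / 1e-5)
            * (ndot_source_per_yr / 1e5))

# Fiducial values give ~3 events/yr; lowering the reconnection
# probability p raises the rate proportionally to 1/p.
rate = lensing_rate_per_yr(1.0, 0.1, 1e-5, 1e5)
```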
Finally we briefly comment on the validity of the assumption that
most cosmic strings can be treated as straight ones
in studying gravitational lensing by them.
In the geometrical optics approximation,
only light paths which satisfy Fermat's principle contribute to the
amplification factor.
If we take into account the finiteness of the wavelength,
the trajectories whose optical path differences are less than
a few times the wavelength will dominantly
contribute to the amplification factor.
In terms of the distance on the lens plane
($x=0$-plane in Fig.~\ref{configuration}),
the optical paths within $\alt\sqrt{\lambda D}$
from the intersection of the geodesic
will give a dominant part of the amplification factor.
In the standard literature,
the typical size of small-scale structure of a long string
is given by the
gravitational back-reaction scale $\sim 50 G\mu t$,
where $t$ is a cosmic time \cite{Vilenkin:2005jg}.
But this is not an established argument and some recent studies
suggest that the smallest size of the wiggles could be much smaller
than $50 G\mu t$ \cite{Siemens:2001dx,Siemens:2002dj}.
If we assume here that the smallest size of the wiggles is
$50 G\mu t$,
then the condition that the straight string approximation is good
is $\sqrt{\lambda D} \lesssim 50 G\mu t$.
Substituting appropriate values of the parameters
gives the condition,
\begin{eqnarray}
&&1\gtrsim \frac{ \sqrt{\lambda D}}{50 G\mu t} \nonumber \\
&&=8 \times 10^{-5} { \left( \frac{\pi \Delta}{10^{-5}} \right) }^{-1} { \left( \frac{\lambda}{10^{13} {\rm cm}} \right) }^{1/2} { \left( \frac{D}{10^{26} {\rm cm}} \right) }^{1/2}. \label{es6}
\end{eqnarray}
Hence approximating a cosmic string by a straight one is valid for a
wide range of possible values of the parameters.
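As a numerical check of Eq.~(\ref{es6}), the ratio can be evaluated across the parameter range considered here. The sketch below is ours (argument names are illustrative).

```python
def straightness_ratio(pi_delta, wavelength_cm, distance_cm):
    """Eq. (es6): sqrt(lambda D) / (50 G mu t) with the fiducial
    normalization 8 x 10^-5; the straight-string approximation
    is good while this ratio is well below unity."""
    return (8e-5 / (pi_delta / 1e-5)
            * (wavelength_cm / 1e13) ** 0.5
            * (distance_cm / 1e26) ** 0.5)

# Even for a small tension (pi*Delta = 10^-8) and a source at a
# hundred times the fiducial distance, the ratio stays below unity.
r = straightness_ratio(1e-8, 1e13, 1e28)
```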
\section{Summary}
We have constructed a solution of the Klein-Gordon equation for a
massless scalar field in the flat spacetime with a deficit
angle $2\pi\Delta\approx 8\pi G\mu$ caused by
an infinite straight cosmic string.
We showed analytically that the solution in the short wavelength
limit reduces to the geometrical optics limit.
We have also derived the correction
to the amplification factor obtained
in the geometrical optics approximation
due to the finite wavelength effect
and the expression in the long wavelength limit.
The waveform is characterized by a ratio of two different length scales.
One length scale $r_s$ is defined as the separation between
the image position on the lens plane in the geometrical optics
and the string.
We have two values of $r_s$, since there are two images
corresponding to the side of the string on which the ray travels.
(When the image cannot be seen directly, we assign a negative
number to $r_s$.)
The other length scale $r_F$, called the Fresnel radius, is
the geometric mean of the wavelength and the typical separation
among the source, the lens and the observer.
The waveform is characterized by the ratios between $r_s$ and
$r_F$.
If $r_F>r_s$,
the diffraction effect becomes important and the interference
patterns are formed.
Even when the image in the geometrical optics is not directly
seen by the observer, the interference patterns remain.
In contrast,
in the geometrical optics
magnification and interference occur
only when the observer can see both images, whose rays travel
on opposite sides of the string.
Namely, the angular range where lensing signals exist
is broadened by the diffraction effect.
This broadening may increase the lensing probability
by an order of magnitude compared with that
estimated using geometrical optics when the
distance to the source is around the critical distance
$D_c$ given in Eq.~(\ref{es4}).
We finally estimated the rate of lensing events
which can be detected by LISA and DECIGO/BBO
assuming BH-BH or NS-NS mergers as a source of gravitational waves.
For possible values of the parameters that determine the event
rate, such as the string reconnection probability, the string tension
and the event rate of unlensed mergers,
the lensing event rate could reach several per yr.
\acknowledgements
T.S. thanks Kunihito Ioka, Takashi Nakamura and Hiroyuki Tashiro
for useful comments.
This work is supported in part
by Grant-in-Aid for Scientific Research, Nos.
14047212 and 16740141,
and by that for the 21st Century COE
``Center for Diversity and Universality in Physics'' at
Kyoto University, both from the Ministry of
Education, Culture, Sports, Science and Technology of Japan.
In this paper we pursue a tight characterization of the sample complexity of
learning a classifier, under a particular data distribution, and using a
particular learning rule.
Most learning theory work focuses on providing sample-complexity upper bounds which hold for a large class of distributions. For instance,
standard distribution-free VC-dimension analysis shows that if one uses the Empirical Risk Minimization (ERM) learning rule, then the sample complexity of learning a
classifier from a hypothesis class with VC-dimension $d$ is at most $O\left(\frac{d}{\epsilon^2}\right)$, where $\epsilon$ is the maximal excess classification error \citep{VapnikCh71,AnthonyBa99}. Such upper bounds can be useful for understanding the positive aspects of a
learning rule. However, it is difficult to understand the deficiencies of a
learning rule, or to compare between different rules, based on upper bounds
alone. This is because it is possible, and is often the case, that the actual number of samples
required to get a low error, for a given data distribution using a given learning rule, is much lower than the sample-complexity upper bound. As a simple example, suppose that the support of a given distribution is restricted to a subset of the domain. If the VC-dimension of the hypothesis class, when restricted to this subset, is smaller than $d$, then learning with respect to this distribution will require fewer examples than the upper bound predicts.
Of course, some sample complexity upper bounds are known to be tight or to have
an almost-matching lower bound. For instance, the VC-dimension upper bound is tight \citep{VapnikCh74}. This means that there exists
\emph{some} data distribution in the class covered by the upper bound, for
which this bound cannot be improved. Such a tightness result shows that there cannot be a better upper bound
that holds for this entire class of distributions. But it does not imply that
the upper bound characterizes the true sample complexity for every
\emph{specific} distribution in the class.
The goal of this paper is to identify a simple quantity, which is a function of
the distribution, that {\em does} precisely characterize the sample complexity
of learning this distribution under a specific learning rule. We focus on the important hypothesis class of linear classifiers, and on the popular
rule of \emph{margin-error-minimization} (MEM). Under this learning rule, a learner
must always select a linear classifier that minimizes the margin-error on the input
sample.
\iffalse
We treat the case where the data is
represented as vectors in Euclidean space, and each data point is labeled as
either positive or negative. We consider the goal of learning a
linear classifier through the origin that correctly predicts the labels of data points. We obtain
a tight distribution-specific characterization of the sample complexity of
large-margin learning for a rich class of distributions.
\fi
The VC-dimension of the class of homogeneous linear classifiers in $\reals^d$ is $d$ \citep{Dudley78}. This implies a sample complexity upper bound of $O\left(\frac{d}{\epsilon^2}\right)$ using any MEM algorithm, where $\epsilon$ is the excess error relative
to the optimal margin error.\footnote{This upper bound can be derived analogously to the result for ERM algorithms with $\epsilon$ being the excess classification error. It can also be concluded from our analysis in \thmref{thm:upperbound} below.}
We also have
that the sample complexity of any MEM algorithm is at most
$O\big(\frac{B^2}{\gamma^2\epsilon^2}\big)$, where $B^2$ is the average squared norm of the data and $\gamma$ is the size of the margin \citep{BartlettMe02}.
Both of these upper bounds are tight. For instance, there exists a distribution with
an average squared norm of $B^2$ that requires as many as $C\cdot\frac{B^2}{\gamma^2\epsilon^2}$ examples to learn, for some universal constant $C$ \citep[see, e.g.,][]{AnthonyBa99}. However, the VC-dimension upper bound indicates, for instance, that if a distribution induces a large average norm but is supported by a low-dimensional sub-space, then the true number of examples required to reach a low error is much smaller. Thus, neither of
these upper bounds fully describes the sample complexity of MEM for a \emph{specific} distribution.
We obtain
a tight distribution-specific characterization of the sample complexity of
large-margin learning for a rich class of distributions.
We present a new quantity, termed the
\emph{\fullkgname}, and use it to provide a tighter distribution-dependent upper
bound, and a matching distribution-dependent lower bound for MEM. The upper bound is universal,
and the lower bound holds for a rich class of distributions with independent features.
The \fullkgname\ refines both the dimension and the average norm of the data distribution, and
can be easily calculated from the covariance matrix and the mean of the distribution.
We denote this quantity, for a margin of $\gamma$, by $k_\gamma$. Our
sample-complexity upper bound shows that
$\tilde{O}(\frac{k_\gamma}{\epsilon^2})$ examples suffice in order to learn any
distribution with a \fullkgname\ of $k_\gamma$ using a MEM algorithm with margin $\gamma$.
We further show that for every distribution in a rich
family of `light tailed' distributions---specifically, product distributions of
sub-Gaussian random variables---the number of samples required for learning by
minimizing the margin error is at least $\Omega(k_{\gamma})$.
Denote by $m(\epsilon,\gamma,D)$ the number of examples required to achieve an excess error of no more than $\epsilon$ relative to the best possible $\gamma$-margin error for a specific distribution $D$, using a MEM algorithm.
Our main result shows the following matching distribution-specific upper and lower bounds on the sample complexity of MEM:
\begin{equation}\label{eq:mainresult}
\Omega(k_\gamma(D)) \leq m(\epsilon,\gamma,D) \leq
\tilde{O}\left(\frac{k_{\gamma}(D)}{\epsilon^2}\right).
\end{equation}
Our tight characterization, and in particular the distribution-specific lower
bound on the sample complexity that we establish, can be used to compare
large-margin ($L_2$ regularized) learning to other learning rules. We provide
two such examples: we use our lower bound to rigorously establish a sample
complexity gap between $L_1$ and $L_2$ regularization previously studied in
\cite{Ng04}, and to show a large gap between discriminative and generative
learning on a Gaussian-mixture distribution. The tight bounds can also be used for active learning algorithms in which sample-complexity bounds are used to decide on the next label to query.
In this paper we focus only on large margin classification. But
in order to obtain the distribution-specific lower bound, we develop
new tools that we believe can be useful for obtaining lower
bounds also for other learning rules. We provide several new results which we use to derive our main results. These include:
\begin{itemize}
\item Linking the fat-shattering of a sample with non-negligible probability to a difficulty of learning using MEM.
\item Showing that for a convex hypothesis class, fat-shattering is equivalent to shattering with exact margins.
\item Linking the fat-shattering of a set of vectors with the eigenvalues of the Gram matrix of the vectors.
\item Providing a new lower bound for the smallest eigenvalue of a random Gram matrix generated by sub-Gaussian variables. This bound extends previous results in analysis of random matrices.
\end{itemize}
\subsection{Paper Structure} We discuss related work on sample-complexity upper bounds in \secref{sec:related}. We present the problem setting and notation in
\secref{sec:definitions}, and provide some necessary preliminaries in \secref{sec:prelim}. We then introduce the \fullkgname\ in
\secref{sec:marginadapted}. The sample-complexity upper bound is proved in
\secref{sec:upper}. We prove the lower bound in \secref{sec:lower}. In
\secref{sec:limitations} we show that any non-trivial sample-complexity lower
bound for more general distributions must employ properties other than the covariance matrix of the distribution. We summarize and discuss implications in \secref{sec:conclusions}. Proofs omitted from the text are provided in \appref{app:proofs}.
\section{Related Work} \label{sec:related}
As mentioned above, most work on ``sample complexity lower bounds'' is directed at proving
that under some set of assumptions, there exists a data distribution
for which one needs at least a certain number of examples to learn
with required error and confidence
\citep[for instance][]{AntosLu98,EhrenfeuchtHaKeVa88,GentileHe98}. This type of a
lower bound does not, however, indicate much on the sample complexity
of other distributions under the same set of assumptions.
For distribution-specific lower bounds, the classical analysis of
\citet[Theorem 16.6]{Vapnik95} provides not only sufficient but
also necessary conditions for the learnability of a hypothesis class
with respect to a specific distribution. The essential condition is
that the metric entropy of the hypothesis class with respect to
the distribution be sub-linear in the limit of an infinite sample
size. In some sense, this criterion can be seen as providing a
``lower bound'' on learnability for a specific distribution. However,
we are interested in finite-sample convergence rates, and would like
those to depend on simple properties of the distribution. The
asymptotic arguments involved in Vapnik's general learnability claim
do not lend themselves easily to such analysis.
\citet{BenedekIt91} show that if the distribution is
known to the learner, a specific hypothesis class is learnable if and
only if there is a finite $\epsilon$-cover of this hypothesis class
with respect to the distribution.
\citet{Ben-DavidLuPa08} consider a similar setting, and prove
sample complexity lower bounds for learning with any data
distribution, for some binary hypothesis classes on the real line. \citet{VayatisAz99} provide distribution-specific sample complexity
upper bounds for hypothesis classes with a limited VC-dimension, as a
function of how balanced the hypotheses are with respect to the
considered distributions. These bounds are not tight for all distributions, thus they also do not fully characterize the distribution-specific sample complexity.
As can be seen in \eqref{eq:mainresult}, we do not tightly characterize the dependence of the sample complexity on the
desired error \citep[as done, for example, in ][]{SteinwartSc07}, thus our bounds are not tight for asymptotically small error
levels. Our results are most significant if the desired error level is a constant
well below chance but bounded away from zero. This is in contrast to classical statistical
asymptotics that are also typically tight, but are valid only for
very small $\epsilon$. As was recently shown by \citet{LiangSr10}, the sample
complexity for very small $\epsilon$ (in the classical
statistical asymptotic regime) depends on quantities that can be very different from those that control
the sample complexity for moderate error rates, which are more relevant for machine learning.
\section{Problem Setting and Definitions}\label{sec:definitions}
Consider a domain $\cX$, and let $D$ be a distribution over $\cX \times \{\pm 1\}$. We denote by $D_X$
the marginal distribution of $D$ on $\cX$.
The misclassification error of a classifier $h:\cX \rightarrow \reals$ on a distribution $D$ is
\[
\loss_0(h,D) \triangleq \P_{(X,Y)\sim D}[Y\cdot h(X) \leq 0].
\]
The margin error of a classifier $w$ with respect to a margin
$\gamma > 0$ on $D$ is
\[
\loss_\gamma(h,D) \triangleq \P_{(X,Y)\sim D}[Y\cdot h(X) \leq \gamma].
\]
For a given hypothesis class $\cH \subseteq \binlab^\cX$, the best achievable margin error on $D$ is
\[
\loss^*_\gamma(\cH, D) \triangleq \inf_{h \in \cH}\loss_\gamma(h,D).
\]
We usually write simply $\loss^*_\gamma(D)$ since $\cH$ is clear from context.
A labeled sample is a (multi-)set $S = \{(x_i,y_i)\}_{i=1}^m \subseteq \cX \times \binlab$. Given $S$, we denote the set of its examples without their labels by $S_X \triangleq \{x_1,\ldots,x_m\}$. We use $S$ also to refer to the uniform distribution over the elements in $S$. Thus the misclassification error of $h:\cX \rightarrow \binlab$ on $S$
is
\[
\loss(h,S) \triangleq \frac{1}{m}|\{i \mid y_i \cdot h(x_i) \leq 0\}|,
\]
and the $\gamma$-margin error on $S$ is \[
\loss_\gamma(h,S) \triangleq \frac{1}{m}|\{i \mid y_i \cdot h(x_i) \leq \gamma\}|.
\]
A learning algorithm is a function $\cA:\cup_{m=1}^\infty (\cX \times \binlab)^m \rightarrow \reals^\cX$, that receives a training set as input, and returns a function for classifying objects in $\cX$ into real values. The high-probability loss of an algorithm $\cA$ with respect to samples of size $m$, a distribution $D$ and a confidence parameter $\delta \in (0,1)$ is
\[
\loss(\cA,D,m,\delta) = \inf\{\epsilon \geq 0 \mid \P_{S \sim D^m}[\loss(\cA(S), D) \geq \epsilon] \leq \delta\}.
\]
In this work we investigate the sample complexity of learning using
margin-error minimization (MEM). The relevant class of algorithms is defined
as follows.
\begin{definition}
A \emph{margin-error minimization (MEM) algorithm} $\cA$ maps a margin parameter $\gamma > 0$ to a learning algorithm $\cA_\gamma$, such that
\[
\forall S \subseteq \cX \times \binlab, \quad\cA_\gamma(S) \in \argmin_{h \in \cH} \loss_\gamma(h,S).
\]
\end{definition}
The distribution-specific sample complexity for MEM algorithms is the sample size required to guarantee low excess error for the given distribution. Formally, we have the following definition.
\begin{definition}[Distribution-specific sample complexity]
Fix a hypothesis class $\cH \subseteq \binlab^\cX$.
For $\gamma >0$, $\epsilon,\delta \in
[0,1]$, and a distribution $D$, the \emph{distribution-specific sample complexity}, denoted by $m(\epsilon,\gamma,D,\delta)$, is the minimal sample size such that for any MEM
algorithm $\cA$, and for any $m \geq m(\epsilon,\gamma,D,\delta)$,
\[
\loss_0(\cA_\gamma,D,m,\delta) - \loss^*_\gamma(D) \leq \epsilon.
\]
\end{definition}
Note that we require that
\emph{all} possible MEM algorithms do well on the given distribution. This is because
we are interested in the MEM strategy in general, and thus we study the guarantees that can be provided regardless of any specific MEM implementation.
We sometimes omit $\delta$ and write simply $m(\epsilon, \gamma, D)$, to indicate that $\delta$ is assumed to be some fixed small constant.
In this work we focus on linear classifiers. For simplicity of notation, we assume a Euclidean space $\reals^d$ for some integer $d$, although the results can be easily extended to any separable Hilbert space.
For a real vector $x$, $\norm{x}$ stands for the Euclidean norm.
For a real matrix $\mt{X}$, $\norm{\mt{X}}$ stands for the Euclidean operator norm.
Denote the unit ball in $\reals^d$ by $\ball \triangleq \{w\in \reals^d \mid \norm{w} \leq 1\}$.
We consider the hypothesis class of homogeneous linear separators, $\cW = \{x \mapsto \dotprod{x,w} \mid w \in \ball\}$. We often slightly abuse notation by using $w$ to denote the mapping $x \mapsto \dotprod{x,w}$.
We often represent sets of vectors in $\reals^d$ using matrices. We say that
$\mt{X} \in \reals^{m\times d}$ is the matrix of a set $\{x_1,\ldots,x_m\}\subseteq
\reals^d$ if the rows in the matrix are exactly the vectors in the set. For
uniqueness, one may assume that the rows of $\mt{X}$ are sorted according to an arbitrary fixed full
order on vectors in $\reals^d$. For a PSD matrix $\mt{X}$ denote the largest
eigenvalue of $\mt{X}$ by $\lambdamax(\mt{X})$ and the smallest eigenvalue by $\lambdamin(\mt{X})$.
We use the $O$-notation as follows: $O(f(z))$ stands for $C_1+C_2f(z)$
for some constants $C_1,C_2\geq 0$. $\Omega(f(z))$ stands for $C_2f(z)-C_1$
for some constants $C_1,C_2\geq 0$. $\widetilde{O}(f(z))$ stands for $f(z)p(\ln(z)) + C$ for some polynomial $p(\cdot)$ and some constant $C > 0$.
\section{Preliminaries}\label{sec:prelim}
As mentioned above, for the hypothesis class of linear classifiers $\cW$, one can derive a sample-complexity upper bound of the form $O(B^2/\gamma^2 \epsilon^2)$, where $B^2 = \E_{X \sim D}[\norm{X}^2]$ and $\epsilon$ is the excess error relative to the $\gamma$-margin loss. This can be achieved as follows \citep{BartlettMe02}.
Let $\cZ$ be some domain. The empirical Rademacher complexity of a class of functions $\cF \subseteq \reals^\cZ$ with respect to a set $S = \{z_i\}_{i\in[m]} \subseteq \cZ$
is
\begin{equation*}
\cR(\cF, S) = \frac{1}{m}\E_\sigma[|\sup_{f \in \cF} \sum_{i\in[m]}\sigma_i f(z_i)|],
\end{equation*}
where $\sigma = (\sigma_1,\ldots,\sigma_m)$ are $m$ independent uniform $\{\pm1\}$-valued variables.
The average Rademacher complexity of $\cF$ with respect to a distribution $D$ over $\cZ$ and a sample size $m$ is
\begin{equation*}
\cR_m(\cF, D) = \E_{S \sim D^m}[\cR(\cF, S)].
\end{equation*}
Assume a hypothesis class $\cH \subseteq \reals^\cX$ and a loss function $\loss:\binlab\times \reals \rightarrow \reals$. For a hypothesis $h \in \cH$, we introduce the function $h_\loss:\cX \times \binlab \rightarrow \reals$,
defined by $h_\loss(x,y) = \loss(y,h(x))$. We further define the function class $\cH_\loss = \{ h_\loss \mid h \in \cH\} \subseteq \reals^{\cX \times \binlab}$.
Assume that the range of $\cH_\loss$ is in $[0,1]$. For any $\delta \in (0,1)$, with probability of $1-\delta$ over the draw of samples $S \subseteq \cX \times \binlab$ of size $m$ according to $D$, every $h \in \cH$ satisfies \citep{BartlettMe02}
\begin{equation}\label{eq:rademacherind}
\loss(h, D) \leq \loss(h, S) + 2\cR_m(\cH_\loss, D) + \sqrt{\frac{8\ln(2/\delta)}{m}}.
\end{equation}
To get the desired upper bound for linear classifiers we use the \emph{ramp loss}, which is defined as follows.
For a number $r$, denote $\chop{r} \triangleq \min(\max(r,0),1)$.
The $\gamma$-ramp-loss of a labeled example $(x,y) \in \reals^d \times \binlab$
with respect to a linear classifier $w \in \ball$
is $\ramp_\gamma(w,x,y) = \chop{1-y\dotprod{w,x}/\gamma}$.
Let $\ramp_\gamma(w,D) = \E_{(X,Y)\sim D}[\ramp_\gamma(w,X,Y)]$,
and denote the class of ramp-loss functions by
\[
\rampf_\gamma = \{(x,y) \mapsto \ramp_\gamma(w,x,y) \mid w \in \ball\}.
\]
The ramp-loss is upper-bounded by the margin loss and lower-bounded by the misclassification error. Therefore, the following result can be shown.
\begin{proposition}\label{prop:ramp}
For any MEM algorithm $\cA$, we have
\begin{equation}\label{eq:propstatement}
\loss_0(\cA_\gamma, D, m, \delta) \leq \loss^*_\gamma(\cH,D) + 2\cR_m(\rampf_\gamma, D) + \sqrt{\frac{14\ln(2/\delta)}{m}}.
\end{equation}
\end{proposition}
We give the proof in \appref{app:radramp} for completeness.
Since the $\gamma$-ramp loss is $1/\gamma$ Lipschitz, it follows from \cite{BartlettMe02} that
\[
\cR_m(\rampf_\gamma, D) \leq \sqrt{\frac{B^2}{\gamma^2 m}}.
\]
Combining this with Proposition~\ref{prop:ramp} we can conclude a sample complexity upper bound of $O(B^2/\gamma^2 \epsilon^2)$.
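For the linear class itself the supremum in the Rademacher average has a closed form, $\sup_{w\in\ball}\sum_i \sigma_i\dotprod{w,x_i} = \norm{\sum_i\sigma_i x_i}$ (Cauchy--Schwarz, attained at $w$ proportional to the sum), which makes the quantity easy to estimate numerically; the Lipschitz contraction argument above then transfers the resulting bound to the ramp class. The Monte Carlo sketch below is ours, illustrating that the empirical Rademacher complexity of the linear class sits below the Jensen bound $\sqrt{\sum_i \norm{x_i}^2}/m$.

```python
import numpy as np

def empirical_rademacher_linear(X, n_draws=2000, seed=0):
    """Monte Carlo estimate of R(W, S) = (1/m) E_sigma ||sum_i sigma_i x_i||
    for the unit ball of linear functionals (the sup over w is attained
    at w = v/||v|| with v = sum_i sigma_i x_i)."""
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, m))
    return np.linalg.norm(sigma @ X, axis=1).mean() / m

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))  # a sample of m = 50 points in R^5
estimate = empirical_rademacher_linear(X)
# Jensen's inequality gives R(W, S) <= sqrt(sum_i ||x_i||^2) / m.
jensen_bound = np.sqrt((X ** 2).sum()) / X.shape[0]
```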
In addition to the Rademacher complexity, we will also use the classic notions of \emph{fat-shattering} \citep{KearnsSc94} and \emph{pseudo-shattering} \citep{Pollard84}, defined as follows.
\begin{definition}\label{def:shattered}
Let $\cF$ be a set of functions $f:\cX \rightarrow \reals$, and let $\gamma >
0$. The set $\{x_1,\ldots,x_m\} \subseteq \cX$ is
\emph{$\gamma$-shattered} by $\cF$ with the witness $r \in \reals^m$ if for all $y \in \binm$ there is an $f \in \cF$ such that
$\forall i\in [m],\:y[i](f(x_i)-r[i]) \geq \gamma$.
\end{definition}
The $\gamma$-shattering dimension of a hypothesis class is the size of the largest set that is $\gamma$-shattered by this class.
We say that a set is \emph{$\gamma$-shattered at the origin} if it is
$\gamma$-shattered with the zero vector as a witness.
\begin{definition}\label{def:pseudo}
Let $\cF$ be a set of functions $f:\cX \rightarrow \reals$, and let $\gamma >
0$. The set $\{x_1,\ldots,x_m\} \subseteq \cX$ is
\emph{pseudo-shattered} by $\cF$ with the witness $r \in \reals^m$ if for all $y \in \binm$ there is an $f \in \cF$ such that
$\forall i\in [m],\:y[i](f(x_i)-r[i]) > 0$.
\end{definition}
The pseudo-dimension of a hypothesis class is the size of the largest set that is pseudo-shattered by this class.
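For the linear class $\cW$, $\gamma$-shattering at the origin can be checked mechanically: using the equivalence between fat-shattering and shattering with exact margins for convex classes (mentioned in the introduction), it suffices to test, for every labeling $y$, whether the least-norm $w$ solving $\dotprod{x_i,w}=\gamma\, y[i]$ satisfies $\norm{w}\leq 1$. The brute-force sketch below is ours, exponential in $m$ and intended only for illustration.

```python
import itertools
import numpy as np

def gamma_shattered_at_origin(X, gamma):
    """Check whether the rows of X are gamma-shattered at the origin by
    the unit ball of linear classifiers: for every labeling y, solve
    <x_i, w> = gamma * y[i] with the least-norm w and test ||w|| <= 1."""
    m = X.shape[0]
    for y in itertools.product([-1.0, 1.0], repeat=m):
        target = gamma * np.asarray(y)
        w, *_ = np.linalg.lstsq(X, target, rcond=None)  # least-norm solution
        if not (np.allclose(X @ w, target) and np.linalg.norm(w) <= 1.0):
            return False
    return True
```

For $m$ orthonormal vectors the least-norm solution is $w=\gamma\sum_i y[i]x_i$, of norm $\gamma\sqrt{m}$, so such a set is $\gamma$-shattered at the origin exactly when $\gamma\leq 1/\sqrt{m}$.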
\section{The Margin-Adapted Dimension}\label{sec:marginadapted}
When considering learning of linear classifiers using MEM, the dimension-based upper bound and the norm-based upper bound are both tight in the worst-case sense,
that is, they are the best bounds that rely only on the dimensionality or
only on the norm respectively. Nonetheless, neither is tight in a
distribution-specific sense: If the average norm is unbounded while the
dimension is small, then there can be an arbitrarily large gap between the true
distribution-dependent sample complexity and the bound that depends on the average norm. If the
converse holds, that is, the dimension is arbitrarily large while the
average-norm is bounded, then the dimensionality bound is loose.
Seeking a tight distribution-specific analysis, one simple approach to
tighten these bounds is to consider their minimum, which is proportional to
$\min(d,B^2/\gamma^2)$. Trivially, this is an upper
bound on the sample complexity as well. However, this simple combination is also
not tight: Consider a distribution in which there are a few directions with
very high variance, but the combined variance in all other
directions is small (see \figref{fig:illustration}). We will show that in such situations the sample
complexity is characterized not by the minimum of dimension and norm, but by the sum of the number of high-variance dimensions and the average squared norm in the other directions. This behavior is captured by the \emph{\fullkgname} which we presently define, using the following auxiliary definition.
\begin{figure}[b]
\centering
\includegraphics[width = 0.7\textwidth]{draw}
\caption{Illustrating covariance matrix ellipsoids. left: norm bound is tight; middle: dimension bound is tight; right: neither bound is tight.}
\label{fig:illustration}
\end{figure}
\begin{definition}\label{def:limited}
Let $b>0$ and let $k$ be a positive integer. A distribution $D_X$ over $\reals^d$ is \emph{$(b,k)$-limited} if there exists a sub-space $V \subseteq \reals^d$ of dimension $d-k$ such that
$
\E_{X\sim D_X}[\norm{\mt{O}_V\cdot X}^2] \leq b,
$
where $\mt{O}_V$ is an orthogonal projection onto $V$.
\end{definition}
\begin{definition}[\fullkgname]
The \emph{\fullkgname} of a distribution $D_X$,
denoted by $k_\gamma(D_X)$, is the minimum
$k$ such that the distribution is $(\gamma^2k,k)$-limited.
\end{definition}
We sometimes drop the argument of $k_\gamma$ when it is clear from context. It
is easy to see that for any distribution $D_X$ over $\reals^d$,
$k_{\gamma}(D_X) \leq\min(d,\E[\norm{X}^2]/\gamma^2)$. Moreover, $k_\gamma$ can
be much smaller than this minimum. For example, consider a random vector $X \in
\reals^{1001}$ with mean zero and statistically independent coordinates, such
that the variance of the first coordinate is $1000$, and the variance in each
remaining coordinate is $0.001$. We have $k_1=1$ but $d = \E[\norm{X}^2] =
1001$.
$k_\gamma(D_X)$ can be calculated from the uncentered covariance matrix $\E_{X\sim D_X}[XX^T]$ as follows: Let
$\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_d \geq 0$ be the eigenvalues of this matrix. Then
\begin{equation}\label{eq:kgammamin}
k_{\gamma} = \min \{ k \mid \sum_{i=k+1}^d \lambda_i \leq
\gamma^2 k\}.
\end{equation}
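As a concrete illustration (a sketch, not part of the formal development), the minimization in \eqref{eq:kgammamin} can be computed directly from a list of covariance eigenvalues. The example below is a variant of the numerical example that follows, with the low variance set to $0.0005$ rather than $0.001$ so that the tail sum stays safely below $\gamma^2 k$ under floating-point arithmetic:

```python
def k_gamma(eigenvalues, gamma):
    """Smallest k with sum_{i>k} lambda_i <= gamma^2 * k, as in the
    minimization above (eigenvalues of the uncentered covariance matrix)."""
    lam = sorted(eigenvalues, reverse=True)
    for k in range(len(lam) + 1):
        if sum(lam[k:]) <= gamma ** 2 * k:
            return k
    return len(lam)

# One direction with variance 1000, plus 1000 independent
# directions with variance 0.0005 each.
lam = [1000.0] + [0.0005] * 1000
assert k_gamma(lam, gamma=1.0) == 1
# Both classical quantities are much larger here:
# d = 1001 and E[||X||^2] / gamma^2 = sum(lam) = 1000.5.
```

For this input $k_1 = 1$, while both the dimension and the average squared norm are three orders of magnitude larger, matching the gap described in the text.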
A quantity similar to this definition of $k_\gamma$ was studied previously in \cite{Bousquet02}. The eigenvalues of the \emph{empirical} covariance matrix were used to provide sample complexity bounds, for instance
in \cite{ScholkopfShSmWi99}. However, $k_\gamma$ generates a different type of bound,
since it is defined based on the eigenvalues of the distribution and not of the sample. We will see that for small finite samples, the latter can be quite different from the former.
Finally, note that while we define the \fullkgname\ for a finite-dimensional space for ease of notation, the same definition carries over to an infinite-dimensional Hilbert space. Moreover, $k_\gamma$ can be finite even if some of the eigenvalues $\lambda_i$ are infinite, implying a distribution with unbounded covariance.
\section{A Distribution-Dependent Upper Bound}\label{sec:upper}
In this section we prove an upper bound on the sample complexity of learning
with MEM, using the \fullkgname. We do this by providing a tighter upper bound for the Rademacher complexity of $\rampf_\gamma$.
We bound $\cR_m(\rampf_\gamma,D)$ for any $(B^2,k)$-limited distribution $D_X$, using $L_2$ covering numbers, defined as follows.
Let $(\cX, \norm{\cdot}_\circ)$ be a normed space.
An $\eta$-covering of a set $\cF \subseteq \cX$ with respect to the norm $\norm{\cdot}_\circ$
is a set $\cC \subseteq \cX$ such that
for any $f\in \cF$ there exists a $g \in \cC$ such that
$\norm{f -g}_\circ \leq \eta.$
The covering number for a given $\eta> 0$, $\cF$ and $\norm{\cdot}_\circ$ is the size of the smallest such $\eta$-covering, and is denoted by $\cN(\eta, \cF, \circ)$.
Let $S = \{x_1,\ldots,x_m\} \subseteq \reals^d$. For a function $f:\reals^d \rightarrow \reals$, the $L_2(S)$ norm of $f$ is $\norm{f}_{L_2(S)} = \sqrt{\E_{X \sim S}[f(X)^2]}$.
Thus, we consider covering-numbers of the form $\cN(\eta, \rampf_\gamma, L_2(S))$.
The empirical Rademacher complexity of a function class can be bounded by the $L_2$ covering numbers of the same function class as follows \citep[Lemma 3.7]{Mendelson02}:
Let $\epsilon_i = 2^{-i}$. Then
\begin{equation}\label{eq:mendelson}
\sqrt{m}\cR(\rampf_\gamma,S) \leq C\sum_{i\in[N]}\epsilon_{i-1}\sqrt{\ln\cN(\epsilon_{i}, \rampf_\gamma, L_2(S))} + 2\epsilon_{N}\sqrt{m}.
\end{equation}
To bound the covering number of $\rampf_\gamma$, we will restate the functions in $\rampf_\gamma$ as sums of two functions, each selected from a function class with bounded complexity.
The first function class will be bounded because of the norm bound on the subspace $V$ used in \defref{def:limited}, and the second function class will have a bounded pseudo-dimension. However, the second function class will depend on the choice of the first function in the sum. Therefore, we require the following lemma, which provides an upper bound on such sums of functions.
We use the notion of a \emph{Hausdorff distance} between two sets $\cG_1,\cG_2\subseteq \cX$, defined as $\Delta_H(\cG_1,\cG_2) = \sup_{g_1 \in \cG_1} \inf_{g_2 \in \cG_2} \norm{g_1 - g_2}_\circ$.
\begin{lemma}\label{lem:doublecover}
Let $(\cX, \norm{\cdot}_\circ)$ be a normed space. Let $\cF \subseteq \cX$ be a set,
and let $\cG:\cX \rightarrow 2^{\cX}$ be a mapping from objects in $\cX$ to sets
of objects in $\cX$. Assume that $\cG$ is $c$-Lipschitz with respect to the
Hausdorff distance on sets, that is assume that
\[
\forall f_1,f_2\in \cX, \Delta_H(\cG(f_1),\cG(f_2)) \leq c\norm{f_1 - f_2}_\circ.
\]
Let $\cF_\cG = \{ f+g \mid f \in \cF, g \in \cG(f)\}$.
Then
\[
\cN(\eta, \cF_\cG, \circ) \leq \cN(\eta/(2+c), \cF, \circ)\cdot
\sup_{f\in\cF}\cN(\eta/(2+c), \cG(f),\circ).
\]
\end{lemma}
\begin{proof}
For any set $A \subseteq \cX$, denote by $\cC_A$ a minimal $\eta$-covering for $A$ with respect to $\norm{\cdot}_\circ$, so that $|\cC_A| = \cN(\eta, A, \circ)$.
Let $f+g \in \cF_\cG$ such that $f \in \cF, g \in \cG(f)$.
There is a $\hat{f} \in \cC_\cF$ such that $\norm{f - \hat{f}}_\circ\leq \eta$.
In addition, by the Lipschitz assumption there is a $\tilde{g} \in \cG(\hat{f})$ such that $\norm{g - \tilde{g}}_\circ \leq c\norm{f - \hat{f}}_\circ \leq c\eta$. Lastly, there is a $\hat{g} \in \cC_{\cG(\hat{f})}$ such that $\norm{\tilde{g} - \hat{g}}_\circ \leq \eta$.
Therefore
\[
\norm{f + g - (\hat{f} + \hat{g})}_\circ \leq \norm{f - \hat{f}}_\circ + \norm{g- \tilde{g}}_\circ + \norm{\tilde{g} - \hat{g}}_\circ \leq (2+c)\eta.
\]
Thus the set $\{ f+g \mid f \in \cC_\cF, g \in \cC_{\cG(f)}\}$ is a $(2+c)\eta$ cover of $\cF_\cG$.
The size of this cover is at most $|\cC_\cF|\cdot\sup_{f \in \cF}|\cC_{\cG(f)}| \leq \cN(\eta, \cF, \circ)\cdot
\sup_{f\in\cF}\cN(\eta, \cG(f),\circ)$.
\end{proof}
The following lemma provides us with a useful class of mappings which are $1$-Lipschitz with respect to the Hausdorff distance, as required in \lemref{lem:doublecover}. The proof is provided in \appref{app:glipschitz}.
\begin{lemma}\label{lem:glipschitz}
Let $f:\cX \rightarrow \reals$ be a function and let $Z \subseteq \reals^\cX$ be a function class over some domain $\cX$.
Let $\cG:\reals^\cX \rightarrow 2^{\reals^\cX}$
be the mapping defined by
\begin{equation}\label{eq:gf}
\cG(f) \triangleq\{ x \mapsto \chop{f(x) + z(x)} - f(x) \mid z \in Z\}.
\end{equation}
Then $\cG$ is $1$-Lipschitz with respect to the Hausdorff distance.
\end{lemma}
The function class induced by the mapping above preserves the pseudo-dimension of the original function class, as the following lemma shows. The proof is provided in \appref{app:lempseudo}.
\begin{lemma}\label{lem:pseudo}
Let $f:\cX \rightarrow \reals$ be a function and let $Z \subseteq \reals^\cX$ be a function class over some domain $\cX$.
Let $\cG(f)$ be defined as in \eqref{eq:gf}.
Then the pseudo-dimension of $\cG(f)$ is at most the pseudo-dimension of $Z$.
\end{lemma}
Equipped with these lemmas, we can now provide the new bound on the Rademacher complexity of $\rampf_\gamma$ in the following theorem. The subsequent corollary states the resulting sample-complexity upper bound for MEM, which depends on $k_\gamma$.
\begin{theorem}\label{thm:upperbound}
Let $D$ be a distribution over $\reals^d \times \binlab$, and assume $D_X$ is $(B^2,k)$-limited.
Then
\[
\cR_m(\rampf_\gamma, D) \leq \sqrt{\frac{O(k + B^2/\gamma^2)\ln(m)}{m}}.
\]
\end{theorem}
\begin{proof}
In this proof all absolute constants are assumed to
be positive and are denoted by $C$ or $C_i$ for some integer $i$.
Their values may change from line to line or even within the same line.
Consider the distribution $\tilde{D}$ which results from drawing $(X,Y) \sim D$ and emitting \mbox{$(Y\cdot X, 1)$}. It too is $(B^2,k)$-limited, and $\cR(\rampf_\gamma,D) = \cR(\rampf_\gamma,\tilde{D})$.
Therefore, we assume without loss of generality that for all $(X,Y)$ drawn from $D$, $Y = 1$.
Accordingly, we henceforth omit the $y$ argument from $\ramp_\gamma(w,x,y)$ and write simply $\ramp_\gamma(w,x) \triangleq \ramp_\gamma(w,x,1)$.
Following \defref{def:limited}, let $\mt{O}_V$ be an orthogonal projection onto a sub-space $V$ of dimension $d-k$
such that $\E_{X\sim D_X}[\norm{\mt{O}_V\cdot X}^2] \leq B^2$. Let $\bar{V}$ be the complementary sub-space to $V$. For a set $S = \{x_1,\ldots,x_m\} \subseteq \reals^d$, denote $B(S) = \sqrt{\frac{1}{m}\sum_{i\in[m]}\norm{\mt{O}_V\cdot x_i}^2}$.
We would like to use \eqref{eq:mendelson} to bound the Rademacher complexity of $\rampf_\gamma$. Therefore, we will bound $\cN(\eta,\rampf_\gamma, L_2(S))$ for $\eta > 0$. Note that
\[
\ramp_\gamma(w,x) = \chop{1-\dotprod{w,x}/\gamma}\allowbreak = 1 - \chop{\dotprod{w,x}/\gamma}.
\]
Shifting by a constant and negating do not change the covering number of a function class.
Therefore, $\cN(\eta,\rampf_\gamma, L_2(S))$ is equal to the covering number of $\{ x \mapsto \chop{\dotprod{w,x}/\gamma} \mid w \in \ball\}$.
Moreover, let
\[
\rampf_\gamma' = \{ x \mapsto \chop{\dotprod{w_a + w_b,x}/\gamma} \mid
w_a \in \ball \cap V,\: w_b \in \bar{V}\}.
\]
Then $\{ x \mapsto \chop{\dotprod{w,x}/\gamma} \mid w \in \ball\} \subseteq \rampf_\gamma'$,
thus it suffices to bound $\cN(\eta,\rampf'_\gamma, L_2(S))$.
To do that, we show that $\rampf_\gamma'$ satisfies the assumptions of \lemref{lem:doublecover} for the normed space $(\reals^{\reals^d},\norm{\cdot}_{L_2(S)})$.
Define
\[
\cF = \{ x \mapsto \dotprod{w_a,x}/\gamma \mid w_a \in \ball\cap V\}.
\]
Let $\cG:\reals^{\reals^d} \rightarrow 2^{\reals^{\reals^d}}$
be the mapping defined by
\[
\cG(f) \triangleq\{ x \mapsto \chop{f(x) + \dotprod{w_b,x}/\gamma} - f(x) \mid w_b \in \bar{V}\}.
\]
Clearly, $\cF_\cG = \{f + g \mid f \in \cF, g \in \cG(f)\} = \rampf_\gamma'$.
Furthermore, by \lemref{lem:glipschitz}, $\cG$ is $1$-Lipschitz with respect to the Hausdorff distance.
Thus, by \lemref{lem:doublecover}
\begin{equation}\label{eq:multcovers}
\cN(\eta, \rampf_\gamma', L_2(S)) \leq \cN(\eta/3, \cF, L_2(S))\cdot
\sup_{f\in\cF}\cN(\eta/3, \cG(f),L_2(S)).
\end{equation}
We now proceed to bound the two covering numbers on the right hand side.
First, consider $\cN(\eta/3, \cG(f),L_2(S))$. By \lemref{lem:pseudo}, the
pseudo-dimension of $\cG(f)$ is the same as the pseudo-dimension of $\{x \mapsto \dotprod{w,x}/\gamma \mid w \in \bar{V}\}$, which is exactly $k$, the dimension of $\bar{V}$.
The $L_2$ covering number of $\cG(f)$ can be bounded by the pseudo-dimension of $\cG(f)$ as follows \citep[see, e.g.,][Theorem 3.1]{Bartlett06}
\begin{equation}\label{eq:pseudocover}
\cN(\eta/3,\cG(f),L_2(S)) \leq C_1\left(\frac{C_2}{\eta^2}\right)^k.
\end{equation}
Second, consider $\cN(\eta/3, \cF, L_2(S))$. Sudakov's minoration theorem (\citealt{Sudakov71}, and see also
\citealp{LedouxTa91}, Theorem 3.18) states that for any $\eta > 0$
\[
\ln\cN(\eta, \cF, L_2(S)) \leq \frac{C}{m\eta^2}\left(\E_{s}\Big[\sup_{f \in \cF} \sum_{i\in[m]}s_i f(x_i)\Big]\right)^2,
\]
where $s = (s_1,\ldots,s_m)$ are independent standard normal variables. The right-hand side can be bounded
as follows:
\begin{align*}
&\gamma \E_s[\sup_{f \in \cF}|\sum_{i=1}^m s_i f(x_i)|]
= \E_s[\sup_{w \in \ball\cap V}|\dotprod{w,\sum_{i=1}^m s_i x_i}|]\\
&\quad\leq \E_s[\norm{\sum_{i=1}^m s_i\mt{O}_V x_i}]
\leq \sqrt{\E_s[\norm{\sum_{i=1}^m s_i\mt{O}_V x_i}^2]}
= \sqrt{\sum_{i\in[m]}\norm{\mt{O}_V x_i}^2} = \sqrt{m}B(S).
\end{align*}
Therefore $\ln \cN(\eta, \cF, L_2(S)) \leq C\frac{B^2(S)}{\gamma^2\eta^2}.$
Substituting this and \eqref{eq:pseudocover} for the right-hand side in \eqref{eq:multcovers}, and adjusting constants, we get
\[
\ln\cN(\eta, \rampf_\gamma, L_2(S)) \leq \ln\cN(\eta, \rampf_\gamma', L_2(S)) \leq C_1(1 + k\ln(\frac{C_2}{\eta})+\frac{B^2(S)}{\gamma^2\eta^2}).
\]
To finalize the proof, we plug this inequality into \eqref{eq:mendelson} to
get
\begin{align*}
&\sqrt{m}\cR(\rampf_\gamma, S) \leq C_1\sum_{i\in[N]}\epsilon_{i-1}\sqrt{1 + k\ln(C_2/\epsilon_i)+\frac{B^2(S)}{\gamma^2\epsilon_i^2}} + 2\epsilon_{N}\sqrt{m} \\
&\leq C_1\left(\sum_{i\in[N]}\epsilon_{i-1}\left(1 + \sqrt{k\ln(C_2/\epsilon_i)}+\sqrt{\frac{B^2(S)}{\gamma^2\epsilon_i^2}}\right)\right) + 2\epsilon_{N}\sqrt{m}\\
&\leq C_1\left(\sum_{i\in[N]}2^{-i+1} + \sqrt{k}\sum_{i\in[N]}2^{-i+1}\sqrt{\ln(C_2/2^{-i})} + \sum_{i\in[N]}\frac{B(S)}{\gamma}\right) +
2^{-N+1}\sqrt{m} \\
&\leq C\left(1 + \sqrt{k} + \frac{B(S)\cdot N}{\gamma}\right) + 2^{-N+1}\sqrt{m}.
\end{align*}
In the last inequality we used the fact that $\sum_{i}i2^{-i+1} \leq 4$.
Setting $N = \ln(2m)$ we get
\begin{align*}
&\cR(\rampf_\gamma, S) \leq \frac{C}{\sqrt{m}}\left(1 + \sqrt{k} + \frac{B(S)\ln(2m)}{\gamma}\right).
\end{align*}
Taking expectation over both sides, and noting that $\E[B(S)]\leq \sqrt{\E[B^2(S)]} \leq B$, we get
\[
\cR_m(\rampf_\gamma, D) \leq \frac{C}{\sqrt{m}}\left(1 + \sqrt{k} + \frac{B\ln(2m)}{\gamma}\right)
\leq \sqrt{\frac{O(k + B^2\ln^2(2m)/\gamma^2)}{m}}.
\]
\end{proof}
\begin{cor}[Sample complexity upper bound]\label{cor:upperbound}
Let $D$ be a distribution over $\reals^d\times \{\pm1\}$. Then
\[
m(\epsilon,\gamma,D) \leq \tilde{O}\left(\frac{k_\gamma(D_X)}{\epsilon^2}\right).
\]
\end{cor}
\begin{proof}
By \propref{prop:ramp}, we have
\[
\loss_0(\cA_\gamma, D, m, \delta) \leq \loss^*_\gamma(\cW,D) + 2\cR_m(\rampf_\gamma, D) + \sqrt{\frac{14\ln(2/\delta)}{m}}.
\]
By definition of $k_\gamma(D_X)$, $D_X$ is $(\gamma^2 k_\gamma, k_\gamma)$-limited.
Therefore, by \thmref{thm:upperbound},
\[
\cR_m(\rampf_\gamma, D) \leq \sqrt{\frac{O(k_\gamma(D_X))\ln(m)}{m}}.
\]
We conclude that
\[
\loss_0(\cA_\gamma, D, m, \delta) \leq \loss^*_\gamma(\cW,D) + \sqrt{\frac{O(k_\gamma(D_X)\ln(m) + \ln(1/\delta))}{m}}.
\]
Bounding the second right-hand term by $\epsilon$, we conclude that $m(\epsilon,\gamma,D) \leq \tilde{O}(k_\gamma/\epsilon^2)$.
\end{proof}
One should note that a similar upper bound can be obtained much more easily under a uniform upper bound on the eigenvalues of the uncentered covariance matrix.\footnote{This has been pointed out to us by an anonymous reviewer of this manuscript. An upper bound under sub-Gaussianity assumptions can be found in \cite{SabatoSrTi10b}.} However, such an upper bound would not capture the fact that a finite dimension implies a finite sample complexity, regardless of the size of the covariance.
If one wants to estimate the sample complexity, then large covariance matrix eigenvalues imply that more examples are required to estimate the covariance matrix from a sample. However, these examples need not be labeled. Moreover, estimating the covariance matrix is not necessary to achieve the sample complexity, since the upper bound holds for any margin-error minimization algorithm.
\section{A Distribution-Dependent Lower Bound}\label{sec:lower}
The new upper bound presented in \corref{cor:upperbound} can be tighter than both the norm-only and the dimension-only
upper bounds. But does the \fullkgname\ characterize the true sample
complexity of the distribution, or is it just another upper bound? To answer
this question, we first need tools for deriving sample complexity lower bounds. \secref{sec:fatlower} relates fat-shattering with a lower bound on sample complexity.
In \secref{sec:lowerbound} we use this result to relate the smallest
eigenvalue of a Gram-matrix to a lower bound on sample
complexity. In \secref{sec:subg} the family of sub-Gaussian product distributions is presented. We prove a sample-complexity
lower bound for this family in \secref{sec:lowerboundsg}.
\subsection{A Sample Complexity Lower Bound Based on Fat-Shattering}\label{sec:fatlower}
The ability to learn is closely related to the probability of a sample to be
shattered, as evident in Vapnik's formulations of learnability as a function of
the $\epsilon$-entropy \citep{Vapnik95}. It is well known that the maximal size
of a shattered set dictates a sample-complexity upper bound. In the theorem below, we show that for some hypothesis classes it also
implies a lower bound.
The theorem states that if a sample drawn
from a data distribution is fat-shattered with a non-negligible probability,
then MEM can fail to learn a good classifier for this distribution.\footnote{In contrast, the average
Rademacher complexity cannot be used to derive general lower bounds for MEM algorithms, since it is related to the rate of uniform convergence of the entire hypothesis class, while MEM algorithms choose low-error hypotheses \citep[see, e.g.,][]{BartlettBoMe05}.}
This holds not only for linear classifiers, but more generally for all \emph{symmetric} hypothesis classes. Given a domain $\cX$, we say that a hypothesis class $\cH \subseteq \reals^\cX$ is symmetric if for all $h \in \cH$, we have $-h \in \cH$ as well.
This clearly holds for the class of linear classifiers $\cW$.
\begin{theorem}\label{thm:shatterednotlearned}
Let $\cX$ be some domain, and assume that $\cH \subseteq \reals^\cX$ is a symmetric hypothesis class.
Let $D$ be a distribution over $\cX\times \{\pm 1\}$.
If the probability that a sample of size $m$ drawn from $D_X^{m}$ is $\gamma$-shattered at the origin
by $\cH$ is at least $\eta$, then $m(\epsilon,\gamma,D,\eta/2) \geq \floor{m/2}$ for all $\epsilon < 1/2 - \loss^*_\gamma(D)$.
\end{theorem}
\begin{proof}
Let $\epsilon < \frac12 - \loss^*_\gamma(D)$. We exhibit a MEM algorithm $\cA$
such that
\[
\loss_0(\cA_\gamma,D,\floor{m/2},\eta/2) \geq \frac12 > \loss^*_\gamma(D) + \epsilon,
\]
thus proving the desired lower bound on $m(\epsilon,\gamma,D,\eta/2)$.
Assume for simplicity that $m$ is even (otherwise replace $m$ with $m-1$).
Consider two sets $S,\tilde{S} \subseteq \cX \times \binlab$, each of size $m/2$,
such that $S_X\cup \tilde{S}_X$ is $\gamma$-shattered at the origin by $\cH$.
Then there exists a hypothesis $h_1 \in \cH$ such that the following holds:
\begin{itemize}
\item For all $x \in S_X \cup \tilde{S}_X$, $|h_1(x)| \geq \gamma$.
\item For all $(x,y) \in S$, $\sign(h_1(x)) = y$.
\item For all $(x,y) \in \tilde{S}$, $\sign(h_1(x)) = -y$.
\end{itemize}
It follows that $\loss_\gamma(h_1,S) = 0$. In addition, let $h_2 = -h_1$. Then $\loss_\gamma(h_2,\tilde{S}) = 0$. Moreover, we have $h_2 \in \cH$ due to the symmetry of $\cH$.
On each point in $\cX$, at least one of $h_1$ and $h_2$ predicts the wrong sign. Thus $\loss_0(h_1,D) + \loss_0(h_2,D) \geq 1$. It follows that for at least one $i \in \{1,2\}$, we have \mbox{$\loss_0(h_i, D) \geq \half$}.
Denote the set of hypotheses with a high misclassification error by
\[
\badh = \{ h\in \cH \mid \loss_0(h,D) \geq \half\}.
\]
We have just shown that if $S_X\cup \tilde{S}_X$ is $\gamma$-shattered at the origin by $\cH$ then at least one of the following holds: (1) $h_1 \in \badh \cap \argmin_{h\in \cH} \loss_\gamma(h,S)$ or (2) $h_2 \in \badh \cap \argmin_{h\in \cH} \loss_\gamma(h,\tilde{S})$.
Now, consider a MEM algorithm $\cA$ such that whenever possible, it returns a hypothesis from $\badh$. Formally, given the input sample $S$, if $\badh \cap \argmin_{h\in \cH} \loss_\gamma(h,S) \neq \emptyset$, then \linebreak[4] $\cA(S) \in \badh \cap \argmin_{h\in \cH} \loss_\gamma(h,S)$. It follows that
\begin{align*}
&\P_{S \sim D^{m/2}}[\loss_0(\cA(S),D) \geq \tfrac12] \geq \P_{S \sim D^{m/2}}[\badh \cap \argmin_{h\in \cH} \loss_\gamma(h, S) \neq \emptyset]\\
&\quad= \half (\P_{S \sim D^{m/2}}[\badh \cap\argmin_{h\in \cH} \loss_\gamma(h, S)\neq \emptyset] + \P_{\tilde{S}\sim D^{m/2}}[\badh \cap\argmin_{h\in \cH} \loss_\gamma(h, \tilde{S}) \neq \emptyset]) \\
&\quad\geq \half (\P_{S,\tilde{S} \sim D^{m/2}}[\badh \cap\argmin_{h\in \cH} \loss_\gamma(h, S)\neq \emptyset \:\text{ OR }\: \badh \cap\argmin_{h\in \cH} \loss_\gamma(h, \tilde{S}) \neq \emptyset])\\
&\quad\geq \half \P_{S,\tilde{S} \sim D^{m/2}}[S_X\cup \tilde{S}_X \text{ is $\gamma$-shattered at the origin }].
\end{align*}
The last inequality follows from the argument above regarding $h_1$ and $h_2$.
The last expression is simply half the probability that a sample of size $m$ from $D_X$ is shattered. By assumption, this probability is at least $\eta$.
Thus we conclude that $\P_{S \sim D^{m/2}}[\loss_0(\cA(S),D) \geq \half] \geq \eta/2.$
It follows that $\loss_0(\cA_\gamma,D,m/2,\eta/2) \geq \half$.
\end{proof}
As a side note, it is interesting to observe that \thmref{thm:shatterednotlearned} does not hold in general for non-symmetric hypothesis classes. For example,
assume that the domain is $\cX = [0,1]$, and the hypothesis class is the
set of all functions that label a finite number of points in $[0,1]$ by $+1$
and the rest by $-1$. Consider learning using MEM, when the distribution is uniform over $[0,1]$, and all the labels are $-1$.
For any $m > 0$ and $\gamma \in (0,1)$, a sample of size $m$ is $\gamma$-shattered at the origin with probability $1$.
However, any learning algorithm that returns a hypothesis from the hypothesis class will incur zero error on this distribution. Thus, shattering alone does not suffice to ensure that learning is hard.
\subsection{A Sample Complexity Lower Bound with Gram-Matrix Eigenvalues}\label{sec:lowerbound}
We now return to the case of homogeneous linear classifiers, and link high-probability fat-shattering to properties of the distribution.
First, we present an equivalent and simpler
characterization of fat-shattering for linear classifiers. We then use it
to provide a sufficient condition for the fat-shattering of a sample, based on the smallest eigenvalue of its Gram matrix.
\begin{theorem}\label{thm:shattercond}
Let $\mt{X} \in \reals^{m\times d}$ be the matrix of a set of size $m$ in $\reals^d$. The set is
$\gamma$-shattered at the origin by $\cW$ if and only if $\mt{X}\mt{X}^T$ is invertible and for all $y \in \binm$, $y^T (\mt{X}\mt{X}^T)^{-1} y \leq \gamma^{-2}$.
\end{theorem}
To prove \thmref{thm:shattercond} we require two auxiliary lemmas. The
first lemma, stated below, shows that for convex function classes,
$\gamma$-shattering can be substituted with shattering with exact
$\gamma$-margins.
\begin{lemma}\label{lem:exactmargin}
Let $\cF\subseteq \reals^\cX$ be a class of functions,
and assume that $\cF$ is convex, that is
\[
\forall f_1,f_2\in \cF, \forall \lambda \in [0,1],\quad
\lambda f_1 + (1-\lambda)f_2\in \cF.
\]
If $S = \{x_1,\ldots,x_m\} \subseteq \cX$ is $\gamma$-shattered by $\cF$ with
witness $r \in \reals^m$, then for every $y \in \binm$ there is an $f \in \cF$
such that for all $i \in [m],\:y[i] (f(x_i) - r[i]) = \gamma$.
\end{lemma}
The proof of this lemma is provided in \appref{app:shattercond}.
The second lemma that we use allows converting the representation
of the Gram-matrix to a different feature space, while keeping the separation properties intact.
For a matrix $\mt{M}$, denote its pseudo-inverse by $\mt{M}^+$.
\begin{lemma}\label{lem:wtilde}
Let $\mt{X} \in \reals^{m\times d}$ be a matrix such that $\mt{X} \mt{X}^T$ is invertible, and let $\mt{Y}\in \reals^{m\times k}$ such that $\mt{X}\mt{X}^T = \mt{Y} \mt{Y}^T$.
Let $r \in \reals^m$ be some real vector.
If there exists a vector $\widetilde{w} \in \reals^k$ such that $\mt{Y} \widetilde{w} = r$, then
there exists a vector $w \in \reals^d$ such that $\mt{X} w = r \text{ and } \norm{w} = \norm{ \mt{Y}^T (\mt{Y}^T)^+ \widetilde{w}} \leq \norm{\widetilde{w}}$.
\end{lemma}
\begin{proof}
Denote $\mt{K} = \mt{X}\mt{X}^T = \mt{Y}\mt{Y}^T$.
Let $\mt{S} = \mt{Y}^T \mt{K}^{-1} \mt{X}$ and let $w = \mt{S}^T \widetilde{w}$. We have
$\mt{X} w = \mt{X} \mt{S}^T \widetilde{w} = \mt{X} \mt{X}^T \mt{K}^{-1} \mt{Y}\widetilde{w} = \mt{Y} \widetilde{w} = r.$ In addition,
$
\| w \|^2 = w^T w = \widetilde{w}^T \mt{S} \mt{S}^T \widetilde{w}.
$
By definition of $\mt{S}$,
\[
\mt{S} \mt{S}^T = \mt{Y}^T \mt{K}^{-1} \mt{X} \mt{X}^T \mt{K}^{-1} \mt{Y} = \mt{Y}^T \mt{K}^{-1} \mt{Y} = \mt{Y}^T (\mt{Y} \mt{Y}^T)^{-1} \mt{Y} = \mt{Y}^T (\mt{Y}^T)^{+}.
\]
Denote $\mt{O} = \mt{Y}^T (\mt{Y}^T)^{+}$. $\mt{O}$ is an orthogonal projection matrix: by the properties of the pseudo-inverse, $\mt{O} = \mt{O}^T$ and $\mt{O}^2 = \mt{O}$. Therefore
$
\norm{w}^2 = \widetilde{w}^T \mt{S} \mt{S}^T \widetilde{w} = \widetilde{w}^T \mt{O} \widetilde{w} =
\widetilde{w}^T \mt{O} \mt{O}^T \widetilde{w} = \| \mt{O} \widetilde{w} \|^2 \leq \norm{\widetilde{w}}^2.
$
\end{proof}
\begin{proof}[of \thmref{thm:shattercond}]
We prove the theorem for $1$-shattering. The case of $\gamma$-shattering follows by rescaling $X$ appropriately.
Let $\mt{X}\mt{X}^T = \mt{U} \Lambda \mt{U}^T$ be the SVD of $\mt{X}\mt{X}^T$, where $\mt{U}$ is an orthogonal matrix and $\Lambda$ is a diagonal matrix.
Let $\mt{Y} = \mt{U} \Lambda^\half$. We have $\mt{X}\mt{X}^T = \mt{Y} \mt{Y}^T$. We show that the specified conditions are sufficient and necessary for the shattering of the set:
\begin{enumerate}
\item
Sufficient: If $\mt{X}\mt{X}^T$ is invertible, then $\Lambda$ is invertible, thus so is $\mt{Y}$.
For any $y\in \binm$, let $w_y = \mt{Y}^{-1} y$.
Then $\mt{Y} w_y = y$. By \lemref{lem:wtilde},
there exists a separator $w$ such that $\mt{X}w = y$ and $\norm{w} \leq \norm{w_y} = \sqrt{y^T (\mt{Y}\mt{Y}^T)^{-1}y} = \sqrt{y^T (\mt{X}\mt{X}^T)^{-1}y} \leq 1$.
\item Necessary: If $\mt{X}\mt{X}^T$ is not invertible then the vectors in $S$ are linearly dependent, thus $S$ cannot be shattered using linear separators \citep[see, e.g.,][]{Vapnik95}. The first condition is therefore necessary. Now assume that $S$ is $1$-shattered at the origin; we show that the second condition necessarily holds. By \lemref{lem:exactmargin}, for all $y \in \binm$ there exists a $w_y \in \ball$ such that $\mt{X}w_y = y$.
Thus by \lemref{lem:wtilde} there exists a $\widetilde{w}_y$ such that $\mt{Y} \widetilde{w}_y = y$ and $\norm{\widetilde{w}_y} \leq \norm{w_y} \leq 1$. $\mt{X}\mt{X}^T$ is invertible, thus so is $\mt{Y}$. Therefore $\widetilde{w}_y = \mt{Y}^{-1}y$. Thus $y^T(\mt{X}\mt{X}^T)^{-1}y = y^T(\mt{Y}\mt{Y}^T)^{-1}y = \norm{\widetilde{w}_y}^2 \leq 1$.
\end{enumerate}
\end{proof}
We are now ready to provide a sufficient condition for fat-shattering based on the smallest eigenvalue of the Gram matrix.
\begin{cor}\label{cor:lambdam}
Let $\mt{X} \in \reals^{m\times d}$ be the matrix of a set of size $m$ in $\reals^d$.
If $\lambdamin(\mt{X}\mt{X}^T) \geq m\gamma^2$ then the set is $\gamma$-shattered at the origin by $\cW$.
\end{cor}
\begin{proof}
If $\lambdamin(\mt{X}\mt{X}^T) \geq m\gamma^2$ then $\mt{X}\mt{X}^T$ is invertible and
$\lambdamax((\mt{X}\mt{X}^T)^{-1})\leq (m\gamma^2)^{-1}$. For any $y \in \binm$ we have
$\norm{y}=\sqrt{m}$ and
\[
y^T (\mt{X}\mt{X}^T)^{-1} y \leq \norm{y}^2
\lambdamax((\mt{X}\mt{X}^T)^{-1}) \leq m(m\gamma^2)^{-1} = \gamma^{-2}.
\]
By \thmref{thm:shattercond} the sample is $\gamma$-shattered at the origin.
\end{proof}
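As a numerical sanity check (illustrative only, not part of the formal development), the exact characterization of \thmref{thm:shattercond} and the sufficient eigenvalue condition of \corref{cor:lambdam} can be compared on small diagonal matrices; the last example below shows that the eigenvalue condition is sufficient but not necessary:

```python
import itertools
import numpy as np

def exactly_shattered(X, gamma):
    """Exact condition of the theorem: X X^T is invertible and
    y^T (X X^T)^{-1} y <= gamma^{-2} for every sign vector y."""
    m = X.shape[0]
    G = X @ X.T
    if np.linalg.matrix_rank(G) < m:
        return False
    G_inv = np.linalg.inv(G)
    return all(y @ G_inv @ y <= gamma ** -2 + 1e-12
               for y in itertools.product([-1.0, 1.0], repeat=m))

def eigenvalue_sufficient(X, gamma):
    """Sufficient condition of the corollary: lambda_min(X X^T) >= m * gamma^2."""
    m = X.shape[0]
    return np.linalg.eigvalsh(X @ X.T).min() >= m * gamma ** 2

X = np.diag([2.0, 2.0, 2.0])        # three orthogonal points of norm 2
assert eigenvalue_sufficient(X, 1.0) and exactly_shattered(X, 1.0)

X_small = np.diag([0.5, 0.5])       # points too short to be 1-shattered
assert not exactly_shattered(X_small, 1.0)

X_mixed = np.diag([2.0, 2.0, 1.7])  # lambda_min = 2.89 < 3, yet 1-shattered
assert not eigenvalue_sufficient(X_mixed, 1.0)
assert exactly_shattered(X_mixed, 1.0)
```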
\corref{cor:lambdam} generalizes the
requirement of linear independence for shattering with no margin: A set of vectors is shattered with no margin if the vectors are linearly independent, that is if $\lambdamin>0$.
The corollary shows that for $\gamma$-fat-shattering, we can require instead $\lambdamin \geq m\gamma^2$. We can now conclude that if it is highly probable that the smallest eigenvalue
of the sample Gram matrix is large, then MEM might fail to learn
a good classifier for the given distribution. This is formulated in the following theorem.
\begin{theorem}\label{thm:inductive}
Let $D$ be a distribution over $\reals^d\times \{\pm 1\}$.
Let $m > 0$ and let $\mt{X}$ be the matrix of a sample drawn from $D^{m}_X$. Let $\eta = \P[\lambdamin(\mt{X} \mt{X}^T) \geq m \gamma^2]$.
Then for all $\epsilon < 1/2 - \loss^*_\gamma(D)$, $m(\epsilon,\gamma,D,\eta/2) \geq \floor{m/2}$.
\end{theorem}
The proof of the theorem is immediate by combining \thmref{thm:shatterednotlearned} and \corref{cor:lambdam}.
\thmref{thm:inductive} generalizes the case of learning a linear separator
without a margin: If a sample of size $m$ is linearly independent with high
probability, then there is no hope of using $m/2$ points to predict the label
of the other points. The theorem extends this observation to the case of
learning with a margin, by requiring a stronger condition than just linear
independence of the points in the sample.
Recall that our upper-bound on the sample complexity from
\secref{sec:upper} is $\tilde{O}(k_{\gamma})$. We now define
the family of sub-Gaussian product distributions, and show that for this family, the lower bound that can be deduced from \thmref{thm:inductive} is also linear in $k_{\gamma}$.
\subsection{Sub-Gaussian Distributions}\label{sec:subg}
In order to derive a lower bound on distribution\hyp{}specific sample complexity in
terms of the covariance of $X \sim D_X$, we must assume that $X$ is not too
heavy-tailed. This is because for any data distribution there exists another distribution which
is almost identical and has the same sample complexity, but has arbitrarily
large covariance values. This can be achieved by mixing the original
distribution with a tiny probability for drawing a vector with a huge norm. We
thus restrict the discussion to multidimensional sub-Gaussian
distributions. This ensures light tails of the distribution in all directions,
while still allowing a rich family of distributions, as we presently see.
Sub-Gaussianity is defined for scalar random variables
as follows \citep[see, e.g.,][]{BuldyginKo98}.
\begin{definition}[Sub-Gaussian random variables]
A random variable $X \in \reals$ is \emph{sub-Gaussian with moment $B$}, for $B \geq 0$, if
\begin{equation*}
\forall t \in \reals, \quad \E[\exp(tX)]\leq \exp(t^2B^2 /2).
\end{equation*}
In this work we further say that $X$ is sub-Gaussian with
\emph{relative moment} $\rmom > 0$ if $X$ is sub-Gaussian with moment $\rho\sqrt{\E[X^2]}$, that is,
\[
\forall t \in \reals, \quad \E[\exp(tX)]\leq \exp(t^2\rmom^2\E[X^2] /2).
\]
\end{definition}
Note that a sub-Gaussian variable with moment $B$ and relative moment $\rho$ is also sub-Gaussian with moment $B'$ and relative moment $\rho'$ for any $B' \geq B$ and $\rho' \geq \rho$.
The family of sub-Gaussian distributions is quite extensive: For instance, it
includes any bounded, Gaussian, or Gaussian-mixture random variable with mean
zero. Specifically, if $X$ is a mean-zero Gaussian random variable, $X \sim N(0, \sigma^2)$,
then $X$ is sub-Gaussian with relative moment $1$ and the inequalities in the definition above
hold with equality. As another example, if $X$ is a
uniform random variable over $\{\pm b\}$ for some $b \geq 0$, then $X$ is sub-Gaussian with relative moment $1$, since
\begin{equation}\label{eq:bernoulli}
\E[\exp(tX)] = \half(\exp(tb) + \exp(-tb)) \leq \exp(t^2b^2/2) = \exp(t^2\E[X^2]/2).
\end{equation}
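The inequality in \eqref{eq:bernoulli} reduces to $\cosh(tb) \leq \exp(t^2b^2/2)$, which follows by comparing Taylor coefficients ($1/(2n)! \leq 1/(2^n n!)$); it can also be checked numerically, as in this quick sketch:

```python
import math

# Numerical check of cosh(t*b) <= exp(t^2 * b^2 / 2) on a grid of t values,
# i.e., the sub-Gaussian bound for X uniform on {+b, -b}.
b = 2.5
for t in (x / 10.0 for x in range(-50, 51)):
    assert math.cosh(t * b) <= math.exp(t ** 2 * b ** 2 / 2) * (1 + 1e-12)
```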
Let $\mt{B} \in \reals^{d\times d}$ be a symmetric PSD matrix.
A random vector $X \in \reals^d$ is a \emph{sub-Gaussian random vector} with moment matrix $\mt{B}$
if for all $u \in \reals^d$, $\E[\exp(\dotprod{u,X})] \leq \exp(\dotprod{\mt{B}u,u}/2)$.
The following lemma provides a useful connection between the trace of the sub-Gaussian moment matrix and the moment-generating function of the squared norm of the random vector.
The proof is given in \appref{app:sgvecmgf}.
\begin{lemma}\label{lem:sgvecmgf}
Let $X \in \reals^d$ be a sub-Gaussian random vector
with moment matrix $\mt{B}$.
Then for all $t \in (0,\frac{1}{4\lambdamax(\mt{B})}]$, $\E[\exp(t \norm{X}^2)] \leq \exp(2t \cdot \trace(\mt{B})).$
\end{lemma}
Our lower bound holds for the family of sub-Gaussian product distributions, defined as follows.
\begin{definition}[Sub-Gaussian product distributions]\label{def:indsubg}
A distribution $D_X$ over $\reals^d$ is a \emph{sub-Gaussian product
distribution} with moment $B$ and relative moment $\rmom$ if there exists some
orthonormal basis $a_1,\ldots,a_d \in \reals^d$, such that for $X \sim D_X$,
$\dotprod{a_i, X}$ are independent sub-Gaussian random
variables, each with moment $B$ and relative moment $\rmom$.
\end{definition}
Note that a sub-Gaussian product distribution has mean zero, thus its
covariance matrix is equal to its uncentered covariance matrix.
For any fixed $\rmom \geq 0$, we denote by $\dfamily_\rmom$ the family of all
sub-Gaussian product distributions with relative moment $\rmom$, in arbitrary
dimension. For instance, all multivariate Gaussian distributions and all
uniform distributions on the corners of a centered hyper-rectangle are in
$\dfamily_1$. All uniform distributions over a full centered hyper-rectangle are in
$\dfamily_{3/2}$. Note that if $\rmom_1 \leq \rmom_2$, $\dfamily_{\rmom_1}
\subseteq \dfamily_{\rmom_2}$.
We will provide a lower bound for all distributions in $\dfamily_\rmom$. This lower
bound is linear in the \fullkgname\ of the distribution, thus it matches the
upper bound provided in \corref{cor:upperbound}. The constants in the lower
bound depend only on the value of $\rmom$, which we regard as a
constant.
\subsection{A Sample-Complexity Lower Bound for Sub-Gaussian Product Distributions}\label{sec:lowerboundsg}
As shown in \secref{sec:lowerbound}, to obtain a sample complexity lower bound
it suffices to have a lower bound on the value of the smallest eigenvalue of a
random Gram matrix. The distribution of the smallest eigenvalue of a random
Gram matrix has been investigated under various assumptions. The cleanest
results are in the asymptotic case where the sample size and the dimension approach
infinity, the ratio between them approaches a constant, and the coordinates of
each example are identically distributed.
\begin{theorem}[{\citealt[Theorem 5.11]{BaiSi10}}]\label{thm:asym}
Let $\{\mt{X}_i\}_{i=1}^\infty$ be a series of matrices of sizes $m_i \times d_i$, whose entries are i.i.d.~random variables with mean zero, variance $\sigma^2$ and finite fourth moments. If $\lim_{i\rightarrow \infty}\frac{m_i}{d_i} = \beta < 1$, then
$\lim_{i\rightarrow \infty} \lambdamin(\frac{1}{d_i}\mt{X}_i\mt{X}_i^T) = \sigma^2(1-\sqrt{\beta})^2.$
\end{theorem}
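This limiting value is easy to reproduce numerically. The sketch below (matrix sizes, $\beta$ and the seed are arbitrary choices, assuming NumPy is available) compares the smallest eigenvalue of a single large Gaussian Gram matrix with the asymptotic prediction:

```python
import numpy as np

# One large random matrix with i.i.d. N(0, sigma^2) entries and m/d = beta < 1;
# lambda_min(X X^T / d) should be close to sigma^2 (1 - sqrt(beta))^2.
rng = np.random.default_rng(0)
sigma, beta, d = 1.0, 0.25, 4000
m = int(beta * d)
X = rng.normal(0.0, sigma, size=(m, d))
lam_min = np.linalg.eigvalsh(X @ X.T / d).min()
predicted = sigma ** 2 * (1.0 - np.sqrt(beta)) ** 2   # = 0.25 here
print(lam_min, predicted)
close = abs(lam_min - predicted) < 0.05   # finite-size deviation is small
```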
This asymptotic limit can be used to approximate an asymptotic lower bound on $m(\epsilon,\gamma,D)$,
if $D_X$ is a product distribution of i.i.d.~random variables with mean zero, variance $\sigma^2$, and finite fourth moment. Let $\mt{X} \in \reals^{m\times d}$ be the matrix of a sample of size $m$ drawn from $D_X$. We can find
$m = m_\circ$ such that $\lambda_{m_\circ}(\mt{X}\mt{X}^T) \approx \gamma^2m_\circ$, and use \thmref{thm:inductive} to conclude that $m(\epsilon,\gamma,D) \geq m_\circ/2$. If $d$ and $m$ are
large enough, we have by \thmref{thm:asym} that for $\mt{X}$ drawn from $D_X^m$:
\[
\lambdamin(\mt{X}\mt{X}^T) \approx d \sigma^2 (1-\sqrt{m/d})^2 = \sigma^2(\sqrt{d}-\sqrt{m})^2.
\]
Solving the equality $\sigma^2(\sqrt{d}-\sqrt{m_\circ})^2=m_\circ\gamma^2$ we get
$m_\circ = d/(1+\gamma/\sigma)^2$. The \fullkgname\ for $D_X$ is $k_\gamma
\approx d/(1+\gamma^2/\sigma^2)$, thus $\tfrac{1}{2}
k_\gamma \leq
m_\circ \leq k_\gamma$. In this case, then, the sample complexity lower bound is indeed
the same order as $k_\gamma$, which controls also the upper bound in \corref{cor:upperbound}. However,
this is an asymptotic analysis, which holds for a highly limited set of distributions.
Moreover, since \thmref{thm:asym} holds asymptotically for each
distribution separately, we cannot use it to deduce a uniform finite-sample
lower bound for families of distributions.
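The arithmetic of the back-of-the-envelope calculation above can be checked mechanically. In the sketch below (parameter values are arbitrary), the claimed solution $m_\circ = d/(1+\gamma/\sigma)^2$ satisfies the equality exactly and sits between $k_\gamma/2$ and $k_\gamma$:

```python
import math

# Spot-check of the algebra: m_o = d / (1 + gamma/sigma)^2 solves
# sigma^2 (sqrt(d) - sqrt(m))^2 = m gamma^2, and with
# k_gamma ~ d / (1 + gamma^2/sigma^2) we have k_gamma/2 <= m_o <= k_gamma.
checks = []
for sigma in (0.5, 1.0, 2.0):
    for gamma in (0.1, 1.0, 5.0):
        d = 10_000.0
        m_o = d / (1.0 + gamma / sigma) ** 2
        lhs = sigma ** 2 * (math.sqrt(d) - math.sqrt(m_o)) ** 2
        checks.append(abs(lhs - m_o * gamma ** 2) < 1e-6 * d)
        k_gamma = d / (1.0 + gamma ** 2 / sigma ** 2)
        checks.append(0.5 * k_gamma <= m_o <= k_gamma)
ok = all(checks)
print(ok)  # -> True
```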
For our analysis we require \emph{finite-sample} bounds for the
smallest eigenvalue of a random Gram-matrix.
\citet{RudelsonVe09,RudelsonVe08} provide such
finite-sample lower bounds for distributions which are products
of identically distributed sub-Gaussians. In \thmref{thm:smallesteigwhpsg} below we
provide a new and more general result, which holds for any sub-Gaussian product distribution. The proof of \thmref{thm:smallesteigwhpsg}
is provided in \appref{app:smallesteigwhpsg}. Combining \thmref{thm:smallesteigwhpsg} with \thmref{thm:inductive} above
we prove the lower bound, stated in \thmref{thm:lowerboundsg} below.
\begin{theorem}\label{thm:smallesteigwhpsg}
For any $\rmom> 0$ and $\delta \in (0,1)$ there are $\beta > 0$ and \mbox{$C > 0$} such that the following holds.
For any $D_X \in \dfamily_\rmom$ with covariance matrix $\Sigma \leq I$,
and for any $m \leq \beta \cdot \trace(\Sigma) - C$,
if $\mt{X}$ is the $m\times d$ matrix of a sample drawn from $D_X^m$, then
\[
\P[\lambdamin(\mt{X} \mt{X}^T) \geq m] \geq \delta.
\]
\end{theorem}
\begin{theorem}[Sample complexity lower bound for distributions in $\dfamily_\rmom$]
\label{thm:lowerboundsg}
For any $\rmom >0$ there are constants $\beta > 0,C\geq 0$ such that for any $D$ with $D_X \in \dfamily_\rmom$, for any $\gamma > 0$ and for any $\epsilon < \frac{1}{2} - \loss^*_\gamma(D)$,
\[
m(\epsilon,\gamma,D,1/4) \geq \beta k_\gamma(D_X)-C.
\]
\end{theorem}
\begin{proof}
Assume w.l.o.g. that the orthonormal basis $a_1,\ldots,a_d$ of independent sub-Gaussian
directions of $D_X$, defined in \defref{def:indsubg}, is the natural basis $e_1,\ldots,e_d$. Define $\lambda_i = \E_{X\sim D_X}[X[i]^2]$,
and assume w.l.o.g. $\lambda_1 \geq \ldots \geq \lambda_d > 0$.
Let $\mt{X}$ be the $m\times d$ matrix of a sample drawn from $D_X^m$. Fix $\delta \in (0,1)$, and let $\beta$ and $C$ be the constants for $\rmom$ and $\delta$ in \thmref{thm:smallesteigwhpsg}.
Throughout this proof we abbreviate $k_\gamma \triangleq k_\gamma(D_X)$.
Let $m \leq \beta (k_\gamma-1) - C$.
We would like to use \thmref{thm:smallesteigwhpsg} to bound $\lambdamin(\mt{X}\mt{X}^T)$ with high probability, so that \thmref{thm:inductive} can be applied to get the desired lower bound. However, \thmref{thm:smallesteigwhpsg} holds only if $\Sigma \leq I$. Thus we split into two cases---one in which the dimensionality controls the lower bound, and one in which the norm controls it. The split is based on the value of $\lambda_{k_\gamma}$.
\begin{itemize}
\item Case I: Assume $\lambda_{k_\gamma} \geq \gamma^2$. Then $\forall i\in[k_\gamma],\lambda_i \geq \gamma^2$.
By our assumptions on $D_X$, for all $i\in[d]$ the random variable $X[i]$ is sub-Gaussian
with relative moment $\rmom$. Consider the random variables $Z[i] = X[i]/\sqrt{\lambda_i}$ for $i \in [k_\gamma]$. $Z[i]$ is also sub-Gaussian with relative moment $\rmom$, and $\E[Z[i]^2] = 1$.
Consider the product distribution of $Z[1],\ldots,Z[k_\gamma]$,
and let $\Sigma'$ be its covariance matrix. We have $\Sigma' = I_{k_\gamma}$,
and $\trace(\Sigma') = k_\gamma$.
Let $\mt{Z}$ be
the matrix of a sample of size $m$ drawn from this distribution. By \thmref{thm:smallesteigwhpsg},
$\P[\lambdamin(\mt{Z} \mt{Z}^T)\geq m] \geq \delta$, which is equivalent to
\[
\P[\lambdamin(\mt{X} \cdot \diag(1/\lambda_1,\ldots,1/\lambda_{k_\gamma},0,\ldots,0) \cdot \mt{X}^T)\geq m] \geq \delta.
\]
Since $\forall i\in[k_\gamma],\lambda_i \geq \gamma^2$, we have $\P[\lambdamin(\mt{X} \mt{X}^T)\geq m\gamma^2] \geq \delta$.
\item Case II:
Assume $\lambda_{k_\gamma} < \gamma^2$. Then $\lambda_i < \gamma^2$ for all $i \in \{k_\gamma,\ldots,d\}$.
Consider the random variables $Z[i] = X[i]/\gamma$ for $i \in \{k_\gamma,\ldots,d\}$. $Z[i]$ is sub-Gaussian with relative moment $\rmom$
and $\E[Z[i]^2] \leq 1$.
Consider the product distribution of $Z[k_\gamma],\ldots,Z[d]$,
and let $\Sigma'$ be its covariance matrix. We have $\Sigma' < I_{d-k_\gamma+1}$.
By the minimality in \eqref{eq:kgammamin} we also have $\trace(\Sigma') = \frac{1}{\gamma^2}\sum_{i=k_\gamma}^d \lambda_i \geq k_\gamma-1$.
Let $\mt{Z}$ be
the matrix of a sample of size $m$ drawn from this product distribution. By
\thmref{thm:smallesteigwhpsg},
$\P[\lambdamin(\mt{Z} \mt{Z}^T)\geq m] \geq \delta$. Equivalently,
\[
\P[\lambdamin(\mt{X} \cdot \diag(0,\ldots,0,1/\gamma^2,\ldots,1/\gamma^2) \cdot \mt{X}^T)\geq m] \geq \delta,
\]
therefore $\P[\lambdamin(\mt{X} \mt{X}^T)\geq m\gamma^2] \geq \delta$.
\end{itemize}
In both cases $\P[\lambdamin(\mt{X}\mt{X}^T)\geq m\gamma^2] \geq \delta$. This holds for any $m \leq \beta (k_\gamma-1) -C$, thus by \thmref{thm:inductive} $m(\epsilon, \gamma,D, \delta/2) \geq \floor{(\beta(k_\gamma-1)-C)/2}$
for $\epsilon < 1/2-\loss_\gamma^*(D)$.
We finalize the proof by setting $\delta = \frac{1}{2}$ and adjusting $\beta$ and $C$.
\end{proof}
\section{On the Limitations of the Covariance Matrix}\label{sec:limitations}
We have shown matching upper and lower bounds for the sample complexity of
learning with MEM, for any sub-Gaussian product
distribution with a bounded relative moment. This shows that the
margin-adapted dimension fully characterizes the sample complexity of learning
with MEM for such distributions. What properties of a
distribution play a role in determining the sample complexity for general distributions?
In the following theorem we show
that these properties must include more than the covariance matrix of the
distribution, even when assuming sub-Gaussian tails and bounded relative moments.
\begin{theorem}\label{thm:covariance}
For any integer $d > 1$, there
exist two distributions $D$ and $P$ over $\reals^d \times \{\pm 1\}$
with identical covariance matrices, such that for any $\epsilon,\delta \in (0,\frac14)$, $m(\epsilon, 1, P, \delta) \geq \Omega(d)$ while $m(\epsilon, 1, D,\delta) \leq \ceil{\log_2(1/\delta)}$. Both $D_X$ and $P_X$
are sub-Gaussian random vectors, with a relative moment of $\sqrt{2}$ in all directions.
\end{theorem}
\begin{proof}
Let $D_a$ and $D_b$ be distributions over $\reals^d$ such that $D_a$ is uniform over $\{\pm 1\}^d$
and $D_b$ is uniform over $\{\pm 1\}\times\{0\}^{d-1}$. Let $D_X$ be a balanced mixture of $D_a$ and $D_b$. Let $P_X$ be uniform over $\{\pm 1\}\times\{\frac{1}{\sqrt{2}}\}^{d-1}$.
For both $D$ and $P$, let $\P[Y = \dotprod{e_1, X}] = 1$.
The covariance matrix of $D_X$ and $P_X$ is $\diag(1,\half, \ldots, \half)$, thus $k_1(D_X) = k_1(P_X) \geq \Omega(d)$.
By \eqref{eq:bernoulli}, $P_X,D_a$ and $D_b$ are all sub-Gaussian product distributions with relative moment $1$, thus also with relative moment $\sqrt{2} > 1$.
The projection of $D_X$ along any direction $u \in \reals^d$ is sub-Gaussian with relative moment $\sqrt{2}$ as well, since
\begin{align*}
&\E_{X \sim D_X}[\exp(\dotprod{u,X})] = \half(\E_{X \sim D_a}[\exp(\dotprod{u,X})] + \E_{X \sim D_b}[\exp(\dotprod{u,X})]) \\
&=\half(\prod_{i\in[d]}(\exp(u_i)+\exp(-u_i))/2 + (\exp(u_1)+\exp(-u_1))/2) \\
&\leq \half (\prod_{i\in[d]}\exp(u_i^2/2) + \exp(u_1^2/2))\leq \exp(\norm{u}^2/2) \leq \exp((\norm{u}^2+u_1^2)/2)\\
&=\exp(\E_{X\sim D_X} [\dotprod{u, X}^2]).
\end{align*}
For $P$ we have by \thmref{thm:lowerboundsg} that for any $\epsilon \leq \frac14$, $m(\epsilon, 1, P,\frac14) \geq \Omega(k_1(P_X)) \geq \Omega(d)$.
In contrast, any MEM algorithm $\cA_1$ will output the correct separator for $D$
whenever the sample has at least one point drawn from $D_b$. This is because the separator $e_1$ is the only $w\in \ball$ that classifies this point with zero $1$-margin errors. Such a point exists in a sample of size $m$ with probability $1-2^{-m}$. Therefore $\loss_0(\cA_1,D,m,1/2^m) = 0$.
It follows that for all $\epsilon > 0$, $m(\epsilon,1,D,\delta) \leq \ceil{\log_2(1/\delta)}$.
\end{proof}
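The last step of the proof can be illustrated by simulation. The sketch below (sample size and trial count are arbitrary) estimates the probability that a size-$m$ sample from the mixture $D_X$ contains at least one point from $D_b$, which should be $1 - 2^{-m}$:

```python
import random

# Monte Carlo illustration (not from the paper): each sampled point comes
# from D_b with probability 1/2, so a size-m sample contains at least one
# D_b point with probability 1 - 2^(-m).
random.seed(0)
m, trials = 5, 200_000
hits = 0
for _ in range(trials):
    if any(random.random() < 0.5 for _ in range(m)):
        hits += 1
est = hits / trials
exact = 1.0 - 2.0 ** -m    # = 0.96875 for m = 5
close = abs(est - exact) < 0.005
print(est, exact)
```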
\section{Conclusions}\label{sec:conclusions}
\corref{cor:upperbound} and \thmref{thm:lowerboundsg} together provide a tight characterization of the sample complexity of any sub-Gaussian product distribution with a bounded relative moment. Formally, fix $\rmom > 0$. For any $D$ such that $D_X \in \dfamily_\rmom$, and for any $\gamma > 0$ and $\epsilon \in (0,\frac{1}{2} - \loss^*_\gamma(D))$
\begin{equation}\label{eq:doublebound}
\Omega(k_\gamma(D_X)) \leq m(\epsilon,\gamma,D) \leq \tilde{O}\left(\frac{k_{\gamma}(D_X)}{\epsilon^2}\right).
\end{equation}
The upper bound holds uniformly for all distributions, and the constants in the lower bound depend only on $\rmom$. This result shows that the true sample complexity of learning each of these distributions with MEM is characterized by the \fullkgname.
An interesting conclusion can be drawn as to the influence of the conditional distribution of labels $D_{Y|X}$: Since \eqref{eq:doublebound} holds for \emph{any} $D_{Y|X}$, the effect of the direction of the best separator on the sample complexity is bounded, even for highly non-spherical distributions.
We note that the upper bound that we have proved involves logarithmic factors which might not be necessary. There are upper bounds that depend on the margin alone and on the dimension alone without logarithmic factors. On the other hand, in our bound, which combines the two quantities, there is a logarithmic dependence which stems from the margin component of the bound. It might be possible to tighten the bound and remove the logarithmic dependence.
\eqref{eq:doublebound} can be used to easily characterize the sample complexity behavior for interesting distributions, to compare $L_2$ margin minimization to other learning methods, and to improve certain active learning strategies. We elaborate on each of these applications in the following examples.
\begin{example}[Gaps between $L_1$ and $L_2$ regularization in the presence of
irrelevant features] \hspace{0.3in} \linebreak[4]
\citet{Ng04} considers learning a single relevant
feature in the presence of many irrelevant features, and compares
using $L_1$ regularization and $L_2$ regularization. When $\norm{X}_{\infty} \leq 1$,
upper bounds on learning with $L_1$ regularization guarantee a sample
complexity of $O(\ln(d))$ for an $L_1$-based learning rule
\citep{Zhang02}. In order to compare this with the sample complexity of
$L_2$ regularized learning and establish a gap, one must use a {\em
lower bound} on the $L_2$ sample complexity. The argument provided by Ng
actually assumes scale-invariance of the learning rule, and is
therefore valid only for {\em unregularized} linear learning. In contrast,
using our results we can easily establish a lower
bound of $\Omega(d)$ for many specific distributions with
a bounded $\norm{X}_{\infty}$ and $Y=\sign(X[i])$ for some $i$.
For instance, if each coordinate is a bounded independent sub-Gaussian random variable with a bounded relative moment, we have $k_1 = \ceil{d/2}$ and \thmref{thm:lowerboundsg}
implies a lower bound of $\Omega(d)$ on the $L_2$ sample complexity.
\end{example}
\begin{example}[Gaps between generative and discriminative learning for a Gaussian mixture]
Let there be two classes, each drawn from a unit-variance spherical
Gaussian in $\reals^d$ with a large distance $2v
\gg 1$ between the class means, such that $d \gg v^4$. Then $\P_D[X|Y=y]
= \cN(y v\cdot e_1,I_d)$, where $e_1$ is a unit vector in
$\reals^d$. For any $v$ and $d$, we have $D_X \in \dfamily_1$. For
large values of $v$, we have extremely low margin error at $\gamma =
v/2$, and so we can hope to learn the classes by looking for a
large-margin separator. Indeed, we can calculate $k_\gamma =
\ceil{d/(1+\frac{v^2}{4})}$, and conclude that the required sample complexity is $\tilde{\Theta}(d/v^2)$. Now consider a generative
approach: fitting a spherical Gaussian model for each class. This
amounts to estimating each class center as the empirical average of
the points in the class, and classifying based on the nearest
estimated class center. It is possible to show that for any constant
$\epsilon>0$, and for large enough $v$ and $d$, $O(d/v^4)$ samples are
enough in order to ensure an error of $\epsilon$. This establishes a
rather large gap of $\Omega(v^2)$ between the sample complexity of the
discriminative approach and that of the generative one.
\end{example}
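The closed-form value of $k_\gamma$ quoted in this example can be reproduced from the spectrum of the uncentered covariance matrix. The sketch below assumes the definition $k_\gamma = \min\{k : \sum_{i>k}\lambda_i \le k\gamma^2\}$ (not restated in this section, but consistent with the minimality property invoked in the proof of the lower bound), with the $\lambda_i$ sorted in decreasing order:

```python
import math

# Assumed definition (see lead-in): k_gamma is the smallest k such that the
# tail sum of eigenvalues beyond the k largest is at most k * gamma^2.
def k_gamma(eigs, gamma):
    eigs = sorted(eigs, reverse=True)
    tail = sum(eigs)                      # tail = sum of eigs[k:] at step k
    for k in range(len(eigs) + 1):
        if tail <= k * gamma ** 2:
            return k
        if k < len(eigs):
            tail -= eigs[k]
    return len(eigs)

# Unit-variance spherical mixture with means +/- v e_1: uncentered second
# moments are (1 + v^2, 1, ..., 1); at gamma = v/2 the text's formula gives
# ceil(d / (1 + v^2/4)).
d, v = 1000, 6.0
eigs = [1.0 + v ** 2] + [1.0] * (d - 1)
k1 = k_gamma(eigs, v / 2.0)
print(k1, math.ceil(d / (1.0 + v ** 2 / 4.0)))  # both equal 100 here
```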
\begin{example}[Active learning] In active learning, there is an abundance of unlabeled examples, but
labels are costly, and the active learning algorithm needs to decide which labels to query based
on the labels seen so far. A popular approach to active learning involves estimating
the current set of possible classifiers using sample complexity upper bounds \citep[see, e.g.,][]{BalcanBeLa09,BeygelzimerHsLaZh10b}. Without any distribution-specific information, only general distribution-free upper bounds can be used. However, since there is an abundance of unlabeled examples,
the active learner can use these to estimate tighter distribution-specific upper bounds. In the case of linear classifiers, the \fullkgname\ can be calculated from the uncentered covariance matrix of the distribution, which can be easily estimated from unlabeled data. Thus, our sample complexity upper bounds can be used to improve the active learner's label complexity. Moreover, the lower bound suggests that any further improvement of such active learning strategies would require information beyond the
distribution's covariance matrix.
\end{example}
To summarize, we have shown that the true sample complexity of large-margin
learning of each of a rich family of distributions is characterized by the
\fullkgname. Characterizing the true sample complexity allows a better
comparison between this learning approach and other algorithms, and has many
potential applications. The challenge of
characterizing the true sample complexity extends to any distribution and any
learning approach. \thmref{thm:covariance} shows that properties beyond the covariance matrix must be taken into account for general distributions.
We believe that obtaining answers to these questions is of
great importance, both to learning theory and to learning applications.
\acks{The authors thank Boaz Nadler for many insightful discussions.
During part of this research, Sivan Sabato was supported by the Adams Fellowship Program of the Israel Academy of Sciences and Humanities. This work is partly supported by the Gatsby Charitable Foundation, The DARPA MSEE project, the Intel ICRI-CI center, and the Israel Science Foundation center of excellence grant.}
Knowledge of the mass loss rates ($\dot{M}$) of red supergiants (RSGs) is fundamentally important for understanding stellar evolution. Changing $\dot{M}$\ has effects on the subsequent evolution of the star, as well as the supernova (SN) type and eventual remnant \citep[e.g.][]{maeder1981most,chiosi1986evolution}.
When a RSG reaches the end of it's lifetime, it explodes as a Type IIP core-collapse supernova (CCSN), of which there have been 7 confirmed cases of RSGs as progenitors, the most recent being the 12.5 $\pm$ 1.2 M$_{\odot}$ progenitor to SN 2012aw \citep{fraser2016disappearance}.Theory predicts that the RSG progenitor stars of Type IIP supernovae can be anywhere in the range of 8.5 to 25 M$_{\odot}$ \citep[e.g.][]{meynet2003stellar}, but so far it seems the stars which explode are of a relatively low mass (15M$_{\odot}$), with no progenitors appearing in the higher end of the predicted mass range \citep[between 17 and 25 M$_{\odot}$][]{smartt2009death,smartt2015observational}.
Are all RSGs exploding as Type IIP SNe? Or does the extreme mass loss affect the final evolution of these massive stars? Stellar evolution models currently rely on observational or theoretical mass loss rate prescriptions \citep[e.g.][]{de1988mass,reimers1975circumstellar,van2005empirical,nieuwenhuijzen1990parametrization,feast1992ch}. A potential weakness of these prescriptions is that they have relied on observations of field stars, not coeval stars, leaving parameters of initial mass ($M_{\rm initial}$) and metallicity ($Z$) unconstrained which could potentially explain the large dispersions in the observed trends. \cite{georgy2015mass} discuss the implications of extreme mass loss that could occur at the end of RSGs lives \citep[see also][]{georgy2012yellow}. In this paper it was found that high mass loss rates of RSGs can lead to a blueward movement in the Hertzsprung-Russel diagram (HRD), occuring for RSGs more massive than 25M$_{\odot}$ (non-rotating models) and 20M$_{\odot}$ (rotating models). \citet{georgy2015mass} find that this blueward motion allows to fit the observed maximum mass of observed type IIP SNe progenitors. They also express the need for better determination of RSG mass-loss rates to improve the stellar modeling at this evolved stage of the star's life.
In addition to the effect on RSG evolution, it has been claimed that circumstellar dust in the wind could, in part, provide a solution to the missing high mass RSG progenitors. It is known that RSG form dust in their winds \citep[e.g][]{de2008red} and infrared interferometry has shown that this dust can lie very close to the star itself \citep{danchi1994characteristics}. \cite{walmswell2012circumstellar} have shown that by failing to take into account the additional extinction resulting from RSG winds, the luminosity of the most massive red supergiants at the end of their lives is underestimated. Mass estimates are based on mass-luminosity relations meaning that extra intrinsic extinction close to RSG progenitors would give reduced luminosities. While \cite{smartt2009death} did provide extinction estimates of nearby supergiants and of the SN itself throughout their paper, it is suggested that these could be underestimates \citep{walmswell2012circumstellar}. It is expected that this dust would then be destroyed in the SN explosion and hence not be seen in the SN spectra.
In this paper we measure the amount of circumstellar material and estimate mass-loss rates, to investigate whether this is correlated with how close the star is to supernova. We model the mid-IR excess of 19 RSGs in stellar cluster NGC2100, each which we assume has the same initial mass and composition, but where the stars are all at slightly different stages of evolution. This allows us to investigate the $\dot{M}$\ behaviour with evolution of the RSG.
We begin in Section 2 by describing our dust shell models and choice of input parameters. In Section 3 we discuss applying this to the stars in cluster NGC 2100 and the results we derive from our models. in Section 4 we discuss our results in terms of RSG evolution and as progenitors.
\section{Dust shell models}
The models used in this project were created using DUSTY \citep{ivezic1999dusty}. Stars surrounded by circumstellar dust have their radiation absorbed/re-emitted by the dust particles, changing the output spectrum of the star. DUSTY solves the radiative transfer equation for a star obscured by a spherical dust shell of a certain optical depth ($\tau_V$, optical depth at 0.55 $\micron$), with inner dust temperature ($T_{\rm in}$) at the innermost radius ($R_{\rm in}$). Below we describe our choices for the model input parameters and our fitting methodology.
\subsection{Model parameters}
\subsubsection{Dust composition}
It is necessary to define a dust grain composition when creating models with DUSTY as this determines the extinction efficiency Q$_\lambda$, and hence how the dust shell will reprocess the input spectral energy distribution (SED). Observations of RSGs confirm the dust shells are O-rich, indicated by the presence of mid-IR spectral features at 12 and 18$\micron$ known to be caused by the presence of silicates. We opted for O-rich silicate dust as described by \cite{draine1984optical}. Ossenkopf 'warm' and 'cold' silicates \citep{ossenkopf1992constraints} were also considered, resulting in only small changes to the output flux. The differences in flux between the O-rich dust types were found to be smaller than the errors on our photometry. We therefore concluded that our final results were insensitive to which O-rich dust type we chose.
\subsubsection{Grain size, a}
DUSTY also requires a grain size distribution to be specified. \cite{kochanek2012absorption} used DUSTY to model the spectrum for the RSG progenitor of SN 2012aw, opting for the MRN power law with sharp boundaries \citep[$dn/da \propto a^{-3.5}$ for 0.005 $\micron$ $<$ a $<$ 0.25 $\micron$,][]{mathis1977size}. This power law is more commonly associated with dust grains in the interstellar medium. \cite{van2005empirical} also used DUSTY to model dust enshrouded RSGs, choosing a constant grain size of 0.1$\micron$. However, it is also stated in that paper that the extinction of some of the most dust enshrouded M-type stars was better modelled when a smaller grain size, 0.06$\micron$, was used, or a modified MRN distribution between 0.01 and 0.1$\micron$. \cite{groenewegen2009luminosities} also investigated the effect of grain size on the output spectrum, finding a grain size of 1$\micron$ fit reasonably well to the observations of O-rich RSG stars in the SMC and LMC. Recent observations of VY Canis Majoris \citep{scicluna2015large}, a nearby dust-enshrouded RSG, estimated the dust surrounding the star to be of a constant grain size of 0.5$\micron$. This is in line with previous observations such as those by \cite{smith2001asymmetric}, who found the grain size to be between 0.3 and 1$\micron$. Taking all this into account, we created models for the MRN power law as well as constant grain sizes of 0.1, 0.2, 0.3, 0.4 and 0.5$\micron$, choosing 0.3$\micron$ as our fiducial grain size. However, as we are studying the stars' emission at wavelengths much greater than the grain size ($\lambda >$ 3$\micron$), the scattering and absorption efficiencies of the dust are largely independent of the grain size. This is discussed further in Section 3.4.
\subsubsection{Density distribution}
Here, we assumed a steady state density distribution falling off as $r^{-2}$ in the entire shell with a constant terminal velocity. We do not know the outflow velocity for the RSGs in our sample, so we rely on previous measurements to estimate this value. \cite{van2001circumstellar} and \cite{richards1998maser} both used maser emission to map the dust shells of other RSGs, finding $v_{\infty}$ values consistent with $\sim$20-30 km/s for the stars in their samples. We opted for a uniform rate of 25 $\pm$ 5 km/s for the outflow wind. Radiatively driven wind theory suggests that $v_{\infty}$ scales with luminosity, $v_\infty$ $\propto$ L$^{1/4}$, though this is negligible for the luminosity range we measure compared to our errors on luminosity. We specify that the shell extends to 10000 times its inner radius, such that the dust density is low enough at the outer limit that it has no effect on the spectrum. We also require a gas-to-dust ratio to be input, $r_{gd}$. It has been shown that this quantity scales with metallicity \citep{marshall2004asymptotic}, so while the gas-to-dust ratio for RSGs in the Milky Way is around 1:200, for RSGs in the more metal poor LMC the value is higher, around 1:500. We also assumed a grain bulk density, $\rho_d$, of 3 g cm$^{-3}$. The values adopted for $r_{gd}$ and $\rho_d$ will have an effect on the absolute values of $\dot{M}$. It is likely that changes in these properties would have little to no effect on the relative $\dot{M}$\ values and the correlation with luminosity, but the absolute value of the relation may change.
The calculation of $\dot{M}$\ requires the calculation of $\tau_\lambda$ between $R_{\rm in}$ and $R_{\rm out}$
\begin{equation} \tau_\lambda = \int\limits_{R_{\rm in}}^{R_{\rm out}} \pi a^2 Q_\lambda n(r) dr
\end{equation}
for a certain number density profile, $n(r) = n_0 (R_{\rm in}/r) ^2$, where $n_0$ is the number density at the inner radius, $R_{\rm in}$, and extinction efficiency, $Q_\lambda$. We can rearrange to find the mass-density $\rho_0$ at $R_{\rm in}$,
\begin{equation}
\rho_0 = \frac{4}{3}\frac{\tau_\lambda \rho_d a}{Q_\lambda R_{in}}
\end{equation}
By substituting this into the mass continuity equation ($\dot{M}=4 \pi r^2 \rho(r) v_\infty$) a mass loss rate can be calculated,
\begin{equation} \dot{M} = \frac{16\pi}{3} \frac{R_{in} \tau_{\lambda} \rho_d a v_\infty}{Q_{\lambda}}r_{gd}
\end{equation}
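As a sketch of how the final expression for $\dot{M}$\ above translates into a number, the following evaluates it in cgs units. All numerical values are illustrative placeholders in the range of the quantities quoted in this section ($\tau$, $Q_\lambda$ and $R_{\rm in}$ in particular are not fitted parameters from this work):

```python
import math

# Placeholder inputs (assumptions of this sketch): R_in ~ 1e15 cm, tau = 0.1,
# rho_d = 3 g/cm^3, a = 0.3 micron, v_inf = 25 km/s, Q_lambda = 1,
# r_gd = 500 (LMC-like gas-to-dust ratio).
M_SUN_G = 1.989e33      # solar mass in grams
SEC_PER_YR = 3.156e7    # seconds per year

def mass_loss_rate(R_in, tau, rho_d, a, v_inf, Q, r_gd):
    """Mdot = (16 pi / 3) * R_in * tau * rho_d * a * v_inf * r_gd / Q  [g/s]."""
    return (16.0 * math.pi / 3.0) * R_in * tau * rho_d * a * v_inf * r_gd / Q

mdot_gs = mass_loss_rate(R_in=1e15, tau=0.1, rho_d=3.0, a=0.3e-4,
                         v_inf=25e5, Q=1.0, r_gd=500.0)
mdot_msun_yr = mdot_gs * SEC_PER_YR / M_SUN_G
print(f"{mdot_msun_yr:.2e} Msun/yr")   # a few 1e-6 Msun/yr for these inputs
```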
Our choice of density distribution differs from that used in other similar work, for example \cite{shenoy2016searching}, who performed a similar study on the red supergiants $\mu$ Cep and VY CMa. By adopting a constant $T_{\rm in}$ value of 1000K and allowing the density exponent to vary, \cite{shenoy2016searching} found that the best fits were obtained by adopting exponents < 2, and hence concluded that $\dot{M}$\ decreases over the lifetime of the stars. In Section 4 we show that this can be reconciled by fixing the density exponent, q=2, and allowing $T_{\rm in}$ to vary. While 1200K is the commonly adopted temperature for silicate dust sublimation, there are many observations in the literature that suggest dust may begin to form at lower $T_{\rm in}$, and hence larger radii. There is interferometric data supporting the case for RSGs having large dust free cavities, for example \cite{ohnaka2008spatially}, who used $N$-band spectro-interferometric observations to spatially resolve the dust envelope around the LMC RSG WOH G64. \cite{sargent2010mass} used radiative transfer models of dust shells around two LMC AGB stars, finding best-fit models with lower $T_{\rm in}$ values of 430K and 900K. These values are comparable to previous determinations of $T_{\rm in}$ for O-rich stars from mid-IR fitting similar to the work presented here \citep[e.g.][]{bedijn1987dust,schutte1989theoretical,van2005empirical}, suggesting $T_{\rm in}$ often varies from the hot dust sublimation temperature of 1000-1200K.
\subsubsection{Sensitivity to $T_{eff}$}
DUSTY requires an input SED to illuminate the dust shell, so that the light can be reprocessed and re-emitted. The SEDs we use are synthesised from MARCS model atmospheres \citep{gustafsson2008grid} using TURBOSPECTRUM \citep{plez2012turbospectrum}. We opted for typical RSG parameters (log(g)=0.0, microturbulent velocity 4km/s) and an LMC-like metallicity of [Z]=-0.3, though the precise value of these parameters are relatively inconsequential to the morphology of the SED. The most important parameter is the stellar effective temperature, $T_{\rm eff}$. \cite{patrick2016chemistry} used KMOS spectra of 14 RSGs in NGC2100 (of which 13 are analysed in this paper), finding the average $T_{\rm eff}$\ to be 3890 $\pm$ 85 K. This is consistent with the temperature range observed for RSGs in the LMC by \cite{davies2013temperatures}, who found the average $T_{\rm eff}$ of a sample of RSGs in the LMC to be 4170 $\pm$ 170K, by using VLT+XSHOOTER data and fitting this to line-free continuum regions of SEDs. In this study we have opted for a fiducial SED of $T_{\rm eff}$ = 3900K in line with these findings. We also checked how sensitive our results were to this choice of SED temperature by re-running the analysis with stellar SEDs $\pm$ 300K of our fiducial SED, fully encompassing the range observed by \cite{patrick2016chemistry}. We found that the different SEDs reproduced the mid-IR excess, and therefore the inferred $\dot{M}$, almost identically with very small errors $<$ 10\%. Different $T_{\rm eff}$ values do however affect the bolometric correction and therefore the $L_{\rm bol}$, leading to errors of $\sim$0.14dex on our luminosity measurements.
\subsubsection{Departure from spherical symmetry}
Observations of RSG nebulae are often clumpy, rather than spherically symmetric \citep[e.g.][]{scicluna2015large,o2015alma,humphreys2007three}. We investigated the effect of clumped winds by comparing our 1D models with those from MOCASSIN \citep{ercolano2003mocassin,ercolano2005dusty,ercolano2008x}, a code which solves the radiative transfer equation in 3D. We found that clumping has no effect up to a filling factor of 50. As long as the dust is optically thin there is no change in the output spectrum.
\subsubsection{$T_{\rm in}$ and $\tau_V$}
Finally, DUSTY also allows inner temperature, T$_{\rm in}$, and the optical depth $\tau_V$ to be chosen. $T_{\rm in}$ defines the temperature of the inner dust shell (and hence it's position). The optical depth determines the dust shell mass. As these parameters are unconstrained, in this study we have allowed them to vary until the fit to the data is optimised. This fitting methodology is described in the next subsection.
\subsection{Fitting methodology}
We first computed a grid of dust shell models spanning a range of inner temperatures and optical depths. For each model we then computed synthetic WISE and Spitzer photometry by convolving the model spectrum with the relevant filter profiles. This synthetic model photometry was compared to each star's mid-IR photometry from WISE, IRAC and MIPS. The grid spanned $\tau_V$ values of 0 - 1.3 with 50 grid points, and inner temperature values from 100K to 1200K in steps of 100K. By using ${\chi^2}$ minimisation we determined the best fitting model to the sample SED.
\begin{equation}
\chi^2 =\sum \frac{ (O_{i}-E_{i})^2 }{\sigma_i^2}
\end{equation}
where $O_{i}$ is the observed photometry, $E_{i}$ is the model photometry, $\sigma_i^2$ is the variance, and $i$ denotes the filter. In this case, the model photometry provides the ``expected'' data points. The best-fitting model is that which produces the lowest $\chi^2$.
To account for systematic errors we applied a blanket error of 10\% to our observations. The errors on our best-fitting model parameters were determined from all models within $\chi^2_{\rm min}+10$. This limit was chosen so that the stars with the lowest measured $\dot{M}$, which were clearly consistent with non-detections, would have $\dot{M}$\ values consistent with 0 (or upper limits only).
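The grid search and error estimate described above can be sketched as follows. This is a minimal illustration only: the model photometry here is a random placeholder standing in for the DUSTY grid, and the band count and fluxes are arbitrary.

```python
import numpy as np

def chi2(obs, err, model):
    """Chi^2 between observed and model photometry (as in the equation above)."""
    return np.sum((obs - model) ** 2 / err ** 2)

# Grid axes matching the text: 50 optical depths in 0-1.3, T_in in 100-1200 K.
tau_grid = np.linspace(0.0, 1.3, 50)
tin_grid = np.arange(100, 1300, 100)

rng = np.random.default_rng(0)
n_bands = 8
obs = rng.uniform(10, 100, n_bands)   # stand-in observed fluxes (mJy)
err = 0.10 * obs                      # blanket 10% systematic error

best = (np.inf, None)
chi2_map = np.empty((tau_grid.size, tin_grid.size))
for i, tau in enumerate(tau_grid):
    for j, tin in enumerate(tin_grid):
        # Placeholder "model photometry"; a real run would look up the
        # synthetic photometry of the DUSTY model at (tau, tin).
        model = obs * (1 + 0.05 * rng.standard_normal(n_bands))
        chi2_map[i, j] = chi2(obs, err, model)
        if chi2_map[i, j] < best[0]:
            best = (chi2_map[i, j], (tau, tin))

# Parameter errors: all models within chi^2_min + 10 of the best fit.
ok = chi2_map <= best[0] + 10
print("best (tau, T_in):", best[1], " models in error range:", ok.sum())
```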
\section{Application to NGC2100}
In this study we apply this dust modeling to a sample of RSGs in a young star cluster. Such clusters can be assumed to be coeval, since any spread in the age of the stars will be small compared to the age of the cluster. Hence, we can assume that all stars currently in the RSG phase had the same initial mass to within a few tenths of a solar mass. Since the stars' masses are so similar, they will all follow almost the same path across the H-R diagram. Differences in luminosity are caused by those stars with slightly higher masses evolving along the mass-track at a slightly faster rate. It is for this reason that luminosity can be taken as a proxy for evolution.
The photometry used in this paper is taken from 2MASS, Spitzer and WISE \citep{Skrutskie2006two,werner2004spitzer,wright2010wide} and is listed in Table 2. A finding chart for NGC2100 is shown in Fig. \ref{fig:findingchart}, in which the RSGs are numbered based on [5.6]-band magnitude. Star \#13 has been omitted from our analysis due to large disagreements between the MIPS and WISE photometry, as well as between the WISE and IRAC photometry.
\begin{table*}
\centering
\caption{Star designations and positions. Stars are numbered based on their [5.6]-band magnitude.}
\begin{tabular}{lccccc}
\hline\hline
Name & ID & RA ($\degr$) & DEC ($\degr$) & W61$^{a}$ & R74$^{b}$ \\
&&J2000 &J2000&&\\
\hline
1& J054147.86-691205.9 & 85.44944763&-69.20166779 & 6-5 & D15 \\
2& J054211.56-691248.7 & 85.54819489&-69.21353149& 6-65 & B40 \\
3& J054144.00-691202.7 & 85.43335724&-69.20075989&8-67&...\\
4& J054206.77-691231.1 & 85.52821350&-69.20866394&...&A127 \\
5& J054209.98-691328.8 & 85.54161072&-69.22468567&6-51&C32 \\
6& J054144.47-691117.1 & 85.43533325&-69.18808746&8-70&...\\
7& J054200.74-691137.0 & 85.50312042&-69.19362640&6-30&C8 \\
8& J054203.90-691307.4 & 85.51628113&-69.21873474&6-34&B4 \\
9& J054157.44-691218.1 & 85.48937225&-69.20503235&6-24&C2 \\
10& J054209.66-691311.2 & 85.54025269&-69.21979523&6-54&B47 \\
11& J054152.51-691230.8 & 85.46879578&-69.20856476&6-12&D16 \\
12& J054141.50-691151.7 & 85.42295837&-69.19770813&8-63&... \\
13& J054207.48-691250.3 & 85.53116608&-69.21398163&6-48&... \\
14& J054204.78-691058.8 & 85.51993561&-69.18302917&6-44&... \\
15& J054206.13-691246.8 & 85.52555847&-69.21302032&6-46&... \\
16& J054206.36-691220.2 & 85.52650452&-69.20561218&6-45&B17 \\
17& J054138.59-691409.5 & 85.41079712&-69.23599243&8-58&... \\
18& J054212.20-691213.3 & 85.55084229&-69.20370483&6-69&B22 \\
19& J054207.45-691143.8 & 85.53106689&-69.19552612&6-51&C12 \\
\multicolumn{6}{p{\columnwidth}}{$^{a}$ star designation from \cite{westerlund1961population}}\\
\multicolumn{6}{p{\columnwidth}}{$^{b}$ star designation from \cite{robertson1974color}}\\
\end{tabular}
\end{table*}
\begin{table*}
\centering
\tiny
\caption{Observational data. All fluxes are in units of mJy.}
\begin{tabular}{lcccccccccccccccc}
\hline\hline
Name & IRAC1 & IRAC2 & IRAC3 & IRAC4& MIPS1& 2MASS-J & 2MASS-H & 2MASS-Ks & WISE1& WISE2 & WISE3 & WISE4 \\
& (3.4 $\micron$) & (4.4 $\micron$) &(5.6 $\micron$) &(7.6$\micron$) & (23.2 $\micron$)& & & & (3.4 $\micron$)& (4.6 $\micron$)& (11.6 $\micron$) &(22 $\micron$) \\
\hline
1&176.0$\pm$0.2 & 163.00$\pm$ 0.03 & 150.0$\pm$0.05 & 141.00$\pm$0.04 & 140.00$\pm$0.063 & 244$\pm$ 4& 364$\pm$12 & 344$\pm$ 9 & 240$\pm$6 & 153$\pm$2.8 & 166$\pm$ 2.6 & 149$\pm$ 3.97 \\
2& - & 95.10$\pm$ 0.02 & 115.0$\pm$0.05 & 120.00$\pm$0.03 & 113.00$\pm$0.063 & 237$\pm$ 5 & 359$\pm$13 & 324$\pm$ 5 & 173$\pm$ 3 & 116$\pm$ 2.0 & 146$\pm$ 2.0 & 132$\pm$ 3.04 \\
3&131.0$\pm$0.1& 72.70$\pm$ 0.02& 64.7$\pm$0.04 & 43.80$\pm$0.02 & -& 232 $\pm$ 4 & 332$\pm$10 & 286$\pm$ 6 & 171 $\pm$ 3 & 82$\pm$ 1.5 & 33$\pm$ 0.7 & 25$\pm$ 1.63 \\
4& 86.6$\pm$0.1 & 65.20$\pm$ 0.02 & 60.7$\pm$0.04 & 56.50$\pm$0.02 & 33.20$\pm$ 0.063 & 161$\pm$ 3 & 220$\pm$ 4 & 196$\pm$ 4 & 120$\pm$ 3 & 65$\pm$ 1.5 & 51$\pm$ 0.8 & 36$\pm$ 1.59 \\
5&101.0$\pm$0.1 & 64.10$\pm$ 0.02 & 56.9$\pm$0.03 & 49.00$\pm$0.02 & 26.10$\pm$0.064 & 154$\pm$ 3 & 220$\pm$ 4 & 194$\pm$ 4 & 145$\pm$ 2 & 66$\pm$ 1.2 & 39$\pm$ 0.7 & 30$\pm$ 1.30 \\
6&110.0$\pm$0.1& 65.10$\pm$ 0.02 & 56.6$\pm$0.03 & 46.00$\pm$0.02 & 27.50$\pm$0.065 & 186$\pm$ 3 & 270$\pm$ 6 & 240$\pm$ 4& 137$\pm$ 2 & 70$\pm$ 1.3 & 37$\pm$ 1.0 & 35$\pm$ 3.08 \\
7& 92.6$\pm$0.1 & 62.30$\pm$ 0.02 & 53.7$\pm$0.03 & 38.10$\pm$0.02 & -&173$\pm$ 3&249$\pm$ 5&220$\pm$ 4& 125$\pm$ 2 & 60$\pm$ 1.1 & 27$\pm$ 0.6 & 12$\pm$ 1.23 \\
8& 93.5$\pm$0.1& 58.10$\pm$ 0.02 & 50.9$\pm$0.03 & 42.00$\pm$0.02 & 19.20$\pm$0.063 & 183$\pm$ 3 & 254$\pm$ 5 & 209$\pm$ 4 & 113$\pm$ 2 & 56$\pm$ 0.9 & 30$\pm$ 0.5 & 14$\pm$ 1.63 \\
9& 93.6$\pm$0.1 & 58.30$\pm$ 0.02 & 48.4$\pm$0.03 & 36.30$\pm$0.02 & -& 187$\pm$ 3 & 244$\pm$ 5 & 209$\pm$ 4 & 113$\pm$ 2 & 56$\pm$ 1.0 & 22$\pm$ 0.6 & 2$\pm$ 0.77 \\
10& 80.7$\pm$0.1& 50.70$\pm$ 0.02 & 41.3$\pm$0.03 & 26.00$\pm$0.02 & -&161$\pm$ 3 & 223$\pm$ 4 & 190$\pm$ 4& 105$\pm$ 2 & 51$\pm$ 1.0 & 15$\pm$ 0.3 & 11$\pm$ 1.14 \\
11& 68.1$\pm$0.1 & 43.30$\pm$ 0.01 & 34.7$\pm$0.03 & 23.70$\pm$0.02 & -& 108$\pm$ 2 & 156$\pm$ 3 & 143$\pm$ 2 & 84$\pm$ 1 & 41$\pm$ 0.8 & 10$\pm$ 0.5 & 4$\pm$ 0.87 \\
12& 63.7$\pm$0.1 & 41.80$\pm$ 0.01 & 32.8$\pm$0.03 & 23.00$\pm$0.02 & - & 125$\pm$ 2 & 181$\pm$ 3 & 156$\pm$ 3 & 83$\pm$ 1 & 39$\pm$ 0.7 & 13$\pm$ 0.5 & 3$\pm$ 1.25 \\
13& 75.2$\pm$0.1 & - & 32.4$\pm$0.03 & 21.50$\pm$0.02 & - & 131$\pm$ 3 & 178$\pm$ 4 & 160$\pm $3 & 140$\pm$ 2 & 73$\pm$ 1.3 & 20$\pm$ 0.4 & 21$\pm$ 1.07 \\
14& 65.7$\pm$0.1 & 37.50$\pm$ 0.01 & 29.2$\pm$0.03 & 16.60$\pm$0.02 & - &117$\pm$ 2 & 171$\pm$ 3 & 142$\pm$ 2 & 75$\pm$ 1& 35$\pm$ 0.6 & 6$\pm$ 0.4 & - \\
15& 62.9$\pm$0.1 & 36.30$\pm$ 0.01 & 28.7$\pm$0.02 & 17.80$\pm$0.02 & - & 120$\pm$ 3 & 166$\pm$ 5 & 137$\pm$ 3 &106$\pm$ 9 & 44$\pm$ 3.7 & 4$\pm$ 1.2 & - \\
16& 62.0$\pm$0.1 & 37.00$\pm$ 0.01 & 28.4$\pm$0.02 & 17.80$\pm$ 0.02 & -& 112$\pm$ 2 & 162$\pm$ 3 & 142$\pm$ 2 & 94$\pm$ 1 & 49$\pm$ 0.9 & 19$\pm$ 0.4 & 14$\pm$ 1.03 \\
17& 59.7$\pm$0.1 & 35.70$\pm$ 0.01 & 27.1$\pm$0.02 & 16.80$\pm$0.02 & -&113$\pm$ 2 & 155$\pm$ 3 & 135$\pm$ 2 & 73$\pm$ 1 & 34$\pm$ 0.6 & 9$\pm$ 0.3 & 3$\pm$ 1.08 \\
18& 51.4$\pm$0.1 & 31.50$\pm$ 0.01 & 24.0$\pm$0.02 & 14.40$\pm$0.01 & - & 105$\pm$ 2 & 142$\pm$ 3 & 121$\pm$ 2 & 64$\pm$ 1 & 30$\pm$ 0.6 & 5$\pm$ 0.2 & - \\
19& 53.8$\pm$0.1 & 30.40$\pm$ 0.01 & 22.6$\pm$0.02 & - & - & 101$\pm$ 2 & 144$\pm$ 2 & 119$\pm$ 2 & 61$\pm$ 1 & 28$\pm$ 0.5 & 3$\pm$ 0.2 & - \\
\end{tabular}
\end{table*}
\begin{figure*}
\caption{Finding chart for RSGs in NGC2100. The stars are numbered based on [5.6]-band magnitude.}
\centering
\label{fig:findingchart}
\includegraphics[width=\textwidth]{ngc2100_FC_8micron_small.eps}
\end{figure*}
The RSGs in NGC2100 appear as a clump of stars in CMD space with $K_S$-band magnitudes brighter than 9.49, within a 2 arcminute radius of the cluster centre; this identified 19 candidate RSGs. By plotting J-K vs. K it was possible to locate the RSGs as a clump of stars clearly separated from the field stars, as shown in Fig. \ref{fig:colcol}, where the red circles indicate RSGs.
From the photometry alone it was possible to see evidence of $\dot{M}$\ increasing as the RSGs evolve. This qualitative evidence is shown in the [8-12] vs. [5.6] CMD, Fig. \ref{fig:8min12}. The [5.6]-band magnitude can be used as a measure of luminosity, as the bolometric correction at this wavelength is largely insensitive to RSG temperature, whilst the wavelength is also too short to be significantly affected by emission from circumstellar dust. The [8-12] colour can be used as a measure of dust shell mass, as it measures the excess caused by the broad silicate feature at 10$\micron$. It can be seen from Fig. \ref{fig:8min12} that more luminous (and therefore more evolved) RSGs have a larger amount of dust surrounding them (shown by the increasing colour, meaning they appear more reddened), suggesting dust mass increases with age.
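The [8-12] colour can be formed directly from the tabulated flux densities. A minimal sketch follows; note that the zero-point flux densities below are illustrative round numbers, not the calibrated IRAC/WISE zero points.

```python
import math

def vega_mag(flux_mjy, zeropoint_jy):
    """Convert a flux density in mJy to a Vega magnitude,
    given the band's zero-point flux density in Jy."""
    return -2.5 * math.log10(flux_mjy * 1e-3 / zeropoint_jy)

# Illustrative zero points (Jy); the calibrated IRAC/WISE values differ slightly.
ZP = {"[8]": 64.0, "[12]": 31.0}

# Star #1 from the photometry table: F(7.6um) = 141 mJy, F(11.6um) = 166 mJy.
m8 = vega_mag(141.0, ZP["[8]"])
m12 = vega_mag(166.0, ZP["[12]"])
colour = m8 - m12   # a redder (larger) colour implies more circumstellar dust
print(f"[8]-[12] = {colour:.2f}")
```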
Below we discuss our modeling results and compare them to mass-loss rate prescriptions frequently used by stellar evolution groups.
\begin{figure}
\caption{Colour-magnitude plot using J-K$_{\rm s}$ vs. K$_{\rm s}$ to locate RSGs in NGC 2100. This plot also shows a 14 Myr PARSEC isochrone \citep{tang2014new,chen2015parsec} at LMC metallicity (non-rotating). Isochrones have been adjusted for the distance to the LMC and a foreground extinction of $A_V$=0.5. The extinction noted in the legend is {\it in addition} to the foreground extinction already known to be present in the LMC \citep[][]{niederhofer2015no}. }
\centering
\label{fig:colcol}
\includegraphics[width=\columnwidth]{kvskminj_Jan16_isochrones_MARCH.eps}
\end{figure}
\begin{figure}
\caption{Colour-magnitude plot of RSGs in the cluster showing increasing dust mass with age. [5.6]-band magnitude is used as an indicator of $L_{\rm bol}$ and the [8-12] colour is used as a measure of dust shell mass. The [8-12] colour is useful as it includes the mid-IR excess and the excess caused by the broad silicate feature.}
\centering
\label{fig:8min12}
\includegraphics[width=\columnwidth]{coli4min12vsi3_Dec15_rsgsonly.eps}
\end{figure}
\subsection{Modeling results}
We ran our fitting procedure for the 19 RSGs located in NGC2100; our results are shown in Table 3. Figures \ref{fig:allcont}--\ref{fig:lowmdot} show example model fits together with the observed photometry. The left panel of Fig. \ref{fig:allcont} shows the fit for star \#1, including our best-fitting model spectrum (green line), the models within our error range (blue dotted lines), and the various contributions to the flux, including scattered flux, dust emission and attenuated flux. It also shows the photometric data (red crosses) and model photometry (green crosses). The 10$\micron$ silicate bump, due to dust emission (pink dashed line), can be clearly seen. The plot also shows the significant effect of scattering within the dust shell (grey dotted/dashed line), which contributes a large proportion of the optical output spectrum.
The fitting procedure did not include the JHK photometry, as these bands are strongly affected by extinction; however, when over-plotted (once de-reddened), this photometry was found to be in good agreement with the model spectrum for all stars except \#1 and \#2, for which the model over-predicts the near-IR flux. This deficit in the observed near-IR flux is not present for the other RSGs in our sample. Figures \ref{fig:medmdot} and \ref{fig:lowmdot} show the model fits for stars \#8 and \#12 respectively, representative of intermediate and low $\dot{M}$\ values.
We attempted to explain the missing near-IR (NIR) flux in stars \#1 and \#2 by adapting the fitting procedure to include the JHK photometry and a lower $T_{\rm eff}$\ SED. This gave a better fit to the near-IR photometry, but at the expense of a poorer fit to the 3-8$\micron$ region, where the model now underpredicted the flux. We considered whether this fit could be improved by dust emission; achieving this would require either unphysically high dust temperatures, above the sublimation temperature for silicate dust, or an increase in dust mass by a factor of 100. The latter would lead to significantly poorer fits to the mid-IR photometry and can therefore be ruled out. These adjustments had only a small effect (less than 10\%) on the best-fitting $\dot{M}$; the only change to our results was that $L_{\rm bol}$ was reduced for stars \#1 and \#2 by approximately 0.3 dex. We discuss these results further in Section 4.1.
In Fig. \ref{fig:allcont} (right panel) we show a contour plot illustrating the degeneracy between our two free parameters, $T_{\rm in}$ and $\tau_V$, with the best-fitting $\dot{M}$\ contour and the upper and lower $\dot{M}$\ contours overplotted. It can be seen that the lines of equal $\dot{M}$\ run approximately parallel to the $\chi^2$ contours. This means that, despite the degeneracy between $\tau_V$ and $T_{\rm in}$, the value of $\dot{M}$\ is well constrained and robust to where we place the inner dust rim.
Fit results for all stars modelled are shown in Table 3, which lists the best-fitting values of $\tau_V$ and $T_{\rm in}$ for each star. We find a varying $T_{\rm in}$ value for the stars in the sample, rather than a constant value at the dust sublimation temperature of 1200 K; lower $T_{\rm in}$ values have also been found in other studies \citep[c.f.][]{groenewegen2009luminosities}. When compared to the stars' calculated luminosities, it can be seen that lower luminosity stars have a greater spread in $T_{\rm in}$ values, while higher $\dot{M}$\ stars have more tightly constrained $T_{\rm in}$ values. We find that all stars in our sample are consistent with $T_{\rm in} \sim 600$ K. $L_{\rm bol}$ is found by integrating under the model spectra, with errors on $L_{\rm bol}$ dominated by the uncertainty in $T_{\rm eff}$. The value of $A_V$ is found from the ratio of input and output fluxes at 0.55$\micron$ and is intrinsic to the dust shell. For stars \#15, \#18 and \#19 the value of $\dot{M}$\ is so low it can be considered a non-detection, leaving $T_{\rm in}$ unconstrained.
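The derivation of $L_{\rm bol}$ by integrating under the model spectrum can be illustrated as follows. The SED here is a stand-in blackbody at the fiducial $T_{\rm eff}$, with an assumed radius of 500 R$_\odot$, rather than an actual DUSTY output; $d = 50$ kpc is the adopted LMC distance.

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants
L_sun = 3.828e26                          # solar luminosity (W)
d = 50e3 * 3.086e16                       # 50 kpc in m

lam = np.logspace(-7, -4, 2000)           # 0.1-100 um, in m
T_eff = 3900.0                            # fiducial SED temperature (K)
B = 2*h*c**2 / lam**5 / np.expm1(h*c / (lam*k*T_eff))   # Planck function

R = 500 * 6.957e8                         # assumed stellar radius (m)
F_lam = np.pi * B * (R/d)**2              # flux density at Earth (W m^-3)

# Trapezoidal integration over wavelength, then scale to luminosity
F_bol = np.sum(0.5 * (F_lam[1:] + F_lam[:-1]) * np.diff(lam))
log_L = np.log10(4*np.pi*d**2 * F_bol / L_sun)
print(f"log(L/L_sun) = {log_L:.2f}")
```

For these assumed parameters the integral recovers $\log(L/L_\odot)\simeq4.7$, within the luminosity range found for our sample.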
\begin{figure*}
\caption{ \textit{Left panel:} Model plot for the star with the highest $\dot{M}$\ value in NGC 2100, including all contributions to the spectrum. The ``error models'' are the models that fit within the minimum $\chi^2$+10 limit. The stars are numbered based on [5.6]-band magnitude (\#1 being the star with the highest [5.6]-band magnitude). The silicate bump at 10$\micron$ is clearly visible in the spectra, suggesting a large amount of circumstellar material. \textit{Right panel:} Contour plot showing the degeneracy between $\chi^2$ values and best-fitting $\dot{M}$\ values in units of 10$^{-6}$ M$_\odot$ yr$^{-1}$. The green lines show the best-fitting $\dot{M}$\ and the upper and lower $\dot{M}$\ isocontours. It can be seen that, while there is some degeneracy between inner dust temperature and optical depth, the value of $\dot{M}$\ is independent of this. }
\centering
\label{fig:allcont}
\includegraphics[height=7cm,bb=60 0 850 566,clip]{1datafit_finalplotsSSTSL2J054147p86-691205p9.eps}
\includegraphics[height=7cm]{contourplots_new1.eps}
\end{figure*}
\begin{figure*}
\caption{Same as Fig. \ref{fig:allcont} for star \#8, which has an intermediate $\dot{M}$\ value. It can be seen in the model plot (left) that it is possible to fit both the near-IR and mid-IR photometry. $\dot{M}$\ values are in units of 10$^{-6}$ M$_\odot$ yr$^{-1}$.}
\centering
\label{fig:medmdot}
\includegraphics[height=7cm,bb=60 0 850 566,clip]{8datafit_finalplotsSSTSL2J054203p90-691307p4.eps}
\includegraphics[height=7cm]{contourplots_new8.eps}
\end{figure*}
\begin{figure*}
\caption{Same as Fig. \ref{fig:allcont} for star \#12, which has a low $\dot{M}$\ value. It can be seen in this plot that it is possible to fit both the near-IR photometry and mid-IR photometry. $\dot{M}$\ values are in units of 10$^{-6}$ M$_\odot$ yr$^{-1}$.}
\centering
\label{fig:lowmdot}
\includegraphics[height=7cm,bb=60 0 850 566,clip]{12datafit_finalplotsSSTSL2J054141p50-691151p7.eps}
\includegraphics[height=7cm]{contourplots_new12.eps}
\end{figure*}
A positive correlation between $\dot{M}$\ and luminosity is illustrated in Fig. \ref{fig:mdotvsL}, implying that $\dot{M}$\ increases by a factor of 40 during the RSG phase, which according to model predictions should last approximately 10$^6$ years for stars with initial masses of 15M$_\odot$ \citep[][see Section 3.2]{georgy2013populations}. This plot also shows several mass-loss rate prescriptions for comparison (assuming a $T_{\rm eff}$\ of 3900 K): \cite{de1988mass} (hereafter dJ88), \cite{reimers1975circumstellar,kudritzki1978absolute}, \cite{van2005empirical}, \cite{nieuwenhuijzen1990parametrization} (hereafter NJ90) and \cite{feast1992ch}. See Section 3.1.1 for further discussion of the $\dot{M}$\ prescriptions. We find our results are best fit by the dJ88, van Loon and Reimers prescriptions, with dJ88 providing a better fit for the more evolved stars (where the mass loss mechanism is stronger).
Our stars form a tight correlation, whereas previous studies of $\dot{M}$\ versus $L_{\rm bol}$ \citep[e.g.][]{van2005empirical} have shown a large spread in results. This could be because previous studies looked at field stars, whereas our study has looked at RSGs in a coeval cluster. As for the three stars with negligible $\dot{M}$\ values, it is possible that no appreciable amount of dust has yet formed around these RSGs, meaning the dust-driven wind has not taken effect. We considered the possibility that these stars were foreground stars, but after checking their $v_{\rm rad}$ values \citep{patrick2016chemistry} we find they are all consistent with being within the cluster.
\subsubsection{Mass loss rate prescriptions}
The $\dot{M}$\ prescriptions were each derived using different methods. The $T_{\rm eff}$\ was set to 3900 K for all prescriptions shown in Fig. \ref{fig:mdotvsL}.
The empirical formula of dJ88 was derived by comparing $\dot{M}$\ values found from 6 different methods in the literature for 271 stars of spectral types O through M. Determinations of $\dot{M}$\ for M-type stars included modeling the optical metallic absorption lines of nearby RSGs \citep[under the assumption that the lines form in the wind;][]{sanner1976mass} and using mid-IR photometry and hydrodynamics equations to find $v_{\infty}$ \citep{gehrz1971mass}. The relation is a two-parameter fit in $L_{\rm bol}$ and $T_{\rm eff}$. \cite{de1988mass} found that each method yielded the same $\dot{M}$\ value to within the error limits, regardless of a star's position on the HR diagram. The NJ90 prescription \citep{nieuwenhuijzen1990parametrization} is a reformulation of the dJ88 formula that includes stellar mass. Due to the narrow mass range for RSGs (8-25M$_\odot$) and the very weak dependence on mass, this has very little effect on the $\dot{M}$\ found from this formulation.
Reimers' law \citep{reimers1975circumstellar,kudritzki1978absolute} is a semi-empirical formula derived from measurements of circumstellar absorption lines towards companions in binary systems. This has been done for only three such systems, but provides an accurate measurement of $\dot{M}$. The formula depends on surface gravity, $g$, but can be expressed in terms of $R$, $L$ and $M$ (in solar units), as shown by \cite{mauron2011mass}.
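For reference, the classical Reimers relation can be written in solar units (with $\eta$ a dimensionless fitting parameter of order unity) as
\begin{equation}
\dot{M} \simeq 4\times10^{-13}\,\eta\,\frac{(L/L_\odot)(R/R_\odot)}{M/M_\odot}~{\rm M}_\odot\,{\rm yr}^{-1}.
\end{equation}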
Van Loon's prescription is an empirical formula based on observations of dust-enshrouded RSGs and Asymptotic Giant Branch (AGB) stars within the LMC, where $\dot{M}$\ was derived by modeling the mid-IR SED using DUSTY. \cite{van2005empirical} assumed a constant grain size of 0.1$\micron$, but state that this value was varied for some of the stars to improve the fits. $T_{\rm in}$ values were initially assumed to be between 1000 and 1200 K, but again the authors state that for some stars in the sample this was reduced to improve the agreement between the data and the DUSTY model.
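For completeness, we recall the functional form of this prescription for oxygen-rich stars,
\begin{equation}
\log \dot{M} = -5.65 + 1.05\,\log\left(\frac{L}{10^4\,L_\odot}\right) - 6.3\,\log\left(\frac{T_{\rm eff}}{3500\,{\rm K}}\right),
\end{equation}
with $\dot{M}$ in M$_\odot$ yr$^{-1}$.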
The most widely used $\dot{M}$\ prescription, dJ88, provides the best fit to our observations for the more evolved stars. The van Loon prescription also agrees quite well, even though one might expect that this study's focus on dust-enshrouded stars would bias it towards higher $\dot{M}$\ values. All of the prescriptions overpredict the $\dot{M}$\ for the lowest luminosity stars, though this may be because dust has yet to form around these stars (and hence $r_{\rm gd} > 500$ for them).
\begin{table*}
\caption{Results for stars in NGC 2100. Stars are numbered with \#1 having the highest [5.6]-band magnitude and \#19 having the lowest. Luminosities quoted are in units of log($L_{\rm bol}$/L$_\odot$). $A_V$ is the extinction intrinsic to the dust shell.}
\centering
\begin{tabular}{lcccccc}
\hline\hline
Star & $T_{\rm in}$ (K) & $\tau_V$ & $\dot{M}$\ (10$^{-6}$M$_\odot$ yr$^{-1}$) & $L_{\rm bol}$ &$A_V$ \\ [0.5ex]
\hline
1&$ 600^{+ 200}_{- 100}$&$0.56^{+0.21}_{-0.14}$&$ 9.89^{+ 4.20}_{- 3.17}$&$ 5.09\pm 0.09$&$0.09^{+0.04}_{-0.02}$ \\
2&$ 600^{+ 200}_{- 100}$&$0.64^{+0.26}_{-0.16}$&$ 9.97^{+ 4.52}_{- 3.19}$&$ 4.97\pm 0.09$&$0.10^{+0.05}_{-0.03}$ \\
3&$ 600^{+ 400}_{- 200}$&$0.16^{+0.08}_{-0.05}$&$ 1.98^{+ 1.07}_{- 0.74}$&$ 4.84\pm 0.09$&$0.02^{+0.01}_{-0.01}$ \\
4&$ 800^{+ 400}_{- 200}$&$0.45^{+0.32}_{-0.16}$&$ 3.17^{+ 2.34}_{- 1.29}$&$ 4.71\pm 0.09$&$0.07^{+0.06}_{-0.03}$ \\
5&$ 700^{+ 300}_{- 200}$&$0.29^{+0.16}_{-0.08}$&$ 2.54^{+ 1.49}_{- 0.86}$&$ 4.73\pm 0.09$&$0.04^{+0.03}_{-0.01}$ \\
6&$ 500^{+ 300}_{- 100}$&$0.21^{+0.13}_{-0.02}$&$ 3.25^{+ 2.12}_{- 0.72}$&$ 4.77\pm 0.09$&$0.03^{+0.02}_{-0.00}$ \\
7&$1200^{+ 0}_{- 500}$&$0.27^{+0.07}_{-0.14}$&$ 0.82^{+ 0.27}_{- 0.46}$&$ 4.68\pm 0.09$&$0.04^{+0.01}_{-0.02}$ \\
8&$1000^{+ 200}_{- 400}$&$0.29^{+0.16}_{-0.10}$&$ 1.29^{+ 0.76}_{- 0.51}$&$ 4.68\pm 0.09$&$0.04^{+0.03}_{-0.01}$ \\
9&$1200^{+ 0}_{- 400}$&$0.16^{+0.08}_{-0.05}$&$ 0.48^{+ 0.26}_{- 0.18}$&$ 4.68\pm 0.09$&$0.02^{+0.01}_{-0.01}$ \\
10&$ 600^{+ 500}_{- 200}$&$0.11^{+0.08}_{-0.03}$&$ 1.06^{+ 0.80}_{- 0.36}$&$ 4.63\pm 0.09$&$0.01^{+0.01}_{-0.00}$ \\
11&$1200^{+ 0}_{- 700}$&$0.11^{+0.05}_{-0.06}$&$ 0.28^{+ 0.14}_{- 0.16}$&$ 4.55\pm 0.09$&$0.01^{+0.01}_{-0.01}$ \\
12&$1200^{+ 0}_{- 600}$&$0.16^{+0.08}_{-0.08}$&$ 0.39^{+ 0.21}_{- 0.21}$&$ 4.51\pm 0.09$&$0.02^{+0.01}_{-0.01}$ \\
14&$-$&$<0.05$&$<0.12$&$ 4.53\pm 0.10$&$-$ \\
15&$-$&$<0.03$&$ <0.08$&$ 4.56\pm 0.09$&$-$ \\
16&$ 400^{+ 300}_{- 100}$&$0.13^{+0.06}_{-0.02}$&$ 2.27^{+ 1.14}_{- 0.57}$&$ 4.55\pm 0.09$&$0.020^{+0.010}_{-0.000}$ \\
17&$1100^{+ 100}_{- 700}$&$0.08^{+0.05}_{-0.05}$&$ 0.23^{+ 0.15}_{- 0.15}$&$ 4.49\pm 0.09$&$0.010^{+0.010}_{-0.010}$ \\
18&$-$&$<0.03$&$<0.08$&$ 4.43\pm 0.09$&$-$ \\
19&$-$&$<0.03$&$<0.07$&$ 4.45\pm 0.11$&$-$ \\
\hline
\end{tabular}
\end{table*}
\begin{figure*}
\centering
\caption{Plot showing $\dot{M}$\ versus $L_{\rm bol}$. A positive correlation can be seen suggesting $\dot{M}$\ increases with evolution. This is compared to some mass loss rate prescriptions. The downward arrows show for which stars we only have upper limits on $\dot{M}$.}
\centering
\label{fig:mdotvsL}
\includegraphics[width=\textwidth]{LvsMdot.eps}
\end{figure*}
\subsection{Cluster age and initial masses}
It was necessary to know the initial masses of the stars in our cluster. We derived an initial mass for the stars in the sample by comparing various stellar evolutionary models, namely those of \cite{brott2011rotating}, STARS \citep{eldridge2009spectral} and Geneva \citep{georgy2013populations}, using our lowest measured $L_{\rm bol}$ of $\log(L/L_\odot)\sim4.5$ as a constraint (see Fig. \ref{fig:masstrack}). It should be noted that the mass tracks of \cite{brott2011rotating} are not evolved to the end of helium burning, but since we are only interested in the initial mass of the cluster stars this does not affect our conclusions. The Geneva models at LMC metallicity are currently only available for masses up to 15M$_\odot$, but seem to imply a mass greater than 14M$_\odot$. We conclude that an $M_{\rm initial}$ of $\sim$14-17M$_\odot$ seems most likely.
Cluster age was derived using PARSEC non-rotating isochrones \citep{tang2014new,chen2015parsec} at Z$\sim$0.006. These isochrones were used as they have the added advantage of providing synthetic photometry. Using $M_{\rm initial}$ as a constraint we found an age of 14 Myr. \cite{patrick2016chemistry} estimated the age of NGC 2100 to be 20 $\pm$ 5 Myr using SYCLIST stellar isochrones \citep{georgy2013grids} at SMC metallicity and at solar metallicity. This difference in cluster age is due to our use of non-rotating isochrones: stellar rotation extends stellar lifetimes, and hence leads to an older inferred cluster age. Indeed, when using rotating isochrones we found a cluster age consistent with their estimate.
\begin{figure}
\centering
\caption{Plot showing $M_{\rm initial}$ vs luminosity for various mass tracks. The plot shows the upper and lower luminosity values at each $M_{\rm initial}$ for STARS \citep{eldridge2009spectral} (pink solid lines) at $Z \sim 0.008$, the non-rotating models of \cite{brott2011rotating} at LMC metallicity (green dashed line), and the Geneva rotating (red dotted line) and non-rotating (blue dotted line) models \citep{georgy2013populations} at LMC metallicity. The Geneva models do not currently cover masses greater than 15$M_\odot$ at this metallicity. The grey shaded region shows the upper and lower luminosities derived for the stars in our sample. Using our lowest measured $L_{\rm bol}$ of $\log(L_{\rm bol}/L_\odot) \sim 4.5$ as a constraint we find an $M_{\rm initial}$ of $\sim 14 M_{\odot} - 17 M_{\odot}$ from the evolutionary models.}
\centering
\label{fig:masstrack}
\includegraphics[width=\columnwidth]{masstrack_all.eps}
\end{figure}
\subsection{Extinction}
We determined the extinction due to the dust wind from the ratio of the input and output flux at 0.55 $\micron$. This extinction is intrinsic to the circumstellar dust shell and is independent of any foreground extinction. Due to scattering within the dust wind the effect of extinction is small, see Table 3. As discussed by \cite{kochanek2012absorption}, enough light is scattered by the dust shell back into the line of sight of the observer that little flux is lost. In apparent contradiction, \cite{davies2013temperatures} derived extinctions of a few tenths of a magnitude for a sample of RSGs in the SMC and LMC. As the masses of the RSG progenitors of Type IIP SNe are found from mass-luminosity relations, an extinction this high could affect the mass calculation, causing the masses to be underestimated.
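The conversion from the flux ratio to a shell extinction is a one-line calculation; the sketch below uses an illustrative transmission value, not a fit result.

```python
import math

def shell_extinction(F_in, F_out):
    """Extinction (mag) intrinsic to the dust shell, from the ratio of the
    input and emergent flux at 0.55 um."""
    return 2.5 * math.log10(F_in / F_out)

# Illustrative: a shell transmitting 92% of the 0.55 um flux
A_V = shell_extinction(1.0, 0.92)
print(f"A_V = {A_V:.2f} mag")
```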
We next fitted isochrones to the CMD of our sample; by de-reddening these it was possible to further estimate the extinction towards the RSGs. We used a 14 Myr PARSEC stellar evolutionary isochrone \citep{tang2014new,chen2015parsec}. After adjusting the isochrone for a distance of 50 kpc and the extinction law towards the LMC \citep{koornneef1982gas}, we found that there is additional extinction towards the RSGs that is not present for blue supergiants (BSGs) in the cluster, see Fig. \ref{fig:colcol}. The isochrone shows that the RSGs require additional extinction in order to fit the model, with stars \#1 and \#2 requiring even more (see Section 4.1). This is in addition to the foreground extinction already known to be present for NGC 2100 \citep[around 0.5 mag,][]{niederhofer2015no}. From this we can infer an intrinsic RSG extinction of approximately $A_V\sim$0.5 that is not present for other stars in the cluster.
We considered the possibility that this extra extinction could be due to cool dust at large radii from the stars, not detectable in the mid-IR. To test this we created DUSTY models at 30 K with an optical depth of 2, large enough to produce the extra extinction of $A_V\sim$0.5 mag. If this dust were present it would emit at around 100$\micron$ with a flux density of $>$1 Jy. A flux this high would be within the detection limits of surveys such as Herschel's HERITAGE survey \citep{meixner2013herschel}, which mapped the SMC and LMC at wavelengths of 100$\micron$ and above. After checking this data we found no evidence of the stars within NGC2100 emitting at this wavelength, suggesting that the additional extinction local to the RSGs is not caused by a spherically symmetric cold dust shell.
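As a sanity check of the wavelength quoted above, Wien's displacement law places the emission peak of 30 K dust near 100$\micron$:

```python
b_wien = 2.898e-3        # Wien displacement constant (m K)
T_dust = 30.0            # cold dust temperature (K)
lam_peak_um = b_wien / T_dust * 1e6
print(f"peak emission at ~{lam_peak_um:.0f} um")
```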
We also considered the effect of differential extinction on the cluster. \cite{niederhofer2015no} found that a low level of differential extinction is present in NGC 2100, but after analysing Herschel 100$\micron$ to 250$\micron$ images \citep{meixner2013herschel} it seems the core of the cluster, where the RSGs are, remains clear of dust. Star \#2 is spatially coincident with the BSGs, whereas star \#1 is away from the cluster core. From the Herschel images we see no reason to expect that the foreground extinction should be unusually high for these objects. We therefore see no argument for the RSGs having different foreground extinction than the BSGs. Clumpy cold dust at larger radii could potentially explain the extra extinction in RSGs; we investigate this possibility further in Section 4.1.
\subsection{Sensitivity to grain size distribution}
To check how robust our results were to a change in the grain size distribution, we created grids of models for various constant grain sizes of 0.1 $\micron$, 0.2 $\micron$, 0.4 $\micron$ and 0.5 $\micron$ (in addition to the 0.3 $\micron$ grain size) and re-derived $\dot{M}$\ values for each. The maximum grain size of 0.5 $\micron$ was chosen as this was recently found to be the average grain size for dust grains around the well-known RSG VY Canis Majoris \citep{scicluna2015large}.
The results can be seen in Fig. \ref{fig:gsvsmdot}, where the $\dot{M}$\ value for each star is plotted for each constant grain size. It is clear that increasing the grain size has no significant effect on the derived $\dot{M}$\ values. Similarly, the grain size does not seem to have a significant effect on the value of $A_V$, as can be seen in Fig. \ref{fig:gsvsav}. The stars chosen are representative of high $\dot{M}$\ (\#2), intermediate $\dot{M}$\ (\#7) and low $\dot{M}$\ (\#9). While the $A_V$ does fluctuate slightly, the values remain within each other's error boundaries. $A_V$ is affected by grain size as described by Mie theory, which states that the scattering efficiency of the dust depends on the grain size, $a$. Extinction is dominated by particles of size $\sim\lambda/3$; when $\lambda \gg a$, the scattering and absorption efficiencies tend to 0. When dust grains are larger than a certain size, fewer particles are needed to reproduce the mid-IR bump, causing a reduction in $A_V$. $\dot{M}$\ remains unaffected as the overall mass of the dust shell remains the same whether there are many smaller grains or fewer large grains.
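This behaviour can be quantified through the Mie size parameter $x = 2\pi a/\lambda$. The short sketch below (grain sizes from the grids above; wavelengths chosen for illustration) shows that at 10$\micron$ all the grains are in the Rayleigh regime ($x \ll 1$), where the emitted flux traces total dust mass, whereas at V band $x \gtrsim 1$ and the extinction becomes sensitive to grain size.

```python
import math

for a_um in (0.1, 0.2, 0.3, 0.4, 0.5):
    x_10um = 2 * math.pi * a_um / 10.0    # at the 10 um silicate feature
    x_V = 2 * math.pi * a_um / 0.55       # at V band (0.55 um)
    print(f"a = {a_um} um: x(10um) = {x_10um:.2f}, x(V) = {x_V:.1f}")
```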
\begin{figure}
\caption{Plot showing $\dot{M}$\ derived at each constant grain size. Each colour represents a different star from NGC2100. The stars chosen are representative of high $\dot{M}$\ (\#2), intermediate $\dot{M}$\ (\#7) and low $\dot{M}$\ (\#9).}
\centering
\label{fig:gsvsmdot}
\includegraphics[width=\columnwidth]{mdot_vs_grainsize_3stars.eps}
\end{figure}
\begin{figure}
\caption{Plot showing grain size versus $A_V$. Each colour represents a different star from NGC 2100. The stars chosen are representative of high $\dot{M}$\ (\#2), intermediate $\dot{M}$\ (\#7) and low $\dot{M}$\ (\#9).}
\centering
\label{fig:gsvsav}
\includegraphics[width=\columnwidth]{av_vs_grainsize_3stars.eps}
\end{figure}
\section{Discussion}
\subsection{Evidence for enhanced extinction to stars \#1 and \#2}
As discussed in Section 3.3, stars \#1 and \#2 were found to be the most luminous in our sample, as well as having the strongest measured $\dot{M}$\ values. From our fitting procedure we found that the near-IR flux for these stars was over-predicted by our best-fitting model spectrum. We now discuss possible causes for the discrepant near-IR photometry of these stars.
First, we included the near-IR photometry in our fitting procedure to see the effect this would have on the output $L_{\rm bol}$. The $L_{\rm bol}$ values were derived by integrating under the best-fitting model spectrum for each star; as we did not initially include the JHK photometry in our fitting procedure, it was possible that the derived $L_{\rm bol}$ values were overestimated. When including the JHK photometry, the fits improved at near-IR wavelengths but the mid-IR photometry fits became poorer. This had little effect on the best-fitting $\dot{M}$\ values, and the trend of increasing $\dot{M}$\ with luminosity, well modelled by the dJ88 prescription \citep{de1988mass}, was still observed, with stars \#1 and \#2 having lower $L_{\rm bol}$ by $\sim$0.3 dex. As we were unable to reliably fit both the near-IR and mid-IR photometry simultaneously, we concluded that stars \#1 and \#2 are intrinsically redder than the other, less evolved stars in our sample.
Choice of input SED could also have affected the measured $L_{\rm bol}$, as a lower $T_{\rm eff}$\ causes the peak of the spectrum to shift to longer wavelengths. The $T_{\rm eff}$\ of star \#1 has been found to be 4048$\pm$68K \citep{patrick2016chemistry}, so we do not believe the discrepant near-IR flux to be an effect of input SED. Nevertheless, we repeated our fitting procedure using a lower $T_{\rm eff}$\ of 3600K, finding that this now underestimated the JHK photometry. We also calculated luminosities for each star based on the $K$-band calibration described by \cite{davies2013temperatures} and find the integrated luminosities of all stars are consistent with a 1:1 relation within errors, except for stars \#1 and \#2, which were underpredicted by this calibration. This further supports the suggestion that these stars have more self-extinction than the other RSGs in our sample.
After repeating our fitting procedure with a lower $T_{\rm eff}$\ and with the near-IR photometry included, we believe the most likely explanation is that stars \#1 and \#2 have extra extinction that cannot be explained by the inner dust wind. These stars are the most evolved in our sample, so it is possible this enhanced extinction only becomes apparent towards the end of an RSG's life.
It is known that RSGs have extended clumpy nebulae, for example $\mu$ Cep \citep{de2008red}. If $\mu$ Cep were at the distance of the LMC, the cold dust emitting at 100$\micron$ would be too faint to be observable, at a level of around 0.2 Jy (before we account for a factor of 2 lower dust-to-gas ratio for the LMC). It is therefore possible that the enhanced extinction we observe for stars \#1 and \#2 is caused by the stars being surrounded by cold, clumpy dust that emits at similarly low levels.
We considered the possibility that the poor fits to the JHK and mid-IR photometry for stars \#1 and \#2 are due to extreme variability. If the mid-IR data we used in our analysis were taken at a time when the near-IR brightness of these stars was lower than when the 2MASS data were taken, this would cause our best fit SED to overestimate the flux at the JHK wavelengths. Star \#1 ($\equiv$HV 6002) is variable in the J and H bands by 0.13 mag and 0.11 mag, respectively (from minimum observed brightness to maximum observed brightness), and the 2MASS photometry we use in our analysis corresponds to the peak of this variability. In the V-band this variability is higher ($\sim$0.6 mag). We find that even at maximum brightness, the V-band photometry (corrected for foreground reddening) does not fit with our best fit SED. When we further de-redden the V-band photometry for the intrinsic reddening implied by the difference between our fit and the JHK photometry, we find the V-band photometry fits well with the best fitting SED with no tuning. We therefore conclude variability cannot explain the missing flux at JHK from our mid-IR photometry fits for stars \#1 and \#2. However, if we attribute this extra reddening to extinction, this could provide a self-consistent explanation.
\subsection{Effects of using a shallower density distribution}
\cite{shenoy2016searching} presented a recent study of cool dust around the hypergiant stars VY Canis Majoris and $\mu$ Cep. Using photometry and DUSTY modeling to derive $\dot{M}$\ values, they adopted a fixed inner radius temperature of $T_{\rm in}$=1000K and a power law dust mass density distribution ($\rho$(r) $\propto$ r$^{-q}$) with a single index q throughout the shell. They then tested a range of optical depths and a range of power law indices q$\leq$2. They found that a power law with q=2 did not produce enough cool dust to match the long wavelength end of the observed SED, instead concluding that a power law of $\rho$(r) $\propto$ r$^{-1.8}$ was more appropriate. This implies $\dot{M}$\ decreases with time, since there was more dust present at large radii than there would be for a constant $\dot{M}$.
By setting a fixed $T_{\rm in}$ at the sublimation temperature for silicate dust, \cite{shenoy2016searching} are left with too little cool dust at large radii. However, it is possible that the data could be fit equally well by fixing q=2 and allowing $T_{\rm in}$ to vary. We tested this for $\mu$ Cep by creating a model using the best fit parameters found by \cite{shenoy2016searching} with the same density distribution ($T_{\rm in}$=1000K, $\tau_{37.2\micron}$=0.0029 and q=1.8), and then attempted to fit this spectrum using a q=2 density law while allowing $T_{\rm in}$ to vary. We found that a model with an inner dust temperature of 600K and a q=2 density law fit Shenoy et al.'s model to better than $\pm$10\% at all wavelengths $\leq$70$\micron$, comparable to the typical photometric errors. If we include the 150$\micron$ data-point, noting that Shenoy et al.'s best-fit model overpredicted the flux of $\mu$ Cep at this wavelength, we can again fit the q=1.8 model with a steady state wind by adjusting the $T_{\rm in}$ value to 500K, giving a fit to better than 15\% at all wavelengths.
\cite{shenoy2016searching} also fit intensity profiles to the PSF of $\mu$ Cep. Models were computed using different density power law indices (q=1.8 and q=2) and a constant inner dust temperature of 1000K. Shenoy et al. concluded the PSF of $\mu$ Cep was best matched by an intensity profile of q=1.8 and $T_{\rm in}$ = 1000K out to 25 arcseconds. To check the robustness of this conclusion, we created DUSTY models using the model atmosphere in our grid most similar to that of Shenoy et al. (MARCS, $T_{\rm eff}$ = 3600K), with the same parameters as in Shenoy et al. (q=1.8, $T_{\rm in}$ = 1000K). We then also created a second DUSTY model using the parameters we found to give an equally good fit to the SED (q=2, $T_{\rm in}$ = 600K, discussed previously). The intensity profiles for both of these models were convolved with the PSF from Shenoy et al. We found the two models to be indistinguishable for both the SED and the intensity profile out to 25 arcseconds. From this we conclude that the $\mu$ Cep data can be equally well modelled by a steady state wind and a cooler inner dust temperature.
A density power law index q<2 implies a mass-loss rate that decreases over time. Specifically, if $R_{\rm out}$ = 1000$R_{\rm in}$ then $\dot{M}$\ will be found to decrease by a factor of 1000$^{2 - q}$ in the time it takes for the dust to travel from $R_{\rm in}$ to $R_{\rm out}$. For q=1.8, $\dot{M}$\ would decrease by a factor of 4 in the time it takes for the dust to travel to the outer radius. In the case of $\mu$ Cep, \cite{shenoy2016searching} concluded that the $\dot{M}$\ must have decreased by a factor of 5 (from $5 \times 10^{-6}$ to $1 \times 10^{-6}$ M$_\odot$ yr$^{-1}$) over a 13,000 year history. If $\mu$ Cep's $\dot{M}$\ increases as the star evolves to higher luminosities, as we have found for the RSGs in NGC 2100\footnote{Although $\mu$ Cep has a higher initial mass and metallicity compared to NGC 2100, all evolutionary models predict an increase in luminosity with evolution, with the length of the RSG phase depending on the mass loss.}, then this is inconsistent with the conclusions of Shenoy et al. This inconsistency can be reconciled if we assume the winds are steady-state (q=2) and allow $T_{\rm in}$ to be slightly cooler. From our best fit q=2 model we find an $\dot{M}$\ value of $3.5 \times 10^{-6}$ M$_\odot$ yr$^{-1}$, corresponding approximately to a density-weighted average of Shenoy et al.'s upper and lower mass loss rates.
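The factor-of-4 decline quoted above follows directly from the power law. As an illustrative check (this is not code from the paper; the function name is ours), for $\rho(r) \propto r^{-q}$ with $R_{\rm out} = 1000R_{\rm in}$:

```python
# Illustrative check (not the paper's code): for a dust density law
# rho(r) ~ r**-q with R_out = 1000 R_in, the implied decline in the
# mass-loss rate over the dust crossing time is 1000**(2 - q).

def mdot_decline_factor(q, radius_ratio=1000.0):
    """Factor by which Mdot must have decreased for an r**-q density law;
    a steady-state wind (q = 2) gives a factor of 1."""
    return radius_ratio ** (2.0 - q)

print(mdot_decline_factor(1.8))  # ~3.98, i.e. the factor of ~4 quoted above
```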
As a further test of our conclusion that $\dot{M}$\ increases with evolution, we ran our fitting procedure and this time set our $T_{\rm in}$ to a constant value of 1200K. We still find an increase in $\dot{M}$\ with evolution. Although the fits at this constant $T_{\rm in}$ are worse at longer wavelengths, the warm dust (i.e. the most recent ejecta) is still accurately matched at shorter wavelengths (<8$\micron$). This relative insensitivity of $\dot{M}$\ to the inner dust radius is illustrated in Fig. \ref{fig:allcont}, where the contours of constant $\dot{M}$\ run parallel to the $\chi^2$ trenches. This again shows the degeneracy between optical depth and $T_{\rm in}$, whereby many combinations of the two result in the same value of $\dot{M}$. Even when fixing $T_{\rm in}$, we still find a positive correlation between $\dot{M}$\ and luminosity.
\subsection{Consequences for stellar evolution}
We find a clear increase in $\dot{M}$\ with RSG evolution, by a factor of $\sim$40 through the lifetime of the star. These results are well described by mass-loss rate prescriptions currently used by some stellar evolution models, particularly dJ88, which matches the $\dot{M}$\ of the most evolved RSGs in our study (see Fig. \ref{fig:mdotvsL}). We find very little spread in the $\dot{M}$--$L_{\rm bol}$ relation, unlike that observed for field RSGs \citep[e.g.][]{van2005empirical}. The spread observed in previous results could be due to a varying $M_{\rm initial}$ in the sample stars. By focussing our study on a coeval star cluster we have kept metallicity and initial mass fixed, showing that the mass-loss rate prescriptions fit well for LMC metallicity and $M_{\rm initial}$ of 14M$_\odot$.
Mass loss due to stellar winds is a hugely important factor in determining the evolution of the most massive stars. There is uncertainty about the total amount of mass lost during the RSG phase, and therefore about the exact nature of the immediate SNe progenitors. \cite{meynet2015impact} studied the impact of $\dot{M}$\ on RSG lifetimes, evolution and pre-SNe properties by computing stellar models for initial masses between 9 and 25M$_\odot$ and increasing the $\dot{M}$\ by factors of 10 and 25. The models were computed at solar metallicity (Z$\sim$0.014) for both rotating and non-rotating stars. It was found that stronger $\dot{M}$\ had a significant effect on the populations of blue, yellow and red supergiants. It has been discussed previously that yellow supergiants (YSGs) could be post-RSG objects \cite[e.g.][]{georgy2012yellow,yoon2010evolution}, suggesting a possible solution to the ``missing'' Type IIP SNe progenitors. \cite{georgy2015mass} also discuss the case for an increased $\dot{M}$\ during the RSG phase. By increasing the standard $\dot{M}$\ by a factor of 3 in the models, \cite{georgy2015mass} find a blueward motion in the HRD for stars more massive than 25M$_\odot$ (non-rotating models) or 20M$_\odot$ \citep[rotating models, see][]{georgy2012yellow}.
As can be seen in Fig. \ref{fig:mdotvsL}, we find the accepted $\dot{M}$\ prescriptions commonly used in stellar evolution codes fit well when the variables Z and $M_{\rm initial}$ are fixed. For this $M_{\rm initial}$ ($\sim$15M$_{\odot}$) and at LMC metallicity, altering the $\dot{M}$\ prescriptions seems unjustified. Increasing the $\dot{M}$\ by a factor of 10 \citep[as in][]{meynet2015impact} would result in a strong conflict with our findings.
We plan to study this further by looking at higher mass RSGs in Galactic clusters at solar metallicity. This will allow us to make a better comparison to the evolutionary predictions discussed by \cite{meynet2015impact} and \cite{georgy2015mass}. In addition, the Type IIP SNe that have been observed have all been of solar metallicity, so it will be possible to make more accurate comparisons.
\subsubsection{Application to SNe progenitors and the red supergiant problem}
In the previous sections, we have found that the most evolved stars in the cluster appear more reddened than others within the cluster. We now ask the question: if star \#1 were to go SN tomorrow, what would we infer about its initial mass from limited photometric information? This is relevant in the context of the ``red supergiant problem'', first identified by \cite{smartt2009death} and updated in \cite{smartt2015observational}. Here it is suggested that RSG progenitors to Type IIP SNe are less massive than predicted by stellar evolution theory. Theory and observational studies strongly suggest that the progenitors to Type II-P events are red supergiants (RSGs) and could be anywhere in the range of 8.5 to 25M$_{\odot}$ \citep[e.g.][]{meynet2003stellar,levesque2005physical}. However, no progenitors appeared at the higher end of this predicted mass range, with an upper limit of 18M$_{\odot}$. Many of the luminosities (and hence masses) in this study were based on upper limit single band magnitudes only. In each case, \cite{smartt2009death} assumed a spectral type of M0 ($\pm$ 3 subtypes) and hence a BC$_v$ of -1.3 $\pm$ 0.3. The level of extinction considered was estimated from nearby stars or from Milky Way dust maps \citep{schlegel1998maps}. The presence of enhanced reddening that may occur at the end of the RSG's life, such as we observe for the two most evolved stars in our study (\#1 and \#2), was not considered.
We now apply similar assumptions to those of \cite{smartt2009death} to star \#1, to see what we would infer about the star's initial mass were it to explode tomorrow. We find an excess reddening between J-K of 0.2 and an excess between H-K of 0.15, assuming $T_{\rm eff}$\ = 3900K. If we attribute this reddening to extinction, this implies an average $K_s$-band extinction of $A_K$ = 0.23 $\pm$ 0.11, leading to an optical $V$-band extinction of $A_V$ = 2.1 $\pm$ 1.1 \citep[based on the LMC extinction law of][]{koornneef1982gas}. If we take this star's measured V-band magnitude \citep[$m_V$ = 13.79,][]{bonanos2009spitzer} and adjust to $m_{\rm bol}$ using the bolometric correction BC$_v$ = -1.3 \citep[in line with][]{smartt2009death}, the measured L$_{\rm bol}$ without considering any extra extinction is 10$^{4.33}$L$_\odot$. When we factor in the extra reddening, this increases to 10$^{5.14 \pm 0.44}$L$_\odot$, in good agreement with the luminosity we derived from integration under the best fit DUSTY spectra. This increase has a significant effect on the mass inferred. When extinction is {\it not} considered, a mass of 8M$_\odot$ is found. From mass tracks, we have determined the initial mass of the cluster stars to be in the range of 14M$_\odot$-17M$_\odot$. Hence, the mass determined for the most evolved star in the cluster from single band photometry is clearly underestimated when applying the same assumptions as used by \cite{smartt2009death}. When extinction {\it is} taken into account, the mass increases to $\sim$17 $\pm$ 5 M$_\odot$ (in close agreement with the mass inferred from cluster age, see Section 3.2).
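The arithmetic behind this luminosity revision can be sketched in a few lines (a simplified check, not the paper's code: it assumes the extra extinction simply dims the star by $A_V$ magnitudes, i.e. that the correction raises $\log L$ by $A_V/2.5$ dex with the bolometric correction unchanged):

```python
# Simplified sketch of the extinction correction for star #1.
# Assumptions (ours): Delta(log L) = A / 2.5 dex with the bolometric
# correction held fixed; the A_V/A_K ratio (~9) is that implied by the
# quoted values, for an LMC-like extinction law.

A_K = 0.23                      # extra Ks-band extinction (mag)
A_V = 2.1                       # corresponding V-band extinction (mag)
logL_no_extinction = 4.33       # log(L/Lsun) from V-band photometry alone

delta_logL = A_V / 2.5          # dimming in dex removed by the correction
logL_corrected = logL_no_extinction + delta_logL
print(round(logL_corrected, 2)) # 5.17: consistent within the quoted
                                # +/-0.44 with the stated 10**5.14 Lsun
```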
An alternative explanation for the redder colours of \#1 and \#2 is that they may have very late spectral types. Indeed, spectral types have been speculated to become later as RSGs evolve \citep{negueruela2013population,davies2013temperatures}. A colour of (J-K) = 0.17 would imply a supergiant of type M5 \citep{koornneef1983near,elias1985m}. If we consider stars \#1 and \#2 to be of this spectral type, this would require a BC$_v$ of approximately -2.3, giving a luminosity of $\sim$ 10$^{4.73} L_{\odot}$. This would lead to an inferred mass of 11 M$_\odot$, an increase on the 8M$_\odot$ inferred when the star was assumed to be of type M0, but still lower than the 14M$_\odot$ - 17M$_\odot$ found from mass tracks.
Based on the enhanced reddening we have observed for stars \#1 and \#2 it is interesting to see what effect an increased level of extinction would have on other progenitors studied by \cite{smartt2009death}. We considered three case studies, the progenitors to SN 1999gi, 2001du and 2012ec (of which SN 1999gi and 2001du are based on upper limits, with SN 2012ec having a detection in one band). We have chosen these SNe as they have host galaxies with sub-solar metallicity comparable to the LMC.
\begin{itemize}
\item{SN 1999gi}
The progenitor site of SN 1999gi was first studied by \cite{smartt2001upper}; the 3$\sigma$ detection limit was determined to be $m_{F606W}$ = 24.9, leading to a luminosity estimate of log($L_{\rm bol}$/$L_\odot$)$\sim$4.49 $\pm$ 0.15 and an upper mass limit of 14M$_\odot$. The upper limit to this luminosity was revisited by \cite{smartt2015observational} and revised upwards to 10$^{4.9}$$L_\odot$ once an ad hoc extinction of $A_V$ = 0.5 was applied. Based on STARS and Geneva models, \cite{smartt2015observational} find the upper limit to the progenitor star's initial mass to be 13M$_\odot$. If we assume the progenitor of SN 1999gi had similar levels of extinction to star \#1 ($A_V$ = 2.4, including the ad hoc extinction applied by Smartt), this leads to an extra $R$-band extinction of $A_R$ = 1.4 \citep{koornneef1982gas} and therefore an increase in luminosity of 0.58 dex. This revises the upper limit on initial mass to 23 M$_\odot$, substantially higher than the upper mass originally stated.
\item{SN 2001du}
This RSG progenitor was observed in the F336W, F555W and F814W bands, all of which were non-detections. The 3$\sigma$ upper limit was based on F814W, as this waveband is least affected by extinction. From this, \cite{smartt2009death} find a luminosity of log($L_{\rm bol}$/$L_\odot$)$\sim$4.57 $\pm$ 0.14. When including an extra ad hoc $A_V$ = 0.5, \cite{smartt2015observational} find the mass of this progenitor to be 10M$_\odot$ and a luminosity of 10$^{4.7}$$L_\odot$. If we again assume additional optical extinction of $A_V$ = 1.4 \citep[on top of the ad hoc extinction included by][]{smartt2015observational}, we find an $I$-band extinction of $A_I$ = 0.95, leading to an increase in measured $L_{\rm bol}$ of 0.38 dex. This would revise the upper mass limit for this progenitor to $\sim$ 17 M$_\odot$.
\item{SN 2012ec}
Finally, we consider the RSG progenitor of SN 2012ec, originally discussed by \cite{maund2013supernova}. These authors used a foreground reddening of E(B-V)=0.11 and constrained $T_{\rm eff}$ to < 4000K using an upper limit in the F606W band. Using an $F814W$ pre-explosion image, the progenitor candidate is found to have a brightness of $m_{F814W}$ = 23.39 $\pm$ 0.18. \cite{maund2013supernova} estimate the luminosity to be log($L_{\rm bol}$/$L_\odot$) = 5.15 $\pm$ 0.19, leading to a mass range of 14 - 22 M$_\odot$. If we again apply a level of extinction similar to that we measure for star \#1 to the progenitor of SN 2012ec, we infer a luminosity of log($L_{\rm bol}$/$L_\odot$) = 5.41, leading to a mass of between 22 - 26 M$_\odot$ based on Fig. 2 of \cite{smartt2009death}.
\end{itemize}
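The band-specific luminosity increases quoted in these case studies can be cross-checked with the magnitude-to-dex conversion (our sketch, not the paper's code; it assumes the correction is simply $A/2.5$ dex in the band used):

```python
# Cross-check of the case-study numbers (our sketch, not the paper's code):
# correcting photometry in a band for A magnitudes of extinction raises
# the inferred log-luminosity by A / 2.5 dex.

def dex_increase(A_band):
    return A_band / 2.5

print(dex_increase(1.40))   # SN 1999gi, A_R = 1.4  -> 0.56 dex (paper: 0.58)
print(dex_increase(0.95))   # SN 2001du, A_I = 0.95 -> 0.38 dex
```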
From the three case studies above, we have shown that by including levels of reddening similar to those we find in the most evolved stars in NGC 2100, the initial mass estimates for Type IIP SN progenitors increase substantially. When applied to all objects in the \cite{smartt2009death} sample, this may resolve the inconsistency between theory and observations and hence solve the red supergiant problem.
One argument against extinction being the cause of the red supergiant problem comes from X-ray observations of SNe. \cite{dwarkadas2014lack} used the X-ray emission from IIP SNe to estimate the pre-SNe $\dot{M}$\ for RSGs, arguing for an upper limit of 10$^{-5}$M$_\odot$yr$^{-1}$. By using the mass loss rate - luminosity relation of \cite{mauron2011mass} and the mass-luminosity relation from STARS models \citep{eggleton1971evolution}, this upper limit to the mass-loss rate was transformed into an upper mass limit of 19M$_\odot$, in good agreement with \cite{smartt2009death}. While this number is in good agreement with \cite{smartt2009death}, we estimate that the errors on this measurement must be substantial. \cite{dwarkadas2014lack} converts an X-ray luminosity into a value of $\dot{M}$\ (a conversion which must have some systematic uncertainties, but as we cannot quantify them we optimistically take them to be zero), and from this $\dot{M}$\ finds a luminosity of the progenitor using the calibration in \cite{mauron2011mass}. This calibration between $\dot{M}$\ and luminosity has a large dispersion of a factor of ten (see Fig. 5 of \citealt{mauron2011mass}), but if we are again optimistic we take this to be half that, a factor of five. From this, a progenitor mass was calculated under the assumption of a mass-luminosity relation in which RSG luminosity scales as L $\sim$ M$^{2.3}$, increasing the errors further. Even with our optimistic estimates, we find the error to be $\pm$\ a factor of two, or around 19$\pm$10M$_\odot$. Therefore, we conclude that X-ray observations of IIP SNe provide only a weak constraint on the maximum initial mass of the progenitor star, and cannot rule out that circumstellar extinction is causing progenitor star masses to be underestimated.
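The factor-of-two error quoted at the end of this argument can be reproduced directly (our sketch; `lum_factor` names the assumed multiplicative dispersion in the luminosity calibration):

```python
# Propagating a multiplicative luminosity uncertainty into a mass
# uncertainty via L ~ M**2.3 (our sketch, not the paper's code).

def mass_error_factor(lum_factor, exponent=2.3):
    """Multiplicative mass uncertainty implied by a multiplicative
    luminosity uncertainty lum_factor."""
    return lum_factor ** (1.0 / exponent)

print(round(mass_error_factor(5.0), 2))  # 2.01: a factor-of-5 luminosity
                                         # dispersion -> factor ~2 in mass
```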
\section{Conclusion}
Understanding the nature of the mass loss mechanism present in RSGs remains an important field of study in stellar astrophysics. Here a method of deriving various stellar parameters ($T_{\rm in}$, $\tau_V$, $\dot{M}$) was presented, as well as evidence for an increasing value of $\dot{M}$\ with RSG evolution. By targeting stars in a coeval cluster it was possible to study $\dot{M}$\ while keeping metallicity, age and $M_{\rm initial}$ constrained. As all stars currently in the RSG phase will have the same initial mass to within a few tenths of a solar mass, it is possible to use luminosity as a proxy for evolution, due to those stars with slightly higher masses evolving through the HR diagram at slightly faster rates. From our study we can conclude the following:
\begin{itemize}
\item The most luminous stars were found to have the highest values of $\dot{M}$, evidenced observationally by colour-magnitude diagrams and also by a positive correlation between bolometric luminosity and $\dot{M}$.
\item Our results are well modelled by various mass-loss rate prescriptions currently used by some stellar evolution groups, such as dJ88 and Reimers', with dJ88 providing a better fit for the RSGs with stronger $\dot{M}$. We therefore see no evidence for a significantly increased $\dot{M}$\ during the RSG phase, as has been suggested by various stellar evolutionary groups.
\item We also presented extinction values for each star, first determined from DUSTY models and next determined by isochrone fitting. While the warm dust created low extinction values in the optical range ($A_V$$\sim$0.01 mag), isochrone fitting showed that RSGs may have an intrinsic optical extinction of approximately $A_V$ = 0.5mag. This extinction cannot come from the warm inner dust, but may come from clumpy cool dust at larger radii. This supports the suggestion that RSGs create their own extinction, more so than other stars in the same cluster.
\item We also find that the two most luminous (therefore most evolved) stars in our sample show enhanced levels of reddening compared to the other RSGs. If we attribute this reddening to further extinction, this implies an average $K_s$-band extinction of $A_K$ = 0.23 $\pm$ 0.11. We do not find evidence for cold dust emitting at wavelengths of 100$\micron$ as we first suspected, so we do not yet know the source of this extra reddening towards the RSGs.
\item When taking the enhanced reddening into account it seems the inferred progenitor masses to Type II-P SNe often increase significantly, providing a potential solution to the red supergiant problem. If this level of extinction is applied to all known RSG progenitors (assuming all RSGs show enhanced reddening at the end of their lives) the inconsistency between theory and observations may be resolved.
\end{itemize}
Future work will involve applying this technique to RSGs at solar metallicity to see if the mass-loss rate prescriptions are still appropriate. We also plan to apply this technique to clusters where the stars have higher initial masses closer to the upper RSG limit.
\section*{Acknowledgements}
We thank the anonymous referee for comments which have helped us improve the paper. We thank Rolf-Peter Kudritzki and Stephen Smartt for useful discussions. We also acknowledge the use of the SIMBAD database, Aladin, IDL software packages and astrolib.
\bibliographystyle{mnras}
\section{Introduction}\label{sec:introduction}
During the past 1\,Myr (the late Pleistocene), the polar ice sheets grew slowly (glaciation) then retreated abruptly (deglaciation or glacial termination) repeatedly, with an interval of about 100\,kyr \citep{hays76}. These quasi-periodic glacial-interglacial cycles dominated terrestrial climate change. They are recorded by paleoclimatic proxies such as $\delta^{18}$O (the scaled $^{18}{\rm O}/^{16}{\rm O}$ isotope ratio) in foraminiferal calcite, which is sensitive to changes in global ice volume and ocean temperature. Following on from the work of Adh\'emar, Croll, and others, Milankovitch proposed that climate change is driven by the insolation (the received solar radiation) during the northern hemisphere summer at northerly latitudes \citep{milankovitch41}. This insolation depends on the Earth's orbit and axial tilt (obliquity), and Milankovitch suggested that through various climate response mechanisms, variations in these orbital elements -- in particular eccentricity, obliquity, and precession\footnote{This involves both the orbital and the axial precession.} -- can cause climate change (``Milankovitch forcing''). Many studies have broadly confirmed Milankovitch's theory and the role of Milankovitch forcing in driving Pleistocene climate change, for example by spectral analyses of paleoclimatic time series derived from deep-sea sediments \citep{hays76, shackleton73,kominz79}. These studies have demonstrated that the climate variance is concentrated in periods of about 19\,kyr, 23\,kyr, 42\,kyr and 100\,kyr which are close to the dominant periods in precession ($\sim$23 and 19\,kyr), obliquity ($\sim$41\,kyr), and eccentricity ($\sim$100 and 400\,kyr).
There are, however, several difficulties in reconciling the Milankovitch theory with observation. Two in particular arise when trying to explain the 100\,kyr cycles. The first is the transition from the 41\,kyr dominant period in climate variations to a 100\,kyr dominant period at the mid-Pleistocene around 1\,Myr ago (hereafter ``Myr ago'' is written ``Ma''). The second difficulty is generating 100\,kyr sawtooth variations from orbital forcings and climate response mechanisms (\citealt{imbrie93}, \citealt{huybers07}, \citealt{lisiecki10}). On the one hand, and as shown in Figure \ref{fig:milankovitch_diagram}, the onset of 100\,kyr power at the mid-Pleistocene transition (MPT) occurs without a corresponding change in the summer insolation at high northern latitudes (represented by the daily-averaged insolation on 21 June at $65^{\circ}$N). On the other hand, the $\sim$100\,kyr eccentricity cycle produces only negligible 100\,kyr power in seasonal or mean annual insolation variations, despite its modulation of the precession amplitude. Furthermore, the variations of eccentricity and the northern summer insolation are weak while the 100\,kyr climatic variations are strong, notably in marine isotope stage (MIS) 11 (see Figure \ref{fig:milankovitch_diagram} and \citet{imbrie80,howard97}). These problems are referred to as the ``100\,kyr problem'' \citep{imbrie93}.
\begin{figure*}[ht!]
\centering
\includegraphics[scale=0.8]{Milankovitch_model.pdf}
\caption{Climate variations over the Pleistocene. The present day is at time zero on the right. The $\delta^{18}$O record (lower solid line) stacked by \cite{lisiecki05} is compared with the daily-averaged insolation at the summer solstice at $65^{\circ}$N, $\bar{Q}^{\rm day}_{65^{\circ}{\rm N}}$ (upper solid line), the obliquity (dashed line), and the eccentricity (dotted line) calculated by \cite{laskar04}. The latter two have been scaled to have a common amplitude. The grey region around $-1000$\,kyr represents the MPT extending from $-1250$\,kyr to $-770$\,kyr \citep{clark06}. The grey bar extending from $-423$ to $-362$\,kyr represents marine isotope stage (MIS) 11. The $\delta^{18}$O variations are dominated by 41\,kyr and 100\,kyr cycles before and after the MPT respectively.}
\label{fig:milankovitch_diagram}
\end{figure*}
Various models with different climate forcings and response mechanisms have been proposed to solve the 100\,kyr problem. Many are based on either deterministic climate forcing models or stochastic internal climate variations. The former proposes that the 100\,kyr cycles are driven by orbital variations, particularly precession and eccentricity \citep{imbrie80,paillard98,gildor00}. Many models treat the insolation variation as a pacemaker which sets the phase of the glacial-interglacial oscillation by directly controlling summer melting of ice sheets \citep{gildor00}. In the latter hypothesis, stochastic internal climate variability plays the main role in generating the 100\,kyr glacial cycles \citep{saltzman82,pelletier03,wunsch03}. A general approach is to combine the deterministic and stochastic elements within a framework of nonlinear dynamics, which allows for the occurrence of bifurcation and synchronisation in the climate system (see review by \citealt{crucifix12b}).
Other proposed hypotheses include glaciation cycles controlled by the accretion of interplanetary dust when the Earth crosses the invariable plane \citep{muller97} or by the cosmic ray flux modulated by the Earth's magnetic field (measured as the geomagnetic paleointensity, GPI; \citealt{christl04,courtillot07}). Some models also try to explain the MPT with \citep{raymo97, paillard98,honisch09,clark06} or without \citep{huybers09,lisiecki10,imbrie11} an internal change in the climate system.
The above models comprise both climate forcings and responses. According to various studies \citep{saltzman87,maasch90,ghil94,raymo97,paillard98,clark99,tziperman03,ashkenazy04}, climate forcings frequently determine the time of occurrence of some climate feature, such as the onset of deglaciation.
Many recent studies have employed concepts from chaos theory to address the problem of climate change \citep{crucifix12b,parrenin12,crucifix13,mitsui14,ashwin15,williamson15}, which then allow the concept of ``pacing'' to be described more rigorously as a forcing mechanism. \cite{huybers11} noted that many tens of pacing models have been proposed, yet we lack the means to choose between them.
Our current work aims to compare different forcing mechanisms by using a simple ice volume model for the Pleistocene glacial-interglacial cycles. We adopt the pacing model given by \cite{huybers05} and combine it with different forcings in order to predict the glacial terminations, which are identified from several $\delta^{18}$O records. Our models do not describe the physical mechanism of the climate response to external forcings. We aim instead only to measure the role of different forcings in determining the times of deglaciations. Due to the large and rapid change in ice volume at deglaciation, these times are relatively easy to identify, so the time uncertainties associated with identification are small. They are nonetheless still affected by the overall uncertainty in the chronology of the $\delta^{18}$O record \citep{huybers05}.
A common approach for assessing a model is to use p-values to reject a null hypothesis \citep{huybers05,huybers11}. However, it is well established that p-values can give very misleading results \citep{berger87,jaynes03,christensen05,bailer-jones09,feng13}, so we instead compare models using the Bayesian evidence. This compares models on an equal footing and takes into account the different flexibility (or complexity) of the models \citep{kass95,spiegelhalter02,vonToussaint11}.
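To make the comparison concrete: the evidence of a model $M$ is the likelihood averaged over its prior, $Z = \int p(D|\theta,M)\,p(\theta|M)\,{\rm d}\theta$, and models are compared via the ratio of their evidences. The following toy numerical sketch is not the implementation used in this paper; the model, prior, and prior samples are invented for illustration:

```python
import math

# Toy sketch of a Bayesian evidence estimate by simple Monte Carlo over
# prior samples: Z is approximated by the mean likelihood over draws
# from the prior. Model and prior here are invented for illustration.

def log_evidence(log_likelihood, prior_samples):
    likes = [math.exp(log_likelihood(theta)) for theta in prior_samples]
    return math.log(sum(likes) / len(likes))

# A more flexible model is automatically penalised: a wider prior spreads
# probability over parameter values that fit the data poorly.
grid = [i / 100.0 for i in range(-500, 501)]   # samples from a wide uniform prior
narrow = log_evidence(lambda t: -0.5 * t * t, [g for g in grid if -1 <= g <= 1])
wide = log_evidence(lambda t: -0.5 * t * t, grid)
```

Here the narrow-prior model attains the higher evidence, illustrating the automatic complexity penalty mentioned above.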
This paper is organized as follows. In section \ref{sec:data} we assemble the data -- stacked $\delta^{18}$O records -- and identify the glacial terminations. In section \ref{sec:bayes} we summarize the Bayesian inference method as we use it. We build models based primarily on orbital elements to predict the Pleistocene glacial terminations in section \ref{sec:model}. These are compared for different data sets and time scales in section \ref{sec:comparison}. We perform a test of sensitivity of the results to the model parameters and choice of time scales in section \ref{sec:sensitivity}. Finally, we discuss our results and conclude in section \ref{sec:conclusion}.
\section{Data}\label{sec:data}
\subsection{$\delta^{18}$O from a depth-derived age model}\label{sec:delta18O}
The past climate can be reconstructed from isotopes recorded in ice cores or deep sea sediment cores. Air bubbles trapped at different depths in ice cores can be used to reconstruct the past atmospheric temperature, for example. Ice cores have so far been used to trace the climate back to about 800\,kyr \citep{augustin04}. In order to reconstruct the climate back to 2\,Ma, the $\delta^{18}$O ratio recorded in the calcite (CaCO$_3$) in foraminifera fossils (including species of benthos and plankton) in ocean sediment cores can be used. We use the $\delta^{18}$O ratio as a measure of variations in the global ice volume, although we note that this is also sensitive to the temperature and isotope composition of seawater, for which corrections can be made. For a discussion of the interpretation of marine calcite $\delta^{18}$O see for example \cite{shackleton67} and \cite{mix84}.
In order to calibrate $\delta^{18}$O measurements and to assign ages to sediment cores, one could assume either a constant sedimentation rate (determined using radiometrically dated geomagnetic reversals), or a constant phase relationship between $\delta^{18}$O and an insolation forcing based on the Milankovitch theory (see \citealt{huybers04} for details). The former is the ``depth-derived age model'' \citep{huybers04, huybers07}. The latter is referred to as ``orbital tuning'' \citep{imbrie84,martinson87,shackleton90}. Clearly this latter method is not appropriate for testing theories related to Milankovitch forcings, because it already assumes a link between $\delta^{18}$O variations and orbital forcings.
\cite{huybers07} (hereafter H07) stacked and averaged twelve benthic and five planktic $\delta^{18}$O records to generate three $\delta^{18}$O global records: an average of all $\delta^{18}$O records (``HA'' data set); an average of the benthic records (``HB'' data set); an average of the planktic records (``HP'' data set).\footnote{The planktic $\delta^{18}$O records may not produce a stack as good as benthic records because surface water is less uniform in temperature and salinity than the deep ocean \citep{lisiecki05}.} In addition to these three data sets, we also analyze the orbitally tuned benthic $\delta^{18}$O stack compiled by \cite{lisiecki05} (``LR04'' data set), despite its orbital assumptions. The LR04 record was re-calibrated by H07 to generate a tuning-independent LR04 data set (``LRH'' data set; see the supplementary material of H07 for details).
We standardize each of the above $\delta^{18}$O records over the past 2\,Myr to have zero mean and unit variance, to produce what we call the $\delta^{18}$O anomalies as shown in Figure \ref{fig:huybers_data} (DD, ML, MS are explained below). We identify the deglaciations in the next section. We see that the sawtooth 100\,kyr glacial-interglacial cycles become significant over the late Pleistocene while 41\,kyr cycles dominate over the early Pleistocene. From now on, we will use the term ``late Pleistocene'' to mean the time span 1\,Ma to 0\,Ma, and ``early Pleistocene'' to mean 2\,Ma to 1\,Ma.
\begin{figure*}[ht!]
\centering
\includegraphics[scale=0.7]{d18O_data_all_v2.pdf}
\caption{The variation of $\delta^{18}$O with time as determined by a depth-derived age-model (HA, HB, HP, and LRH) and an orbital-tuning model (LR04). The past 2000\,kyr is divided into two parts: the early Pleistocene extending from 2\,Ma to 1\,Ma and the late Pleistocene extending from 1\,Ma to the present.
The deglaciations we identify for each data set are shown in red: the point is the mean time, the error bar is the uncertainty. In the late Pleistocene, we identify three additional sets of terminations: the DD terminations are denoted by blue lines while the ML/MS terminations are denoted by green lines. These each consist of 12 major terminations, which are indicated by the numbers (we use the convention of splitting termination 3 into two events). What we call minor terminations are all the red points which are not major terminations.}
\label{fig:huybers_data}
\end{figure*}
\subsection{Identification of deglaciations}\label{sec:identification}
Rather than trying to model the full time series of $\delta^{18}$O variations, we focus instead only on the times of glacial terminations (deglaciations).
This is because an orbital forcing should determine predominantly the timing of a deglaciation rather than the detailed variation of the ice volume \citep{gildor00,paillard98,huybers05}. This not only simplifies the problem (thus making results more robust), but is also in line with our goal of trying to identify the main pacemakers for deglaciations, rather than trying to model the continuous response of the climate to orbital forcings. Here we describe how we identify the deglaciations.
From Figure \ref{fig:milankovitch_diagram}, we see that the $\delta^{18}$O amplitudes are larger in the late Pleistocene than in the early Pleistocene. This is interpreted to mean that after the MPT, ice sheets both grew to larger volumes and retreated more rapidly to ice-free conditions.
This rapid and abrupt shift from extreme glacial to extreme interglacial conditions defines 11 well-established late-Pleistocene terminations \citep{broecker84,raymo97b}. Because termination 3 is sometimes split into two terminations \citep{huybers11} -- labeled 3a and 3b (Figure \ref{fig:huybers_data}) -- we actually identify 12 {\it major terminations} over the late Pleistocene. The times of these major terminations as established by various publications have been collated by \cite{huybers11} and are given in his supplementary material. Based on his Table S2, we define three sets of terminations which cover just the late Pleistocene:
\begin{itemize}
\item DD: termination times and corresponding uncertainties estimated from the depth-derived timescale in H07;
\item MS: termination times and corresponding uncertainty equal to the median and standard deviation (respectively) of different termination times for each event given in the literature \citep{imbrie84,shackleton90,lisiecki05,jouzel07,kawamura07};
\item ML: termination times as in the MS data set, but with larger uncertainties obtained by adding the time uncertainties of the depth-derived time scales in quadrature with the corresponding uncertainties in the MS data set.
\end{itemize}
These terminations are shown as vertical lines in Figure \ref{fig:huybers_data}.
In addition to these major terminations, there are also minor terminations characterized by transitions from moderate glacial to moderate interglacial conditions. Considering the ambiguity in defining these \citep{huybers05,lisiecki10}, we identify terminations in our $\delta^{18}$O records using the method of H07. A termination is identified when a local maximum and the following minimum (defined as a maximum-minimum pair) have a difference in $\delta^{18}$O larger than one standard deviation of the whole $\delta^{18}$O record. The time of a termination is the mid-point of the maximum-minimum pair and the age uncertainty of this mid-point is calculated from a stochastic sediment accumulation rate model \citep{huybers07}. We identify sustained events in all data sets by filtering $\delta^{18}$O with different moving-average (or ``Hamming'') filters. The data sets are shown in Figure \ref{fig:huybers_data}. We use the term ``major terminations'' to refer to terminations identified in these data sets which coincide with the major terminations in the DD, MS, or ML data sets. All other terminations we refer to as minor terminations. The data on these are listed in Table \ref{tab:terminations}.
Finally, we also define three additional hybrid data sets. As the HA data set is a stack of both benthic and planktic records, we combine the early-Pleistocene terminations identified from the HA data set together with late-Pleistocene terminations from the DD, ML, and MS data sets to generate the HADD, HAML, and HAMS data sets, respectively.
Thus starting from our five original data sets (HA, HB, HP, LR04, LRH), we have a total of 11 data sets of glacial terminations against which we will compare our models (see Table \ref{tab:terminations}).
\begin{table*}
\caption{Terminations (major and minor) identified from different $\delta^{18}$O records using H07's method (HA, HB, HP, LR04 and LRH) and the DD, MS and ML data sets of major terminations. Combining the early Pleistocene terminations of HA with the DD, MS and ML data sets, we obtain the hybrid data sets of HADD, HAMS and HAML. For each column, the termination ages are listed on the left side and the age uncertainties are listed on the right side (also see Figure \ref{fig:huybers_data}). All quantities are in units of kyr. }
\label{tab:terminations}
\centering
\scalebox{0.9}{
\begin{tabular}{|c|cc|cc|cc|cc|cc||cc|cc|cc|}
\hline
&\multicolumn{2}{|c|}{HA}&\multicolumn{2}{c|}{HB}&\multicolumn{2}{c|}{HP}&\multicolumn{2}{c|}{LR04}&\multicolumn{2}{c||}{LRH}&\multicolumn{2}{c|}{DD}&\multicolumn{2}{c|}{MS}&\multicolumn{2}{c|}{ML}\\\hline
\multirow{17}{*}{{\parbox[t]{1.5cm}{Late\\Pleistocene\\(between 1\\and 0\,Ma)}}}&-10&0.81&-10&0.81&-11&1.9&-12&2.2&-12&2.2&-11&1.9&-13&1.8&-13&3.1\\
&-127&5.3&-127&5.3&-127&5.3&-131&6.3&-125&5&-124&5&-128&3.6&-128&6.6\\
&-209&6.6&-209&6.6&-209&6.6&-219&7.5&-208&6.4&-208&6.4&-218&4.3&-218&8.7\\
&-233&6.4&-233&6.4&-233&6.4&-245&7&-233&6.4&-231&6.3&-244&4.8&-244&8.6\\
&-323&6.8&-321&7&-323&6.8&-290&7.5&-321&7&-326&7&-337&4.5&-337&9.8\\
&-415&7.4&-415&7.4&-415&7.4&-335&8.4&-413&7.6&-423&7.1&-421&4.4&-421&8.2\\
&-537&6.5&-535&6.6&-537&6.5&-531&7.3&-581&6.9&-622&5.8&-621&2.7&-621&6.4\\
&-581&6.9&-581&6.9&-537&6.5&-531&7.3&-581&6.9&-622&5.8&-621&2.7&-621&6.4\\
&-621&5.8&-621&5.8&-601&6.4&-581&6.9&-621&5.8&-714&4.5&-712&7.5&-712&8.8\\
&-705&5.9&-705&5.9&-622&5.8&-621&5.8&-705&5.9&-794&3.7&-793&1.8&-793&1.8\\
&-743&5&-742&4.8&-705&5.9&-708&5.4&-741&4.5&-864&5.7&-864&0.84&-864&5.8\\
&-789&4.2&-789&4.2&-745&5.5&-743&5&-788&4.2&-957&5.8&-958&1.7&-958&6.0\\
&-866&5.8&-866&5.8&-787&4.1&-791&4.1&-865&5.7&&&&&&\\
&-911&6&-911&6&-845&8&-867&5.7&-912&6&&&&&&\\
&-955&5.9&-955&5.9&-865&5.7&-915&5.9&-955&5.9&&&&&&\\
&-996&5.5&-996&5.5&-955&5.9&-959&5.7&-978&7&&&&&&\\
&& & & & & &-983&6.5 & &&&&&&&\\
\hline
\multirow{21}{*}{{\parbox[t]{1.5cm}{Early\\Pleistocene\\(between 2\\and 1\,Ma)}}}&-1029&5.6&-1029&5.6&-1030&5.6&-1031&5.5&-1027&5.5&&&&&&\\
&-1080&6.6&-1080&6.6&-1075&6.1&-1085&6.5&-1079&6.5&&&&&&\\
&-1111&8.1&-1111&8.1&-1109&8&-1117&8&-1109&8&&&&&&\\
&-1170&10.4&-1171&10.5&-1149&9.9&-1192&11.4&-1172&10.5&&&&&&\\
&-1235&11.7&-1234&11.7&-1173&10.5&-1244&12&-1234&11.7&&&&&&\\
&-1279&12.3&-1279&12.3&-1235&11.7&-1285&12.3&-1278&12.3&&&&&&\\
&-1316&12.9&-1316&12.9&-1279&12.3&-1325&12.7&-1317&13&&&&&&\\
&-1358&13.2&-1358&13.2&-1324&12.7&-1363&13.1&-1359&13.2&&&&&&\\
&-1403&13.3&-1403&13.3&-1353&13&-1405&13.2&-1405&13.2&&&&&&\\
&-1445&13.4&-1445&13.4&-1407&13.2&-1447&13.3&-1445&13.4&&&&&&\\
&-1485&13.2&-1485&13.2&-1449&13.2&-1493&12.9&-1485&13.2&&&&&&\\
&-1521&12.9&-1521&12.9&-1481&13.1&-1529&12.5&-1521&12.9&&&&&&\\
&-1560&12.9&-1559&12.4&-1521&12.9&-1569&12&-1561&12.3&&&&&&\\
&-1641&10.8&-1642&10.8&-1562&12.3&-1609&11.5&-1608&11.5&&&&&&\\
&-1688&9.8&-1689&9.8&-1607&11.5&-1644&10.7&-1641&10.8&&&&&&\\
&-1741&7.4&-1741&7.4&-1640&10.8&-1694&9.4&-1690&9.7&&&&&&\\
&-1783&6.9&-1783&6.9&-1742&7.4&-1743&7.3&-1741&7.4&&&&&&\\
&-1855&7.7&-1855&7.7&-1784&7&-1783&6.9&-1855&7.7&&&&&&\\
&-1897&7.3&-1897&7.3&-1820&6.9&-1859&7.6&-1855&7.7&&&&&&\\
&-1940&5.8&-1940&5.8&-1856&7.7&-1940&5.8&-1941&5.9&&&&&&\\
&&&&&-1893&7.1&&&&&&&&&&\\
\hline
\end{tabular}}
\end{table*}
As there are dating errors and identification uncertainties, we cannot know exactly when a deglaciation occurred. To take into account these uncertainties, we treat the time of each deglaciation probabilistically by defining a Gaussian distribution with the mean and standard deviation equal to the time and time uncertainty (respectively) of the termination. The terminations in a data set are therefore represented as a sequence of Gaussians, which will be modeled as described in the following section.
\section{Bayesian modelling approach}\label{sec:bayes}
We use the standard Bayesian probabilistic framework (e.g.\ \citealp{kass95,jeffreys61,mackay03,sivia2006}) to compare how well the different models explain the paleontological data. This approach takes into account the measurement errors, accounts consistently for the differing degrees of complexity present in our models, and compares models symmetrically. Our specific methodology is outlined briefly in this section. It is described in more detail in \cite{bailer-jones11, bailer-jones11b}, where we also present arguments why this approach should be preferred to hypothesis testing using p-values.
The posterior probability of a model $M$ postulated to describe a data set $D$ is given by the rules of probability as
\begin{equation}
P(M|D) = \frac{P(D|M)P(M)}{P(D)},
\label{eqn:bayes1}
\end{equation}
where $P(M)$ is the prior of model $M$, and $P(D)$ can be considered here as a normalization constant. $P(D|M)$ is the {\it evidence} of model $M$ which can be written mathematically as
\begin{equation}
P(D|M)=\int P(D|\boldsymbol{\theta},M)P(\boldsymbol{\theta}|M)d\boldsymbol{\theta} \ .
\label{eqn:bayes2}
\end{equation}
$\boldsymbol{\theta}$ is the set of parameters of model $M$, $P(D|\boldsymbol{\theta},M)$ is the {\it likelihood} -- the probability of observing the data $D$ given specific values of the model parameters -- and $P(\boldsymbol{\theta}|M)$ is the {\em prior distribution} of parameters of this model.
Ideally we would evaluate $P(M|D)$ for different models, as this is the probability of a model being true given the observed data. However, this would require that we define {\em all} possible models. Thus in practice we compare models by looking at the ratio of model posterior probabilities.
If we cannot (or choose not to) distinguish between models a priori, then we set $P(M)$ to be equal for all models. It follows from equations~\ref{eqn:bayes1} and \ref{eqn:bayes2} that this ratio for models $M_1$ and $M_2$ is
\begin{equation}
\frac{P(M_1|D)}{P(M_2|D)} = \frac{P(D|M_1)}{P(D|M_2)}=\frac{\int P(D|\boldsymbol{\theta_1},M_1) P(\boldsymbol{\theta_1}|M_1)d \boldsymbol{\theta_1}}{\int P(D|\boldsymbol{\theta_2},M_2) P(\boldsymbol{\theta_2}|M_2)d \boldsymbol{\theta_2}}.
\label{eqn:bayes3}
\end{equation}
The above ratio of the evidences is called the {\it Bayes factor} and is used to compare how well a model (relative to another model) predicts the data, independent of the values of the model parameters. Note that this does not involve tuning the model parameters, which is why using the evidence takes into account differing model complexities. A (maximum) likelihood ratio test, in contrast, automatically favors more complex models (e.g.\ ones with more parameters), because such models can be tuned to fit the data better without suffering any penalty on account of their increased complexity: an arbitrarily complex model will fit the data arbitrarily well. The evidence automatically balances model complexity against fitting accuracy to find the most plausible model, as described in the above references.
If we had good reasons to adopt unequal model priors (i.e.\ other information favored one model over another), then we should instead look at the product of the Bayes factor with the ratio of these priors, but this is not done here.
To account for the time uncertainties in the glacial terminations, we describe each measured termination time with a Gaussian measurement model
\begin{equation}
P(t_j|\tau_j) = \frac{1}{\sqrt{2\pi}\sigma_j}e^{-(t_j-\tau_j)^2/2\sigma_j^2}
\label{eqn:measurement}
\end{equation}
where $t_j$ is the {\it measured} time of termination $j$ (identified from a stacked $\delta^{18}$O record), $\sigma_j$ is the estimated uncertainty in that measurement and $\tau_j$ is the (unknown) {\it true} termination time.
If $D$ comprises $N$ independently measured events, then the probability of observing the complete data set $D=\{t_j\}$ is just the product
\begin{equation}
\begin{array}{r@{}l}
P(D|\boldsymbol{\theta},M)&{}\displaystyle=\prod\limits_j^N P(t_j|\boldsymbol{\theta},M)\\
&{}\displaystyle=\prod\limits_j^N \int_{\tau_j}P(t_j|\tau_j)P(\tau_j|\boldsymbol{\theta},M)d\tau_j
\end{array}
\label{eqn:likelihood}
\end{equation}
where the second line just follows from the marginalization rule of probability.
$P(t_j|\boldsymbol{\theta},M)$, the {\em event likelihood}, is the probability that an event (termination) $j$ is observed at time $t_j$. It is equal to the integral of the product of the measurement model with the model-predicted probability of the true time of the event, $P(\tau_j|\boldsymbol{\theta},M)$, over all values of the true time. That is, we marginalize (average) over the unknown true time. (This is explained further in section \ref{sec:termination} after we have introduced the models.)
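This marginalization can be carried out by straightforward numerical quadrature. The following is a minimal sketch (our own illustration, not the code used in this work), in which the model-predicted density $P(\tau_j|\boldsymbol{\theta},M)$ is assumed to be tabulated on a grid of candidate true times:

```python
import numpy as np

def event_likelihood(t_j, sigma_j, tau_grid, p_tau):
    """P(t_j|theta,M): marginalize the Gaussian measurement model over the
    unknown true termination time tau_j (second line of equation 4).
    t_j, sigma_j : measured termination time and its uncertainty (kyr)
    tau_grid     : grid of candidate true times tau_j (kyr)
    p_tau        : model-predicted density P(tau_j|theta,M) on tau_grid
    """
    gauss = np.exp(-0.5 * ((t_j - tau_grid) / sigma_j) ** 2) \
            / (np.sqrt(2.0 * np.pi) * sigma_j)
    # trapezoidal approximation to the integral over tau_j
    return np.trapz(gauss * p_tau, tau_grid)
```

For instance, if the model predicted a uniform density for the true time over a wide window, the event likelihood reduces to that uniform density, since the Gaussian integrates to one.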
This model-predicted probability of the times of the events, i.e.\ the deglaciations, is the {\em time series model}. This will be derived in section~\ref{sec:model} from the orbital forcing and pacing models.
We then have all the ingredients we need to calculate the likelihood (equation~\ref{eqn:likelihood}), and therefore the evidence (equation~\ref{eqn:bayes2}) for a given time series model for a given data set. Both the likelihood calculation and the evidence calculation involve an integral. We perform these numerically. The former is one dimensional (over time), so is straightforward. The latter is multi-dimensional (over the model parameters), so we use a Monte Carlo method. This involves drawing parameter samples from the parameter prior distribution, $P(\boldsymbol{\theta}|M)$, calculating the likelihood for each, and then averaging the result. In each case we draw $10^5$ samples.
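The Monte Carlo evidence estimate can be sketched as follows (the function names `likelihood_fn` and `prior_sampler` are placeholders for the model-specific pieces described above, not a definitive implementation):

```python
import numpy as np

def evidence(data, likelihood_fn, prior_sampler, n_samples=10**5, seed=0):
    """Monte Carlo estimate of the evidence P(D|M), equation 2:
    draw theta from the prior P(theta|M), evaluate the likelihood
    P(D|theta,M) for each draw, and average the results."""
    rng = np.random.default_rng(seed)
    likes = [likelihood_fn(data, prior_sampler(rng))
             for _ in range(n_samples)]
    return float(np.mean(likes))
```

The Bayes factor of equation 3 is then simply the ratio of two such estimates, one per model.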
The Bayes factor is a positive number. The larger it is compared to unity, the more we favor model 1 over model 2. Based on the criterion given by \cite{kass95}, we conclude that model 1 should be favored over model 2 if the Bayes factor is more than 10 (and 2 over 1 if it is less than 0.1). If the Bayes factor lies between 0.1 and 10, we cannot favor either model.
\section{Time series models}\label{sec:model}
In section \ref{sec:forcing} we introduce various climate forcing models, such as those based on variations of the Earth orbital parameters. In section \ref{sec:pacing} we define the pacing models. We use this term in a somewhat narrower sense than is often used in the literature \citep{saltzman84,tziperman06}. Here a pacing model is one which modulates the effect of a continuously variable forcing mechanism through the introduction of a threshold. Specifically, the ice volume is unaffected by the forcing mechanism until the ice volume exceeds some threshold, where the value of this threshold depends on the magnitude of the forcing. Having defined the forcing and pacing models, we use them in section \ref{sec:termination} to predict a sequence of glacial termination times. For a given forcing/pacing model $M$, and values of its parameters $\boldsymbol{\theta}$, this is the term $P(\tau_j|\boldsymbol{\theta},M)$ in equation~\ref{eqn:likelihood}. In section \ref{sec:comparison} we will compare these model-predicted terminations with the measured ones, using the Bayesian approach to compare the overall ability of the models to explain the data.
\subsection{Forcing models}\label{sec:forcing}
Insolation influences the climate in a number of ways, both directly through mechanisms such as heating the lower atmosphere, and indirectly through modifying the ice accumulation rate and other mechanisms \citep{berger78,Berger1978139,saltzman90}. Mainstream thinking holds that climate change is most sensitive to the northern summer insolation at high latitudes because the temperature in continental areas, of which there is more in the northern hemisphere, is critical for ice melting or sublimation \citep{milankovitch41}. The summer insolation at high latitudes depends on the geometry of the Earth's orbit and the inclination of Earth's spin axis, and thus depends on the eccentricity, precession, and obliquity (hereafter referred to collectively as ``orbital elements'', even though obliquity is not orbital).
Variations in these alter how the insolation varies with season (from orbital and axial precession), with latitude (from obliquity changes), and with time scale (e.g.\ eccentricity variations occur at dominant periods of 100\,kyr and 400\,kyr).
Milankovitch proposed that the combination of orbital elements which gives rise to the measured summer insolation at $65^{\circ}$N is crucial to generating the glacial-interglacial cycles \citep{milankovitch41,hays76}. To model orbital forcings more generally, we define an orbital forcing model, $f(t)$, as a combination of eccentricity, precession, and obliquity, which is proportional to the insolation over certain time scales, seasons, and latitudes.
We build the following forcing models based on the reconstructed time-varying eccentricity, $f_{\rm E}(t)$, precession, $f_{\rm P}(t)$, obliquity, $f_{\rm T}(t)$, and four different combinations thereof:
\begin{equation}
\begin{aligned}
f_{\rm E}(t) \,&=\, e(t)\\
f_{\rm P}(t) \,&=\, e(t) \sin(\omega(t)-\phi)\\
f_{\rm T}(t) \,&=\, \epsilon(t)\\
f_{\rm EP}(t) \,&=\, \alpha^{1/2}f_{\rm E}(t) + (1-\alpha)^{1/2}f_{\rm P}(t)\\
f_{\rm ET}(t) \,&=\, \alpha^{1/2}f_{\rm E}(t) + (1-\alpha)^{1/2}f_{\rm T}(t)\\
f_{\rm PT}(t) \,&=\, \alpha^{1/2}f_{\rm P}(t) + (1-\alpha)^{1/2}f_{\rm T}(t)\\
f_{\rm EPT}(t) \,&=\, \alpha^{1/2}f_{\rm E}(t) + \beta^{1/2}f_{\rm P}(t) + (1-\alpha-\beta)^{1/2}f_{\rm T}(t),
\label{eqn:ts_function}
\end{aligned}
\end{equation}
where $e(t)$, $\epsilon(t)$, and $e(t)\sin(\omega(t)-\phi)$ are the eccentricity, obliquity, and precession index (or climatic precession), respectively. $\omega(t)$ is the angle between perihelion and the vernal equinox, and $\phi$ is a parameter controlling the phase of the precession. We use the variations of these three orbital elements over the past 2\,Myr as calculated by \cite{laskar04}. We standardize each of $f_{\rm E}(t)$, $f_{\rm P}(t)$, and $f_{\rm T}(t)$ to have zero mean and unit variance, and then combine them to generate the compound models. $\alpha$ and $\beta$ are contribution factors which determine the relative contribution of each component in the compound models, where $0\leq\alpha\leq 1$ and $0\leq\beta\leq 1$ (with $\alpha+\beta\leq 1$ for the EPT model). In addition to these models, we also use the daily-averaged insolation at 65$^\circ$N on July 21 as a proxy for the Milankovitch forcing, $f_{\rm CMF}(t)$.
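The standardization and square-root weighting can be sketched as follows (the array names stand in for the \cite{laskar04} series; this is an illustration of the construction, not the code used here). The square-root weights keep each compound forcing at unit variance when the components are uncorrelated:

```python
import numpy as np

def standardize(x):
    # zero mean, unit variance over the past 2 Myr
    return (x - x.mean()) / x.std()

def f_EPT(ecc, prec, obl, alpha, beta):
    """Compound forcing f_EPT: square-root-weighted sum of the
    standardized eccentricity, precession, and obliquity series,
    with 0 <= alpha, 0 <= beta and alpha + beta <= 1."""
    return (np.sqrt(alpha) * standardize(ecc)
            + np.sqrt(beta) * standardize(prec)
            + np.sqrt(1.0 - alpha - beta) * standardize(obl))
```

Setting $\beta=0$ recovers $f_{\rm ET}$, and so on for the other two-component models.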
Beyond orbital forcings, we also consider the influence of variations of the Earth's orbital inclination and of the cosmic ray flux. To do this we build an inclination-based forcing model, $f_{\rm Inc}(t)$, using the orbital inclination calculated by \cite{muller97}, and we model the cosmic ray forcing as a geomagnetic paleointensity (GPI) time series (standardized to zero mean and unit variance), $f_{\rm G}(t)$, as collected by \cite{channell09}.
All forcing models and corresponding prior distributions over their parameters (``forcing parameters'') are shown in Table \ref{tab:ts_models}. In this table and the following sections, all parameters are treated as dimensionless variables by setting the time unit to be 1\,kyr (ice volume is on a relative scale). For the precession model, we set $\phi=0$ to treat precession according to the Milankovitch theory (although in section \ref{sec:sensitivity} we will allow the phase of the precession to vary in order to check the sensitivity of our results to this assumption). As we do not have any prior information about the values of the contribution factors in the compound models, we adopt uniform prior distributions over the interval $[0,1]$ for these.
\begin{table*}[t]
\centering
\caption{The termination models and corresponding forcing models and parameters. In addition to any forcing model parameters listed, the termination models have pacing parameters and the background fraction parameter. The prior distributions of these parameters are described in sections \ref{sec:forcing}, \ref{sec:pacing}, and \ref{sec:termination}, respectively. }
\label{tab:ts_models}
\begin{tabular}{llll}
\hline
Termination & Description & Forcing & Forcing model \\
model & & model & parameters\\
\hline
Periodic &100\,kyr pure periodic model & None & --- \\
Eccentricity &Eccentricity& $f_{\rm E}(t)$ & --- \\
Precession &Precession& $f_{\rm P}(t)$ & $\phi$\\
Tilt &Tilt or obliquity& $f_{\rm T}(t)$ & ---\\
EP &Eccentricity plus Precession& $f_{\rm EP}(t)$ & $\alpha$, $\phi$\\
ET &Eccentricity plus Tilt& $f_{\rm ET}(t)$ &$\alpha$\\
PT &Precession plus Tilt& $f_{\rm PT}(t)$ &$\alpha$, $\phi$\\
EPT &Eccentricity plus Precession plus Tilt&$f_{\rm EPT}(t)$&$\alpha$, $\beta$, $\phi$\\
CMF &(Classical) Milankovitch forcing& $f_{\rm CMF}(t)$ &---\\
Inclination &Inclination& $f_{\rm Inc}(t)$ &---\\
GPI &Geomagnetic paleointensity& $f_{\rm G}(t)$ &---\\
\hline
\end{tabular}
\end{table*}
Figure \ref{fig:forcing_models} shows the single-component forcing models (which do not have any adjustable parameters). All forcing models will be included in pacing models and corresponding termination models in the following sections. Hereafter, for each forcing model, the corresponding pacing and termination models share the same name as shown in the first column of Table \ref{tab:ts_models}.
\begin{figure*}[ht!]
\centering
\includegraphics[scale=0.7]{forcing_models-2Myr.pdf}
\caption{The single-component forcing models.
A deglaciation is likely to be triggered by a peak in the forcing. The values of eccentricity, precession, obliquity and Milankovitch forcing (CMF) are calculated by \cite{laskar04}, the orbital inclination relative to the invariable plane is given by \cite{muller97}, and the GPI record is from \cite{channell09}.}
\label{fig:forcing_models}
\end{figure*}
\subsection{Pacing models}\label{sec:pacing}
As described earlier, we use the term ``pacing'' to mean that some aspect of the climate system is independent of external forcings until the climate system reaches a threshold, where the value of this threshold is dependent upon the forcing. We model the pacing effect on ice volume variations using the deterministic version of the stochastic model introduced by \cite{huybers05}. In that model the ice volume at time $t$ is
\begin{equation}
v(t)=v(t-\Delta t)+\eta(t) \quad \quad \text{and if } v(t)>h(t) \text{ then terminate},
\label{eqn:deterministic}
\end{equation}
where
\begin{equation}
h(t)=h_0-af(t),
\label{eqn:threshold}
\end{equation}
and $\Delta t$ is a constant time interval. Thus the ice volume changes in discrete steps until it passes a threshold $h(t)$, which is itself modulated by a climate forcing $f(t)$ with a contribution factor $a$. The initial ice volume is $v_0$ and the {\em background threshold}, $h_0$, is either a constant or can itself vary with time. We set $\eta(t)$ to be unity while the threshold has not been reached; after that the glaciation is terminated by setting $\eta(t)$ to a constant negative value such that the ice volume linearly decreases to 0 within 10\,kyr of the threshold having been exceeded.\footnote{In practice the ice volume can go slightly negative due to the finite value of $\Delta t$, but this is of no practical consequence.} After this, $\eta(t)$ is set back to unity and the next cycle starts.
The threshold and the deglaciation duration are chosen to generate approximately 100 and 41\,kyr glacial cycles \citep{huybers05}.
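A minimal implementation of equations \ref{eqn:deterministic} and \ref{eqn:threshold} can be sketched as follows (our own illustration; the time step and loop structure are our choices, not those of the original code):

```python
def pace(f, v0, h0, a, t0=-2000.0, n_steps=2000, dt=1.0, degl=10.0):
    """Deterministic pacing model: ice volume grows by eta(t)=1 per kyr
    until v(t) exceeds the threshold h(t) = h0 - a*f(t); a termination is
    then recorded and eta(t) is set negative so that v melts linearly to
    zero over the next `degl` kyr, after which the next cycle starts.
    Returns the predicted termination times (kyr; negative = past)."""
    v, rate, t = v0, 1.0, t0
    terminations = []
    for _ in range(n_steps):
        v += rate * dt
        t += dt
        if rate > 0.0 and v > h0 - a * f(t):
            terminations.append(t)
            rate = -v / degl              # melt to zero within degl kyr
        elif rate < 0.0 and v < 1e-9:     # tolerance for float round-off
            v, rate = 0.0, 1.0            # start the next glacial cycle
    return terminations
```

With $a=0$ this reduces to the Periodic model, producing terminations separated by approximately $h_0+10$\,kyr.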
If the contribution factor $a$ is zero, the ice volume will vary with a period modulated by the background threshold, $h_0$. We define this model as the Periodic model. In general the period may vary with time.
However, if
$h_0$ is constant, then the Periodic model has a constant period of value $h_0+10$\,kyr. Because $h_0$ controls the period of ice volume variations, different values of $h_0$ are required to model the 100\,kyr cycles in the late Pleistocene and the 41\,kyr cycles in the early Pleistocene (see Figure \ref{fig:icevolume}). We therefore first build pacing models to separately predict the deglaciations over the early and late Pleistocene using the constant background threshold model. We then use a varying background threshold (either linear or sigmoidal) to try to model the whole Pleistocene. We now describe these models in more detail.
\begin{figure*}[ht!]
\centering
\includegraphics[scale=0.8]{diagram_icevolume_multisigmoid_paper2.pdf}
\caption{Effect of the threshold in the pacing model. Different forms of the background threshold $h_0$ are shown: constant (red), linear (green), sigmoidal (blue, cyan, black). The legend shows the values of the parameters of the linear and sigmoid background thresholds according to equations \ref{eqn:linear} and \ref{eqn:sigmoid} respectively. The Periodic model is achieved using a constant threshold over some time span. By changing it from $h_0=30$ in the early Pleistocene to $h_0=90$ in the late Pleistocene, we can reproduce an abrupt change in the period of ice volume variations from $\sim$41\,kyr to $\sim$100\,kyr.}
\label{fig:icevolume}
\end{figure*}
\subsubsection{Constant background threshold}\label{sec:pacing_constant}
A constant background threshold is appropriate for modeling glacial-interglacial cycles without a transition such as the MPT. One realization of such a pacing model with the threshold modulated by a PT forcing model is shown in Figure \ref{fig:pacing_model}. The ice volume grows until it passes the forcing-modulated threshold. The ice volume then decreases rapidly to zero within the next 10\,kyr. We see that a deglaciation tends to occur when the insolation is near a local maximum. Hence the pacing model (equations \ref{eqn:deterministic} and \ref{eqn:threshold}) can generate $\sim$100\,kyr saw-tooth cycles which enables a forcing mechanism to pace the phase of these cycles.
\begin{figure*}[ht!]
\centering
\includegraphics[scale=0.6]{example_termination_model_1000kyr_paper.pdf}
\caption{A pacing model with threshold $h(t)$ modulated by the PT forcing model with $\alpha=0.5$ and $\phi=0$ (equation \ref{eqn:ts_function}). The pacing model parameters are: background threshold $h_0=90$; initial ice volume $v_0=25$; contribution factor of forcing $a=25$. The dashed line denotes the constant threshold, and the grey line represents the threshold modulated by the PT forcing model, i.e. $h(t)=h_0-af_{\rm PT}(t;\alpha=0.5,\phi=0)$. }
\label{fig:pacing_model}
\end{figure*}
The pacing model has three parameters: $v_0$, $h_0$, $a$. Rather than fixing these to some expected values, we assign a probability distribution to them. This is the prior which appears in equation~\ref{eqn:bayes2}, which shows that by averaging the likelihood over values drawn from this prior we get the evidence for the model.
As described above, a periodic pacing model is generated by adopting a constant threshold, $h(t)=h_0$ and $a=0$. When forcings are added onto the constant threshold (to give $a\neq0$), the ice volume variations then have an average period of about $(h_0+10-a)$\,kyr, because ice volume accumulation tends to terminate at a forcing maximum. For this reason we use different prior distributions on $a$ and $h_0$ depending on whether we
are trying to model the early (41\,kyr cycles) or late (100\,kyr cycles) Pleistocene.
Specifically, we use prior distributions for $v_0$, $h_0$, and $a$ which are uniform over the following intervals (and zero outside): $0<v_0<90\gamma$, $90\gamma<h_0<130\gamma$, $15\gamma<a<35\gamma$, where $\gamma=0.4$ when we model $\sim$41\,kyr cycles and $\gamma=1$ when we model $\sim$100\,kyr cycles.
The range of $v_0$ is just the range of the ice volume variation, while the mean values of the prior distributions of $h_0$ and $a$ with $\gamma=1$ are the fitted values obtained by \cite{huybers11}. For the periodic model, $a$ is zero and $h_0$ has a uniform prior distribution over $70\gamma <h_0< 110\gamma$. In section \ref{sec:sensitivity}, we will check how sensitive our results are to this choice of priors.
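For concreteness, drawing from these uniform priors can be written out explicitly (a sketch with our own function name; `rng` is a NumPy random generator):

```python
import numpy as np

def sample_pacing_prior(rng, gamma=1.0, periodic=False):
    """Draw (v0, h0, a) from the uniform priors of this section.
    gamma = 1 when modeling ~100 kyr (late Pleistocene) cycles and
    gamma = 0.4 when modeling ~41 kyr (early Pleistocene) cycles."""
    v0 = rng.uniform(0.0, 90.0 * gamma)
    if periodic:  # Periodic model: a = 0, wider h0 range
        return v0, rng.uniform(70.0 * gamma, 110.0 * gamma), 0.0
    h0 = rng.uniform(90.0 * gamma, 130.0 * gamma)
    a = rng.uniform(15.0 * gamma, 35.0 * gamma)
    return v0, h0, a
```

Each evidence calculation then averages the likelihood over $10^5$ such draws, as described in section \ref{sec:bayes}.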
\subsubsection{Linear trend background threshold}\label{sec:pacing_linear}
The constant background threshold model is incapable of modeling the transition from the 41\,kyr world to the 100\,kyr world. If we instead treat $h_0$ as a step function, as shown in Figure \ref{fig:icevolume} (red lines), the corresponding pacing model predicts an abrupt MPT at the cost of one extra parameter (the time of the transition). To model the MPT more flexibly, we introduce two further versions of the pacing model in which the background threshold varies with time (linearly and nonlinearly).
Studies have suggested various mechanisms which may be involved in climate change before and after the MPT \citep{saltzman84,maasch90,ghil94,raymo97,paillard98,clark99,tziperman03,ashkenazy04}. H07 suggests that a simple model with a threshold modulated by obliquity and a linear trend can explain changes in glacial variability over the last 2\,Myr without invoking complex mechanisms. To investigate this, we replace the threshold constant $h_0$ with a linear trend in time
\begin{equation}
h_0=pt+q,
\label{eqn:linear}
\end{equation}
where $p$ and $q$ are the slope and intercept of the trend respectively. To predict the transition from 41\,kyr cycles to 100\,kyr cycles with reasonable parameter sets, we adopt the following uniform prior distributions for the pacing parameters: $0<v_0<36$, $0<p<0.1$, $106<q<146$ and $10<a<30$. For the Periodic model we use $a=0$ and a uniform prior for $q$ between 86 and 126. These ranges are adopted so that the pacing model predicts the 41\,kyr and 100\,kyr cycles with similar period uncertainties as produced by the ranges of parameters in the pacing model with a constant background threshold (section \ref{sec:pacing_constant}).
An example of the linear trend is shown with the green line in Figure \ref{fig:icevolume}. If the threshold is not modulated by any forcing (i.e.\ $a=0$, the Periodic model), then the pacing model generates a gradual transition from 50\,kyr cycles at 2\,Ma to 110\,kyr cycles at the present.
\subsubsection{Sigmoid trend background threshold}\label{sec:pacing_sigmoid}
To enable a more rapid onset of the MPT, we introduce another version of the pacing model with a nonlinear trend in the background threshold, defined using the sigmoid function as
\begin{equation}
h_0=0.6k/(1+e^{-(t-t_0)/\tau})+0.4k,
\label{eqn:sigmoid}
\end{equation}
where $k$ is a scaling factor, $t_0$ denotes the transition time, and $\tau$ represents the time scale of the MPT. The uniform priors on the parameters of this version of the pacing model are: $0<v_0< 36$, $90<k<130$, $10<\tau<500$, $10<a<30$, and $-1250<t_0<-700$, motivated by the range of MPT timings given by \cite{clark06}. For the Periodic model we set $a=0$ and change the range of $k$ to $70<k<110$. The reason for choosing these priors is the same as given in section \ref{sec:pacing_linear}.
Figure \ref{fig:icevolume} illustrates this model. A later transition time, $t_0$, shifts the trend toward the present, and a smaller transition time scale, $\tau$, generates a more rapid transition.
The values of $0.6k$ and $0.4k$ in the above equation are set in order to rescale the trend model such that the ice volume threshold including a sigmoid trend allows both $\sim$41\,kyr and $\sim$100\,kyr ice volume variations.
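As a quick sketch, the sigmoid threshold of equation \ref{eqn:sigmoid} can be evaluated directly (the default parameter values below are illustrative, not fitted):

```python
import numpy as np

def sigmoid_threshold(t, k=110.0, t0=-900.0, tau=100.0):
    """Background threshold of the sigmoid model: rises from 0.4k
    (early Pleistocene, ~41 kyr cycles) to k (late Pleistocene,
    ~100 kyr cycles), passing through 0.7k at the midpoint t = t0."""
    return 0.6 * k / (1.0 + np.exp(-(t - t0) / tau)) + 0.4 * k
```

At $t=t_0$ the threshold equals $0.7k$, and the asymptotes $0.4k$ and $k$ correspond to the two regimes either side of the MPT.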
\subsection{Termination models}\label{sec:termination}
Using the same method described in section \ref{sec:data} for the data, we identify glacial terminations in the ice volume time series generated by the pacing models. The age uncertainty of each termination is equal to half of the duration of the termination. As with the data, a single termination is represented as a Gaussian probability distribution over time, which is just the term $P(\tau_j|\boldsymbol{\theta},M)$ in equation \ref{eqn:likelihood} (see section \ref{sec:bayes}). The full set of predicted terminations forms the time series model which we will compare with the data. We use the term ``termination model'' to refer to the combination of a forcing model and a pacing model, which together have a number of parameters. These are listed in Table \ref{tab:ts_models}. Each of these termination models can have a different background threshold model, as explained in section \ref{sec:pacing}.
Figure \ref{fig:termination_model} shows schematically how we compare the model-derived terminations (red line) with the data on one termination (black line). The event likelihood (the integral in equation \ref{eqn:likelihood}) for a termination is calculated by integrating over time the product of the probability distribution of the observed time of the termination, $P(t_j|\sigma_j,\tau_j)$, with the model prediction of the true termination time, $P(\tau_j|\boldsymbol{\theta},M)$. The product of event likelihoods for all terminations in a data set is the likelihood for the termination model with specific values of the parameters of the forcing and pacing model.
By calculating the likelihood for many different values of those parameters (drawn from their prior distributions), and averaging them, we arrive at the evidence for that termination model (equation~\ref{eqn:bayes2}).
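This prior-averaged likelihood can be estimated by simple Monte Carlo. The sketch below assumes uniform priors (as used throughout) and a user-supplied log-likelihood function; the function and argument names are our own, not from the original analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def evidence(log_likelihood, prior_bounds, n_draws=10000):
    """Monte Carlo estimate of the log evidence: the likelihood
    averaged over parameter values drawn from their uniform priors.
    `prior_bounds` maps parameter name -> (low, high)."""
    names = list(prior_bounds)
    lo = np.array([prior_bounds[n][0] for n in names])
    hi = np.array([prior_bounds[n][1] for n in names])
    draws = rng.uniform(lo, hi, size=(n_draws, len(names)))
    log_l = np.array([log_likelihood(dict(zip(names, th))) for th in draws])
    m = log_l.max()                      # log-mean-exp for stability
    return m + np.log(np.mean(np.exp(log_l - m)))
```

Note that this returns the log evidence; Bayes factors are then differences of log evidences.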
To accommodate other contributions from the climate system to the timing of a termination, we add a constant background probability to the termination model. This is defined using the background fraction $b=H_b/(H_b+H_g)$, where $H_b$ is the amplitude of the background and $H_g$ is the difference between the maximum and minimum of the Gaussian sequence.
The background fraction is a parameter of the model which we do not measure, so we assign it a prior (uniform from 0 to 0.1) and marginalize over this too.
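The event likelihood (the integral in equation \ref{eqn:likelihood}) has a closed form when both factors are Gaussian: the integral over time of the product of two Gaussian pdfs is a Gaussian in the difference of the means with the variances summed. The sketch below uses that identity; the equal weighting of predicted terminations and the treatment of the background fraction $b$ as a simple mixture weight are our simplifying assumptions, not the amplitude-based definition above.

```python
import numpy as np

def event_likelihood(t_obs, s_obs, model_times, model_sigmas, b=0.05, span=2000.0):
    """Likelihood of one observed termination (Gaussian at t_obs with
    width s_obs) under a model predicting Gaussian terminations at
    model_times with widths model_sigmas, plus a constant background.
    Uses: integral of product of two Gaussian pdfs = Gaussian in the
    difference of means with summed variances."""
    mu = np.asarray(model_times, dtype=float)
    var = s_obs**2 + np.asarray(model_sigmas, dtype=float)**2
    overlaps = np.exp(-0.5 * (t_obs - mu)**2 / var) / np.sqrt(2.0 * np.pi * var)
    return (1.0 - b) * overlaps.sum() / len(mu) + b / span
```

An observed termination far from every predicted one contributes only the background term, so a single mismatch does not zero out the whole likelihood product.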
\begin{figure*}[ht!]
\centering
\includegraphics[scale=0.6]{example_termination_model_1000kyr_gauss.pdf}
\caption{Schematic illustration of the components in the likelihood calculation (equation \ref{eqn:likelihood}). The red line is the termination model generated from the pacing model shown in Figure \ref{fig:pacing_model}. The black line represents the measured data on termination $j$. Its time and uncertainty are interpreted probabilistically as a Gaussian distribution over time.}
\label{fig:termination_model}
\end{figure*}
Let us summarize our modelling procedure. A forcing model (Figure \ref{fig:forcing_models}) modulates the ice volume threshold (equation \ref{eqn:threshold}) of the pacing model (equation \ref{eqn:deterministic}) from which the
termination model (e.g.\ red line in Figure \ref{fig:termination_model}) is derived. This is then compared with a sequence of terminations identified from a $\delta^{18}$O data set using our Bayesian procedure.
\section{Results of the model comparison}\label{sec:comparison}
\subsection{Evidence and Bayes factor}\label{sec:BF}
We calculate the Bayesian evidence of the termination models listed in Table \ref{tab:ts_models} for each of the data sets shown in Table \ref{tab:terminations}.
We calculate this for terminations extending over three different time spans: 1\,Ma to 0\,Ma, 2\,Ma to 1\,Ma, and 2\,Ma to 0\,Ma. The first time span is the same as that chosen by \cite{huybers11}. However, other studies claim that the onset of 100\,kyr cycles occurred around 0.8\,Ma. We will examine in section \ref{sec:sensitivity} how sensitive our results are to the choice of time span. Depending on the time span in question, we choose the appropriate pacing model, because this determines the dominant period.
The Bayes factor (BF) is just the ratio of the evidence for two models. Rather than reporting Bayes factors for various pairs of models, we will report them for all models relative to a simple reference termination model. This reference model is just a uniform probability distribution over the time of deglaciations, and has no parameters. It corresponds to a constant probability in time of a deglaciation, but its choice is arbitrary as it just serves to put the evidences on a convenient scale.
Bayes factors should only be used to compare different models for a common data set.
This is because their definition requires that the factor $P(D)$ in equation~\ref{eqn:bayes3} cancels out.
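Concretely (a sketch under our reading of the reference model, with illustrative names): the uniform model assigns each termination an event likelihood of $1/T$ over a span $T$, so for $N$ terminations the log Bayes factor relative to it is

```python
import numpy as np

def log_bayes_factor(log_evidence_model, n_terminations, span_kyr=1000.0):
    """Log Bayes factor of a model relative to the uniform reference
    model, whose event likelihood is 1/span for each of the N observed
    terminations, giving log evidence -N*log(span). The span default
    is illustrative (1 Myr analysis window)."""
    log_evidence_uniform = -n_terminations * np.log(span_kyr)
    return log_evidence_model - log_evidence_uniform
```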
\subsubsection{Late Pleistocene (1-0\,Ma)}\label{sec:LP}
The deglaciations identified using H07's method (in the data sets HA, HB, HP, LR04, and LRH) contain many minor terminations which may be better explained by models which predict $\sim$41\,kyr cycles. Thus, we choose constant background thresholds with $\gamma=1$ and $\gamma=0.4$ for all termination models in order to predict 100\,kyr and 41\,kyr variations, respectively, over the past 1\,Myr.
The BF for each termination model relative to the uniform model is shown in Figure \ref{fig:BF_2D}. We see that the HA, HB, LR04, and LRH data sets favor the models with a tilt component and with $\gamma=0.4$. Although compound models such as EPT and CMF sometimes have BFs slightly higher than the Tilt model, precession and eccentricity may not be necessary to explain the terminations identified in these data sets.
The HP data set favors the PT model with $\gamma=1$. This could be caused by a mismatch between the terminations identified in HP and the terminations identified in other data sets. For example, around the time of termination 6 (Figure \ref{fig:huybers_data}), two terminations are identified in HP while only one termination is identified in other data sets. The discrepancy between HP and other data sets is larger before 0.8\,Ma, which indicates a more ambiguous definition of terminations, particularly for planktic $\delta^{18}$O. On account of this, in section \ref{sec:sensitivity} we will narrow the time span to 0-0.8\,Ma (a more conservative time scale of late Pleistocene). Nevertheless, for all the data sets containing minor terminations, tilt is a common factor in the preferred models.
For the DD, ML, and MS data sets, the PT and CMF models with $\gamma=1$ are favored. In other words,
the major terminations are better predicted by a model involving precession and tilt rather than either alone, although tilt alone can pace minor terminations. Because the EPT and CMF models have lower BFs than the PT model, the eccentricity component is unlikely to pace the glacial terminations directly. Yet eccentricity can determine the glacial terminations indirectly through modulating the amplitude of the precession maxima (i.e.\ $e\sin{\omega}$). A similar conclusion was drawn by \cite{huybers11} using p-values.
We note that the rejection of a null hypothesis in this way does not automatically validate the alternative hypothesis. The Bayesian approach allows one to directly compare multiple models in a symmetric fashion.
Since the late Pleistocene is characterized predominantly by major terminations, we conclude that late Pleistocene climate change is paced by a combination of obliquity and precession. This does not automatically imply that there is no link between major terminations and eccentricity variations. Eccentricity may determine the 100 kyr cycles in the late Pleistocene, while obliquity and precession influence the exact timing of the terminations \citep{lisiecki10}. This could be studied in future work by introducing an eccentricity dependence into the pacing model.
\begin{figure*}
\centering
\vspace{-1in}
\includegraphics[scale=0.4]{BFs_2D_late.pdf}
\includegraphics[scale=0.4]{BFs_2D_early.pdf}
\includegraphics[scale=0.4]{BFs_2D_08Myr.pdf}
\includegraphics[scale=0.4]{BFs_2D_whole.pdf}
\caption{The Bayes factors relative to the uniform model for terminations occurring over the past 1\,Myr (upper left), from 2\,Ma to 1\,Ma (upper right), over the past 0.8\,Myr (lower left), and over the past 2\,Myr (lower right). The logarithm of the Bayes factor is shown on a color scale for each model (vertical axis) and data set (horizontal axis). Upper left panel: the models above and below the white line have constant background threshold defined by $\gamma=1$ and $\gamma=0.4$, respectively. Upper right panel: all models have a constant background threshold defined by $\gamma=0.4$. Lower left panel: Same as the upper left panel but for terminations over the past 0.8\,Myr. Lower right panel: the upper, middle, and lower blocks (of ten models each, separated by the white line) use a linear trend, a sigmoid trend, and a constant background threshold (respectively) with $\gamma=0.4$. In all panels except the top right one, the data sets on the left side of the dashed line include minor late-Pleistocene terminations while the data sets on the right side do not.
}
\label{fig:BF_2D}
\end{figure*}
\subsubsection{Early Pleistocene (2-1\,Ma)}\label{sec:EP}
Here we only consider the HA, HP, HB, LR04, and LRH data sets, because the DD, ML, and MS sets have no terminations in the early Pleistocene. We only calculate BFs for models with $\gamma=0.4$ (and not $\gamma=1$), because this reproduces periods on the order of 41\,kyr, and such cycles are obvious in all data sets (Figure \ref{fig:huybers_data}). We exclude the GPI model because the GPI record has a time span of less than 2\,Myr. The BFs for the termination models are shown in the upper right panel of Figure \ref{fig:BF_2D}.
We see that the Tilt model is favored by all data sets. The combination of tilt with other orbital elements does not give a higher BF, so we conclude that the other orbital elements do not play a major role in pacing the deglaciations over the early Pleistocene.
It is important to realise that although the Bayesian evidence generally penalizes more complex models, this does not automatically result in a lower Bayes factor for such models. They can achieve higher Bayes factors if the model is supported by the data sufficiently strongly (see the references in section \ref{sec:bayes}).
\subsubsection{Whole Pleistocene (2-0\,Ma)}\label{sec:WP}
For the whole Pleistocene we use the data sets HA, HB, HP, LR04, and LRH as well as the hybrid data sets, HADD, HAML, and HAMS. We use pacing models with and without a trend threshold to model the terminations. The BFs for the above models and data sets are shown in the lower left panel of Figure \ref{fig:BF_2D}.
For the HA, HB, and LR04 data sets, the Tilt model with $\gamma=0.4$ is favored. Other combinations with the tilt component and with $\gamma=0.4$ yield similar BFs. However, for the HP and LRH data sets, the PT model with a sigmoid trend is favored and this model also gives high BFs for the HA, HB, and LR04 data sets. For all these data sets, the Precession, Eccentricity, and Periodic models have rather low BFs. These results indicate a major role for tilt and a minor role for precession in pacing major and minor Pleistocene deglaciations.
For all the above data sets, the CMF model with $\gamma=0.4$ has a high BF, but not higher than other models with a tilt component. CMF is an optimized version of the EPT model. Faced with different models which give similar Bayes factors, we will normally want to choose the simplest, which here is PT. We will investigate this further in section \ref{sec:sensitivity}.
For the HADD, HAML, and HAMS data sets, the PT model with a threshold modulated by a sigmoid trend is favored, and those compound models with a tilt component also have high BFs. The whole Pleistocene deglaciations appear to be paced by the combination of precession and obliquity. This is consistent with the results for the late-Pleistocene deglaciations.
The physical reason why precession becomes important after the MPT is beyond the scope of our work and is still under debate.
On account of the existence of the MPT, modeling the whole Pleistocene with a constant background threshold model makes little sense, so those corresponding results should not be given much weight. (This corresponds to assigning all those models a smaller model prior, $P(M)$.) More appropriate are the models with linear and sigmoid background thresholds. Among these, we see that the EPT and CMF models have BFs about ten times lower than the PT model.
We conclude that eccentricity does not play a significant role in pacing terminations over the whole Pleistocene. We also find that the PT model with a sigmoid background threshold is more favored than the PT model with a linear background threshold, which indicates that the MPT may not be as gradual as claimed by \cite{huybers07}. We will discuss this further in section \ref{sec:sensitivity}.
According to Figure \ref{fig:BF_2D}, the Inclination and GPI models are not favored, and in fact are less favored than the reference uniform model (as BF$<$1).
Thus we find that the geomagnetic paleointensity does not pace glacial cycles over the last 2\,Myr, although we note that there is some controversy over the link between the GPI and climate change \citep{courtillot07,pierrehumbert08,bard08,courtillot08}. In contrast to the conclusion of \cite{muller97}, we find no evidence for a link between the orbital inclination and ice volume change.
\subsection{Discrimination power}\label{sec:discrimination}
To validate our method as an effective inference tool to select out the true model, we generate simulated data from each model and then evaluate the Bayes factors for all models on these data.
The data are simulated with the following parameters for all models except the Periodic one: $h_0=110\gamma$, $a=25\gamma$, $b=0$, and $v_0=45\gamma$, where $\gamma=1$ for terminations simulated over the last 1\,Myr and $\gamma=0.4$ for the time range 2 to 1\,Ma.
For the Periodic model we instead use $h_0=90\gamma$ and of course $a=0$. Recall that the period of the resulting time series is approximately $h_0+10-a$. Other parameters in the corresponding forcing models are fixed at $\alpha=0.5$ for compound models with two components, $\alpha=0.3$ and $\beta=0.2$ for the EPT model, and $\phi=0$ for models with a precession component.
\begin{figure*}[ht!]
\centering
\includegraphics[scale=0.4]{BFs_2D_sim_late.pdf}
\includegraphics[scale=0.4]{BFs_2D_sim_early.pdf}
\caption{Discrimination power. As Figure \ref{fig:BF_2D} but for simulated data extending over the last 1 Myr (left) and from 2\,Ma to 1\,Ma (right). The dashed line indicates the results for the true model for each data set. Ideally this BF would be much higher than all other BFs in that column.}
\label{fig:BF_2D_sim}
\end{figure*}
The BFs for simulated data over the last 1\,Myr are shown in the left panel of Figure \ref{fig:BF_2D_sim}.
We see that all models based on a single orbital element are correctly selected, although those models combining the correct single orbital element with other elements may also give comparable BFs. When models have similar BFs we would generally want to favor the one with fewest components.
This again corresponds to using a larger value of the model prior, $P(M)$ (see section~\ref{sec:bayes}).
Incorrect models, in contrast, generally receive much lower Bayes factors. For the PT-simulated data set, the PT model is correctly discriminated from the CMF model (a fitted EPT model). We also see that the ET model may not be correctly selected out, since its BF is similar to those obtained for the EP, PT, EPT, and CMF models; the ratios of these Bayes factors are, however, close to unity. The much larger ratios between them for the real data validate our inference of the ET model (see section \ref{sec:comparison}). Figure \ref{fig:BF_2D_sim} shows that the EP model is not favored over the Eccentricity model even when the former is the true model. However, the Eccentricity model is never favored on any of the real data sets, so this misidentification does not occur in practice. In conclusion, this discrimination test indicates that our identification (in section \ref{sec:LP}) of the PT model as the best model for the late Pleistocene is reliable.
We then apply the same test to the period 2-1\,Ma, which uses a different value of $\gamma$ as explained above. The results are shown in the right panel of Figure \ref{fig:BF_2D_sim}. We see that the correct model always has a larger Bayes factor than the other models. Yet we also see that for data simulated from the PT model, the CMF and EPT models have similar BFs as the PT model. However, as the PT model is not as fine tuned as the CMF model and has fewer adjustable parameters than the EPT model, we would generally invoke Occam's razor to select the PT model.
This experiment confirms that Bayesian model comparison and our interpretation of the Bayes factors allows us to select the correct model. We conclude that tilt (or obliquity) is the main ``pacemaker'' of the deglaciations over the last 2\,Myr, while precession may pace the deglaciations over the late Pleistocene. This indicates that precession becomes important in pacing terminations after the MPT. Other climate forcings, including GPI and inclination forcing, are unlikely to pace the deglaciations over the Pleistocene.
\section{Sensitivity test}\label{sec:sensitivity}
We now perform a sensitivity test to check how sensitive a model's BF is to the choices of time scale and model priors.
To do this we first change the time of the onset of the 100\,kyr cycles from 1\,Ma to 0.8\,Ma. We recalculate the BFs and show them in the lower left panel of Figure \ref{fig:BF_2D}. We see that the combination of obliquity and precession (i.e.\ the PT model) still paces the major terminations (DD, ML, and MS) better than obliquity alone. So our conclusion is robust to this change of the late-Pleistocene time span.
We then change the prior distributions over some model parameters while keeping others fixed. We apply this sensitivity test to the ML, HA, and HAML data sets with time spans of 1--0\,Ma, 2--1\,Ma, and 2--0\,Ma, respectively. These three data sets are representative and conservative because they contain the major terminations as well as the minor terminations identified in the HA data set, which is stacked from both benthic and planktic data sets. In each case we select the most favored type of pacing model according to the model comparison in section \ref{sec:comparison}. They are: the constant background threshold with $\gamma=1$ for ML; the constant background threshold with $\gamma=0.4$ for HA; and the sigmoid background threshold for HAML. For each model, we change the range of the uniform prior on each parameter as follows (the name in parentheses is used to label the change in Figure \ref{fig:BF_2D_sen}):
\begin{itemize}
\item $\lambda=0 \rightarrow -10 \leq \lambda\leq 10$ ({\it lag}): Here we account for a possible time lag between the forcing and its effect (as suggested by previous studies such as \citealt{hays76} and \citealt{imbrie80}). $\lambda$ represents the time lag(s) of any model listed in Table \ref{tab:ts_models}, and ranges from $-10$ to 10\,kyr in steps of 1\,kyr. For models with a single component, a time lag is achieved by shifting the corresponding time series to the past or to the future. For compound models, each component is shifted independently, and the corresponding evidences are calculated by marginalizing the likelihood over the time lags of all components.
\item $90\gamma<h_0<130\gamma \rightarrow 80\gamma<h_0<140\gamma \text{ ({\it hlarge}) and } 100\gamma<h_0<120\gamma$ ({\it hsmall}): We extend or shrink the upper and lower limits of the background threshold, $h_0$, by 10$\gamma$. Changing the prior distribution of $h_0$ is equivalent to changing the prior distribution of the period of a pacing model, because the average period is about $h_0+10-a$ (see section \ref{sec:pacing_constant}). The above changes only apply to models with $a\neq 0$ while the prior distribution of the Periodic model ($a=0$) is changed from $70\gamma<h_0<110\gamma$ first to $60\gamma<h_0<120\gamma$ and then to $80\gamma<h_0<100\gamma$. For models with a sigmoid trend, the prior distribution of $k$ is changed from $90<k<130$ first to $80<k<140$ and then to $100<k<120$.
\item $15\gamma<a<35\gamma \rightarrow 5\gamma<a<45\gamma \text{ ({\it alarge}) or } 20\gamma<a<30\gamma$ ({\it asmall}): We extend or shrink the range of contribution factor of forcing, $a$, around its mean. These changes do not apply to the Periodic model, for which $a=0$.
\item $0<b<0.1 \rightarrow 0<b<0.2 \text{ ({\it blarge}) or } 0<b<0.05$ ({\it bsmall}): We double or halve the upper limit of $b$, the contribution factor of the background in the termination model.
\item $\phi=0 \rightarrow -\pi<\phi<\pi$ ({\it phi}): We now allow any value for the phase of the precession, $\phi$, which is related to the season of the insolation that forces the climate change.
\end{itemize}
\begin{figure*}[ht!]
\centering
\includegraphics[scale=0.5]{BFs_2D_sensitivity.pdf}
\vspace{-0.5in}
\caption{Sensitivity test. Bayes factors for several models with a change in the range of priors (compared to what was used in section \ref{sec:BF} and Figure \ref{fig:BF_2D}). These are shown for three different data sets (and time scales) in the three blocks separated by white horizontal lines. In each block the logarithm of the Bayes factor is shown on a color scale for each model (vertical axis) and change in prior (horizontal axis). The first column -- labeled `none' -- gives the BFs for models with the original priors for reference. Some models are not relevant for certain prior changes, so the corresponding slots are empty (white). The three blocks are as follows. Top: pacing model with a constant background threshold with $\gamma=1$ for the ML data set (0--1\,Ma). Middle: pacing model with a constant background threshold with $\gamma=0.4$ for the HA data set (1--2\,Ma). Bottom: pacing model with a sigmoid background threshold for the HAML data set (0--2\,Ma).
}
\label{fig:BF_2D_sen}
\end{figure*}
The BFs for the models with each of the above changes are shown in Figure \ref{fig:BF_2D_sen}, separated into three blocks corresponding to the different data sets, ML, HA, and HAML. For the ML data set (1--0\,Ma; top block), the PT and CMF models are favored over the Tilt model for all changes in the priors. The PT and CMF models without time lags are also favored over the corresponding models with lags. This indicates that the Tilt and Precession models pace climate change without significant time lags. Over the early Pleistocene (middle block), the Tilt model is marginally favored. The BFs of the EPT model vary a lot but are never higher than that of the Tilt model. For the HAML data set (2--0\,Ma; bottom block), the model combining a sigmoid trend and the PT forcing is favored for all changed priors. Moreover, the BF for the PT model increases when shrinking the range of the background fraction, $b$. The relative lack of significance of the background suggests a significant influence of obliquity and precession over the past 2\,Myr.
To further investigate the role of precession in pacing the major late-Pleistocene deglaciations, we marginalize the likelihoods for the PT model over all its parameters except the precession contribution factor, $\alpha$, and phase, $\phi$. (Note that the evidence is the likelihood marginalized over {\em all} model parameters.) We do this for the ML data over the last 1\,Myr. The distribution of this marginalized likelihood (relative to the uniform model) is shown in the left panel of Figure \ref{fig:likelihood_distribution}. The highest values occur for phases in the range $-50^\circ<\phi<+50^\circ$, indicating that the main pacemaker under this model is either the intensity of the northern hemisphere summer insolation or the duration of the southern summer (we cannot distinguish between these based on available data). While very small contribution factors, $\alpha<0.1$, are strongly disfavored, the model is otherwise not very sensitive to $\alpha$. Since $\alpha$ determines the size of the contribution of precession to the PT model (equation \ref{eqn:ts_function}), this means that some precession contribution is favored, but the exact amount is not well constrained. This broad high-likelihood range of $\alpha$ and $\phi$ means that the pacing depends on the overall northern hemisphere summer insolation at a range of northern latitudes (or equivalently the duration of the southern summer) rather than that at a specific latitude and time in summer. This is consistent with \citealt{huybers11}'s conclusion that ``climate systems are thoroughly interconnected across temporal and spatial scales''.
\begin{figure*}[ht!]
\centering
\includegraphics[scale=0.45]{RML_2D_2model_sigmoid.pdf}
\caption{The distribution of the logarithm of the marginalized likelihood relative to the uniform model, $\log_{10}$(RML), for the PT model as a function of two model parameters. The left panel shows the distribution over the precession contribution factor ($\alpha$) and phase ($\phi$) for the PT model with $\gamma=1$ for the ML data set (the last 1\,Myr). The right panel shows the distribution over the transition time ($t_0$) and transition time scale ($\tau$) for the PT model with a sigmoid background threshold for the HAML data set (the last 2\,Myr). $10^6$ and $1.6\times 10^6$ sample points sampled in a $200\times 200$ grid were used to construct the left and right distributions respectively. For each panel, the most favored region is identified by applying a $25\times 25$ grid to the distribution, and is denoted by a cross.
Note that the scales saturate: likelihoods above or below the limits of the color bar are plotted using the extreme color.
}
\label{fig:likelihood_distribution}
\end{figure*}
We found in section \ref{sec:WP} that the pacing model with a sigmoid background threshold model was favored when modeling the whole Pleistocene. We now identify which parameters of that model are most favored by the data. To do this we calculate the marginalized likelihood (relative to the uniform model) for the PT model with a sigmoid background threshold as a function of both the transition time scale, $\tau$, and transition midpoint, $t_0$, on the HAML data set (i.e.\ we marginalize over all other parameters): see the right panel of Figure \ref{fig:likelihood_distribution}.
To explore this more completely we have extended the upper limit of $t_0$ from -700\,kyr to -300\,kyr.
The peak is at around $\tau=100$\,kyr (about one glacial-interglacial cycle) and $t_0=-715$\,kyr. To visualize this transition, a sigmoid background model with this value of $\tau$ is shown in Figure \ref{fig:icevolume}. Defining the transition duration as the time taken for the ice volume to change from 25\% to 75\% of its maximum value, $\tau=100$\,kyr corresponds to a transition duration of 220\,kyr. This timescale for the MPT is consistent with the findings of \cite{honisch09,mudelsee97,tziperman03,martinez11}. It is shorter (a more abrupt transition) than that found by H07 and others \citep{raymo04,liu04,medina05,blunier98}, although Figure \ref{fig:likelihood_distribution} shows that longer time scales are not that improbable (but note that the likelihoods are shown on a logarithmic scale). The transition time of 715\,kyr ago is somewhat later than the mid-point of the MPT of $\sim$900\,kyr ago identified by \cite{clark06} using a frequency spectrogram analysis. Yet our data/analysis permits a range of values,
although we see that the region around -900\,kyr is disfavored for low values of $\tau$. Discrepancies from previous results could also arise from the fact that we use just termination data.
As a final sensitivity test, we change the sign of the contribution factor of forcing, $a$, to model possible anticorrelations between forcing models and the data over the late Pleistocene.
We find that this significantly reduces the BF for all favored models, which shows that models with anticorrelations are a poor description of the data.
\section{Summary and conclusions}\label{sec:conclusion}
Using likelihood-based model comparison, we find that a combination of obliquity (axial tilt) and precession is the main pacemaker of the 12 major glacial terminations in the late Pleistocene. Obliquity alone can trigger minor terminations over the whole Pleistocene. The obliquity and precession pace the Pleistocene terminations without significant time lags, and their pacing roles can be identified with high significance.
We confirm the dominant role of obliquity in pacing the glacial terminations over the early Pleistocene. In contrast to
the conclusion of H07, we find that a model with obliquity alone describes the major and minor Pleistocene deglaciations (together) better than a model which combines obliquity with a trend in the background threshold.
Thus obliquity is sufficient to explain at least the timing of minor terminations before and after the MPT, without reparameterizing the model as done by H07 and \cite{raymo97,paillard98,ashkenazy04,paillard04,clark06}.
We observe that precession becomes important in pacing the $\sim$100\,kyr glacial-interglacial cycles after the MPT. Through the comparison of models with a linear trend and models with a sigmoid trend in the background threshold, we find that the glacial terminations over the whole Pleistocene can be paced by a combination of precession, obliquity, and a sigmoid trend in the background threshold. Using marginalized likelihoods, we find that the MPT has a time scale
(the time required for ice volume to grow from 25\% to 75\% of the maximum)
of about 220\,kyr and a mid-point at around 715\,kyr before the present. This is rather late compared with other studies \citep{clark06}, although our data/analysis supports a broad range of values. Note that we do not assume the existence of a strict periodicity in the data, in contrast to some studies based on power spectrum analyses.
Since there is no significant change in the power spectrum of the insolation before and after the MPT, the MPT must be caused by a rapid change of {\em response} of the climate to the insolation, rather than by the insolation itself. This is consistent with previous studies \citep{paillard98,parrenin03,ashkenazy04,clark06}.
We also find that geomagnetic forcing and forcing by changes in the inclination of the Earth's orbital plane are unlikely to cause significant climate change over the last 2\,Myr. This weakens the
suggestion that the Earth's orbital inclination relative to the invariable plane influences the climate \citep{muller97}. Our results also suggest that the modulation of cosmic rays or solar activity by the Earth's magnetic field has at best a limited impact on climate change on timescales between 10\,kyr and 1\,Myr, challenging the hypothesis that connects the geomagnetic paleointensity with climate change \citep{channell09}.
The Bayesian modelling approach is well suited to multiple model comparison, because it evaluates all their evidences explicitly: a model is not selected just because some alternative ``noise'' model is rejected. Uncertainties in the data are also accommodated. Moreover, the approach automatically and consistently takes into account the model complexity, in contrast to most other methods (e.g.\ frequentist hypothesis testing, maximum likelihood ratio tests) which will favor more complex models unless they are penalized in some ad hoc way.
Our conclusions are reasonably robust to changes of parameters, priors, time scales, and data sets. The main uncertainty in our work comes from the identification of glacial terminations over the Pleistocene, although we have used different data sets of terminations to reduce this uncertainty. In future work, a more sophisticated Bayesian method (e.g.\ the method introduced by \citealt{bailer-jones12}) could be employed to model the full time series of climate proxies. Using this model inference approach, we may learn more about the mechanisms involved in the climate response to Milankovitch forcings.
\section*{Acknowledgements}
We thank Joerg Lippold for pointing us to relevant literature, Marcus Christl for providing $^{10}$Be data, and Martin Frank for explaining the method of reconstructing the history of solar activity. Morgan Fouesneau, Eric Gaidos, and Gregor Seidel gave valuable comments on the manuscript. We also thank the anonymous referee and the associate editor, Michel Crucifix, for their valuable comments. This work has been carried out as part of the Gaia Research for European Astronomy Training (GREAT-ITN) network. The research leading to these results has received funding from the European Union Seventh Framework Programme ([FP7/2007-2013]) under grant agreement no.\ 264895.
\bibliographystyle{elsarticle-harv}
\section{Introduction}
\label{sec:Introduction}
Clusters of galaxies are a powerful probe of astrophysics and cosmology \citep[][]{Voit2005}. For example, the cluster mass function is very sensitive to the cosmic mean matter density, to the initial fluctuation amplitude \citep[][]{PressSchechter1974,FrenkEtal1990,EkeColeFrenk1996,ShethTormen1999} and to dark energy dynamics \citep[][]{BartelmannDoranWetterich2006,GrossiSpringel2009,FrancisLewisLinder2009}. It can be predicted to high accuracy by numerical simulations \citep[][]{JenkinsEtal2001,WarrenEtal2006,LukicEtal2007}.
However, the masses of galaxy clusters cannot be observed directly. Thus one either needs to infer the masses of observed clusters from some more directly observable cluster property, requiring accurate knowledge of the observable-mass relation and its scatter, or one must directly compare observed and predicted cluster abundance as a function of such observables. These include the X-ray luminosity and temperature of the intracluster gas \citep[][]{BorganiEtal2001,ReiprichBoehringer2002,StanekEtal2006,PiffarettiValdarnini2008,VikhlininEtal2009}, the number and velocity dispersion of the cluster galaxies \citep[][]{Zwicky1937_gal_and_cluster_masses,RinesEtal2003,KochanekEtal2003,BeckerEtal2007}, the number of giant arcs \citep[][]{BartelmannEtal1998,WambsganssBodeOstriker2004,FedeliEtal2008}, the weak-lensing signal \citep[][]{TysonWenkValdes1990,CyprianoEtal2004,Hoekstra2007,JohnstonEtal2007_SDSS_cluster_wl_II_arXiv,ReyesEtal2008}, and the Sunyaev-Zel'dovich signal induced by the cluster \citep[][]{WhiteHernquistSpringel2002,SchulzWhite2003,BonaldiEtal2007,StaniszewskiEtal2009}.
Predictions for many cluster observables (e.g. the X-ray luminosity or the cluster richness) and for their relation to the cluster mass require modelling of astrophysical processes such as gas cooling and galaxy formation. Although inaccuracies in such astrophysical models are an unpleasant source of uncertainty for cosmological parameter estimation, they mean that comprehensive observations of clusters can be used to constrain cosmological parameters and models for cluster/galaxy evolution simultaneously.
The largest sample of observed galaxy clusters currently available is the maxBCG cluster catalogue \citep{KoesterEtal2007_SDSS_clusters}. This was extracted from the Sloan Digital Sky Survey (SDSS)\footnote{\texttt{http://www.sdss.org}} using maxBCG, an optical cluster-finding algorithm \citep{KoesterEtal2007_MaxBCG}. Constraints on the scatter in the velocity dispersion-richness relation and the mass-richness relation of these clusters have been derived from cluster X-ray and galaxy velocity dispersion observations \citep{BeckerEtal2007,RozoEtal2009_MaxBCG_III_scatter}. Weak-lensing measurements of average cluster mass profiles have also been used to calibrate the mass-richness relation, the mass-optical luminosity relation, and mass-to-light ratio profiles of the maxBCG clusters \citep{SheldonEtal2009_SDSS_cluster_wl_I,SheldonEtal2009_SDSS_cluster_wl_III,JohnstonEtal2007_SDSS_cluster_wl_II_arXiv,ReyesEtal2008}. These data provide significant constraints on cosmological parameters \citep{RozoEtal2009_MaxBCG_V_cosmology_arXiv}.
In this work, we investigate how well physically based models for galaxy formation in a $\Lambda$CDM universe can reproduce the observed relations of cluster richness and luminosity to other cluster properties, most notably mass. We also investigate what information on cosmological parameters and galaxy evolution can be obtained by comparing model predictions to observation. We use the Millennium Simulation \citep[][]{SpringelEtal2005_Millennium} and two smaller $N$-body simulations of cosmic structure formation \citep[][]{WangEtal2008} in conjunction with semi-analytic models of galaxy evolution \citep[][]{DeLuciaBlaizot2007,WangEtal2008} to create mock catalogues of galaxy clusters selected similarly to the maxBCG catalogue. We compute cluster abundances, average cluster masses, and weak-lensing mass profiles as a function of cluster richness and luminosity, and we compare these to observational results for the SDSS maxBCG sample. In addition, we investigate the scatter in the mass-richness relation, and we discuss how well one can recover the cluster mass function and the weak-lensing mass profiles from the richness-binned cluster abundances and mean masses.
The semi-analytic galaxy models used here couple star formation in galaxies to the properties of the evolving dark matter halo distribution in which the galaxies live. The models have been adjusted to be consistent with various observations, e.g. the luminosities, stellar masses, morphologies, gas contents and correlations of galaxies at low redshift, but they have not been tuned to match the properties of rich clusters.
The comparison to observations provided here is thus a direct test of these models and their description of the physical processes relevant for galaxy formation. This contrasts with halo occupation distribution models \citep{CooraySheth2002}, where the galaxy populations of clusters are adjusted to fit observation without considering in detail how they could have been built up by physical processes within the evolving dark matter distribution.
Our paper is organised as follows. We discuss the $N$-body simulations and galaxy models, as well as our methods for creating the simulated cluster samples from them in Sec.~\ref{sec:methods}. Results for our simulated cluster samples and a comparison to observation are presented in Sec.~\ref{sec:results}. Our paper concludes with a summary and discussion in Sec.~\ref{sec:discussion}.
\section{Methods}
\label{sec:methods}
We use cosmological $N$-body simulations to analyse the matter distribution in and around galaxy clusters in two different $\Lambda$CDM cosmologies. We infer the properties of the galaxies in the clusters from model galaxy catalogues generated by applying semi-analytic galaxy formation models to halo assembly trees generated from the outputs of the $N$-body simulations. We then compute richness and luminosity estimates for clusters in the model galaxy catalogues taking into account several observational features of optical cluster-finding algorithms, in particular maxBCG by \citet{KoesterEtal2007_MaxBCG}.
\subsection{The $N$-body simulations}
\label{sec:N_body_simulations}
\begin{table}
\center
\caption{
\label{tab:simulation_cosmological_parameters}
The cosmological parameters at redshift 0 for the three simulations used in this study. The parameters are: the baryon density $\Omega_\mathrm{b}$, the matter density $\Omega_\mathrm{M}$, and the energy density of the cosmological constant $\Omega_\Lambda$ (in units of the critical density), the Hubble constant $h$ (in units of $100\,\mathrm{km}\,\mathrm{s}^{-1}\,\ensuremath{\mathrm{Mpc}}^{-1}$), the primordial spectral index $n$ and the normalisation parameter $\sigma_8$ for the linear density power spectrum.
}
\begin{tabular}{l l l}
\hline
\hline
& MS \& WMAP1 & WMAP3 \\
\hline
$\Omega_\mathrm{b}$ & 0.045 & 0.04 \\
$\Omega_\mathrm{M}$ & 0.25 & 0.226 \\
$\Omega_\mathrm{\Lambda}$ & 0.75 & 0.774 \\
$h$ & 0.73 & 0.743 \\
$n$ & 1 & 0.947 \\
$\sigma_8$ & 0.9 & 0.722 \\
\hline
\end{tabular}
\end{table}
Our study is based on three different $N$-body simulations: the Millennium Simulation (MS) by \citet{SpringelEtal2005_Millennium}, and two smaller simulations WMAP1 and WMAP3 by \citet{WangEtal2008}.\footnote{We refer the reader to \citet{SpringelEtal2005_Millennium}, \citet{WangEtal2008}, and references therein for more details about the simulations.}
The simulations assume a flat $\Lambda$CDM cosmology with parameters given in Table~\ref{tab:simulation_cosmological_parameters}. Both the MS and the WMAP1 simulation use a parameter set that was derived by combining the WMAP 1st-year results \citep{SpergelEtal2003_WMAP_1stYear_Data} with results from the 2dFGRS \citep{CollessEtal2003_2dF_Data}. The WMAP3 simulation employs cosmological parameters that are consistent with data from the WMAP 3rd-year release, the Cosmic Background Imager, and the Very Small Array \citep{SpergelEtal2007_WMAP_3rdYear_Data}, with a bias towards values differing from those used for the other two simulations.
The most prominent differences between the two sets of cosmological parameters are the normalisation parameter $\sigma_8$ and the spectral index $n$ for the density power spectrum: The MS and the WMAP1 simulation assume $\sigma_8=0.9$ and $n=1$, whereas the WMAP3 simulation assumes lower values $\sigma_8=0.722$ and $n=0.947$. Hence, there is less power on small scales in the matter power spectrum of the WMAP3 simulation than in the MS and the WMAP1 simulation. This results in a substantial delay in structure formation and less massive collapsed structures at any given redshift in the WMAP3 simulation.
\begin{table}
\center
\caption{
\label{tab:simulation_numericial_parameters}
Numerical parameters for the three simulations used in this study. The parameters are: the comoving cube size $L$, the particle number $n_\mathrm{p}$, the particle mass $m_\mathrm{p}$, and the effective force softening length $\epsilon$.
}
\begin{tabular}{l l l l}
\hline
\hline
& MS & WMAP1 & WMAP3 \\
\hline
$L$ [$h^{-1}\ensuremath{\mathrm{Mpc}}$] & 500 & 125 & 125 \\
$n_\mathrm{p}$ & $2160^3$ & $560^3$ & $560^3$ \\
$m_\mathrm{p}$ [$h^{-1}\ensuremath{\mathrm{M}_\odot}$] & $8.6\times10^8$ & $8.6\times10^8$ & $7.8\times10^8$ \\
$\epsilon$ [$h^{-1}\ensuremath{\mathrm{kpc}}$] & 5 & 5 & 5 \\
\hline
\end{tabular}
\end{table}
The simulations were run using a parallel TreePM version of \textsc{GADGET2} \citep{Springel2005_GADGET2}. The numerical parameters of the simulations are listed in Table~\ref{tab:simulation_numericial_parameters}. The main difference between the MS and the WMAP simulations is the simulation box size. The large volume of the MS provides us with a large sample of galaxies and galaxy clusters with statistical errors comparable to the `cosmic variance' errors of the SDSS maxBCG sample. The smaller WMAP simulations differ in their cosmological parameters, but share the same numerical parameters and initial conditions.\footnote{
Their initial density fields are identical except for small amplitude adjustments needed to reproduce the correct matter power spectra.
}
This reduces the influence of sampling noise when comparing results between the WMAP simulations and allows us to study the influence of cosmology on our results.
For each simulation, the particle data were stored on disk at 64 output times. These snapshots contain information on dark-matter halos, which have been identified by running a friends-of-friends (FOF) group-finding algorithm on the set of simulation particles. The halos were later decomposed into subhalos using \textsc{SUBFIND} \citep{SpringelEtal2001_SUBFIND} to identify gravitationally self-bound locally overdense regions. The most massive subhalo, called the main subhalo, typically contains 90\% of the FOF halo mass and shares its centre. Detailed merging history trees of all self-bound dark-matter subhalos were then computed. The resulting merger trees were used as input for the semi-analytic models discussed in the following section.
\subsection{The semi-analytic galaxy models}
\label{sec:SAMs}
We use semi-analytic galaxy formation models from the Munich family \citep{KauffmannEtal1999_I,SpringelEtal2001_SUBFIND,DeLuciaKauffmannWhite2004, SpringelEtal2005_Millennium, CrotonEtal2006,DeLuciaBlaizot2007} to set the optical properties of galaxies in the $N$-body simulations. These semi-analytic models assume that galaxies form from gas that accumulates in the centre of each dark-matter subhalo in the simulation. Star formation in each galaxy is coupled to its subhalo properties via simple prescriptions for gas cooling, star formation, chemical enrichment and feedback from supernovae and central black holes (AGN), as well as for merging of galaxies once their dark matter subhalos have merged. Certain parameters quantifying the efficiency of these processes can be adjusted in order to maximise agreement of the results with observation.
For the MS, we use the publicly available galaxy catalogue\footnote{http://www.mpa-garching.mpg.de/Millennium} that was generated using the galaxy model described in \citet{DeLuciaBlaizot2007}. This MS model reproduces various observed relations for galaxies, in particular the observed luminosity, colour, gas content and morphology distributions \citep{CrotonEtal2006,DeLuciaEtal2006,KitzbichlerWhite2007} and the observed two-point correlation functions \citep{SpringelEtal2005_Millennium,KitzbichlerWhite2007}.
The galaxy model of \citet{DeLuciaBlaizot2007} has been applied by \citet{WangEtal2008} to the WMAP1 simulation using the same set of efficiency parameters. As expected, this gives similarly good agreement with observations as for the MS. Here, we will use these model galaxies, which we refer to below as the WMAP1-A model, for the WMAP1 simulation.
\citet{WangEtal2008} also applied the galaxy modelling technique of \citet{DeLuciaBlaizot2007} to the WMAP3 simulation, but with two slightly different sets of efficiency parameters. One, which we call WMAP3-B here, employs the same star formation efficiency as the WMAP1-A model, but lower supernova and AGN feedback efficiencies. The other model, called WMAP3-C in the following, employs a higher star-formation efficiency but also higher feedback efficiencies. Both the WMAP3-B and the WMAP3-C model show good agreement with low-redshift galaxy observations, e.g. of the galaxy luminosity function and the galaxy clustering \citep{WangEtal2008}. In the following, we will consider both models for the WMAP3 simulation.
\subsection{The ridgeline galaxies}
\label{sec:ridgeline_galaxies}
Optical cluster finding algorithms such as maxBCG \citep{KoesterEtal2007_MaxBCG} locate galaxy clusters by searching for local overdensities of E/S0 ridgeline galaxies in angular and redshift space. These galaxies are common in known clusters, and they are relatively easy to find, since many of them are bright and their observed colours are strongly correlated with luminosity and redshift. Because of their small scatter in colour, a narrow search range in colour can be chosen at each redshift. This narrow search range forces one to know the mean ridgeline colours rather accurately. Accurate mean ridgeline colours are also important for accurately quantifying the ridgeline galaxy content of the clusters.
Although the semi-analytic models can reproduce many of the observed properties of galaxies, there are still some discrepancies. In particular, the models do not reproduce the colours of passively evolving galaxies to the degree required for a direct application of the ridgeline colour-redshift relation used for the maxBCG catalogue \citep[see, e.g.,][for possible reasons]{WeinmannEtal2006}. We therefore `measure' the mean ridgeline colours of model galaxies as a function of redshift:
We roughly identify the mean colour of the ridgeline galaxies in the colour-magnitude diagram (where the ridgeline population induces a visible overdensity among the bright galaxies with colours close to the observed mean ridgeline colour). We then fit a Gaussian with mean $\bar{x}$ and variance $\sigma^2$ to the distribution of galaxy colours in a region around our initial colour guess.
We repeat the fit considering all galaxies with absolute $i$-band magnitudes $M_i\leq-20$ and colours in the range $\bar{x}\pm 3 \sigma$ until $\bar{x}$ converges.\footnote{
The dependence of the mean ridgeline colour on magnitude is ignored in this procedure, since for all models the slope in the colour-magnitude relation is very small in the considered colours and magnitude range.}
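The iterative fit described above is essentially a sigma-clipped estimate of the ridgeline colour distribution. A minimal sketch of that procedure (the Gaussian fit is approximated here by the sample mean and standard deviation of the clipped data; function names and the synthetic colour distribution are ours):

```python
import numpy as np

def ridgeline_mean_colour(colours, x0, sigma0=0.05, tol=1e-4, max_iter=50):
    """Iteratively estimate the mean ridgeline colour: keep galaxies
    within x0 +/- 3*sigma of the current estimate, refit mean and width,
    and repeat until the mean converges."""
    xbar, sigma = x0, sigma0
    for _ in range(max_iter):
        sel = np.abs(colours - xbar) <= 3.0 * sigma
        new_xbar = colours[sel].mean()
        sigma = colours[sel].std()
        if abs(new_xbar - xbar) < tol:
            return new_xbar, sigma
        xbar = new_xbar
    return xbar, sigma

# Synthetic example: a ridgeline population at g-r ~ 1.0 with width 0.05,
# plus an equally numerous, bluer field population
rng = np.random.default_rng(1)
colours = np.concatenate([rng.normal(1.0, 0.05, 2000),
                          rng.normal(0.5, 0.10, 2000)])
mean, width = ridgeline_mean_colour(colours, x0=1.05)
```

Starting from a rough by-eye guess (here 1.05), the clipping quickly locks onto the narrow ridgeline component and ignores the field galaxies.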
The resulting mean ridgeline colours for model galaxies in the MS as a function of redshift are compared to the corresponding relation for SDSS galaxies in Fig.~\ref{fig:ridgeline_colors}. The mean ridgeline colours of the model are close to the observed ridgeline colours at low redshift, but they deviate significantly at higher redshift. The measured ridgeline width is $\sigma_{g-r}\approx0.05$ for all considered galaxy models and redshifts, in agreement with observations \citep{KoesterEtal2007_MaxBCG}. The measured width in $(r-i)$ colour is $\sigma_{r-i}\approx0.03$, which is smaller than the observed value $\sigma_{r-i}=0.06$.
\begin{figure}
\centerline{\includegraphics[width=1\linewidth]{g_r_vs_redshift}}
\caption{
\label{fig:ridgeline_colors}
The mean colour $\overline{g-r}(z)$ of the ridgeline galaxies as a function of redshift $z$ for the galaxy model of \protect\citet{DeLuciaBlaizot2007} in the MS compared to the mean colour-redshift relation for the ridgeline galaxies in the SDSS \protect\citep[][]{KoesterEtal2007_MaxBCG}.
}
\end{figure}
Following the maxBCG observational procedure, we consider $g-r$ and $r-i$ colours to identify ridgeline galaxies in our simulations. For each simulation snapshot with redshift $0.1\leq z \leq 0.3$, we select all objects in our semi-analytic galaxy catalogues whose $g-r$ and $r-i$ values are both within $2\sigma$ of the mean ridgeline colours. As mean colours, we take the values measured from the simulations. For the ridgeline width $\sigma$, we take the observed values $\sigma_{g-r}=0.05$ and $\sigma_{r-i}=0.06$ \citep{KoesterEtal2007_MaxBCG}.
Besides colour, galaxy brightness is used to select ridgeline galaxies. We thus further select from all model galaxies surviving the colour selection those with apparent observer-frame $i$-band magnitude $i^\text{obs}\leq i^\text{obs}_\text{lim}$. Here, we employ the same magnitude limit $i^\text{obs}_\text{lim}$ as \citet{KoesterEtal2007_MaxBCG} (B. Koester, private communication). This magnitude limit corresponds to an absolute rest-frame magnitude limit $M^\text{rest}_i \approx -20.25 + 5\log_{10}h \approx -20.9$ for the cosmologies considered here.
\subsection{The galaxy clusters}
\label{sec:model_clusters}
We identify galaxy clusters in the simulations with dark matter halos found by the FOF algorithm. From the simulation data stored on disk, we obtain for each such halo (hence cluster candidate) the positions of the centre, the virial radius $\ensuremath{R^\text{crit/mean}_{200}}$ (i.e. the radius of the sphere within which the mean enclosed density equals $200\times$ the critical/mean density) and the mass $\ensuremath{M^\text{crit/mean}_{200}}$ (i.e. the mass enclosed within $\ensuremath{R^\text{crit/mean}_{200}}$).
The semi-analytic galaxy models provide us with information about the galaxies associated with each dark matter halo.
For each halo, we measure the total number $\Nint$ of associated ridgeline galaxies selected as described in the preceding section. In addition, we count the number of ridgeline galaxies $\ensuremath{N^\text{gal}_{1\Mpc}}$ within a physical radius of $1h^{-1}\,\ensuremath{\mathrm{Mpc}}$ in projection along a simulation box axis. The result is used to compute the ``observationally defined'' radius $\ensuremath{R^\text{gal}_{200}} = 0.156 (\ensuremath{N^\text{gal}_{1\Mpc}})^{0.6} h^{-1}\,\ensuremath{\mathrm{Mpc}}$ \citep[see][]{HansenEtal2005,KoesterEtal2007_MaxBCG}. We then calculate a scaled galaxy richness $N^\text{gal}_{200}$ by counting all ridgeline galaxies within a projected radius $\ensuremath{R^\text{gal}_{200}}$. A cluster luminosity $L^\text{gal}_{200}$ is then computed by summing the $i$-band luminosities of all ridgeline galaxies within $\ensuremath{R^\text{gal}_{200}}$. These procedures mimic closely those employed to estimate richnesses and luminosities for the real maxBCG catalogue.
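The two-step richness measurement above (aperture count, scaled radius, recount) can be sketched as follows (function names are ours; the $0.156\,(\ensuremath{N^\text{gal}_{1\Mpc}})^{0.6}$ scaling is the relation quoted in the text):

```python
import numpy as np

def r200_gal(n_gal_1mpc):
    """Observationally defined radius R200_gal in h^-1 Mpc from the
    number of ridgeline galaxies inside a projected 1 h^-1 Mpc aperture."""
    return 0.156 * n_gal_1mpc ** 0.6

def scaled_richness(proj_r, n_gal_1mpc):
    """Scaled richness N200_gal: count ridgeline galaxies (projected
    radii proj_r in h^-1 Mpc) within R200_gal."""
    return int(np.sum(proj_r <= r200_gal(n_gal_1mpc)))

# A cluster with 20 ridgeline galaxies inside 1 h^-1 Mpc has
# R200_gal = 0.156 * 20**0.6 ~ 0.94 h^-1 Mpc
```

The cluster luminosity $L^\text{gal}_{200}$ would then follow by summing the $i$-band luminosities of the galaxies passing the same radial cut.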
To improve statistics, we perform the measurements of the projected quantities $\ensuremath{N^\text{gal}_{1\Mpc}}$, $\ensuremath{R^\text{gal}_{200}}$, $N^\text{gal}_{200}$, and $L^\text{gal}_{200}$ along all three simulation box axes. Each projection is treated as an individual cluster in the subsequent analysis. For the computation of cluster densities, the resulting triplication of clusters is taken into account by assuming a three times larger simulation volume.
Several effects hamper a direct comparison between observed clusters and our simulated clusters at the stage described so far. Here, we will not take into account fragmentation, but we do correct for contamination of the observational data by foreground and background galaxies.\footnote{
Fragmentation is insignificant in the maxBCG sample, but overmerging slightly boosts the cluster richness estimates mainly due to contamination by foreground and background structures \citep[][]{KoesterEtal2007_MaxBCG}.}
Using spectroscopic data, \citet{KoesterEtal2007_SDSS_clusters} found that roughly 16\% of the galaxies identified by maxBCG as cluster ridgeline galaxies are, in fact, projections. We include such projections in our simulated clusters in a very simple way, by randomly duplicating about 19\% of the ridgeline galaxies. As a result, our model clusters appear to be contaminated at the 16\% level.
\begin{figure}
\centerline{\includegraphics[width=1\linewidth]{offset_distribution}}
\caption{
\label{fig:offset_distribution}
The distribution of the projected offset $R_s$ between the `true' and `apparent' centres of miscentred clusters in the MS (dashed line). The distribution is well fit by a 2D Gaussian (solid line).
}
\end{figure}
We also consider the effect of misidentifying the cluster centre. For each cluster in the simulations, we calculate galaxy numbers, radii, etc. not only using the `true' centre, but also using the position of the second-most massive subhalo as `apparent' centre. The resulting distribution of projected offsets $R_s$ between `true' and `apparent' centre is shown in Fig.~\ref{fig:offset_distribution} for the MS. The distribution can be approximated by a two-dimensional Gaussian distribution with
\begin{equation}
\label{eq:offset_distribution}
\mathrm{pdf}(R_s)=\frac{R_s}{\sigma_s^2}\exp\left(-\frac{R_s^2}{2 \sigma_s^2}\right)
\end{equation}
and $\sigma_s=0.38h^{-1}\ensuremath{\mathrm{Mpc}}$, which agrees well with the offset distribution found for the maxBCG algorithm run on simulated data \citep{JohnstonEtal2007_SDSS_cluster_wl_II_arXiv}. We find a similar value, $\sigma_s=0.41h^{-1}\ensuremath{\mathrm{Mpc}}$, for the WMAP1 simulation, and somewhat smaller values, $\sigma_s=0.34h^{-1}\ensuremath{\mathrm{Mpc}}$, for the WMAP3 simulation.
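The offset distribution above is a Rayleigh distribution (the radial profile of a 2D Gaussian); it is normalised and peaks at $R_s=\sigma_s$. A quick numerical check (function name is ours):

```python
import numpy as np

def offset_pdf(r, sigma_s=0.38):
    """Two-dimensional Gaussian (Rayleigh) distribution of the projected
    offset R_s between true and apparent cluster centre, with the width
    sigma_s = 0.38 h^-1 Mpc measured for the MS."""
    return r / sigma_s**2 * np.exp(-r**2 / (2.0 * sigma_s**2))

# sanity checks: the pdf integrates to unity and peaks at R_s = sigma_s
r = np.linspace(0.0, 10.0, 200001)
norm = np.sum(offset_pdf(r)) * (r[1] - r[0])   # ~ 1
mode = r[np.argmax(offset_pdf(r))]             # ~ 0.38 h^-1 Mpc
```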
When needed for our analysis, we will use a probability
\begin{equation}
\label{eq:a_priori_center_fraction}
\tilde{p}_\text{c}(\Nint) = \frac{1+0.04 \Nint}{2.2 + 0.05 \Nint}
\end{equation}
that a cluster in the simulation with $\Nint$ ridgeline galaxies is correctly centred. Empirically, this yields roughly the same probability $p_\text{c}(N^\text{gal}_{200})$ that a cluster with measured richness $N^\text{gal}_{200}$ is correctly centred as was found for the maxBCG algorithm by \citet{JohnstonEtal2007_SDSS_cluster_wl_II_arXiv}.
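The centring probability above rises with richness and saturates below unity: for large $\Nint$ it approaches $0.04/0.05=0.8$, i.e. even the richest clusters are miscentred about 20\% of the time. A one-line sketch (function name is ours):

```python
def p_correctly_centred(n_int):
    """Probability (the empirical formula quoted in the text) that a
    simulated cluster with n_int ridgeline galaxies is correctly centred."""
    return (1.0 + 0.04 * n_int) / (2.2 + 0.05 * n_int)

# rises from ~0.46 for the poorest systems towards the asymptote 0.8
```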
In our simulations, centre misidentification tends to reduce the number $N^\text{gal}_{200}$ of ridgeline galaxies within $\ensuremath{R^\text{gal}_{200}}$ for a given cluster. Consequently, the number density of clusters with $N^\text{gal}_{200}$ above a given threshold is slightly decreased by centre misidentification. Another consequence of centre misidentification in our simulations is a slightly higher average cluster mass and ridgeline galaxy number $\Nint$ for a given measured richness $N^\text{gal}_{200}$. All these effects may be smaller for the actual maxBCG algorithm. This algorithm disfavours identifying the cluster centre with galaxies that lead to low $N^\text{gal}_{200}$ in comparison to galaxies that yield a larger cluster richness. In the following, we will thus discuss results for our simulated cluster samples in the case that centre misidentification is ignored, unless stated otherwise.
\subsection{The cluster samples}
\label{sec:model_cluster_samples}
A comparison of cluster abundances in our simulated cluster samples to observation requires knowledge of the volumes and areas of the real surveys. For the SDSS maxBCG cluster sample, we assume an effective survey area of $7400\,\ensuremath{\mathrm{deg}}^2$ and a redshift range of $0.1\leq z \leq 0.3$ \citep[][]{RozoEtal2009_MaxBCG_V_cosmology_arXiv}. This yields an effective survey volume of $4.3 \times10^8 h^{-3}\,\ensuremath{\mathrm{Mpc}}^{3}$ for the MS and WMAP1 cosmology, and $4.4 \times10^8 h^{-3}\,\ensuremath{\mathrm{Mpc}}^{3}$ for the WMAP3 cosmology.
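The quoted survey volumes follow from the comoving volume of a $7400\,\ensuremath{\mathrm{deg}}^2$ shell between $z=0.1$ and $z=0.3$. A sketch of that calculation for a flat $\Lambda$CDM cosmology (function names are ours; with $H_0=100\,h\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$ the result is in $h^{-3}\,\ensuremath{\mathrm{Mpc}}^3$, so $h$ drops out):

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def comoving_volume(area_deg2, z_min, z_max, omega_m, n_steps=10000):
    """Comoving survey volume in h^-3 Mpc^3 for a flat LCDM cosmology."""
    d_h = C_KMS / 100.0                      # Hubble distance in h^-1 Mpc
    z = np.linspace(0.0, z_max, n_steps)
    e_z = np.sqrt(omega_m * (1 + z)**3 + (1 - omega_m))
    # cumulative comoving distance D_C(z) by simple numerical integration
    d_c = d_h * np.cumsum(1.0 / e_z) * (z[1] - z[0])
    d_min = np.interp(z_min, z, d_c)
    d_max = d_c[-1]
    sky_fraction = area_deg2 * (np.pi / 180.0)**2 / (4.0 * np.pi)
    return sky_fraction * 4.0 * np.pi / 3.0 * (d_max**3 - d_min**3)

# ~4.3e8 h^-3 Mpc^3 for the MS/WMAP1 cosmology (Omega_M = 0.25)
vol = comoving_volume(7400.0, 0.1, 0.3, 0.25)
```

Repeating the call with $\Omega_\mathrm{M}=0.226$ reproduces the slightly larger WMAP3 value of $\sim4.4\times10^8\,h^{-3}\,\ensuremath{\mathrm{Mpc}}^3$.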
For each snapshot of our simulations with redshift $0.1\leq z \leq 0.3$, we create a model cluster catalogue containing the projections of all clusters in the simulation box. We calculate the statistical properties of interest (e.g. the cluster densities or average cluster mass as a function of cluster richness) for each of these snapshot catalogues (properly taking into account the increased cluster number due to inclusion of multiple projections of each cluster). We then compute the properties of a cluster sample in a volume-limited survey with $0.1\leq z \leq 0.3$ by an average over the snapshot results, where each snapshot is weighted by its cluster abundance and its volume fraction in the survey.
The simulation cube volume $L^3=1.25 \times10^8 h^{-3}\,\ensuremath{\mathrm{Mpc}}^{3}$ for the MS, which is about one quarter of the SDSS maxBCG survey volume. The simulation cube is used in three different projections and at six different redshifts to construct the model cluster samples, which increases the effective sample size.\footnote{
The resulting effective sample size is difficult to quantify since the subsamples created from the different projections and redshifts are not independent.}
One can thus expect the statistical errors due to sample variance to be roughly the same for the MS model and the SDSS maxBCG cluster sample, and about eight times larger for the WMAP models (with their 64 times smaller box volume).
Where appropriate, we estimate the errors of our models due to sample variance in the following way:
We divide the simulation cube of the MS into 64 smaller cubes, each having the size of the WMAP simulations. We calculate the observables for each of these subcubes separately. The standard deviation of the results from the different subcubes serves as an estimate of the statistical error of the WMAP1-A model. The statistical errors for the other models are then extrapolated from the WMAP1-A error using simple assumptions about the scaling with volume, numbers, etc.
\section{Results}
\label{sec:results}
Here, we compare the properties of clusters in our various galaxy formation models to the observed properties of clusters in the SDSS maxBCG catalogue. We investigate average cluster properties as a function of galaxy content by dividing our simulated cluster samples into bins of richness $N^\text{gal}_{200}$ and luminosity $L^\text{gal}_{200}$ (as was done for the maxBCG cluster sample). The average properties of the model clusters binned by $N^\text{gal}_{200}$ are listed for the different galaxy formation models in Tables~\ref{tab:clusters_summary_MS}-\ref{tab:clusters_summary_WMAP3_C}. The properties of model clusters when binned by $L^\text{gal}_{200}$ are shown in Tables~\ref{tab:clusters_summary_MS_L}-\ref{tab:clusters_summary_WMAP3_C_L}.
We first present results for stacked weak-lensing mass profiles and for the abundance of clusters, both as a function of galaxy richness. We then discuss the mean and scatter of cluster mass as functions of richness and luminosity. Finally, we study how well the cluster mass function and the stacked weak-lensing mass profiles can be reconstructed from the cluster abundance together with the mean and scatter of the mass-richness relation.
\subsection{Cluster mass profiles}
\label{sec:cluster_mass_profiles}
For each cluster in the simulations, we compute weak-lensing mass profiles by projecting the simulation particles and the galaxies in a cuboid region of $35h^{-1}\,\ensuremath{\mathrm{Mpc}}$ transverse physical side length and $100h^{-1}\,\ensuremath{\mathrm{Mpc}}$ comoving thickness centred on the cluster. The projected particles and galaxies (assumed to contribute their stellar mass as points)\footnote{
Although particles in the simulations represent the total mass in the simulated parts of the universe, we do not compensate for the additional stellar mass, since (i) the mass in stars is very small compared to the total mass in collapsed objects, and (ii) gas physics increases the dark-matter density in the inner part of the halos compared to collisionless simulations \citep[e.g.][]{BarnesWhite1984,GnedinEtal2004}.
} are binned in annuli to compute the circularly averaged surface mass density $\Sigma(R)$ at radius $R$ from the projected cluster centre, the mean enclosed surface mass density $\bar{\Sigma}(R)$ inside $R$, and the weak-lensing mass profile
\begin{equation}
\Delta \Sigma(R) = \bar{\Sigma}(R) - \Sigma(R).
\end{equation}
The weak-lensing mass profile $\Delta \Sigma(R)$ is proportional to the average tangential shear $\EV{\gamma_\mathrm{t}}(R)$ around the projected cluster centre, and can therefore be measured using weak-lensing observations \citep[][]{SchneiderKochanekWambsganss_book}. To increase signal-to-noise, the measured tangential shear may be averaged over a sample of clusters. The resulting shear signal can then be converted to an average mass profile for the observed cluster sample.
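The annulus binning behind the profile defined above can be sketched as follows (a minimal illustration assuming the particles have already been projected and reduced to 2D radii from the cluster centre; the function and argument names are ours):

```python
import numpy as np

def delta_sigma(r_proj, mass, r_edges):
    """Circularly averaged weak-lensing mass profile
    Delta Sigma = Sigma_bar(<R) - Sigma(R) from projected radii and
    masses, binned in the annuli defined by r_edges.  Sigma is the
    surface density in each annulus; the enclosed mean Sigma_bar is
    evaluated inside the annulus' outer edge.
    """
    sigma = np.empty(len(r_edges) - 1)
    sigma_bar = np.empty_like(sigma)
    for i, (r_in, r_out) in enumerate(zip(r_edges[:-1], r_edges[1:])):
        in_annulus = (r_proj >= r_in) & (r_proj < r_out)
        area = np.pi * (r_out**2 - r_in**2)
        sigma[i] = mass[in_annulus].sum() / area
        # mean surface mass density enclosed within r_out
        sigma_bar[i] = mass[r_proj < r_out].sum() / (np.pi * r_out**2)
    return sigma_bar - sigma
```

For a single point mass at the centre, the first annulus gives $\Delta\Sigma=0$ and outer annuli recover the familiar $M/(\pi R^2)$ point-mass signal.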
In this section, the weak-lensing mass profiles of the simulated clusters in each snapshot are averaged in bins of $N^\text{gal}_{200}$. The average profiles from each snapshot are then combined with appropriate weights to obtain a mean profile for each $N^\text{gal}_{200}$-bin in a volume-limited survey. In doing this, we also take into account the effect of cluster centre misidentification, which has a considerable impact on the average profiles at radii $R<1h^{-1}\,\ensuremath{\mathrm{Mpc}}$ (see Sec.~\ref{sec:cluster_mass_profile_fits}).
\begin{figure*}
\centerline{\includegraphics[width=0.8\linewidth]{mass_profiles}}
\caption{
\label{fig:mass_profiles}
The average weak-lensing mass profile $\Delta \Sigma(R)$ as a function of radius $R$ for clusters in different richness bins in the MS (dashed lines), WMAP1-A (dotted lines), WMAP3-B (dash-dotted lines), and the WMAP3-C model (dash-dot-dotted lines) compared to the observed profiles in the SDSS \protect\citep[][points with error bars]{SheldonEtal2009_SDSS_cluster_wl_I}.
}
\end{figure*}
In Fig.~\ref{fig:mass_profiles}, the weak-lensing mass profiles for the simulated clusters in the MS and the WMAP models are compared to the measured profiles of maxBCG clusters in the SDSS \citep{SheldonEtal2009_SDSS_cluster_wl_I}. The simulated and observed profiles agree remarkably well in detailed shape and amplitude. This is strong evidence that our models provide a realistic description not only of the density profile and galaxy content of galaxy clusters, but also of the maxBCG cluster selection and richness measurement.
Differences between the galaxy models and the observations are small but noticeable. The simulated density profiles of the MS tend to be above the observed profiles in the poorer clusters but are an excellent fit in the rich systems. The WMAP3 model profiles fit better in the poor clusters but are mostly below the observed profiles in richer clusters. For radii $R\lesssim 1h^{-1}\ensuremath{\mathrm{Mpc}}$, the mass profiles of the WMAP3 models are roughly 30\% lower than in the MS/WMAP1 models. This suggests that for given richness, the maxBCG clusters are on average slightly less massive than clusters in the MS/WMAP1 simulations but slightly more massive than those in the WMAP3 simulations. We will investigate this issue in more detail in Sec.~\ref{sec:mass_richness_relation}.
\subsection{Cluster abundance}
\label{sec:cluster_abundance}
\begin{table}
\center
\caption{
\label{tab:cluster_densities}
The area density $n^\text{ang}_{\geq 10}$ [in $\ensuremath{\mathrm{deg}}^{-2}$] of clusters with richness $N^\text{gal}_{200}\geq 10$ and redshift $0.1 \leq z \leq 0.3$ for the model clusters in the MS and WMAP simulations and for the SDSS maxBCG clusters \protect\citep{KoesterEtal2007_SDSS_clusters}. Considered are the cases that all cluster centres are correctly identified ($p_\text{c}=1$), and that only a fraction of clusters given by Eq.~\ref{eq:a_priori_center_fraction} is correctly centred ($p_\text{c}<1$).
}
\begin{tabular}{l l l}
\hline
\hline
& $p_\text{c}=1$ & $p_\text{c}<1$ \\
\hline
SDSS & & 1.8 \\
MS & 1.7 & 1.4 \\
WMAP1-A & 1.8 & 1.6 \\
WMAP3-B & 0.6 & 0.5 \\
WMAP3-C & 0.9 & 0.7 \\
\hline
\end{tabular}
\end{table}
Table~\ref{tab:cluster_densities} compares the area density $n^\text{ang}_{\geq 10}$ of clusters with richness $N^\text{gal}_{200}\geq10$ and redshift $0.1 \leq z \leq 0.3$ in our simulated surveys to the observed abundance of maxBCG clusters in the SDSS. If we ignore misidentification of the cluster centres, the MS and the WMAP1-A model yield cluster abundances very similar to the real maxBCG cluster sample. In contrast, the cluster abundances for the WMAP3 models are a factor 2-3 lower than that observed in the SDSS. The lowest abundance is found for the WMAP3-B model.
To obtain an estimate for the statistical error on the cluster density, we employ the subsampling method described in Sec.~\ref{sec:model_cluster_samples}: We use the MS model to create subsamples with sizes equal to the WMAP1-A model. From these subsamples, we estimate a standard deviation for the area density of clusters of 25\% for the WMAP1-A model. Taking into account the lower cluster densities in the WMAP3 models, a slightly larger statistical error of 30-40\% can be assumed for these. A simple extrapolation of the WMAP1-A error yields a much smaller error of 3\% for the cluster density in the MS model.
Taking into account centre misidentification in our simulations reduces the cluster abundances by $\approx 20\%$ and thus increases the discrepancy between the WMAP3 models and the SDSS cluster sample. The abundance is then also somewhat low even in the MS case, suggesting, as noted above, that our procedure for modelling the effects of centre misidentification may overestimate the effect in the real maxBCG catalogues.
\begin{figure}
\centerline{\includegraphics[width=1\linewidth]{cluster_density_vs_redshift}}
\caption{
\label{fig:cluster_density_vs_redshift}
The comoving abundance $n_{\geq10}^\text{com}(z)$ of clusters with richness $N^\text{gal}_{200}\geq 10$ as a function of redshift $z$ in the SDSS maxBCG catalogue \protect\citep{KoesterEtal2007_SDSS_clusters} and in our simulated cluster catalogues based on the MS and the WMAP simulations.
}
\end{figure}
The redshift dependence of the comoving cluster abundance is shown in Fig.~\ref{fig:cluster_density_vs_redshift}. All our models show a slight decrease of the comoving cluster density with increasing redshift. The statistical errors on the comoving cluster abundance estimated with the subsampling method are similar to the errors on the cluster area density, i.e. 25-40\% for the WMAP models and $\sim3\%$ for the MS. For $z>0.15$, the abundances in the MS and WMAP1-A models agree very well with the maxBCG results. In contrast, there is a much larger observed density at low redshifts $z<0.15$, which is not seen in the models and presumably reflects nearby large-scale structure such as the SDSS Great Wall.
\begin{figure}
\centerline{\includegraphics[width=1\linewidth]{N_200_histogram}}
\caption{
\label{fig:cluster_histogram}
The surface density $n^\text{ang}(N^\text{gal}_{200})$ of clusters with redshift $0.1 \leq z \leq 0.3$ as a function of richness $N^\text{gal}_{200}$. Counts in the SDSS maxBCG cluster sample \protect\citep{KoesterEtal2007_SDSS_clusters} are compared to counts in our simulated cluster catalogues for the MS and the WMAP simulations.
}
\end{figure}
The dependence of the cluster counts on richness $N^\text{gal}_{200}$ is illustrated in Fig.~\ref{fig:cluster_histogram}. Cluster abundances in the MS and WMAP1-A models are lower than in the SDSS for $N^\text{gal}_{200} < 20$, but exceed the observed abundances for $N^\text{gal}_{200}\geq20$. Abundances in the WMAP3 models are always lower than in the MS and WMAP1-A model and below the observations. For $N^\text{gal}_{200}\geq9$, the cluster abundances in the WMAP3 models are 2-20 times lower than the abundances in the WMAP1-A model and 2-5 times lower than the observed abundances. Hence, the differences are always larger than the statistical errors inferred from the subsampling (which are $\sim$10-100\% for the WMAP models and $\sim$1-10\% for the MS and SDSS). The differences are largest in the high-$N^\text{gal}_{200}$ tail of the distribution.
The low cluster abundances for the WMAP3 models in comparison to the MS and WMAP1-A models, in particular for large $N^\text{gal}_{200}$, are a reflection of the different cosmologies. Rich clusters have massive dark matter halos, and there are far fewer massive halos in the WMAP3 cosmology than in the WMAP1 cosmology. This is mainly due to the lower value of $\sigma_8$. The higher star formation efficiency in the WMAP3 models does not sufficiently enhance the number of bright ridgeline galaxies per unit mass to compensate for the decrease in the number of massive halos. As a result, there are fewer rich clusters in the WMAP3 models than in the MS and WMAP1-A models.
The observed abundance of rich clusters is slightly lower than the abundance predicted for the MS and much larger than the abundance predicted by the WMAP3 models. This suggests that $0.72<\sigma_8<0.9$ for our Universe, with $\sigma_8$ probably closer to 0.9 than to 0.72. This is consistent with some recent estimates [e.g. $\sigma_8 = 0.80 \pm 0.02$ by \citealp[][]{LesgourguesEtal2007},
$\sigma_8=0.81 \pm 0.03$ by \citealp[][]{KomatsuEtal2009}, and $\sigma_8 = (0.83\pm 0.03)(\Omega_\mathrm{M}/0.25)^{-0.41}$ by \citealp[][]{RozoEtal2009_MaxBCG_V_cosmology_arXiv}].
The results for the abundances of rich clusters alone are not sufficient to definitively conclude that $0.72<\sigma_8<0.9$. For example, problems with the modelling of the ridgeline galaxies and their identification could have led to inaccurate estimates for the cluster richness and thus to incorrect cluster abundances. However, there is complementary evidence from the mass-richness relation, which we discuss in Sec.~\ref{sec:mass_richness_relation} and \ref{sec:mass_function}.
\subsection{Mass-richness relation}
\label{sec:mass_richness_relation}
\begin{figure}
\centerline{\includegraphics[width=1\linewidth]{M_crit_200_vs_N_200}}
\caption{
\label{fig:gal_number_vs_crit_cluster_mass}
The average cluster mass $\ev{\ensuremath{M^\text{crit}_{200}}}$ vs. richness $\ev{N^\text{gal}_{200}}$ relations for cluster catalogues from the MS and WMAP simulations are compared to the relation derived from the SDSS maxBCG catalogues by \protect\citet{JohnstonEtal2007_SDSS_cluster_wl_II_arXiv}.
}
\end{figure}
The weak-lensing mass profiles discussed in Sec.~\ref{sec:cluster_mass_profiles} can be used to estimate spherical-overdensity cluster masses. This can be done, e.g., by a non-parametric conversion of the weak-lensing mass profiles into average 3D density profiles \citep[][]{JohnstonEtal2007}, or by fitting a parametrised model of the average cluster density to the shear data \citep[][]{JohnstonEtal2007_SDSS_cluster_wl_II_arXiv}.
In Fig.~\ref{fig:gal_number_vs_crit_cluster_mass}, average cluster masses $\ev{\ensuremath{M^\text{crit}_{200}}}$ in our simulated catalogues\footnote{
We use the masses measured directly from the matter distribution in the simulations. As discussed in Sec.~\ref{sec:cluster_mass_profile_fits}, these are consistent with the masses obtained from parametric fits to the weak-lensing mass profiles.
}
are shown as a function of cluster richness $\ev{N^\text{gal}_{200}}$ and are compared to the corresponding relation for SDSS maxBCG clusters by \citet{JohnstonEtal2007_SDSS_cluster_wl_II_arXiv}, who calculated the cluster masses by fitting parametric models to the observed lensing signal.\footnote{We multiplied the masses given in \citet{JohnstonEtal2007_SDSS_cluster_wl_II_arXiv} by a factor $1.18\pm0.06$, which is a photo-$z$ bias correction advocated by \citet{MandelbaumEtal2008} and \citet{RozoEtal2009_MaxBCG_III_scatter}.} Remarkably, all our simulations reproduce the observed mass-richness relation within $\sim30\%$ over the two orders of magnitude spanned by the SDSS clusters. This corroborates our finding in Sec.~\ref{sec:cluster_mass_profiles} that our models provide an adequate description of the statistical properties of optically selected galaxy clusters.
At given richness, the differences in cluster density profiles between models and observations (see Fig.~\ref{fig:mass_profiles}) imply differences in mean cluster mass. For $N^\text{gal}_{200}<10$, the MS yields cluster masses that are 30-40\% higher than the SDSS maxBCG cluster masses derived by \citet{JohnstonEtal2007_SDSS_cluster_wl_II_arXiv}. For $N^\text{gal}_{200}\geq10$, the MS masses are up to 20\% higher than the SDSS maxBCG masses. The cluster masses of the WMAP1-A model are comparable to those of the MS, but seem to be affected by sampling noise for large $\ev{N^\text{gal}_{200}}$. The cluster masses in the WMAP3-B model are lower than those for the MS model by 20-30\%, and fall below the SDSS cluster masses at large richness. Among our models, the lowest cluster masses are found for the WMAP3-C model, where the values are always smaller than those in the SDSS.
The mass-richness relation for the WMAP1-A model differs by up to 30\% from the relation for the MS model (which is much less affected by sampling noise due to its 64 times larger volume). Using subsamples created from the MS as described in Sec.~\ref{sec:model_cluster_samples}, we estimate a standard deviation for the binned cluster mass of 2\% for $N^\text{gal}_{200}=3$ and 20\% for $71\leq N^\text{gal}_{200}\leq220$ for the WMAP1-A model. A similar statistical error can be assumed for the WMAP3 models. This is consistent with the differences between the MS and WMAP1-A models being solely due to sampling noise.
\begin{figure}
\centerline{\includegraphics[width=1\linewidth]{M_mean_200_vs_N_200}}
\caption{
\label{fig:gal_number_vs_mean_cluster_mass}
The average cluster mass $\ev{\ensuremath{M^\text{mean}_{200}}}$ vs. richness $\ev{N^\text{gal}_{200}}$ relations for cluster catalogues from our MS and WMAP simulations are compared to the SDSS relation given by \protect\citet{ReyesEtal2008}.
}
\end{figure}
Another analysis of the weak-lensing data for the SDSS maxBCG cluster sample has been performed by \citet{ReyesEtal2008}. In Fig.~\ref{fig:gal_number_vs_mean_cluster_mass}, we compare the average cluster masses $\ev{\ensuremath{M^\text{mean}_{200}}}$ of our simulated clusters to their results as a function of richness. As for $\ev{\ensuremath{M^\text{crit}_{200}}}$, the average cluster masses $\ev{\ensuremath{M^\text{mean}_{200}}}$ in the MS model are up to 30\% larger than the SDSS maxBCG cluster masses, whereas the cluster masses of the WMAP3 models are comparable to those in the SDSS.
\begin{table}
\center
\caption{
\label{tab:N_200_vs_M_200crit_fit_params}
The best-fit parameters (calculated from a least-squares fit of $\log_{10}\ev{N^\text{gal}_{200}}$ against $\log_{10}\ev{\ensuremath{M^\text{crit}_{200}}}$) for the mass-richness relation~\eqref{eq:N_200_vs_M_200_fit} of simulated clusters with $N^\text{gal}_{200}\geq9$ is compared to the best-fitting parameters for the SDSS maxBCG clusters with masses measured by \protect\citet{JohnstonEtal2007_SDSS_cluster_wl_II_arXiv}.
}
\begin{tabular}{l c c c c}
\hline
\hline
& $M^\text{crit}_{200|20}$ [$h^{-1} \ensuremath{\mathrm{M}_\odot}$] & $\alpha_N^\text{crit}$ \\
\hline
SDSS & $(1.12 \pm 0.03)\times10^{14}$ & $1.14 \pm 0.04$ \\
MS & $(1.24 \pm 0.01)\times10^{14}$ & $1.09 \pm 0.01$ \\
WMAP1-A & $(1.11 \pm 0.08)\times10^{14}$ & $1.21 \pm 0.09$ \\
WMAP3-B & $(1.05 \pm 0.04)\times10^{14}$ & $0.97 \pm 0.05$ \\
WMAP3-C & $(0.92 \pm 0.04)\times10^{14}$ & $1.05 \pm 0.05$ \\
\hline
\end{tabular}
\end{table}
\begin{table}
\center
\caption{
\label{tab:N_200_vs_M_200mean_fit_params}
The best-fit parameters (calculated from a least-squares fit of $\log_{10}\ev{N^\text{gal}_{200}}$ against $\log_{10}\ev{\ensuremath{M^\text{mean}_{200}}}$) for the mass-richness relation~\eqref{eq:N_200_vs_M_200_fit} of simulated clusters with $N^\text{gal}_{200}\geq9$ is compared to the best-fit parameters for the SDSS maxBCG clusters with masses measured by \protect\citet{ReyesEtal2008}.
}
\begin{tabular}{l c c c c}
\hline
\hline
& $M^\text{mean}_{200|20}$ [$h^{-1} \ensuremath{\mathrm{M}_\odot}$] & $\alpha_N^\text{mean}$ \\
\hline
SDSS & $(1.42 \pm 0.03)\times10^{14}$ & $1.19 \pm 0.03$ \\
MS & $(1.68 \pm 0.02)\times10^{14}$ & $1.08 \pm 0.01$ \\
WMAP1-A & $(1.53 \pm 0.08)\times10^{14}$ & $1.19 \pm 0.06$ \\
WMAP3-B & $(1.58 \pm 0.04)\times10^{14}$ & $1.06 \pm 0.03$ \\
WMAP3-C & $(1.38 \pm 0.04)\times10^{14}$ & $1.13 \pm 0.04$ \\
\hline
\end{tabular}
\end{table}
The mass-richness relations shown in Fig.~\ref{fig:gal_number_vs_crit_cluster_mass} and \ref{fig:gal_number_vs_mean_cluster_mass} suggest a power law, although with a steeper slope for $\ev{N^\text{gal}_{200}}<10$ than for $\ev{N^\text{gal}_{200}}\gtrsim10$. Here, we fit the mean mass-richness relation for clusters with $N^\text{gal}_{200} \geq 9$ by:
\begin{equation}
\label{eq:N_200_vs_M_200_fit}
\ensuremath{M^\text{crit/mean}_{200}}(N^\text{gal}_{200}) = M^\text{crit/mean}_{200|20}
\left(\frac{N^\text{gal}_{200}}{20}\right)^{\alpha_N^\text{crit/mean}}.
\end{equation}
The best-fit parameters for our simulated catalogues are compared to those for the SDSS in Tables~\ref{tab:N_200_vs_M_200crit_fit_params} and \ref{tab:N_200_vs_M_200mean_fit_params}.
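The tabulated power-law fits can be reproduced schematically as follows (a sketch only: we regress $\log_{10}\ev{M}$ on $\log_{10}(\ev{N}/20)$ with an unweighted least-squares fit, whereas the actual fits may weight the bins by their errors):

```python
import numpy as np

def fit_mass_richness(N_mean, M_mean, pivot=20.0):
    """Least-squares power-law fit  M = M_pivot * (N / pivot)**alpha
    in log-log space; returns (M_pivot, alpha)."""
    x = np.log10(np.asarray(N_mean) / pivot)
    y = np.log10(np.asarray(M_mean))
    # np.polyfit returns the highest-order coefficient first
    alpha, log_M_pivot = np.polyfit(x, y, 1)
    return 10.0 ** log_M_pivot, alpha
```

Pivoting at $N^\text{gal}_{200}=20$ (roughly the centre of the fitted richness range) decorrelates the amplitude and slope estimates.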
The lower cluster masses in the WMAP3 models than in the MS and WMAP1-A models again reflect the different cosmologies. The less evolved dark matter structure in the WMAP3 cosmology requires more efficient star formation in order to match observed galaxy numbers. This results in more ridgeline galaxies in a dark matter halo of given mass. Consequently, for a given richness, halos are less massive in the WMAP3 models than in the MS and WMAP1-A models.
Except for the largest richness bin, observed cluster masses are smaller than in the MS model. This suggests a normalisation $\sigma_8 < 0.9$ for our Universe \citep[again consistent with the recent estimate $\sigma_8=0.81 \pm 0.03$ by][]{KomatsuEtal2009}.
\subsection{Mass-luminosity relation}
\label{sec:mass_luminosity_relation}
\begin{figure}
\centerline{\includegraphics[width=1\linewidth]{M_crit_200_vs_L_200}}
\caption{
\label{fig:gal_lum_vs_crit_cluster_mass}
Average cluster mass $\ev{\ensuremath{M^\text{crit}_{200}}}$ vs. the total $i$-band luminosity $\ev{L^\text{gal}_{200}}$ of ridgeline galaxies within $\ensuremath{R^\text{gal}_{200}}$. Results for our MS and WMAP simulations are compared to SDSS results based on cluster masses by \protect\citet{JohnstonEtal2007_SDSS_cluster_wl_II_arXiv}.
}
\end{figure}
\begin{table}
\center
\caption{
\label{tab:L_200_vs_M_crit_200_fit_params}
The best-fit parameters for the mass-luminosity relation~\eqref{eq:L_200_vs_M_200_fit} in our simulations (calculated from a
least-squares fit of $\log_{10}\ev{L^\text{gal}_{200}}$ vs. $\log_{10}\ev{\ensuremath{M^\text{crit}_{200}}}$) are compared to the values by \protect\citet{JohnstonEtal2007_SDSS_cluster_wl_II_arXiv} for the SDSS maxBCG cluster sample.
}
\begin{tabular}{l c c}
\hline
\hline
& $M^\text{crit}_{200|40}$ [$h^{-1} \ensuremath{\mathrm{M}_\odot}$] & $\alpha_L^\text{crit}$ \\
\hline
SDSS & $(1.09 \pm 0.03)\times 10^{14}$ & $1.23 \pm 0.03$ \\
MS & $(1.13 \pm 0.02)\times 10^{14}$ & $1.18 \pm 0.01$ \\
WMAP1-A & $(1.00 \pm 0.04)\times 10^{14}$ & $1.16 \pm 0.04$ \\
WMAP3-B & $(0.89 \pm 0.02)\times 10^{14}$ & $1.16 \pm 0.02$ \\
WMAP3-C & $(0.81 \pm 0.02)\times 10^{14}$ & $1.17 \pm 0.02$ \\
\hline
\end{tabular}
\end{table}
The mass-luminosity relation computed by binning our simulated cluster samples in luminosity $L^\text{gal}_{200}$ is shown in Fig.~\ref{fig:gal_lum_vs_crit_cluster_mass}. For given luminosity, the MS model yields cluster masses very similar to the SDSS masses of \citet{JohnstonEtal2007_SDSS_cluster_wl_II_arXiv}, whereas the WMAP3 models produce mean cluster masses that are generally smaller than the SDSS masses. The best-fit parameters for the mass-luminosity relation
\begin{equation}
\label{eq:L_200_vs_M_200_fit}
\ensuremath{M^\text{crit}_{200}}(L^\text{gal}_{200}) = M^\text{crit}_{200|40} \left(\frac{L^\text{gal}_{200}}{4\times10^{11}h^{-2}\ensuremath{\mathrm{L}_\odot}}\right)^{\alpha_L^\text{crit}}
\end{equation}
are listed in Table~\ref{tab:L_200_vs_M_crit_200_fit_params}.
The differences in the mass-luminosity relation between the various galaxy formation models can be explained in the same way as the differences in the mass-richness relation. The higher star formation efficiency in the WMAP3 models creates more bright ridgeline galaxies in a halo of a given mass $\ensuremath{M^\text{crit}_{200}}$ than in the MS and WMAP1-A models. This leads to lower average cluster masses at given luminosity $L^\text{gal}_{200}$.
\subsection{The scatter in the mass-richness relation}
\label{sec:mass_richness_scatter}
\begin{figure}
\centerline{\includegraphics[width=1\linewidth]{pdf_of_log_M_crit_200}}
\caption{
\label{fig:log_mass_pdf}
The distribution of the logarithm of cluster mass $\log_{10}(\ensuremath{M^\text{crit}_{200}})$ in various bins of richness $N^\text{gal}_{200}$. Shown are distributions for the MS model (dashed/dotted lines) and fits of these distributions to a normal distribution (solid lines).
}
\end{figure}
To compute a cluster mass function from cluster counts and mean masses as a function of richness, one needs to model the scatter in mass at each richness. In Fig.~\ref{fig:log_mass_pdf}, distributions of the logarithm of cluster mass $\log_{10}(\ensuremath{M^\text{crit}_{200}})$ are shown for the MS model for several bins in $N^\text{gal}_{200}$. These distributions are well described by Gaussians.
For our various galaxy formation models, the standard deviation $\sigma_{\log_{10}(\ensuremath{M^\text{crit}_{200}})}$ of the scatter in the logarithm of cluster mass $\log_{10}(\ensuremath{M^\text{crit}_{200}})$ at given $N^\text{gal}_{200}$ is listed for various $N^\text{gal}_{200}$- and $L^\text{gal}_{200}$-bins in Tables~\ref{tab:clusters_summary_MS}-\ref{tab:clusters_summary_WMAP3_C_L}. The scatter decreases with increasing $N^\text{gal}_{200}$ or $L^\text{gal}_{200}$ and tends to be larger at given $N^\text{gal}_{200}$ than at the corresponding $L^\text{gal}_{200}$.
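The tabulated scatter values amount to the standard deviation of $\log_{10}(M)$ within each richness bin, which can be sketched as (function name and bin edges are illustrative):

```python
import numpy as np

def scatter_in_bins(N_gal, masses, bin_edges):
    """Standard deviation of log10(M) in each richness bin, i.e. the
    sigma_log10(M) values tabulated per bin."""
    logM = np.log10(np.asarray(masses))
    N_gal = np.asarray(N_gal)
    sigmas = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (N_gal >= lo) & (N_gal < hi)
        sigmas.append(logM[sel].std(ddof=1))
    return np.array(sigmas)
```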
Our values for the scatter in the mass-richness relation are in good agreement with those found empirically by \citet{RozoEtal2009_MaxBCG_III_scatter} for the maxBCG cluster sample (using X-ray luminosities as an additional mass proxy): We find $\sigma_{\log_{10}(\ensuremath{M^\text{crit}_{200}})}\approx0.15\,$-$\,0.2$ for the richness bins with $N^\text{gal}_{200}\geq9$, while \citet{RozoEtal2009_MaxBCG_III_scatter} find $\sigma_{\log_{10}(\ensuremath{M^\text{crit}_{200}})}\approx0.20\pm0.09$ for $N^\text{gal}_{200}\approx40$. Moreover, our values are consistent with the scatter in the velocity dispersion-richness relation derived by \citet{BeckerEtal2007} for the maxBCG clusters, if centre misidentification is taken into account. Note that there may be additional effects that increase the observed scatter, but are not modelled here.
\subsection{The mass function}
\label{sec:mass_function}
\begin{figure}
\centerline{\includegraphics[width=1\linewidth]{mass_function}}
\caption{
\label{fig:mass_function}
The differential abundance $\diff{n}(\ensuremath{M^\text{crit}_{200}})/\diff{\ensuremath{M^\text{crit}_{200}}}$ as a function of mass $\ensuremath{M^\text{crit}_{200}}$ for clusters with redshift $0.1 \leq z \leq 0.3$ in the MS: directly measured from the simulation (solid line) and reconstructed from the richness bins with $N^\text{gal}_{200}\geq 3$ (dashed line) or $N^\text{gal}_{200}\geq 9$ (dotted line).
}
\end{figure}
When the cluster abundance and the mass distribution at each richness are known, it is straightforward to reconstruct the cluster mass function \citep[][]{RozoEtal2009_MaxBCG_III_scatter}. The differential cluster number density (or differential mass function) is then given by a sum over all richness bins:
\begin{equation}
\diff{n}(\ensuremath{M^\text{crit}_{200}})/\diff{\ensuremath{M^\text{crit}_{200}}} = \sum_{i=1}^{N_\text{bins}} n_i
\mathrm{pdf}_i(\ensuremath{M^\text{crit}_{200}}),
\end{equation}
where $n_i$ denotes the space density and $\mathrm{pdf}_i(\ensuremath{M^\text{crit}_{200}})$ the mass distribution for clusters in richness bin $i$.
The results of Sec.~\ref{sec:mass_richness_scatter} show that the mass distributions $\mathrm{pdf}_i(\ensuremath{M^\text{crit}_{200}})$ at each richness can be approximated by a log-normal distribution with mean $\EV{\ensuremath{M^\text{crit}_{200}}}$ and scatter $\sigma_{\log_{10}(\ensuremath{M^\text{crit}_{200}})}$ given by the values in Tables~\ref{tab:clusters_summary_MS}-\ref{tab:clusters_summary_WMAP3_C}. As Fig.~\ref{fig:mass_function} illustrates for the MS model, the reconstructed mass function matches the true mass function well for $\ensuremath{M^\text{crit}_{200}} \gtrsim 2\times10^{14}h^{-1}\ensuremath{\mathrm{M}_\odot}$. For smaller masses, the richness-selected cluster sample becomes incomplete in mass, and thus the reconstruction fails.
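A sketch of this reconstruction, assuming log-normal $\mathrm{pdf}_i$ parametrised so that the arithmetic mean equals the tabulated $\EV{\ensuremath{M^\text{crit}_{200}}}$ (the bin values below are invented for illustration, not taken from the tables):

```python
import numpy as np

def lognormal_pdf(M, M_mean, sigma_log10):
    """pdf of M when log10(M) is Gaussian with scatter sigma_log10,
    with the median shifted so that the arithmetic mean is M_mean."""
    # For ln M ~ N(mu, s): <M> = exp(mu + s**2 / 2)
    s_ln = sigma_log10 * np.log(10.0)
    mu_ln = np.log(M_mean) - 0.5 * s_ln**2
    return np.exp(-(np.log(M) - mu_ln) ** 2 / (2.0 * s_ln**2)) / (
        M * s_ln * np.sqrt(2.0 * np.pi))

def reconstructed_mass_function(M, n_bins, M_mean_bins, sigma_bins):
    """dn/dM as the sum over richness bins of n_i * pdf_i(M)."""
    dndM = np.zeros_like(M, dtype=float)
    for n_i, M_i, s_i in zip(n_bins, M_mean_bins, sigma_bins):
        dndM += n_i * lognormal_pdf(M, M_i, s_i)
    return dndM
```

By construction, integrating the reconstructed $\diff{n}/\diff{M}$ over all masses returns the total cluster density $\sum_i n_i$.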
\begin{figure}
\centerline{\includegraphics[width=1\linewidth]{reconstructed_mass_function}}
\caption{
\label{fig:reconstructed_mass_function}
The differential abundance $\diff{n}(\ensuremath{M^\text{crit}_{200}})/\diff{\ensuremath{M^\text{crit}_{200}}}$ as a function of mass $\ensuremath{M^\text{crit}_{200}}$ for clusters with redshift $0.1 \leq z \leq 0.3$ reconstructed from the masses and abundances of clusters with richness $N^\text{gal}_{200}\geq3$. Compared are the results for MS and WMAP simulations and the SDSS (calculated from the abundances by \citealp{KoesterEtal2007_SDSS_clusters} and \protect\citealp{SheldonEtal2009_SDSS_cluster_wl_I}, the masses by \protect\citealp{JohnstonEtal2007_SDSS_cluster_wl_II_arXiv}, and the scatter by \protect\citealp{RozoEtal2009_MaxBCG_III_scatter}).
}
\end{figure}
The reconstructed cluster mass functions for the different galaxy models and the SDSS are compared in Fig.~\ref{fig:reconstructed_mass_function}.
The figure illustrates clearly why we expect a $\Lambda$CDM cosmology with normalisation $0.72<\sigma_8<0.9$ to provide a better fit to the SDSS cluster data than the models considered here.
The values for the WMAP3 models are always much smaller than those reconstructed from the SDSS, while the MS yields values above the observations. The reconstructed cluster mass function for the WMAP1 model generally follows the MS results, but is visibly affected by sampling noise for larger cluster masses.
Since the cluster mass function can be recovered from the cluster abundances and cluster masses as functions of richness, these quantities cannot vary independently if the cluster mass function is fixed. Different assumptions about the galaxy formation physics or the richness measurements that, for given richness, lead to higher cluster abundances will also yield lower cluster masses (and vice versa).
Thus, the abundance-richness relation discussed in Sec.~\ref{sec:cluster_abundance} and the mass-richness relation discussed in Sec.~\ref{sec:mass_richness_relation} provide complementary information on the cosmology.
\subsection{Fits to the cluster mass profiles}
\label{sec:cluster_mass_profile_fits}
The results of Sec.~\ref{sec:mass_richness_scatter} justify the use of a log-normal mass distribution for fits to observed mass profiles \citep[e.g., by][]{JohnstonEtal2007_SDSS_cluster_wl_II_arXiv,ReyesEtal2008}. Here, we illustrate that one can indeed obtain a good fit to the simulated mean mass profiles of clusters with assumptions similar to those used, e.g., by \citet{JohnstonEtal2007_SDSS_cluster_wl_II_arXiv}.
\begin{figure}
\centerline{\includegraphics[width=0.8\linewidth]{center_vs_off_center_mass_profiles}}
\caption{
\label{fig:center_vs_off_center_mass_profiles}
Comparison of the average weak-lensing mass profiles $\Delta \Sigma(R)$ as a function of radius $R$ for all clusters (solid line), for the correctly centred clusters (dashed line), and for the incorrectly centred clusters (dotted line) in the MS.
}
\end{figure}
The mass profiles discussed in Sec.~\ref{sec:cluster_mass_profiles} are a mixture of correctly and incorrectly centred clusters. In Fig.~\ref{fig:center_vs_off_center_mass_profiles}, the simulated average mass profiles of all clusters are compared to the profiles of correctly and incorrectly centred clusters. The profiles agree well for large radii, but differ significantly below a certain radius $R \approx 0.5h^{-1}\,\ensuremath{\mathrm{Mpc}}$ for clusters with richness $N^\text{gal}_{200}=3$, and $R\approx 1h^{-1}\,\ensuremath{\mathrm{Mpc}}$ for clusters with $N^\text{gal}_{200}\geq9$.
As Fig.~\ref{fig:center_vs_off_center_mass_profiles} illustrates, centre misidentification has a significant impact on the average cluster mass profiles and thus needs to be taken into account in profile fits. Since stacking weak-lensing mass profiles is linear, we can discuss the contributions from correctly and incorrectly centred clusters separately. A weighted average of fits to these two components constitutes a fit to the average mass profile of all clusters in a richness bin.
We assume that the average mass profile of the correctly centred clusters in a richness bin consists of a central galaxy component, which we model as point mass, a mean dark matter halo modelled as an average over spherical NFW profiles \citep{NavarroFrenkWhite1997}, and a contribution from neighbouring mass concentrations. A log-normal distribution with mean $\EV{\ensuremath{M^\text{crit}_{200}}}$ and standard deviation $\sigma_{\log_{10}(\ensuremath{M^\text{crit}_{200}})}$ given by Table~\ref{tab:clusters_summary_MS} is assumed for the halo masses. Furthermore, we assume that the concentration $c$ of halos with mass $\ensuremath{M^\text{crit}_{200}}$ follows a log-normal distribution with mean
\begin{equation}
\EV{c}(\ensuremath{M^\text{crit}_{200}}) = 4.67\left(\frac{\ensuremath{M^\text{crit}_{200}}}{10^{14}h^{-1}\ensuremath{\mathrm{M}_\odot}}\right)^{-0.11}
\end{equation}
and standard deviation $\sigma_{\log_{10}(c)}=0.15$ \citep[][]{NetoEtal2007}.
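For such fits, halo masses and concentrations can be drawn from these log-normal distributions as sketched below (a hypothetical helper: we shift the log-normal medians so that the arithmetic means match $\EV{M}$ and $\EV{c}$, which is one of several possible conventions for the quoted means):

```python
import numpy as np

def sample_halo_population(M_mean, sigma_log10_M, n=100000, seed=42):
    """Draw halo masses and concentrations for one richness bin from
    the log-normal distributions assumed in the text."""
    rng = np.random.default_rng(seed)
    # log-normal masses with arithmetic mean <M> = M_mean
    s_ln = sigma_log10_M * np.log(10.0)
    mu_ln = np.log(M_mean) - 0.5 * s_ln**2
    M = np.exp(rng.normal(mu_ln, s_ln, n))
    # mean concentration-mass relation (Neto et al. 2007 form)
    c_mean = 4.67 * (M / 1e14) ** (-0.11)
    # log-normal scatter of 0.15 dex about the mean relation
    sc_ln = 0.15 * np.log(10.0)
    c = np.exp(rng.normal(np.log(c_mean) - 0.5 * sc_ln**2, sc_ln, n))
    return M, c
```

The sampled population can then be used to average NFW model profiles over the mass and concentration distributions before comparing with the stacked $\Delta\Sigma(R)$.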
\begin{figure}
\centerline{\includegraphics[width=0.8\linewidth]{mass_profile_fit}}
\caption{
\label{fig:mass_profile_fit}
Fit to the weak-lensing mass profile $\Delta \Sigma(R)$ as a function of radius $R$ of correctly centred clusters in the MS model. Shown are the measured profiles (solid line), the 3-component fit (dashed line), the central galaxy contribution (dash-dotted line), the DM halo contribution (dotted line), and the contribution from neighbouring masses (dash-dot-dotted line).
}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.8\linewidth]{mass_profile_fit_off_center}}
\caption{
\label{fig:mass_profile_fit_off_center}
Fit to the weak-lensing mass profile $\Delta \Sigma(R)$ as a function of radius $R$ of incorrectly centred clusters in the MS model. Shown are the measured profiles (solid line), the 4-component fit (short-dashed line), the central galaxy contribution (dash-dotted line), the DM halo contribution (dotted line), the contribution from neighbouring masses (dash-dot-dotted line), and the subhalo contribution (long-dashed line).
}
\end{figure}
The resulting fit to the average weak-lensing mass profile of correctly centred clusters is shown in Fig.~\ref{fig:mass_profile_fit} for clusters in the MS model with richness $18\leq N^\text{gal}_{200} \leq 25$. A fit of similar quality can also be obtained for the other richness bins.
We now turn to the contribution from incorrectly centred clusters. Figure~\ref{fig:center_vs_off_center_mass_profiles} shows that the average profile of the incorrectly centred clusters increases with decreasing radius even for small radii. This is a consequence of the particular choice for the apparent centre of these clusters in our simulation, namely another massive cluster galaxy within a massive subhalo. If the apparent cluster centres were chosen randomly, one would expect the profile to decrease with decreasing radius for $R<1h^{-1}\,\ensuremath{\mathrm{Mpc}}$ \citep{JohnstonEtal2007_SDSS_cluster_wl_II_arXiv}.
To obtain a good fit to the average mass profiles of incorrectly centred clusters, we thus need four components: a central galaxy component (again modelled as point mass), a mean dark matter halo modelled as an NFW profile convolved with a 2D Gaussian, a contribution from neighbouring masses, and a subhalo, which we model as a truncated NFW profile \citep[][]{BaltzMarshallOguri2009}. As Fig.~\ref{fig:mass_profile_fit_off_center} illustrates, the subhalo component is essential for a good fit to the simulated mass profiles.
\section{Summary and Discussion}
\label{sec:discussion}
In this work, we have used $N$-body simulations of cosmic structure formation with semi-analytic galaxy formation modelling to test this modelling and to investigate how the properties of optically selected galaxy groups and clusters depend on cosmological parameters. We have created catalogues of simulated galaxy groups/clusters from model galaxy catalogues \citep[by][]{DeLuciaBlaizot2007,WangEtal2008}. We have computed weak-lensing mass profiles and various other properties for these clusters as a function of cluster richness $N^\text{gal}_{200}$ and luminosity $L^\text{gal}_{200}$ \citep[as defined by the maxBCG algorithm of][]{KoesterEtal2007_MaxBCG}, and compared the results to observations of clusters in the SDSS \citep[][]{SheldonEtal2009_SDSS_cluster_wl_I,JohnstonEtal2007_SDSS_cluster_wl_II_arXiv,ReyesEtal2008}.
We find that the simulated weak-lensing mass profiles and the observed profiles of the SDSS maxBCG clusters agree remarkably well in detailed shape and amplitude. Moreover, all simulations reproduce the observed mass-richness relation within $\sim30\%$ over the whole range probed by the SDSS clusters. The MS and the WMAP1-A simulation also yield cluster abundances very similar to the observed abundances. This shows that the models considered here provide a good description of the masses, density profiles and optical properties of galaxy clusters as well as the optical cluster selection and richness estimation of the maxBCG algorithm. Evidently, using mock galaxy catalogues based on a large high-resolution $\Lambda$CDM structure formation simulation and semi-analytic galaxy formation models makes it possible to create very realistic mock cluster catalogues for surveys like the SDSS (when problems with the ridgeline colours are overcome).
Although the underlying $N$-body simulations assume different cosmological parameters that lead to different DM halo abundances \citep[][]{SpringelEtal2005_Millennium, WangEtal2008}, all the galaxy models used here are able to reproduce the observed abundance and two-point correlations of galaxies reasonably well through careful adjustment of their star-formation efficiency and feedback parameters \citep[][]{DeLuciaBlaizot2007,WangEtal2008}. They differ, however, in their predicted cluster abundance as a function of $N^\text{gal}_{200}$. The MS model and the WMAP1-A model, both of which use cosmological parameters based on 1st-year WMAP data \citep[][]{SpergelEtal2003_WMAP_1stYear_Data}, predict cluster abundances that are compatible with the observed values \citep[][]{SheldonEtal2009_SDSS_cluster_wl_I}, whereas the WMAP3-A and WMAP3-B models, which use parameters consistent with the WMAP 3rd-year results \citep[][]{SpergelEtal2007_WMAP_3rdYear_Data}, yield abundances that are lower by a factor 2-3.
The cluster masses predicted as a function of richness $N^\text{gal}_{200}$ or luminosity $L^\text{gal}_{200}$ also differ for the different galaxy formation models. At given richness or luminosity, the MS and WMAP1-A models produce clusters that are up to 30\% more massive than observed, while the WMAP3 cluster masses are similar to or lower than the observed masses.
The different abundances and average cluster masses in our various galaxy formation models are primarily a reflection of the different underlying cosmologies. Because halos in the WMAP3 cosmology are less massive than in the MS/WMAP1 cosmology, the WMAP3 models need more efficient star formation to match the observed galaxy number densities. This produces more ridgeline galaxies in a dark matter halo of given mass. Consequently, clusters at a given richness are less massive in the WMAP3 models than in the MS and WMAP1-A models. Nevertheless, the higher star formation efficiency in the WMAP3 models does not fully compensate for the lower number of massive halos. So there are fewer rich clusters in the WMAP3 models than in the MS and WMAP1-A models.
The lower cluster masses in the observations compared to the MS model suggest that our Universe would be better described by $\sigma_8<0.9$. The higher observed abundance of rich clusters compared to the WMAP3 models suggests that $\sigma_8>0.72$. Thus, closer agreement between predicted and observed cluster properties is expected for an intermediate value $0.72<\sigma_8<0.9$. This corroborates the findings by \citet[][]{RozoEtal2009_MaxBCG_V_cosmology_arXiv} that the SDSS maxBCG cluster data favour $\sigma_8\approx0.83$.
Our results confirm that the mass distribution of clusters of given richness is well described by a log-normal distribution. This justifies both the assumption of such distributions and the specific scatter values adopted in previous work which modelled stacked cluster mass profiles or reconstructed cluster mass functions. Fits to the stacked mass profile of clusters whose centre has erroneously been identified with a non-central cluster galaxy should take into account a halo component associated with this non-central galaxy in addition to its stellar mass, the halo component of the main cluster, and surrounding large-scale structure.
Our simulations required many simplifying assumptions about the richness measurements (e.g. about limiting magnitudes, colours, or projection effects), which could result in biased estimates. Although predictions for cluster abundances or for cluster masses are individually subject to such modelling errors, they do not vary independently for varying assumptions about the
richness measurement. For example, assuming a fainter magnitude limit results in lower cluster masses \emph{and} higher cluster numbers. Thus, it proves difficult to decrease the average cluster masses for the MS model in order to better match the observations without producing too many rich clusters. Similarly, the cluster numbers in the WMAP3 models can only be brought into agreement with the observed counts at the cost of cluster masses which are too small.
Similar reasoning reveals that changes to the galaxy formation description alone, though they could change the number, colour, or brightness of the model galaxies, could only lead to better agreement for either cluster abundance or cluster mass, but not both. This stems from the constraint that the abundances and masses of clusters as a function of richness must `add up' to reproduce the underlying cluster mass function regardless of the specific galaxy formation model. To reach better agreement for both, the number of massive clusters has to be adjusted, too, by changing the cosmology.
Our results demonstrate that, on the one hand, the cluster mass-richness relation and the cluster abundance-richness function together provide strong constraints for the cosmology even without perfect knowledge of the galaxy formation physics. On the other hand, our findings show that the cluster abundance and cluster mass as functions of richness can also be used directly, in addition to galaxy abundance and galaxy two-point correlation, to test the galaxy models.
In future work, one should test how much the agreement between observed and predicted cluster properties can be improved by choosing different cosmological parameters for the simulations \citep[e.g. the values currently favoured by other observations such as][]{KomatsuEtal2009}. In addition, the galaxy models discussed here should be improved to better match the observed colours of ridgeline galaxies at higher redshift. (If we had not adjusted the ridgeline colour selection by hand, the models would have contained almost no clusters with $z>0.25$.) Future simulations should also probe other cosmologies and galaxy models. These simulations need to have a much larger volume than the WMAP simulations to have good enough statistics to match upcoming observations. (The statistical errors on the mass-richness relation in the WMAP simulations are comparable to the uncertainties in current observations.)
More realistic modelling of cluster selection and characterisation could be achieved by running the observational cluster-finding algorithms on mock galaxy catalogues created from the simulations. Moreover, ray-tracing techniques could be used to simulate realistically the weak-lensing mass measurements and to assess their statistical accuracy and possible systematic uncertainties.
Finally we note that there is some tension between our finding that the abundances and weak-lensing masses of clusters favour cosmologies with a normalisation $\sigma_8\approx0.8$ over those with $\sigma_8=0.72$ \citep[in agreement with estimates by][]{RozoEtal2009_MaxBCG_V_cosmology_arXiv,KomatsuEtal2009} and the findings by \citet{LiEtal2009} and \citet{CacciatoEtal2009} that galaxy-galaxy lensing and galaxy clustering data are consistent with $\sigma_8=0.73$. It is beyond the scope of this paper to analyse possible reasons for this discrepancy and whether it can be resolved with better handling of measurement systematics or improved structure formation models, but this should be done in future work.
\section*{Acknowledgments}
We thank Gabriella De Lucia, J{\'e}r{\'e}my Blaizot, Jie Wang, Ben Koester, Erin Sheldon, Jan Hartlap, and Peter Schneider for helpful discussions. We thank Jie Wang and collaborators for granting access to their simulation data. We thank Erin Sheldon for providing the SDSS cluster mass profiles. This work was supported by the DFG within the Priority Programme 1177 under the projects SCHN 342/6 and WH 6/3.
\section{Introduction}\label{S:Intro}
The rapid growth of wireless communications and the foreseen spectrum occupancy problems, due to the exponentially increasing consumer demands on mobile traffic and data, motivated the evolution of the concept of cognitive radio (CR) \cite{FCC_2002}.
CR systems require intelligent reconfigurable wireless devices, capable of sensing the conditions of the surrounding radio frequency (RF) environment and modifying their transmission parameters accordingly, in order to achieve the best overall performance, without interfering with other users \cite{ED_Bartlett}.
One fundamental task in CR is spectrum sensing, i.e., the identification of temporarily vacant portions of spectrum over wide ranges of spectrum resources, which enables the CR to determine the available spectrum holes on its own.
Spectrum sensing allows the exploitation of the under-utilized spectrum, which is considered to be an essential element in the operation of CRs.
Therefore, a great deal of effort has been devoted to deriving optimal, suboptimal, ad-hoc, and cooperative solutions to the spectrum sensing problem (see for example
\cite{
A_survey_of_spectrum_sensing_CR,
Opportunistic_Spectrum_Access_in_CR_Networks_Under_Imperfect_Spectrum_Sensing,
Relay_selection_in_CR_networks_with_interference_constrains,
Mutiuser_CR_networks_Joint_Impact_of_Direct_and_Relay_Communications,
A:SpecSensingOFM_CFO,
ED_Cooperative_Spectrum_Sensing_in_CR,
CR_GMD,
A:EqualGainCombining_for_Coop_Spec_Sensing_in_CRs,
A:Optimization_of_cooperatice_spectrum_sensing_with_ED_in_CR_networks,
A:On_the_performance_of_eigenvalue_based_Coop_Spect_Sensing_for_CR,
A:Unified_Analysis_of_Coop_Spect_Sensing_over_Composite_and_Generalized_Fading_Channels}).
However, the majority of these works ignore the imperfections associated with the RF front-end.
Such imperfections, which are encountered in the widely deployed low-cost direct-conversion radio (DCR) receivers (RXs), include in-phase (I) and quadrature-phase (Q) imbalance (IQI) \cite{Energy_Detection_under_IQI}, low-noise amplifier (LNA) nonlinearities \cite{CR_LNA} and phase noise (PHN)~\cite{ED_PHN}.
The effects of RF imperfections in general were studied in several works~\cite{
B:Schenk-book,
B:wenk2010mimo,
RF_impairments_generalized_model,
A:A_new_look_at_dual_hop_relaying_performance_limits_with_hw_impairments,
Sensitivity_of_Spectrum_Sensing_to_RF_impairments,
Cyclostationary_Sensing_of_OFDM_RF_impairments,
IQI_IRR_practical_values,
A:IQI_TX_RX_AF_Alouini,
A:IQI_in_AF_Nakagami_m,
A:OFDM_OR_IQI,
A:IQI_in_Two_Way_AF_relaying,
A:Impairments_on_AF_relaying,
C:Massive_MIMO_systems_with_HW_constrained_BS,
A:Joint_Comp_IQI_and_PHN,
ED_PHN,
A:Joint_Mitigation_of_Nonlinear_and_baseband_distortion_in_wideband_DCRs,
C:High_Dynamic_Range_RF_FEs_from_multiband_multistandar_to_CR,
C:Implementation_issues_in_spectrum_sensing_for_CR,
Likelihood_based_specrum_sensing_of_OFDM_IQI,
Effects_of_IQI_on_blind_spectrum_sensing_for_OFDMA_overlay_CR
}.
However, only recently have the impacts of RF imperfections on the spectrum sensing
capabilities of CR been investigated
\cite{
C:High_Dynamic_Range_RF_FEs_from_multiband_multistandar_to_CR,
C:Implementation_issues_in_spectrum_sensing_for_CR,
Sensitivity_of_Spectrum_Sensing_to_RF_impairments,
Cyclostationary_Sensing_of_OFDM_RF_impairments,
Likelihood_based_specrum_sensing_of_OFDM_IQI,
Effects_of_IQI_on_blind_spectrum_sensing_for_OFDMA_overlay_CR,
Energy_Detection_under_IQI,
ED_PHN}.
In particular, the importance of improved front-end linearity and sensitivity was illustrated in
\cite{C:High_Dynamic_Range_RF_FEs_from_multiband_multistandar_to_CR} and
\cite{C:Implementation_issues_in_spectrum_sensing_for_CR},
while the impacts of RF impairments in DCRs on single-channel energy and/or cyclostationary based sensing were discussed in
\cite{Sensitivity_of_Spectrum_Sensing_to_RF_impairments} and
\cite{Cyclostationary_Sensing_of_OFDM_RF_impairments}.
Furthermore, in
\cite{Likelihood_based_specrum_sensing_of_OFDM_IQI} the authors presented closed-form expressions for the detection and false alarm probabilities for the Neyman-Pearson detector, considering the spectrum sensing problem in single-channel orthogonal frequency division multiplexing (OFDM) CR RX, under the joint effect of transmitter and receiver IQI.
On the other hand, multi-channel sensing under IQI was reported in
\cite{Effects_of_IQI_on_blind_spectrum_sensing_for_OFDMA_overlay_CR},
where a three-level hypothesis blind detector was introduced.
Moreover, the impact of RF IQI on energy detection (ED) for both single-channel
and multi-channel DCRs was investigated in
\cite{Energy_Detection_under_IQI}, where it was shown that the false alarm probability in a multi-channel environment increases significantly, compared to the ideal RF RX case. Additionally, in
\cite{ED_PHN}, the authors analyzed the effect of PHN on ED, considering a multi-channel DCR and additive white Gaussian noise (AWGN) channels, whereas in
\cite{A:Spectrum_Sensing_Under_RF_Non_linearities}, the impact of third-order non-linearities on the detection and false alarm probabilities of classical and cyclostationary energy detectors with an imperfect LNA was~investigated.
In this work, we investigate the impact of the joint effects of several RF impairments, such as LNA non-linearities, PHN and IQI, on the multi-channel energy-based spectrum sensing mechanism. After assuming
flat-fading Rayleigh channels and complex Gaussian primary user (PU) transmitted signals, and approximating the joint effects of RF impairments by a complex Gaussian process (an approximation which has been validated both in theory and by experiments, see
\cite{A:IQI_in_AF_Nakagami_m} and the references therein), we derive closed-form expressions for
the probabilities of false alarm and detection. Based on these expressions, we investigate the impact of RF impairments on ED. Specifically, the contribution of this paper can be summarized as~follows:
\begin{itemize}
\item We, first, derive analytical closed-form expressions for the false alarm and detection probabilities for an ideal RF front-end ED detector, assuming flat fading Rayleigh channels and complex Gaussian transmitted signals. To the best of the authors' knowledge, this is the first time that such expressions are presented in the open technical literature, under these~assumptions.
\item Next, a signal model that describes the joint effects of all RF impairments is presented. This model is built upon an approximation of the joint effects of RF impairments by a complex Gaussian process \cite{A:IQI_in_AF_Nakagami_m} and is tractable to algebraic manipulations.
\item Analytical closed-form expressions are provided for the evaluation of false alarm and detection probabilities of multi-channel EDs constrained by RF impairments, under Rayleigh fading. Based on this framework, the joint effects of RF impairments on spectrum sensing performance are~investigated.
\item Finally, we present an analytical study of the detection capabilities in cooperative spectrum sensing scenarios, considering both ideal ED detectors and multi-channel EDs constrained by RF impairments.
\end{itemize}
The remainder of the paper is organized as follows. The system and signal model for both ideal and hardware impaired RF front-ends are described in Section \ref{sec:SSM}.
The analytical framework for evaluating the false alarm and detection probabilities, when both ideal sensing or RF imperfections are considered, are provided in Section \ref{sec:Probabilities}.
Moreover, analytical closed-form expressions for the false alarm and detection probabilities, when a cooperative spectrum sensing system with decision fusion is considered, are provided in Section \ref{sec:Cooperative_Spectrum_Sensing}.
Numerical and simulation results that illustrate the detrimental effects of RF impairments in spectrum sensing are presented in Section \ref{sec:Numerical_Results}. Finally, Section~\ref{sec:Conclusions} concludes the paper by summarizing our main findings.
\subsubsection*{Notations}
Unless otherwise stated, $(x)^{*}$ stands for the complex conjugate of $x$, whereas $\Re\left\{ x\right\} $ and $\Im\left\{ x\right\} $ represent the real and imaginary part of $x$, respectively.
The operators $E\left[\cdot\right]$ and $\left|\cdot\right|$ denote the statistical expectation and the absolute value, respectively.
The sign of a real number $x$ is returned by the operator $\sign\left(x\right)$.
The operator $\card\left(\mathcal{A}\right)$ returns the cardinality of the set $\mathcal{A}$.
$U\left(x\right)$ and $\exp\left(x\right)$ denote the unit step function and the exponential function, respectively.
The lower \cite[Eq. (8.350/1)]{B:Gra_Ryz_Book} and upper incomplete Gamma functions \cite[Eq. (8.350/2)]{B:Gra_Ryz_Book} are represented by $\gamma\left(\cdot,\cdot\right)$ and $\Gamma\left(\cdot,\cdot\right)$, respectively, while the Gamma function \cite[Eq. (8.310)]{B:Gra_Ryz_Book} is denoted by $\Gamma\left(\cdot\right)$.
Moreover, $\Gamma\left(a,x,b,\beta\right)=\int_{x}^{\infty}t^{a-1}\exp\left(-t-b t^{-\beta}\right) dt$ is the extended incomplete Gamma function defined by \cite[Eq. (6.2)]{B:chaudhry2001class}.
Finally, $\Q\left(x\right)=\frac{1}{\sqrt{2\pi}}\int_{x}^{\infty}\exp\left(-t^{2}/2\right)dt$ is the Gaussian Q-function.
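For numerical evaluations, the Gaussian Q-function defined above can be computed through the complementary error function available in the standard library; a minimal sketch:

```python
import math

def gaussian_q(x):
    """Gaussian Q-function, Q(x) = (1/sqrt(2*pi)) * integral_x^inf exp(-t^2/2) dt,
    evaluated via the identity Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))
```

The identity follows directly from the substitution $t=\sqrt{2}\,u$ in the defining integral.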
\section{System and signal model}\label{sec:SSM}
In this section, we briefly present the ideal signal model, which is referred to as ideal RF front-end in what follows.
Building upon that, we present the practical signal model, where the RX is considered to suffer from RF imperfections, such as LNA nonlinearities, PHN and IQI.
Note that it is assumed that $K$ RF channels are down-converted to baseband using the wideband direct-conversion principle, which is referred to as multi-channel down-conversion~\cite{Direct_conversion}.
\subsection{Ideal RF front-end\label{sub:Ideal-RF-front-end}}
The two hypotheses, namely the absence or presence of the PU, are denoted by the parameter $\theta_{k}\in\left\{ 0,1\right\}$. Suppose the $n$-th sample of the PU signal, $s\left(n\right)$, is conveyed over a flat-fading wireless channel with channel gain $h\left(n\right)$ and additive noise $w\left(n\right)$. The received wideband RF signal is passed through various RF front-end stages, including filtering, amplification, analog I/Q demodulation (down-conversion) to baseband, and sampling. The wideband channel after sampling is assumed to have a bandwidth of $W$ and to contain $K$ channels, each having bandwidth $W_{ch}=W_{sb}+W_{gb}$,
where $W_{sb}$ and $W_{gb}$ are the signal band and total guard band bandwidth within this channel, respectively. Additionally, it is assumed that the sampling is performed with rate $W$. Note that the rate of the signal is reduced by a factor of $L=W/W_{sb}\geq K$, where for simplicity we assume $L\in\mathbb{Z}$.
Under the ideal RF front-end assumption, after the selection filter, the $n$-th sample of the baseband equivalent received signal vector for the $k^{\text{th}}$ channel
($k\in S=\left\{ -K/2,\ldots,-1,1,\ldots,K/2\right\} $) is given by
\begin{align}
r_{k}\left(n\right) & =\Re\left\{ r_{k}\left(n\right)\right\} +j\Im\left\{ r_{k}\left(n\right)\right\}
=\theta_{k}h_{k}\left(n\right)s_{k}\left(n\right)+w_{k}\left(n\right),\label{Rx_ideal_signal_model}
\end{align}
where $h_{k}$, $s_{k}$ and $w_{k}$ are zero-mean circular symmetric complex white Gaussian (CSCWG) processes with variances $\sigma_{h}^{2}$, $\sigma_{s}^{2}$ and $\sigma_{w}^{2}$, respectively. Furthermore,
\begin{align}
\Re & \left\{ r_{k}\left(n\right)\right\} =\theta_{k}\Re\left\{ h_{k}\left(n\right)\right\} \Re\left\{ s_{k}\left(n\right)\right\}
-\theta_{k}\Im\left\{ h_{k}\left(n\right)\right\} \Im\left\{ s_{k}\left(n\right)\right\} +\Re\left\{ w_{k}\left(n\right)\right\} ,\\
\Im & \left\{ r_{k}\left(n\right)\right\} =\theta_{k}\Im\left\{ h_{k}\left(n\right)\right\} \Re\left\{ s_{k}\left(n\right)\right\}
+\theta_{k}\Re\left\{ h_{k}\left(n\right)\right\} \Im\left\{ s_{k}\left(n\right)\right\} +\Im\left\{ w_{k}\left(n\right)\right\} .
\end{align}
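The ideal received-signal model in \eqref{Rx_ideal_signal_model} can be simulated directly; the following sketch (with illustrative unit variances, which are not fixed by the text) draws the three CSCWG processes and forms $r_k$ under either hypothesis:

```python
import numpy as np

def cscg(sigma2, n, rng):
    """Zero-mean circularly-symmetric complex white Gaussian samples of variance sigma2."""
    return np.sqrt(sigma2 / 2.0) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def ideal_received(theta_k, n, sigma_h2=1.0, sigma_s2=1.0, sigma_w2=1.0, seed=None):
    """n samples of r_k = theta_k * h_k * s_k + w_k (ideal RF front-end)."""
    rng = np.random.default_rng(seed)
    h = cscg(sigma_h2, n, rng)
    s = cscg(sigma_s2, n, rng)
    w = cscg(sigma_w2, n, rng)
    return theta_k * h * s + w
```

Under $\theta_k=0$ the average received power is $\sigma_w^2$, while under $\theta_k=1$ it is $\sigma_h^2\sigma_s^2+\sigma_w^2$; this power difference is what an energy detector thresholds.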
\subsection{Non-ideal RF front-end\label{sub:Non-ideal-RF-front-end}}
In the case of a non-ideal RF front-end, the $n$-th sample of the impaired baseband equivalent received signal vector for the $k^{\text{th}}$ channel is
given by~\cite{Energy_Detection_under_IQI} and~\cite{B:Schenk-book}
\begin{align}
r_{k}\left(n\right) & =\Re\left\{ r_{k}\left(n\right)\right\} +j\Im\left\{ r_{k}\left(n\right)\right\}
=\xi_{k}\left(n\right)\theta_{k}h_{k}\left(n\right)s_{k}\left(n\right)+\eta_{k}\left(n\right)+w_{k}\left(n\right),\label{Rx_signal_model}
\end{align}
with
\begin{align}
\Re\left\{ r_{k}\left(n\right)\right\} =\theta_{k}\Re\left\{ h_{k}\left(n\right)\xi_{k}\right\} \Re\left\{ s_{k}\left(n\right)\right\}
-\theta_{k}\Im\left\{ h_{k}\left(n\right)\xi_{k}\right\} \Im\left\{ s_{k}\left(n\right)\right\}
+\Re\left\{ \eta_{k}\left(n\right)+w_{k}\left(n\right)\right\} ,
\end{align}
and
\begin{align}
\Im\left\{ r_{k}\left(n\right)\right\} =\theta_{k}\Im\left\{ h_{k}\left(n\right)\xi_{k}\right\} \Re\left\{ s_{k}\left(n\right)\right\}
+\theta_{k}\Re\left\{ h_{k}\left(n\right)\xi_{k}\right\} \Im\left\{ s_{k}\left(n\right)\right\}
+\Im\left\{ \eta_{k}\left(n\right)+w_{k}\left(n\right)\right\} ,
\end{align}
where $\xi_{k}$ denotes the amplitude and phase rotation due to PHN caused by common phase error (CPE), LNA nonlinearities and IQI,
and is given by
\begin{equation}
\xi_{k}=\gamma_{0}K_{1}\alpha,\label{eq:ksi}
\end{equation}
while $\eta_{k}$ denotes the distortion noise from impairments in the RX, and specifically due to PHN caused by inter carrier interference (ICI), IQI and non-linear distortion noise, and is given by
\begin{align}
\eta_{k}\left(n\right) & =K_{1}\left(\gamma_{0}e_{k}\left(n\right)+\psi_{k}\left(n\right)\right)
+K_{2}\left(\gamma_{0}^{*}\left(\alpha \theta_{-k} h_{-k}^{*}\left(n\right)s_{-k}^{*}\left(n\right)+e_{-k}^{*}\left(n\right)\right)
+\psi_{-k}^{*}\left(n\right)\right).
\label{eta}
\end{align}
After denoting as $\Theta_{k}=\left\{ \theta_{k-1},\theta_{k+1}\right\} $
and $H_{k}=\left\{ h_{k-1},h_{k+1}\right\} $, this distortion noise
term can be modeled as
$\eta_{k}\sim\mathcal{CN}\left(0,\sigma_{\eta_{k}}^{2}\right),$
with
\begin{align}
\sigma_{\eta_{k}}^{2}
=\left|\gamma_{0}\right|^{2}\left(\left|K_{1}\right|^{2}\sigma_{e,k}^{2}+\left|K_{2}\right|^{2}\sigma_{e,-k}^{2}\right)
&+\left|K_{1}\right|^{2}\sigma_{\psi\left|H_{k},\Theta_{k}\right.}^{2}
+\left|K_{2}\right|^{2}\sigma_{\psi\left|H_{-k},\Theta_{-k}\right.}^{2}
\nonumber \\
&+\left|\gamma_{0}\right|^{2}\left|K_{2}\right|^{2}\left|\alpha\right|^{2}\theta_{-k}\left|h_{-k}\right|^{2}\sigma_{s}^{2}.
\label{sigma_eta}
\end{align}
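The aggregate distortion variance $\sigma_{\eta_k}^2$ is simply a weighted sum of the individual impairment contributions, and can be composed numerically as in the sketch below (all arguments are the quantities defined above; `mk` denotes the mirror channel $-k$):

```python
def distortion_noise_variance(gamma0, k1, k2, sigma_e_k2, sigma_e_mk2,
                              sigma_psi_k2, sigma_psi_mk2,
                              alpha, theta_mk, h_mk_abs2, sigma_s2):
    """Variance of the aggregate distortion noise eta_k: nonlinear distortion
    (own and mirror channel), PHN-induced ICI leakage, and mirror-channel
    IQI interference."""
    return (abs(gamma0) ** 2 * (abs(k1) ** 2 * sigma_e_k2
                                + abs(k2) ** 2 * sigma_e_mk2)
            + abs(k1) ** 2 * sigma_psi_k2
            + abs(k2) ** 2 * sigma_psi_mk2
            + abs(gamma0) ** 2 * abs(k2) ** 2 * abs(alpha) ** 2
            * theta_mk * h_mk_abs2 * sigma_s2)
```

For an ideal front-end ($\gamma_0=1$, $K_1=1$, $K_2=0$, and vanishing distortion and ICI terms) the variance collapses to zero, recovering the ideal model of Section \ref{sub:Ideal-RF-front-end}.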
It should be noted that this model has been supported and validated
by many theoretical investigations and measurements \cite{MIMO_transmission_with_residual_transmit_RF_impairments,
A:IQI_in_AF_Nakagami_m,
C:Massive_MIMO_systems_with_HW_constrained_BS,
B:wenk2010mimo,
A_theoretical_characterization_of_nonlinear_distortion_effects_in_OFDM_systems,
Impairments_on_AF_relaying,
Experimental_Investigation_of_TDD_Reciprocity_Based_ZF,
RF_impairments_generalized_model}.
Next, we describe how the various parameters in (\ref{eq:ksi}), (\ref{eta}) and (\ref{sigma_eta}) stem from the imperfections associated with the RF front-end.
\subsubsection*{LNA Nonlinearities}
The parameters $\alpha$ and $e_{k}$ represent the nonlinearity parameters, which model the amplitude/phase distortion and the nonlinear distortion noise, respectively.
According to Bussgang's theorem \cite{papoulis}, $e_{k}$ is a zero-mean Gaussian error term with variance $\sigma_{e_{k}}^{2}$. Considering an ideal clipping power amplifier (PA), the amplification
factor $\alpha$ and the variance $\sigma_{e_{k}}^{2}$, are given by
\begin{equation}
\alpha=1-\exp\left(-\IBO\right)+\sqrt{\pi\IBO}\,\Q\left(\sqrt{2\IBO}\right),
\end{equation}
\begin{equation}
\sigma_{e_{k}}^{2}=\sigma_{s}^{2}\left(1-\alpha^{2}-\exp\left(-\IBO\right)\right),
\end{equation}
where $\IBO=A_{o}^{2}/\sigma_{s}^{2}$ denotes the input back-off
factor and $A_{o}$ is the PA's clipping level.
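Both quantities are straightforward to evaluate; the sketch below uses the equivalent complementary-error-function form $\sqrt{\pi\,\mathrm{IBO}}\,Q(\sqrt{2\,\mathrm{IBO}})=\tfrac{1}{2}\sqrt{\pi\,\mathrm{IBO}}\,\mathrm{erfc}(\sqrt{\mathrm{IBO}})$ and exposes the expected limiting behaviour: for large IBO the amplifier becomes transparent ($\alpha\to1$, $\sigma_{e_k}^2\to0$).

```python
import math

def clipping_pa_alpha(ibo):
    """Bussgang amplification factor of an ideal clipping PA,
    alpha = 1 - exp(-IBO) + 0.5*sqrt(pi*IBO)*erfc(sqrt(IBO)),
    where IBO = A0^2 / sigma_s^2 is the input back-off."""
    return (1.0 - math.exp(-ibo)
            + 0.5 * math.sqrt(math.pi * ibo) * math.erfc(math.sqrt(ibo)))

def clipping_pa_sigma_e2(ibo, sigma_s2=1.0):
    """Variance of the nonlinear distortion noise e_k for the same PA:
    sigma_e^2 = sigma_s^2 * (1 - alpha^2 - exp(-IBO))."""
    a = clipping_pa_alpha(ibo)
    return sigma_s2 * (1.0 - a * a - math.exp(-ibo))
```

For instance, at $\IBO=1$ the gain is $\alpha\approx0.77$ with a small but non-negligible distortion variance, while at $\IBO=50$ clipping is essentially never active.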
Furthermore, if a polynomial model is employed to describe the effects of nonlinearities, the amplification factor $\alpha$ and the variance $\sigma_{e_{k}}^{2}$ are given by
\begin{align}
\alpha = \sum_{n=0}^{M-1}\beta_{n+1} 2^{-n/2} \sigma_{s}^{2} \Gamma\left(1+n/2\right), \\
\sigma_{e_{k}}^{2} = \sum_{n=2}^{2M}\gamma_{n} 2^{-n/2} \sigma_{s}^{2} \Gamma\left(1+n/2\right) - \left|\alpha\right|^{2}\sigma_{s}^{2},
\end{align}
where
\begin{align}
\gamma_{n}=\sum_{m=1}^{n-1}\widehat{\beta}_{m}\widehat{\beta}_{n-m}^{*},
\text{ and }
\widehat{\beta}_{m}=\left\{\begin{array}{c l}\beta_{m}, & 1\leq m \leq M+1 \\ 0, & m>M+1. \end{array} \right.
\end{align}
\subsubsection*{I/Q Imbalance}
The IQI coefficients $K_{1}$ and $K_{2}$ are given by
\begin{align}
K_{1}=\frac{1+\epsilon e^{-j\theta}}{2}\text{ and }K_{2}=\frac{1-\epsilon e^{j\theta}}{2},
\end{align}
where $\epsilon$ and $\theta$ denote the amplitude and phase mismatch, respectively. It is noted that for perfect I/Q matching, these imbalance parameters become $\epsilon=1$ and $\theta=0$; thus, in this case, $K_{1}=1$ and $K_{2}=0$. The coefficients $K_{1}$ and $K_{2}$ are related through
$K_{1}=1-K_{2}^{*}$
and the image rejection ratio (IRR), which determines the amount of attenuation of the image frequency band, namely
$\rm{IRR}=\left|{K_{1}}/{K_{2}}\right|^{2}$.
With practical analog front-end electronics, $\rm{IRR}$ is typically in the range of $20$--$40$~$\rm{dB}$~\cite{Direct_conversion,IQI_IRR_practical_values,Boul1508:Effects}.
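The mapping from the mismatch parameters to $K_1$, $K_2$ and the IRR can be sketched as follows (the chosen mismatch values are illustrative, not taken from the text):

```python
import cmath
import math

def iqi_coefficients(eps, theta):
    """IQI coefficients K1 = (1 + eps*exp(-j*theta))/2 and
    K2 = (1 - eps*exp(j*theta))/2, for amplitude mismatch eps
    and phase mismatch theta (in radians)."""
    k1 = (1.0 + eps * cmath.exp(-1j * theta)) / 2.0
    k2 = (1.0 - eps * cmath.exp(1j * theta)) / 2.0
    return k1, k2

def irr_db(eps, theta):
    """Image rejection ratio, IRR = |K1/K2|^2, in dB."""
    k1, k2 = iqi_coefficients(eps, theta)
    return 10.0 * math.log10(abs(k1) ** 2 / abs(k2) ** 2)
```

For example, $\epsilon=0.95$ and $\theta=0.05\,\mathrm{rad}$ yield an IRR of roughly $29\,\rm{dB}$, inside the practical $20$--$40\,\rm{dB}$ range quoted above, and the identity $K_1=1-K_2^{*}$ holds by construction.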
\subsubsection*{Phase noise}
The parameter $\gamma_{0}$ stands for the CPE, which is equal for all channels, and $\psi_{k}$ represents the ICI from all other neighboring channels due to spectral regrowth caused by PHN.
Notice that, since typical $3\text{ dB}$ bandwidth values for the oscillator process are in the order of a few tens or hundreds of Hz, with a rapidly decaying spectrum after this point (approximately $10\text{ dB}/\text{decade}$), while the channel bandwidth is typically a few tens or hundreds of $\rm{kHz}$, the only effective interference is due to leakage from the immediate neighbors \cite{ED_PHN}. Consequently, the ICI term can be approximated as \cite{ED_PHN}
\begin{align}
\psi_{k}\left(n\right) & \approx\theta_{k-1}\gamma\left(n\right)h_{k-1}\left(n\right)s_{k-1}\left(n\right)
+\theta_{k+1}\gamma\left(n\right)h_{k+1}\left(n\right)s_{k+1}\left(n\right),
\end{align}
with $\gamma\left(n\right)=\exp\left({j\phi\left(n\right)}\right)$ and $\phi\left(n\right)$ being a discrete Brownian error process, i.e.,
$\phi\left(n\right)=\phi\left(n-1\right)+\epsilon\left(n\right)=\sum_{m=1}^{n}\epsilon\left(m\right),$
where $\epsilon\left(n\right)$ is a zero mean real Gaussian variable with variance
$\sigma_{\epsilon}^{2}=\frac{4\pi\beta}{W}$
and $\beta$ being the $3\text{ }\rm{dB}$ bandwidth of the local oscillator~process.
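A sample path of this Brownian (random-walk) phase-noise process can be generated as in the sketch below; $\beta$ and $W$ are the oscillator $3\,\rm{dB}$ bandwidth and the sampling rate, and the values used in the usage example are illustrative only.

```python
import numpy as np

def phase_noise_path(n, beta, W, seed=None):
    """Discrete Brownian phase noise phi(n) = phi(n-1) + eps(n), with
    eps(n) ~ N(0, 4*pi*beta/W). Returns phi(1), ..., phi(n)."""
    rng = np.random.default_rng(seed)
    sigma_eps = np.sqrt(4.0 * np.pi * beta / W)
    return np.cumsum(sigma_eps * rng.standard_normal(n))

# Illustrative parameters: beta = 100 Hz oscillator linewidth, W = 1 MHz sampling
phi = phase_noise_path(10000, beta=100.0, W=1e6, seed=0)
gamma = np.exp(1j * phi)  # the multiplicative PHN process gamma(n)
```

The variance of $\phi(n)$ grows linearly with $n$, which is why the ICI leakage depends on the oscillator linewidth $\beta$ relative to the sampling rate $W$.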
The interference term $\psi_{k}$ in \eqref{eta} might have zero or non-zero contribution depending on the existence of PU signals in the neighboring channels. In general, this term is typically non-white and, strictly speaking, cannot be modeled by a Gaussian process. However, for practical values of the $3$ $\rm{dB}$ bandwidth of the oscillator process, its influence can be modeled as a zero-mean Gaussian process with variance $\sigma_{\left.\psi_{k}\right|\left\{ H_{k},\Theta_{k}\right\} }^{2}$ given by
\begin{align}
\sigma_{\left.\psi\right|\left\{ H_{k},\Theta_{k}\right\} }^{2}
=\theta_{k-1}A_{k-1}\left|h_{k-1}\left(n\right)\right|^{2}\sigma_{s}^{2}
+\theta_{k+1}A_{k+1}\left|h_{k+1}\left(n\right)\right|^{2}\sigma_{s}^{2},
\label{sigma_psi}
\end{align}
where
\begin{equation}
A_{k-1}=\frac{\left|I\left(f_{k-1}-f_{k}+f_{\text{cut-off}}\right)-I\left(f_{k-1}-f_{k}-f_{\text{cut-off}}\right)\right|}{2\pi f_{\text{cut-off}}},
\label{A_kplyn1}
\end{equation}
\begin{equation}
A_{k+1}=\frac{\left|I\left(f_{k+1}-f_{k}+f_{\text{cut-off}}\right)-I\left(f_{k+1}-f_{k}-f_{\text{cut-off}}\right)\right|}{2\pi f_{\text{cut-off}}},
\label{A_ksyn1}
\end{equation}
and $f_{k}$ is the centered normalized frequency of the $k^{th}$ channel, i.e.,
$f_{k}=\sign\left(k\right)\frac{2\left|k\right|-1}{2K}$
and $f_{\text{cut-off}}$ is the normalized cut-off frequency of the $k^{th}$ channel, which can be obtained~by
$f_{\text{cut-off}}=\frac{W_{sb}}{2W}.$
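The normalized channel-centre and cut-off frequencies defined above map directly to code:

```python
def channel_center_frequency(k, K):
    """Normalized centre frequency f_k = sign(k) * (2|k| - 1) / (2K), k != 0."""
    sign = 1 if k > 0 else -1
    return sign * (2 * abs(k) - 1) / (2 * K)

def cutoff_frequency(W_sb, W):
    """Normalized cut-off frequency f_cutoff = W_sb / (2W)."""
    return W_sb / (2.0 * W)
```

For example, with $K=4$ channels the centre frequencies are $\pm1/8$ and $\pm3/8$ of the sampling rate.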
Furthermore,
\begin{align}
I\left(f\right)= &
\left(f_{\text{cut-off}}-f\right)\tan^{-1}\hspace{-0.1cm}\left(\delta\tan\left(\pi\left(f_{\text{cut-off}}-f\right)\right)\right)
+\left(f_{\text{cut-off}}+f\right)\tan^{-1}\hspace{-0.1cm}\left(\delta\tan\left(-\pi\left(f_{\text{cut-off}}+f\right)\right)\right)
\nonumber \\ &
- \frac{1}{\delta}\left(\left(f_{\text{cut-off}}+f\right)\cot\left(\pi\left(f_{\text{cut-off}}+f\right)\right)\right.
-\left.\left(f_{\text{cut-off}}-f\right)\cot\left(\pi\left(f_{\text{cut-off}}-f\right)\right)\right)\nonumber \\&
+\frac{1}{\pi\delta}\left(\log\left(\left|\sin\left(\pi\left(f_{\text{cut-off}}+f\right)\right)\right|\right)\right.
+\left.\log\left(\left|\sin\left(\pi\left(f_{\text{cut-off}}-f\right)\right)\right|\right)\right),
\end{align}
with
$\delta=\frac{\exp\left({-2\pi\beta/W}\right)+1}{\exp\left({-2\pi\beta/W}\right)-1}.$
From Eqs. (\ref{A_kplyn1}) and (\ref{A_ksyn1}), it follows that
$A_{k-1}=A_{k+1}$.
\subsubsection*{Joint effect of RF impairments}
Here, we discuss the joint impact of RF imperfections on the spectrum of the down-converted received signal. Comparing Eq. \eqref{Rx_signal_model} with Eq. \eqref{Rx_ideal_signal_model}, we observe that the RF imperfections result not only in amplitude/phase distortion, but also in neighbor and mirror-channel interference, as illustrated in Fig.~\ref{fig:RF_imp_effects}.
According to \eqref{eq:ksi} and \eqref{sigma_eta}, LNA nonlinearities cause amplitude/phase distortion and an additive nonlinear distortion noise, whereas, based on \eqref{sigma_psi}, PHN causes interference to the received baseband signal at the $k^\text{th}$ channel, due to the received baseband signals at the neighbor channels $k-1$ and $k+1$.
Moreover, based on (\ref{sigma_eta}), the joint effects of PHN and IQI, described by the terms $\left|K_{1}\right|^{2}\sigma_{\psi\left|H_{k},\Theta_{k}\right.}^{2}$, $\left|K_{2}\right|^{2}\sigma_{\psi\left|H_{-k},\Theta_{-k}\right.}^{2}$ and $\left|\gamma_{0}\right|^{2}\left|K_{2}\right|^{2}\left|\alpha\right|^{2}\theta_{-k}\left|h_{-k}\right|^{2}\sigma_{s}^{2}$, result in interference to the signal at the $k^\text{th}$ ($k\in\{-\frac{K}{2}+1,\cdots,\frac{K}{2}-1\}$) channel from the signals at the channels $-k-1$, $-k$, $-k+1$, $k-1$ and $k+1$.
Note that if $k=-\frac{K}{2}$ or $k=\frac{K}{2}$, then PHN and IQI cause interference to the signal at the $k^\text{th}$ channel due to the signals at the channels $-k$, $-k+1$ and $k-1$.
Consequently, in this case, the terms that refer to the signals at the channels $-k-1$ and $k+1$ should be omitted.
\begin{figure}
\centering\includegraphics[width=0.7\linewidth,trim=0 0 0 0,clip=false]{images/rf_imperfections_effects.eps}
\caption{Spectra of the received signal:
(a) before LNA (passband RF signal),
(b) after LNA (passband RF signal),
(c) after down-conversion (baseband signal), when local oscillator's PHN is considered to be the only RF imperfection,
(d) after down-conversion (baseband signal), when IQI is considered to be the only RF imperfection, (e) after down-conversion (baseband signal), the joint effect of LNA nonlinearities, PHN and IQI.}
\label{fig:RF_imp_effects}
\end{figure}
Furthermore, the joint effects of LNA nonlinearities and IQI are described by the first and last terms in \eqref{sigma_eta}, i.e., $\left|K_{1}\right|^{2}\sigma_{e,k}^{2}+\left|K_{2}\right|^{2}\sigma_{e,-k}^{2}$ and $\left|\gamma_{0}\right|^{2}\left|K_{2}\right|^{2}\left|\alpha\right|^{2}\theta_{-k}\left|h_{-k}\right|^{2}\sigma_{s}^{2}$, respectively, and result in additive distortion noises and mirror-channel interference.
Finally, the amplitude and phase distortion caused by the joint effects of all RF imperfections are modeled by the parameter $\xi$ described in~\eqref{eq:ksi}.
\section{False Alarm/Detection Probabilities for Channel Detection\label{sec:Probabilities}}
In the classical ED, the energy of the received signals is used to determine whether a channel is idle or busy. Based on the signal model described in Section \ref{sec:SSM}, the ED calculates the test statistic for the $k^{\text{th}}$ channel as
\begin{align}
T_{k} & =\frac{1}{N_{s}}\sum_{n=0}^{N_{s}-1}\left|r_{k}\left(n\right)\right|^{2}
=\frac{1}{N_{s}}\sum_{n=0}^{N_{s}-1}\left(\Re\left\{ r_{k}\left(n\right)\right\} ^{2}+\Im\left\{ r_{k}\left(n\right)\right\} ^{2}\right),\label{ED_classic}
\end{align}
where $N_{s}$ is the number of complex samples used for sensing. This test statistic is compared against a threshold $\gamma_{th}\left(k\right)$ to yield the sensing decision,
i.e., the ED decides that the channel $k$ is busy if $T_{k}>\gamma_{th}\left(k\right)$ or idle~otherwise.
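The classical ED described above is straightforward to implement. A minimal Python sketch follows; the noise-only sample generation is purely illustrative.

```python
import math
import random

def energy_statistic(samples):
    # T_k = (1/N_s) * sum_n |r_k(n)|^2 over the N_s complex sensing samples
    return sum(z.real ** 2 + z.imag ** 2 for z in samples) / len(samples)

def ed_decide(samples, gamma_th):
    # Classical ED decision: 'busy' (True) iff T_k exceeds the threshold
    return energy_statistic(samples) > gamma_th

# Illustrative noise-only samples r_k(n) ~ CN(0, sigma_w^2) with sigma_w^2 = 1
rng = random.Random(0)
noise = [complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))
         for _ in range(10000)]
```

With noise only, $E\left[T_{k}\right]=\sigma_{w}^{2}=1$, so a threshold above $1$ keeps false alarms rare for large $N_s$.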
\subsection{Ideal RF front-end}
Based on the signal model presented in Section \ref{sub:Ideal-RF-front-end}
and taking into consideration that
\begin{align}
\sigma^{2} & =E\left[\Re\left\{ r_{k}\right\} ^{2}\right]=E\left[\Im\left\{ r_{k}\right\} ^{2}\right]
=\theta_{k}\left(\Re\left\{ h_{k}\right\} ^{2}+\Im\left\{ h_{k}\right\} ^{2}\right)\frac{\sigma_{s}^{2}}{2}+\frac{\sigma_{w}^{2}}{2},
\end{align}
and $E\left[\Re\left\{ r_{k}\right\} \Im\left\{ r_{k}\right\} \right]=0$
for a given channel realization $h_{k}$ and channel occupation $\theta_{k}$,
the received energy follows a chi-square distribution with $2N_{s}$
degrees of freedom and cumulative distribution function (CDF) given
by
\begin{equation}
F_{T_{k}}\left(x\left|h_{k},\theta_{k}\right.\right)=\frac{\gamma\left(N_{s},\frac{N_{s}x}{2\sigma^{2}}\right)}{\Gamma\left(N_{s}\right)}.\label{eq:ideal_cond}
\end{equation}
The following theorem returns a closed-form expression for the CDF
of the test statistics assuming that the channel is busy.
\begin{thm}
\label{thm:CDF_ideal}The CDF of the energy statistics, assuming an ideal RF front-end and a busy channel, can be evaluated by
\begin{align}
& F_{T_{k}}\left(x\left|\theta_{k}=1\right.\right)=
1- \exp\left(\frac{\sigma_{w}^2}{\sigma_h^2 \sigma_s^2}\right)
\sum_{k=0}^{N_s-1}\frac{1}{k!} \left(\frac{N_s x}{\sigma_h^2 \sigma_s^2}\right)^k \Gamma\left(-k+1,\frac{\sigma_w^2}{\sigma_h^2 \sigma_s^2},\frac{N_s x}{\sigma_h^2 \sigma_s^2},1\right).
\label{CDF_theta_1_ideal}
\end{align}
\end{thm}
\begin{IEEEproof}
Since $h_{k}\sim\mathcal{CN}\left(0,\sigma_{h}^{2}\right)$, it follows that the parameter $\sigma^{2}$ follows a shifted exponential distribution with probability density function (PDF) given~by
\begin{equation}
f_{\sigma^{2}}\left(x\left|\theta_{k}=1\right.\right)=\frac{2\exp\left(\frac{\sigma_{w}^{2}}{\sigma_{s}^{2}\sigma_{h}^{2}}\right)}{\sigma_{s}^{2}\sigma_{h}^{2}}\exp\left(-\frac{2x}{\sigma_{s}^{2}\sigma_{h}^{2}}\right),
\end{equation}
with $x\in\left[\frac{\sigma_{w}^{2}}{2},\infty\right)$. Hence, the CDF, unconditioned with respect to the channel realization, can be expressed as
\begin{align}
F_{T_{k}}\left(x\left|\theta_{k}=1\right.\right)
=\frac{1}{\Gamma\left(N_{s}\right)}\frac{2\exp\left(\frac{\sigma_{w}^{2}}{\sigma_{s}^{2}\sigma_{h}^{2}}\right)}{\sigma_{s}^{2}\sigma_{h}^{2}}
\int_{\frac{\sigma_{w}^{2}}{2}}^{\infty}\gamma\left(N_{s},\frac{N_{s}x}{2y}\right)\exp\left(-\frac{2y}{\sigma_{h}^{2}\sigma_{s}^{2}}\right)dy,
\end{align}
which is equivalent to
\begin{align}
F_{T_{k}}\left(x\left|\theta_{k}=1\right.\right)
&
=\frac{1}{\Gamma\left(N_{s}\right)}\frac{2\exp\left(\frac{\sigma_{w}^{2}}{\sigma_{s}^{2}\sigma_{h}^{2}}\right)}{\sigma_{s}^{2}\sigma_{h}^{2}}
\int_{\frac{\sigma_{w}^{2}}{2}}^{\infty}\Gamma\left(N_{s}\right)\exp\left(-\frac{2y}{\sigma_{h}^{2}\sigma_{s}^{2}}\right)dy
\nonumber \\ &
- \frac{1}{\Gamma\left(N_{s}\right)}\frac{2\exp\left(\frac{\sigma_{w}^{2}}{\sigma_{s}^{2}\sigma_{h}^{2}}\right)}{\sigma_{s}^{2}\sigma_{h}^{2}}
\int_{\frac{\sigma_{w}^{2}}{2}}^{\infty}\Gamma\left(N_{s},\frac{N_{s}x}{2y}\right)\exp\left(-\frac{2y}{\sigma_{h}^{2}\sigma_{s}^{2}}\right)dy,
\end{align}
or
\begin{align}
F_{T_{k}}\left(x\left|\theta_{k}=1\right.\right)
= 1 - \frac{1}{\Gamma\left(N_{s}\right)}\frac{2\exp\left(\frac{\sigma_{w}^{2}}{\sigma_{s}^{2}\sigma_{h}^{2}}\right)}{\sigma_{s}^{2}\sigma_{h}^{2}}
\int_{\frac{\sigma_{w}^{2}}{2}}^{\infty}\Gamma\left(N_{s},\frac{N_{s}x}{2y}\right)\exp\left(-\frac{2y}{\sigma_{h}^{2}\sigma_{s}^{2}}\right)dy .
\label{Eq:F_T_k_th_1_proof}
\end{align}
Since $N_s$ is a positive integer, the upper incomplete Gamma function can be written as a finite sum \cite[Eq. (8.352/2)]{B:Gra_Ryz_Book}, and hence \eqref{Eq:F_T_k_th_1_proof} can be re-written as
\begin{align}
F_{T_{k}}\left(x\left|\theta_{k}=1\right.\right)
= 1 - \frac{2\exp\left(\frac{\sigma_{w}^{2}}{\sigma_{s}^{2}\sigma_{h}^{2}}\right)}{\sigma_{s}^{2}\sigma_{h}^{2}}
\sum_{k=0}^{N_{s}-1} \int_{\frac{\sigma_{w}^{2}}{2}}^{\infty}\frac{1}{k!} \left(\frac{N_{s} x}{2 y}\right)^{k} \exp\left(-\frac{N_{s} x}{2 y}-\frac{2y}{\sigma_{h}^{2}\sigma_{s}^{2}}\right)dy .
\label{Eq:F_T_k_th_2_proof}
\end{align}
After some algebraic manipulations and using \cite[Eq. (6.2)]{B:chaudhry2001class}, (\ref{Eq:F_T_k_th_2_proof}) can be written as in (\ref{CDF_theta_1_ideal}). This concludes
the proof.
\end{IEEEproof}
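The closed form in \eqref{CDF_theta_1_ideal} relies on the extended incomplete gamma function of \cite{B:chaudhry2001class}, which is not available in standard numerical libraries; the result can, however, be cross-checked without it. The Python sketch below (illustrative only; all variances set to $1$) evaluates the averaging integral from the proof by simple midpoint quadrature and compares it against a Monte-Carlo simulation of the test statistic.

```python
import math
import random

def reg_lower_gamma(n, z):
    # Regularized lower incomplete gamma P(n, z) for positive integer n:
    # P(n, z) = 1 - exp(-z) * sum_{k=0}^{n-1} z^k / k!
    s, term = 0.0, 1.0
    for k in range(n):
        s += term
        term *= z / (k + 1)
    return 1.0 - math.exp(-z) * s

def cdf_busy(x, Ns=5, sig_h2=1.0, sig_s2=1.0, sig_w2=1.0,
             steps=20000, ymax=50.0):
    # Quadrature of the averaging integral in the proof:
    # F(x) = int_{sig_w^2/2}^inf P(Ns, Ns*x/(2y)) f_{sigma^2}(y) dy
    c = sig_s2 * sig_h2
    y0 = sig_w2 / 2.0
    h = (ymax - y0) / steps
    total = 0.0
    for i in range(steps):
        y = y0 + (i + 0.5) * h
        pdf = (2.0 / c) * math.exp(sig_w2 / c - 2.0 * y / c)  # shifted exp.
        total += reg_lower_gamma(Ns, Ns * x / (2.0 * y)) * pdf * h
    return total

def mc_cdf(x, Ns=5, sig_h2=1.0, sig_s2=1.0, sig_w2=1.0,
           trials=100000, seed=7):
    # Monte Carlo: given sigma^2, T_k = (2 sigma^2 / Ns) * Gamma(Ns, 1),
    # which reproduces the conditional chi-square CDF of the test statistic
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        h2 = -sig_h2 * math.log(1.0 - rng.random())   # |h_k|^2 ~ Exp(sig_h2)
        sigma2 = sig_s2 * h2 / 2.0 + sig_w2 / 2.0
        g = sum(-math.log(1.0 - rng.random()) for _ in range(Ns))
        hits += (2.0 * sigma2 * g / Ns) <= x
    return hits / trials
```

Agreement between the two estimates (to within Monte-Carlo noise) provides a numerical check of the derivation.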
Based on the above analysis, the false alarm probability for the ideal RX can be obtained~by
\begin{align}
{\cal P}_{fa}(\gamma) & = \Pr\left(T_{k}>\gamma\left|\theta_{k}=0 \right.\right)
=\frac{\Gamma\left(N_{s},\frac{N_{s}\gamma}{\sigma_{w}^{2}}\right)}{\Gamma\left(N_{s}\right)},
\label{Eq:P_FA_Ideal_RF}
\end{align}
while the probability of detection can be calculated~as
\begin{align}
{\cal P}_{d}(\gamma)& \hspace{-0.1cm}=\hspace{-0.1cm} \Pr\left(T_{k}\hspace{-0.1cm} >\hspace{-0.1cm} \gamma\left|\theta_{k}\hspace{-0.1cm}=\hspace{-0.1cm}1 \right.\right)
\hspace{-0.1cm}=\hspace{-0.1cm}{\exp\left(\frac{\sigma_{w}^2}{\sigma_h^2 \sigma_s^2}\right)} \hspace{-0.1cm} \sum_{k=0}^{N_s-1}\hspace{-0.1cm}\frac{1}{k!} \hspace{-0.1cm} \left(\frac{N_s \gamma}{\sigma_h^2 \sigma_s^2}\right)^k \Gamma\left(-k+1,\frac{\sigma_w^2}{\sigma_h^2 \sigma_s^2},\frac{N_s \gamma}{\sigma_h^2 \sigma_s^2},1\right).
\label{Eq:P_D_Ideal_RF}
\end{align}
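Since $N_s$ is a positive integer, the false alarm probability \eqref{Eq:P_FA_Ideal_RF} is a regularized upper incomplete gamma function, which admits a finite-sum form; this makes a direct Monte-Carlo check easy. The sketch below assumes the illustrative parameter values $N_s=5$ and $\sigma_w^2=1$.

```python
import math
import random

def reg_upper_gamma(n, z):
    # Regularized upper incomplete gamma Q(n, z) for positive integer n:
    # Q(n, z) = exp(-z) * sum_{k=0}^{n-1} z^k / k!
    s, term = 0.0, 1.0
    for k in range(n):
        s += term
        term *= z / (k + 1)
    return math.exp(-z) * s

def p_fa_analytic(gamma_th, Ns=5, sig_w2=1.0):
    # False alarm probability: Gamma(Ns, Ns*gamma/sig_w^2) / Gamma(Ns)
    return reg_upper_gamma(Ns, Ns * gamma_th / sig_w2)

def p_fa_montecarlo(gamma_th, Ns=5, sig_w2=1.0, trials=40000, seed=3):
    # Empirical false alarm rate with noise-only samples r ~ CN(0, sig_w^2)
    rng = random.Random(seed)
    std = math.sqrt(sig_w2 / 2.0)
    hits = 0
    for _ in range(trials):
        T = sum(rng.gauss(0, std) ** 2 + rng.gauss(0, std) ** 2
                for _ in range(Ns)) / Ns
        hits += T > gamma_th
    return hits / trials
```

The empirical rate should match the closed form to within Monte-Carlo noise for any threshold.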
\subsection{Non-Ideal RF Front-End}
Based on the signal model presented in Section \ref{sub:Non-ideal-RF-front-end}, and assuming given channel realization and channel occupancy vectors
$H=\left\{ H_{-k},h_{-k},h_{k},H_{k}\right\} $ and $\Theta=\left\{ \Theta_{-k},\theta_{-k},\theta_{k},\Theta_{k}\right\} $,
respectively, it holds that
\begin{align}
\sigma^{2} &\hspace{-0.1cm} = \hspace{-0.1cm} E\left[\Re\left\{ r_{k}\right\} ^{2}\right]\hspace{-0.1cm} = \hspace{-0.1cm} E\left[\Im\left\{ r_{k}\right\} ^{2}\right]
\hspace{-0.1cm} = \hspace{-0.1cm} \theta_{k}\left(\Re\left\{ h_{k}\right\}^{2} \hspace{-0.1cm} + \hspace{-0.1cm} \Im\left\{ h_{k}\right\}^{2}\right)\hspace{-0.1cm} \left(\Re\left\{ \xi_{k}\right\} ^{2} \hspace{-0.1cm} + \hspace{-0.1cm} \Im\left\{ \xi_{k}\right\} ^{2}\right)\frac{\sigma_{s}^{2}}{2}
\hspace{-0.1cm} + \hspace{-0.1cm} \frac{\sigma_{w}^{2}\hspace{-0.1cm} +\hspace{-0.1cm} \sigma_{\eta_{k}}^{2}}{2},\label{sigma_received_non_ideal}
\end{align}
and $\Re\left\{ r_{k}\right\}$, $\Im\left\{ r_{k}\right\}$ are uncorrelated random variables, i.e., $E\left[\Re\left\{ r_{k}\right\} \Im\left\{ r_{k}\right\} \right]=0$.
Thus, the received energy, given by (\ref{ED_classic}), follows a chi-square distribution with $2N_{s}$ degrees of freedom and CDF given by
\begin{align}
F_{T_{k}}\left(x\left|H,\Theta\right.\right)=\frac{\gamma\left(N_{s},\frac{N_{s}x}{2\sigma^{2}}\right)}{\Gamma\left(N_{s}\right)},
\label{Eq:F_Tk_cond_H_Theta}
\end{align}
where $\sigma^{2}$ can be expressed, after taking into account (\ref{sigma_eta}), (\ref{sigma_psi})
and (\ref{sigma_received_non_ideal}), as
\begin{align}
\sigma^{2}
&
=\theta_{k}\mathcal{A}_{1}\left|h_{k}\right|^{2}+\theta_{k-1}\mathcal{A}_{2}\left|h_{k-1}\right|^{2}+\theta_{k+1}\mathcal{A}_{2}\left|h_{k+1}\right|^{2}
+\theta_{-k+1}\mathcal{A}_{3}\left|h_{-k+1}\right|^{2}
\nonumber \\ &
+\theta_{-k-1}\mathcal{A}_{3}\left|h_{-k-1}\right|^{2}+\theta_{-k}\mathcal{A}_{4}\left|h_{-k}\right|^{2}
+\mathcal{A}_{5}.\label{eq:sigma}
\end{align}
In the above equation,
$\mathcal{A}_{1} =\left|\xi_{k}\right|^{2}\frac{\sigma_{s}^{2}}{2},$
$\mathcal{A}_{2} =\left|K_{1}\right|^{2}A_{k-1}\frac{\sigma_{s}^{2}}{2},$
$\mathcal{A}_{3} =\left|K_{2}\right|^{2}A_{-k+1}\frac{\sigma_{s}^{2}}{2},$
$\mathcal{A}_{4} =\left|\gamma_{0}\right|^{2}\left|K_{2}\right|^{2}\left|\alpha\right|^{2}\frac{\sigma_{s}^{2}}{2},$
and
$\mathcal{A}_{5}=\frac{\sigma_{w}^{2}}{2}+\frac{\left|\gamma_{0}\right|^{2}}{2}\left(\left|K_{1}\right|^{2}\sigma_{e,k}^{2}+\left|K_{2}\right|^{2}\sigma_{e,-k}^{2}\right)$
model the amplitude distortion due to the joint effects of RF impairments, the interference from the $k-1$ and $k+1$ channels, the interference from the $-k-1$ and $-k+1$ channels due to PHN, the mirror interference due to IQI, and the distortion noise due to the joint effects of RF impairments, respectively.
The following theorems return analytical closed-form expressions for the CDF of the energy test statistics for a given channel occupancy vector, when at least one channel of $\{-k-1,-k,-k+1,k-1,k,k+1\}$ is busy and when all channels are idle.
\begin{thm}
\label{thm:The-CDF-of-non-ideal}The CDF of the energy statistics
assuming a non-ideal RF front-end and an arbitrary channel occupancy vector
$\Theta$ different from the all-idle vector, can be evaluated by \eqref{Eq:F_Tk_final}, given at the top of the next page,
\begin{figure*}
\begin{align}
& F_{T_{k}} \left(x\left|\Theta\right.\right)=\sum_{i=2}^{3}U\left(m_{i}-2\right)w_{1,i}w_{2,i}\mathcal{A}_{i} \exp\left(-\frac{\mathcal{A}_5}{\mathcal{A}_{i}}\right)
+\sum_{i=1}^{4}U\left(m_{i}-2\right)w_{1,i} \mathcal{A}_{i} \left(\mathcal{A}_{5}+\mathcal{A}_{i}\right) \exp\left(-\frac{\mathcal{A}_{5}}{\mathcal{A}_{i}}\right)
\nonumber \\
& +\sum_{i=1}^{4}U\left(m_{i}-1\right)\left(U\left(1-m_{i}\right)-\mathcal{A}_{5}U\left(m_{i}-2\right)\right)w_{1,i}\mathcal{A}_{i} \exp\left(-\frac{\mathcal{A}_5}{\mathcal{A}_{i}}\right)
\nonumber \\ &
- \sum_{i=2}^{3} \sum_{k=0}^{N_{s}-1} U\left(m_{i}-2\right) \frac{1}{k!} \frac{w_{1,i}w_{2,i}}{\mathcal{A}_{i}^{k-1}} \left(\frac{N_{s} x}{2}\right)^{k} {\Gamma\left(-k+1,\frac{\mathcal{A}_{5}}{\mathcal{A}_{i}},\frac{N_s x}{2\mathcal{A}_{i}},1\right)}
\nonumber \\
& -\sum_{i=1}^{4}\sum_{k=0}^{N_{s}-1}U\left(m_{i}-1\right)\left(U\left(1-m_{i}\right)-\mathcal{A}_{5}U\left(m_{i}-2\right)\right)\frac{1}{k!}\frac{w_{1,i}}{\mathcal{A}_{i}^{k-1}}\left(\frac{N_{s} x}{2}\right)^{k}
{\Gamma\left(-k+1,\frac{\mathcal{A}_{5}}{\mathcal{A}_{i}},\frac{N_s x}{2\mathcal{A}_{i}},1\right)}
\nonumber \\ &
- \sum_{i=1}^{4}\sum_{k=0}^{N_{s}-1}U\left(m_{i}-2\right)
\frac{1}{k!} \frac{w_{1,i}}{\mathcal{A}_{i}^{k-1}} \left(\frac{N_{s}x}{2}\right)^{k} {\Gamma\left(-k+2,\frac{\mathcal{A}_{5}}{\mathcal{A}_{i}},\frac{N_{s} x}{2\mathcal{A}_{i}},1\right)}.
\label{Eq:F_Tk_final}
\end{align}
\hrulefill{}\vspace{-2pt}
\end{figure*}
where $w_{1,i}$ and $w_{2,i}$ are given by
\begin{align}
w_{1,i} & =\frac{\exp\left(\frac{\mathcal{A}_{5}}{\mathcal{A}_{i}}\right)}{\Gamma\left(m_{i}\right)\left(\prod_{j=1}^{4}\mathcal{A}_{j}^{m_{j}}\right)}\prod_{j=1,j\neq i}^{4}\left(\frac{1}{\mathcal{A}_{j}}-\frac{1}{\mathcal{A}_{i}}\right)^{-m_{j}},\label{eq:w1}
\end{align}
and
\begin{equation}
w_{2,i}=\sum_{j=1,j\neq i}^{4}m_{j}\left(\frac{1}{\mathcal{A}_{j}}-\frac{1}{\mathcal{A}_{i}}\right)^{-1},
\label{eq:w2}
\end{equation}
respectively.
\end{thm}
\begin{IEEEproof}
According to \cite{A:Karagiannidis-2006-ID448} and after some basic algebraic manipulations, the PDF of $\sigma^{2}$ can be written~as
\begin{align}
f_{\sigma^{2}}\left(x\left|\Theta\right.\right)&=\sum_{i=2}^{3}U\left(m_{i}-2\right)w_{1,i}w_{2,i}\exp\left(-\frac{x}{\mathcal{A}_{i}}\right)\nonumber \\
&+ \sum_{i=1}^{4}U\left(m_{i}-1\right)\left(U\left(1-m_{i}\right)-\mathcal{A}_{5}U\left(m_{i}-2\right)\right)w_{1,i}\exp\left(-\frac{x}{\mathcal{A}_{i}}\right)\nonumber \\
& +\sum_{i=1}^{4}U\left(m_{i}-1\right)U\left(m_{i}-2\right)w_{1,i}x\exp\left(-\frac{x}{\mathcal{A}_{i}}\right),
\label{Eq:Conditional_PDF_of_sigma}
\end{align}
where $x\in\left[{\cal A}_{5},\infty\right)$, $m=\left[\theta_{k},\theta_{k-1}+\theta_{k+1},\theta_{-k+1}+\theta_{-k-1},\theta_{-k}\right]$,
and $w_{1,i}$ and $w_{2,i}$ are defined by (\ref{eq:w1}) and (\ref{eq:w2}),
respectively.
Based on the above, the CDF of the received energy in the case of a non-ideal RF front-end, unconditioned with respect to $H$, can be expressed~as
\begin{align}
F_{T_{k}} \left(x\left|\Theta\right.\right)=\sum_{i=2}^{3}U\left(m_{i}-2\right)w_{1,i}w_{2,i}\mathcal{I}_{1,i}
&+\sum_{i=1}^{4}U\left(m_{i}-1\right)\left(U\left(1-m_{i}\right)-\mathcal{A}_{5}U\left(m_{i}-2\right)\right)w_{1,i}\mathcal{I}_{1,i}\nonumber \\
& +\sum_{i=1}^{4}U\left(m_{i}-1\right)U\left(m_{i}-2\right)w_{1,i}\mathcal{I}_{2,i},\label{Eq:test1}
\end{align}
with
\begin{align}
\mathcal{I}_{1,i} & =\frac{1}{\Gamma\left(N_{s}\right)}\int_{\mathcal{A}_{5}}^{\infty}\exp\left(-\frac{y}{\mathcal{A}_{i}}\right)\gamma\left(N_{s},\frac{N_{s}x}{2y}\right)dy,\label{I1}\\
\mathcal{I}_{2,i} & =\frac{1}{\Gamma\left(N_{s}\right)}\int_{\mathcal{A}_{5}}^{\infty}y\exp\left(-\frac{y}{\mathcal{A}_{i}}\right)\gamma\left(N_{s},\frac{N_{s}x}{2y}\right)dy.\label{I2}
\end{align}
Eqs. (\ref{I1}) and (\ref{I2}), after some basic algebraic manipulations,
and using \cite[Eq. (8.352/2)]{B:Gra_Ryz_Book} and \cite[Eq. (6.2)]{B:chaudhry2001class}, can be written as
\begin{align}
\mathcal{I}_{1,i} & =\mathcal{A}_{i} \exp\left(-\frac{\mathcal{A}_5}{\mathcal{A}_{i}}\right)- \sum_{k=0}^{N_{s}-1}\frac{\left(N_{s}-1\right)!}{k!}\left(\frac{N_{s} x}{2}\right)^{k} \frac{1}{\mathcal{A}_{i}^{k+1}} \frac{\Gamma\left(-k+1,\frac{\mathcal{A}_{5}}{\mathcal{A}_{i}},\frac{N_s x}{2\mathcal{A}_{i}},1\right)}{\Gamma\left(N_{s}\right)},
\label{I1i_final}
\end{align}
and
\begin{align}
\mathcal{I}_{2,i} & \hspace{-0.1cm} = \hspace{-0.1cm}
\mathcal{A}_{i} \left(\mathcal{A}_{5}+\mathcal{A}_{i}\right) \exp\left(-\frac{\mathcal{A}_{5}}{\mathcal{A}_{i}}\right)
\hspace{-0.1cm}-\hspace{-0.1cm} \sum_{k=0}^{N_{s}-1}\hspace{-0.1cm} \frac{\left(N_{s}-1\right)!}{k!} \left(\frac{N_{s}x}{2}\right)^{k} \frac{1}{\mathcal{A}_{i}^{k+1}}\frac{\Gamma\left(-k+2,\frac{\mathcal{A}_{5}}{\mathcal{A}_{i}},\frac{N_{s} x}{2\mathcal{A}_{i}},1\right)}{\Gamma\left(N_{s}\right)}.
\label{I2i_final}
\end{align}
Hence, taking into consideration \eqref{I1i_final} and \eqref{I2i_final}, and since $U\left(m_{i}-1\right)U\left(m_{i}-2\right)=U\left(m_{i}-2\right)$, Eq. \eqref{Eq:test1} reduces to \eqref{Eq:F_Tk_final}. This concludes the proof.
\end{IEEEproof}
\begin{thm}
The CDF of the energy statistics, assuming a non-ideal RF front-end and the all-idle channel occupancy vector $\Theta=\tilde{\Theta}_{2,0}=\left[0, 0, 0, 0, 0, 0\right]$, can be obtained~by
\begin{align}
F_{T_k}\left(x\left|\tilde{\Theta}_{2,0}\right.\right) = \frac{\gamma\left(N_s,\frac{N_s x}{2\mathcal{A}_5}\right)}{\Gamma\left(N_s\right)}.
\label{Eq:CDF_non_ideal_all_idle}
\end{align}
\end{thm}
\begin{IEEEproof}
If the channel occupancy vector $\Theta$ is the all idle vector, i.e., $\Theta=\tilde{\Theta}_{2,0}=\left[0, 0, 0, 0, 0, 0\right]$, then, in accordance to \eqref{eq:sigma}, the signal variance can be expressed as
$\sigma^2_{\tilde{\Theta}_{2,0}} = \mathcal{A}_5.$
According to \eqref{Eq:F_Tk_cond_H_Theta}, since $\sigma^2_{\tilde{\Theta}_{2,0}}$ is independent of $H$, the CDF of the energy statistics, assuming a non-ideal RF front-end, when all the channels in $\{-k-1, -k, -k+1, k-1, k, k+1\}$ are idle, can be obtained by \eqref{Eq:CDF_non_ideal_all_idle}. This concludes the proof.
\end{IEEEproof}
Based on the above analysis, the detection probability of the energy detector
with RF impairments~is
\begin{equation}
\mathcal{P}_{D}=\sum_{i=1}^{\card\left(\tilde{\Theta}_{1}\right)}\Pr\left(\tilde{\Theta}_{1}\right)\left(1-F_{T_{k}}\left(\gamma^{\text{ni}}\left|\tilde{\Theta}_{1}\right.\right)\right),\label{Eq:P_D_RF}
\end{equation}
where $\tilde{\Theta}_{1}$ is the set of channel occupancy vectors of the form
$\tilde{\Theta}_{1}=\left[\theta_{k}=1,\theta_{k-1},\theta_{k+1},\theta_{-k+1},\theta_{-k-1},\theta_{-k}\right].$
Similarly, the probability of false alarm is
\begin{align}
\mathcal{P}_{FA}=\sum_{i=1}^{\card\left(\tilde{\Theta}_{2,c}\right)} \Pr\left(\tilde{\Theta}_{2,c}\right)\left(1-F_{T_{k}}\left(\gamma^{\text{ni}}\left|\tilde{\Theta}_{2,c}\right.\right)\right)
+ \Pr\left(\tilde{\Theta}_{2,0}\right)\frac{\Gamma\left(N_{s},\frac{N_{s}\gamma^{\text{ni}}}{2\mathcal{A}_{5}}\right)}{\Gamma\left(N_{s}\right)},\label{Eq:P_F_RF}
\end{align}
where $\Pr\left(\Theta\right)$ denotes the probability of the given channel occupancy $\Theta$, $\tilde{\Theta}_{2,c}$ is the set defined~as
$\tilde{\Theta}_{2,c}=\tilde{\Theta}_{2}-\tilde{\Theta}_{2,0},$
and $\tilde{\Theta}_{2}$ is the set defined as
$\tilde{\Theta}_{2}=\left[\theta_{k}=0,\theta_{k-1},\theta_{k+1},\theta_{-k+1},\theta_{-k-1},\theta_{-k}\right].$
Note that \eqref{Eq:P_F_RF} applies even when the channel $K$ or $-K$ is sensed. However, in this case $\tilde{\Theta}_{1}$ and $\tilde{\Theta}_{2}$ can be obtained by
$\tilde{\Theta}_{1}=\left[\theta_{k}=1,\theta_{k-1},\theta_{k+1}=0,\theta_{-k+1},\theta_{-k-1}=0,\theta_{-k}\right]$ and $\tilde{\Theta}_{2}=\left[\theta_{k}=0, \theta_{k-1}, \theta_{k+1}=0, \theta_{-k+1}, \theta_{-k-1}=0, \theta_{-k}\right]$, respectively.
\section{Cooperative Spectrum Sensing with Decision Fusion}\label{sec:Cooperative_Spectrum_Sensing}
In this section, we consider a cooperative spectrum sensing scheme, in which each SU makes a binary decision on the channel occupancy, namely `0' or `1' for the absence or presence of PU activity, respectively, and the one-bit individual decisions are forwarded to a FC over a narrowband reporting channel. The sensing channels (the channels between the PU and the SUs) are considered identical and independent. Moreover, we assume that the decision device of the FC implements the $k_{\rm{su}}$-out-of-$n_{\rm{su}}$ rule, which implies that the FC decides that the channel is occupied if $k_{\rm{su}}$ or more SUs individually decide that the channel is busy. Note that when $k_{\rm{su}}=1$, $k_{\rm{su}}=n_{\rm{su}}$ or $k_{\rm{su}}=\lceil n_{\rm{su}}/2\rceil$, the $k_{\rm{su}}$-out-of-$n_{\rm{su}}$ rule reduces to the OR rule, the AND rule and the Majority rule, respectively.
\subsection{Ideal RF Front-End}
Here, we derive closed-form expressions for the false alarm and detection probabilities, assuming that the RF front-ends of the SUs are ideal, considering both error-free and imperfect reporting channels.
\subsubsection{Reporting Channels without Errors}
If the channel between the SUs and the FC is error free, the false alarm probability (${\cal P}_{C,fa}$) and the detection probability (${\cal P}_{C,d}$) are given by \cite[Eq. (17)]{ED_Cooperative_Spectrum_Sensing_in_CR}
\begin{align}
{\cal P}_{C,fa}\hspace{-0.15cm}=\hspace{-0.15cm} \sum_{i=k_{\rm{su}}}^{n_{\rm{su}}}\left(\begin{array}{c}n_{\rm{su}}\\i\end{array} \right) \left({\cal P}_{fa}\right)^{i} \left(1-{\cal P}_{fa}\right)^{n_{\rm{su}}-i}
\text{ and }
{\cal P}_{C,d}\hspace{-0.15cm}=\hspace{-0.15cm} \sum_{i=k_{\rm{su}}}^{n_{\rm{su}}}\left(\begin{array}{c}n_{\rm{su}}\\i\end{array} \right) \left({\cal P}_{d}\right)^{i} \left(1-{\cal P}_{d}\right)^{n_{\rm{su}}-i}.\label{Eq:P_FA_C_D_ideal}
\end{align}
Taking into consideration (\ref{Eq:P_FA_Ideal_RF}), (\ref{Eq:P_D_Ideal_RF}) and (\ref{CDF_theta_1_ideal}), and after some basic algebraic manipulations, Eqs.~(\ref{Eq:P_FA_C_D_ideal}) can be expressed as
\begin{align}
{\cal P}_{C,fa}&= \sum_{i=k_{\rm{su}}}^{n_{\rm{su}}}\left(\begin{array}{c}n_{\rm{su}}\\i\end{array} \right)\left(\frac{\Gamma\left(N_{s},\frac{N_{s}\gamma\left(k\right)}{\sigma_{w}^{2}}\right)}{\Gamma\left(N_{s}\right)}\right)^{i}
\left(\frac{\gamma\left(N_{s},\frac{N_{s}\gamma\left(k\right)}{\sigma_{w}^{2}}\right)}{\Gamma\left(N_{s}\right)}\right)^{n_{\rm{su}}-i},
\\
{\cal P}_{C,d}=
&\sum_{i=k_{\rm{su}}}^{n_{\rm{su}}}\left(\begin{array}{c}n_{\rm{su}}\\i\end{array} \right)
\left({\exp\left(\frac{\sigma_{w}^2}{\sigma_h^2 \sigma_s^2}\right)} \sum_{k=0}^{N_s-1}\frac{1}{k!} \left(\frac{N_s \gamma}{\sigma_h^2 \sigma_s^2}\right)^k \Gamma\left(-k+1,\frac{\sigma_w^2}{\sigma_h^2 \sigma_s^2},\frac{N_s \gamma}{\sigma_h^2 \sigma_s^2},1\right)\right)^{i}
\nonumber\\& \times
\left(1-{\exp\left(\frac{\sigma_{w}^2}{\sigma_h^2 \sigma_s^2}\right)} \sum_{k=0}^{N_s-1}\frac{1}{k!} \left(\frac{N_s \gamma}{\sigma_h^2 \sigma_s^2}\right)^k \Gamma\left(-k+1,\frac{\sigma_w^2}{\sigma_h^2 \sigma_s^2},\frac{N_s \gamma}{\sigma_h^2 \sigma_s^2},1\right)\right)^{n_{\rm{su}}-i}.
\end{align}
\subsubsection{Reporting Channels with Errors}
If the reporting channel is imperfect, errors occur in the detection of the bits transmitted by the SUs. In this case, the false alarm and detection probabilities can be derived by \cite[Eq. (18)]{ED_Cooperative_Spectrum_Sensing_in_CR}
\begin{align}
{\cal P}_{C,{\cal X}} = \sum_{i=k_{\rm{su}}}^{n_{\rm{su}}} \left(\begin{array}{c}n_{\rm{su}}\\i \end{array}\right) \left({\cal P}_{{\cal X},e}\right)^{i} \left(1-{\cal P}_{{\cal X},e}\right)^{n_{\rm{su}}-i},
\label{Eq:PCX}
\end{align}
where
${\cal P}_{{\cal X},e} = {\cal P}_{{\cal X}} \left(1-P_{e}\right) + \left(1-{\cal P}_{{\cal X}}\right) P_{e},$
is the equivalent false alarm (`${\cal X}=fa$') or detection (`${\cal X}=d$') probability, and $P_{e}$ is the cross-over probability of the reporting channel, which equals the bit error rate (BER) of the channel. Considering binary phase shift keying (BPSK), an ideal RF front-end at the FC, and Rayleigh fading, the BER can be expressed as
\begin{align}
P_{e} = \frac{1}{2}\left(1-\sqrt{\frac{\gamma_{r}}{1+\gamma_{r}}}\right),
\end{align}
with $\gamma_{r}$ being the signal-to-noise ratio (SNR) of the link between the SUs and the FC.
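The reporting-channel model above is easy to compose end-to-end. A brief Python sketch follows; the numeric values in the usage line are illustrative only.

```python
import math

def ber_bpsk_rayleigh(gamma_r):
    # BER of BPSK over Rayleigh fading with average SNR gamma_r:
    # P_e = (1/2) * (1 - sqrt(gamma_r / (1 + gamma_r)))
    return 0.5 * (1.0 - math.sqrt(gamma_r / (1.0 + gamma_r)))

def equivalent_prob(p_x, p_e):
    # Local probability seen at the FC after the binary reporting channel:
    # P_{X,e} = P_X * (1 - P_e) + (1 - P_X) * P_e
    return p_x * (1.0 - p_e) + (1.0 - p_x) * p_e

def fused_prob(p_x_e, k_su, n_su):
    # k-out-of-n fusion of i.i.d. one-bit decisions
    return sum(math.comb(n_su, i) * p_x_e ** i * (1.0 - p_x_e) ** (n_su - i)
               for i in range(k_su, n_su + 1))

# Illustrative use: 5 SUs, majority rule, local detection probability 0.9,
# reporting links at 10 (linear) average SNR
p_e = ber_bpsk_rayleigh(10.0)
p_cd = fused_prob(equivalent_prob(0.9, p_e), 3, 5)
```

Setting $k_{\rm{su}}=1$ or $k_{\rm{su}}=n_{\rm{su}}$ recovers the OR and AND rules, respectively.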
\subsection{Non-Ideal RF Front-End}
In this subsection, we consider that the RX front-ends of the SUs suffer from different levels of RF imperfections.
\subsubsection{Reporting Channels without Errors}
Here, we assume that the reporting channel is error free and
that SU $j$ sends $d_{j,k}=0$ or $d_{j,k}=1$ to the FC to report the absence or presence, respectively, of PU activity at channel $k$.
If the sensing channel $k$ is idle ($\theta_{k}=0$), the probability that the $j^{\text{th}}$ SU reports that the channel is busy ($d_{j,k} =1$) is ${\cal P}_{fa, j}$, while the probability that it reports that the channel is idle ($d_{j,k} =0$) is $\left(1-{\cal P}_{fa, j}\right)$. Therefore, since each SU decides individually whether there is PU activity in channel $k$, the probability that the $n_{\rm{su}}$ SUs report a given decision set $\mathcal{D}=\left[d_{1,k}, d_{2,k}, \cdots, d_{n_{\rm{su}},k}\right]$, if $\theta_{k}=0$, can be written~as
\begin{align}
{\cal{P}}_{fa}(\mathcal{D})=\prod_{j=1}^{n_{\rm{su}}}
\left(
U\left(-d_{j,k}\right)\left(1-{\cal P}_{fa, j}\right)
+U\left(d_{j,k}-1\right){\cal P}_{fa, j}\right).
\end{align}
Furthermore, based on the $k_{\rm{su}}$-out-of-$n_{\rm{su}}$ rule, the FC decides that the $k^{\text{th}}$ channel is busy if at least $k_{\rm{su}}$ out of the $n_{\rm{su}}$ SUs report ``1''. Consequently, for a given decision set, the false alarm probability at the FC can be evaluated by
\begin{align}
{\cal P}_{C,FA\left|\mathcal{D}\right.} =
U\left(\sum_{l=1}^{n_{\rm{su}}}d_{l,k}-k_{\rm{su}}\right)
\prod_{j=1}^{n_{\rm{su}}}
\left(
U\left(- d_{j,k}\right)
\left(1-{\cal P}_{fa, j}\right)
+U\left(d_{j,k}-1\right)
{\cal P}_{fa, j}
\right).
\end{align}
Hence, accounting for all possible decision sets $\mathcal{D}$, the false alarm probability at the FC, using the $k_{\rm{su}}$-out-of-$n_{\rm{su}}$ rule, can be obtained~by
\begin{align}
{\cal P}_{C,FA} = \sum_{i=1}^{\card\left(\mathcal{D}\right)}
U\left(\sum_{l=1}^{n_{\rm{su}}}d_{l,k}-k_{\rm{su}}\right)
\prod_{j=1}^{n_{\rm{su}}}
\left(
U\left(-d_{j,k}\right)\left(1-{\cal P}_{fa, j}\right)
+U\left(d_{j,k}-1\right){\cal P}_{fa, j}\right).
\label{Eq:Cooperative_FA_dif_imp_general_no_error}
\end{align}
Similarly, the detection probability at the FC, using $k_{\rm{su}}$-out-of-$n_{\rm{su}}$ rule, can be expressed as
\begin{align}
{\cal P}_{C,D} = \sum_{i=1}^{\card\left(\mathcal{D}\right)}
U\left(\sum_{l=1}^{n_{\rm{su}}}d_{l,k}-k_{\rm{su}}\right)
\prod_{j=1}^{n_{\rm{su}}}
\left(
U\left(-d_{j,k}\right)\left(1-{\cal P}_{d, j}\right)
+U\left(d_{j,k}-1\right){\cal P}_{d, j}\right).
\label{Eq:Cooperative_D_dif_imp_general_no_error}
\end{align}
Note that if the FC uses the OR rule, Eqs. \eqref{Eq:Cooperative_FA_dif_imp_general_no_error} and \eqref{Eq:Cooperative_D_dif_imp_general_no_error} can be simplified~to
\begin{align}
{\cal P}_{\text{OR},FA}=1-\prod_{i=1}^{n_{\rm{su}}} \left(1-{\cal P}_{fa, i}\right),
\text{ and }
{\cal P}_{\text{OR},D}=1-\prod_{i=1}^{n_{\rm{su}}} \left(1-{\cal P}_{d, i}\right),
\end{align}
respectively, while if the FC uses the AND rule, Eqs. \eqref{Eq:Cooperative_FA_dif_imp_general_no_error} and \eqref{Eq:Cooperative_D_dif_imp_general_no_error} can be simplified~to
\begin{align}
{\cal P}_{\text{AND},FA}=\prod_{i=1}^{n_{\rm{su}}} {\cal P}_{fa, i},
\text{ and }
{\cal P}_{\text{AND},D}=\prod_{i=1}^{n_{\rm{su}}} {\cal P}_{d, i},
\end{align}
respectively.
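The reduction of \eqref{Eq:Cooperative_FA_dif_imp_general_no_error} and \eqref{Eq:Cooperative_D_dif_imp_general_no_error} to the OR and AND rules can be verified numerically by enumerating all decision sets. A small Python sketch follows, taking $U(0)=1$ for the unit step and arbitrary illustrative per-SU probabilities.

```python
from itertools import product

def step(x):
    # Unit step U(x): 1 for x >= 0, else 0
    return 1 if x >= 0 else 0

def fc_probability(p_local, k_su):
    """Sum over all decision sets D in {0,1}^n of the indicator that at
    least k_su SUs report '1', times the product of per-SU report
    probabilities. p_local[j] is SU j's probability of reporting '1'
    (P_fa,j or P_d,j)."""
    n = len(p_local)
    total = 0.0
    for d in product((0, 1), repeat=n):
        ind = step(sum(d) - k_su)                      # U(sum_l d_l - k_su)
        prob = 1.0
        for dj, pj in zip(d, p_local):
            prob *= pj if dj == 1 else (1.0 - pj)      # per-SU report law
        total += ind * prob
    return total
```

With $k_{\rm{su}}=1$ the enumeration collapses to $1-\prod_i(1-{\cal P}_i)$ (OR rule), and with $k_{\rm{su}}=n_{\rm{su}}$ to $\prod_i {\cal P}_i$ (AND rule), matching the simplified expressions above.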
In the special case where all the SUs suffer from the same level of RF impairments, the false alarm probability (${\cal P}_{C,FA}$) and the detection probability (${\cal P}_{C,D}$) are given by
\begin{align}
{\cal P}_{C,FA} \hspace{-0.1cm}=\hspace{-0.1cm} \sum_{i=k_{\rm{su}}}^{n_{\rm{su}}}\left(\begin{array}{c}n_{\rm{su}}\\i\end{array} \right) \left({\cal P}_{FA}\right)^{i} \left(1-{\cal P}_{FA}\right)^{n_{\rm{su}}-i},\label{Eq:P_C_FA_RF}
\text{ and }
{\cal P}_{C,D} \hspace{-0.1cm}=\hspace{-0.1cm} \sum_{i=k_{\rm{su}}}^{n_{\rm{su}}}\left(\begin{array}{c}n_{\rm{su}}\\i\end{array} \right) \left({\cal P}_{D}\right)^{i} \left(1-{\cal P}_{D}\right)^{n_{\rm{su}}-i},
\end{align}
where ${\cal P}_{FA}$ and ${\cal P}_{D}$ are given by (\ref{Eq:P_F_RF}) and (\ref{Eq:P_D_RF}), respectively.
\subsubsection{Reporting Channels with Errors}
Next, we consider an imperfect reporting channel. In this scenario, the false alarm and detection probabilities can be derived by
\begin{align}
{\cal P}_{C,\mathcal{X}} = \sum_{i=1}^{\card\left(\mathcal{D}\right)}
U\left(\sum_{l=1}^{n_{\rm{su}}}d_{l,k}-k_{\rm{su}}\right)
\prod_{j=1}^{n_{\rm{su}}}
\left(
U\left(-d_{j,k}\right)\left(1-{\cal P}_{\mathcal{X},e, j}\right)
+U\left(d_{j,k}-1\right){\cal P}_{\mathcal{X},e, j}\right),
\label{Eq:Cooperative_dif_imp_general_error}
\end{align}
where ${\cal P}_{{\cal X},e,j}$ can be derived by
\begin{align}
{\cal P}_{{\cal X},e,j} = {\cal P}_{{\cal X},j} \left(1-P_{e,j}\right) + \left(1-{\cal P}_{{\cal X},j}\right) P_{e,j},
\label{PXej}
\end{align} with ${\cal P}_{{\cal X},j}$ denoting the equivalent false alarm (`${\cal X}=FA$') or detection (`${\cal X}=D$') probability of the $j^{\text{th}}$ SU and $P_{e,j}$ being the cross-over probability of the reporting channel connecting the $j^{\text{th}}$ SU with the FC. Notice that, since ${\cal P}_{{\cal X},j}\in\left[0,1\right]$, based on \eqref{PXej}, ${\cal P}_{{\cal X},e,j}$ is bounded between $P_{e,j}$ and $1-P_{e,j}$.
In the special case where all the SUs suffer from the same level of RF impairments, Eq. \eqref{Eq:Cooperative_dif_imp_general_error} can be expressed as \cite[Eq. (18)]{ED_Cooperative_Spectrum_Sensing_in_CR}
\begin{align}
{\cal P}_{C,{\cal X}} = \sum_{i=k_{\rm{su}}}^{n_{\rm{su}}} \left(\begin{array}{c}n_{\rm{su}}\\i \end{array}\right) \left({\cal P}_{{\cal X},e}\right)^{i} \left(1-{\cal P}_{{\cal X},e}\right)^{n_{\rm{su}}-i}.
\label{PcX}
\end{align}
\section{Numerical and Simulation Results}\label{sec:Numerical_Results}
In this section, we investigate the effects of RF impairments on the spectrum sensing performance of EDs by illustrating analytical and Monte-Carlo simulation results for different RF imperfection levels. In particular, we consider the following insightful scenario. It is assumed that there are $K=8$ channels and that the second channel is sensed (i.e., $k=2$). The signal and total guard-band bandwidths are assumed to be $W_{sb}=1\text{ }\rm{MHz}$ and $W_{gb}=125\text{ }\rm{kHz}$, respectively, while the sampling rate is set equal to the total bandwidth of the wireless signal, i.e., $W=9\text{ }\rm{MHz}$. Moreover, the channel occupancy process is assumed to be Bernoulli distributed with probability $q=1/2$ and independent across channels, while the signal variance is equal for all channels. The number of samples is set to $N_s=5$, while it is assumed that $\sigma_{h}^{2}=\sigma_{w}^{2}=1$.
In addition, for simplicity and without loss of generality, we consider an ideal clipping PA.
In the following figures, the numerical results are shown with continuous lines, while markers are employed to illustrate the simulation results.
Moreover, the performance of a classical ED with ideal RF front-end is used as a~benchmark.
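To reproduce simulated false alarm curves such as those in the following figures under $\mathcal{H}_0$, one can estimate the false alarm probability of the classical ED by Monte-Carlo sampling. The sketch below assumes the standard energy detection statistic $T=\sum_{n=1}^{N_s}\left|y[n]\right|^2$ over circularly symmetric complex Gaussian noise; the function name and default values are illustrative, not those of our simulator.

```python
import numpy as np

def false_alarm_mc(threshold, n_samples=5, sigma_w=1.0, trials=200_000, seed=1):
    # Monte-Carlo estimate of the ED false alarm probability under H0:
    # the received samples are circularly symmetric complex Gaussian
    # noise with variance sigma_w^2, and the energy statistic
    # T = sum_n |y[n]|^2 is compared against the detection threshold.
    rng = np.random.default_rng(seed)
    y = np.sqrt(sigma_w ** 2 / 2.0) * (
        rng.standard_normal((trials, n_samples))
        + 1j * rng.standard_normal((trials, n_samples)))
    T = np.sum(np.abs(y) ** 2, axis=1)
    return float(np.mean(T > threshold))
```

Since $T\sim(\sigma_w^2/2)\,\chi^2_{2N_s}$ under $\mathcal{H}_0$, the estimate can be checked against the closed-form Gamma tail probability.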
Figs. \ref{fig:FA_Thershold_IBO_SNR} and \ref{fig:ROC_IBO_SNR} demonstrate the impact of LNA non-linearities on the performance of the classical ED, assuming different $\rm{SNR}$ values.
Specifically, in Fig. \ref{fig:FA_Thershold_IBO_SNR}, false alarm probabilities are plotted against the detection threshold for different $\rm{SNR}$ and $\rm{IBO}$ values, considering $\beta = 100\text{ }\rm{Hz}$, $\rm{IRR}=25\text{ }\rm{dB}$ and a phase imbalance equal to $\phi=3^{\circ}$.
It becomes evident from this figure that the analytical results are identical to the simulation results, thus verifying the presented analytical framework.
Additionally, it is observed that, for a given $\rm{IBO}$ value, as the $\rm{SNR}$ increases, the interference from the neighboring and mirror channels increases; hence, the false alarm probability increases. On the contrary, as $\rm{IBO}$ increases for a given $\rm{SNR}$ value, the effects of LNA non-linearities are constrained, and therefore the false alarm probability decreases.
\begin{figure}
\centering\includegraphics[width=0.75\linewidth,trim=0 0 0 0,clip=false]{images/FA_vs_Threshold_diff_IBO.eps}
\vspace{-0.43cm}
\caption{False alarm probability vs threshold for different IBO and occupied channel $\rm{SNR}$ values, when $\rm{IRR}$ and $\beta$ are considered to be equal to $25\text{ }\rm{dB}$ and $100\text{ }\rm{Hz}$, respectively.}
\label{fig:FA_Thershold_IBO_SNR}
\vspace{-0.8cm}
\end{figure}
In Fig. \ref{fig:ROC_IBO_SNR}, receiver operating characteristic (ROC) curves are plotted for different $\rm{SNR}$ and $\rm{IBO}$ values, considering $\beta=100\text{ }\rm{Hz}$, $\rm{IRR}=25\text{ }\rm{dB}$ and $\phi=3^{\circ}$.
We observe that for low $\rm{SNR}$ values, LNA non-linearities do not affect the ED performance. However, as the $\rm{SNR}$ increases, the distortion noise caused by the imperfections of the amplifier increases; as a result, LNA non-linearities have increasingly adverse effects on the spectrum sensing capabilities of the classical ED, significantly reducing its performance for low $\rm{IBO}$ values. Furthermore, as $\rm{IBO}$ increases, the effects of LNA non-linearities become constrained, and therefore the performance of the non-ideal ED tends to the performance of the ideal~ED.
\begin{figure}
\centering\includegraphics[width=0.75\linewidth,trim=0 0 0 0,clip=false]{images/ROC_for_diff_IBO_and_SNR.eps}
\vspace{-0.43cm}
\caption{ROC for different values of IBO and occupied channel SNR values, when $\rm{IRR}$ and $\beta$ are considered to be equal to $25\text{ }\rm{dB}$ and $100\text{ }\rm{Hz}$, respectively.}
\label{fig:ROC_IBO_SNR}
\vspace{-0.8cm}
\end{figure}
\begin{figure}
\centering\includegraphics[width=0.75\linewidth,trim=0 0 0 0,clip=false]{images/ROC_vs_beta_IBO_6dB.eps}
\vspace{-0.43cm}
\caption{ROCs for different values of $\beta$ and occupied channel SNR values, when $\rm{IBO}$ and $\rm{IRR}$ are considered to be equal to $6\text{ }\rm{dB}$ and $25\text{ }\rm{dB}$, respectively.}
\label{fig:ROC_beta_IBO_6dB}
\vspace{-0.8cm}
\end{figure}
Fig. \ref{fig:ROC_beta_IBO_6dB} illustrates the impact of PHN on the performance of the classical ED, assuming various $\rm{SNR}$ values, when $\rm{IRR} = 25\text{ }\rm{dB}$, $\phi=3^{\circ}$ and $\rm{IBO}=6\text{ }\rm{dB}$. We observe that, for practical levels of IQI and PHN, the signal leakage from channels $-k+1$ and $-k-1$ to channel $-k$ due to PHN is small; therefore, the signal leakage to channel $k$ from channels $-k-1$ and $-k+1$, due to the joint effect of PHN and IQI, is in the range of $\left[-70\text{ }\rm{dB},-50\text{ }\rm{dB} \right]$. Consequently, in the low $\rm{SNR}$ regime, the leakage from channels $-k-1$ and $-k+1$ does not affect the spectrum sensing capabilities.
Hence, it becomes evident that, at low $\rm{SNR}$ values, PHN does not affect the spectrum sensing capability of the classical ED compared with the ideal RF front-end ED.
On the other hand, as $\rm{SNR}$ increases, PHN has more detrimental effects on the spectrum sensing capabilities of the classical ED, significantly reducing the ED performance for high $\beta$ values.
\begin{figure}
\centering\includegraphics[width=0.75\linewidth,trim=0 0 0 0,clip=false]{images/ROC_vd_IRR.eps}
\vspace{-0.43cm}
\caption{ROCs for different values of $\rm{IRR}$ and occupied channel SNR values, when $\rm{IBO}$ and $\beta$ are considered to be equal to $6\text{ }\rm{dB}$ and $100\text{ }\rm{Hz}$, respectively.}
\label{fig:ROC_vs_IRR}
\vspace{-0.8cm}
\end{figure}
The effects of IQI on the spectrum sensing performance of the ED are presented in Fig. \ref{fig:ROC_vs_IRR}. In particular, in this figure, ROCs are plotted assuming various $\rm{SNR}$s, when $\rm{IBO}=6\text{ }\rm{dB}$ and $\beta = 100\text{ }\rm{Hz}$.
Again, the analytical results coincide with simulation results, verifying the derived expressions. Moreover, at low $\rm{SNR}$s, it is observed that there is no significant performance degradation due to IQI. Nonetheless, as $\rm{SNR}$ increases, the interference of the mirror channels increases and as a result this RF imperfection notably affects the spectrum sensing performance.
Additionally, for a given $\rm{SNR}$, we observe that as $\rm{IRR}$ increases, the signal leakage of the mirror channels, due to IQI, decreases; hence, the performance of the non-ideal ED tends to become identical to the one of the ideal ED.
Finally, when compared with the spectrum sensing performance affected by
LNA nonlinearities, as depicted in Fig. \ref{fig:ROC_IBO_SNR}, it becomes apparent that the impact of LNA nonlinearity on the spectrum sensing performance is more detrimental than that of~IQI.
\begin{figure}
\centering\includegraphics[width=0.75\linewidth,trim=0 0 0 0,clip=false]{images/ROC_cooperative_comp_n5.eps}
\vspace{-0.43cm}
\caption{ROCs for ideal (continuous lines) and non-ideal (dashed lines) RF front-end, when the CR network is equipped with $5$ SUs, $\rm{SNR}=0\text{ }\rm{dB}$, the reporting channel is considered error free, and $\rm{IBO}=3\text{ }\rm{dB}$, $\rm{IRR}=20\text{ }\rm{dB}$, and $\beta=100\text{ }\rm{Hz}$, for all the~SUs.}
\label{fig:ROC_cooperative_error_free_n5}
\vspace{-0.8cm}
\end{figure}
The effects of RF impairments in cooperative sensing, when the reporting channel is considered error free, are illustrated in Fig. \ref{fig:ROC_cooperative_error_free_n5}. In this figure, ROCs for ideal (continuous lines) and non-ideal (dashed lines) RF front-end SUs are presented, considering a CR network composed of $n_{\rm{su}}=5$ SUs and a FC, which uses the OR or the AND rule to decide whether the sensing channel is idle or busy. The EDs of the SUs are assumed identical, with $\rm{IBO}=3\text{ }\rm{dB}$, $\rm{IRR}=20\text{ }\rm{dB}$, and a $3\text{ }\rm{dB}$ bandwidth of $100\text{ }\rm{Hz}$. Again, it is shown that the analytical results are identical to the simulation results, thus verifying the presented analytical framework.
When a given decision rule is applied, it becomes evident from the figure that the RF imperfections cause severe degradation of the sensing capabilities of the CR network. For instance, if the OR rule is employed and the false alarm probability is equal to $14\%$, the RF impairments result in about $31\%$ degradation compared with the ideal RF front-end~scenario. This result indicates that it is important to take into consideration the hardware constraints of the low-cost spectrum sensing~SUs.
\begin{figure}
\centering\includegraphics[width=0.75\linewidth,trim=0 0 0 0,clip=false]{images/Different_RF_imperfections_CRN.eps}
\vspace{-0.43cm}
\caption{ROCs for ideal and non-ideal RF front-end, when the CR network is equipped with $5$ SUs under different levels of RF imperfections, $\rm{SNR}=0\text{ }\rm{dB}$, the reporting channel is considered error free and the FC uses the $\rm{AND}$ or the $\rm{OR}$ rule. $\text{S}_{1}$ and $\text{S}_{2}$ stand for SUs with $\rm{IBO}=3\text{ }\rm{dB}$ and $\rm{IRR}=20\text{ }\rm{dB}$, and $\rm{IBO}=6\text{ }\rm{dB}$ and $\rm{IRR}=30\text{ }\rm{dB}$, respectively.}
\label{fig:ROC_cooperative_error_free_n5_diff_RF}
\vspace{-0.8cm}
\end{figure}
In Fig. \ref{fig:ROC_cooperative_error_free_n5_diff_RF}, ROCs are illustrated for a CR network composed of $n_{\rm{su}}=5$ SUs, which suffer from different levels of RF imperfections, and a FC that employs the $\rm{AND}$ or the $\rm{OR}$ rule to decide whether the sensing channel is idle or busy. In this scenario, we consider two types of SUs, namely $S_1$ and $S_2$. The RF front-end specifications of $S_1$ are $\rm{IBO}=3\text{ }\rm{dB}$, $\rm{IRR} = 20\text{ }\rm{dB}$ and $\beta = 100\text{ }\rm{Hz}$, whereas the specifications of $S_2$ are $\rm{IBO}=6\text{ }\rm{dB}$, $\rm{IRR} = 30\text{ }\rm{dB}$ and $\beta = 100\text{ }\rm{Hz}$. In other words, the CR network in this scenario includes SUs of both nearly worst-case ($S_1$) and nearly optimal ($S_2$) quality. As benchmarks, we present the ROCs of a CR network equipped with classical ED sensor nodes whose RF front-end is considered ideal, as well as of CR networks that use only $S_1$ or only $S_2$ sensor nodes. In this figure, we observe the detrimental effects of the RF imperfections of the ED sensor nodes on the sensing capabilities of the CR network.
Furthermore, it is demonstrated that, as the number of $S_1$ SUs decreases and the number of $S_2$ SUs increases, the energy detection performance of the FC tends to the case in which all the SUs are ideal.
This was expected since $S_2$ SUs have higher quality RF front-end characteristics than the other set of~SUs.
\section{Conclusions}\label{sec:Conclusions}
We studied the performance of multi-channel spectrum sensing, when the RF front-end is impaired by hardware imperfections. In particular, assuming Rayleigh fading, we provided the analytical framework for evaluating the detection and false alarm probabilities of energy detectors when LNA nonlinearities, IQI and PHN are taken into account.
Next, we extended our study to the case of a CR network, in which the SUs suffer from different levels of RF impairments, taking into consideration both scenarios of error free and imperfect reporting channels.
Our results illustrated the degrading effects of RF imperfections on the ED spectrum sensing performance, which cause significant losses in the utilization of the spectrum. Among others, LNA non-linearities were shown to have the most detrimental effect on the spectrum sensing~performance.
Furthermore, we observed that in cooperative spectrum sensing, the sensing capabilities of the CR system are significantly influenced by the different levels of RF imperfections of the~SUs.
Therefore, hardware constraints should be seriously taken into consideration when designing direct conversion CR RXs.
\subsection{Multiscale problems and the problem of sparsity}
Simulations of multiscale problems are expensive and, typically, require
some type of a model reduction.
Our approaches seek adaptive reduced-order models, locally in space,
and construct multiscale basis functions in each coarse region
to represent the solution space. These approaches share
common concepts with homogenization and upscaling methods
\cite{papanicolau1978asymptotic, bakhvalov1989homogenisation, eh09, weh02,numerical-homo,2d-waves, fish2008mathematical, fish2013practical},
where local effective properties are constructed.
In contrast, in multiscale methods, local multiscale
basis functions \cite{hw97,jennylt03,melenk1996partition, eh09, oz07,GMsFEM-wave,GMsFEM-mixed,elastic-jcp,GMsFEM-elastic,Chung-Leung-Cicp, fish2013practical, zch15, hl15}
are constructed to represent the solution space.
These basis functions are typically constructed in the snapshot spaces
\cite{egh12}.
In this paper, we investigate cases, when the basis functions are sparse
in the snapshot space.
We discuss several examples, which include multiscale
parameter-dependent problems and the Helmholtz equation.
The parameter-dependent multiscale problems are motivated by stochastic
problems, where the parameter is used to describe the uncertainties.
\subsection{Sparse GMsFEM Concepts}
In this paper, we use the GMsFEM framework
(\cite{galvis2015generalized,Ensemble, eglmsMSDG, eglp13oversampling,calo2014multiscale,chung2014adaptive1,chung2015generalizedperforated,chung2015residual,chung2015online,egh12})
and investigate
the sparsity within GMsFEM snapshots. To illustrate the main idea
of our approach, we consider
\[
L u = f,
\]
where $L$ is a differential operator. For example, in the paper, we consider
parameter-dependent heterogeneous flows, $Lu=-\mbox{div}(\kappa(x;\mu)\nabla u)$,
and the Helmholtz equation, $Lu=-\mbox{div}(\kappa(x)\nabla u) - \Omega^2 n(x) u$.
The main idea of GMsFEM
is to construct a snapshot space and identify a subspace,
called the offline or online space depending whether the problem is
parameter-dependent.
This subspace is used
to solve the underlying problem at a reduced cost.
The snapshot and online spaces are constructed in each coarse element
(see next section for more precise definitions), where a coarse element is a region,
which is much larger than the characteristic fine-length scale
(see Figure \ref{snapshotoverview1}).
For each coarse region,
$\tau_j$, we construct snapshot vectors, $\{ \psi^j_i \}$
(here $i$ is the numbering of the snapshot functions),
that represent the local solution space.
We denote the snapshot space
by
\[
V_{\text{snap}}^{\tau_j}=\text{Span}_i\{\psi_i^j \}, \ \
V_{\text{snap}}=\text{Span}_{i,j}\{\psi_i^j \}.
\]
In GMsFEM, the online spaces are constructed using the elements of local
snapshot functions.
In many examples, the snapshot space can be large and
the online space can be a sparse subspace of the snapshot space.
The objective of this paper is to investigate these cases.
\subsection{Snapshot spaces}
The snapshot spaces play an important role in the GMsFEM. They are designed
to capture the solution space locally and are used to preserve
some features of the solution space, e.g., mass conservation.
Typical snapshot spaces consist of local solutions constructed using
some sets of boundary conditions or right hand sides.
With an appropriate choice of snapshot spaces (e.g., using oversampling
\cite{eglp13oversampling}), one can improve the convergence
of GMsFEM substantially.
To convey the concept of snapshot spaces, we present some examples.
We will consider two examples discussed above. We start with a simplified
example related to the parameter-dependent case,
$L_0u=-div(\kappa(x;\mu=0)\nabla u)$, i.e., the problem without a parameter.
In each coarse-grid block
$\tau_j$ (see the left plot in Figure \ref{snapshotoverview1}),
we consider a local solution
\begin{equation}
\label{def:snap1}
L_0(\psi_i^j)=0\ \text{in}\ \tau_j
\end{equation}
subject to some boundary conditions, where these boundary conditions
play an important role in defining snapshot functions.
One option is to choose all possible boundary
conditions considering all unit vectors on the boundary of $\tau_j$. More precisely,
$\psi_i^j(x)=\delta_i(x)$ on $\partial \tau_j$, where
$\delta_i(x)$ is $1$ at the node
$i$ and zero elsewhere. The computations of these snapshot functions
are
expensive. Instead,
we use the boundary conditions, which are randomly distributed numbers
on the fine-grid
nodes of the boundary $\partial \tau_j$
(see the left plot in Figure \ref{snapshotoverview1}).
The random boundary conditions allow extracting the essential information
provided we choose several more snapshot vectors than the number of modes,
we would like to use.
For {\it parameter-dependent problems}, the snapshot vectors are defined as above
(\ref{def:snap1}) for each pre-selected value of $\mu_m$.
For example, for {\it one-dimensional case} $\tau_j=[x_j,x_{j+1}]$,
for parameter-independent problem,
the snapshot space in $\tau_j$
consists of two solutions $\psi^j$ and $\psi^{j+1}$, such that
$\psi^n(x_l)=\delta_{nl}$, $n=j,j+1$, $l=j,j+1$,
where $\delta_{nl}$ is the Kronecker symbol and $\psi^n$ ($n=j,j+1$)
is a solution of
${d\over dx}\left( \kappa(x;\mu=0) {d\over dx}\psi^n\right)=0$ in $\tau_j$
(see Figure \ref{snapshot1D} for illustration).
For {\it parameter-dependent problems},
the snapshot vectors in $\tau_j=[x_j,x_{j+1}]$ are the solutions of
\[
{d\over dx}\left( \kappa(x;\mu_m) {d\over dx}\psi_m^n\right)=0\ \text{in} \ \tau_j,
\]
$\psi^n_m(x_l)=\delta_{nl}$, $n=j,j+1$, $l=j,j+1$.
For multi-dimensional examples, we can construct the snapshots similarly for
each $\tau_j$ using different boundary conditions, and use one index to represent the snapshot
vectors as $\psi_i^j$.
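For concreteness, the one-dimensional snapshot $\psi^j$ can be computed with a standard fine-grid finite-difference discretization of the local problem; the sketch below is our illustration, and the discretization choices in it are not part of the method description. It imposes $\psi(x_j)=1$ and $\psi(x_{j+1})=0$; for constant $\kappa$, it returns the linear function depicted in Figure \ref{snapshot1D}, and the second snapshot is simply $1-\psi$.

```python
import numpy as np

def snapshot_1d(kappa, a=0.0, b=1.0, n=100):
    # Solve (kappa(x) psi')' = 0 on [a, b] with psi(a) = 1, psi(b) = 0
    # on a uniform fine grid; kappa is sampled at cell midpoints.
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    k_mid = kappa(0.5 * (x[:-1] + x[1:]))          # one value per fine cell
    A = np.zeros((n - 1, n - 1))                   # interior nodes only
    rhs = np.zeros(n - 1)
    for i in range(n - 1):
        A[i, i] = (k_mid[i] + k_mid[i + 1]) / h ** 2
        if i > 0:
            A[i, i - 1] = -k_mid[i] / h ** 2
        if i < n - 2:
            A[i, i + 1] = -k_mid[i + 1] / h ** 2
    rhs[0] = k_mid[0] / h ** 2                     # lifts psi(a) = 1
    psi_interior = np.linalg.solve(A, rhs)
    return x, np.concatenate(([1.0], psi_interior, [0.0]))
```

For heterogeneous $\kappa$, the computed snapshot is strictly decreasing and its slope adapts to the local conductivity, in contrast to the linear hat function of the constant-coefficient case.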
For the second example,
$Lu=-div(\kappa(x)\nabla u) - \Omega^2 n(x) u$, we choose the snapshot
vectors to be functions $ e^{i \Omega k_i\cdot x}$ for a set of pre-defined values
of $k_i$ on a unit circle (see the right plot in Figure \ref{snapshotoverview1}).
\begin{figure}[htb]
\centering
\includegraphics[width=3in, height=3in]{snapshot_1d}
\caption{Illustration of snapshot concepts in one dimensional example.}
\label{snapshot1D}
\end{figure}
The multiscale basis functions are constructed in the snapshot space. Our earlier
approaches seek a small dimensional subspace of the snapshot space by performing
a local spectral decomposition (based on analysis). However, these approaches
use all snapshot vectors when seeking multiscale basis functions. In a number of
applications, the solution is sparse in the snapshot space. That is, in the expansion
\[
u=\sum_{i,j} c_{i,j} \psi_i^j,
\]
many coefficients $c_{i,j}$ are zeros. In this case, one can save computational effort
by employing sparsity techniques. In this paper,
our main goal is to discuss how GMsFEM can be designed if the solution
is sparse in the snapshot space. We describe two classes of approaches
and present a framework for constructing sparse GMsFEM.
The main challenge in these applications is to construct a snapshot space,
where the solution is sparse.
In our first example, this can be achieved,
because an online parameter value $\mu$ can be close to some of the pre-selected
offline
values of $\mu$'s, and thus, the multiscale basis functions (and the solution)
can have a sparse representation in the snapshot space. In our second example,
we select cases where the solution $u$ contains only a few snapshot
vectors corresponding to directions $k_i$.
We note that if the snapshot space is not chosen carefully, one may not have
the sparsity.
In general, there can be many other examples
and our goal is to show how local multiscale model reduction techniques can
be used for such problems.
\begin{figure}[htb]
\centering
\includegraphics[width=3in, height=2in]{SnapshotOverview1.jpg}
\includegraphics[width=3in, height=2in]{SnapshotOverview2}
\caption{Illustration of snapshot concepts. Left: For the first problem; Right: For the second problem.}
\label{snapshotoverview1}
\end{figure}
\subsection{Approaches for identifying sparse solutions in snapshot spaces}
The main goal
of this paper is to discuss how to explore sparsity ideas within GMsFEM
for constructing
local multiscale basis functions. We consider two distinct cases.
\begin{itemize}
\item First approach: ``Local-Sparse Snapshot Subspace Approach''. Determining the online sparse space locally via local spectral sparse decomposition in the snapshot space (motivated by
parameter-dependent problems).
\item Second approach: ``Sparse Snapshot Subspace Approach''. Determining the online space globally via a global solve (motivated by using plane wave snapshot vectors and the Helmholtz equation).
\end{itemize}
See Figure \ref{overview1} for illustration. We use sparsity techniques
(e.g., \cite{candes2006compressive,candes2008restricted,candes2006robust,mackey2014compressive,schaeffer2013sparse})
to identify local multiscale basis functions
and solve the global problem.
In above approaches, the snapshot functions can be linearly dependent.
In fact, in general, we would like to have a large snapshot space that can
contain
a sparse representation of the solution. In both approaches formulated above,
the linear
dependency is removed. In the first approach, it is removed by sparse POD.
We note that in the original GMsFEM approach \cite{egh12},
POD across all
snapshot functions is used to remove
linear dependence. However, this can result in a loss of sparsity, i.e.,
the solution may contain many nonzero coefficients when represented
in the snapshot space. Thus, for sparsity, it is important to avoid
a POD step across all snapshot vectors. In our first example,
we need to avoid using POD for all $\mu$'s. For this reason, we
design a special sparse POD method using randomized snapshot functions.
It both eliminates linearly dependent snapshot functions and
identifies a sparse solution space. In the second method, $l_1$
minimization can
be used for all snapshots, even if they are linearly dependent.
By adding more snapshot vectors, we hope to identify a sparse representation
of the solution.
The second method will eliminate the linear dependence and identify a sparse
solution. Note that in our example, the snapshot vectors are linearly
independent.
\begin{figure}[htb]
\centering
\includegraphics[width=0.65 \textwidth]{SparsityOverview2.png}
\caption{Illustration of our approaches.}
\label{overview1}
\end{figure}
For the first case,
we consider parameter-dependent elliptic equations of the form
\begin{equation} \label{eq:original_NL}
-\mbox{div} \big( \kappa(x;\mu) \, \nabla u \big)=f \, \, \text{in} \, D,
\end{equation}
where $u=g$ on $\partial D$, and $\mu$ is a parameter.
Some of the existing approaches for parameter-dependent problems closely related to the proposed approaches include
reduced basis techniques \cite{ barrault, ct_POD_DEIM11}. In these approaches, the
reduced order model is constructed via a greedy algorithm.
In the proposed approach, we attempt to approximate
the solution space locally in each coarse block using $l_1$ minimization.
Local multiscale basis functions are constructed using GMsFEM.
In previous approaches, we attempted to compress
the local solutions corresponding to some pre-selected values, $\mu$, in the offline stage. This can lead to large dimensional
offline spaces. In this paper, we propose an approach to compute eigenvectors using
$l_1$ minimization based on randomized snapshots and oversampling.
This provides an efficient approach to identify sparse eigenvector
representation in GMsFEM.
The proposed approach gives a sparse representation of
the multiscale basis functions in terms of the snapshot space vectors
and has several advantages. (1) It allows quick assembly of the stiffness
matrix in the online space, since it involves only a few elements of the
snapshot space. (2) We can downscale the solution to the fine grid
much faster using sparse representation. (3) It avoids a POD based
step proposed in the original GMsFEM formulation (see \cite{egh12}),
which is performed across all $\mu$'s and
which can result in a large dimensional representation
of the online multiscale basis functions in terms of snapshot functions.
In the second example, we consider the Helmholtz equation
\begin{equation} \label{eq:Helmholtz}
-\mbox{div} \big( \kappa(x) \, \nabla u \big) - \Omega^2 n(x) u =f \, \, \text{in} \, D,
\end{equation}
where $\Omega$ is the frequency.
We will use plane waves as the
snapshot vectors.
In the computational examples considered in this paper, the solution has a few dominant propagating directions,
and is, therefore, spanned by only a few plane waves. This observation leads to the solution sparsity in our snapshot space.
However, the choice of local spectral decomposition
is not available for determining these dominant directions and we study using
$l_1$ minimization directly in the space of snapshot vectors.
We consider snapshot spaces that are not very large, with snapshot vectors given by closed-form formulas for constant media properties ($\kappa(x)$ and $n(x)$).
For the Helmholtz equation (\ref{eq:Helmholtz})
with low frequencies, we can expect a sparsity in our examples, which we exploit.
Thus, sparsity techniques are the natural methodologies in these situations.
In general, we can also consider the frequency to be a parameter,
and apply similar techniques used for the first example.
\subsection{Summary of numerical results}
Two test cases for the first approach are presented,
where we consider parametrized
conductivity fields. In the first case, the conductivity
is parametrized as an affine combination of two heterogeneous
conductivity fields. In the second case, we use a nonlinear
parameter dependence. In particular, we consider an initial conductivity
field with a channel and inclusions. The parametrization is introduced
such that these high-conductivity features spatially move within the domain.
This is a more challenging example because high-conductivity features
appear in many parts of the domain. Numerical results show that our approach
provides an accurate approximation of the solution using only a few degrees of freedom, and that the solution is sparse in an appropriate snapshot space.
Numerical results for the second approach involve solving the Helmholtz
equation in media with two isolated heterogeneous inclusions. We consider a domain with two distinct properties, where plane wave
solutions can provide a good approximation.
In this case, the solution is spanned by only a few plane waves, and is, therefore, sparse
in the space of plane waves with many propagating directions.
However, in general, we do not know which plane wave directions are
dominant and our algorithm identifies these directions by using $l_1$ minimization.
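To make the identification of dominant directions concrete, the following sketch (ours; the frequency, grid and regularization parameters are illustrative and differ from those used in our numerical experiments) builds a dictionary of plane-wave snapshots and recovers a sparse coefficient vector by the iterative soft-thresholding algorithm (ISTA) applied to the $l_1$-regularized least-squares problem.

```python
import numpy as np

def plane_wave_dictionary(points, omega, n_dir):
    # Columns are plane-wave snapshots exp(i*omega*k_d.x) for n_dir
    # directions k_d uniformly distributed on the unit circle.
    thetas = 2.0 * np.pi * np.arange(n_dir) / n_dir
    K = np.column_stack((np.cos(thetas), np.sin(thetas)))
    return np.exp(1j * omega * points @ K.T)

def ista(A, b, lam, n_iter=2000):
    # Iterative soft-thresholding for min_x 0.5||Ax-b||^2 + lam*||x||_1;
    # for complex x, the shrinkage acts on the magnitude.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = x - step * (A.conj().T @ (A @ x - b))
        mag = np.abs(g)
        x = np.maximum(1.0 - lam * step / np.maximum(mag, 1e-30), 0.0) * g
    return x
```

For a field synthesized from two plane waves, the two largest recovered coefficients identify the underlying propagating directions, while the remaining coefficients are driven to (near) zero by the $l_1$ penalty.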
The paper is organized in the following way. In the next section,
we present preliminaries and discuss GMsFEM, coarse and fine grid concepts.
In Section 3, we propose our new construction for the online space.
Section 4 is devoted to numerical results. In Section 5, we present conclusions.
\section{Preliminaries}
\label{prelim_NL}
To discretize (\ref{eq:original_NL}) or (\ref{eq:Helmholtz}),
we let $\mathcal{T}^H$ be a usual conforming partition of the computational domain $D$ into finite elements (triangles, quadrilaterals, tetrahedrals, etc.) and $\mathcal{E}^H$ denotes all the edges in the coarse mesh $\mathcal{T}^H$. We refer to this partition as the coarse grid and assume that each coarse subregion is partitioned into a connected union of fine-grid blocks. The fine-grid partition will be denoted by $\mathcal{T}^h$. We use $\{x_i\}_{i=1}^{N_v}$ (where $N_v$ denotes the number of coarse nodes) to denote the vertices of
the coarse mesh $\mathcal{T}^H$, and define the neighborhood of the node $x_i$ by
\begin{equation} \label{neighborhood}
\omega_i=\bigcup\{ K_j\in\mathcal{T}^H; ~~~ x_i\in \overline{K}_j\}.
\end{equation}
See Figure~\ref{schematic_ov} for an illustration of neighborhoods and elements subordinated to the coarse discretization. We emphasize the use of $\omega_i$ to denote a coarse neighborhood, and $K$ to denote a coarse element throughout the paper.
\begin{figure}[htb]
\centering
\includegraphics[width=0.65 \textwidth]{plotschematic_ov}
\caption{Illustration of a coarse neighborhood and coarse element}
\label{schematic_ov}
\end{figure}
Next, we briefly outline the global coupling and the role of coarse basis
functions for the respective formulations that we consider. For the discontinuous Galerkin (DG) formulation, we use a coarse element $K$ as the support for basis
functions, and for the continuous Galerkin (CG) formulation, we use $\omega_i$ as the support of basis functions.
In turn, throughout this paper, we use the notation
\begin{equation} \label{cgordg}
\tau_i = \left\{ \begin{array}{cc}
\omega_i & \text{for} \, \, \, \text{CG} \\
K_i & \text{for} \, \, \, \text{DG} \\
\end{array}\right.
\end{equation}
when referring to a coarse region where respective local computations are performed (see Figure \ref{schematic_ov}).
To further motivate the coarse basis construction, we offer a brief outline of the global coupling.
In particular, we note that
our approach will employ multiple basis functions per coarse neighborhood.
Both CG and DG solutions will be sought
as $u^{\text{DG/CG}}_{\text{ms}}(x;\mu)=\sum_{i,k} c_{k}^i \psi_{k}^{\tau_i}(x; \mu)$, where $\psi_{k}^{\tau_i}(x; \mu)$ are the basis functions
(without loss of generality, we write basis functions as parameter-dependent).
Once the basis functions are identified, the global coupling is given through the variational form
\begin{equation}
\label{eq:globalG_cg}
a_{\text{DG/CG}}(u^{\text{DG/CG}}_{\text{ms}},v;\mu)=(f,v), \quad \text{for all} \, \, v\in
V_{\text{on}}^{\text{DG/CG}},
\end{equation}
where $V_{\text{on}}^{\text{DG/CG}}$ denotes the space formed by these basis functions and $a_{\text{DG/CG}}$ is a bilinear form which will be defined later on. Throughout, for convenience,
we use the same notations for the discrete
and continuous representations of spatial fields.
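At the discrete level, \eqref{eq:globalG_cg} is a Galerkin projection of the fine-grid system onto the multiscale space. A minimal sketch (our notation: the columns of $\Phi$ are the discrete basis vectors $\psi_k^{\tau_i}$, while $A$ and $f$ are the fine-grid stiffness matrix and load vector):

```python
import numpy as np

def coarse_solve(A, f, Phi):
    # Galerkin projection onto the multiscale space spanned by the
    # columns of Phi: solve (Phi^T A Phi) c = Phi^T f and downscale
    # the coarse coefficients back to the fine grid.
    A_c = Phi.T @ A @ Phi
    f_c = Phi.T @ f
    c = np.linalg.solve(A_c, f_c)
    return Phi @ c, c
```

When $\Phi$ is the identity (one basis function per fine node), the coarse solve reproduces the fine-grid solution; the computational saving comes from $\Phi$ having far fewer columns than rows.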
\section{Sparse GMsFEM}
\label{cgdgmsfem}
In this section, we will give the detailed constructions of our sparse GMsFEM.
We start with an outline of the approach.
\subsection{Outline}
\label{locbasis}
In this section, we present an outline of the algorithm.
Assume that the snapshot space is
$V_{\text{snap}}^{\tau} = \text{Span}\{\psi_i^{\text{snap}} \}$
for a generic element $\tau$.
We assume that the solution is sparse in this snapshot space and
consider two approaches (see Figure \ref{overview1} for illustration).
Throughout the paper, we will assume that the solution is sparse
in the snapshot space. We
will discuss the assumption on the sparsity later in
Section \ref{sec:conclusion}.
\vspace{0.3cm}
\noindent
{\bf General outline of the sparse GMsFEM}:
\begin{itemize}
\item[1.] Coarse grid generation.
\item[2.] Construction of snapshot space, where the solution is sparse.
\item[3.]
\begin{itemize}
\item
{\it First approach. Local-Sparse Snapshot Subspace Approach}. Seek a subspace of the snapshot space and
construct multiscale basis functions that are sparse in the snapshot space.
\item
{\it Second approach. Sparse Snapshot Space Approach}. Solve for the sparse solution
in the snapshot space directly within a global formulation.
\end{itemize}
\end{itemize}
In the first approach, we perform local
calculations to identify multiscale basis functions that are
sparse in the snapshot space. Here, we will use approaches similar
to sparse POD. Then, the global problem is solved in the space
of multiscale basis functions. The resulting solution is sparse.
In the second approach,
we directly apply sparse solution techniques and find the solution that is sparse
in the snapshot space. This approach is more expensive
since it uses a large snapshot space. However, in some examples,
we can not identify local basis functions in the offline stage and
such approaches
give sparse solutions in the online stage.
\subsection{First approach. Local-Sparse Snapshot Subspace Approach}
We first give a general idea of this approach. We consider
a local snapshot space $V_{\text{snap}}^{\tau}=\text{Span}\{\psi_i^{\text{snap}} \}$.
In the local snapshot space, we seek multiscale basis functions $\{\psi_i^{\text{on}}\}$
that are sparse in the local snapshot space and which have
smallest energies (similar to sparse POD). For example,
following \cite{schaeffer2013sparse}, we can consider
\[
\min\limits_{\Psi\in \mathbb{R}^{n\times M_{\text{on}}^{\tau}}}{1\over \nu}\norm{\Psi}_{1}+\text{Tr}\langle \Psi^{T}A_{s}(\mu)\Psi\rangle,\text{ s.t. }\Psi^T\Psi=\text{I},
\]
where $n$ is the dimension of the fine-scale space, $M_{\text{on}}^{\tau}$ is the number of online basis functions
and $A_s(\mu)$ is the stiffness matrix formed in the snapshot space.
Our proposed local approach avoids expensive direct local
eigenvalue calculations and uses randomized snapshots.
This approach will be
used for problems where a local basis construction principle is known.
The latter typically involves
solving local generalized eigenvalue problems in the
snapshot spaces \cite{schaeffer2013sparse}.
Once multiscale basis functions (that are sparse in the snapshot space)
are constructed, we solve the problem on a coarse grid.
Here, we will consider the parameter-dependent problem
(\ref{eq:original_NL}).
We can consider the computation of the parameter-dependent coarse space
as an online procedure.
In the latter, our objective is to solve
many problems for a given value of
the parameter with different boundary conditions and right-hand sides.
\subsubsection{Snapshot space}
We first construct a snapshot space $V_{\text{snap}}^{\tau}$
(for a generic $\tau$).
Construction of the snapshot space involves solving the local problems for various choices of input parameters, and we describe it below.
We generate snapshots by
solving a small number of local problems with random boundary conditions,
\begin{equation}
\label{eq:random bc}
\begin{aligned}
-\mbox{div}(\kappa (x; \mu_j)\nabla \psi_{l,j}^{\tau, \text{rsnap}}) &=0\ \ \text{in } \tau^+\\
\psi_{l,j}^{\tau, \text{rsnap}}&=r_{l} \text{ on } \partial\tau^+,
\end{aligned}
\end{equation}
where $r_{l}$ are independent identically distributed (i.i.d.) standard Gaussian random vectors on
the fine-grid nodes of the boundary, $l=1, \cdots,L$, $\mu_j$ ($j=1,\dots,J$) is a specified set of fixed parameter values, and $J$ denotes the number of chosen parameter values. Here, $\tau^+$ is an oversampled region, shown in Figure \ref{schematic_ov} as $\omega_i^{+}$ or $K^+$ for the conforming Galerkin or discontinuous Galerkin formulation, respectively.
The space generated by $\psi_{l,
j}^{ \tau, \text{rsnap}}$ is a subspace of the space generated by all local snapshots
$\psi_{k, j}^{\tau, \text{snap}}$, $k=1,\cdots, N$, where $N$ denotes the number of fine-grid boundary nodes.
Denote $\Psi_{j}^{\tau, \text{rsnap}}=[\psi_{1,j}^{\tau, \text{rsnap}}, \cdots, \psi_{L,j}^{\tau, \text{rsnap}}]$ and $\Psi_{j}^{\tau, \text{snap}}=[\psi_{1,j}^{\tau, \text{snap}}, \cdots, \psi_{N,j}^{\tau, \text{snap}}]$.
Therefore, for each parameter $\mu_j$, there exists a randomized matrix $\mathcal{R}$
whose rows are composed of the random boundary vectors $r_{l}$ (as in Equation \eqref{eq:random bc}), such that
\begin{align}\label{eqn:random_snapshots}
\Psi_{j}^{\tau, \text{rsnap}}=\mathcal{R}\Psi_{j}^{\tau, \text{snap}}.
\end{align}
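To make the randomized snapshot construction concrete, the following is a minimal sketch in which a one-dimensional finite-difference Laplacian stands in for the local problems \eqref{eq:random bc}; the grid size, the number of snapshots, and all function names are illustrative assumptions, not part of the method's specification.

```python
import numpy as np

def random_snapshots(n, L, seed=0):
    # 1D stand-in for Eq. (random bc): harmonic extensions of
    # i.i.d. standard Gaussian boundary data on an oversampled interval.
    rng = np.random.default_rng(seed)
    # finite-difference Laplacian on the n interior fine-grid nodes
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    cols = []
    for _ in range(L):
        a, b = rng.standard_normal(2)   # random Dirichlet values
        rhs = np.zeros(n)
        rhs[0], rhs[-1] = a, b          # boundary data enters the rhs
        u = np.linalg.solve(A, rhs)
        cols.append(np.concatenate(([a], u, [b])))
    return np.array(cols).T             # columns span the snapshot space

Psi_rsnap = random_snapshots(n=30, L=5)
```

Each column is a discrete harmonic function, so only $L$ solves are needed rather than one solve per boundary node.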
Now, we are ready to present the local snapshot space as follows,
$$
V_{\text{snap}}^{\tau} = \text{Span}\{ \psi_{l,j}^{ \tau, \text{rsnap}}:~~~1\leq j \leq J ~~ \text{and} ~~ 1\leq l \leq L \},
$$
for each coarse subdomain $\tau$.
\begin{remark}
Note that we impose the same random vectors for the local snapshot calculations in Equation \eqref{eq:random bc} for all $\mu_j$, $j=1,\cdots, J$, in order to obtain a sparse online space.
\end{remark}
\subsubsection{Sparse local space calculations}
For a given input parameter $\mu$, we next construct the associated
coarse space
$V^{\tau}_{\text{on}}(\mu)$ \emph{for each} $\mu$ value on each coarse subdomain $\tau$.
In principle, we want this to be a small-dimensional subspace of the snapshot space for computational efficiency.
The coarse space will be used within the finite element
framework to solve the original global problem, where a continuous or discontinuous Galerkin coupling of the multiscale basis functions is used to compute the global solution. In particular, we seek a subspace of the snapshot space $V_{\text{snap}}^{\tau}$ such that it can approximate any element of the snapshot space in
an appropriate sense. For convenience of presentation, we
denote by $\Psi_{\text{snap}}^{\tau}$ the matrix whose columns are all $L\times J$ randomized snapshots on $\tau$.
As in the generation of the local snapshot space, we obtain the local sparse online basis via a local problem solved by $l_1$ optimization in the corresponding local snapshot space. Here, we will use a smaller subspace of $V_{\text{snap}}^{\tau}$ as the test space, constructed through multiplication by a random matrix $T_{\text{random}}^{\tau}$. Let $T_{\text{random}}^{\tau}$ be a matrix of size $(L\times J)\times q$ ($q\ll L\times J$) with rows of i.i.d.\ standard Gaussian random vectors. Then the test space is defined as $\Psi_{\text{snap}}^{\tau}T_{\text{random}}^{\tau}$.
Specifically, the local problem is arranged as follows.
Find $U_{l}$, such that, $\psi_{l}^{\tau, \text{on}}=\Psi_{\text{snap}}^{\tau}U_{l}$, and
\begin{align}
\label{eq:l1opt}
U_{l}=\operatorname*{argmin}_{U} {1\over \nu}\norm{U}_{1} \text{ subject to } A_c(\mu)U=F_{l}.
\end{align}
Here,
$A_c(\mu)=({\Psi_{\text{snap}}^{\tau}T_{\text{random}}^{\tau}})^T A(\mu) \Psi_{\text{snap}}^{\tau}$ and $F_{l} = ({\Psi_{\text{snap}}^{\tau}T_{\text{random}}^{\tau}})^T R_{l}$, where $A(\mu)$ is the local stiffness matrix and $R_{l}$ is the right-hand side for the local problem with Dirichlet boundary condition $r_{l}$. Namely, we solve the following local problems in $V_{\text{snap}}^{\tau}$ with the $l_1$-minimized coefficient vector $U_l$, using the test space $\Psi_{\text{snap}}^{\tau}T_{\text{random}}^{\tau}$:
\begin{equation}
\label{eq:online}
\left\{
\begin{aligned}
-\mbox{div}(\kappa (x; \mu)\nabla \psi_{l}^{\tau^+, \text{on}})=0\ \ \text{in}\ \tau^+\\
\psi_{l}^{\tau^+, \text{on}}=r_{l} \text{ on } \partial\tau^+.
\end{aligned}
\right.
\end{equation}
Later, we will briefly introduce the algorithm to solve Equation \eqref{eq:l1opt}.
Note that we impose the same random vectors as the boundary conditions in Equation \eqref{eq:random bc} to guarantee a sparse solution in Equation \eqref{eq:online}.
We can obtain the local online snapshot functions on the target
domain $\tau$ by restricting the solution of the local problem, $\psi_{l}^{\tau^+, \text{on}}$
to $\tau$ (which is denoted by $\psi_{l}^{\tau, \text{on}}$).
Now we are ready to present the local online snapshot space as follows,
$$
V_{\text{on}}^{\tau} = \text{Span}\{ \psi_{l}^{\tau, \text{on}}:~~~1\leq l \leq L \},
$$
for each coarse subdomain $\tau$. Then we denote $\Psi_{\text{on}}^{\tau}=[\psi_{1}^{\tau, \text{on}}, \cdots, \psi_{L}^{\tau, \text{on}}]$.
Next, we select the dominant modes from $V_{\text{on}}^{\tau}$ via the following eigenvalue problem:
\begin{equation} \label{oneig}
A^{\tau, \text{on}}(\mu) z_k^{\tau, \text{on}} = \lambda_k^{\tau, \text{on}} S^{\tau, \text{on}}(\mu) z_k^{\tau, \text{on}},
\end{equation}
where
\begin{equation*}
\displaystyle A^{\tau, \text{on}}(\mu) = [a^{\text{on}}(\mu)_{mn}] = [\int_\tau \kappa(x; \mu)
\nabla \psi_m^{\tau, \text{on}} \cdot \nabla \psi_n^{\tau, \text{on}}] = {\Psi_{\text{on}}^{\tau}}^T A(\mu) \Psi_{\text{on}}^{\tau}
\end{equation*}
%
%
\begin{equation*}
\displaystyle S^{\tau, \text{on}}(\mu) = [s^{\text{on}}(\mu)_{mn}] = [\int_{\tau} {\kappa}(x; \mu)
\psi_m^{\tau, \text{on}} \psi_n^{\tau, \text{on}}] = {\Psi_{\text{on}}^{\tau}}^T S(\mu) \Psi_{\text{on}}^{\tau},
\end{equation*}
and $\kappa(x; \mu)$ is now parameter dependent.
To generate the coarse space, we then choose the smallest $M_{\text{on}}^{\tau}$ eigenvalues from Equation \eqref{oneig} and form the corresponding basis functions by setting
$\phi_k^{\tau,\text{on}} = \sum\limits_{j=1}^{L}\psi_{j}^{\tau,\text{on}} z_{k,j}^{\tau,\text{on}}$
(for $k=1,\ldots, M_{\text{on}}^{\tau}$), where $z_{k,j}^{\tau,\text{on}}$ denotes the $j$-th component of the eigenvector $z_{k}^{\tau,\text{on}}$.
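The mode selection in \eqref{oneig} can be sketched as follows. This is a minimal illustration: the small random SPD matrices stand in for $A^{\tau, \text{on}}$ and $S^{\tau, \text{on}}$ (they are not the actual local matrices), and the Cholesky-based reduction is one standard way to solve a symmetric-definite generalized eigenvalue problem.

```python
import numpy as np

def dominant_modes(A_on, S_on, M):
    # Solve A z = lam S z (Eq. (oneig)) by reducing the SPD pencil
    # to a standard symmetric problem via the Cholesky factor of S.
    L = np.linalg.cholesky(S_on)          # S = L L^T
    Linv = np.linalg.inv(L)
    B = Linv @ A_on @ Linv.T              # standard symmetric problem
    lam, Y = np.linalg.eigh(B)            # eigenvalues in ascending order
    Z = Linv.T @ Y                        # back-transform: z = L^{-T} y
    return lam[:M], Z[:, :M]              # keep the M smallest modes

rng = np.random.default_rng(0)
G = rng.standard_normal((6, 6))
A_on = G @ G.T + 6 * np.eye(6)            # SPD stiffness surrogate
H = rng.standard_normal((6, 6))
S_on = H @ H.T + 6 * np.eye(6)            # SPD mass surrogate
lam, Z = dominant_modes(A_on, S_on, 3)
```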
Above, we presented an algorithm for solving an eigenvalue problem
in the space which has a sparse representation in the snapshot space.
One can attempt to obtain a sparse spectral basis from Equation \eqref{eq:online} using an $l_1$-minimization method in the local snapshot space $V_{\text{snap}}^{\tau}$ (\cite{hale2008fixed, yin2008bregman}).
Here, we can use the algorithm proposed in \cite{ozolins2014compressed}.
The general basis pursuit problem is as follows
\begin{align}
\label{eq:l1prb}
\min\limits_{x\in \mathbb{R}^n}\norm{x}_{1}+\nu\norm{Cx-f},
\end{align}
where $f\in\mathbb{R}^m$, $C\in\mathbb{R}^{m\times n}$, and $m\ll n$.
We refer to \cite{yin2008bregman} for the Bregman algorithm to solve \eqref{eq:l1prb}.
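As an illustration of the kind of iteration involved, the following sketches a linearized Bregman-type method for \eqref{eq:l1prb} on a toy underdetermined system; the step size, iteration count, and test matrix are illustrative assumptions and do not reproduce the exact algorithm of \cite{yin2008bregman}.

```python
import numpy as np

def linearized_bregman(C, f, mu=0.1, iters=200):
    # Linearized Bregman iteration for min ||x||_1 s.t. Cx = f;
    # it converges to the minimizer of
    # mu*||x||_1 + (1/(2*delta))*||x||^2 subject to Cx = f.
    delta = 0.5 / np.linalg.norm(C, 2) ** 2   # conservative step size
    v = np.zeros(C.shape[1])
    x = np.zeros(C.shape[1])
    for _ in range(iters):
        v = v + C.T @ (f - C @ x)                               # dual update
        x = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0)  # shrink
    return x

# toy underdetermined system: x_i + x_{i+5} = f_i
C = np.hstack([np.eye(5), np.eye(5)])
f = np.array([1.0, 0.0, 2.0, 0.0, 0.0])
x = linearized_bregman(C, f)
```

For this separable example, the $l_2$-regularized limit splits each $f_i$ equally between the two coupled unknowns, which gives a convenient closed-form check.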
Instead of solving Equation \eqref{eq:online} in the online stage, we can generate the sparse spectral basis directly, following the algorithm in
\cite{ozolins2014compressed}. The first $M_{\text{on}}^{\tau}$ sparse spectral basis functions can be obtained by solving
\begin{align}
\label{eq:sparse_eig}
\min\limits_{\Psi\in \mathbb{R}^{n\times M_{\text{on}}^{\tau}}}{1\over \nu}\norm{\Psi}_{1}+\text{Tr}\langle \Psi^{T}A_{c}(\mu)\Psi\rangle,\text{ s.t. }\Psi^T\Psi=\text{I}.
\end{align}
\subsubsection{Global coupling}
\label{globcoupling}
The multiscale basis functions constructed above can be coupled via a DG or CG
formulation. Below, we present the DG approach (similar results are
observed when the CG approach is used).
The discontinuous Galerkin (DG) approach
(see also \cite{riviere2008discontinuous,ABCM_unified_2002})
couples the multiscale basis functions without requiring
partition of unity functions;
however, the global formulation needs to be chosen carefully.
The global formulation is given by
\begin{equation}
a_{\text{DG}}(u_{H}^{\text{DG}},v)=(f,v),\quad\forall v\in V_{\text{on}},\label{eq:ipdg}
\end{equation}
where the bilinear form $a^{\text{DG}}$ is defined as
\begin{equation}
a_{\text{DG}}(u,v)=a_{H}(u,v)-\sum_{E\in\mathcal{E}^{H}}\int_{E}\Big(\average{{\kappa}\nabla{u}\cdot{n}_{E}}\jump{v}+\average{{\kappa}\nabla{v}\cdot{n}_{E}}\jump{u}\Big)+\sum_{E\in\mathcal{E}^{H}}\frac{\gamma}{h}\int_{E}\overline{\kappa}\jump{u} \jump{v} \label{eq:bilinear-ipdg}
\end{equation}
with
\begin{equation}
a_{H}({u},{v})=\sum_{K\in\mathcal{T}_{H}}a_{H}^{K}(u,v),\quad a_{H}^{K}(u,v)=\int_{K}\kappa\nabla u\cdot\nabla v,
\end{equation}
where $\gamma>0$ is a penalty parameter, ${n}_{E}$ is a fixed unit
normal vector defined on the coarse edge $E \in \mathcal{E}^H$.
Note that, in (\ref{eq:bilinear-ipdg}),
the average and the jump operators are defined in the classical way.
Specifically, consider an interior coarse edge $E\in\mathcal{E}^{H}$
and let $K^{+}$ and $K^{-}$ be the two coarse grid blocks sharing
the edge $E$. For a piecewise smooth function $G$, we define
\[
\average{G}=\frac{1}{2}(G^{+}+G^{-}),\quad\quad\jump{G}=G^{+}-G^{-},\quad\quad\text{ on }\, E,
\]
where $G^{+}=G|_{K^{+}}$ and $G^{-}=G|_{K^{-}}$ and we assume that
the normal vector ${n}_{E}$ is pointing from $K^{+}$ to $K^{-}$.
Moreover, on the edge $E$, we define $\overline{\kappa} = (\kappa_{K^+}+\kappa_{K^-})/2$
where $\kappa_{K^{\pm}}$ is the maximum value of $\kappa$ over $K^{\pm}$.
For a coarse edge $E$ lying on the boundary $\partial D$, we define
\[
\average{G}=\jump{G}=G,\quad \text{ and }\quad \overline{\kappa} = \kappa_{K} \quad\quad\text{ on }\, E,
\]
where we always assume that ${n}_{E}$ is pointing outside of $D$.
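The trace operators above can be sketched in a few lines; the arrays below are illustrative edge traces, not data from the method.

```python
import numpy as np

def average(G_plus, G_minus):
    # {G} = (G+ + G-)/2 on an interior coarse edge
    return 0.5 * (G_plus + G_minus)

def jump(G_plus, G_minus):
    # [G] = G+ - G-, with n_E pointing from K+ to K-
    return G_plus - G_minus

Gp = np.array([1.0, 3.0])   # trace from K+
Gm = np.array([1.0, 1.0])   # trace from K-
```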
We note that the DG coupling (\ref{eq:ipdg})
is the classical interior penalty discontinuous Galerkin (IPDG) method
\cite{riviere2008discontinuous}
with our multiscale basis functions as the approximation space.
We can obtain the discontinuous Galerkin spectral multiscale space as
\begin{equation} \label{dgspace}
V_{\text{on}}^{\text{DG}}(\mu) = \text{Span} \{ \phi_{k}^{K,\text{on}}: \, \, \, 1 \leq k \leq M_{\text{on}}^{K}, K \in \mathcal{T}^H \}.
\end{equation}
We collect the basis functions of $V_{\text{on}}^{\text{DG}}(\mu)$ as the columns of an operator matrix $R_0 = \left[ \phi_1^{\text{DG}} , \ldots, \phi_{N_c}^{\text{DG}} \right]$, where $N_c$ denotes the total number of coarse basis functions.
Solving the problem \eqref{eq:original_NL} in the coarse space $V_{\text{on}}^{\text{DG}}(\mu)$ using the DG formulation described in Equation \eqref{eq:ipdg} is equivalent to seeking
$u^{\text{DG}}_{\text{ms}}(x; \mu) = \sum_i c_i \phi_i^{\text{DG}}(x; \mu) \in V_{\text{on}}^{\text{DG}}$ such that
\begin{equation} \label{dgvarform}
a^{\text{DG}}(u_{\text{ms}}^{\text{DG}}, v; \mu) = (f, v) \quad \text{for all} \,\,\, v \in V_{\text{on}}^{\text{DG}},
\end{equation}
where
$ \displaystyle a^{\text{DG}}(u, v; \mu) $ and $(f, v)$ are defined in Equation \eqref{eq:bilinear-ipdg}.
We can obtain a coarse system
\begin{equation}
A_0(\mu) U_0^{\text{DG}} = F_0,
\end{equation}
where $U^{\text{DG}}_0$ denotes the discrete coarse DG solution, and
\begin{equation*}
A_0(\mu) = R_0^T A(\mu) R_0 \quad \text{and} \quad F_0 = R_0^T F,
\end{equation*}
where $A(\mu)$ and $F$ are the standard, fine-scale stiffness matrix and forcing vector corresponding to the form in Equation \eqref{dgvarform}. After solving the coarse system, we can use the operator matrix $R_0$ to obtain the fine-scale solution in the form of $R_0U_0^{\text{DG}}$.
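The coarse solve above is a Galerkin projection; a minimal sketch follows, with a small random SPD system standing in for the fine-scale stiffness matrix and load vector (the sizes and names are illustrative assumptions).

```python
import numpy as np

def coarse_solve(A, F, R0):
    # Galerkin projection onto the coarse space: A0 = R0^T A R0,
    # F0 = R0^T F; the coarse solution is prolonged back with R0.
    A0 = R0.T @ A @ R0
    F0 = R0.T @ F
    U0 = np.linalg.solve(A0, F0)
    return R0 @ U0                # fine-scale representation R0 U0

# sanity check on a small SPD fine-scale system
rng = np.random.default_rng(1)
G = rng.standard_normal((8, 8))
A = G @ G.T + 8 * np.eye(8)
F = rng.standard_normal(8)
u_fine = np.linalg.solve(A, F)
```

A full-rank square $R_0$ reproduces the fine solution exactly, while a genuine coarse space satisfies Galerkin orthogonality of the residual.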
\subsubsection{Computational cost}
In this section, we discuss the computational cost. For this, we assume
that we have chosen $J$ parameters,
$\mu_1,..., \mu_J$, and
$L$ boundary conditions $r_1,..., r_L$,
for constructing the snapshot space.
Then, the cost for snapshot calculations will be the same as solving
$J \times L$ local problems for randomized snapshots.
Next, we compute the cost of solving for the
$L_{\text{on}}$ online randomized snapshots. Each online snapshot
calculation requires solving an $l_1$ minimization with a constraint
involving a $q \times (L \times J)$ matrix (see (\ref{eq:l1opt})).
The cost of eigenvalue computation with
$L_{\text{on}}$ snapshots is considered to be small as it involves a
small eigenvalue problem of the size $L_{\text{on}}\times L_{\text{on}}$.
The online cost is mainly due to (1) solving for the
$L_{\text{on}}$ online randomized snapshots and (2) solving a global problem
on a coarse grid. The cost of solving the global problem is small
if the solution has a sparse representation. As for the cost of solving the
online randomized snapshots, it is small compared to solving local
problems in the online stage if the local problems have a high resolution.
Moreover, as we pointed out
earlier, the proposed approach allows a fast assembly of the stiffness
matrix in the online space, since it involves only a few elements of the
snapshot space. It also avoids using all snapshot vectors,
which can result in a large-dimensional representation
of the online multiscale basis functions.
\subsection{Second approach. Sparse Snapshot Subspace Approach}
In this approach, we will use
an appropriate snapshot space to compute the sparse solution directly.
Again, we consider
a local snapshot space $V_{\text{snap}}^{\tau}=\text{Span}\{\psi_i^{\tau, \text{snap}} \}$.
In some applications, we may not be able to reduce the dimension of the multiscale space locally. In this case, we can use sparsity techniques directly
in the global snapshot space to compute the solution.
This procedure can be more expensive; however, it can yield more accurate solutions.
More precisely, we seek the solution in the global snapshot space of
$V_{\text{snap}} = \text{Span}\{ \psi_i^{\text{snap}} \}
$
using $l_1$ minimization with a test space, $V_\text{test}$, spanned by random combinations of the snapshot basis functions. This is equivalent to finding $u_\text{ms}=\sum_i U^\text{ms}_i \psi^\text{snap}_i\in V_\text{snap} $, where
\begin{align}
\label{eq:l1_global}
U^\text{ms}=\text{argmin} \norm{U}_{1} \text{ subject to } a_\text{DG}(\sum_i U_i \psi^\text{snap}_i,v)=(f,v), \;\forall v\in V_\text{test}.
\end{align}
In the following section, we will take the Helmholtz problem as an example and discuss the procedure of using this approach to compute a sparse solution. More precisely, we consider the following problem: find $u$ such that
\[
-\nabla \cdot( \kappa(x) \nabla u) - \Omega^2 n(x)u = f, \quad \text{in} \quad D
\]
with the Dirichlet boundary condition $u|_{\partial D }=g$.
\subsubsection{Snapshot and test space}
In this section, we present the construction of the snapshot space $V_\text{snap}$ and the test space $V_\text{test}$. Since we are solving the Helmholtz equation with a fixed frequency $\Omega$, we can assume the solution, $u$, can be written as a linear combination of plane waves, namely, $u = \sum_k \beta_k e^{i \Omega k\cdot x}$. Therefore, we consider our snapshot basis to be some plane waves in each coarse block $K\in \mathcal{T}^H$, that is,
\[
V_\text{snap} = \text{Span}\{ \psi_{m,j}:\,1\leq m \leq N_d \text{ and } 1\leq j \leq M\},
\]
where
\begin{equation}
\label{eq:SparseTrig}
\psi_{m,j}(x) = \left\{ \begin{array}{cc}
e^{i \Omega k_m\cdot x} & \text{for} \, \, \, x\in K_j \\
0 & \text{otherwise} \\
\end{array}\right.
\end{equation}
with $k_m= (\sin(\pi m/N_d),\cos(\pi m/N_d))$, where $N_d$ is the number of propagating directions and $M$ is the number of coarse blocks.
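The plane-wave snapshot construction \eqref{eq:SparseTrig} can be sketched as follows; the grid, frequency, and number of directions are illustrative assumptions for a single coarse block.

```python
import numpy as np

def plane_wave_snapshots(Omega, Nd, X):
    # Plane-wave snapshots exp(i*Omega*k_m . x) evaluated at the
    # points X of one coarse block, with the directions
    # k_m = (sin(pi*m/Nd), cos(pi*m/Nd)) as in Eq. (SparseTrig).
    m = np.arange(1, Nd + 1)
    K = np.column_stack([np.sin(np.pi * m / Nd), np.cos(np.pi * m / Nd)])
    return np.exp(1j * Omega * X @ K.T)   # shape (npts, Nd)

# fine-grid points of a unit coarse block
g = np.linspace(0.0, 1.0, 5)
X = np.column_stack([a.ravel() for a in np.meshgrid(g, g)])
Psi = plane_wave_snapshots(Omega=2.0, Nd=8, X=X)
```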
We note that the plane wave basis is not new in solving the Helmholtz equation
and it was used in a number of papers (see \cite{huttunen2007use, colton1985novel,colton1987numerical,tezaur2006three, hiptmair2011plane} and
the references therein).
Next, we show the construction of the test space. We consider $\{ r^{(l)} \}^{N_t}_{l=1}$ to be a collection of i.i.d.\ standard Gaussian random vectors with $N_t\ll N_d$. Then the test space $V_\text{test}$ is defined by
\[
V_\text{test}=\text{Span} \{ \phi_{l,j}: \phi_{l,j} =\sum_{m=1}^{N_d} r^{(l)}_{m}\psi_{m,j}, \,1\leq l \leq N_t \text{ and } 1\leq j \leq M\ \}.
\]
Since $N_t\ll N_d$, the test space has dimension much smaller than that of the snapshot space ($\text{dim}(V_\text{test}) = N_t M\ll N_d M=\text{dim}(V_\text{snap} )$).
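The random compression of the snapshot columns into the test space can be sketched as follows; the toy snapshot matrix and the dimensions are illustrative assumptions.

```python
import numpy as np

def random_test_space(Psi_snap, Nt, seed=0):
    # Compress the Nd snapshot columns into Nt << Nd test vectors,
    # each a Gaussian random combination of the snapshots.
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((Psi_snap.shape[1], Nt))
    return Psi_snap @ R

Psi_snap = np.eye(10)[:, :6]          # toy snapshot matrix, Nd = 6
Phi_test = random_test_space(Psi_snap, Nt=2)
```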
\subsubsection{Sparse solution in the snapshot space}
After constructing the snapshot and test spaces, we couple the system globally by the IPDG method. That is, we find $u_\text{ms}\in V_{\text{snap}}$ such that
\begin{equation}
\label{eq:dg_app2}
a_\text{DG}(u_\text{ms},v) = (f,v),\,\quad \forall v\in V_\text{test},
\end{equation}
where \begin{equation}
a_{\text{DG}}(u,v)=a_{H}(u,v)-\sum_{E\in\mathcal{E}^{H}}\int_{E}\Big(\average{{\kappa}\nabla{u}\cdot{n}_{E}}\jump{v}+\average{{\kappa}\nabla{v}\cdot{n}_{E}}\jump{u}\Big)+\sum_{E\in\mathcal{E}^{H}}\frac{\gamma}{h}\int_{E}\overline{\kappa}\jump{u} \jump{v} \label{eq:bilinear-ipdg2}
\end{equation}
with
\begin{equation}
a_{H}({u},{v})=\sum_{K\in\mathcal{T}_{H}}a_{H}^{K}(u,v),\quad a_{H}^{K}(u,v)=\int_{K}\kappa\nabla u\cdot\nabla v - \Omega^2 \int_{K}n(x) uv,
\end{equation}
where $\gamma>0$ is a penalty parameter, ${n}_{E}$ is a fixed unit
normal vector defined on the coarse edge $E \in \mathcal{E}^H$. Moreover, $\{ \cdot \}$ and $[\cdot]$ are the average and jump operators defined before.
As the dimension of the test space is smaller than the dimension of the snapshot space, Equation \eqref{eq:dg_app2} does not have a unique solution. To seek a sparse solution, we solve an $l_1$ minimization problem subject to Equation \eqref{eq:dg_app2}; more precisely, we find $u_\text{ms}=\sum_{m,j} U^\text{ms}_{m,j}\psi_{m,j} \in V_\text{snap}$ such that
\begin{equation}
\label{eq:l1-helmholtz}
U^\text{ms} = \operatorname*{argmin}_{U} \norm{U}_{1} \text{ subject to } a_{\text{DG}}\Big(\sum_{m,j} U_{m,j}\psi_{m,j},v\Big)=(f,v),\quad\;\forall v\in V_\text{test}.
\end{equation}
\subsubsection{Cost of computations}
In this section, we discuss the computational cost associated
with the second approach.
In this example,
the cost of the snapshot (plane wave)
calculations is low, as the snapshots are given
analytically.
In addition, the linear system in (\ref{eq:l1-helmholtz}) has dimension $N_t M \times N_d M$,
which is a highly under-determined system since $N_t \ll N_d$.
Thus, one can solve the $l_1$ minimization problem
(\ref{eq:l1-helmholtz}) efficiently
by using, for example, the Bregman method \cite{yin2008bregman}.
Moreover, if adaptivity can be used and the problem requires
very few snapshot vectors or very few test vectors
in all but a few coarse regions,
the efficiency of the proposed approach increases.
In this case, if we denote by $N_t^{(i)}$ and $N_d^{(i)}$
the number of test and snapshot vectors
in the region $i$ (using adaptivity), then the
linear system in (\ref{eq:l1-helmholtz}) has dimension $\left(\sum_{i=1}^M
N_t^{(i)} \right) \times \left(\sum_{i=1}^M N_d^{(i)} \right)$.
Thus, if $\sum_{i=1}^M N_t^{(i)}$ or $\sum_{i=1}^M N_d^{(i)}$ is not large,
one can gain computational efficiency.
We note that this approach is more expensive compared to the first approach,
where we perform the sparsity calculations at the local coarse-grid level.
\section{Numerical results}
\subsection{First Approach. Local-Sparse Snapshot Subspace Approach}
In this section, we present some numerical examples using the first approach to compute the sparse multiscale solution. We consider the domain $D = [0,1]^2$. The coarse mesh size is $H=1/10$ and each coarse grid block is subdivided into a $10\times 10$ fine grid; therefore, the fine mesh size is $h=1/100$.
\subsubsection*{Example 1}
{\bf Setup.}
In our first example, we will consider the source function $f=1$ and the medium parameter $\kappa(\mu) = (1-\mu)\kappa_1 + \mu\kappa_2$,
where $\kappa_1$ and $\kappa_2$ are shown in Figure \ref{fig:decofperm}. We choose the offline values
$\mu_i=0.2,0.4,0.6,0.8$
for computing the online snapshot space, as discussed above.
{\bf Discussions of numerical results.}
In Table \ref{table:sparse_DG Harmonic}, we show the convergence history of our method for $\mu=0.5$,
where we define $\| u\|_{H^1_{\kappa}(D)}^2 = \int_D \kappa |\nabla u|^2$.
The fine-grid solution and the numerical solution are shown in Figure \ref{fig:sol_case1}.
First, we note
that there is an irreducible error due to the use of the snapshot
space, which consists of harmonic functions. This error is of the order of
the coarse mesh size, which is why the error decay is slow (below
$10$ \%) as
we increase the dimension. In these problems,
because the selected value $\mu=0.5$ is close to $\mu=0.4$ and $\mu=0.6$,
we observe that the sparsity is close to $50$ \%, i.e., we only use
snapshots corresponding to nearby values of $\mu$.
We observe that when the snapshot space dimension
is $9600$ (i.e., $24$ randomized solutions per coarse block and per
each value of $\mu_i$), the number of nonzero coefficients in the expansion of the
basis functions (over the whole domain) in terms of the $9600$ snapshot
vectors is $4850$. The optimal expansion for this case
corresponds to selecting $\mu=0.5$
for snapshot construction; in this case, the number of
nonzero coefficients is $2400$ ($24$ per coarse region).
We note that if we consider small dimensional online spaces and
the full snapshot space, then the sparsity is very small.
For example, if we use $12$ randomized solutions per coarse block
and per each value of $\mu_i$ for identifying multiscale basis functions
per each coarse region, then we will be using only $1/2$ of the snapshot
vectors, and thus the sparsity (the number of nonzero coefficients
of the solution in the snapshot space) will be $25$ \%.
We observe
that our numerical examples identify an appropriate sparsity of the solution
space. We expect a more significant gain in sparsity if more parameter
values are used.
{\bf Why we expect sparsity.} Next, we briefly describe why we expect
sparsity in this problem. Because
the snapshot space consists of solutions of
local problems corresponding to
multiple values of $\mu$, we expect that for an online value of $\mu$,
the local (in $K$) coefficient $\kappa(x;\mu)$
is similar to one of the coefficients used for the snapshots. Thus, it is advantageous
to use $l_1$ minimization techniques, which select a small-dimensional
subspace of the snapshot space corresponding to the coefficient that is close
to $\kappa(x;\mu)$ with the online value of $\mu$. Such situations may occur
in various applications. Moreover, in these examples, it is advantageous
to use a local spectral decomposition and avoid a large-scale $l_1$ minimization
problem.
\begin{table}[htb!]
\centering
\caption{Convergence history of the DGMsFEM using oversampled harmonic basis functions. The fine-scale dimension is 12100.
The full snapshot space dimension is 9600.}
\label{table:sparse_DG Harmonic}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{$\text{dim}(V_{\text{on}})$} &
\multicolumn{2}{c|}{ $\|u-u_{\text{ms}} \|$ (\%) } \\
\cline{2-3} {}&
$\hspace*{0.8cm} L^{2}(D) \hspace*{0.8cm}$ &
$\hspace*{0.8cm} H^{1}_\kappa(D) \hspace*{0.8cm}$
\\
\hline\hline
$400$ & $15.05$ & $31.84$ \\
\hline
$600$ & $2.89$ & $13.71$ \\
\hline
$800$ & $1.22$ & $10.20$ \\
\hline
$1000$ & $1.12$ & $9.83$ \\
\hline
$\text{dim}(V_{\text{snap}})$ & $1.07$ & $9.59$ \\
\hline
\end{tabular}
\end{table}
\begin{figure}\centering
\subfigure[$\kappa_1(x)$]{\label{fig:permi}
\includegraphics[width = 0.45\textwidth, keepaspectratio = true]{permi}
}
\subfigure[$\kappa_2(x)$]{\label{fig:permii}
\includegraphics[width = 0.45\textwidth, keepaspectratio = true]{permii}
}
\caption{Decomposition of permeability field}\label{fig:decofperm}
\end{figure}
\begin{figure}[ht]\centering
\includegraphics[scale=0.4]{Case1_sol}\includegraphics[scale=0.4]{Case1_num_sol_8basis}
\protect\caption{Left: fine-grid solution. Right: numerical solution ($8$ basis functions).}
\label{fig:sol_case1}
\end{figure}
\subsubsection*{Example 2}
{\bf Setup.} In our second example, we consider the source function $f$ to be the same as in the previous example. As for the medium parameter, we use a nonlinear
function where the high permeability channel and inclusions move as
we change the parameter. The expression for the medium parameter is
\[
\kappa(\mu)= \kappa_1(x+\mu,y).
\]
In Figure \ref{fig:parameter_case2}, we show the four values of $\mu$ ($\mu=0,0.15,0.3,0.45$) that are used to construct the snapshot space. This is a complicated case, as the
high-conductivity region is not fixed.
{\bf Discussions of numerical results.} In Table \ref{table:sparse_DG Harmonic case2}, we show the convergence history of our method. The fine-grid solution and the numerical solution for $\mu=0.14$ are shown in Figure \ref{fig:sol_case2}. As we see from this table, the error is
larger compared to the previous case. The error decreases as we increase the dimension of the space. However, because we do not span
all (or many) parameter values, the decay is slow.
As for the sparsity, we achieve a better sparsity compared to the previous
example because the online value $\mu=0.14$ is close to one of the
selected offline values, $\mu=0.15$.
In fact, we observe
that when the snapshot space dimension
is $9600$, as before (i.e., $24$ randomized solutions per coarse block and per
each value of $\mu_i$), the number of nonzero coefficients in the expansion of the
basis functions (over the whole domain) in terms of the $9600$ snapshot
vectors is $3700$. The optimal expansion for this case (i.e.,
the case with $24$ randomized solutions per coarse block)
corresponds to selecting $\mu=0.15$
for snapshot construction; in this case, the number of
nonzero coefficients is $2400$ ($24$ per coarse region).
If we consider small dimensional online spaces and
the full snapshot space, then the sparsity is very small, as before.
For example, if we use $6$ randomized solutions per coarse block
and per each value of $\mu_i$ for identifying multiscale basis functions
per each coarse region, then we will be using only $1/4$ of the snapshot
vectors, and thus the sparsity will be $9.5$ \%.
We observe
that our numerical examples identify an appropriate sparsity of the solution
space. Again,
we expect a more significant gain in sparsity if more parameter
values are used.
\begin{table}[htb!]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{$\text{dim}(V_{\text{on}})$} &
\multicolumn{2}{c|}{ $\|u-u^{\text{on}} \|$ (\%) } \\
\cline{2-3} {}&
$\hspace*{0.8cm} L^{2}(D) \hspace*{0.8cm}$ &
$\hspace*{0.8cm} H^{1}_\kappa(D) \hspace*{0.8cm}$
\\
\hline\hline
$800$ & $13.50$ & $31.06$ \\
\hline
$1000$ & $12.02$ & $29.45$ \\
\hline
$1200$ & $9.72$ & $26.66$ \\
\hline
$1400$ & $7.97$ & $24.13$ \\
\hline
$\text{dim}(V_{\text{snap}})$ & $5.78$ & $20.27$ \\
\hline
\end{tabular}
\caption{Convergence history of the DGMsFEM using oversampled harmonic basis functions. The fine-scale dimension is 12100. The full snapshot space dimension is 9600.}
\label{table:sparse_DG Harmonic case2}
\end{table}
\begin{figure}[ht]\centering
\includegraphics[scale=0.4]{kappa_1}\includegraphics[scale=0.4]{kappa_2}
\includegraphics[scale=0.4]{kappa_3}\includegraphics[scale=0.4]{kappa_4}
\protect\caption{Medium parameter. Top-Left: $\kappa(\mu_{1})$, Top-Right: $\kappa(\mu_{2})$,
Bottom-Left: $\kappa(\mu_{3})$, Bottom-Right: $\kappa(\mu_{4})$.}
\label{fig:parameter_case2}
\end{figure}
\begin{figure}[ht]\centering
\includegraphics[scale=0.4]{Case2_sol}\includegraphics[scale=0.4]{Case2_num_sol_14basis}
\protect\caption{Left: fine-grid solution. Right: numerical solution ($14$ basis functions).}
\label{fig:sol_case2}
\end{figure}
\begin{remark}
We have implemented the CG-GMsFEM using the spectral basis approach and observed similar results.
\end{remark}
\subsection{Second Approach. Sparse Snapshot Subspace Approach}
In this section, we show a numerical example using the second approach to directly compute the sparse multiscale solution by $l_1$ minimization.
\subsubsection*{Example 1}
{\bf Setup.} In this example, we consider the domain $D=[0,1]^{2}$,
partitioned into a
coarse grid with grid size $H=1/8$, where each coarse block is subdivided
into $16\times16$ fine square blocks with side length $h=H/16$.
Therefore, the fine mesh size is $h=\cfrac{1}{128}$. We consider $\Omega=2$, $\kappa\equiv1$, and $n(x)$
as shown in Figure \ref{fig:para_n}. We consider a zero source function with Dirichlet boundary condition $g$ given by $g=e^{-i\Omega k\cdot x}$, where $k=(\sin(\cfrac{\pi}{4}),\cos(\cfrac{\pi}{4}))$.
{\bf Discussions of numerical results.}
We will compare our result with the reference solution, which is calculated on the fine grid and shown in Figure
\ref{fig:case2_sol}.
Notice that, within each coarse grid block, the reference solution has a few dominant propagating directions,
which suggests sparsity of the solution in the snapshot space.
In this case, the snapshot space is spanned by local plane waves with dimension $\text{dim}(V_\text{snap})=1280$,
as defined in (\ref{eq:SparseTrig}), with the $k_m$'s distributed uniformly.
The snapshot solution (i.e., if we use all snapshot vectors)
has
$1.63\%$ relative error with respect to the fine-scale solution.
We compare the solutions in Figure \ref{fig:case2_snap}.
As we observe, the snapshot solution is accurate.
Next, we calculate the sparse solution by varying the dimension of the test
space; the test space dimension controls the sparsity of the computed solution.
The numerical solution calculated with $4$ test basis functions per coarse grid block is shown in Figure \ref{fig:case2_test4}. In Table \ref{table:sparse_DG case2}, we show the convergence history of the second approach, where $\|u\|_{H^1(D)}^2 = \int_D |\nabla u|^2$.
We observe that for low-dimensional test spaces, the solution is very
sparse in the snapshot space
(and this sparsity is
about the same as the test space dimension). We increase the dimension of the test space
to achieve a higher accuracy.
For example, for the solution
with sparsity $408$ (i.e., $408$ nonzero coefficients in the span of the $1280$
snapshot vectors), we have a $3.49$ \% $L^2$ error.
{\bf Why we expect sparsity.} Next, we briefly describe why we expect
sparsity in this problem.
The snapshot space consists of plane waves
corresponding to different directions $k_m$ (see (\ref{eq:SparseTrig})). In this problem,
we expect that the solution consists of plane waves with a few
dominant directions, and the $l_1$ minimization identifies these few directions. Note that in this example, we cannot identify a local spectral decomposition.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{para_n}
\protect\caption{Parameter $n(x)$.}
\label{fig:para_n}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[scale=0.4]{n10_ref_sol_r} \includegraphics[scale=0.4]{n10_ref_sol_i}
\protect\caption{Reference solution $u$, Left: Real part of the solution; Right: Imaginary part of the solution.}
\label{fig:case2_sol}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{n10_snap_sol_r} \includegraphics[scale=0.4]{n10_snap_sol_i}
\protect\caption{Snapshot solution (i.e., when using all snapshot vectors), $u$, Left: Real part; Right: Imaginary part.}
\label{fig:case2_snap}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{n10_4test_sol_r} \includegraphics[scale=0.4]{n10_4test_sol_i}
\protect\caption{Numerical solution with $4$ test basis functions, $u$, Left: Real part of the solution. Right: Imaginary part of the solution.}
\label{fig:case2_test4}
\end{figure}
\begin{table}[htb!]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{2}{*}{$\text{dim}(V_{\text{test}})\times 2$} &
\multicolumn{2}{c|}{ $\|u-u_{\text{ms}} \|$ (\%) } &
\multirow{2}{*}{sparsity of the solution}\\
\cline{2-3} {}&
$\hspace*{0.8cm} L^{2}(D) \hspace*{0.8cm}$ &
$\hspace*{0.8cm} H^{1}(D) \hspace*{0.8cm}$ &
\\
\hline\hline
$128$ & $41.10$ & $195.38$ &128 \\
\hline
$256$ & $21.12$ & $45.16$ &252\\
\hline
$384$ & $14.62$ & $32.62$ &344\\
\hline
$512$ & $3.49$ & $14.40$ &408\\
\hline
$\text{dim}(V_{\text{snap}})$ & $1.63$ & $9.88$ & 1280\\
\hline
\end{tabular}
\caption{Convergence history of the DGMsFEM to compute sparse multiscale solution directly. The fine-scale dimension is 16384. The snapshot space dimension is 1280.}
\label{table:sparse_DG case2}
\end{table}
\subsubsection*{Example 2}
{\bf Setup.} In this example,
we consider the domain $D=[0,1]^{2}$ partitioned into a
coarse grid with mesh size $H=1/16$, where each coarse block is subdivided
into $16\times16$ fine square blocks of side length $h=H/16$; therefore, the fine mesh size is $h=\frac{1}{256}$. We consider $\Omega=8$ and $n(x)$
as shown in Figure \ref{fig:para_n2}. The parameter $\kappa$, the source function $f$, and the boundary condition $g$ are the same as in the previous example. Because of the higher value of $\Omega$,
we take the fine grid two times finer.
{\bf Discussion of numerical results.}
We compare the results of the multiscale approach with the reference solution, which is calculated on the fine grid and shown in Figure \ref{fig:case2_sol2}.
Notice that, within each coarse grid block, the reference solution has a few dominant propagating directions,
which suggests sparsity of the solution in the snapshot space.
In this case, the snapshot space is spanned by local plane waves with dimension $\text{dim}(V_\text{snap})=5120$,
as defined in (\ref{eq:SparseTrig}) with $k_i$'s distributed uniformly.
The snapshot solution has a $2.44\%$ relative error and is shown in Figure \ref{fig:case2_snap2}.
As we observe, the snapshot solution is accurate.
Next, we calculate the sparse solution by varying the dimension of the test
space.
The latter defines a sparse solution in the subspace of the test space.
The numerical solution calculated with $4$ test basis functions per coarse grid block is shown in Figure \ref{fig:case2_test5_2}.
In Table \ref{table:sparse_DG case2_2}, we show the convergence history of the second approach.
We observe that for low-dimensional test spaces, the solution is very sparse
in the snapshot space. We increase the dimension of the test space to
achieve a higher accuracy.
For example, the solution with $1958$ non-zero coefficients in the snapshot
space yields a $4.25\%$ $L^2$ error.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{para_n2}
\protect\caption{Parameter $n(x)$.}
\label{fig:para_n2}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[scale=0.4]{n10_w8_ref_sol_r} \includegraphics[scale=0.4]{n10_w8_ref_sol_i}
\protect\caption{Reference solution $u$, Left: Real part of the solution; Right: Imaginary part of the solution.}
\label{fig:case2_sol2}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{n10_w8_snap_sol_r} \includegraphics[scale=0.4]{n10_w8_snap_sol_i}
\protect\caption{Snapshot solution (i.e., when using all snapshot vectors), $u$, Left: Real part; Right: Imaginary part.}
\label{fig:case2_snap2}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{n10_w8_5test_sol_r} \includegraphics[scale=0.4]{n10_w8_5test_sol_i}
\protect\caption{Numerical solution with $4$ test basis functions, $u$, Left: Real part of the solution. Right: Imaginary part of the solution.}
\label{fig:case2_test5_2}
\end{figure}
\begin{table}[htb!]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{2}{*}{$\text{dim}(V_{\text{test}})\times 2$} &
\multicolumn{2}{c|}{ $\|u-u_{\text{ms}} \|$ (\%) } &
\multirow{2}{*}{sparsity of the solution}\\
\cline{2-3} {}&
$\hspace*{0.8cm} L^{2}(D) \hspace*{0.8cm}$ &
$\hspace*{0.8cm} H^{1}(D) \hspace*{0.8cm}$ &
\\
\hline\hline
$512$ & $79.12$ & $176.91$ &500 \\
\hline
$1024$ & $74.07$ & $103.93$ &913\\
\hline
$1536$ & $40.69$ & $51.79$ &1294\\
\hline
$2048$ & $17.27$ & $23.74$ &1591\\
\hline
$2560$ & $4.25$ & $8.63$ &1958\\
\hline
$\text{dim}(V_{\text{snap}})$ & $2.44$ & $6.18$ & 5120\\
\hline
\end{tabular}
\caption{Convergence history of the DGMsFEM to compute sparse multiscale solution directly. The fine-scale dimension is 66049. The snapshot space dimension is 5120.}
\label{table:sparse_DG case2_2}
\end{table}
\section{Conclusions}
\label{sec:conclusion}
\subsection{Summary of the results}
In the paper, we develop approaches to identify sparse multiscale basis functions
in the snapshot space within GMsFEM.
The snapshot spaces are constructed in a special way that
allows sparsity for the solution. We consider two approaches. In the first approach,
local multiscale basis functions are constructed, which are sparse in the snapshot
space. These multiscale basis functions are constructed by identifying dominant
modes in the snapshot space using $l_1$ minimization techniques.
As for the application, we consider parameter-dependent multiscale problems.
In the second approach, we apply $l_1$ minimization techniques directly to solve
the global problem. This approach is more expensive as it directly deals with
a large snapshot space. As for the application, we consider Helmholtz equations.
For both approaches and their respective applications, we present numerical
results and discuss computational savings. Our numerical examples are
simplistic and are designed to convey the main idea of the proposed approach.
\subsection{Sparsity assumption}
Both approaches assume that the solution is sparse in the snapshot space.
The latter requires special snapshot spaces, which can yield this sparsity.
For example, for the local snapshot vectors considered in the paper, this requires
identifying boundary conditions for the snapshot solutions that can sparsely
represent
the solution. This may not be easy in general, though in some examples it can still
be achieved. Besides the examples presented in the paper, one can consider
scale-separation cases and use piecewise linear boundary conditions.
Whether a general framework for constructing these snapshot vectors is possible
remains an open question.
\subsection{Adaptivity and online basis functions}
In general, once offline spaces are identified,
we can use adaptivity \cite{chung2014adaptive1, chung2014adaptive2}
and online basis functions
\cite{chung2015residual,chung2015online} to achieve a small error.
The adaptivity is accomplished by identifying the regions
with large residuals and enriching the spaces in those regions.
In our earlier works \cite{chung2014adaptive1, chung2014adaptive2},
we have shown that one needs to use some special
error indicators. For the first approach, we can use the ``next''
eigenvector obtained from local eigenvalue decomposition
to construct multiscale basis functions. For the second
approach, one can increase the test space for additional
multiscale basis functions.
In the regions with the largest residuals,
we can also use online basis functions to reduce the error substantially.
Online basis functions are computed locally and identified as
the localized basis functions that give the largest reduction
in the error. These basis functions involve solving local
problems with a residual on the right hand side (see \cite{chung2015online} for
online basis functions for DG).
In this paper, we can apply adaptivity and online basis functions
as discussed in \cite{chung2014adaptive2, chung2015online}.
For parameter-dependent problems,
one can consider identifying online basis functions
for a set of $\mu_j$'s following the analysis in \cite{chung2015online}.
This will give a
local eigenvalue problem. Another important problem for our future
consideration is to identify the values of
$\mu_1$,..., $\mu_J$ by adaptivity.
\bibliographystyle{plain}
\section{Introduction}\label{Sec_Intro}
Radio-Frequency (RF) waves can be utilized for transmission of both information and power simultaneously. RF transmissions of these quantities have traditionally been treated separately. Currently, the community is experiencing a paradigm shift in wireless network design, namely unifying transmission of information and power, so as to make the best use of the RF spectrum and radiation, as well as the network infrastructure for the dual purpose of communicating and energizing \cite{Clerckx_Zhang_Schober_Wing_Kim_Vincent_arxiv}. This has led to a growing attention in the emerging area of Simultaneous Wireless Information and Power Transfer (SWIPT). As one of the primary works in the information theory literature, Varshney studied SWIPT in \cite{Varshney_2008}, in which he characterized the capacity-power function for a point-to-point discrete memoryless channel. Recent results in the literature have also revealed that in many scenarios, there is a tradeoff between information rate and delivered power. Just to name a few, frequency-selective channel \cite{Grover_Sahai_2010}, MIMO broadcasting \cite{Zhang_Keong_2013}, interference channel \cite{Park_Clerckx_2013}.
The main challenge in Wireless Power Transfer (WPT) is to increase the Direct-Current (DC) power at the output of the harvester without increasing the transmit power. The harvester, known as a rectenna, is composed of an antenna followed by a rectifier.\footnote{In the literature, the rectifier is usually considered as a nonlinear device (usually a diode) followed by a low-pass filter. The diode is the main source of nonlinearity induced in the system.} In \cite{Trotter_Griffin_Durgin_2009,Clerckx_Bayguzina_2016}, it is shown that the RF-to-DC conversion efficiency is a function of the rectenna's structure, as well as its input waveform (power and shape). Accordingly, in order to maximize the rectenna's output power, a systematic waveform design is crucial to make the best use of the available RF spectrum \cite{Clerckx_Bayguzina_2016}. In \cite{Clerckx_Bayguzina_2016}, an analytical model for the rectenna's output is introduced via the Taylor expansion of the diode characteristic function, and a systematic design for multisine waveforms is derived. The nonlinear model and the design of the waveform were validated using circuit simulations in \cite{Clerckx_Bayguzina_2016, Clerckx_Bayguzina_2017} and recently confirmed through prototyping and experimentation in \cite{Kim_Clerckx_Mitcheson}. Those works also confirm the inaccuracy of a linear dependence of the rectifier's output power on its input power\footnote{The linear model has the consequence that the RF-to-DC conversion efficiency of the energy harvester (EH) is constant and independent of the harvester's input waveform (power and shape) \cite{Zhang_Keong_2013,Zeng_Clerckx_Zhang2017}.}. As one of the main conclusions, it is shown that the rectifier's nonlinearity is beneficial to the system performance and has a significant impact on the design of signals and systems involving wireless power.
The SWIPT literature has so far, to a great extent, ignored the nonlinearity of the EH and has focused on the linear model of the rectifier, e.g., \cite{Grover_Sahai_2010,Zhang_Keong_2013,Park_Clerckx_2013}. However, it is recognized that considering the harvester nonlinearity changes the design of SWIPT at the physical layer and medium access control layer \cite{Clerckx_Zhang_Schober_Wing_Kim_Vincent_arxiv}. Nonlinearity leads to various energy harvester models \cite{Clerckx_Bayguzina_2016,Boshkovska_Ng_Zlatanov_2017,Alevizos_Bletsas_2018}, new designs of modulation and input distribution \cite{Varasteh_Rassouli_Clerckx_ITW_2017,Varasteh_Rassouli_Clerckx_arxiv,Bayguzina_Clerckx_2018}, waveform \cite{Clerckx_2016}, RF spectrum use \cite{Clerckx_2016}, transmitter and
receiver architecture \cite{Clerckx_2016,Varasteh_Rassouli_Clerckx_arxiv,Kang_Kim_Kim_2018} and resource allocation \cite{Boshkovska_Ng_Zlatanov_2017,Xiong_Wang_2017,Xu_Ozcelikkale_McKelvey_2017}.
Of particular interest is the role played by nonlinearity on SWIPT signalling in single-carrier and multi-carrier transmissions \cite{Clerckx_2016,Varasteh_Rassouli_Clerckx_ITW_2017,Varasteh_Rassouli_Clerckx_arxiv,Clerckx_Zhang_Schober_Wing_Kim_Vincent_arxiv,Morsi_Jamali}. In multi-carrier transmissions, it is shown in \cite{Clerckx_2016} that inputs modulated according to the Circular Symmetric Complex Gaussian (CSCG) distributions, improve the delivered power compared to an unmodulated continuous waves. Furthermore, in \cite{Varasteh_Rassouli_Clerckx_ITW_2017}, it is shown that for an AWGN channel with complex Gaussian inputs under average power and delivered power constraints, depending on the receiver demand on information and power, the power allocation between real and imaginary components is asymmetric. As an extreme point, when the receiver merely demands for power requirements, all the transmitter power budget is allocated to either real or imaginary components. In \cite{Varasteh_Rassouli_Clerckx_arxiv,Morsi_Jamali}, it is shown that the capacity achieving input distribution of an AWGN channel under average, peak and delivered power constraints is discrete in amplitude with a finite number of mass-points and with a uniformly distributed independent phase. In multi-carrier transmission, however, it is shown in \cite{Clerckx_2016} that non-zero mean Gaussian input distributions lead to an enlarged Rate-Power (RP) region compared to CSCG input distributions. This highlights that the choice of a suitable input distribution (and therefore modulation and waveform) for SWIPT is affected by the EH nonlinearity and motivates the study of the capacity of AWGN channels under nonlinear power constraints.
Our interests in this paper lie in the apparent difference in input distribution for single-carrier and multi-carrier transmission, that is single-carrier favors asymmetric inputs \cite{Varasteh_Rassouli_Clerckx_ITW_2017}, while multi-carrier favors non-zero mean inputs \cite{Clerckx_2016}. We aim at tackling the design of input distribution for SWIPT under nonlinear constraints using a unified framework based on non-zero mean and asymmetric distributions. To that end, we study SWIPT in a multi-carrier setting subject to nonlinearities of the EH. We consider a frequency-selective channel subject to transmit average power and receiver delivered power constraints. We mainly focus on complex Gaussian inputs, where inputs of each real subchannel are independent of each other and on each real subchannel the inputs are independent and identically distributed (iid).
We aim at reconciling the two main observations of the previous paragraph: namely, that asymmetric Gaussian inputs outperform CSCG inputs in single-carrier transmission \cite{Varasteh_Rassouli_Clerckx_ITW_2017}, and that non-zero mean Gaussian inputs outperform CSCG inputs in multi-carrier transmission \cite{Clerckx_2016}. The contributions of this paper are listed below.
\begin{itemize}
\item First, taking advantage of the small-signal approximation for the rectenna's nonlinear output introduced in \cite{Clerckx_2016}, we obtain the general form of the delivered power in terms of system baseband parameters. It is shown that, first, unlike the linear model, the delivered power at the receiver is dependent on higher moments of the channel input, such as the first, second and fourth moments. Second, the amount of delivered power on each subchannel is dependent on its adjacent subchannels.
\item Assuming non-zero mean Gaussian inputs, an optimization algorithm is introduced. Numerical optimizations reveal that for the scenarios where the receiver is interested in both information and power simultaneously, the optimal inputs have non-zero mean and non-zero variance. Two important observations are made: first, that allowing the input to be non-zero mean improves the rate-power region significantly, and second, that for receiver demands which concern both information and power, the power allocation between the real and imaginary components of each complex subchannel is asymmetric in general. These results can be thought of as a generalization of the results in \cite{Varasteh_Rassouli_Clerckx_ITW_2017} and \cite{Clerckx_2016}, where asymmetric power allocation (in flat fading channels) and non-zero mean inputs (in frequency-selective channels) are proposed, respectively, in order to achieve larger RP regions.
\item As a special scenario, we consider optimized zero mean Gaussian inputs under the assumption of a nonlinear EH. For this case, optimality conditions are derived. It is shown that (similar to non-zero mean inputs) under the nonlinear assumption for the EH, the power allocation on each subchannel is dependent on the other subchannels as well. Forcing the optimality conditions to be satisfied (numerically), it is observed that a larger RP region is obtained compared to the optimal zero mean inputs under the linear assumption for the EH.
\end{itemize}
\textit{Organization}: In Section \ref{Sec:sys_model}, we introduce the system model. In Section \ref{Sec:Prob}, the studied problem is introduced. In Section \ref{Sec:Power}, the delivered power at the output of the EH is obtained in terms of system baseband parameters. In Section \ref{Sec:RP}, the rate-power maximization over frequency-selective channels with non-zero mean Gaussian inputs is considered. As a special case, the optimality conditions for power allocation on different subchannels are obtained for zero mean Gaussian inputs. In Section \ref{Sec_Numerical}, WPT and SWIPT optimization for the studied problem is introduced and numerical results are presented. We conclude the paper in Section \ref{Sec:Conc}, and the proofs of some of the results are provided in the Appendices at the end of the paper.
\textit{Notation}: Throughout this paper, random variables and their realizations are represented by capital and small letters, respectively. $\mathbb{E}[Y(t)]$ and $\mathcal{E}[Y(t)]$ denote the expectation over statistical randomness and the average over time of the process $Y(t)$, respectively, i.e.,
\begin{align}
\mathbb{E}[Y(t)]&=\int_{\infty}^{\infty} y(t)dF_{Y(t)}(y),\\
\mathcal{E}[Y(t)]&=\lim_{T\rightarrow \infty}\frac{1}{T}\int_{-T/2}^{T/2} Y(t)dt,
\end{align}
where $F_{Y(t)}(y)$ denotes the Cumulative Distribution Function (CDF) of the process $Y(t)$. $\otimes$ denotes circular convolution. The standard CSCG distribution is denoted by $\mathcal{CN}(0,1)$. Complex conjugate of a complex number $c$ is denoted by $c^{*}$. $\Re\{\cdot\}$ and $\Im\{\cdot\}$ are real and imaginary operators, respectively. For a complex random variable $V$, we denote $\mathbb{E}[|V|^4]=Q$, $\mathbb{E}[|V|^2]=P$, $\mathbb{E}[V^2]=\bar{P}$, $\mathbb{E}[V]=\mu$ and $\mathbb{E}[|V-\mu|^2]=\sigma^2$. The moments corresponding to real and imaginary components of $V$ are represented by subscripts $r$ and $i$, respectively, i.e., $\mathbb{E}[\Re\{V\}^4]=Q_{r}$, $\mathbb{E}[\Re\{V\}^2]=P_{r}$, $\mathbb{E}[\Re\{V\}]=\mu_{r}$ and $\mathbb{E}[|\Re\{V\}-\mu_r|^2]=\sigma_r^2$ and similarly for imaginary counterparts. $(\cdot)_N$ denotes remainder of the argument with respect to $N$. $\delta_k=1$ for $k=0$ and zero elsewhere. $\mathrm{sinc}(t)=\frac{\sin(\pi t)}{\pi t}$ and $\delta^{l}_{k}\triangleq 1-\delta_{l-k}$. $f^x$ denotes the partial derivative of the function $f$ with respect to $x$, i.e., $\frac{\partial f}{\partial x}$. The vector $[V_0,\ldots,V_{N-1}]$ is represented by $\pmb{V}^N$. Throughout the paper, complex subchannels and their real/imaginary components are referred to as c-subchannels and r-subchannels, respectively.
\section{System Model}\label{Sec:sys_model}
Considering a point-to-point $L$-tap frequency-selective AWGN channel, in the following, we explain the operation of the transmitter and the receiver.
\subsection{Transmitter}
The transmitter utilizes Orthogonal Frequency Division Multiplexing (OFDM) to transmit information and power over the channel. Let $\pmb{V}^N$ denote the modulated Information-Power (IP) complex symbols over $N$ sub-carriers (c-subchannels), occupying an overall bandwidth of $f_w$ Hz and being uniformly separated by $f_w/N$ Hz. The Inverse Discrete Fourier Transform (IDFT)\footnote{In this paper we consider $X_k=\frac{1}{\sqrt{N}}\sum_{n=0}^{N-1}x[n]e^{-j\frac{2\pi nk}{N}}$ and $x[n]=\frac{1}{\sqrt{N}}\sum_{k=0}^{N-1}X_ke^{j\frac{2\pi nk}{N}}$ for the DFT and IDFT definitions, respectively.} is applied to the IP symbols $\pmb{V}^N$ and a Cyclic Prefix (CP) is added to produce the time domain signal $X[n]$ given by
\begin{align}
X[n+L] &= \frac{1}{\sqrt{N}}\sum_{k=0}^{N-1}V_ke^{\frac{j2\pi nk}{N}}, ~n=0,...,N-1.
\end{align}
Next, the signal
\begin{align}\label{eqn_1}
X(t)=\sum_{n=0}^{N+L-1}X[n]\text{sinc}(f_wt-n),
\end{align}
is upconverted to the carrier frequency $f_c$ and is transmitted over the channel.
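This transmitter chain can be sketched numerically as follows (toy symbol values; the sinc pulse shaping of (\ref{eqn_1}) and the upconversion are omitted): the IP symbols are passed through the IDFT of the paper's convention and a cyclic prefix is prepended.

```python
import numpy as np

# Minimal sketch of the baseband transmitter (toy values; pulse shaping and
# upconversion omitted): IDFT per the paper's convention, then a cyclic
# prefix of length L is prepended, giving X[n] for n = 0, ..., N+L-1.
def ofdm_modulate(V, L):
    N = len(V)
    k = np.arange(N)
    # X[n+L] = (1/sqrt(N)) * sum_k V_k e^{j 2 pi n k / N},  n = 0, ..., N-1
    x = np.array([(V * np.exp(2j * np.pi * n * k / N)).sum()
                  for n in range(N)]) / np.sqrt(N)
    return np.concatenate([x[-L:], x])      # CP = last L time-domain samples

V = np.array([1 + 1j, -1.0, 0.5j, 2.0, -1j])  # N = 5 example IP symbols
X = ofdm_modulate(V, L=2)                      # length N + L = 7
```

Note that with the $1/\sqrt{N}$ convention above, the data block equals $\sqrt{N}$ times NumPy's `ifft` of the symbol vector.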
\subsection{Receiver}
The filtered received RF waveform at the receiver is modelled as
\begin{align}
Y_{\text{rf}}(t) &=\sqrt{2}\Re\left\{Y(t)e^{j2\pi f_ct}\right\},
\end{align}
where $Y(t)$ is the baseband equivalent of the channel output with bandwidth $[-f_w/2,f_w/2]$ Hz. In order to guarantee narrowband transmission, we assume that $f_c\gg2f_w$.
\textit{Delivered Power}: The power of the signal $Y_{\text{rf}}(t)$ (denoted by $P_{\text{dc}}$) is harvested using a rectenna. The delivered power is modelled as
\begin{align}\label{eqn_2}
P_{\text{dc}}=\mathbb{E}\mathcal{E}[k_2Y_{\text{rf}}(t)^2 + k_4 Y_{\text{rf}}(t)^4],
\end{align}
where $k_2$ and $k_4$ are constants\footnote{The reader is referred to \cite{Clerckx_Bayguzina_2016} for detailed explanations of the model. Also note that according to \cite{Clerckx_2016}, rectenna's output is in the form of current with unit Ampere. However, since power is proportional to current, with abuse of notation, we refer to the term in (\ref{eqn_2}) as power.}.
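As a quick numerical illustration of this model (the constants $k_2$, $k_4$ and the waveform below are assumed toy values, not values from the paper), the DC power can be estimated by a time average of $k_2 y^2 + k_4 y^4$ over samples of the received waveform.

```python
import numpy as np

# Sketch of the nonlinear harvester model (assumed toy constants k2, k4):
# P_dc is estimated as a time average of k2*y^2 + k4*y^4.
k2, k4 = 0.0034, 0.3829                 # example diode-dependent constants

def p_dc(y_rf):
    return k2 * np.mean(y_rf**2) + k4 * np.mean(y_rf**4)

t = np.linspace(0.0, 1.0, 10_000, endpoint=False)
y = np.cos(2 * np.pi * 50 * t)          # toy received RF waveform
# For a unit-amplitude tone the time averages are E[y^2] = 1/2 and
# E[y^4] = 3/8, so p_dc(y) = k2/2 + 3*k4/8.
```

The fourth-order term is what makes the harvested power depend on the waveform shape (e.g., its fourth moment), not only on its average power.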
\textit{Information Receiver}: The signal $Y_{\text{rf}}(t)$ is downconverted and sampled with sampling frequency $f_w$ producing $Y[m]\triangleq Y(m/f_w)$ given by
\begin{align}\label{eqn_3}
Y[m]&=\sum\limits_{d=0}^{L-1} \tilde{h}_dX[m-d]+Z[m],~m=L,\ldots, N+L-1,
\end{align}
where $Z[m]$ represents a sample of the additive noise at time $t=m/f_w$. $\tilde{h}_d$ is the $d^{\text{th}}$ c-subchannel tap and $X[m-d]$ is a sample of the signal $X(t)$ given in (\ref{eqn_1}) at time $(m-d)/f_w$.
Considering one OFDM block, the receiver discards the CP and converts the $N$ symbols back to the frequency domain by applying DFT on (\ref{eqn_3}), such that
\begin{align}\label{eqn_4}
Y_{l}&= h_lV_l+W_l,~l=0,\cdots,N-1,
\end{align}
where $Y_{l},l=0,\cdots,N-1$ is the DFT of $Y[m],m=L,...,L+N-1$. $h_l,~V_l$ and $W_l$ are DFTs of the extended channel vector $\tilde{\mathbf{h}}\triangleq [\tilde{h}_0,\cdots,\tilde{h}_{L-1},0,\cdots,0]_{1\times N}$, symbols $X[m],m=L,...,L+N-1$ (equivalently, samples of $X(t)$ at times $m/f_w$) and noise samples $Z[m],m=L,...,L+N-1$, respectively. That is,
\begin{subequations}
\begin{align}
h_l&=\frac{1}{\sqrt{N}}\sum_{n=0}^{N-1}\tilde{\mathbf{h}}\left[n\right]e^{-\frac{j2\pi nl}{N}}, ~~l=0,\cdots,N-1,\\
V_l &= \frac{1}{\sqrt{N}}\sum_{n=L}^{L+N-1}X\left[n\right]e^{-\frac{j2\pi nl}{N}}, ~~l=0,\cdots,N-1,
\end{align}
\end{subequations}
and similarly for $W_l,~l=0,\cdots,N-1$. We assume that $W_l,~l=0,\cdots,N-1$, are iid CSCG random variables with variance $\sigma_w^2$, i.e., $W_l\sim \mathcal{CN}(0,\sigma_w^2)$ for $l=0,\cdots,N-1$. The channel frequency response is assumed to be known at the transmitter.
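The per-subcarrier model (\ref{eqn_4}) can be verified with a short sketch (random toy channel and symbols; here NumPy's unnormalized DFT of the taps is used as the channel response, which absorbs the $1/\sqrt{N}$ normalization factors): with a CP at least as long as the channel, the linear convolution acts circularly on the data block, so the DFT diagonalizes the channel.

```python
import numpy as np

# Sketch (assumed toy values): with a cyclic prefix at least as long as the
# L-tap channel, discarding the CP and taking the DFT turns the channel into
# per-subcarrier gains, Y_l = h_l V_l (noiseless case shown).
N, L = 8, 3
rng = np.random.default_rng(1)
V = rng.normal(size=N) + 1j * rng.normal(size=N)        # frequency-domain symbols
h_taps = rng.normal(size=L) + 1j * rng.normal(size=L)   # channel taps

x = np.sqrt(N) * np.fft.ifft(V)          # time-domain block (paper's scaling)
tx = np.concatenate([x[-L:], x])         # prepend cyclic prefix
rx = np.convolve(tx, h_taps)[: N + L]    # linear L-tap channel, no noise
y = rx[L:]                               # discard CP -> N samples

Y = np.fft.fft(y) / np.sqrt(N)
h = np.fft.fft(h_taps, N)                # channel frequency response
diagonalized = np.allclose(Y, h * V)     # Y_l = h_l V_l holds
```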
\section{Problem statement}\label{Sec:Prob}
We aim at maximizing the rate of transmitted information, as well as the amount of delivered power at the receiver, given that the input on each c-subchannel $l=0,\ldots,N-1$ is distributed according to a non-zero mean complex Gaussian distribution. We also assume that in each c-subchannel the real and imaginary components are independent. Accordingly, the optimization problem consists of the maximization of the mutual information between the channel input $\pmb{V}^N$ and the channel output $\pmb{Y}^N$ (see (\ref{eqn_4})) under an average power constraint at the transmitter and a delivered power constraint at the receiver, such that $V_{lr}\sim \mathcal{N}(\mu_{lr},P_{lr}-\mu_{lr}^2)$ and $V_{li}\sim \mathcal{N}(\mu_{li},P_{li}-\mu_{li}^2)$ for $l=0,\cdots,N-1$. Hence, we have
\begin{equation}\label{eqn_21}
\begin{aligned}
& \underset{ \mu_{lr},\mu_{li},P_{lr},P_{li},~l=0,\ldots N-1}{\text{max}}
& & I\left(\pmb{V}^N;\pmb{Y}^N\right) \\
& \text{s.t.}
& & \left\{\begin{array}{l}
\sum_{l=0}^{N-1}P_l\leq P_a \\
P_{\text{dc}}\geq P_d
\end{array}\right.,
\end{aligned}
\end{equation}
where $P_l=P_{lr}+P_{li}$ and $\mu_l=\mu_{lr}+j\mu_{li}$ are the average power and mean of the $l^{\text{th}}$ c-subchannel, respectively. $P_a$ is the available power budget at the transmitter. $P_d$ is the minimum amount of average delivered power at the receiver. Maximization is taken over all the means $\mu_{lr},~\mu_{li}$ and powers $P_{lr},~P_{li}$ ($l=0,\ldots,N-1$) of independent complex Gaussian inputs $\pmb{V}^N$, such that the constraints are satisfied.
\section{Power metric in terms of channel baseband parameters}\label{Sec:Power}
In this section, we study the delivered power at the receiver based on the model in (\ref{eqn_2}). Note that most of the communication processes, such as coding/decoding, modulation/demodulation, etc., are done at baseband. Therefore, from a communication system design point of view, it is most preferable to have a baseband-equivalent representation of the system. Hence, in the following proposition, we derive the delivered power $P_{\text{dc}}$ at the receiver in terms of system baseband parameters. For brevity of representation, we neglect the delivered power from the CP, and we assume that $N$ is odd (the calculations can be easily extended to even values of $N$ following similar steps). The following proposition expresses the delivered power $P_{\text{dc}}$ in (\ref{eqn_2}) in terms of the channel and its input baseband parameters.
\begin{prop}\label{Lemma1}
Given that the inputs on each r-subchannel are iid and that the inputs on different r-subchannels are independent, the delivered power $P_{\text{dc}}$ at the receiver can be expressed in terms of the channel baseband parameters and statistics of the channel input distribution as
\begin{align}\nonumber
P_{\text{dc}}&=\sum\limits_{l=0}^{N-1}\Bigg\{\alpha_lQ_l+\Big(\beta_l+g(P_l)\Big)P_l+\eta+\Re\bigg\{\bar{P}_l\sum_{k=1}^{\frac{N-1}{2}}\mu_{(l+k)_N}^*\mu_{(l-k)_N}^*\Phi_{l,k}\bigg\}\\\label{eqn_39}
&+\delta_{(N-1)}^l\cdot\sum_{k=1}^{\frac{N-1}{2}}\!\!\!\sum_{\substack{m=l+1\\m\neq(l+k)_N\\m\neq (l-k)_N}}^{N-1}\!\!\!\!\!\Re\Big\{\mu_{l}\mu_{m}\mu_{(l-k)_N}^*\mu_{(m-k)_N}^* \Psi_{l,m,k}\Big\}\Bigg\} \triangleq\sum\limits_{l=0}^{N-1}f_{ib}(Q_l,P_l,\bar{P}_l,\mu_l,h_l,N),
\end{align}
where $N$ is odd and $\alpha_l,~\beta_l,~\gamma_{l,m}$, $\eta$, $\Phi_{l,k}$, $\Psi_{l,n,k}$ and $g(P_l)$ are defined as
\begin{subequations}
\begin{align}\label{eqn_5}
\alpha_l&=\frac{3k_4}{4N}(|h_l|^4+|h_l^u|^4),\\\label{eqn_6}
\beta_l&=k_2|h_l|^2+3k_4\sigma_w^2\left(|h_l|^2+|h_l^u|^2\right), \\\label{eqn_7}
\gamma_{m,l}&=\frac{3k_4}{N}(|h_l|^2|h_m|^2+|h_l^u|^2|h_m^u|^2),\\
\eta&=k_2\sigma_w^2+3Nk_4\sigma_w^4,\\
\Phi_{l,k}&=\frac{3k_4}{2N}\big(h_l^2h_{(l+k)_N}^* h_{(l-k)_N}^*+{h_l^u}^2h_{(l+k)_N}^{u*}h_{(l-k)_N}^{u*}\big),\\
\Psi_{l,m,k}&=\frac{3k_4}{N}\big(h_lh_m^{*}h_{(l-k)_N}^*h_{(m-k)_N}+h_l^uh_m^{u*}h_{(l-k)_N}^{u*}h_{(m-k)_N}^{u}\big),\\
g(P_l)&=\delta^l_{N-1}\sum_{m=l+1}^{N-1}\gamma_{m,l}P_m,
\end{align}
\end{subequations}
with $h_l^u,~l=0,\cdots,N-1$ being the samples of the channel at times between two consecutive information samples (for more details see Appendix~\ref{Sec:Channel_u}).
\end{prop}
\textit{Proof}: See Appendix~\ref{app:2}.
\begin{rem}\label{rem_1}
We note that, as also mentioned in Proposition \ref{Lemma1}, the delivered power expression is based on the assumption that the inputs on different r-subchannels are independent, as well as being iid on each r-subchannel. Obtaining a closed-form expression for the delivered power $P_{\text{dc}}$ at the receiver when the inputs on different r-subchannels are not iid is cumbersome. This is due to the fact that the fourth moment of the received signal $Y_{\text{rf}}(t)$ creates dependencies among the inputs of different r-subchannels. As another point, we note that in the calculations for the delivered power in Proposition \ref{Lemma1}, we neglect the delivered power from the CP. This, along with the aforementioned assumptions on the input distributions, implies that the actual delivered power (based on the model introduced in (\ref{eqn_2})) is larger than (\ref{eqn_39}). Indeed, the subscript $ib$ in (\ref{eqn_39}) stands for inner bound, in order to emphasize this point.
\end{rem}
\begin{rem}
Note that similar results in \cite{Varasteh_Rassouli_Clerckx_ITW_2017} are reported for single-carrier AWGN channel, where the delivered power is dependent on higher moments of the channel input. In \cite{Clerckx_2016}, superposition of deterministic and CSCG signals are assumed for multi-carrier transmissions with the assumption that the receiver utilizes power splitter. Part of the signal is used for power transfer and the other part is used for information transmissions\footnote{We note that the model considered for signal transmission in this paper is different from the multi-subband orthogonal transmission considered in \cite{Clerckx_2016}.}. In comparison to the results in \cite{Clerckx_2016}, we note that, here, the channel input is generalized in the sense that it allows asymmetric power allocation across all r-subchannels. Also, at the receiver, no power splitter is assumed\footnote{This scenario considered in this paper can be considered as an optimistic upperbound on the system performance, since (so far) in practice, it is not possible to decode information and harvest power from the same signal, jointly.}.
\end{rem}
\section{Rate-Power Maximization Over Gaussian Inputs}\label{Sec:RP}
In this section, we consider the SWIPT optimization problem in (\ref{eqn_21}). We obtain the optimality conditions in their general form (assuming non-zero mean inputs) to be used in Section \ref{Sec_Numerical} in order to obtain (locally) optimal power allocations for different r-subchannels. In order to better understand the problem, the optimality conditions are specialized for zero mean Gaussian inputs, analytically.
\subsection{SWIPT with non-zero mean complex Gaussian inputs}
Assuming that the inputs of c-subchannels $\pmb{V}^N$ are in general with non-zero mean, the problem in (\ref{eqn_21}) can be rewritten as follows
\begin{equation}\label{eqn_23}
\begin{aligned}
& \underset{ \substack{P_{lr},P_{li},\mu_{lr},\mu_{li}\\l=0,...,N-1}}{\text{max}}
& & \!\!\sum\limits_{l=0}^{N-1}c_0\big(\log (1+a_l\sigma_{lr}^2)+\log(1+a_l\sigma_{li}^2)\big)\\
& \text{s.t.}
& & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\left\{\begin{array}{l}
\sum_{l=0}^{N-1}P_l\leq P_a, \\
\sum_{l=0}^{N-1} f_{ib}(P_l,\bar{P_l},\mu_l,h_l,N)\geq P_d,\\
\sigma_{lr}^2\geq 0 ,\sigma_{li}^2\geq 0,~l=0,...,N-1
\end{array}\right.,
\end{aligned}
\end{equation}
where $c_0=\frac{f_w}{2N}$, $a_l=\frac{2N|h_l|^2}{f_w\sigma_w^2}$, $\sigma_{lr}^2=P_{lr}-\mu_{lr}^2$, $\sigma_{li}^2=P_{li}-\mu_{li}^2$. Note that for a Gaussian distribution in the function $f_{ib}(\cdot)$, we have $Q_l=3(P_{lr}^2+P_{li}^2)-2(\mu_{li}^4+\mu_{lr}^4)+2P_{lr}P_{li}$, $\bar{P}_l=P_{lr}-P_{li}+2j\mu_{lr}\mu_{li}$.
In Section \ref{Sec_Numerical}, we consider the numerical optimization of problem (\ref{eqn_23}) by considering its Lagrangian\footnote{The problem in (\ref{eqn_23}) is not convex, and any solution obtained from solving the dual problem is in general a local optimum.}. The KKT conditions for problem (\ref{eqn_23}) are detailed in Appendix~\ref{app_KKT}. As can be seen from the KKT conditions in Appendix~\ref{app_KKT}, unfortunately, it is cumbersome to derive analytical results on the optimal solution of problem (\ref{eqn_23}). However, it can be shown that at the optimal solution, the average power constraint is satisfied with equality (see Appendix~\ref{app_KKT} for the details).
As explained in Section \ref{Sec_Numerical}, numerical results reveal that non-zero mean asymmetric complex Gaussian inputs result in a larger RP region compared to their zero mean counterparts. However, in order to better understand the problem in its general form (assuming non-zero mean), it is beneficial to look into the optimality conditions for zero mean inputs.
\subsection{SWIPT with zero mean complex Gaussian inputs}
In the following, we obtain the optimality conditions for power allocation among different r-subchannels, when the input distributions are complex Gaussian with zero mean and with independent components.
\begin{lem}\label{Lem_48}
If $\pmb{\mu}^N=\pmb{0}^N$, the optimal power allocation $\pmb{P}_{r}^{N^\star},~\pmb{P}_{i}^{N^\star}$ for problem (\ref{eqn_23}) satisfies the average power and delivered power constraints with equality, i.e.,
\begin{subequations}\label{eqn_50}
\begin{align}
\sum_{l=0}^{N-1}P_l^{\star}&= P_a,\\\label{eqn_47}
\sum_{l=0}^{N-1} f_{ib}(P^{\star}_l,\bar{P^{\star}_l},0,h_l,N)&= P_d,
\end{align}
\end{subequations}
with $f_{ib}(P^{\star}_l,\bar{P^{\star}_l},0,h_l,N)=\alpha_lQ^{\star}_l+\Big(\beta_l+g(P^{\star}_l)\Big)P^{\star}_l+\eta$. Also for the optimal vectors $\pmb{P}_{r}^{N^\star},~\pmb{P}_{i}^{N^\star}$ we have
\begin{subequations}\label{eqn_48}
\begin{align}\label{eqn_48_1}
P_{lr}^{\star}\cdot\left(\lambda_1-G_l(\pmb{P}_{r}^{N^{\star} },\pmb{P}_{i}^{N^{\star}})\right)&=0,~l=0,...,N-1,\\\label{eqn_48_2}
P_{li}^{\star}\cdot\left(\lambda_1-G_l(\pmb{P}_{i}^{N^{\star}},\pmb{P}_{r}^{N^{\star}})\right)&=0,~l=0,...,N-1,
\end{align}
\end{subequations}
with
\begin{align}\label{eqn_51}
G_l(\pmb{P}_{r}^{N},\pmb{P}_{i}^{N})\triangleq \frac{c_1a_l}{1+a_lP_{lr}}+6\lambda_2 \alpha_lP_{lr}+\lambda_2(2\alpha_l P_{li}+\beta_l+g_1(P_{l})),
\end{align}
for some
\begin{subequations}
\begin{align}\label{eqn_53}
\lambda_1&\geq \max_{l=0,\ldots,N-1}\{G_l(\pmb{P}_{r}^{N^{\star} },\pmb{P}_{i}^{N^{\star}}),G_l(\pmb{P}_{i}^{N^{\star}},\pmb{P}_{r}^{N^{\star}})\},\\
\lambda_2&\geq0,
\end{align}
\end{subequations}
and $g_1(P_l)\triangleq\sum_{\substack{m=0\\m\neq l}}^{N-1}\gamma_{m,l} P_m$. For $\lambda_2=0$, the optimal power allocations are simplified to waterfilling solution, i.e.,
\begin{align}
P_{lr}^{\star}=P_{li}^{\star}=\max\left\{0,\frac{c_1}{\lambda_1}-\frac{1}{a_l}\right\},~\text{for}~l=0,\cdots,N-1.
\end{align}
\end{lem}
\textit{Proof}: See Appendix~\ref{app_KKT_zeromean}.
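For $\lambda_2=0$, the waterfilling allocation in Lemma \ref{Lem_48} can be computed by a one-dimensional bisection on $\lambda_1$ so that the total allocated power meets $P_a$ with equality. The following sketch is an illustration only (not the paper's simulation code); it assumes the gains $a_l$ and the constant $c_1$ are given.

```python
import numpy as np

def waterfilling(a, P_a, c1=1.0, tol=1e-10):
    """Symmetric waterfilling over N c-subchannels (the lambda_2 = 0 case).

    Each c-subchannel l gets P_lr = P_li = max(0, c1/lam1 - 1/a_l), and
    lam1 is found by bisection so that the total power equals P_a.
    `a` holds the per-subchannel gains a_l = 2N|h_l|^2 / (f_w sigma_w^2).
    """
    a = np.asarray(a, dtype=float)
    lo, hi = 1e-12, c1 * a.max()          # bracket for lam1
    while hi - lo > tol:
        lam1 = 0.5 * (lo + hi)
        P_half = np.maximum(0.0, c1 / lam1 - 1.0 / a)   # P_lr (= P_li)
        if 2.0 * P_half.sum() > P_a:      # total power, since P_l = P_lr + P_li
            lo = lam1                     # too much power: raise lam1
        else:
            hi = lam1
    return np.maximum(0.0, c1 / lam1 - 1.0 / a)
```

As expected, stronger c-subchannels (larger $a_l$) receive more power, and sufficiently weak ones are switched off entirely.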
\begin{figure}
\begin{centering}
\includegraphics[scale=0.4]{Fig_Optimal_Conditions.pdf}
\caption{Representation of the intersection of the plane $\lambda_1=1.6529$ with the functions $G_l(\pmb{P}_{r}^{N },\pmb{P}_{i}^{N})$ and $G_l(\pmb{P}_{i}^{N},\pmb{P}_{r}^{N})$ defined in (\ref{eqn_51}), with the parameters $c_1=0.0801,~a_l=2250,~\lambda_2=0.0498,~\alpha_l=4.9857,~\beta_l=1.6484$ and $g_1(P_l)=6.2822$. The reported parameters here correspond to the optimal solution of the strongest c-subchannel considered in Section \ref{Sec_Numerical} with average power constraint $P_a=1$, delivered power constraint $P_d=3.5716$ and noise variance $\sigma_w^2=0.1$.}\label{Fig_1}
\par\end{centering}
\vspace{0mm}
\end{figure}
\begin{rem}\label{rem_10}
Note that the delivered power in the $l^{\text{th}}$ c-subchannel for zero mean Gaussian inputs, i.e., \[f_{ib}(P^{\star}_l,\bar{P^{\star}_l},0,h_l,N)=\alpha_lQ^{\star}_l+\Big(\beta_l+g(P^{\star}_l)\Big)P^{\star}_l+\eta,\]
depends on the other c-subchannels through $g(P^{\star}_l)$\footnote{Note that for zero mean inputs with a nonlinear EH, $P_{lr}=P_{lr}^{\star},P_{li}=P_{li}^{\star}$ yields the same delivered power/transmitted information as $P_{lr}=P_{li}^{\star},P_{li}=P_{lr}^{\star}$.}. This is in contrast with the linear model, where the delivered power is obtained as $|h_l|^2P_l+\sigma_w^2$.
\end{rem}
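For concreteness, the coupled delivered-power expression of Remark \ref{rem_10} can be evaluated as below. The coefficient arrays are placeholders for the paper's channel-dependent constants $\alpha_l$, $\beta_l$, $\gamma_{m,l}$ and $\eta$; for $\pmb{\mu}^N=\pmb{0}^N$ the quantity $Q_l$ reduces to $3(P_{lr}^2+P_{li}^2)+2P_{lr}P_{li}$, and we take $P_l=P_{lr}+P_{li}$ (an assumption consistent with the per-subchannel power notation).

```python
def delivered_power_zero_mean(Pr, Pi, alpha, beta, gamma, eta):
    """Total delivered power for zero-mean Gaussian inputs:
    sum_l [ alpha_l*Q_l + (beta_l + g(P_l))*P_l + eta ], where
    Q_l = 3*(P_lr^2 + P_li^2) + 2*P_lr*P_li and
    g(P_l) = sum_{m != l} gamma[m][l] * P_m couples the c-subchannels."""
    N = len(Pr)
    P = [Pr[l] + Pi[l] for l in range(N)]
    total = 0.0
    for l in range(N):
        Q = 3.0 * (Pr[l] ** 2 + Pi[l] ** 2) + 2.0 * Pr[l] * Pi[l]
        g = sum(gamma[m][l] * P[m] for m in range(N) if m != l)
        total += alpha[l] * Q + (beta[l] + g) * P[l] + eta
    return total
```

In line with the footnote of Remark \ref{rem_10}, swapping the real and imaginary powers within any c-subchannel leaves the delivered power unchanged.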
\begin{figure}
\begin{centering}
\includegraphics[scale=0.4]{Fig_OptCon_topview.pdf}
\caption{Illustration of Figure \ref{Fig_1} viewed from the top (along the $z$-axis). There are $4$ solutions, denoted by $p_1,~p_2,~p_3$ and $p_4$; point $p_1$ is not admissible because it violates the average power constraint $P_a=1$.}\label{Fig_110}
\par\end{centering}
\vspace{0mm}
\end{figure}
\begin{rem}
The optimality conditions of Lemma \ref{Lem_48} in (\ref{eqn_48}) can be interpreted as follows. The functions $G_l(\pmb{P}_{r}^{N},\pmb{P}_{i}^{N}),~l=0,\ldots,N-1$ are positive and convex (the Hessian matrix is positive definite). Also note that $G_l(\pmb{P}_{i}^{N},\pmb{P}_{r}^{N})$ is a mirrored version of $G_l(\pmb{P}_{r}^{N},\pmb{P}_{i}^{N})$ with respect to the surface $P_{lr}=P_{li}$. Assume that $\lambda_2>0$ is given and that $\lambda_1$ is chosen to be a large value (so that it satisfies (\ref{eqn_53})). Consider the intersection of the horizontal surface $\lambda_1$ with the functions $G_l(\pmb{P}_{r}^{N},\pmb{P}_{i}^{N})$ and $G_l(\pmb{P}_{i}^{N},\pmb{P}_{r}^{N})$ for some index $l$. Depending on the value of $\lambda_1$ and the shape of the functions $G_l$, different pairs of $(P_{lr},P_{li})$ simultaneously satisfy
\begin{align}\label{eqn_52}
\lambda_1=G_l(\pmb{P}_{r}^{N},\pmb{P}_{i}^{N})=G_l(\pmb{P}_{i}^{N},\pmb{P}_{r}^{N}).
\end{align}
The number of these solution pairs $(P_{lr},P_{li})$ for each index $l$ can be verified to vary from three to four. That is, if $\lambda_1>G_l(\pmb{0}^N,\pmb{0}^N)$, there are three solutions, and if $\lambda_1\leq G_l(\pmb{0}^N,\pmb{0}^N)$, there are four solutions for (\ref{eqn_52})\footnote{Note that $\lambda_1$ must satisfy the condition in (\ref{eqn_53}) as well.}. In Figure \ref{Fig_1}, an illustration of the intersection of the aforementioned three surfaces for a specific index $l$ is provided, where four pairs of solutions are recognized. In Figure \ref{Fig_110}, the same illustration is presented from the top (along the $z$-axis). Points $p_1,~p_2,~p_3$ and $p_4$ denote the solution pairs that satisfy (\ref{eqn_52}). Note that depending on the average power constraint, some (or all) of the points $p_1,~p_2,~p_3$ and $p_4$ may not be admissible (for example, here $p_1$ is not admissible). If there is no point satisfying the average power constraint, the power allocated to the corresponding c-subchannel is zero (in order to satisfy (\ref{eqn_48})). Otherwise, there is more than one set of power allocations $(P_{lr},~P_{li})$ that satisfies the necessary optimality conditions. Accordingly, the power allocation can be either symmetric (corresponding to either of the points $p_1,~p_4$) or asymmetric (corresponding to either of the points $p_2,~p_3$). Note that both points $p_2$ and $p_3$ contribute the same amount to the delivered power and transmitted information (as noted in Remark \ref{rem_10}). Therefore, they can be chosen interchangeably.
\end{rem}
\begin{rem}\label{rem_111}
The optimality conditions in (\ref{eqn_48}) can be solved numerically using routines for solving nonlinear equations with constraints ($P_{lr},P_{li}\geq 0$ for $l=0,\ldots,N-1$). In Section \ref{Sec_Numerical}, it is observed through numerical optimization that for mere WPT purposes (equivalently, large values of $\lambda_2$), all the available power at the transmitter is allocated to either the real or the imaginary component of the strongest c-subchannel. Additionally, note that (for zero mean Gaussian inputs), although the input is optimized for WPT, the amount of transmitted information is never zero.
\end{rem}
\section{Numerical Optimization}\label{Sec_Numerical}
In this section, we provide numerical results regarding the power allocation for different r-subchannels under a fixed average power and different delivered power constraints in order to obtain different RP regions corresponding to different types of complex Gaussian inputs introduced earlier.
\subsection{Non-zero mean inputs}\label{Sec_NZM}
We note that the optimization problem in (\ref{eqn_21}) is not convex, and accordingly, the final solution (obtained via numerical optimization) is in general a local stationary point. Due to the nonconvexity of the studied problem, the final solution depends on the initial starting point. In order to alleviate the effect of the initial point, in our optimization we first focus on the WPT aspect of the optimization problem with deterministic input signals\footnote{We note that although we first optimize over deterministic signals for WPT, optimizing over means and powers for SWIPT results in the same solutions, i.e., signals with almost zero variance, however at the expense of a long simulation time. Therefore, for the starting point of the RP region, we choose the input to be deterministic.}, i.e., the variances of the different r-subchannels are, to a good approximation, close to zero. In this case, with deterministic input signals we have $\mu_{lr}=\sqrt{P_{lr}}$, $\mu_{li}=\sqrt{P_{li}}$ for $l=0,\cdots,N-1$. Therefore, the delivered power $P_{\text{dc}}$ reads as
\begin{align}\nonumber
P_{\text{dc}}&=\sum\limits_{l=0}^{N-1}\Bigg\{\alpha_l|\mu_l|^4+\Big(\beta_l+g(|\mu_l|^2)\Big)|\mu_l|^2+\eta+\Re\bigg\{\mu_l^2\sum_{k=1}^{\frac{N-1}{2}}\mu_{(l+k)_N}^*\mu_{(l-k)_N}^*\Phi_{l,k}\bigg\}\\
&+\delta_{(N-1)}^l\cdot\sum_{k=1}^{\frac{N-1}{2}}\!\!\!\sum_{\substack{m=l+1\\m\neq(l+k)_N\\m\neq (l-k)_N}}^{N-1}\!\!\!\!\!\Re\Big\{\mu_{l}\mu_{m}\mu_{(l-k)_N}^*\mu_{(m-k)_N}^* \Psi_{l,m,k}\Big\}\Bigg\}\triangleq \sum\limits_{l=0}^{N-1}f_{\text{WPT}}(\mu_l,h_l,N).
\end{align}
Accordingly, we consider the following WPT problem
\begin{equation}\label{eqn_33}
\begin{aligned}
& \underset{ \substack{\mu_{l}\\l=0,...,N-1}}{\text{max}}
& & \!\!\sum\limits_{l=0}^{N-1}f_{\text{WPT}}(\mu_l,h_l,N)\\
& \text{s.t.}
& & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\sum_{l=0}^{N-1}|\mu_l|^2= P_a,
\end{aligned}
\end{equation}
where the proof that the average power constraint is satisfied with equality is provided in Appendix~\ref{app_KKT}. The algorithm (WPT optimization with deterministic inputs) is run a large number of times (here, 1000 times) using the Matlab command \texttt{fmincon()}, each time with a new, randomly generated initial complex mean vector $\pmb{\mu}^N$. After this stage, the solution corresponding to the highest delivered power $P_{\text{dc}}$ is chosen as the initial starting point for the SWIPT optimization.
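The multi-start WPT optimization just described can be sketched generically in Python, using \texttt{scipy.optimize.minimize} with SLSQP in place of \texttt{fmincon()}. The objective \texttt{f\_wpt} is a stand-in for $\sum_l f_{\text{WPT}}(\mu_l,h_l,N)$, evaluated on the stacked real and imaginary parts of $\pmb{\mu}^N$; this is an illustrative skeleton, not the paper's actual code.

```python
import numpy as np
from scipy.optimize import minimize

def multistart_wpt(f_wpt, N, P_a, n_starts=1000, seed=0):
    """Maximise a delivered-power objective from many random feasible
    starting points and keep the best local solution, as in the text.
    The equality constraint sum_l |mu_l|^2 = P_a mirrors problem (33)."""
    rng = np.random.default_rng(seed)
    cons = {"type": "eq", "fun": lambda x: np.sum(x ** 2) - P_a}
    best_val, best_x = -np.inf, None
    for _ in range(n_starts):
        x0 = rng.standard_normal(2 * N)
        x0 *= np.sqrt(P_a) / np.linalg.norm(x0)    # start on the power sphere
        res = minimize(lambda x: -f_wpt(x), x0,
                       constraints=[cons], method="SLSQP")
        if res.success and -res.fun > best_val:
            best_val, best_x = -res.fun, res.x
    return best_x, best_val
```

Because each local solve can land in a different stationary point, keeping the best of many restarts mitigates (without eliminating) the dependence on the initial point.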
Next, in order to solve the optimization for SWIPT, we consider the following maximization, which is the weighted summation of the transmitted information and the delivered power
\begin{equation}\label{eqn_34}
\begin{aligned}
& \underset{ \substack{P_{lr},P_{li},\mu_{lr},\mu_{li}\\l=0,...,N-1}}{\text{max}}
& & \!\!\sum\limits_{l=0}^{N-1}c_0\big\{\log (1+a_l\sigma_{lr}^2)+\log(1+a_l\sigma_{li}^2)\big\}+\lambda_2 f_{ib}(P_l,\bar{P_l},\mu_l,h_l,N)\\
& \text{s.t.}
& & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\left\{\begin{array}{l}
\sum_{l=0}^{N-1}P_l= P_a, \\
P_{lr}\geq \mu_{lr}^2 ,P_{li}\geq \mu_{li}^2,~l=0,...,N-1
\end{array}\right..
\end{aligned}
\end{equation}
We solve this problem using the Matlab command \texttt{fmincon()} as follows. $\lambda_2$ is given different values, starting from larger ones\footnote{Note that $\lambda_2$ can be interpreted as $-\frac{\partial I(V^N;Y^N)}{\partial P_{\text{dc}}}$. Therefore, intuitively, a larger value of $\lambda_2$ corresponds to a higher delivered power and lower transmitted information.}. For the first round of the optimization (corresponding to the largest value of $\lambda_2$), the (locally) optimal solution obtained through the previous optimization (WPT with deterministic inputs) is used as the starting point (the power for the different r-subchannels is taken as $P_{lr}=\mu_{lr}^2,~P_{li}=\mu_{li}^2,~l=0,\ldots,N-1$). Similarly, for the subsequent values of $\lambda_2$, we use the solution corresponding to the previous value of $\lambda_2$. The detailed description of the optimization is presented in Algorithm \ref{euclid}.
\begin{algorithm}
\caption{SWIPT algorithm (Non-zero mean inputs)}\label{euclid}
\begin{algorithmic}[1]
\Procedure{WPT Optimization}{}
\State $M \gets$ Large number
\For {$s=1:M$}
\State Randomly initialize $\pmb{\mu}^{N}$, $\pmb{\mu}_{(s)}^{N*}=\arg \max$ (\ref{eqn_33})
\State $P_{dc,(s)}=\sum\limits_{l=0}^{N-1}f_{\text{WPT}}(\mu_{l,(s)}^{*},h_l,N)$, $S=\arg \max\limits_{s} P_{dc,(s)}$
\EndFor
\EndProcedure
\Procedure{SWIPT Optimization}{}
\State $\lambda_2\gets\lambda_{max}$, $s=1$
\State $\pmb{P}_{(s),r}^{N}\gets [\mu^{*2}_{0r,(S)},\ldots,\mu^{*2}_{(N-1)r,(S)}]$, $\pmb{P}_{(s),i}^{N}\gets [\mu^{*2}_{0i,(S)},\ldots,\mu^{*2}_{(N-1)i,(S)}]$, $\pmb{\mu}_{(s)}^{N}\gets \pmb{\mu}_{(S)}^{N*}$
\While {$\lambda_2>\lambda_{min}$}
\State $\{\pmb{P}_{(s),r}^{N*},\pmb{P}_{(s),i}^{N*}\}=\arg \max$ (\ref{eqn_34})
\State $\text{Inf}(s)\gets \sum\limits_{l=0}^{N-1}c_0\big\{\log (1+a_l\sigma_{lr,(s)}^{*2})+\log(1+a_l\sigma_{li,(s)}^{*2})\big\}$, $P_{\text{dc}}(s)\gets$ (\ref{eqn_39})
\State $s\gets (s+1)$, $\lambda_2 \gets (\lambda_2 -stp)$
\State $\pmb{P}_{(s),r}^{N}\gets \pmb{P}_{(s-1),r}^{N*}$, $\pmb{P}_{(s),i}^{N}\gets \pmb{P}_{(s-1),i}^{N*}$, $\pmb{\mu}_{(s)}^{N}\gets \pmb{\mu}_{(s-1)}^{N*}$
\EndWhile
\EndProcedure
\end{algorithmic}
\end{algorithm}
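The outer loop of Algorithm \ref{euclid}, which sweeps $\lambda_2$ from $\lambda_{max}$ down to $\lambda_{min}$ while warm-starting each solve from the previous solution, can be written generically as follows. Here \texttt{solve\_swipt} is a placeholder for the local solver of (\ref{eqn_34}); it is assumed to return the new solution together with the achieved information rate and delivered power.

```python
def rp_region_sweep(solve_swipt, x_wpt, lam_max, lam_min, step):
    """Trace an RP region by decreasing lambda_2 in steps of `step`.

    `solve_swipt(lam2, x0)` -> (x_opt, info_rate, delivered_power).
    The first solve starts from the WPT solution `x_wpt`; each later
    solve starts from the previous optimum (warm start)."""
    region, x0 = [], x_wpt
    lam2 = lam_max
    while lam2 > lam_min:
        x0, rate, power = solve_swipt(lam2, x0)
        region.append((rate, power))
        lam2 -= step
    return region
```

Each returned (rate, power) pair traces one boundary point of the RP region, from the power-maximising end towards the rate-maximising end.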
\subsection{Zero mean inputs}
In order to obtain the optimal power allocations for zero mean complex Gaussian inputs, we follow a similar approach presented in Section \ref{Sec_NZM}. The optimization problem considered here is given as
\begin{equation}\label{eqn_54}
\begin{aligned}
& \underset{ \substack{P_{lr},P_{li}\\l=0,...,N-1}}{\text{max}}
& & \!\!\sum\limits_{l=0}^{N-1}c_0\big\{\log (1+a_lP_{lr})+\log(1+a_lP_{li})\big\}+\lambda_2 f_{ib}(P_l,\bar{P_l},0,h_l,N)\\
& \text{s.t.}
& & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\left\{\begin{array}{l}
\sum_{l=0}^{N-1}P_l= P_a, \\
P_{lr}\geq 0 ,P_{li}\geq 0,~l=0,...,N-1
\end{array}\right..
\end{aligned}
\end{equation}
The optimization is explained in Algorithm \ref{euclid1}\footnote{We note that, as an alternative approach, the optimality conditions in Lemma \ref{Lem_48} can be used in order to find the optimal power allocations. To do so, the nonlinear equations (\ref{eqn_50}) and (\ref{eqn_48}) have to be solved with the constraints $P_{lr},P_{li}\geq 0$. Accordingly, one can use the MATLAB command \texttt{fsolve()}. The optimization is initialized with a very small (in norm) power vector, and the vector is updated each time until a convergence condition is met.}.
\subsection{Numerical results}
In this section, we present the results obtained through numerical simulations. First, we focus on the optimized RP regions corresponding to different types of channel inputs. Later, we compare the constellation of optimized non-zero mean and zero mean complex Gaussian inputs on different points of their corresponding optimized RP region.
\begin{algorithm}
\caption{SWIPT algorithm (Zero mean inputs)}\label{euclid1}
\begin{algorithmic}[1]
\Procedure{SWIPT Optimization}{}
\State $\lambda_2\gets\lambda_{max}$, $M \gets$ Large number
\While {$\lambda_2>\lambda_{min}$}
\For {$t=1:M$}
\State Randomly initialize $\{\pmb{P}_{r}^{N},\pmb{P}_{i}^{N}\}$
\State $\{\pmb{P}_{(t),r}^{N*},\pmb{P}_{(t),i}^{N*}\}=\arg \max$ (\ref{eqn_54})
\State $IP(t)=$ Calculate cost in (\ref{eqn_54}) for $\{\pmb{P}_{(t),r}^{N*},\pmb{P}_{(t),i}^{N*}\}$
\EndFor
\State $S=\arg \max\limits_{t} IP(t)$, save $\{\pmb{P}_{(S),r}^{N*},\pmb{P}_{(S),i}^{N*}\}$, $\lambda_2 \gets (\lambda_2 -stp)$
\EndWhile
\EndProcedure
\end{algorithmic}
\end{algorithm}
In Figure \ref{Fig_2}, the RP regions for \textit{Asymmetric Non-zero mean Gaussian} (ANG) inputs, presented in this paper, \textit{Symmetric Non-zero mean Gaussian} (SNG) inputs, presented in \cite{Clerckx_2016}, and \textit{Zero mean Gaussian} (ZG) inputs are shown\footnote{The channel we have used for our simulations comprises $N=9$ c-subchannels with coefficients $[ -1.2+0.1i , -0.4-1.3i , -.1-1.6i , 0.6-1.5i , -1.35-.1i , -1.1+0.2i , -0.9-.01i , 0.7+0.1i , 0.65+.01i]$.}. We also obtain the RP region corresponding to the optimal power allocations under the linear model assumption for the EH. This is done by obtaining the power allocations from \cite[Equation (9)]{Grover_Sahai_2010} for different constraints and calculating the corresponding delivered power and transmitted information. This region is denoted by \textit{Zero mean Gaussian for Linear model} (ZGL). As observed in Figure \ref{Fig_2}, due to the asymmetric power allocation in ANG, there is an improvement in the RP region compared to SNG. Additionally, it is observed that ANG and SNG achieve a larger RP region compared to the optimized ZG, which in turn performs better than ZGL (highlighting the fact that, in scenarios where the nonlinear model for the EH is valid, ZGL is no longer optimal). The main reason for the improvement in the RP regions corresponding to ANG and SNG is that allowing the mean of the channel inputs to be non-zero boosts the fourth-order term in (\ref{eqn_2}) (more explanations can be found in \cite{Clerckx_2016}), resulting in a larger contribution to the delivered power at the receiver.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.3]{Fig11.pdf}
\caption{The optimized RP regions corresponding to ANG, SNG, ZG and ZGL with average power constraint $P_a=1$ and noise variance $\sigma_w^2=0.1$.}\label{Fig_2}
\par\end{centering}
\vspace{0mm}
\end{figure}
\begin{figure}[h!]
\begin{centering}
\begin{subfigure}
\par\hfill
\includegraphics[scale=0.213]{Power_alloc_Ro1.pdf}
\end{subfigure}
\begin{subfigure}
\par\hfill
\includegraphics[scale=0.213]{Power_alloc_Ro2.pdf}
\end{subfigure}
\begin{subfigure}
\par\hfill
\includegraphics[scale=0.213]{Power_alloc_Ro3.pdf}
\end{subfigure}
\caption{From left to right, the mean and variance of the inputs of the different r-subchannels corresponding to the points $A$, $B$ and $C$ in Figure \ref{Fig_2}, respectively. As we move from point $A$ to point $C$, the variances of the different r-subchannels increase, whereas the corresponding means shrink to zero.}\label{Fig_3}
\end{centering}
\end{figure}
In Figure \ref{Fig_3}, from left to right, the optimized inputs in terms of their complex means $\mu_l,~l=0,\ldots,8$ (represented as dots) and their corresponding r-subchannel variances $\sigma_{lr}^2,\sigma_{li}^2,~l=0,\ldots,8$ (represented as ellipses) are shown for points $A,~B$ and $C$ in Figure \ref{Fig_2}, respectively. Point $A$ represents the maximum delivered power with zero transmitted information (note that the information of a deterministic signal is zero). Point $B$ represents the performance of a typical input used for power and information transfer. Finally, point $C$ represents the performance of an input obtained via waterfilling (when the delivered power constraint is inactive). From these $3$ plots it is observed that as we move from point $A$ to point $C$, the means of the different r-subchannels decrease; however, they roughly keep their relative structure. Also, as we move to point $C$, the means of the different r-subchannels go to zero, with their variances increasing asymmetrically, until the power allocation reaches the waterfilling solution (where the power allocation between the real and imaginary components is symmetric). This result is in contrast with the results in \cite{Clerckx_2016}, where the power allocation to the real and imaginary components in each c-subchannel is symmetric. Similar results regarding the benefit of asymmetric power allocation have also been reported in \cite{Varasteh_Rassouli_Clerckx_ITW_2017} for the deterministic AWGN channel with a nonlinear EH.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.3]{zm_CONS.pdf}
\caption{Representation of the variances of the different c-subchannels corresponding to the point $E$ in Figure \ref{Fig_2}. The strongest c-subchannel receives more power than the other c-subchannels, and the other c-subchannels attain CSCG inputs.}\label{Fig_4}
\par\end{centering}
\vspace{0mm}
\end{figure}
In Figure \ref{Fig_2}, the point $D$ corresponds to the input where all of the c-subchannels other than the strongest one (in the sense of $\max\limits_{l=0,\ldots,N-1}|h_l|^2$) have zero power. For the strongest c-subchannel, at point $D$, all the transmit power is allocated to either the real or the imaginary component of the c-subchannel. The reason for this observation is explained in Remark \ref{rem_111}. This observation is also in line with the result of \cite{Varasteh_Rassouli_Clerckx_ITW_2017}, where it is shown that for a flat fading channel, the maximum power is obtained by allocating all the transmit power to only one r-subchannel. Note that this is different from the power allocation with the linear model (i.e., ZGL), for which all the transmit power would also be allocated to the strongest c-subchannel to maximize the delivered power, but equally divided among the real and imaginary parts of the input.
In Figure \ref{Fig_4}, the variances of the different r-subchannels corresponding to the point $E$ in Figure \ref{Fig_2} are illustrated. Numerical optimization reveals that, as we move from point $D$ to point $C$ (increasing the information demand at the receiver) in Figure \ref{Fig_2}, the variance of the strongest c-subchannel varies asymmetrically (in its real and imaginary components). This observation can be justified as follows. For higher values of $\lambda_2$ (equivalent to higher delivered power demands), the strongest c-subchannel receives a power allocation similar to the solutions $p_2$ or $p_3$ in Figure \ref{Fig_110}, whereas the other c-subchannels take the power allocation corresponding to the point $p_4$ in Figure \ref{Fig_110}\footnote{For very low average power constraints, it is observed that the power allocation is symmetric across all the c-subchannels. This can be justified by noting that for very low average power constraints, the admissible power allocations correspond to solutions similar to the point D in Figure \ref{Fig_110}.}. Note that the power allocation in point $C$ is the waterfilling solution.
\begin{rem}
In Figure \ref{Fig_10} (using the optimization algorithm explained earlier in Algorithm \ref{euclid}), the RP regions are obtained for $N=7,~9,~11,~13$. It is observed that the delivered power at the receiver increases with the number of c-subchannels $N$. This is due to the presence of input moments (higher than the second) in the delivered power in (\ref{eqn_39}), and is in line with the observations made in \cite{Clerckx_Bayguzina_2016,Clerckx_2016}\footnote{We note that, in practical implementations, this observation (increasing delivered power with $N$) cannot be valid for all $N$, and the delivered power saturates after some $N$. This is due to the diode breakdown effect, which has not been considered in our model (\ref{eqn_2}) due to the small-signal analysis. This is further discussed in \cite{Clerckx_2016}.}.
As another interesting observation, in Figure \ref{Fig_11}, the numerically optimized inputs for WPT (under the assumption of a flat fading channel) are illustrated for $N=3,~5,~7,~9$. As mentioned in Algorithm \ref{euclid}, for each $N$, the optimization is run many times, each time fed with a randomly generated starting point. The optimized inputs for WPT purposes are zero-variance (deterministic) inputs, and the phases of the means on the different c-subchannels are equally spaced.
\end{rem}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.3]{Fig_3.pdf}
\caption{The optimized RP regions corresponding to non-zero mean Gaussian inputs for $N=3,~5,~7,~9$ with an average power constraint $P_a=1$ and noise variance $\sigma_w^2=0.1$.}\label{Fig_10}
\par\end{centering}
\vspace{0mm}
\end{figure}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.25]{Fig_4.pdf}
\caption{The numerically optimized inputs for WPT (under flat fading assumption for the channel) for $N=3,~5,~7,~9$ with an average power constraint $P_a=1$ and noise variance $\sigma_w^2=0.1$.}\label{Fig_11}
\par\end{centering}
\vspace{0mm}
\end{figure}
\section{Conclusion}\label{Sec:Conc}
In this paper, we studied SWIPT signalling for frequency-selective channels under a transmit average power constraint and a receiver delivered power constraint. We considered an approximation of the nonlinear EH, based on truncating (up to the fourth moment) the Taylor expansion of the rectenna diode's characteristic function. For independent input distributions on different r-subchannels and iid inputs on each r-subchannel, we obtained the delivered power in terms of the system baseband parameters, which demonstrates the dependency of the delivered power on the mean as well as on higher moments of the channel input distribution. Assuming that the transmitter is constrained to utilize Gaussian distributions, we showed that, in general, non-zero mean Gaussian inputs attain a larger RP region compared to their zero mean counterparts. As a special scenario, for zero mean Gaussian inputs, we obtained the conditions for the optimal power allocation over the different r-subchannels. Using numerical optimization, it is observed that optimized non-zero mean inputs (with asymmetric power allocation in each c-subchannel) achieve a larger RP region compared to their optimized zero mean as well as non-zero mean (with symmetric power allocation in each c-subchannel \cite{Clerckx_2016}) counterparts.
\section{Introduction}
\label{sec-introduction}
Recognising that one part of an image contains a particular object, image
structure or set of local image features is a fundamental sub-problem in many
image processing and computer vision algorithms. For example, it can be used to
perform object detection or recognition by identifying parts belonging to an
object category
\citep{Lowe99,Leibe_etal08,Csurka_etal04}, or for
navigation by identifying and locating landmarks in a scene
\citep{Ozuysal_etal10,Se_etal05}, or for image mosaicing/stitching by
identifying corresponding locations in multiple images
\citep{Szeliski06,BrownLowe07}. Similarly, extracting a distinctive image
structure in one image and then recognising and locating that same feature in
another image of the same scene taken from a different viewpoint, or at a
different time, is essential to solving the stereo and video correspondence
problems, and hence, for calculating depth or motion, and for performing image
registration and tracking
\citep{LucasKanade81,Brown_etal03,Gall_etal11,ZitovaFlusser03}. Finally,
locating specific image features is fundamental to tasks such as edge detection
\citep{Canny86}. In this latter case the image structure being searched for is
usually defined mathematically ({\it e.g.},\xspace a Gaussian derivative for locating intensity
discontinuities), whereas for solving image stitching and correspondence problems the image
structure being searched for is extracted from another image, and in the case of
object or landmark recognition the target image features may have been learnt
from a set of training images.
Image patch recognition has traditionally been accomplished using template
matching \citep{Ma_etal09,WeiLai08,Goshtasby_etal84,BarneaSilverman72}. In this
case the image structure being searched for is represented as an array of pixel
intensity values, and these values are compared with the pixel intensities
around every location in the query image. There are many different metrics that
can be used to quantify how well the template matches each location in an image;
such as the sum of absolute differences (SAD), the normalised cross-correlation
(NCC), the sum of squared differences (SSD) or the zero-mean normalised cross
correlation\footnote{Also known as the sample Pearson correlation coefficient.}
(ZNCC). For any metric it is necessary to define a criteria that needs to be met
for a patch of image to be considered a match to the template. For
correspondence problems, it is often assumed that the image structure being
searched-for will match exactly one location in the query image. The location
with the highest similarity to the template is thus considered to be the
matching location. However, additional criteria, such as the ratio of the two
highest similarity values, may be used to reject some such putative matches. In other
tasks, the image structure being searched for may occur zero, one or multiple
times in the query image. In this case it is necessary to define a global
threshold to distinguish those image locations where the template matches the
query image from those locations where it does not. It is also typically the
case that for a patch of image to be considered a match to the template it must
be more similar to the template than its immediate neighbours. Hence, locations
where the template is considered to match the image are ones where the
similarity metric is a local maximum and exceeds a global threshold.
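As a concrete illustration of this pipeline, the sketch below computes the ZNCC score of a template at every valid location of a grayscale image and then keeps the locations that both exceed a global threshold and are local maxima over their 8-neighbourhood. It is a direct, unoptimised implementation for exposition only; production code would use FFT-based or integral-image accelerations.

```python
import numpy as np

def zncc_map(image, template):
    """Zero-mean normalised cross-correlation of `template` with every
    valid location of `image` (both 2-D float arrays)."""
    H, W = image.shape
    h, w = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    out = np.zeros((H - h + 1, W - w + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            p = image[y:y + h, x:x + w]
            p = p - p.mean()
            denom = tn * np.sqrt((p ** 2).sum())
            out[y, x] = (t * p).sum() / denom if denom > 0 else 0.0
    return out

def match_locations(score, threshold):
    """Locations where the score exceeds `threshold` and is a local
    maximum over its 8-neighbourhood."""
    pad = np.pad(score, 1, constant_values=-np.inf)
    shifts = [pad[1 + dy:1 + dy + score.shape[0],
                  1 + dx:1 + dx + score.shape[1]]
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)
              if (dy, dx) != (0, 0)]
    local_max = np.all([score >= s for s in shifts], axis=0)
    return np.argwhere(local_max & (score > threshold))
```

Because the score is normalised to $[-1,1]$, a single global threshold (e.g.\ $0.9$) remains meaningful across images with different lighting.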
With traditional template matching, because the metric for assessing the
similarity of the template and a patch of image compares intensity values, the
result can be effected by changes in illumination. This issue can be resolved by
using a metric such as ZNCC which subtracts the mean intensity from the template
and from the image patch it is being compared to. This results in a comparison
of the relative intensity values and produces a similarity measure that is
tolerant to differences in lighting conditions between the query image and the template.
Another issue is that the metric for assessing the similarity of a template to a
patch of image is based on pixel-wise comparisons of intensity values. The
results will therefore be effected if the pixels being compared do not
correspond to the same part of the target image structure. This problem can
arise due to differences in the appearance of the image structure between the
query image and the template, caused by variations in viewpoint, partial
occlusion, and non-rigid deformations. To be able to recognise image features
despite such changes in appearance one approach is to use multiple templates
that represent the searched-for image patch across the range of expected changes
in appearance. However, even small differences in scale, orientation, aspect
ratio, {\it etc.}\xspace can result in sufficient mis-alignment at the pixel-level that a low
value of similarity is calculated by the similarity metric for all
transformations of the template when compared to the correct location in the
query image. Hence, to allow tolerance to viewpoint, even when using multiple
templates for each image feature, it is necessary to use a low threshold to
avoid excluding true matches. However, given the large number of comparisons
that are being made between all templates and all image locations, a low
threshold will inevitably lead to false-positives. There is thus an
irreconcilable need both for a high threshold to avoid false matches and for a
low threshold in order not to exclude true matches in situations where the
template is not perfectly aligned with the image. These problems have led to
template matching being abandoned in favour of alternative methods (see
\autoref{sec-review}) for most tasks except for low-level ones such as edge
detection.
This article shows that the performance of template matching can be
significantly improved by requiring templates to compete with each other to
match the image. The particular type of competition used in the proposed method,
called Divisive Input Modulation \citep[DIM;][]{Spratling_etal09,Spratling17a},
implements a form of probabilistic inference known as ``explaining away''
\citep{Kersten_etal04,LochmannDeneve11}. This
means that the similarity between a template and a patch of image takes into
account not only the similarity in the pixel intensity values at corresponding
locations in the template and the patch, but also the range of alternative
explanations for the patch intensity values represented by the same template at
other locations and by other templates. If the similarity between a template and
each image location is represented by an array, then this array is dense for
traditional template matching. In contrast, due to the competition employed by
the proposed method, the array of similarity values is very sparse. Those
locations that match a template can therefore be more readily identified and
there is typically a much larger range of threshold values that separate true
matches from false matches.
\section{Related Work}
\label{sec-review}
Given the issues with template matching discussed above, many alternative
methods for locating image features have been developed. Typically, these
alternative methods change the way the template and image patch are represented,
so that the comparison is performed in a different feature-space, or change the
computation that is used to perform the comparison, or use a combination of
both.
One alternative is to employ a classifier in place of the comparison of
corresponding pixel intensity values used in traditional template matching. For
example, random trees and random ferns can be trained using image patches seen
from multiple viewpoints in order to robustly recognise those image features
when they appear around keypoints extracted from a new image
\citep{Gall_etal11,Ozuysal_etal10,LepetitFua06}. Sliding-window
based methods apply the classifier, sequentially, to all regions within the
image \citep{DalalTriggs05,Lampert_etal08}, while region-based methods select a
sub-set of image patches for presentation to the classifier
\citep{Girshick_etal16,Gu_etal09,Uijlings_etal13}. In each case, the classifier
provides robustness to changes in appearance due to, for example, viewpoint or
within-class variation. Further tolerance to variations in appearance can be achieved by using
windows with different sizes and aspect ratios. A classifier in the form of a
deep neural network can also be used to directly assess the similarity between
two image patches \citep{ZagoruykoKomodakis15,ZagoruykoKomodakis17}.
Instead of being used to perform the comparison between a template and an image
patch, a deep neural network can also be used to extract features from the image
and template for comparison ({\it i.e.},\xspace a deep neural network can be used to change the
feature-space, rather than change the similarity computation). For example,
\citet{Kim_etal17} used a convolutional neural network (CNN) to represent both
the template and the image in a new feature-space; the comparison was then
carried out using NCC.
Histogram matching is another method that changes the feature-space. Histogram
matching compares the colour histograms of the template and image patch, and
hence, disregards all spatial information \citep{Comaniciu_etal00}. This will
introduce tolerance to differences in appearance, but also reduces the ability
to discriminate between spatially distinct features. Co-occurrence based
template matching (CoTM) calculates the cost of matching a template to an image
patch as inversely proportional to the probability of the corresponding pixel
values co-occurring in the image \citep{Kat_etal18}. This can be achieved by
mapping the points in the image and template to a new feature-space defined by
the co-occurrence statistics. However, this method does not work well for
grayscale images or images containing repeating texture, and is not tolerant to
differences in illumination \citep{Kat_etal18}.
Another approach is to perform comparisons on more distinctive image features
than image intensity values. For example, the scale-invariant feature transform
(SIFT) generates an image descriptor that is invariant to illumination,
orientation, and scale and partially invariant to affine distortion
\citep{Lowe99,Lowe04}. Methods to allow SIFT descriptors to be matched across
images with invariance to affine transformations have also been developed
\citep{MorelYu09,DongSoatto15}.
Many alternative feature descriptors have also been proposed,
such as SURF \citep{Bay_etal06}, BRIEF \citep{Calonder_etal10},
ORB \citep{Rublee_etal11}, GLOH \citep{MikolajczykSchmid05}, DAISY
\citep{Tola_etal08}, and BINK \citep{Saleiro_etal17}. However, experiments
comparing the performance of different image descriptors for finding matching
locations between images of the same scene suggest that SIFT remains one of the
most accurate methods
\citep{Balntas_etal18,MikolajczykSchmid05,Wu_etal13,TareenSaleem18,Balntas_etal17b,Mukherjee_etal15,Khan_etal15}.
It is also possible to learn image descriptors, and this approach can improve
performance beyond that of hand-crafted descriptors
\citep{Trzcinski_etal12,Brown_etal11,Simonyan_etal14,Schonberger_etal17}.
Recently, learning image descriptors using deep neural networks has become a
popular approach
\citep{Kwang_etal16,Simo-Serra_etal15,Balntas_etal18,Balntas_etal17,Balntas_etal16,ZagoruykoKomodakis15,ZagoruykoKomodakis17,Mitra_etal17}.
Another alternative to traditional template matching, which can perform image
patch matching with tolerance to changes in appearance, is image alignment. In
these methods the aim is to find the affine transformation that will align a
template with the image \citep{LucasKanade81,ZhangAkashi15,Korman_etal13}. For
example, FAsT-Match is a relatively recent algorithm of this type that measures
the similarity between a template and an image patch by first searching for the
2D affine transformation that maximises the pixel-wise similarity
\citep{Korman_etal13}. However, it is limited to working with grayscale images
and the large search-space of possible affine transformations makes this
algorithm slow. A more recent variation on this algorithm, OATM
\citep{Korman_etal18}, has increased speed but remains both slower and less
accurate than another approach, DDIS, which is discussed in the following
paragraph.
Another approach is to define alternative metrics for comparing the template
with the image that allow for mis-alignment between the pixels in the template
and the corresponding pixels in the image, rather than rigidly comparing pixels
at corresponding locations in the template and the image. Typically, these
metrics are based on measuring the distance between points in the template and
the best matching points in the image
\citep{Talmi_etal17,Dekel_etal15,Oron_etal18}.
For example, the Best-Buddies Similarity (BBS) metric
\citep{Dekel_etal15,Oron_etal18} is computed by counting the proportion of
sub-regions in the template and the image patch that are ``best-buddies''. For each
sub-region in the template the most similar sub-region (in terms of position and
colour) is found in the image patch. For each sub-region in the image patch the
most similar sub-region in the template is calculated in the same way. A pair of
sub-regions are best-buddies if they are most similar to each other. Sub-regions
can be best-buddies even if they are not at corresponding locations in the
template and the image patch, and thus provides tolerance to differences in
appearance between the template and the patch. Deformable diversity similarity
(DDIS) is similar to BBS, but it differs in the way it deals with spatial
deformations, and the criteria used for determining if sub-regions in the image
patch and template match \citep{Talmi_etal17}. Specifically, for every
sub-region of the image patch the most similar sub-region (in terms of colour
only) is found in the template. The contribution of each such match to the
overall similarity is inversely weighted by the number of other sub-regions that
have been matched to the same location in the template, and by the spatial
distance between the matched sub-regions in the image patch and the
template. DDIS produces the current state-of-the-art performance when applied to
template matching in colour-feature space on standard benchmarks
\citep{Talmi_etal17,Kat_etal18}.
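To make the best-buddies idea concrete, the following is a simplified NumPy sketch of the core counting step (mutual nearest neighbours over sub-region descriptors). It omits details of the full BBS algorithm, such as the exact joint appearance-and-location distance and the sub-region extraction; the descriptor representation here is an assumption for illustration:

```python
import numpy as np

def best_buddies_similarity(t_feats, p_feats):
    """Fraction of mutually-nearest-neighbour ('best buddy') pairs.

    t_feats, p_feats: (N, d) and (M, d) arrays of sub-region descriptors
    (e.g. colour values concatenated with normalised positions). A pair
    (i, j) is a best-buddy pair when j is i's nearest neighbour among the
    patch regions AND i is j's nearest neighbour among the template regions.
    """
    # squared Euclidean distance matrix, shape (N, M)
    d = ((t_feats[:, None, :] - p_feats[None, :, :]) ** 2).sum(-1)
    nn_of_t = d.argmin(axis=1)  # best patch region for each template region
    nn_of_p = d.argmin(axis=0)  # best template region for each patch region
    buddies = sum(1 for i, j in enumerate(nn_of_t) if nn_of_p[j] == i)
    return buddies / min(len(t_feats), len(p_feats))
```

Because a pair can be mutual nearest neighbours without occupying corresponding positions, this count is insensitive to moderate spatial deformation, which is the property the paragraph above describes.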
This article proposes another alternative method of image patch matching. Like
traditional template matching, the proposed method compares pixel-intensity
values in the template with those at corresponding locations in the image patch.
However, in contrast to traditional template matching the similarity between any
one template and the image patch is not independent of the other templates.
Instead, the templates compete with each other to be matched to the image
patch. This article describes empirical tests of this new approach to template
matching that demonstrate that it provides tolerance to differences in
appearance between the template and the same image feature in the query
image. This results in more accurate identification of features in an image
compared to both traditional template matching and recent state-of-the-art
alternatives to template matching \citep{Talmi_etal17,Dekel_etal15,Oron_etal18,Kat_etal18,Kim_etal17,ZagoruykoKomodakis15,ZagoruykoKomodakis17}.
\section{Methods}
\label{sec-methods}
\subsection{Image Pre-processing and Template Definition}
\label{sec-methods_preproc}
Image features are better distinguished using relative intensity (or contrast)
rather than absolute intensity. For this reason, ZNCC is a sensible choice of
similarity metric for template matching. ZNCC subtracts the mean intensity from
the template and from the image patch it is being compared to. Subtracting the
mean intensity will obviously result in positive and negative relative intensity
values. However, non-negative inputs are required by the mechanism that is used
in this article to implement template matching using explaining
away\footnote{This method is derived \citep{Spratling_etal09} from the version
of non-negative matrix factorisation \citep[NMF;][]{LeeSeung01,LeeSeung99} that
minimises the Kullback-Leibler (KL) divergence between the input and a
reconstruction of the input created by the additive combination of elementary
image components (see \autoref{sec-methods_matching}). Because it minimises
the KL divergence it requires the input to be non-negative. Reconstructing
image data through the addition of image components, as occurs in NMF and in
the proposed algorithm, is considered an advantage as it is consistent with
the image formation process in which image components are added together (and
not subtracted) in order to generate images
\citep{Beyeler_etal19,LeeSeung01,LeeSeung99,Hoyer04}. In previous work the
algorithm used here to implement template matching using explaining away has
been used to simulate the response properties of neurons in the primary visual
cortex \citep{Spratling11a}. In biological neural networks, variables are
represented by firing rates which can not be negative. Hence, in these
previous applications the restriction to working with non-negative values had
the advantage of increasing the biological-plausibility of the model. In this
context, the pre-processing defined in this section can be considered to be a
simple model of the processing that is performed in the retina to generate ON
and OFF responses that respectively signal increases and decreases in
brightness. } (see \autoref{sec-methods_matching}). Hence, to allow the
proposed method to process relative intensity values the input image was
pre-processed as follows.
A grayscale input image $I$ was convolved with a 2D circular-symmetric Gaussian
mask $g$ with standard deviation equal to $\sigma$ pixels, such that: $\bar{I}=I
\ast g$. $\bar{I}$ is an estimate of the local mean intensity across the
image. To avoid a poor estimate of $\bar{I}$ near the borders, the image was
first padded on all sides with intensity values that were mirror reflections of
the image pixel values near the borders of $I$. The width of the padding was
equal to the width of the template on the left and right borders, and equal to
the height of the template on the the top and bottom borders. Once calculated
$\bar{I}$ could be cropped to be the same size as the original input
image. However, to avoid edge-effects when template matching, $\bar{I}$ was left
padded and all the arrays the same size as $\bar{I}$ ({\it i.e.},\xspace $\mathbf{X}$, $\mathbf{R}$, $\mathbf{E}$, and
$\mathbf{Y}$; see \autoref{sec-methods_matching}) were cropped to be the same size as the
original image once the template matching method had been applied\footnote{An
alternative approach to avoid edge effects is to set to zero the similarity
values near to the borders of the image. This alternative approach is employed
in other patch matching algorithms such as BBS and DDIS, as can be seen in the
4th and 5th rows of \autoref{fig-bss_data_examples}. This alternative approach
has the advantage of increasing the processing speed, as similarity values do
not need to be calculated for regions of the image adjacent to the edges, but
has the disadvantage that it may prevent detection of the image patch
corresponding to the target if it is very close to the border of the
image.}. The relative intensity can be approximated as $\mathbf{X}=2(I-\bar{I})$. To
produce only non-negative input to the proposed method, the positive and
rectified negative values of $\mathbf{X}$ were separated into two images $\mathbf{X}_1$ and
$\mathbf{X}_2$. Hence, for grayscale images the input to the model was two arrays
representing increases and decreases in local contrast. For colour images each
colour channel was pre-processed in the same way, resulting in six input arrays
($\mathbf{X}_1 \dots \mathbf{X}_6$) representing the increases and decreases about the average
value in each colour channel.
For grayscale images a template consists of two arrays of values ($\mathbf{w}_{j1}$
and $\mathbf{w}_{j2}$) which are compared to $\mathbf{X}_1$ and $\mathbf{X}_2$. Similarly for a
colour image a template consists of six arrays of values ($\mathbf{w}_{j1} \dots
\mathbf{w}_{j6}$). These arrays can be produced by performing the pre-processing
operation described in the previous paragraph on a standard template of
intensity values (which could have been defined mathematically, have been
learnt, or have been a patch extracted from an image). Alternatively, the
templates can be created by extracting regions from images that have been
processed as described in the preceding paragraph. This latter method was used
in all the experiments reported in this article. The value of $\sigma$ was set
equal to half of the template width or height, whichever was the smaller of the
two dimensions.
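The pre-processing described above can be sketched in Python with NumPy/SciPy as follows. This is an illustrative re-implementation, not the authors' MATLAB code; \texttt{mode='reflect'} is used to approximate the mirror padding described in the text, and the exact padding widths are omitted for brevity:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(image, sigma):
    """Split a grayscale image into ON/OFF contrast channels.

    The local mean is estimated with a Gaussian blur (mirror-reflected
    borders), the relative intensity X = 2*(I - mean) is computed, and
    its positive and rectified negative parts become two non-negative
    input channels.
    """
    image = np.asarray(image, dtype=float)
    local_mean = gaussian_filter(image, sigma, mode='reflect')
    x = 2.0 * (image - local_mean)
    x_on = np.maximum(x, 0.0)    # increases in local contrast
    x_off = np.maximum(-x, 0.0)  # decreases in local contrast
    return x_on, x_off
```

For a colour image, the same function would simply be applied to each channel, giving the six arrays $\mathbf{X}_1 \dots \mathbf{X}_6$ described above.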
\subsection{Template Matching}
\label{sec-methods_matching}
The proposed method of template matching was implemented using the Divisive Input
Modulation (DIM) algorithm \citep{Spratling_etal09}. This algorithm has been
used previously to simulate neurophysiological \citep[][]{Spratling11a} and
psychological data \citep[][]{Spratling16b} and applied to tasks in robotics
\citep[][]{MuhammadSpratling15}, pattern recognition \citep[][]{Spratling14b}
and computer vision \citep[][]{Spratling13a,Spratling17a}.
The DIM algorithm is described in
these previous publications, but the description is repeated here for the
convenience of the reader. DIM was implemented using the following equations:
\begin{equation}
\mathbf{R}_i= \sum_{j=1}^{p} \left(\v_{ji} \ast \mathbf{Y}_j\right)
\label{eq-pcbc_r}
\end{equation}
\begin{equation}
\mathbf{E}_i=\mathbf{X}_i \oslash \left[\mathbf{R}_i\right]_{\epsilon_2}
\label{eq-pcbc_e}
\end{equation}
\begin{equation}
\mathbf{Y}_j \leftarrow \left[\mathbf{Y}_j\right]_{\epsilon_1} \odot \sum_{i=1}^{k} \left(\mathbf{w}_{ji} \star \mathbf{E}_i\right)
\label{eq-pcbc_y}
\end{equation}
where $i$ is the index over the number of input channels (the maximum index $k$
is two for grayscale images and six for colour images), $j$ is an index over the
number, $p$, of different templates being compared to the image; $\mathbf{X}_i$ is a
2-dimensional array generated from the original image by the pre-processing
method described in \autoref{sec-methods_preproc}; $\mathbf{R}_i$ is a 2-dimensional
array representing a reconstruction of $\mathbf{X}_i$; $\mathbf{E}_i$ is a 2-dimensional array
representing the discrepancy (or residual error) between $\mathbf{X}_i$ and $\mathbf{R}_i$; $\mathbf{Y}_j$ is a 2-dimensional
array that represent the similarity between template $j$ and the image at each
pixel; $\mathbf{w}_{ji}$ is a 2-dimensional array representing channel $i$ of template
$j$ defined as described in \autoref{sec-methods_preproc}; $\v_{ji}$ is a
2-dimensional array also representing template values ($\v_{ji}$ and $\mathbf{w}_{ji}$
differ only in the way they are normalised, as described below); $\left[
v\right]_{\epsilon}=\alg{max}(\epsilon,v)$; $\epsilon_1$ and $\epsilon_2$ are
parameters; $\oslash$ and $\odot$ indicate element-wise division and
multiplication respectively; $\star$ represents cross-correlation; and $\ast$
represents convolution (which is equivalent to cross-correlation with the kernel
rotated $180^\circ$).
DIM attempts to find a sparse set of elementary components that when combined
together reconstruct the input with minimum error \citep{Spratling14b}. For the
current application, the elementary components are the templates reproduced at every
location in the image, and all templates at all locations can be thought of as a
``dictionary'' or ``codebook'' that can be used to reconstruct many different
inputs. The activation dynamics, described by \autorefs{eq-pcbc_r},
\ref{eq-pcbc_e} and~\ref{eq-pcbc_y}, perform gradient descent on the residual
error in order to find values of $\mathbf{Y}$ that accurately reconstruct the current
input \citep{Achler13,Spratling_etal09,Spratling_dim-learning}. Specifically,
the equations operate to find values for $\mathbf{Y}$ that minimise the Kullback-Leibler
divergence between the input ($\mathbf{X}$) and the reconstruction of the input ($\mathbf{R}$)
\citep{Spratling_etal09,SolbakkenJunge11}. The activation dynamics thus result
in the DIM\xspace algorithm selecting a subset of dictionary elements that best
explain the input. The strength of an element in $\mathbf{Y}$ reflects the strength with
which the corresponding dictionary entry ({\it i.e.},\xspace template) is required to be
present in order to accurately reconstruct the input at that location.
Each element in the similarity array $\mathbf{Y}$ can be considered to represent a
hypothesis about the image features present in the image, and the input $\mathbf{X}$
represents sensory evidence for these different hypotheses. Each similarity
value is proportional to the belief in the hypothesis represented, {\it i.e.},\xspace the
belief that the image features represented by that template are present at that
location in the image. If a template and a patch of image have strong
similarity, this will inhibit the inputs being used to calculate the similarity
of the same template at nearby locations (ones where the templates overlap
spatially), and will also inhibit the inputs being used to calculate the
similarity of other templates at the same and nearby locations. Thus,
hypotheses that are best supported by the sensory evidence inhibit other
competing hypotheses from receiving input from the same evidence. Informally we
can imagine that overlapping templates inhibit each other's inputs. This
generates a form of competition between templates, such that each one
effectively tries to block other templates from responding to the pixel
intensity values which it represents
\citep{Spratling_etal09,Spratling_dim-learning}. This competition between the
templates performs explaining away
\citep{Kersten_etal04,LochmannDeneve11,Spratling_dim-learning,Spratling_etal09}. If
a template wins the competition to respond to ({\it i.e.},\xspace have a high similarity to) a
particular pattern of inputs, then it inhibits other templates from responding
to those same inputs. Hence, if one template explains part of the evidence ({\it i.e.},\xspace
a patch of image), then support from this evidence for alternative hypotheses
({\it i.e.},\xspace templates) is reduced, or explained away.
The sum of the values in each template, $\mathbf{w}_j$, was normalised to sum to
one. The values of $\v_j$ were made equal to the corresponding values of $\mathbf{w}_j$,
except they were normalised to have a maximum value of one.
The cross-correlation operator used in equation~\ref{eq-pcbc_y} calculates the
similarity, $\mathbf{Y}$, for the same set of templates, $\mathbf{w}$, at every pixel location in
the image. The convolution operation used in equation~\ref{eq-pcbc_r}
calculates the reconstructions, $\mathbf{R}$, for the same set of templates, $\v$, at every
pixel location in the image. The rotation of the kernel performed by convolution
ensures that each channel of the reconstruction $\mathbf{R}_i$ can be compared
pixel-wise to the actual input $\mathbf{X}_i$.
For all the experiments described in this paper (except those exploring
parameter sensitivity reported in \autoref{sec-parameter_sensitivity})
$\epsilon_1$ and $\epsilon_2$ were given the values
$\frac{\epsilon_2}{\max\left(\sum_j v_{ji}\right)}$ and $1\times 10^{-2}$
respectively. Parameter $\epsilon_1$ allows elements of $\mathbf{Y}$ that are equal to
zero to subsequently become non-zero.
Parameter $\epsilon_2$ prevents division-by-zero errors and determines the
minimum strength that an input is required to have in order to affect the values
of $\mathbf{Y}$. As in all previous work with DIM, these parameters have been given
small values compared to typical values of $\mathbf{Y}$ and $\mathbf{R}$, such that they have
negligible effects on the steady-state values of $\mathbf{R}$, $\mathbf{E}$ and $\mathbf{Y}$. To
determine these steady-state values, all elements of $\mathbf{Y}$ were initially set to
zero, and \autorefs{eq-pcbc_r} to~\ref{eq-pcbc_y} were then iteratively updated
with the new values of $\mathbf{Y}$ calculated by \autoref{eq-pcbc_y} substituted into
\autorefs{eq-pcbc_r} and~\ref{eq-pcbc_y}. This iterative process was terminated
after 10 iterations for the experiments reported in
\autorefs{sec-correspondence_bbs} and \ref{sec-correspondence_vgg} (where the
number of templates varied between 1 and 31) and 20 iterations for the
experiments reported in \autoref{sec-template_matching_vgg} (where 70 templates
were used). It is necessary to increase the number of iterations used as the
number of templates increases as the competition between the templates takes
longer to be resolved. However, for a fixed number of templates the results were
not particularly sensitive to the exact number of iterations used (see
\autoref{sec-parameter_sensitivity}). The values in array $\mathbf{Y}_j$ produced at
the end of the iterative process were used as a measure of the similarity
between template $j$ and the input image over all spatial locations.
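As an illustrative sketch only (the reference implementation is the authors' MATLAB code, not this), the three update equations, the template normalisation, and the iterative schedule described above might be expressed in Python with NumPy/SciPy as follows; variable names mirror the symbols in the text:

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

def dim_match(X, templates, iterations=10, eps2=1e-2):
    """Sketch of the DIM update equations (1)-(3).

    X: list of k non-negative input channels (equal-sized 2D arrays).
    templates: list of p templates, each a list of k 2D arrays.
    Returns the list of p similarity arrays Y_j.
    """
    k = len(X)
    # Normalise: each w_j sums to one; v_j has a maximum value of one
    # (both computed over all k channels of the template).
    w, v = [], []
    for tpl in templates:
        total = sum(c.sum() for c in tpl)
        peak = max(c.max() for c in tpl)
        w.append([c / total for c in tpl])
        v.append([c / peak for c in tpl])
    # epsilon_1 = epsilon_2 / max(sum_j v_ji), following the text.
    eps1 = eps2 / max(sum(vj[i] for vj in v).max() for i in range(k))
    Y = [np.zeros(X[0].shape) for _ in templates]
    for _ in range(iterations):
        # Eq. (1): reconstruction R_i = sum_j (v_ji * Y_j)   [convolution]
        R = [sum(convolve2d(Yj, vj[i], mode='same') for Yj, vj in zip(Y, v))
             for i in range(k)]
        # Eq. (2): residual E_i = X_i / max(R_i, eps2)   [element-wise]
        E = [X[i] / np.maximum(R[i], eps2) for i in range(k)]
        # Eq. (3): Y_j <- max(Y_j, eps1) .* sum_i (w_ji cross-correlated with E_i)
        Y = [np.maximum(Yj, eps1) *
             sum(correlate2d(E[i], wj[i], mode='same') for i in range(k))
             for Yj, wj in zip(Y, w)]
    return Y
```

For example, with a single grayscale channel containing one copy of a pattern, the similarity array returned after a few iterations peaks at the pattern's location while competition suppresses partially-overlapping responses elsewhere.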
\subsection{Post-Processing}
\label{sec-methods_postproc}
While the similarity array $\mathbf{Y}_j$ for template $j$ is sparse, it is not always
the case that the best matching location is represented by a single element with
a large value. Often the best matching location will be represented by a small
population of neighbouring elements with high values. The size of this
population is usually proportional to the size of the template. To sum the
similarity values within neighbourhoods the similarity array produced by each
template was convolved with a binary-valued kernel that contained ones within an
elliptically shaped region with width and height equal to $\lambda$ times the
width and height of the template. The size of the region over which values were
summed was restricted to be at least one pixel, so that a small $\lambda$ and/or
a small template size did not result in a summation of zero pixels, but would
instead result in an output that was the same as the original $\mathbf{Y}_j$. A value of
$\lambda=0.025$ was used in all experiments unless otherwise stated. However,
the results were not particularly sensitive to the value of $\lambda$ and
similar results were obtained with a range of different values (see
\autoref{sec-parameter_sensitivity}).
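A minimal sketch of this neighbourhood summation is given below (illustrative Python/SciPy; the exact construction of the elliptical kernel is an assumption consistent with the description above):

```python
import numpy as np
from scipy.signal import convolve2d

def neighbourhood_sum(Y, template_h, template_w, lam=0.025):
    """Sum similarity values within a small elliptical neighbourhood.

    The binary kernel is an ellipse whose height and width are lam times
    the template's height and width, restricted to be at least one pixel,
    so that a small template (or small lam) leaves Y unchanged.
    """
    h = max(1, int(round(lam * template_h)))
    w = max(1, int(round(lam * template_w)))
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ry, rx = max(cy, 0.5), max(cx, 0.5)
    # binary elliptical mask (a single 1 when h == w == 1)
    kernel = (((ys - cy) / ry) ** 2 + ((xs - cx) / rx) ** 2 <= 1.0).astype(float)
    return convolve2d(Y, kernel, mode='same')
```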
\subsection{Implementation}
Open-source software, written in MATLAB, which performs all the experiments
described in this article is available for download from:
\url{http://www.corinet.org/mike/Code/dim_patchmatching.zip}. This code
compares the performance of DIM to that of several other methods. Firstly,
the BBS method \citep{Dekel_etal15,Oron_etal18} which was implemented using the
code supplied by the authors of BBS\footnote{\url{http://people.csail.mit.edu/talidekel/Code/BBS_code_and_data_release_v1.0.zip}}. Secondly,
the DDIS algorithm \citep{Talmi_etal17} which was implemented using the code
provided by the authors of DDIS\footnote{\url{https://github.com/roimehrez/DDIS}}.
Finally, traditional template matching using ZNCC as the similarity metric which
was implemented using the MATLAB command \texttt{normxcorr2}. This command is part of the
MATLAB Image Processing Toolbox.
As described in \autoref{sec-methods_preproc}, the proposed method can be
applied to grayscale or colour images. For colour images, the best results were
found using images in CIELab colour space. For a fair comparison, all other
algorithms were also applied to colour images. Specifically, BBS was
applied to RGB images \citep[as in][]{Dekel_etal15}, and DDIS was also applied to RGB
images \citep[as in][]{Talmi_etal17}. ZNCC was applied to HSV images, as this was
found to produce better results than either RGB or CIELab.
To apply ZNCC to colour images the similarity
values were calculated using ZNCC independently for each colour channel, and
then these values were summed to produce the final measure of similarity.
\section{Results}
\label{sec-results}
\subsection{Correspondence using the Best Buddies Similarity Benchmark}
\label{sec-correspondence_bbs}
This section describes an experiment in which a template is extracted from one
image and is used to find the best matching location in a second image of the
same scene. Specifically, each image pair consists of two frames from a
video. Within each image the bounding-box of a target object has been defined
manually. Between the images in each pair, the image patch corresponding to the
target changes in appearance due to variations in viewpoint and lighting
conditions, changes in the pose of the target, partial occlusions, non-rigid
deformations of the target, and due to changes in the surrounding context and
background. In total there are 105 image pairs taken from 35 colour videos that
have previously been used as an object tracking benchmark
\citep{Wu_etal13b}. This dataset\footnote{\label{footnote-BBS}
\url{http://people.csail.mit.edu/talidekel/Best-Buddies Similarity.html}.
This dataset uses images that are 20 frames apart. A similar dataset with 25,
50, or 100 frames between pairs of images was used to test the BBS algorithm in
\citep{Oron_etal18}. However, this alternative dataset has not been made
publicly available.}
was originally prepared to evaluate the performance of the BBS template matching algorithm
\citep{Dekel_etal15}. Experimental procedures equivalent to those used in
\citet{Dekel_etal15} have also been used here. Specifically, the bounding-box in
the first image of each pair was used to define a template. The template
matching method was then used to calculate the similarity between this template
and every location in the second image of the same pair. The single location
which had the highest similarity was used as the predicted location of the
target. A bounding-box, the same size as the one in the first image, was defined
around this predicted location and the overlap between this and the bounding-box
provided in the ground-truth data was determined, and used as a measure of the
accuracy of the template matching. This bounding box overlap was calculated, in
the standard way, as the intersection over union (IoU) of the two bounding boxes.
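For reference, the IoU of two bounding boxes given in (x, y, width, height) form can be computed as follows; this is the standard calculation, sketched here in Python:

```python
def bbox_iou(a, b):
    """Intersection over union of two boxes given as (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    # overlap extents along each axis (zero if the boxes do not intersect)
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```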
\begin{figure*}[tbp]
\begin{center}
\includegraphics[width=0.22\textwidth,trim=0 0 350 0, clip]{bbs_data_pair10_DIM4}
\includegraphics[width=0.22\textwidth,trim=0 0 350 0, clip]{bbs_data_pair22_DIM4}
\includegraphics[width=0.22\textwidth,trim=0 0 350 0, clip]{bbs_data_pair81_DIM4}
\includegraphics[width=0.22\textwidth,trim=0 0 350 0, clip]{bbs_data_pair88_DIM4}
\rotatebox{90}{\hspace*{10mm}\textcolor{white}{{\small(1 tpl)}}}\hfill
\includegraphics[width=0.22\textwidth,trim=175 0 175 0, clip]{bbs_data_pair10_DIM4}
\includegraphics[width=0.22\textwidth,trim=175 0 175 0, clip]{bbs_data_pair22_DIM4}
\includegraphics[width=0.22\textwidth,trim=175 0 175 0, clip]{bbs_data_pair81_DIM4}
\includegraphics[width=0.22\textwidth,trim=175 0 175 0, clip]{bbs_data_pair88_DIM4}
\rotatebox{90}{\hspace*{10mm}\textcolor{white}{{\small(1 tpl)}}}\hfill
\includegraphics[width=0.22\textwidth,trim=350 0 0 0, clip]{bbs_data_pair10_ZNCC}
\includegraphics[width=0.22\textwidth,trim=350 0 0 0, clip]{bbs_data_pair22_ZNCC}
\includegraphics[width=0.22\textwidth,trim=350 0 0 0, clip]{bbs_data_pair81_ZNCC}
\includegraphics[width=0.22\textwidth,trim=350 0 0 0, clip]{bbs_data_pair88_ZNCC}
\rotatebox{90}{\hspace*{9.5mm}ZNCC\textcolor{white}{{\small(1 tpl)}}}\hfill
\includegraphics[width=0.22\textwidth,trim=350 0 0 0, clip]{bbs_data_pair10_BBS}
\includegraphics[width=0.2201\textwidth,trim=350 0 0 0, clip]{bbs_data_pair22_BBS}
\includegraphics[width=0.2201\textwidth,trim=350 0 0 0, clip]{bbs_data_pair81_BBS}
\includegraphics[width=0.22\textwidth,trim=350 0 0 0, clip]{bbs_data_pair88_BBS}
\rotatebox{90}{\hspace*{11mm}BBS\textcolor{white}{{\small(1 tpl)}}}\hfill
\includegraphics[width=0.22\textwidth,trim=350 0 0 0, clip]{bbs_data_pair10_DDIS}
\includegraphics[width=0.22\textwidth,trim=350 0 0 0, clip]{bbs_data_pair22_DDIS}
\includegraphics[width=0.22\textwidth,trim=350 0 0 0, clip]{bbs_data_pair81_DDIS}
\includegraphics[width=0.22\textwidth,trim=350 0 0 0, clip]{bbs_data_pair88_DDIS}
\rotatebox{90}{\hspace*{10mm}DDIS\textcolor{white}{{\small(1 tpl)}}}\hfill
\includegraphics[width=0.22\textwidth,trim=350 0 0 0, clip]{bbs_data_pair10_DIM0}
\includegraphics[width=0.22\textwidth,trim=350 0 0 0, clip]{bbs_data_pair22_DIM0}
\includegraphics[width=0.22\textwidth,trim=350 0 0 0, clip]{bbs_data_pair81_DIM0}
\includegraphics[width=0.22\textwidth,trim=350 0 0 0, clip]{bbs_data_pair88_DIM0}
\rotatebox{90}{\hspace*{1.5mm}DIM {\small(1 template)}}\hfill
\includegraphics[width=0.22\textwidth,trim=350 0 0 0, clip]{bbs_data_pair10_DIM4}
\includegraphics[width=0.22\textwidth,trim=350 0 0 0, clip]{bbs_data_pair22_DIM4}
\includegraphics[width=0.22\textwidth,trim=350 0 0 0, clip]{bbs_data_pair81_DIM4}
\includegraphics[width=0.22\textwidth,trim=350 0 0 0, clip]{bbs_data_pair88_DIM4}
\rotatebox{90}{\hspace*{11mm}DIM\textcolor{white}{{\small(1 tpl)}}}\hfill
\caption{Example results for different algorithms when applied to the task
of finding corresponding locations in 105 pairs of colour video frames. Images
in the first row show the target templates (outlined in yellow) in the
initial frame of the video. Images in the second row show the location of
the target identified by the DIM algorithm (outlined in cyan) and the
location of the target defined by the ground-truth data (outlined in
yellow) in a later frame of the same video. The third to seventh rows show
the similarity of the target template to the second image as determined by
(from row 3 to 7): ZNCC, BBS, DDIS, DIM with no additional templates, and
DIM with up to four additional templates chosen by maximum
correlation. Darker pixels correspond to stronger similarity. Note,
matching was performed using colour templates and colour images, but for
clarity the images are shown in grayscale in rows 1 and 2.}
\label{fig-bss_data_examples}
\end{center}
\end{figure*}
Results for typical example image pairs are shown in
\autoref{fig-bss_data_examples}. The images on the top row are the first images
in each of four image pairs. The yellow box superimposed on each image shows the
target image patch. The second row shows the second image in each pair. Two
boxes are superimposed on these second images. The yellow box shows the location
corresponding to the target defined by the ground-truth data. The cyan box shows
the location of the target predicted by the DIM algorithm. The remaining rows
show the similarity between the template and the second image calculated by
several different methods. The strongest measures of similarity are represented
by the darkest pixels. The similarity array calculated by ZNCC
(\autoref{fig-bss_data_examples} third row) is dense. The similarity arrays
produced by BBS (\autoref{fig-bss_data_examples} fourth row), DDIS
(\autoref{fig-bss_data_examples} fifth row), and DIM
(\autoref{fig-bss_data_examples} sixth and seventh rows), become increasingly
sparse, and hence, the peaks become increasingly well-localised and easily
distinguishable from non-matching locations.
Two results are shown for the DIM algorithm. The first result
(\autoref{fig-bss_data_examples} sixth row) shows the similarity of the target
template to the second image when only the target template was used by DIM. The
second result for the DIM algorithm (\autoref{fig-bss_data_examples} seventh row) shows the
similarity calculated when up to four additional templates, the same size as the
bounding-box defining the target, were also extracted from the first image and
used as templates for non-target locations by DIM. These additional templates
were extracted from around locations where the correlation between the target
template and the image was strongest, excluding locations where the additional
templates would overlap with each other or with the bounding-box defining the
target. The exact number of additional templates varied between different image
pairs, and in some cases over the 105 image pairs, there were zero additional
templates due to the target bounding-box being large compared to the size of the
image. For the particular examples shown in \autoref{fig-bss_data_examples} all
results were produced using four additional templates, except for the
right-most image, where two additional templates were used. As can be seen by
comparing the sixth and seventh rows of \autoref{fig-bss_data_examples},
including additional, non-target, templates tends to increase the sparsity of
the similarity array produced by DIM for the target template. This increased
sparsity results from the increased competition when there are more templates
(see \autoref{sec-methods_matching}).
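This greedy selection procedure can be sketched in a few lines. The following Python fragment is illustrative only, not the code used in the experiments; the (row, col, height, width) box representation and the function name are our own:

```python
import numpy as np

def select_additional_templates(similarity, target_box, tpl_shape, max_extra=4):
    # Greedy sketch of the selection described in the text: repeatedly take
    # the location where the correlation with the target template is
    # strongest, rejecting candidates whose box would overlap the target
    # bounding-box or a previously chosen box.
    # Boxes are (row, col, height, width); names are illustrative only.
    h, w = tpl_shape
    chosen = []
    sim = similarity.astype(float).copy()

    def overlaps(a, b):
        ar, ac, ah, aw = a
        br, bc, bh, bw = b
        return not (ar + ah <= br or br + bh <= ar or
                    ac + aw <= bc or bc + bw <= ac)

    while len(chosen) < max_extra and np.isfinite(sim).any():
        r, c = np.unravel_index(np.argmax(sim), sim.shape)
        sim[r, c] = -np.inf               # never revisit this location
        box = (int(r), int(c), h, w)
        if overlaps(box, target_box) or any(overlaps(box, b) for b in chosen):
            continue
        chosen.append(box)
    return chosen
```

In practice the loop stops as soon as the requested number of non-overlapping boxes has been found, which is why the number of additional templates can be smaller than the maximum for large target bounding-boxes.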
\begin{figure}[tbp]
\begin{center}
\subfigure[]{\includegraphics[scale=0.4]{bbs_data_success_ALL}}
\subfigure[]{\includegraphics[scale=0.4,trim=55 0 0 0, clip]{bbs_data_success_top7_ALL}}
\caption{The performance of different algorithms when applied to the task of finding
corresponding locations in 105 pairs of colour video frames. Each curve shows the
fraction of targets for which the overlap between the ground-truth and
predicted bounding-boxes exceeded the threshold indicated on the x-axis. (a)
Results when using the target location predicted by the maximum
similarity. (b) Results when using the maximum overlap predicted by the
seven highest similarity values. The results for DIM are produced using up
to four additional templates chosen by maximum correlation.}
\label{fig-bbs_data_success}
\end{center}
\end{figure}
The results across all 105 image pairs are summarised in
\autoref{fig-bbs_data_success}(a). This graph shows the proportion of image
pairs for which the overlap between the ground-truth and the predicted
bounding-box (the IoU) exceeded a threshold for a range of different threshold
values. It can be seen that the success rate of DIM exceeds that of the other
methods at all thresholds. Following the methods used in \citet{Dekel_etal15},
the overall accuracy of each method was summarised using the area under the
success curve (AUC). These quantitative results are shown in
\autoref{tab-bbs_data_AUC}, and compared to the published results for several
additional algorithms that have been evaluated on the same dataset.
\citet{Dekel_etal15} also assessed the accuracy of BBS by taking the largest
overlap across the seven locations with the highest similarity values. The same
analysis was done here by finding the seven largest peaks in the similarity
array for the target template, ignoring values that were not local maxima. These
results are presented in \autoref{fig-bbs_data_success}(b). It can be seen that
DIM produces similar results to DDIS, but significantly better performance than
both ZNCC and BBS over a wide range of threshold values.
\begin{table}[tbp]
\begin{center}
\renewcommand{\baselinestretch}{1}\large\normalsize
\begin{tabular}{ll} \hline
\hspace*{3mm} {\bf Algorithm} & {\bf AUC} \\
\hline
\emph{Baseline}\\
\hspace*{3mm} SSD & 0.43 \citep{Dekel_etal15} \\
\hspace*{3mm} NCC & 0.47 \citep{Dekel_etal15} \\
\hspace*{3mm} SAD & 0.49 \citep{Dekel_etal15} \\
\hspace*{3mm} ZNCC & 0.54\\
\emph{State-of-the-art}\\
\hspace*{3mm} BBS \citep{Dekel_etal15,Oron_etal18} & 0.55 \citep{Dekel_etal15}\\
\hspace*{3mm} CNN (2ch-deep) \citep{ZagoruykoKomodakis15,ZagoruykoKomodakis17} & 0.59 \citep{Kim_etal17}\\
\hspace*{3mm} CoTM \citep{Kat_etal18} & 0.62$^*$ \citep{Kat_etal18} \\
\hspace*{3mm} CNN (SADCFE) \citep{Kim_etal17} & 0.63 \citep{Kim_etal17}\\
\hspace*{3mm} CNN (2ch-2stream) \citep{ZagoruykoKomodakis15,ZagoruykoKomodakis17} & 0.63 \citep{Kim_etal17}\\
\hspace*{3mm} DDIS \citep{Talmi_etal17} & 0.64 \citep{Kat_etal18} \\
\emph{Proposed}\\
\hspace*{3mm} DIM (1 template) & 0.58 \\
\hspace*{3mm} DIM (1 to 4 templates) & {\bf 0.69} \\
\hline
\end{tabular}
\caption{Quantitative comparison of results for different algorithms when
applied to the task of finding corresponding locations in 105 pairs of colour video
frames. Results are given in terms of the area under the success curve
(AUC). $^*$We failed to reproduce the result for CoTM given in
\citep{Kat_etal18}: using the code released by the
authors\protect\footnotemark\xspace to perform the matching, together with our
code to run the benchmark, produced an AUC of 0.54. Using the same code for
running the benchmark and the code supplied by the original authors it was
possible to reproduce the published results for both BBS and DDIS.}
\label{tab-bbs_data_AUC}
\end{center}
\end{table}
\footnotetext{\url{http://www.eng.tau.ac.il/~avidan/}}
From \autoref{tab-bbs_data_AUC} it can be seen that the performance of DIM is
strongly affected by the inclusion of additional templates extracted from other
parts of the first image. This is to be expected, as the proposed method
performs explaining away, and the accuracy of this inference process will
increase when additional templates compete with the target template such that
regions of the image that should have low similarity with the target template
are explained-away by the additional templates. Without any additional templates,
the performance of DIM is more similar to that of ZNCC; this is also to be
expected, as both methods compare the relative intensity values in the template
and each patch of the image.
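For reference, the baseline similarity measure, ZNCC, can be computed as in the following naive sketch (single-channel Python/NumPy; the experiments used colour templates and images, and faster frequency-domain implementations exist):

```python
import numpy as np

def zncc(template, patch):
    # Zero-mean normalised cross-correlation between two equal-size
    # arrays; returns a value in [-1, 1].
    t = template.astype(float) - template.mean()
    p = patch.astype(float) - patch.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum())
    return float((t * p).sum() / denom) if denom else 0.0

def zncc_map(image, template):
    # Naive O(MNmn) sliding-window evaluation over every valid offset.
    m, n = template.shape
    out = np.zeros((image.shape[0] - m + 1, image.shape[1] - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = zncc(template, image[i:i + m, j:j + n])
    return out
```

The subtraction of the patch mean and the division by the patch norm are what give ZNCC its tolerance to additive and multiplicative intensity changes, but also produce the dense similarity arrays seen in the third row of the figure above.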
\begin{figure}[tbp]
\begin{center}
\subfigure[]{\includegraphics[scale=0.4]{bbs_data_AUC_vs_additional_templates_DIM}\label{fig-bbs_data_num_templates}}
\subfigure[]{\includegraphics[scale=0.4,trim=70 0 0 0, clip]{bbs_data_AUC_vs_additional_templates_DIM_backgroundLeuven}\label{fig-bbs_data_num_templates_background}}
\caption{The effect of the maximum number of additional templates, and their
selection method, on the performance of the proposed method, DIM, when
applied to finding corresponding locations in 105 pairs of colour video
frames. Results are shown when the additional templates were selected from
(a) the first image in each pair, and (b) an unrelated image.}
\label{fig-bbs_data_num_templates_both}
\end{center}
\end{figure}
A number of different methods of choosing the additional, non-target, templates
used by DIM were investigated. \Autoref{fig-bbs_data_num_templates} shows the
AUC obtained for different maximum numbers of additional templates, when these
templates were selected from locations where the correlation between the target
template and the image was strongest, from locations selected by the SIFT
interest point detector, and locations chosen at random. In each case,
additional templates were chosen such that they did not overlap with each other
or with the bounding-box defining the target so as to ensure a diversity of
additional templates. The exact number of additional templates varied between
different image pairs, depending on how many non-overlapping regions, equal in
size to the target template, could fit within the first image in each pair.
It can be seen that the performance of DIM increased as the number of non-target
templates increased, and that for a large number of additional templates the AUC
plateaued between 0.66 and 0.69 regardless of how the additional templates were
selected. Furthermore, for all three methods of selecting additional templates
the performance of DIM exceeded that of the current state-of-the-art method,
DDIS, when two or more additional templates were used. It can also be observed
from \autoref{fig-bbs_data_num_templates} that the initial increase in performance
with the number of additional templates was faster when the non-target templates
were chosen by maximum correlation. In other words, the best performance was
achieved with fewer additional templates if those additional templates were
chosen so that they were the regions of the first image that were most similar to,
and hence most easily mistaken for, the target region.
One concern is that the benefits of performing explaining away will disappear
when the target appears in a completely different context. In other words, if
the background of the first image is different from that for the second image,
then additional, non-target, templates extracted from the first image will be
ineffective at competing with the target template when they are matched to the
second image. To explore this issue, the experiment described in the preceding
paragraph was repeated, but the additional templates were taken from an
unrelated image (the first ``Leuven'' image from the Oxford VGG Affine Covariant
Features Dataset \citep{MikolajczykSchmid05,Mikolajczyk_etal05}, see
\autoref{sec-correspondence_vgg}). The results of this experiment are shown in
\autoref{fig-bbs_data_num_templates_background}. As expected, there was a
deterioration in performance. However, as long as sufficient ($\ge20$)
additional templates were used, the performance of DIM (an AUC of $\ge0.64$) was
still as good as, or better than, all other methods that have been applied to
this benchmark (see \autoref{tab-bbs_data_AUC}).
\subsection{Correspondence using the Oxford VGG Affine Covariant Features Dataset}
\label{sec-correspondence_vgg}
This section describes an experiment similar to that in
\autoref{sec-correspondence_bbs}, but using the images from the Oxford VGG
Affine Covariant Features
Benchmark\footnote{\url{http://www.robots.ox.ac.uk/~vgg/research/affine/}}
\citep{MikolajczykSchmid05,Mikolajczyk_etal05}. This dataset has been widely
used to test the ability of interest point detectors to locate corresponding
points in two images. The dataset consists of eight image sequences (seven
colour and one grayscale). Each sequence consists of six images of the same
scene. These images differ in viewpoint (resulting in changes in perspective,
orientation, and scale), illumination/exposure, blur/de-focus and JPEG
compression. The ground-truth correspondences are defined in terms of
homographies (plane projective transformations) which relate any location in the
first image of each sequence to its corresponding location in the remaining five
images in the same sequence. In this experiment, templates were extracted from
the first image in each sequence and the best matching locations were found in
each of the remaining five images in the same sequence.
Images were scaled to half their original size to reduce the time taken to
perform this experiment. From the first image in each sequence templates were
extracted (as described in \autoref{sec-methods_preproc}). These templates were
extracted from around keypoints identified using the Harris corner detector. A
keypoint detector was used to identify suitable locations for matching in order
to exclude locations that no algorithm could be expected to match, such as
regions of uniform colour and luminance. The results were not dependent on the
particular keypoint detector used. From the first image in each sequence 25
keypoints were chosen after excluding those for which: 1) the bounding box
defining the extent of the template was not entirely within the image; 2) the
bounding box around the corresponding location in the query image was not
entirely within the image; 3) the Manhattan distance between the keypoints was
less than 24 pixels, or less than the size of the bounding box defining the
extent of the template (whichever distance was smaller). These criteria for
rejecting keypoints ensured that the templates did not fall off the edge of
either image in each pair (criteria 1 and 2), and increased the diversity of
image features that were being matched (criterion 3).
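These rejection rules can be sketched as follows. The fragment below is illustrative Python covering criteria 1 and 3 only (criterion 2 additionally requires the ground-truth homography into the query image, which is omitted here); keypoints are assumed to arrive ordered by detector response and templates are assumed square:

```python
def filter_keypoints(keypoints, tpl_size, img_shape, max_keep=25):
    # Keep a keypoint only if a tpl_size-by-tpl_size box centred on it
    # lies entirely inside the image (criterion 1) and its Manhattan
    # distance to every previously kept keypoint is at least
    # min(24, tpl_size) (criterion 3).
    half = tpl_size // 2
    min_dist = min(24, tpl_size)
    H, W = img_shape
    kept = []
    for r, c in keypoints:
        if r - half < 0 or c - half < 0 or r + half >= H or c + half >= W:
            continue                     # template would fall off the edge
        if any(abs(r - kr) + abs(c - kc) < min_dist for kr, kc in kept):
            continue                     # too close to an accepted keypoint
        kept.append((r, c))
        if len(kept) == max_keep:
            break
    return kept
```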
For the DIM algorithm, no additional templates were used as the 25 templates
extracted from the first image competed with each other, and hence, for each
template the remaining 24 templates effectively acted as additional templates
representing non-target image features.
\begin{figure*}[tbp]
\begin{center}
\includegraphics[scale=0.4,trim=0 20 110 0, clip]{vgg_data_sequence_success_ZNCC_17px}
\includegraphics[scale=0.4,trim=20 20 110 0, clip]{vgg_data_sequence_success_ZNCC_33px}
\includegraphics[scale=0.4,trim=20 20 110 0, clip]{vgg_data_sequence_success_ZNCC_49px}
\rotatebox{90}{\hspace*{17mm}ZNCC}
\includegraphics[scale=0.4,trim=0 20 110 0, clip]{vgg_data_sequence_success_BBS_17px}
\includegraphics[scale=0.4,trim=20 20 110 0, clip]{vgg_data_sequence_success_BBS_33px}
\includegraphics[scale=0.4,trim=20 20 110 0, clip]{vgg_data_sequence_success_BBS_49px}
\rotatebox{90}{\hspace*{17mm}BBS}
\includegraphics[scale=0.4,trim=0 20 110 0, clip]{vgg_data_sequence_success_DDIS_17px}
\includegraphics[scale=0.4,trim=20 20 110 0, clip]{vgg_data_sequence_success_DDIS_33px}
\includegraphics[scale=0.4,trim=20 20 110 0, clip]{vgg_data_sequence_success_DDIS_49px}
\rotatebox{90}{\hspace*{17mm}DDIS}
\includegraphics[scale=0.4,trim=0 0 117 0, clip]{vgg_data_sequence_success_DIM_17px}
\includegraphics[scale=0.4,trim=20 0 117 0, clip]{vgg_data_sequence_success_DIM_33px}
\includegraphics[scale=0.4,trim=20 0 117 0, clip]{vgg_data_sequence_success_DIM_49px}
\rotatebox{90}{\hspace*{20mm}DIM}\\[1.5mm]
\includegraphics[width=0.8\textwidth,trim=13 0 0 0, clip]{stats_correspondence_vgg_data_legend} \\%legend
\caption{The performance of different algorithms when applied to the task of
finding corresponding locations across image sequences from the Oxford
VGG affine covariant features dataset (at half size with 25 templates per
image pair). Each curve shows the fraction of targets for which the
overlap between the ground-truth and predicted bounding-boxes exceeded the
threshold indicated on the x-axis. Results for different image sequences
are shown using different line styles, as indicated in the key. Each row
shows results for a different algorithm (from top to bottom): ZNCC, BBS,
DDIS, and DIM. Each column shows results for a different template size
(from left to right): 17-by-17, 33-by-33, and 49-by-49 pixels.}
\label{fig-vgg_data_success}
\end{center}
\end{figure*}
The success curves produced by each algorithm for each sequence are shown in
\autoref{fig-vgg_data_success}, for three different sizes of templates. For all
four methods the results generally improved as the template size increased.
Differences in appearance between images in the Bikes and Trees sequences are
primarily due to changes in image blur. It can be seen from
\autoref{fig-vgg_data_success} that all four algorithms produced some of their
strongest performances when matching locations on these images. The exception was
BBS which produced poor performance on the Trees sequence when the template size
was small. This is likely to be due to the metric used by BBS being insufficiently
discriminatory to distinguish distinct locations in the leaves of the
trees. Images in the Leuven sequence vary primarily in terms of illumination and
exposure. ZNCC and DIM accurately matched points across these images. In contrast,
DDIS and BBS showed very little tolerance to illumination changes. Differences in
appearance between images in the Wall and Graffiti sequences are primarily due
to changes in viewpoint. On the Wall sequence, ZNCC and DIM produced good
performance, while DDIS and BBS produced poor performance. In contrast, on the
Graffiti sequence DDIS produced the best performance of the four methods. These
differences are likely due to the Graffiti sequence having more distinctive
image regions, while the Wall images contain many similar-looking locations, as
they depict a brick texture. Images in the UBC sequence differ in their JPEG quality. It can
be seen from \autoref{fig-vgg_data_success} that all four algorithms produced
some of their strongest performances when matching locations on these images,
except ZNCC and BBS for the small template sizes where the performance was
mediocre. Differences in appearance between images in the Bark and Boat
sequences are primarily caused by changes in scale and in-plane rotation. Both
these sequences were among the most challenging for all four methods. However,
the performance for both DDIS and BBS improved as the template size increased.
\begin{table}[tbp]
\begin{center}
\renewcommand{\baselinestretch}{1}\large\normalsize
\begin{tabular}{lllll} \hline
{\bf Algorithm} & & {\bf AUC} & & \\
\multicolumn{2}{r}{patch size (pixels):}& 17-by-17 & 33-by-33 & 49-by-49 \\
\hline
ZNCC & & 0.4996 & 0.5937 & 0.6314 \\
BBS \citep{Dekel_etal15,Oron_etal18}& & 0.1782 & 0.3834 & 0.4747 \\
DDIS \citep{Talmi_etal17} & & 0.3952 & 0.4905 & 0.5334 \\
DIM (25 templates) & &{\bf 0.5591} &{\bf 0.6308} &{\bf 0.6569}\\
\hline
\end{tabular}
\caption{Quantitative comparison of results for different algorithms applied to
the task of finding corresponding locations across images in the Oxford
VGG affine covariant features dataset (at half size). Results are given in
terms of the area under the success curve (AUC) for all 1000 template-image
comparisons across all eight sequences in the dataset (25 templates per image
pair).}
\label{tab-vgg_data_AUC}
\end{center}
\end{table}
The overall accuracy of each method was summarised using the area under the
success curve (AUC) for all 1000 template matches performed (25 templates
matched to 5 images in each of 8 sequences). As the same number of template
matches were performed for each sequence, this is equivalent to the AUC for the
average of the individual success curves for the eight sequences shown in
\autoref{fig-vgg_data_success}. These quantitative results are shown in
\autoref{tab-vgg_data_AUC}. It can be seen that the proposed method, DIM,
significantly out-performs the other methods on this task. Surprisingly, both
BBS and DDIS are less accurate than the baseline method, ZNCC.
\subsection{Template Matching using the Oxford VGG Affine Covariant Features Dataset}
\label{sec-template_matching_vgg}
In the preceding two sections template matching algorithms have been evaluated
using correspondence tasks. In such tasks it is assumed that the target always
appears in the query image. However, in many real-world applications such an
assumption is not appropriate, as it is not known if the searched-for image
feature appears in the query image. In such applications it is, therefore, not
appropriate to select the single location with the highest similarity to the
template as the matching location. Instead, it is necessary to apply a threshold
to the similarity values to distinguish locations where the template matches the
image from those where it does not. To avoid counting multiple matches within a
small neighbourhood, it is also typically required that the similarity value be a
local maximum as well as exceeding the global threshold.
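This detection rule, a global threshold combined with a local-maximum test over the 8-neighbourhood, can be sketched as follows (illustrative NumPy code, not that used in the experiments; plateau values that tie with a neighbour are counted as maxima):

```python
import numpy as np

def detect_matches(similarity, threshold):
    # Pad with -inf so border pixels can still be local maxima, then
    # compare every pixel against its eight neighbours.
    padded = np.pad(similarity, 1, mode="constant", constant_values=-np.inf)
    centre = padded[1:-1, 1:-1]
    is_max = np.ones(similarity.shape, dtype=bool)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            shifted = padded[1 + dr:padded.shape[0] - 1 + dr,
                             1 + dc:padded.shape[1] - 1 + dc]
            is_max &= centre >= shifted
    rows, cols = np.where(is_max & (similarity > threshold))
    return list(zip(rows.tolist(), cols.tolist()))
```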
To evaluate the ability of the proposed method to perform template matching
under these conditions an experiment was performed using the colour images ({\it i.e.},\xspace
excluding the Boat sequence) from the Oxford VGG Affine Covariant Features
Benchmark \citep{MikolajczykSchmid05,Mikolajczyk_etal05}. Only the colour images
were used as in this experiment templates extracted from one sequence were
matched to images from the other sequences: it was therefore necessary to have
all templates and query images either in colour or grayscale.
Images were scaled to one-half their original size to reduce the time taken
to perform this experiment. From the first image in each sequence 10 templates
were extracted from around keypoints identified using the Harris corner detector
and using the same criteria as described in \autoref{sec-correspondence_vgg}. A
total of 70 templates were thus defined (10 for each of the 7 colour
sequences). All these 70 templates were matched to each of the 5 query images in each
sequence ({\it i.e.},\xspace to 35 colour images): a total of 2450 template-image comparisons in all.
For every location in the similarity array that was both a local maximum and
exceeded a global threshold, a bounding box, the same size as the template, was
defined in the query image. These locations found by the template matching
method were compared to the true location that matched that template in the
query image: if the template came from the first image in the same sequence, the
comparison was with a bounding box (the same size as the template) defined
around the transformed location of the keypoint from around which the template
had been extracted; if the template came from a different sequence then there
was no matching bounding box in the query image. If the two bounding boxes (one
predicted by template matching and one from the ground truth data), had an
overlap (IoU) of at least 0.5 this was counted as a true-positive. A
ground-truth bounding box not predicted by the template matching process was
counted as a false-negative, while matches found by the template matching
algorithm that did not correspond to the ground-truth bounding box (or multiple
matches to the same ground-truth) were counted as false-positives.
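The overlap measure used here is the standard intersection-over-union, sketched below for boxes represented as (row, col, height, width); the representation and function name are ours, for illustration only:

```python
def iou(box_a, box_b):
    # Intersection-over-union of two axis-aligned bounding boxes;
    # a detection counts as a true positive when iou(...) >= 0.5.
    ra, ca, ha, wa = box_a
    rb, cb, hb, wb = box_b
    inter_h = max(0, min(ra + ha, rb + hb) - max(ra, rb))
    inter_w = max(0, min(ca + wa, cb + wb) - max(ca, cb))
    inter = inter_h * inter_w
    union = ha * wa + hb * wb - inter
    return inter / union if union else 0.0
```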
\begin{figure*}[tbp]
\begin{center}
\subfigure[]{\includegraphics[scale=0.4,trim=0 0 0 0, clip]{template_matching_vgg_data_halfsize_ALL_17px.pdf}}
\subfigure[]{\includegraphics[scale=0.4,trim=0 0 0 0, clip]{template_matching_vgg_data_halfsize_ALL_33px.pdf}}
\subfigure[]{\includegraphics[scale=0.4,trim=0 0 0 0, clip]{template_matching_vgg_data_halfsize_ALL_49px.pdf}}
\caption{The performance of different algorithms when applied to performing
template matching in colour images from the Oxford VGG affine covariant
features dataset (at half size). Each curve shows the trade-off between
precision and recall for different thresholds applied to the similarity
values. A match was considered correct if the bounding box overlap between the
predicted location and the true location was at least 0.5. Results are shown
for three different sizes of template (a) 17-by-17 pixels, (b) 33-by-33
pixels, and (c) 49-by-49 pixels.}
\label{fig-vgg_data_precision_recall}
\end{center}
\end{figure*}
The total number of true-positives ($TP$), false-positives ($FP$), and
false-negatives ($FN$) were found for all 70 templates when matched to all 35
query images. These values were then used to calculate recall
($\frac{TP}{TP+FN}$) and precision ($\frac{TP}{TP+FP}$). By varying the global
threshold used to define a match, precision-recall curves were plotted to show
how detection accuracy varied with
threshold. \Autoref{fig-vgg_data_precision_recall} shows the precision-recall
curves for each method for three different sizes of templates. The performance
of each algorithm was summarised by calculating the f-score
($=2\frac{recall \cdot precision}{recall+precision}=\frac{2TP}{2TP+FP+FN}$) at the
threshold that gave the highest value. The f-score measures the best trade-off
between precision and recall. The f-scores for each algorithm are shown in
\autoref{tab-vgg_data_fscores}.
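The threshold sweep used to find the highest f-score can be sketched as follows. This fragment is illustrative only: it assumes one candidate score per possible match together with a boolean ground-truth label, whereas the full evaluation also involves the local-maximum test and IoU-based matching described above:

```python
def best_fscore(scores, labels, thresholds):
    # Sweep a global threshold over candidate match scores, count
    # TP/FP/FN against boolean ground-truth labels, and return the
    # highest f-score 2TP / (2TP + FP + FN).
    n_positives = sum(labels)
    best = 0.0
    for t in thresholds:
        tp = sum(1 for s, l in zip(scores, labels) if s >= t and l)
        fp = sum(1 for s, l in zip(scores, labels) if s >= t and not l)
        fn = n_positives - tp
        denom = 2 * tp + fp + fn
        if denom:
            best = max(best, 2 * tp / denom)
    return best
```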
\begin{table}[tbp]
\begin{center}
\renewcommand{\baselinestretch}{1}\large\normalsize
\begin{tabular}{lllll} \hline
{\bf Algorithm} & & {\bf f-score} & & \\
\multicolumn{2}{r}{patch size (pixels):} & 17-by-17 & 33-by-33 & 49-by-49 \\
\hline
ZNCC & & 0.2842 & 0.5508 & 0.5493 \\
BBS \citep{Dekel_etal15,Oron_etal18} & & 0.0822 & 0.3704 & 0.4579 \\
DDIS \citep{Talmi_etal17} & & 0.5198 & 0.5355 & 0.4959 \\
DIM (70 templates) & & {\bf 0.6542} & {\bf 0.7230} & {\bf 0.7513} \\
\hline
\end{tabular}
\caption{Quantitative comparison of results for different algorithms applied to
the task of template matching in colour images from the Oxford VGG affine
covariant features dataset (at half size). Results are given in terms of the
highest f-score for all 2450 template-image comparisons across the seven
colour sequences in the dataset (70 templates per image).}
\label{tab-vgg_data_fscores}
\end{center}
\end{table}
From both \autoref{fig-vgg_data_precision_recall} and
\autoref{tab-vgg_data_fscores} it can be seen that the proposed algorithm, DIM,
had the best performance. The performance of BBS improved as the template size
increased, but at all sizes the performance was well below that of the baseline
method, ZNCC.
This suggests that the similarity values calculated by BBS vary widely in
magnitude such that true and false matches can not be reliably distinguished.
The performance of DDIS was similar for all template sizes, but was only superior
to that of ZNCC at the smallest size.
\subsection{Parameter Sensitivity}
\label{sec-parameter_sensitivity}
The proposed method employs a number of parameters:
\begin{itemize}\itemsep0em
\item the number of additional templates used;
\item the size of the image patches;
\item the standard deviation, $\sigma$, of the Gaussian mask used to pre-process the images;
\item the value of $\epsilon_1$ in equation \autoref{eq-pcbc_y};
\item the value of $\epsilon_2$ in equation \autoref{eq-pcbc_e};
\item The scale factor, $\lambda$, used to determine the size of the
elliptical region used to sum neighbouring similarity values;
\item the number of iterations performed by the DIM algorithm.
\end{itemize}
The preceding experiments have already explored the influence of some of these
parameters, specifically, the effects of varying the number of additional
templates is shown in \autoref{fig-bbs_data_num_templates_both}, and the
experiments in \autorefs{sec-correspondence_vgg} and
\ref{sec-template_matching_vgg} have examined the effects of varying the size
of the image patch. Additional experiments were carried out to measure the
sensitivity of the proposed algorithm to the other parameters. These
experiments applied the algorithm to finding corresponding locations across
105 pairs of colour video frames from the BBS dataset (as in
\autoref{sec-correspondence_bbs}). In each experiment one parameter was
altered at a time while all other parameters were kept fixed at their default
values. The results are shown in \autoref{tab-parameter_sensitivity}.
It can be seen that the value of $\sigma$ used to pre-process the images could be
increased or decreased by a factor of two, and the algorithm still produced
performance on this task that exceeded the previous state-of-the-art. However,
increasing or decreasing this parameter further had a detrimental effect on
performance. This is not surprising as when $\sigma$ is too large or too small
$\bar{I}$ becomes a poor estimate of the local image intensity: in the limit
$\bar{I}$ becomes equal to the average intensity of the whole image (when
$\sigma$ is very large), or $\bar{I}=I$ (when $\sigma$ is very small). In the
latter case the input to the template matching method becomes an image where all
pixels have a value of zero.
The algorithm was tolerant to large changes in $\epsilon_1$, and
$\epsilon_2$. However, when $\epsilon_2$ was increased by a factor of 10 from
its default value, performance deteriorated. This is because this large value of
$\epsilon_2$ is significant compared to the values of $\mathbf{R}$ (and $\mathbf{X}$), and this
causes the DIM algorithm to fail to accurately reconstruct its input, and hence,
has non-negligible effects on the steady-state values of $\mathbf{R}$, $\mathbf{E}$ and $\mathbf{Y}$.
Using a very small value of $\lambda$ (which is equivalent to skipping the
post-processing stage described in \autoref{sec-methods_postproc}) resulted in an
AUC of 0.687. As shown in \autoref{tab-parameter_sensitivity}, larger values of
$\lambda$ were beneficial, but if $\lambda$ became too large performance
deteriorated. This is because when the summation region is large, small
similarity values scattered across a large region of the image can be summed-up
to produce what appears to be a strong match from multiple, unrelated, weak
matches with the template.
At the end of the first iteration of the DIM algorithm the similarity values are
given by: $\mathbf{Y}_j = \frac{\epsilon_1}{\epsilon_2} \odot \sum_{i=1}^{k}
\left(\mathbf{w}_{ji} \star \mathbf{X}_i\right)$, {\it i.e.},\xspace the cross-correlation between the
templates and the pre-processed image. Hence, unsurprisingly, when only one
iteration was performed, performance was very poor and similar to simple
correlation-based methods like NCC (compare \autoref{tab-parameter_sensitivity}
row ``iterations'' and column ``$\div 10$'' with \autoref{tab-bbs_data_AUC} row
``NCC''). Two iterations were also insufficient for the DIM algorithm to find an
accurate and sparse representation of the image. However, with between 5 and 50
iterations the proposed method produced accurate results that were consistently
equal to or better than those of the previous state-of-the-art. Performance
deteriorated when a very large number of iterations was performed. However, this
can be offset by increasing the value of $\lambda$. For example, using 100
iterations and $\lambda=0.075$ produced an AUC of 0.666. This can be explained
by the similarity values becoming sparser as the number of iterations increases,
allowing a larger summation region to be used without such a risk of integrating
across unrelated similarity values.
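The behaviour described above can be illustrated with a minimal, non-convolutional sketch of a DIM-style update loop. This is a hedged reconstruction consistent with the first-iteration formula quoted above and the default $\epsilon_1$ listed in \autoref{tab-parameter_sensitivity}; it is not the authors' implementation, which applies 2-D convolution and cross-correlation over colour channels and normalises the feedforward and feedback weights differently:

```python
import numpy as np

def dim_iterate(x, W, iterations=10, eps2=0.01):
    # Minimal non-convolutional sketch of a DIM-style update loop:
    #   r = V^T y               (reconstruction of the input)
    #   e = x / (eps2 + r)      (element-wise divisive error)
    #   y = (eps1 + y) * (W e)  (multiplicative similarity update)
    # Rows of W are non-negative templates; x is the pre-processed input.
    # With y initialised to zero, the first iteration reduces to
    # (eps1 / eps2) * (W x), matching the formula in the text.
    # For simplicity the raw templates are reused as feedforward weights.
    V = W / W.sum(axis=1, keepdims=True)     # normalised reconstruction weights
    eps1 = eps2 / V.sum(axis=0).max()        # default value from the text
    y = np.zeros(W.shape[0])
    for _ in range(iterations):
        r = V.T @ y
        e = x / (eps2 + r)
        y = (eps1 + y) * (W @ e)
    return y
```

When two templates overlap, competition to reconstruct the shared input elements suppresses the response of the poorer match, which is the explaining-away effect that makes the similarity values sparser as the iterations proceed.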
\begin{table}[tbp]
\begin{center}
\renewcommand{\baselinestretch}{1}\large\normalsize
\begin{tabular}{lccccccc} \hline
{\bf Parameter} & {\bf Standard} & \multicolumn{6}{c}{{\bf AUC when value changed by:}}\\
& {\bf Value} & $\div 10$ & $\div 5$ & $\div 2$ & $\times 2$ & $\times 5$ & $\times 10$ \\
\hline
$\sigma$ (see \autoref{sec-methods_preproc}) & $0.5\min(w,h)$ & 0.536 & 0.597 & 0.681 & 0.674 & 0.609 & 0.554\\
$\epsilon_1$ (see \autoref{eq-pcbc_y}) & $\frac{\epsilon_2}{max\left(\sum_j v_{ji}\right)}$ & 0.688 & 0.688 & 0.690 & 0.690 & 0.689 & 0.688 \\
$\epsilon_2$ (see \autoref{eq-pcbc_e}) & 0.01 & 0.690 & 0.690 & 0.690 & 0.686 & 0.691 & 0.624\\
$\lambda$ (see \autoref{sec-methods_postproc}) & 0.025 & 0.695 & 0.695 & 0.695 & 0.682 & 0.682 & 0.636\\
iterations (see \autoref{sec-methods_matching}) & 10 & 0.451 & 0.593 & 0.687 & 0.668 & 0.644 & 0.632 \\
\hline
\end{tabular}
\caption{Evaluation of the sensitivity of the proposed algorithm to its
parameter values. Note that $w$ and $h$ stand for the width and height of the
template. Performance was evaluated using the AUC produced for the task of
finding corresponding locations in 105 pairs of colour video frames from the BBS
dataset ({\it i.e.},\xspace using the same procedure as used to generate the result shown in
\autoref{tab-bbs_data_AUC}). Using the standard parameter values the AUC is
equal to 0.690.}
\label{tab-parameter_sensitivity}
\end{center}
\end{table}
\subsection{Computational Complexity}
\label{sec-speed}
The focus of this work was to develop a more accurate method of template
matching. Computational complexity was therefore not of prime concern. However,
for completeness, this section compares the computational
complexity of DIM with that of competing methods.
Calculating the cross-correlation or convolution of an $M$-by-$N$ pixel image
with an $m$-by-$n$ pixel template requires $mn$ multiplications at each image
pixel, so the approximate computational complexity is $O(MNmn)$. In the DIM
algorithm, to avoid edge effects, the image is padded by $2m$ in width and $2n$
in height. In this case, the computational complexity of 2D cross-correlation is
$O((M+2m)(N+2n)mn)$.
In the DIM algorithm, 2D convolution is applied to calculate the values of $\mathbf{R}$
for each channel (see \autoref{eq-pcbc_r}) and 2D cross-correlation is used to
calculate the values of $\mathbf{Y}$ for each template (see \autoref{eq-pcbc_y}). Both
these updates are performed at each of $i$ iterations. So if there are $c$
channels and $t$ templates, then the complexity is approximately
$O(2(M+2m)(N+2n)mncti)$. Added to this is the time taken to compute the
element-wise division (see \autoref{eq-pcbc_e}) and multiplication (see
\autoref{eq-pcbc_y}) operations at each iteration, which has a complexity of
$O(MN(c+t)i)$. However, this is negligible compared to the time taken to perform
the cross-correlations and convolutions.
It is well-known that 2D convolution and 2D cross-correlation can be performed
in Fourier space with complexity $O(MN\log(MN))$, assuming $m \le M$ and $n \le
N$. This method is therefore faster when $mn$ is larger than
$\log((M+2m)(N+2n))$. Using the Fourier method of calculating the
cross-correlations and convolutions, the complexity of DIM would be
approximately $O(2(M+2m)(N+2n)\log(MN)cti)$. Cross-correlation and
convolution are inherently parallel processes as each output value is
independent of the other output values. Hence, with appropriate multi-core
hardware the computational complexity of 2D cross-correlation and 2D convolution
becomes $O(mn)$. With such parallel computation, the computational complexity of
DIM would be approximately $O(2mni)$, as the value of $\mathbf{R}$ across all channels
and the values of $\mathbf{Y}$ for all templates could also be calculated in parallel.
This compares to the complexity of ZNCC which is $O(MNmnct)$ (or $O(mn)$ on
parallel hardware); BBS which is $O(MNm^2n^2ct)$ \citep{Oron_etal18}; and DDIS
which is $O(9mn\log(mn)t+MNmnt+MNt(c+\log(mn)))$ \citep{Talmi_etal17}. To compare
the real-world performance, execution times for each algorithm were recorded on
a computer with Intel Core i7-7700K CPU running at 4.20GHz. This machine ran
Ubuntu GNU/Linux 16.04 and MATLAB R2017a. All code was written in MATLAB. For
DDIS, faster compiled code is available to reduce execution times on machines
running Microsoft Windows; the code for DIM was not compiled, to allow a fair
comparison. The total time taken to perform the experiment described in
\autoref{sec-correspondence_bbs} ({\it i.e.},\xspace to perform template matching across 105
colour image pairs), the time taken to perform the experiment described in
\autoref{sec-correspondence_vgg} with 17-by-17 pixel templates ({\it i.e.},\xspace to match 25
templates across 40 image pairs), and the time taken to perform the experiment
described in \autoref{sec-template_matching_vgg} with 17-by-17 pixel templates
({\it i.e.},\xspace to match 70 templates to 35 images) are shown in
\autoref{tab-execution_times}. While DIM is not as fast as the simple, baseline,
method it is the fastest of the other methods while also being the most
accurate. It also has the potential to be much faster if implemented on
appropriate parallel hardware.
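The equivalence between spatial-domain and Fourier-domain cross-correlation invoked above can be checked with a small sketch. This is plain NumPy written for illustration (it is not the code released with this article): a direct valid-mode cross-correlation, costing $O(MNmn)$ multiplications, is compared against an FFT-based computation costing $O(MN\log(MN))$.

```python
import numpy as np

def xcorr2_direct(image, template):
    """Valid-mode 2D cross-correlation computed directly: O(MNmn)."""
    M, N = image.shape
    m, n = template.shape
    out = np.empty((M - m + 1, N - n + 1))
    for i in range(M - m + 1):
        for j in range(N - n + 1):
            out[i, j] = np.sum(image[i:i + m, j:j + n] * template)
    return out

def xcorr2_fft(image, template):
    """The same result via the FFT: O(MN log(MN)).

    Cross-correlation is convolution with the doubly-flipped template;
    discarding the first m-1 rows and n-1 columns of the circular
    convolution leaves exactly the valid region (no wrap-around there).
    """
    M, N = image.shape
    m, n = template.shape
    F = np.fft.rfft2(image) * np.fft.rfft2(template[::-1, ::-1], s=(M, N))
    full = np.fft.irfft2(F, s=(M, N))
    return full[m - 1:, n - 1:]

rng = np.random.default_rng(0)
img = rng.random((32, 40))   # M = 32, N = 40
tpl = rng.random((5, 7))     # m = 5,  n = 7
assert np.allclose(xcorr2_direct(img, tpl), xcorr2_fft(img, tpl))
```

For this toy size ($mn = 35$ versus $\log(MN) \approx 7$) the asymptotic argument already favours the Fourier route, and the advantage grows with the template size.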
\begin{table}[tbp]
\begin{center}
\renewcommand{\baselinestretch}{1}\large\normalsize
\begin{tabular}{lrrr} \hline
{\bf Algorithm} & \multicolumn{3}{l}{{\bf Execution Time (s)}} \\
\hline
ZNCC & {\bf 15 (0.14)} & {\bf 33 (0.03)} & {\bf 59 (0.02)}\\
BBS \citep{Dekel_etal15,Oron_etal18} & 2193 (20.89) & 724 (0.72) & 1512 (0.62)\\
DDIS \citep{Talmi_etal17} & 4802 (45.73) & 17391 (17.39) & 40102 (16.37)\\
DIM & 938 (8.93) & 209 (0.21) & 1366 (0.56)\\
\hline
\end{tabular}
\caption{Comparison of the execution times of different algorithms. The first
column of times shows the total time taken when the algorithms were applied to
the task of finding corresponding locations in 105 pairs of colour video frames
({\it i.e.},\xspace to obtain the results shown in \autoref{fig-bbs_data_success}). The
second column of times is for the task of finding 25 corresponding points in
each pair of images from the Oxford VGG affine covariant features dataset, at
one-half size, using 17-by-17 pixel templates ({\it i.e.},\xspace to obtain the results shown
in the first column of \autoref{fig-vgg_data_success}). The third column of
times is for the algorithms when applied to the task of matching 70 17-by-17
pixel templates to 35 colour images from the Oxford VGG affine covariant
features dataset, at half size ({\it i.e.},\xspace to obtain the results shown in
\autoref{fig-vgg_data_precision_recall}a). The values in brackets are the
average times taken to compare one template with one image in each task.}
\label{tab-execution_times}
\end{center}
\end{table}
\section{Conclusions}
\label{sec-discussion}
This article has evaluated a method of performing template matching that is
shown to be both accurate and tolerant to differences in appearance due to
viewpoint, variations in background, non-rigid deformations, illumination,
blur/de-focus and JPEG compression. This advantageous behaviour is achieved by
causing the templates to compete to match the image, using the existing DIM
algorithm \citep{Spratling_etal09,Spratling17a}. Specifically, the competition
is implemented as a form of probabilistic inference known as explaining away
\citep{Kersten_etal04,LochmannDeneve11,Spratling_dim-learning,Spratling_etal09}
which causes each image element to only provide support for the template that is
the most likely match. Explaining away produces a sparse array of similarity
values in which the peaks are easily identified, and in which similarity values
that are reduced in magnitude by differences in appearance are still distinct
from those similarity values at non-matching locations. Using a variety of
tasks, the proposed method was shown to out-perform traditional template
matching, and recent state-of-the-art methods
\citep{Talmi_etal17,Dekel_etal15,Oron_etal18,Kat_etal18,Kim_etal17,ZagoruykoKomodakis15,ZagoruykoKomodakis17}.
Specifically, the proposed method was compared to the BBS algorithm
\citep{Dekel_etal15,Oron_etal18}, and several other recent methods
\citep{ZagoruykoKomodakis15,ZagoruykoKomodakis17,Kat_etal18,Kim_etal17,Talmi_etal17},
using the same dataset and experimental procedures defined by the authors of the
BBS algorithm. This task required target objects from one frame of a colour
video to be located in a subsequent frame. Changes in the appearance of the
target were due to variations in camera viewpoint or the pose of the target,
partial occlusions, non-rigid deformations, and changes in the surrounding
context, background, and illumination. On this dataset the proposed algorithm
produced significantly more accurate results than the BBS algorithm, and more
recent algorithms that have been applied to the same dataset
\citep{ZagoruykoKomodakis15,ZagoruykoKomodakis17,Kat_etal18,Kim_etal17,Talmi_etal17}.
Furthermore, using the Oxford VGG Affine Covariant Features Dataset
\citep{MikolajczykSchmid05,Mikolajczyk_etal05} it was shown that these findings
generalise to other tasks and other images. In this second set of images,
changes in the appearance of the target were due to variations in camera
viewpoint, illumination/exposure, blur/de-focus, and JPEG compression. The
proposed method considerably outperformed some recently proposed
state-of-the-art methods \citep{Dekel_etal15,Oron_etal18,Talmi_etal17} on these
additional experiments.
The present results demonstrate that the proposed method is tolerant to a range
of factors that cause differences between the template and the target as it
appears in the query image. However, it is only weakly tolerant to changes in
appearance caused by viewpoint ({\it i.e.},\xspace changes in perspective, orientation, and
scale). The tolerance of DIM to viewpoint changes could, potentially, be improved
using a number of techniques.
Firstly, by using additional templates representing transformed versions of the
searched-for image patch. For example, to recognise a patch of image at a range
of orientations it would be possible to include additional templates showing
that patch at different orientations. The final similarity measure at each image
location would then be the maximum of the similarity values calculated at that
location for the individual transformed versions of the template. Additional,
unreported, experiments have
shown that this method works well. However, to deal with an unknown
transformation between the template and the query image it is necessary to use a
large number of affine transformed templates showing many possible combinations
of changes in scale, rotation and shear, and hence, this method is
computationally expensive and not very practical.
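A minimal instance of this transformed-template strategy, restricted to the four 90-degree rotations, is sketched below. The functions and toy data are illustrative (not part of the released code), and a raw correlation score stands in for the full DIM similarity.

```python
import numpy as np

def match_score(image, template):
    """Best valid-mode correlation of a single template with the image."""
    M, N = image.shape
    m, n = template.shape
    return max(np.sum(image[i:i + m, j:j + n] * template)
               for i in range(M - m + 1) for j in range(N - n + 1))

def rotation_tolerant_score(image, template):
    """Max similarity over the four 90-degree rotations of the template."""
    return max(match_score(image, np.rot90(template, k)) for k in range(4))

img = np.zeros((8, 8))
tpl = np.zeros((3, 3))
tpl[0, :] = 1.0        # template: a horizontal bar
img[2:5, 2] = 1.0      # target: the same bar rotated by 90 degrees
assert rotation_tolerant_score(img, tpl) > match_score(img, tpl)
```

Covering general affine transformations in the same way multiplies the template count by the number of sampled scales, rotations and shears, which is the combinatorial cost the paragraph above warns about.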
Secondly, it would be possible to split templates into multiple sub-templates.
The sub-templates could be matched to the image, using DIM, and each one could
vote for the location of the target. By allowing some tolerance in the range of
sub-template locations that vote for the same target location, this method could
provide additional tolerance to changes in appearance, particularly changes
caused by image shear and changes in perspective. Essentially, this method
would perform template matching using an algorithm analogous to the implicit
shape model \citep[ISM; ][]{Leibe_etal08}, which employs the generalised Hough
transform \citep{Ballard81,DudaHart72}. Both the sub-template matching and the
voting processes could be implemented using the DIM algorithm
\citep{Spratling17a,Spratling16c}.
Thirdly, it would be possible to apply the method to a different feature-space,
one in which the features were tolerant to changes in appearance. It has been
shown that for other methods significant improvement in performance can be
achieved by applying the method to a feature-space defined by the output of
certain layers in a CNN \citep{Kim_etal17,Kat_etal18,Talmi_etal17}. For example,
\citet{Kim_etal17} showed that applying NCC to features extracted by a CNN, in
comparison to using NCC to compare colour intensity values, produced an increase
of 0.15 in the AUC for the experiment described in
\autoref{sec-correspondence_bbs}.
An obvious direction for future work on the proposed algorithm is to apply it to
a similar feature-space extracted by a deep neural network.
In terms of practical applications, the proposed method
has already been applied, as part of a hierarchical DIM network, to object
localisation and recognition \citep{Spratling17a} and to the low-level task of
edge-detection \citep{Spratling13a,WangSpratling16b}. Future work might also
usefully explore applications of the proposed method to stereo correspondence,
3D reconstruction, and tracking. To facilitate such future work all the code
used to produce the results reported in this article has been made freely available.
\bibsep=0pt
\section{Introduction and Abbreviations}
\subsection{Algebraic, topological, and set-theoretical weak choice forms}
Firstly, we study new relations of some algebraic, topological, and set-theoretical weak forms of $\mathsf{AC}$ with other weak forms of $\mathsf{AC}$.
\subsubsection{Weak choice forms} We recall the following weak forms of $\mathsf{AC}$ from \cite{HR1998}.
\begin{itemize}
\item \cite[\textbf{Form 269}]{HR1998}: For every cardinal $\mathfrak{m}$, there is a set $A$ such that $2^{\vert A\vert^{2}} \geq \mathfrak{m}$ and there is a choice function on the collection of $2$ element subsets of $A$ (In the absence of $\mathsf{AC}$, a set $\mathfrak{m}$ is called a {\em cardinal} if it is the {\em cardinality} $\vert x\vert$ of some set $x$, where $\vert x\vert$ is the set $\{y : \vert y\vert = \vert x\vert$ and $y$ is of least rank$\}$ (cf. \cite[$\S$ 11.2]{Jec1973})).
\item \cite[\textbf{Form 233}]{HR1998}: If a field has an algebraic closure then it is unique up to isomorphism.
\item \cite[\textbf{Form 304}]{HR1998}: There does not exist a Hausdorff space $X$ such that every infinite subset of $X$ contains an infinite compact subset.
\item $\mathsf{AC^{LO}}$ \cite[\textbf{Form 202}]{HR1998}: Every linearly ordered family of non-empty sets has a
choice function.
\item $\mathsf{LW}$ \cite[\textbf{Form 90}]{HR1998}: Every linearly ordered set can be well-ordered.
\item $\mathsf{AC^{WO}}$ \cite[\textbf{Form 40}]{HR1998}: Every well-orderable set of non-empty sets has a choice
function.
\item $\mathsf{AC_{n}^{-}}$ for each $n \in \omega, n \geq 2$ \cite[\textbf{Form 342(n)}]{HR1998}: Every infinite family $\mathcal{A}$ of $n$-element sets has a partial choice function, i.e., $\mathcal{A}$ has an infinite subfamily $\mathcal{B}$ with a choice function.
\item The {\em Chain/Antichain Principle}, $\mathsf{CAC}$ \cite[\textbf{Form 217}]{HR1998}: Every infinite partially ordered set (poset) has an infinite chain or an infinite antichain.
\end{itemize}
\subsubsection{Introduction and known results} Pincus proved that \textbf{Form 233} holds in the basic Fraenkel model (cf. \cite[\textbf{Note 41}]{HR1998}). It is also known that in the basic Fraenkel model, \textbf{Form 269} fails, whereas \textbf{Form 304} holds (cf. \cite[\textbf{Notes 91, 116}]{HR1998}). Fix any natural number $2\leq n\in\omega$. Tachtsis \cite[\textbf{Theorem 2.1}]{Tac2016a} constructed a permutation model where $\mathsf{AC_{2}^{-}}$ fails but $\mathsf{CAC}$ holds. Halbeisen--Tachtsis \cite[\textbf{Theorem 8}]{HT2020} constructed a similar permutation model (which we denote by $\mathcal{N}_{HT}^{1}(n)$) where $\mathsf{AC_{n}^{-}}$ fails but $\mathsf{CAC}$ holds.
\subsubsection{Results} We prove the following.
\begin{enumerate}
\item $\mathsf{AC^{LO}}$ does not imply \textbf{Form 269} in $\mathsf{ZFA}$. Hence, neither $\mathsf{LW}$ nor $\mathsf{AC^{WO}}$ implies \textbf{Form 269} in $\mathsf{ZFA}$ (cf. \textbf{Theorem 3.1}).
\item \textbf{Form 269} fails in $\mathcal{N}_{HT}^{1}(n)$ (cf. \textbf{Theorem 3.2}).
\item \textbf{Form 233} and \textbf{Form 304} hold in $\mathcal{N}_{HT}^{1}(n)$ (cf. \textbf{Theorem 3.2}). Consequently, for any integer $n\geq 2$, neither \textbf{Form 233} nor \textbf{Form 304} implies $\mathsf{AC_{n}^{-}}$ in $\mathsf{ZFA}$.
\end{enumerate}
\subsection{Partition models and permutations of infinite sets}
We study the status of different weak forms of $\mathsf{AC}$ in the finite partition model (a type of Fraenkel–Mostowski permutation model) introduced in \cite{Bru2016}.
\subsubsection{Weak choice forms and abbreviations}
We recall the necessary weak forms of $\mathsf{AC}$.
\begin{itemize}
\item $\mathsf{AC_{n}}$ for each $n \in \omega, n \geq 2$ \cite[\textbf{Form 61}]{HR1998}: Every family of $n$-element sets has a choice function.
\item \cite[\textbf{Form 64}]{HR1998}: There are no amorphous sets (An infinite set $X$ is {\em amorphous} if $X$ cannot be written as a disjoint union of two infinite subsets).
\item $\mathsf{DF = F}$ \cite[\textbf{Form 9}]{HR1998}: Every Dedekind-finite set is finite (A set $X$ is called {\em Dedekind-finite} if $\aleph_{0} \not
\leq \vert X\vert$ i.e., if there is no one-to-one function $f : \omega \rightarrow X$. Otherwise, $X$ is called {\em Dedekind-infinite}).
\item $\mathsf{W_{\aleph_{\alpha}}}$ (cf. \cite[\textbf{Chapter 8}]{Jec1973}): For every $X$, either $\vert X\vert \leq \aleph_{\alpha}$ or $\vert X\vert \geq \aleph_{\alpha}$. We recall that $\mathsf{W_{\aleph_{0}}}$ is equivalent to $\mathsf{DF=F}$ in $\mathsf{ZF}$.
\item $\mathsf{DC_{\kappa}}$ for an infinite well-ordered cardinal $\kappa$ \cite[\textbf{Form 87($\kappa$)}]{HR1998}: Let $\kappa$ be an infinite well-ordered cardinal (i.e., $\kappa$ is an aleph). Let $S$ be a non-empty set and let $R$ be a binary relation such that for every $\alpha<\kappa$ and every $\alpha$-sequence $s =(s_{\epsilon})_{\epsilon<\alpha}$ of elements of $S$ there exists $y \in S$ such that $s R y$. Then there is a function $f : \kappa \rightarrow S$ such that for every $\alpha < \kappa$, $(f\restriction \alpha) R f(\alpha)$. We note that $\mathsf{DC_{\aleph_{0}}}$ is a reformulation of $\mathsf{DC}$ (the principle of Dependent Choices \cite[\textbf{Form 43}]{HR1998}). We denote by $\mathsf{DC_{<\lambda}}$ the assertion $(\forall\eta < \lambda)\mathsf{DC_{\eta}}$.
\item \cite[\textbf{Form 3}]{HR1998}: For every infinite cardinal $\mathfrak{m}$, $2\mathfrak{m} = \mathfrak{m}$. We denote the above principle as
`$\forall$ infinite $\mathfrak{m}$ ($2\mathfrak{m} = \mathfrak{m}$)'.
\item $\mathsf{UT(WO, WO, WO)}$ \cite[\textbf{Form 231}]{HR1998}: The union of a well-ordered collection of well-orderable sets is well-orderable.
\item The {\em Axiom of Multiple Choice, $\mathsf{MC}$} \cite[\textbf{Form 67}]{HR1998}: Every family $\mathcal{A}$ of non-empty sets has a multiple choice function, i.e., there is a function $f$ with domain $\mathcal{A}$ such that for every $A \in \mathcal{A}$, $f(A)$ is a non-empty finite subset of $A$.
\item $\mathsf{\leq\aleph_{0}}$-$\mathsf{MC}$ (cf. \cite[$\S$1]{HST2016}): For any family $\{A_{i} : i \in I\}$ of non-empty sets, there is a function $F$ with domain $I$ such that for all $i \in I$, $F(i)$ is a non-empty countable (i.e., finite or countably infinite)
subset of $A_{i}$.
\end{itemize}
We recall the following abbreviations from \cite{Tac2019} and \cite{Tac2016}.
\begin{itemize}
\item $\mathsf{ISAE}$ (cf. \cite[$\S$2]{Tac2019}): For every infinite set $X$, there is a permutation $f$ of $X$ without fixed points and such that $f^{2} = $id$_{X}$.
\item $\mathsf{EPWFP}$ (cf. \cite[$\S$2]{Tac2019}): For every infinite set $X$, there exists a permutation of $X$ without fixed points.
\item For a set $A$, Sym$(A)$, FSym$(A)$ and $\aleph_{\alpha}$Sym$(A)$ denote respectively the set of all permutations of $A$, the set of all $\phi \in$ Sym$(A)$ such that $\{x \in A : \phi(x) \neq x\}$ is finite, and the set of all $\phi \in$ Sym$(A)$ such that $\{x \in A : \phi(x) \neq x\}$ has cardinality at most $\aleph_{\alpha}$ (cf. \cite[$\S$2]{Tac2019}).
\item $\mathsf{MA(\kappa)}$ for a well-ordered cardinal $\kappa$ (cf. \cite[$\S$1]{Tac2016}): If $(P,<)$ is a nonempty, c.c.c. quasi order and if $\mathcal{D}$ is a family of $\leq\kappa$ dense sets in $P$, then there is a filter $\mathcal{F}$ of $P$ such that $\mathcal{F}\cap D\not=\emptyset$ for all $D\in \mathcal{D}$.
\end{itemize}
\subsubsection{Introduction and known results} Bruce \cite{Bru2016} constructed the finite partition model $\mathcal{V}_{p}$, which is a variant of the basic Fraenkel model (labeled as Model $\mathcal{N}_{1}$ in \cite{HR1998}). Many, but not all, properties of $\mathcal{N}_{1}$ transfer to $\mathcal{V}_{p}$. In particular, Bruce proved that the set of atoms has no amorphous subset in $\mathcal{V}_{p}$, unlike in $\mathcal{N}_{1}$, whereas $\mathsf{UT(WO, WO, WO)}$, $\mathsf{\neg AC_{2}}$, and $\mathsf{\neg (DF=F)}$ hold in $\mathcal{V}_{p}$ as in $\mathcal{N}_{1}$. At the end of his paper, Bruce asked which other choice principles hold in $\mathcal{V}_{p}$ (cf. \cite[$\S$5]{Bru2016}). We study the status of some weak choice principles in $\mathcal{V}_{p}$, as well as in a variant of the finite partition model mentioned in \cite[$\S$5]{Bru2016}: let $A$ be an uncountable set of atoms, let $\mathcal{G}$ be the group of all permutations of $A$, and let the supports be countable partitions of $A$. We call the corresponding permutation model $\mathcal{V}^{+}_{p}$; Bruce also asked about the status of different weak choice forms in $\mathcal{V}^{+}_{p}$.
\subsubsection{Results} Fix any integer $n\geq 2$. We prove the following.
\begin{enumerate}
\item $\mathsf{W_{\aleph_{\alpha+1}}}$ implies `for any set $X$ of size $\aleph_{\alpha+1}$, Sym$(X)$ $\neq$ $\aleph_{\alpha}$Sym$(X)$' in $\mathsf{ZF}$ (cf. \textbf{Proposition 4.1}).
\item If $X\in \{`\forall$ infinite $\mathfrak{m}(2\mathfrak{m} = \mathfrak{m})$', $\mathsf{ISAE}, \mathsf{EPWFP}, \mathsf{MA(\aleph_{0})}, \mathsf{AC_{n}}, \mathsf{MC}, \mathsf{\leq\aleph_{0}}$-$\mathsf{MC}\}$, then $X$ fails in $\mathcal{V}_{p}$ (cf. \textbf{Theorem 4.2}).
\item If $X\in \{`\forall$ infinite $\mathfrak{m}(2\mathfrak{m} = \mathfrak{m})$', $\mathsf{ISAE}, \mathsf{EPWFP}, \mathsf{AC_{n}}, \mathsf{W_{\aleph_{1}}}, \mathsf{DC_{\aleph_{1}}}\}$, then $X$ fails in $\mathcal{V}^{+}_{p}$ (cf. \textbf{Theorem 4.4}).
\end{enumerate}
\subsection{Variants of Chain/Antichain principle and permutations of infinite sets}
Thirdly, we study new relations of $\mathsf{EPWFP}$ and two variants of $\mathsf{CAC}$ with weak forms of $\mathsf{AC}$.
\subsubsection{Weak choice forms and abbreviations} We recall the necessary weak choice principles.
\begin{itemize}
\item $\mathsf{WOC_{n}^{-}}$ for each $n \in \omega, n \geq 2$ (cf. \cite[\textbf{Definition 1 (2)}]{HT2020}): Every infinite well-orderable family of $n$-element sets has a partial choice function.
\item $\mathsf{LOC_{n}^{-}}$ for each $n \in \omega, n \geq 2$ (cf. \cite[\textbf{Definition 1 (2)}]{HT2020}): Every infinite linearly orderable family of $n$-element sets has a partial choice function.
\item $\mathsf{LOKW_{4}^{-}}$ (cf. \cite[\textbf{Definition 1 (2)}]{HT2020}): Every infinite linearly orderable family $\mathcal{A}$ of $4$-element sets has a partial Kinna--Wagner selection function, i.e., there exists an infinite subfamily $\mathcal{B}$ of $\mathcal{A}$ and a function $f$ such that $dom(f) = \mathcal{B}$
and for all $B \in \mathcal{B}$, $\emptyset \not= f(B)\subsetneq B$.
\item $\mathsf{AC_{fin}^{\aleph_{1}}}$: Every family $\{A_{i}: i\in \aleph_{1}\}$ of non-empty finite sets has a choice function.
\item $\mathsf{PAC^{\aleph_{\alpha}}_{fin}}$: Every $\aleph_{\alpha}$-sized family $\mathcal{A}$ of non-empty finite sets has an $\aleph_{\alpha}$-sized subfamily $\mathcal{B}$ with a choice function.
\end{itemize}
Fix any regular $\aleph_{\alpha}$. We recall the following abbreviations from \cite{Ban2}, \cite{BG1} and \cite{HHK2016}.
\begin{itemize}
\item $\mathsf{CAC_{1}^{\aleph_{\alpha}}}$: If in a poset all antichains are finite and all chains have size at most $\aleph_{\alpha}$, then the set has size at most $\aleph_{\alpha}$.
\item $\mathsf{CAC^{\aleph_{\alpha}}}$: If in a poset all chains are finite and all antichains have size at most $\aleph_{\alpha}$, then the set has size at most $\aleph_{\alpha}$.
\item $\mathsf{PUU}$ (cf. \cite[$\S$2]{HHK2016}): For every infinite set $X$, $Y$, for every onto function
$f : X \rightarrow Y$, for every ultrafilter $\mathcal{F}$ of $Y$ , $f^{-1}(\mathcal{F}) = \{f^{-1}(F) : F \in \mathcal{F}\}$ extends to
an ultrafilter of $X$.
\item $\mathsf{BPI(X)}$ (cf. \cite[$\S$1]{HHK2016}): Every filterbase of $X$ is included in an ultrafilter of $X$ ($\mathsf{BPI(\omega)}$
is \cite[\textbf{Form 225}]{HR1998}).
\end{itemize}
\subsubsection{Introduction and known results} The principle $\mathsf{PUU}$ was introduced in \cite{HK2015}. Later, Herrlich, Howard, and Keremedis \cite{HHK2016} investigated the deductive strength of $\mathsf{PUU}$ without $\mathsf{AC}$. They proved that $\mathsf{PUU}$ fails in Jech’s Model, which is labeled as Model $\mathcal{N}_{2}(\aleph_{1})$ in \cite{HR1998} (cf. proof of \cite[\textbf{Theorem 4 (vi)}]{HHK2016}).
We recall Erd\H{o}s--Dushnik--Miller theorem and the fact that $\mathsf{CAC_{1}^{\aleph_{\alpha}}}$ and $\mathsf{CAC^{\aleph_{\alpha}}}$ are applications of it in $\mathsf{ZFC}$.
\begin{thm}{\textbf{($\mathsf{ZFC}$; Erd\H{o}s--Dushnik--Miller theorem)}} {\em If $\kappa$ is an uncountable cardinal, then $\kappa \rightarrow (\kappa, \aleph_{0})^{2}$, i.e., if $f:[\kappa]^{2}\rightarrow \{0,1\}$ is a coloring, then either there is a set of cardinality $\kappa$ monochromatic in color 0 or else there is an infinite set monochromatic in color 1.}\end{thm}
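For orientation, we record the standard $\mathsf{ZFC}$ deduction of $\mathsf{CAC_{1}^{\aleph_{\alpha}}}$ from the Erd\H{o}s--Dushnik--Miller theorem (a folklore argument, included here only as a sketch): let $(P,\leq)$ be a poset of size $\aleph_{\alpha+1}$ all of whose antichains are finite, and colour each pair $\{p,q\}\in [P]^{2}$ with $0$ if $p$ and $q$ are comparable and with $1$ otherwise. An infinite set monochromatic in colour $1$ would be an infinite antichain, which is impossible; so $\aleph_{\alpha+1}\rightarrow(\aleph_{\alpha+1},\aleph_{0})^{2}$ yields a set of size $\aleph_{\alpha+1}$ monochromatic in colour $0$, i.e., a chain of size $\aleph_{\alpha+1}$. Applying this to any subset of size $\aleph_{\alpha+1}$ gives $\mathsf{CAC_{1}^{\aleph_{\alpha}}}$; the deduction of $\mathsf{CAC^{\aleph_{\alpha}}}$ is symmetric, with the roles of the two colours exchanged.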
We proved that neither $\mathsf{CAC_{1}^{\aleph_{\alpha}}}$ nor $\mathsf{CAC^{\aleph_{\alpha}}}$ implies {\em `there are no amorphous sets'} in $\mathsf{ZFA}$, $\mathsf{DC}$ does not imply $\mathsf{CAC_{1}^{\aleph_{0}}}$ in $\mathsf{ZF}$, and ($\mathsf{LOC_{2}^{-} + MC}$) does not imply $\mathsf{CAC_{1}^{\aleph_{0}}}$ in $\mathsf{ZFA}$ (cf. \cite{Ban2,BG1}).
\subsubsection{Results} In this note, we observe the following.
\begin{enumerate}
\item Fix any $k\in\omega\backslash\{0,1\}$. A weaker version of $\mathsf{CAC^{\aleph_{0}}_{1}}$, namely the statement {\em `If in a poset $(P,\leq)$ with width $k$ all chains are countable, then $P$ is countable'}, does not imply $\mathsf{AC_{fin}^{\omega}}$ in $\mathsf{ZFA}$ (cf. \textbf{Proposition 5.1}).
\item There is a model of $\mathsf{ZFA}$ where $\mathsf{LOKW_{4}^{-}}$ fails but the statement {\em `If in a poset $(P,\leq)$ all antichains have size $2$ and all chains are countable, then $P$ is countable'} holds (cf. \textbf{Proposition 5.2 (1)}).
\item Fix a natural number $n$ such that $n>4$. There is a model of $\mathsf{ZFA}$ where $\mathsf{LOC_{n}^{-}}$ fails but the statement {\em `If in a poset $(P,\leq)$ all antichains have size $2$ and all chains are countable, then $P$ is countable'} holds (cf. \textbf{Proposition 5.2 (2)}).
\item $\mathsf{CAC^{\aleph_{\alpha}}}$ implies the statement {\em `Every family $\mathcal{A} = \{(A_{i}, \leq_{i}): i \in \aleph_{\alpha+1}\}$
such that for each $i \in \aleph_{\alpha+1}$, $A_{i}$ is finite and $\leq_{i}$ is a linear order on $A_{i}$,
has an $\aleph_{\alpha+1}$-sized subfamily with a choice function'} in $\mathsf{ZF}$ (cf. \textbf{Proposition 5.3(1)}).
\item Let $X$ be a $T_{1}$-space. Additionally, suppose $X$ is either $\mathcal{K}$-Loeb or second-countable. Then $\mathsf{CAC^{\aleph_{\alpha}}}$ implies the statement {\em `Every family $\mathcal{A} = \{A_{i}: i \in \aleph_{\alpha+1}\}$
such that for each $i \in \aleph_{\alpha+1}$, $A_{i}$ is a finite subset of $X$,
has an $\aleph_{\alpha+1}$-sized subfamily with a choice function'} in $\mathsf{ZF}$ (cf. \textbf{Proposition 5.3(2)}).
\item ($\mathsf{LOC_{2}^{-} + MC}$) implies neither $\mathsf{EPWFP}$ nor $\mathsf{CAC^{\aleph_{\alpha}}_{1}}$ in $\mathsf{ZFA}$ (cf. \textbf{Theorem 5.4}).
\item ($\mathsf{LOC_{2}^{-} + MC}$) does not imply $\mathsf{PUU}$ in $\mathsf{ZFA}$ (cf. \textbf{Theorem 5.6}).
\item Let $\aleph_{\alpha+1}$ be a successor aleph. We study a new model to prove that $\mathsf{DC_{<\aleph_{\alpha+1}}}+\mathsf{WOC_{2}^{-}}$ implies neither $\mathsf{EPWFP}$ nor $\mathsf{CAC^{\aleph_{\alpha}}_{1}}$ in $\mathsf{ZF}$ (cf. \textbf{Theorem 5.7}).
\end{enumerate}
\subsection{Van Douwen’s Choice Principle in two recent permutation models}
Howard, Saveliev, and Tachtsis \cite[\textbf{p.175}]{HST2016} gave an argument to prove that $\mathsf{vDCP}$ holds in the basic Fraenkel model. We modify the argument slightly to prove that $\mathsf{vDCP}$ holds in two recently constructed permutation models (cf. $\S$ 6).
\subsubsection{Weak choice forms and abbreviations} We recall the following weak forms of $\mathsf{AC}$.
\begin{itemize}
\item $\mathsf{UT(\aleph_{0}, \aleph_{0}, cuf)}$ \cite[\textbf{Form 420}]{HR}: Every countable union of countable sets is a cuf set (A set $X$ is called {\em cuf set} if $X$ is expressible as a countable union of finite sets).
\item $\mathsf{MC(\aleph_{0}, \aleph_{0})}$ \cite[\textbf{Form 350}]{HR1998}: Every denumerable (i.e., countably infinite) family of denumerable sets has a multiple choice function.
\item {\em Van Douwen’s Choice Principle, $\mathsf{vDCP}$}: Every family $X = \{(X_{i}, \leq_{i}) : i \in I\}$ of linearly ordered sets
isomorphic with $(\mathbb{Z}, \leq)$ ($\leq$ is the usual ordering on $\mathbb{Z}$) has a choice function.
\end{itemize}
We recall the following abbreviation due to Keremedis, Tachtsis, and Wajch from \cite{KTW2021}.
\begin{itemize}
\item $\mathsf{M(IC, DI)}$: Every infinite compact metrizable space is Dedekind-infinite.
\end{itemize}
\subsubsection{Results} Howard and Tachtsis \cite[\textbf{Theorem 3.4}]{HT2021} proved that the statement $\mathsf{LW} \land \mathsf{\neg MC(\aleph_{0}, \aleph_{0})}$ has a permutation model, say $\mathcal{M}$.
The authors of \cite[proof of \textbf{Theorem 3.3}]{CHHKR2008} constructed a permutation model $\mathcal{N}$ where $\mathsf{UT(\aleph_{0}, \aleph_{0}, cuf)}$ holds. Keremedis, Tachtsis, and Wajch \cite[\textbf{Theorem 13}]{KTW2021} proved that $\mathsf{LW}$ holds and $\mathsf{M(IC, DI)}$ fails in $\mathcal{N}$. We prove the following.
\begin{enumerate}
\item $\mathsf{vDCP}$ holds in $\mathcal{N}$ and $\mathcal{M}$ (cf. \textbf{Proposition 6.1}).
\end{enumerate}
\subsection{Spanning subgraphs and weak choice forms} Fix any $2< n\in \omega$ and any even integer $4\leq m\in \omega$. H\"{o}ft and Howard \cite{HH1973} proved that $\mathsf{AC}$ is equivalent to {\em `Every connected graph contains a partial subgraph which is a tree'}. Delhomm\'{e}--Morillon \cite[\textbf{Proposition 1, Corollary 1, Remark 1}]{DM2006} proved that $\mathsf{AC}$ is equivalent to {\em `Every connected graph
has a spanning tree'}, {\em `Every bipartite connected graph has a spanning subgraph omitting $K_{n,n}$'} as well as {\em `Every connected graph admits a spanning $m$-bush'}. We study new relations between variants of the above statements and weak forms of $\mathsf{AC}$.
\subsubsection{Weak choice forms and abbreviations} We recall the following weak forms of $\mathsf{AC}$.
\begin{itemize}
\item $\mathsf{AC_{fin}^{\omega}}$ \cite[\textbf{Form 10}]{HR1998}: Every denumerable family of non-empty finite sets has a choice function. We recall an equivalent formulation of $\mathsf{AC_{fin}^{\omega}}$.
\begin{itemize}
\item $\mathsf{UT(\aleph_{0}, fin, \aleph_{0})}$ \cite[\textbf{Form 10 A}]{HR1998}: The union of denumerably many pairwise disjoint finite sets is denumerable.
\end{itemize}
\item Let $n \in \omega \backslash \{0, 1\}$.
$\mathsf{AC^{\omega}_{\leq n}}$: Every denumerable family of non-empty sets, each with at most $n$
elements, has a choice function.
\item $\mathsf{AC_{WO}^{WO}}$ \cite[\textbf{Form 165}]{HR1998}: Every well-orderable family of non-empty well-orderable sets has a choice function. \end{itemize}
Fix any $2< k,n\in \omega$ and any even integer $4\leq m\in \omega$. We introduce the following abbreviations.
\begin{itemize}
\item $\mathcal{Q}_{lf,c}^{n}$: Any infinite locally finite connected graph has a spanning subgraph omitting $K_{2,n}$.
\item $\mathcal{Q}_{lw,c}^{k,n}$: Any infinite locally well-orderable connected graph has a spanning subgraph omitting $K_{k,n}$.
\item $\mathcal{P}_{lf,c}^{m}$: Any infinite locally finite connected graph has a spanning $m$-bush.
\end{itemize}
We denote by $P_{G}$ the class of those infinite graphs all of whose components are copies of $G$.
For any graph $G_{1}=(V_{G_{1}}, E_{G_{1}})\in P_{G}$, we construct a graph $G_{2}=(V_{G_{2}}, E_{G_{2}})$ as follows: let $t\not\in V_{G_{1}}$ and let $\{A_{i}: i\in I\}$ be the components of $G_{1}$. Let $V_{G_{2}}=\{t\}\cup V_{G_{1}}$ and $E_{G_{1}}\subseteq E_{G_{2}}$, and for each $i \in I$ and every element $x\in A_{i}$, let $\{t,x\}\in E_{G_{2}}$. We denote by $P'_{G}$ the class of graphs of the form $G_{2}$.
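The construction of $G_{2}$ from $G_{1}$ can be illustrated mechanically. Below is a minimal sketch with graphs encoded as adjacency maps; the function name \texttt{make\_G2} and the sample graph are ours, not from the paper.

```python
# Sketch of the G2 construction: given a graph G1 (as an adjacency
# map), add a fresh vertex t and join it to every vertex of G1.
def make_G2(G1, t="t"):
    assert t not in G1                      # t must not lie in V_{G1}
    G2 = {v: set(nbrs) for v, nbrs in G1.items()}
    G2[t] = set()
    for v in G1:                            # for each component A_i and each x in A_i,
        G2[t].add(v)                        # add the edge {t, x}
        G2[v].add(t)
    return G2

# A member of P_{K_3}: two disjoint copies of K_3 (restricted to a finite example).
G1 = {1: {2, 3}, 2: {1, 3}, 3: {1, 2},
      4: {5, 6}, 5: {4, 6}, 6: {4, 5}}
G2 = make_G2(G1)
```

Since $t$ is adjacent to every vertex, $G_{2}$ is always connected, which is the point of the construction.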
\subsubsection{Results} We prove the following in $\mathsf{ZF}$.
\begin{enumerate}
\item $\mathsf{AC_{\leq n-1}^{\omega}}$ + $\mathcal{Q}_{lf,c}^{n}$ is equivalent to $\mathsf{AC_{fin}^{\omega}}$ for any $2< n\in \omega$ (cf. \textbf{Proposition 7.1(1)}).
\item $\mathsf{UT(WO,WO,WO)}$ implies $\mathsf{AC_{\leq n-1}^{WO}}$ + $\mathcal{Q}_{lw,c}^{k,n}$ and the latter implies $\mathsf{AC_{WO}^{WO}}$ for any $2< k,n\in \omega$ (cf. \textbf{Proposition 7.1(2)}).
\item $\mathcal{P}_{lf,c}^{m}$ is equivalent to $\mathsf{AC_{fin}^{\omega}}$ for any even integer $m\geq 4$ (cf. \textbf{Proposition 7.1(3)}).
\item Fix any $2< k\in \omega$. If each $A_{i}$ is $K_{k}$, then $\mathsf{AC_{k^{k-2}}}$ implies {\em `Every graph from the class $P'_{K_{k}}$ has a spanning tree}' (cf. \textbf{Proposition 7.2(1)}).
\item Fix any $2< k\in \omega$. If each $A_{i}$ is $C_{k}$, then $\mathsf{AC_{k}}$ implies {\em `Every graph from the class $P'_{C_{k}}$ has a spanning tree}' (cf. \textbf{Proposition 7.2(2)}).
\item Fix any $2\leq p,q< \omega$. If each $A_{i}$ is $K_{p,q}$, then $(\mathsf{AC_{p^{q-1}q^{p-1}}+AC_{p+q}})$ implies {\em `Every graph from the class $P'_{K_{p,q}}$ has a spanning tree}' (cf. \textbf{Proposition 7.2(3)}).
\end{enumerate}
\section{Known results and definitions}
\begin{defn} {\textbf{(Topological definitions)}}
Let $\textbf{X}=(X,\tau)$ be a topological space. We say $\textbf{X}$ is {\em Baire} if for every countable
family $\mathcal{O} = \{O_{n} : n \in \omega\}$ of dense open subsets of $X$,
$\bigcap \mathcal{O} \neq \emptyset$. We say $\textbf{X}$ is {\em compact} if for every $U \subseteq \tau$ such that $\bigcup U = X$ there is a finite subset $V\subseteq U$ such that $\bigcup V = X$.
The space $\textbf{X}$ is called a {\em $T_{1}$-space} if given any two points $a\neq b$ in $X$, there is an open set containing $a$ but not $b$, and there is an open set containing $b$ but not $a$. The space $\textbf{X}$ is called a {\em Hausdorff (or $T_{2}$-space)} if any two distinct points in $X$ can be separated by disjoint open sets, i.e., if $x$ and $y$ are distinct points of $X$, then there exist disjoint open sets $U_{x}$ and $U_{y}$ such that $x\in U_{x}$ and $y\in U_{y}$. The space $\textbf{X}$ is called {\em second countable} if the topology of $\textbf{X}$ has a countable basis. Let $\mathcal{K}(\textbf{X})$ be the collection of all compact subsets of $\textbf{X}$, and $\mathcal{K}^{*}(\textbf{X}) =\mathcal{K}(\textbf{X})\backslash\{\emptyset\}$. We say $\textbf{X}$ is {\em $\mathcal{K}$-Loeb} if $\mathcal{K}^{*}(\textbf{X})$ has a choice function.
\end{defn}
\begin{defn}{\textbf{(Algebraic definitions)}}
A {\em permutation} on a finite set $X$ is a one-to-one correspondence from $X$ to itself. The set of all permutations on $X$, with operation defined to be the composition of mappings, is the {\em symmetric group} of $X$, denoted by $Sym(X)$. Fix $r\leq \vert X\vert$. A permutation $\sigma \in Sym(X)$ is a {\em cycle of length $r$} if
there are distinct elements $i_{1},...,i_{r}\in X$ such that $\sigma(i_{1})=i_{2},\sigma(i_{2})=i_{3},...,\sigma(i_{r})=i_{1}$ and $\sigma(i)=i$ for all $i\in X\backslash \{i_{1},..., i_{r}\}$. In this case we write $\sigma=(i_{1},...,i_{r})$. A cycle of length 2 is called a {\em transposition}. We recall that $(i_{1},...,i_{r})=(i_{1}, i_{r})(i_{1},i_{r-1})...(i_{1}, i_{2})$. So, every permutation can be written as a product of transpositions. A permutation $\sigma\in Sym(X)$ is an {\em even permutation} if it can be written as the product of an even number of transpositions; otherwise
it is an {\em odd permutation}. The {\em alternating group} of $X$, denoted by $Alt(X)$, is the group of all even permutations in $Sym(X)$. If $\mathcal{G}$ is a group and $X$ is a set, an {\em action of $\mathcal{G}$ on $X$} is a group homomorphism $F: \mathcal{G} \rightarrow Sym(X)$. If a group $\mathcal{G}$ acts on a set $X$, we say $Orb_{\mathcal{G}}(x)=\{gx:g\in \mathcal{G}\}$ is the orbit of $x \in X$ under the action of $\mathcal{G}$. We recall that distinct orbits of the action are disjoint and form a partition of $X$, i.e., $X=\bigcup \{Orb_{\mathcal{G}}(x): x\in X\}$.
Let $\{G_{i}:i\in I\}$ be an indexed collection of groups.
Define the following set.
\begin{equation}\prod_{i\in I}^{weak}G_{i}=\left\{f:I\rightarrow \bigcup_{i\in I} G_{i}\;\middle|\; (\forall i\in I)f(i)\in G_{i}, f(i)= 1_{G_{i}} \text{ for all but finitely many } i\right\}. \end{equation}
The {\em weak direct product} of the groups $\{G_{i}:i\in I\}$ is the set $\prod^{weak}_{i\in I}G_{i}$ with the operation of component-wise multiplication defined for all $f,g\in \prod^{weak}_{i\in I}G_{i}$ by $(fg)(i)=f(i)g(i)$ for all $i\in I$. A field $\mathcal{K}$ is {\em algebraically closed} if every non-constant polynomial in $\mathcal{K}[x]$ has a root in $\mathcal{K}$.
\end{defn}
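The identity $(i_{1},\dots,i_{r})=(i_{1},i_{r})(i_{1},i_{r-1})\cdots(i_{1},i_{2})$ recalled above can be checked mechanically for small cases. Below is a minimal sketch with permutations encoded as dictionaries and products composed right-to-left; all names are illustrative.

```python
# Verify (i1,...,ir) = (i1,ir)(i1,i_{r-1})...(i1,i2), with permutations
# as dicts on a fixed finite domain and products composed right-to-left.
def transposition(domain, a, b):
    return {x: (b if x == a else a if x == b else x) for x in domain}

def compose(p, q):                      # (p . q)(x) = p(q(x))
    return {x: p[q[x]] for x in q}

def cycle(domain, elts):                # sigma = (i1, ..., ir)
    sigma = {x: x for x in domain}
    for j, x in enumerate(elts):
        sigma[x] = elts[(j + 1) % len(elts)]
    return sigma

domain = range(1, 7)
elts = [1, 2, 3, 4, 5]                  # the cycle (1 2 3 4 5) in Sym({1,...,6})
prod = {x: x for x in domain}
for j in range(len(elts) - 1, 0, -1):   # build (1,5)(1,4)(1,3)(1,2)
    prod = compose(prod, transposition(domain, elts[0], elts[j]))
```

The loop multiplies the transpositions left-to-right in the order they appear in the identity, so the rightmost factor $(i_{1},i_{2})$ acts first, as required.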
\begin{defn}{\textbf{(Combinatorial definitions)}}
The {\em degree} of a vertex $v\in V_{G}$ of a graph $G=(V_{G}, E_{G})$ is the number of edges incident to $v$.
A graph $G=(V_{G}, E_{G})$ is {\em locally finite} if every vertex of $G$ has finite degree. We say that a graph $G=(V_{G}, E_{G})$ is {\em locally well-orderable} if for every $v \in V_{G}$, the set of neighbors of $v$ is well-orderable. Given a non-negative integer $n$, a {\em path of length $n$} in the graph $G=(V_{G}, E_{G})$ is a one-to-one finite sequence $\{x_{i}\}_{0\leq i \leq n}$ of vertices such that for each $i < n$, $\{x_{i}, x_{i+1}\} \in E_{G}$; such a path joins $x_{0}$ to $x_{n}$. The graph $G$ is {\em connected} if any two vertices are joined by a path of finite length.
For each integer $n \geq 3$, an {\em $n$-cycle} of $G$ is a path $\{x_{i}\}_{0\leq i< n}$ such that $\{x_{n-1}, x_{0}\} \in E_{G}$ and an {\em $n$-bush} is any connected graph with no $n$-cycles. We denote by $K_{n}$ the complete graph on $n$ vertices. We denote by $C_{n}$ the circuit of length $n$.
A {\em forest} is a graph with no cycles and a {\em tree}
is a connected forest.
A {\em spanning} subgraph $H=(V_{H}, E_{H})$ of $G=(V_{G}, E_{G})$ is a subgraph that contains all the vertices of $G$ i.e., $V_{H}=V_{G}$. A {\em complete bipartite graph} is a graph $G=(V_{G}, E_{G})$ whose vertex set $V_{G}$ can be partitioned into two subsets $V_{1}$ and $V_{2}$ such that no edge has both endpoints in the same subset, and every possible edge that could connect vertices in different subsets is a part of the graph.
A complete bipartite graph with partitions of size $\vert V_{1}\vert = m$ and $\vert V_{2}\vert = n$ is denoted by $K_{m,n}$ for any natural numbers $m,n$.
Let $(P, \leq)$ be a partially ordered set (poset).
A subset $D \subseteq P$ is called a {\em chain} if $(D, \leq\restriction D)$ is linearly ordered. A subset $A\subseteq P$ is called an {\em antichain}
if no two elements of $A$ are comparable under $\leq$. The size of the largest antichain of the poset $(P, \leq)$ is known as its {\em width}. A subset $C \subseteq P$ is called {\em cofinal} in $P$ if for every $x \in P$ there is an element $c \in C$ such that $x \leq c$.
\end{defn}
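As a contrast to the choice-dependent statements studied in this paper: when a connected graph comes with an enumeration of its vertices, a spanning tree can be built outright, since breadth-first search can always pick the least available neighbor and thus invokes no choice principle. A minimal finite sketch follows; the encoding and names are ours.

```python
from collections import deque

# Canonical spanning tree of a connected graph whose vertices are
# enumerated (here: sortable keys).  Every "choice" of tree edge is
# made via the enumeration, so no choice principle is needed.
def spanning_tree(G):
    order = sorted(G)                   # the given enumeration of V_G
    root = order[0]
    tree, seen = [], {root}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        for w in sorted(G[v]):          # scan neighbors in enumeration order
            if w not in seen:
                seen.add(w)
                tree.append((v, w))     # the unique tree edge into w
                queue.append(w)
    return tree

G = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
T = spanning_tree(G)
```

A spanning tree of a connected graph on $n$ vertices has exactly $n-1$ edges, which the test below confirms for this example.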
\subsection{Permutation models.} In this subsection, we provide a brief description of the construction of Fraenkel-Mostowski permutation models of $\mathsf{ZFA}$ from \cite[\textbf{Chapter 4}]{Jec1973}.
Let $M$ be a model of $\mathsf{ZFA+AC}$ where $A$ is a set of atoms or urelements. Let $\mathcal{G}$ be a group of permutations of $A$. A set $\mathcal{F}_{1}$ of subgroups of $\mathcal{G}$ is a normal filter on $\mathcal{G}$ if for all subgroups $H, K$ of $\mathcal{G}$, the following holds.
\begin{enumerate}
\item $\mathcal{G}\in \mathcal{F}_{1}$,
\item If $H\in \mathcal{F}_{1}$ such that $H\subseteq K$, then $K\in \mathcal{F}_{1}$,
\item If $H\in \mathcal{F}_{1}$ and $K\in \mathcal{F}_{1}$ then $H\cap K\in \mathcal{F}_{1}$,
\item If $\pi\in \mathcal{G}$ and $H\in \mathcal{F}_{1}$, then $\pi H \pi^{-1}\in \mathcal{F}_{1}$,
\item For each $a \in A$, $\{\pi \in \mathcal{G} : \pi(a) = a\} \in \mathcal{F}_{1}$.
\end{enumerate}
Let $\mathcal{F}$ be a normal filter of subgroups of $\mathcal{G}$.
For $x\in M$, we define
\begin{equation}
sym_{\mathcal {G}}(x) =\{g\in \mathcal {G} : g(x) = x\} \text{ and } \text{fix}_{\mathcal{G}}(x) =\{\phi \in \mathcal{G} : \forall y \in x (\phi(y) = y)\}.
\end{equation}
We say $x$ is {\em symmetric} if $sym_{\mathcal{G}}(x)\in\mathcal{F}$ and $x$ is {\em hereditarily symmetric} if $x$ is symmetric and each element of the transitive closure of $x$ is symmetric. We define the permutation model $\mathcal{N}$ with respect to $\mathcal{G}$ and $\mathcal{F}$ to be the class of all hereditarily symmetric sets. It is well-known that $\mathcal{N}$ is a model of $\mathsf{ZFA}$ (cf. \cite[\textbf{Theorem 4.1}]{Jec1973}).
If $\mathcal{I}\subseteq\mathcal{P}(A)$ is a normal ideal, then the set $\{$fix$_{\mathcal{G}}(E): E\in\mathcal{I}\}$ generates a normal filter (say $\mathcal{F}_{\mathcal{I}}$) over $\mathcal G$. Let $\mathcal{N}$ be the permutation model determined by $M$, $ \mathcal{G},$ and $\mathcal{F}_{\mathcal{I}}$. We say $E\in \mathcal{I}$ {\em supports} a set $\sigma\in \mathcal{N}$ if fix$_{\mathcal{G}}(E)\subseteq sym_{\mathcal{G}} (\sigma$).
\begin{lem}
{\em The following hold.
\begin{enumerate}
\item An element $x$ of $\mathcal{N}$ is well-orderable in $\mathcal{N}$ if and only if {\em fix$_{\mathcal{G}}(x)\in \mathcal{F}_{\mathcal{I}}$} {\em (cf. \cite[\textbf{Equation (4.2), p.47}]{Jec1973})}. Thus, an element $x$ of $\mathcal{N}$ with support $E$ is well-orderable in $\mathcal{N}$ if {\em fix$_{\mathcal{G}}(E) \subseteq$ fix$_{\mathcal{G}}(x)$}.
\item For all $\pi \in \mathcal{G}$ and all $x\in \mathcal{N}$ such that $E$ is a support of x, {\em $sym_{\mathcal{G}}(\pi x) = \pi$ $sym_{\mathcal{G}}(x)\pi^{-1}$}
and {\em fix$_{\mathcal{G}}(\pi E) = \pi$ fix$_{\mathcal{G}}(E)\pi^{-1}$}
{\em (cf. \cite[proof of \textbf{Lemma 4.4}]{Jec1973})}.
\item $\mathsf{BPI(\aleph_{1})}$ holds in any Fraenkel-Mostowski permutation model {\em (cf. \cite[\textbf{Theorem 4 (vi)}]{HHK2016})}.
\end{enumerate}
}
\end{lem}
A {\em pure set} in a model $M$ of $\mathsf{ZFA}$ is a set with no atoms in its transitive closure.
The {\em kernel} is the class of all pure sets of $M$. In this paper,
\begin{itemize}
\item Fix an integer $n\geq 2$. We denote by $\mathcal{N}_{HT}^{1}(n)$ the permutation model constructed in \cite[\textbf{Theorem 8}]{HT2020}.
\item We denote by $\mathcal{N}_{1}$ the basic Fraenkel model (cf. \cite{HR1998}).
\item We denote by $\mathcal{V}_{p}$ the finite partition model constructed in \cite{Bru2016}.
\item We denote by $\mathcal{V}_{p}^{+}$ the countable partition model mentioned in \cite[$\S$5]{Bru2016}.
\item We denote by $\mathcal{N}_{6}$ L\'{e}vy's permutation model (cf. \cite{HR1998}).
\item Fix a natural number $n$ such that $n= 3$ or $n >4$ and an infinite well-ordered cardinal number $\kappa$. We denote by $\mathcal{M}_{\kappa,n}$ the permutation model constructed in \cite[\textbf{Theorem 5.3}]{Ban2}.
\item Fix an infinite well-ordered cardinal number $\kappa$. We denote by $\mathcal{M}_{\kappa,4}$ the permutation model constructed in \cite[\textbf{Theorem 10(ii)}]{HT2020}.
\end{itemize}
We refer the reader to \cite[\textbf{Note 103}]{HR1998} for the definition of an injectively boundable statement.
\begin{thm}{(\textbf{Pincus' Transfer Theorem}; cf. \cite[\textbf{Theorem 3A3}]{Pin1972})}
{\em If $\Phi$ is a conjunction of injectively boundable statements which hold in the Fraenkel–Mostowski model $V_{0}$, then there is a $\mathsf{ZF}$ model $V \supset V_{0}$ with the same ordinals and cofinalities as $V_{0}$, where $\Phi$ holds.}
\end{thm}
\subsection{Known results}
\begin{lem}(\textbf{Keremedis--Herrlich--Tachtsis}; cf. \cite[\textbf{Remark 2.7}]{Tac2016}, \cite[\textbf{Theorem 3.1}]{KH1962})
{\em The following hold.
\begin{enumerate}
\item $\mathsf{AC_{fin}^{\omega} + MA(\aleph_{0})}\rightarrow$ `for every infinite set $X$, $2^{X}$ is Baire'.
\item `For every infinite set $X$, $2^{X}$ is Baire' $\rightarrow$ `For every infinite set $X$, $\mathcal{P}(X)$ is Dedekind-infinite'.
\end{enumerate}
}
\end{lem}
\begin{lem}(\textbf{L\'{e}vy}; \cite{Lev1962})
{\em $\mathsf{MC}$ if and only if every infinite set has a well-ordered partition into non-empty finite sets.}
\end{lem}
\begin{lem}(\textbf{Howard--Saveliev--Tachtsis}; \cite[\textbf{Lemma 1.3, Theorem 3.1}]{HST2016})
{\em The following hold.
\begin{enumerate}
\item $\mathsf{\leq\aleph_{0}}$-$\mathsf{MC}$ if and only if every infinite set has a well-ordered partition into non-empty countable sets.
\item $\mathsf{\leq\aleph_{0}}$-$\mathsf{MC}$ implies ``for every infinite set $X$, $\mathcal{P}(X)$ is Dedekind-infinite'', which in turn is equivalent
to ``for every infinite set $P$ there is a partial ordering $\leq$ on $P$ such that $(P,\leq)$ has a countably infinite disjoint family of cofinal subsets''.
\end{enumerate}
}
\end{lem}
\begin{lem}{($\mathsf{ZF}$; \textbf{Delhomme--Morillon}; \cite[\textbf{Lemma 1}]{DM2006})}
{\em Given a set $X$ and a set $A$ which is the range of no mapping with domain $X$, consider a mapping $f : A \rightarrow \mathcal{P}(X)\backslash \{\emptyset\}$. Then
\begin{enumerate}
\item There are distinct $a$ and $b$ in $A$ such that $f(a)\cap f(b) \neq \emptyset$.
\item If the set $A$ is infinite and well-orderable, then for every positive integer $p$, there is an $F \in [A]^{p}$ such that $\bigcap f[F]:=\bigcap_{a\in F} f(a)$
is non-empty.
\end{enumerate}
}
\end{lem}
\begin{lem}{(\textbf{Tachtsis; \cite[\textbf{Theorem 3.1}]{Tac2019}})}
{\em The following hold.
\begin{enumerate}
\item Each of the following statements implies the one beneath it:
\begin{enumerate}
\item $\forall$ infinite $\mathfrak{m}(2\mathfrak{m} = \mathfrak{m})$;
\item $\mathsf{ISAE}$;
\item $\mathsf{EPWFP}$;
\item For every infinite set $X$, Sym($X$) $\neq$ FSym($X$);
\item there are no strictly amorphous sets.\footnote{Let $\mathcal{U}$ be a finitary partition of an amorphous set $X$. Then all but finitely many elements of $\mathcal{U}$ have the same cardinality, say $n(\mathcal{U})$. An amorphous set $A$ is called {\em strictly amorphous} if there is no infinite partition of $A$ with $n(\mathcal{U})>1$.}
\end{enumerate}
\item $\mathsf{DF = F}$ implies ``For every infinite set $X$, Sym($X$) $\neq$ FSym($X$)".
\end{enumerate}
}
\end{lem}
\begin{lem}{(\textbf{Pincus; \cite[\textbf{Note 41}]{HR1998}})} {\em If $\mathcal{K}$ is an algebraically closed field, if $\pi$ is a non-trivial automorphism of $\mathcal{K}$ satisfying $\pi^{2}= 1_{\mathcal{K}}$ (the identity on $\mathcal{K}$), and if $i \in\mathcal{K}$ is a square root of $-1$, then $\pi(i) = -i \neq i$.}
\end{lem}
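Complex conjugation on $\mathbb{C}$ is the standard concrete instance of the hypothesis of the preceding lemma: a non-trivial field automorphism $\pi$ with $\pi^{2}=1_{\mathcal{K}}$, which indeed sends $i$ to $-i$. The sketch below only illustrates the relevant identities numerically with Python's built-in \texttt{complex} type (on Gaussian integers, so all equalities are exact); it is of course not a proof.

```python
# pi(z) = conjugate(z) is a non-trivial automorphism of C of order 2;
# it sends the square root i of -1 to -i, as in the conclusion of the lemma.
pi = lambda z: z.conjugate()

i = 1j
z, w = 2 + 3j, -1 + 4j   # Gaussian integers, so arithmetic below is exact
```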
\begin{lem}{(\textbf{Herrlich--Howard--Keremedis; \cite[\textbf{Theorem 4(v)}]{HHK2016}})}
{\em $\mathsf{PUU \land BPI(\omega_{1})}$ implies
$\mathsf{AC_{fin}^{\aleph_{1}}}$ in $\mathsf{ZF}$.}
\end{lem}
\begin{lem}(cf. \cite[\textbf{Corollary 4.2}]{Ban2})
The statement {\em `If $(P, \leq)$ is a poset such that P is well-ordered, and if all antichains in P are finite and all chains in P are countable, then P is countable'} holds in any Fraenkel-Mostowski model.
\end{lem}
\begin{lem}(\textbf{Cayley's formula}; $\mathsf{ZF}$)
{\em The number of spanning trees in $K_{n}$ is $n^{n-2}$ for any $n\in \omega\backslash \{0,1,2\}$.}
\end{lem}
\begin{lem}(\textbf{Scoins' formula}; $\mathsf{ZF}$)
{\em The number of spanning trees in $K_{m,n}$ is $n^{m-1}m^{n-1}$ for any $n,m\in \omega\backslash \{0,1\}$.}
\end{lem}
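Both counting formulas can be spot-checked for small graphs via Kirchhoff's matrix-tree theorem, which states that the number of spanning trees of a graph equals any cofactor of its Laplacian $L = D - A$. A minimal sketch using exact rational arithmetic follows; the function names are ours.

```python
from fractions import Fraction

# Exact determinant via Gaussian elimination over the rationals.
def det(M):
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if M[r][c]), None)
        if p is None:
            return 0
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d                      # row swap flips the sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return int(d)

# Matrix-tree theorem: spanning-tree count = cofactor of L = D - A.
def tree_count(adj):
    n = len(adj)
    L = [[(sum(adj[i]) if i == j else -adj[i][j]) for j in range(n)]
         for i in range(n)]
    return det([row[1:] for row in L[1:]])   # delete row/column 0

K5 = [[int(i != j) for j in range(5)] for i in range(5)]          # K_5
K23 = [[int((i < 2) != (j < 2)) for j in range(5)] for i in range(5)]  # K_{2,3}
```

For $K_{5}$ the theorem yields $5^{3}=125$ spanning trees, matching Cayley's formula, and for $K_{2,3}$ it yields $3^{1}\cdot 2^{2}=12$, matching Scoins' formula.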
\begin{lem}(cf. \cite[\textbf{Chapter 30, Problem 5}]{KT2006})
{\em $\mathsf{AC}_{m}$ implies $\mathsf{AC}_{n}$ if $m$ is a multiple of $n$.}
\end{lem}
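The idea behind the preceding lemma: writing $m = nk$, each $n$-element set $A$ inflates to the $m$-element set $A \times \{0,\dots,k-1\}$, and a choice from the inflated set projects back to a choice from $A$. A toy finite illustration follows; here \texttt{min} merely stands in for an arbitrary given choice function on $m$-element sets, and all names are ours.

```python
# AC_m -> AC_n when m = n*k: inflate each n-set A to the m-set
# A x {0,...,k-1}; a choice from the inflated set projects back to A.
def project_choice(family, choose_m, k):
    # family: list of n-element sets; choose_m: a stand-in "choice
    # function" on m-element sets, returning one of their elements.
    return [choose_m({(a, j) for a in A for j in range(k)})[0]
            for A in family]

family = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]   # n = 3
picked = project_choice(family, min, 2)      # m = 6, k = 2
```

Each chosen pair $(a, j)$ projects to its first coordinate $a \in A$, which is exactly how a choice function for the inflated family induces one for the original family.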
\section{\textbf{Form 269}, \textbf{Form 233}, and \textbf{Form 304}}
\begin{thm}
{\em $\mathsf{AC^{LO}}$ does not imply \textbf{Form 269} in $\mathsf{ZFA}$. Hence, neither $\mathsf{LW}$ nor $\mathsf{AC^{WO}}$ implies \textbf{Form 269} in $\mathsf{ZFA}$.}
\end{thm}
\begin{proof}
We present two known models.
{\em First model}: Fix a successor aleph $\aleph_{\alpha +1}$. We recall the permutation model $\mathcal{V}$ given in the proof of \cite[\textbf{Theorem 8.9}]{Jec1973}.
In order to describe $\mathcal{V}$, we
start with a model $M$ of $\mathsf{ZFA + AC}$ with a set $A$ of atoms of cardinality $\aleph_{\alpha+1}$. Let $\mathcal{G}$ be the group of all permutations of $A$ and let $\mathcal{F}$ be the normal filter of subgroups of $\mathcal{G}$ generated by $\{$fix$_{\mathcal{G}}(E) : E \in [A]^{< \aleph_{\alpha+1}}\}$. Let
$\mathcal{V}$ be the permutation model determined by $M$, $\mathcal{G}$, and $\mathcal{F}$. In the proof of \cite[\textbf{Theorem 8.9}]{Jec1973}, Jech proved that $\mathsf{AC^{WO}}$ holds in $\mathcal{V}$.
We recall a variant of $\mathcal{V}$ from \cite[\textbf{Theorem 3.5(i)}]{Tac2019}. Let $M$ and $A$ be as above, and let $\mathcal{N}$ be the permutation model determined by $M$, $\mathcal{G}'$ and $\mathcal{F}'$, where $\mathcal{G}'$ is the group of permutations of $A$ which move at most $\aleph_{\alpha}$ atoms, and $\mathcal{F}'$ is the normal filter on $\mathcal{G}'$ generated by $\{$fix$_{\mathcal{G}'} (E) : E \in [A]^{< \aleph_{\alpha+1}}\}$. Tachtsis \cite[\textbf{Theorem 3.5(i)}]{Tac2019} proved that $\mathcal{N}=\mathcal{V}$ and that if $X\in\{\mathsf{LW}, \mathsf{AC^{LO}}\}$, then $X$ holds in $\mathcal{N}$. We slightly modify the arguments of \cite[\textbf{Note 91}]{HR1998} to prove that \textbf{Form 269} fails in $\mathcal{N}$. We show that for any set $X$ in $\mathcal{N}$, if the set $[X]^{2}$ of two-element subsets of $X$ has a choice function, then $X$ is well-orderable in $\mathcal{N}$. Assume that $X$ is such a set and let $E$ be a support of both $X$ and of a choice function $f$ on $[X]^{2}$. In order to show that $X$ is well-orderable in $\mathcal{N}$, it is enough to prove that
fix$_{\mathcal{G}'}(E)$ $\subseteq$ fix$_{\mathcal{G}'}(X)$ (cf. \textbf{Lemma 2.4(1)}). Suppose, toward a contradiction, that fix$_{\mathcal{G}'}(E)$ $\nsubseteq$ fix$_{\mathcal{G}'}(X)$; then there is a $y \in X$ and a $\phi \in$ fix$_{\mathcal{G}'}(E)$ with $\phi(y) \neq y$. Under such assumptions, Tachtsis constructed a permutation $\psi \in$ fix$_{\mathcal{G}'}(E)$ such
that $\psi(y) \neq y$ but $\psi^{2}(y)=y$ (cf. the proof of $\mathsf{LW}$ in $\mathcal{N}$ from \cite[\textbf{Theorem 3.5(i)}]{Tac2019}). This contradicts our choice of $E$ as a support for a choice function on $[X]^{2}$ since $\psi$ fixes $\{\psi(y),y\}$ but moves both of its elements. So \textbf{Form 269} fails in $\mathcal{N}$.
{\em Second model}: We consider the permutation model $\mathcal{N}$ given in the proof of \cite[\textbf{Theorem 4.7}]{Tac2019a} where $\mathsf{LW}$ and $\mathsf{AC^{LO}}$ hold. Following the above arguments and the arguments in \cite[\textbf{claim 4.10}]{Tac2019a}, we can see that \textbf{Form 269} fails in $\mathcal{N}$.
\end{proof}
\begin{thm}
{\em Fix any regular $\aleph_{\alpha}$ and any $2\leq n\in \omega$. There is a model $\mathcal{M}$ of $\mathsf{ZFA}$ where $\mathsf{CAC^{\aleph_{\alpha}}_{1}}$ and $\mathsf{CAC^{\aleph_{\alpha}}}$ hold and $\mathsf{AC_{n}^{-}}$ fails.
Moreover, the following hold in $\mathcal{M}$.
\begin{enumerate}
\item \textbf{\em Form 269} fails.
\item \textbf{\em Form 233} holds.
\item \textbf{\em Form 304} holds.
\end{enumerate}
}
\end{thm}
\begin{proof}
We consider the permutation model constructed by Halbeisen--Tachtsis \cite[\textbf{Theorem 8}]{HT2020} where for an arbitrary integer $n\geq 2$, $\mathsf{AC_{n}^{-}}$ fails but the statement ``For every regular $\aleph_{\alpha}$, $\mathsf{CAC^{\aleph_{\alpha}}_{1}+ CAC^{\aleph_{\alpha}}}$'' holds (cf. \cite{BG1, Ban2,HT2020}). We fix an arbitrary integer $n\geq 2$ and recall the model constructed in the proof of \cite[\textbf{Theorem 8}]{HT2020}.
We start with a model $M$ of $\mathsf{ZFA+AC}$ where $A$ is a countably infinite set of atoms written as a disjoint union $\bigcup\{A_{i}:i\in \omega\}$ where for each $i\in \omega$, $A_{i}=\{a_{i_{1}},a_{i_{2}},..., a_{i_{n}}\}$ and $\vert A_{i}\vert = n$. The group $\mathcal{G}$ is defined in \cite{HT2020} in a way so that if $\eta\in \mathcal{G}$, then $\eta$ only moves finitely many atoms and for all $i\in \omega$, $\eta(A_{i})=A_{k}$ for some $k\in \omega$. Let $\mathcal{F}$ be the filter of subgroups of $\mathcal{G}$ generated by $\{$fix$_{\mathcal{G}}(E): E\in [A]^{<\omega}\}$.
We denote by $\mathcal{N}_{HT}^{1}(n)$
the Fraenkel–Mostowski permutation model determined by $M$, $\mathcal{G}$, and $\mathcal{F}$. If $X$ is a set in $\mathcal{N}_{HT}^{1}(n)$, then without loss of generality we may assume that $E=\bigcup_{i=0}^{m} A_{i}$ is a support of $X$ for some $m\in\omega$.
\begin{claim}
{\em Suppose $X$ is not a well-ordered set in $\mathcal{N}_{HT}^{1}(n)$, and let $E=\bigcup_{i=0}^{m} A_{i}$ be a support of $X$. Then there is a $t\in X$ with support $F\supseteq E$, such that the following hold.
\begin{enumerate}
\item There is a permutation $\psi$ in {\em fix$_{\mathcal{G}}E$} and an element $y \in X$ such that $t \neq y$, $\psi(t) = y$ and $\psi(y) = t$.
\item There is a $k \in F \backslash E$ such that for all $\phi_{1},\phi_{2}\in$ {\em fix$_{\mathcal{G}}(F\backslash\{k\})$}, $\phi_{1}(t) = \phi_{2}(t)$ iff $\phi_{1}(k) = \phi_{2}(k)$.
\end{enumerate}
}
\end{claim}
\begin{proof}
Since $X$ is not well-ordered, and $E$ is a support of $X$,
fix$_{\mathcal{G}}(E)\nsubseteq$ fix$_{\mathcal{G}} (X)$ by \textbf{Lemma 2.4(1)}. So there is a $t \in X$ and a $\psi \in$ fix$_{\mathcal{G}}(E)$ such that $\psi(t) \neq t$. Let $F$ be a support of $t$ containing $E$. Without loss of generality we may assume that $F$ is a union of finitely many $A_{i}$'s. We slightly modify the arguments of \cite[\textbf{claim 4.10}]{Tac2019a} to prove (1).
(1). Let $W = \{a \in A : \psi(a) \neq a\}$. We note that $W$ is finite since if $\eta\in \mathcal{G}$, then $\eta$ only moves finitely many atoms. Let $U$ be a finite subset of $A$ which is disjoint from $F \cup W$ and such that there exists a bijection $H : tr(U) \rightarrow tr((F \cup W)\backslash E)$ (where for a set $x \subseteq A$, $tr(x) = \{i \in \omega : A_{i} \cap x \neq \emptyset\}$) with the property that if $i \in tr((F \cup W)\backslash E)$ is such that $A_{i} \subseteq (F \cup W)\backslash E$ then $A_{H^{-1}(i)} \subseteq U$; otherwise if $A_{i}
\nsubseteq (F \cup W)\backslash E$,
which means that $A_{i} \cap F = \emptyset$ and $A_{i} \nsubseteq W$, then $\vert W \cap A_{i}\vert =\vert U \cap A_{H^{-1}(i)}\vert$. Let $f : U \rightarrow (F \cup W)\backslash E$ be a bijection such that $\forall i \in tr(U)$, $f \restriction U \cap A_{i}$ is a one-to-one function from $U \cap A_{i}$ onto $((F \cup W)\backslash E) \cap A_{H(i)}$. Let $f' : \bigcup_{i\in tr(U)} A_{i}\backslash (U \cap A_{i}) \rightarrow \bigcup_{i\in tr(U)} A_{H(i)}\backslash(((F \cup W)\backslash E) \cap A_{H(i)})$ be a bijection such that $\forall i \in tr(U)$, $f' \restriction (A_{i}\backslash (U \cap A_{i}))$ is a one-to-one function from $A_{i}\backslash (U \cap A_{i})$ onto $A_{H(i)}\backslash(((F \cup W)\backslash E) \cap A_{H(i)})$. Let
\begin{center}$\delta = \prod_{u\in U} (u, f (u))\prod_{u\in \bigcup_{i\in tr(U)} A_{i}\backslash (U \cap A_{i})} (u, f' (u))$
\end{center}
be a product of disjoint transpositions. It is clear that $\delta$ only moves finitely many atoms, and for all $i\in \omega$, $\delta(A_{i})=A_{k}$ for some $k\in \omega$. Moreover, $\delta \in$ fix$_{\mathcal{G}}(E)$, $\delta^{2}(t) = t$, and $\delta(t)\neq t$ by the arguments in \cite[\textbf{claim 4.10}]{Tac2019a}.
(2). Let $U'=\bigcup_{A_{i}\subseteq F\backslash E} A_{H^{-1}(i)}$. Let $F_{u}=(u,f(u))$ be a transposition for all $u\in U'$ and let $\delta' = \prod_{u\in U'} F_{u}$.
We can slightly modify the arguments of \cite[\textbf{claim 4.10}]{Tac2019a} to see that $\delta'(t)\neq t$.\footnote{For the reader's convenience, we write down the proof. Assume on the
contrary that $\delta'(t) = t$. Since $F$ is a support of $t$, we have that $\delta'(F)$ is a support of $\delta'(t) = t$.
Now $\delta'(F) = \delta'((F\backslash E) \cup E) = \delta'(F\backslash E) \cup \delta'(E) = U' \cup E$. So, $U' \cup E$ is a support of $t$. Now $\psi \in$ fix$_{\mathcal{G}}(U') \cap $fix$_{\mathcal{G}}(E)$ (since $U' \cap W = \emptyset$). So, $\psi$ fixes $U' \cup E$ pointwise and thus $\psi(t)=t$ since $U' \cup E$ is a support of $t$. This contradicts the assumption that $\psi(t)\neq t$.}
Thus there is at least one $u\in U'$ such that $F_{u}(t) \neq t$. Define $\phi:=F_{u}$.
We prove that for $\phi_{1},\phi_{2}\in$ fix$_{\mathcal{G}}(F\backslash\{f(u)\})$, $\phi_{1}(t) = \phi_{2}(t)$ iff $\phi_{1}(f(u)) = \phi_{2}(f(u))$; if $\phi_{1}(f(u)) = \phi_{2}(f(u))$, then $\phi_{1}$ and $\phi_{2}$ agree on a support of $t$ and therefore $\phi_{1}(t) = \phi_{2}(t)$. Now suppose $\phi_{1}(f(u)) \neq \phi_{2}(f(u))$. Let
\begin{center}
$\beta = (f(u), \phi_{1}(f(u)))(\phi(f(u)), \phi_{2}(f(u)))$
\end{center}
be the product of the two transpositions that fixes $F\backslash \{f(u)\}$ pointwise. We can see that $\beta$ agrees with $\phi_{1}$ on a support of $t$, and agrees with $\phi_{2}\phi^{-1}$ on a support of $\phi(t)$. Since $t \neq \phi(t)$, $\beta(t) \neq \beta(\phi(t))$. Consequently, $\phi_{1}(t)=\beta(t)\neq\beta(\phi(t)) = \phi_{2}\phi^{-1}(\phi(t))=\phi_{2}(t)$.
\end{proof}
\begin{claim}
{\em In $\mathcal{N}_{HT}^{1}(n)$, the following hold.
\begin{enumerate}
\item \textbf{\em Form 269} fails.
\item \textbf{\em Form 233} holds.
\item \textbf{\em Form 304} holds.
\end{enumerate}
}
\end{claim}
\begin{proof}
(1). Following \textbf{claim 3.3(1)} and the arguments in the proof of \textbf{Theorem 3.1} we can see that \textbf{Form 269} fails in $\mathcal{N}_{HT}^{1}(n)$.
(2). We follow the arguments due to Pincus from \cite[\textbf{Note 41}]{HR1998} and use \textbf{claim 3.3(1)} to prove that \textbf{Form 233} holds in $\mathcal{N}_{HT}^{1}(n)$.
For the reader's convenience, we write down the proof. Let $(\mathcal{K}, +, \cdot,0,1)$ be a field in $\mathcal{N}_{HT}^{1}(n)$ with finite support $E \subset A$ and assume that $\mathcal{K}$ is algebraically closed. Without loss of generality assume that $E = \bigcup_{i=0}^{m} A_{i}$. We show that every element of $\mathcal{K}$ has support $E$, which implies that $\mathcal{K}$ is well-orderable in $\mathcal{N}_{HT}^{1}(n)$ and therefore the standard proof of the uniqueness of algebraic closures (using $\mathsf{AC}$) is valid in $\mathcal{N}_{HT}^{1}(n)$. For the sake of contradiction, assume that $x\in\mathcal{K}$ does not have support $E$. Let $F=\bigcup_{i=0}^{m+k} A_{i}$ be a support of $x$ containing $E$.
By \textbf{claim 3.3(1)}, there is a permutation $\psi$ in fix$_{\mathcal{G}}E$ such that $\psi(x)\neq x$ and $\psi^{2}$ is the identity. The permutation $\psi$ induces an automorphism of $(\mathcal{K}, +, \cdot,0,1)$ and we can therefore apply \textbf{Lemma 2.11} to conclude that $\psi(i) = - i \neq i$ for some square root $i$ of $-1$ in $\mathcal{K}$.
We can follow the arguments from \cite[\textbf{Note 41}]{HR1998} to see that for every permutation $\pi$ of $A$ that fixes $E$ pointwise, $\pi(i)=i$ for every square root $i$ of $-1$ in $\mathcal{K}$. Hence we arrive at a contradiction.
(3). We slightly modify the arguments of \cite[\textbf{Note 116}]{HR1998} and use \textbf{claim 3.3} to prove that \textbf{Form 304} holds in $\mathcal{N}_{HT}^{1}(n)$. Let $X$ be an infinite Hausdorff space in $\mathcal{N}_{HT}^{1}(n)$, and $E=\bigcup_{i=0}^{m} A_{i}$ be a support of $X$ and its topology. We show that there is an infinite $Y \subseteq X$ in $\mathcal{N}_{HT}^{1}(n)$ such that $Y$ has no infinite compact subsets in $\mathcal{N}_{HT}^{1}(n)$. If $X$ is well-orderable then we can use transfinite induction, without using any form of choice, to finish the proof. Assume that $X$ is not well-orderable in $\mathcal{N}_{HT}^{1}(n)$.
By \textbf{claim 3.3(1)}, there is an $x\in X$ with support $F=\bigcup_{i=0}^{m+k} A_{i}$, a permutation $\phi\in$ fix$_{\mathcal{G}}E$ and an element $y \in X$ such that $x \neq y$, $\phi(x) = y$ and $\phi(y) = x$.
By \textbf{claim 3.3(2)}, there is a $k \in F \backslash E$ such that for all $\phi_{1},\phi_{2}\in$ fix$_{\mathcal{G}}(F\backslash\{k\})$, $\phi_{1}(x) = \phi_{2}(x)$ iff $\phi_{1}(k) = \phi_{2}(k)$.
Then $f = \{(\psi(x),\psi(k)) : \psi \in \mathcal{G}$, $\psi\in$ fix$_{\mathcal{G}}(F\backslash \{k\})\}$ is a bijection in $\mathcal{N}_{HT}^{1}(n)$ from $\{\psi(x) : \psi \in \mathcal{G}$, $\psi\in$ fix$_{\mathcal{G}}(F\backslash \{k\})\}$ to $A\backslash(F\backslash \{k\})$. Define $Y:=\{\psi(x) : \psi \in \mathcal{G}$, $\psi\in$ fix$_{\mathcal{G}}(F\backslash \{k\})\}$.
Since $\phi(x)\neq x$ and $X$ is an infinite Hausdorff space, we can choose open sets $C$ and $D$ so that $x \in C$, $\phi(x) \in D$ and $C \cap D = \emptyset$. Since $Y$ can be put in a one-to-one correspondence with a subset of the atoms in the model and \textbf{$A$ is amorphous in $\mathcal{N}_{HT}^{1}(n)$} (cf. \cite{HT2020}), every subset of $Y$ in the model must be finite or cofinite. Thus at least one of $Y \cap C$ or $Y \cap D$ is finite. We may assume that $Y \cap C$ is finite. Then we can conclude that $\mathcal{C} = \{\psi(C)\cap Y : \psi \in \mathcal{G}, \psi\in$ fix$_{\mathcal{G}}(F\backslash \{k\})\}$ is an open cover for $Y$ and each element of $\mathcal{C}$ is finite. So for any infinite subset $Z$ of $Y$, $\mathcal{C}$ is an open cover for $Z$ without a finite subcover.
\end{proof}
\end{proof}
\section{Partition models, weak choice forms, and permutations of infinite sets}
Tachtsis \cite[\textbf{Theorem 3.1(2)}]{Tac2019} proved that $\mathsf{DF = F}$ implies ``For every infinite set $X$, Sym(X) $\neq$ FSym(X)'' in $\mathsf{ZF}$. Inspired by this idea, we observe the following.
\begin{prop}
{($\mathsf{ZF}$)} {\em The following hold.
\begin{enumerate}
\item $\mathsf{W_{\aleph_{\alpha+1}}}$ implies `for any set $X$ of size $\aleph_{\alpha+1}$, Sym(X) $\neq$ $\aleph_{\alpha}$Sym(X)'.
\item Each of the following statements implies the one beneath it:
\begin{enumerate}
\item $\forall$ infinite $\mathfrak{m}(2\mathfrak{m} = \mathfrak{m})$;
\item $\mathsf{ISAE}$;
\item $\mathsf{EPWFP}$;
\item for any $X$ of size $\aleph_{\alpha+1}$, Sym(X) $\neq$ $\aleph_{\alpha}$Sym(X).
\end{enumerate}
\end{enumerate}
}
\end{prop}
\begin{proof}
(1). Let $X$ be a set of size $\aleph_{\alpha+1}$ and let us assume Sym(X)= $\aleph_{\alpha}$Sym(X). We prove that there is no injection $f$ from $\aleph_{\alpha+1}$ into $X$. Assume there exists such an $f$. Let $\{y_{n}\}_{n\in \aleph_{\alpha+1}}$ be an enumeration of the elements of $Y=f(\aleph_{\alpha+1})$.
We can use transfinite recursion, without using any form of choice, to construct a bijection $h:Y\rightarrow Y$ such that $h(x)\neq x$ for any $x\in Y$. Define $g:X \rightarrow X$ as follows: $g(x)=h(x)$ if $x\in Y$, and $g(x)=x$ if $x\in X\backslash Y$. Clearly $g \in$ Sym(X)$\backslash \aleph_{\alpha}$Sym(X), and hence Sym(X) $\neq \aleph_{\alpha}$Sym(X), a contradiction.
(2). $(a)\implies (b) \implies (c)$ follows from \textbf{Lemma 2.10(1)} and $(c) \implies (d)$ is straightforward.
\end{proof}
\subsection{Weak choice forms in the finite partition model} We recall the finite partition model $\mathcal{V}_{p}$ from \cite{Bru2016}. In order to describe $\mathcal{V}_{p}$, we start with a model $M$ of $\mathsf{ZFA+AC}$ where $A$ is a countably infinite set of atoms. Let $\mathcal{G}$ be the group of all permutations of $A$. Let $S$ be the set of all finite partitions of $A$ and let $\mathcal{F}$ = $\{H:$ $H$ is a subgroup of $\mathcal{G}$, $H \supseteq$ fix$_{\mathcal{G}}(P)$ for some $P \in S\}$ be the normal filter of subgroups of $\mathcal{G}$ where $\text{fix}_{\mathcal{G}}P=\{\phi \in \mathcal{G} : \forall y \in P (\phi(y) = y)\}$. The model $\mathcal{V}_{p}$ is the permutation model determined by $M$, $\mathcal{G}$ and $\mathcal{F}$. In $\mathcal{V}_{p}$ there is a set which has no infinite amorphous subset.
\begin{thm}
{\em The following hold in $\mathcal{V}_{p}$.
\begin{enumerate}
\item If $X\in \{`\forall$ infinite $\mathfrak{m}(2\mathfrak{m} = \mathfrak{m})$', $\mathsf{ISAE}, \mathsf{EPWFP}\}$, then $X$ fails.
\item $\mathsf{AC_{n}}$ fails for any integer $n\geq 2$.
\item $\mathsf{MA(\aleph_{0})}$ fails.
\item If $X\in \{\mathsf{MC}, \mathsf{\leq\aleph_{0}}$-$\mathsf{MC}\}$, then $X$ fails.
\end{enumerate}
}
\end{thm}
\begin{proof}
(1). By \textbf{Lemma 2.10}, it is enough to show that (Sym($A$))$^{\mathcal{V}_{p}}$ = FSym($A$).
For the sake of contradiction, assume that
$f$ is a permutation of $A$ in $\mathcal{V}_{p}$, which moves infinitely many atoms.
Let $P =\{P_{j}:j\leq k\}$ be a support of $f$ for some $k\in\omega$.
Without loss of generality, assume that $P_{0},..., P_{n}$ are the finite blocks of $P$ for some $n< k$.
Then there exist $i$ with $n< i \leq k$, $a \in P_{i}$, and $b \in \bigcup P\backslash (P_{0}\cup...\cup P_{n} \cup \{a\})$ such that $b = f(a)$.
\textbf{Case (i):}
Let $b \in P_{i}$. Consider $\phi\in$ fix$_{\mathcal{G}}(P)$ such that $\phi$
fixes all the atoms in all the blocks other than $P_{i}$ and $\phi$ moves every atom in $P_{i}$ except $b$. Thus, $\phi(b)=b$, $\phi(a)\neq a$, and $\phi(f) = f$ since $P$ is the support of $f$. Thus
$(a,b)\in f\implies (\phi(a), \phi(b))\in \phi(f)\implies (\phi(a), b)\in f$.
So $f$ is not injective; a contradiction.
\textbf{Case (ii):} Let $b \not\in P_{i}$. Consider $\phi\in$ fix$_{\mathcal{G}}(P)$ such that $\phi$
fixes all the atoms in all the blocks other than $P_{i}$ and $\phi$ moves every atom in $P_{i}$. Then again we can obtain a contradiction as in Case (i).
(2). Fix any integer $n\geq 2$. We show that the set $S = [A]^{n}$ has no choice function in $\mathcal{V}_{p}$. Assume that $f$ is a choice function of $S$ and let $P$ be a support of $f$.
Since $A$ is countably infinite and $P$ is a finite partition of $A$, there is a $p \in P$ such that $\vert p\vert$ is infinite. Let $a_{1}, a_{2},...,a_{n} \in p$ be distinct and let $\pi \in $ fix$_{\mathcal{G}}(P)$ be the $n$-cycle with $\pi a_{1} = a_{2}$, $\pi a_{2} = a_{3}$,..., $\pi a_{n-1} = a_{n}$, $\pi a_{n} = a_{1}$, which fixes all other atoms. Without loss of generality, we assume that $f(\{a_{1}, a_{2},...,a_{n}\})= a_{1}$. Since $\pi\{a_{1},...,a_{n}\} = \{a_{1},...,a_{n}\}$ and $\pi(f)=f$,
$(\{a_{1},...,a_{n}\}, a_{1})\in f \implies (\pi\{a_{1},...,a_{n}\}, \pi a_{1})\in \pi(f) \implies
(\{a_{1},...,a_{n}\}, a_{2})\in f.$
Thus $f$ is not a function; a contradiction.
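For the simplest instance $n=2$, the contradiction can be displayed concretely (our own illustrative rendering, with the members of $[A]^{2}$ written as sets): pick distinct $a_{1},a_{2}$ in an infinite block $p\in P$ and let $\pi\in$ fix$_{\mathcal{G}}(P)$ be the transposition interchanging $a_{1}$ and $a_{2}$, which fixes every block of $P$ setwise. If $f(\{a_{1},a_{2}\})=a_{1}$, then since $\pi\{a_{1},a_{2}\}=\{a_{1},a_{2}\}$ and $\pi(f)=f$,
\begin{center}
$(\{a_{1},a_{2}\}, a_{1})\in f\implies (\pi\{a_{1},a_{2}\}, \pi a_{1})\in \pi(f)\implies (\{a_{1},a_{2}\}, a_{2})\in f$,
\end{center}
so $f$ relates $\{a_{1},a_{2}\}$ to two distinct elements and is not a function.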
(3). It is known that $\mathcal{P}(A)$ is Dedekind-finite and $\mathsf{UT(WO,WO,WO)}$ holds in $\mathcal{V}_{p}$ (cf. \cite[\textbf{Proposition 4.9, Theorem 4.18}]{Bru2016}). So $\mathsf{AC_{fin}^{\omega}}$ holds as well. Thus by \textbf{Lemma 2.6(2)}, the
statement ``for every infinite set $X$, $2^{X}$ is Baire'' is false in $\mathcal{V}_{p}$. Hence by \textbf{Lemma 2.6(1)}, $\mathsf{MA(\aleph_{0})}$ is false in $\mathcal{V}_{p}$.
(4). Follows from \textbf{Lemmas 2.7, 2.8(1)} and the fact that $\mathsf{UT(WO,WO,WO)}$ holds in $\mathcal{V}_{p}$. Alternatively, we can also use \textbf{Lemma 2.8(2)} to see that $\mathsf{\leq\aleph_{0}}$-$\mathsf{MC}$ fails in $\mathcal{V}_{p}$ since $\mathcal{P}(A)$ is Dedekind-finite in $\mathcal{V}_{p}$. So we may also conclude by \textbf{Lemma 2.8(2)} that the statement ``for every infinite set $P$ there is a partial ordering $\leq$ on $P$ such that $(P, \leq)$ has a countably infinite disjoint family of cofinal subsets'' fails in $\mathcal{V}_{p}$.
\end{proof}
\subsection{Weak choice forms in the countable partition model}
Let $M$ be a model of $\mathsf{ZFA+AC}$ where $A$ is an uncountable set of atoms and $\mathcal{G}$ is the group of all permutations of $A$.
\begin{lem}
{\em Let $S$ be the set of all countable partitions of $A$. Then $\mathcal{F}$ = $\{H:$ $H$ is a subgroup of $\mathcal{G}$, $H \supseteq$ {\em fix$_{\mathcal{G}}(P)$} for some $P \in S\}$ is the normal filter of subgroups of $\mathcal{G}$.}
\end{lem}
\begin{proof}
We modify the arguments of \cite[\textbf{Lemma 4.1}]{Bru2016} slightly and verify clauses 1--5 of a normal filter (cf. $\S$2.1).
\begin{enumerate}
\item We can see that $\mathcal{G}\in \mathcal{F}$.
\item Let $H \in \mathcal{F}$ and let $K$ be a subgroup of $\mathcal{G}$ such that $H \subseteq K$. Then there exists $P \in S$ such that fix$_{\mathcal{G}}(P) \subseteq H$. So, fix$_{\mathcal{G}}(P) \subseteq K$ and $K \in \mathcal{F}$.
\item Let $K_{1}, K_{2} \in \mathcal{F}$. Then there exist $P_{1}, P_{2} \in S$ such that fix$_{\mathcal{G}}(P_{1})\subseteq K_{1}$ and fix$_{\mathcal{G}}(P_{2})\subseteq K_{2}$. Let $P_{1} \land P_{2}$ denote the coarsest common refinement of $P_{1}$ and $P_{2}$, given by $P_{1} \land P_{2} = \{p \cap q : p\in P_{1}, q \in P_{2}, p \cap q\neq \emptyset\}$. Clearly, fix$_{\mathcal{G}}(P_{1} \land P_{2})\subseteq$ fix$_{\mathcal{G}}(P_{1})$ $\cap$ fix$_{\mathcal{G}}(P_{2})$ $\subseteq$ $K_{1} \cap K_{2}$. Since the product of two countable sets is countable, $P_{1} \land P_{2}\in S$. Thus $K_{1} \cap K_{2} \in \mathcal{F}$.
\item Let $\pi \in \mathcal{G}$ and $H \in \mathcal{F}$. Then there exists $P \in S$ such that fix$_{\mathcal{G}}(P) \subseteq H$. Since fix$_{\mathcal{G}}(\pi P)$ = $\pi$ fix$_{\mathcal{G}}(P)\pi^{-1} \subseteq \pi H \pi^{-1}$ by \textbf{Lemma 2.4(2)}, it is enough to show $\pi P \in S$. Clearly, $\pi P$ is countable, since $P$ is countable. Following the arguments of \cite[\textbf{Lemma 4.1(iv)}]{Bru2016} we can see that $\pi P$ is a partition of $A$.
\item Fix any $a \in A$. Consider any countable partition $P$ of $A$ in which $\{a\}$ is a singleton block. We can see that fix$_{\mathcal{G}}(P)\subseteq\{\pi \in \mathcal{G} :\pi (a) = a\}$. Thus, $\{\pi \in \mathcal{G} :\pi (a) = a\}\in \mathcal{F}$.
\end{enumerate}
\end{proof}
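As a toy illustration of clause (3) (our own example, not taken from \cite{Bru2016}): for distinct atoms $a,b,c,d\in A$, let $R=A\backslash\{a,b,c,d\}$ and
\begin{center}
$P_{1} = \{\{a,b\}, \{c,d\}, R\}$, \quad $P_{2} = \{\{a,c\}, \{b,d\}, R\}$, \quad so that $P_{1}\land P_{2} = \{\{a\},\{b\},\{c\},\{d\}, R\}$.
\end{center}
Every block of $P_{1}$ and of $P_{2}$ is a union of blocks of $P_{1}\land P_{2}$, so any $\phi$ fixing each block of $P_{1}\land P_{2}$ setwise also fixes each block of $P_{1}$ and of $P_{2}$; this is exactly the inclusion fix$_{\mathcal{G}}(P_{1} \land P_{2})\subseteq$ fix$_{\mathcal{G}}(P_{1})$ $\cap$ fix$_{\mathcal{G}}(P_{2})$ used in clause (3).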
We call the permutation model (denoted by $\mathcal{V}^{+}_{p}$) determined by $M$, $\mathcal{G}$, and $\mathcal{F}$, the countable partition model. We recall the following variant of the basic Fraenkel model (the model $\mathcal{N}_{12}(\aleph_{1})$ in \cite{HR1998}): Let $A$ be an uncountable set of atoms, $\mathcal{G}$ be the group of all permutations of $A$, and the supports are countable subsets of $A$.
\begin{thm}
{\em The following hold.
\begin{enumerate}
\item $\mathcal{N}_{12}(\aleph_{1}) \subset \mathcal{V}^{+}_{p}$.
\item If $X\in \{`\forall$ infinite $\mathfrak{m}(2\mathfrak{m} = \mathfrak{m})$', $\mathsf{ISAE}, \mathsf{EPWFP}\}$, then $X$ fails in $\mathcal{V}^{+}_{p}$.
\item $\mathsf{AC_{n}}$ fails in $\mathcal{V}^{+}_{p}$ for any integer $n\geq 2$.
\item If $X\in \{\mathsf{W_{\aleph_{1}}}, \mathsf{DC_{\aleph_{1}}}\}$, then $X$ fails in $\mathcal{V}^{+}_{p}$.
\end{enumerate}
}
\end{thm}
\begin{proof}
(1). Let $x\in \mathcal{N}_{12}(\aleph_{1})$ with support $E$. So fix$_{\mathcal{G}}(E) \subseteq$ sym$_{\mathcal{G}}(x)$. Then $P = \{\{a\}: a\in E\} \cup \{A\backslash E\}$ is a countable partition of $A$, and fix$_{\mathcal{G}}(P)$ = fix$_{\mathcal{G}}(E)$. Thus fix$_{\mathcal{G}}(P) \subseteq$ sym$_{\mathcal{G}}(x)$ and so $x \in \mathcal{V}^{+}_{p}$ with support $P$.
(2). Similarly to the proof of $\mathsf{\neg EPWFP}$ in $\mathcal{V}_{p}$ (cf. the proof of \textbf{Theorem 4.2(1)}), one may verify that if $f$ is a permutation of $A$ in $\mathcal{V}^{+}_{p}$, then the set $\{x \in A : f(x) \neq x\}$ has cardinality at most $\aleph_{0}$. Since $A$ is uncountable, it follows that `for any uncountable $X$, Sym(X) $\neq$ $\aleph_{0}$Sym(X)' fails in $\mathcal{V}^{+}_{p}$. Consequently, if $X\in \{`\forall$ infinite $\mathfrak{m}(2\mathfrak{m} = \mathfrak{m})$', $\mathsf{ISAE}, \mathsf{EPWFP}\}$, then $X$ fails in $\mathcal{V}^{+}_{p}$ by \textbf{Proposition 4.1(2)}.
(3). Fix any integer $n\geq 2$. Similarly to the proof of \textbf{Theorem 4.2(2)}, one may verify that the set $S = [A]^{n}$ has no choice function in $\mathcal{V}^{+}_{p}$. Consequently, $\mathsf{AC_{n}}$ fails in $\mathcal{V}^{+}_{p}$.
(4). We can use the arguments in (2) and \textbf{Proposition 4.1(1)} to show that $\mathsf{W_{\aleph_{1}}}$ fails in $\mathcal{V}^{+}_{p}$. The rest follows from the fact that $\mathsf{DC_{\aleph_{1}}}$ implies $\mathsf{W_{\aleph_{1}}}$ in $\mathsf{ZF}$ (cf. \cite[\textbf{Theorem 8.1(b)}]{Jec1973}). However, we write a different argument. In order to show that $\mathsf{W_{\aleph_{1}}}$ fails in $\mathcal{V}^{+}_{p}$, we prove that there is no injection $f$ from $\aleph_{1}$ into $A$. Assume there exists such an $f$ with support $P$, and let $\pi \in $ fix$_{\mathcal{G}}(P)$ be such that $\pi$ moves every atom in each non-singleton block of $P$. Since $P$ contains only countably many singletons, $\pi$ fixes only countably many atoms. Fix $n \in \aleph_{1}$. Since $n$ is in the kernel (the class of all pure sets), we have $\pi (n) = n$. Thus $\pi(f(n)) = f(\pi(n))= f(n)$. But $f$ is one-to-one, and thus, $\pi$ fixes $\aleph_{1}$ many values of $f$ in $A$, a contradiction.
\end{proof}
\section{Variants of Chain/Antichain principle and permutations of infinite sets}
\begin{prop}
{\em Fix any $k\in\omega\backslash\{0,1\}$. There is a model of $\mathsf{ZFA}$ where $\mathsf{AC_{fin}^{\omega}}$ fails but the statement `If in a poset $(P,\leq)$ with width $k$ all chains are countable, then $(P,\leq)$
is countable' holds.}
\end{prop}
\begin{proof}
We recall L\'{e}vy's permutation
model (labeled as Model $\mathcal{N}_{6}$ in \cite{HR1998}) whose description is as follows:
We start with a model $M$ of $\mathsf{ZFA + AC}$ with a countably infinite
set $A$ of atoms which is written as a disjoint union $\bigcup\{P_{n}:n\in \omega\}$, where $P_{n}=\{a_{1}^{n}, a_{2}^{n},...,a_{p_{n}}^{n}\}$ and $p_{n}$ is the $n$th prime number. Let $\mathcal{G}$ be the group generated by the following permutations $\pi_{n}$ of $A$.
\begin{center}
$\pi_{n}: a_{1}^{n} \mapsto a_{2}^{n} \mapsto ... \mapsto a_{p_{n}}^{n}\mapsto a_{1}^{n}$ and $\pi_{n}(x)=x$ for all $x\in A\backslash P_{n}$.
\end{center}
Let $\mathcal{F}$ be the filter of subgroups of $\mathcal{G}$ generated by $\{$fix$_{\mathcal{G}}(E): E\in [A]^{<\omega}\}$. The model $\mathcal{N}_{6}$ is the permutation model determined by $M$, $\mathcal{G}$, and $\mathcal{F}$.
In $\mathcal{N}_{6}$, $\mathsf{AC_{fin}^{\omega}}$ fails (cf. \cite[proof of \textbf{Theorem 7.11, p.110}]{Jec1973}). Fix any $k\in\omega\backslash\{0,1\}$. Let $(P,\leq)$ be a poset in $\mathcal{N}_{6}$ of width $k$ in which all chains are countable. By \cite[\textbf{claim 3.6}]{Tac2019b}, $(P,\leq)$ can be well-ordered.
The rest follows from \textbf{Lemma 2.13}.
\end{proof}
\begin{prop}
{\em The following hold.
\begin{enumerate}
\item There is a model of $\mathsf{ZFA}$ where $\mathsf{LOKW_{4}^{-}}$ fails but the statement `If in a poset $(P,\leq)$ all antichains have size $2$ and all chains are countable, then $P$ is countable' holds.
\item Fix a natural number $n$ such that $n>4$. There is a model of $\mathsf{ZFA}$ where $\mathsf{LOC_{n}^{-}}$ fails but the statement `If in a poset $(P,\leq)$ all antichains have size $2$ and all chains are countable, then $P$ is countable' holds.
\end{enumerate}
}
\end{prop}
\begin{proof}
(1). We recall the permutation model from the second assertion of \cite[\textbf{Theorem 10(ii)}]{HT2020} (we denote by $\mathcal{M}_{\kappa,4}$) whose description is as follows: Let $\kappa$ be any infinite well-ordered cardinal number. We start with a model $M$ of $\mathsf{ZFA+AC}$ where $A$ is a
$\kappa$-sized set of atoms written as a disjoint union $A=\bigcup\{A_{\alpha}:\alpha< \kappa\}$, where $A_{\alpha}=\{a_{\alpha,1},a_{\alpha,2},a_{\alpha,3},a_{\alpha,4}\}$ (so $\vert A_{\alpha}\vert=4$) for all $\alpha<\kappa$. Let $\mathcal{G}$ be the weak direct product of the $Alt(A_{\alpha})$'s, where $Alt(A_{\alpha})$ is the alternating group on $A_{\alpha}$ for each $\alpha<\kappa$. Thus every element $\eta \in \mathcal{G}$ moves only finitely many atoms. Let $\mathcal{F}$ be the normal filter of subgroups of $\mathcal{G}$ generated by $\{$fix$_{\mathcal{G}}(E): E\in [A]^{<\omega}\}$. The model $\mathcal{M}_{\kappa,4}$ is the permutation model determined by $M$, $\mathcal{G}$ and $\mathcal{F}$.
In $\mathcal{M}_{\kappa,4}$, $\mathsf{LOKW_{4}^{-}}$ fails (cf. proof of the second assertion of \cite[\textbf{Theorem 10(ii)}]{HT2020}). Let $(P,\leq)$ be a poset in $\mathcal{M}_{\kappa,4}$ where
all antichains have size $2$ and
all chains are countable. Let $E\in [A]^{<\omega}$ be a support of $(P,\leq)$. Following the arguments of \cite[\textbf{claim 3.5}]{Tac2019b} we can see that for each $p \in P$, the set $Orb_{E}(p)=\{\phi(p): \phi\in$ fix$_{\mathcal{G}}(E)\}$ is an antichain in $P$ since every element $\eta\in \mathcal{G}$ moves only finitely many atoms. Following the arguments of \cite[\textbf{claim 3.6}]{Tac2019b} we can see that $\mathcal{O}=\{Orb_{E}(p): p\in P\}$ is a well-ordered partition of $P$. We note that all antichains in $P$ have size $2$, thus $\vert Orb_{E}(p)\vert=2$ for each $p\in P$. Following the arguments of \cite[\textbf{Theorem 10(ii)}]{HT2020}, $P=\bigcup_{p\in P}Orb_{E}(p)$ is well-orderable (cf. the proof of $\mathsf{LOC_{2}^{-}}$ in \cite[\textbf{Theorem 10(ii)}]{HT2020}).
The rest follows from \textbf{Lemma 2.13}.
(2). Let $n$ be a natural number such that $n > 4$ and let $\kappa$ be any infinite well-ordered cardinal number. Consider the permutation model $\mathcal{M}_{\kappa,n}$ constructed in \cite[\textbf{Theorem 5.3}]{Ban2} whose description is as follows: We start with a model $M$ of $\mathsf{ZFA+AC}$ where $A$ is a $\kappa$-sized set of atoms written as a disjoint union $A=\bigcup\{A_{\beta}:\beta< \kappa\}$, where $A_{\beta}=\{a_{\beta,1},a_{\beta,2},...,a_{\beta,n}\}$ (so $\vert A_{\beta}\vert=n$) for all $\beta<\kappa$. Let $\mathcal{G}$ be the weak direct product of the $Alt(A_{\beta})$'s, where $Alt(A_{\beta})$ is the alternating group on $A_{\beta}$ for each $\beta<\kappa$.
Consequently, every element $\eta\in \mathcal{G}$ moves only finitely many atoms. Let $\mathcal{F}$ be the normal filter of subgroups of $\mathcal{G}$ generated by $\{$fix$_{\mathcal{G}}(E): E\in [A]^{<\omega}\}$. The model $\mathcal{M}_{\kappa,n}$ is the permutation model determined by $M$, $\mathcal{G}$ and $\mathcal{F}$. In \cite[\textbf{Theorem 5.3}]{Ban2}, we observed that $\mathsf{LOC_{n}^{-}}$ fails in $\mathcal{M}_{\kappa,n}$. Let $(P,\leq)$ be a poset in $\mathcal{M}_{\kappa,n}$ where
all antichains have size $2$ and
all chains are countable. By the arguments of (1), $P$ can be written as a well-ordered disjoint union $\bigcup\{W_{\alpha} : \alpha<\delta\}$ of antichains, hence as a well-ordered disjoint union of 2-element sets. Applying the group-theoretic facts from \cite[\textbf{Theorem 11}, \textbf{Case 1}]{HHT2012}, we may observe that $P$ is well-orderable in $\mathcal{M}_{\kappa,n}$. The rest follows from \textbf{Lemma 2.13}.
\end{proof}
\begin{prop}{($\mathsf{ZF}$)}
{\em Let $\aleph_{\alpha}$ and $\aleph_{\alpha+1}$ be regular alephs. Then the following hold.
\begin{enumerate}
\item $\mathsf{CAC^{\aleph_{\alpha}}}$ implies the statement `Every family $\mathcal{A} = \{(A_{i}, \leq_{i}): i \in \aleph_{\alpha+1}\}$
such that for each $i \in \aleph_{\alpha+1}$, $A_{i}$ is finite and $\leq_{i}$ is a linear order on $A_{i}$,
has an $\aleph_{\alpha+1}$-sized subfamily with a choice function'.
\item Let $X$ be a $T_{1}$-space. Additionally, suppose $X$ is either $\mathcal{K}$-Loeb or second-countable. Then $\mathsf{CAC^{\aleph_{\alpha}}}$ implies the statement `Every family $\mathcal{A} = \{A_{i}: i \in \aleph_{\alpha+1}\}$
such that for each $i \in \aleph_{\alpha+1}$, $A_{i}$ is a finite subset of $X$,
has an $\aleph_{\alpha+1}$-sized subfamily with a choice function'.
\item $\mathsf{CAC^{\aleph_{\alpha}}_{1}}$
implies
$\mathsf{PAC^{\aleph_{\alpha+1}}_{fin}}$ and $\mathsf{DC_{<\aleph_{\alpha+1}}}$ does not imply $\mathsf{CAC^{\aleph_{\alpha}}_{1}}$.
\end{enumerate}
}
\end{prop}
\begin{proof}
(1). Let $\mathcal{A} = \{(A_{i}, \leq_{i}): i \in \aleph_{\alpha+1}\}$ be a family
such that for each $i \in \aleph_{\alpha+1}$, $A_{i}$ is finite and $\leq_{i}$ is a linear order on $A_{i}$. Without loss of generality, we may assume that $\mathcal{A}$ is pairwise
disjoint. Let $P$ = $\bigcup_{i\in\aleph_{\alpha+1}} A_{i}$. We partially order $P$ by requiring $x \prec y$ if and only if there exists an index $i \in \aleph_{\alpha+1}$ such that $x, y \in A_{i}$ and $x \leq_{i} y$. We can see that $P$ has size at least $\aleph_{\alpha+1}$ and the only chains of $(P,\prec)$ are the subsets of the finite sets $A_{i}$, where $i\in \aleph_{\alpha+1}$. By $\mathsf{CAC^{\aleph_{\alpha}}}$, $P$ has an antichain of size at least $\aleph_{\alpha+1}$, say $C$.
Let $M =\{m \in \aleph_{\alpha+1} : C \cap A_{m}\not= \emptyset\}$. Since $C$ is an antichain and each $A_{m}$ is a chain of $(P,\prec)$, we have $M =\{m \in \aleph_{\alpha+1} : \vert C \cap A_{m}\vert =1\}$; moreover, $\vert M\vert = \aleph_{\alpha+1}$ since $\vert C\vert \geq \aleph_{\alpha+1}$. Clearly, $f =\{ (m,c_{m}) : m \in M\}$, where for $m \in M$, $c_{m}$ is the unique element of $C \cap A_{m}$, is a choice function of the subset $\mathcal{B}=\{A_{m} : m \in M\}$ of $\mathcal{A}$ of size $\aleph_{\alpha+1}$. Thus $\mathcal{B}$ is an $\aleph_{\alpha+1}$-sized subfamily of $\mathcal{A}$ with a choice function.
(2). Let $\mathcal{A} = \{A_{i}: i \in \aleph_{\alpha+1}\}$ be a family such that for each $i \in \aleph_{\alpha+1}$, $A_{i}$ is a finite subset of $X$. Then there exists a family $\{\leq_{n}:n\in \aleph_{\alpha+1}\}$ such that, for every $n\in \aleph_{\alpha+1}$, $\leq_{n}$ is a well-ordering on $A_{n}$ (cf. the arguments in the proof of \cite[\textbf{Proposition 2.2}]{KW}). The rest follows from the arguments of (1).
(3). We can slightly modify the arguments of \cite[\textbf{Theorem 4.5} $\&$ \textbf{Corollary 4.6}]{Ban2} to obtain the results. For the convenience of the reader we write down the arguments. Let $\mathcal{A}=\{A_{n} : n \in \aleph_{\alpha+1}\}$ be a family of non-empty finite sets. Without loss of generality, we assume that $\mathcal{A}$ is pairwise disjoint. Define a binary relation $\leq$ on $A =\bigcup \mathcal{A}$ as follows: for all $a,b \in A$, let $a\leq b$ if and only if $a = b$, or $a \in A_{n}$ and $b \in A_{m}$ and $n < m$. Clearly, $\leq$ is a partial order on $A$. Also, $A$ has size at least $\aleph_{\alpha+1}$. The only antichains of $(A,\leq)$ are the subsets of the finite sets $A_{n}$, where $n\in \aleph_{\alpha+1}$. By $\mathsf{CAC^{\aleph_{\alpha}}_{1}}$, $A$ has a chain of size at least $\aleph_{\alpha+1}$, say $C$. Let $M =\{m \in \aleph_{\alpha+1} : C \cap A_{m}\not= \emptyset\}$. Since $C$ is a chain and each $A_{m}$ is an antichain of $(A,\leq)$, we have $M =\{m \in \aleph_{\alpha+1} : \vert C \cap A_{m}\vert =1\}$. Clearly, $f =\{ (m,c_{m}) : m \in M\}$, where for $m \in M$, $c_{m}$ is the unique element of $C \cap A_{m}$, is a choice function of the subset $\mathcal{B}=\{A_{m} : m \in M\}$ of $\mathcal{A}$ of size $\aleph_{\alpha+1}$. Thus $\mathcal{B}$ is an $\aleph_{\alpha+1}$-sized subfamily of $\mathcal{A}$ with a choice function. Consequently, $\mathsf{CAC^{\aleph_{\alpha}}_{1}}$ implies $\mathsf{PAC^{\aleph_{\alpha+1}}_{fin}}$ in $\mathsf{ZF}$.
In order to prove that $\mathsf{DC_{<\aleph_{\alpha+1}}}$ does not imply $\mathsf{CAC^{\aleph_{\alpha}}_{1}}$, we refer the reader to Jech \cite[\textbf{Theorem 8.3}]{Jec1973} by noting that $\aleph_{\alpha}$ therein can be
replaced by $\aleph_{\alpha+1}$ since we assumed that $\aleph_{\alpha+1}$ is a regular aleph. We can see that
$\mathsf{PAC^{\aleph_{\alpha+1}}_{fin}}$ fails in the modified model.
\end{proof}
\begin{thm}
{\em Fix a natural number $n$ such that $n \geq 4$. Let $\aleph_{\alpha}$ and $\aleph_{\alpha+1}$ be regular alephs. Then there is a model $\mathcal{M}$ of $\mathsf{ZFA}$ where the following hold.
\begin{enumerate}
\item If $X\in \mathsf{\{LOC^{-}_{2}, MC}\}$, then $X$ holds and $\mathsf{LOC_{n}^{-}}$ fails.
\item If $X\in \{`\forall$ infinite $\mathfrak{m}(2\mathfrak{m} = \mathfrak{m})$', $\mathsf{ISAE}, \mathsf{EPWFP}, \mathsf{DF = F}\}$, then $X$ fails.
\item $\mathsf{CAC^{\aleph_{\alpha}}_{1}}$ fails.
\end{enumerate}
}
\end{thm}
\begin{proof}
We divide into two cases.
\textbf{Case (1):} Let $n$ be a natural number such that $n > 4$. Consider the permutation model $\mathcal{M}_{\kappa,n}$ from \textbf{Proposition 5.2(2)}, letting the infinite well-ordered cardinal number $\kappa$ be $\aleph_{\alpha+1}$.
In \cite[\textbf{Theorem 5.3}]{Ban2}, we observed that if $X\in \mathsf{\{LOC^{-}_{2}, MC}\}$, then $X$ holds in $\mathcal{M}_{\aleph_{\alpha+1},n}$, while $\mathsf{LOC_{n}^{-}}$ fails in $\mathcal{M}_{\aleph_{\alpha+1},n}$. We can see that the well-ordered family $\mathcal{A} = \{A_{\beta} : \beta< \aleph_{\alpha+1}\}$ of $n$-element sets does not have a partial choice function in the model. Thus $\mathsf{PAC^{\aleph_{\alpha+1}}_{fin}}$ fails, and hence $\mathsf{CAC^{\aleph_{\alpha}}_{1}}$ fails in the model by \textbf{Proposition 5.3(3)}.
\begin{claim}
{\em If $X\in \{`\forall$ infinite $\mathfrak{m}(2\mathfrak{m} = \mathfrak{m})$', $\mathsf{ISAE}, \mathsf{EPWFP}, \mathsf{DF = F}\}$, then $X$ fails in $\mathcal{M}_{\aleph_{\alpha+1},n}$.}
\end{claim}
\begin{proof}
We show that (Sym($A$))$^{\mathcal{M}_{\aleph_{\alpha+1},n}}$ = FSym($A$). The rest follows from \textbf{Lemma 2.10}. For the sake of contradiction, assume that
$f$ is a permutation of $A$ in $\mathcal{M}_{\aleph_{\alpha+1},n}$, which moves infinitely many atoms.
Let $E \subset A$ be a finite support of $f$, and without loss of generality assume that $E = \bigcup_{i=0}^{k} A_{i}$ for some $k\in\omega$. Then there exist $i \in \aleph_{\alpha+1}$ with $i>k$, $a \in A_{i}$, and $b \in A\backslash (E \cup \{a\})$ such that $b = f(a)$.
\textbf{Case (i):}
Let $b \in A_{i}$, and let $c,d\in A_{i}\backslash \{a,b\}$. Consider the $\phi\in\mathcal{G}$ with $\phi\restriction A_{i} = (a,c,d)=(a,d)(a,c)$, a member of the alternating group on $A_{i}$, and $\phi\restriction (A\backslash A_{i}) = 1_{A\backslash A_{i}}$. Clearly, $\phi$ moves only finitely many atoms. Also, $\phi(b)=b$, $\phi\in$ fix$_{\mathcal{G}}(E)$, and hence $\phi(f) = f$. Thus
$(a,b)\in f\implies (\phi(a), \phi(b))\in \phi(f)\implies (c, b)\in \phi(f)=f$.
So $f$ is not injective; a contradiction.
\textbf{Case (ii):} If $b \in A \backslash (E \cup A_{i})$, then let $x,y \in A_{i}\backslash \{a\}$ be distinct, and let $\phi\restriction A_{i} = (a, x,y)$ and $\phi\restriction (A\backslash A_{i}) = 1_{A\backslash A_{i}}$. Again $\phi$ moves only finitely many atoms and $\phi\in\mathcal{G}$. Then again we easily obtain a contradiction.
\end{proof}
\textbf{Case (2):} Let $n=4$. Consider the permutation model $\mathcal{M}_{\kappa,4}$ from \textbf{Proposition 5.2(1)}, letting the infinite well-ordered cardinal $\kappa$ be $\aleph_{\alpha+1}$. In $\mathcal{M}_{\aleph_{\alpha+1},4}$, $\mathsf{LOC_{2}^{-}}$ holds (cf. proof of the second assertion of \cite[\textbf{Theorem 10(ii)}]{HT2020}). We note that $\mathsf{MC}$ is true in $\mathcal{M}_{\aleph_{\alpha+1},4}$.
The proof is similar to the one that $\mathsf{MC}$ holds in the Second Fraenkel Model (cf. \cite{Jec1973}).
Following the arguments in the proof of Case (1), we can see that if $X\in \{`\forall$ infinite $\mathfrak{m}(2\mathfrak{m} = \mathfrak{m})$', $\mathsf{ISAE}, \mathsf{EPWFP}, \mathsf{DF = F}, \mathsf{CAC^{\aleph_{\alpha}}_{1}}\}$, then $X$ fails in $\mathcal{M}_{\aleph_{\alpha+1},4}$.
\end{proof}
\begin{thm}
{\em $(\mathsf{LOC_{2}^{-} + MC})$ does not imply $\mathsf{PUU}$ in $\mathsf{ZFA}$.}
\end{thm}
\begin{proof}
Let $n> 4$. Consider the permutation model $\mathcal{M}_{\aleph_{1},n}$ where $\mathsf{AC^{\aleph_{1}}_{fin}}$ fails and $(\mathsf{LOC_{2}^{-} + MC})$ holds. Since $\mathsf{BPI(\aleph_{1})}$ holds in any permutation model (cf. \textbf{Lemma 2.4(3)}), it holds in $\mathcal{M}_{\aleph_{1},n}$. Thus $\mathsf{PUU}$ fails in $\mathcal{M}_{\aleph_{1},n}$ by \textbf{Lemma 2.12}.
\end{proof}
\begin{thm}
($\mathsf{ZF}$)
{\em Let $\aleph_{\alpha+1}$ be a successor aleph. Then there is a model of $\mathsf{ZF}$ where $\mathsf{DC_{<\aleph_{\alpha+1}}}$ and $\mathsf{WOC_{2}^{-}}$ hold but $\mathsf{CAC^{\aleph_{\alpha}}_{1}}$ and $\mathsf{EPWFP}$ fail.
}
\end{thm}
\begin{proof}
Fix $n=4$. First, we exhibit a new permutation model $\mathcal{V}$ to establish the result in $\mathsf{ZFA}$, and then transfer it to $\mathsf{ZF}$ via \textbf{Theorem 2.5}.
We start with a ground model $M$ of $\mathsf{ZFA+AC}$ where $A$ is an $\aleph_{\alpha+1}$-sized set of atoms written as a disjoint union $A=\bigcup\{A_{\beta}:\beta< \aleph_{\alpha+1}\}$, where $A_{\beta}=\{a_{\beta,1},a_{\beta,2},...,a_{\beta,n}\}$ (so $\vert A_{\beta}\vert=n$) for all $\beta<\aleph_{\alpha+1}$. Let $Alt(A_{\beta})$ be the alternating group on $A_{\beta}$ for each $\beta<\aleph_{\alpha+1}$. Let $\mathcal{G}$ be the group of permutations $\eta$ of $A$ such that for every $\beta<\aleph_{\alpha+1}$, $\eta\restriction A_{\beta} \in Alt(A_{\beta})$. Let $\mathcal{F}$ be the normal filter of subgroups of $\mathcal{G}$ generated by $\{$fix$_{\mathcal{G}}(E) : E \in [A]^{< \aleph_{\alpha+1}}\}$. Consider the permutation model $\mathcal{V}$ determined by $M$, $\mathcal{G}$, and $\mathcal{F}$. We observe the following.
\begin{enumerate}
\item In $\mathcal{V}$, $\mathsf{DC_{<\aleph_{\alpha+1}}}$ holds by a standard argument since the ideal $[A]^{<\aleph_{\alpha+1}}$ of supports is closed under unions of fewer than $\aleph_{\alpha+1}$ of its members (cf. \cite[the arguments in the proof of \textbf{Theorem 8.3 (i)}]{Jec1973}).
\item Following the arguments in the proof of \cite[\textbf{Theorem 10(ii)}]{HT2020} we can see that $\mathsf{WOC_{2}^{-}}$ holds in $\mathcal{V}$.
\item We can see that in $\mathcal{V}$, the well-ordered family $\mathcal{A} = \{A_{\beta} : \beta< \aleph_{\alpha+1}\}$ of $n$-element sets does not have any $\aleph_{\alpha+1}$-sized subfamily $\mathcal{B}$ with a choice function.\footnote{ For the reader's convenience, we write down the proof. For the sake of contradiction, let $\mathcal{B}$ be an $\aleph_{\alpha+1}$-sized subfamily of $\mathcal{A}$ with a choice function $f \in \mathcal{V}$. Let $E\in [A]^{<\aleph_{\alpha+1}}$ be a support of $f$. Since $\vert E\vert<\aleph_{\alpha+1}$, there is an $i<\aleph_{\alpha+1}$ such that $A_{i} \in \mathcal{B}$ and $A_{i}\cap E = \emptyset$. Without loss of generality, let $f(A_{i})=a_{i_{1}}$. Consider the permutation $\pi$ which is the identity on $A_{j}$ for all $j \in \aleph_{\alpha+1}\backslash \{i\}$, and let $(\pi \restriction A_{i})(a_{i_{1}})=a_{i_{2}}\not=a_{i_{1}}$. Then $\pi$ fixes $E$ pointwise, hence $\pi(f) = f$. So $f(A_{i})=a_{i_{2}}$, which contradicts the fact that $f$ is a function.} Thus $\mathsf{PAC^{\aleph_{\alpha+1}}_{fin}}$ fails in the model.
\item Similarly to the proof of \textbf{claim 5.5} one may verify that if $f$ is a permutation of $A$ in $\mathcal{V}$, then the set $\{x \in A : f(x) \neq x\}$ has cardinality at most $\aleph_{\alpha}$. Since $A$ has size $\aleph_{\alpha+1}$, it follows that `for any set $X$ of size $\aleph_{\alpha+1}$, Sym(X) $\neq$ $\aleph_{\alpha}$Sym(X)' fails in $\mathcal{V}$. Consequently, if $X\in \{`\forall$ infinite $\mathfrak{m}(2\mathfrak{m} = \mathfrak{m})$', $\mathsf{ISAE},\mathsf{EPWFP},\mathsf{W_{\aleph_{\alpha+1}}}\}$, then $X$ fails in $\mathcal{V}$ by \textbf{Proposition 4.1}.
\end{enumerate}
Next, we can see that $\mathsf{WOC_{2}^{-}}$, $\mathsf{DC_{<\aleph_{\alpha+1}}}$, $\mathsf{\neg EPWFP}$, and $\mathsf{\neg PAC_{fin}^{\aleph_{\alpha+1}}}$ are injectively boundable statements. Since $\Phi$ := $(\mathsf{WOC_{2}^{-}}\land \mathsf{DC_{<\aleph_{\alpha+1}}} \land \neg\mathsf{EPWFP} \land \neg \mathsf{PAC_{fin}^{\aleph_{\alpha+1}}})$ is a conjunction of injectively boundable statements and has a $\mathsf{ZFA}$ model, it follows by \textbf{Theorem 2.5} that $\Phi$ has a $\mathsf{ZF}$ model $\mathcal{N}$. Since $\mathsf{CAC^{\aleph_{\alpha}}_{1}}$ implies $\mathsf{PAC^{\aleph_{\alpha+1}}_{fin}}$ in $\mathsf{ZF}$ (cf. \textbf{Proposition 5.3(3)}), $\mathsf{CAC^{\aleph_{\alpha}}_{1}}$ fails in $\mathcal{N}$. Thus, $\mathsf{CAC^{\aleph_{\alpha}}_{1}}$ and $\mathsf{EPWFP}$ fail but $\mathsf{DC_{<\aleph_{\alpha+1}}}$ and $\mathsf{WOC_{2}^{-}}$ hold in $\mathcal{N}$.
\end{proof}
\begin{remark}
Let $M$ and $A$ be as above. Let $\mathcal{G}'$ be the group of permutations $\eta$ of $A$ such that for every $\beta<\aleph_{\alpha+1}$, $\eta\restriction A_{\beta} \in Alt(A_{\beta})$ and $\eta$ moves at most $\aleph_{\alpha}$ atoms, and let $\mathcal{F}'$ be the normal
filter of subgroups of $\mathcal{G}'$ generated by $\{$fix$_{\mathcal{G'}}(E) : E \in [A]^{< \aleph_{\alpha+1}}\}$. Consider the permutation model $\mathcal{V}'$ determined by $M$, $\mathcal{G}'$, and $\mathcal{F}'$. Following the arguments of \cite[\textbf{Claim 3.6}]{Tac2019} due to Tachtsis, we can prove $\mathcal{V}=\mathcal{V}'$.
\end{remark}
\section{Van Douwen’s Choice Principle in two permutation models}
\begin{prop} The following hold.
\begin{enumerate}
\item The statement $\mathsf{vDCP\land UT(\aleph_{0}, \aleph_{0}, cuf) \land \neg M(IC, DI)}$ has a permutation model.
\item The statement $\mathsf{vDCP\land \neg MC(\aleph_{0}, \aleph_{0})}$ has a permutation model.
\end{enumerate}
\end{prop}
\begin{proof}(1) We recall the permutation model $\mathcal{N}$ which was constructed in \cite[proof of \textbf{Theorem 3.3}]{CHHKR2008} where $\mathsf{UT(\aleph_{0}, \aleph_{0}, cuf)}$ holds. In order to describe $\mathcal{N}$, we start with a model $M$ of $\mathsf{ZFA + AC}$ with a set $A$ of atoms such that $A$ has a denumerable partition $\{A_{i} : i \in \omega\}$ into denumerable sets, and for each $i \in \omega$, $A_{i}$ has a denumerable partition $P_{i} = \{A_{i,j} : j \in \mathbb{N}\}$ into finite sets such that, for every $j \in \mathbb{N}$, $\vert A_{i,j}\vert = j$. Let $\mathcal{G} = \{\phi \in Sym(A) : (\forall i \in \omega)(\phi(A_{i}) = A_{i})$ and $\vert\{x \in A : \phi(x) \neq x\}\vert < \aleph_{0}\}$, where $Sym(A)$ is the group of all permutations of $A$. Let $\textbf{P}_{i} = \{\phi(P_{i}) : \phi \in \mathcal{G}\}$ for each $i \in \omega$ and let $\textbf{P} = \bigcup\{\textbf{P}_{i} : i \in \omega\}$. Let $\mathcal{F}$ be the normal filter of subgroups of $\mathcal{G}$ generated by the filter base $\{$fix$_{\mathcal{G}}(E) : E \in [\textbf{P}]^{<\omega}\}$. Then $\mathcal{N}$ is the permutation model determined by $M$, $\mathcal{G}$ and $\mathcal{F}$. Keremedis, Tachtsis, and Wajch proved that $\mathsf{M(IC, DI)}$ fails in $\mathcal{N}$ (cf. \cite[proof of \textbf{Theorem 13(i)}]{KTW2021}). We follow steps (1), (2) and (4) from the proof of \cite[\textbf{Lemma 5.1}]{Ban2} to see that $\mathsf{vDCP}$ holds in $\mathcal{N}$. For the sake of convenience, we write down the proof.
\begin{lem}
{\em If $(X,\leq)$ is a poset in $\mathcal{N}$, then $X$ can be written as a well-ordered disjoint union $\bigcup\{W_{\alpha} : \alpha<\kappa\}$ of antichains.}
\end{lem}
\begin{proof}
Let $(X,\leq)$ be a poset in $\mathcal{N}$ and $E\in [\textbf{P}]^{<\omega}$ be a support of $(X,\leq)$. We can write $X$ as a disjoint union of fix$_{\mathcal{G}}(E)$-orbits, i.e., $X=\bigcup \{Orb_{E}(p):p\in X\}$, where $Orb_{E}(p)=\{\phi(p): \phi\in$ fix$_{\mathcal{G}}(E)\}$ for all $p \in X$. The family $\{Orb_{E}(p): p \in X\}$ is well-orderable in $\mathcal{N}$ since fix$_{\mathcal{G}}(E) \subseteq Sym_{\mathcal{G}}(Orb_{E}(p))$ for all $p \in X$ (cf. the arguments of \cite[\textbf{claim 4}]{Tac2016a}). We prove that $Orb_{E}(p)$ is an antichain in $(X,\leq)$ for each $p\in X$ following the arguments of \cite[\textbf{claim 3}]{Tac2016a}. For the sake of contradiction, suppose there is a $p\in X$ such that $Orb_{E}(p)$ is not an antichain in $(X,\leq)$. Thus, for some $\phi,\psi\in$ fix$_{\mathcal{G}}(E)$, $\phi(p)$ and $\psi(p)$ are comparable. Without loss of generality we may assume $\phi(p)<\psi(p)$. Let $\pi=\psi^{-1}\phi$. Consequently, $\pi(p)<p$. Now each $\eta\in \mathcal{G}$ moves only finitely many atoms by the definition of $\mathcal{G}$. So $\pi^{k}=1$ for some $0<k<\omega$. Thus, $p=\pi^{k}(p)<\pi^{k-1}(p)<...<\pi(p)<p$. By transitivity of $<$, $p<p$, which is a contradiction.
\end{proof}
We recall the arguments from the first paragraph of \cite[\textbf{p.175}]{HST2016} to give a proof of $\mathsf{vDCP}$ in $\mathcal{N}$.
Let $\mathcal{A} = \{(A_{i}, \leq_{i}) : i \in I\}$ be a family as in $\mathsf{vDCP}$. Without loss of generality, we assume that $\mathcal{A}$ is pairwise disjoint. Let $R = \bigcup \mathcal{A}$. We partially order $R$ by requiring $x \prec y$ if and only if there exists an index $i \in I$ such that
$x, y \in A_{i}$ and $x \leq_{i} y$. By \textbf{Lemma 6.2}, $R$ can be written as a well-ordered disjoint
union $\bigcup\{W_{\alpha} : \alpha<\kappa\}$ of antichains. For each $i \in I$, let $\alpha_{i} = min\{\alpha \in \kappa : A_{i} \cap W_{\alpha} \neq \emptyset\}$. Since for all $i \in I$, $A_{i}$ is linearly ordered, it follows that $A_{i} \cap W_{\alpha_{i}}$ is a singleton for each $i \in I$. Consequently, $f = \{(i,\bigcup(A_{i} \cap W_{\alpha_{i}})) : i \in I\}$ is a choice function of $\mathcal{A}$. Thus, $\mathsf{vDCP}$ holds in $\mathcal{N}$.
(2). We recall the permutation model (say $\mathcal{M}$) which was constructed in \cite[proof of \textbf{Theorem 3.4}]{HT2021}. In order to describe $\mathcal{M}$, we
start with a model $M$ of $\mathsf{ZFA + AC}$ with a denumerable set $A$ of atoms which is written as a disjoint union
$\bigcup\{A_{n} : n \in \omega\}$, where $\vert A_{n}\vert = \aleph_{0}$ for all $n \in \omega$.
For each $n \in \omega$, we let $\mathcal{G}_{n}$ be the group of all permutations of $A_{n}$ which move only finitely
many elements of $A_{n}$. Let $\mathcal{G}$ be the weak direct product of the $\mathcal{G}_{n}$’s for $n \in \omega$.
Consequently, every permutation of $A$ in $\mathcal{G}$ moves only finitely many atoms.
Let $\mathcal{I}$ be the normal ideal of subsets of $A$ which is generated by finite unions of $A_{n}$’s. Let $\mathcal{F}$ be the normal filter on $\mathcal{G}$ generated by the subgroups fix$_{\mathcal{G}}(E)$, $E \in \mathcal{I}$.
Let $\mathcal{M}$ be the Fraenkel–Mostowski model, which is determined by $M$, $\mathcal{G}$, and $\mathcal{F}$.
Howard and Tachtsis proved that $\mathsf{MC(\aleph_{0}, \aleph_{0})}$ fails in $\mathcal{M}$ (cf. \cite[proof of \textbf{Theorem 3.4}]{HT2021}). Since every permutation of $A$ in $\mathcal{G}$ moves only finitely many atoms, following the arguments in the proof of (1), $\mathsf{vDCP}$ holds in $\mathcal{M}$.
\end{proof}
\begin{remark}
In every Fraenkel--Mostowski permutation model, $\mathsf{CS}$ (Every poset without a maximal element has two disjoint cofinal subsets) implies $\mathsf{vDCP}$ (cf. \cite[\textbf{Theorem 3.15(3)}]{HST2016}). We can also see that in the above-mentioned permutation models (i.e., $\mathcal{N}$ and $\mathcal{M}$), $\mathsf{CS}$ and $\mathsf{CWF}$ (Every poset has a cofinal well-founded subset) hold, applying \textbf{Lemma 6.2} and following the methods of \cite[\textbf{Theorem 3.26}]{HST2016} and \cite[proof of \textbf{Theorem 10 (ii)}]{Tac2018}.
\end{remark}
\section{Spanning subgraphs and weak choice forms}
\begin{prop} ($\mathsf{ZF}$) {\em The following hold.
\begin{enumerate}
\item $\mathsf{AC_{\leq n-1}^{\omega}} + \mathcal{Q}^{n}_{lf,c}$
is equivalent to $\mathsf{AC_{fin}^{\omega}}$ for any $2< n\in \omega$.
\item $\mathsf{UT(WO,WO,WO)}$ implies $\mathsf{AC_{\leq n-1}^{WO}} + \mathcal{Q}_{lw,c}^{n,k}$ and the latter implies $\mathsf{AC_{WO}^{WO}}$ for any $2< n,k\in \omega$.
\item $\mathcal{P}_{lf,c}^{m}$ is equivalent to $\mathsf{AC_{fin}^{\omega}}$ for any even integer $m\geq 4$.
\item $\mathcal{Q}^{n}_{lf,c}$ fails in $\mathcal{N}_{6}$.
\end{enumerate}}
\end{prop}
\begin{proof}
(1). ($\Leftarrow$) We assume $\mathsf{AC_{fin}^{\omega}}$. Fix any $2< n\in \omega$. We know that $\mathsf{AC_{fin}^{\omega}}$ implies $\mathsf{AC_{\leq n-1}^{\omega}}$ in $\mathsf{ZF}$ and claim that $\mathsf{AC_{fin}^{\omega}}$ implies $\mathcal{Q}^{n}_{lf,c}$ in $\mathsf{ZF}$. We recall the fact that $\mathsf{AC_{fin}^{\omega}}$ implies the statement {\em `Every infinite locally finite connected graph is countably infinite'}: let $G = (V_{G}, E_{G})$ be an infinite locally finite connected graph. Consider some $r \in V_{G}$. Let $V_{0}=\{r\}$. For each integer $n \geq 1$, define $V_{n} = \{v \in V_{G} : d_{G}(r, v) = n\}$, where `$d_{G}(r, v) = n$' means there are $n$ edges in the shortest path joining $r$ and $v$. Each $V_{n}$ is finite by the local finiteness of $G$, and $V_{G} = \bigcup_{n\in \omega}V_{n}$ by the connectedness of $G$. By $\mathsf{UT(\aleph_{0},fin,\aleph_{0})}$ (which is equivalent to $\mathsf{AC_{fin}^{\omega}}$ (cf. $\S$ 1.5.1)), $V_{G}$ is countable. Consequently, $V_{G}$ is well-orderable. The rest follows from the facts that every well-ordered graph has a spanning tree in $\mathsf{ZF}$, and any spanning tree is a spanning subgraph omitting $K_{2,n}$.
($\Rightarrow$) Fix any $2< n\in \omega$. We show that $\mathsf{AC_{\leq n-1}^{\omega}} + \mathcal{Q}^{n}_{lf,c}$ implies $\mathsf{AC_{fin}^{\omega}}$ in $\mathsf{ZF}$.
Let $\mathcal{A}=\{A_{i}:i\in \omega\}$ be a countably infinite set of non-empty finite sets. Without loss of generality, we assume that $\mathcal{A}$ is pairwise disjoint. Let $A=\bigcup_{i\in \omega} A_{i}$. Consider a countably infinite family $(B_{i},<_{i})_{i\in \omega}$ of well-ordered sets such that $\vert B_{i}\vert=\vert A_{i}\vert + k$ for some fixed $1\leq k\in \omega$, for each $i\in \omega$, $B_{i}$ is disjoint from $A$ and the other $B_{j}$’s, and there is no mapping with domain $A_{i}$ and range $B_{i}$ (cf. \cite[\textbf{Theorem 1}, \textbf{Remark 6}]{DM2006}).
Let $B=\bigcup_{i\in \omega} B_{i}$. Consider another countably infinite sequence $T=\{t_{i}:i\in \omega\}$ disjoint from $A$ and $B$. We construct a graph $G_{1}=(V_{G_{1}}, E_{G_{1}})$.
{\bf{\underline{Constructing $G_{1}$}:}}
Let $V_{G_{1}} = A\cup B\cup T$. For each $i \in\omega$, let $\{t_{i},t_{i+1}\}\in E_{G_{1}}$ and $\{t_{i},x\}\in E_{G_{1}}$ for every element $x\in A_{i}$. Also for each $i \in\omega$, join each $x\in A_{i}$ to every element of $B_{i}$.
Clearly, the graph $G_{1}$ is connected and locally finite. By assumption, $G_{1}$ has a spanning subgraph $G'_{1}$ omitting $K_{2,n}$.
For each $i \in \omega$, let $f_{i}: B_{i} \rightarrow \mathcal{P}(A_{i})\setminus\{\emptyset\}$ map each element of $B_{i}$ to its neighbourhood in $G'_{1}$. We can see that for any two distinct $\epsilon_{1}$ and $\epsilon_{2}$ in $B_{i}$, $f_{i}(\epsilon_{1})\cap f_{i}(\epsilon_{2})$ has at most $n-1$
elements, since $G'_{1}$ has no $K_{2,n}$. By \textbf{Lemma 2.9(1)}, there are tuples $(\epsilon'_{1}, \epsilon'_{2})\in B_{i}\times B_{i}$ such that $f_{i}(\epsilon'_{1})\cap f_{i}(\epsilon'_{2})\neq \emptyset$. Consider the first such tuple $(\epsilon''_{1},\epsilon''_{2})$ with respect to the well-ordering on $B_{i}\times B_{i}$.
Let $A'_{i}=f_{i}(\epsilon''_{1})\cap f_{i}(\epsilon''_{2})$.
By $\mathsf{AC_{\leq n-1}^{\omega}}$, we can obtain a choice function of $\mathcal{A}'=\{A'_{i}:i\in \omega\}$, which is a choice function of $\mathcal{A}$.
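To make the construction of $G_{1}$ in part (1) concrete, the following Python sketch (illustrative only; all set names are hypothetical toy data, and it plays no role in the $\mathsf{ZF}$ argument) builds a finite truncation of $G_{1}$ with three families $A_{i}$, companion sets $B_{i}$ with $\vert B_{i}\vert = \vert A_{i}\vert + 1$ (the case $k=1$), and the spine $T$, and verifies by breadth-first search that the resulting graph is connected.

```python
from collections import deque

# Toy finite truncation of G_1 (hypothetical data): families A_i, companion
# sets B_i with |B_i| = |A_i| + 1, and the spine T = {t_0, t_1, t_2}.
A = {0: {"a00", "a01"}, 1: {"a10"}, 2: {"a20", "a21", "a22"}}
B = {0: {"b00", "b01", "b02"}, 1: {"b10", "b11"}, 2: {"b20", "b21", "b22", "b23"}}
T = {i: f"t{i}" for i in A}

adj = {}
def add_edge(u, v):
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

for i in A:
    if i + 1 in A:
        add_edge(T[i], T[i + 1])       # spine edge {t_i, t_{i+1}}
    for x in A[i]:
        add_edge(T[i], x)              # edge {t_i, x} for every x in A_i
        for b in B[i]:
            add_edge(x, b)             # join each x in A_i to all of B_i

# Breadth-first search from t_0: the truncation is connected.
seen, queue = {T[0]}, deque([T[0]])
while queue:
    for v in adj[queue.popleft()] - seen:
        seen.add(v)
        queue.append(v)

assert seen == set(adj)                # connected
assert adj["b00"] == A[0]              # each b in B_0 is joined exactly to A_0
```

Local finiteness is automatic in the truncation; in the infinite graph it holds because each vertex meets only the finitely many members of its own $A_{i}$, $B_{i}$, and spine neighbours.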
(2). For the first implication, we know that $\mathsf{UT(WO,WO,WO)}$ implies $\mathsf{AC_{\leq n-1}^{WO}}$ as well as the statement {\em `Every locally well-orderable connected graph is well-orderable'} in $\mathsf{ZF}$. The rest follows from the fact that every well-ordered graph has a spanning tree in $\mathsf{ZF}$.
We show that $\mathsf{AC_{\leq n-1}^{WO}}$ + $\mathcal{Q}_{lw,c}^{n,k}$ implies $\mathsf{AC_{WO}^{WO}}$. Let $\mathcal{A}=\{A_{n}:n\in \kappa\}$ be a well-orderable set of non-empty well-orderable sets. Without loss of generality, we assume that $\mathcal{A}$ is pairwise disjoint. Let $A=\bigcup_{i\in \kappa} A_{i}$. Consider an infinite well-orderable family $(B_{i},<_{i})_{i\in \kappa}$ of well-orderable sets such that for each $i\in \kappa$, $B_{i}$ is disjoint from $A$ and the other $B_{j}$’s, and there is no mapping with domain $A_{i}$ and range $B_{i}$ (cf. \cite[\textbf{Theorem 1}, \textbf{Remark 6}]{DM2006}). Let $B=\bigcup_{i\in \kappa} B_{i}$. Consider another $\kappa$-sequence $T=\{t_{n}:n\in \kappa\}$ disjoint from $A$ and $B$.
{\bf{\underline{Constructing $G_{2}$}:}} Let $V_{G_{2}} = A\cup B\cup T$. For each $n \in\kappa$, let $\{t_{n},t_{n+1}\}\in E_{G_{2}}$ and $\{t_{n},x\}\in E_{G_{2}}$ for every element $x\in A_{n}$. Also for each $n \in\kappa$, join each $x\in A_{n}$ to every element of $B_{n}$.
Clearly, the graph $G_{2}$ is connected and locally well-orderable. By assumption, $G_{2}$ has a spanning subgraph $G'_{2}$ omitting $K_{k,n}$. For each $i \in \kappa$, let $f_{i}: B_{i} \rightarrow \mathcal{P}(A_{i})\setminus\{\emptyset\}$ map each element of $B_{i}$ to its neighbourhood in $G'_{2}$. We can see that for any finite $k$-subset $H_{i}\subseteq B_{i}$, $\bigcap_{\epsilon\in H_{i}} f_{i}(\epsilon)$ has at most $n-1$ elements, since $G'_{2}$ has no $K_{k,n}$. Since each $B_{i}$ is infinite and well-orderable, by \textbf{Lemma 2.9(2)}, there are tuples $(\epsilon_{1}, \epsilon_{2},\ldots,\epsilon_{k})\in B_{i}^{k}$ such that $\bigcap_{1\leq j\leq k}f_{i}(\epsilon_{j})\neq \emptyset$. Consider the first such tuple $(\epsilon_{1}, \epsilon_{2},\ldots,\epsilon_{k})$ with respect to the well-ordering on $B_{i}^{k}$. Let $A'_{i}=\bigcap_{1\leq j\leq k}f_{i}(\epsilon_{j})$. By $\mathsf{AC_{\leq n-1}^{WO}}$, we can obtain a choice function of $\mathcal{A}'=\{A'_{n}:n\in \kappa\}$, which is a choice function of $\mathcal{A}$.
(3).
($\Rightarrow$) Fix any even integer $m=2(k+1) \geq 4$. We prove that $\mathcal{P}_{lf,c}^{m}$ implies $\mathsf{AC_{fin}^{\omega}}$.
Let $\mathcal{A}=\{A_{i}:i\in\omega\}$ be a countably infinite set of non-empty finite sets and $A=\bigcup_{i\in \omega} A_{i}$.
Let $G_{3} := (\bigcup_{i\in \omega} \bigcup_{x\in A_{i}} \{\{r_{i},(x, 1)\}, \{(x, 1),(x, 2)\},\ldots,\{(x, k-1),(x, k)\}, \{(x, k),t_{i}\}\}) \cup (\bigcup_{i\in \omega}\{\{r_{i},r_{i+1}\}\})$, where the $t_{i}$’s are pair-wise distinct and belong to no $A_{j} \times \{1,\ldots, k\}$,
and the $r_{i}$'s are pair-wise distinct and belong to no $(A_{j} \times \{1,\ldots, k\})\cup \{t_{j}\}$ for any $i,j\in\omega$.
Clearly, $G_{3}$ is locally finite and connected. By assumption, $G_{3}$ has a spanning $m$-bush $\zeta$. We can see that $\zeta$ generates a choice function of $\mathcal{A}$: for each $i \in \omega$, there is a unique $x \in A_{i}$, say $x_{i}$, such that $(t_{i}, (x_{i},k),\ldots, (x_{i}, 1), r_{i})$ is a path in $\zeta$.
($\Leftarrow$) Fix any even integer $m\geq 4$. We prove that $\mathsf{AC_{fin}^{\omega}}$ implies $\mathcal{P}_{lf,c}^{m}$. We know that $\mathsf{AC_{fin}^{\omega}}$ implies the statement {\em `Every infinite locally finite connected graph is countably infinite'} in $\mathsf{ZF}$. The rest follows from the fact that every well-ordered graph has a spanning tree in $\mathsf{ZF}$ and any spanning tree is a spanning $m$-bush.
(4). In $\mathcal{N}_{6}$, $\mathsf{AC_{fin}^{\omega}}$ fails, whereas $\mathsf{AC_{\leq n-1}^{\omega}}$ holds for any natural number $n> 2$. By \textbf{Proposition 7.1(1)}, $\mathcal{Q}^{n}_{lf,c}$ fails in the model.
\end{proof}
We recall the definition of $P'_{G}$ for a graph $G$ from $\S 1.5.1$.
\begin{prop}($\mathsf{ZF}$) Fix any $2< k\in \omega$ and any $2\leq p,q< \omega$.
\begin{enumerate}
\item If each $A_{i}$ is $K_{k}$, then $\mathsf{AC_{k^{k-2}}}$ implies {\em `Every graph from the class $P'_{K_{k}}$ has a spanning tree}'.
\item If each $A_{i}$ is $C_{k}$, then $\mathsf{AC_{k}}$ implies {\em `Every graph from the class $P'_{C_{k}}$ has a spanning tree}'.
\item If each $A_{i}$ is $K_{p,q}$, then ($\mathsf{AC_{p^{q-1}q^{p-1}}+AC_{p+q}}$) implies {\em `Every graph from the class $P'_{K_{p,q}}$ has a spanning tree}'.
\end{enumerate}
\end{prop}
\begin{proof}
(1). Let $G_{2}=(V_{G_{2}}, E_{G_{2}})$ be a graph from the class $P'_{K_{k}}$. Then there is a $G_{1}\in P_{K_{k}}$ (an infinite graph whose only components are $K_{k}$) such that $V_{G_{2}}=V_{G_{1}}\cup \{t\}$ for some $t\not\in V_{G_{1}}$. Let $\{A_{i}: i\in I\}$ be the components of $G_{1}$. By $\mathsf{AC_{k}}$ (which follows from $\mathsf{AC_{k^{k-2}}}$ (cf. \textbf{Lemma 2.16})), we choose a sequence of vertices $\{a_{i}:i\in I\}$ such that $a_{i}\in A_{i}$ for all $i\in I$. By $\textbf{Lemma 2.14}$, the number of spanning trees of $A_{i}$ is $k^{k-2}$ for any $i\in I$. By $\mathsf{AC_{k^{k-2}}}$, we choose a sequence $\{s_{i}:i\in I\}$ such that $s_{i}$ is a spanning tree of $A_{i}$ for all $i\in I$. Then the graph $\bigcup_{i\in I} (s_{i}\cup \{\{t,a_{i}\}\})$ is a spanning tree of $G_{2}$.
(2). Following the arguments of the proof of (1), we can prove that $\mathsf{AC_{k}}$ implies the statement {\em `Every graph from the class $P'_{C_{k}}$ has a spanning tree'}, since the number of spanning trees of $A_{i}$ is $k$ for any $i\in I$ in $\mathsf{ZF}$.
(3). Following the arguments of the proof of (1), we can prove that $(\mathsf{AC_{p^{q-1}q^{p-1}}+AC_{p+q}})$ implies the statement {\em `Every graph from the class $P'_{K_{p,q}}$ has a spanning tree'} since the number of spanning trees in $A_{i}$ is $p^{q-1}q^{p-1}$ for any $i\in I$ in $\mathsf{ZF}$ (cf. \textbf{Lemma 2.15}).
\end{proof}
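The spanning-tree counts used in the proof above (Cayley's $k^{k-2}$ for $K_{k}$, $k$ for the cycle $C_{k}$, and $p^{q-1}q^{p-1}$ for $K_{p,q}$) can be checked numerically for small cases via Kirchhoff's matrix-tree theorem. The following Python sketch is illustrative only and plays no role in the $\mathsf{ZF}$ arguments.

```python
import numpy as np

def spanning_tree_count(n, edges):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees of a graph
    on vertices 0..n-1 equals any cofactor of its Laplacian matrix."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return round(np.linalg.det(L[1:, 1:]))

k = 5
K_k = [(u, v) for u in range(k) for v in range(u + 1, k)]   # complete graph
C_k = [(i, (i + 1) % k) for i in range(k)]                  # cycle
p, q = 3, 4
K_pq = [(u, p + v) for u in range(p) for v in range(q)]     # complete bipartite

assert spanning_tree_count(k, K_k) == k ** (k - 2)          # Cayley: 125
assert spanning_tree_count(k, C_k) == k                     # delete any one edge: 5
assert spanning_tree_count(p + q, K_pq) == p ** (q - 1) * q ** (p - 1)  # 432
```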
\section{Questions and further studies}
\begin{question}
Which other choice principles hold in $\mathcal{V}_{p}$? In particular, do $\mathsf{CAC}$, the infinite Ramsey's Theorem ($\mathsf{RT}$) \cite[\textbf{Form 17}]{HR1998}, and \textbf{Form 233} hold in $\mathcal{V}_{p}$?
\end{question}
We proved that $\mathsf{CAC_{1}^{\aleph_{0}}}$ and $\mathsf{CAC^{\aleph_{0}}}$ hold in $\mathcal{N}_{1}$ (cf. \cite{Ban2, BG1}). We know that $\mathsf{RT}$ is true in both $\mathcal{N}_{1}$ and Mostowski's linearly ordered model (labeled as Model $\mathcal{N}_{3}$ in \cite{HR1998}) (cf. \cite[\textbf{Theorem 2}]{Bla1977}, \cite[\textbf{Theorem 2.4}]{Tac2016a}). Consequently, $\mathsf{CAC}$ holds in both $\mathcal{N}_{1}$ and $\mathcal{N}_{3}$ (since $\mathsf{RT}$ implies $\mathsf{CAC}$ (cf. \cite[\textbf{Theorem 1.6}]{Tac2016a})).
\begin{question}
Do $\mathsf{CAC_{1}^{\aleph_{0}}}$ and $\mathsf{CAC^{\aleph_{0}}}$ hold in $\mathcal{N}_{3}$?
\end{question}
\begin{question}{(asked by Lajos Soukup)}
What is the relationship between $\mathsf{CAC_{1}^{\aleph_{0}}}$ and $\mathsf{CAC^{\aleph_{0}}}$ in $\mathsf{ZF}$? In particular, are $\mathsf{CAC_{1}^{\aleph_{0}}}$ and $\mathsf{CAC^{\aleph_{0}}}$ equivalent in $\mathsf{ZF}$? If not, is there any model of $\mathsf{ZF}$ where either $\mathsf{CAC_{1}^{\aleph_{0}}}$ holds and $\mathsf{CAC^{\aleph_{0}}}$ fails, or $\mathsf{CAC^{\aleph_{0}}}$ holds and $\mathsf{CAC_{1}^{\aleph_{0}}}$ fails?
\end{question}
Bruce \cite{Bru2016} proved that $\mathsf{UT(WO,WO,WO)}$ holds in $\mathcal{V}_{p}$.
\begin{question}
Do $\mathsf{UT(WO,WO,WO)}$ and $\mathsf{DC}$ hold in $\mathcal{V}^{+}_{p}$?
\end{question}
We know that $\mathsf{DC}$ implies $\mathsf{CAC}$ in $\mathsf{ZF}$.
\begin{question}
Does $\mathsf{DC_{\aleph_{1}}}$ imply both $\mathsf{CAC_{1}^{\aleph_{0}}}$ and $\mathsf{CAC^{\aleph_{0}}}$ in $\mathsf{ZF}$?
\end{question}
We recall that every symmetric extension (symmetric submodel of a forcing extension where $\mathsf{AC}$ can consistently fail) is given by a symmetric system $\langle \mathbb{P}, \mathcal{G}, \mathcal{F}\rangle$, where $\mathbb{P}$ is a forcing notion, $\mathcal{G}$ is a group of permutations of $\mathbb{P}$, and $\mathcal{F}$ is a normal filter of subgroups over $\mathcal{G}$. We recall the definition of Feferman--L\'{e}vy's symmetric extension from Dimitriou's Ph.D. thesis (cf. \cite[\textbf{Chapter 1}, $\S$2]{Dim2011}).
\textbf{Forcing notion $\mathbb{P}_{1}$:} Let $\mathbb{P}_{1} = \{p : \omega \times \omega \rightharpoonup \aleph_{\omega} ; \vert p\vert < \omega$ and $\forall (n, i) \in dom(p), p(n, i) < \omega_{n}\}$ be a forcing notion ordered by reverse inclusion, i.e., $p \leq q$ iff $p \supseteq q$ (We denote by $p:A\rightharpoonup B$ a partial function from $A$ to $B$).
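Purely as a finite illustration (not part of the set-theoretic construction), conditions of $\mathbb{P}_{1}$ and their ordering by reverse inclusion can be modeled in Python as finite partial functions; the genuine cardinal bound $p(n,i)<\omega_{n}$ is caricatured below by a hypothetical finite bound.

```python
# Toy model of conditions in P_1, for illustration only: a condition is a
# finite partial function on omega x omega, here a dict mapping pairs (n, i)
# to values; the bound p(n, i) < omega_n is caricatured as p(n, i) < 10**(n+1).
def is_condition(p):
    return all(v < 10 ** (n + 1) for (n, i), v in p.items())

def leq(p, q):
    """p <= q in the forcing order iff p extends q (reverse inclusion)."""
    return all(key in p and p[key] == val for key, val in q.items())

q = {(0, 0): 3}
p = {(0, 0): 3, (2, 5): 42}
assert is_condition(p) and is_condition(q)
assert leq(p, q) and not leq(q, p)   # the stronger condition carries more information
```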
\textbf{Group of permutations $\mathcal{G}_{1}$ of $\mathbb{P}_{1}$:} Let $\mathcal{G}_{1}$ be the full permutation group of $\omega$. Extend $\mathcal{G}_{1}$ to an automorphism group of $\mathbb{P}_{1}$ by letting an $a \in \mathcal{G}_{1}$ act on a $p \in \mathbb{P}_{1}$ by $a^{*}(p)= \{(n, a(i), \beta) ; (n, i, \beta) \in p\}$. We identify $a^{*}$ with $a \in \mathcal{G}_{1}$. We can see that this is an automorphism group of $\mathbb{P}_{1}$.
\textbf{Normal filter $\mathcal{F}_{1}$ of subgroups over $\mathcal{G}_{1}$}: For every $n \in \omega$ define the following sets.
\begin{equation}
E_{n} =\{p \cap (n \times \omega \times \omega_{n}); p \in \mathbb{P}_{1}\},
\text{fix}_{\mathcal{G}_{1}}E_{n} = \{a \in \mathcal{G}_{1}; \forall p \in E_{n}(a(p) = p)\}.
\end{equation}
We can see that $\mathcal{F}_{1} =\{X \subseteq \mathcal{G}_{1} ; \exists n \in \omega,$ fix$_{\mathcal{G}_{1}}E_{n} \subseteq X\}$ is a normal filter of subgroups over $\mathcal{G}_{1}$.
Feferman--L\'{e}vy's symmetric extension is the symmetric extension obtained by $\langle\mathbb{P}_{1}, \mathcal{G}_{1},\mathcal{F}_{1}\rangle$ where $\mathsf{UT(\aleph_{0}, \aleph_{0}, \aleph_{0})}$ (The union of denumerably many pairwise disjoint denumerable sets is denumerable) fails. It is known that the following statements follow from `$\aleph_{1}$ is regular' as well as from $\mathsf{UT(\aleph_{0}, \aleph_{0}, \aleph_{0})}$ in $\mathsf{ZF}$ (cf. \cite{Ban2,BG1}).
(*): If $P$ is a poset such that the underlying set has a well-ordering and if all antichains in $P$ are finite and all chains in $P$ are countable, then $P$ is countable.
(**): If $P$ is a poset such that the underlying set has a well-ordering and if all antichains in $P$ are countable and all chains in $P$ are finite, then $P$ is countable.
\begin{question}
Is any of (*) and (**) true in Feferman--L\'{e}vy's symmetric extension?
\end{question}
\section{Introduction}
Proton-rich radioactive ion-beam facilities offer the novel possibility
of exploring the structure
of nearly self-conjugate ($N \sim Z$) nuclei in the medium mass range
$Z \lesssim 50$. Special interest will be devoted to understanding
the proton-neutron
(pn) interaction, which
has long been recognized to play
a particularly important role in $N=Z$ nuclei. The pn correlations can either
correspond to isovector ($T=1$) or isoscalar ($T=0$) pairs. Like proton-proton
(pp) and neutron-neutron (nn) pairing, isovector pn correlations in light to
medium mass nuclei are assumed to involve a proton-neutron pair in
time-reversed spatial orbitals, while the isoscalar correlations involve
mainly pairing between identical orbitals and spin-orbit partners \cite{Goodmann}.
The importance of pn correlations in
self-conjugate odd-odd nuclei is evident from the ground state spins
and isospins. In the $sd$-shell, odd-odd $N=Z$ nuclei
(with the exception of $^{34}$Cl)
have ground states with isospin $T=0$ and angular momenta $J > 0$,
pointing to the importance of isoscalar pn pairing between
identical orbitals. In contrast, self-conjugate $N=Z$ nuclei in the
medium mass range ($A > 40$) have ground states with $T=1$
and $J^{\pi}=0^+$ (the only
known exception is $^{58}$Co) indicating the dominance of isovector
pn pairing.
Additional confirmation of the importance of $T=1$ pn-pairing
in medium-mass odd-odd $N=Z$ nuclei is
an experiment that identified the $T=0$ and $T=1$ bands
in $^{74}$Rb \cite{Rudolph}. An isospin $T=1$, arising from isovector
pn correlations, has been assigned
to the ground state rotational band in this nucleus.
At higher rotational frequency, or equivalently higher excitation energy,
a $T=0$ rotational band becomes energetically favored.
The competition of isovector and isoscalar
pn pairing has been extensively studied for $sd$-shell nuclei
\cite{Sandhu} and for nuclei at the beginning of the $pf$-shell \cite{Wolter}
using the Hartree-Fock-Bogoliubov (HFB) formalism.
A major result
\cite{Goodmann} is that $T=0$ pn correlations dominate $T=1$
correlations in the $N=Z$ nuclei studied;
$T=1$ pn pairing
was never found to be important.
As mentioned above,
this is surprising since
the $T=1$ ground state isospin
of most odd-odd $N=Z$ nuclei with $A\ge40$ clearly points to
the importance of $T=1$ pn pairing in these nuclei.
It has been pointed out recently that HFB calculations
for intermediate mass nuclei can exhibit nearly degenerate minima
that may or may not involve
important $T=1$ pn correlations \cite{Faessler}.
However, these studies assume
that the $T=1$ pn pairing strength is larger than the nn and pp
pairing strengths.
Although HFB calculations have already pioneered the study of
pairing in $N=Z$ nuclei,
the method of choice to study pair correlations
is the interacting shell model.
Within the $sd$ shell \cite{Wildenthal} and at the beginning of the
$pf$-shell \cite{McGrory,Caurier}
the interacting shell model has proven to give an excellent
description of all nuclei, including the correct reproduction
of the spin-isospin assignments of self-conjugate $N=Z$ nuclei.
However, the conventional shell model using diagonalization techniques
is currently restricted to nuclei with masses $A \leq48$ due to computational
limitations. These limitations are overcome by
the recently
developed Shell Model Monte Carlo (SMMC)
approach \cite{Johnson,Lang}. Using this novel method,
it has been demonstrated \cite{Langanke1} that complete $pf$ shell
calculations using the modified Kuo-Brown interaction well reproduce the
ground state properties of even-even and $N=Z$ nuclei with $A \leq 60$;
for heavier nuclei an extension of the model space to
include the $g_{9/2}$ orbitals is necessary.
Additionally the SMMC approach naturally allows the study of thermal
properties. First studies have been performed for several
even-even nuclei in the astrophysically interesting mass range $A=54-60$
\cite{Dean,Langanke2}.
In this paper we extend SMMC studies
to detailed calculations of the
$N=Z$ nuclei with $A=50-60$. The studies consider
all configurations within the complete $pf$-shell model space. Special
attention is paid to isovector
and isoscalar pairing correlations in the ground states.
To elucidate the experimentally observed
competition between $T=1$ and $T=0$ correlations as a function of
excitation energy, we have also performed SMMC studies of the thermal
properties of an even-even and an odd-odd $N=Z$ nucleus
($^{52}$Fe and $^{50}$Mn, respectively).
In particular, we discuss the differences in
the thermal behavior of the pair correlations,
and other selected observables, in these nuclei.
\section{Model}
The SMMC approach was developed in Refs.
\cite{Johnson,Lang}, where the reader can find a detailed
description of the ideas underlying the method, its formulation,
and numerical realization. As the present calculations follow
the formalism developed and published previously, a very brief
description of the SMMC approach suffices here.
A comprehensive review of the SMMC method and its applications can be found in
Ref. \cite{report}.
The SMMC method
describes the nucleus by a canonical ensemble at temperature
$T=\beta^{-1}$ and employs a Hubbard-Stratonovich linearization
\cite{Hubbard} of the
imaginary-time many-body propagator, $e^{-\beta H}$, to express
observables as path integrals of one-body propagators in fluctuating
auxiliary fields \cite{Johnson,Lang}. Since Monte Carlo techniques
avoid an explicit enumeration of the many-body states, they can be
used in model spaces far larger than those accessible to conventional
methods. The Monte Carlo results are in principle exact and are in
practice subject only to controllable sampling and discretization
errors.
The notorious ``sign problem'' encountered in the Monte
Carlo shell model calculations with realistic interactions \cite{Alhassid}
can be circumvented
by a procedure suggested in Ref. \cite{Dean},
which is based on an extrapolation from a family of
Hamiltonians that is free of the sign problem
to the physical Hamiltonian.
The numerical details of our calculation parallel those of Refs.
\cite{Langanke1,Dean}.
As we will show below, isovector pair correlations depend strongly
on the neutron excess in $N \sim Z$ nuclei,
so that a proper particle number
projection is indispensable for a meaningful study of these correlations.
We stress that this important requirement is fulfilled
by the present SMMC approach, which uses a {\it canonical} expectation value
for all observables at a given temperature; i.e.,
the proper proton and
neutron numbers of the nuclei are guaranteed by an appropriate number
projection \cite{Lang,report}.
The main focus of this paper is on pairing correlations
in the three isovector $J^{\pi}=0^+$ channels and the isoscalar
proton-neutron correlations in the $J^{\pi}=1^+$ channel.
In complete $0 \hbar \omega$ shell model calculations,
the definition of the pairing strength is somewhat arbitrary.
In this paper, we follow Ref. \cite{Langanke2}
in our description of pairing correlation
and define
a pair of protons or neutrons
with angular momentum quantum numbers $(JM)$
by ($c=\pi$ for protons and $c=\nu$ for neutrons)
\begin{equation}
A_{JM}^\dagger (j_a,j_b) =
\frac{1}{\sqrt{1+\delta_{j_a j_b}}}
\left[ c_{j_a}^\dagger c_{j_b}^\dagger \right]_{(JM)} ,
\end{equation}
where $\pi_j^\dagger$ ($\nu_j^\dagger$)
creates a proton (neutron) in an orbital with total spin $j$.
The
isovector (plus sign)
and isoscalar (minus sign) proton-neutron pair operators are given by
\begin{equation}
A_{JM}^\dagger (j_a,j_b) =
\frac{1}{\sqrt{2(1+\delta_{j_a j_b})}}
\left[
\nu_{j_a}^\dagger \pi_{j_b}^\dagger \pm
\pi_{j_a}^\dagger \nu_{j_b}^\dagger
\right]_{(JM)}.
\end{equation}
With these definitions, we build up a pair matrix
\begin{equation}
M_{\alpha \alpha'}^J = \sum_M \langle A_{JM}^\dagger (j_a,j_b)
A_{JM} (j_c,j_d) \rangle ,
\end{equation}
which corresponds to the calculation of the canonical ensemble average
of two-body operators like $\pi_1^\dagger \pi_2^\dagger \pi_3 \pi_4$.
The index $\alpha$ distinguishes the various possible $(j_a,j_b)$
combinations (with $j_a \geq j_b$).
The square matrix $M$ for the $pf$ shell has
dimension $N_J=4$ for $J=0$ and $N_J=7$ for $J=1$.
In Ref. \cite{Langanke2}
the sum
of the eigenvalues of the matrix $M^J$ (its trace)
has been introduced as a convenient overall measure for the strength
of pairs with spin $J$:
\begin{equation}
P^J = \sum_{\beta} \lambda_{\beta}^J = \sum_{\alpha} M^J_{\alpha \alpha},
\end{equation}
where the $\lambda_\beta$ are eigenvalues of the matrix $M$.
An alternative measure of the overall pair correlations
in nuclear wave functions is in terms
of the BCS pair operator
\begin{equation}
\Delta_{JM}^\dagger = \sum_{\alpha} A_{JM}^\dagger
(\alpha).
\end{equation}
The quantity $\sum_M \langle \Delta_{JM}^\dagger \Delta_{JM} \rangle$
is then a
measure of the number of nucleon pairs with spin $J$.
We note that, for the results discussed in this paper,
the BCS-like definition for the overall pairing
strength yields the same qualitative results for the pairing content
as the definition (4).
Some SMMC results for BCS pairing in nuclei in the mass range $A=48-60$
are published in Refs. \cite{Langanke1,Langanke2,report,Engel}.
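As a small numerical illustration of the two overall measures, the following Python sketch uses a hypothetical $4\times 4$ symmetric positive semi-definite matrix standing in for the $J=0$ pair matrix $M$ of Eq. (3) (the $pf$ shell has $N_{J}=4$); it checks that the strength of Eq. (4) is simply the trace of $M$, while the BCS-like quantity $\sum_M \langle \Delta_{JM}^\dagger \Delta_{JM} \rangle$ corresponds to the sum over all entries of $M$. The matrix entries are random and illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the J = 0 pair matrix M of Eq. (3): the pf shell
# has N_J = 4 combinations (j_a, j_b), and <A† A> is positive semi-definite.
X = rng.normal(size=(4, 4))
M = X @ X.T / 4.0

# Eq. (4): the pairing strength P^J is the sum of the eigenvalues of M,
# which equals its trace.
P_J = np.linalg.eigvalsh(M).sum()
assert np.isclose(P_J, np.trace(M))
assert P_J >= 0.0

# BCS-like measure: since Delta† = sum_alpha A†(alpha), the quantity
# sum_M <Delta† Delta> is the sum over all entries of M.
P_BCS = M.sum()
```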
With our definition (4) the pairing strength is non-negative, and indeed
positive, at the
mean-field level. The
mean-field pairing
strength, $P_{\rm mf}^J$, can be defined as in Eqs.\ (3) and (4),
but replacing the expectation values of the two-body operators
in the definition of $M^J$ by
\begin{equation}
\langle c_1^\dagger c_2^\dagger c_3 c_4 \rangle \rightarrow
n_1 n_2 \left( \delta_{13} \delta_{24} - \delta_{23} \delta_{14} \right) ,
\end{equation}
where $n_k = \langle c_k^\dagger c_k \rangle$ is the occupation number
of the orbital $k$. This mean-field value provides a baseline against
which true pair correlations can be judged.
\section{Results}
Our SMMC studies for self-conjugate nuclei with $A=48-60$
have been performed using the modified
Kuo-Brown KB3 residual interaction \cite{KB3}. Some results of these
studies for observables like ground state energies and total
Gamow-Teller, $B(M1)$, and $B(E2)$ strengths have already been presented
in Ref. \cite{Langanke1}. As for other $pf$-shell nuclei, the SMMC results
for the self-conjugate nuclei
are generally in very good agreement with data.
As is customary in shell model studies, the Coulomb interaction
has been neglected, which we believe is a justified approximation
in this mass range. Thus, our shell model Hamiltonian is isospin-invariant
and, as a consequence, there are symmetries in the pairing strengths
of the three isovector $J^{\pi}=0^+$ channels. For even-even $N=Z$ nuclei,
$P^{J=0}$ is
identical for pp, pn and nn
pairing. In odd-odd self-conjugate nuclei (with ground state isospin
$T=1$),
the equality of
the pp and nn channels remains,
but the pn part of the isovector multiplet
$P^{J=0}$ can differ from the other two
components
($P^{J=0}_{pp}=P^{J=0}_{nn} \ne P^{J=0}_{pn}$).
At the mean-field level, the three components of the isovector
pairing multiplet are identical for both odd-odd and even-even $N=Z$ nuclei.
\subsection{Pairing in the ground states of self-conjugate $pf$-shell nuclei}
Our SMMC calculations for the even-even nuclei
have been performed at finite temperatures
$T=0.5$ MeV,
which has been found sufficient in previous studies
to guarantee cooling into the ground state.
The odd-odd nuclei were studied at $T=0.4$ MeV. Since the latter have
experimentally a low-lying excited state
with an excitation energy of about 0.2 MeV, our ``ground state''
calculations for these nuclei correspond to a mixture of the ground
and first excited states.
As expected, we calculate vanishing isospin and
angular momentum expectation values ($\langle T^2 \rangle$ and
$\langle J^2 \rangle$, respectively) for the ground states of the even-even
nuclei. For the odd-odd nuclei our calculations yield isospin
expectation values $\langle T^2 \rangle = 2.2\pm0.3$ for $^{50}$Mn,
and $1.8\pm0.2$ for $^{54}$Co,
in good agreement with experiment, as both $^{50}$Mn
and $^{54}$Co have a $T=1$ ground state, so that $T(T+1)=2$.
For $^{58}$Cu we find
$\langle T^2 \rangle = 1.4\pm0.2$, while the experimental level spectrum
has a $T=0$ $1^+$
ground state and a $T=1$ $0^+$ first excited state at $E_x=0.2$ MeV.
The error bars in our calculations for the angular momentum expectation
values
($\langle J^2 \rangle = -1.4\pm4.5$ for $^{50}$Mn, $-7.0\pm7.5$
for $^{54}$Co and $3.0\pm4.0$ for $^{58}$Cu) prohibit meaningful
comparison with experiment. We note that the SMMC also
reproduces the $T=1$ isospin of the $^{62}$Ga and
$^{74}$Rb ground states
(using the $p,f_{5/2},g_{9/2}$ model space \cite{Dean96}).
Detailed
calculations of the pairing in these two nuclei will be presented in
\cite{Dean96}.
We have calculated the isovector
and isoscalar pairing strengths in the ground states of the self-conjugate
nuclei with $A=48-60$ using the definition (4).
The results are presented in Fig. 1,
where they are also compared to the mean-field values derived
using Eq. (6).
Discussing the isovector $J=0$ pairing channels first,
Fig. 1 shows an excess of pairing correlations
over the mean-field values.
For the even-even nuclei, this excess represents the well-known
pairing coherence in the ground state.
In addition, Fig. 1 exhibits
a remarkable staggering in the $J=0$ pp
and nn pairing channels
($P^{J=0}_{pp} = P^{J=0}_{nn}$) and in the pn
pairing channel ($P^{J=0}_{pn}$) when comparing neighboring even-even
and odd-odd self-conjugate nuclei. In the latter, the isovector pn pairing
clearly dominates the pp and nn pairing
and is always significantly
larger than in the neighboring
even-even $N=Z$ nuclei. In contrast, the
like-nucleon pairing
is noticeably reduced in the odd-odd nuclei
relative to the values in the neighboring even-even nuclei.
The odd-even staggering is not visible in the total $J=0$
pairing strength,
\begin{equation}
P^{J=0}_{tot}=
P^{J=0}_{pp}+P^{J=0}_{nn} + P^{J=0}_{pn},
\end{equation}
as can be seen in Fig. 1.
Although the excess of
$P^{J=0}_{tot}$ over the mean-field value is significant, it is
about equal for the ``open shell'' nuclei
$^{48}$Cr, $^{50}$Mn, $^{52}$Fe and $^{60}$Zn. Towards
the $N=28$ shell closure the excess decreases and becomes a minimum
for the double-magic nucleus $^{56}$Ni. In fact, the
excess of
$P^{J=0}_{tot}$ over the mean-field value is only $0.42\pm0.1$
in $^{56}$Ni (or about $13\%$
of $P^{J=0}_{tot}$), while it is $2.1\pm0.1$ in $^{48}$Cr (or about $350\%$).
We thus conclude that the change in the total pairing strength
in the $N=Z$ nuclei is governed by shell effects, but that there is a
significant redistribution of strength between the like-
and unlike-pairs in going from even-even to odd-odd nuclei,
with pn pairing favored in the latter.
Our calculations indicate that the $J=1$ pn channel is the most
important isoscalar pairing channel in the ground states.
As is shown in Fig. 1, there is a modest excess of $J=1$ isoscalar pn pairing
over the mean-field values in all nuclei studied. The calculations
indicate a slight even-odd staggering in the pairing excess, with the excess
being larger in the even-even nuclei. Apparently
the strong isovector pn pairing decreases not only
the isovector pairing between like nucleons, but also the isoscalar
pn pairing. As in the isovector channels, the excess of isoscalar
pairing is strongly decreased close to the $N=28$ shell closure,
where the nuclei become spherical. It is well-known that isoscalar $J=1$
pn pairing is important in deformed nuclei like $^{48}$Cr.
We note that
within the uncertainties of the calculation,
our studies
do not show any pairing excess above the mean-field values
in the $J=3$, $5$, and
$7$ isoscalar channels.
It is interesting to compare
the present SMMC results
for the isovector pairing strength
with those of
a simple seniority-like model with an isospin-invariant
pairing Hamiltonian \cite{Engel}. One finds
that the magnitude of the isovector
pairing correlations is smaller in the SMMC studies than in the
simple pairing model, as in the realistic shell model these
correlations compete with other nucleonic correlations (e.g.
the isoscalar pairing, which had not been considered in Ref. \cite{Engel}).
However, it is remarkable that the simple pairing model reproduces
the odd-even staggering seen in the
SMMC studies.
Using the HFB approach, Wolter {\it et al.} have studied
pairing in $^{48}$Cr, restricting themselves to considering
isovector and isoscalar pairing separately \cite{Wolter}.
These authors find
the isoscalar pairing mode to be considerably stronger than the
isovector \cite{Wolter}. This finding is not supported
by our SMMC calculation; it might be caused by the fact that the HFB
solutions had not been projected on the appropriate ground state
angular momentum. In fact, the HFB solutions have
$\langle J^2 \rangle \gg 0 $,
making the presence of aligned pairs necessary. The SMMC calculations,
which have
$\langle J^2 \rangle \approx 0 $,
do not show the importance
of aligned pairing in the $^{48}$Cr ground state.
To investigate
how the various pairing strengths change if the proton-neutron
symmetry of the $N=Z$ nuclei is broken
by adding additional neutrons,
we have performed a series of SMMC
studies for the iron isotopes $^{52-58}$Fe; the results are shown
in Fig. 2. The striking result is that the excess
of isovector pn pairing over the mean-field values is decreased
drastically upon adding neutrons and has practically
vanished in $^{56,58}$Fe.
In contrast
the excess in both $J=0$ pp and nn pairing is increased
by moving away from charge symmetry.
At the mean-field level pp pairing
is virtually unchanged through an isotope chain,
while adding neutrons increases
the nn pairing.
The excess in total pairing strength $P^{J=0}_{tot}$
within the isotope chain increases only slightly as neutrons are added.
Fig. 2 also shows that the excess of $P^{J=1}_{pn}$
pairing
decreases with neutron excess. However, this decline is less dramatic
than for $P^{J=0}_{pn}$ and it appears that in nuclei
with neutron excess, isoscalar $(J=1$) pn correlations are
more important than isovector. This finding
is in agreement with the observation that isoscalar pn
pairing is mainly responsible for the quenching of the Gamow-Teller
strength \cite{Gamow},
as those SMMC investigations have been performed for nuclei with
$N > Z$.
Note that $^{54}$Fe is exceptional as
it is magic in the neutron number ($N=28$).
As a consequence, the excess of nn pairing,
and also of isovector and isoscalar pn pairing, is low
compared to the other isotopes.
Our SMMC results for pairing correlations as a function
of neutron excess thus yield the following simple picture.
Adding neutron pairs apparently increases the collectivity of the
neutron condensate so that there are fewer neutrons available
to pair with protons. As a result, protons pair more often with other
protons, in this way increasing the proton collectivity, although the
total number of protons, of course, remains unchanged.
Based on the results of their simple pairing model, the authors of Ref.
\cite{Engel} come to the same conclusions. In fact,
the SMMC results for the changes of isovector pairing
within an isotope chain again agree well with the simple
pairing model. However, the ground state
in the latter is nearly
a product of pp and nn condensates (in the limit of large neutron excess),
while
protons and neutrons
in the realistic SMMC calculations
still couple via
isoscalar (mainly $J=1$) correlations.
\subsection{Thermal properties of $^{50}$Mn and $^{52}$Fe}
To study the thermal properties of odd-odd and even-even $N=Z$ nuclei
we have performed SMMC studies of $^{50}$Mn and $^{52}$Fe at selected
temperatures $T \leq 2$ MeV;
the results are presented in Figs. 3 and 4.
As we will show in the following, the thermal
properties of the two nuclei are dominated
by the three isovector $J=0$ and the isoscalar $J=1$ proton-neutron
correlations; differences between the two nuclei can be traced
to differences in the thermal behavior of these correlations.
We note again that the three isovector $J=0$ pairing correlations are identical
in even-even $N=Z$ nuclei.
As discussed above, these correlations show a strong coherence and
dominate the ground state properties of $^{52}$Fe.
The temperature dependence of the
$J=0$ pp correlations in $^{52}$Fe is very similar to those
of the other even-even iron isotopes $^{54-58}$Fe, which have been discussed
in Refs. \cite{Dean,Langanke2}.
As in the other iron isotopes, the SMMC
calculations predict a phase transition
in a rather narrow temperature interval around $T=1$ MeV,
where the $J=0$ pairs break. Due to $N=Z$ symmetry, this phase
transition also occurs in the $J=0$ pn channel.
This behavior is different from the iron isotopes with neutron excess.
There the pn $J=0$ correlations have only
a small excess at low temperatures,
where $J=0$ pairing among like nucleons dominates, and this excess actually
increases slightly when the like-pairs break \cite{Langanke2}.
We also observe that
the excess in the
isoscalar $J=1$ pn correlations is about constant at temperatures
below 2 MeV. Thus, these correlations persist to higher temperatures
than the isovector $J=0$ pairing, as has already been pointed out in
Ref. \cite{Langanke2}.
The pairing phase transition is accompanied by a rapid increase in
the moment of inertia,
$I=\langle J^2 \rangle /(3T)$ \cite{Langanke2},
and a partial unquenching (orbital part) of the total M1 strength.
The total Gamow-Teller strength also unquenches partially at the
phase transition, related to the fact that the vanishing of the $J=0$
pn
correlations reduces the quenching for Gamow-Teller transitions between
identical orbitals (mainly the $f_{7/2}$
proton to $f_{7/2}$ neutron transitions).
The residual quenching of the Gamow-Teller strength at temperatures above
the phase transition (the single-particle value, calculated from
the various ground-state occupation numbers, is 13.1)
is caused by the isoscalar pn correlations,
which persist to higher temperatures (see Fig. 3). We note that the temperature
evolution of the Gamow-Teller transition is different in $^{52}$Fe than in
the iron isotopes with neutron excess \cite{Dean,Langanke2}.
In the latter, $J=0$ pn
correlations do not show any significant excess over the mean-field values, and
in particular, do not exhibit the phase transition
as in $^{52}$Fe. As a consequence,
the Gamow-Teller strength in nuclei with neutron excess is roughly constant
across the pairing phase transition, without any noticeable
unquenching.
As required by general thermodynamic principles,
the internal energy
increases monotonically with temperature.
The heat capacity $C(T)=dE/dT$
is usually associated with a level density parameter $a$
by $C(T)=2a(T) T$.
As is typical for even-even
nuclei \cite{Dean} $a(T)$
increases from $a=0$ at $T=0$
to a roughly constant value at temperatures above the phase transition.
We find $a(T) \approx 5.3\pm1.2$ MeV$^{-1}$ at $T \geq 1$ MeV, in
agreement with the empirical value of 6.5 MeV$^{-1}$ \cite{Thielemann}.
At higher temperatures, $a(T)$ must decrease due to
the finite model space of our calculation.
The present temperature grid is not fine enough to determine whether
$a(T)$ exhibits a maximum related to the phase transition, as suggested
in \cite{Dean}.
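The extraction of $a(T)$ from the internal energy sketched above can be made concrete numerically. The following minimal sketch uses hypothetical energies (chosen to mimic a Fermi-gas-like behavior $E = E_0 + aT^2$ with constant $a$, not our SMMC data): the heat capacity $C(T)=dE/dT$ is obtained by central differences on a temperature grid, and the level density parameter follows from $a(T)=C(T)/(2T)$.

```python
# Minimal numerical sketch (hypothetical data, not the SMMC results):
# extract the heat capacity C(T) = dE/dT by central differences and the
# level density parameter a(T) = C(T)/(2T) from an internal-energy grid.
# The energies mimic the Fermi-gas-like behavior E = E0 + a*T^2 with a
# constant a = 5.3 MeV^-1, as found above the phase transition.

def heat_capacity(T, E):
    """Central-difference heat capacity at the interior grid points."""
    return [(E[i + 1] - E[i - 1]) / (T[i + 1] - T[i - 1])
            for i in range(1, len(T) - 1)]

def level_density_parameter(T, E):
    """a(T) = C(T)/(2T) at the interior grid points."""
    return [c / (2.0 * t)
            for c, t in zip(heat_capacity(T, E), T[1:-1])]

a_true = 5.3                                  # MeV^-1 (hypothetical)
T = [0.2 * i for i in range(1, 11)]           # temperatures 0.2 ... 2.0 MeV
E = [-180.0 + a_true * t ** 2 for t in T]     # internal energies in MeV

a_fit = level_density_parameter(T, E)
# Central differences are exact for a quadratic E(T): a(T) = 5.3 MeV^-1.
```

For a quadratic $E(T)$ the central difference is exact, so the recovered $a(T)$ is constant; on actual SMMC data the finite temperature grid limits the resolution, as noted in the text.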
As expected for an even-even nucleus, $\langle T^2 \rangle$
is zero at low temperatures and then slowly increases with temperature
as higher isospin configurations are mixed in.
The thermal properties of $^{50}$Mn (Fig. 4) show
some distinct differences from $^{52}$Fe, which we believe are typical for
odd-odd $N=Z$ nuclei in this mass range. As already stressed in the last
section, $J=0$ pn correlations dominate
the ground state properties. With increasing temperature
these correlations decrease rapidly and steadily
and have already dropped to the mean-field
value at $T=0.8$ MeV. (The fact that the correlations actually become
slightly negative is unphysical and due
to uncertainties in the extrapolation
required to avoid the sign problem \cite{Alhassid}.
We have verified the qualitative results of our calculation for
an isospin invariant pairing+quadrupole Hamiltonian which does not exhibit
the sign problem \cite{Zheng}.)
The $J=0$ pp (and the identical nn) correlations
show the same phase transition near $T=1$ MeV, as in $^{52}$Fe
and the other even-even nuclei \cite{Dean,Langanke2}. In contrast
however,
the excess of the $J=0$ correlations
between like nucleons
in $^{50}$Mn
is noticeably smaller at low temperatures.
As in $^{52}$Fe, the moment of inertia of $^{50}$Mn
increases drastically
when the $J=0$ pairs break.
We observe that the isoscalar $J=1$ pairing in $^{50}$Mn is rather
similar to that in $^{52}$Fe. In particular the excess of these
correlations is roughly constant
in the temperature range where the
isovector $J=0$ correlations vanish and persists to higher temperatures than
the excess in the isovector correlations.
Related to the rapid decrease of isovector pn pairing
with temperature, the isospin expectation value drops from
$\langle T^2 \rangle =2$ at $T \le 0.5$ MeV to
$\langle T^2 \rangle \approx 0.2\pm 0.2$ at $T = 1.0$ MeV;
the unpaired proton and neutron apparently recouple from an isovector
$J=0$ coupling to an isoscalar $J=1$ coupling.
At temperatures above $T=1$ MeV (i.e., above the phase transition for
like-nucleon pairs), the temperature dependences of the isospin expectation
values in $^{50}$Mn and $^{52}$Fe are similar.
The temperature dependence of the energy $E=\langle H \rangle$ in $^{50}$Mn
is significantly different than that in the even-even nuclei.
As can be seen in Fig. 4, $E$ increases approximately linearly
with temperature, corresponding to a constant heat capacity
$C(T) \approx 5.4\pm1$ MeV$^{-1}$;
the level density parameter decreases like $a(T) \sim T^{-1}$
in the temperature interval between 0.4 MeV and 1.5 MeV.
We note that the
same linear increase of the energy with temperature is observed in SMMC
studies of odd-odd $N=Z$ nuclei performed with a pairing+quadrupole
Hamiltonian \cite{Zheng} and thus appears to be generic for self-conjugate
odd-odd $N=Z$ nuclei.
From the discussion above, we conclude that
the main effect of heating the nuclei $^{50}$Mn
and $^{52}$Fe to temperatures around 1.5 MeV is to release the pairing
energy stored in the isovector $J=0$ correlations. From Fig. 1
(i.e., $P^J_{\rm tot}$) we expect
that this pairing energy is about the same for both nuclei. This is in
fact confirmed by our calculation:
Using a linear extrapolation to zero
temperature, we find an internal excitation energy
of about 8.6 MeV at $T=1.6$ MeV in $^{50}$Mn, which agrees nicely with
the value for $^{52}$Fe, if we approximate $E(T=0)=E(T=0.5$ MeV).
The apparent difference between the two nuclei is that, with increasing
temperature, a strong pairing gap in the three isovector $J=0$ channels
has to be overcome in $^{52}$Fe, while no such strong gap
in the dominant isovector pn channel appears to exist in the
odd-odd $N=Z$ nucleus $^{50}$Mn.
Associated with the decrease of the isovector pn
correlations, the Gamow-Teller strength in $^{50}$Mn
unquenches with heating
to $T=1$ MeV. This is noticeably different than in even-even nuclei,
where the Gamow-Teller strength is roughly constant
at temperatures below the pairing phase transition.
We stress that the $B(GT)$ strength, however,
remains noticeably quenched even above $T=1$ MeV
due to the persistence of the isoscalar pn correlations;
the mean-field value for the Gamow-Teller strength increases only slightly
from 11.1 at $T=0.4$ MeV to 11.6
at $T=2$ MeV.
The $B(M1)$ strength in $^{50}$Mn
decreases when the isovector pn correlations
vanish. It unquenches at
$T\ge0.8$ MeV, related to the breaking
of the pp and nn pairs, as shown in \cite{Langanke2}.
As in even-even nuclei, the $B(M1)$ strength in $^{50}$Mn remains
quenched even for $T\approx 2$ MeV (the mean-field value
is 33.6 $\mu_N^2$ at $T=2$ MeV)
due to the
persisting isoscalar pn pairing.
\section{Conclusion}
We have studied pairing correlations in self-conjugate
nuclei in the middle of the $pf$ shell;
our calculations are the first to take all relevant two-nucleon
correlations into account.
Our study is based on SMMC
calculations of these nuclei within the complete $pf$ shell employing the
realistic Kuo-Brown interaction KB3. Several results of our investigation
are noteworthy.
The isovector $J=0$ pairing correlations show a significant staggering
between odd-odd and even-even $N=Z$ nuclei. While the three isovector
channels have identical strengths in even-even $N=Z$ nuclei,
the total isovector pairing strength is strongly redistributed
in odd-odd self-conjugate nuclei, with a strong enhancement of the
proton-neutron correlations. This redistribution manifests itself in
a significantly different temperature dependence of observables like
the $GT$ and $B(M1)$ strengths and the internal energy.
The importance of isovector proton-neutron correlations decreases drastically
if neutrons are added. These additional neutrons increase the coherence
of the neutron condensate, thus making fewer neutrons available
for isovector proton-neutron correlations.
At the same time, the correlations among protons also increase
if neutrons are added. Our calculations indicate that in nuclei
with large neutron excess, isoscalar ($J=1$) proton-neutron correlations
dominate over isovector proton-neutron pairing.
We have studied the temperature dependence of the pairing correlations
and of selected observables for $^{50}$Mn and $^{52}$Fe.
While the even-even $N=Z$ nucleus $^{52}$Fe shows the same qualitative trends
as other even-even nuclei in this mass region (including a pairing
phase transition at temperatures near $T=1$ MeV), the results for
the odd-odd nucleus $^{50}$Mn differ in some interesting aspects.
While the proton-proton and neutron-neutron correlations (although
much weaker than in even-even nuclei)
show a phase transition near $T=1$ MeV, the dominant
$J=0$ proton-neutron correlations decrease steadily with increasing temperature.
As a consequence the internal energy increases linearly with temperature,
indicating that there is no pairing gap
in the $J=0$ proton-neutron channel
to be overcome.
\acknowledgements
This work was supported in part by the National Science Foundation,
Grants No. PHY94-12818 and PHY94-20470. Oak Ridge National Laboratory
is managed by Lockheed Martin Energy Research Corp. for the U.S. Department
of Energy under contract number DE-AC05-96OR22464.
We are grateful to J.~Engel, B.~Mottelson and P.~Vogel
for helpful discussions. DJD acknowledges an E.P. Wigner Fellowship
from ORNL.
Computational
cycles were provided by
the Concurrent Supercomputing Consortium at Caltech
and by the VPP500,
a vector parallel processor at
the RIKEN supercomputing facility; we thank Drs. I. Tanihata and
S. Ohta for their assistance with the latter.
A microscopic understanding of the nuclear fission process
remains one of the most complex and challenging problems in
low-energy nuclear physics.
Although fission-barrier heights are not observable quantities, they
play an important role in determining whether the excited compound
nucleus de-excites through neutron evaporation or fission. They are
also a necessary input for the calculations of fission
cross-sections. From a different point of view, they make it possible to describe
quantitatively the nuclear stability with respect to spontaneous
fission in competition with other decay modes,
particularly $\alpha$ decay.
Over the years, many microscopic calculations
of the average fission paths of heavy nuclei have been performed,
within mean-field approaches supplemented by the treatment of nuclear
correlations without or with the restoration of some symmetries
spuriously broken by the mean-field.
While most fission-barrier calculations have been
performed for even-mass (with even proton and neutron numbers) nuclei
(see e.g. \cite{Nikolov_2011,McDonnell_2013,Staszczak_2013,Younes_2009,Warda_2012,Rodriguez-Guzman_2014,Abusara_2010,Lu_2012,Abusara_2012,
Afanasjev_2013,Hao_2012} for recent related works), there are
comparatively very few microscopic studies dedicated to odd-mass
nuclei and even fewer to odd-odd nuclei. The main reason
is the complication caused by the breaking of time-reversal symmetry
at the mean-field level for a nuclear system involving an odd number
of neutrons and/or protons, considered as identical fermions.
One of the earlier microscopic studies of spectroscopic properties in odd-mass actinides at
very large deformation was performed by Libert and collaborators in
Ref.~\cite{Libert_1980} for the band-head energy spectra in the
fission-isomeric well of $^{239}$Pu within the
rotor-plus-quasi-particle approach.
More recently, fission-barrier calculations were performed within the Hartree-Fock-Bogoliubov
approach by Goriely \textit{et al.}~\cite{Goriely_2009} for nuclei
with a proton number $Z$ between 88 and 96.
The resulting fission barriers were then used for the
neutron-induced fission cross-section calculations as part of the
RIPL-3 project published in
Ref.~\cite{Capote_2009}.
Around the same time, Robledo \textit{et al.} have performed fission-barrier
calculations of the $^{235}$U \cite{Robledo_2009} and $^{239}$Pu
\cite{Iglesia_2009} nuclei, within the equal-filling approximation
(EFA) presented e.g. in Ref.~\cite{Robledo_2008}. In practice, the EFA
consists in occupying pairwise the
lowest single-particle energy levels---exhibiting the two-fold Kramers
degeneracy---and ``splitting'' the unpaired nucleon into two
time-reversal conjugate states with an equal occupation 0.5. In this
way, the time-reversal symmetry is not broken and the calculations are
performed in a way which is very similar to what is done when
describing the ground state of an even-even nucleus.
There are actually two different formalisms in which this EFA is
implemented. One, as used in Refs.~\cite{Robledo_2009,Iglesia_2009}, deals
with self-consistent calculations of one-quasiparticle states. It has
been shown to yield the same time-even part of the densities as the
exact blocking calculations within this
framework~\cite{Schunck_2010}. Another EFA approach, corresponding to an
equal-filling approximation of self-consistently blocked one-particle
states, will be considered here in some cases for the sake of
comparison with the corresponding exact calculations, which are the
subject of our study.
Although the EFA is likely to be a reasonable approximation, a proper
microscopic description of odd-mass nuclei requires a priori the
consideration of all the effects brought up by the unpaired nucleon.
This nucleon gives rise to
non-vanishing time-odd densities entering the mean-field
Hamiltonian. The terms involving time-odd densities vanish identically
in the ground-state of even-even nuclei. Their presence for odd-mass
nuclei increases the computational cost. As discussed, e.g., in
Refs.~\cite{Quentin_2010,Bonneau_2011}, the time-odd densities cause a
spin polarisation of the even-even core nucleus which results in the
removal of the Kramers degeneracy of the single-particle states. The
recent work of Ref.~\cite{Bonneau_2015} shows that the static magnetic
properties of deformed odd-mass nuclei can be properly described when
taking into account the effect of core polarization
induced by the breaking of the time-reversal symmetry
at the mean-field level. Therefore, it is our purpose here to study
the effect of time-reversal symmetry breaking on fission
barriers. To do so, we calculate
fission-barrier profiles of odd-mass
nuclei within the self-consistent blocking approach in the
HF+BCS framework, taking the time-reversal symmetry breaking at the
mean-field level into account.
As is well known, some intrinsic geometrical symmetries are broken near
both the inner and outer barriers. The intrinsic parity is violated for
elongations somewhat before the outer-barrier region and
beyond~\cite{Moller70}. The axial symmetry has also been known for a very
long time to be violated in static calculations around the inner
barrier, an effect which increases with $Z$ in the actinide region
from, e.g., the thorium isotopes \cite{Pashkevich69}.
Recently it has been suggested that the outer barrier of actinide
nuclei should also correspond to triaxial shapes~\cite{Lu14}. However,
the triaxial character of the fission path in both barriers might
vanish or be strongly reduced when the path is defined as a least-action
trajectory under some ansatz for the adiabatic mass parameters as well
as for the set of collective variables to be retained.
first discussed in Ref.~\cite{Gherghescu99} for super-heavy nuclei. There,
all quadrupole and octupole (axial and non-axial) degrees of freedom
have been considered. The mass parameters had been calculated
according to the Inglis-Belyaev formula~\cite{Belyaev_1961}. Such a result
has been recently confirmed in non-relativistic~\cite{Sadhukhan14} and
relativistic \cite{Lu14,Zhao16} mean-field calculations. The
calculations of mass parameters have been significantly improved by
using the non-perturbative ATDHFB approach first discussed and used in
Ref.~\cite{Yuldashbaeva_1999}, later revisited in
Ref.~\cite{Baran_2011}. Moreover the intensities of pairing fluctuations
have been included in the set of collective variables together with
the two axial and non-axial quadrupole degrees of
freedom. Calculations in $^{240}$Pu and $^{264}$Fm in
Ref.~\cite{Sadhukhan14} as well as $^{250}$Fm and $^{264}$Fm in
Ref.~\cite{Zhao16} have drawn similar conclusions about the
disappearance or strong quenching of the triaxiality of the fission
paths. These results have been shown to imply very strong consequences
on the spontaneous fission half-lives.
From these considerations, and keeping in mind the somewhat preliminary
character of our exploration of fission barriers of odd nuclei, we
have deemed as a reasonable first step to stick here to purely axial
microscopic static solutions.
This paper is organized as follows. In Sec.~\ref{Theoretical framework}, we briefly
present the self-consistent-blocking HF+BCS formalism and some of its key
aspects, together with some technical details of the calculations.
Our results will be presented in Section~\ref{Results: Fission
barriers} and Section~\ref{Spectroscopic properties in the
fission-isomeric well}. Finally, the main results are summarised and
some conclusions drawn in Section~\ref{Conclusion}.
\section{Theoretical framework \label{Theoretical framework}}
The fission-barrier heights have been obtained from
deformation-energy curves whereby the quadrupole
moment has been chosen as the driving coordinate.
The total energy at specific deformation points
has been calculated within
the Hartree-Fock-plus-BCS (HF+BCS) approach with blocking,
and we refer to this as a self-consistent blocking (SCB)
calculation. We will first discuss the details of our SCB calculations
in Subsection~\ref{SCB calculations}, while our approximate treatment
for the restoration of rotational symmetry using the Bohr-Mottelson (BM)
unified model is presented in
Subsection~\ref{Bohr-Mottelson total energy}.
A detailed discussion about the expressions relating our
mean-field solutions to the BM model
can be found in Ref.~\cite{Koh_2016}, and we shall only retain the relevant
expressions herein. Subsection~\ref{Calculation of moment of inertia}
will be devoted to the treatment of the moment of inertia entering the
rotational energy in the BM model, and
Subsection D to some technical aspects of the calculations.
\subsection{Self-consistent-blocking calculations \label{SCB calculations}}
We assume that the nucleus has an axially symmetric shape, such that
the projection $\Omega_{k}$ of the total angular momentum of the
single-particle state $\ket{k}$ onto the symmetry $z$-axis,
\eq{\elmx{k}{\hat{j}_z}{k} = \Omega_k \, ,}
is a good quantum number.
symmetry was allowed to be broken around and beyond the top of the
outer-barrier, where such a symmetry breaking is
known to lower the outer-barrier. For our description of
odd-mass nuclei, we have merely considered
seniority-1 nuclear states in which
only one single-particle
state is blocked. The lowest nuclear $K^{\pi}$ state, in general,
corresponds to an unpaired nucleon blocked in the single-particle
state which is the nearest to the Fermi level with
quantum numbers such that $\Omega_k = K$ and, when
parity symmetry is not broken, $\pi_k = \pi$.
In practice, the blocking procedure translates to setting
the occupation probability $v_k^2$ of the blocked single-particle
state and its pair-conjugate state to 1 and 0,
respectively.
Such a blocking procedure in an odd-mass nucleus
results in the suppression of the
Kramers degeneracy of the single-particle spectrum.
As a consequence of time-reversal symmetry breaking at the mean-field level,
the pairs of conjugate single-particle states needed for the BCS pairing treatment
cannot be pairs of time-reversed states.
However, without recourse to the Bogoliubov treatment, we were able to unambiguously
identify pair-conjugate states by searching for the maximum overlap
in absolute value between
two eigenstates of the mean-field Hamiltonian,
$\ket{k}$ and $\ket{\widetilde k}$, such that
$|\elmx{k}{\hat{T}}{\widetilde k}|$, where
$\hat T$ denotes the time-reversal symmetry operator, is as close to 1
as possible. These partner states $\ket{k}$ and $\ket{\widetilde
k}$ are dubbed as \textit{pseudo-pairs} and they serve as Cooper
pairs in our BCS framework. The value for this overlap will be exactly
1 when time-reversal symmetry is not broken. This procedure for
establishing the BCS pair states when time-reversal symmetry is broken
at the mean-field level has been implemented earlier in the work of
Ref.~\cite{Pototzky_2010}. A more detailed discussion can also be found
in Appendix A of Ref.~\cite{Koh_2016}.
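The pairing-by-maximal-overlap procedure can be sketched as follows. This is a minimal illustration, not the code actually used here: the function name, the greedy matching strategy, and the overlap values are all invented for the example. Given the matrix of overlaps $|\elmx{k}{\hat{T}}{k'}|$, each state is matched to the not-yet-matched state of maximal overlap.

```python
# Minimal sketch (invented names and numbers, not the actual production
# code): form pseudo-pairs by greedily matching each single-particle state
# to the not-yet-matched state with the largest time-reversal overlap
# |<k| T |k'>|.

def pseudo_pairs(overlap):
    """overlap[k][kp] = |<k| T |kp>|; returns a list of index pairs."""
    unmatched = set(range(len(overlap)))
    pairs = []
    while unmatched:
        k = min(unmatched)
        unmatched.remove(k)
        # the pair-conjugate partner maximizes the overlap with state k
        kt = max(unmatched, key=lambda kp: overlap[k][kp])
        unmatched.remove(kt)
        pairs.append((k, kt))
    return pairs

# Four states; with unbroken time-reversal symmetry the conjugate overlaps
# would be exactly 1, here slightly below 1 to mimic a broken-symmetry case.
ov = [[0.00, 0.98, 0.10, 0.05],
      [0.98, 0.00, 0.04, 0.12],
      [0.10, 0.04, 0.00, 0.97],
      [0.05, 0.12, 0.97, 0.00]]
print(pseudo_pairs(ov))  # -> [(0, 1), (2, 3)]
```

When time-reversal symmetry is unbroken, every matched overlap is exactly 1 and the pseudo-pairs reduce to the usual time-reversed Kramers partners.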
The breaking of the time-reversal symmetry induces terms which are
related to the non-vanishing time-odd local densities in the Skyrme
energy density functionals (see Appendix \ref{Appendix: Skyrme energy density functional}).
These time-odd local densities are the spin-vector densities
$\mathbf{s}_q$, the spin-vector kinetic-energy densities
$\mathbf{T}_q$, and the current densities $\mathbf{j}_q$, where the index
$q$ represents the nucleon charge state. These time-odd local
densities contribute in such a way that the expectation value of the
energy is a time-even quantity, as it should be.
\subsection{Bohr-Mottelson total energy \label{Bohr-Mottelson total
energy}}
The total energy within our Bohr-Mottelson approach (see the detailed discussion of Ref.~\cite{Koh_2016}),
is written as
\begin{flalign}
&\elmx{I M K \pi \alpha}{\hat{H}_{{\rm BM}}}{I M K \pi \alpha} \notag \\
&= \elmx{\Psi_{K \pi}^{\alpha}}{\hat{H}_{{\rm eff}}}{\Psi_{K \pi}^{\alpha}} - \frac{1}{2 \: \mathcal{J}} \langle {\rm \bf J}^2 \rangle_{{\rm core}}
+ \frac{\hbar^2}{2 \: \mathcal{J}} \Big[ I(I+1) \notag \\
&- K(K-1)
+ \delta_{K,\frac{1}{2}} a (-1)^{I+\frac{1}{2}} (I+ \frac{1}{2}) \Big]
\end{flalign}
with $\ket{I M K \pi \alpha}$ being the normalized nuclear state
defined by
\begin{flalign}
\ket{I M K \pi \alpha} = & \sqrt{\frac{2I+1}{16 \pi^2}} \Big[
D_{MK}^{I} \ket{\Psi_{K \pi}^{\alpha}} \notag \\
& + (-)^{(I+K)} D_{M\, -K}^{I} \hat{T} \ket{\Psi_{K \pi}^{\alpha}} \Big]
\end{flalign}
In the notation above, \textit{I} and \textit{M} are the
total angular momentum and its projection on the $z$-axis of the
laboratory frame, respectively.
The state $\ket{\Psi_{K \pi}^{\alpha}}$ refers to the intrinsic nuclear
state, while $D_{MK}^{I}$ is a Wigner rotation matrix.
The $\langle {\rm \bf J}^2 \rangle_{{\rm core}}$
quantity is the expectation value of the
total angular momentum operator for a polarized even-even core.
In our model, Coriolis coupling has been
neglected except for the case of $K=1/2$
in which its effect has been accounted for by the
decoupling parameter term.
The moment of inertia $\mathcal{J}$ and the decoupling
parameter $a$ have been computed from the
microscopic solution of the polarized even-even
core (see Ref.~\cite{Koh_2016}).
For the band-head state ($I=K$), the Bohr-Mottelson total energy reduces to
\eq{
E_{K \pi \alpha} =
\langle \hat{H}_{\rm eff} \rangle - \frac{1}{2 \: \mathcal{J}} \langle
{\rm \bf J}^2 \rangle_{{\rm core}}
+ \frac{\hbar^2}{2 \: \mathcal{J}} (2 K - \delta_{K,\frac{1}{2}} a)
\label{eq:BM band-head}
}
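This band-head limit follows directly from the general Bohr-Mottelson expression above: setting $I=K$, one gets
\eq{
I(I+1) - K(K-1) \Big|_{I=K} = K(K+1) - K(K-1) = 2K \, ,
}
while for $K=1/2$ the decoupling term reduces to $a \, (-1)^{K+\frac{1}{2}} \left(K+\tfrac{1}{2}\right) = -a$.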
For given quantum numbers $K$ and $\pi$ (when the intrinsic parity symmetry is present), the
fission-barrier heights have then been calculated as differences between the
Bohr-Mottelson energy of Eq.~(\ref{eq:BM band-head}) at the saddle points and
that of the normally-deformed ground-state $K^{\pi}$ solution.
\subsection{Calculation of the moment of inertia \label{Calculation of moment of inertia}}
Special attention has been paid to the moment of inertia entering
the core rotational energy term given by
${\rm E_{rot}} = \langle{\hat{\bf J}}^2 \rangle_{{\rm core}}/2 \mathcal{J}$.
The usual way to handle it is to use the Inglis-Belyaev (IB) formula
\cite{Belyaev_1961}. It is not satisfactory for at least three reasons.
It is derived within the adiabatic limit of the Routhian
Hartree-Fock-Bogoliubov approach. The Routhian approach is, as is well
known, only a semi-quantal prescription to describe the rotation of a
quantal object. Moreover, it is not clear, as we will see, that the
corresponding collective motion is adiabatic. Finally, the IB
formula corresponds to a well-defined approximation to the
Routhian-Hartree-Fock-Bogoliubov approach.
Concerning the last point, as discussed in Ref. \cite{Yuldashbaeva_1999}, the IB moment of
inertia ought to be renormalized to take into account the so-called
Thouless-Valatin corrective terms \cite{Thouless_Valatin_1962} studied in detail in
Ref. \cite{Yuldashbaeva_1999}.
They arise from the response of the self-consistent
fields with respect to the time-odd density (as e.g. current and spin
vector densities) generated by the rotation of the nucleus which is
neglected in the IB ansatz. In order to incorporate these
corrective terms in our current approach, the moments of inertia
yielded by the IB formula $\mathcal{J}_{\rm Bel}$ are scaled by
a factor $\alpha$ whose value is taken to be 0.32 following the
prescription of Ref.~\cite{Libert_Girod_Delaroche_1999}:
\eq{
\mathcal{J}' \; = \; \mathcal{J}_{\rm Bel} \, (1 + \alpha) \,.
}
As a result, the rotational correction evaluated with the IB moment of
inertia is correspondingly reduced by the factor $1/(1+\alpha)$.
Let us remark that the above correction concerns adiabatic regimes of rotation.
Projecting after variation the $0^+$ state out of a HF+BCS solution,
corresponds, of course, in principle to a better approach to the
determination of the ground-state energy. This has been performed in
Ref.~\cite{Bender_2004} for the fission-barrier of $^{240}$Pu upon
using two Skyrme force parametrizations
(SLy4 and SLy6 \cite{Chabanat_1998,Chabanat_1998_Erratum}).
These works clearly show that using
the IB approach leads to an overestimation of the rotational
correction by about 10 - 20\% in the region of inner-barrier and
fission-isomeric state and by more than 80\% close to the
outer-barrier. A word of caution on the specific values listed above
should be made, however, since these calculations yield a first $2^+$
energy in the ground-state band which is about twice its experimental
value (83 keV instead of 43 keV).
A third theoretical estimate stems from the consideration of a phenomenological approach belonging to the family of Variable Moment of
Inertia models. It describes the evolution of rotational energies in a
band by consideration of the well-known Coriolis Anti-Pairing (CAP)
effect \cite{Mottelson_Valatin_1960} in terms of intrinsic vortical
currents (see e.g. Ref. \cite{Quentin_2004}). The IB treatment of
the moment of inertia corresponds to a global nuclear rotation which
is adiabatic, i.e. corresponding to a low angular velocity $\Omega$,
or equivalently to a rather small value of the total angular momentum
(also referred to as spin). However, one can compute the average value
of the total angular momentum $I_{\rm av}$ spuriously included in the
mean-field solution as
\eq{
I_{\rm av}(I_{\rm av}+1) \hbar^2 = \langle \hat{\bf J}^2 \rangle
\label{eq:I average}}
where $\hat{\bf J}$ is the total angular momentum operator,
and find that the value of $I_{\rm av}$ even at ground-state deformation
cannot be considered as small (one finds there that $I_{\rm av}
\approx 13$). Consequently, the moment of inertia entering the
rotational correction term should reflect the fact that the average
$\Omega$ is large.
Recently, a polynomial expression for the moment of inertia as a
function of $\Omega$ denoted as $\mathcal{J}(\Omega)$ has been
proposed according to this approach to
the Coriolis anti-pairing effect
(see Ref.~\cite{Quentin_to_be_submitted} and a preliminary account of it in Ref.~\cite{pomorski_2014}).
This model shall be referred to as the Intrinsic Vorticity Model (IVM) in the
discussion herein. The IVM was found to work well for the rotational
bands at the ground-state deformation of some actinide nuclei; for
instance, a very good agreement is obtained for $^{240}$Pu for values of $I$ as
high as $I \approx 30$ (where it predicts a rotational energy
differing by only 70 keV from the experimental value).
Table \ref{table: rotational energy IB, IB+TV and IVM} lists
the spurious rotational energy obtained with
the IB formula as compared to the IVM rotational energy
for a given value of the total angular momentum $I_{\rm av}$ in the
ground-state of even-even nuclei.
In all cases, the spurious rotational energy evaluated
with the IB moments of inertia is larger by
about a factor of 2 than the values
obtained in the IVM approach.
Therefore, the rotational energy
obtained with the IB formula should
be reduced by approximately 50\%. The same amount of correction is
assumed to apply as well to all other deformations.
\begin{table}
\caption{\label{table: rotational energy IB, IB+TV and IVM}
Rotational energy (in MeV) calculated from Belyaev formula (IB) and the
Intrinsic Vorticity Model (IVM) at the ground-state deformation
of four even-even nuclei
as a function of the total angular momentum $I_{av}$
defined in Eq. (\ref{eq:I average}).}
\begin{ruledtabular}
\begin{tabular}{*{6}c}
Nucleus& $I_{av}$& & IB& & IVM \\
\hline
$^{234}$U& 12.988& & 2.371& & 1.232 \\
$^{236}$U& 12.905& & 2.423& & 1.255 \\
$^{238}$Pu& 13.146& & 2.441& & 1.266 \\
$^{240}$Pu& 13.143& & 2.408& & 1.232 \\
\end{tabular}
\end{ruledtabular}
\end{table}
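The factor-of-two statement can be checked directly against the table entries; the following short Python sketch simply transcribes the tabulated IB and IVM rotational energies and forms their ratios.

```python
# Rotational energies (MeV) transcribed from the table above: (IB, IVM).
rot_energy = {
    "234U":  (2.371, 1.232),
    "236U":  (2.423, 1.255),
    "238Pu": (2.441, 1.266),
    "240Pu": (2.408, 1.232),
}

# IB/IVM ratio for each nucleus; every value lies close to 2,
# which motivates the ~50% reduction adopted in the text.
ratios = {nuc: ib / ivm for nuc, (ib, ivm) in rot_energy.items()}
```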
Incidentally, the 50\% reduction in the rotational energy at all deformations
happens to translate into lowerings
of fission barriers of the same magnitude as those
obtained from the angular momentum projection calculations of
Ref.~\cite{Bender_2004} in $^{240}$Pu.
One may note that in both the exact and approximate projection formalisms described above, one overlooks (as we will do here) the possible effect of the coupling of the pairing mode with the collective shape degrees of freedom, as for instance a possible Coulomb centrifugal stretching (see e.g. Ref. \cite{pomorski_2014}). Indeed, if present, this effect should be more important at the angular momentum value $I_{\rm av}$ than at much lower spins.
In view of this, and to fix ideas, we consider
the following three approaches to the
calculation of the moment of inertia, namely
\begin{itemize}
\item[(i)] the Inglis-Belyaev formula (IB),
\item[(ii)] the increase of the
Inglis-Belyaev moment of inertia by
32\% (IB+32\%), in order to take into account the
Thouless-Valatin corrective terms,
\item[(iii)] the renormalization of the Inglis-Belyaev moment of
inertia by a factor of 2 (IB+100\%), which
arises from the 50\% reduction in
the rotational energy of the intrinsic vorticity model.
\end{itemize}
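Since the three prescriptions differ only by a global scaling of the IB moment of inertia, the associated rotational-energy corrections scale as its inverse. A minimal sketch (with hypothetical input values for $\langle \hat{\bf J}^2 \rangle$ and $\mathcal{J}_{\rm Bel}$, not taken from the microscopic calculation):

```python
def rotational_energy(j2_core, moi):
    """E_rot = <J^2>_core / (2 J), with hbar^2 absorbed into j2_core (MeV units)."""
    return j2_core / (2.0 * moi)

def moi_prescriptions(moi_IB):
    """The three scalings of the Inglis-Belyaev moment of inertia used here."""
    return {"IB": moi_IB, "IB+32%": 1.32 * moi_IB, "IB+100%": 2.0 * moi_IB}

# Hypothetical inputs: <J^2> = 182 hbar^2 (i.e. I_av ~ 13), moi_IB = 70 hbar^2/MeV.
e_rot = {tag: rotational_energy(182.0, moi)
         for tag, moi in moi_prescriptions(70.0).items()}
```

By construction, the IB+100\% prescription halves the rotational-energy correction, consistent with the 50\% reduction discussed above.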
\subsection{Total nuclear energies within an approximate projection on good parity states
\label{Total nuclear energies within an approximate projection on good parity states}}
In the spirit of the unified model description of odd nuclei disentangling the dynamics of an even-even core on
one hand and of the unpaired (odd) nucleon on the other, we factorize
the total wavefunction (with an obvious notation) as
\eq{
\ket{\Psi_{tot}} = \ket{\Phi_{core}} \ket{\phi_{odd}} \,.
}
Similarly we decompose the total Hamiltonian in two separate core and single particle parts
\eq{
\hat{H} = \hat{H}_{core} + \hat{h}_{odd} \,.
}
Upon projecting on good parity states both core and odd particle states we get
\eq{
\ket{\Psi_{tot}} = \ket{\Psi^{+}} + \ket{\Psi^{-}}
}
where the good parity components of $\ket{\Psi_{tot}}$ may be
developed onto core and odd-particle good parity components as
\eq{
\ket{\Psi ^{+}} = \epsilon \eta \ket{\Phi_{core}^{+}}
\ket{\phi_{odd}^{+}} + \sqrt{1 - \epsilon^2} \sqrt{1 - \eta^2}
\ket{\Phi_{core}^{-}} \ket{\phi_{odd}^{-}}
}
and similarly
\eq{
\ket{\Psi ^{-}} = \epsilon \sqrt{1 - \eta^2} \ket{\Phi_{core}^{+}}
\ket{\phi_{odd}^{-}} + \sqrt{1 - \epsilon^2} \, \eta
\ket{\Phi_{core}^{-}} \ket{\phi_{odd}^{+}}
}
where all kets on the r.h.s. of the two above equations are normalized.
As a result of this, and further making the rough assumption that
$\hat{H}_{core}$ and $\hat{h}_{odd}$ break parity only slightly,
one gets approximately the energies of the state described by the ket
$\ket{\Psi_{tot}}$ after projection as
\eq{
E^{+} = \frac{\epsilon^2 \eta^2 (E_{core}^{+} + h_{odd}^{+}) + (1 -
\epsilon^2) (1 - \eta^2) (E_{core}^{-} + h_{odd}^{-})}{1 -
(\epsilon^2 + \eta^{2}) + 2 \epsilon^2 \eta^{2}}
}
in the positive parity case and similarly for the negative parity case
\eq{
E^{-} = \frac{\epsilon^2 (1 - \eta^2) (E_{core}^{+} + h_{odd}^{-}) +
(1 - \epsilon^2) \eta^2 (E_{core}^{-} + h_{odd}^{+})}{(\epsilon^2 +
\eta^{2}) - 2 \epsilon^2 \eta^{2}}
}
where $E_{core}^{+}$ and $E_{core}^{-}$ are the energies of the
projected core states and $h_{odd}^{+}$ and $h_{odd}^{-}$ are the
diagonal matrix elements
$\elmx{\phi_{odd}^{+}}{\hat{h}_{odd}}{\phi_{odd}^{+}}$ and
$\elmx{\phi_{odd}^{-}}{\hat{h}_{odd}}{\phi_{odd}^{-}}$.
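The two projected energies amount to simple weight bookkeeping over the four core/odd-nucleon parity combinations; the following Python sketch implements the two formulas above and can be used to check the limiting cases discussed next.

```python
def projected_energies(eps, eta, E_core_p, E_core_m, h_odd_p, h_odd_m):
    """Approximate parity-projected energies (E^+, E^-) of the odd nucleus.

    eps and eta are the positive-parity amplitudes of the core and of the
    odd-nucleon state; the weights reproduce the numerators and
    denominators of the two equations above."""
    w_pp = eps**2 * eta**2              # core +, odd +
    w_mm = (1 - eps**2) * (1 - eta**2)  # core -, odd -
    w_pm = eps**2 * (1 - eta**2)        # core +, odd -
    w_mp = (1 - eps**2) * eta**2        # core -, odd +
    E_plus = (w_pp * (E_core_p + h_odd_p)
              + w_mm * (E_core_m + h_odd_m)) / (w_pp + w_mm)
    E_minus = (w_pm * (E_core_p + h_odd_m)
               + w_mp * (E_core_m + h_odd_p)) / (w_pm + w_mp)
    return E_plus, E_minus
```

In the limit $\eta \to 1$ (a pure positive-parity odd-nucleon state), the sketch recovers $E^{\pi} = E_{core}^{\pi} + h_{odd}^{+}$, as exploited below.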
Only in special cases can we easily deduce, from what we know
about the projected core energies, the total projected energy
of the odd nucleus.
Let us illustrate the above in two simple cases. The first one is a
favourable one where the odd nucleon has an average parity which is
roughly equal to one in absolute value (e.g. such that roughly $\eta
= 1$). Then, the total projected energies will be given by
\eq{
E^{\pi} = E_{core}^{\pi} + e_{odd}
}
where $e_{odd}$ is the single particle (mean field) energy of the last
nucleon. Now, we recall that the energy of the core state projected
onto a positive parity is lower than (or equal to) what is obtained
when projecting it on a negative parity. Moreover, within the core
plus particle approach, we may approximate (\`{a} la Koopmans) the
total projected nuclear energy $E(K,\pi)$ of the odd nucleus
corresponding to a ($K,\pi$) configuration for the last nucleon as
\eq{
E(K,\pi) = E_{core}^{+} + e_{odd} = E_{int}(K,\pi) + \Delta
E_{core}^{+}
}
where the intrinsic total energy $ E_{int}(K,\pi)$ results from our
microscopic calculations for the considered single particle ($K,\pi$)
configuration and the corrective energy $\Delta E_{core}^{+}$ is the
gain in energy obtained when projecting the core intrinsic solution on
its positive parity component.
On the contrary, whenever the average parity of the odd nucleon state
is close to zero such that roughly $\eta^2 = \frac{1}{2}$ one would
get for instance for the positive parity projected state
\eq{
E^{+} = \frac{\epsilon^2 E_{core}^{+} + (1 - \epsilon^2)
E_{core}^{-}}{2} + \frac{\epsilon^2 h_{odd}^{+} + (1 - \epsilon^2)
h_{odd}^{-}}{2} \,,
}
which cannot be simply evaluated without a detailed knowledge of
the projected wave functions.
\subsection{Some technical aspects of the calculations \label{Technical
aspects of the calculations}}
We have employed the SkM* \cite{SkM*_1982} parametrization as the main
choice of the Skyrme force for our calculations. This Skyrme
parametrization has been fitted to the liquid drop fission-barrier of
$^{240}$Pu and is usually considered as the standard parametrization
for the study of fission-barrier properties, e.g. in
Refs.~\cite{Hao_2012,Bonneau_2004} within the HF framework and
Refs.~\cite{Baran_2015, Schunck_2014, Staszczak_2013} in the
Hartree-Fock-Bogoliubov calculations.
Two other parametrizations will be also considered here in some
cases, namely the SIII \cite{Beiner_1975} and the SLy5*
\cite{Pastore_2013} parameter sets.
As was done in the study of low-lying band-head spectra in the
ground-state deformation \cite{Koh_2016},
to be consistent with the fitting protocol and to respect Galilean
invariance, we have neglected the terms
involving the spin-current tensor density $J_q^{\mu \nu}$
and the spin-kinetic density $\mathbf{T}_q$ by setting the
corresponding coupling constants
$B_{14}$ and $B_{15}$ (see Appendix \ref{Appendix: Skyrme energy density functional}
for the definition of these constants) to 0 in
the energy-density functional and the Hartree--Fock mean field.
To make this presentation self-contained, we recall in
Appendix~\ref{Appendix: Skyrme energy density functional}, the
expressions of the Skyrme energy-density functional and the
Hartree--Fock fields, together with the coupling constants in terms of
the Skyrme parameters. In addition, we have also neglected the terms
of the form $\mathbf{s} \cdot \Delta \mathbf{s}$ in the energy-density
functional, where $\bf s$ is the spin nucleon density, and the
corresponding terms of the Hartree--Fock Hamiltonian. We shall refer
to this as the \textit{minimal time-odd} scheme where only some
combinations of the time-odd densities appearing in the Hamiltonian
density are taken into account. On the other hand, the \textit{full
time-odd} scheme refers to the case where all time-odd densities are
considered when solving the Hartree--Fock equations.
The pairing interaction has been approximated with a seniority force
which assumes the constancy of so-called pairing matrix elements between
all single-particle states belonging to a restricted
valence space. In our case, the valence space has been chosen to
include all single-particle states up to $\lambda_{\rm q} + X$, where
$\lambda_{\rm q}$ is the chemical potential for the charge state $q$
and $X = 6$ MeV. A smoothing factor of Fermi type with a diffuseness
$\mu = 0.2$ MeV (see e.g. Ref.~\cite{Pillet_2002} for details) has
been used to avoid a sudden variation of the single-particle valence
space. The pairing matrix element is given by
\eq{
g_{\rm q} = \frac{G_{\rm q}}{N_{\rm q} + 11} \,,
}
where $N_{\rm q}$ denotes the nucleon number of charge state $q$. The
pairing strengths $G_{\rm q}$ were obtained by reproducing as best as
possible the experimental mass differences $\Delta_q^{(3)} (N_{\rm
q})$ of some well-deformed actinide nuclei (for odd $N_{\rm
q}$-values, see Ref.~\cite{Koh_2016} for further discussions). The
obtained values when using the SkM* parametrization are $G_{\rm n} = G_{\rm p} = 16.0$ MeV.
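For orientation, the resulting matrix elements are trivially evaluated from these strengths; a small Python sketch, using $^{239}$Pu ($Z = 94$, $N = 145$) as an example:

```python
def pairing_matrix_element(G_q, N_q):
    """Seniority-force matrix element g_q = G_q / (N_q + 11), in MeV."""
    return G_q / (N_q + 11)

# With the SkM* strengths G_n = G_p = 16.0 MeV quoted above,
# for 239Pu (Z = 94, N = 145):
g_n = pairing_matrix_element(16.0, 145)  # neutrons
g_p = pairing_matrix_element(16.0, 94)   # protons
```

As expected from the $1/(N_{\rm q}+11)$ scaling, the proton matrix element exceeds the neutron one for a heavy nucleus with $N > Z$.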
The calculated single-particle states have been expanded in a
cylindrical harmonic oscillator basis. The expansion needs to be
truncated at some point, and this has been performed according to the
prescription of Ref.~\cite{Flocard_1973}
\eq{
\hbar \omega_{\bot} \Big( n_{\bot} + 1 \Big)
+ \hbar \omega_z \Big(n_z + \frac{1}{2} \Big)
\le \hbar \omega_0 \Big( N_0 + 2 \Big) \,,
}
where the frequencies $\omega_z$ and $\omega_{\bot}$ are related to
the spherical angular frequency, $\omega_0$, by $\omega_0^3 =
\omega_{\bot}^2 \omega_z$. A basis size parameter $N_0 = 14$, which
corresponds to 15 spherical major shells, has been chosen. The
two basis size parameters have been optimized for a given Skyrme interaction
at each deformation point of the neighbouring even-even nuclei while
assuming axial and parity symmetrical nuclear shapes. The optimized
values were then used for the calculations of the odd-mass nuclei.
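The truncation condition above determines which cylindrical oscillator orbitals are retained. As a consistency sketch, the following Python snippet counts the retained orbitals (spin not included); in the spherical limit ($\omega_{\bot} = \omega_z$) with $N_0 = 14$ it reproduces the 680 orbitals of 15 spherical major shells.

```python
def count_basis_states(N0, q=1.0):
    """Count cylindrical HO orbitals (spin not included) kept by
    hw_perp (n_perp + 1) + hw_z (n_z + 1/2) <= hw_0 (N0 + 2),
    with q = omega_perp / omega_z and omega_0^3 = omega_perp^2 omega_z."""
    w_perp, w_z = q, 1.0       # frequencies in units of omega_z
    w_0 = q ** (2.0 / 3.0)     # from omega_0^3 = omega_perp^2 * omega_z
    cutoff = w_0 * (N0 + 2)
    count, n_perp = 0, 0
    while w_perp * (n_perp + 1) + 0.5 * w_z <= cutoff:
        n_z = 0
        while w_perp * (n_perp + 1) + w_z * (n_z + 0.5) <= cutoff:
            count += n_perp + 1  # (n_rho, Lambda) degeneracy of the shell
            n_z += 1
        n_perp += 1
    return count
```

For deformed bases ($q \neq 1$) the same routine counts the states kept by the anisotropic cutoff.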
Numerical integrations were performed using the Gauss--Hermite and
Gauss--Laguerre approximations with 16 and 50 mesh points,
respectively. The Coulomb exchange term has been evaluated
in the usual approximation generally referred to as the Slater approximation~\cite{Slater_1951}, even though it had been proposed much earlier by C.~F. von Weizs\"acker \cite{vonWeizsaecker}.
\section{Fission-barrier calculations \label{Results: Fission barriers}}
\subsection{Fission barriers of odd-mass nuclei without rotational correction
\label{Results: Fission-barriers of odd-mass nuclei without rotational correction}}
First the HF+BCS calculations of deformation energy curves as functions of the
quadrupole moment $Q_{20}$, with imposed parity symmetry, were
performed in the two even-even neighboring isotopes of a given
odd-mass nucleus. The calculations for the odd-mass
nucleus were then carried out starting from the converged solutions of
either one of the two even-even neighboring nuclei. It has been
checked that, as it should, the choice of the initial even-even core
solution to be used at a particular deformation point does not affect
the solution of the odd-mass nucleus when self-consistency is achieved.
For odd-mass nuclei, the choice of the blocked states has been limited to
the low-lying band-head states appearing in the ground-state
well. This corresponds to blocking the single-particle states with
quantum numbers $\Omega^{\pi}$ = 1/2$^{+}$, 5/2$^{+}$, 7/2$^{-}$ and
7/2$^{+}$ for $^{239}$Pu and $^{235}$U, and the additional two
single-particle states with $\Omega^{\pi}$ = 3/2$^{+}$ and 5/2$^{-}$
for $^{235}$U. In all cases, the single-particle state with the
desired $K^{\pi}$ quantum numbers nearest to the Fermi level is
selected as the blocked state at every step of the iteration process.
However, this selection criterion does not guarantee a converged
solution: the blocked state may oscillate from one
iteration to the next. In such cases, we were
forced to perform, instead, two sets of calculations, and the blocked
configuration with the lower-energy solution was selected as the
solution for the particular $K^{\pi}$ state.
The results of these calculations where intrinsic parity is conserved
are displayed on Figs.~\ref{figure: Pu-239 with parity symmetry
breaking and no rotational correction} (for $^{239}$Pu) and
\ref{figure: U-235 with parity symmetry breaking and no rotational
correction} (for $^{235}$U). They lead, as is well known, to unduly high
fission barriers for two reasons. One is that a correction for the
spurious rotational energy content (as above discussed and
substantiated below) should be removed throughout the whole
deformation energy curve. The second, specific to the outer barrier, is
related to the imposition of the intrinsic parity symmetry. This is
why parity-symmetry breaking calculations have been considered. Due to
the huge amount of numerical effort that it involves, we have
considered only some of the lower band-head states in the ground-state
deformation. These are band-head states with $K$ = 1/2, 5/2 and 7/2
states for $^{239}$Pu, and 1/2, 3/2 and two 7/2 states for
$^{235}$U. These parity symmetry breaking calculations were performed
starting from a converged parity-symmetric solution of the respective
$K^{\pi}$ configuration beyond the fission-isomeric well. From this
initial solution corresponding to a given elongation (as measured by
$Q_{20}$), we blocked one single-particle state with $K = \Omega$ and
then performed calculations by constraining the nucleus to a slightly
asymmetrical shape at a finite $Q_{30}$ value for a few
iterations. The constraint on $Q_{30}$ was then released and the
calculations were allowed to reach convergence. Once an asymmetric
solution was obtained, we used it for calculating the next $Q_{20}$
deformation point with an increment of 20 barns.
\begin{figure*}
\includegraphics[angle=-90,keepaspectratio=true,scale=1.0]{Pu239_def_energy_curve_SkM.eps}
\caption{Deformation energy curves of
$^{239}$Pu as functions of $Q_{20}$ calculated with the
SkM* parametrization and without taking the rotational energy correction into
account.
The $K^{\pi}$ labels refer to the quantum numbers in the parity symmetrical region (unfilled circles).
The plotted solutions when this symmetry is broken (filled circles)
are obtained by continuity as functions of $Q_{20}$.}
\label{figure: Pu-239 with parity symmetry breaking and no rotational correction}
\end{figure*}
\begin{figure*}
\includegraphics[angle=-90,keepaspectratio=true,scale=1.0]{U235_def_energy_curve_SkM_symmetric.eps}
\caption{Same as Fig.~\ref{figure: Pu-239 with parity symmetry breaking and no rotational correction}
for $^{235}$U.}
\label{figure: U-235 with parity symmetry breaking and no rotational correction}
\end{figure*}
The results of such parity breaking calculations are reported also on
Figs.~\ref{figure: Pu-239 with parity symmetry breaking and no rotational correction}
and \ref{figure: U-235 with parity symmetry breaking and no rotational correction}.
Figure~\ref{figure: cut in Q20-Q30 plane of 5/2+ Pu-239} illustrates, with a specific example,
the transition from a symmetrical equilibrium solution at $Q_{20} = 95$ b to
increasingly asymmetrical equilibrium solutions upon increasing $Q_{20}$.
At the top of the barrier (corresponding roughly to the $Q_{20} = 110-130$ b range)
the attained octupole deformations (as measured by $Q_{30}$) reach large values
which are representative of the most probable fragmentation in the asymmetrical fission mode
experimentally observed at very low excitation energy in this region.
Of course, upon releasing the symmetry constraint, the parity is no longer a good quantum number.
Thus, e.g. on Fig.~\ref{figure: Pu-239 with parity symmetry breaking and no rotational correction},
the parity-broken energy curve associated with the label $1/2^{+}$ corresponds merely to a $K = 1/2$ solution
beyond the critical point where the left-right reflection symmetry is lost.
This may cause some ambiguity in how we define the fission barrier.
For instance, in the case of $^{235}$U (Fig.~\ref{figure: U-235 with parity symmetry breaking and no rotational correction})
we have two $K = 7/2$ solutions of opposite parity.
On Fig.~\ref{figure: parity symmetric and asymmetric deformation energy 7/2 U-235},
we have reported potential energy curves for the two $K = 7/2$ solutions followed by continuity
upon increasing the deformation from the parity conserved region.
It turns out that the energy curves of these two solutions are crossing around $Q_{20} = 115$ barns.
The solution stemming at low $Q_{20}$ from a positive parity configuration becomes energetically favored.
We could thus define a lowest $K = 7/2$ fission barrier by jumping from one solution to the other.
Yet, this overlooks two problems.
One which will be touched upon below, is the projection on good parity states.
The other is the fact that we do not allow here for a residual interaction between the two configurations,
a refinement that is beyond the scope of our current approach.
\begin{figure}
\includegraphics[angle=-90,keepaspectratio=true,scale=0.7]{Pu239_5_over_2_Q20_Q30_SkM.eps}
\caption{Cuts for given values of $Q_{20}$ in the potential-energy surface around the top of the
outer barrier as a function of the octupole moment $Q_{30}$ (given in
barns$^{3/2}$) of the 5/2 blocked configuration of $^{239}$Pu. The SkM* parametrization has been used.}
\label{figure: cut in Q20-Q30 plane of 5/2+ Pu-239}
\end{figure}
\begin{figure}[h!]
\includegraphics[angle=-90,keepaspectratio=true,scale=0.7]{U235_deformation_energy_outer_barrier_top_parity_assymetric.eps}
\caption{A portion of the deformation energy curves of the blocked
$K=7/2$ configurations in $^{235}$U from the fission-isomeric well
up to beyond the top of the outer-barrier. The filled symbols refer
to the local minima as a function of $Q_{30}$ for fixed elongation
$Q_{20}$ while the unfilled symbols refer to the solutions obtained
by imposing a left-right symmetry. The solid line connects the
lowest-energy solutions when the left-right symmetry is broken.
}
\label{figure: parity symmetric and asymmetric deformation energy 7/2 U-235}
\end{figure}
As expected, the parity-symmetry-breaking calculations do yield a
substantial effect on the intrinsic deformation energies around the outer
fission-barrier. Its height for the 1/2 configuration in $^{239}$Pu is
lowered by about 3.9 MeV with respect to the symmetrical barrier,
leading to a calculated height $E_B = 6.3$~MeV. The outer-barrier
height for the 5/2 configuration, in the same nucleus, was found to be
$E_B = 6.6$~MeV, corresponding to an even larger reduction of 4.7 MeV
with respect to the left-right symmetric barrier height. Important
reductions of fission barrier heights are also obtained in the
$^{235}$U case (see Fig.~\ref{figure: U-235 with parity symmetry
breaking and no rotational correction}). One lowers the $K = 1/2$
outer barrier by 3.7 MeV and by 5.4 MeV in the $K = 7/2$ case.
Associated with this substantial gain in energy upon releasing the
left-right reflection symmetry, one observes also a very important
lowering of the elongation at the outer fission saddle point,
resulting in a reduced barrier width and therefore in a strong further
enhancement of the barrier penetrability.
To generate relevant outer barrier heights, one has in principle to
project our solutions on good parity states. In the absence of such
calculations for the odd nuclei under consideration here, one may
propose some reasonable estimates taking stock of what we know about
the projection of a neighboring even-even core nucleus. As discussed
in Subsection~\ref{Theoretical framework}.\ref{Total nuclear energies
within an approximate projection on good parity states} however,
this is only possible whenever the single-particle wavefunction of the
last (unpaired) nucleon corresponds to an average value of the parity
operator which is close to 1 in absolute value. This is not always the
case as exemplified on Fig.~\ref{figure: evolution of 7/2 s.p. level
in U-235} corresponding to two low excitation energy $ K = 7/2$
configurations in the $^{235}$U nucleus. They are followed, as we have
already seen, by continuity from slightly before the isomeric-fission
well to much beyond the outer barrier. One of these two solutions
stemming from a $K^{\pi} = 7/2^{-}$ configuration at small elongation
keeps up to $Q_{20} = 120-130$ b an average parity reasonably close to
1. On the contrary, the other $ K = 7/2$ solution involves in the
outer barrier region, a large mixing of contributions from both
parities. We will therefore be only able to evaluate the fission
barrier of the former and will not propose any outer fission barrier
height for the latter.
\begin{figure}[h!]
\includegraphics[angle=-90,keepaspectratio=true,scale=0.75]{U235_sp_levels_SkM.eps}
\caption{(Top): Evolution of the single-particle energies for two $\Omega$ = 7/2 states near the BCS chemical potential (marked with crosses)
as functions of $Q_{20}$ obtained in the parity asymmetric calculations
of $^{235}$U. The solid line connects the blocked single-particle states as a function of deformation.
(Bottom): Average parity of the above considered blocked single-particle states as a function of $Q_{20}$.}
\label{figure: evolution of 7/2 s.p. level in U-235}
\end{figure}
In the work of Ref.~\cite{Hao_2012}, the fission barrier of the
$^{240}$Pu nucleus was described within the Highly Truncated Diagonalization
approach, to account for pairing correlations while preserving the
particle-number symmetry. Such solutions have been projected on good
parity states after variation. The parity-projection calculation had
no effect on the total binding energy at the top of the outer
fission-barrier, where the value of $Q_{30}$ was found to be very large.
In contrast, projecting on a positive parity state causes a lowering
of the total binding energy in the fission-isomeric well.
Using the notation of Subsection~\ref{Theoretical
framework}.\ref{Total nuclear energies within an approximate
projection on good parity states}, one has obtained in
Ref.~\cite{Hao_2012} for the $^{240}$Pu nucleus, a positive correcting
energy $\Delta E_{core}^{+}$ about equal to 350 keV for the
fission-isomeric state and which vanishes at the top of the outer
barrier.
According to the discussion of Subsection~\ref{Theoretical
framework}.\ref{Total nuclear energies within an approximate
projection on good parity states}, out of all the configurations
considered up to the isomeric state in $^{235}$U and in $^{239}$Pu,
only the $ K^{\pi} = 1/2^{+}$ and $7/2^{+}$ configurations in $^{235}$U,
and the $7/2^{+}$ configuration in $^{239}$Pu qualify to allow us to
propose reasonable estimates of the outer fission barrier heights (see
Table~\ref{table: average parity}).
\begin{table}[h]
\caption{\label{table: average parity}
Expectation value of the parity operator for the blocked
single-particle state nearest to the Fermi level in both considered
odd nuclei corresponding to a specific K configuration in the $Q_{20}
= 120-130$~barn region. The SkM* interaction has been used. Only the
lowest energy solutions have been considered for a given $K$ value. In
the single case $K = 7/2$ where two solutions stemming by continuity
from states with the same $K$ and opposite $\pi$ values were close
enough in energy (a couple of MeV) we have reported the parity
expectation value of both, putting in parentheses the value for the
solution with the higher energy.}
\begin{ruledtabular}
\begin{tabular}{*{5}c}
$K$ & 1/2 & 3/2 & 5/2 & 7/2 \\
\hline
$^{235}$U & 0.76 & $-0.53$ & 0.06 & 0.85 (0.10) \\
$^{239}$Pu & -- & $-0.13$ & -- & 0.83 (0.19)
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{Inclusion of rotational energy and sensitivity of
fission-barrier heights to the moment of inertia
\label{Sensitivity to moment of inertia}}
Table~\ref{table: sensitivity of fission-barrier heights all configurations}
displays the
inner-barrier height $E_A$, the fission-isomeric energy $E_{\rm IS}$
and the outer-barrier height $E_B$, obtained within the Bohr-Mottelson
unified model (therefore including the rotational energy).
Parity symmetric and asymmetric (when available) outer-barrier
heights are both tabulated for completeness. It should be emphasized that
the notation $E_{\rm IS}$ used here is not synonymous with the usual
meaning of fission-isomeric energy often denoted by $E_{\rm II}$. The
latter refers to the energy difference between the lowest-energy
solutions in the fission-isomeric and
ground-state wells. The corresponding
results will be reported in Section
\ref{Spectroscopic properties in the fission-isomeric well},
while the former is the energy difference between
given $K^{\pi}$ quantum numbers in the two wells.
\begin{table*}
\caption{\label{table: sensitivity of fission-barrier heights all configurations}
Inner-barrier height $E_A$, fission-isomeric energy $E_{\rm
IS}$ (with respect to the ground-state solution),
and outer-barrier height $E_B$ for the two considered odd-neutron nuclei.
The SkM* parametrization has been used. Three values (in MeV) are given in all cases,
corresponding to different prescriptions for the moment of inertia (see the discussion in Section~\ref{Theoretical framework}).
\begin{ruledtabular}
\begin{tabular}{*{17}c}
\multirow{2}{*}{Nucleus}& \multirow{2}{*}{$K^{\pi}$}& \multicolumn{3}{c}{$E_A$}& &
\multicolumn{3}{c}{$E_{\rm IS}$}& &
\multicolumn{3}{c}{$E_{B}$ (symmetric)}& & \multicolumn{3}{c}{$E_{B}$ (asymmetric)} \\
\cline{3-5} \cline{7-9} \cline{11-13} \cline{15-17}
& & IB& IB+32\% & IB+100\% & & IB& IB+32\% & IB+100\% & & IB& IB+32\% & IB+100\% & & IB& IB+32\% & IB+100\% \\
\hline
\multirow{6}{*}{$^{235}$U}
& 1/2$^+$& 6.57& 6.83& 7.11& & 2.60& 2.94& 3.30& & 8.60& 9.23& 9.90& & 5.31& 5.83& 6.38 \\
& 3/2$^+$& 6.19& 6.43& 6.69& & 1.48& 1.81& 2.16& & 8.12& 8.72& 9.37& & \\
& 5/2$^+$& 5.83& 6.09& 6.37& & 1.44& 1.78& 2.15& & 9.57& 10.17& 10.80& & \\
& 5/2$^-$& 6.32& 6.59& 6.87& & 3.97& 4.28& 4.62& & 8.21& 8.81& 9.46& & \\
& 7/2$^-$& 6.97& 7.18& 7.41& & 2.70& 3.00& 3.32& & 10.25& 10.85& 11.49& & \\
& 7/2$^+$& 4.75& 5.04& 5.35& & 2.21& 2.55& 2.91& & 7.29& 7.93& 8.61& & 4.03& 4.54& 5.09 \\
\hline
\multirow{4}{*}{$^{239}$Pu}
& 1/2$^+$& 7.43& 7.71& 7.98& & 1.70& 2.05& 2.43& & 7.63& 8.24& 8.88& & \\
& 5/2$^+$& 6.97& 7.25& 7.54& & 0.96& 1.30& 1.67& & 8.83& 9.40& 10.00& & \\
& 7/2$^-$& 8.10& 8.32& 8.56& & 2.74& 3.05& 3.37& & 8.75& 9.32& 9.93& & \\
& 7/2$^+$& 5.90& 6.18& 6.48& & 1.72& 2.05& 2.40& & 6.63& 7.22& 7.86& & 3.80& 4.25& 4.72 \\
\end{tabular}
\end{ruledtabular}
\end{table*}
It can be seen from Table~\ref{table: sensitivity of fission-barrier heights all configurations}
that the rotational-energy correction calculated
using the Inglis-Belyaev formula gives too low an outer fission barrier in
some cases, as compared to the empirical values found to be
within the range of 5.5 to 6.0 MeV (see
Table~\ref{table: fission-barrier heights comparison for odd-mass
nuclei} presented in the next Subsection).
The increase in the IB moments of inertia by 32\% and 100\% as discussed in Section~\ref{Theoretical framework}
results in an increase, on the average, of $E_A$ and $E_{\rm IS}$ by about 0.27 MeV and 0.35 MeV, respectively,
while the parity symmetric $E_B$ increases by about 0.64 MeV.
Among the three different considered energy differences, $E_B$ is
found to be the most sensitive one to the variation of the moment of
inertia as expected in view of the well-known increase of the
rotational energy correction with the elongation.
\subsection{Comparison with empirical values and other calculations}
Before comparing our fission-barrier heights to other available data,
some corrections should be made.
The corrections considered herein stem from approximations of different nature:
the so-called Slater approximation to the Coulomb exchange
interaction, the truncation of the harmonic-oscillator basis,
and the effect of triaxiality around the inner barrier, which is ignored here.
We shall discuss first the corrections to be made for the inner-barrier heights.
A test study on the impact of basis size parameter on the fission-barrier heights
is presented in
Appendix~\ref{Appendix: Effect of basis size on fission-barrier heights}.
As discussed therein, the inner-barrier height is estimated to be
lowered by about 300 keV when increasing the basis size parameter $\rm
N_0$ to a value where this relative energy may be considered to have converged.
Moreover, the use of the Slater approximation was found
in Ref.~\cite{Bloas_2011} to underestimate the
inner-barrier height of $^{238}$U, also by about 300 keV.
Assuming that a similar correction
applies to the two considered nuclei irrespective of the
$K^{\pi}$ quantum numbers, our inner-barrier height should be
increased by the same magnitude.
\begin{table*}[t]
\caption{\label{table: fission-barrier heights comparison for odd-mass nuclei}
Comparison between various estimates of the inner $E_A$ and outer-barrier $E_B$ heights (given in MeV)
of the two considered odd-neutron nuclei.
Our calculated fission-barrier heights corresponding to the experimental $K^{\pi}$ quantum numbers,
at ground state deformation,
are listed in the last column,
whereby these values have been obtained after taking the various corrections into account.
}
\begin{ruledtabular}
\begin{tabular}{*{19}c}
\multirow{2}{*}{Nucleus}& \multirow{2}{*}{$K$}& \multicolumn{2}{c}{Ref. \cite{Robledo_2009,Iglesia_2009}}& &
\multicolumn{2}{c}{Ref. \cite{Moller_2009}}& &
\multicolumn{2}{c}{Ref. \cite{Goriely_2009}}& &
\multicolumn{2}{c}{Ref. \cite{Smirenkin_1993}}& &
\multicolumn{2}{c}{Ref. \cite{Bjornholm_1980}}& &
\multicolumn{2}{c}{present work} \\
\cline{3-4} \cline{6-7} \cline{9-10} \cline{12-13} \cline{15-16} \cline{18-19}
& & $E_A$& $E_B$& & $E_A$& $E_B$& &
$E_A$& $E_B$& & $E_A$& $E_B$& & $E_A$& $E_B$& & $E_A$& $E_B$ \\
\hline
\multirow{6}{*}{$^{235}$U}
& 1/2$^+$& 9.0& 8.0& &
\multirow{6}{*}{4.20}& \multirow{6}{*}{4.87}& &
\multirow{6}{*}{5.54}& \multirow{6}{*}{5.80}& &
\multirow{6}{*}{5.25}& \multirow{6}{*}{6.00}& &
\multirow{6}{*}{5.9}& \multirow{6}{*}{5.6}& &
5.81& 6.18 \\
& 3/2$^+$& -& -& && &&&&&&&&&&& 5.39& - \\
& 5/2$^+$& -& -& && &&&&&&&&&&& 5.07& - \\
& 5/2$^-$& -& -& && &&&&&&&&&&& 5.57& - \\
& 7/2$^-$& \multirow{2}{*}{8.5}& \multirow{2}{*}{7.2}& & & &&&&&&&&&&& 6.11& - \\
& 7/2$^+$& & & & & &&&&&&&&&&& 4.05& 4.89 \\
\hline
\multirow{4}{*}{$^{239}$Pu}
& 1/2$^+$& 11.0& 8.5& &
\multirow{4}{*}{5.73}& \multirow{4}{*}{4.65}& &
\multirow{4}{*}{5.96}& \multirow{4}{*}{5.86}& &
\multirow{4}{*}{6.20}& \multirow{4}{*}{5.70}& &
\multirow{4}{*}{6.2}& \multirow{4}{*}{5.5}& &
6.68& - \\
& 5/2$^+$& 11.5& 9.0& & & &&&&&&&&&&& 6.24& - \\
& 7/2$^-$& \multirow{2}{*}{11.0}& \multirow{2}{*}{8.5}& & & &&&&&&&&&&& 7.26& - \\
& 7/2$^+$& & & & & &&&&&&&&&&& 5.18& 4.52 \\
\end{tabular}
\end{ruledtabular}
\end{table*}
Let us now consider the impact of breaking the axial symmetry
around the top of the inner barrier. When breaking this symmetry, $K$
is no longer a good quantum number and this may pose a problem in the
blocking procedure for an odd-mass nucleus since the single-particle
states will contain to some extent mixtures of $K$ quantum number
components. As a simple ansatz, overlooking these potential
difficulties, we estimate the lowering of the inner barrier of
odd-mass nuclei by using the results obtained in similar triaxial
calculations for even-mass nuclei, taking stock of the results of
Ref.~\cite{Algerian_2016} where the same SkM* parametrization and
seniority residual interaction have been used. Thus assuming that the
effect of including the triaxiality is the same as in $^{236}$U (for
$^{235}$U) and as in $^{240}$Pu (for $^{239}$Pu) for all considered
blocked configurations, we expect a reduction in the inner-barrier
height by about 1.3 MeV.
Taking the three above mentioned corrections into account,
we obtain a total reduction of the inner-barrier height by about 1.3 MeV.
Next, we consider the isomeric energies $E_{IS}$. We estimate that
the finite basis size effect (see Appendix~\ref{Appendix: Effect of
basis size on fission-barrier heights}) results in an overestimation
of this energy by about 0.5 MeV. The exact Coulomb exchange
calculations of Ref.~\cite{Bloas_2011} have shown that the Slater
approximation yielded an underestimation of the isomeric energy of
$^{238}$U of about 0.3~MeV.
As for the outer barrier now, exact Coulomb exchange calculations have
not been performed---due to corresponding very large computing
times---for these very elongated shapes in this region of nuclei. As
discussed in Ref.~\cite{Bloas_2011}, most of the correction comes from
an error in estimating the Coulomb exchange contributions in low
single-particle level density regimes. Therefore, as far as $E_B$ is
concerned, we assume that this correction depends only on the
treatment of the ground-state and therefore should be the same as
what was obtained for $E_A$, namely an underestimation of 0.3~MeV.
The finite basis size effect, as evaluated in a particular case in
Appendix~\ref{Appendix: Effect of basis size on fission-barrier
heights} corresponds to an overestimation of about 0.5 MeV.
The net effect of the corrective terms for the outer-barrier height is
therefore a decrease by about 0.2~MeV.
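For bookkeeping, the corrective terms discussed above can be tallied as follows; the sketch simply sums the MeV values quoted in the text (signs indicate the effect on the computed barrier height):

```python
# Corrective terms for the inner (E_A) and outer (E_B) barrier heights,
# in MeV, as quoted in the text; negative values lower the computed height.
inner_corrections = {
    "basis-size convergence": -0.3,  # larger basis lowers E_A
    "Slater approximation":   +0.3,  # exact Coulomb exchange raises E_A
    "triaxiality at saddle":  -1.3,  # inferred from even-even neighbours
}
outer_corrections = {
    "basis-size convergence": -0.5,
    "Slater approximation":   +0.3,
}

net_inner = round(sum(inner_corrections.values()), 2)
net_outer = round(sum(outer_corrections.values()), 2)
print(net_inner, net_outer)  # -1.3 and -0.2 MeV, as stated above
```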
When including all the above
corrections and using the doubled moment of inertia (IB+100\% scheme),
we obtain inner-barrier heights for the
different blocked configurations ranging from 5.0 to 6.2
MeV for $^{235}$U, and from 5.1 to 7.3 MeV for $^{239}$Pu.
The left-right asymmetric outer-barrier heights
lie within the range of 4.8 to 6.2~MeV for the $^{235}$U
nucleus, and 4.5~MeV for 7/2$^+$ configuration in the $^{239}$Pu nucleus.
Some other fission-barrier heights have been also reported for
comparison in Table~\ref{table: fission-barrier heights
comparison for odd-mass nuclei}. More precisely we consider two sets
of calculations, namely the EFA calculations by Robledo and
collaborators~\cite{Robledo_2009, Iglesia_2009} and the
macroscopic-microscopic calculations by
M\"{o}ller~\cite{Moller_2009}. Three sets of evaluated fission-barrier
heights are also listed: those fitted to reproduce the neutron-induced
fission cross-sections by Goriely and collaborators~\cite{Goriely_2009},
those coming from the RIPL-3~\cite{Capote_2009} database extracted
from empirical estimates compiled by Maslov \textit{et al.}
\cite{Smirenkin_1993}, and the empirical fission-barrier heights of
Bj\o rnholm and Lynn \cite{Bjornholm_1980} obtained from the lowest-energy solution at
the saddle points irrespective of the nuclear angular-momentum and
parity quantum numbers.
Out of these values, only those obtained from Refs.~\cite{Robledo_2009, Iglesia_2009}
using the Gogny D1S force within the Hartree-Fock-Bogoliubov-EFA framework
are directly comparable with our results.
In these works, axial symmetry is assumed.
The resulting fission-barrier heights
are much higher than our calculated values.
This is consistent
with the rather high fission-barrier heights obtained for the
even-even $^{240}$Pu nucleus in the earlier work of Ref.~\cite{Berger_1984}.
It should be stressed that the rather large differences existing between our results
and those reported in Refs.~\cite{Robledo_2009, Iglesia_2009}
cannot be ascribed to the treatment of the
time-reversal symmetry breaking.
In fact, we have checked that equal-filling approximation (EFA) calculations
(corresponding to a particle and not a quasi-particle blocking, though)
affect the total binding energies by a few
hundred keV at most in the parity-symmetric case.
The results of the calculations of $E_A$, $E_{IS}$ and $E_B$
for four different configurations in $^{239}$Pu
are displayed in Table~\ref{table: diff of fission-barrier heights of EFA vs SCB}.
The effect of time-reversal symmetry
breaking terms is found to be approximately constant with deformation.
\begin{table}[h]
\caption{\label{table: diff of fission-barrier heights of EFA vs SCB}
Differences (in keV) between the intrinsic fission-barrier heights ($\Delta E_x = \big(E_x\big)_{EFA} - \big(E_x\big)_{SCB}$ with
$x \equiv A, IS, B$) calculated within
the EFA and SCB framework for $^{239}$Pu with the
SkM* parametrization.}
\begin{ruledtabular}
\begin{tabular}{*{4}c}
$K^{\pi}$& $\Delta E_A$& $\Delta E_{IS}$& $\Delta E_B$ \\
\hline
1/2$^+$& -70& -50& -10 \\
5/2$^+$& -10& -20& 0 \\
7/2$^+$& -10& -20& -10 \\
7/2$^-$& -10& 0& 0 \\
\end{tabular}
\end{ruledtabular}
\end{table}
The comparison with the other sets of data in
Table~\ref{table: fission-barrier heights comparison for odd-mass nuclei}
is less straightforward.
As was mentioned by Schunck \textit{et al.} in Ref.~\cite{Schunck_2014},
due to an uncertainty of about 1 MeV in the empirical
fission-barrier heights, it may be illusory
to attempt a reproduction of empirical values to better than such an error bar.
In our case, the fission-barrier heights calculated with the SkM* parametrization,
including the various corrective terms discussed above, fall easily within this range.
\subsection{Specialization energies \label{Specialization energies}}
Originally (see Refs.~\cite{Wheeler, Newton_1955}), the concept of
specialization energy has been defined as the difference between
fission barrier heights of an odd nucleus with respect to those of
some of its even-even neighbors. Namely one defines, for instance, the
specialization energy for the first (inner) barrier, upon considering
$^{239}$Pu as a $^{238}$Pu core plus one neutron particle, as
\eq{
\label{eq: specialization energy core-plus-particle}
\Delta E_A^{(p)}( ^{239}\text{Pu} , K^{\pi}) = E_A( ^{239}\text{Pu} ,
K^{\pi}) - E_A(^{238}\text{Pu} , 0^{+}) \,,
}
and similarly when considering $^{239}$Pu as a $^{240}$Pu core plus
one neutron hole
\eq{
\Delta E_A^{(h)}( ^{239}\text{Pu} , K^{\pi}) = E_A( ^{239}\text{Pu} ,
K^{\pi}) - E_A(^{240}\text{Pu} , 0^{+}) \,.
\label{eq: specialization energy core-minus-particle}
}
For configurations having a very low or vanishing excitation energy
at the ground-state deformation, one expects these specialization
energies to be positive, since the conservation of quantum numbers
prevents the system from following the \textit{a priori} lowest-energy
configurations at s.p. level crossings. This is of
course the case for experimentally observed spontaneous fission
processes. However, this would not hold whenever one considers
configurations which correspond to a high enough excitation energy in
the ground-state well, as we will show in a specific case (see
Table~\ref{table: specialization energy of Pu-239}).
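The two definitions above, and the averaged quantity reported in Table~\ref{table: specialization energy of Pu-239}, can be sketched as follows (the barrier heights used below are placeholder numbers, not calculated values):

```python
# Averaged specialization energy, combining the particle (core = A-1) and
# hole (core = A+1) definitions of the text; all energies in MeV.

def specialization_energy(e_odd, e_core_lighter, e_core_heavier):
    """Average of the particle and hole specialization energies."""
    delta_particle = e_odd - e_core_lighter  # e.g. 239Pu relative to 238Pu
    delta_hole = e_odd - e_core_heavier      # e.g. 239Pu relative to 240Pu
    return 0.5 * (delta_particle + delta_hole)

# Typical (positive) case: the blocked barrier exceeds both even-even ones.
print(specialization_energy(6.0, 5.5, 5.3))   # positive

# A configuration lying high in the ground-state well but low at the saddle
# can instead yield a negative specialization energy:
print(specialization_energy(5.0, 5.5, 5.3))   # negative
```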
To illustrate this concept
Figure~\ref{fig: specialization energy plutonium} and
Table~\ref{table: specialization energy of Pu-239} present the deformation-energy curves and the
fission-barrier heights, respectively, with a conserved parity
symmetry evaluated within the BM unified model for the four blocked
$K^{\pi}$ configurations of $^{239}$Pu with respect to those of the
neighbouring even-even nuclei.
\begin{figure*}[t]
\includegraphics[angle=-90,keepaspectratio=true,scale=0.6]{plutonium_isotopes_def_energy_curve_SkM.eps}
\includegraphics[angle=-90,keepaspectratio=true,scale=0.6]{plutonium_isotopes_specialization_energy_curve_SkM.eps}
\caption{\label{fig: specialization energy plutonium} Deformation-energy curves of $^{238,240}$Pu (with $K^{\pi} =
0^+$) and $^{239}$Pu (with $K^{\pi} = 1/2^+$, $5/2^+$, $7/2^-$ and
$7/2^+$) as functions of $Q_{20}$ in barns. The Belyaev moments of
inertia have been increased by a factor of 2.
Left panel: absolute energy scale; right panel: relative
scale, taking the normal-deformed minimum as the origin of energy
for all curves.}
\end{figure*}
We see that the inner and
outer-barrier heights for some blocked
configurations---the $7/2^-$ configuration being an excellent
example---are higher than those of the two
neighboring even-even nuclei, as a consequence of
fixing the $K^{\pi}$ quantum numbers along the fission path.
In contrast the $7/2^+$ blocked configuration happens
to yield lower fission-barrier heights as
compared to the two neighboring even-even nuclei.
This is so, as discussed above, because the $7/2^+$ configuration is found at
a much higher excitation energy
in the ground-state deformation well~\cite{Koh_2016} but
at a low excitation energy at the saddle points as
compared to the other blocked configurations.
This results in negative specialization energies, as shown
in Table~\ref{table: specialization energy of Pu-239}.
\begin{table}[h]
\caption{\label{table: specialization energy of Pu-239} Specialization energies
defined here as the average of Eq.~(\ref{eq: specialization energy core-plus-particle})
and (\ref{eq: specialization energy core-minus-particle})
for the four blocked
configurations of $^{239}$Pu (in MeV). The Belyaev moments of inertia have been increased
by a factor of 2.}
\begin{ruledtabular}
\begin{tabular}{*{8}c}
& \multicolumn{7}{c}{$K^{\pi}$} \\
\cline {2-8}
& 1/2$^{+}$& & 5/2$^{+}$& & 7/2$^{+}$& & 7/2$^{-}$ \\
\hline
$\Delta E_{A}$& 0.83& & 0.39& & -0.68& & 1.41 \\
$\Delta E_{B}$& 0.26& & 1.38& & -0.77& & 1.31 \\
\end{tabular}
\end{ruledtabular}
\end{table}
By way of conclusion, one can state that the
fission-barrier profiles (heights and widths) are very much dependent
on the $K^{\pi}$ quantum numbers.
\subsection{Effect of neglected time-odd terms
\label{Effect of neglected time-odd terms}}
In order to probe the effect of the neglected time-odd
densities we have performed calculations of the total
binding energy as a function of deformation with parity symmetry
within the so-called \textit{full time-odd} scheme,
from the normal-deformed ground-state well up to the fission-isomeric well.
For this study, we have also considered
another commonly used Skyrme parameter set,
namely the SIII parametrization~\cite{Beiner_1975},
partly because there the coupling constants $B_{14}$ and
$B_{18}$ (driving the terms involving the spin-current tensor
density $J_q^{\mu\nu}$ and the Laplacian of the spin density,
respectively) are exactly zero. In the \textit{full time-odd}
scheme, the $B_{14}$, $B_{15}$, $B_{18}$ and $B_{19}$
coupling constants are not set to zero but allowed to take the
values resulting from their expression in terms of the Skyrme
parameters (see Appendix~\ref{Appendix: Skyrme energy density functional}).
The contributions to the inner-barrier height $E_A$ and
fission-isomeric energy $E_{\rm IS}$ stemming from the kinetic energy,
the Coulomb energy, the pairing energy as well as the various
coupling-constant terms appearing in the
Skyrme Hamiltonian density are calculated self-consistently in
the \textit{minimal} and \textit{full time-odd} schemes from our converged solutions.
More specifically we denote by $\Delta E_{B_i}'$ the
difference between the $B_i$ contribution to the inner-barrier heights
$\Delta E_{B_i}^{(\rm full)}$ and $\Delta E_{B_i}^{(\rm min)}$
in the \textit{full time-odd} and the \textit{minimal time-odd} schemes,
respectively
\eq{
\label{eq_Delta_E'_Bi}
\Delta E_{B_i}' = \Delta E_{B_i}^{(\rm full)} - \Delta E_{B_i}^{(\rm
min)} \,.
}
Similarly we denote by $\Delta E_{\rm kin}^{(\rm full)}$ and $\Delta E_{\rm kin}^{(\rm min)}$ the
kinetic-energy contribution to the inner-barrier height in both time-odd schemes.
In the same
spirit the abbreviated indices $\rm C$ and $\rm pair$ are used for the
corresponding Coulomb and pairing contributions, respectively.
The sum of the double energy differences
coming from the kinetic, Coulomb, pairing and $B_i$
contributions with $i$ ranging from 1 to 13
is denoted as $\Delta E_{\min}'$
\eq{
\Delta E_{\min}' = \Delta E_{\rm kin}' + \sum_{i=1}^{13} \Delta E_{B_i}' + \Delta
E_{\rm pair}' + \Delta E_{\rm C}' \,.
}
The difference of inner-barrier heights in the two \textit{time-odd}
schemes is therefore given by
\eq{
\label{eq_Delta_E'_A}
\Delta E_{A}' = \Delta E_{\min}' + \Delta E_{B_{14}}' + \Delta
E_{B_{15}}' + \Delta E_{B_{18}}' + \Delta E_{B_{19}}' \,.
}
Similar notations are used for the fission-isomeric energy.
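The bookkeeping of Eqs.~(\ref{eq_Delta_E'_Bi}) to (\ref{eq_Delta_E'_A}) can be sketched as follows; all numerical values are illustrative placeholders (in particular, in the \textit{minimal} scheme the $B_{14}$, $B_{15}$, $B_{18}$ and $B_{19}$ contributions vanish by construction):

```python
# Sketch of the double energy differences between the full and minimal
# time-odd schemes; the per-term contributions (MeV) are placeholders.

def delta_prime(full, minimal):
    """Double energy difference for one contribution (full minus minimal)."""
    return full - minimal

contrib_full = {"kin": 3.10, "rest": 2.80,  # "rest" lumps Coulomb, pairing, B1..B13
                "B14": -0.40, "B15": 0.10, "B18": 0.0, "B19": 0.0}
contrib_min  = {"kin": 3.05, "rest": 2.90,
                "B14": 0.0, "B15": 0.0, "B18": 0.0, "B19": 0.0}  # zero by definition

delta_a_prime = sum(delta_prime(contrib_full[k], contrib_min[k])
                    for k in contrib_full)
print(round(delta_a_prime, 2))  # net change of the inner-barrier height
```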
In Figures~\ref{fig: Pu239_time_odd_scheme_SkM}
and \ref{fig: Pu239_time_odd_scheme_SIII}, the various
energy differences defined above, are represented as histograms for the
SkM* and SIII parametrizations, respectively.
We find that the inner-barrier heights, in general,
decrease when going from a \textit{minimal} to a \textit{full time-odd}
scheme in all considered blocked configurations.
This is reflected by the negative values of $\Delta E_A'$.
The difference in the inner-barrier heights between both
time-odd schemes is overall a competition between the $\Delta
E_{\min}'$ and $\Delta E_{B_{14,15}}'$, while the $\Delta
E_{B_{18,19}}'$ terms have a negligible effect.
More precisely, the $\Delta E_{B_{14}}'$ term involves the combination
of $\overleftrightarrow{J}^2 - \mathbf{s \cdot T}$
local densities and is found to be dominated by the
$\overleftrightarrow{J}^2$ component.
When the $\Delta E_{\min}'$ and
$\Delta E_{B_{14,15}}'$ contributions are of the same magnitude but
with opposite signs, then we do not have a change in the inner-barrier
height, as is the case for the 7/2$^+$ blocked configuration
with the SkM* parametrization.
The effect of the time-odd scheme on the fission-isomeric energy $E_{\rm IS}$
is less clear-cut. However, we can still observe
that the $B_{18}$ and $B_{19}$ contributions remain negligible.
Moreover the time-odd scheme generally has less impact
on the fission-isomeric energy than on the inner-barrier height.
A notable exception is found for the 1/2$^+$ configuration.
This study shows that the
terms proportional to coupling constants which are
not constrained in the original fits
of the Skyrme force can impact the fission-barrier heights in a
non-systematic and non-uniform manner.
This suggests that one cannot absorb this effect into
an adjustment procedure.
\begin{figure*}[h!]
\includegraphics[angle=-90,keepaspectratio=true,scale=0.75]{Pu239_SkM_coupling_constant_time_odd_scheme.eps}
\includegraphics[angle=-90,keepaspectratio=true,scale=0.75]{Pu239_SkM_coupling_constant_time_odd_scheme_IS.eps}
\caption{\label{fig: Pu239_time_odd_scheme_SkM} Energy differences between various
contributions
(see Eqs.~(\ref{eq_Delta_E'_Bi}) to (\ref{eq_Delta_E'_A}) for definitions)
to the inner-barrier height and
isomeric energy obtained in the default
\textit{minimal time-odd} scheme and the \textit{full time-odd} scheme
for several blocked configurations in $^{239}$Pu with
the SkM* parametrization. The difference in the inner-barrier heights
$\Delta E_{A}^{'}$ and fission-isomeric energy $\Delta E_{\rm IS}^{'}$
between the two schemes are also given for each blocked
configuration.}
\end{figure*}
\begin{figure*}[h!]
\includegraphics[angle=-90,keepaspectratio=true,scale=0.75]{Pu239_SIII_coupling_constant_time_odd_scheme.eps}
\includegraphics[angle=-90,keepaspectratio=true,scale=0.75]{Pu239_SIII_coupling_constant_time_odd_scheme_IS.eps}
\caption{\label{fig: Pu239_time_odd_scheme_SIII} Same as Figure \ref{fig: Pu239_time_odd_scheme_SkM}
for the SIII parametrization.}
\end{figure*}
\section{Spectroscopic properties in the
fission-isomeric well
\label{Spectroscopic properties in the fission-isomeric well}}
In this section, we discuss the results obtained in the
fission-isomeric well for the $^{235}$U and $^{239}$Pu nuclei.
We will compare here the results obtained with three Skyrme force parametrizations
(SkM*, SIII and SLy5*).
In the vicinity of the isomeric state, we will make the approximation that the parity mixing is indeed very small,
such that (with the notation of Subsection II.D)
\eq{\epsilon \sim 1}
and similarly for an odd nucleon state stemming from a positive parity s.p. configuration
\eq{\eta \sim 1 \text{\hspace{1cm}} h^{+} \sim e_{odd} \text{\hspace{1cm}} h^{-} \sim 0 }
while for an odd nucleon state stemming from a negative parity s.p. configuration
\eq{\eta \sim 0 \text{\hspace{1cm}} h^{-} \sim e_{odd}
\text{\hspace{1cm}} h^{+} \sim 0 \,. }
As a result, for a positive-parity nuclear configuration, the projected energy of the fission-isomeric state will be approximated by
\eq{ E(K^{+}) \sim E_{int}(K^{+} ) + \Delta E_{core}^{+}}
while in the negative parity case we will have
\eq{ E(K^{-}) \sim E_{int}(K^{-} ) + \Delta E_{core}^{+}}
where the intrinsic energies $ E_{int}(K^{\pm} ) $ are the energies of our microscopic blocked HF + BCS calculations.
\subsection{Static quadrupole moment
\label{Static quadrupole moments}}
Before discussing relative energy quantities in the fission-isomeric well,
we assess the quality of deformation properties of our solutions in this well by
calculating the intrinsic quadrupole moments for
some relevant $K^{\pi}$ configurations in the fission-isomeric well.
The obtained values are listed in
Table~\ref{table:quadrupole moment in fission isomeric well}.
To the best of our knowledge, experimental values
are available in $^{239}$Pu only~\cite{Habs_1977,Backe_1979}.
In this nucleus, our values calculated for the $5/2^+$ configuration
with the three considered Skyrme force parametrizations are all
found to agree with experiment within the quoted error bars.
\begin{table}[h]
\caption{\label{table:quadrupole moment in fission isomeric well} Calculated intrinsic
quadrupole moments in the isomeric well
for the two lowest-energy states
in $^{235}$U and the two states corresponding to the
experimentally known \cite{Habs_1977,Backe_1979} $K^{\pi}$ configuration in $^{239}$Pu.
In addition, the values obtained for the $11/2^+$ state in $^{239}$Pu are also reported.}
\begin{ruledtabular}
\begin{tabular}{*{9}c}
Nucleus & $K^{\pi}$& SkM*& & SIII& & SLy5*& & Exp \\
\hline
\multirow{2}{*}{$^{235}$U} & 5/2$^+$& 32.9& & 31.8& & 33.4& & - \\
& 11/2$^+$& 32.5& & 31.8& & 32.3& & - \\
\hline
\multirow{3}{*}{$^{239}$Pu} & 5/2$^+$& 34.1 & & 33.2 & & 34.8 & & 36 $\pm$ 4 \\
& 9/2$^-$& 34.1 & & 33.2 & & 34.5 & & - \\
& 11/2$^+$& 34.5 & & 33.9 & & 34.3 & & - \\
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{Fission-isomeric energy, band heads and rotational bands
\label{Fission-isomeric energy, band heads and rotational bands}}
Above the lowest-energy solution in the fission-isomeric well,
several band-head states are found within 1 MeV.
They are displayed in
Figures~\ref{fig: Pu239 fission isomeric band head}
and \ref{fig: U235 fission isomeric band head}
for the $^{239}$Pu and $^{235}$U nuclei, respectively.
These results have been obtained with the inclusion of the rotational-energy
correction, using the approximate Thouless-Valatin
corrective term in the moment of inertia (assuming a 32\% increase above the IB value).
\begin{figure*}[h]
\hspace*{-0.75cm}
\includegraphics[angle=-90,keepaspectratio=true,scale=0.75]{Pu239_energy_spectra_IS_SkM_SIII.eps}
\caption{\label{fig: Pu239 fission isomeric band head} Band-head
energy spectra of $^{239}$Pu calculated with the SLy5*, SkM* and
SIII parametrizations in the isomeric well with the inclusion of the
rotational correction. The standard Thouless-Valatin correction of
Ref. \cite{Libert_Girod_Delaroche_1999} beyond the Belyaev result has been
taken into account for the moments of inertia of each
band. The rotational spectra built upon the lowest-energy
$5/2^+$ state \textit{(rot band)} are also shown on the second
column of each Skyrme force. The experimental data are taken from
Refs. \cite{Browne_2014,Browne_2003}. The fission-isomeric energy
defined as the energy difference between the lowest-energy solution
in the ground state and the fission-isomeric well is denoted by
$E_{\rm II}$.}
\end{figure*}
\begin{figure*}[h]
\hspace*{-1cm}
\includegraphics[angle=-90,keepaspectratio=true,scale=0.75]{U235_energy_spectra_IS_SLy5_SkM_SIII.eps}
\caption{\label{fig: U235 fission isomeric band head} Same as Figure~\ref{fig: Pu239 fission isomeric band head}
for $^{235}$U.}
\end{figure*}
Let us first discuss the energy spectra for the $^{239}$Pu nucleus
for which a comparison with the experimental data of
Refs.~\cite{Browne_2014,Browne_2003} is possible.
As shown in Fig.~\ref{fig: Pu239 fission isomeric band head},
the experimental ground-state quantum numbers
in the normal-deformed well are
$1/2^+$ while in the fission-isomeric well they are $5/2^+$.
Our calculated results with the SkM* and the SIII parametrizations reproduce
these data.
On the contrary, the calculations with the SLy5* parameter set fail
to do so, as they yield a $5/2^+$ ground state in the normal-deformed
well located 160 keV below the 1/2$^+$ state, and a $1/2^+$
lowest-energy state in the fission-isomeric well.
Moreover, the $K^{\pi} = 9/2^-$ state calculated with SLy5* appears at
a too high excitation energy of more than 500~keV as compared to the
experimental value of about 200~keV.
In contrast, the excitation energy of this $9/2^-$ state is found in
much better agreement with data for SkM* and SIII (139 keV and 127
keV, respectively). The agreement of these values with the data is
expected to improve further when including the effect of the Coriolis
coupling, as suggested by the work of
Ref.~\cite{Libert_1980}. In addition, an $11/2^+$ excited state is
predicted at 151 keV, 129 keV and 299 keV with the SLy5*, the SkM* and
the SIII parametrizations, respectively. This state was also predicted
(at a 44~keV excitation energy) in the Hartree--Fock--Bogoliubov
calculations with the Gogny force by Iglesia and collaborators~\cite{Iglesia_2009}.
The rotational band built on the $5/2^+$ band-head state can also be
compared with experimental data: the calculated energies for the first
two excited states are found to be rather similar within the three
considered Skyrme parametrizations in use, and to compare very well
with data.
Let us now move the discussion to the results for the $^{235}$U nucleus
displayed in Fig.~\ref{fig: U235 fission isomeric band head}.
To the best of our knowledge, there are
no experimental data available for comparison with our calculated
values in the superdeformed well of this nucleus. There are, however,
some calculations performed with the Gogny force in the work of
Ref.~\cite{Robledo_2009} which predict a $5/2^+$ ground state with a
first 11/2$^+$ excited state at 120 keV in the fission-isomeric
well. The same level sequence is also obtained in our calculations
with the SkM* and the SLy5* Skyrme parametrizations, although the
11/2$^+$ state is located at a much higher energy in the latter
parametrization. The calculations with SIII yield the opposite level
sequence, with a 5/2$^+$ state 66 keV above the 11/2$^+$ ground state.
\subsection{Fission-isomeric energies}
Let us discuss now the fission-isomeric energy $E_{\rm II}$.
Table~\ref{table: effect of moment of inertia on fission isomeric energy}
displays the fission isomeric energies $E_{\rm II}$ defined as the
difference between the energies of the solutions lowest in energy in
both the ground state and fission-isomeric wells (irrespective of
their $K^{\pi}$ quantum numbers), namely with an obvious notation
\eq{
E_{\rm II} = E_{0}^{\rm IS} - E_{0}^{\rm GS} \,.
}
As seen on Table~\ref{table: effect of moment of inertia on fission isomeric energy}
(see also Figs.~\ref{fig: Pu239 fission isomeric band head}
and~\ref{fig: U235 fission isomeric band head}) when using the
standard Thouless-Valatin correction of $32 \%$ over the IB estimate,
the Skyrme SIII interaction yields values of $E_{\rm II}$ which are
much too high. This is not very surprising in view of the well-known
defect of its surface-tension property. On the contrary, the too low
value obtained with the SkM$^{*}$ interaction, which provides very good
Liquid Drop Model barrier heights, must be explained by some inadequate
account of relevant shell-effect energies. The last interaction
(SLy5$^{*}$) provides reasonable $E_{\rm II}$ values (yet slightly too
low).
Now, as discussed before, the rotational-energy corrections calculated
using the Belyaev moments of inertia were found to be too large,
resulting in an underestimation of the fission-barrier heights.
As a rough cure for this, one may increase the IB moments
of inertia by a factor of 2. The resulting $E_{\rm II}$ values are listed in
Table~\ref{table: effect of moment of inertia on fission isomeric energy}.
\begin{table*}
\caption{Fission-isomeric energy $E_{\rm II}$ for three different
prescriptions for the moment of inertia. The $K^{\pi}$ quantum numbers of the
ground-state solution in the fission-isomeric well are those displayed in
Figs.~\ref{fig: Pu239 fission isomeric band head} and
\ref{fig: U235 fission isomeric band head},
except for $^{235}$U with the SkM* parametrization and when increasing the Belyaev result by a factor of 2
(column labeled IB+100\%),
for which the $K^{\pi} = 11/2^+$ blocked configuration
has been considered.
}
\begin{ruledtabular}
\begin{tabular}{*{14}c}
\multirow{2}{*}{Nucleus}& \multicolumn{3}{c}{SLy5*}& & \multicolumn{3}{c}{SkM*}& &
\multicolumn{3}{c}{SIII}& & \multirow{2}{*}{Exp} \\
\cline{2-4} \cline{6-8} \cline{10-12}
& IB& IB+32\%& IB+100\%& & IB& IB+32\%& IB+100\%& & IB& IB+32\%& IB+100\%& \\
\hline
$^{235}$U& 2.36& 2.73& 3.11& & 1.46& 1.83& 2.20& & 3.62& 3.97& 4.35& & - \\
$^{239}$Pu& 2.30& 2.69& 3.10& & 1.08& 1.43& 1.80& & 3.42& 3.84& 4.30& & 3.1 \\
\end{tabular}
\label{table: effect of moment of inertia on fission isomeric energy}
\end{ruledtabular}
\end{table*}
It has been checked that the band-head energy spectra in
the fission-isomeric well are then shifted by only
some tens of keV with respect to the values shown in
Figs.~\ref{fig: Pu239 fission isomeric band head} and
\ref{fig: U235 fission isomeric band head}.
The $K^{\pi}$ quantum numbers of the
lowest-energy solutions in all cases remain unchanged
except for $^{235}$U with the SkM* interaction. In this case, we have
a change in the level ordering of the ground and first excited
states, where the quoted value of $E_{\rm II} = 2.20$~MeV involves
the $K^{\pi} = 11/2^+$ blocked
configuration in the fission-isomeric well.
\section{Concluding remarks\label{Conclusion}}
From the above calculations of fission barriers in odd-mass
nuclei within a self-consistent blocking approach we can draw the
following conclusions.
First, barrier heights and fission-isomeric energies depend on the time-odd
scheme in a non-systematic way. Indeed, in the studied nuclei, they are
found to vary between zero and almost 0.8 MeV with the nucleus and with
the quantum numbers of the blocked configuration. This effect cannot be
absorbed in the adjustment of the Skyrme parameters. In particular the
calculated specialization energies strongly vary with the $K$ and
$\pi$ quantum numbers and can be negative when the blocked
configuration lies rather high in energy in the ground-state well and
rather low at the saddle point.
Moreover, the equal-filling approximation, defined in our work as an
equal occupation of the blocked single-particle state and its
time-reversed state as opposed to the definition of
Ref.~\cite{Schunck_2010} based on one-quasi-particle states, is found to
have no significant effect on deformation and is a fairly good
approximation to calculate relative energies, such as the
fission-barrier heights and fission-isomeric energies.
Regarding spectroscopic properties in the ground-state and
fission-isomeric wells, we have found overall a fair agreement with
available data. This gives us some confidence in the deformation
properties of the fissioning nuclei, especially in the barrier
profiles as functions of the $K^{\pi}$ quantum numbers.
In this context, we recall that we have imposed axial symmetry throughout the whole potential energy curve so that the $K$ quantum number remains meaningful. As already discussed, this may be deemed as a reasonable assumption in view of dynamical calculations performed for $^{240}$Pu and heavier nuclei, showing that the least-action path
is closer to an axial path than the triaxial static one around the top
of the inner and outer barriers
\cite{Gherghescu99,Sadhukhan14,Zhao16}. Moreover, as far as class I
states are concerned, it has been established from gamma decay of
even-even and odd-odd rare earth nuclei formed by neutron capture,
that the $K$ quantum number is reasonably conserved even at the
excitation energies reached by neutron capture in the thermal and
resonance energy domains (see, e.g., \cite{OSLO}).
Regardless of the validity of the axial symmetry assumption, our
calculations of fission barriers with fixed $K$ values lead us to expect
that the penetrabilities of inner and outer fission barriers will
strongly vary with the blocked configurations, resulting in a
widespread distribution of fission-transmission coefficients as a
function of $K$ and $\pi$ for a fixed $J$ quantum number. This can a
priori impact the fission cross section computed in the optical model
for fission with the full $K$ mixing approximation (see, for instance,
\cite{Sin06,Sin08}).
As a matter of fact, fission cross-section calculations require in
principle the knowledge of penetrabilities for each discrete
transition state, that is, the barrier profile and inertia parameters
for each discrete state at barrier tops. In
Fig.~\ref{fig_transition_states} we show such transition states as
rotational bands built on various low-lying blocked
configurations. They are calculated in the above discussed
Bohr--Mottelson approach using Skyrme-HFBCS intrinsic solutions with
self-consistent blocking.
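In the Bohr--Mottelson picture, each blocked configuration generates a
rotational band on top of its band head. Schematically (neglecting the
decoupling term that arises for $K=1/2$ bands, and writing
$\mathcal{J}$ for the moment of inertia at the saddle point):

```latex
\begin{equation}
E(J,K^{\pi}) \simeq E_{K^{\pi}}
  + \frac{\hbar^{2}}{2\mathcal{J}}\left[J(J+1) - K(K+1)\right],
\qquad J = K,\, K+1,\, K+2,\,\dots
\end{equation}
```

so that the spectrum of transition states at the barrier top is fixed
by the band-head energies $E_{K^{\pi}}$ and the moments of inertia of
the blocked configurations.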
\begin{figure*}
\hspace*{-0.5cm}
\includegraphics[angle=-90,keepaspectratio=true,scale=0.7]{Pu239_energy_spectra_Broyden_top_w_correction.eps}
\caption{Example of calculated transition states at the top of the inner barrier of $^{239}$Pu.
\label{fig_transition_states}}
\end{figure*}
Results of this kind can provide microscopic input for the discrete
contribution to the fission transmission coefficients, along the lines
of Ref.~\cite{Goriely_2009}. Note that, in this work, odd-mass nuclei
were not considered in a time-reversal symmetry breaking approach and
that the inertia parameters were calculated within a hydrodynamical
model. A natural extension, requiring very long computing times, is to
compute these parameters from a microscopic model as in the non-perturbative ATDHFB approach~\cite{Yuldashbaeva_1999},
consistently with the barrier profiles for each blocked configuration.
Finally, it is to be noted that in such dynamical calculations, and
even in static calculations, the phenomenological quality of the
pairing interaction is of paramount importance. In our case, its
intensities have been determined by a fit based on explicit
calculations of odd-even mass differences in the actinide
region. However, such approaches suffer a priori from the deficiencies
inherent in a particle-number non-conserving theoretical framework,
particularly so if strong pairing fluctuations are to be
considered. To remedy this in an explicit and manageable fashion, we
intend to perform calculations similar to those presented here, using
the so-called Highly Truncated Diagonalization Approach of
Ref.~\cite{Pillet_2002}.
\section{Introduction}
Cardiovascular diseases (CVDs) have become the leading cause of death worldwide: 17.9 million people died from CVDs in 2019, and three-quarters of these deaths occurred in lower-income communities, according to the World Health Organization (WHO).
Electrocardiogram (ECG) is a widely used gold standard [1] for cardiovascular diagnostic procedures. By measuring the electrical activity of the heart and conveying information about heart functionality, continuous ECG monitoring is proven to be beneficial for early detection of CVDs. Patients at higher risk, such as the aging population, can benefit from continuous ECG monitoring. However, most conventional ECG equipment is restrictive of users' activities. For example, the Apple Watch provides wrist-based ECG monitoring but requires the users to touch the watch for up to 30 seconds, making these solutions sporadic.
On the other hand, photoplethysmogram (PPG) is an optically obtained signal that can be used to detect blood volume changes in the microvascular bed of tissue. Compared to ECG, the process of deriving PPG is noninvasive, more convenient to set up, and low-cost. PPG is more user-friendly for long-term continuous monitoring without constant user participation. PPG technology thus represents a convenient and low-cost technology that can be applied to various aspects of cardiovascular monitoring, including the detection of blood oxygen saturation, heart rate, blood pressure, cardiac output, respiration, arterial aging, endothelial function, microvascular blood flow, and autonomic function [2]. However, PPG usage is constrained by inaccurate heart-rate estimation and several other limitations compared to conventional ECG monitoring devices, owing to factors such as skin tone, diverse skin types, motion artifacts, and signal crossovers, among others.
\begin{figure}
\centering
\includegraphics[width=10cm]{ontransfomer.png}
\caption{The Thought Process of Why the Transformer is Chosen in Performer.}
\end{figure}
In theory, PPG and ECG are physiologically related, as they embody the same cardiac process in two different signal sensing domains [3]. The peripheral blood volume change recorded by PPG is influenced by the contraction and relaxation of the heart muscles, which are controlled by the cardiac electrical signals triggered by the sinoatrial node. Is there a way to create a new digital biomarker that combines the convenience of PPG and the accuracy of ECG for effective and continuous cardiac monitoring? The purpose of this research is to develop a universal solution to reconstruct ECG from PPG and to generate a novel digital biomarker that connects the benefits of both PPG and ECG for CVD detection.
\section{Related Work}
The research on PPG-based ECG inference is still in its infancy, and only a few prior studies have been dedicated to this problem.
One approach is feature-based machine learning. A computational-parameter model extracts features from ECG (e.g., QRS) or from PPG (e.g., pulse transit time) [4]; the Discrete Cosine Transform applies a mathematical model to study the correlation [5]. However, gaps remain in the limited feature sets and the lack of non-linear information representation. With the recent rise of deep learning, Convolutional Neural Networks (CNNs) have been used in signal processing to localize key features and reduce noise, for both PPG and ECG. A large-scale study published in Nature Medicine [6] used smartphone-based PPG signals to achieve an AUC of 0.75 at a 95$\%$ confidence level for diabetes prediction via a CNN, with known limits on long-term dependencies due to locality sensitivity, as well as a lack of contextual information. Others use Recurrent Neural Networks (RNNs) with an encoder/decoder architecture for sequence-to-sequence learning, characterizing the temporal and spatial relationships of the data [7], although RNNs are sensitive to signal noise and must be computed sequentially in time. A recent study applies Generative Adversarial Networks (GANs) with a generator/discriminator architecture to generate artificial data for data augmentation [8], synthesizing ECG from PPG, although GANs suffer from unstable training, and the unsupervised learning method makes them harder to train and prone to random oscillations.
To address the above gaps and challenges, this research proposes a novel deep learning architecture, Performer (PPG-to-ECG Reconstruction Transformer). It is a Transformer-based deep learning architecture [9] with a self-attention mechanism [10] that utilizes the positional and contextual information of PPG/ECG waveforms and performs sequence-to-sequence processing via an encoder/decoder. It also introduces a Shifted Patch-based Attention algorithm, marking the first use of Transformers for biomedical waveform reconstruction.
\section{Method}
The major motivations for the new deep learning architecture include the following:
\begin{itemize}
\item ECG/PPG signals are usually evaluated for their amplitude, time intervals, direction, scale, moving connections, angles, and shapes to understand cardiac activity.
\begin{itemize}
\item Performer uses positional embeddings and the contextual relationship of each sequence to capture these key signal features.
\end{itemize}
\item The PPG waveform is noisy and easily impacted by motion, light, skin type, etc. Denoising is critical.
\begin{itemize}
\item Performer uses self-attention to give importance to sequences with more contextual information. The weights are learned to determine which sequences should be attended to versus ignored.
\end{itemize}
\item Both short-range and long-range variations of the waveform need to be considered for disease diagnosis (e.g., heart rate variability).
\begin{itemize}
\item Performer addresses short- and long-range dependencies through sequence-to-sequence training and parallel attention processing.
\end{itemize}
\item Multiple vital signs are usually cross-referenced for the final disease detection.
\begin{itemize}
\item Performer creates multimodal training to incorporate both PPG and the reconstructed ECG for CVD detection.
\end{itemize}
\end{itemize}
Therefore, Performer uses a hierarchical design [11] to handle non-linear, complex information representation; processes longer ranges of signals via seq2seq and attention; addresses noise via a self-attention module that focuses on the key portions of the waveforms; builds multi-head attention and positional embeddings to provide contextual information about the relationships; performs parallel computing on each patch for faster performance; introduces Shifted Patch-based Attention to segment waveforms as a universal solution; and creates a novel multimodal architecture capturing both the PPG and the reconstructed ECG for higher performance.
\subsection{Sequence to Sequence Transformer}
The Transformer, a de-facto standard originally used for natural language processing (NLP), has emerged as a general-purpose machine learning architecture for almost every other machine learning task [12]. However, this is the first time Transformers have been used for PPG/ECG waveform signal reconstruction.
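To make the self-attention mechanism concrete, the following is a minimal sketch of scaled dot-product attention, the operation at the core of any Transformer, including Performer; the toy shapes and random inputs are illustrative, not the paper's configuration.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V -- the core self-attention operation."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # pairwise affinities between sequence positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # each row is a probability distribution
    return weights @ V, weights

# Toy example: 4 "waveform patch" embeddings of dimension 8 attending to each other.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(X, X, X)      # self-attention: Q = K = V = X
```

The attention weights `w` are exactly what is visualized later in the attention-map figure: learned importance assigned by each position to every other position.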
\begin{figure}
\centering
\includegraphics[width=11cm]{performer.png}
\caption{The Overall Architecture of Performer.}
\end{figure}
Figure 1 illustrates the thought process behind adopting the Transformer in this research.
\begin{figure}
\centering
\includegraphics[width=5cm]{reconstruction.png}
\caption{The PPG to ECG Reconstruction Encoder / Decoder.}
\end{figure}
\subsection{Overall Architecture}
Performer consists of three parts: the initial data-processing model for the raw ECG and PPG data, the PPG-to-ECG reconstruction model, and the PPG/ECG multimodal model for CVD detection, as presented in Figure 2.
\subsection{PPG to ECG Reconstruction}
As illustrated in Figure 3, an encoder/decoder with an attention mechanism is utilized to reconstruct ECG from PPG. This is the first time PPG has been translated to ECG through Transformer-based sequence-to-sequence translation via an encoder/decoder. The model builds multi-head attention and positional embeddings to incorporate information about the relationships.
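The positional embeddings mentioned above can be sketched with the standard sinusoidal form of the original Transformer [9]; Performer's exact embedding scheme is not spelled out in the text, so this standard form is an assumption.

```python
import math

# Standard sinusoidal positional encoding (Vaswani et al. [9]); an illustrative
# assumption for how Performer tags each patch with its position.
def positional_encoding(seq_len, d_model):
    """Return a seq_len x d_model table of sin/cos position codes."""
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)          # even dimensions: sine
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)  # odd dimensions: cosine
    return pe

pe = positional_encoding(seq_len=16, d_model=8)   # one code per patch position
```

Each patch embedding is summed with its positional code before entering the encoder, so the attention layers can distinguish where in the waveform a patch occurred.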
\subsection{Shifted Patch-based Attention}
Transformers are mostly used in NLP and computer vision; there is minimal existing research on waveforms, and effective tokenization is at the frontier of this new research.
The segmentation of waveforms is critical for embedding and representation to capture key patterns and relationships, and a universal solution is required to optimize the performance. A sequence length that is too short results in stochastic wave prediction, while one that is too long results in narrow-range curve prediction.
\begin{figure}
\centering
\includegraphics[width=15cm]{spa.png}
\caption{The Shifted Patch-based Attention (SPA).}
\end{figure}
This research proposes a patch-based algorithm organized in hierarchical stages, each capturing specific aspects of the PPG/ECG signals. The patches start at length 16 and are merged into lengths 32, 64, 128, and 256; the shifted patches are offset by half of the first patch.
To capture the signal features, a set of patches of different sizes is used to feed the various sequence lengths into Performer. To capture cross-patch connections, a shifted patch mechanism is proposed to allow attention among the patches, as presented in Figure 4.
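The patch construction described above can be sketched as follows; the patch length of 16 and the half-patch shift follow the text, while the function names are illustrative.

```python
# Minimal sketch of SPA's patch splitting: regular patches plus a copy
# shifted by half a patch so attention can bridge patch borders.
def make_patches(signal, patch_len=16):
    """Split a 1-D signal into non-overlapping patches of length patch_len."""
    n = len(signal) // patch_len * patch_len
    return [signal[i:i + patch_len] for i in range(0, n, patch_len)]

def make_shifted_patches(signal, patch_len=16):
    """Same split, offset by half a patch, straddling the original borders."""
    offset = patch_len // 2
    return make_patches(signal[offset:], patch_len)

signal = list(range(128))               # stand-in for a short waveform window
patches = make_patches(signal)          # 8 patches of length 16
shifted = make_shifted_patches(signal)  # 7 patches covering the patch borders
```

Attending over both sets lets the model see features that would otherwise be cut in half at a patch boundary; the hierarchical stages then merge these patches into lengths 32, 64, 128, and 256.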
\subsection{Multimodal}
Both PPG and ECG reflect the same physiological process in different signal spaces. Peripheral blood volume (measured by PPG) is affected by the activity of the left ventricle (captured by ECG), and both originate from the sinoatrial (SA) node. Certain features extracted from PPG (e.g., pulse rate variability) are highly correlated with corresponding metrics from ECG (e.g., heart rate variability) [2].
Performer also utilizes a multimodal capability: it builds a multimodal Transformer that takes as input a novel digital biomarker, incorporating both the PPG signal and the reconstructed ECG signal, to perform CVD detection, capturing the signals from both modalities for higher performance, as presented in Figure 5.
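A minimal sketch of forming the two-channel multimodal input (PPG paired with its reconstructed ECG) follows; the exact tensor layout used by Performer is not specified in the text, so this channel-pairing format is an assumption.

```python
# Illustrative sketch: pair each PPG sample with its reconstructed-ECG
# counterpart to form the multimodal "digital biomarker" input.
def make_biomarker_input(ppg_window, ecg_reconstructed):
    """Stack PPG and reconstructed ECG as two channels per time step."""
    assert len(ppg_window) == len(ecg_reconstructed)
    return [[p, e] for p, e in zip(ppg_window, ecg_reconstructed)]

x = make_biomarker_input([0.1, 0.2, 0.3], [0.0, 0.5, 1.0])
```

The multimodal Transformer then consumes this two-channel sequence, so features from either modality can contribute to the CVD classification.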
\subsection{A Novel Digital Biomarker of CVD Classification}
As a result, a novel digital biomarker is created, as shown in Figure 6.
\section{Experiment}
\subsection{Datasets and Pre-processes}
This research uses the public CVD-with-PPG datasets MIMIC III (40,000 patients) [13] and PPG-BP (219 patients) [14], and the public PPG/ECG datasets UQVSD (3,000+ minutes) [15], DaLiA (1,800+ minutes) [16], and BIDMC (400+ minutes) [17].
\begin{figure}
\centering
\includegraphics[width=6cm]{multimodal.png}
\caption{The Multimodal Architecture of Performer, Taking as Input a Novel Digital Biomarker of CVD: PPG with the Reconstructed ECG.}
\end{figure}
During data pre-processing, all data are resampled to 128 Hz; standard filtering is applied to denoise with BioSPPy (a biosignal library) followed by 1-D convolution; the data range is normalized to [-1, 1]; the signals are cut into overlapping 4-second windows; and patches of lengths 16, 32, 64, 128, and 256 are created to build the hierarchical layers.
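The windowing and normalization steps above can be sketched as follows; the 128 Hz sampling rate, [-1, 1] range, and 4-second windows follow the text, while the 50$\%$ overlap value is an assumption.

```python
# Sketch of the normalization and overlapping-window steps of pre-processing
# (50% overlap is an illustrative assumption; the paper only says "with overlaps").
def normalize(signal):
    """Rescale a signal to the range [-1, 1]."""
    lo, hi = min(signal), max(signal)
    return [2.0 * (x - lo) / (hi - lo) - 1.0 for x in signal]

def windows(signal, fs=128, seconds=4, overlap=0.5):
    """Cut a signal into overlapping fixed-length windows."""
    size = fs * seconds                 # 512 samples per 4-second window at 128 Hz
    step = int(size * (1.0 - overlap))
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

sig = normalize([float(i % 97) for i in range(1280)])   # 10 s of toy data at 128 Hz
wins = windows(sig)
```

Each 512-sample window is then split into the hierarchical patches described in the previous section.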
\subsection{Reconstruction}
The model performance of Performer is encouraging: it achieves a state-of-the-art result of 0.29 RMSE, surpassing the related work, as presented in Figure 7.
\begin{figure}
\centering
\includegraphics[width=12cm]{biomarker.png}
\caption{A Creation of a Novel Digital Biomarker of CVDs.}
\end{figure}
\subsection{Attention Map}
To provide a visual of the attention map and understand what has been learned through the attention mechanism, the attention map applied to the last layer of the architecture is highlighted in Figure 8. It shows that the model learned to focus on the PQRST complexes, the most important part of the ECG signal.
\begin{figure}
\centering
\includegraphics[width=10cm]{rmse.png}
\caption{Comparison of Model Performance in RMSE. To assess the fidelity between the generated ECG and the expected ECG, the RMSE (Root Mean Squared Error) is calculated above. A comparison of this metric with other reconstruction models applied to the same dataset (BIDMC) is also presented, indicating the improved performance of the Performer architecture.}
\end{figure}
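For reference, the RMSE metric reported above can be computed as follows (on toy vectors, not the paper's data):

```python
import math

def rmse(y_true, y_pred):
    """Root Mean Squared Error between expected and generated waveforms."""
    assert len(y_true) == len(y_pred)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Toy check: a constant offset of 1 between the two signals gives RMSE = 1.
error = rmse([0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0])   # -> 1.0
```

Since the signals are normalized to [-1, 1] during pre-processing, an RMSE of 0.29 is measured on that normalized scale.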
\subsection{CVD Detection}
Performer detected CVDs with promising results on the MIMIC III dataset. In this experiment, Performer was trained on paired PPG/ECG data labeled for four cardiac diseases from the MIMIC III dataset. These promising results indicate the effectiveness of the Performer model, which achieves state-of-the-art results compared to the related research, as explained in Figure 9.
\begin{figure}
\centering
\includegraphics[width=10cm]{attentionmap.png}
\caption{Attention Areas.}
\end{figure}
In addition, to measure the universal applicability of Performer, the architecture is also applied to the PPG-BP dataset, containing PPG signals with labels for diabetes and non-diabetes cases. The model generates very promising prediction results, as represented in Figure 10, comparable with a recent UCSF study published in Nature Medicine [6].
\begin{figure}
\centering
\includegraphics[width=6cm]{cvd1.png}
\caption{The Confusion Matrix for CVD Detection.}
\end{figure}
Also, as pictured in Figure 11, Performer predicted normal in the first case and flagged diabetes in the second case, from the patients' PPG signals. This indicates Performer's capability to extract sufficient feature signals from PPG (and its reconstructed ECG) and use them to detect diabetes through the novel digital biomarker.
\begin{figure}
\centering
\includegraphics[width=5cm]{cvd2.png}
\caption{The Confusion Matrix for Diabetes Detection.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm]{diabetes.png}
\caption{Two Successful Examples of Diabetes Detection.}
\end{figure}
\subsection{Ablation Study}
An ablation study is also performed, as illustrated in Figure 12. Interestingly, the ECG reconstructed by Performer yields higher performance than the original raw ECG.
\begin{figure}
\centering
\includegraphics[width=11cm]{ablationstudy.png}
\caption{A Comparison of CVD Detection Across the Ablation Study.}
\end{figure}
\section{Conclusion}
This research proposes Performer, a novel Transformer architecture, achieving state-of-the-art performance (0.29 RMSE) in reconstructing ECG from PPG. It is the first time a Transformer has been used for biomedical waveforms (PPG and ECG) with self-attention-based sequence-to-sequence processing. It creates a digital biomarker, PPG along with its reconstructed ECG, which is shown to be effective for detecting CVDs (95.9$\%$ average accuracy for CAD, CHF, MI, and HoTN on MIMIC III) and related conditions (e.g., diabetes, 75.9$\%$ accuracy on PPG-BP).
After experimenting with Wave2Vec (for voice/audio signals), leveraging the existing research on time-series prediction via Transformers [18], and adopting a hierarchical neural network design [19], a Shifted Patch-based Attention (SPA) algorithm is also invented; it reaches high quality and real-time speed with great efficiency and can capably serve as a general-purpose solution for waveform analysis via Transformers.
Performer, effectively reconstructing ECG through PPG, creates a new digital biomarker which combines the best of both worlds, both the low-cost and easy accessibility of PPG in addition to the high accuracy and well-studied base of ECG.
This novel digital biomarker enables an effective solution for future continuous cardiac monitoring through PPG data alone. It also sets a new direction for monitoring related conditions, such as diabetes, through PPG data and its generated ECG. In addition, Performer makes it possible to utilize the existing body of knowledge on ECG analysis with PPG-only measurements by reconstructing ECG from PPG. The low-cost solution provided by this work will particularly benefit low-income communities by compensating for their insufficient healthcare resources and infrastructure.
Consumer-grade wearables grow at 25$\%$ each year and are used by 20$\%$ of US residents for disease prevention, risk assessment, and early screening. Using technology like Performer to increase performance and enable continuous tracking for wearables becomes ever more critical along this trend.
Future work includes exploring longer ranges of PPG data through Performer, from minutes to days and months, which might reveal undiscovered patterns for assessing CVD risk at an early phase; experimenting with Performer to reconstruct other biomedical signals, e.g., ballistocardiography (BCG) and phonocardiography (PCG), to broaden its usage beyond ECG; and building an earring wearable prototype as a proof of concept, also incorporating other medical records (weight, age, blood pressure, heart rate, etc.) as additional modalities.
\section*{References}
\medskip
{
\small
[1] Somani, S., Russak, A. J., Richter, F., Zhao, S., Vaid, A., Chaudhry, F., ... \& Glicksberg, B. S. (2021). Deep learning and the electrocardiogram: review of the current state-of-the-art. EP Europace, 23(8), 1179-1191.
[2] Elgendi, M., Fletcher, R., Liang, Y., Howard, N., Lovell, N. H., Abbott, D., ... \& Ward, R. (2019). The use of photoplethysmography for assessing hypertension. NPJ digital medicine, 2(1), 1-11.
[3] Gil, E., Orini, M., Bailon, R., Vergara, J. M., Mainardi, L., \& Laguna, P. (2010). Photoplethysmography pulse rate variability as a surrogate measurement of heart rate variability during non-stationary conditions. Physiological measurement, 31(9), 1271.
[4] Banerjee, R., Sinha, A., Choudhury, A. D., \& Visvanathan, A. (2014, May). PhotoECG: Photoplethysmography to estimate ECG parameters. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 4404-4408). IEEE.
[5] Zhu, Q., Tian, X., Wong, C. W., \& Wu, M. (2021). Learning Your Heart Actions From Pulse: ECG Waveform Reconstruction From PPG. IEEE Internet of Things Journal, 8(23), 16734-16748.
[6] Avram, R., Olgin, J. E., Kuhar, P., Hughes, J. W., Marcus, G. M., Pletcher, M. J., ... \& Tison, G. H. (2020). A digital biomarker of diabetes from smartphone-based vascular signals. Nature medicine, 26(10), 1576-1582.
[7] Chiu, H. Y., Shuai, H. H., \& Chao, P. C. P. (2020). Reconstructing QRS complex from PPG by transformed attentional neural networks. IEEE Sensors Journal, 20(20), 12374-12383.
[8] Sarkar, P., \& Etemad, A. (2020). Cardiogan: Attentive generative adversarial network with dual discriminators for synthesis of ECG from PPG. arXiv preprint arXiv:2010.00104.
[9] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... \& Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30.
[10] Bahdanau, D., Cho, K., \& Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
[11] Lin, T. Y., Dollár, P., Girshick, R., He, K., Hariharan, B., \& Belongie, S. (2017). Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2117-2125).
[12] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., ... \& Houlsby, N. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
[13] Johnson, A. E., Pollard, T. J., Shen, L., Lehman, L. W. H., Feng, M., Ghassemi, M., ... \& Mark, R. G. (2016). MIMIC-III, a freely accessible critical care database. Scientific data, 3(1), 1-9.
[14] Liang, Y., Chen, Z., Liu, G., \& Elgendi, M. (2018). A new, short-recorded photoplethysmogram dataset for blood pressure monitoring in China. Scientific data, 5(1), 1-7.
[15] Liu, D., Görges, M., \& Jenkins, S. A. (2012). University of Queensland vital signs dataset: development of an accessible repository of anesthesia patient monitoring data for research. Anesthesia \& Analgesia, 114(3), 584-589.
[16] Reiss, A., Indlekofer, I., Schmidt, P., \& Van Laerhoven, K. (2019). Deep PPG: large-scale heart rate estimation with convolutional neural networks. Sensors, 19(14), 3079.
[17] Goldberger, A., Amaral, L., Glass, L., Hausdorff, J., Ivanov, P. C., Mark, R., ... \& Stanley, H. E. (2000). PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation [Online]. 101 (23), pp. e215–e220.
[18] Zhou, H., Zhang, S., Peng, J., Zhang, S., Li, J., Xiong, H., \& Zhang, W. (2021, February). Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of AAAI.
[19] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., ... \& Guo, B. (2021). Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 10012-10022).
}
\end{document}
\chapter{Introduction.}
The study of discretized two-dimensional quantum gravity (matrix models)
has received much attention in recent years,
thanks to the uncovering of the
solvability of a range of such models in the double scaling limit
\REFS\doug{M. Douglas and S. Shenker \journal Nucl.Phys.&B335(90)635
.}\REFSCON\grossmig{D.J. Gross and A.A. Migdal\journal
Phys.Rev.Lett.&64(90)717.}\REFSCON
\brezkaz{E. Brezin and V.A. Kazakov\journal
Phys.Lett.&B236(90)144.}\REFSCON\brzkzzm{E. Brezin,
V.A. Kazakov and A.B. Zamolodchikov,
\journal Nucl.Phys.&B338(90)673.}
\REFSCON\grossmil{D.J. Gross and N. Miljkovic\journal
Phys.Lett.&B238(90)217
.}\REFSCON\pgins{P. Ginsparg and J. Zinn-Justin\journal Phys.Lett.
&B240(90)333.}\REFSCON\grossklb{D.J. Gross and I.R. Klebanov
\journal Nucl.Phys.&B344(90)475
.}\REFSCON\klbwlk{I.R. Klebanov and R.B. Wilkinson,
Princeton preprint PUPT-1188(1990).}
\refsend.
A matrix model describes the statistical mechanics of random surfaces
in the large-$N$ limit, where $N$ is the dimension of the matrix
(we restrict our attention to hermitian matrix models).
The procedure for solving these models consists of
three conceptual steps. In the first step, integrating out the
angular modes of the random matrix leaves us with its
$N$ real eigenvalues as the dynamical variables. Then, the large-N limit
is performed via a WKB-like procedure\foot{For the $D=1$ model, this
is a true quantum-mechanical WKB procedure for $N$ independent fermions
in a potential well.}. And finally, as $N$ is taken to infinity, the
matrix-model potential must approach one of its critical values
in accordance with the double-scaling limit.
The averages of various quantities over random surfaces
can then be found
as functions of the string coupling, which is held fixed in this limit.
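As a reminder of how the second step works in the simplest setting, the
planar limit of a one-hermitian-matrix model is governed by the
saddle-point condition for the eigenvalue density $\rho(\lambda)$,

```latex
$$ V'(\lambda) \;=\; 2\,{\rm P}\!\!\int d\mu\,
   \frac{\rho(\mu)}{\lambda-\mu}\ , $$
```

which balances the force of the matrix-model potential against the
Coulomb repulsion of the eigenvalues; the double-scaling limit is then
reached by tuning $V$ toward a critical form while sending
$N\to\infty$.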
To any given order in the string-perturbative
expansion, the matrix model
results may be compared with
results from the corresponding continuum models,
which consist of the quantum Liouville theory coupled to various matter
fields.
\REFS\kpz{V. Knizhnik, A. Polyakov and A. Zamolodchikov
\journal Mod.Phys.Lett.&A3(88)819}
\REFSCON\ddk{F. David\journal Mod.Phys.Lett.&A3(88)1651;
\nextline J. Distler and H. Kawai
\journal Nucl.Phys.&B321(89)509.}\refsend
\REFS\plch{J. Polchinski, Texas preprint UTTG-19-90.}
\REFSCON\polch{J. Polchinski\journal Nucl.Phys.&B346(90)253
.}\REFSCON\yng{Z. Yang\journal Phys.Lett.&B255(91)215
.}\REFSCON\wise{A. Gupta, S.P. Trivedi
and M.B. Wise
\journal Nucl.Phys.&B340(90)475.}\refsend.
Such comparisons have been carried out for low genus
\REFS\kst{I. Kostov\journal Phys.Lett.&B215(88)499.}
\REFSCON\newm{D.J. Gross, I.R. Klebanov and M.J. Newman\journal
Nucl.Phys.&B350(91)621.}
\REFSCON\brshd{M. Bershadsky and I.R. Klebanov\journal
Phys.Rev.Lett.&65(90)3088.}
\REFSCON\SBM{S. Ben-Menahem\journal Nucl.Phys.&B364(91)681.}
\refsend .
\par
In addition to the perturbative series in the string coupling,
these models also exhibit
nonperturbative effects\REFS\npt{S. Shenker, Rutgers preprint
RU-90-47(90), and references within.}\refsend.
In the $D=1$ model, the leading such effect comes
from the tunneling of a single matrix eigenvalue (=fermion ) out of
the potential well, or between different potential wells.
The contribution to, say, the free energy will behave as
$\exp(-{\it const
}/g_{string})$ for low string couplings.
In ref.\npt\ it
was pointed out that for the $D=0$ models, as well,
there are saddle-point
configurations where a single eigenvalue leaves the potential
well\foot{In the $D=0$ case, this is the effective potential, namely
the matrix-model potential plus the mean potential due to the Coulomb
repulsion of the other N-1 particles.}, and that these configurations
give rise to nonperturbative effects of the same form. However, for
even-$k$ multicritical $D=0$ models, including pure gravity itself,
there are ambiguities in defining the theory nonperturbatively, which
can be traced to the fact that the critical quartic potential is
unbounded from below. Various nonperturbative definitions
have been proposed in the literature
\REFS\mira{J.Luis Miramontes and J.S. Guillen, CERN-TH. 6323/91.}
\REFSCON\dav{F. David\journal Mod.Phys.Lett.&A5(90)1019}
\REFSCON\fdav{F. David\journal Nucl.Phys.&B348(91)507}
\REFSCON\halp{J. Greensite and M.B. Halpern
\journal Nucl.Phys.&B348(91)507}
\REFSCON\marinar{E. Marinari and G. Parisi
\journal Phys.Lett.&B240(90)375}
\REFSCON\amb{J. Ambjorn, J. Greensite and S. Varsted
\journal Phys.Lett.&B249(90)411}
\REFSCON\marek{M. Karliner and A.A. Migdal\journal Mod.Phys.Lett.
&A5(90)2565}
\REFSCON\ambb{J. Ambjorn, C.V. Johnson and T. Morris, Southampton
and Niels Bohr Inst. Preprint SHEP 90/91-29, NBI-HE-91-27.}
\REFSCON\dall{S. Dalley, C.V. Johnson and T. Morris, SHEP 90/91-28
and SHEP 90/91-35.}
\REFSCON\deo{K. Demeterfi, N. Deo, S. Jain and C.-I. Tan\journal
Phys.Rev.&D42(90)4105.}
\REFSCON\bhanot{G. Bhanot, G. Mandal and O. Narayan\journal
Phys.Lett.&B251(90)388.}
\REFSCON\josh{J. Feinberg, TECHNION-PH-92-1.}
\refsend
. For the even-$k$ multicritical models, these definitions can and
do disagree with one another, since the perturbative series is
not Borel-summable. But even for the well-defined models,
a systematic derivation of the nonperturbative physics
directly in $D=0$ has so far been lacking.
\par
A further line of development has been the reformulation of
the $D=1$ matrix model as a string field theory.
In matrix language, the correlators that one calculates in this model
are of any
number of boundary operators, each such boundary having an arbitrary
length. Each boundary is in a single slice of the embedding dimension
\foot{Otherwise, the angular matrix variables are
excited and the calculations
cannot be done using current methods.}.
The same information is contained in the $n$-point functions
of the density of matrix eigenvalues; it is essentially
this density which has been
suggested as the string field\REFS\dj{S.R. Das and A. Jevicki\journal
Mod.Phys.Lett.&A5(90)1639.}\refsend .
It is a function of
$\lambda$, the matrix eigenvalue, and the embedding dimension; hence
the field theory is two-dimensional. In ref.\ \lbrack\dj\rbrack,
Das and Jevicki used
collective coordinate techniques to transform the quantum mechanics
of $N$ particles (in a bosonic formulation) into two-dimensional
quantum field theory. The kinetic term in the action is that of
a massless field, which corresponds to the massless tachyon
in the continuum effective field theory\ \lbrack\polch\rbrack.
Some aspects of this
correspondence remain unclear--- for instance, the identification of
matrix eigenvalue with (a function of the) Liouville zero-mode.
Nevertheless, much progress has been made in understanding the
Das-Jevicki collective field theory, making string-perturbative
calculations with it, and comparing the results with those obtained
via other methods
\REFS\kres{K. Demeterfi, A. Jevicki and J.P. Rodrigues\journal
Mod.Phys.Lett.&A6(91)3199
, and references therein.}
\REFSCON\karab{D. Karabali and B. Sakita\journal Int.J.Mod.Phys.
&A6(91)5079.}\refsend
. Nonperturbative effects, appearing in the form of solitons and
instantons of the $D=1$ field theory, have also been investigated
\REFS\jevnpt{A. Jevicki, BROWN-HET-807.}
\REFSCON\avan{J. Avan and A. Jevicki\journal Phys.Lett.&B272(91)17.}
\refsend.
\par
Another approach to $D=1$ field theory has been to retain its
formulation as $N$-fermion quantum mechanics, but second-quantize
the fermions and then bosonize them\REFS\wad{A.M. Sengupta and S.R.
Wadia\journal Int.J.Mod.Phys,&A6(91)1961
.}\REFSCON\moor{
G. Moore, RU-91-12.}\REFSCON\sncn{D.J. Gross and I.R. Klebanov\journal
Nucl.Phys.&B359(91)3.}
\refsend .
This
approach is equivalent to the Das-Jevicki formulation.
\par
Although the collective-field method has also been applied to the
$D=0$ matrix models\REFS\jevsak{A. Jevicki and B. Sakita\journal
Nucl.Phys.&B185(81)89.}\REFSCON\ajev{A. Jevicki\journal
Nucl.Phys.&B146(78)77.}\REFSCON\sakbook{B. Sakita, "Quantum Theory
of Many-Variable Systems and Fields", World Scientific, Singapore,
1985.}\REFSCON\cohn{J.D. Cohn and S.P. de Alwis,
COLO-HEP-247, IASSNS-HEP-91/7.}\refsend,
it seems to us less compelling there than for $D=1$, because of its
unusual kinetic term, and although a perturbative scheme has been
outlined\REFS\jevsev{A. Jevicki, BROWN-HET-777.}
\REFSCON\reneg{O. Lechtenfeld, IASSNS-HEP-91/86.}
\refsend , this
program has not, to our knowledge, been carried out as far as the
corresponding $D=1$ field theory\foot{In ref.\ \lbrack\reneg\rbrack\ ,
Lechtenfeld uses our equations to set up a perturbative scheme. But
since he performs the genus expansion at an early stage, one is left
with manifest divergences, and in addition nonperturbative
information is lost. We avoid these problems, as will be seen below.}.
\par
In this paper, we develop an alternative
field theory formalism for the
$D=0$ matrix models.
Our field theory is two-dimensional, but the extra dimension
is an auxiliary one and drops out of the formalism at some stage.
What is left is a nonlocal quantum mechanics, with a
function of $\lambda$
playing the role of time. This is interesting in view
of the idea \lbrack\dj\rbrack\
that the matrix eigenvalue is related to the Liouville zero-mode.
The Jevicki-Sakita $D=0$ collective field theory is also a nonlocal
quantum mechanics. We have found, however, that our formalism has the
following desirable properties, which make it worth pursuing:\par
{\bf A.\ } The entropy term\foot{This term is $N\int\rho\ln\rho
\; d\lambda$, with $\rho(\lambda)$ the eigenvalue density.} in the
action, which in the collective field approach appears through the
Jacobian, is for the first time incorporated into the classical
solution. This results in smoothing of the Dyson sea edges,
interpolation between different eigenvalue bands for multiple-well
potentials (determining their relative population),
and an unambiguous treatment
of nonperturbative (instanton) effects, directly in a $D=0$
field-theory framework, for multicritical models that can
occur for a bounded-from-below potential.
\par {\bf B.\ } \ Our classical eigenvalue
distribution, $\rho(\lambda)$, satisfies a local nonlinear Schr\"odinger
equation, with $1/N$ playing the role of $\hbar$. The Schr\"odinger
potential, $V_1(\lambda)$, appearing in this equation is similar to,
but distinct from,
the Marinari-Parisi $D=1$ reformulation. On the other hand,
in the planar limit it has an exact correspondence with
the Dyson sea effective potential, mentioned above.
\par {\bf C.\ } Not only are quantum corrections computable, but most
of them vanish in the double scaling limit. The only
quantum corrections which survive this limit, apart from the
semiclassical functional determinant, are a set of Feynman graphs
which can be exactly summed.\medskip
The rest of this paper is organized as follows. In section 2, the
matrix-model partition sum is recast as that of a massless
two-dimensional field, $A(r)$, with (nonlocal) interactions confined
to an infinite line (the `eigenvalue axis'). The equation of motion
is written, and its classical solution is expressed as an eigenvalue
distribution, $\rho(\lambda)$, and shown to be either unique or labeled
by a discrete index. The two-dimensional equation of motion becomes
a one-dimensional integro-differential equation on the eigenvalue
axis; this in turn leads to a weaker\foot{`Weaker' because the
Schr\"odinger equation has more solutions than the integro-differential
equation.} local nonlinear Schr\"odinger equation. The interpretation
of this equation is discussed, as well as the nature of its solution
and how it is determined uniquely (up to the possible discrete index).
\par In section 3, the quantum fluctuations about a classical solution
are studied, and an expression derived for the partition sum in terms
of the propagator of the $A$ field (the `conjugate field'). The IR
and UV singularities are seen to cancel in a trivial way, to all
orders. The one-dimensional Green's equation for the propagator is
presented, as well as a partial solution. Section 4 is devoted to
a discussion of the double scaling limit, including nonperturbative
effects\foot{Most of the discussion in section 4 is limited to
the case of the quartic potential as it approaches its $k=2$
critical
point. Since neither the theory nor our formalism are well defined
for this potential, only the string-perturbative series discussed
in that section is meaningful. Therefore, our description of
an instanton
ansatz for this model, described in sec.4,
is not necessarily more reliable than any of
the previous nonperturbative definitions of pure gravity
referenced above.
Nevertheless, it is reassuring that our exponentially-suppressed
tunneling factor, eq.(39a), agrees with that appearing in other
approaches
(it is the prefactor that is ambiguous). The main point
of our instanton calculation is, however, that the conjugate-field
formalism applies to any well-defined realization of a critical model,
and a similar instanton calculation for such a realization would be
rigorous.}.
We see there that the quartic $k=2$
classical solution is unique at
the level of string perturbation theory. In section 5 we restate our
conclusions.
Some mathematical
details are reserved for the appendix. Many results, presented or
stated without proof in this paper, will be developed more fully in
a follow-up publication, to appear soon.\par
Finally, an explanatory note is in order. An early version of this
preprint, containing most of the results (with the notable exception
of point {\bf C} above), has been circulating privately since November
1990. That earlier version has sometimes been confused in the
literature with SLAC-PUB-5262, an altogether distinct work.\par
\chapter{Conjugate Field Formalism and Classical Solutions.}
The partition function of the $D=0$ matrix model is,
$$Z\{ V\}\equiv\int\lbrack d\lambda\rbrack \exp\bigl(-N\sum_{i=1}
^NV(\lambda_i)+\sum_{i\not=
j}\ln\vert\lambda_i-\lambda_j\vert\bigr)\eqno (1)$$
where $V$ is the matrix-model potential, which we assume to be
a polynomial.
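For the Gaussian potential $V(\lambda)=\lambda^2/2$, the measure in eq.(1) is the joint eigenvalue density of the Gaussian Unitary Ensemble, so it can be sampled directly by diagonalizing random Hermitian matrices. The following Python sketch (ours, purely illustrative; the matrix size and sample count are arbitrary choices) checks that the sampled density has support on $[-2,2]$ with second moment $m_2=1$:

```python
import numpy as np

# Illustrative check (not part of the paper): for V(lambda) = lambda^2/2,
# eq.(1) is the GUE joint eigenvalue density with weight exp(-(N/2) tr M^2),
# sampled by diagonalizing Hermitian M with <|M_ij|^2> = 1/N.  The eigenvalue
# density then approaches Wigner's semicircle (1/2pi) sqrt(4 - lambda^2).

rng = np.random.default_rng(0)

def gue_eigenvalues(N, samples, rng):
    """Pooled eigenvalues of `samples` GUE matrices, normalized to [-2, 2]."""
    eigs = []
    for _ in range(samples):
        A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
        M = (A + A.conj().T) / (2.0 * np.sqrt(N))   # Hermitian, <|M_ij|^2> = 1/N
        eigs.append(np.linalg.eigvalsh(M))
    return np.concatenate(eigs)

lam = gue_eigenvalues(N=200, samples=50, rng=rng)
m2 = np.mean(lam**2)          # second moment of the semicircle is exactly 1
```

At finite $N$ the sampled eigenvalues spill slightly past the Dyson sea edges at $\lambda=\pm2$, a first numerical glimpse of the smeared transition regions discussed later in the paper.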
We may formally rewrite ($C$ a divergent number)
$$\exp\bigl(\sum_{i\not= j}\ln\vert\lambda_i-\lambda_j\vert\bigr)=
C\int\lbrack dA\rbrack \exp\bigl(
\sqrt{4\pi}i\sum_{j=1}^NA(\lambda_j)-{{1}\over{2}}\int(\partial A)^2d\lambda dx
\bigr)
\eqno (2)$$
which is the path integral over a massless field $A$ in two dimensions,
with $N$ point charges on the $x=0$ (`eigenvalue') axis.
Here $x$ is an auxiliary dimension, and $A(\lambda)\equiv A(\lambda,0)$.
We shall use $r$ to denote a general point $(\lambda,x)$ in the plane.
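The electrostatic reading of eq.(2) rests on the fact that $(1/2\pi)\ln\vert r\vert$ is the Green's function of the two-dimensional Laplacian. A quick finite-difference check of this identity (an illustrative sketch of ours; the grid spacing and offset are arbitrary):

```python
import numpy as np

# Sketch (ours) of the identity behind eq.(2): G(r) = (1/2pi) ln|r| satisfies
# del^2 G = delta^2(r), so the discrete Laplacian of G, summed over a box
# containing the origin, should integrate to total charge 1.

h = 0.01
x = np.arange(-1.0, 1.0 + h / 2, h) + h / 3.0   # offset keeps the origin off the nodes
X, Y = np.meshgrid(x, x, indexing="ij")
G = np.log(np.hypot(X, Y)) / (2.0 * np.pi)

# five-point Laplacian on interior nodes; the sum telescopes to a boundary flux
lap = (G[2:, 1:-1] + G[:-2, 1:-1] + G[1:-1, 2:] + G[1:-1, :-2]
       - 4.0 * G[1:-1, 1:-1]) / h**2
total_charge = lap.sum() * h**2                  # approximates int del^2 G = 1
```

The discrete sum is exactly the outward flux through the box boundary, which is the lattice version of Gauss's law for a unit point charge.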
\par This path integral is beset by infrared and ultraviolet divergences.
The $UV$ divergence is regulated by smearing the point charges;
$A(\lambda_j)$ is replaced by
$${1\over{\epsilon}}\int_{\lambda_j}^{\lambda_j+
\epsilon}d\lambda A(\lambda)\eqno(2a)$$
in eq. (2). The infrared
divergence is regulated by
introducing a uniformly charged circle in the plane, centered about
the origin and with a large radius $L$. The total charge of this
circle is $-N$, which screens the $N$ point-charges.
All dependences on $\epsilon$ and
$L$ will cancel in a simple way.
In what follows, we will mostly ignore the need to regulate the path
integral; a more careful treatment reveals that this naive approach
is the correct one.
Introducing the normalized charge density,
$$\rho(\lambda)={1\over{N}}\sum_i\delta(\lambda-\lambda_i)\eqno (3)$$
we may instead think of $Z$ as a path integral over the overcomplete
variables $\{\rho(\lambda)\}$, with integrand
$$\exp\bigl(-N^2\int d\lambda\rho(\lambda)V(\lambda)+N^2\int\int
d\lambda d\mu\rho(\lambda)\rho(\mu)\ln\vert\lambda-\mu\vert\bigr)\eqno (4)$$
This is the collective-field approach of Jevicki and Sakita. But we
prefer to use the conjugate field $A(r)$ as our dynamical field,
since it has a standard kinetic term and trivial Jacobian. The other
merits of the conjugate field theory have been listed in points
{\bf A, B} and {\bf C} of the introduction.\par
Combining eqs. (1) and (2), and using (2a) to regulate the UV
divergence in $C$, we obtain the following expression for the
partition function\foot{We define the measure $\lbrack dA\rbrack$ to include
the infinite, but $\{L,\epsilon,N,V(\lambda)\}$ independent, factor
of $(\det^\prime\partial^2)^{1/2}$.}:
$$Z\{V\}=C_1(L,N)\epsilon^{-N}\int\lbrack dA\rbrack\bigl(I\{A\}\bigr)^N
\exp\bigl(-{i\over{\sqrt{\pi}}}{N\over L}\ointop dsA-{1\over 2}\int
(\partial A)^2d^2r\bigr)\eqno(5)$$
$$I\{A\}\equiv\int d\lambda \exp\bigl(-NV(\lambda)+i\sqrt{4\pi}A(\lambda)\bigr)
\eqno (6)$$
In equation (5), the linear term is an integral over the charged circle,
and $C_1$ is IR divergent, UV finite, and $V$-independent. All integrals
over $\lambda$, here and below, range over the whole real axis, unless
the limits of integration are explicitly indicated.\par The euclidean
action is, apart from the IR-regulator source term,
$$S\{A\}\equiv {1\over 2}\int(\partial A)^2d^2r-N\ln I\{A\}.\eqno(6a)$$
This is a nonlocal action, with the interactions occurring on the
eigenvalue axis. Away from the axis, $A$ is a free massless field.
$i\sqrt{4\pi}A/N$ can be thought of as a quantum fluctuation in
the matrix-model potential.
In addition, $A$ is conjugate to $\rho$ in the thermodynamical sense.
\par The classical
equation of motion for the field $A$ is
\foot{ We omit a step here, related to the infrared
regulator: the field $A$ should be shifted by a constant, which is the
constant potential due to the charged circle in its interior.}
$$\partial^2A(r)=-i\sqrt{4\pi}N\rho_1(\lambda)\delta(x)\eqno(7a)$$
where
$$\rho_1(\lambda)\equiv{1\over I}\exp\bigl(-NV(\lambda)+
i\sqrt{4\pi}A(\lambda)\bigr) \eqno(7b)$$
Note that although the action is nonlocal, the equation of motion
is local, but with a new coupling $I$, to be determined
self-consistently.\par
The boundary condition for the conjugate field $A$ at infinity is,
$$A(r)\approx -i{{2N}\over{\sqrt{4\pi}}}\ln\vert r\vert+\; const
\eqno (7c)$$
since that is the electrostatic potential far from the $N$
point-sources (eq.(2)). This boundary condition allowed
us to freely integrate by parts the variation of the kinetic term
in the action.\par
We denote a solution of eqs.(7) by
$$A_{classical}(r)\equiv -iA_s(r)\eqno (7d)$$
and define the classical charge (eigenvalue) density
on the eigenvalue axis as $\rho$ at the classical solution:
$$\rho(\lambda)\equiv{1\over I_s}\exp\bigl(-NV(\lambda)+
\sqrt{4\pi}A_s(\lambda)\bigr) \eqno(8)$$
where $I_s$ is the value of $I\{A\}$ at the classical solution.
Henceforth we shall adopt the definition eq.(8) for $\rho$, instead
of eq.(3). From here on we shall work with $\rho$ instead of $A_s$;
they contain the same information, although $\rho$ is defined only
on the eigenvalue axis\foot{Just as, in the analog 2d electrostatic
problem, the charge distribution on a plate, together with the
boundary condition at infinity, determines the potential throughout
an otherwise empty space.}
. \par
Let us assume that the potential $V$ has been chosen to be bounded
from below. In that case, as will be shown, $\rho(\lambda)$ is unique
or labeled by a discrete index. In section 4, we shall see that
quantizing $A(r)$ about the classical configuration determined by
$\rho$, gives rise to a unique string-perturbative expansion for
physical quantities in the double scaling limit (abbreviated henceforth
as {\it d}.{\it s}.{\it l}.
\foot{In order to properly analyze
the d.s.l., a multiple-well
potential, bounded from below, must be used. In this paper, when we
specialize to a particular $V(\lambda)$, it is the quartic
$\lambda^2/2+g\lambda^4$, which is {\it not} bounded from below at
criticality. This is not important for string perturbation theory;
work on nonperturbative
effects in which a multiple-well potential {\it is} used,
is currently in progress. }
), {\it provided} $\rho$ has support in a single Dyson sea (band)
in the planar limit.
Thus, the possible discrete non-uniqueness of $\rho$ will
be revealed either
at the nonperturbative level, or for multiband solutions.
However, any {\it continuous} non-uniqueness of multiband solutions,
which results from the freedom to adjust the relative populations of
different bands, is eliminated by the nonperturbative effects
(`tunneling'), as we shall see.
\par
Both the equation of motion and boundary condition for $A_s(r)$ are
real, and thus each $\rho(\lambda)$ in the discrete set of solutions
is either real, or the complex conjugate of another solution.
The possible imaginary
parts of solutions $\rho$ can only be revealed nonperturbatively;
in any case, all physical quantities are real. \par
Evaluating the action at the classical solution, we find that the
$L$-dependent prefactor in eq. (5) gets canceled:
$$\left.\eqalign{
Z_{\scriptstyle classical}\{V\}=&const\;\epsilon^{-N}\exp\bigl(
-N^2\int d\lambda\rho V\cr
&+N^2\int\int d\lambda d\mu\rho(\lambda)\rho(\mu)\ln\vert\lambda-\mu\vert-
N\int d\lambda\rho \ln\rho\bigr)}\right.\eqno(5a)$$
The constant depends only on $N$, and is hence irrelevant.
As we discuss below, the $UV$ divergence in eq (5a) is cancelled
by the semiclassical (determinant) factor, whereas the higher
quantum corrections to $Z$ are both $IR$- and $UV$-finite.
\par
Notice the $O(N)$ correction to the free energy that occurs already
at the classical level. This correction has a simple interpretation:
It is a combinatorial factor, the entropy of the classical
solution (see refs. \lbrack\jevsak\rbrack,\lbrack\ajev\rbrack,\lbrack\sakbook\rbrack,
\lbrack\cohn\rbrack) .
Of course, the sum of all $O(N)$ corrections to the free energy must
vanish, since only even powers of $N$ appear in the topological genus
expansion\REFS\biz{D. Bessis, C. Itzykson and J.B. Zuber\journal
Advances in Applied Math.&1(80)109.}\refsend.
\par
$\rho$ satisfies the integral equation,
$$V(\lambda)+{1\over N}\ln(\rho(\lambda)I_s)=
2\int d\mu \ln\vert\lambda-\mu\vert\rho(\mu)
\eqno(9a)$$
which follows from eqs.(7), in conjunction with the fact that the
$2d$ free Green's function is ${1\over {2\pi}}\ln\vert r-r^\prime\vert$.
Upon differentiation w.r.t. $\lambda$, eq.(9a) becomes the following
one-dimensional integro-differential equation:
$$V^\prime(\lambda)+{1\over N}{{\rho^\prime(\lambda)}\over{\rho(\lambda)}}=2{\cal H}(\rho
(\lambda))\eqno (9b)$$
where ${\cal H}$ denotes the Hilbert transform:
$${\cal H}(f(\lambda))\equiv\int{{f(\mu)d\mu}\over{\lambda-\mu}}\eqno (9c)$$
for any function $f$\foot{Such integrals are understood, here and below,
to be principal-valued. Note that we use a nonstandard normalization
in our definition of ${\cal H}$.}.\par In addition to satisfying the $1d$
integro-differential equation, $\rho$ must be normalized to unity
(by eq. (8)):
$$\int d\lambda\rho(\lambda)=1\eqno(9d)$$
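In the planar limit the ${1\over N}\rho^\prime/\rho$ term of eq.(9b) drops, leaving $V^\prime(\lambda)=2{\cal H}(\rho(\lambda))$ on the support of $\rho$; for $V=\lambda^2/2$ the solution is Wigner's semicircle. This can be confirmed by evaluating the principal-value integral of eq.(9c) numerically (a sketch of ours; the evaluation points are arbitrary):

```python
import numpy as np

# Numerical check (ours) of the planar limit of eq.(9b): for V = lambda^2/2
# the solution is Wigner's semicircle rho = (1/2pi) sqrt(4 - lambda^2) on
# [-2, 2], and 2 H(rho)(lambda) = V'(lambda) = lambda inside the sea, with H
# the principal-value Hilbert transform of eq.(9c).

def rho(mu):
    return np.sqrt(np.clip(4.0 - mu**2, 0.0, None)) / (2.0 * np.pi)

def hilbert_pv(lam, n=400_000):
    """H(rho)(lam) = PV int rho(mu)/(lam - mu) dmu, via singularity subtraction."""
    h = 4.0 / n
    mu = -2.0 + (np.arange(n) + 0.5) * h         # midpoint grid avoids mu == lam
    regular = (rho(mu) - rho(lam)) / (lam - mu)  # removable singularity at mu = lam
    pv_log = np.log((2.0 + lam) / (2.0 - lam))   # PV int dmu/(lam - mu), exact
    return regular.sum() * h + rho(lam) * pv_log
```

Subtracting $\rho(\lambda)$ in the numerator makes the integrand regular at $\mu=\lambda$; the subtracted piece is the elementary principal-value integral, added back in closed form.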
Eq. (9b) reduces to the integral equation of ref.\biz\
in the planar ($N\rightarrow\infty$) limit; the extra term can be
physically understood as due to the variation of the entropy term
in the exponent of eq.(5a). Its effect is to smear the edges of the
Dyson sea (or seas, for a multi-well potential $V$), so they become
transition regions, rather than singularities as in the
$D=0$ and $D=1$ collective field theories of Jevicki et al. We will
further discuss the transition regions in section 4
\REFS\bowick{ For a discussion of the transition regions using the
standard Gel'fand-Dikii formalism, see: M.J. Bowick and E. Brezin
\journal Phys.Lett.&B268(91)21.}\refsend
.\par
For $N\gg 1$, it is easy to see from eq.(9b) that outside the sea, and
beyond the transition regions, $\rho(\lambda)$ is well-approximated by
$$\rho(\lambda)\approx const\;\lambda^{2N}e^{-N\lbrack V(\lambda)+O(1/\lambda)\rbrack}
\eqno (9e)$$ Thus, two conclusions can immediately be drawn.
Firstly, a normalizable classical $\rho$ exists only if $V(\lambda)$
is bounded from below; and secondly, $\rho(\lambda)$ vanishes nowhere,
so that multiple eigenvalue bands (`seas') are related by a tunneling
effect. This effect serves to determine the relative population of
multiple seas, and is responsible in general for string-nonperturbative
effects.\par We note that, as our classical solution depends on $N$
and hence includes some higher-genus effects, the terms `planar' and
`classical' are not synonymous in our formalism. This is also the case
for the Das-Jevicki field theory.
\par Once $\rho$ is known, the corresponding $I_s$ is determined
uniquely by eqs.(9); however, $I_s$ drops out of the formalism from
here on.\par In the appendix we use eqs.(9b-d), and properties of
the Hilbert transform, to derive the following nonlinear Schr\"odinger
equation:
$$\bigl(-{1\over N^2}{\partial^2\over{\partial\lambda^2}}+
V_1(\lambda)+\pi^2\rho^2\bigr)
\bigl(\rho^{-1/2}\bigr)=0\eqno(10a)$$
where $V_1$, henceforth called the `Schr\"odinger potential',
is the polynomial
$$ V_1(\lambda)={1\over 4}(V^\prime)^2+{1\over{2N}}V^{\prime\prime}
+P(\lambda)\eqno(10b)$$
$P$ is a polynomial whose coefficients are moments of the charge
distribution:
$$P(\lambda)\equiv -\int d\mu\rho(\mu){{V^\prime(\mu)-V^\prime(\lambda)}\over
{\mu-\lambda}}\eqno(10c)$$
This Schr\"odinger equation is local, except for the coefficients of
$P(\lambda)$, which are determined self-consistently. For example, in the
case of quartic $V$, $$V(\lambda)={1\over 2}
\lambda^2+g\lambda^4\eqno (10d)$$ the polynomial $P(\lambda)$ becomes:
$$P(\lambda)=-1-4g(m_2+\lambda^2)\eqno (10e)$$ where the eigenvalue moments
are defined as $$m_n=\int\lambda^n\rho(\lambda)d\lambda\eqno (10f)$$
and we have used the symmetry $\rho(\lambda)=\rho(-\lambda)$, which follows
from eq. (9b) and the symmetry of $V(\lambda)$
\foot{For simplicity, we assume throughout that $V(\lambda)=V(-\lambda)$.
Actually, this only implies a symmetric $\rho$ if $\rho(\lambda)$ is
unique, but we will ignore this complication here, since the entire
analysis can be easily redone without the symmetry assumption, and
besides the possible nonsymmetry will not affect perturbation
theory.}
. The nonlinear Schr\"odinger equation has several points of interest,
which we now discuss. Firstly, in it $1/N$ plays the role of Planck's
constant, and the tunneling effects implied by eq.(9e) can now be
seen to be similar to quantum mechanical tunneling. This is demonstrated
in section 4, using the WKB approximation. In the exterior of the Dyson
sea(s), $\rho$ is exponentially suppressed for large $N$, and (10a)
approximates a linear Schr\"odinger equation. A strange feature of this
correspondence is that the `wavefunction' here is $1/\sqrt{\rho}$,
the inverse of the intuitive $\sqrt{\rho}$. Another feature, probably
closely related, is that the $V^{\prime\pr}$ term in our Schr\"odinger
potential $V_1$, has the opposite sign compared with the $D=1$ potential
arising from the Marinari-Parisi reformulation\ \lbrack\marinar\rbrack\
\foot{The relation between the two facts is, that in order to get the
Marinari-Parisi $D=1$ Schr\"odinger equation, one chooses as wavefunction
$\psi\{M\}=\exp(-{N\over 2}\tr V(M))$, with $M$ the original random
matrix. Thus, if the Van der Monde
were to be ignored, the one-particle
wave-function would be the {\it square root} of $\rho$. If we choose
the inverse wave function, the sign of $V^{\prime\pr}$ in the Schr\"odinger
potential agrees with ours, rather than with that of Marinari
and Parisi.}.
\par The transition regions interpolate between the WKB solutions
outside and inside a given Dyson sea, similar to the role of the Airy
function in ordinary (linear) WKB (see section 4). In the sea interior,
the $\rho^2$ term in (10a) is no longer negligible, and in fact
there the leading WKB approximation is $$\rho(\lambda)\approx{1\over\pi}
\sqrt{-V_1(\lambda)}\quad(\; inside\; a\; sea\;)\eqno(10g)$$
which is just the generalization of Wigner's semicircle law to
arbitrary potential $V$\ \lbrack\biz\rbrack\ .
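As a small consistency check (ours, not in the paper): at $g=0$ eq.(10e) gives $P=-1$, the planar Schr\"odinger potential is $V_1=\lambda^2/4-1$, and eq.(10g) returns the semicircle, for which both the normalization (9d) and the self-consistent moment $m_2$ come out equal to one:

```python
import numpy as np

# Check (ours) of eq.(10g) at g = 0: with V = lambda^2/2, eqs.(10b) and (10e)
# give V1(lambda) = lambda^2/4 - 1 in the planar limit (the V''/2N term
# dropped), so rho = (1/pi) sqrt(1 - lambda^2/4) on [-2, 2].  Normalization
# m0 = 1 (eq.(9d)) and the second moment m2 = 1 should both come out.

lam = np.linspace(-2.0, 2.0, 200_001)
V1 = lam**2 / 4.0 - 1.0                     # (V')^2/4 + P with P = -1 at g = 0
rho = np.sqrt(np.clip(-V1, 0.0, None)) / np.pi

h = lam[1] - lam[0]
m0 = rho.sum() * h                          # eq.(9d): should be ~1
m2 = (lam**2 * rho).sum() * h               # self-consistent moment: also ~1
```

For $g\not=0$ the moment $m_2$ entering $V_1$ through $P(\lambda)$ is no longer known in advance and must be fixed self-consistently, as described around eq.(10c).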
\par In the planar limit and outside the Dyson sea, $V_1(\lambda)$ has
a physical interpretation: $4V_1(\lambda)$ is the square of the gradient of
the effective potential, that is, the square of the total force exerted
on a single eigenvalue, due to the potential $V(\lambda)$ and the repulsion
of the other $(N-1)$ eigenvalues.\par
As stated in the introduction, the Schr\"odinger equation is weaker
than the integro-differential equation. This is because (10a) is a
second-order differential equation, so for given $V_1$ its solution
$\rho(\lambda)$ has two free continuous real parameters. By eqs.(10),
$V_1$ has one additional unknown, $m_2$; but we must impose the
two self-consistency conditions, for $m_2$ and for $m_0=1$. This
leaves us with a {\it single undetermined real parameter}\foot
{Note that the usual WKB procedure for bound states does not apply
for the Dyson sea. In the usual procedure, one imposes that the
wavefunction component which blows up exponentially at spatial infinity,
vanishes. But here the wavefunction is $\psi=\rho^{-1/2}$, so in fact
$\rho$ normalizability {\it requires} $\psi$ to blow up exponentially,
and no useful information results from this condition.}
. However,
as is proven in the appendix, the integro-differential equation
allows no zero modes that preserve the normalization condition (9d).
Hence, the free real parameter can only assume discrete values.
\chapter{Quantum Corrections.}
We next address the quantum corrections to $Z_{\scriptstyle classical}$
( eq. (5a)), needed in order to regain the full partition sum
$Z$ in eq. (5). Let us separate the field $A$ into its classical and
quantum pieces,
$$A(r)=-iA_s(r)+A_q(r)\eqno(11)$$
and also separate out the quadratic part of the action:
$$S\{A\}=S_{classical}+S_I\{A_q\}+{1\over 2}\int\int d^2rd^2r^\prime
A_q(r)K(r,r^\prime)A_q(r^\prime)\eqno(12)$$
Here $S_I$ is the interacting piece, consisting of the terms
of order three
and higher in $A_q$. $K$ is the inverse propagator of the quantum field
in the background of the classical solution:
$$K(r,r^\prime)=-(\partial_r)^2\delta(r-r^\prime)+4\pi N\delta(x)\delta
(x^\prime)\{\rho(\lambda)\delta(\lambda-\lambda^\prime)-\rho(\lambda)\rho(\lambda^\prime)\}\eqno(13)$$
The only zero mode of $K$ is the constant function\foot
{This is equivalent to the fact, proven in the appendix, that
$\rho(\lambda)$ has no normalization-preserving zero modes.}, so
we fix that by defining our space of configurations $A_q$ to
satisfy $A_q(r)\rightarrow 0$ as $\vert r\vert\rightarrow\infty$. This
renders $K$ nonsingular\foot{This choice of boundary condition follows
from the fact that both $A(r)$ and $-iA_s(r)$ satisfy the boundary
condition (7c).}.\par
Now, $S_I$ depends only on the value of $A_q$ on the $x=0$ axis, so we
define the one-dimensional field
$$q(\lambda)\equiv\sqrt{4\pi}\bigl[A_q(\lambda,0)-\int\rho(\mu)d\mu A_q(\mu,0)
\bigr]\eqno(14)$$
and denote
$$S_i\{q\}\equiv S_I\{A_q\}\eqno(15)$$
We then have, by eqs.(6),(6a),(8),(11),(12) and (14):
$$S_i\{q\}=-N\ln\bigl(\int\rho(\lambda)d\lambda
e^{iq(\lambda)}\bigr)_{NQ}\eqno(16)$$
where $NQ$ denotes the non-quadratic part. The full partition
function, including quantum corrections, is given by
$$Z\{V\}=\sum Z_{\scriptstyle classical}\{V\}
\bigl({{\det K}\over{\det(-\partial^2)}}\bigr)
^{-1/2}\langle
e^{-S_i\{q\}}\rangle\eqno(17)$$
where the expectation value is a Gaussian average, i.e. evaluated
via Wick's theorem, the sum is over the discrete set of classical
solutions, the functional determinants are subject to the boundary
condition $\lim_{\vert r\vert\rightarrow\infty}A_q(r)=0$, and for each
classical solution, $Z_{classical}$ is given by eq.(5a).
\par The two-point function, to be used in the Wick expansion, is
$$\langle q(\lambda)q(\lambda^\prime)\rangle=4\pi\bigl(H(\lambda,\lambda^\prime)+
{1\over{4\pi N}}\bigr)\eqno (18)$$
where {$H(\lm,\lm^\pr)$}\ is the restriction to the eigenvalue axis of
$H(r,r^\prime)$; the latter is defined uniquely by the following
properties ---
\smallskip $$\partial^2_rH(r,r^\prime)=
4\pi N\rho(\lambda)\delta(x)H(r,r^\prime)+\delta(r-r^\prime)\eqno (19a)$$
$$H(r,r^\prime)=H(r^\prime,r)\eqno (19b)$$
$$H(r,r^\prime)\approx H(\infty,r^\prime)+O({1\over{\vert r
\vert}}) \eqno (19c)$$
as $\vert r\vert\rightarrow\infty$, where $H(\infty,r^\prime)$ is a finite
function of $r^\prime$.\par
By integrating eq.(19a) over $r$ with measure $\int d^2r$ and using
(19c), we find a fourth property:
$$\int{H(\lm,\lm^\pr)}\rho(\lambda)d\lambda=-{1\over{4\pi N}}\eqno (19d)$$
For actual calculations, only the one-dimensional restriction {$H(\lm,\lm^\pr)$}
\ is required. For ${\lm^\pr}\approx\lambda$, $H$ is
dominated by the logarithmic
singularity of the free $2d$ Green's function:
$${H(\lm,\lm^\pr)}\approx{1\over{2\pi}}\ln\vert\lambda-{\lm^\pr}\vert+\; regular
\quad({\lm^\pr}\approx\lambda)\eqno (19e)$$ \par
Equation (17) is our central tool for evaluating physical quantities
using the conjugate-field formalism. We shall refer to the last,
Wick-expanded factor as the `Feynman diagrams', since they go beyond
the semiclassical determinant factor\foot{In the sense that they involve
the interaction piece of the quantum action.}
. \par The only remaining divergences in eq.(17) are $UV$ ones, and
they appear in two places: the $\epsilon^{-N}$ factor in $Z_{classical}$
(see eq.(5a)), and the determinant factor
\foot{Superficially, it appears that the normal-ordering contributions
to the Feynman-graph factor are also divergent, due to (19e); but
these normal-ordering divergences manifestly cancel to all orders,
as we shall show below.}~.
But these two divergences cancel, as we now show. Using eq.(13)
and formally expanding in powers of the free massless propagator,
we find:
$$\left.\eqalign{
\ln\bigl[{{\det K}\over{\det(-\partial^2)}}\bigr]^{-{1\over 2}}=&
-{1\over 2}\tr\ln\bigl[K/(-\partial^2)\bigr]\cr &= -2\pi N\int
d\lambda\rho(\lambda)\bigl({1\over{-\partial^2}}\bigr)_{\lambda\lm}+\; UV\; finite
}\right.\eqno (20)$$
This is ill defined. But if we use the $UV$ regularization eq.(2a),
the $\delta(\lambda-{\lm^\pr})$ term in $K(r,r^\prime)$ is smeared, and (20)
becomes
$$\ln\bigl[{{\det K}\over{\det(-\partial^2)}}\bigr]^{-{1\over 2}}=
N\ln\epsilon+\; UV\; finite\eqno (20a)$$
and hence the singular $\epsilon$-dependence drops out of eq.(17), as
claimed.
\par The remainder of this paper deals mostly with properties of the
functions $\rho(\lambda)$ and {$H(\lm,\lm^\pr)$}, especially in the double scaling
limit, and with the summation of the Feynman graphs in eq.(17). The
evaluation of $\det K$, which is done using heat-kernel methods, will
be reported on elsewhere.\par To get an idea what the expressions for
Feynman graphs look like, we record the lowest-order terms in the Wick
expansion:
$$\left.\eqalign{
\ln\langle e^{-S_i\{q\}}&\rangle=\ln\langle\lbrack\int\rho(\lambda)d\lambda
e^{iq(\lambda)}\rbrack^N\exp\bigl({N\over 2}\int\rho(\lambda)d\lambda q(\lambda)^2\bigr)
\rangle\cr &=
{N\over 8}\{\int d\tau\lbrack\langle q(\tau)^2\rangle-\int d\tau^\prime\langle
q(\tau^\prime)^2\rangle\rbrack^2\cr &
-2\int\int d\tau d\tau^\prime\langle q(\tau)
q(\tau^\prime)\rangle^2+\dots\}
}\right.\eqno (21)$$
In eq.(21), the leftmost equality is exact, and we have introduced a
useful new variable $\tau(\lambda)$:
$$\tau(\lambda)\equiv\int_{-\infty}^\lambda\rho(\mu)d\mu\eqno (21a)$$
which ranges from $0$ to $1$. We will sometimes denote the arguments
of a function of $\lambda$ by $\tau$ instead
\foot{In deriving the expansion (21), we made use of the fact that
$\int d\tau q(\tau)=0$, by eqs.(14),(21a).}
. \par The terms displayed in the expansion on the RHS of (21) are those
involving only two propagators. Using eq.(18), we rewrite these
Feynman-graph contributions to the free energy, thus:
$$\left.\eqalign{
\ln\langle e^{-S_i\{q\}}\rangle
=&2\pi^2N\{\int d\tau\lbrack H(\tau,\tau)-\int d{\tau^\pr} H({\tau^\pr},{\tau^\pr})\rbrack^2
+{1\over{8\pi^2N^2}}\cr &
-2\int\int d\tau d{\tau^\pr} H(\tau,{\tau^\pr})^2+O(H^3)\}
}\right.\eqno (21b)$$
The first term in the curly brackets is the normal-ordering contribution
to this order. $H(\tau,\tau)$ is logarithmically divergent, but this
divergence is $\tau$-independent, so $H(\tau,\tau)-\int d{\tau^\pr}
H({\tau^\pr},{\tau^\pr})$ is $UV$ finite
\foot{This can be shown rigorously by making use again of the $UV$
regularization procedure, eq. (2a).}
{}.
The other two-propagator graph in (21b) is manifestly finite. Note that
the Feynman-graph contributions to the free energy are not local. Indeed,
we started from a nonlocal action for the $A$ field, so this is to be
expected -- the cluster expansion does not hold.\par
In order to evaluate Feynman graphs, we must solve for the propagator
$H$. Before doing so, however, let us show how to sum the normal-ordering
contributions to all orders. This is easy to do: from the leftmost
equality in (21), we obtain
$$\langle e^{-S_i\{q\}}\rangle=\langle\{\int d\tau\exp(-w(\tau))
:e^{iq(\tau)}:\}^N\exp\lbrack{N\over 2}\int d\tau:q(\tau)^2:\rbrack\rangle
\eqno (22)$$
where $w$ is a finite function, defined as follows:
$$w(\tau)\equiv w_1(\tau)-\int d{\tau^\pr} w_1({\tau^\pr})\eqno (22a)$$
$$w_1(\tau(\lambda))\equiv\lim_{{\lm^\pr}\rightarrow\lambda}(2\pi{H(\lm,\lm^\pr)}-\ln\vert
\lambda-{\lm^\pr}\vert)\eqno (22b)$$
$w_1(\tau)$ is finite by virtue of eq.(19e).\par
We now turn to the task of computing {$H(\lm,\lm^\pr)$}. As was the case with
the $2d$ equation of motion (7), the two-dimensional differential
equation (19a) can be converted into a {\it one-dimensional}
integro-differential equation, by treating the RHS as a source term.
This new equation is
$${\partial\over{\partial\lambda}}{H(\lm,\lm^\pr)}={1\over{2\pi}}{1\over{\lambda-{\lm^\pr}}}+
2N{\cal H}_\lambda\bigl(\rho(\lambda)H(\lambda,{\lm^\pr})\bigr)\eqno (23)$$
where the principal value is understood in the first term, and the
subscript to the Hilbert transform indicates that the transform
acts on $\lambda$, with ${\lm^\pr}$ held fixed. When the conditions (19b-c)
are imposed, eq.(23) has a unique solution.\par It is possible to derive
from (23) a $1d$ {\it differential equation} for $H$, similarly to the
procedure that led from eqs.(9) to eqs.(10). Namely, eq.(23) is
differentiated w.r.t. $\lambda$, then used again, and the properties of
the Hilbert transform (listed in the appendix) are used. In addition,
eq.(9b) for $\rho$ is used. The result of these manipulations is
\foot{We use the $\tau$ variable, defined in (21a), with
${\tau^\pr}=\tau({\lm^\pr})$, $\tau=\tau(\lambda)$.}:
$$\left.\eqalign{
\{{{\partial^2}\over{\partial\tau^2}}+4\pi^2N^2\}{H(\lm,\lm^\pr)}=&
-\pi N\delta(\tau-{\tau^\pr})+{1\over{\rho^2(\lambda)}}Q(\lambda\vert{\lm^\pr})\cr&-
{1\over{2\pi\rho^2(\lambda)}}{\partial\over{\partial{\tau^\pr}}}\bigl
({{\rho({\lm^\pr})}\over{\lambda-{\lm^\pr}}}\bigr)
}\right.\eqno (24)$$
where the principal value is again understood in the last term. Here
$Q(\lambda\vert{\lm^\pr})$ is a polynomial in $\lambda$, with coefficients that
are one-sided moments\foot{That is, moments w.r.t. one of the two
variables.}
of $H$ with measure $\int d\tau$:
$$Q(\lambda\vert{\lm^\pr})\equiv{N\over{2\pi}}{{V^\prime(\lambda)-V^\prime({\lm^\pr})}\over
{\lambda-{\lm^\pr}}}+2N^2\int\rho(\mu)d\mu{{V^\prime(\lambda)-V^\prime(\mu)}\over
{\lambda-\mu}}H(\mu,{\lm^\pr})\eqno (24a)$$
and these moments are to be determined self-consistently, just as
for the moments $m_n$ of $\rho$, which entered the
Schr\"odinger eq.(10a) through the polynomial $P(\lambda)$.\par
Denote these one-sided moments as follows:
$$h_n(\mu)\equiv\int\lambda^n\rho(\lambda)d\lambda H(\lambda,\mu)\eqno (24b)$$
By (19d) we have, $$h_0(\mu)=-{1\over{4\pi N}}\eqno (24c)$$
Restricting again to the quartic potential, we obtain:
$$Q(\lambda\vert{\lm^\pr})={1\over{2\pi}}4gN\{\lbrack{\lm^\pr}^2+4\pi Nh_2({\lm^\pr})\rbrack
+\lambda\lbrack{\lm^\pr}+4\pi Nh_1({\lm^\pr})\rbrack\}\eqno (25)$$ \par
The LHS and the first term on the RHS of (24) constitute the Green's equation
for the correlator of a quantum-mechanical harmonic oscillator, but
the other, nonlocal source terms on the RHS spoil this simple picture.
As with the eigenvalue density, the $1d$ differential equation is
somewhat weaker than the $1d$ integro-differential equation: the latter,
however, is equivalent to the {\it $2d$} Green's equation (19a), once
the boundary condition (19c) is imposed.\par
The equation (24) is linear, and so can be readily solved in terms of
$Q$ (which however is itself unknown). The general solution is:
$$\left.\eqalign{
{H(\lm,\lm^\pr)}=&A({\lm^\pr})\cos 2\pi N\tau +B({\lm^\pr})\sin 2\pi N\tau-{1\over 2}
\theta(\tau-{\tau^\pr})\sin 2\pi N(\tau-{\tau^\pr})\cr &+
{1\over{4\pi^2N}}\int_0^\lambda{{d\mu}\over{\rho(\mu)}}\sin 2\pi N(\tau-
\tau(\mu))\lbrack 2\pi Q(\mu\vert{\lm^\pr})+{\partial\over{\partial{\tau^\pr}}}\bigl(
{{\rho({\lm^\pr})}\over{{\lm^\pr}-\mu}}\bigr)\rbrack
}\right.\eqno (26)$$
where $A$,$B$ are free functions and $\theta$ is the step function.
The symmetry of {$H(\lm,\lm^\pr)$}\ determines $A$ and $B$ up to three real
constants; these constants, as well as $Q(\lambda\vert{\lm^\pr})$, can be
determined by making use of eq.(23).\par
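One can verify directly that the step-function term in (26) by itself
reproduces the delta-function source in (24):
$$\{{{\partial^2}\over{\partial\tau^2}}+4\pi^2N^2\}\bigl\lbrack -{1\over 2}
\theta(\tau-{\tau^\pr})\sin 2\pi N(\tau-{\tau^\pr})\bigr\rbrack
=-\pi N\delta(\tau-{\tau^\pr})$$
Indeed, the first $\tau$-derivative equals
$-\pi N\theta(\tau-{\tau^\pr})\cos 2\pi N(\tau-{\tau^\pr})$ (the
$\delta\cdot\sin$ term vanishes), the second derivative then equals
$-\pi N\delta(\tau-{\tau^\pr})+2\pi^2N^2\theta(\tau-{\tau^\pr})
\sin 2\pi N(\tau-{\tau^\pr})$, and the $\theta\sin$ piece cancels against
$4\pi^2N^2$ times the original term.\par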
The formalism for evaluating the classical solution(s), propagator
and quantum corrections is rather complicated for
finite $N$. Fortunately,
however, massive simplifications occur in the ${1\over N}$ expansion,
and the formalism simplifies even further when the couplings approach
criticality and the d.s.l. (double scaling limit)
is taken. We next turn to a discussion of this limit.
\chapter{The Double Scaling Limit.}
In this section, we will describe the procedure for performing the
double scaling limit ({\it d}.{\it s}.{\it l}.) in our conjugate-field formalism.
We leave out many details, to be included in a forthcoming publication.
\par Let us specialize to the quartic potential, eq.(10d), and therefore
to the $k=2$ critical model (pure gravity). The plan of the section is
as follows. In part 4.a, the nature of the planar limit and the
{\it d}.{\it s}.{\it l}.\ for
the model is reviewed. Then the integro-differential and nonlinear
Schr\"odinger equations for a classical solution, $\rho$, are used
to find the string-perturbative expansions for the moments $m_n$ of
$\rho$ (defined in (10f)). These expansions are unique; in fact, they
are unique for any critical potential, provided attention is
restricted to classical solutions that are single-band in the planar
limit. In addition, the WKB approximation for $\rho$ in the region
exterior to the Dyson sea, is found.\par In part 4.b we discuss
the perturbative corrections to $\rho$ inside the sea, and study the
details of $\rho(\lambda)$ in the transition regions at the edges of the
sea. In part 4.c we describe how the same techniques, when applied
to the propagator $H$, yield the perturbative expansion for {$H(\lm,\lm^\pr)$}
\ in various regions of the $(\lambda,{\lm^\pr})$ plane.
The formulae of section
3 for quantum corrections are seen to greatly simplify in the {\it d}.{\it s}.{\it l}.,
allowing the perturbative series for physical quantities (specific
heat, etc.) to be found. In particular, the normal-ordered
Feynman diagram expansion terminates after a small number of terms.
\par Since the critical quartic potential is unbounded from below, all
our results up to this point were obtained by continuing from the well
defined $g>0$ regime, to $g\approx g_c<0$. This procedure is
satisfactory only for string perturbation theory. In part 4.d, we
look at the Schr\"odinger potential $V_1$ {\it directly} for
$g\approx g_c$; $V_1$ is seen to acquire a small `second sea' in the
transition region. This sea is finite in shape when $V_1$ and $\lambda$
are appropriately double-scaled.\par
Our formalism is ill defined in this critical coupling regime,
like pure gravity
itself, since there is no normalizable $\rho(\lambda)$. Nevertheless, we
investigate the behavior of $\rho$ in the new transition region,
and find that the second sea has a population suppressed by the
expected $\exp(-const\;{1\over\kappa})$,
where $\kappa$ is the string
coupling. The constant in the exponential agrees with that obtained
in other approaches.
Thus, $\rho$ can be thought of as an instanton. As such, it has
two unusual aspects. Firstly, the tunneling factor
occurs in the {\it field}
configuration as well as in the classical action. Secondly,
a single conjugate-field configuration seems to describe
both the non-tunneling and tunneling
eigenvalue configurations\foot{We thank S. Shenker for discussions
concerning this latter point.}
. This configuration,
however, should be viewed only as a warm-up exercise for non-perturbative
calculations in our formalism. The meaningful calculations are to be
done for a potential $V$ bounded from below, and would thus most likely
not apply to any model with $k=2$ behavior\ \lbrack\deo\rbrack.\bigskip
\centerline{\bf 4.a. WKB and Perturbation Theory for $\rho$.}
{}From eqs.(10), we find the Schr\"odinger\ potential for the case of quartic
$V$\foot{Our notation, in particular our definition of $a(g)$, differs
slightly from that of ref.\ \lbrack\biz\rbrack\ .}:
$$V_1(\lambda)=4g^2(\lambda^2-a^2)(\lambda^2-b^2)^2-4g\delta m_2+{1\over{2N}}(1+
12g\lambda^2)\eqno (27a)$$
where: $$a^2={1\over{6g}}(\sqrt{1+48g}-1)\eqno(27b)$$
$$b^2=-{1\over{4g}}-{1\over 2}a^2\eqno(27c)$$
and $\delta m_2$ is the deviation of the second moment $m_2$ from its
planar-limit value:
$$\delta m_2\equiv
m_2+{1\over{36g}}-{{a^2}\over{144g}}(1+48g)\eqno(27d)$$
It will shortly be seen that $\delta m_2$ is $O({1\over N})$ for fixed
$g$, in the planar limit.\par For $g>0$, $b^2$ is negative, so $V_1(\lambda)$
vanishes only at two points, which are $\lambda=\pm a$ in the planar limit;
these points are the edges of the Dyson sea. We define $b$ such that
${\rm Im}\; b>0$. The $k=2$ critical point is at
$$g=g_c=-1/48\eqno(28a)$$
and for $g\approx g_c$, $b^2$ is positive, and in
fact $b^2\approx a^2\approx 8$
. As explained above, we will employ throughout most of section 4
(except part 4.d) the well-defined procedure of calculating at $g>0$
and then continuing to criticality. The double scaling limit for this
theory, consists in simultaneously letting $N\rightarrow\infty$,
$g\rightarrow g_c$ while holding the string coupling fixed
\ \lbrack 1-8\rbrack\ :
$$g_{string}=\kappa\equiv{1\over N}(g-g_c)^{-5/4}\eqno(28b)$$
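The critical-point statements above are easy to check numerically. A minimal Python sketch (the function names are ours): for $g>0$ one finds $b^2<0$, while as $g\rightarrow g_c$ both $a^2$ and $b^2$ approach the value $8$ quoted above.

```python
import math

def a2(g):
    # eq.(27b): a^2 = (sqrt(1 + 48 g) - 1)/(6 g)
    return (math.sqrt(1 + 48*g) - 1)/(6*g)

def b2(g):
    # eq.(27c): b^2 = -1/(4 g) - a^2/2
    return -1/(4*g) - a2(g)/2

g = -1.0/48 + 1e-12  # just above the critical coupling of eq.(28a)
print(b2(0.01))      # negative for g > 0
print(a2(g), b2(g))  # both approach 8 as g -> g_c
```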
\par Next, consider our nonlinear Schr\"odinger\ equation, (10a), in a region
of $\lambda$ outside the sea; more precisely, $\vert\lambda\vert>a$,
and $\vert\lambda\pm a\vert$ are held fixed as $N$ increases. The
$\pi^2\rho^2$ (nonlinear) term is then exponentially suppressed, by
virtue of eq.(9e), and in consequence $\rho^{-1/2}$ approximately
satisfies a {\it linear} Schr\"odinger\ equation:
$$\bigl[-{1\over{N^2}}{{\partial^2}\over{\partial\lambda^2}}+V_1(\lambda)\bigr]
(\rho^{-1/2})\approx 0\quad(\; outside\; sea\;)\eqno(29a)$$
This is easily solved via an asymptotic WKB expansion: ($V_1>0$ outside
sea)
$$\left.\eqalign{
\rho(\lambda)=&\vert\sqrt{V_1(\lambda)}\vert\exp\bigl(-2N\int^\lambda d\mu\sqrt
{V_1(\mu)}\bigr)\{const\; +O({1\over N})\cr &+
O(e^{\scriptstyle{
\scriptstyle -2N\int^\lambda
d\mu\sqrt{V_1(\mu)}}})\} \quad(\; outside\; sea\;)
}\right.\eqno(29b)$$
where $V_1(\lambda)$ is given by eq.(27a)\foot{It is easy to check that
(29b) agrees with eq.(9e).}
. The corrections on the RHS of eq.(29b) are of two kinds: the
$O({1\over N})$ corrections constitute the usual, linear-WKB asymptotic
expansion, whereas the exponentially-suppressed corrections are due to
the nonlinearities of the exact (10a). Since our $V(\lambda)$ is symmetric,
so is $V_1$, and the branch we choose for $\sqrt{V_1(\mu)}$ in the
exponent, is as follows:
$$ {\rm sgn}\sqrt{V_1(\lambda)}\equiv{\rm sgn}\lambda\eqno(29c)$$
This choice ensures that $\rho(\lambda)$ is symmetric, and in
addition can be
continued to an analytic function throughout the complex $\lambda$ plane,
except for a cut along the Dyson sea $(-a,a)$.\par
Next, we take $\vert\lambda\vert >>1$. The exponentially suppressed
corrections in (29b) can be neglected, and we obtain from eqs.(27a),(29b)
an asymptotic expansion for the logarithmic derivative of $\rho$:
$$\left.\eqalign{
{{\rho^\prime(\lambda)}\over{N\rho(\lambda)}}\approx &-4g(\lambda^2-b^2)
\sqrt{\lambda^2-a^2}+{{2\delta m_2}\over{(\lambda^2-b^2
)\sqrt{\lambda^2-a^2}}}-{1\over{4gN}}{{1+12g\lambda^2}\over
{(\lambda^2-b^2)\sqrt{\lambda^2-a^2}}}\cr &+{1\over N}({\lambda\over{\lambda^2-a^2}}+
{{2\lambda}\over{\lambda^2-b^2}})+O({1\over{N^2}})
}\right.\eqno(29d)$$
This expression can now be expanded as a Laurent series in ${1\over\lambda}$,
and compared with the corresponding expansion resulting from eqs.(9)
to yield the eigenvalue moments, $m_n$. The easiest way to compare the
two series term by term, is to continue both of them to complex $\lambda$,
with $\sqrt{\lambda^2-a^2}$ defined as described after (29c). We then
multiply
both (9b) and (29d) by $\lambda^n$, $n$ any integer, and integrate over $d\lambda$
along a closed contour with large $\vert\lambda\vert$. For negative $n$,
the coefficients agree identically. In addition, all odd moments vanish
by symmetry\foot{This symmetry can be shown to hold to all orders
in $1/N$ perturbation theory.}, whilst for even, positive $n$ values
we find:
$$\left.\eqalign{
m_n=&{{4g}\over\pi}\int_{-a}^a\lambda^n d\lambda (\lambda^2-b^2)\sqrt{a^2-\lambda^2}+
{1\over{8\pi gN}}\int_{-a}^a\lambda^n d\lambda{{8gN\delta m_2-(1+12g\lambda^2)}\over
{(\lambda^2-b^2)\sqrt{a^2-\lambda^2}}}\cr &+
{{a^n}\over{2N}}+d_0 b^n+O({1\over{N^2}})
\quad(\; n\; even\;)
}\right.\eqno(30a)$$
with $$d_0={1\over{2N}}+{1\over{b\sqrt{b^2-a^2}}}({{\delta
m_2}\over 2}
-{{1+12gb^2}\over{16gN}})\eqno(30b)$$
For $n=0$ and $2$, this just reproduces eqs.(9d)
and (27d) respectively,
up to $O(1/N^2)$ terms, so we gain no new information. For higher $n$,
eqs.(30) give us all the moments in terms of $\delta m_2$, to order
$1/N$. How is $\delta m_2$ to be determined, then? It is clear that
$d_0$ must vanish to this order in $1/N$, since at $N>>1$ the support
of $\rho(\lambda)$ is the Dyson sea, or very near it, and the $b^n$
term on
the RHS of (30a) cannot occur for such a distribution. Thus $d_0=
O({1\over{N^2}})$, which gives us the requisite missing information
\foot{The ${{a^n}\over{2N}}$ contribution in (30a), comes from the
edges of the sea; the remaining two terms come from its bulk.}
\foot{If $d_0\not=0$, ${\cal H}(\rho(\lambda))$
must have poles at $\lambda=\pm b$,
to first order in $1/N$. This is actually
impossible for {\it any} $\rho(\lambda)$
for which the Hilbert transform is well-defined, since $b$ is
imaginary.}
:
$$N\delta m_2={{1+12gb^2}\over{8g}}-b\sqrt{b^2-a^2}+O({1\over N})
\eqno(30c)$$
so $\delta m_2$ is indeed $O(1/N)$. Substituting this back into (30a-b)
yields all nonvanishing moments $m_n$, to order $1/N$. \par It is
straightforward to extend this technique to any desired order in
$1/N$, and to take the {\it d}.{\it s}.{\it l}.\ limit (28b), as well. We thus see that the
perturbative genus expansion for $\rho(\lambda)$, or at least for the set
of all its moments, is unique and can be easily determined, as claimed.
Furthermore, the technique extends to any potential $V$, as long as we
have a planar limit of $\rho$ to expand about. When this limit is
restricted to have a single band, it is unique, and so the perturbative
expansion about it will also be unique.
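The step from $d_0=O(1/N^2)$ to eq.(30c) is elementary algebra on eq.(30b); a short symbolic check (the variable names are ours):

```python
import sympy as sp

g, N, a, b, dm2 = sp.symbols('g N a b delta_m2')
# eq.(30b)
d0 = sp.Rational(1, 2)/N + (dm2/2 - (1 + 12*g*b**2)/(16*g*N))/(b*sp.sqrt(b**2 - a**2))
# solve d0 = 0 for delta_m2 and compare with eq.(30c)
sol = sp.solve(sp.Eq(d0, 0), dm2)[0]
target = ((1 + 12*g*b**2)/(8*g) - b*sp.sqrt(b**2 - a**2))/N  # eq.(30c)
print(sp.simplify(sol - target))  # -> 0
```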
\bigskip{\bf 4.b. Sea Interior and Transition Region.}
Consider the sea interior, namely the region $\vert\lambda\vert<a$ with
$a-\vert\lambda\vert$ fixed (in either the planar- or the double-scaling
limits). The nonlinear Schr\"odinger\ equation (10a)
can be solved via the WKB
approximation in this region, as was done in the exterior region. In
the interior, however, the linearized WKB is of no use. This is because
the planar limit we wish to expand about is given by eq.(10g), and thus
the nonlinearity is crucial here.\par
The correct procedure is as follows.
Defining new variables\foot{Recall that $V_1<0$ in the sea interior.},
$$t\equiv N\int_{-a}^\lambda\sqrt{-V_1(\mu)}d\mu\eqno(31a)$$
$$\rho(\lambda)\equiv{1\over\pi}\sqrt{-V_1(\lambda)}f(t)^{-2}\eqno(31b)$$
we find the differential equation,
$$f^{\prime\prime}+f-f^{-3}=O({1\over{N^2}})\eqno (31c)$$
In the planar limit, the solutions of (31c) are elliptic functions.\par
These planar solutions have two free real parameters; this is just the
ambiguity discussed at the end of section 2, and is resolved by the
integro-differential equation and the consistency conditions. Let us
see how this works. Since we expect $f(t)\rightarrow 1$ in the planar
limit (by (10g) and (31b)), we need only consider $f\approx 1$; then (31c)
informs us that
$$f(t)=1+\epsilon\cos 2\bar t+({3\over 4}-{1\over 4}\cos 4\bar t)
\epsilon^2+O(\epsilon^3)+O({1\over{N^2}})\eqno(32a)$$
where $\epsilon$ is a small unknown oscillation amplitude, and $\bar t=
t+\varphi$, with $\varphi$ the constant phase of the oscillation. By
employing the consistency conditions
for $m_0=1$ and $m_2$, it can be shown
that $$\epsilon=O({1\over N})\eqno(32b)$$ and this turns out to mean
that the oscillations can be ignored in the {\it d}.{\it s}.{\it l}.\ ; thus we may use
$$f\approx 1\eqno(32c)$$ Next, we study the transition regions at the edges
of the sea. By symmetry, it suffices to investigate the $\lambda\approx a$
transition region. Since $V_1(\lambda)$ has a first-order zero at $\lambda=a$
in the planar limit, we may approximate it by a linear function in the
transition region, since the width of this region vanishes in the planar
limit. We rescale $\lambda$ and $\rho$ as follows:
$$\lambda-a=N^{-2/3}\lbrack 8ag^2(b^2-a^2)^2\rbrack^{-1/3}y\eqno(33a)$$
$$\rho={1\over\pi}N^{-1/3}\lbrack 8ag^2(b^2-a^2)^2\rbrack^{1/3}\eta(y)^{-2}
\eqno(33b)$$
The eq.(10a) then becomes
$$(-{{\partial^2}\over{\partial y^2}}+y+\eta^{-4})\eta\approx 0\eqno(33c)$$
The boundary condition for this differential equation is furnished
by the approximate solution inside the sea, eqs.(31b) and (32c), which
become in terms of the rescaled variables,
$$\eta(y)\approx(-y)^{-1/4}\quad at\; y<0,\;\vert y\vert>>1\eqno(33d)$$
As $N\rightarrow\infty$, the rescaled equation (33c) becomes exact, and
(33d) becomes an exact boundary condition in the $y\rightarrow\infty$
limit. On the exterior side of the transition region, the asymptotic
behavior is
$$\eta(y)\approx const\; y^{-1/4}e^{(2/3)y^{3/2}} \quad at\; y>0,\;\vert
y\vert>>1\eqno(33e)$$
in agreement with the exterior WKB solution, (29b).\par
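Both asymptotic forms can be checked directly against eq.(33c): substituting (33d) (written in terms of $s=-y>0$) leaves only a subleading $O(s^{-9/4})$ residual, while substituting (33e) leaves a relative $O(y^{-3})$ residual once the exponentially small $\eta^{-4}$ term is dropped. A symbolic sketch:

```python
import sympy as sp

s, y = sp.symbols('s y', positive=True)

# interior side, eq.(33d): eta = (-y)^(-1/4); with y = -s, d^2/dy^2 = d^2/ds^2
eta_in = s**sp.Rational(-1, 4)
res_in = -sp.diff(eta_in, s, 2) - s*eta_in + eta_in**(-3)
print(sp.simplify(res_in))  # the O(s^(3/4)) pieces cancel; only -5/(16 s^(9/4)) survives

# exterior side, eq.(33e): eta ~ y^(-1/4) exp((2/3) y^(3/2)); eta^(-4) is negligible
eta_out = y**sp.Rational(-1, 4)*sp.exp(sp.Rational(2, 3)*y**sp.Rational(3, 2))
res_out = sp.simplify((-sp.diff(eta_out, y, 2) + y*eta_out)/eta_out)
print(res_out)  # subleading relative to the O(y) terms
```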
For fixed coupling $g$, the width of the transition region is $O(N^
{-2/3})$. When $g$ is, instead, continued to the critical point $g_c$
in accordance with the {\it d}.{\it s}.{\it l}., we find from eqs.(27)
$$b^2-a^2=O((g-g_c)^{1/2})\eqno(34a)$$
so by (33a) and (28b), the width of the transition region is of order
$(g-g_c)^{1/2}\kappa^{2/3}$. Using eq.(33b), we also find that the
normalized eigenvalue population in the transition region, is of order
$$\int_{\scriptstyle transition}\rho d\lambda=O({1\over N}),\eqno(34b)$$
either for $g$ fixed, or in the {\it d}.{\it s}.{\it l}.. This means that of the original
$N$ matrix eigenvalues, on the order of {\it one} eigenvalue is
likely to inhabit the transition region.\bigskip
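With the substitution (31b), the interior equation takes the form $f^{\prime\prime}+f-f^{-3}=O(1/N^2)$, the direct analogue of eq.(33c) (whose bracket contains $\eta^{-4}$ acting on $\eta$); the oscillatory expansion (32a) can then be verified order by order in $\epsilon$. A symbolic sketch:

```python
import sympy as sp

tb, eps = sp.symbols('tbar epsilon')
# eq.(32a), through O(eps^2)
f = 1 + eps*sp.cos(2*tb) + (sp.Rational(3, 4) - sp.cos(4*tb)/4)*eps**2
residual = sp.diff(f, tb, 2) + f - f**(-3)
# the residual vanishes through O(eps^2)
print(sp.simplify(sp.expand_trig(sp.series(residual, eps, 0, 3).removeO())))  # -> 0
```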
{\bf 4.c. Quantum Corrections in d.s.l.} In part 4.a, we used both the
$1d$ differential equation and the integro-differential equation for
$\rho(\lambda)$, to find the perturbative expansion for the moments $m_n$
of $\rho$. A similar procedure can be employed for the propagator
{$H(\lm,\lm^\pr)$}, by using the $1d$ Green's equation (24), and the corresponding
integro-differential equation (23). In this case, one finds $1/N$
expansions for the one-sided moments $h_n(\lambda)$, defined in (24b). The
moments $h_1$ and $h_2$ can then be substituted in eq.(25). We find,
to the leading approximation for large $N$,
$$Q(\lambda\vert{\lm^\pr})\approx{1\over{2\pi}}{{\partial}\over{\partial{\tau^\pr}}}\bigl(
\rho({\lm^\pr}){{\lambda+{\lm^\pr}}\over{b^2-{\lm^\pr}^2}}\bigr)\eqno(35)$$
This, in turn, can be used in eq.(26). The unknown functions $A$,$B$
are determined as explained in section 3.\par
The results of this analysis, are as follows\foot{The details will be
presented in a separate publication, as will the computation of the
determinant factor $\det K$.}. When $\lambda$ and ${\lm^\pr}$ are both interior
to the sea, and $\lambda\not={\lm^\pr}$, we have
$${H(\lm,\lm^\pr)}=O({1\over N})\quad
\vert\lambda\vert<a,\;\vert{\lm^\pr}\vert<a\eqno(36a)$$
The function $w_1(\lambda)$ (eqs.(22)), the regular piece of {$H(\lm,\lm^\pr)$}\ as
${\lm^\pr}\rightarrow\lambda$, is $O(1)$, and in
the sea interior it is approximated
thus:
$$w_1(\lambda)\approx\ln\rho(\lambda)+\; const\quad(\vert\lambda\vert<a)\eqno(36b)$$
In other regions of $\lambda$ and ${\lm^\pr}$, these functions have different
behaviors. For instance, when $\lambda$,${\lm^\pr}$ are both in the same
transition region,
$${H(\lm,\lm^\pr)}=O(1)\quad(\;\lambda\approx a,\;{\lm^\pr}\approx a\;)\eqno(36c)$$
However, the contribution to Feynman diagrams from vertices in a
transition region, is still suppressed by a power of $1/N$ for each
such vertex, due to eq.(34b). Combining the behavior of $\rho$ and $H$
in the various regions with eq.(22), we find that only the first few
Feynman diagrams survive in the {\it d}.{\it s}.{\it l}.. We are referring to diagrams
resulting from contractions among normal-ordered vertices; recall that
an infinite number of normal-ordering contractions have been summed to
obtain eq.(22).
\bigskip{\bf 4.d. Instanton Configuration in Direct d.s.l.}
Rather than continuing $V(\lambda)$ to criticality {\it after} solving for
$\rho$ and $H$, it is instructive to attempt taking the {\it d}.{\it s}.{\it l}.\ directly.
This will demonstrate what nonperturbative effects look like in the
conjugate-field formalism, although a trustworthy nonperturbative
calculation requires a potential $V(\lambda)$ bounded from below.\par
When we use the quartic potential (10d) with negative $g$, $b^2$ is
positive (part 4.a). As $g$ approaches $g_c$ from the origin along the
real axis ($g\approx g_c$, $g>g_c=-1/48$), $b$ approaches $a$ thus:
$$b>a\quad,\; b-a=O((g-g_c)^{1/2})\eqno(37)$$
Thus by (27a), $V_1$ has two small `seas', concave regions just outside
the main Dyson sea, where it is negative. Unlike $V(\lambda)$, $V_1$ {\it is}
bounded from below. Thus we can attempt to solve the nonlinear Schr\"odinger
\ equation (10a) near criticality, ignoring the fact that the solution
will not solve the integro-differential equation\foot{This is similar to
the Marinari-Parisi approach to rendering the
even-k models well-defined.}.
We shall continue to use eq.(30c) for $\delta m_2$, in eq.(27a). The
justification is that, assuming oscillations inside the sea are still
suppressed (eq.(32c)), substitution of (31b) in (10f) for $n=0$ and $n=2$
indeed yields (30c), at least to the approximation needed to take the
{\it d}.{\it s}.{\it l}.\foot{This argument, as opposed to the one in 4.a, does not use
information about large $\vert\lambda\vert$. Such information cannot be
trusted, as $V$ is unbounded from below there.}\foot{The transition
region contributes $O(1/N)$ to both $m_0$ and $m_2$, so to eliminate
these contributions we computed the subtracted $m_2-a^2m_0$.}
.\par We concentrate on
the behavior of $V_1(\lambda)$,$\rho(\lambda)$ in the new transition region;
we invoke symmetry again and concentrate on the $\lambda\approx a$ region.
To that end, we magnify the region via a new rescaling, different from
(33):
$$\lambda-2\sqrt{2}=4\sqrt{3}(g-g_c)^{1/2}x\eqno(38a)$$
$$\rho={1\over\pi}(g-g_c)^{3/4}\zeta(x)^{-2}\eqno(38b)$$
where $2\sqrt{2}$ is the value of $a$ at $g=g_c$. The Schr\"odinger\ equation
in this region assumes the form,
$$\bigl[-{1\over{48}}\kappa^2{{\partial^2}\over{\partial x^2}}+\bigl({
{64\sqrt{8}}\over{\sqrt{3}}}(x+\sqrt{2})(x-{1\over{\sqrt{2}}})^2-\kappa{
{2\sqrt{2}}\over{3^{1/4}}}\bigr)+
\zeta^{-4}\bigr]\zeta\approx 0\eqno(38c)$$
This becomes exact in the {\it d}.{\it s}.{\it l}.. The old transition region occurs at
$x+\sqrt{2}=O(\kappa^{2/3})$, and is thus part of the new transition
region.
We are assuming that the string coupling
$\kappa$ is small, in order to isolate the leading nonperturbative
effect.\par
To the right of the old transition region, $V_1$ is positive for
$$-\sqrt{2}<x<{1\over{\sqrt{2}}}$$
and $\rho$ is exponentially suppressed. The extra new minimum of $V_1$
is at $x=1/\sqrt{2}$, and the new sea surrounding it, where $V_1<0$,
has width $\Delta x=O(\kappa^{1/2})$ and depth $O(\kappa(g-g_c)^{3/2})$.\par
The resulting {\it d}.{\it s}.{\it l}.\ solution for $\rho$ in the new transition region,
has the following properties. Between $x\approx -\sqrt{2}$ and $x\approx
{1\over{\sqrt{2}}}$, $V_1>0$ and $\rho$ tunnels in accordance with (29b).
$\rho$ is thus suppressed in the small new sea, with the WKB suppression
factor (up to prefactors which can be found)
$$ \Lambda^2\equiv\exp\bigl(-{{4\sqrt{6}}\over{5\kappa}}(48)^{5/4}\bigr)
\eqno(39a)$$
which is precisely the tunneling factor appearing in the nonperturbative
ambiguity for pure gravity, using the various previous approaches
\ \lbrack\npt\rbrack
\foot{In this connection, note that our normalization
for $\kappa$ differs from the one usually employed in the literature.}
.\par In the new sea, we find that $\zeta(x)$ has the following form:
$$\zeta(x)\approx ({3\over\kappa})^{1/4}\sqrt{8}e^{-2\sqrt{2}z^2}\bigl(
d_1\Lambda^{-1}-d_2\kappa^{1/2}\Lambda\int_0^ze^{4\sqrt{2}\bar z^2}d\bar z
\bigr)\eqno(39b)$$
where $z$ is yet a {\it third} rescaled eigenvalue variable, appropriate
for the extra sea:
$$x-{1\over{\sqrt{2}}}\equiv\kappa^{1/2}(48)^{-3/8}z\eqno(39c)$$
When the {\it d}.{\it s}.{\it l}.\ limit is taken, $\kappa$ small and $z$ held fixed, the
corrections to (39b) are higher powers of $\kappa$. In eq.(39b), $d_1$,$d_2$
are two pure numbers, obtained by matching (39b) at large and negative
$z$ with the WKB approximation, (29b), which holds between the two seas.
\par The configuration given by eqs.(38-39) can be interpreted as an
instanton. For well-defined models, there exist similar
instantons which are true classical solutions of the conjugate-field
theory. \par In the configuration discussed above, the center $z=0$
of the new sea is a local minimum of $\rho$. $\rho$ then increases with
$z$ for $z>0$. When $z$ is sufficiently large, one exits the new
sea and enters another $V_1>0$
region, where the exponentially suppressed terms in eq.(29b)
dominate for a while; this allows $\rho$ to continue to increase.
Eventually the leading term will again dominate, but if $\rho^2$
has by then increased to become comparable in magnitude to $V_1$,
a nonperturbative solution of (10a) is needed. We need not worry about
this exterior region, however, since in this model $\rho$ is not
a trustworthy configuration there.\bigskip
\chapter{Conclusions.}
We have presented a new field-theory formulation of $D=0$ matrix
models. The field is conjugate to the Jevicki-Sakita collective field,
i.e. conjugate to the density of matrix eigenvalues. The theory is
two dimensional, with an eigenvalue
coordinate and an auxiliary coordinate
that can be eliminated from the formalism. The action is nonlocal, but
the equation of motion is local, except for a self-consistency
condition. There is a unique or discretely labeled classical solution
for any well-defined potential. The equation for the classical eigenvalue
distribution is a modified version of the planar integral equation
of Bessis, Itzykson and Zuber, with an entropy term that smoothes the
edges of the Dyson sea and introduces higher-genus
effects already at the
classical level. Single-band classical solutions
are perturbatively unique. The classical distribution also satisfies
a nonlinear Schr\"odinger\ equation,
with a potential similar to, but different
from the one appearing in the Marinari-Parisi $D=1$ reformulation.
\par The classical solutions, and the
quantum corrections about them, are systematically calculable,
and all divergences (UV- and IR-) cancel manifestly to all orders.
The
normal-ordering graphs can be summed exactly to all orders. In the double
scaling limit, the formalism simplifies drastically. \par
The conjugate-field formalism can be used to systematically compute
string-nonperturbative effects. We demonstrate this for the ill-defined,
but simple, case of $k=2$ realized with a
quartic potential. In this case,
the classical distribution contains two small seas on either side of the
main Dyson sea. The population of the new seas
is exponentially suppressed
by the same tunneling factor as appears in other approaches.
A more complete presentation of the conjugate-field formalism will
be presented in a forthcoming publication.\bigskip
The notion of a {\it short star-product} for a filtered quantization $\mathcal A$ of a hyperK\"ahler cone was introduced by Beem, Peelaers and Rastelli in~\cite{BPR} motivated by the needs of 3-dimensional super\-conformal field theory (under the name ``star-product satisfying the truncation condition"); this is an algebraic incarnation of non-holomorphic $\mathop{\mathrm{SU}}\nolimits(2)$-symmetry of such cones. Roughly speaking, these are star-products which have fewer terms than expected (in fact, as few as possible). The most important short star-products are {\it nondegenerate} ones, i.e., those for which the constant term ${\rm CT}(a*b)$ of $a*b$ defines a nondegenerate pairing on $A=\operatorname{gr} \mathcal A$. Moreover, physically the most interesting ones among them are those for which an appropriate Hermitian version of this pairing is positive definite; such star-products are called {\it unitary}. Namely, short star-products arising in 3-dimensional SCFT happen to be unitary, which is a motivation to take a closer look at them.
In fact, in order to compute the parameters of short star-products arising from 3-dimensional SCFT, in~\cite{BPR} the authors attempted to classify unitary short star-products for even quantizations of Kleinian singularities of type $A_{n-1}$ for $n\le 4$. Their low-degree computations suggested that in these cases a unitary short star-product should be unique for each quantization. While the~$A_1$ case is easy (as it reduces to the representation theory of ${\rm SL}_2$),
in the $A_2$ case the situation is already quite interesting. Namely, in this case an even quantization depends on one parameter~$\kappa$, and
Beem, Peelaers and Rastelli showed that (at least in low degrees) short star-products for such a quantization are parametrized by another parameter $\alpha$. Moreover, they computed numerically the function $\alpha(\kappa)$ expressing the parameter of the unique {\it unitary} short star-product on the parameter of quantization \cite[Fig.~2]{BPR}, but a formula for this function (even conjectural) remained unknown.
These results were improved upon by Dedushenko, Pufu and Yacoby in~\cite{DPY}, who computed the short star-products coming from 3-dimensional SCFT in a different way. This made the need to understand all nondegenerate short star-products and in particular unitary ones less pressing for physics, but it remained a very interesting mathematical problem.
Motivated by~\cite{BPR}, the first and the last author studied this problem in~\cite{ES}. There they developed a mathematical theory of nondegenerate short star-products and obtained their classification. As a result, they confirmed the conjecture of~\cite{BPR} that such star-products exist for a~wide class of hyperK\"ahler cones and are parametrized by finitely many parameters. The main tool in this paper is the observation, due to Kontsevich, that nondegenerate short star-products correspond to nondegenerate twisted traces on the quantized algebra $\mathcal A$, up to scaling. The reason this idea is effective is that traces are much more familiar objects (representing classes in the zeroth Hochschild homology of $\mathcal{A}$), and can be treated by standard techniques of representation theory and noncommutative geometry. However, the specific example of type $A_{n-1}$ Kleinian singularities and in particular the classification of unitary short star-products was not addressed in detail in~\cite{ES}.
\looseness=1
The goal of the present paper is to apply the results of~\cite{ES} to this example, improving on the results of~\cite{BPR}. Namely, we give an explicit classification of nondegenerate short star-products for type $A_{n-1}$ Kleinian singularities, expressing the corresponding traces of weight $0$ elements (i.e.,~polynomials $P(z)$ in the weight zero generator $z$) as integrals $\int_{{\rm i}\mathbb R} P(x)w(x)|{\rm d}x|$ of $P(x)$ against a certain weight function $w(x)$. As a result, the corresponding quantization map sends monomials $z^k$ to $p_k(z)$, where $p_k(x)$ are monic orthogonal polynomials with weight $w(x)$ which belong to the class of {\it semiclassical orthogonal polynomials}. If $n=1$, or $n=2$ with special parameters, they reduce to classical hypergeometric orthogonal polynomials, but in general they do not. We~also determine which of these short star-products are unitary, confirming the conjecture of~\cite{BPR} that for even quantizations of $A_{n-1}$, $n\le 4$ a unitary star product is unique. Moreover, we find the exact formula for the function $\alpha(\kappa)$ whose graph is given in Fig.~2 of~\cite{BPR}:
\[
\alpha(\kappa)=\frac{1}{4}-\frac{\kappa+\frac{1}{4}}{1-\cos\big(\pi\sqrt{\kappa+\frac{1}{4}}\big)}.
\]
In particular, this recovers the value $\alpha\big({-}\frac{1}{4}\big)=\frac{1}{4}-\frac{2}{\pi^2}$ predicted in~\cite{BPR} and confirmed \mbox{in~\cite{DFPY,DPY}}.
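The limiting value quoted here follows from the small-argument expansion $1-\cos\big(\pi\sqrt{x}\big)\approx \pi^2x/2$; a quick numerical sketch (the function name is ours):

```python
import math

def alpha(kappa):
    # alpha(kappa) = 1/4 - (kappa + 1/4)/(1 - cos(pi*sqrt(kappa + 1/4)))
    x = kappa + 0.25
    return 0.25 - x/(1 - math.cos(math.pi*math.sqrt(x)))

# as kappa -> -1/4 the formula tends to 1/4 - 2/pi^2
print(alpha(-0.25 + 1e-8))
print(0.25 - 2/math.pi**2)
```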
It would be very interesting to develop a similar theory of positive traces for higher-dimen\-sio\-nal quantizations, based on the algebraic results of~\cite{ES}. It would also be interesting to extend this analysis from the algebra $\mc A$ to bimodules over $\mc A$ (e.g., Harish-Chandra bimodules, cf.~\cite{L}). Finally, it would be interesting to develop a $q$-analogue of this theory.
These topics are beyond the scope of this paper, however, and are subject of future research. For~instance, the~$q$-analogue of our results for Kleinian singularities of type A will be worked out by the second author in~a~forthcoming paper~\cite{K2}.
\begin{Remark} We show in Example~\ref{neq2} that for $n=2$ the theory of positive traces developed here recovers the classification of irreducible unitary spherical representations of ${\rm SL}_2(\mathbb C)$~\cite{V}. Moreover,
this can be extended to the non-spherical case if we consider
traces on Harish-Chandra bimodules over quantizations (with different parameters on the left and the right, in general) rather than just quantizations themselves. One could expect that a similar theory for higher-dimensional quantizations, in the special case of quotients of $U(\mathfrak{g})$ by a central character (i.e., quantizations of the nilpotent cone) would recover the theory of unitary representations of the complex reductive group $G$ with Lie algebra $\mathfrak{g}$. This suggests that the theory of positive traces on filtered quantizations of hyperK\"ahler cones may be viewed as a generalization of the theory of unitary Harish-Chandra bimodules for simple Lie algebras. A peculiar but essential new feature of this generalization (which may scare away classical representation theorists), is that a given simple bimodule may have more than one Hermitian (and even more than one unitary) structure up to scaling (namely, unitary structures form a cone, often of dimension $>1$), and that a bimodule which admits a unitary structure need not be semisimple.
\end{Remark}
\begin{Remark}
The second author studied the existence of unitary star-products for type $A_{n-1}$ Kleinian singularities in~\cite{K1} and obtained a partial classification of quantizations that admit a~unitary star-product. That paper also contains examples of non-semisimple unitarizable bimo\-dules. The present paper has stronger results: it contains a complete description of the set of~unitary star-products for any type $A_{n-1}$ Kleinian singularity.
\end{Remark}
The organization of the paper is as follows. Section~\ref{sec2} is dedicated to outlining the algebraic theory of filtered quantizations and twisted traces for Kleinian singularities of type A, following~\cite{ES}. In Section~\ref{sec3} we introduce our main analytic tools, representing twisted traces by contour integrals against a weight function. In this section we also use this weight function to study the orthogonal polynomials arising from twisted traces. In~Section~\ref{sec4}, using the analytic approach of Section~\ref{sec3}, we determine which twisted traces are positive. In~particular, we confirm the conjecture of~\cite{BPR} that a positive trace is unique up to scaling for $n\le 4$ (for the choice of conjugation as in~\cite{BPR}), and find the exact dependence of the parameter of the positive trace on the quantization parameters for $n=3$ and $n=4$, which was computed numerically in~\cite{BPR}.\footnote{It is curious that, unlike classical representation theory, this dependence is given by a transcendental function.}
Finally, in Section~\ref{expcom} we discuss the problem of explicit computation of the coefficients $a_k$, $b_k$
of the 3-term recurrence for the orthogonal polynomials arising from twisted traces, which
appear as coefficients of the corresponding short star-product. Since these orthogonal polynomials are semiclassical, these coefficients can be computed using non-linear recurrences which are generalizations of discrete Painlev\'e systems.
\section{Filtered quantizations and twisted traces}\label{sec2}
\subsection{Filtered quantizations}
Let $X_n$ be the Kleinian singularity of type $A_{n-1}$.
Recall that
\[
A:=\mathbb C[X_n]=\CN[p,q]^{\mathbb Z/n},
\]
where
$\mathbb Z/n$ acts by $p\mapsto {\rm e}^{2\pi {\rm i}/n}p,\ q\mapsto {\rm e}^{-2\pi {\rm i}/n}q$.
Thus
\[
A=\mathbb C[u,v,z]/(uv-z^n),
\]
where
\[
u=p^n,\qquad
v=q^n,\qquad
z=pq.
\]
This algebra has a grading defined by the formulas $\deg(p)=\deg(q)=1$, thus
\begin{equation}\label{gra}
\deg(u)=\deg(v)=n,\qquad
\deg(z)=2.
\end{equation}
The Poisson bracket is given by $\lbrace p,q\rbrace=\frac{1}{n}$
and on $A$ takes the form
\[
\lbrace z,u\rbrace =-u,\qquad
\lbrace z,v\rbrace=v,\qquad
\lbrace u,v\rbrace=nz^{n-1}.
\]
Also recall that filtered quantizations $\mathcal A$ of $A$ are {\it generalized Weyl algebras}~\cite{B} which look as follows. Let~$P\in \mathbb C[x]$ be a monic polynomial of degree $n$. Then $\mc{A}=\mc A_P$ is the algebra generated by $u$, $v$, $z$ with defining relations
\[
[z,u]=-u,\qquad
[z,v]=v,\qquad
vu=P\big(z-\tfrac{1}{2}\big),\qquad
uv=P\big(z+\tfrac{1}{2}\big)
\]
and filtration defined by \eqref{gra}.
Thus we have
\[
[u,v]=P\big(z+\tfrac{1}{2}\big)-P\big(z-\tfrac{1}{2}\big)=nz^{n-1}+\cdots ,
\]
i.e., the quasiclassical limit indeed recovers the algebra $A$ with the above Poisson bracket.
Note that we may consider the algebra $\mc A_P$ for a polynomial $P$ that is not necessarily monic. However, we can always reduce to the monic case by rescaling $u$ and/or $v$. Also by transformations $z\mapsto z+\beta$ we can make sure that the subleading term of $P$ is zero, i.e.,
\[
P(x)=x^n+c_2x^{n-2}+\dots +c_n.
\]
Thus the quantization~$\mc A$ depends on $n-1$ essential parameters (the roots of $P$, which add up to zero).
The algebra $\mc A$ decomposes as a direct sum of eigenspaces of
$\ensuremath\operatorname{ad} z$:
\[
\mc{A}=\oplus_{k\in \ZN}\mc{A}_{k}.
\]
If $b\in \mc A_m$, we will say that $b$ has {\it weight} $m$.
The weight decomposition of $\mc A$ can be viewed as a~$\mathbb C^\times$-action; namely, for $t\in \mathbb C^\times$ let
\begin{equation}\label{gt}
g_t=t^{\ensuremath\operatorname{ad} z}\colon \ \mc{A}\to \mc{A}
\end{equation}
be the automorphism of $\mc{A}$ given by
\[
g_t(v)= tv,\qquad
g_t(u)=t^{-1}u,\qquad
g_t(z)=z.
\]
Then $g_t(b)=t^mb$ if $b$ has weight $m$.
\begin{Example} \qquad
\begin{enumerate}\itemsep=0pt
\item[$1.$] Let $n=1$, $P(x)=x$. Then $\mc A$ is the Weyl algebra
generated by $u$, $v$ with $[u,v]=1$, and $z=vu+\tfrac{1}{2}=uv-\tfrac{1}{2}$.
\item[$2.$] Let $n=2$ and $P(x)=x^2-C$. Then setting $e=v$, $f=-u$, $h=2z$, we get
\[
[h,e]=2e,\qquad
[h,f]=-2f,\qquad
[e,f]=h,\qquad
fe=-\big(\tfrac{h+1}{2}\big)^2+C,
\]
i.e., $\mc A$ is the quotient of the universal enveloping algebra $U(\mathfrak{sl}_2)$ by the relation $fe+\big(\tfrac{h+1}{2}\big)^2=C$, where $fe+\big(\tfrac{h+1}{2}\big)^2$ is the Casimir element.
\end{enumerate}
\end{Example}
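The quasiclassical-limit computation above is easy to verify mechanically. The following Python/sympy sketch is ours (it is not part of the paper, and all names in it are ours): it checks that $[u,v]=P\big(z+\tfrac{1}{2}\big)-P\big(z-\tfrac{1}{2}\big)$ is a polynomial of degree $n-1$ with leading coefficient $n$ for a monic $P$ of degree $n$, matching the Poisson bracket $\lbrace u,v\rbrace=nz^{n-1}$.

```python
# Symbolic sanity check (ours): for monic P of degree n, the commutator
# [u, v] = P(z + 1/2) - P(z - 1/2) has degree n - 1 and leading coefficient n,
# recovering the Poisson bracket {u, v} = n z^{n-1} in the quasiclassical limit.
import sympy as sp

z = sp.symbols('z')
half = sp.Rational(1, 2)

def commutator(P):
    """Return [u, v] = P(z + 1/2) - P(z - 1/2) as a sympy Poly in z."""
    return sp.Poly(sp.expand(P.subs(z, z + half) - P.subs(z, z - half)), z)

results = {}
for n in range(1, 6):
    P = z**n + 3 * z**(n - 1)   # an arbitrary monic example of degree n
    c = commutator(P)
    results[n] = (c.degree(), c.LC())
# each entry is (n - 1, n)
```

The lower-order terms of $P$ only contribute in degrees below $n-1$, so the leading behaviour is independent of the choice of monic example.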
\subsection{Even quantizations}
Let $s$ be the automorphism of $\mc A$
given by
\[
s(u)=(-1)^nu,\qquad s(v)=(-1)^nv,\qquad s(z)=z,
\]
in other words, we have $s=g_{(-1)^n}$. Thus $\mathop{\mathrm{gr}}\nolimits s\colon A\to A$
equals $(-1)^d$, where $d$ is the degree operator.
Recall \cite[Section~2.3]{ES} that a filtered quantization $\mathcal{A}$ is called {\it even} if it is equipped with an antiautomorphism $\sigma$ such that $\sigma^2=s$ and $\mathop{\mathrm{gr}}\nolimits\sigma={\rm i}^d$, and that $\sigma$ is unique if it exists~\cite[Remark~2.10]{ES}. This means that $\sigma (z)=-z$, $\sigma (u)={\rm i}^n u$, $\sigma (v)={\rm i}^n v$. It is easy to see that $\sigma$ exists if and only if
\[
(-1)^n P\big(z-\tfrac{1}{2}\big)=(-1)^n vu=\sigma(v)\sigma(u)=\sigma (uv)=\sigma\big(P\big(z+\tfrac{1}{2}\big)\big)=P\big({-}z+\tfrac{1}{2}\big).
\]
This is equivalent to
\[
P(-x)=(-1)^n P(x),
\]
i.e., $P$ contains only terms $x^{n-2i}$. Thus even quantizations of $A$ are parametrized by $[n/2]$ essential parameters, and all quantizations for $n\le 2$ are even.
\subsection{Quantizations with a conjugation and a quaternionic structure}\label{conju}
Recall \cite[Section~3.6]{ES} that a conjugation on~$\mc A$ is an antilinear filtration preserving automorphism $\rho\colon \mc A\to \mc A$ that commutes with $s$. We~will consider conjugations on~$\mc A$
given by
\begin{equation}\label{rho}
\rho(v)=\lambda u,\qquad \rho(u)=\lambda_* v,\qquad \rho(z)=-z,
\end{equation}
where $\lambda,\lambda_*\in \mathbb C^\times$; it is easy to show that they are the only ones up to symmetry, using that any two such conjugations differ by a filtration preserving automorphism commuting with $s$. The~auto\-morphism $u\mapsto \gamma^{-1} u$, $v\mapsto \gamma v$ rescales $\lambda$ by $|\gamma|^{-2}$ and $\lambda_*$ by $|\gamma|^2$, so we may assume that $|\lambda|=1$, i.e.,
\[
\lambda=\pm {\rm i}^{-n}{\rm e}^{-\pi {\rm i} c},
\]
where $c\in [0,1)$.
Then
\[
\overline{P}\big({-}z+\tfrac{1}{2}\big)=\rho\big(P\big(z+\tfrac{1}{2}\big)\big)
=\rho(uv)=\rho(u)\rho(v)=\lambda\lambda_* vu=\lambda\lambda_* P\big(z-\tfrac{1}{2}\big),
\]
i.e., $\overline{P}(-x)=\lambda\lambda_* P(x)$. Thus $\lambda_*=(-1)^n\lambda^{-1}=\pm {\rm i}^{-n}{\rm e}^{\pi {\rm i}c}$ (so $|\lambda_*|=1$) and
\[
\overline{P}(-x)=(-1)^n P(x),
\]
i.e., ${\rm i}^nP$ is real on ${\rm i}\mathbb R$. We~also have
\[
\rho^2(u)=\overline{\lambda_*}\lambda u,\qquad \rho^2(v)=\overline{\lambda}\lambda_* v,\qquad \rho^2(z)=z,
\]
so $\rho^2=g_t$, where $g_t$ is given by \eqref{gt} and
\[
t=(-1)^n\overline{\lambda}\lambda^{-1}=(-1)^n\lambda^{-2}={\rm e}^{2\pi {\rm i}c},
\]
i.e., $|t|=1$. Thus we see that for every $t$ there are two non-equivalent conjugations, corresponding to the two choices of sign for $\lambda$, which we denote by $\rho_+$ and $\rho_-$.
In particular, consider the special case $t=(-1)^n$, i.e., $g_t=s$. Then $c=\frac{1}{2}$ for $n$ odd and $c=0$ for $n$ even.
Thus $\lambda=\pm 1$, so the conjugation $\rho$ on~$\mc A$ is given by
\[
\rho(v)=\pm u,\qquad \rho(u)=\pm (-1)^nv,\qquad \rho(z)=-z.
\]
Now assume in addition that $\mc A$ is even, i.e., $P(-x)=(-1)^nP(x)$. Then we have $\rho\sigma=\sigma^{-1}\rho$, so $\rho$ and $\sigma$ give a {\it quaternionic structure} on $\mc{A}$, cf.~\cite[Section~3.7]{ES}. So~this quaternionic structure exists if and only if $P\in \RN[x]$, $P(-x)=(-1)^n P(x)$.
\begin{Example} Let $n=2$, so $\mc A$ is the quotient
of the enveloping algebra $U(\mathfrak{g})$, $\mathfrak{g}=\mathfrak{sl}_2$, by the relation $fe+\frac{(h+1)^2}{4}=C$, where
$C\in \mathbb R$. Since
\[
e=v,\qquad f=-u,\qquad h=2z,
\]
we have
\[
\rho_\pm(e)=\pm f,\qquad \rho_\pm(f)=\pm e,\qquad \rho_\pm(h)=-h.
\]
So $\mathfrak{g}_+:=\mathfrak{g}^{\rho_+}$ has basis
$\mathbf{x}=\frac{e+f}{2}$, $\mathbf{y}=\frac{{\rm i}(e-f)}{2}$, $\mathbf{z}=\frac{{\rm i}h}{2}$.
Thus,
\[
[\mathbf{x},\mathbf{y}]=-\mathbf{z},\qquad [\mathbf{z},\mathbf{x}]=\mathbf{y},\qquad [\mathbf{y},\mathbf{z}]=\mathbf{x}.
\]
Hence, setting $E:=\mathbf{y}-\mathbf{z}$, $F:=\mathbf{y}+\mathbf{z}$, $H:=2\mathbf{x}$,
we have
\[
[H,E]=2E,\qquad [H,F]=-2F,\qquad [E,F]=H,
\]
so $\mathfrak{g}_+=\mathfrak{sl}_2(\mathbb R)$.
On the other hand, $\mathfrak{g}_-:=\mathfrak{g}^{\rho_-}$ has basis
${\rm i}\mathbf{x},{\rm i}\mathbf{y},\mathbf{z}$, hence $\mathfrak{g}_-=\mathfrak{so}_3(\mathbb R)=\mathfrak{su}_2$.
So $\rho_+$ and $\rho_-$ correspond to the split and compact form of $\mathfrak{g}$, respectively.
\end{Example}
\subsection{Twisted traces}
Let $\mc A=\mc A_P$ be a filtered quantization of $A$. Recall \cite[Section~3.1]{ES} that a $g_t$-{\it twisted trace} on~$\mc A$ is a linear map $T\colon \mc{A}\to \CN$ such that $T(ab)=T(bg_t(a))$, where $g_t$ is given by \eqref{gt}. It is shown in~\cite[Section~3]{ES} that ($s$-invariant) nondegenerate twisted traces, up to scaling, correspond to ($s$-invariant) nondegenerate short star-products on~$\mc A$.
Let us classify $g_t$-twisted traces\footnote{One can show that for generic $P$ and $n>2$, the only possible filtration preserving automorphisms are $g_t$.} $T$ on~$\mc A$. The answer is given by the following proposition.
\begin{Proposition}
$T\colon\mc{A}\to \CN$ is a $g_t$-twisted trace on $\mc{A}$ if and only if
\begin{enumerate}\itemsep=0pt
\item[$(1)$]
$T(\mc{A}_j)=0$ for $j\ne 0$;
\item[$(2)$]
$T\big(S\big(z-\tfrac{1}{2}\big)P\big(z-\tfrac{1}{2}\big)\big)
=tT\big( S\big(z+\tfrac{1}{2}\big)P\big(z+\tfrac{1}{2}\big)\big)$ for all $S \in\CN[x]$.
\end{enumerate}
In particular, any twisted trace is automatically $s$-invariant.
\end{Proposition}
\begin{proof} Suppose $T$ satisfies (1), (2).
It is enough to check that
\[
T(ub)=t^{-1}T(bu),\qquad T(vb)=tT(bv),\qquad T(zb)=T(bz)
\]
for $b\in \mc{A}$.
The equality $T(zb)=T(bz)$ says that $T(\mc A_j)=0$ for $j\ne 0$, which is condition (1).
By (1), it is enough to check the equality $T(ub)=t^{-1} T(bu)$ for $b\in \mc{A}_{-1}$. In this case $b=vS\big(z+\tfrac{1}{2}\big)$ for some polynomial $S$. We~have
\begin{gather*}
T(ub)=T\big(uvS\big(z+\tfrac{1}{2}\big)\big)=T\big(P\big(z+\tfrac{1}{2}\big)S\big(z+\tfrac{1}{2}\big)\big),
\\
T(bu)=T\big(vS\big(z+\tfrac{1}{2}\big)u\big)=T\big(vu S\big(z-\tfrac{1}{2}\big)\big)=T\big(P\big(z-\tfrac{1}{2}\big)S\big(z-\tfrac{1}{2}\big)\big),
\end{gather*}
which yields the desired identity using (2).
Similarly, it is enough to check the equality $T(vb)=tT(bv)$ for $b\in \mc{A}_1$. In this case $b=u S\big(z-\tfrac{1}{2}\big)$. We~have
\begin{gather*}
T(vb)=T\big(vu S\big(z-\tfrac{1}{2}\big)\big)=T\big(P\big(z-\tfrac{1}{2}\big)S\big(z-\tfrac{1}{2}\big)\big),
\\
T(bv)=T\big(u S\big(z-\tfrac{1}{2}\big)v\big)=T\big(uv S\big(z+\tfrac{1}{2}\big)\big)=T\big(P\big(z+\tfrac{1}{2}\big)S\big(z+\tfrac{1}{2}\big)\big),
\end{gather*}
which again gives the desired identity using (2).
Conversely, the same argument shows that if $T$ is a $g_t$-twisted trace
then (1), (2) hold.
\end{proof}
Thus we get
\begin{Corollary}\label{natiso}
The space of $g_t$-twisted traces on~$\mc A$ is naturally isomorphic to the space
\[
\bigl(\CN[z]/\big\{S\big(z-\tfrac{1}{2}\big)P\big(z-\tfrac{1}{2}\big)-t S\big(z+\tfrac{1}{2}\big)P\big(z+\tfrac{1}{2}\big)\,|\, S \in\CN[z]\big\}\bigr)^*
\]
and has dimension $n$ if $t\neq 1$ and dimension $n-1$ if $t=1$.
\end{Corollary}
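The dimension count in Corollary~\ref{natiso} can also be checked by direct linear algebra. The Python/sympy sketch below is ours (not part of the paper): it truncates $\CN[z]$ at degree $M$ and computes the codimension of the span of the relations $S\big(z-\tfrac{1}{2}\big)P\big(z-\tfrac{1}{2}\big)-t S\big(z+\tfrac{1}{2}\big)P\big(z+\tfrac{1}{2}\big)$ over monomials $S$.

```python
# Linear-algebra check (ours) of the dimension count: truncate C[z] at degree M
# and compute the codimension of the span of
#     S(z - 1/2) P(z - 1/2) - t S(z + 1/2) P(z + 1/2)
# over monomials S = z^d whose image stays within degree M.
import sympy as sp

z = sp.symbols('z')
half = sp.Rational(1, 2)

def quotient_dim(P, t, M=20):
    rows, d = [], 0
    while True:
        S = z**d
        img = sp.expand(S.subs(z, z - half) * P.subs(z, z - half)
                        - t * S.subs(z, z + half) * P.subs(z, z + half))
        p = sp.Poly(img, z)
        if p.degree() > M:
            break
        coeffs = p.all_coeffs()[::-1]           # constant coefficient first
        rows.append(coeffs + [0] * (M + 1 - len(coeffs)))
        d += 1
    return (M + 1) - sp.Matrix(rows).rank()
```

For example, for $P=z^3$ (so $n=3$) this returns $3$ for $t=2$ and $2$ for $t=1$, matching the dimensions $n$ and $n-1$ of the corollary.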
\subsection{The formal Stieltjes transform}
There is a useful characterization of the space of $g_t$-twisted traces in
terms of generating functions. Given a linear functional $T$ on $\CN[z]$,
its {\em formal Stieltjes transform} is the generating function
\[
F_T(x):=\sum_{k\ge 0} x^{-k-1} T\big(z^k\big) \in \CN\big[\big[x^{-1}\big]\big],
\]
or equivalently $F_T(x)=T\big((x-z)^{-1}\big)$, with $(x-z)^{-1}$ itself expanded as a
formal power series in~$x^{-1}$.
\begin{Proposition}
The formal Stieltjes transform of a $g_t$-twisted trace on~$\mc A$
satisfies
\[
P(x)\big(F_T\big(x+\tfrac{1}{2}\big)-t F_T\big(x-\tfrac{1}{2}\big)\big)\in \CN[x],
\]
and this establishes an isomorphism of the space of $g_t$-twisted traces
with the space of polynomials of degree $\le n-1$ $($for $t\ne 1)$ or $\le
n-2$ $($for $t=1)$.
\end{Proposition}
\begin{proof}
We may write
\begin{gather*}
P(x)\big(F_T\big(x+\tfrac{1}{2}\big) - t F_T\big(x-\tfrac{1}{2}\big)\big)
= T\bigg(\frac{P(x)}{x+\frac{1}{2}-z}-t \frac{P(x)}{x-\frac{1}{2}-z}\bigg)
\\ \hphantom{ P(x)\big(F_T\big(x+\tfrac{1}{2}\big) - t F_T\big(x-\tfrac{1}{2}\big)\big)}
{}= T\bigg(\frac{P\big(z-\frac{1}{2}\big)}{x+\frac{1}{2}-z}-t \frac{P\big(z+\frac{1}{2}\big)}{x-\frac{1}{2}-z}\bigg)
\\ \hphantom{ P(x)\big(F_T\big(x+\tfrac{1}{2}\big) - t F_T\big(x-\tfrac{1}{2}\big)\big)=}
{}+ T\bigg(\frac{P(x)-P\big(z-\frac{1}{2}\big)}{x-\big(z-\frac{1}{2}\big)}-t \frac{P(x)-P\big(z+\frac{1}{2}\big)}{x-\big(z+\frac{1}{2}\big)}\bigg).
\end{gather*}
In the final expression, the second term is the image under $T$ of a
polynomial in $z$ and $x$, while the first term expands as
\[
\sum_{k\ge 0} x^{-k-1} T\big(P\big(z-\tfrac{1}{2}\big)\big(z-\tfrac{1}{2}\big)^k-t P\big(z+\tfrac{1}{2}\big)\big(z+\tfrac{1}{2}\big)^k\big) = 0.
\]
Since the map $F\mapsto F\big(x+\tfrac{1}{2}\big)-t F\big(x-\tfrac{1}{2}\big)$ is injective on $x^{-1}\CN\big[\big[x^{-1}\big]\big]$, this establishes an
injective map from $g_t$-twisted traces to polynomials of degree
$<\deg(P)$. This establishes the conclusion for $t\ne 1$, with surjectivity following by dimension count.
Finally, for $t=1$, we need simply observe that for any $F\in x^{-1}\CN\big[\big[x^{-1}\big]\big]$,
$F\big(x+\tfrac{1}{2}\big)-F\big(x-\tfrac{1}{2}\big)\in x^{-2}\CN\big[\big[x^{-1}\big]\big]$, and thus the polynomial has
degree $<\deg(P)-1$, and surjectivity again follows from dimension count.
\end{proof}
\begin{Remark} It is easy to see that the map $F(x)\mapsto F\big(x+\frac{1}{2}\big)-t F\big(x-\tfrac{1}{2}\big)$ acts triangularly
on $x^{-1}\CN\big[\big[x^{-1}\big]\big]$, of degree $0$ (with nonzero leading coefficients)
if $t\ne 1$ and degree $-1$ (ditto) if $t=1$, letting one see directly
that there is a unique solution of $P(x)\big(F\big(x+\frac{1}{2}\big)-t F\big(x-\tfrac{1}{2}\big)\big)=R(x)$
for any polynomial $R$ satisfying the degree constraint.
\end{Remark}
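The triangularity just described is easy to confirm symbolically. In the Python/sympy sketch below (ours, not part of the paper) we substitute $x=1/u$ and expand at $u=0$, checking that the image of $x^{-m-1}$ has leading coefficient $1-t$ in degree $-m-1$ when $t\neq 1$, and $-(m+1)$ in degree $-m-2$ when $t=1$.

```python
# Symbolic check (ours) of the triangular action of
#     F(x) |-> F(x + 1/2) - t F(x - 1/2)
# on x^{-1} C[[x^{-1}]]: degree 0 with leading coefficient 1 - t when t != 1,
# degree -1 with leading coefficient -(m + 1) on x^{-m-1} when t = 1.
import sympy as sp

x, u = sp.symbols('x u')
half = sp.Rational(1, 2)

def image_series(m, t, order=10):
    """Expansion at infinity (via x = 1/u) of the image of x^(-m-1)."""
    f = x**(-m - 1)
    g = f.subs(x, x + half) - t * f.subs(x, x - half)
    return sp.expand(sp.series(sp.cancel(g.subs(x, 1 / u)), u, 0, order).removeO())

for m in range(4):
    s = image_series(m, 2)                      # a generic value with t != 1
    assert s.coeff(u, m + 1) == 1 - 2           # leading coefficient 1 - t
    s = image_series(m, 1)
    assert s.coeff(u, m + 1) == 0               # top coefficient cancels at t = 1
    assert s.coeff(u, m + 2) == -(m + 1)        # next coefficient is -(m + 1)
```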
\begin{Remark}
A similar argument establishes an isomorphism between linear functionals
satisfying $T\big(P\big(q^{-\frac{1}{2}}z\big)S\big(q^{-\frac{1}{2}}z\big)-qt P\big(q^{\frac{1}{2}}z\big)S\big(q^{\frac{1}{2}}z\big)\big)=0$ and
elements $F\in x^{-1}\CN\big[\big[x^{-1}\big]\big]$ such that
\[
P(x)\big(F_T\big(q^{\frac{1}{2}}x\big)-t F_T\big(q^{-\frac{1}{2}}x\big)\big) \in \CN[x],
\]
or, for $t=1$, between linear functionals satisfying
\[
T\big(z^{-1}\big(P\big(q^{-\frac{1}{2}}z\big)S\big(q^{-\frac{1}{2}}z\big)
-P\big(q^{\frac{1}{2}}z\big)S\big(q^{\frac{1}{2}}z\big)\big)\big)=0
\]
and formal series satisfying
\[
P(x) x^{-1}\big(F_T\big(q^{\frac{1}{2}}x\big)-F_T\big(q^{-\frac{1}{2}}x\big)\big)\in \CN[x].
\]
\end{Remark}
\section{An analytic construction of twisted traces}\label{sec3}
\subsection[Construction of twisted traces when all roots of P(x) satisfy |Re alpha|<1/2]
{Construction of twisted traces when all roots of $\boldsymbol{P(x)}$ satisfy $\boldsymbol{|\Re\alpha|<\frac{1}{2}}$}
Let $t=\exp(2\pi {\rm i} c)$, where $0\le \Re c<1$ (clearly, such $c$ exists and is unique).
Let $P(x)=\prod_{j=1}^n(x-\alpha_j)$. Define
\[
\mathbf{P}(X):=\prod_{j=1}^n\big(X+{\rm e}^{2\pi {\rm i}\alpha_j}\big).
\]
When $P(x)$ satisfies the equation $\overline{P}(-x)=(-1)^nP(x)$
(the condition for existence of a conjugation $\rho$) the polynomial $\mathbf{P}(X)$ has real coefficients.
\begin{Proposition}\label{classtr}
\label{PropTracesForOpenStrip}
Assume that every root $\alpha$ of $P(x)$ satisfies $|\Re\alpha|<\frac{1}{2}$. Suppose first that $t$ does not belong to $\RN_{>0}\setminus\{1\}$, i.e., $\Re c\in (0,1)$ or $c=0$. Then every $g_t$-twisted trace is given by\footnote{Here $|{\rm d}x|$ denotes the Lebesgue measure on the imaginary axis.}
\[
T(R(z))=\int_{{\rm i}\RN} R(x)w(x)|{\rm d}x|, \qquad R\in \mathbb C[x],
\]
where $w$ is the {\it weight function} defined by the formula
\[
w(x)=w(c,x):={\rm e}^{2\pi {\rm i}cx}\frac{G({\rm e}^{2\pi {\rm i} x})}{\mathbf{P}({\rm e}^{2\pi {\rm i} x})},
\]
where $G$ is a polynomial of degree $\le n-1$ and $G(0)=0$ if $c=0$.
\end{Proposition}
\begin{proof}
It is easy to see that the function $w(x)$ enjoys the following properties:
\begin{enumerate}\itemsep=0pt
\item[$(1)$]
$w(x+1)=tw(x)$;
\item[$(2)$]
$|w(x)|$ decays exponentially and uniformly when $\Im x$ tends to $\pm\infty$;
\item[$(3)$]
$w\big(x+\frac{1}{2}\big)P(x)$ is holomorphic when $|\Re x|\leq \frac{1}{2}$.
\end{enumerate}
Indeed, (2) holds because the degree of $G$ is strictly less than the degree of $\mathbf{P}$ and either ${\rm Re}(c)>0$ or $G(0)=0$, and (3) holds because all roots of $P$ are in the strip $|{\rm Re}\alpha|<\frac{1}{2}$.
Let $T(R(z)):=\int_{{\rm i}\RN} R(x)w(x)|{\rm d}x|$. We~should check that
\[
T\big(tS\big(z+\tfrac{1}{2}\big)P\big(z+\tfrac{1}{2}\big)-S\big(z-\tfrac{1}{2}\big)P\big(z-\tfrac{1}{2}\big)\big)=0.
\]
We have \begin{gather*}
T\big(tS\big(z+\tfrac{1}{2}\big)P\big(z+\tfrac{1}{2}\big)-S\big(z-\tfrac{1}{2}\big)
P\big(z-\tfrac{1}{2}\big)\big)
\\ \qquad
{}=\int_{{\rm i}\RN}tS\big(x+\tfrac{1}{2}\big)P\big(x+\tfrac{1}{2}\big)w(x)|{\rm d}x|-\int_{{\rm i}\RN}S\big(x-\tfrac{1}{2}\big)P\big(x-\tfrac{1}{2}\big)w(x)|{\rm d}x|
\\ \qquad
{}=\int_{\frac{1}{2}+{\rm i}\RN}tS(x)P(x)w\big(x-\tfrac{1}{2}\big)|{\rm d}x|-\int_{-\frac{1}{2}+{\rm i}\RN}S(x)P(x)w\big(x+\tfrac{1}{2}\big)|{\rm d}x|
\\ \qquad
{}=\int_{\frac{1}{2}+{\rm i}\RN}S(x)P(x)w\big(x+\tfrac{1}{2}\big)|{\rm d}x|-\int_{-\frac{1}{2}+{\rm i}\RN}S(x)P(x)w\big(x+\tfrac{1}{2}\big)|{\rm d}x|
\\ \qquad
{}=\frac{1}{{\rm i}}\int_{\partial \left(\left[-\frac{1}{2},\frac{1}{2}\right]\times \RN\right)}S(x)P(x)w\big(x+\tfrac{1}{2}\big){\rm d}x.
\end{gather*}
But this integral vanishes by the Cauchy theorem since $S(x)P(x)w\big(x+\tfrac{1}{2}\big)$ is holomorphic when $|\Re x|\leq \frac{1}{2}$ and decays exponentially as $\Im x\to\pm \infty$.
By Corollary~\ref{natiso}, the space of polynomials $G(X)$ has the same dimension as the space of $g_t$-twisted traces, and the map sending polynomials $G$ to traces is clearly injective, so we have described all traces.
\end{proof}
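To make the contour argument concrete, here is a small numerical check, which is ours and not part of the paper: for $n=1$, $P(x)=x$, $c=\frac{1}{3}$, $G=1$, the functional $T(R)=\int_{{\rm i}\RN}R(x)w(x)|{\rm d}x|$ of Proposition~\ref{classtr} annihilates $tS\big(z+\tfrac{1}{2}\big)P\big(z+\tfrac{1}{2}\big)-S\big(z-\tfrac{1}{2}\big)P\big(z-\tfrac{1}{2}\big)$ up to quadrature error.

```python
# Numerical check (ours) of the trace identity for n = 1, P(x) = x, c = 1/3,
# G = 1: with w(x) = e^{2 pi i c x} / (e^{2 pi i x} + 1), the functional
# T(R) = int_{iR} R(x) w(x) |dx| kills t S(z+1/2) P(z+1/2) - S(z-1/2) P(z-1/2).
import cmath
from scipy.integrate import quad

c = 1.0 / 3.0
t = cmath.exp(2j * cmath.pi * c)
P = lambda x: x
w = lambda x: cmath.exp(2j * cmath.pi * c * x) / (cmath.exp(2j * cmath.pi * x) + 1.0)

def T(R):
    """Integrate R(x) w(x) over the imaginary axis: x = i y, |dx| = dy."""
    f = lambda y: R(1j * y) * w(1j * y)
    re = quad(lambda y: f(y).real, -30, 30, limit=200)[0]
    im = quad(lambda y: f(y).imag, -30, 30, limit=200)[0]
    return complex(re, im)

errors = []
for S in (lambda x: 1.0, lambda x: x, lambda x: x**2):
    R = lambda x, S=S: t * S(x + 0.5) * P(x + 0.5) - S(x - 0.5) * P(x - 0.5)
    errors.append(abs(T(R)))    # all of the order of the quadrature tolerance
```

The integrand decays like ${\rm e}^{-2\pi y/3}$ as $y\to+\infty$ and like ${\rm e}^{-4\pi |y|/3}$ as $y\to-\infty$, so truncating the axis to $[-30,30]$ is harmless.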
Now consider the remaining case $t\in \mathbb R_{>0}\setminus \lbrace 1\rbrace$, i.e., $c\in {\rm i}\mathbb R\setminus \lbrace 0\rbrace$. In this case the function $w(x)$ does not decay at $+{\rm i}\infty$, so the integral in Proposition~\ref{classtr} is not convergent. However, we can write the formula for $T(R(z))$ as follows, so that it makes sense in this case:
\[
T(R(z))=\lim_{\delta\to 0+}\int_{{\rm i}\RN} R(x)w(c+\delta,x)|{\rm d}x|.
\]
Alternatively, one may say that
$T(R(z))$ is the value of the Fourier transform of the distribution
$R(-{\rm i}y)w(0,-{\rm i}y)$ at the point ${\rm i}c$ (it is easy to see that this Fourier transform is given by an~analytic function outside of the origin). We~then have the following easy generalization of~Proposition~\ref{classtr}:
\begin{Proposition} With this modification, Proposition~$\ref{classtr}$ is valid for all $t$.
\end{Proposition}
Consider now the special case of even quantizations.
Recall \cite[Section~3.3]{ES} that nondegenerate {\it even} short star-products on $A$ correspond to nondegenerate $s$-twisted $\sigma$-invariant traces $T$ on~various even quantizations $\mc A$ of $A$, up to scaling. So~let us classify such traces. As shown above, $s$-twisted traces $T$ correspond to $w(x)$ such that $w(x+1)=(-1)^n w(x)$. Also, it is easy to see that such $T$ is $\sigma$-invariant if and only if $T(R(z))=T(R(-z))$. We~have
\[
T(R(-z))=\int_{{\rm i}\RN} R(-x)w(x) |{\rm d}x|=\int_{{\rm i}\RN} R(x)w(-x) |{\rm d}x|.
\]
So $T$ is $\sigma$-invariant if and only if $w(x)=w(-x)$. Thus we have the following proposition.
\begin{Proposition}
Suppose that $\mc{A}$ is an even quantization of $A$. Then $s$-twisted $\sigma$-invariant traces $T$ are given by the formula
\[
T(R(z))=\int_{{\rm i}\RN} R(x)w(x)|{\rm d}x|,
\]
where $w$ is as in Proposition~$\ref{PropTracesForOpenStrip}$ and
$w(x)=w(-x)=(-1)^n w(x+1)$.
\end{Proposition}
\subsection{Relation to orthogonal polynomials}\label{ortpol}
We continue to assume that all roots of $P$ are in the strip $|{\rm Re}\alpha|<\frac{1}{2}$. Assume that the trace $T$ is nondegenerate, i.e., the form $(a,b)\mapsto T(ab)$ defines an inner product on~$\mc A$ nondegenerate on~each filtration piece.
This holds for generic parameters; in particular, it holds if $w(x)$ is nonnegative on~${\rm i}\mathbb R$. Let~$\phi\colon A\to \mc A$ be the quantization map defined by $T$
(see~\cite[Section~3]{ES}). Namely, the form $(a,b)$ allows us to split the filtration, and $\phi$ is precisely the splitting map. Thus, $\phi(z^k)=p_k(z)$, where
$p_k$ are monic orthogonal polynomials
for the inner product
\[
(f_1,f_2)_*:=\int_{{\rm i}\mathbb R}f_1(x)f_2(x)w(x)|{\rm d}x|.
\]
Recall~\cite{Sz} that these polynomials satisfy a 3-term recurrence
\[
p_{k+1}(x)=(x-b_k)p_k(x)-a_kp_{k-1}(x),
\]
for some numbers $a_k$, $b_k$, i.e.,
\[
xp_{k}(x)=p_{k+1}(x)+b_kp_k(x)+a_kp_{k-1}(x).
\]
Thus the corresponding short star-product $z*z^k$
has the form
\begin{gather*}
z*z^k=\phi^{-1}\big(\phi(z)\phi\big(z^k\big)\big)=\phi^{-1}(zp_k(z))=
\phi^{-1}(p_{k+1}(z)+b_kp_k(z)+a_kp_{k-1}(z))
\\ \hphantom{z*z^k}
{}=z^{k+1}+b_kz^k+a_kz^{k-1}.
\end{gather*}
Thus the numbers $a_k$, $b_k$ are the matrix elements of multiplication by $z$ in weight $0$ for the short star-product attached to $T$. More general matrix elements of multiplication by $u$, $v$, $z$ for this short star-product are computed similarly. In other words, to compute the short star-product attached to $T$, we need to compute explicitly the coefficients $a_k$, $b_k$ and their generalizations. This problem
is addressed in Section~\ref{expcom}.
It is more customary to consider orthogonal polynomials on the real (rather than imaginary) axis, so let us make a change of variable $x=-{\rm i}y$.
Then we see that the monic polynomials $P_k(y):={\rm i}^{k}p_k(-{\rm i}y)$
are orthogonal under the inner product
\[
(f_1,f_2):=\int_{-\infty}^\infty f_1(y)f_2(y){\rm w}(y){\rm d}y,
\]
where ${\rm w}(y):=w(-{\rm i}y)$. Then the 3-term recurrence looks like
\[
P_{k+1}(y)=(y-{\rm i}b_k)P_k(y)+a_kP_{k-1}(y)
\]
(so for real parameters we'll have $a_k\in \mathbb R$, $b_k\in {\rm i}\mathbb R$).
\begin{Example}\label{n=1}
Let $n=1$, $P(x)=x$, so $\mathbf{P}(X)=X+1$.
Then a nonzero twisted trace exists if and only if $c\ne 0$, in which case it is unique up to scaling, and the corresponding weight func\-tion~is
\[
w(x)=\frac{{\rm e}^{2\pi {\rm i}cx}}{{\rm e}^{2\pi {\rm i}x}+1}
=\frac{{\rm e}^{2\pi {\rm i}\left(c-\frac{1}{2}\right)x}}{2\cos \pi x},\qquad
{\rm w}(y)=\frac{{\rm e}^{2\pi\left(c-\frac{1}{2}\right)y}}{2\cosh \pi y}.
\]
The corresponding orthogonal polynomials $P_k(y)$ are the (monic) {\it Meixner--Pollaczek polynomials} with parameters $\lambda=\frac{1}{2}$, $\phi=\pi c$~\cite[Section~1.7]{KS}.
\end{Example}
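As a numerical cross-check of the identification in Example~\ref{n=1} (the sketch below is ours, not part of the paper), one can recover the recurrence coefficients directly from the weight by the Stieltjes procedure. We take the even case $c=\frac{1}{2}$, where ${\rm w}(y)=\frac{1}{2\cosh \pi y}$; for monic Meixner--Pollaczek polynomials with $\lambda=\frac{1}{2}$, $\phi=\frac{\pi}{2}$ the known recurrence coefficients are $a_k=k^2/4$ and $b_k=0$.

```python
# Stieltjes procedure (ours): recover the 3-term recurrence coefficients a_k,
# b_k of the monic orthogonal polynomials for w(y) = 1/(2 cosh(pi y)) -- the
# case c = 1/2 -- from numerically computed moments.  Expected (monic
# Meixner-Pollaczek, lambda = 1/2, phi = pi/2): a_k = k^2/4 and b_k = 0.
import numpy as np
from numpy.polynomial import polynomial as npoly
from scipy.integrate import quad

w = lambda y: 1.0 / (2.0 * np.cosh(np.pi * y))
K = 4
moments = [quad(lambda y: y**j * w(y), -20, 20)[0] for j in range(2 * K + 2)]

def L(p):
    """Apply the moment functional to a polynomial (coefficients, low first)."""
    return sum(cj * moments[j] for j, cj in enumerate(p))

a, b = [0.0], []                       # a[0] is a dummy (p_{-1} = 0)
p_prev, p_cur = np.array([0.0]), np.array([1.0])
norm_prev = 1.0
for k in range(K):
    sq = npoly.polymul(p_cur, p_cur)
    norm = L(sq)
    bk = L(npoly.polymul([0.0, 1.0], sq)) / norm   # b_k = L(y p_k^2) / L(p_k^2)
    b.append(bk)
    if k > 0:
        a.append(norm / norm_prev)                 # a_k = L(p_k^2) / L(p_{k-1}^2)
    # p_{k+1} = (y - b_k) p_k - a_k p_{k-1}
    p_next = npoly.polysub(npoly.polymul([0.0, 1.0], p_cur),
                           npoly.polyadd(bk * p_cur, a[k] * p_prev))
    p_prev, p_cur, norm_prev = p_cur, p_next, norm
# a[1:] is close to [0.25, 1.0, 2.25]; every b_k is close to 0
```

The first values can even be checked by hand: the moments $m_0=\frac{1}{2}$, $m_2=\frac{1}{8}$ give $a_1=m_2/m_0=\frac{1}{4}$, and the even weight forces all $b_k$ to vanish.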
\begin{Example}\label{n=2} Let $n=2$, $P(x)=x^2+\beta^2$, so
\[
\mathbf{P}(X)=\big(X+{\rm e}^{2\pi \beta}\big)\big(X+{\rm e}^{-2\pi \beta}\big).
\]
The space of twisted traces is $1$-dimensional if $c=0$ and $2$-dimensional if $c\ne 0$. So~for $c\ne 0$ the traces up to scaling
are defined by the weight function
\[
w(x)=\frac{{\rm e}^{2\pi {\rm i}\left(c-\frac{1}{2}\right)x} \cos\pi (x-{\rm i}\alpha)}{2\cos \pi(x-{\rm i}\beta )\cos \pi(x+{\rm i}\beta )},\qquad {\rm w}(y)=\frac{{\rm e}^{2\pi\left(c-\frac{1}{2}\right)y} \cosh\pi (y-\alpha)}{2\cosh \pi(y-\beta)\cosh \pi(y+\beta)},
\]
and the limiting cases $\alpha\to \pm \infty$ along the real axis, which yield
\[
w(x)=\frac{{\rm e}^{2\pi {\rm i}\left(c-\frac{1}{2}\pm \frac{1}{2}\right)x}}{4\cos \pi(x-{\rm i}\beta )\cos \pi(x+{\rm i}\beta )},\qquad {\rm w}(y)=\frac{{\rm e}^{2\pi\left(c-\frac{1}{2}\pm \frac{1}{2}\right)y}}{4\cosh \pi(y-\beta)\cosh \pi(y+\beta)}.
\]
These formulas with the plus sign also define the trace for $c=0$, which is unique up to scaling; i.e., \[
w(x)=\frac{1}{4\cos\pi(x-{\rm i}\beta )\cos\pi(x+{\rm i}\beta )},\qquad {\rm w}(y)=\frac{1}{4\cosh\pi(y-\beta)\cosh\pi(y+\beta)}.
\] In this case, the corresponding orthogonal polynomials $P_k(y)$ are the {\it continuous Hahn polynomials} with parameters $\tfrac{1}{2}+{\rm i}\beta ,\tfrac{1}{2}-{\rm i}\beta ,\tfrac{1}{2}-{\rm i}\beta ,\tfrac{1}{2}+{\rm i}\beta$~\cite[Section~1.4]{KS}.
Also for $c=\frac{1}{2}$, $\alpha=0$ we have
\[
w(x)=\frac{\cos \pi x}{2\cos\pi(x+{\rm i}\beta )\cos\pi(x-{\rm i}\beta )},\qquad {\rm w}(y)=\frac{\cosh \pi y}{2\cosh\pi(y+\beta)\cosh\pi(y-\beta)},
\]
so $P_k(y)$ are the {\it continuous dual Hahn polynomials} with $a=0$, $b=\frac{1}{2}-{\rm i}\beta $, $c=\frac{1}{2}+{\rm i}\beta$ \cite[Section~1.3]{KS}.
\end{Example}
\begin{Remark} In Example~\ref{n=1} ($n=1$),
the only even short star-product corresponds
to ${\rm w}(y)=\frac{1}{2\cosh \pi y}$. This is the Moyal--Weyl star-product.
In Example~\ref{n=2} ($n=2$), the only even short star-product corresponds
to ${\rm w}(y)=\frac{1}{4\cosh \pi (y-\beta)\cosh \pi(y+\beta)}$. This is the
unique ${\rm SL}_2$-invariant star-product.
\end{Remark}
\begin{Example}\label{betai} Let $t=(-1)^n$, $G(X)=X^{[n/2]}$. Then
\[
w(x)=\prod_{j=1}^n\frac{1}{2\cos \pi(x-{\rm i}\beta _j)},\qquad
{\rm w}(y)=\prod_{j=1}^n\frac{1}{2\cosh \pi(y+\beta_j)},
\]
which defines an $s$-twisted trace.
The corresponding orthogonal polynomials are semiclassical but not hypergeometric for $n\ge 3$.
\end{Example}
\begin{Remark} The trace of Example~\ref{betai} corresponds to the short star-product arising in the 3-d SCFT, as shown in~\cite[Section~8.1.2]{DPY}. There the Kleinian singularity of type $A_{n-1}$ appears as the Higgs branch, and the parameters $\beta_j$ are the FI parameters. The same trace also shows up in~\cite[equation~(5.27)]{DFPY}, where
the Kleinian singularity appears as the Coulomb branch, and the parameters $\beta_j$ are the mass parameters.\footnote{We thank Mykola Dedushenko for this explanation.}
\end{Remark}
\subsection{Conjugation-equivariant traces}
Let now $\rho$ be a conjugation on~$\mc A$ (Section~\ref{conju}). Let~us determine which $g_t$-twisted traces are $\rho$-equivariant (see~\cite[Section~3.6]{ES}). A trace $T$ is $\rho$-equivariant if $\overline{T(R(z))}=T\big(\overline{R}(-z)\big)$, which is~equivalent to $T$ being real on $\RN[{\rm i}z]$. This happens if and only if $w(x)$ is real on ${\rm i}\RN$. Since $w$ is meromorphic this means that $w(x)=\overline{w(-\overline{x})}$.
So we have the following proposition.
\begin{Proposition}
\label{PropQuaternionicTraces}
Suppose that $\mc{A}$ is a quantization of $A$ with conjugation $\rho$. Then $\rho$-equivariant $g_t$-twisted traces $T$ on~$\mc A$ are
given by
\[
T(R(z))=\int_{{\rm i}\RN} R(x)w(x)|{\rm d}x|,
\]
where $w$ is as in Proposition~$\ref{PropTracesForOpenStrip}$ and
\[w(x)=\overline{w(-\overline{x})}=(-1)^n w(x+1).
\]
Moreover, if $\mc A$ is even then $\sigma$-invariant traces among them
correspond to the functions $w$ with $w(x)=w(-x)$.
\end{Proposition}
\subsection[Construction of traces when all roots of P(x) satisfy |Re alpha| <= 1/2]
{Construction of traces when all roots of $\boldsymbol{P(x)}$ satisfy $\boldsymbol{|\Re\alpha|\leq \frac{1}{2}}$}
From now on we suppose that ${\rm i}^nP(x)$ is real on ${\rm i}\RN$ (so that the conjugations $\rho_\pm$ are well defined). In particular, the roots of $P(x)$ are symmetric with respect to ${\rm i}\RN$.
Suppose that for all roots $\alpha$ of $P(x)$ we have $|\Re\alpha|\leq \frac{1}{2}$, and let us give a formula for twisted traces in this case.
There are unique monic polynomials $P_*(x)$, $Q(x)$ such that $P(x)=P_*(x)Q\big(x+\frac{1}{2}\big)Q\big(x-\tfrac{1}{2}\big)$, all roots of $P_*(x)$ belong to the strip $|\Re x|<\frac{1}{2}$ and all roots of $Q(x)$ belong to ${\rm i}\RN$. Suppose that $\alpha_1,\ldots,\alpha_k$ are the roots of $P_*(x)$ and $\alpha_{k+1},\ldots,\alpha_m$ are the roots of~$Q\big(x+\frac{1}{2}\big)$. Note that $\deg Q=n-m$. Let~$\mathbf{P}_*(X)=\prod_{j=1}^m(X+{\rm e}^{2\pi {\rm i}\alpha_j})$, $w(x)={\rm e}^{2\pi {\rm i}cx}\frac{G({\rm e}^{2\pi {\rm i} x})}{\mathbf{P}_*({\rm e}^{2\pi {\rm i} x})}$, where $G(X)$ is a polynomial of degree at most $m-1$ and $G(0)=0$ when $t=1$. We~have
\begin{enumerate}\itemsep=0pt
\item[(1)]
$w(x+1)=tw(x)$;
\item[(2)]
$w(x)Q(x)$ is bounded on ${\rm i}\RN$ and decays exponentially and uniformly when $\Im x$ tends to~$\pm \infty$;
\item[(3)]
$w\big(x+\frac{1}{2}\big)P(x)$ is holomorphic on $|\Re x|\leq \frac{1}{2}$.
\end{enumerate}
For any $R\in \CN[x]$ let $R(x)=R_1(x)Q(x)+R_0(x)$, where $\deg R_0<\deg Q$.
\begin{Proposition}
\label{PropTracesForClosedStrip}
A general $g_t$-twisted trace on~$\mc A$
has the form
\[T(R(z))=\int_{{\rm i}\RN}R_1(x)Q(x)w(x)|{\rm d}x|+\phi(R_0),\] where $w(x)$ is as above and $\phi$ is any linear functional.
\end{Proposition}
\begin{proof}
The space of polynomials $G$ has dimension $m-\delta_c^0$, while the space of linear functionals~$\phi$ has dimension $\deg Q=n-m$. So~the space of such linear functionals $T$ has dimension $n-\delta_c^0$. The space of all $g_t$-twisted traces has the same dimension, so it is enough to prove that all linear functionals $T$ of this form are $g_t$-twisted traces. In other words, we should prove that $T\big(S\big(z-\frac{1}{2}\big)P\big(z-\frac{1}{2}\big)-tS\big(z+\frac{1}{2}\big)P\big(z+\frac{1}{2}\big)\big)=0$ for all $S\in\CN[x]$.
We see that $S\big(x-\tfrac{1}{2}\big)P\big(x-\tfrac{1}{2}\big)-tS\big(x+\frac{1}{2}\big)P\big(x+\frac{1}{2}\big)$ is divisible by $Q(x)$, so
\begin{gather*}
T\big(S\big(z-\tfrac{1}{2}\big)P\big(z-\tfrac{1}{2}\big)-tS\big(z+\tfrac{1}{2}\big)
P\big(z+\tfrac{1}{2}\big)\big)
\\ \qquad
{}= \int_{{\rm i}\RN}\big(S\big(x-\tfrac{1}{2}\big)P\big(x-\tfrac{1}{2}\big)-tS\big(x+\tfrac{1}{2}\big)
P\big(x+\tfrac{1}{2}\big)\big)w(x)|{\rm d}x|.
\end{gather*}
Since $w\big(x+\frac{1}{2}\big)P(x)$ is holomorphic on $|\Re x|\leq \frac{1}{2}$, we deduce that this integral is zero similarly to the proof of Proposition~\ref{PropTracesForOpenStrip}.
\end{proof}
\subsection{Twisted traces in the general case}
\label{SubSubSecGeneralTraces}
Let $m(\alpha)$ be the multiplicity of $\alpha$ as a root of $P(x)$. Any linear functional $\phi$ on the space~$\CN[x]/P(x)\CN[x]$ can be written as $\phi(S)=\sum\nolimits_{\alpha,\,0\le i<m(\alpha)}C_{\alpha i} S^{(i)}(\alpha)$, where $C_{\alpha i}\in\CN$. Note that for a $g_t$-twisted trace $T$ the functional $S\mapsto T\big(S\big(z-\tfrac{1}{2}\big)-tS\big(z+\tfrac{1}{2}\big)\big)$ vanishes on $P(x)\CN[x]$ (replacing $S$ by $SP$ produces exactly an element annihilated by $T$), so it factors through $\CN[x]/P(x)\CN[x]$. Therefore any $g_t$-twis\-ted trace $T$ is given by
\[
T\big(S\big(z-\tfrac{1}{2}\big)-tS\big(z+\tfrac{1}{2}\big)\big)
=\sum\limits_{\alpha,\,0\le i<m(\alpha)}C_{\alpha i} S^{(i)}(\alpha).
\]
Let $\widetilde{P}(x)$ be the following polynomial: all roots of $\widetilde{P}(x)$ belong to the strip $|\Re x|\leq \frac{1}{2}$ and the multiplicity of a root $\alpha$ equals
\begin{itemize}
\item
$\sum_{k\in \ZN} m(\alpha+k)$ if $|\Re\alpha|<\frac{1}{2}$;
\item
$\sum_{k\geq 0} m(\alpha+k)$ if $\Re\alpha=\frac{1}{2}$;
\item
$\sum_{k\leq 0} m(\alpha+k)$ if $\Re\alpha=-\frac{1}{2}$.
\end{itemize}
So $\widetilde{P}(x)$ has the same degree as $P(x)$ and its roots are obtained from roots of $P(x)$ by the minimal integer shift into the strip $|\Re x|\leq \frac{1}{2}$. In particular, the roots of $\widetilde{P}(x)$ are symmetric with respect to ${\rm i}\RN$.
Suppose that $\alpha\in \CN$ has real part greater than $\frac{1}{2}$, $S(x)$ is an arbitrary polynomial, $R(x)=S\big(x-\tfrac{1}{2}\big)-tS\big(x+\tfrac{1}{2}\big)$, $i\geq 0$. Let~$r$ be the smallest positive integer such that ${\rm Re}(\alpha)-r\leq \frac{1}{2}$. Then
\begin{gather*}
S^{(i)}(\alpha)=\sum_{k=0}^{r-1}\big(t^{-k}S^{(i)}(\alpha-k)
-t^{-k-1}S^{(i)}(\alpha-k-1)\big)+t^{-r}S^{(i)}(\alpha-r)
\\ \hphantom{S^{(i)}(\alpha)}
{}=\sum_{k=0}^{r-1}t^{-k-1}R^{(i)}\big(\alpha-k-\tfrac{1}{2}\big)+t^{-r}S^{(i)}(\alpha-r)
=\phi_{i,\alpha}(R)+t^{-r}S^{(i)}(\alpha-r),
\end{gather*}
where
\[
\phi_{i,\alpha}(R):=\sum_{k=0}^{r-1}t^{-k-1}R^{(i)}\big(\alpha-k-\tfrac{1}{2}\big).
\]
We can write a similar equation for $\alpha\in \CN$ with real part smaller than $-\frac{1}{2}$.
Therefore
\begin{gather*}
T\big(S\big(z-\tfrac{1}{2}\big)-tS\big(z+\tfrac{1}{2}\big)\big)=\!\!\!\sum_{\alpha,\,0\le i<m(\alpha)}\!\!\!\!\!C_{\alpha i} S^{(i)}(\alpha)
\\ \hphantom{T\big(S\big(z-\tfrac{1}{2}\big)-tS\big(z+\tfrac{1}{2}\big)\big)}
{}=\!\!\!\sum_{\alpha,\,0\le i<m(\alpha)}\!\!\!\!\!\big(C_{\alpha i} \phi_{i,\alpha}(R)+t^{-r(\alpha)}C_{\alpha i} S^{(i)}(\alpha-r(\alpha))\big)=\Phi(R)+\widetilde{T}(R(z)),
\end{gather*}
where $\Phi(R):=\sum\nolimits_{\Re a\neq 0,k\ge 0}c_{ak} R^{(k)}(a)$, $c_{ak}\in \CN$, and $\widetilde{T}$ is a $g_t$-twisted trace for the quantization defined by the polynomial $\widetilde{P}(x)$. Below we will abbreviate this sentence to ``$\widetilde{T}$ is a trace for $\widetilde{P}$''.
Let $P_\circ$ be the following polynomial: all the roots of $P_\circ$ belong to the strip $|\Re x|\leq \frac{1}{2}$ and the multiplicity of each $\alpha$ with $|\Re\alpha|\leq \frac{1}{2}$ in $P_\circ$ equals the multiplicity of $\alpha$ in $P$.
Since $\phi_{i,\alpha}$ are linearly independent for different $i,\alpha$, we deduce that $\Phi=0$ if and only if $T$ is a trace for $P_\circ$.
So we have proved the following proposition:
\begin{Proposition}
\label{PropGeneralTrace}
Suppose that $P$ is any polynomial, $\widetilde{P}$ is obtained from $P$ by the minimal integer shift of roots into the strip $|\Re x|\leq \frac{1}{2}$, and $P_\circ$ is obtained from $P$ by throwing out roots not in the strip $|\Re x|\leq \frac{1}{2}$. Then any twisted trace $T$ on $\mathcal A_P$ can be represented as $T=\Phi+\widetilde{T}$, where
\[
\Phi(R)=\sum_{a\notin {\rm i}\RN,\,k\ge 0} c_{a k} R^{(k)}(a),
\]
and $\widetilde{T}$ is a trace for $\widetilde{P}$. Furthermore, if $\Phi=0$ then $T$ is a trace for $P_\circ$.
\end{Proposition}
\begin{Remark} We may think about Proposition~\ref{PropGeneralTrace} as follows.
When the roots of $P$ lie inside the strip $|\Re x|<\frac{1}{2}$, the trace of $R(z)$ is given by
the integral of $R$ against the weight function~$w$ along the imaginary axis.
However, when we vary~$P$, as soon as its roots leave the strip $|\Re x|<\frac{1}{2}$,
poles of~$w$ start crossing the contour of integration. So~for the formula to remain valid,
we need to add the residues resulting from this. These residues give rise to the linear functional $\Phi$.
\end{Remark}
\section{Positivity of twisted traces}\label{sec4}
\subsection{Analytic lemmas}
We will use the following classical result:
\begin{Lemma}
\label{LemClassicalDense}
Suppose that $w(x)\ge 0$ is a measurable function on the real line such that $w(x)<c {\rm e}^{-b|x|}$ for some $c,b>0$. We~also assume that $w>0$ almost everywhere. Then polynomials are dense in the space $L^p(\RN,w(x){\rm d}x)$ for all $1\leq p<\infty$.
\end{Lemma}
\begin{proof}
Changing $x$ to $bx$ we can assume that $b=1$.
Fix $p$. Let~$\frac{1}{p}+\frac{1}{q}=1$. Since $L^p(\RN,w)^*=L^q(\RN,w)$, it suffices to show that any function $f\in L^q(\RN,w)$ such that $\int_{\RN} f(x)x^n w(x){\rm d}x=0$ for all nonnegative integers $n$ must be zero.
Choose $0<a<\frac{1}{p}$. We~have ${\rm e}^{a|x|}\in L^p(\RN,w)$. Therefore $f(x){\rm e}^{a|x|}w(x)\in L^1(\RN)$. Denote $f(x)w(x)$ by $F(x)$. Let~$\widehat{F}$ be the Fourier transform of $F$. Since $F(x){\rm e}^{a|x|}\in L^1(\RN)$, $\widehat{F}$ extends to a holomorphic function in the strip $|\Im x|<a$.
Since $\int_{\RN}f(x)x^n w(x){\rm d}x=0$, we have $\int_{\RN}F(x)x^n{\rm d}x=0$, so $\widehat{F}^{(n)}(0)=0$. Since $\widehat{F}$ is a holomorphic function and all derivatives of $\widehat{F}$ at the origin are zero, we deduce that $\widehat{F}=0$. Therefore $F=0$, so $f=0$ almost everywhere, as desired.
\end{proof}
We get the following corollaries:
\begin{Lemma}\label{CorClassical}
Let $w$ satisfy the assumptions of Lemma~$\ref{LemClassicalDense}$.
\begin{enumerate}\itemsep=0pt
\item[$1.$]
Suppose that $H(x)$ is a continuous complex-valued function on $\mathbb R$ with finitely many zeros and at most polynomial growth at infinity. Then the set $\{H(x)S(x)\,|\, S(x)\in\CN[x]\}$ is dense in the space $L^p(\RN,w)$.
\item[$2.$]
Suppose that $M(x)$ is a nonzero polynomial nonnegative on the real line. Then the closure of the set $\{M(x)S(x)\overline{S}(x)\,|\, S(x)\in \CN[x]\}$ in $L^p(\RN,w)$ is the set of almost everywhere nonnegative functions.
\end{enumerate}
\end{Lemma}
\begin{proof}
1.\ The function $w(x)|H(x)|^p$ satisfies the assumptions of Lemma~\ref{LemClassicalDense}. Therefore polynomials are dense in the space $L^p(\RN,w|H|^p)$. The map $g\mapsto gH$ is an isometry between $L^p(\RN,w|H|^p)$ and $L^p(\RN,w)$. The statement follows.
2.\ Suppose that $f\in L^p(\RN,w)$ is nonnegative almost everywhere. Then $\sqrt{f}$ is an element of~$L^{2p}(\RN,w)$. Using (1), we find a sequence $S_n\in \CN[x]$ such that $\sqrt{M}S_n$ tends to $\sqrt{f}$ in $L^{2p}(\RN,w)$. We~use the following corollary of the Cauchy--Schwarz inequality: if $a_n$, $b_n$ tend to $a$,~$b$ respectively in $L^{2p}(\RN,w)$ then $a_nb_n$ tends to $ab$ in $L^p(\RN,w)$.
Applying this to $a=b=\sqrt{f}$, $a_n=\sqrt{M}S_n$, $b_n=\sqrt{M}\overline{S_n}$ we deduce that $MS_n\overline{S_n}$ tends to $f$ in $L^p(\RN,w)$. The statement follows.
\end{proof}
\subsection[The case when all roots of P(x) satisfy |Re alpha|<1/2]
{The case when all roots of $\boldsymbol{P(x)}$ satisfy $\boldsymbol{|\Re\alpha|<\frac{1}{2}}$}
Let $\mc A$ be a filtered quantization of $A$ with conjugations $\rho_\pm$ such that $\rho_\pm^2=g_t$. We~want to classify positive definite Hermitian $\rho_\pm$-invariant forms on $\mc{A}$, i.e., positive definite Hermitian forms $(\cdot,\cdot)$ on $\mc{A}$ such that
\[
(a\rho(y),b)=(a,yb)
\]
for all $a,b,y\in\mc{A}$, where $\rho=\rho_\pm$.
In this subsection we will do the classification in the case when all roots $\alpha$ of $P(x)$ satisfy $|\Re\alpha|<\tfrac{1}{2}$. We~start with general results that are true for all parameters $P$.
It is easy to see that Hermitian $\rho$-invariant forms are in one-to-one correspondence with $g_t$-twisted $\rho$-invariant traces, i.e., $g_t$-twisted traces $T$ such that $T(\rho(a))=\overline{T(a)}$. The correspondence is as follows:
\[(a,b)=T(a\rho(b)),\qquad T(a)=(a,1).\] Therefore it is enough to classify $g_t$-twisted traces $T$ such that the Hermitian form $(a,b)=T(a\rho(b))$ is positive definite. This means that $T(a\rho(a))>0$ for all nonzero $a\in \mc{A}$. Recall that $\ensuremath\operatorname{ad} z$ acts on $\mc{A}$ diagonalizably, $\mc{A}=\oplus_{d\in \ZN} \mc{A}_{d}$. Thus it is enough to check the condition $T(a\rho(a))>0$ for homogeneous $a$.
\begin{Lemma}\label{posde}\qquad
\begin{enumerate}\itemsep=0pt
\item[$1.$]
$T$ gives a positive definite form if and only if one has $T(a\rho(a))>0$ for all nonzero $a\in \mc{A}$ of weight $0$ or $1$.
\item[$2.$]
$T$ gives a positive definite form if and only if
\[T\big(R(z)\overline{R}(-z)\big)>0\qquad \text{and}\qquad
\lambda T\big(R\big(z-\tfrac{1}{2}\big)\overline{R}\big(\tfrac{1}{2}-z\big)P\big(z-\tfrac{1}{2}\big)\big)>0
\]
for all nonzero $R\in \CN[x]$.
\end{enumerate}
\end{Lemma}
\begin{proof}
1.\ Suppose that $T(a\rho(a))>0$ for all nonzero $a\in \mc{A}$ of weight $0$ or $1$. Let~$a$ be a nonzero homogeneous element of $\mc{A}$ with positive weight. There exist $b$ of weight $0$ or $1$ and a nonnegative integer $k$ such that $a=v^kbv^k$. We~have
\begin{gather*}
T(a\rho(a))=\lambda^{2k}T\big(v^kbv^ku^k\rho(b)u^k\big)=\lambda^{2k}T\big(g_t^{-1}\big(u^k\big)v^kbv^ku^k\rho(b)\big)
\\ \hphantom{T(a\rho(a))}
{}= \lambda^{2k}t^{k}T\big(u^kv^kbv^ku^k\rho(b)\big)=
(-1)^{nk}T\big(u^kv^kbv^ku^k\rho(b)\big)=T\big(u^kv^kb\rho\big(u^kv^kb\big)\big)>0
\end{gather*}
since $u^kv^kb$ is a homogeneous element of weight $0$ or $1$.
Suppose that $a$ is a nonzero homogeneous element of $\mc{A}$ with negative weight. Then $a=\rho(b)$, where $b$ is a homogeneous element with positive weight. We~get
\[
T(a\rho(a))=T(\rho(b)\rho^2(b))=T(\rho(b)g_t(b))=T(b\rho(b))>0.
\]
2.\ Suppose that $a$ is an element of $\mc{A}_0$. Then $a=R(z)$ for some $R\in\CN[x]$. We~have $T(a\rho(a))=T(R(z)\overline{R}(-z))$.
Suppose that $a$ is an element of $\mc{A}_1$. Then $a=R\big(z-\frac{1}{2}\big)v$ for some $R\in \CN[x]$. We~have
\begin{gather*}
T(a\rho(a))=\lambda T\big(R\big(z-\tfrac{1}{2}\big)v \overline{R}\big(-z-\tfrac{1}{2}\big)u\big)
= \lambda T\big(R\big(z-\tfrac{1}{2}\big)vu \overline{R}\big(-z+\tfrac{1}{2}\big)\big)
\\ \hphantom{T(a\rho(a))}
{}=
\lambda T\big(R\big(z-\tfrac{1}{2}\big)\overline{R}\big(\tfrac{1}{2}-z\big)P\big(z-\tfrac{1}{2}\big)\big).
\end{gather*}
The statement follows.
\end{proof}
\begin{Proposition}
\label{PropFromPositiveTraceToFunctions}
Suppose that $T(R(z))=\int_{{\rm i}\RN}R(x)w(x)|{\rm d}x|$. Then $T$ gives a positive definite form if and only if $w(x)$ and $\lambda w\big(x+\frac{1}{2}\big)P(x)$ are nonnegative on ${\rm i}\RN$.
\end{Proposition}
\begin{proof} By Lemma~\ref{posde},
$T$ gives a positive definite form if and only if
\[
T(R(z)\overline{R}(-z))>0
\]
and
\[
\lambda T\big(R\big(z-\tfrac{1}{2}\big)\overline{R}\big(\tfrac{1}{2}-z\big)P\big(z-\tfrac{1}{2}\big)\big)>0
\]
for all nonzero $R\in \CN[x]$. A polynomial $S\in \CN[x]$ can be represented as $S(x)=R(x)\overline{R}(-x)$ if~and only if $S$ is nonnegative on ${\rm i}\RN$. So~we have $T(R(z)\overline{R}(-z))>0$ for all nonzero $R\in \CN[x]$ if~and only if
\[
\int_{{\rm i}\RN}S(x)w(x)|{\rm d}x|>0
\]
for all nonzero $S\in \CN[x]$ nonnegative on ${\rm i}\RN$. Using Lemma~\ref{CorClassical}(2) for $M=1$, we see that this is equivalent to $w(x)$ being nonnegative on ${\rm i}\RN$.
We have
\begin{gather*}
T\big(R\big(z-\tfrac{1}{2}\big)\overline{R}\big(\tfrac{1}{2}-z\big)P\big(z-\tfrac{1}{2}\big)\big)=\int_{{\rm i}\RN}R\big(x-\tfrac{1}{2}\big)\overline{R}\big(\tfrac{1}{2}-x\big)P\big(x-\tfrac{1}{2}\big)w(x)|{\rm d}x|
\\ \hphantom{T\big(R\big(z-\tfrac{1}{2}\big)\overline{R}\big(\tfrac{1}{2}-z\big)P\big(z-\tfrac{1}{2}\big)\big)}
{}=\int_{-\frac{1}{2}+{\rm i}\RN}R(x)\overline{R}\big(-x\big)P(x)w\big(x+\tfrac{1}{2}\big)|{\rm d}x|
\\ \hphantom{T\big(R\big(z-\tfrac{1}{2}\big)\overline{R}\big(\tfrac{1}{2}-z\big)P\big(z-\tfrac{1}{2}\big)\big)}
{}=\int_{{\rm i}\RN}R(x)\overline{R}\big(-x\big)P(x)w\big(x+\tfrac{1}{2}\big)|{\rm d}x|.
\end{gather*}
In the last equality we used the Cauchy theorem and the fact that the function \mbox{$P(x)w\big(x+\tfrac{1}{2}\big)$} is~holomorphic when $|\Re x|\leq \frac{1}{2}$. Using Lemma~\ref{CorClassical}(2) for $M=1$ again, we see that \mbox{$\lambda T\big(R\big(z-\frac{1}{2}\big)$} $\times\overline{R}\big(\frac{1}{2}-z\big) P\big(z-\frac{1}{2}\big)\big)>0$ for all nonzero $R\in \CN[x]$ if and only if $\lambda P(x)w\big(x+\frac{1}{2}\big)$ is nonnegative on ${\rm i}\RN$.
\end{proof}
From now on we assume that all roots $\alpha$ of $P(x)$ satisfy $|\Re\alpha|<\frac{1}{2}$. In this case every trace~$T$ can be represented as $T(R(z))=\int_{{\rm i}\RN}R(x)w(x)|{\rm d}x|$.
Recall that $w(x)={\rm e}^{2\pi {\rm i}cx}\frac{G({\rm e}^{2\pi {\rm i} x})}{\mathbf{P}({\rm e}^{2\pi {\rm i} x})},$ where~$G$ is any polynomial with $\deg G\leq \deg P$ in the case when $c\neq 0$ and $\deg G<\deg P$ in the case when~$c=0$.
\begin{Proposition}
\label{PropFromFunctionsToPolynomials}\qquad
\begin{enumerate}\itemsep=0pt
\item[$1.$]
If $\lambda=-{\rm i}^{-n}{\rm e}^{-\pi {\rm i}c}$ $($i.e., $\rho=\rho_-)$ then $w(x)$ and $\lambda P(x)w\big(x+\frac{1}{2}\big)$ are nonnegative on ${\rm i}\RN$ if~and only if $G(X)$ is nonnegative when $X>0$ and nonpositive when $X<0$.
\item[$2.$]
If $\lambda=+{\rm i}^{-n}{\rm e}^{-\pi {\rm i}c}$ $($i.e., $\rho=\rho_+)$ then $w(x)$ and $\lambda w\big(x+\frac{1}{2}\big)P(x)$ are nonnegative on ${\rm i}\RN$ if~and only if $G(X)$ is nonnegative for all $X\in \RN$.
\end{enumerate}
\end{Proposition}
\begin{proof}
It is easy to see that $\mathbf{P}(X)$ is positive when $X>0$. Therefore $w(x)$ is nonnegative on~${\rm i}\RN$ if and only if $G(X)$ is nonnegative when $X>0$.
We have
\[
\lambda P(x)w\big(x+\frac{1}{2}\big)=\pm {\rm i}^{-n}P(x) {\rm e}^{2\pi {\rm i} cx}\frac{G(-{\rm e}^{2\pi {\rm i} x})}{\mathbf{P}(-{\rm e}^{2\pi {\rm i} x})}.
\]
It is clear that $\frac{{\rm i}^{-n}P(x)}{\mathbf{P}(-{\rm e}^{2\pi {\rm i} x})}$ belongs to $\RN$ when $x\in {\rm i}\RN$ and does not change sign on ${\rm i}\RN$. When~$x$ tends to $-{\rm i}\infty$, the functions ${\rm i}^{-n}P(x)$ and $\mathbf{P}(-{\rm e}^{2\pi {\rm i} x})$ both have sign $(-1)^n$. Therefore $\frac{{\rm i}^{-n}P(x)}{\mathbf{P}(-{\rm e}^{2\pi {\rm i} x})}$ is positive on ${\rm i}\RN$. We~deduce that $\pm G(X)$ should be nonnegative when $X<0$. So~there are two cases:
\begin{enumerate}\itemsep=0pt
\item
If $\lambda=-{\rm i}^{-n}{\rm e}^{-\pi {\rm i}c}$ then $G(X)$ should be nonnegative when $X>0$ and nonpositive when $X<0$.
\item
If $\lambda=+{\rm i}^{-n}{\rm e}^{-\pi {\rm i}c}$ then $G(X)$ should be nonnegative for all $X\in \RN$.
\end{enumerate}
This proves the proposition.
\end{proof}
We deduce the following theorem from Propositions~\ref{PropFromPositiveTraceToFunctions} and~\ref{PropFromFunctionsToPolynomials}:
\begin{Theorem}
\label{ThrPositiveFormSmallerThanOne}
Suppose that $\mc{A}$ is a deformation of $A=\CN[p,q]^{\mathbb Z/n}$ with conjugation $\rho$ as above, $\rho^2=g_t$, $t=\exp(2\pi {\rm i} c)$. Let~$P(x)$ be the parameter of $\mc{A}$, $\varepsilon={\rm i}^n{\rm e}^{\pi {\rm i}c}\lambda=\pm 1$ $($so $\rho=\rho_\varepsilon)$. Then the cone $\mc C_+$ of positive definite $\rho$-invariant forms on $\mc{A}$ is isomorphic to the cone of nonzero polynomials $G(X)$ of degree $\le n-1$ $($with $G(0)=0$ if $c=0)$ such that
\begin{enumerate}\itemsep=0pt
\item[$1.$] If $\varepsilon=-1$ then $G(X)$ is nonnegative when $X>0$ and nonpositive when $X<0$.
\item[$2.$] If $\varepsilon=1$ then $G(X)$ is nonnegative for all $X\in \RN$.
\end{enumerate}
\end{Theorem}
Thus for $\rho=\rho_-$, $G(X)=XU(X)$ where $U(X)\ge 0$ is a polynomial of degree $\le n-2$, and for $\rho=\rho_+$, $G(X)\ge 0$ is a polynomial of degree $\le n-1$ with $G(0)=0$ if $c=0$; in the latter case $G(X)=X^2U(X)$ where $U(X)\ge 0$ is a polynomial of degree $\le n-3$. Therefore, we get
\begin{Proposition} The dimension of $\mc C_+$ modulo scaling is
\begin{enumerate}\itemsep=0pt
\item[$\bullet$] $n-2$ for even $n$ and $n-3$ for odd $n$ if $\rho=\rho_-$;
\item[$\bullet$] $n-2$ for even $n$ and $n-1$ for odd $n$ if $c\ne 0$ and $\rho=\rho_+$;
\item[$\bullet$] $n-4$ for even $n$ and $n-3$ for odd $n$ if $c=0$ and $\rho=\rho_+$.
\end{enumerate}
$($Here if the dimension is $<0$, the cone $\mc C_+$ is empty$.)$
\end{Proposition}
Consider now the special case of even short star-products (i.e., quaternionic structures). Let~$\mc A$ be an even quantization of $A$, and
$\mc C_+^{\rm even}$ the cone of positive $\sigma$-stable $s$-twisted traces (i.e., those defining even short star-products). Then we have
\begin{Proposition}\label{eve}
The dimensions
of $\mc C_+^{\rm even}$ modulo scaling in various cases are as follows:
\begin{enumerate}\itemsep=0pt
\item[$\bullet$] $\frac{n-3}{2}$ if $\rho=\rho_-$, $n$ odd;
\item[$\bullet$] $\frac{n-1}{2}$ if $\rho=\rho_+$, $n$ odd;
\item[$\bullet$] $\frac{n-2}{2}$ if $\rho=\rho_-$, $n$ even;
\item[$\bullet$] $\frac{n-4}{2}$ if $\rho=\rho_+$, $n$ even.
\end{enumerate}
\end{Proposition}
Proposition~\ref{eve} shows that the only cases of a unique positive
$\sigma$-stable $s$-twisted trace are $\rho=\rho_+$ for $n=1,4$ and $\rho=\rho_-$ for $n=2,3$.
The paper~\cite{BPR} considers the case $\rho=\rho_+$ if $n=0,1$ mod $4$
and $\rho=\rho_-$ if $n=2,3$ mod $4$; this is the canonical quaternionic structure of the hyperK\"ahler cone (see~\cite[Section~3.8]{ES}), since it is obtained from $\rho_+$ on $\mathbb C[p,q]$ by restricting to $\mathbb Z/n$-invariants. Thus for $n\le 4$ the unitary even star-product is unique, as conjectured in~\cite{BPR}. However, for $n\ge 5$ this is no longer so. For~example, for $n=5$ (a case commented on at the end of Section~6 of~\cite{BPR}) by Proposition~\ref{eve} the cone $\mc C_+^{\rm even}$ modulo scaling is 2-dimensional (which disproves the most optimistic conjecture of~\cite{BPR} that a unitary even star-product is always unique).\footnote{It is curious that in the case considered in~\cite{BPR}, the dimension of $\mc C_+^{\rm even}$ modulo scaling is always even.}
\begin{Example} Let $n=1$, $P(x)=x$, so $\mathbf{P}(X)=X+1$.
Then for $\rho=\rho_-$ there are no positive traces while for $\rho=\rho_+$
positive traces exist only if $c\ne 0$. In this case there is a unique positive trace up to scaling given by the weight function
\[
{\rm w}(y)=\frac{{\rm e}^{2\pi {\rm i}\left(c-\frac{1}{2}\right)y}}{2\cos \pi y}.
\]
In particular, the only quaternionic case is $\rho=\rho_+$, $c=\frac{1}{2}$,
which gives ${\rm w}(y)=\frac{1}{2\cos \pi y}$.
\end{Example}
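With this normalization the weight is positive on the imaginary axis and has total mass $\frac{1}{2}$: substituting $x={\rm i}y$ turns $\cos\pi x$ into $\cosh\pi y$, and $\int_{\RN}\frac{{\rm d}y}{2\cosh\pi y}=\frac{1}{2}$. The following snippet is an independent numerical sanity check of this value (not part of the argument; the quadrature rule and the truncation interval $[-10,10]$ are ad hoc choices of the sketch):

```python
import math

def w1(y):
    # weight of the unique positive trace for n = 1, c = 1/2,
    # evaluated on the imaginary axis x = i*y:
    # 1/(2 cos(pi x)) becomes 1/(2 cosh(pi y))
    return 1.0 / (2.0 * math.cosh(math.pi * y))

def simpson(f, a, b, n):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

# T(1) = integral of the weight over the imaginary axis; should equal 1/2
T1 = simpson(w1, -10.0, 10.0, 20000)
```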
\begin{Example}\label{neq2} Let $n=2$, $P(x)=x^2+\beta^2$ with $\beta^2\in \mathbb R$, so that $\mathbf{P}(X)=(X+{\rm e}^{2\pi\beta})(X+{\rm e}^{-2\pi\beta})$. We~assume that $\beta^2>-\frac{1}{4}$ so that all roots of $P$ are in the strip $|\Re x|<\frac{1}{2}$. Then $\rho=\rho_-$ gives a unique up to scaling positive trace defined by the weight function
\[
{\rm w}(y)=\frac{{\rm e}^{2\pi cy}}{4\cos \pi(y-\beta)\cos \pi(y+\beta)},
\]
and $\rho=\rho_+$ is possible if and only if $c\ne 0$ and gives a unique up to scaling positive trace defined by the weight function
\[
{\rm w}(y)=\frac{{\rm e}^{2\pi(c-1)y}}{4\cos \pi(y-\beta)\cos \pi(y+\beta)}.
\]
In particular, the only quaternionic case is $\rho=\rho_-$, $c=0$, with
\[
{\rm w}(y)=\frac{1}{4\cos \pi(y-\beta)\cos \pi(y+\beta)},
\]
which corresponds to the ${\rm SL}_2$-invariant short star-product.
There are two subcases: $\beta^2\ge 0$, which corresponds to the
{\it spherical unitary principal series} for ${\rm SL}_2(\mathbb C)$, and
$-\frac{1}{4}<\beta^2<0$, which corresponds to the {\it spherical unitary complementary series} for the same group (namely, the trace form is exactly the positive inner product on the underlying Harish-Chandra bimodule).
Note that together with the trivial representation $\big($corresponding to $\beta^2=-\frac{1}{4}\big)$, these representations are well known to exhaust irreducible spherical unitary representations of ${\rm SL}_2(\mathbb C)$~\cite{V}.
\end{Example}
\begin{Example}
Let $n=3$ and $P(x)=x^3+\beta^2x=x(x-{\rm i}\beta )(x+{\rm i}\beta )$, where $\beta^2\in \mathbb R$.
This gives the algebra defined by formulas~(6.17), (6.18)
of~\cite{BPR}, with $\zeta=1$; namely,
the generators $\hat X$, $\hat Y$, $\hat Z$ of~\cite{BPR} are $v$, $u$, $z$, respectively, and the parameter $\kappa$ of~\cite{BPR} is $\kappa=-\beta^2-\frac{1}{4}$.
This is an even quantization of $A=\mathbb C[X_3]$. Thus even short star-products are parametrized by a~single parameter $\alpha$; namely, the corresponding $\sigma$-invariant $s$-twisted trace such that $T(1)=1$ is determined by the condition that $T(z^2)=-\alpha$ (using the notation of~\cite{BPR}).
Assume that $\beta^2>-\frac{1}{4}$ (i.e., $\kappa<0$), so that all the roots of $P$ are in the strip $|\Re x|<\frac{1}{2}$. We~have
\[
\mathbf{P}(X)=(X+1)\big(X+{\rm e}^{2\pi \beta}\big)\big(X+{\rm e}^{-2\pi \beta}\big).
\]
In this case $c=\frac{1}{2}$ so the trace $T$, up to scaling, is given by
\[
T(R(z))=\int_{{\rm i}\mathbb R}R(x)w(x)|{\rm d}x|,
\]
where
\[
w(x)={\rm e}^{\pi {\rm i}x}\frac{G({\rm e}^{2\pi {\rm i}x})}{({\rm e}^{2\pi {\rm i}x}+1)({\rm e}^{2\pi {\rm i}(x-{\rm i}\beta )}+1)({\rm e}^{2\pi {\rm i}(x+{\rm i}\beta )}+1)},
\]
with $\deg(G)\le 2$. Moreover, because of evenness we must have $w(x)=w(-x)$, so $G(X)=X^2G\big(X^{-1}\big)$. Up to scaling, such polynomials $G$ form a 1-parameter family, parametrized by $\alpha$.
Following~\cite[Section~6.3]{BPR}, let us equip the corresponding quantum algebra $\mc A=\mc A_P$ with the quaternionic structure $\rho_-$ given by\footnote{Note that our $\rho$ is $\rho^{-1}$ in~\cite{BPR}, so we use $\rho_-$ while~\cite{BPR} use $\rho_+=\rho_-^{-1}$.}
\[
\rho_-(v)=-u,\qquad \rho_-(u)=v,\qquad \rho_-(z)=-z,
\]
and let us determine which traces are unitary
for this quaternionic structure. According to The\-o\-rem~\ref{ThrPositiveFormSmallerThanOne}, there is a unique
such trace (which is automatically $\sigma$-stable), corresponding to~$G(X)=X$. Thus this trace is given by the weight function
\[
w(x)=\frac{1}{\cos \pi x\cos \pi (x-{\rm i}\beta )\cos\pi(x+{\rm i}\beta )}.
\]
Hence,
\[
T(z^{k})=\int_{{\rm i}\mathbb R}\frac{x^{k}|{\rm d}x|}{\cos \pi x\cos \pi (x-{\rm i}\beta )\cos\pi(x+{\rm i}\beta )},
\]
in particular, $T(z^k)=0$ if $k$ is odd.
For even $k$ this integral can be computed using the residue formula. Namely, assume $\beta\ne 0$ and let us first compute $T(1)$. Replacing the contour ${\rm i}\mathbb R$ by $1+{\rm i}\mathbb R$ and subtracting,
we find using the residue formula:
\[
2T(1)=2\pi\big({\rm Res}_{\frac{1}{2}}w+{\rm Res}_{\frac{1}{2}-{\rm i}\beta }w+{\rm Res}_{\frac{1}{2}+{\rm i}\beta }w\big).
\]
Now,
\[
{\rm Res}_{\frac{1}{2}}w=\frac{1}{\pi\sinh^2\pi \beta},
\]
while
\[
{\rm Res}_{\frac{1}{2}-{\rm i}\beta }w={\rm Res}_{\frac{1}{2}+{\rm i}\beta }w=-\frac{1}{\pi \sinh \pi\beta \sinh 2\pi \beta}.
\]
Thus
\[
T(1)=\frac{1}{\sinh^2\pi \beta}-\frac{2}{\sinh \pi\beta \sinh 2\pi\beta }=
\frac{1}{\sinh^2\pi \beta}\bigg(1-\frac{1}{\cosh \pi\beta }\bigg)=\frac{1}{2\cosh^2(\frac{\pi\beta }{2})\cosh \pi\beta }.
\]
Note that this function has a finite value at $\beta=0$, which is the answer in that case.
Now let us compute $T\big(z^2\big)$.
Again replacing the contour ${\rm i}\mathbb R$ with $1+{\rm i}\mathbb R$ and subtracting,
we~get
\[
T(1)+2T\big(z^2\big)=T\big(z^2\big)+T\big((z+1)^2\big)=2\pi\big({\rm Res}_{\frac{1}{2}}x^2w+{\rm Res}_{\frac{1}{2}-{\rm i}\beta }x^2w+{\rm Res}_{\frac{1}{2}+{\rm i}\beta }x^2w\big).
\]
Now,
\[
{\rm Res}_{\frac{1}{2}}x^2w=\frac{1}{4\pi\sinh^2\pi \beta},
\]
while
\[
{\rm Res}_{\frac{1}{2}-{\rm i}\beta }x^2w+{\rm Res}_{\frac{1}{2}+{\rm i}\beta }x^2w=\frac{2\beta^2-\frac{1}{2}}{\pi \sinh \pi\beta \sinh 2\pi \beta}.
\]
So
\[
T\big(z^2\big)=-\frac{1}{4\sinh^2\pi \beta}+\frac{2\beta^2+\frac{1}{2}}{\sinh \pi\beta \sinh 2\pi \beta}=
\frac{1}{\sinh^2\pi \beta}\bigg(-\frac{1}{4}+\frac{\beta^2+\frac{1}{4}}{\cosh \pi \beta}\bigg).
\]
Thus,
\[
\alpha=-\frac{T\big(z^2\big)}{T(1)}=\frac{1}{4}+\frac{\beta^2}{1-\cosh \pi\beta }=\frac{1}{4}-\frac{\kappa+\tfrac{1}{4}}{1-\cos\pi\sqrt{\kappa+\tfrac{1}{4}}}.
\]
This gives the equation of the curve in Fig.~2 in~\cite{BPR}. We~also note that for $\kappa=-\frac{1}{4}$ (i.e., $\beta=0$) we get
$\alpha=\frac{1}{4}-\frac{2}{\pi^2}$, the value found in~\cite{BPR}.
\end{Example}
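The closed form for $T(1)$ obtained above can also be verified numerically against the defining integral (on the imaginary axis $x={\rm i}y$ each factor $\cos\pi(x\mp{\rm i}\beta)$ becomes $\cosh\pi(y\mp\beta)$), and the limiting value $\alpha=\frac{1}{4}-\frac{2}{\pi^2}$ at $\beta=0$ can be recovered from the formula for $\alpha$. The sketch below is an independent sanity check, not part of the argument; the quadrature parameters and the sample point $\beta=0.3$ are ad hoc choices:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def w3(y, beta):
    # weight on the imaginary axis x = i*y
    return 1.0 / (math.cosh(math.pi * y)
                  * math.cosh(math.pi * (y - beta))
                  * math.cosh(math.pi * (y + beta)))

def T1_closed(beta):
    # T(1) = 1 / (2 cosh^2(pi beta / 2) cosh(pi beta))
    return 1.0 / (2.0 * math.cosh(math.pi * beta / 2) ** 2
                  * math.cosh(math.pi * beta))

def alpha(beta):
    # alpha = 1/4 + beta^2 / (1 - cosh(pi beta))
    return 0.25 + beta ** 2 / (1.0 - math.cosh(math.pi * beta))

beta = 0.3
T1_num = simpson(lambda y: w3(y, beta), -8.0, 8.0, 20000)
```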
\begin{Example}
Let $n=4$ and
\[
P(x)=\big(x^2+\beta^2\big)\big(x^2+\gamma^2\big)=(x-{\rm i}\beta )(x+{\rm i}\beta )(x-{\rm i}\gamma)(x+{\rm i}\gamma),
\]
where $\beta^2,\gamma^2\in \mathbb R$.
This is an even quantization of $A=\mathbb C[X_4]$ discussed in~\cite[Section~6.4]{BPR}. Thus even short star-products are still parametrized by a single parameter $\alpha$; namely, the corresponding $\sigma$-invariant $s$-twisted trace such that $T(1)=1$
is determined by the condition that $T\big(z^2\big)=-\alpha$.
Assume that $\beta^2,\gamma^2>-\frac{1}{4}$, so that all the roots of $P$ are in the strip $|\Re x|<\frac{1}{2}$. We~have
\[
\mathbf{P}(X)=\big(X+{\rm e}^{2\pi \beta}\big)\big(X+{\rm e}^{-2\pi \beta}\big)\big(X+{\rm e}^{2\pi \gamma}\big)\big(X+{\rm e}^{-2\pi \gamma}\big).
\]
In this case $c=0$ so the trace $T$, up to scaling, is given by
\[
T(R(z))=\int_{{\rm i}\mathbb R}R(x)w(x)|{\rm d}x|,
\]
where
\[
w(x)=\frac{G({\rm e}^{2\pi {\rm i}x})}{({\rm e}^{2\pi {\rm i}(x-{\rm i}\beta )}+1)({\rm e}^{2\pi {\rm i}(x+{\rm i}\beta )}+1)({\rm e}^{2\pi {\rm i}(x-{\rm i}\gamma)}+1)({\rm e}^{2\pi {\rm i}(x+{\rm i}\gamma)}+1)},
\]
with $\deg(G)\le 3$ and $G(0)=0$. Moreover, because of evenness we must have \mbox{$w(x)=w(-x)$}, so~$G(X)=X^4G\big(X^{-1}\big)$. Up to scaling, such polynomials $G$ form a 1-parameter family, para\-met\-ri\-zed by $\alpha$.
Let us equip the corresponding quantum algebra $\mc A=\mc A_P$ with the quaternionic structure $\rho_+$ given by $\rho_+(v)=u$, $\rho_+(u)=v$, $\rho_+(z)=-z$, and let us determine which traces are unitary
for this quaternionic structure. According to Theorem~\ref{ThrPositiveFormSmallerThanOne}, there is a unique
such trace (which is automatically $\sigma$-stable), corresponding to $G(X)=X^2$. Thus this trace is given by the weight function
\[
w(x)=\frac{1}{\cos \pi (x-{\rm i}\beta )\cos\pi(x+{\rm i}\beta )\cos \pi (x-{\rm i}\gamma)\cos\pi(x+{\rm i}\gamma)}.
\]
Thus,
\[
T\big(z^{k}\big)=\int_{{\rm i}\mathbb R}\frac{x^{k}|{\rm d}x|}{\cos \pi (x-{\rm i}\beta )\cos\pi(x+{\rm i}\beta )\cos \pi (x-{\rm i}\gamma)\cos\pi(x+{\rm i}\gamma)},
\]
in particular, $T\big(z^k\big)=0$ if $k$ is odd.
As before, for even $k$ this integral can be computed using the residue formula. Namely, assume $\beta\ne 0$, $\gamma\ne 0$, $\beta\ne \pm \gamma$, and let us first compute $T(1)$. Replacing the contour ${\rm i}\mathbb R$ by $1+{\rm i}\mathbb R$ and subtracting, we find using the residue formula:
\begin{gather*}
T\big((z+1)^2\big)-T\big(z^2\big)=T(1)
\\ \hphantom{T\big((z+1)^2\big)-T\big(z^2\big)}
{}=-2\pi\big({\rm Res}_{\frac{1}{2}-{\rm i}\beta }x^2w+{\rm Res}_{\frac{1}{2}+{\rm i}\beta }x^2w+{\rm Res}_{\frac{1}{2}-{\rm i}\gamma}x^2w+{\rm Res}_{\frac{1}{2}+{\rm i}\gamma}x^2w\big).
\end{gather*}
Now,
\[
{\rm Res}_{\frac{1}{2}-{\rm i}\beta }x^2w+{\rm Res}_{\frac{1}{2}+{\rm i}\beta }x^2w=\frac{2\beta}{\pi \sinh \pi(\beta+\gamma)\sinh\pi(\gamma-\beta)\sinh 2\pi \beta}.
\]
Thus
\[
T(1)=\frac{1}{\sinh \pi(\beta+\gamma)\sinh\pi(\gamma-\beta)}\bigg(\frac{4\beta}{\sinh 2\pi \beta}-\frac{4\gamma}{\sinh 2\pi \gamma}\bigg).
\]
Note that this function is regular when $\beta\gamma(\beta-\gamma)(\beta+\gamma)=0$, and the corresponding limit is the answer in that case.
We similarly have
\begin{gather*}
T\big((z+1)^4\big)-T\big(z^4\big)=6T\big(z^2\big)+T(1)
\\ \hphantom{T\big((z+1)^4\big)-T\big(z^4\big)}
{}=-2\pi\big({\rm Res}_{\frac{1}{2}-{\rm i}\beta }x^4w+{\rm Res}_{\frac{1}{2}+{\rm i}\beta }x^4w+{\rm Res}_{\frac{1}{2}-{\rm i}\gamma}x^4w+{\rm Res}_{\frac{1}{2}+{\rm i}\gamma}x^4w\big),
\end{gather*}
and
\[
{\rm Res}_{\frac{1}{2}-{\rm i}\beta }x^4w+{\rm Res}_{\frac{1}{2}+{\rm i}\beta }x^4w=\frac{\beta-4\beta^3}{\pi \sinh \pi(\beta+\gamma)\sinh\pi(\gamma-\beta)\sinh 2\pi \beta}.
\]
Thus
\[
6T\big(z^2\big)+T(1)=\frac{2}{\sinh \pi(\beta+\gamma)\sinh\pi(\gamma-\beta)}\bigg(\frac{\beta-4\beta^3}{\sinh 2\pi \beta}-\frac{\gamma-4\gamma^3}{\sinh 2\pi \gamma}\bigg).
\]
Hence
\[
T\big(z^2\big)=\frac{1}{\sinh \pi(\beta+\gamma)\sinh\pi(\gamma-\beta)}\bigg(\frac{\gamma+4\gamma^3}{3\sinh 2\pi \gamma}-\frac{\beta+4\beta^3}{3\sinh 2\pi \beta}\bigg).
\]
Thus
\[
\alpha=-\frac{T\big(z^2\big)}{T(1)}=\frac{1}{12}+\frac{1}{3}\frac{\beta^3\sinh 2\pi \gamma-\gamma^3\sinh 2\pi \beta}{\beta\sinh 2\pi \gamma-\gamma\sinh 2\pi \beta}.
\]
This is the equation (in appropriate coordinates) of the surface computed numerically in~\cite{BPR} and shown in Fig.~4 of that paper.
In particular, for $\beta=\gamma=0$, we get
\[
\alpha=\frac{1}{12}-\frac{1}{2\pi^2}.
\]
Thus $\tau=128\alpha=\frac{32(\pi^2-6)}{3\pi^2}=4.18211{\dots}$
is the number given by the complicated expres\-sion~(B.16) of~\cite{BPR}
(as was pointed out in~\cite{DPY}).
\end{Example}
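As in the previous example, the closed form for $T(1)$ and the value of $\tau=128\alpha$ at $\beta=\gamma=0$ can be checked numerically. This is an independent sanity check, not part of the argument; the quadrature parameters and the sample point $(\beta,\gamma)=(0.2,0.5)$ are ad hoc choices:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def w4(y, beta, gamma):
    # weight on the imaginary axis x = i*y
    return 1.0 / (math.cosh(math.pi * (y - beta)) * math.cosh(math.pi * (y + beta))
                  * math.cosh(math.pi * (y - gamma)) * math.cosh(math.pi * (y + gamma)))

def T1_closed(beta, gamma):
    # T(1) = (4 beta / sinh(2 pi beta) - 4 gamma / sinh(2 pi gamma))
    #        / (sinh(pi (beta + gamma)) * sinh(pi (gamma - beta)))
    return ((4.0 * beta / math.sinh(2.0 * math.pi * beta)
             - 4.0 * gamma / math.sinh(2.0 * math.pi * gamma))
            / (math.sinh(math.pi * (beta + gamma))
               * math.sinh(math.pi * (gamma - beta))))

beta, gamma = 0.2, 0.5
T1_num = simpson(lambda y: w4(y, beta, gamma), -8.0, 8.0, 20000)

# alpha at beta = gamma = 0 and the constant tau = 128 * alpha
alpha0 = 1.0 / 12.0 - 1.0 / (2.0 * math.pi ** 2)
tau = 128.0 * alpha0
```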
\begin{Remark} Similar calculations can be found in~\cite[Section~8.1]{DPY}.
\end{Remark}
\subsection{The case of a closed strip}
Suppose now that all roots $\alpha$ of $P$ satisfy $|\Re\alpha|\leq \frac{1}{2}$. Recall that we have $P(x)=P_*(x)Q\big(x+\frac{1}{2}\big)Q\big(x-\frac{1}{2}\big)$, where all roots of $P_*(x)$ satisfy $|\Re x|<\frac{1}{2}$ and all roots of $Q(x)$ belong to ${\rm i}\RN$. For~any $R\in \CN[x]$ write $R=R_1Q+R_0$, where $\deg R_0<\deg Q$.
By Proposition~\ref{PropTracesForClosedStrip} each $g_t$-twisted trace can be obtained as
\[
T(R(z))=\int_{{\rm i}\RN}R_1(x)Q(x)w(x)|{\rm d}x|+\phi(R_0),
\]
where $w(x)={\rm e}^{2\pi {\rm i} cx}\frac{G({\rm e}^{2\pi {\rm i} x})}{\mathbf{P}({\rm e}^{2\pi {\rm i}x})}$ and $\phi$ is any linear functional.
\begin{Proposition}
\label{PropPositiveFormNoPoles}
Suppose that $T$ is a trace as above and $w(x)$ has poles on ${\rm i}\RN$. Then $T$ does not give a positive definite form.
\end{Proposition}
\begin{proof}
Let $Q_*(x)=Q(x)\overline{Q}(-x)$; note that $Q_*(x)\ge 0$ for $x\in {\rm i}\mathbb R$. Then there exists a linear functional $\psi$ such that for any $R=R_1Q_*+R_0$ with $\deg R_0<\deg Q_*$ we have
\[
T(R(z))=\int_{{\rm i}\RN}R_1(x)Q_*(x)w(x)|{\rm d}x|+\psi(R_0).
\]
Suppose that $T$ gives a positive definite form. Then $T(S(z)\overline{S}(-z))>0$ for all nonzero \mbox{$S\in\CN[x]$}. Taking $S(x)=Q_*(x)S_1(x)$ and using Lemma~\ref{CorClassical}, we deduce that $Q_*^2(x)w(x)$, hence~$w(x)$, is nonnegative on ${\rm i}\RN$. In particular, all poles of $w(x)$ have order at least $2$.
Without loss of generality assume that $w(x)$ has a pole at zero. Let
\[
R_n(x):=(F_nQ_*+b)\big(\overline{F_n}Q_*+b\big),
\]
where $b\in \RN$. Suppose that $F_n$ is a sequence of polynomials that tends to the function $f:=\chi_{(-\ensuremath\varepsilon,\ensuremath\varepsilon)}$ (the characteristic function of the interval) in the space $L^{2}\big({\rm i}\RN, \big(Q_*+Q_*^2\big)w\big)$. In particular, $F_n$~tends to $f$ in the spaces $L^2({\rm i}\RN,Q_* w)$ and $L^2\big({\rm i}\RN,Q_*^2 w\big)$. Then we deduce from the Cauchy--Schwarz inequality that $F_n\overline{F_n}$ tends to $f^2$ in the space $L^1\big({\rm i}\RN,Q_*^2w\big)$, and $F_n$ and $\overline{F_n}$ tend to $f$ in $L^1({\rm i}\RN,Q_*w)$.
We have
\begin{gather*}
T(R_n(z))=T\big(\big(F_n\overline{F_n}Q_*+F_nb+\overline{F_n}b\big)(z)Q_*(z)+b^2\big)
\\ \hphantom{T(R_n(z))}
{}=\int_{{\rm i}\RN}\big(F_n\overline{F_n}Q_*^2+F_nbQ_*+\overline{F_n}bQ_*\big)w|{\rm d}x|+\phi\big(b^2\big).
\end{gather*}
Therefore, when $n$ tends to infinity,
\[
T(R_n(z))\to \int_{{\rm i}\RN}\big(f^2Q_*^2+2fbQ_*\big)w|{\rm d}x|+\phi\big(b^2\big).
\]
We have $\phi\big(b^2\big)=Cb^2$ for some $C\geq 0$. Suppose that $w$ has a pole of order $M\geq 2$ at $0$ and $Q_*$ has a zero of order $N>0$ at $0$. Then $Q_*w$ has a zero of order $N-M$ at zero and $Q_*^2w$ has a~zero of order $2N-M$ at zero. We~deduce that
\[
\int_{{\rm i}\RN}F_nQ_*w|{\rm d}x|\to c_1\ensuremath\varepsilon^{N-M+1},\qquad \int_{{\rm i}\RN}F_nQ_*^2w|{\rm d}x|\to c_2\ensuremath\varepsilon^{2N-M+1},\qquad n\to \infty,
\]
where $c_1=c_1(\varepsilon)$, $c_2=c_2(\varepsilon)$ are functions having strictly positive limits at $\varepsilon=0$. Therefore
\[
\lim_{n\to\infty}T(R_n(z))=Cb^2+2c_1\ensuremath\varepsilon^{N-M+1}b+c_2\ensuremath\varepsilon^{2N-M+1}.
\]
This is a quadratic polynomial in $b$ with discriminant
\[
D=4\ensuremath\varepsilon^{2N-2M+2}\big(c_1^2-Cc_2\ensuremath\varepsilon^{M-1}\big).
\]
Since $M\geq 2$, for small $\ensuremath\varepsilon$ this discriminant is positive. In particular, for some $b$ we have $\lim\limits_{n\to\infty}T(R_n(z))<0$, so for this $b$ and some $n$, $T(R_n(z))<0$, a contradiction.
\end{proof}
Now we are left with the case when $w(x)$ has no poles on ${\rm i}\RN$. In this case $T(R(z))=\int_{{\rm i}\RN}R(x)w(x)|{\rm d}x|+\eta(R_0)$, where $\eta$ is some linear functional.
\begin{Proposition}
\label{PropIfPositiveThenNoDerivatives}
$T$ gives a positive definite form only when $\eta(R_0)=\sum_{j}c_j R_0(z_j)$, where $c_j\geq 0$ and $z_j\in {\rm i}\mathbb R$ are the roots of $Q$.
\end{Proposition}
\begin{proof}
Suppose that this is not the case. Then it is easy to find a polynomial $S$ such that $\eta\big(\big(S\overline{S}\big)_0\big)<0$. Then using Lemma~\ref{CorClassical}(2) for $M=Q$, we find $F_n$ such that $F_nQ+S$ tends to zero in $L^2({\rm i}\RN,w)$. We~deduce that
\[
T\big((F_nQ+S)(z)\overline{(F_nQ+S)}(-z)\big)\to \eta\big((S\overline{S})_0\big)<0,
\]
which gives a contradiction.
\end{proof}
In the proof of Proposition~\ref{PropPositiveFormNoPoles} we got that $w$ is nonnegative on ${\rm i}\RN$. We~also note that $Q(z)$ divides $P\big(z-\tfrac12\big)$, hence
\[
T\big(R\big(z-\tfrac12\big)\overline{R}\big(\tfrac12-z\big)P\big(z-\tfrac12\big)\big)
=\int_{{\rm i}\RN}R\big(x-\tfrac12\big)\overline{R}\big(\tfrac12-x\big)P\big(x-\tfrac12\big) w(x) |{\rm d}x|.
\]
Using the proof of Proposition~\ref{PropFromPositiveTraceToFunctions} we see that $\lambda P(z)w\big(z+\tfrac12\big)$ is nonnegative on ${\rm i}\RN$. Assume that $Q\big(z-\tfrac12\big)Q\big(z+\tfrac12\big)$ is positive on $\RN$. Then this is equivalent to $\lambda P_*(z)w\big(z+\tfrac12\big)$ being nonnegative on ${\rm i}\RN$. So~we have proved the following theorem.
\begin{Theorem}
\label{ThrPositiveDefiniteFormsClosedStrip}
Suppose that $P(x)=P_*(x)Q\big(x-\tfrac{1}{2}\big)Q\big(x+\frac{1}{2}\big)$, where all roots of $P_*$ belong to the strip $|\Re x|<\frac{1}{2}$ and all roots of $Q$ belong to ${\rm i}\RN$. Suppose that $\alpha_1,\ldots,\alpha_k$ are all the distinct roots of $Q$. Then positive traces $T$ are in one-to-one correspondence with tuples $(\widetilde{T}, c_1,\ldots, c_k)$, where $\widetilde{T}$ is a positive trace for $P_*$ and $c_1,\ldots,c_k\geq 0$; namely, \[T(R(z))=\widetilde{T}(R(z))+\sum_{i=1}^k c_i R(\alpha_i).\]
\end{Theorem}
\subsection{The general case}
Let $\mc{A}$ be a filtered quantization of $A$ with conjugation $\rho$ given by
formula \eqref{rho}. Let~$P(x)$ be its parameter. Let~$\widetilde{P}(x)$ be the polynomial defined in Section~\ref{SubSubSecGeneralTraces}: it has the same degree as~$P(x)$ and its roots are obtained from the roots of $P(x)$ by minimal integer shift into the strip $|\Re x|\leq \frac{1}{2}$.
Also recall from Section~\ref{SubSubSecGeneralTraces}
that $P_\circ$ denotes the following polynomial:
all roots of~$P_\circ$ belong to the strip $|\Re x|\leq \frac{1}{2}$ and the multiplicity of $\alpha$, $|\Re\alpha|\leq \frac{1}{2}$, in $P_\circ$ equals the multiplicity of~$\alpha$ in~$P$. Let~$n_\circ:=\deg(P_\circ)$.
Proposition~\ref{PropGeneralTrace} says that any trace $T$ can be represented as $T=\Phi+\widetilde{T}$, where $\Phi$ is a linear functional such that
\[
\Phi(R)=\sum_{j=1}^m\sum_k c_{jk} R^{(k)}(z_j),
\]
$z_j\notin {\rm i}\RN$, and $\widetilde{T}$ is a trace for $\widetilde{P}$. Furthermore, if $\Phi=0$ then $T$ is just a trace for $P_\circ$.
\begin{Proposition}
\label{PropPositivePhiIsZero}
Let $T$ be a trace such that $\Phi\neq 0$. Then $T$ does not give a positive definite form.
\end{Proposition}
\begin{proof}
For big enough $k$ we have $\Phi((x-z_1)^k\cdots (x-z_m)^k \CN[x])=0$. Recall that there exists a polynomial $Q_*(x)$ nonnegative on ${\rm i}\RN$ such that for $R=R_1Q_*+R_0$, $\deg R_0<\deg Q_*$, we have $\widetilde{T}(R)=\int_{{\rm i}\RN} R_1Q_* w |{\rm d}x|+\psi(R_0)$, where $\psi$ is some linear functional. Let~$U(x)$ be a polynomial divisible by $Q_*$ such that $\Phi(U(x)\CN[x])=0$.
Let $L$ be any polynomial. Using Lemma~\ref{CorClassical} for $M=U$, we find a sequence $G_n=US_n$ that tends to $L$ in the space $L^2({\rm i}\RN, Q_*w|{\rm d}x|)$. We~deduce that $H_n(x):=(G_n(x)-L(x))(\overline{G_n}(-x)-\overline{L}(-x))$ tends to zero in $L^1({\rm i}\RN,Q_*w)$. We~have $\widetilde{T}(H_n(z)Q_*(z))=\int_{{\rm i}\RN}H_n(x) Q_* w|{\rm d}x|$. We~conclude that $\widetilde{T}(H_n(z)Q_*(z))=\|H_n\|_{L^1({\rm i}\RN,Q_*w)}$ tends to zero when $n$ tends to infinity.
It follows that $T(H_n(z))$ tends to $\Phi(Q_*(x)H_n(x))=\Phi\big(Q_*(x)L(x)\overline{L}(-x)\big)$. Since $H_n$ is nonnegative on ${\rm i}\RN$, we have $T(H_n(z))>0$. Now we get a contradiction with
\begin{Lemma}
There exists $F(x)\in\CN[x]$ such that $\Phi\big(Q_*(x)F(x)\overline{F}(-x)\big)<0$.
\end{Lemma}
\begin{proof}
Let $r$ be the biggest number such that there exists $j$ with $c_{jr}\neq 0$. Let
\[
F(x):=G(x)(x-z_1)^{r+1}\cdots(x-z_j)^{r}\cdots (x-z_m)^{r+1}.
\]
Here we omit $x-z_{{j^*}}$ in the product, where ${j^*}\ne j$ is such that $z_{{j^*}}=-\overline{z_j}$. We~note that $c_{ik} \big(Q_*(x)F(x)\overline{F}(-x)\big)^{(k)}(z_i)=0$ for all $i,k$ except $k=r$ and $i=j$ or $i={j^*}$. It follows that
\begin{gather*}
\Phi\big(Q_*(x)F(x)\overline{F}(-x)\big)=
c_{jr}\big(Q_*(x)F(x)\overline{F}(-x)\big)^{(r)}(z_j)+c_{{j^*}r} \big(Q_*(x)F(x)\overline{F}(-x)\big)^{(r)}(z_{{j^*}})
\\ \hphantom{\Phi\big(Q_*(x)F(x)\overline{F}(-x)\big)}
{}=c_{jr} Q_*(z_j)F^{(r)}(z_j)\overline{F}(-z_j)+(-1)^r c_{{j^*}r}Q_*(z_{{j^*}})F(z_{{j^*}})\overline{F}^{(r)}(-z_{{j^*}})
\\ \hphantom{\Phi\big(Q_*(x)F(x)\overline{F}(-x)\big)}
{}=c_{jr}a+c_{{j^*}r}\overline a,
\end{gather*}
where $a:= Q_*(z_{j})F^{(r)}(z_{j})\overline{F}(-z_{j})$. Pick $a\in \mathbb C$ so that
\[
c_{jr}a+c_{{j^*}r}\overline a=2{\rm Re}(c_{jr}a)<0,
\]
and choose
$G\in \CN[x]$ which gives this value of $a$ (e.g., we can choose $G$ to be linear).
Then $\Phi\big(Q_*(x)F(x)\overline{F}(-x)\big)<0$, as desired.
\end{proof}
\renewcommand{\qed}{}
\end{proof}
If $\mc{A}_{P_\circ}$ is the quantization with parameter $P_\circ$ then there is a conjugation $\rho_\circ$ on $\mc{A}_{P_\circ}$ given by the formulas
\[
\rho_\circ(v)=\lambda_\circ u,\qquad \rho_\circ(u)=(-1)^n \lambda_\circ^{-1}v,\qquad \rho_\circ(z)=-z,
\]
where $\lambda_\circ:=(-1)^{\frac{n-n_\circ}{2}}\lambda$. Therefore we can consider the cone of positive definite forms for $\mc{A}_{P_\circ}$ with respect to $\rho_\circ$.
\begin{Corollary}
The cone of positive definite forms on $\mc{A}_P$ with respect to $\rho$ coincides with the cone of positive definite forms on $\mc{A}_{P_\circ}$ with respect to $\rho_\circ$. Namely, a trace $T\colon \CN[x]\to \CN$ for $\mc{A}$ gives a positive definite form if and only if $T$ is a trace for $\mc{A}_{P_\circ}$ that gives a positive definite form on $\mc{A}_{P_\circ}$.
\end{Corollary}
\begin{proof}
We deduce from Proposition~\ref{PropPositivePhiIsZero} that each trace $T$ that gives a positive definite form should have $\Phi=0$. By Proposition~\ref{PropGeneralTrace}, in this case $T$ is a trace for the polynomial $P_\circ(x)$. So~there exists a polynomial $Q_*$ such that, for $R=R_1Q_*+R_0$ with $\deg R_0<\deg Q_*$, we have
\[
T(R(z))=\int_{{\rm i}\RN} Q_*(x)R_1(x)w(x) |{\rm d}x|+\phi(R_0).
\]
Using Proposition~\ref{PropPositiveFormNoPoles} and its proof, we deduce that $w$ has no poles and that $w(x)$ and $\lambda w\big(x+\frac{1}{2}\big)P(x)$ are nonnegative on ${\rm i}\RN$. Therefore
\[
T(R(z))=\int_{{\rm i}\RN} R(x) w(x) |{\rm d}x|+\psi(R_0),
\]
where $\psi$ is some linear functional. Using Proposition~\ref{PropIfPositiveThenNoDerivatives}, we deduce that this trace is positive if and only if $\psi(R_0)=\sum_j c_j R_0(z_j)$, where $c_j\geq 0$ and $z_j\in {\rm i}\mathbb R$.
Since $(-1)^{\frac{n-n_\circ}{2}}\frac{P(x)}{P_\circ(x)}$ is positive on ${\rm i}\RN$, we see that $\lambda w\big(x+\frac{1}{2}\big)P(x)$ is nonnegative on ${\rm i}\RN$ if and only if $\lambda_\circ w\big(x+\frac{1}{2}\big)P_\circ(x)$ is nonnegative on ${\rm i}\RN$. Using Theorem~\ref{ThrPositiveDefiniteFormsClosedStrip} we then deduce that $T$ is positive for $P(x)$ if and only if it is positive for $P_\circ(x)$.
\end{proof}
So we have proved the following theorem.
\begin{Theorem}
Let $\mc{A}=\mc A_P$ be a filtered quantization of $A$ with parameter $P$ equipped with a~con\-jugation $\rho$ such that $\rho^2=g_t$. Let~$\ell$ be the number of roots $\alpha$ of $P$ such that $|\Re\alpha|<\frac{1}{2}$ counted with multiplicities and $r$ be the number of distinct roots $\alpha$ of $P$ with $\Re\alpha=-\frac{1}{2}$. Then the cone $\mc C_+$ of $\rho$-equivariant positive definite traces on $\mc{A}$ is isomorphic to $\mc C_+^1\times \mc C_+^2$, where $\mc C_+^2=\RN_{\geq 0}^r$, and $\mc C_+^1$ is the cone of nonzero polynomials $G$ such that
\begin{enumerate}\itemsep=0pt
\item[$(1)$]
$G$ has degree less than $\ell$;
\item[$(2)$]
$G(0)=0$ if $t=1$;
\item[$(3)$]
$G(X)\geq 0$ when $X>0$;
\item[$(4)$]
$G(X)$ is either nonnegative or nonpositive when $X<0$ depending on whether
$\rho=\rho_+$ or~$\rho_-$.
\end{enumerate}
The conditions are the same as in Theorem~$\ref{ThrPositiveFormSmallerThanOne}$.
\end{Theorem}
\section[Explicit computation of the coefficients ak, bk of the 3-term recurrence for orthogonal polynomials and discrete Painleve systems]
{Explicit computation of the coefficients $\boldsymbol{a_k}$, $\boldsymbol{b_k}$ \\of the 3-term recurrence for orthogonal polynomials \\and discrete Painlev\'e systems}
\label{expcom}
As noted in Section~\ref{ortpol}, to compute
the short star-product associated to a trace $T$,
one needs to compute the coefficients $a_k$, $b_k$ of the 3-term recurrence for the corresponding orthogonal polynomials:
\[
p_{k+1}(x)=(x-b_k)p_k(x)-a_kp_{k-1}(x).
\]
Also recall \cite{Sz} that
$a_k=\frac{\nu_k}{\nu_{k-1}}$, where $\nu_k:=(p_k,p_k)$. Finally, recall that
$\nu_k=\frac{D_k}{D_{k-1}}$, where
$D_k$ is the Gram determinant for $1,x,\dots,x^{k}$, i.e.,
\[
D_k=\det_{0\le i,\,j\le k}\big(x^i,x^j\big)=\det_{0\le i,\,j\le k}(M_{i+j}),
\]
where $M_r$ is the $r$-th moment of the weight function $w(x)$, i.e.,
\[
M_r=\int_{{\rm i}\mathbb R}x^rw(x)|{\rm d}x|.
\]
In the even case $w(-x)=w(x)$ we have $b_k=0$, so
\[
p_{k+1}(x)=xp_k(x)-a_kp_{k-1}(x),
\]
and $p_k$ can be easily computed recursively from the sequence $a_k$.
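As a sanity check, the pipeline $M_r \to D_k \to \nu_k \to a_k$ above can be run in exact arithmetic. The sketch below uses the even weight $w\equiv 1$ on $[-1,1]$ as an illustrative stand-in (not one of the weights on ${\rm i}\mathbb R$ studied here); its monic orthogonal polynomials are the Legendre polynomials, with $b_k=0$ and $a_k=k^2/(4k^2-1)$.

```python
from fractions import Fraction

def hankel_det(M, k):
    """Exact determinant of the k x k Gram matrix (M_{i+j})_{0 <= i,j < k}."""
    A = [[M[i + j] for j in range(k)] for i in range(k)]
    det = Fraction(1)
    for c in range(k):
        piv = next((r for r in range(c, k) if A[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            det = -det
        det *= A[c][c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= f * A[c][j]
    return det

# Moments of the even weight w(x) = 1 on [-1, 1]:
# M_r = 2/(r + 1) for even r and 0 for odd r.
M = [Fraction(2, r + 1) if r % 2 == 0 else Fraction(0) for r in range(10)]

# nu_k = (p_k, p_k) is a ratio of consecutive Gram determinants,
# and a_k = nu_k / nu_{k-1}; by evenness of the weight, b_k = 0.
nu = [hankel_det(M, k + 1) / hankel_det(M, k) for k in range(5)]
a = [nu[k] / nu[k - 1] for k in range(1, 5)]
print(a)  # monic Legendre: a_k = k^2 / (4k^2 - 1)
```

The same code applies verbatim to any moment sequence; only the list `M` changes.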
If the polynomials $p_k$ are $q$-hypergeometric (i.e., obtained by a limiting procedure from Askey--Wilson polynomials), then $D_k$, $\nu_k$, $a_k$ admit explicit product formulas, but in general they do not admit any closed expression and do not enjoy any nice algebraic properties beyond the above.
In our setting, the hypergeometric case arises only for $n=1$ or, in special
cases, $n=2$, but the fact that the weight function for general $n$ is
essentially a higher complexity version of the weight function for $n=1$
suggests that there is still a weaker algebraic structure in the
picture.
In fact, by~\cite{Magnus} it follows immediately from the fact that the
formal Stieltjes transform satisfies an inhomogeneous first-order
difference equation with rational coefficients that the corresponding orthogonal
polynomials $p_m(x)$ in the $x$-variable satisfy a family of difference equations
\[
\begin{pmatrix}
p_m\big(x+\frac{1}{2}\big)\\[1ex]
p_{m-1}\big(x+\frac{1}{2}\big)
\end{pmatrix}
= A_m(x)
\begin{pmatrix}
p_m\big(x-\tfrac{1}{2}\big)\\[1ex]
p_{m-1}\big(x-\tfrac{1}{2}\big)
\end{pmatrix}
\]
such that the matrix $A_m(x)$ has rational function coefficients of degree
bounded by a linear function of $n$ alone. (Here we work with the ``$x$''
version of the polynomials, to avoid unnecessary appearances of~${\rm i}$.)
Since the results of~\cite{Magnus} are stated in significantly more
generality than we need, we sketch how they apply in our special case. Let~$Y_0$ be the matrix
\[
Y_0(x) = \begin{pmatrix} 1 & F(x) \\ 0 & 1\end{pmatrix} ,
\]
where $F$ is the formal Stieltjes transform of the given trace. Moreover,
for each~$n$, let $\frac{q_n(x)}{p_n(x)}$ be the $n$-th Pad\'e approximant to $F(x)$
(with monic denominator), so that $\frac{q_n(x)}{p_n(x)}-F(x)=O\big(x^{-2n-1}\big)$. If we
define
\[
Y_n(x):= \begin{pmatrix} p_n(x) & -q_n(x)\\ p_{n-1}(x) & -q_{n-1}(x)\end{pmatrix}
Y_0(x)
\]
for $n>0$, then
\[
Y_n = \begin{pmatrix} x^n+o\big(x^n\big) & O\big(x^{-n-1}\big)\\
x^{n-1}+o\big(x^{n-1}\big) & O\big(x^{-n}\big)\end{pmatrix} .
\]
\begin{Lemma}
The denominator $p_n$ of the $n$-th Pad\'e approximant to $F(x)$ is the
degree $n$ monic orthogonal polynomial for the associated linear
functional $T$.
\end{Lemma}
\begin{proof}
If $F=F_T$, then we find
\[
p_n(x) F(x) = T\bigg(\frac{p_n(x)}{x-z}\bigg) = T\bigg(\frac{p_n(x)-p_n(z)}{x-z}\bigg) +
T\bigg(\frac{p_n(z)}{x-z}\bigg)
\]
(where we evaluate $T$ on functions of $z$, and $x$ is a parameter).
The two terms correspond to the splitting of $p_n(x)F(x)$ into its
polynomial part and its part vanishing at $x=\infty$, so that
\[
q_n(x) = T\bigg(\frac{p_n(x)-p_n(z)}{x-z}\bigg)
\]
and
\[
T\bigg(\frac{p_n(z)}{x-z}\bigg) = p_n(x)F(x)-q_n(x) = O\big(x^{-n-1}\big).
\]
Comparing coefficients of $x^{-m-1}$ for $0\le m<n$ implies that $T(z^m
p_n(z))=0$ as required.
\end{proof}
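The lemma can also be checked mechanically: for a finite atomic functional the Stieltjes transform is rational, and the Padé denominator recovered from its series at infinity must agree with the monic polynomial produced by Gram--Schmidt. The sketch below does this for a toy measure (the atoms and weights are arbitrary illustrative choices, not objects from the paper).

```python
from fractions import Fraction

# Toy linear functional T(f) = sum_i w_i f(x_i) for a finite atomic measure.
atoms = [(Fraction(k), Fraction(k + 1)) for k in range(4)]  # (x_i, w_i)

def ev(p, x):  # evaluate a coefficient list p_0 + p_1 x + ... at x
    v = Fraction(0)
    for coef in reversed(p):
        v = v * x + coef
    return v

def inner(f, g):  # <f, g> = T(f g)
    return sum(w * ev(f, x) * ev(g, x) for x, w in atoms)

# Route 1: monic orthogonal polynomials by Gram-Schmidt on 1, x, x^2, ...
def orth(n):
    ps = []
    for k in range(n + 1):
        p = [Fraction(0)] * k + [Fraction(1)]          # monomial x^k
        for q in ps:
            coef = inner(p, q) / inner(q, q)
            p = [pc - coef * qc for pc, qc in
                 zip(p, q + [Fraction(0)] * (len(p) - len(q)))]
        ps.append(p)
    return ps[n]

# Route 2: Pade denominator of F(x) = sum_i w_i/(x - x_i).  Expanding each
# 1/(x - x_i) = sum_r x_i^r x^{-r-1} gives F(x) = sum_r M_r x^{-r-1}, and
# p_n F - q_n = O(x^{-n-1}) forces sum_j p_j M_{j+m} = 0 for m = 0..n-1.
def pade_denominator(n):
    M = [sum(w * x**r for x, w in atoms) for r in range(2 * n)]
    # Solve the n x n system for c_0..c_{n-1} with c_n = 1 (Gaussian elim.).
    A = [[M[j + m] for j in range(n)] + [-M[n + m]] for m in range(n)]
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[piv] = A[piv], A[c]
        for r in range(n):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [A[r][j] - f * A[c][j] for j in range(n + 1)]
    return [A[m][n] / A[m][m] for m in range(n)] + [Fraction(1)]

print(orth(2) == pade_denominator(2))  # True
```

The two routes are independent implementations, so their agreement is exactly the content of the lemma for this functional.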
{\samepage
\begin{Remark}\qquad
\begin{enumerate}\itemsep=0pt
\item[1.] It also follows that
\[
Y_n(x)_{12}=N_n x^{-n-1}+O\big(x^{-n-2}\big),\qquad Y_n(x)_{22}=N_{n-1} x^{-n}+O\big(x^{-n-1}\big).
\]
\item[2.] Note that this is an algebraic/asymptotic version of the explicit solution of~\cite{BI} to the Riemann--Hilbert problem for orthogonal polynomials introduced in~\cite{FIK}.
\end{enumerate}
\end{Remark}
}
\begin{Lemma}
We have $\det(Y_n)=N_{n-1}$ for all $n>0$.
\end{Lemma}
\begin{proof}
The definition of $Y_n$ implies that $\det(Y_n)\in \CN[x]$, while the
(formal) asymptotic behavior implies that $\det(Y_n)=N_{n-1}+O\big(\frac{1}{x}\big)$.
\end{proof}
The inhomogeneous difference equation satisfied by $F$ trivially induces an
inhomogeneous difference equation satisfied by~$Y_0$:
\[
Y_0\big(x+\tfrac{1}{2}\big)
=
\begin{pmatrix}
1 & t^{-1}\frac{L(x)}{P(x)}\\[1ex]
0 & t^{-1}
\end{pmatrix}
Y_0\big(x-\tfrac{1}{2}\big)
\begin{pmatrix}
1 & 0\\
0 & t
\end{pmatrix} ,
\]
where
\[
L(x) = P(x)\big(F\big(x+\tfrac{1}{2}\big)-tF\big(x-\tfrac{1}{2}\big)\big)\in \CN[x].
\]
It follows immediately that $Y_n$ satisfies an analogous equation
\[
Y_n\big(x+\tfrac{1}{2}\big)=A_n(x)Y_n\big(x-\tfrac{1}{2}\big)
\begin{pmatrix}
1 & 0\\
0 & t
\end{pmatrix} ,
\]
where
\[
A_n(x)=
\begin{pmatrix} p_n\big(x+\frac{1}{2}\big) & -q_n\big(x+\frac{1}{2}\big)\\[1ex]
p_{n-1}\big(x+\frac{1}{2}\big) & -q_{n-1}\big(x+\frac{1}{2}\big)\end{pmatrix}
\begin{pmatrix}
1 & t^{-1}\frac{L(x)}{P(x)}\\[1ex]
0 & t^{-1}
\end{pmatrix}
\begin{pmatrix} p_n\big(x-\tfrac{1}{2}\big) & -q_n\big(x-\tfrac{1}{2}\big)\\[1ex]
p_{n-1}\big(x-\tfrac{1}{2}\big) & -q_{n-1}\big(x-\tfrac{1}{2}\big)\end{pmatrix}^{-1}.
\]
Since $\det(Y_n)=N_{n-1}$, $\det(Y_0)=1$, we can use the standard formula
for the inverse of a $2\times 2$ matrix to rewrite this as
\[
A_n(x)
=
N_{n-1}^{-1}
\begin{pmatrix} p_n\big(x+\frac{1}{2}\big) & -q_n\big(x+\frac{1}{2}\big)\\[1ex]
p_{n-1}\big(x+\frac{1}{2}\big) & -q_{n-1}\big(x+\frac{1}{2}\big)\end{pmatrix}
\begin{pmatrix}
1 & t^{-1}\frac{L(x)}{P(x)}\\[1ex]
0 & t^{-1}
\end{pmatrix}
\begin{pmatrix} -q_{n-1}\big(x-\tfrac{1}{2}\big) & q_n\big(x-\tfrac{1}{2}\big)\\[1ex]
-p_{n-1}\big(x-\tfrac{1}{2}\big) & p_n\big(x-\tfrac{1}{2}\big)\end{pmatrix} .
\]
It follows immediately that $P(x)A_n(x)$ has polynomial coefficients. We~can also compute the asymptotic behavior of $A_n(x)$ using the expression
\[
A_n(x)
=
Y_n\big(x+\tfrac{1}{2}\big)
\begin{pmatrix}
1 & 0\\
0 & t^{-1}
\end{pmatrix}
Y_n\big(x-\tfrac{1}{2}\big)^{-1}
\]
to conclude that
\begin{alignat*}{3}
& A_n(x)_{11} = 1+\tfrac{n}{x}+O\big(\tfrac{1}{x^2}\big),\qquad&&
A_n(x)_{12} = -\tfrac{(1-t^{-1})a_n}{x} + O\big(\tfrac{1}{x^2}\big),&\\
&A_n(x)_{21} = \tfrac{1-t^{-1}}{x}+O\big(\tfrac{1}{x^2}\big),\qquad&&
A_n(x)_{22} = t^{-1}(1-\tfrac{n}{x})+O\big(\tfrac{1}{x^2}\big),&
\end{alignat*}
which when $t=1$ refines to
\begin{alignat*}{3}
&A_n(x)_{11} = 1+\tfrac{n}{x}+O\big(\tfrac{1}{x^2}\big),\qquad&&
A_n(x)_{12} = -\tfrac{(2n+1)a_n}{x^2} + O\big(\tfrac{1}{x^3}\big),&\\
&A_n(x)_{21} = \tfrac{2n-1}{x^2}+O\big(\tfrac{1}{x^3}\big),\qquad&&
A_n(x)_{22} = 1-\tfrac{n}{x}+O\big(\tfrac{1}{x^2}\big).&
\end{alignat*}
Restricting to the first column of $Y_n(x)$ gives the following.
\begin{Proposition}
The orthogonal polynomials satisfy the difference equation
\[
\begin{pmatrix}
p_n\big(x+\frac{1}{2}\big)\\[1ex]
p_{n-1}\big(x+\frac{1}{2}\big)
\end{pmatrix}
= A_n(x)
\begin{pmatrix}
p_n\big(x-\tfrac{1}{2}\big)\\[1ex]
p_{n-1}\big(x-\tfrac{1}{2}\big)
\end{pmatrix} .
\]
\end{Proposition}
Note that it is not the mere existence of a difference equation with
rational coefficients that is significant (indeed, any pair of polynomials
satisfies such an equation!), rather it is the fact that (a) the poles are
bounded independently of $n$, and (b) so is the asymptotic behavior at
infinity.
If we consider (for $t\ne 1$) the family of matrices satisfying the above
conditions; that is, $PA_n$~is polynomial, $\det(A_n)=t^{-1}$, and
\begin{gather*}
A_n(x)_{11} = 1+\tfrac{n}{x}+O\big(\tfrac{1}{x^2}\big),\qquad
A_n(x)_{12} = O\big(\tfrac{1}{x}\big),\\
A_n(x)_{21} = \tfrac{1-t^{-1}}{x} + O\big(\tfrac{1}{x^2}\big),\qquad
A_n(x)_{22} = t^{-1}\big(1-\tfrac{n}{x}\big)+O\big(\tfrac{1}{x^2}\big),
\end{gather*}
we find that the family is classified by a {\em rational} moduli space. To
be precise, let $f(x):=$ \mbox{$\big(1-t^{-1}\big)^{-1}P(x)A_n(x)_{21}$}, and let $g(x)\in
\mathbb{C}[x]/(f(x))$ be the reduction of $P(x)A_n(x)_{11}$ modu\-lo~$f(x)$. Then
$f$ and $g$ both vary over affine spaces of dimension $\deg(P)-1$, and
generically determine $A_n$. Indeed, $A_n(x)_{21}$ is clearly determined
by $f$, and since $A_n(x)_{11}P(x)$ is specified by the asymptotics up to
an additive polynomial of degree $\deg(P)-2$, it is determined by $f$ and
$g$. For~generic $f$, $g$, this also determines $A_n(x)_{22}$, since the
determinant condition implies that for any root $\alpha$ of $f$,
$A_n(\alpha)_{11}A_n(\alpha)_{22}=t^{-1}$. Moreover, this constraint forces
$P(x)^2 \big(A_n(x)_{11}A_n(x)_{22}-t^{-1}\big)$ to be a multiple of $f(x)$, and
thus the unique value of~$A_n(x)_{12}$ compatible with the determinant
condition gives a matrix satisfying the desired conditions.
Moreover, given such a matrix, the three-term recurrence for orthogonal
polynomials tells us that the corresponding $A_{n+1}$ is the unique matrix
satisfying {\em its} asymptotic conditions and having the form
\[
A_{n+1}(x)=
\begin{pmatrix}
x+\frac{1}{2}-b_n & -a_n\\
1 & 0
\end{pmatrix}
A_n(x)
\begin{pmatrix}
x-\frac{1}{2}-b_n & -a_n\\
1 & 0
\end{pmatrix}^{-1}.
\]
It is straightforward to see that $a_n$, $b_n$ are determined by the
leading terms in the asymptotics of $A_n(x)_{12}$, and thus in particular
are rational functions of the parameters. We~thus find that the map from
the space of matrices $A_n$ to the space of matrices $A_{n+1}$ is a
rational map, and by considering the inverse process, is in fact
birational, corresponding to a sequence $F_n$ of~bira\-tional automorphisms
of $\mathbb A^{2\deg(P)-2}$. Note that the equation $A_0$, though not of the
standard form, is still enough to determine $A_1$, and thus gives
(rationally) a $\mathbb P^{\deg(P)-1}$ worth of initial conditions corresponding
to orthogonal polynomials. (There is a $\deg(P)$-dimensional space of
valid functions $F$, but rescaling $F$ merely rescales the trace, and thus
does not affect the orthogonal polynomials.)
\begin{Example}
As an example, consider the case $P(x)=x^2$, corresponding, e.g., to
\[
{\rm w}(y)=\frac{{\rm e}^{2\pi cy}}{\cosh^2\pi y},
\]
with $c\in (0,1)$. In this case,
$\deg(P)=2$, so we get a $2$-dimensional family of linear equations, and
thus a second-order nonlinear recurrence, with a 1-parameter family of
initial conditions corresponding to orthogonal polynomials. Since the
monic polynomial $f$ is linear, we may use its root as one parameter $f_n$,
and $g_n=A_n(f_n)_{11}$ as the other parameter. We~thus find that
\begin{gather}
A_n(x)=
\begin{pmatrix}
\big(1-\frac{f_n}{x}\big)\big(1+\frac{f_n+n}{x}\big)+\frac{f_n^2 g_n}{x^2}
&-a_n\frac{1-t^{-1}}{x}\big(1-\frac{f_{n+1}}{x}\big)
\\
\frac{1-t^{-1}}{x}\big(1-\frac{f_n}{x}\big)
&t^{-1}\big(\big(1-\frac{f_n}{x}\big)\big(1+\frac{f_n-n}{x}\big)+\frac{f_n^2}{g_n x^2}\big)
\end{pmatrix} ,
\end{gather}
where
\begin{gather}
a_n = \frac{t}{(t-1)^2}\frac{n^2g_n-f_n^2(g_n-1)^2}{g_n}
\end{gather}
and $f_n$, $g_n$ are determined from the recurrence
\begin{gather}
f_{n+1} =\frac{f_n(f_n(g_n-1)-ng_n)(f_n(g_n-1)-n)}{n^2g_n-f_n^2(g_n-1)^2},
\\
g_{n+1} = \frac{(f_n(g_n-1)-ng_n)^2}{t g_n (f_n(g_n-1)-n)^2}.
\end{gather}
The three-term recurrence for the orthogonal polynomials is then
\[
p_{n+1}(x)=(x-b_n)p_n(x)-a_np_{n-1}(x),
\]
where $a_n$ is as above and
\[
b_n = -f_{n+1}-\frac{(t+1)\big(n+\frac{1}{2}\big)}{t-1}.
\]
The initial condition is given by
\[
f_0 = b_0 + \frac{t+1}{2(t-1)},\qquad
g_0 = 1.
\]
(Note that the resulting $A_0$ is not actually correct, but this induces
the correct values for~$f_1$,~$g_1$, noting that the recurrence simplifies
for $n=0$ to $f_1=-f_0$, $g_1=1/(tg_0)$.) It follows from the general theory
of isomonodromy deformations~\cite{Ra} that this recurrence is a discrete
Painlev\'e equation (this will also be shown by direct computation in forthcoming work by N.~Witte). We~also note that the recurrence satisfies a sort of
time-reversal symmetry: there is a natural isomorphism between the space of
equations for~$t$,~$n$ and the space for~$t^{-1}$,~$-n$, coming (up to a diagonal
change of basis) from the duality $A\mapsto \big(A^{\rm T}\big)^{-1}$, and this symmetry
preserves the recurrence. (This follows from the fact that if two
equations are related by the three-term recurrence, then so are their
duals, albeit in the other order.)
\end{Example}
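Since the recurrence and the formulas for $a_n$, $b_n$ are fully explicit, they can be iterated in exact rational arithmetic. The sketch below transcribes them as printed in the example above; the values of $t$ and the seed $(f_0,g_0)$ are arbitrary generic choices, and the $n=0$ step reproduces the simplification $f_1=-f_0$, $g_1=1/(tg_0)$ noted above.

```python
from fractions import Fraction

# The recurrence for the P(x) = x^2 example, transcribed as printed;
# g0 != 1 and f0 != 0 keep the n = 0 step away from the removable 0/0.
def step(n, f, g, t):
    u = f * (g - 1)                      # the recurring combination f_n(g_n - 1)
    f_next = f * (u - n * g) * (u - n) / (n * n * g - u * u)
    g_next = (u - n * g) ** 2 / (t * g * (u - n) ** 2)
    return f_next, g_next

def a_coeff(n, f, g, t):                 # a_n = t/(t-1)^2 (n^2 g - f^2 (g-1)^2)/g
    return t * (n * n * g - (f * (g - 1)) ** 2) / ((t - 1) ** 2 * g)

def b_coeff(n, f_next, t):               # b_n = -f_{n+1} - (t+1)(n+1/2)/(t-1)
    return -f_next - (t + 1) * (n + Fraction(1, 2)) / (t - 1)

t = Fraction(2)
f, g = Fraction(3, 2), Fraction(5, 7)    # generic rational seed (f_0, g_0)
f1, g1 = step(0, f, g, t)
print(f1, g1)                            # n = 0 collapses to f1 = -f0, g1 = 1/(t g0)
f2, g2 = step(1, f1, g1, t)              # further steps stay rational
print(a_coeff(1, f1, g1, t), b_coeff(1, f2, t))
```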
\begin{Remark}
The fact that $A_n(x)_{12}$ has a nice expression in terms of $a_n$ and
$f_{n+1}$ follows more generally from the fact (via the three-term
recurrence) that
\[
A_n(x)_{12} = -a_n A_{n+1}(x)_{21}.
\]
One similarly has
\[
A_n(x)_{22} = A_{n+1}(x)_{11} - \big(x+\tfrac{1}{2}-b_n\big) A_{n+1}(x)_{21},
\]
so that in general $f_{n+1}(x)\propto P(x)A_n(x)_{12}$ and $g_{n+1}(x) =
P(x)A_n(x)_{22}\bmod f_{n+1}(x)$.
In particular, applying this to $n=0$ tells us that the orthogonal polynomial case corresponds to the initial condition $f_1(x)\propto L(x)$, $g_1(x)=t^{-1}P(x)\bmod L(x)$.
\end{Remark}
The above construction fails for $t=1$, because the constraint on the
asymptotics of the off-diagonal coefficients of~$A_n$ is stricter in that
case:
\begin{gather*}
A_n(x)_{21}= \tfrac{2n-1}{x^2} + O\big(\tfrac{1}{x^3}\big),
\\
A_n(x)_{12} = O\big(\tfrac{1}{x^2}\big).
\end{gather*}
The moduli space is still rational, although the argument is somewhat
subtler. We~can still parametrize it by $f_n(x):=P(x)A_n(x)_{12}$ and
$g_n(x):=P(x)A_n(x)_{11}\bmod f_n(x)$ as above, which is certainly enough
to determine $P(x)A_n(x)_{22}$ modulo $f_n(x)$. This still leaves two
degrees of~freedom in the diagonal coefficients, but
$\det(A_n(x))+O\big(\tfrac{1}{x^4}\big)$ depends only on the diagonal coefficients and is
linear in the remaining degrees of freedom, so we can solve for those.
Once again, having determined the coefficients on and below the diagonal,
the $21$ coefficient follows from the determinant, and can be seen to have
the correct poles and asymptotics. Note that now the dimension of the
moduli space is~$2\deg(P)-4$; that the dimension is even in both cases
follows from the existence of a canonical symplectic structure on such
moduli spaces, see~\cite{Ra}.
There is a similar reduction in the number of parameters when the trace is
even (forcing $t=(-1)^n$ and $P(x)=(-1)^n P(-x)$). The key observation in that case
is that
\[
Y_n(-x) = (-1)^n \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}Y_n(x)
\begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}
\]
implying that $A_n$ satisfies the symmetry
\[
A_n(-x)
=
\begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}
A_n(x)^{-1}
\begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix} .
\]
Since $A_n$ is $2\times 2$ and has determinant $t^{-1}=(-1)^n$, this actually
imposes linear constraints on~the coefficients of $A_n$:
\begin{alignat*}{3}
&A_n(-x)_{11} = (-1)^n A_n(x)_{22},\qquad&&
A_n(-x)_{12} = (-1)^n A_n(x)_{12},&\\
&A_n(-x)_{21} = (-1)^n A_n(x)_{21},\qquad&&
A_n(-x)_{22} = (-1)^n A_n(x)_{11}.&
\end{alignat*}
In particular, $A_n(x)_{21}$ has only about half the degrees of freedom one
would otherwise expect, and for any root of that polynomial,
$A_n(\alpha)_{11}A_n(-\alpha)_{11}=1$, again halving the degrees of freedom (and
preserving rationality).
\begin{Example}
Consider the case $P(x)=x^3+\beta^2x$ with $t=-1$ and even trace $\big($e.g., for
$\beta=0$, the weight function ${\rm w}(y)=\frac{1}{\cosh^3 \pi y}\big)$. Then $A_n(x)_{21}$
has the form $\frac{2(x^2-f_n)}{x^3+\beta^2x}$, and $A_n\big(\sqrt{f_n}\big)_{11}$ is of~norm~1, which can be parametrized in the form
\[
A_n\big(\sqrt{f_n}\big)_{11} = \frac{g_n+\sqrt{f_n}}{g_n-\sqrt{f_n}}.
\]
Applying this to both square roots gives two linear conditions on
$A_n(x)_{11}$, which suffices to determine it, with $A_n(x)_{22}$
following by symmetry and $A_n(x)_{12}$ from the remaining determinant
conditions. We~thus obtain
\[
A_n(x)
=
\begin{pmatrix}
\displaystyle 1 \!+\! \frac{n(x^2\!-\!f_n)}{x(x^2\!+\!\beta^2)}
\!+\! \frac{2f_n(f_n\!+\!\beta^2)(g_n\!+\!x)}{(g_n^2\!-\!f_n)x(x^2\!+\!\beta^2)}
&\displaystyle
-2 a_n \frac{x^2-f_{n+1}}{x(x^2+\beta^2)}
\\[2ex]
\displaystyle 2 \frac{x^2-f_n}{x(x^2+\beta^2)}
&\displaystyle
\!-\!1 \!+\! \frac{n(x^2\!-\!f_n)}{x(x^2\!+\!\beta^2)} \!+\! \frac{2f_n(f_n\!+\!\beta^2)(g_n\!-\!x)}{(g_n^2\!-\!f_n)x(x^2\!+\!\beta^2)}
\end{pmatrix} ,
\]
where
\[
a_n = -\frac{n^2}{4}+\frac{f_n(f_n+\beta^2)}{g_n^2-f_n}
\]
and $f_n$, $g_n$ are determined by the recurrence
\begin{gather*}
g_{n+1} = -\frac{n}{2} - \frac{2g_na_n}{ng_n-2f_n},
\\
f_{n+1} = -\frac{(ng_n-2f_n)^2 g_{n+1}^2}{4f_na_n},
\end{gather*}
with initial condition
$f_1 = -\beta^2-\frac{1}{4}-a_1$, $g_1 = 0$.
\end{Example}
\begin{Remark}
One can perform a similar calculation for the case $P(x)=x^4-e_1x^2+e_2$
with even trace; again, one obtains a second-order nonlinear recurrence,
but the result is significantly more complicated, even for $e_1=e_2=0$.
\end{Remark}
In each case, when the moduli space is $0$-dimensional, so that the
conditions uniquely determine the equation, we get an explicit formula for
$A_n$. This, of course, is precisely the case in which the orthogonal
polynomials are classical.
\subsection*{Acknowledgements} The work of P.E.\ was partially supported by the NSF grant DMS-1502244. P.E.\ is grateful to Anton Kapustin for introducing him to the topic of this paper, and to Chris Beem, Mykola Dedushenko and Leonardo Rastelli for useful discussions. E.R.\ would like to thank Nicholas Witte for pointing out the reference~\cite{Magnus}.
\pdfbookmark[1]{References}{ref}
\label{sec:introduction}
Non-periodic tilings have received a lot of attention since the discovery of quasicrystals in the early 80s, because they provide a model of their structure.
Two prominent methods to define non-periodic tilings are {\em substitutions} and {\em cut and projection} (for a general introduction to these methods, see, {e.g.}, \cite{Baake-Grimm-2013,Senechal-1995}).
However, to model the stabilization of quasicrystals by short-range energetic interaction, a crucial point is to know whether such non-periodic tilings admit {\em local rules}, that is, can be characterized by their patterns of a given size.\\
If one allows tiles to be {\em decorated}, then the tilings obtained by substitutions are known to (generally) admit local rules (see \cite{Fernique-Ollinger-2010,Goodman-Strauss-1998,Mozes-1989}).
It has moreover recently been proven in \cite{Fernique-Sablik-2016} that a cut and project tiling admits local rules with decorated tiles if and only if it can be defined by {\em computable} quantities.
This complete characterization goes much further than previous results ({\em e.g.}, \cite{Le-Piunikhin-Sadov-1993,Le-1992b,Le-1992c,Le-1997,Le-1995,Le-Piunikhin-Sadov-1992,Socolar-1989}) by using decorations to simulate Turing machines.
But it can hardly help to model real quasicrystals because of the huge number of different decorated tiles that it needs.\\
If one does not allow tiles to be decorated, then the situation becomes more realistic but dramatically changes.
Algebraicity indeed comes into play instead of computability.
This problem has been widely studied (see, {\em e.g.}, \cite{Beenker-1982,deBruijn-1981,Burkov-1988,Katz-1988,Katz-1995,Kleman-Pavlovitch-1987,Le-1992c,Le-1997,Levitov-1988,Socolar-1990}), but there is yet no complete characterization.
We here provide the first such characterization in the case of so-called {\em octagonal tilings}.\\
Let us here sketch the main definitions leading up to our theorem (more details are given in Section~\ref{sec:settings}).
An {\em octagonal tiling} is a covering of the plane by rhombi whose edges have unit length and can take only four different directions, with the intersection of two rhombi being either empty, or a point, or a whole edge.
By interpreting these four edges as the projection of the standard basis of $\mathbb{R}^4$, any octagonal tiling can be seen as a square tiled surface in $\mathbb{R}^4$, called its {\em lift}.
It is then said to be {\em planar} if this lift lies in the neighborhood $E+[0,t]^4$ of a $2$-dimensional affine plane $E\subset\mathbb{R}^4$, called the {\em slope} of the tiling.
Unless otherwise specified, ``plane'' shall here mean ``$2$-dimensional affine plane of $\mathbb{R}^4$''.\\
On the one hand, a plane $E$ is determined by its {\em subperiods} if any other plane having the same subperiods is parallel to $E$, where a subperiod of $E$ corresponds to a direction in $E$ with at least three rational entries\footnote{We shall formally define it as a linear integer relation on three Grassmann coordinates of $E$, see Definition~\ref{def:subperiod}.}.
In other words, $E$ is determined by some algebraic constraints.\\
On the other hand, a plane $E$ is said to admit {\em weak local rules} if there is $r\geq 0$ such that, whenever the patterns of size $r$ of an octagonal tiling form a subset of the patterns of a planar octagonal tiling with a lift in $E+[0,1]^4$, that tiling is planar with slope $E$.
In other words, $E$ is determined by some geometric constraints.\\
\noindent Our main result connects these algebraic and geometric constraints:
\begin{theorem}\label{th:main}
A plane admitting weak local rules is determined by its subperiods.
\end{theorem}
This characterization is actually up to {\em algebraic conjugacy} in the sense that such a plane $E$ turns out always to be generated by vectors with entries in some quadratic number field $\mathbb{Q}(\sqrt{d})$ (see Cor.~\ref{cor:quadratic}, below) and the plane $E'$ obtained by changing $\sqrt{d}$ into $-\sqrt{d}$ everywhere also has the same subperiods (but octagonal tilings with a lift in $E'+[0,t]^4$ may not exist).
The converse implication is the main theorem of \cite{Bedaride-Fernique-2015}, so that we get a full characterization:
\begin{corollary}\label{cor:characterization}
A plane admits weak local rules if and only if it is determined by its subperiods.
\end{corollary}
This is moreover an {\em effective} characterization.
We indeed show how to associate with any given slope a system of polynomial equations which is zero-dimensional if and only if this slope is characterized by its subperiods.
The zero-dimensionality of such a system can then be checked by computer.
We will also easily obtain as a corollary the following result:
\begin{corollary}\label{cor:quadratic}
If a plane has weak local rules, then it is generated by vectors with entries in some quadratic number field $\mathbb{Q}(\sqrt{d})$.
\end{corollary}
This answers a conjecture of Thang Le in the 90s.
He showed in \cite{Le-1997} that if the slope of a planar tiling (planar octagonal tilings are a particular case) has weak local rules, then it is generated by vectors with entries in a common algebraic field.
He conjectured that it is a quadratic field for $2$-planes of $\mathbb{R}^4$.\\
The maximal algebraic degree is however still unknown in general.
One can show that it would be $\lfloor n/d\rfloor$ if Theorem~\ref{th:main} extends to $d$-dimensional affine planes of $\mathbb{R}^n$.
At least, there is no counter-example to our knowledge.
For example, the slope of Penrose tilings is a $2$-dimensional affine plane of $\mathbb{R}^5$ based on the golden ratio which has degree $2=\lfloor 5/2\rfloor$.
More generally, the slope of a {\em $2p$-fold tiling} ($p\geq 3$) is a $2$-dimensional affine plane of $\mathbb{R}^p$ based on an algebraic number of degree $\varphi(p)/2\leq \lfloor p/2\rfloor$, where $\varphi$ is Euler's totient function (the Penrose case corresponds to $p=5$).
Let us also mention the {\em icosahedral tiling}, whose slope is a $3$-dimensional affine space of $\mathbb{R}^6$ based, again, on the golden ratio, of degree $2=\lfloor 6/3\rfloor$.\\
The paper is organized as follows.
Section~\ref{sec:settings} introduces the settings, providing the necessary formal definitions, in particular weak local rules and subperiods.
Section~\ref{sec:less} proves that a plane with less than three types of subperiods cannot have weak local rules.
The idea is to construct a non-planar tiling which has the same patterns of a given size as the original planar tiling.
This relies on the precise study of what happens when the slope of a planar tiling is slightly shifted (Proposition~\ref{prop:shift_flips}).
Section~\ref{sec:more} proves that if a plane has weak local rules, hence three types of subperiods, then it has necessarily a fourth subperiod (Lemma~\ref{lem:fourth_subperiod}) which, together with the three first subperiods, characterize it.
This yields the main theorem.
The proof relies on a case-study which uses the notion of {\em coincidence} (Definition~\ref{def:coincidence}) to express in algebraic terms the constraints on patterns enforced by weak local rules.
\section{Settings}
\label{sec:settings}
Let $\vec{v}_1,\ldots,\vec{v}_4$ be pairwise non-colinear vectors of $\mathbb{R}^2$ and define the {\bf proto-tiles}
$$
T_{ij}=\{\lambda\vec{v}_i+\mu\vec{v}_j~|~0\leq\lambda,\mu\leq 1\},
$$
for $1\leq i<j\leq 4$.
A {\bf tile} is a translated proto-tile.
An {\bf octagonal tiling} is an edge-to-edge tiling by these tiles, that is, a covering of the Euclidean plane such that two tiles can intersect only in a vertex or along a whole edge.\\
The {\bf lift} of an octagonal tiling is a $2$-dim. surface of $\mathbb{R}^4$ defined as follows: an arbitrary vertex of the tiling is first mapped onto an arbitrary point of $\mathbb{Z}^4$, then each tile $T_{ij}$ is mapped onto the unit face generated by $\vec{e}_i$ and $\vec{e}_j$, where $\vec{e}_1,\ldots,\vec{e}_4$ denote the standard basis of $\mathbb{R}^4$, so that two tiles adjacent along $\vec{v}_i$ are mapped onto faces adjacent along $\vec{e}_i$.\\
Among octagonal tilings, we distinguish {\bf planar octagonal tilings}: they are those with a lift which lies inside a tube $E+[0,t]^4$, where $E$ is a (two-dimensional) affine plane of $\mathbb{R}^4$ called the {\bf slope} of the tiling, and $t\geq 1$ is a real number called the {\bf thickness} of the tiling (both $E$ and $t$ are uniquely defined).\\
A plane is {\bf irrational} if it does not contain any line generated by a vector with only rational entries.
By extension, a planar tiling is said to be irrational if its slope is irrational.
An irrational tiling is {\bf aperiodic} or {\bf non-periodic}, {\em i.e.}, no (non-trivial) translation maps it onto itself.
It can actually be ``more or less irrational'' because there may exist rational dependencies between the {\bf Grassmann coordinates} of its slope.
Recall (see {\em e.g.}, \cite{Hodge-Pedoe-1994}, chap.~7, for a general introduction) that the Grassmann coordinates of a plane $E$ generated by two vectors $(u_1,u_2,u_3,u_4)$ and $(v_1,v_2,v_3,v_4)$ are the six real numbers defined up to a common multiplicative factor by
$$
G_{i,j}=u_iv_j-u_jv_i,
$$
for $1\leq i<j\leq 4$.
They always satisfy the so-called {\bf Plücker relation}:
$$
G_{12}G_{34}=G_{13}G_{24}-G_{14}G_{23}.
$$
A plane is said to be {\bf nondegenerate} if its Grassmann coordinates are all non-zero.
By extension a planar tiling is said to be nondegenerate if its slope is nondegenerate: this means that each of the six proto-tiles appears in the tiling.
We will implicitly consider only such planes or tilings in this paper.\\
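These definitions are easy to experiment with on a computer. The following Python sketch (ours, for illustration only; the function names are not from the paper) computes the Grassmann coordinates of the plane spanned by two vectors of $\mathbb{R}^4$ and checks the Plücker relation:

```python
from itertools import combinations
from math import sqrt
import random

def grassmann(u, v):
    """Grassmann coordinates G_ij = u_i v_j - u_j v_i (0-based indices, i < j)."""
    return {(i, j): u[i] * v[j] - u[j] * v[i] for i, j in combinations(range(4), 2)}

def plucker_holds(G, tol=1e-9):
    """Check G12*G34 = G13*G24 - G14*G23 (written here with 0-based indices)."""
    return abs(G[(0, 1)] * G[(2, 3)] - (G[(0, 2)] * G[(1, 3)] - G[(0, 3)] * G[(1, 2)])) < tol

# The relation holds for any plane, e.g. a random one...
random.seed(0)
u = [random.uniform(-1, 1) for _ in range(4)]
v = [random.uniform(-1, 1) for _ in range(4)]
print(plucker_holds(grassmann(u, v)))  # True

# ...and for the Ammann-Beenker slope discussed below, whose coordinates
# are (1, sqrt(2), 1, 1, sqrt(2), 1) up to the common factor sqrt(2):
G = grassmann((sqrt(2), 1, 0, -1), (0, 1, sqrt(2), 1))
print([round(G[k] / sqrt(2), 6) for k in sorted(G)])
```

The dictionary keys are the pairs $(i,j)$ in lexicographic order, matching the convention used for Grassmann coordinates throughout the paper.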
We used Grassmann coordinates in \cite{Bedaride-Fernique-2015} to rephrase the geometric {\em SI-condition} of \cite{Levitov-1988} in more algebraic terms via the notion of {\em subperiod}:
\begin{definition}[subperiod]\label{def:subperiod}
A {\em type $k$ subperiod} of a plane $E$ is a linear rational equation on its three Grassmann coordinates which have no index $k$.
\end{definition}
One can show (Prop. 5 of \cite{Bedaride-Fernique-2015}) that a plane $E$ has a subperiod $pG_{23}-qG_{13}+rG_{12}=0$ of type $4$ if and only if there is $x\in\mathbb{R}$ such that $E$ contains a line directed by $(p,q,r,x)$ (this is how subperiods were defined in the introduction).\\
Consider, for example, the celebrated Ammann-Beenker tilings.
One of them is depicted on Fig.~\ref{fig:ammann_beenker_tiling}.
They are the planar octagonal tilings of thickness one with a slope parallel to the plane generated by
$$
(\sqrt{2},1,0,-1)
\qquad\textrm{and}\qquad
(0,1,\sqrt{2},1).
$$
This plane has Grassmann coordinates $(1,\sqrt{2},1,1,\sqrt{2},1)$ (in lexicographic order).
It is irrational but has four subperiods (ordered by increasing type):
$$
G_{23}=G_{34},
\qquad
G_{14}=G_{34},
\qquad
G_{12}=G_{14},
\qquad
G_{12}=G_{23}.
$$
However, one checks that these equations and the Plücker relation do not characterize the Ammann-Beenker slope but the one-parameter family of planes with Grassmann coordinates $(1,t,1,1,2/t,1)$ (see \cite{Bedaride-Fernique-2013}).
Hence, according to Theorem~\ref{th:main}, Ammann-Beenker tilings do not have weak local rules.
This particular case was already (differently) proven by Burkov in \cite{Burkov-1988} (see also \cite{Bedaride-Fernique-2015b}).\\
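This can also be checked numerically. The sketch below (our illustration, not part of the paper) verifies that every plane of the family $(1,t,1,1,2/t,1)$ satisfies the four subperiod equations as well as the Plücker relation, so that these five equations indeed fail to single out the Ammann-Beenker slope $t=\sqrt{2}$:

```python
from math import sqrt

def ab_constraints_hold(G12, G13, G14, G23, G24, G34, tol=1e-9):
    """The four Ammann-Beenker subperiods together with the Pluecker relation."""
    eqs = [
        G23 - G34,                            # type-1 subperiod
        G14 - G34,                            # type-2 subperiod
        G12 - G14,                            # type-3 subperiod
        G12 - G23,                            # type-4 subperiod
        G12 * G34 - (G13 * G24 - G14 * G23),  # Pluecker relation
    ]
    return all(abs(e) < tol for e in eqs)

# The Ammann-Beenker slope itself (t = sqrt(2))...
print(ab_constraints_hold(1, sqrt(2), 1, 1, sqrt(2), 1))  # True
# ...but also every other member of the one-parameter family (1, t, 1, 1, 2/t, 1):
print(all(ab_constraints_hold(1, t, 1, 1, 2 / t, 1) for t in (0.5, 1.0, 1.7, 3.0)))  # True
```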
\begin{figure}[hbtp]
\includegraphics[width=\textwidth]{8fold.pdf}
\caption{A celebrated octagonal tiling: the Ammann-Beenker tiling.}
\label{fig:ammann_beenker_tiling}
\end{figure}
It is also worth noticing (although we shall not use this in this paper) that planar tilings provide a very natural interpretation for Grassmann coordinates.
The frequency of the proto-tile $T_{ij}$ in a planar octagonal tiling of slope $E$ is indeed well defined, and it turns out to be proportional to $|G_{ij}|$.
In other words, one can ``read'' on a planar tiling the Grassmann coordinates of its slope.
For example, any Ammann-Beenker tiling contains $\sqrt{2}$ rhombi for one square (and both cover half of the plane since a square is $\sqrt{2}$ times larger than a rhombus, see Fig.~\ref{fig:ammann_beenker_tiling}).\\
A {\bf pattern} of an octagonal tiling is a finite subset of its tiles.
Among patterns, we distinguish {\bf $r$-maps}.
An $r$-map is a pattern whose tiles are exactly those intersecting a closed ball of diameter $r$ drawn on the tiling.
The set of $r$-maps of a tiling forms its {\bf $r$-atlas}.
The main question we are here interested in is: when does the $r$-atlas of a tiling characterize it?
Formally, we follow \cite{Levitov-1988}:
\begin{definition}[weak local rules]\label{def:local_rules}
A plane $E$ has {\em weak local rules} of diameter $r$ and thickness $t$ if any octagonal tiling whose $r$-maps are also $r$-maps of a planar tiling with slope $E$ and thickness $1$ is itself planar with slope $E$ and thickness $t$.
\end{definition}
In other words, a finite number of finite prescribed patterns (the $r$-atlas) suffices to enforce a tiling to have the slope $E$.
By extension, one says that a planar tiling admits weak local rules if so does its slope.
The parameter $t\geq 1$ allows some bounded fluctuations around $E$.
{\bf Strong local rules} correspond to $t=1$.
This distinction between strong and weak local rules actually plays a significant role.
For example, the so-called $7$-fold tilings, based on cubic irrationalities, have weak local rules \cite{Socolar-1990} but no strong local rules \cite{Levitov-1988}.
Theorem~\ref{th:main} {\em a fortiori} holds for strong local rules, but the result proven in \cite{Bedaride-Fernique-2015} allows us to state Corollary~\ref{cor:characterization} only in terms of weak local rules.\\
Let us now briefly recall the notion of {\bf window} and some of its properties (a complete presentation can be found in \cite{Baake-Grimm-2013}, Chapter 7).
The window of a planar octagonal tiling with slope $E$ and thickness $1$ is the octagon obtained by orthogonally projecting $[0,1]^4$ onto the orthogonal plane $E^\bot$.
One can then associate with any pattern $\mathcal{P}$ of this tiling a polygonal region $R(\mathcal{P})$ of its window, such that $\mathcal{P}$ appears in position $\vec{x}$ if and only if the projection of $\vec{x}$ in the window falls in $R(\mathcal{P})$.
This is for example used in \cite{Julien-2010} to compute the {\em complexity} of tilings, that is, the number of its patterns with a given size.
The following proposition, which is a particular case of Prop.~3.5 in \cite{Le-1997} or Prop.~1 in \cite{Levitov-1988}, can then be deduced from the density of the projection of $\mathbb{Z}^4$ in the window:
\begin{proposition}\label{prop:LI_classes}
Two planar irrational octagonal tilings with parallel slope and thickness $1$ have the same patterns.
\end{proposition}
For example, the Ammann-Beenker tilings all have the same patterns.
This may explain why one often speaks about the Ammann-Beenker tiling in the singular form (as in the caption of Fig.~\ref{fig:ammann_beenker_tiling}) although there are uncountably many of them.
We will here use the window to see how patterns appear when the slope $E$ is modified.
We rely on the following notion, introduced in \cite{Bedaride-Fernique-2015b}:
\begin{definition}[coincidence]\label{def:coincidence}
A {\em coincidence} of a plane $E\subset\mathbb{R}^4$ is a point of the window of $E$ which lies on the orthogonal projection of (at least) three unit open line segments with endpoints in $\mathbb{Z}^4$.
\end{definition}
Coincidences are exactly the points where new patterns can appear when the slope is modified.
Indeed, the boundary of the region $R(\mathcal{P})$ associated with a pattern $\mathcal{P}$ turns out to be delimited by the projection of line segments of $\mathbb{Z}^4$.
Hence, in order to create a new pattern, the slope must be modified so that the projection of $k\geq 3$ line segments of $\mathbb{Z}^4$ that formed a coincidence now form a nonempty polygonal region (see Fig.~\ref{fig:coincidence}).
One can moreover show that a plane which is not determined (among the planes) by a finite set of coincidences can, for any $r$, be modified without creating a region associated with a pattern of size $r$.
In other words (see \cite{Bedaride-Fernique-2015b}, Prop.~3):
\begin{proposition}\label{prop:coincidence}
If a plane has weak local rules, then it is determined by finitely many coincidences.
\end{proposition}
\begin{figure}[hbtp]
\includegraphics[width=\textwidth]{coincidence.pdf}
\caption{
The window of an Ammann-Beenker tiling, divided into some regions (each of which corresponds to a pattern), with a circled coincidence (left).
The slope is slightly changed so that the circled coincidence breaks and a new region appears (right).
This corresponds to a new pattern which does not appear in an Ammann-Beenker tiling (compare with Fig.~\ref{fig:ammann_beenker_tiling}).
}
\label{fig:coincidence}
\end{figure}
\noindent Coincidences can also be expressed in terms of Grassmann coordinates:
\begin{proposition}\label{prop:coincidence2}
A coincidence of a plane corresponds to an equation on its Grassmann coordinates of the form (up to a permutation of the indices)
$$
aG_{14}G_{23}-bG_{13}G_{24}+cG_{12}G_{34}+dG_{24}G_{34}+eG_{14}G_{34}+fG_{14}G_{24}=0,
$$
where $a$, $b$, $c$, $d$, $e$ and $f$ are integers.
\end{proposition}
\begin{proof}
Consider a coincidence.
It is the intersection of the projection of three unit open line segments with endpoints in $\mathbb{Z}^4$.
Consider, on each of these segments, the point which projects onto the coincidence.
This yields three points which can be written (up to a permutation of the indices)
$$
(x,a,b,c), \qquad (d,y,e,f),\qquad (g,h,z,i),
$$
where all the entries are integers except $x$, $y$ and $z$.
Let $(u_1,u_2,u_3,u_4)$ and $(v_1,v_2,v_3,v_4)$ be a basis of the plane.
There are $\lambda_1$, $\mu_1$, $\lambda_2$ and $\mu_2$ such that
$$
\left(\begin{array}{c}
x-d\\
a-y\\
b-e\\
c-f
\end{array}\right)
=
\lambda_1
\left(\begin{array}{c}
u_1\\
u_2\\
u_3\\
u_4
\end{array}\right)
+\mu_1
\left(\begin{array}{c}
v_1\\
v_2\\
v_3\\
v_4
\end{array}\right)
$$
and
$$
\left(\begin{array}{c}
x-g\\
a-h\\
b-z\\
c-i
\end{array}\right)
=
\lambda_2
\left(\begin{array}{c}
u_1\\
u_2\\
u_3\\
u_4
\end{array}\right)
+\mu_2
\left(\begin{array}{c}
v_1\\
v_2\\
v_3\\
v_4
\end{array}\right).
$$
The third and fourth entries of the first equation yield
$$
\lambda_1=\frac{(b-e)v_4-(c-f)v_3}{G_{34}} \qquad \textrm{and} \qquad \mu_1=\frac{-(b-e)u_4+(c-f)u_3}{G_{34}}.
$$
The second and fourth entries of the second equation yield
$$
\lambda_2=\frac{(a-h)v_4-(c-i)v_2}{G_{24}} \qquad \textrm{and} \qquad \mu_2=\frac{-(a-h)u_4+(c-i)u_2}{G_{24}}.
$$
The first entry of these equations yields two expressions for $x$:
$$
x=d+\lambda_1u_1+\mu_1v_1=g+\lambda_2u_1+\mu_2v_1.
$$
By replacing $\lambda_1$, $\mu_1$, $\lambda_2$ and $\mu_2$ by their expressions and by grouping the terms so that Grassmann coordinates appear, the previous equality becomes
$$
d+\frac{(b-e)G_{14}-(c-f)G_{13}}{G_{34}}
=
g+\frac{(a-h)G_{14}-(c-i)G_{12}}{G_{24}}.
$$
By multiplying by $G_{24}G_{34}$ we get the claimed equation (with $a=0$).
\end{proof}
In the proof of the above proposition, we got $a=0$.
But the Plücker relation allows to replace any element in $\{G_{12}G_{34},G_{13}G_{24},G_{14}G_{23}\}$ by an integer linear combination of the two other ones.
We could thus equally have $b=0$ or $c=0$.
We have nevertheless chosen to write the three terms in order to emphasize the symmetry of this equation.
Indeed, one checks that any permutation of the indices $\{1,2,3\}$ yields the same permutation of the coefficients $\{a,b,c\}$ (with a change of sign) and $\{d,e,f\}$, hence does not modify the form of the equation.
The index $4$ plays here a special role because this coincidence corresponds to the projection of unit segments directed by $\vec{e}_1$, $\vec{e}_2$ and $\vec{e}_3$.
\section{At most two types of subperiods}
\label{sec:less}
In this section, we show that a plane $E$ with at most two types of subperiods cannot admit weak local rules.\\
Let us first informally sketch the proof.
The idea is to look at how the integer points which enter or exit the tube $E+[0,1]^4$ when $E$ is shifted are spread.
On a planar tiling of slope $E$ and thickness $1$, this corresponds to a local rearrangement of tiles called {\em flip} (physicists speak about {\em phason flips}).
First, we shall show that, for any $r$, these flips can be made sparse enough to draw on $E$ a curve which stays at distance at least $r$ from any of them.
Then, we shall shift $E$ only on one side of this curve in order to create in the tiling a ``step'' that cannot be detected by patterns of diameter $r$.
Last, by repeating this, we shall build a sort of ``staircase'' which has the same $r$-atlas as the original tiling but is not planar (hence contradicting the existence of weak local rules).\\
\noindent Formally, let us associate with any shift vector $\vec{s}\in\mathbb{R}^4$ the set
$$
E(\vec{s}):=\{x\in\mathbb{Z}^4,~x\in (E+\vec{s}+[0,1]^4)\backslash(E+[0,1]^4)\}.
$$
The following proposition is illustrated by Figures~\ref{fig:four_subperiods} and \ref{fig:two_subperiods}.
\begin{proposition}\label{prop:shift_flips}
Let $E$ be a plane of $\mathbb{R}^4$ and $r\geq 0$.
Then, for $\vec{s}$ small enough, one can write $E(\vec{s})=E_1\cup E_2\cup E_3\cup E_4$, where $E_i$ is empty if $\vec{s}\in\mathbb{R}\vec{e}_i$, or can be described according to the number of subperiods of type $i$ of $E$ otherwise:
\begin{description}
\item[0 subperiod:] any two points in $E_i$ are at distance at least $r$ from each other;
\item[1 subperiod:] there is a set of parallel lines of $E$ at distance at least $r$ from each other, whose direction depends only on the subperiod, and such that the points in $E_i$ are within distance $1$ from these lines;
\item[2 subperiods:] the points of $E_i$ are within distance $1$ from a lattice.
\end{description}
\end{proposition}
\begin{proof}
For $i=1,\ldots,4$, define
$$
E_i:=\{x\in\mathbb{Z}^4,~x\in (E+\vec{s}+[0,1]^4)\backslash(E+[0,1]^4+\mathbb{R}\vec{e}_i)\}.
$$
This set is empty when $\vec{s}\in\mathbb{R}\vec{e}_i$, and one has:
$$
\cup_i E_i=\{x\in\mathbb{Z}^4,~x\in (E+\vec{s}+[0,1]^4)\backslash\cap_i(E+[0,1]^4+\mathbb{R}\vec{e}_i)\}=E(\vec{s}).
$$
Assume, {\em w.l.o.g.}, that $i=1$ and consider $E_1$.\\
Actually, let us first consider $\pi_1(E_1)$ where $\pi_1$ denotes the projection which removes the first entry:
$$
\pi_1(E_1)=\{x\in\mathbb{Z}^3,~x\in (\pi_1(E)+\pi_1(\vec{s})+[0,1]^3)\backslash(\pi_1(E)+[0,1]^3)\}.
$$
This is the set of the integer points of the Euclidean space lying between two planes parallel to $\pi_1(E)$, with one being the image of the other by a translation by $\pi_1(\vec{s})$.
If $E$ is generated by $(u_1,u_2,u_3,u_4)$ and $(v_1,v_2,v_3,v_4)$, then $\pi_1(E)$ is generated by $(u_2,u_3,u_4)$ and $(v_2,v_3,v_4)$.
The cross product of these two vectors yields a normal vector of $\pi_1(E)$.
One computes $(G_{34},-G_{24},G_{23})$, where the $G_{ij}$'s are the Grassmann coordinates of $E$.
One can thus rewrite:
$$
\pi_1(E_1)=\{(a,b,c)\in\mathbb{Z}^3,~z\leq aG_{34}-bG_{24}+cG_{23}\leq z+f(\vec{s})\},
$$
where $z$ depends only on how the unit cube $[0,1]^3$ projects onto the line directed by $(G_{34},-G_{24},G_{23})$, while $f(\vec{s})$ is the dot product of $(G_{34},-G_{24},G_{23})$ and $\pi_1(\vec{s})$.
In particular, $f(\vec{s})$ tends towards $0$ when $\vec{s}$ tends towards $\vec{0}$.
Now, assume that for any $\vec{s}$, $\pi_1(E_1)$ contains two integer points $x(\vec{s})$ and $y(\vec{s})$ at distance at most $r$ from each other.
The non-zero integer vector $d(\vec{s}):=x(\vec{s})-y(\vec{s})$ takes finitely many values.
So does its dot product with $(G_{34},-G_{24},G_{23})$, which lies in the interval $[-f(\vec{s}),f(\vec{s})]$.
The latter is thus necessarily equal to zero when this interval is small enough, that is, for a small enough $\vec{s}$.
For such a $\vec{s}$, we have
$$
d_1(\vec{s})G_{34}-d_2(\vec{s})G_{24}+d_3(\vec{s})G_{23}=0,
$$
where $d_i(\vec{s})$ denotes the $i$-th entry of $d(\vec{s})$.
Since $d(\vec{s})$ is a non-zero integer vector, this is exactly the equation of a subperiod of type $1$.
In other words, for a small enough $\vec{s}$, any two points of $\pi_1(E_1)$ at distance at most $r$ from each other are aligned along a direction determined by a subperiod of type $1$ of $E$.
There is thus no such point if $E$ does not have a subperiod of type $1$.
If $E$ has exactly one subperiod, then the points must be on parallel lines, with the distance between any two lines being at least $r$ (otherwise it would yield a second subperiod).
If $E$ has two subperiods, then the points are on a lattice.
We thus have the wanted description\ldots but for $\pi_1(E_1)$!\\
Let us come back to $E_1$.
Let $x$ and $y$ in $E_1$.
The points $\pi_1(x)$ and $\pi_1(y)$ are at distance at most $f(\vec{s})$ from the plane $\pi_1(E)$.
There is thus a vector $\vec{p}\in E$ and a vector $\vec{q}$ of length at most $f(\vec{s})$ such that
$$
\pi_1(x-y)=\pi_1(\vec{p})+\vec{q}.
$$
By definition of $\pi_1$, there is then $k\in\mathbb{Z}$ such that
$$
x-y=\vec{p}+\vec{q}+k\vec{e}_1.
$$
Now, let $\pi'$ denote the orthogonal projection onto $E^\bot$.
One has
$$
\underbrace{\pi'(x-y)}_{\in\pi'(E_1(\vec{s}))}=\underbrace{\pi'(\vec{p})}_{=0}+\underbrace{\pi'(\vec{q})}_{||.||\leq f(\vec{s})}+k\pi'(\vec{e}_1).
$$
The set $\pi'(E_1(\vec{s}))$ is bounded and its closure converges towards a unit segment directed by $\pi'(\vec{e}_1)$ when $\vec{s}$ tends towards $\vec{0}$.
For $\vec{s}$ small enough, this yields $k=0$ or $k=1$.
The points in $E_1$ thus spread as the ones in $\pi_1(E_1)$ do, up to a small local correction by $\vec{e}_1$.
This shows the claimed result.
\end{proof}
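The normal-vector computation at the beginning of the proof can be verified numerically. The sketch below (ours, for illustration) checks on a random plane that the cross product of $(u_2,u_3,u_4)$ and $(v_2,v_3,v_4)$ has entries $(G_{34},-G_{24},G_{23})$, hence is orthogonal to $\pi_1(E)$:

```python
import random

def cross(a, b):
    """Cross product in R^3."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

random.seed(1)
u = [random.uniform(-1, 1) for _ in range(4)]
v = [random.uniform(-1, 1) for _ in range(4)]
G = {(i, j): u[i] * v[j] - u[j] * v[i] for i in range(4) for j in range(i + 1, 4)}

# Normal of pi_1(E), the plane spanned by (u2,u3,u4) and (v2,v3,v4):
n = cross(u[1:], v[1:])
# (G34, -G24, G23), written with the paper's 1-based indices:
expected = (G[(2, 3)], -G[(1, 3)], G[(1, 2)])
print(all(abs(n[k] - expected[k]) < 1e-9 for k in range(3)))  # True
```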
\begin{figure}[hbtp]
\centering
\includegraphics[width=0.48\textwidth]{subperiod_flips_a.pdf}
\hfill
\includegraphics[width=0.48\textwidth]{subperiod_flips_b.pdf}
\caption{
When an Ammann-Beenker tiling is shifted, it creates lines of flips whose directions are determined by its four subperiods (left).
A smaller shift yields a similar picture, but the lines become sparser (right).
}
\label{fig:four_subperiods}
\end{figure}
\begin{figure}[hbtp]
\centering
\includegraphics[width=0.48\textwidth]{smart_shift_a.pdf}
\hfill
\includegraphics[width=0.48\textwidth]{smart_shift_b.pdf}
\caption{
The irrational slope $(1,\sqrt{2},\sqrt{3},2\sqrt{2},3\sqrt{3},\sqrt{6})$ has one subperiod of type $3$ and one of type $4$.
A generic shift thus creates two sets of sparse lines and two sets of sparse flips (left).
However, a shift along $\vec{e}_4$ ``neutralizes'' the lines of flips directed by the subperiod of type $4$ (right).
}
\label{fig:two_subperiods}
\end{figure}
\noindent We shall need the following elementary lemma:
\begin{lemma}\label{lem:four_subperiods_two_types}
A plane with two subperiods of type $i$ and two of type $j$ is rational.
\end{lemma}
\begin{proof}
Assume, {\em w.l.o.g.}, that $i=1$ and $j=2$.
The two subperiods of type $1$ yield two independent rational equations on $G_{23}$, $G_{24}$ and $G_{34}$.
Hence $G_{23}$ and $G_{24}$ are rational multiples of $G_{34}$.
The two subperiods of type $2$ yield two independent rational equations on $G_{13}$, $G_{14}$ and $G_{34}$.
Hence $G_{13}$ and $G_{14}$ are rational multiples of $G_{34}$.
The Plücker relation then yields that $G_{12}$ is also a rational multiple of $G_{34}$.
The Grassmann coordinates can thus all be chosen rational.
This shows that such a plane is rational, actually {\em totally} rational ({\em i.e.}, it contains two independent rational lines).
\end{proof}
\begin{proposition}\label{prop:at_most_two_subperiods}
An irrational plane with at most two types of subperiods does not admit weak local rules.
\end{proposition}
\begin{proof}
Let $E$ be an irrational plane with at most two types of subperiods.
Let $\mathcal{T}$ be a planar tiling with slope $E$ and thickness $1$.
By Lemma~\ref{lem:four_subperiods_two_types}, $E$ has at most three subperiods, say two of type $4$ and one of type $3$.
Fix $r\geq 0$.
We shall construct step by step a ``staircase'' tiling $\mathcal{T}_\infty$ with the same $r$-atlas as $\mathcal{T}$ but which is non-planar.
This will prove that $E$ has no weak local rules of diameter $r$.\\
{\bf Step height.}
Prop.~\ref{prop:shift_flips} yields a nonzero vector $\vec{s}\in\mathbb{R}\vec{e}_4$ such that $E(\vec{s})$ can be written $E_1\cup E_2\cup E_3$, where the points in $E_1$ are at distance at most $1$ from parallel lines of $E$ which are at distance at least $3r$ from each other, while any two points of $E_2$, as well as any two points of $E_3$, are at distance at least $3r$ from each other.
This shift $\vec{s}$ will be the step height of our staircase.\\
{\bf Step edge.}
Consider two consecutive parallel lines of $E$ from which the points in $E_1$ are at distance at most $1$.
Between them, there is a stripe of $E$ of width at least $2r$ which stays at distance at least $r/2$ from $E_1$.
We claim that one can divide $E$ in two parts by drawing in this stripe a curve which stays at distance at least $r/2$ from any point in $E(\vec{s})$.
In other words, this stripe cannot be blocked by the balls of diameter $r$ centered on the points in $E_2$ and $E_3$.
Indeed, since this stripe has width $2r$, one needs at least three intersecting such balls to block it.
Two of these balls must be centered on two points in the same set $E_2$ or $E_3$.
These two points must then be at distance at most $2r$ (because of the diameter of the balls), which is impossible since any two points in one of these sets are at distance at least $3r$ from each other.
One can thus draw the wanted curve.
It defines the edge of our step.
Fig.~\ref{fig:step_edge} illustrates this.\\
\begin{figure}[hbtp]
\centering
\includegraphics[width=\textwidth]{step_edge.pdf}
\caption{
The points in $E_1$ are depicted by squares, while the points in $E_2$ and $E_3$ are respectively depicted by triangles and hexagons.
The two lines are at distance at least $3r$ from each other, as well as any two triangles and any two hexagons.
The step edge goes between the two lines and stays at distance at least $r/2$ from any point in $E(\vec{s})$, that is, outside the shaded region.
}
\label{fig:step_edge}
\end{figure}
{\bf First step.}
Let us shift by $\vec{s}$ the part of $E$ on the right\footnote{We fix an orientation of the plane $E$ and keep the same orientation when we shift it.} of the previous curve, that is, we perform on $\mathcal{T}$ the flips which correspond to the points of $E(\vec{s})$ on the right of this curve.
Let $\mathcal{T}_1$ denote the resulting tiling.
We claim that $\mathcal{T}_1$ and $\mathcal{T}$ have the same $r$-atlas.
On the one hand, every $r$-map of $\mathcal{T}$ can be found on the left side of the step (that is, in $\mathcal{T}$) because $\mathcal{T}$ is repetitive (any pattern which occurs once reoccurs at uniformly bounded distance from any point of the tiling).
On the other hand, consider a $r$-map of $\mathcal{T}_1$.
If it is on the left side of the step, then it is in $\mathcal{T}$, thus in its $r$-atlas.
If it is on the right side of the step, then it is in a tiling of slope $E+\vec{s}$, which has the same $r$-atlas by Prop.~\ref{prop:LI_classes}.
It is thus also in the $r$-atlas of $\mathcal{T}$.
If it crosses the curve defining the step, then, since there is no point of $E(\vec{s})$ at distance less than $r/2$ from this curve, this $r$-map can be seen as a pattern either of the tiling of slope $E$ or of the one of slope $E+\vec{s}$ (this is exactly what we cared about when defining the step).
In all cases, it is in the $r$-atlas of $\mathcal{T}$.\\
{\bf Next steps.}
We proceed by induction.
Let $\mathcal{T}_k$ be the tiling obtained after $k$ steps.
It has the same $r$-atlas as $\mathcal{T}$.
Its $i$-th step coincides with the tiling of thickness $1$ and slope $E+i\vec{s}$, either on a half plane for $i=0$ and $i=k$, or on a stripe otherwise.
We obtain $\mathcal{T}_{k+1}$ by proceeding on the $k$-th step of $\mathcal{T}_k$ as we did on $\mathcal{T}$ in order to obtain $\mathcal{T}_1$ (we simply take care that the curve which defines the $(k+1)$-th step is far enough on the right of the one defining the $k$-th step).
Fig.~\ref{fig:staircase} illustrates this.\\
\begin{figure}[hbtp]
\centering
\includegraphics[width=\textwidth]{staircase.pdf}
\caption{
The creation of the first three steps of the staircase (from top to bottom).
Each time, a curve (step edge) is drawn between two lines of points in $E_1$, and half of the tiling is shifted by $\vec{s}$.
The changes from one step to the next are sparse enough to be undetectable by patterns of diameter $r$.
}
\label{fig:staircase}
\end{figure}
{\bf Staircase.}
Let $\mathcal{T}_\infty$ be the tiling which coincides with $\mathcal{T}_k$ on its $k$ first steps.
It has infinitely many steps: this is our staircase.
It still has the same $r$-atlas as $\mathcal{T}$.
It coincides with $\mathcal{T}$ on a half-plane, so must have slope $E$ if it is planar.
But this cannot be the case because its $k$-th step is close to $E+k\vec{s}$, so cannot stay at bounded distance from $E$ for $k$ large enough.
This proves that $E$ has no weak local rules of diameter $r$.
\end{proof}
\section{Three types of subperiods}
\label{sec:more}
In this section, we prove Theorem~\ref{th:main}.
The proof relies on Proposition~\ref{prop:at_most_two_subperiods} (previous section) and on two technical lemmas.
For the sake of clarity, let us first state these lemmas, then prove the theorem, and only after that prove the two lemmas.
\begin{lemma}\label{lem:thee_types_independence}
If an irrational plane has three subperiods of different types, then these subperiods are independent.
\end{lemma}
\begin{lemma}\label{lem:fourth_subperiod}
If an irrational plane is characterized by three subperiods of different types and a coincidence, then it is actually characterized solely by its subperiods.
\end{lemma}
\begin{proof}[Proof of Theorem~\ref{th:main}]
Assume that $E$ is an irrational plane which admits weak local rules.
According to Proposition~\ref{prop:coincidence}, it is determined by finitely many coincidences.
We want to show that $E$ is actually determined by its subperiods.
According to Proposition~\ref{prop:at_most_two_subperiods}, we know that $E$ must have three subperiods of different types.
Lemma~\ref{lem:thee_types_independence} ensures that these subperiods are independent.
Since the space of two-dimensional planes in $\mathbb{R}^4$ has dimension $4$, these three subperiods form a one-dimensional system of equations.
But since coincidences determine a zero-dimensional system, we can find a coincidence which, together with these three subperiods, forms a zero-dimensional system, that is, characterizes the slope up to algebraic conjugacy.
Lemma~\ref{lem:fourth_subperiod} then ensures that the plane is characterized, up to algebraic conjugacy, by its subperiods.
\end{proof}
In other terms, subperiods are as powerful as weak local rules.
Corollary~\ref{cor:quadratic} then just comes from the fact that the plane is characterized by linear rational equations (the subperiods) and a single quadratic rational equation: the Plücker relation.
Let us now prove the two technical lemmas.
\begin{proof}[Proof of Lemma~\ref{lem:thee_types_independence}]
First, assume that two subperiods, say of type $1$ and $2$, are dependent:
$$
aG_{23}+bG_{24}+cG_{34}=0,
$$
$$
dG_{13}+eG_{34}+fG_{14}=0,
$$
where the coefficients are rational.
The dependence yields \mbox{$a=b=d=f=0$}, thus $c\neq 0$ and $e\neq 0$, whence $G_{34}=0$.
This is forbidden because $E$ is nondegenerate.\\
Now, assume that there are three dependent subperiods, say of type $1$, $2$ and $3$.
The two first are written above, and the last one is
$$
gG_{12}+hG_{14}+iG_{24}=0,
$$
where the coefficients are rational.
The dependence yields $a=d=g=0$, so that the equations yield that $G_{14}$, $G_{24}$ and $G_{34}$ are pairwise commensurable.
But one checks that the vector $(G_{14},G_{24},G_{34},0)$ always belongs to $E$.
This contradicts the irrationality of $E$.
Hence three subperiods of different type are independent.
\end{proof}
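The fact used at the end of the proof, namely that $(G_{14},G_{24},G_{34},0)$ always belongs to $E$, follows from the identity $v_4\vec{u}-u_4\vec{v}=(G_{14},G_{24},G_{34},0)$ for any basis $(\vec{u},\vec{v})$ of $E$. A short numerical check (ours, for illustration):

```python
import random

random.seed(2)
u = [random.uniform(-1, 1) for _ in range(4)]
v = [random.uniform(-1, 1) for _ in range(4)]
G = {(i, j): u[i] * v[j] - u[j] * v[i] for i in range(4) for j in range(i + 1, 4)}

# v4*u - u4*v is a linear combination of u and v, hence lies in E...
combo = [v[3] * u[i] - u[3] * v[i] for i in range(4)]
# ...and its entries are exactly (G14, G24, G34, 0), in the paper's 1-based indices:
target = (G[(0, 3)], G[(1, 3)], G[(2, 3)], 0.0)
print(all(abs(combo[i] - target[i]) < 1e-9 for i in range(4)))  # True
```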
\begin{proof}[Proof of Lemma~\ref{lem:fourth_subperiod}]
We first prove that the Grassmann coordinates of such a plane can be chosen in the same quadratic field.
This will yield a linear rational relation between any three Grassmann coordinates, in particular a subperiod of the fourth type.
We then prove that this fourth subperiod is independent from the three first ones, so that all together they characterize the plane.\\
\noindent {\em W.l.o.g.}, the coincidence equation can be written
$$
\circ G_{12}G_{34}+\circ G_{13}G_{24}+\circ G_{14}G_{24}+\circ G_{14}G_{34}+\circ G_{24}G_{34}=0,
$$
where the symbol $\circ$ stands for any rational number (this notation shall be used again in what follows).
This equation being invariant (up to the coefficients) under any permutation of the indices $1$, $2$ and $3$ (by using the Plücker relation), there are two cases, depending on whether there is a subperiod of type $4$ (case A) or not (case B).
In case A, one can assume (again, thanks to the invariance under permutation of the indices $1$, $2$ and $3$) that the two other subperiods have type $2$ and $3$.\\
{\em Disclaimer}: it unfortunately seems hard to further use symmetry to shorten the following case study, although all the considered cases behave similarly\ldots
\paragraph{Grassmann coordinates are quadratic: case A.}~\\
The three subperiods are
\begin{eqnarray*}
pG_{23}&=&aG_{12}+bG_{13},\\
qG_{24}&=&cG_{12}+dG_{14},\\
rG_{34}&=&eG_{13}+fG_{14}.
\end{eqnarray*}
When $p$, $q$ or $r$ is non-zero, it can be normalized to one.
Accordingly, we distinguish eight subcases, three of which will eventually be excluded.\\
{\bf Subcase A1}: $pqr\neq 0$.\\
Set $G_{12}=y$, $G_{13}=x$ and normalize to $G_{14}=1$.
The subperiods yield
$$
G_{23}=ay+bx,\qquad
G_{24}=cy+d,\qquad
G_{34}=ex+f.
$$
The Plücker relation becomes
$$
y(ex+f)=x(cy+d)-(ay+bx),
$$
that one simply writes (with again $\circ$ denoting a generic rational number)
$$
\circ xy+\circ x+\circ y=0.
$$
Since the plane is characterized, the system formed by the three subperiods, the Plücker relation and the coincidence equation is zero-dimensional.
The Plücker relation thus does not reduce to $0=0$, that is, the coefficients of $xy$, $x$ and $y$ are not all equal to zero.
Moreover, since the plane is nondegenerate, {\em i.e.}, its Grassmann coordinates are non-zero, there is actually at most one of the coefficients of $xy$, $x$ and $y$ that can be equal to zero.
We can thus write
$$
x=\frac{\circ y}{\circ y+\circ}
$$
and rewrite the coincidence equation
$$
\circ xy+\circ x+\circ y+\circ=0.
$$
Using then the expression for $x$ obtained from the Plücker relation, and reducing to the same (non-zero) denominator, we get the equation
$$
\circ y^2+\circ y+\circ=0.
$$
This yields that $y$ is a quadratic number.
The expression for $x$ obtained from the Plücker relation then yields that $x$ belongs to the same quadratic number field.
The subperiods finally yield the same for all the other Grassmann coordinates.\\
{\bf Subcase A2}: $p=0$, $qr\neq 0$.\\
Set $G_{14}=x$, $G_{23}=y$ and normalize to $G_{12}=1$.
The subperiods yield
$$
G_{13}=\circ,\qquad
G_{24}=\circ x+\circ,\qquad
G_{34}=\circ x+\circ.
$$
The Plücker and coincidence relations respectively become
$$
\circ xy+\circ x+\circ=0
\qquad\textrm{and}\qquad
\circ x^2+\circ x+\circ x=0.
$$
The second equation yields that $x$ is quadratic, and the first one then yields that $y$ lies in the same number field.
The subperiods finally yield that all the other Grassmann coordinates are also in this number field.\\
{\bf Subcase A3}: $q=0$, $pr\neq 0$.\\
Set $G_{24}=x$, $G_{13}=y$ and normalize to $G_{12}=1$.
The subperiods yield
$$
G_{23}=\circ y+\circ,\qquad
G_{14}=\circ,\qquad
G_{34}=\circ y+\circ.
$$
The Plücker and coincidence relations respectively become
$$
\circ xy+\circ y+\circ=0
\qquad\textrm{and}\qquad
\circ xy+\circ x+\circ y+\circ=0.
$$
We use the first equation to express $x$ as a function of $y$ and plug this into the second equation, which yields that $y$ is quadratic; the first equation then shows that $x$ lies in the same field.
We then deduce that all the remaining Grassmann coordinates also belong to this quadratic field.\\
{\bf Subcase A4}: $r=0$, $pq\neq 0$.\\
Set $G_{34}=x$, $G_{12}=y$ and normalize to $G_{13}=1$.
The subperiods yield
$$
G_{23}=\circ y+\circ,\qquad
G_{24}=\circ y+\circ,\qquad
G_{14}=\circ.
$$
The Plücker and coincidence relations can be rewritten as in Subcase A3.
We then deduce that all the remaining Grassmann coordinates also belong to this quadratic field.\\
{\bf Subcase A5}: $p\neq 0$, $q=r=0$.\\
Set $G_{24}=x$, $G_{34}=y$ and normalize to $G_{12}=1$.
The subperiods yield
$$
G_{23}=\circ,\qquad
G_{14}=\circ,\qquad
G_{13}=\circ.
$$
The Plücker and coincidence relations respectively become
$$
\circ x+\circ y+\circ=0
\qquad\textrm{and}\qquad
\circ xy+\circ x+\circ y=0.
$$
We use the first equation to express $x$ as a function of $y$ and plug this into the second equation, which yields that $y$ is quadratic; the first equation then shows that $x$ lies in the same field.
We then deduce that all the remaining Grassmann coordinates also belong to this quadratic field.\\
{\bf Subcase A6}: $q\neq 0$, $p=r=0$.\\
Set $G_{23}=x$, $G_{34}=y$ and normalize to $G_{12}=1$.
The subperiods yield
$$
G_{13}=\circ,\qquad
G_{24}=\circ,\qquad
G_{14}=\circ.
$$
The Plücker and coincidence relations respectively become
$$
\circ x+\circ y+\circ=0
\qquad\textrm{and}\qquad
\circ y+\circ=0.
$$
This yields that all the Grassmann coordinates are rational.
This subcase is excluded since the plane is irrational.\\
{\bf Subcase A7}: $r\neq 0$, $p=q=0$.\\
Set $G_{23}=x$, $G_{24}=y$ and normalize to $G_{12}=1$.
The subperiods yield
$$
G_{13}=\circ,\qquad
G_{14}=\circ,\qquad
G_{34}=\circ.
$$
The Plücker and coincidence relations can be rewritten as in Subcase A6.
This yields that all the Grassmann coordinates are rational, which is again excluded.\\
{\bf Subcase A8}: $p=q=r=0$.\\
The subperiods yield $G_{12}=0$: it is excluded because the plane is nondegenerate.
\paragraph{Grassmann coordinates are quadratic: case B.}~\\
The three subperiods are
\begin{eqnarray*}
pG_{23}&=&aG_{24}+bG_{34},\\
qG_{12}&=&cG_{14}+dG_{24},\\
rG_{13}&=&eG_{14}+fG_{34}.
\end{eqnarray*}
We again distinguish eight subcases, but here all of them will eventually be excluded.\\
{\bf Subcase B1}: $pqr\neq 0$.\\
Set $G_{24}=x$, $G_{34}=y$ and normalize to $G_{14}=1$.
The subperiods yield
$$
G_{23}=\circ x+\circ y,\qquad
G_{12}=\circ x+\circ,\qquad
G_{13}=\circ y+\circ.
$$
The Plücker and coincidence relations both become
$$
\circ xy+\circ x+\circ y=0.
$$
We use the Plücker relation to express $x$ as a function of $y$ and plug this into the coincidence relation to get an algebraic fraction in $y$ whose numerator is $\circ y^2+\circ y$.
This yields that $y$, and then all the other Grassmann coordinates, are rational.
This is excluded because the plane is irrational.\\
{\bf Subcase B2}: $p=0$, $qr\neq 0$.\\
Set $G_{23}=x$, $G_{14}=y$ and normalize to $G_{24}=1$.
The subperiods yield
$$
G_{34}=\circ,\qquad
G_{12}=\circ y+\circ,\qquad
G_{13}=\circ y+\circ.
$$
The Plücker and coincidence relations respectively become
$$
\circ xy+\circ y+\circ =0
\qquad\textrm{and}\qquad
\circ y+\circ=0.
$$
This yields that $y$, and then all the other Grassmann coordinates, are rational: this is excluded.\\
{\bf Subcase B3}: $q=0$, $pr\neq 0$.\\
Set $G_{12}=x$, $G_{34}=y$ and normalize to $G_{24}=1$.
The subperiods yield
$$
G_{23}=\circ y+\circ,\qquad
G_{14}=\circ,\qquad
G_{13}=\circ y+\circ.
$$
The Plücker and coincidence relations both become
$$
\circ xy+\circ y+\circ =0.
$$
We use the Plücker relation to express $x$ as a function of $y$ and plug this into the coincidence relation to get an algebraic fraction in $y$ whose numerator is $\circ y+\circ$.
This yields that $y$, and then all the other Grassmann coordinates, are rational: this is excluded.\\
{\bf Subcase B4}: $r=0$, $pq\neq 0$.\\
Set $G_{13}=x$, $G_{24}=y$ and normalize to $G_{14}=1$.
The subperiods yield
$$
G_{23}=\circ y+\circ,\qquad
G_{12}=\circ y+\circ,\qquad
G_{34}=\circ.
$$
The Plücker and coincidence relations can be rewritten as in Subcase B3.
This yields that all the Grassmann coordinates are rational: this is excluded.\\
{\bf Subcase B5}: $p\neq 0$, $q=r=0$.\\
Set $G_{12}=x$, $G_{13}=y$ and normalize to $G_{14}=1$.
The subperiods yield
$$
G_{23}=\circ,\qquad
G_{24}=\circ,\qquad
G_{34}=\circ.
$$
The Plücker and coincidence relations both become
$$
\circ x+\circ y+\circ=0.
$$
This yields that all the Grassmann coordinates are rational: this is excluded.\\
{\bf Subcase B6}: $q\neq 0$, $p=r=0$.\\
Set $G_{13}=x$, $G_{23}=y$ and normalize to $G_{24}=1$.
The subperiods yield
$$
G_{34}=\circ,\qquad
G_{12}=\circ,\qquad
G_{14}=\circ.
$$
The Plücker and coincidence relations respectively become
$$
\circ x+\circ y+\circ =0
\qquad\textrm{and}\qquad
\circ x+\circ=0.
$$
This yields that all the Grassmann coordinates are rational: this is excluded.\\
{\bf Subcase B7}: $r\neq 0$, $p=q=0$.\\
Set $G_{12}=x$, $G_{23}=y$ and normalize to $G_{24}=1$.
The subperiods yield
$$
G_{34}=\circ,\qquad
G_{14}=\circ,\qquad
G_{13}=\circ.
$$
The Plücker and coincidence relations can be rewritten as in Subcase B6.
This yields that all the Grassmann coordinates are rational: this is excluded.\\
{\bf Subcase B8}: $p=q=r=0$.\\
This subcase is excluded for the same reason as Subcase A8.
\paragraph{The four equations are independent.}\hfill\\
The only subcases to consider are A1 to A5.
Since the Grassmann coordinates are in the same quadratic field, there is a type $1$ subperiod:
$$
\alpha G_{23}+\beta G_{24}+\gamma G_{34}=0.
$$
{\bf Subcase A1}:\\
The Grassmann coordinates are
$$
G_{12}=y,\quad
G_{13}=x,\quad
G_{14}=1,\quad
G_{23}=ay+bx,\quad
G_{24}=cy+d,\quad
G_{34}=ex+f.
$$
The fourth subperiod becomes
$$
\alpha (ay+bx)+\beta (cy+d)+\gamma(ex+f)=0,
$$
that is,
$$
(a\alpha+c\beta)y+(b\alpha+e\gamma)x+(d\beta+f\gamma)=0.
$$
We use the Plücker relation to express $y$ as a function of $x$:
$$
y=\frac{(d-b)x}{(e-c)x+(a+f)}.
$$
By plugging this into the fourth subperiod, we get $Ax^2+Bx+C=0$ with
\begin{eqnarray*}
A&=&(e-c)(b\alpha+e\gamma),\\
B&=&(d-b)(a\alpha+c\beta)+(a+f)(b\alpha+e\gamma)+(e-c)(d\beta+f\gamma),\\
C&=&(a+f)(d\beta+f\gamma).
\end{eqnarray*}
In terms of matrices:
$$
\left(\hspace{-5pt}\begin{array}{ccc}
b(e-c) & 0 & e(e-c)\\
a(d-b)+b(a+f) & c(d-b)+d(e-c) & e(a+f)+f(e-c)\\
0 & d(a+f) & f(a+f)
\end{array}\hspace{-5pt}\right)
\hspace{-3pt}
\left(\hspace{-5pt}\begin{array}{c}
\alpha\\\beta\\\gamma
\end{array}\hspace{-5pt}\right)
\hspace{-3pt}=\hspace{-3pt}
\left(\hspace{-5pt}\begin{array}{c}
A\\B\\C
\end{array}\hspace{-5pt}\right).
$$
Since $x$ is irrational (otherwise all the other Grassmann coordinates, hence the plane, would be rational), the triple $(A,B,C)$ is unique (up to a common multiplicative factor).
The above matrix has thus non-zero determinant.
To compute it, we first factor out $(e-c)$ from the first row and $(a+f)$ from the third one.
We then subtract from the second row $(a+f)$ times the first one and $(e-c)$ times the third one.
We can then factor out $(d-b)$ from the second row and compute the determinant of the matrix
$$
\left(\begin{array}{ccc}
b & 0 & e\\
a & c & 0\\
0 & d & f
\end{array}\right).
$$
We finally get
$$
(e-c)(a+f)(d-b)(bcf+ade).
$$
Now, the matrix of the linear system formed by the four subperiods (the variables being the six Grassmann coordinates) is
$$
\left(\begin{array}{cccccc}
a & b & 0 & -p & 0 & 0\\
c & 0 & d & 0 & -q & 0\\
0 & e & f & 0 & 0 & -r\\
0 & 0 & 0 & \alpha & \beta & \gamma
\end{array}\right).
$$
In particular, the minor of size $4$ formed by the three first columns and one of the columns whose fourth entry is non-zero (at most one of the rationals $\alpha$, $\beta$ and $\gamma$ can be zero, since they are the coefficients of a subperiod) is proportional to $bcf+ade$.
It is thus non-zero, {\em i.e.}, the four equations are independent.\\
{\bf Subcase A2}:\\
The Grassmann coordinates are
$$
G_{12}=1,\quad
G_{13}=-\frac{a}{b},\quad
G_{14}=x,\quad
G_{23}=y,\quad
G_{24}=c+dx,\quad
G_{34}=fx-\frac{ea}{b}.
$$
The fourth subperiod becomes
$$
\alpha y+\beta (c+dx)+\gamma(fx-ea/b)=0,
$$
that is,
$$
\alpha y+(d\beta+f\gamma)x+c\beta-\frac{ea}{b}\gamma=0.
$$
We use the Plücker relation to express $x$ as a function of $y$:
$$
x=\frac{a(e-c)}{bf+ad+by}.
$$
By plugging this into the fourth subperiod, we get $Ay^2+By+C=0$ with
\begin{eqnarray*}
A&=&b\alpha,\\
B&=&(bf+ad)\alpha+b(c\beta-\frac{ea}{b}\gamma),\\
C&=&(d\beta+f\gamma)a(e-c)+(bf+ad)(c\beta-\frac{ea}{b}\gamma).
\end{eqnarray*}
In terms of matrices:
$$
\left(\hspace{-5pt}\begin{array}{ccc}
b & 0 & 0\\
bf+ad & bc & -ea\\
0 & ad(e-c)+c(bf+ad) & af(e-c)-(bf+ad)\frac{ea}{b}
\end{array}\hspace{-5pt}\right)
\hspace{-3pt}
\left(\hspace{-5pt}\begin{array}{c}
\alpha\\\beta\\\gamma
\end{array}\hspace{-5pt}\right)
\hspace{-3pt}=\hspace{-3pt}
\left(\hspace{-5pt}\begin{array}{c}
A\\B\\C
\end{array}\hspace{-5pt}\right).
$$
The determinant of this matrix is non-zero.
It is equal to
$$
ab(e-c)(ade+bcf).
$$
This ensures as in Subcase A1 that the four equations are independent.\\
{\bf Subcase A3}:\\
The Grassmann coordinates are
$$
G_{12}=1,\quad
G_{13}=y,\quad
G_{14}=-\frac{c}{d},\quad
G_{23}=a+by,\quad
G_{24}=x,\quad
G_{34}=ey-\frac{cf}{d}.
$$
The fourth subperiod becomes
$$
(b\alpha+e\gamma)y+\beta x+(a\alpha-\frac{cf}{d}\gamma)=0.
$$
We use the Plücker relation to express $y$ as a function of $x$:
$$
y=-\frac{c(a+f)}{dx-de+bc}.
$$
By plugging this into the fourth subperiod, we get $Ax^2+Bx+C=0$ with
\begin{eqnarray*}
A&=&d\beta,\\
B&=&(bc-de)\beta+ad\alpha-cf\gamma,\\
C&=&-(b\alpha+e\gamma)c(a+f)+(bc-de)(a\alpha-\frac{cf}{d}\gamma).
\end{eqnarray*}
In terms of matrices:
$$
\left(\hspace{-5pt}\begin{array}{ccc}
0 & d & 0\\
ad & bc-de & -cf\\
-(ade+bcf) & 0 & -\frac{c}{d}(ade+bcf)
\end{array}\hspace{-5pt}\right)
\hspace{-3pt}
\left(\hspace{-5pt}\begin{array}{c}
\alpha\\\beta\\\gamma
\end{array}\hspace{-5pt}\right)
\hspace{-3pt}=\hspace{-3pt}
\left(\hspace{-5pt}\begin{array}{c}
A\\B\\C
\end{array}\hspace{-5pt}\right).
$$
The determinant of this matrix is non-zero.
It is equal to
$$
cd(a+f)(ade+bcf).
$$
This ensures as in Subcase A1 that the four equations are independent.\\
{\bf Subcase A4}:\\
The Grassmann coordinates are
$$
G_{12}=y,\quad
G_{13}=1,\quad
G_{14}=-\frac{e}{f},\quad
G_{23}=ay+b,\quad
G_{24}=cy-\frac{ed}{f},\quad
G_{34}=x.
$$
The fourth subperiod becomes
$$
(a\alpha+c\beta)y+\gamma x+(b\alpha-\frac{ed}{f}\beta)=0.
$$
We use the Plücker relation to express $y$ as a function of $x$:
$$
y=\frac{e(b-d)}{fx-fc-ea}.
$$
By plugging this into the fourth subperiod, we get $Ax^2+Bx+C=0$ with
\begin{eqnarray*}
A&=&f\gamma,\\
B&=&bf\alpha-ed\beta-(cf+ae)\gamma,\\
C&=&-(ade+bcf)\alpha+\frac{e}{f}(ade+bcf)\beta.
\end{eqnarray*}
In terms of matrices:
$$
\left(\hspace{-5pt}\begin{array}{ccc}
0 & 0 & f\\
bf & -ed & -cf-ae\\
-(ade+bcf) & \frac{e}{f}(ade+bcf) & 0
\end{array}\hspace{-5pt}\right)
\hspace{-3pt}
\left(\hspace{-5pt}\begin{array}{c}
\alpha\\\beta\\\gamma
\end{array}\hspace{-5pt}\right)
\hspace{-3pt}=\hspace{-3pt}
\left(\hspace{-5pt}\begin{array}{c}
A\\B\\C
\end{array}\hspace{-5pt}\right).
$$
The determinant of this matrix is non-zero.
It is equal to
$$
ef(b-d)(ade+bcf).
$$
This ensures as in Subcase A1 that the four equations are independent.\\
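The determinant factorizations claimed in Subcases A2, A3 and A4 can be checked numerically in the same spirit; exact rational arithmetic handles the entries involving $1/b$, $1/d$ and $1/f$.

```python
from fractions import Fraction as Fr
from random import Random

def det3(m):
    # 3x3 determinant by cofactor expansion along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def check_a2_a3_a4(a, b, c, d, e, f):
    s = a * d * e + b * c * f
    m2 = [[b, 0, 0],
          [b * f + a * d, b * c, -e * a],
          [0, a * d * (e - c) + c * (b * f + a * d),
           a * f * (e - c) - (b * f + a * d) * Fr(e * a, b)]]
    m3 = [[0, d, 0],
          [a * d, b * c - d * e, -c * f],
          [-s, 0, -Fr(c, d) * s]]
    m4 = [[0, 0, f],
          [b * f, -e * d, -(c * f + a * e)],
          [-s, Fr(e, f) * s, 0]]
    return (det3(m2) == a * b * (e - c) * s        # Subcase A2
            and det3(m3) == c * d * (a + f) * s    # Subcase A3
            and det3(m4) == e * f * (b - d) * s)   # Subcase A4

rng = Random(0)
nonzero = [v for v in range(-9, 10) if v != 0]
assert all(check_a2_a3_a4(*[rng.choice(nonzero) for _ in range(6)])
           for _ in range(1000))
```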
{\bf Subcase A5}:\\
The Grassmann coordinates are
$$
G_{12}=1,\quad
G_{13}=\frac{cf}{ed},\quad
G_{14}=-\frac{c}{d},\quad
G_{23}=a+\frac{bcf}{ed},\quad
G_{24}=x,\quad
G_{34}=y.
$$
In this case, the Plücker relation {\em is} the fourth subperiod:
$$
y=\frac{cf}{ed}x-\frac{c}{d}(a+\frac{bcf}{ed}).
$$
Note that
$$
G_{23}=\frac{ade+bcf}{ed}.
$$
Hence, $ade+bcf\neq 0$ because the plane is nondegenerate.
This ensures as in Subcase A1 that the four equations are independent.
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}\label{sec:intro}
Inverse problems aim at inferring information about unknown parameters $u \in \mathcal U$ from observations $y \in \mathcal Y$ and a model
that relates parameters and observations. Denoting the model by
$\mathcal F : \mathcal U \to \mathcal Y$ and assuming additive
observation errors $\varepsilon$, the relation between parameters,
model and observations is
\begin{align}\label{eq:invprob}
y = \mathcal F (u) + \varepsilon.
\end{align}
In the Bayesian approach to inverse problems, one considers $u,y$ and
$\varepsilon$ as random variables. In particular, one requires a
distribution $p(u)$ that reflects prior knowledge about the parameters $u$. The solution to this Bayesian inverse problem is then the posterior
distribution $p(u|y)$ of the parameters, defined via Bayes' rule as
\begin{align}\label{eq:Bayesrule}
p (u|y) \propto p (y | u) p(u),
\end{align}
where `$\propto$' means equality up to a normalization constant, and
$p(y|u)$ is the likelihood derived from the forward model
\eqref{eq:invprob} and the distribution of the errors $\varepsilon$. The Bayesian approach to inverse problems has a long history
\cite{franklin1970, Kaipio2005}. While many important questions around
the characterization of the posterior distribution remain, the main
focus of this paper is on the choice of prior distributions for $u\in
\mathcal U$. In particular, we are interested in problems where
$\mathcal U$ is a space of functions defined over a domain $\mathcal
D\subset \mathbb R^d$, with $d\in \{1,2,3\}$, and we have the prior
knowledge that the true parameter might be a discontinuous function
over $\mathcal D$. Samples from the prior distribution $p(u)$ should
thus include discontinuous (or approximately discontinuous)
functions. This is known to be a challenging problem, and the aim of
this paper is to propose a novel method based on Bayesian neural network
parameterizations to define $p(u)$.
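To make \eqref{eq:invprob} and \eqref{eq:Bayesrule} concrete, the following sketch evaluates an unnormalized log-posterior for a toy linear forward map with Gaussian noise and a standard Gaussian prior; the map, the dimensions and the noise level are purely illustrative assumptions, not taken from the experiments below.

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((20, 5))                    # toy linear forward map
u_true = rng.standard_normal(5)
sigma = 0.1                                         # assumed noise level
y = F @ u_true + sigma * rng.standard_normal(20)    # y = F(u) + eps

def log_posterior(u):
    # log p(u|y) up to an additive constant: Gaussian likelihood + N(0, I) prior
    log_lik = -0.5 * np.sum((y - F @ u) ** 2) / sigma ** 2
    log_prior = -0.5 * np.sum(u ** 2)
    return log_lik + log_prior

# the true parameter scores far better than a shifted one
assert log_posterior(u_true) > log_posterior(u_true + 1.0)
```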
\paragraph{\em Approach}
We study neural network priors with weights drawn from $\alpha$-stable
distributions as proposed in \cite{neal1995BayesianLF}. In Figure~\ref{fig:2dnnprior}, we compare samples generated with
such neural network priors with Cauchy and Gaussian weights, which are
$\alpha$-stable random variables with $\alpha = 1, 2$. As can be seen, neural networks with Cauchy weights generate
samples that have large jumps, compared to those with Gaussian weights. This behavior is due to the
heavy tails of Cauchy distributions, resulting in strongly re-scaled
neural network activation functions that visually appear as
discontinuities in the network outputs.
The prior we propose builds on this observation. To study its properties, we first summarize known
theoretical results on the convergence of neural networks with
$\alpha$-stable weights in the limit of infinite width. Then, for
finite-width networks, we present results on the distribution of
pointwise derivatives of the output and use them to explain
the behavior shown in Figure~\ref{fig:2dnnprior}. We also discuss
practical methods to condition these neural network priors with
observations using optimization as well as sampling methods.
\setlength{\fboxsep}{-.2pt}
\setlength{\fboxrule}{.5pt}
\begin{figure}[tb]\centering
\begin{tikzpicture}
\node at (-1,2cm) (img1) {\fbox{\includegraphics[scale=0.31]{plot_paper/prior/2dpriorgraycauchy1}}};
\node[left=.2cm of img1, node distance=0cm, rotate=90, anchor=center, yshift = 0cm, font=\color{black}] {Cauchy};
\node at (2.1,2cm) {\fbox{\includegraphics[scale=0.31]{plot_paper/prior/2dpriorgraycauchy2}}};
\node at (5.2,2cm) {\fbox{\includegraphics[scale=0.31]{plot_paper/prior/2dpriorgraycauchy3}}};
\node at (8.3,2cm) {\fbox{\includegraphics[scale=0.31]{plot_paper/prior/2dpriorgraycauchy4}}};
\node at (-1,-1.5cm) (img2) {\fbox{\includegraphics[ scale=0.31]{plot_paper/prior/2dpriorgraygaussian1}}};
\node[left=.2cm of img2, node distance=0cm, rotate=90, anchor=center, yshift = 0cm, font=\color{black}] {Gaussian};
\node at (2.1,-1.5cm) {\fbox{\includegraphics[scale=0.31]{plot_paper/prior/2dpriorgraygaussian2}}};
\node at (5.2,-1.5cm) {\fbox{\includegraphics[scale=0.31]{plot_paper/prior/2dpriorgraygaussian3}}};
\node at (8.3,-1.5cm) {\fbox{\includegraphics[scale=0.31]{plot_paper/prior/2dpriorgraygaussian4}}};
\end{tikzpicture}
\caption{Comparison between outputs on $[-1,1]^2$ of Bayesian neural
network priors with three hidden layers. Shown are realizations of
networks with $\tanh(\cdot)$ as activation function and Cauchy
weights (top) and Gaussian weights (bottom).\label{fig:2dnnprior}}
\end{figure}
\paragraph{\em Contributions}
The main contributions of this work are as follows.
(1) For finite-width networks, we prove that the distribution of the
derivative of the output function at an arbitrary but fixed point is heavy-tailed
provided the weights in the network are heavy-tailed. This explains
the large jumps in the output function generated by certain neural
networks.
(2) We discuss practical methods to condition these neural network
priors using observations, and present a comprehensive numerical study
using image deconvolution problems in one and two space dimensions.
\paragraph{\em Paper outline}
The outline of this work is as follows. In Sec.~\ref{sec:priormodel},
we review existing priors for Bayesian inverse problems and introduce
neural network priors with general $\alpha$-stable distributed
weights. We provide a systematic discussion of the properties of
neural network priors both in the cases of infinite width and finite
width. In Sec.~\ref{sec:BNN}, we review the Bayesian approach for
inverse problems and discuss optimization and sampling methods to
explore the posterior distribution. In
Sec.~\ref{sec:numerical}, we study the numerical performance of
different neural network priors using two deblurring examples with
discontinuous parameter functions. Finally, we draw conclusions and
point out potential research directions in Sec.~\ref{sec:conclu}.
\section{Prior modeling with neural networks}\label{sec:priormodel}
Here, we review priors for infinite-dimensional Bayesian inverse
problems and study properties of neural network priors. We provide an
overview of existing priors for Bayesian inverse problems in
Sec.~\ref{subsec:litereview}. In Sec.~\ref{subsec:nnprior}, we propose
functions parameterized using neural networks with random weights as
priors. Numerical tests motivate using neural networks with weights
drawn from $\alpha$-stable distributions as priors for discontinuous
parameter functions. We introduce $\alpha$-stable distributions in
Sec.~\ref{subsec:alpha}. In Sec.~\ref{subsec:outputinfinite}, we
provide a review of the limiting properties of Bayesian neural
networks with Gaussian and $\alpha$-stable ($0 < \alpha < 2$) weights
as the width of network layers goes to infinity. We present our main
results on neural network priors with finite width in
Sec.~\ref{subsec:outputfinite}.
\subsection{Related literature}\label{subsec:litereview}
We consider priors for unknown parameter functions $u$ defined over a
domain $\mathcal D$. One approach to infinite-dimensional Bayesian
problems is to first discretize all functions and use a
finite-dimensional perspective \cite{cotter2010,kaipio2007}. Methods building on this approach may suffer from a dependence on the
discretization, i.e., their performance may degrade as the
discretization is refined. Alternatively, one can analyze and develop
methods for Bayesian inverse problems in function space and then
discretize them \cite{stuart_2010}. This approach typically avoids
mesh dependence but is theoretically more challenging.
We follow the latter approach, i.e., construct priors in function
space.
Priors encode a priori available information about the unknown
function $u$. A common choice are Gaussian priors, for which a vast
number of theoretical and computational results are available
\cite{stuart_2010, RasWill2005gaussian, BuiGeorg2013}. A sample $u$ from a Gaussian prior can be constructed using, e.g., the
Karhunen-Lo\`eve expansion \cite{RasWill2005gaussian}. Let $\{\varphi_j, \rho_j^2 \}_{j=1}^{\infty}$ be a
set of orthonormal eigenfunctions and eigenvalues of a positive
definite, trace-class covariance operator $\mathcal C_0:\mathcal
U\to\mathcal U$, and let $\{\xi_j\}_{j=1}^{\infty}$ be an
i.i.d.\ sequence $\xi_j \sim \mathcal N (0, 1)$. Then
\begin{align}\label{eq:kl_expansion}
u := m_0 + \sum_{j =1}^{\infty} \rho_j \xi_j \varphi_j
\end{align}
is distributed according to $\mathcal N (m_0, \mathcal C_0)$, where
$m_0 \in \mathcal U$ is the mean.
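For example, a draw from the truncated version of \eqref{eq:kl_expansion} on $[0,1]$ can be sketched as follows; the sine basis and the eigenvalue decay $\rho_j = j^{-2}$ are illustrative choices, not prescribed by the text.

```python
import numpy as np

def kl_sample(x, n_terms=200, seed=0):
    # truncated Karhunen-Loeve draw: u = m0 + sum_j rho_j xi_j phi_j, with m0 = 0
    rng = np.random.default_rng(seed)
    j = np.arange(1, n_terms + 1)
    rho = j.astype(float) ** -2.0                         # assumed eigenvalue decay
    xi = rng.standard_normal(n_terms)                     # xi_j ~ N(0, 1), i.i.d.
    phi = np.sqrt(2.0) * np.sin(np.pi * np.outer(x, j))   # orthonormal sine basis
    return phi @ (rho * xi)

x = np.linspace(0.0, 1.0, 101)
u = kl_sample(x)
assert u.shape == x.shape and np.isfinite(u).all()
```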
Examples of covariance operators are Mat\'ern covariance operators and
fractional inverse powers of elliptic differential operators. One can
show that samples from these Gaussian priors are almost surely
continuous \cite{stuart_2010}. However, this might not always be
desirable. For example, in many practical problems, the unknown
function $u$ might have discontinuities. Thus, a prior
that allows for discontinuous samples might be preferable to a
Gaussian prior.
Several priors have been proposed to overcome the limitations of
Gaussian priors. For example, total variation priors based on the
total variation regularization \cite{Lassas_2004, gonzalez2017,
Chambolle10anintroduction} can be defined for a fixed
discretization. However, these priors converge to Gaussian fields as
the mesh is refined, which is referred to as lack of discretization
invariance in \cite{Lassas_2004}. That is, total variation priors
depend on the discretization, which is undesirable from an
infinite-dimensional perspective. Non-Gaussian priors can also be based on generalizations of
the expansion \eqref{eq:kl_expansion} with non-Gaussian coefficients.
For instance, when
$\{\varphi_j\}_{j=1}^\infty$ is a wavelet basis of $\mathcal U$, Besov priors are obtained if
$\{ \xi_j \}_{j =1}^{\infty}$ are i.i.d.\ random variables with
probability density function $p_{\xi}(z) \propto \exp \left(- \frac 1
2 |z|^q \right)$ \cite{dashti2011besov}. This results in a
non-Gaussian discretization-invariant prior that is able to produce
discontinuous samples. However, choosing a proper wavelet basis for Besov priors is
challenging and the location of discontinuities depends on this
basis \cite{matti2009samuli,dashti2011besov}.
Due to properties related to their heavy tails,
$\alpha$-stable processes have recently been proposed as priors \cite{chada2019posterior,
sullivan2017}. These can be realized by choosing for $\{ \xi_j \}_{j
=1}^{\infty}$ in \eqref{eq:kl_expansion} i.i.d.\ $\alpha$-stable
distributions. The processes are well-defined in function space
\cite{sullivan2017} and due to their heavy tails, they can take on
extreme values and thus incorporate large jumps. Variations of $\alpha$-stable processes were proposed, e.g., the
Cauchy difference prior \cite{markkanen2016cauchy} and the Cauchy Markov
random field \cite{chada2021cauchy}. Such priors are difficult to sample from in high-dimensional problems \cite{markkanen2016cauchy}.
Priors based on neural networks have been considered
\cite{neal1995BayesianLF, Asim2020}. One ongoing research direction in
constructing neural network priors are generative methods, which can
tailor priors to a specific application and are then used in
inference problems \cite{Asim2020, ardizzone2019analyzing}. An alternative approach is constructing Bayesian neural networks,
i.e., to use neural networks to parameterize functions
\cite{neal1995BayesianLF} and assume that the weights in the neural network
are random variables. Here, we follow this latter approach and thus introduce Bayesian neural networks next.
\subsection{Neural network priors}\label{subsec:nnprior}
Motivated by their relation to Gaussian processes under certain
conditions \cite{neal1995BayesianLF, matthews2018gaussian}, neural
networks can be used to construct priors in function space. These
novel priors are defined as parameterizations given by a neural
network with random weights drawn from specific distributions. The
network is defined on a spatial domain $\mathcal
D \subseteq \mathbb R^{D_0}$, where $D_0=d$ is the dimension of the
input space. Let $\Psi(\boldsymbol w): \mathcal D \to \mathbb R$ denote the
neural network, where $\boldsymbol w$ summarizes all weights in the neural
network. The network (we only consider general fully connected
networks) consists of $L$ hidden layers and its
structure is
\begin{subequations}\label{eq:neural_nets}
\begin{align}
\boldsymbol h^{(0)}(\boldsymbol x) &= \boldsymbol x, \\
\boldsymbol h^{(l + 1)}(\boldsymbol x) &= \phi \left(\boldsymbol b^{(l)} + \boldsymbol V^{(l)} \boldsymbol h^{(l)}(\boldsymbol x) \right), \quad l = 0, \ldots, L-1, \\
u (\boldsymbol x) &= \boldsymbol V^{(L)} \boldsymbol h^{(L)}(\boldsymbol x),
\end{align}
\end{subequations}
where $\boldsymbol h^{(l)}(\boldsymbol x)$ is the $l$-th hidden layer vector with
width $D_l$. The matrices $\boldsymbol V^{(l)} \in \mathbb R^{D_{l + 1}
\times D_l}$, for $0\le l<L$, and $\boldsymbol V^{(L)} \in \mathbb R^{1 \times D_L}$ are the
weights of the $l$-th hidden layer and the output layer, respectively,
and $\boldsymbol b^{(l)} \in \mathbb R^{D_{l + 1} \times 1}$ is the bias of
the $l$-th hidden layer. We denote the output of the neural network as $u(\boldsymbol x)$ for
input $\boldsymbol x$. Note that, in general, $u$ can be vector-valued, but
for simplicity we consider only scalar-valued $u$ in this paper. The
function $\phi: \mathbb R \to \mathbb R$ denotes a nonlinear
activation function in the neural network, which acts on vectors
component-wise. We will mainly use the hyperbolic tangent
function $\tanh(\cdot)$ as activation in this paper. Since this
is a smooth function, the input-to-output mapping of
the neural network is smooth too. In particular, while samples
generated by these networks appear to be discontinuous, they are
continuous but have very sharp jumps that visually appear to be
discontinuities.
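A direct transcription of \eqref{eq:neural_nets} with random weights gives the sampler sketched below; the layer widths and the use of standard Cauchy or Gaussian weights mirror the experiments reported in this paper, but the code is only an illustrative sketch of such a prior draw.

```python
import numpy as np

def nn_prior_sample(x, widths=(80, 80, 100), alpha=1, seed=0):
    # one draw of u(x) from the fully connected tanh network above;
    # alpha = 1: standard Cauchy weights, alpha = 2: standard Gaussian weights
    rng = np.random.default_rng(seed)
    draw = rng.standard_cauchy if alpha == 1 else rng.standard_normal
    h = np.atleast_2d(np.asarray(x, dtype=float))   # h^(0), shape (D_0, n_points)
    for d_next in widths:
        V = draw((d_next, h.shape[0]))              # V^(l)
        b = draw((d_next, 1))                       # b^(l)
        h = np.tanh(b + V @ h)                      # h^(l+1)
    return (draw((1, h.shape[0])) @ h).ravel()      # u(x) = V^(L) h^(L)

x = np.linspace(-1.0, 1.0, 201)
u_cauchy = nn_prior_sample(x, alpha=1)
u_gauss = nn_prior_sample(x, alpha=2)
assert u_cauchy.shape == x.shape and np.isfinite(u_gauss).all()
```

Plotting `u_cauchy` against `u_gauss` reproduces the qualitative contrast of Figure~\ref{fig:1dnnprior}: sharp jumps for Cauchy weights, smooth variation for Gaussian weights.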
We consider priors parameterized using \eqref{eq:neural_nets} with
weights $\boldsymbol w$ drawn from certain distributions. Such neural
network priors are flexible; one can change the distributions on the
weights or the number and widths of the hidden layers to obtain
different priors. One choice of weight distribution
is Gaussian \cite{neal1995BayesianLF, Williams96computing}, but non-Gaussian
weights have also been studied \cite{neal1995BayesianLF,
weiss06platt}. In this paper, we consider neural network priors with
weights drawn from $\alpha$-stable distributions ($0 < \alpha \leq 2$),
in particular, Cauchy and Gaussian distributions (i.e., $\alpha = 1$
or $\alpha=2$).
To illustrate the difference between neural network priors with
Cauchy and Gaussian weights, we show realizations from both neural
networks in one dimension in
Figure~\ref{fig:1dnnprior}. Here, we use the neural network structure
in \eqref{eq:neural_nets} with three hidden layers of widths $[80, 80,
100]$. The distributions assigned to the weights are the standard
Cauchy and Gaussian distributions. We observe that the
realizations of the neural network prior with Cauchy weights
have large jumps while realizations with Gaussian weights tend to be
smooth. This is the same behavior as observed in two dimensions in Figure~\ref{fig:2dnnprior}.
\begin{figure}[tb]\centering
\begin{tikzpicture}
\begin{scope}[xshift=0cm]
\begin{axis}[width=.54\columnwidth,xmin=-1,xmax=1,ymax=5.5, compat=1.13, legend pos=south west, legend style={nodes={scale=.85, transform shape}}, xlabel= Cauchy NN]
\addplot[color=green!70!white,mark=none,thick] table [x=x,y=s2]{plot_paper/prior/1dpriorcauchy.txt};
\addplot[color=violet!70!white,mark=none,thick] table [x=x,y=s3]{plot_paper/prior/1dpriorcauchy.txt};
\addplot[color=red!70!white,mark=none,thick] table [x=x,y=s4]{plot_paper/prior/1dpriorcauchy.txt};
\addplot[color=orange!70!white,mark=none,thick] table [x=x,y=s5]{plot_paper/prior/1dpriorcauchy.txt};
\addplot[color=purple!70!white,mark=none,thick] table [x=x,y=s7]{plot_paper/prior/1dpriorcauchy.txt};
\end{axis}
\node[color=black] at (0.4,4.05cm) {a)};
\end{scope}
\begin{scope}[xshift=6.5cm]
\begin{axis}[width=.54\columnwidth,xmin=-1,xmax=1,compat=1.13, legend pos=south west, legend style={nodes={scale=.85, transform shape}}, xlabel= Gaussian NN]
\addplot[color=green!70!white,mark=none,thick] table [x=x,y=s2]{plot_paper/prior/1dpriorgaussian.txt};
\addplot[color=violet!70!white,mark=none,thick] table [x=x,y=s3]{plot_paper/prior/1dpriorgaussian.txt};
\addplot[color=red!70!white,mark=none,thick] table [x=x,y=s4]{plot_paper/prior/1dpriorgaussian.txt};
\addplot[color=orange!70!white,mark=none,thick] table [x=x,y=s5]{plot_paper/prior/1dpriorgaussian.txt};
\addplot[color=purple!70!white,mark=none,thick] table [x=x,y=s7]{plot_paper/prior/1dpriorgaussian.txt};
\end{axis}
\node[color=black] at (0.4,4.05cm) {b)};
\end{scope}
\end{tikzpicture}
\caption{Realizations of the neural network with two different weight
distributions on the interval $[-1, 1]$. Shown are realizations with
Cauchy weights (a) and Gaussian weights (b).\label{fig:1dnnprior}}
\end{figure}
Note that realizations of neural network priors tend to exhibit
larger variations closer to the origin, i.e., the resulting processes
are non-stationary. To illustrate why this is the case, we consider
a simplified neural network with a one-dimensional input, one hidden
layer, and $V^{(0)}\in \mathbb R^{D_1}$ is a vector of all ones:
\begin{align*}
u(x) = \sum_{i=1}^{D_1} V^{(1)}_i H \left( x - b^{(0)}_i \right),
\end{align*}
where the activation function $H$ is taken to be the Heaviside
function for illustration purposes. For fixed biases $b_i^{(0)}$,
the network output is a random walk with jumps occurring
at each $b_i^{(0)}$ with magnitude $V_i^{(1)}$. Thus, if the biases $b_i^{(0)}$ are
concentrated around zero (as in a Cauchy or Gaussian
distribution), more jumps occur near the origin. If one wishes to
reduce this non-stationarity, $b_i^{(0)}$ could be taken to
be uniformly distributed in some zero-centered interval. However, this
will not guarantee stationarity for more general networks.
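This random-walk behavior is easy to reproduce numerically; the sketch below simulates the simplified one-hidden-layer network above with a Heaviside activation, taking standard normal biases and output weights as an illustrative choice.

```python
import numpy as np

def heaviside_walk(x, width=50, seed=0):
    # u(x) = sum_i V_i H(x - b_i): a jump of size V_i wherever x crosses b_i
    rng = np.random.default_rng(seed)
    b = rng.standard_normal(width)   # biases, concentrated near the origin
    V = rng.standard_normal(width)   # jump magnitudes
    H = ((x[:, None] - b[None, :]) >= 0.0).astype(float)  # Heaviside, pointwise
    return H @ V

x = np.linspace(-3.0, 3.0, 601)
u = heaviside_walk(x)
# u is piecewise constant: it only changes on grid steps crossing some b_i,
# so at most `width` of the 600 increments can be non-zero
assert np.count_nonzero(np.diff(u)) <= 50
```

Because the biases are concentrated around zero, most of the jumps in `u` fall near the origin, which is exactly the non-stationarity discussed above.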
\subsection{$\alpha$-stable distributions}\label{subsec:alpha}
In the previous section, we reviewed neural networks and
how to consider their outputs as priors for Bayesian inverse problems. Before
moving to the theoretical discussion of neural network priors with
$\alpha$-stable weights ($0 < \alpha \leq 2$), we review scalar
$\alpha$-stable distributions.
First introduced by L\'evy, $\alpha$-stable distributions are a class
of heavy-tailed distributions, which have a large probability of attaining
extreme values. Given scalars $\alpha \in (0,2]$ (the stability parameter) and
$\gamma > 0$ (the scaling parameter), the $\alpha$-stable distribution
$\text{St}(\alpha, \gamma)$ may be defined by its characteristic function
\begin{align*}
\Phi(t) = \mathbb E \left[\exp\left(it\text{St}(\alpha, \gamma)\right) \right] = e^{-\gamma^{\alpha} |t|^{\alpha}}.
\end{align*}
Additional parameters representing skew and location may also be introduced,
though we consider these fixed at zero in this article. The absolute moment of an
$\alpha$-stable random variable $\text{St}(\alpha, \gamma)$, i.e.,
$\mathbb E \left[|\text{St}(\alpha,\gamma)|^{\beta} \right]$, is finite when
$\beta \in [0, \alpha)$ and infinite when $\beta \in [\alpha, \infty)$, provided $\alpha < 2$; in the Gaussian case $\alpha = 2$, all moments are finite.
A key property of $\alpha$-stable distributions is that they are closed under
independent linear combinations \cite{Borak2005} leading to generalizations of central limit theorems beyond the Gaussian regime \cite{gnedenko1954limit}.
Note that $\text{St}(2, \gamma)$ coincides with
$\mathcal N(0, 2\gamma^2)$, i.e., a mean-zero Gaussian random
variable. Another special case of an $\alpha$-stable distribution is
the Cauchy distribution for $\alpha = 1$. It has the probability density
function
\begin{align*}
f(x; \gamma) = \frac{1}{\pi \gamma} \left( \frac{\gamma^2}{x^2 + \gamma^2}\right),
\end{align*}
where the scaling parameter $\gamma$ specifies the half-width at
half-maximum (HWHM). In general, for $\alpha \neq 1,2$, the density of $\text{St}(\alpha,\gamma)$
may not be expressed analytically.
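Although the density generally lacks a closed form, sampling is straightforward via the Chambers--Mallows--Stuck representation. A minimal Python sketch for the symmetric case (our illustration, not part of the text):

```python
import numpy as np

def sample_stable(alpha, gamma, size, rng):
    """Draw symmetric alpha-stable St(alpha, gamma) samples
    (Chambers-Mallows-Stuck representation, zero skew and location)."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)    # uniform angle
    w = rng.exponential(1.0, size)                  # unit-rate exponential
    x = (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
         * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))
    return gamma * x    # alpha = 1 reduces to gamma * tan(u), a Cauchy sample
```

For $\alpha = 2$ the representation collapses to $2\gamma\sin(U)\sqrt{W}$, whose variance is $2\gamma^2$, consistent with $\text{St}(2,\gamma) = \mathcal N(0, 2\gamma^2)$.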
As discussed above, we construct different neural network priors by
using different $\alpha$-stable distributions ($0 < \alpha \leq 2$) on
the weights. Gaussian and Cauchy neural network priors are obtained
for $\alpha = 2$ and $\alpha = 1$, respectively. It is possible to define neural
network priors with weights drawn from general $\alpha$-stable
distributions, but in practice this is complicated by the lack
of analytic forms. In the following, we study theoretical results
for general $\alpha$-stable distributions but only show numerical
experiments from neural network priors with Gaussian and Cauchy
weights.
\subsection{Output properties of infinite-width neural network priors}\label{subsec:outputinfinite}
We have seen that realizations of neural networks with Cauchy weights
have jumps, while realizations of
networks with Gaussian weights tend to be smooth; see
Figures~\ref{fig:2dnnprior} and \ref{fig:1dnnprior}. In this section,
we review the convergence of neural networks with different weight
distributions in the limit of infinite width \cite{neal1995BayesianLF,
matthews2018gaussian}. The results are stated for Gaussian ($\alpha
= 2$) and $\alpha$-stable distributions ($0 < \alpha < 2$).
\subsubsection{Gaussian weights}
Here we summarize theoretical results for Gaussian neural networks as
the widths of the hidden layers approach infinity. We start with the
case of one hidden layer, i.e., \eqref{eq:neural_nets} has the form
\begin{subequations}\label{eq:neural_net_1hidden}
\begin{alignat}{2}
h^{(1)}_i(\boldsymbol x) &= \phi \left(b^{(0)}_i + \sum_{j=1}^{D_0} V^{(0)}_{ij} x_j \right), \\
u_{D_1}(\boldsymbol x) &= \frac{1}{\sqrt{D_1} } \sum_{i=1}^{D_1} V^{(1)}_{i} h^{(1)}_i(\boldsymbol x),
\end{alignat}
\end{subequations}
where $V_{ij}, b_{i}, h_i$ are the components of $\boldsymbol V, \boldsymbol b, \boldsymbol
h$, respectively. We denote the $j$-th component of $\boldsymbol x$ as
$x_j$. The normalization factor $\frac{1}{\sqrt{D_1}}$ could
also be absorbed into the variance of the Gaussian distribution.
Then, for Gaussian weights, the output converges to a Gaussian distribution
when the width of the hidden layer goes to
infinity, as summarized in the following theorem from
\cite{neal1995BayesianLF}.
\begin{theorem}\label{thm:gaussian_convergence}
Consider the neural network \eqref{eq:neural_net_1hidden} with all
weights following Gaussian distributions, i.e., $V_{ij}^{(0)},
V_{i}^{(1)} \stackrel{iid}{\sim} \mathcal N(0, \sigma^2_V)$,
$b_i^{(0)}\stackrel{iid}{\sim} \mathcal N(0, \sigma^2_b)$.
Assume that the activation function $\phi(\cdot)$ is bounded and
fix the input $\boldsymbol x$. Then, as the width $D_1 \to \infty$, the
output $u_{D_1}(\boldsymbol x)$ converges to a Gaussian distribution
$\mathcal N(0, \sigma^2_O (\boldsymbol x))$, with
\begin{align}
\sigma^2_O (\boldsymbol x) = \sigma_V^2 \mathbb E \left[ \left( h_1^{(1)}(\boldsymbol x) \right)^2 \right].
\end{align}
\end{theorem}
More generally, one can show that in the limit of infinite width, the
process $u_{D_1}$ converges to a centered Gaussian process with
kernel
$$K(\boldsymbol x, \boldsymbol x') = \sigma_V^2 \mathbb E \left[ h_1^{(1)}(\boldsymbol
x) h_1^{(1)}(\boldsymbol x') \right].$$
If we further assume that
$h_1^{(1)}(\boldsymbol x)$ has a finite second moment, one can prove a similar
result for neural networks with multiple hidden layers by induction
\cite{matthews2018gaussian}.
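Theorem~\ref{thm:gaussian_convergence} is easy to verify by Monte Carlo. The following sketch (our illustration; $\sigma_V = \sigma_b = 1$, tanh activation, and a scalar input are assumed) compares the empirical output variance with $\sigma^2_O(\boldsymbol x)$:

```python
import numpy as np

# Monte Carlo check of the limiting variance of a one-hidden-layer network
rng = np.random.default_rng(0)
D1, n_nets = 300, 5_000
sigma_V, sigma_b, x = 1.0, 1.0, 0.5
V0 = rng.normal(0.0, sigma_V, (n_nets, D1))
b0 = rng.normal(0.0, sigma_b, (n_nets, D1))
h = np.tanh(b0 + V0 * x)                      # hidden layer values, one row per network
V1 = rng.normal(0.0, sigma_V, (n_nets, D1))
u = (V1 * h).sum(axis=1) / np.sqrt(D1)        # one output draw per network
sigma_O2 = sigma_V**2 * np.mean(h**2)         # the theorem's limiting variance
```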
\subsubsection{$\alpha$-stable weights ($0<\alpha < 2$)}
We now consider the case that all the weights of the neural network are
drawn from an $\alpha$-stable distribution with $\alpha \in (0,
2)$. As $D_L$ approaches infinity, we study the
convergence of the weighted sum
\begin{align}\label{eq:neu_lasthidden}
u_{D_L}(\boldsymbol x) = \frac{1}{{D_L}^{1/\alpha}} \sum_{i = 1}^{D_L} V_i^{(L)} h_{i}^{(L)}(\boldsymbol x),
\end{align}
where $V_i^{(L)}$ satisfies the $\alpha$-stable distribution and
$h_{i}^{(L)}$ is the $i$-th node of the last hidden layer defined as
\begin{align}\label{eq:final_hidden_cauchy}
h_{i}^{(L)}(\boldsymbol x) = \phi \left(b^{(L - 1)}_i + \sum_{j=1}^{D_{L-1}}V^{(L - 1)}_{ij} h^{(L-1)}_j (\boldsymbol x) \right).
\end{align}
Note that for a fixed input $\boldsymbol x$, $\left\{h_j^{(L)}(\boldsymbol x), V_j^{(L)} \right\}_{j=1}^{D_{L}}$ are
pairwise independent and each $h_i^{(L)}(\boldsymbol x)$ follows the same
distribution. Thus, we neglect the indices and simply use
$h(\boldsymbol x):=h_i^{(L)}(\boldsymbol x)$. By letting the width of each hidden layer tend to infinity, it is shown in
\cite{weiss06platt} that the final output also converges
to an $\alpha$-stable distribution. The precise result is stated next.
\begin{theorem}\label{thm:stable_convergence}
Assume that all the weights of \eqref{eq:neu_lasthidden} are i.i.d.,
and follow an $\alpha$-stable distribution with scale parameter
$\gamma$, where $\alpha \in (0, 2)$. Assume also that the activation
$\phi(\cdot)$ satisfies $\mathbb E \left[ |h(\boldsymbol x)|^{\alpha} \right] <
\infty$, where $h_{i}^{(L)}(\boldsymbol x)$ is defined in
\eqref{eq:final_hidden_cauchy}. Then, as $D_L \to \infty$, the
output $u_{D_L}(\boldsymbol x)$ converges in distribution to an
$\alpha$-stable random variable $u(\boldsymbol x)$ with characteristic
function $\Phi_{u(\boldsymbol x)} (t) = \exp \left(-|\gamma t|^\alpha \, \mathbb
E \left[ |h(\boldsymbol x)|^{\alpha} \right] \right)$.
\end{theorem}
We note that the assumption $\mathbb E \left[ |h(\boldsymbol x)|^{\alpha} \right] <
\infty$ in the theorem always holds for bounded activation functions
such as $\tanh(\cdot)$ and $\text{sgn}(\cdot)$.
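Theorem~\ref{thm:stable_convergence} can likewise be checked through the characteristic function. The sketch below (our illustration; one hidden layer, i.e., $L = 1$, $\alpha = 1$, tanh activation, and Gaussian biases are assumed) compares the empirical value of $\mathbb E[\cos(t\,u)]$ with the limit $\exp(-\gamma t\, \mathbb E[|h(x)|])$:

```python
import numpy as np

rng = np.random.default_rng(0)
D, n_nets, gamma, x, t = 500, 5_000, 1.0, 0.5, 1.0
b0 = rng.standard_normal((n_nets, D))
V0 = gamma * rng.standard_cauchy((n_nets, D))
h = np.tanh(b0 + V0 * x)                       # last hidden layer (L = 1)
V1 = gamma * rng.standard_cauchy((n_nets, D))
u = (V1 * h).sum(axis=1) / D                   # 1/D^{1/alpha} scaling with alpha = 1
phi_emp = np.cos(t * u).mean()                 # empirical characteristic function
phi_lim = np.exp(-gamma * t * np.mean(np.abs(h)))   # the theorem's limiting value
```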
\subsection{Outputs of neural network priors with finite width}\label{subsec:outputfinite}
Since infinite-width neural networks
are not practical, we next consider neural network
priors with finite width. In particular, we study the
distribution of the output's derivative at a fixed point $\boldsymbol x$. We
distinguish two cases, namely neural networks with heavy-tailed (e.g.,
Cauchy) weights and networks with finite-moment (e.g., Gaussian) weights.
We start with the case of a single hidden layer. We first
assume that the input is one-dimensional and extend the result to
multi-dimensional input later. The network
\eqref{eq:neural_nets} becomes
\begin{align}\label{eq:neu_onediminput}
u(x) = \sum_{i = 1}^{D_1} V_i^{(1)} \phi \left( b^{(0)}_i + V^{(0)}_{i} x \right).
\end{align}
Assuming $\phi$ is differentiable, the gradient of
$u(x)$ with respect to $x$ is
\begin{align}\label{eq:derivative_}
u'(x) &= \sum_{i = 1}^{D_1} V_i^{(1)} V_i^{(0)}\phi' ( b_i^{(0)} + V_i^{(0)} x).
\end{align}
We now show that the distribution of the derivative
\eqref{eq:derivative_} is heavy-tailed if the weights from the input
to the hidden layer follow a heavy-tailed distribution. A
distribution is referred to as heavy-tailed if its tail is heavier
than that of an exponential distribution. The formal definition is given next.
\begin{definition}\label{eq:heavy-tailed}
A random variable $X$ with cumulative distribution function $F_X(x)$ is said to be heavy-tailed if
\begin{align*}
\int_{-\infty}^{\infty} e^{t|x|} \, \text{d} F_X(x) = \infty \quad \text{for all} \ t > 0.
\end{align*}
\end{definition}
To motivate our interest in \eqref{eq:derivative_},
consider the Taylor expansion of $u$ at a point $x$,
\begin{align*}
u(x + \delta) - u(x) = u'(x) \delta + o(|\delta|),
\end{align*}
where $|\delta|$ is small. When, for fixed $x$, $u'(x)$ follows a heavy-tailed distribution, the
difference $|u(x + \delta) - u(x)|$ is very large with a
non-negligible probability, resulting in a large jump in $u$.
To study when \eqref{eq:derivative_} is heavy-tailed, we
first consider
$ V_i^{(0)} \phi' (b_i^{(0)} + V_i^{(0)} x)$. To simplify notation, we
neglect indices and introduce the random variable
\begin{align}\label{eq:gen_deri}
G := V \phi'(B + V x),
\end{align}
which depends
on the independent random variables $V$ and $B$, where $V$ is
heavy-tailed and $B$ is symmetric. We next show that the
distribution of $G$ is heavy-tailed for appropriate activation
functions. Recall that for two measurable spaces $X, Y$ with
$\mu$ being a measure on $X$ and a measurable function $f: X \to Y$,
the induced measure $\nu$ on $Y$ defined by $\nu(A) =
\mu(f^{-1}(A))$, for any measurable set $A$, satisfies
\begin{align}\label{eq:measure-transport}
\mathbb E_{x \sim \mu } (f(x)) = \int_X f(x)\,\mu(\text{d} x)= \int_Y y\,\nu(\text{d} y) = \mathbb E_{y \sim \nu} y.
\end{align}
The following theorem states the main results.
\begin{theorem}\label{thm:grad_heavy_tailed}
Assume that in \eqref{eq:gen_deri}, $V$ follows a
symmetric heavy-tailed distribution and $B$ is
a symmetric random variable. Furthermore, assume that
$\phi(\cdot)$ is differentiable and its derivative is bounded away from zero,
i.e., $|\phi'(\cdot)| \geq c > 0$. Then the
distribution of $G$ is heavy-tailed.
\end{theorem}
\begin{proof}
We denote the joint cumulative distribution function of $V$ and $B$ as
$F_{V, B}(\cdot, \cdot)$, and the cumulative distribution functions of
$V$ and $B$ as $F_V(\cdot)$ and $F_B(\cdot)$, respectively. For $t >
0$, using \eqref{eq:measure-transport}, we have
\begin{align*}
\int_{\mathbb{R}} e^{t|g|} \, \text{d} F_G(g) = \mathbb E_{V, B} \left [ e^{t|V \phi'(B + V x)|} \right] &= \int_{\mathbb R} \int_{\mathbb R} e^{t|v \phi'(b + v x)|} \, \text{d} F_{V, B}(v, b) \\
&= \int_{\mathbb R} \int_{\mathbb R} e^{t|v \phi'(b + v x)|} \, \text{d} F_V(v)\,\text{d} F_B(b),
\end{align*}
since $V$ and $B$ are independent. The
bound $|\phi'(\cdot)| \geq c$ implies
\begin{align*}
\int_{\mathbb{R}} e^{t|g|} \,\text{d} F_G(g) &\geq \int_{\mathbb R} \int_{\mathbb R} e^{ct |v |} \,\text{d} F_V(v)\,\text{d} F_B(b) \geq \int_{\mathbb R} e^{tc|v |} \,\text{d} F_V(v) = \infty,
\end{align*}
where we used $\int_{\mathbb R} \,\text{d} F_B(b) = 1$ and that
$V$ is heavy-tailed. Thus, $G$ is heavy-tailed.
\end{proof}
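A quick numerical illustration of Theorem~\ref{thm:grad_heavy_tailed} (our sketch; the modified activation $\tilde\phi(z) = \tanh(z) + \varepsilon z$, whose derivative is bounded away from zero, and all scales are illustrative) contrasts Cauchy and Gaussian weights $V$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, x, eps = 200_000, 0.5, 0.1

def dphi(z):
    """Derivative of tanh(z) + eps * z; takes values in (eps, 1 + eps]."""
    return 1.0 - np.tanh(z)**2 + eps

B = rng.standard_normal(n)              # symmetric biases
V_cauchy = rng.standard_cauchy(n)       # heavy-tailed weights
V_gauss = rng.standard_normal(n)        # light-tailed weights
G_cauchy = V_cauchy * dphi(B + V_cauchy * x)
G_gauss = V_gauss * dphi(B + V_gauss * x)
```

For large $|V|$ one has $G \approx \varepsilon V$, so the Cauchy case inherits the Cauchy tail, while the Gaussian case concentrates.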
We note that the
assumption of the derivative bounded from below is satisfied for most activation
functions including leaky-ReLU and SeLU. Furthermore, we can apply
this theorem to more activation functions, e.g., $\tanh(\cdot)$, ReLU,
by using
$\tilde \phi (x) = \phi (x) + \varepsilon x$,
with small $\varepsilon > 0 $. Having established that each $ V_i^{(0)} \phi' (b_i^{(0)} + V_i^{(0)}
x)$ in \eqref{eq:derivative_} is heavy-tailed, we next show that
$u'(x)$ is also heavy-tailed. We first
formulate a basic lemma, whose proof can be found in the appendix.
\begin{lemma}\label{lemma:heavy_product_sum}
If $X$ is a heavy-tailed random variable and $Y$ is an independent
symmetric continuous random variable, then we have the
following properties:
\begin{enumerate}[(i)]
\item The product $XY$ is also a heavy-tailed random variable.
\item If we further assume that $Y$ is also heavy-tailed, then
the sum $X + Y$ is also a heavy-tailed random variable.
\end{enumerate}
\end{lemma}
We now show that all components of the gradient \eqref{eq:derivative_} are heavy-tailed
under the assumptions in Theorem~\ref{thm:grad_heavy_tailed}. Note
that we only require the weights from the last hidden layer to the
output to be symmetric continuous random variables. While the result
above is for one-dimensional input $x$, the generalization to
multi-dimensional input $\boldsymbol x$ by considering the partial derivatives
is straightforward. A neural network with multi-dimensional input $\boldsymbol
x$ and one hidden layer is given as
\begin{align}\label{eq:neu_multidiminput}
u(\boldsymbol x) = \sum_{i = 1}^{D_1} V_i^{(1)} \phi \left(b^{(0)}_i + \sum_{j=1}^{D_0} V_{ij}^{(0)} x_j \right).
\end{align}
The next theorem extends
Theorem~\ref{thm:grad_heavy_tailed} to networks of the form
\eqref{eq:neu_multidiminput}.
\begin{theorem}\label{thm:gradone_heavy_tailed}
Assume $V_{ij}^{(0)}, b^{(0)}_i$ follow i.i.d.\ heavy-tailed and
symmetric distributions, respectively, and $V_i^{(1)}$ are
i.i.d.\ symmetric continuous random variables. Further assume that
$\phi(\cdot)$ is differentiable and its derivative is
bounded away from zero. Then the partial derivatives of
\eqref{eq:neu_multidiminput},
\begin{align}\label{eq:gr_partial}
\partial_{x_k} u(\boldsymbol x) &= \sum_{i = 1}^{D_1} V_i^{(1)} V_{ik}^{(0)} \phi' \left(b_i^{(0)} + \sum_{j=1}^{D_0} V_{ij}^{(0)} x_j \right)
\end{align}
are also heavy-tailed.
\end{theorem}
The proof follows by using that each component is
heavy-tailed. Next, we prove our main theorem for neural networks with
multiple hidden layers and weights from heavy-tailed
and finite-moments distributions.
\begin{theorem}\label{thm:gradmulti_heavy_tailed}
Assume the neural network \eqref{eq:neural_nets} and that $\boldsymbol
b^{(l)}$ are symmetric random variables for $l = 0, \ldots,
L-1$. Furthermore, assume that $\phi(\cdot)$ is differentiable,
and its derivative is bounded and bounded away from zero. Then, for a fixed $\boldsymbol x$, the distribution of
$\partial_{x_k} u(\boldsymbol x)$ satisfies:
\begin{enumerate}[(i)]
\item If the weights in $\boldsymbol V^{(l)}$, $l = 0, \ldots, L-1$, are i.i.d.\ symmetric
heavy-tailed and the weights in $\boldsymbol V^{(L)}$ are i.i.d.\ symmetric continuous random
variables, then all $\partial_{x_k}u(\boldsymbol x)$ are heavy-tailed.
\item If the weights in $\boldsymbol V^{(l)}$, $l = 0, \ldots, L$, are i.i.d.\ symmetric random variables
with finite $k$-th order moments, then all $\partial_{x_k}u(\boldsymbol x)$
have finite $k$-th order moments.
\end{enumerate}
\end{theorem}
\begin{proof}
We use induction for the proof. The partial derivative
is
\begin{align}\label{eq:grmulti_partial}
\partial_{x_k} u(\boldsymbol x) &= \sum_{i = 1}^{D_L} V_i^{(L)} \phi' \bigg( b_i^{(L-1)} + \sum_{j=1}^{D_{L-1}} V_{ij}^{(L-1)} h_j^{(L-1)}(\boldsymbol x) \bigg) \bigg( \sum_{j=1}^{D_{L-1}} V_{ij}^{(L-1)} \partial_{x_k} h_j^{(L-1)}(\boldsymbol x) \bigg),
\end{align}
where $\partial_{x_k} h_j^{(L-1)}(\boldsymbol x)$ is the partial
derivative of the $j$-th component of the $(L-1)$-th hidden layer.
We consider first case (i). In Theorem~\ref{thm:gradone_heavy_tailed},
for one hidden layer, we have shown that the distribution of the
partial derivative is heavy-tailed. We assume that this
argument holds for neural networks with $L-1$ hidden layers. It is
easy to see that
\begin{align}\label{eq:gramulti_part}
\sum_{j=1}^{D_{L-1}} V_{ij}^{(L-1)} \partial_{x_k} h_j^{(L-1)}(\boldsymbol x)
\end{align}
is also heavy-tailed using Lemma~\ref{lemma:heavy_product_sum},
since each component is the partial derivative of the output of a
neural network with $L-1$ hidden layers. Since $\phi'(\cdot)$ is
bounded away from zero, $V_i^{(L)} \phi' \left(b_i^{(L-1)} +
\sum_{j=1}^{D_{L-1}} V_{ij}^{(L-1)} h_j^{(L-1)}(\boldsymbol x) \right)$ is
heavy-tailed following the proof in
Theorem~\ref{thm:grad_heavy_tailed}, and thus
\eqref{eq:grmulti_partial} is also heavy-tailed by
Lemma~\ref{lemma:heavy_product_sum}.
For case (ii), we start with a one-hidden-layer neural network given by
\begin{align}\label{eq:neu_onediminputgau}
u(\boldsymbol x) = \sum_{i = 1}^{D_1} V_i^{(1)} \phi \left(b^{(0)}_i + \sum_{j=1}^{D_0} V_{ij}^{(0)} x_j \right),
\end{align}
with corresponding partial derivative
\begin{align*}
\partial_{x_k} u(\boldsymbol x)= \sum_{i = 1}^{D_1} V_i^{(1)} V_{ik}^{(0)} \phi' \left(b_i^{(0)} + \sum_{j=1}^{D_0} V_{ij}^{(0)} x_j \right).
\end{align*}
Note that $|\phi'(\cdot)|$ is bounded by the assumption. Therefore,
the partial derivative of the output has finite $k$-th moment since
$V_i^{(1)}, V_{ij}^{(0)}$ are i.i.d.\ random variables of finite
$k$-th moment.
We now assume that the result holds for any neural network with $L-1$
hidden layers. Then \eqref{eq:gramulti_part} has
finite $k$-th moment since both $V_{ij}^{(L-1)}$ and $\partial_{x_k} h_j^{(L-1)}(\boldsymbol x)$ have finite $k$-th moment and
are independent of each other. This implies that the partial
derivative in \eqref{eq:grmulti_partial} also has finite $k$-th
moment. Thus, the result follows by induction.
\end{proof}
Again, note that the results in this section hold for heavy-tailed
weights, which include $\alpha$-stable weights. From
Theorem~\ref{thm:gradmulti_heavy_tailed}, one can see that the
derivative of the neural network output with Gaussian weights has
finite moments, which implies smooth outputs in practice. In contrast,
the derivative of the neural network output is heavy-tailed if all the
weights before the last hidden layer are $\alpha$-stable distributed,
$0 < \alpha <2$. This implies that the derivative of the output can
have extreme values with non-negligible probability and thus the
corresponding neural network outputs can have
large jumps, emulating discontinuities even when the activation
function is smooth. Furthermore, we only assume the weights before the last hidden layer to be
$\alpha$-stable. Biases can follow any symmetric
distribution. For the numerical experiments in
Sec.~\ref{sec:numerical}, we study one type of such neural network
priors, which has Cauchy weights in all layers except for the final
layer (from the last hidden layer to the output), which has Gaussian weights. We refer to this
setup as the Cauchy-Gaussian prior.
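A sketch of drawing one realization of this Cauchy--Gaussian prior on $[-1,1]$ (our illustration; the widths $[50, 50, 100]$ match the experiments below, while the $1/D$ and $1/\sqrt{D}$ weight scalings are illustrative):

```python
import numpy as np

def draw_cauchy_gaussian(rng, xs, widths=(50, 50, 100)):
    """One draw of a tanh network with Cauchy weights in all layers
    except the final (output) layer, whose weights are Gaussian."""
    h = xs[:, None]
    d_in = 1
    for d_out in widths:
        W = rng.standard_cauchy((d_in, d_out)) / d_in   # Cauchy weights, 1/D scaling
        b = rng.standard_cauchy(d_out)
        h = np.tanh(b + h @ W)
        d_in = d_out
    V = rng.standard_normal((d_in, 1)) / np.sqrt(d_in)  # Gaussian output layer
    return (h @ V).ravel()

rng = np.random.default_rng(0)
xs = np.linspace(-1.0, 1.0, 256)
u = draw_cauchy_gaussian(rng, xs)
```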
\section{Bayesian inference with neural network priors}\label{sec:BNN}
In this section, we study Bayesian inverse problems in function space
with neural network priors. We first use observation data to define the posterior on the
weights. Several approaches to
probe the posterior distribution are then discussed, including
maximum a posteriori (MAP) estimation via optimization, and Markov
chain Monte Carlo (MCMC) sampling.
In the Bayesian approach, one treats the inverse problem as
statistical inference of the function $u$ in a function space
$\mathcal U$. This amounts to finding the posterior distribution of
$u$ that reflects the prior information and the observations. The
forward map is the parameter-to-observable operator
$\mathbfcal{F}: \mathcal U \to \mathbb R^{N_{\text{obs}}}$ that maps
the parameter field $u$ to observations
in $\mathbb R^{N_{\text{obs}}}$, where $N_{\text{obs}}$ is the
dimension of the observables. We assume
additive Gaussian errors $\boldsymbol \varepsilon \sim \mathcal N(0, \eta^2
I_{N_{\text{obs}}})$, so that the observations are
\begin{align}\label{eq:fwd_inf}
\boldsymbol y_{\text{obs}} = \mathbfcal F(u) + \boldsymbol \varepsilon.
\end{align}
Hence, the likelihood for given $u$ is
\begin{align*}
p(\boldsymbol y_{\text{obs}}|u) \propto \exp \left(-\frac{1}{2\eta^2} \left\| \mathbfcal F(u) - \boldsymbol y_{\text{obs}} \right\|_2^2 \right).
\end{align*}
Given a prior measure $\mu^0$, the
posterior measure $\mu^y$ of $u$ is defined via Bayes' rule
\begin{align}\label{eq:bayesinf}
\frac{\text{d} \mu^y}{\text{d} \mu^0} = \frac 1 Z p(\boldsymbol y_{\text{obs}}|u),
\end{align}
where the left-hand side is the Radon--Nikodym derivative of the
posterior measure $\mu^y$ with respect to the prior measure $\mu^0$
and $Z = \int_{\mathcal U} p(\boldsymbol y_{\text{obs}}|u)\,\text{d} \mu^0$ is the
normalization constant.
Note that in infinite dimensions, Bayes' formula cannot
be written in terms of probability density functions since there is no
Lebesgue measure against which to define the densities of the prior
and posterior distributions. Thus, finite-dimensional
approximations of the prior and posterior distributions are
proposed in \cite{BuiGeorg2013,cotter2010}. In particular, we assume
that the prior distribution is approximated by a finite-dimensional
measure $\mu^{0, h}$ which is absolutely continuous with respect to
the Lebesgue measure $\lambda$; the resulting posterior $\mu^{y,h}$ is
then also absolutely continuous with respect to $\lambda$. If we
define $p_{\text{post}}(u | \boldsymbol y_{\text{obs}})$ and
$p_{\text{prior}}(u)$ as the Lebesgue densities of $\mu^{y, h}$ and
$\mu^{0, h}$, respectively, we have
\begin{align}\label{eq:bayesinfappro}
p_{\text{post}}(u | \boldsymbol y_{\text{obs}}) = \frac{\text{d} \mu^{y, h}}{\text{d} \lambda} = \frac{\text{d} \mu^{y, h}}{\text{d} \mu^{0, h}} \frac{\text{d} \mu^{0, h}}{\text{d} \lambda} \propto p(\boldsymbol y_{\text{obs}}|u) p_{\text{prior}}(u).
\end{align}
In this paper, we estimate $u$ defined by the finite-dimensional
network parameterization. The output of the neural network is a
function of the weights, which we denote by $u =
\Psi (\boldsymbol w)$, where $\boldsymbol w$ represents the weights inside the neural
network. This connects the weights $\boldsymbol w$ with the
observations $\boldsymbol y_{\text{obs}}$ as
\begin{align*}
\boldsymbol y_{\text{obs}} = \mathbfcal F \left(\Psi \left(\boldsymbol w \right) \right) + \boldsymbol \varepsilon.
\end{align*}
This allows one to infer $u$ by learning the
posterior distribution on the weights of the neural network
instead. Again, the posterior measure $\mu_{w}^y$ on the weights can
be computed by Bayes' rule as
\begin{align}\label{eq:Bayesrulenn}
\frac{\text{d} \mu_{w}^y}{\text{d} \mu_{w}^0} \propto p(\boldsymbol y_{\text{obs}}| \boldsymbol w),
\end{align}
where $p(\boldsymbol y_{\text{obs}}| \boldsymbol w)$ is the likelihood for given
weights $\boldsymbol w$. The finite-dimensional equation \eqref{eq:Bayesrulenn}
can be expressed in terms of densities.
For the prior density on $\boldsymbol w$, we use the
product of one-dimensional densities on each component.
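Concretely, the resulting unnormalized log posterior density on the weights is the sum of the Gaussian log-likelihood and the componentwise log prior. A sketch for i.i.d.\ Cauchy$(0, \gamma)$ weight priors (names and the generic forward map are illustrative):

```python
import numpy as np

def log_posterior(w, forward, y_obs, eta, gamma):
    """Unnormalized log p(y_obs | w) + log p(w) for
    i.i.d. Cauchy(0, gamma) priors on the weights w."""
    misfit = forward(w) - y_obs
    log_lik = -0.5 * (misfit @ misfit) / eta**2
    log_prior = -np.sum(np.log1p((w / gamma)**2))   # Cauchy log-density up to constants
    return log_lik + log_prior
```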
\subsection{Posterior approximation based on minimizers}
Here, we give an overview of methods that approximate the
posterior distribution using optimization. Minimization-based point estimation only provides limited uncertainty
information, but it is computationally much cheaper
than sampling, which we discuss in Sec.~\ref{subsubsec:MCMC_pcn}.
\subsubsection{MAP estimation}
A widely-used estimation of the posterior \eqref{eq:Bayesrulenn}
is maximum a posteriori (MAP) estimation. MAP estimates $\boldsymbol
w_{\text{map}}$ are obtained by maximizing the posterior distribution
$p_{\text{post}}(\boldsymbol w|\boldsymbol y_{\text{obs}})$:
\begin{align*}
\boldsymbol w_{\text{map}} = \argmax_{\boldsymbol w} p_{\text{post}}(\boldsymbol w | \boldsymbol y_{\text{obs}}).
\end{align*}
This is equivalent to
\begin{align}\label{eq:mapobjective}
\boldsymbol w_{\text{map}} = \argmin_{\boldsymbol w} J(\boldsymbol w) := \frac{1}{2 \eta^2} \left\| \mathbfcal F \left(\Psi (\boldsymbol w) \right) - \boldsymbol y_{\text{obs}} \right\|_2^2 + R(\boldsymbol w),
\end{align}
where $R(\boldsymbol w) := -\log \left(p_{\text{prior}}( \boldsymbol w) \right)$ is the regularization
term. We assume that the prior on $\boldsymbol w$ is an
$\alpha$-stable distribution. Note that \eqref{eq:mapobjective} is a non-convex optimization problem
due to the presence of the neural network and due to a possibly
non-linear map $\mathbfcal F$. It is thus not guaranteed (and even
unlikely) that we find the global minimum numerically. Instead, we
likely find a local minimizer, which we denote by $\boldsymbol w_{\text{loc}}$.
Which local minimizer is found may depend on the initialization of the
optimization algorithm and the algorithm itself.
We will use local minima
to construct different approximations to the posterior distribution as
discussed next.
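As a toy illustration of \eqref{eq:mapobjective} (our sketch; a linear map stands in for $\mathbfcal F(\Psi(\cdot))$, so the problem is in fact convex here, and all sizes and scales are illustrative), plain gradient descent with a Cauchy regularizer looks as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))     # toy linear stand-in for F(Psi(.))
w_true = rng.standard_normal(5)
eta, gamma = 0.1, 1.0
y = A @ w_true + eta * rng.standard_normal(20)

def J(w):
    """MAP objective: data misfit plus Cauchy regularizer R(w)."""
    r = A @ w - y
    return 0.5 * (r @ r) / eta**2 + np.sum(np.log1p((w / gamma)**2))

def grad_J(w):
    return A.T @ (A @ w - y) / eta**2 + 2.0 * w / (gamma**2 + w**2)

w = np.zeros(5)
for _ in range(6_000):
    w = w - 1e-4 * grad_J(w)         # fixed small step; adequate for this toy problem
```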
\subsubsection{Laplace approximation}\label{sec:laplace}
Based on a local minimum $\boldsymbol w_{\text{loc}}$, the Laplace
approximation is defined as the Gaussian distribution
\begin{align}\label{eq:laplaceapp}
\mathcal N \left(\boldsymbol w_{\text{loc}}, \nabla^2 J (\boldsymbol w_{\text{loc}})^{-1} \right),
\end{align}
where $\nabla^2 J (\boldsymbol w_{\text{loc}})$ denotes the Hessian of the
objective function \eqref{eq:mapobjective} at the local minimum $\boldsymbol
w_{\text{loc}}$, assuming that it is positive definite.
That is, the Laplace approximation replaces the true posterior with a
Gaussian distribution centered at $\boldsymbol w_{\text{loc}}$ with
`local' covariance information of the posterior distribution
\cite{schillings20philipp, BuiGeorg2013}. The Laplace approximation
coincides with the true posterior if the mapping $\boldsymbol w \mapsto
\mathbfcal{F}(\Psi(\boldsymbol w))$ is linear and the prior and observation error
distributions are Gaussian. The approximation might be insufficient
when the posterior distribution is multimodal or heavy-tailed. Consistent with \cite{ImmerKB21}, we observed numerically that
the Laplace approximation can
lead to substantial overestimation of the variance, and hence we do not
consider it further in this
paper.
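For completeness, given a local minimizer and a positive-definite Hessian, sampling from \eqref{eq:laplaceapp} reduces to a Cholesky factorization; a minimal sketch (our illustration):

```python
import numpy as np

def laplace_samples(w_loc, hess, n, rng):
    """Draw n samples from N(w_loc, hess^{-1}) for positive-definite hess."""
    L = np.linalg.cholesky(np.linalg.inv(hess))   # cov = L @ L.T = hess^{-1}
    return w_loc + rng.standard_normal((n, len(w_loc))) @ L.T
```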
\subsubsection{Ensemble method}\label{sec:ensemble}
A heuristic approach to estimate uncertainty is based on the set of
local minima computed with different initializations. This approach is
known as the ensemble method or model averaging method
\cite{rahaman2020uncertainty}. Local minimizers are averaged to
approximate the posterior mean and the variation of minimizers is
used to estimate uncertainty. Although not well understood
theoretically, the ensemble method can provide useful and cheap
uncertainty quantification compared to more sophisticated approaches
\cite{Laksh2017,zhou2002}. We will show results obtained with this
heuristic approach as part of our numerical results.
\subsubsection{Last-layer Gaussian regression}\label{sec:regression}
This approach starts with finding a minimizer for networks where the
last layer is Gaussian. The output of the neural network \eqref{eq:neural_nets} can be viewed as a
linear combination of the nodes in the last hidden layer. For linear
parameter-to-observable maps $\mathbfcal{F}(\cdot)$ and Gaussian
weights between the last hidden layer to the output,
\eqref{eq:neural_nets} reduces to a Gaussian regression problem upon
fixing the weights in all previous layers. This is related to the
majority voting algorithm \cite{Laksh2017} except that here we use it
to quantify uncertainty. The output function can be represented as
\begin{align*}
u(\boldsymbol x) = \sum_{j = 1}^{D_L} \nu_j f_j(\boldsymbol x) = \boldsymbol f(\boldsymbol x)^T \boldsymbol \nu,
\end{align*}
where $D_L$ denotes the width of the last hidden layer, i.e., the
number of base functions, $\boldsymbol f(\boldsymbol x) = [f_1(\boldsymbol x), \ldots,
f_{D_L}(\boldsymbol x)]^T$ denotes the vector containing the base functions,
and $\boldsymbol \nu = [\nu_1, \ldots, \nu_{D_L}]^T$ is the weight vector,
which follows a multivariate Gaussian prior $\mathcal N(0,
\frac{1}{D_L} I_{D_L})$. Thus, the forward model is
$
\boldsymbol y_{\text{obs}} = \mathbfcal F\boldsymbol f(\cdot)^T \boldsymbol \nu + \boldsymbol \varepsilon = \boldsymbol F \boldsymbol \nu + \boldsymbol \varepsilon,
$
where $\boldsymbol F = [\mathbfcal F f_1 , \ldots, \mathbfcal F f_{D_L} ]$
and $\mathbfcal F(\cdot)$ is the linear parameter-to-observable
mapping. By Bayes' formula, the posterior distribution of $\boldsymbol \nu$ follows a
multivariate Gaussian distribution $\mathcal N(\boldsymbol \nu_n, \Sigma_n)$
with
\begin{align*}
\boldsymbol \nu_n &= \frac{1}{\eta^2}\Sigma_n \boldsymbol F^* \boldsymbol y_{\text{obs}}, \quad \Sigma_n^{-1} = \frac{1}{\eta^2 }\boldsymbol F^* \boldsymbol F + D_L I_{D_L},
\end{align*}
where $\boldsymbol F^*$ is the conjugate transpose of $\boldsymbol F$.
Note that this Gaussian distribution is $D_L$-dimensional, which is
much smaller than the number of weights. The covariance $\Sigma_n$
can thus be computed and factored easily to generate samples. The base
functions $f_i$, which are found through optimization, typically
contain local structural information such as jumps, and so the $f_i$
essentially provide a basis adapted to the inverse problem \cite{Helmut2017Optimal}. The
assumption of a linear parameter-to-observable mapping $\mathbfcal{F}$
is restrictive. However, when $\mathbfcal{F}$ is nonlinear this
approach may still provide a form of dimension reduction for use with
other sampling/approximation methods.
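The Gaussian update above is a few lines of linear algebra. A sketch (our illustration; $\boldsymbol F$ is the real matrix with columns $\mathbfcal F f_i$, and with \texttt{prior\_var} $= 1/D_L$ this matches the stated prior $\mathcal N(0, \frac{1}{D_L} I_{D_L})$):

```python
import numpy as np

def last_layer_posterior(F, y_obs, eta, prior_var):
    """Gaussian posterior N(nu_n, Sigma_n) for y = F @ nu + eps,
    eps ~ N(0, eta^2 I), and prior nu ~ N(0, prior_var * I)."""
    D = F.shape[1]
    prec = F.T @ F / eta**2 + np.eye(D) / prior_var   # posterior precision
    Sigma = np.linalg.inv(prec)
    nu = Sigma @ (F.T @ y_obs) / eta**2               # posterior mean
    return nu, Sigma
```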
\subsection{Posterior approximation using MCMC sampling}\label{subsubsec:MCMC_pcn}
Sampling methods aim at full exploration of the posterior
distribution.
MCMC explores the posterior distribution $p(\boldsymbol w|\boldsymbol
y_{\text{obs}})$ by constructing a Markov chain which targets the
posterior in stationarity; states of this chain then form a sequence
of correlated samples following convergence. One starts with an
initial state $\boldsymbol w^{(0)}$ and proposes another state based on a
Markov transition kernel. This proposed state is then accepted or
rejected based on an acceptance criterion. Iterating this procedure is
the basis for generating a sequence of MCMC samples.
Ideally one desires the correlation between states of the chain to be
minimal in order to reduce the cost of computing accurate uncertainty
estimates: generating each proposal and/or acceptance probability
typically involves evaluation of the likelihood function and possibly
its derivatives. Many classical MCMC algorithms suffer from increasing
correlations between samples when the dimension of the parameter space
is increased, which is an issue in settings such as ours where the
number of weights to be inferred is high \cite{hairer2014spectral}. A
simple MCMC method which does not suffer from this dimensional
dependence, in the case of a Gaussian prior, is the preconditioned
Crank-Nicolson (pCN) method \cite{cotter2013mcmc}. In this method the
proposed states encode the prior information via a rescaling and
random perturbation of the current state, and the acceptance
probability requires only evaluation of the likelihood; see
\cite{cotter2013mcmc} for a full statement of the algorithm. In this
algorithm one must choose a scalar parameter $\beta$ that controls the
size of the perturbation in proposed states: if the perturbations are
too large, proposed states will be unlikely to be accepted, but if they
are too small the states of the chain will be highly correlated, and
so a balance must be achieved. In practice this parameter may be
adapted on-the-fly to ensure a certain proportion of proposals are
accepted. Variants of pCN are available that use gradient and
curvature information of the likelihood in order to generate more
feasible proposals with larger perturbations, reducing correlation
between samples in exchange for increased computational cost per
sample; see \cite{beskos2017geometric} for a review. In this paper we
will use only the pCN method.
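The pCN update itself is only a few lines. A sketch for a standard Gaussian prior (our illustration; see \cite{cotter2013mcmc} for the full algorithm):

```python
import numpy as np

def pcn(neg_log_lik, dim, beta, n_steps, rng):
    """Preconditioned Crank-Nicolson MCMC for a posterior with N(0, I) prior."""
    w = rng.standard_normal(dim)
    phi = neg_log_lik(w)
    chain = np.empty((n_steps, dim))
    for k in range(n_steps):
        prop = np.sqrt(1.0 - beta**2) * w + beta * rng.standard_normal(dim)
        phi_prop = neg_log_lik(prop)
        # accept with probability min(1, exp(phi - phi_prop))
        if rng.uniform() < np.exp(min(0.0, phi - phi_prop)):
            w, phi = prop, phi_prop
        chain[k] = w
    return chain
```

Only the negative log-likelihood enters the acceptance step; the prior is encoded in the proposal.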
We remark that our prior is in general non-Gaussian; however, we may
still utilize the above methods via a reparameterization typically
known as non-centering \cite{chen2019dimensionrobust}: we rewrite our prior as a nonlinear
transformation of a Gaussian distribution. Since our prior is assumed
to be an independent product of one-dimensional distributions this
mapping may be found using the inverse CDF method. For example, in the
case of a standard Cauchy prior we define the mapping
\begin{align*}
\Lambda (p) = \tan \left[ \pi \left(\psi_G \left( p \right) - \frac 1 2 \right) \right],
\end{align*}
where $\psi_G$ is the standard Gaussian cumulative distribution function:
if $p \sim N(0,1)$, $\Lambda(p)$ is then a standard Cauchy sample. We
therefore compose our likelihood function with $\Lambda$ acting
componentwise, and apply the pCN method assuming independent $N(0,1)$
priors on each weight. Transforming the resulting MCMC samples with
the function $\Lambda$ componentwise then provides samples from our
desired posterior.
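A sketch of this non-centering map (our illustration), together with checks that the median and quartiles of the transformed samples match the standard Cauchy:

```python
import math

def Lambda(p):
    """Map a standard normal sample to a standard Cauchy sample (inverse CDF)."""
    psi = 0.5 * (1.0 + math.erf(p / math.sqrt(2.0)))   # standard normal CDF
    return math.tan(math.pi * (psi - 0.5))
```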
\section{Numerical experiments}\label{sec:numerical}
In this section, we study the behavior of
neural network priors for Bayesian inverse problems. Our goal is to
compare inversion results obtained with neural network priors with
Cauchy and Gaussian weight distributions. For that purpose, we use
deconvolution problems in one and
two dimensions. All numerical experiments are implemented using
the PyTorch 1.9.0 environment.
\subsection{One-dimensional deconvolution}
\begin{problem}\label{ex1:fun_appro}
We first consider a deconvolution problem on the interval
$\mathcal D = [-1, 1]$ with forward model
\begin{align*}
\boldsymbol y_{\text{obs}} = A (u) + \boldsymbol \varepsilon, \quad \boldsymbol
\varepsilon \sim \mathcal N(0, \eta^2 I_{N_{\text{obs}}}),
\end{align*}
where $u$ is the unknown, potentially discontinuous parameter to be
recovered. The forward operator is defined as $A = P\circ B$, where $B:
L^{\infty}(\mathcal D) \to \mathcal C^{\infty}$ denotes the blurring
operator defined by the convolution with a mean-zero Gaussian kernel
with variance $0.03^2$, and $P: \mathcal C^{\infty} \to \mathbb
R^{N_{\text{obs}}}$ is the evaluation operator at $N_{\text{obs}}=50$
uniformly distributed points. The variance of the errors $\boldsymbol
\varepsilon$ is $\eta^2 = 0.05^2$.
We discretize the forward operator using
a uniform mesh with $128$ points.
\end{problem}
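As an illustration of this setup, the following sketch assembles a discrete version of $A = P \circ B$ by matrix quadrature on the $128$-point mesh; the simple rectangle quadrature rule is our assumption and not taken from the implementation.

```python
import numpy as np

def forward_operator(n_mesh=128, n_obs=50, sigma=0.03, domain=(-1.0, 1.0)):
    """Discrete blur-then-sample forward map A = P o B for the 1D problem.

    B convolves with a mean-zero Gaussian kernel of standard deviation
    `sigma` (matrix quadrature on a uniform mesh); P evaluates at n_obs
    uniformly spaced points. Returns the (n_obs, n_mesh) matrix for A
    together with the mesh and observation points.
    """
    x = np.linspace(domain[0], domain[1], n_mesh)
    h = x[1] - x[0]
    x_obs = np.linspace(domain[0], domain[1], n_obs)
    K = np.exp(-0.5 * ((x_obs[:, None] - x[None, :]) / sigma) ** 2)
    K *= h / (sigma * np.sqrt(2.0 * np.pi))  # quadrature weight * kernel normalization
    return K, x, x_obs
```

Applying `K` to a constant function reproduces the constant away from the boundary, a quick sanity check for the discretization.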
The true model and the synthetic observations are shown in
Figure~\ref{fig:1ddatagpr}(a). As a reference, we show the
reconstruction obtained with a Gaussian process regression in
Figure~\ref{fig:1ddatagpr}(b). Here, we used a mean-zero Gaussian
process prior with the Mat\'ern covariance operator $0.25 (I -
10\Delta)^{-2}$ with homogeneous Neumann boundary conditions. Here
and in the remainder of this section, we show an uncertainty region
corresponding to the $95 \%$ credible interval. For Gaussians, this
corresponds to $\hat \mu \pm 1.96 \hat \sigma$, where $\hat \mu$ and
$\hat \sigma$ denote the sample mean and sample pointwise standard
deviation, respectively. For the following neural network priors, we
use a network with 3 hidden layers with widths $[50, 50, 100]$.
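A draw from such a neural network prior can be sketched as below; the `tanh` activation and the absence of width-dependent weight scaling are assumptions, since this section does not restate them.

```python
import numpy as np

def sample_prior_draw(x, widths=(50, 50, 100), weight_dist="cauchy", rng=None):
    """Evaluate one random fully connected network u(x) with the given
    hidden widths at the points x (1D input, scalar output).

    Weights and biases are i.i.d. standard Gaussian or standard Cauchy;
    the tanh activation and the linear output layer are illustrative
    assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    draw = rng.standard_cauchy if weight_dist == "cauchy" else rng.standard_normal
    a = np.atleast_2d(x).T            # shape (n_points, input_dim)
    dims = [a.shape[1], *widths, 1]
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        W, b = draw((d_in, d_out)), draw(d_out)
        z = a @ W + b
        a = np.tanh(z) if d_out != 1 else z   # no activation on the output
    return a[:, 0]
```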
\begin{figure}[tb]\centering
\begin{tikzpicture}
\begin{scope}[xshift=0cm]
\begin{axis}[width=.54\columnwidth,xmin=-1,xmax=1,ymin=-1.8,ymax=0.8,compat=1.13, ytick ={-1.5, -1, -0.5, 0, 0.5}, legend pos=south west, legend cell align={left}, legend style={nodes={scale=.85, transform shape}}, xlabel = ]
\addplot[mark=none, black, dotted, color=blue!20!red, thick] table [x=x,y=U]{plot_paper/ex1/data/1dtruth.txt};
\addlegendentry{Truth $u^{\dagger}$}
\addplot[color=blue!60!green,mark=none,thick] table [x=x,y=blur]{plot_paper/ex1/data/1dtruth.txt};
\addlegendentry{Blurred model}
\addplot[only marks, mark=x, color=black, mark size=1] table [x=x,y=yobs]{plot_paper/ex1/data/1dobs.txt};
\addlegendentry{Observations}
\end{axis}
\node[color=black] at (0.4,4.1cm) {a)};
\end{scope}
\begin{scope}[xshift=6cm]
\begin{axis}[width=.54\columnwidth,xmin=-1,xmax=1,ymin=-1.8,ymax=0.8,compat=1.13, xtick ={-0.5, 0, 0.5, 1}, yticklabels=\empty, legend pos=south west, legend cell align={left}, legend style={nodes={scale=.85, transform shape}}, xlabel =]
\addplot[mark=none, black, dotted, color=blue!20!red, thick] table [x=x,y=U]{plot_paper/ex1/data/1dtruth.txt};
\addlegendentry{Truth}
\addplot[color=blue!40!orange,mark=none,thick] table [x=x,y=mean]{plot_paper/ex1/data/1dgpr.txt};
\addlegendentry{Mean}
\errorband[blue, opacity=0.2]{plot_paper/ex1/data/1dgpr.txt}{x}{mean}{std}
\addlegendentry{$\pm 1.96$ Std. dev.}
\end{axis}
\node[color=black] at (0.4,4.1cm) {b)};
\end{scope}
\end{tikzpicture}
\caption{Setup for Problem~\ref{ex1:fun_appro}. Shown in (a) are the
truth model and synthetic observations. As reference, shown in (b)
is the result of Gaussian process regression with the covariance
operator $0.25(I - 10 \Delta)^{-2}$.\label{fig:1ddatagpr}}
\end{figure}
\subsubsection{Optimization-based methods}
For the optimization-based methods, the objective function to be minimized is
\begin{align}\label{eq:optobj}
J(\boldsymbol w) = \frac{1}{2\eta^2} \left\| A \left(\Psi (\boldsymbol w) \right) - \boldsymbol y_{\text{obs}} \right\|_2^2 + R(\boldsymbol w).
\end{align}
For Gaussian weights, the regularization $R(\boldsymbol w)$ is
$R_{\text{G}}(\boldsymbol w)$ and for Cauchy weights it is
$R_{\text{C}}(\boldsymbol w)$, defined as
\begin{align}\label{eq:regularization}
R_{\text{G}}(\boldsymbol w) := \frac 1 2 \sum_i w_i^2, \quad R_{\text{C}}(\boldsymbol w) := \sum_i \log(1 + w_i^2),
\end{align}
where the summation is over all weights $\boldsymbol w$. We compare reconstructions for three regularizations, namely:
\begin{enumerate}[(i)]
\item Fully Gaussian weights,
\item Cauchy-Gaussian weights, i.e., all weights except those in the
last layer follow a Cauchy distribution. The weights in the last
layer are Gaussian,
\item Fully Cauchy weights.
\end{enumerate}
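The two regularizers in \eqref{eq:regularization} and the objective \eqref{eq:optobj} translate directly into code; in this NumPy sketch, `A` and `Psi` are placeholders for the forward map and the neural network.

```python
import numpy as np

def R_gauss(w):
    # Negative log-density (up to a constant) of i.i.d. N(0, 1) weights.
    return 0.5 * np.sum(w**2)

def R_cauchy(w):
    # Negative log-density (up to a constant) of i.i.d. standard Cauchy weights.
    return np.sum(np.log1p(w**2))

def objective(w, A, Psi, y_obs, eta, R):
    # J(w) = ||A(Psi(w)) - y_obs||_2^2 / (2 eta^2) + R(w)
    r = A(Psi(w)) - y_obs
    return 0.5 * np.dot(r, r) / eta**2 + R(w)
```

Note that $R_{\text{C}}$ grows only logarithmically in each weight, which is what permits the few large weights responsible for discontinuities.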
For each optimization, we first use $500$ iterations of the Adaptive Moment
Estimation (Adam) method, which adaptively changes the
stepsize. Following that, we use $1500$ iterations of the
Limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm
\cite{NocedalWright06} for a faster convergence to a local
minimum. Examples of reconstructions of the parameter function $u$
for the different regularizations are shown in
Figure~\ref{fig:1drecons}. We observe that while optimizations from
different initializations typically result in different weights, the
differences in the outputs $u$ are small. Moreover, the
reconstructions using Gaussian regularizations are smooth while the
reconstructions with Cauchy regularizations capture the
discontinuities in the parameter function better, although overall the
differences are rather small.
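The two-stage Adam-then-L-BFGS strategy can be sketched generically as follows; the step counts mirror those above, but the hand-rolled Adam update and the delegation to SciPy's L-BFGS-B are illustrative (the actual runs use PyTorch's optimizers).

```python
import numpy as np
from scipy.optimize import minimize

def two_stage_minimize(J, gradJ, w0, adam_steps=500, lbfgs_steps=1500,
                       lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """Warm start with Adam, then refine with L-BFGS.

    Adam adapts the stepsize per coordinate from running moment
    estimates; L-BFGS then converges quickly to a nearby local minimum.
    """
    w, m, v = w0.astype(float).copy(), np.zeros_like(w0, float), np.zeros_like(w0, float)
    for t in range(1, adam_steps + 1):
        g = gradJ(w)
        m = beta1 * m + (1 - beta1) * g          # first-moment estimate
        v = beta2 * v + (1 - beta2) * g**2       # second-moment estimate
        m_hat = m / (1 - beta1**t)               # bias corrections
        v_hat = v / (1 - beta2**t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    res = minimize(J, w, jac=gradJ, method="L-BFGS-B",
                   options={"maxiter": lbfgs_steps})
    return res.x
```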
\begin{figure}[tb]\centering
\begin{tikzpicture}
\begin{scope}[xshift=0cm]
\begin{axis}[width=.41\columnwidth,xmin=-1,xmax=1,ymin=-1.8,ymax=0.8,compat=1.13, ytick ={-1.5, -1, -0.5, 0, 0.5}, ticklabel style = {font=\tiny}, legend pos=south west, legend cell align={left}, legend style={nodes={scale=0.65, transform shape}}]
\addplot[mark=none, black, dotted, color=blue!20!red, thick] table [x=x,y=U]{plot_paper/ex1/data/1dtruth.txt};
\addlegendentry{Truth}
\addplot[color=blue!70!green!70!white,mark=none,thick] table [x=x,y=recon1]{plot_paper/ex1/opt/Gau/recons.csv};
\addplot[color=blue!60!green!70!white,mark=none,thick] table [x=x,y=recon3]{plot_paper/ex1/opt/Gau/recons.csv};
\addplot[color=blue!50!green!70!white,mark=none,thick] table [x=x,y=recon7]{plot_paper/ex1/opt/Gau/recons.csv};
\addlegendentry{Minimizers}
\end{axis}
\node[color=black] at (0.4,2.7cm) {\small a)};
\end{scope}
\begin{scope}[xshift=4.0cm]
\begin{axis}[width=.41\columnwidth,xmin=-1,xmax=1,ymin=-1.8,ymax=0.8,compat=1.13, xtick ={-0.5, 0, 0.5, 1}, yticklabels=\empty, ticklabel style = {font=\tiny}, legend pos=south west, legend cell align={left}, legend style={nodes={scale=0.65, transform shape}}]
\addplot[mark=none, black,dotted, color=blue!20!red, thick] table [x=x,y=U]{plot_paper/ex1/data/1dtruth.txt};
\addlegendentry{Truth}
\addplot[color=blue!70!green!70!white,mark=none,thick] table [x=x,y=recon1]{plot_paper/ex1/opt/Cau_Gau/recons.csv};
\addplot[color=blue!60!green!70!white,mark=none,thick] table [x=x,y=recon2]{plot_paper/ex1/opt/Cau_Gau/recons.csv};
\addplot[color=blue!50!green!70!white,mark=none,thick] table [x=x,y=recon10]{plot_paper/ex1/opt/Cau_Gau/recons.csv};
\addlegendentry{Minimizers}
\end{axis}
\node[color=black] at (0.4,2.7cm) {\small b)};
\end{scope}
\begin{scope}[xshift=8cm]
\begin{axis}[width=.41\columnwidth,xmin=-1,xmax=1,ymin=-1.8,ymax=0.8,compat=1.13, xtick ={-0.5, 0, 0.5, 1}, yticklabels=\empty, ticklabel style = {font=\tiny}, legend pos=south west, legend cell align={left}, legend style={nodes={scale=0.65, transform shape}}]
\addplot[mark=none, black,dotted, color=blue!20!red, thick] table [x=x,y=U]{plot_paper/ex1/data/1dtruth.txt};
\addlegendentry{Truth}
\addplot[color=blue!70!green!70!white,mark=none,thick] table [x=x,y=recon1]{plot_paper/ex1/opt/Cau/recons.csv};
\addplot[color=blue!60!green!70!white,mark=none,thick] table [x=x,y=recon2]{plot_paper/ex1/opt/Cau/recons.csv};
\addplot[color=blue!50!green!70!white,mark=none,thick] table [x=x,y=recon3]{plot_paper/ex1/opt/Cau/recons.csv};
\addlegendentry{Minimizers}
\end{axis}
\node[color=black] at (0.4,2.7cm) {\small c)};
\end{scope}
\end{tikzpicture}
\caption{Shown are reconstructions using different initializations obtained through
optimization for Problem~\ref{ex1:fun_appro}. The results correspond
to the Gaussian (a), Cauchy-Gaussian (b), and fully Cauchy
(c) weights. \label{fig:1drecons}}
\end{figure}
We also study the uncertainty based
on these reconstructions obtained with the ensemble
method (Sec.~\ref{sec:ensemble}) and the last-layer Gaussian regression method
(Sec.~\ref{sec:regression}). The results obtained with both approaches for
Cauchy-Gaussian weights are shown in Figure~\ref{fig:1densemblegr}. A quantitative summary of these results using the ensemble method is shown in Table~\ref{tb:tab1}. Here, we report the quantities
${\|\mathbb E \left[ u \right] -
u^{\dagger}\|_{L^1}}/{\|u^{\dagger}\|_{L^1}}$ and the mean and
standard deviation of
${\| u - u^{\dagger}\|_{L^1}}/{\|u^{\dagger}\|_{L^1}}$, where
$u^{\dagger}$ and $u$ denote the truth and reconstructions,
respectively. Based on the
numerical reconstructions and the relative $L^1$ errors, we observe
that the results obtained with Cauchy-Gaussian and fully Cauchy
weights fit the discontinuities of the truth parameter
better.
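The three tabulated quantities can be computed as in the sketch below; the uniform-mesh quadrature for the $L^1$ norm is an assumption, and the table presumably reports the values in percent.

```python
import numpy as np

def relative_l1_errors(samples, u_truth, h):
    """Compute, on a uniform mesh with spacing h:
    (i)  the relative L^1 error of the sample mean,
    (ii) the mean of the relative L^1 errors of the samples,
    (iii) the standard deviation of those errors,
    all normalized by ||u_truth||_{L^1}."""
    norm = h * np.sum(np.abs(u_truth))
    err = np.array([h * np.sum(np.abs(u - u_truth)) for u in samples]) / norm
    mean_err = h * np.sum(np.abs(samples.mean(axis=0) - u_truth)) / norm
    return mean_err, err.mean(), err.std()
```

Note that (i) and (ii) need not coincide: the error of the mean can be small even when individual samples deviate substantially.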
\begin{figure}[tb]\centering
\begin{tikzpicture}
\begin{scope}[xshift=0cm]
\begin{axis}[width=.54\columnwidth,xmin=-1,xmax=1,ymin=-1.8,ymax=0.8,compat=1.13, ytick ={-1.5, -1, -0.5, 0, 0.5}, legend pos=south west, legend cell align={left}, legend style={nodes={scale=0.65, transform shape}}]
\addplot[mark=none, black,dotted, color=blue!20!red, thick] table [x=x,y=U]{plot_paper/ex1/data/1dtruth.txt};
\addlegendentry{Truth}
\addplot[color=black!40!orange,mark=none,thick] table [x=x,y=mean]{plot_paper/ex1/opt/ensemble_caugau.txt};
\addlegendentry{Mean}
\errorband[blue, opacity=0.2]{plot_paper/ex1/opt/ensemble_caugau.txt}{x}{mean}{std}
\addlegendentry{$\pm 1.96$ Std. dev.}
\end{axis}
\node[color=black] at (0.4,4.05cm) {a)};
\end{scope}
\begin{scope}[xshift=6cm]
\begin{axis}[width=.54\columnwidth,xmin=-1,xmax=1,ymin=-1.8,ymax=0.8,compat=1.13, xtick ={-0.5, 0, 0.5, 1}, yticklabels=\empty, legend pos=south west, legend cell align={left}, legend style={nodes={scale=0.65, transform shape}}]
\addplot[mark=none, black,dotted, color=blue!20!red, thick] table [x=x,y=U]{plot_paper/ex1/data/1dtruth.txt};
\addlegendentry{Truth}
\addplot[color=black!40!orange,mark=none,thick] table [x=x,y=mean]{plot_paper/ex1/opt/gpr_caugau.txt};
\addlegendentry{Mean}
\errorband[blue, opacity=0.2]{plot_paper/ex1/opt/gpr_caugau.txt}{x}{mean}{std}
\addlegendentry{$\pm 1.96$ Std. dev.}
\end{axis}
\node[color=black] at (0.4,4.05cm) {b)};
\end{scope}
\end{tikzpicture}
\caption{Shown are the means and pointwise standard deviations obtained
with the ensemble method (a) and the last-layer Gaussian regression
method (b). Both results are for Cauchy-Gaussian priors for
Problem~\ref{ex1:fun_appro}. \label{fig:1densemblegr}}
\end{figure}
\begin{table}[ht!]
\centering
\caption{Relative $L^1$-error of reconstructions obtained using the ensemble method with
Gaussian, Cauchy-Gaussian and Cauchy priors in
Problem~\ref{ex1:fun_appro}.}
\begin{tabular}{ | p {3.5cm} | p {2.4cm} | p {2.9cm}| p {2.4cm}|}
\hline
\hfil Regularizations & \hfil Gaussian & \hfil Cauchy-Gaussian & \hfil Cauchy \\
\hline
$\|\mathbb E \left[ u \right] - u^{\dagger}\|_{L^1}/\|u^{\dagger}\|_{L^1}$ & \hfil 8.35 & \hfil 5.90 & \hfil 5.53 \\
$\mathbb E [ \|u - u^{\dagger}\|_{L^1} ]/\|u^{\dagger}\|_{L^1}$ & \hfil 8.44 & \hfil 5.91 & \hfil 5.56 \\
$\text{Std} [\|u - u^{\dagger}\|_{L^1} ]/\|u^{\dagger}\|_{L^1}$ & \hfil 0.10 & \hfil 0.11 & \hfil 0.13 \\
\hline
\end{tabular}
\label{tb:tab1}
\end{table}
\subsubsection{MCMC sampling}
We also explore the posterior using MCMC. We use a
local minimum obtained with the optimization method as the starting
point to reduce the burn-in phase. The pCN method is used to generate
$5 \times 10^6$ samples as discussed in
Sec.~\ref{subsubsec:MCMC_pcn}. We use an adaptive method
to adjust the value of $\beta$ to maintain an
acceptance rate of about $30 \%$. Note that we always use Gaussian
increments in the pCN method. Numerical results obtained with each neural network prior are shown in Figure~\ref{fig:1dpcn}. We observe that the pCN samples based on
Cauchy weights capture the discontinuities better. The sample mean and
uncertainty region indicate a better fit to the truth model when we
use the Cauchy-Gaussian or the Cauchy neural network prior. We also
compare the $L^1$ relative error between the truth $u^\dagger$ and the
samples $u$ generated using each prior in Table~\ref{tb:tab2}. Note
that the landscape of the posterior is multi-modal due to the
nonlinearity of the neural network priors. Thus, the chain might not
fully explore the posterior distribution but only provide a local
estimate of the uncertainty around one (or several) local minimizers.
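One simple way to adapt $\beta$ toward the target acceptance rate of about $30\%$ is a multiplicative update on a window of recent proposals; this particular rule is shown purely for illustration, as the text does not specify the adaptation scheme.

```python
import numpy as np

def adapt_beta(beta, accepted_flags, target=0.30, factor=1.1,
               beta_min=1e-4, beta_max=1.0):
    """Multiplicative step-size adaptation for pCN: increase beta when
    the recent acceptance rate exceeds the target (proposals are too
    timid), decrease it otherwise, clipped to a safe range."""
    rate = np.mean(accepted_flags)
    beta = beta * factor if rate > target else beta / factor
    return float(np.clip(beta, beta_min, beta_max))
```

In a sampler loop one would call this every, say, 100 steps on the acceptance flags of that window.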
\begin{figure}[tb]\centering
\begin{tikzpicture}
\begin{scope}[xshift=0cm]
\begin{axis}[width=.41\columnwidth,xmin=-1,xmax=1,ymin=-1.8,ymax=0.8,compat=1.13, ytick ={-1.5, -1, -0.5, 0, 0.5}, ticklabel style = {font=\tiny}, legend pos=south west, legend cell align={left},legend style={nodes={scale=0.65, transform shape}}]
\addplot[mark=none, black,dotted, color=blue!20!red, thick] table [x=x,y=U]{plot_paper/ex1/data/1dtruth.txt};
\addlegendentry{Truth}
\addplot[color=blue!40!green!70!white,mark=none,thick] table [x=x,y=r1]{plot_paper/ex1/MCMC/Gau/1dpcngau.txt};
\addplot[color=blue!50!green!60!white,mark=none,thick] table [x=x,y=r2]{plot_paper/ex1/MCMC/Gau/1dpcngau.txt};
\addplot[color=blue!60!green!50!white,mark=none,thick] table [x=x,y=r3]{plot_paper/ex1/MCMC/Gau/1dpcngau.txt};
\addplot[color=blue!70!green!40!white,mark=none,thick] table [x=x,y=r4]{plot_paper/ex1/MCMC/Gau/1dpcngau.txt};
\addlegendentry{Samples}
\end{axis}
\node[color=black] at (0.4,2.7cm) {\small a)};
\end{scope}
\begin{scope}[xshift=4cm]
\begin{axis}[width=.41\columnwidth,xmin=-1,xmax=1,ymin=-1.8,ymax=0.8,compat=1.13, xtick ={-0.5, 0, 0.5, 1}, yticklabels=\empty, ticklabel style = {font=\tiny}, legend pos=south west, legend cell align={left},legend style={nodes={scale=0.65, transform shape}}]
\addplot[mark=none, black,dotted, color=blue!20!red, thick] table [x=x,y=U]{plot_paper/ex1/data/1dtruth.txt};
\addlegendentry{Truth}
\addplot[color=blue!40!green!70!white,mark=none,thick] table [x=x,y=r1]{plot_paper/ex1/MCMC/Cau_Gau//1dpcncaugau.txt};
\addplot[color=blue!50!green!60!white,mark=none,thick] table [x=x,y=r2]{plot_paper/ex1/MCMC/Cau_Gau/1dpcncaugau.txt};
\addplot[color=blue!60!green!50!white,mark=none,thick] table [x=x,y=r3]{plot_paper/ex1/MCMC/Cau_Gau/1dpcncaugau.txt};
\addplot[color=blue!70!green!40!white,mark=none,thick] table [x=x,y=r4]{plot_paper/ex1/MCMC/Cau_Gau/1dpcncaugau.txt};
\addlegendentry{Samples}
\end{axis}
\node[color=black] at (0.4,2.7cm) {\small b)};
\end{scope}
\begin{scope}[xshift=8cm]
\begin{axis}[width=.41\columnwidth,xmin=-1,xmax=1,ymin=-1.8,ymax=0.8,compat=1.13, xtick ={-0.5, 0, 0.5, 1}, yticklabels=\empty, ticklabel style = {font=\tiny}, legend pos=south west, legend cell align={left}, legend style={nodes={scale=0.65, transform shape}}]
\addplot[mark=none, black,dotted, color=blue!20!red, thick] table [x=x,y=U]{plot_paper/ex1/data/1dtruth.txt};
\addlegendentry{Truth}
\addplot[color=blue!40!green!70!white,mark=none,thick] table [x=x,y=r1]{plot_paper/ex1/MCMC/Cau/1dpcncau.txt};
\addplot[color=blue!50!green!60!white,mark=none,thick] table [x=x,y=r2]{plot_paper/ex1/MCMC/Cau/1dpcncau.txt};
\addplot[color=blue!60!green!50!white,mark=none,thick] table [x=x,y=r3]{plot_paper/ex1/MCMC/Cau/1dpcncau.txt};
\addplot[color=blue!70!green!40!white,mark=none,thick] table [x=x,y=r4]{plot_paper/ex1/MCMC/Cau/1dpcncau.txt};
\addlegendentry{Samples}
\end{axis}
\node[color=black] at (0.4,2.7cm) {\small c)};
\end{scope}
\begin{scope}[xshift = 0cm, yshift=-4cm]
\begin{axis}[width=.41\columnwidth,xmin=-1,xmax=1,ymin=-1.8,ymax=0.8,compat=1.13, ytick ={-1.5, -1, -0.5, 0, 0.5}, ticklabel style = {font=\tiny}, legend pos=south west, legend cell align={left}, legend style={nodes={scale=0.65, transform shape}}]
\addplot[mark=none, black,dotted, color=blue!20!red, thick] table [x=x,y=U]{plot_paper/ex1/data/1dtruth.txt};
\addlegendentry{Truth}
\addplot[color=blue!40!orange,mark=none,thick] table [x=x,y=mean]{plot_paper/ex1/MCMC/Gau/1dpcngaumeanstd.txt};
\addlegendentry{Mean}
\errorband[blue, opacity=0.2]{plot_paper/ex1/MCMC/Gau/1dpcngaumeanstd.txt}{x}{mean}{std}
\addlegendentry{$\pm 1.96$ Std. dev}
\end{axis}
\node[color=black] at (0.4,2.7cm) {\small d)};
\end{scope}
\begin{scope}[xshift = 4.0cm, yshift=-4cm]
\begin{axis}[width=.41\columnwidth,xmin=-1,xmax=1,ymin=-1.8,ymax=0.8,compat=1.13, xtick ={-0.5, 0, 0.5, 1}, yticklabels=\empty, ticklabel style = {font=\tiny}, legend pos=south west, legend cell align={left}, legend style={nodes={scale=0.65, transform shape}}]
\addplot[mark=none, black,dotted, color=blue!20!red, thick] table [x=x,y=U]{plot_paper/ex1/data/1dtruth.txt};
\addlegendentry{Truth}
\addplot[color=blue!40!orange,mark=none,thick] table [x=x,y=mean]{plot_paper/ex1/MCMC/Cau_Gau/1dpcncaugaumeanstd.txt};
\addlegendentry{Mean}
\errorband[blue, opacity=0.2]{plot_paper/ex1/MCMC/Cau_Gau/1dpcncaugaumeanstd.txt}{x}{mean}{std}
\addlegendentry{$\pm 1.96$ Std. dev.}
\end{axis}
\node[color=black] at (0.4,2.7cm) {\small e)};
\end{scope}
\begin{scope}[xshift = 8cm, yshift=-4cm]
\begin{axis}[width=.41\columnwidth,xmin=-1,xmax=1,ymin=-1.8,ymax=0.8,compat=1.13, xtick ={-0.5, 0, 0.5, 1}, yticklabels=\empty, ticklabel style = {font=\tiny}, legend pos=south west, legend cell align={left}, legend style={nodes={scale=0.65, transform shape}}]
\addplot[mark=none, black,dotted, color=blue!20!red, thick] table [x=x,y=U]{plot_paper/ex1/data/1dtruth.txt};
\addlegendentry{Truth}
\addplot[color=blue!40!orange,mark=none,thick] table [x=x,y=mean]{plot_paper/ex1/MCMC/Cau/1dpcncaumeanstd.txt};
\addlegendentry{Mean}
\errorband[blue, opacity=0.2]{plot_paper/ex1/MCMC/Cau/1dpcncaumeanstd.txt}{x}{mean}{std}
\addlegendentry{$\pm 1.96$ Std. dev.}
\end{axis}
\node[color=black] at (0.4,2.7cm) {\small f)};
\end{scope}
\end{tikzpicture}
\caption{Shown in (a,b,c) are samples and in (d,e,f) the uncertainty of
posterior distributions with different neural network priors for
Problem~\ref{ex1:fun_appro}. The plots correspond to the Gaussian
(a,d), Cauchy-Gaussian (b,e), and Cauchy (c,f) neural network
priors. \label{fig:1dpcn}}
\end{figure}
\begin{table}[ht!]
\centering
\caption{Relative $L^1$-error of samples computed by pCN
with Gaussian, Cauchy-Gaussian and Cauchy priors in Problem~\ref{ex1:fun_appro}. }
\begin{tabular}{ | p {3.5cm} | p {2.4cm} | p {2.9cm}| p {2.4cm}|}
\hline
\hfil Neural network prior & \hfil Gaussian & \hfil Cauchy-Gaussian & \hfil Cauchy \\
\hline
$\|\mathbb E \left[ u \right] - u^{\dagger}\|_{L^1}/\|u^{\dagger}\|_{L^1}$ & \hfil 8.14 & \hfil 5.66 & \hfil 4.74 \\
$\mathbb E [ \|u - u^{\dagger}\|_{L^1} ]/\|u^{\dagger}\|_{L^1}$ & \hfil 8.57 & \hfil 6.45 & \hfil 5.63 \\
$\text{Std} [\|u - u^{\dagger}\|_{L^1} ]/\|u^{\dagger}\|_{L^1}$ & \hfil 0.63 & \hfil 0.57 & \hfil 0.73 \\
\hline
\end{tabular}
\label{tb:tab2}
\end{table}
\subsection{Two-dimensional deconvolution}
\begin{problem}\label{ex2:deblurring}
This is a two-dimensional deblurring problem on $\mathcal D = [-1, 1]^2$. The
forward model involves a PDE-solve, namely the solution of
\begin{subequations}\label{eq:deblurring_prob}
\begin{align}\label{eq:deblurring_prob:1}
(I - \kappa \Delta) y &= u \quad \text{in} \quad \mathcal D, \\
\frac{\partial y}{\partial n} &= 0 \quad \text{on} \quad \partial \mathcal D,
\end{align}
\end{subequations}
where $y$ and $u$ are the PDE solution and the unknown parameter field,
respectively. In \eqref{eq:deblurring_prob}, $\kappa = 0.01$ controls the
amount of blurring. The forward model is
denoted as
\begin{align*}
\boldsymbol y_{\text{obs}} = A \left( u \right) + \boldsymbol \varepsilon, \quad \boldsymbol \varepsilon \sim \mathcal N(0, \eta^2 I_{N_{\text{obs}}}),
\end{align*}
where $\eta=0.01$ and the forward operator is $A = P \circ B$. Here,
$B: L^{\infty}(\mathcal D) \to \mathcal C^{\infty}$ is the PDE solution operator and
$P: \mathcal C^{\infty} \to \mathbb R^{N_{\text{obs}}}$ is the point evaluation
operator on a uniform grid of $14 \times 14$ points. We discretize the forward
PDE on a uniform mesh of $100 \times 100$ grid points using the
standard 5-point finite difference stencil.
\end{problem}
The true parameter function $u^\dagger$ and the blurred model are
shown in Figure~\ref{fig:2drecons}. Similar to
Problem~\ref{ex1:fun_appro}, we test the optimization and pCN methods
on this problem with different neural network priors. For all tests,
we use the three-hidden-layer neural network structure with layer
widths of $[80, 80, 1000]$. Note that the input dimension of the
neural network is $2$, and the output dimension is $1$. This example
is computationally more costly than Problem~\ref{ex1:fun_appro} due to
the required PDE solves and the use of a wider network. Since the PDE
solve requires most of the computation time, we use an upfront
Cholesky decomposition of the discretized blurring operator to accelerate the computation.
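The factor-once, solve-many idea can be sketched as follows; the symmetric Neumann closure of the 5-point stencil and the dense Cholesky factorization are illustrative choices (at the $100 \times 100$ resolution used here one would factor the sparse matrix instead).

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def build_blur_solver(n=20, kappa=0.01, domain=(-1.0, 1.0)):
    """Factor (I - kappa * Delta) once on an n x n uniform grid so that
    each forward solve reduces to cheap triangular substitutions.

    Uses the symmetric 5-point Neumann Laplacian, so the matrix is
    symmetric positive definite and admits a Cholesky factorization.
    """
    h = (domain[1] - domain[0]) / (n - 1)
    D = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    D[0, 0] = D[-1, -1] = -1.0   # symmetric homogeneous Neumann closure
    D /= h**2
    I = np.eye(n)
    lap2d = np.kron(I, D) + np.kron(D, I)   # 2D Laplacian (Kronecker sum)
    factor = cho_factor(np.eye(n * n) - kappa * lap2d)
    return lambda u: cho_solve(factor, np.asarray(u).ravel()).reshape(n, n)
```

A quick consistency check: since constants lie in the kernel of the Neumann Laplacian, the blurring of a constant field returns the same constant.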
\subsubsection{Optimization-based methods}
The objective to be minimized is formally as in the
one-dimensional case, cf.~\eqref{eq:optobj}. We use the Adam method for the first $200$
steps followed by $1500$ iterations of L-BFGS. The initial step size
is set to be $0.01$ for each method. Reconstructions obtained with
different regularizations and different initialization in the
optimization are shown in Figure~\ref{fig:2drecons}. We observe that
the reconstructions using the Cauchy neural network capture the edges
better, especially the middle edge and the right bottom block. In
contrast, the fully Gaussian neural network tends to result in smooth
reconstructions. We also note that the fully Cauchy neural network
results in better reconstructions of the linear ramp in the upper part
of the image compared to the Cauchy-Gaussian neural network.
\begin{figure}[tb]\centering
\begin{tikzpicture}
\node at (0,2cm) (img1) {\includegraphics[scale=0.31]{plot_paper/ex2/data/truth}};
\node[above=of img1, node distance=0cm, yshift=-1.2cm,font=\color{black}] {Truth};
\node at (3.1,2cm) (img12) {\includegraphics[scale=0.31]{plot_paper/ex2/data/convolution}};
\node[above=of img12, node distance=0cm, yshift=-1.2cm,font=\color{black}] {Convolution};
\node at (7.8,1.7cm) (img13) {\includegraphics[scale=0.4]{plot_paper/ex2/data/colorbar}};
\node[above=of img13, node distance=0cm, yshift=-1.2cm,font=\color{black}] {Colorbar $[-0.1, 1.3]$};
\node at (0,-1.5cm) (img2) {\includegraphics[scale=0.31]{plot_paper/ex2/recons/21}};
\node[left=.1cm of img2, node distance=0cm, rotate=90, anchor=center, yshift = 0cm, font=\color{black}] {Gaussian};
\node at (3.1,-1.5cm) {\includegraphics[scale=0.31]{plot_paper/ex2/recons/22}};
\node at (6.2,-1.5cm) {\includegraphics[scale=0.31]{plot_paper/ex2/recons/23}};
\node at (9.3,-1.5cm) {\includegraphics[scale=0.31]{plot_paper/ex2/recons/24}};
\node at (0,-5cm) (img3) {\includegraphics[scale=0.31]{plot_paper/ex2/recons/31}};
\node[left=.1cm of img3, node distance=0cm, rotate=90, anchor=center, yshift = 0cm, font=\color{black}] {Cauchy-Gaussian};
\node at (3.1,-5cm) {\includegraphics[scale=0.31]{plot_paper/ex2/recons/32}};
\node at (6.2,-5cm) {\includegraphics[scale=0.31]{plot_paper/ex2/recons/33}};
\node at (9.3,-5cm) {\includegraphics[scale=0.31]{plot_paper/ex2/recons/34}};
\node at (0,-8.5cm) (img4) {\includegraphics[scale=0.31]{plot_paper/ex2/recons/41}};
\node[left=.1cm of img4, node distance=0cm, rotate=90, anchor=center, yshift = 0cm, font=\color{black}] {Cauchy};
\node at (3.1,-8.5cm) {\includegraphics[scale=0.31]{plot_paper/ex2/recons/42}};
\node at (6.2,-8.5cm) {\includegraphics[scale=0.31]{plot_paper/ex2/recons/43}};
\node at (9.3,-8.5cm) {\includegraphics[scale=0.31]{plot_paper/ex2/recons/44}};
\end{tikzpicture}
\caption{Shown in the first row are the truth $u^\dagger$ and blurred
models used for Problem~\ref{ex2:deblurring}. We show minimizers
obtained using the optimization method with different
initializations. Results are shown for Gaussian weights (second
row), Cauchy-Gaussian weights (third row), and Cauchy weights (fourth
row). \label{fig:2drecons}}
\end{figure}
We also study the uncertainty using the ensemble method from
Sec.~\ref{sec:ensemble}. Results for different regularizations are shown in
Figure~\ref{fig:2densemble}. We observe that the Cauchy-Gaussian
neural network outperforms the Gaussian neural network for edge
detection.
\begin{figure}[tb]\centering
\begin{tikzpicture}
\node at (0,2cm) (img1) {\includegraphics[scale=0.39]{plot_paper/ex2/ensemble/mean1}};
\node[left=0cm of img1, node distance=0cm, rotate=90, anchor=center, yshift = 0cm, font=\color{black}] {Mean};
\node[above=of img1, node distance=0cm, yshift=-1.25cm,font=\color{black}] {Gaussian};
\node at (3.9,2cm) (img12) {\includegraphics[scale=0.39]{plot_paper/ex2/ensemble/mean2}};
\node[above=of img12, node distance=0cm, yshift=-1.25cm,font=\color{black}] {Cauchy-Gaussian};
\node at (7.8,2cm) (img13) {\includegraphics[scale=0.39]{plot_paper/ex2/ensemble/mean3}};
\node[above=of img13, node distance=0cm, yshift=-1.25cm,font=\color{black}] {Cauchy};
\node at (10.0, 2cm) (img13) {\includegraphics[scale=0.333]{plot_paper/ex2/ensemble/colorbar1}};
\node at (0,-2cm) (img1) {\includegraphics[scale=0.39]{plot_paper/ex2/ensemble/std1}};
\node[left=0cm of img1, node distance=0cm, rotate=90, anchor=center, yshift = 0cm, font=\color{black}] {Std. dev.};
\node at (3.9,-2cm) (img12) {\includegraphics[scale=0.39]{plot_paper/ex2/ensemble/std2}};
\node at (7.8,-2cm) (img13) {\includegraphics[scale=0.39]{plot_paper/ex2/ensemble/std3}};
\node at (10.0, -2cm) (img13) {\includegraphics[scale=0.333]{plot_paper/ex2/ensemble/colorbar2}};
\end{tikzpicture}
\caption{Results obtained with ensemble method for
Problem~\ref{ex2:deblurring}. Shown are the ensemble means (top row)
and standard deviation (bottom row) obtained with Gaussian weights
(left), Cauchy-Gaussian weights (middle), and fully Cauchy weights
(right). \label{fig:2densemble}}
\end{figure}
An alternative uncertainty quantification approximation method is the
last-layer Gaussian regression based on the trained neural network
(Sec.~\ref{sec:regression}). Since the mean of the last-layer
Gaussian regression coincides with the reconstruction of the
pre-trained network, we only show the standard deviations obtained
from a fully Gaussian and a Cauchy-Gaussian neural network in
Figure~\ref{fig:2dgpresult}.
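Last-layer Gaussian regression amounts to Bayesian linear regression on the fixed last-hidden-layer features; in the sketch below, `Phi` collects those basis functions pushed through the (linear) forward map at the observation points, and the unit-variance Gaussian prior on the last-layer weights is an assumption.

```python
import numpy as np

def last_layer_gaussian_regression(Phi, y_obs, eta, prior_var=1.0):
    """Gaussian posterior over last-layer weights w_L for the linear
    model y = Phi @ w_L + noise, with prior w_L ~ N(0, prior_var * I)
    and noise ~ N(0, eta^2 * I).

    Returns the posterior mean and covariance of w_L; pushing these
    through the last-layer basis yields the pointwise mean and standard
    deviation shown in the figures.
    """
    n_feat = Phi.shape[1]
    prec = Phi.T @ Phi / eta**2 + np.eye(n_feat) / prior_var
    cov = np.linalg.inv(prec)
    mean = cov @ (Phi.T @ y_obs) / eta**2
    return mean, cov
```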
\begin{figure}[tb]\centering
\begin{tikzpicture}
\node at (0,-2cm) (img1) {\includegraphics[scale=0.5]{plot_paper/ex2/gr/std1}};
\node[above=of img1, node distance=0cm, yshift=-1.25cm,font=\color{black}] {Gaussian};
\node[left=.15cm of img1, node distance=0cm, rotate=90, anchor=center, yshift = 0cm, font=\color{black}] {Std. dev.};
\node at (5.3,-2cm) (img12) {\includegraphics[scale=0.5]{plot_paper/ex2/gr/std2}};
\node[above=of img12, node distance=0cm, yshift=-1.25cm,font=\color{black}] {Cauchy-Gaussian};
\node at (8.4, -2cm) (img13) {\includegraphics[scale=0.43]{plot_paper/ex2/gr/colorbar2}};
\end{tikzpicture}
\caption{Results for last-layer Gaussian regression,
Problem~\ref{ex2:deblurring}. Shown are the standard deviations
with last-layer base functions from a pre-trained network. The
figures are for networks with Gaussian (left) and Cauchy-Gaussian
(right) weights. \label{fig:2dgpresult}}
\end{figure}
\subsubsection{MCMC sampling}
We also test the MCMC sampling method on this two-dimensional
deconvolution example using $10^6$ samples of the
adaptive pCN method. A numerical comparison of the uncertainty with
different priors is shown in Figure~\ref{fig:2dpcn}.
We observe that the mean of the chain with a Cauchy neural network
prior recovers the edges of the parameter function better while the
Gaussian neural network prior results in smoother
reconstructions. From the plots of the standard deviation, we also
note that the result obtained with a Cauchy neural network prior
possesses a larger variation around the edges compared to other
regions, which implies a better edge detection. We also note that the
neural network priors with fully Cauchy weights can learn the
linearly-changing block on the top better than the Cauchy-Gaussian
network prior.
\begin{figure}[tb]\centering
\begin{tikzpicture}
\node at (0,2cm) (img1) {\includegraphics[scale=0.39]{plot_paper/ex2/MCMC/mean1}};
\node[left=0cm of img1, node distance=0cm, rotate=90, anchor=center, yshift = 0cm, font=\color{black}] {Mean};
\node[above=of img1, node distance=0cm, yshift=-1.25cm,font=\color{black}] {Gaussian};
\node at (3.9,2cm) (img12) {\includegraphics[scale=0.39]{plot_paper/ex2/MCMC/mean2}};
\node[above=of img12, node distance=0cm, yshift=-1.25cm,font=\color{black}] {Cauchy-Gaussian};
\node at (7.8,2cm) (img13) {\includegraphics[scale=0.39]{plot_paper/ex2/MCMC/mean3}};
\node[above=of img13, node distance=0cm, yshift=-1.25cm,font=\color{black}] {Cauchy};
\node at (10.0, 2cm) (img13) {\includegraphics[scale=0.333]{plot_paper/ex2/MCMC/colorbar1}};
\node at (0,-2cm) (img1) {\includegraphics[scale=0.4]{plot_paper/ex2/MCMC/std1}};
\node[left=0cm of img1, node distance=0cm, rotate=90, anchor=center, yshift = 0cm, font=\color{black}] {Std. dev.};
\node at (3.9,-2cm) (img12) {\includegraphics[scale=0.39]{plot_paper/ex2/MCMC/std2}};
\node at (7.8,-2cm) (img13) {\includegraphics[scale=0.39]{plot_paper/ex2/MCMC/std3}};
\node at (10.0, -2cm) (img13) {\includegraphics[scale=0.333]{plot_paper/ex2/MCMC/colorbar2}};
\end{tikzpicture}
\caption{MCMC sampling results for Problem~\ref{ex2:deblurring}. Shown
are the means (top row) and standard deviations (bottom row) for
neural network priors with Gaussian weights (left), Cauchy-Gaussian
weights (middle), and fully Cauchy weights
(right). \label{fig:2dpcn}}
\end{figure}
\section{Summary and conclusions}\label{sec:conclu}
The main target of this work is to study neural
network priors for infinite-dimensional Bayesian inverse
problems. Samples of these priors are outputs of neural networks with random weights from different
distributions. Theoretically, we study finite-width neural networks with
$\alpha$-stable weights, and show that the derivative of the output at
a fixed point is heavy-tailed for Cauchy weights. We also present a
numerical comparison of Cauchy and Gaussian neural networks
priors in Bayesian inverse problems. We conclude that:
(1) Neural network priors are able to capture discontinuities and they
are discretization-independent by design. Conditioning these priors
with observations can require many hundreds of evaluations of the
network and the forward map to compute point approximations of the
posterior. Sampling the posterior distribution requires
tens of thousands of network and forward map evaluations.
(2) Not unexpectedly, the optimization landscapes for optimization
with Bayesian neural networks have multiple local minima even if the
forward map is linear as in the deblurring examples we study.
(3) We observe that upon optimization, most weights of Cauchy neural
networks are close to zero at (local) minimizers. Optimized Cauchy
networks thus have substantial sparsity, which is a consequence of the
regularization resulting from the Cauchy density.
(4) While we only focused on fully connected networks, one could use
block-diagonal weight matrices, thus requiring substantially fewer
weights. We found in numerical experiments (which are not shown here)
that for Cauchy weights, the resulting distributions for the output do
not differ much between block diagonal and fully connected networks.
\section{Introduction}
In this paper we consider a mixture of Boltzmann and McKean-Vlasov type equations defined as follows. Let $\mathcal{P}_{1}({\mathbb{R}}^{d})$ denote the space
of probability measures on ${\mathbb{R}}^{d}$ with a finite first
moment. We consider $\rho \in \mathcal{P}_{1}({\mathbb{R}}^{d})$, an abstract measurable space $(E,\mu )$ and three coefficients $b:{\mathbb{R}}^{d}\times \mathcal{P}_{1}({\mathbb{R}}^{d})\rightarrow {\mathbb{R}}^{d}$, $c:{\mathbb{R}}^{d}\times E\times {\mathbb{R}}^{d}\times \mathcal{P}_{1}({\mathbb{R}}^{d})\rightarrow {\mathbb{R}}^{d}$ and $\gamma :{\mathbb{R}}^{d}\times E\times {\mathbb{R}}^{d}\times \mathcal{P}_{1}({\mathbb{R}}^{d})\rightarrow {\mathbb{R}}_{+}$ that satisfy linear growth and Lipschitz continuity hypotheses (see Assumption $\mathbf{(A)}$ for
precise statements) and we associate the following weak equation on $f_{s,t}\in\cP_1(\R^d)$, $0\le s\le t$:
\begin{align}
\forall \varphi &\in C_{b}^{1}({\mathbb{R}}^{d}), \ \int_{{\mathbb{R}}^{d}}\varphi (x)f_{s,t}(dx)=\int_{{\mathbb{R}}^{d}}\varphi (x)\rho (dx)+\int_{s}^{t}\int_{{\mathbb{R}}^{d}}\left\langle
b(x,f_{s,r}),\nabla \varphi (x)\right\rangle f_{s,r}(dx)dr \label{int1} \\
&+\int_{s}^{t}\int_{{\mathbb{R}}^{d}\times {\mathbb{R}}^{d}}f_{s,r}(dx)f_{s,r}(dv)\int_{E}\left( \varphi
(x+c(v,z,x,f_{s,r}))-\varphi (x)\right) \gamma (v,z,x,f_{s,r})\mu (dz)dr.
\notag
\end{align}
Here, $C_{b}^{1}({\mathbb{R}}^{d})$ denotes the set
of bounded $C^{1}$ functions with bounded gradient. Given a fixed $s\geq 0,$
a solution of this equation is a family $f_{s,t}(dx)\in \mathcal{P}_{1}({\mathbb{R}}^{d})$, $t\geq s$, which verifies (\ref{int1}) for every test function
$\varphi$. If one replaced in the last term of~\eqref{int1} the solution $f_{s,r}(dv)$ by a fixed $g_{s,r}(dv)\in \mathcal{P}_{1}({\mathbb{R}}^{d})$, this would be a McKean-Vlasov
type equation. Moreover, when the coefficients $b,c,\gamma $ do not depend on
the solution $f_{s,r}$ and for specific choices of $b$, $c$ and $\gamma$, this
equation covers variants of the Boltzmann equation (see Villani \cite{[V]}
and Alexandre~\cite{[A]} for the mathematical approach and Cercignani~\cite{[C]} for a presentation of the physical background). In the case of the
homogeneous Boltzmann equation the particles are (and remain) uniformly
distributed in space, so their positions do not appear as variables in the equation.
Then, $x\in {\mathbb{R}}^{d}$ represents the velocity of the typical
particle. In this case, the drift coefficient is simply $b=0$. In the case of the
inhomogeneous Boltzmann equation also known as the Enskog equation (see Arkeryd~\cite{[Ar]}), the positions of the particles matter. One works then on ${\mathbb{R}}^{2d}$, $(x^{1},...,x^{d})$ is the position and $(x^{d+1},...,x^{2d})$ represents the velocity of the typical particle. Then, the drift coefficient will be $b^{i}(x)=x^{i+d},i=1,...,d$ and $b^{i}(x)=0,i=d+1,...,2d$. This is one motivation for considering a general drift term in our abstract formulation.
The probabilistic approach to this type of Boltzmann equation has been
initiated by Tanaka in \cite{[T1]},\cite{[T2]} followed by many others (see
\cite{[BF]},\cite{[DGM]},\cite{[FG1]},\cite{[FMi]} for example). One takes $f_{s,t}(dx)$, $t\geq s$, to be the solution of the equation (\ref{int1}) and
constructs a Poisson point measure $N_{f}$ with state space ${\mathbb{R}}^{d}\times E\times {\mathbb{R}}_{+}$ and with intensity measure
$f_{s,r}(dv)\mu (dz)1_{{\mathbb{R}}_{+}}(u)du1_{(s,\infty )}(r)dr$.
Then, one associates the stochastic equation
\begin{align}
X_{s,t}=X&+\int_{s}^{t}b(X_{s,r},f_{s,r})dr \label{int3}\\
&+\int_{s}^{t}\int_{{\mathbb{R}}^{d}\times E\times {\mathbb{R}}_{+}}c(v,z,X_{s,r-},f_{s,r-})1_{\{u\leq
\gamma (v,z,X_{s,r-},f_{s,r-})\}}N_{f}(dv,dz,du,dr). \notag
\end{align}
Here, the initial value $X$ is a random variable with law $\rho$ which is independent of the
Poisson measure $N_{f}$. Under suitable hypotheses (for specific
coefficients) one proves that the stochastic equation (\ref{int3}) has a
unique solution and that, moreover, the law of $X_{s,t}$ is $f_{s,t}(dx)$. In this
sense, (\ref{int3}) is a probabilistic interpretation of (\ref{int1}) and $(X_{s,t})$ is called the "Boltzmann process" (see \cite{[F1]} for example).
In the present paper, we give an alternative formulation of the problem
presented above. We first recall the definition of the Wasserstein distance~$W_{1}$ on the space~$\mathcal{P}_1(\R^d)$:
\begin{align*}
\mu,\nu \in \mathcal{P}_1(\R^d), \ W_{1}(\mu ,\nu )&=\inf_{\pi \in \Pi (\mu ,\nu )}\int_{{\mathbb{R}}^{d}\times {\mathbb{R}}^{d}}\left\vert x-y\right\vert \pi (dx,dy)\\
&=\sup_{L(f)\leq 1}\left\vert \int_{{\mathbb{R}}^{d}}f(x)\mu (dx)-\int_{{\mathbb{R}}^{d}}f(x)\nu (dx)\right\vert,
\end{align*}
where $\Pi (\mu ,\nu )$ is the set of probability measures on ${\mathbb{R}}^{d}\times {\mathbb{R}}^{d}$ with marginals $\mu $ and $\nu$, and $L(f):=\sup_{x\not = y} \frac{|f(y)-f(x)|}{|y-x|}$ is the Lipschitz constant of $f$. The second equality is a classical consequence of Kantorovich duality, see e.g. Remark 6.5 in~\cite{Villani}. We also introduce $\mathcal{E}_0(\mathcal{P}_{1}({\mathbb{R}}^{d}))$, the metric space of the endomorphisms $\theta :\mathcal{P}_{1}({\mathbb{R}}^{d})\rightarrow \mathcal{P}_{1}({\mathbb{R}}^{d})$ such that $\sup_{\rho \in \mathcal{P}_{1}({\mathbb{R}}^{d})}\frac{ \int_{\R^d}\left\vert x\right\vert \theta(\rho) (dx)}{1+\int_{\R^d}\left\vert x\right\vert \rho (dx)}< \infty$, endowed with the distance
\begin{equation*}
d_{\ast }(\theta ,\theta ^{\prime })=\sup_{\rho \in \mathcal{P}_{1}({\mathbb{R}}^{d})}\frac{W_{1}(\theta (\rho ),\theta ^{\prime }(\rho ))}{1+\int_{\R^d}\left\vert x\right\vert \rho (dx)}.
\end{equation*}
This is a complete metric space (see Lemma~\ref{endo_complete}).
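For intuition, recall that in dimension one the optimal coupling between two empirical measures with the same number of atoms is the monotone one, so $W_1$ is computed by pairing order statistics. A minimal numerical sketch of this classical fact (the helper name is ours):

```python
import numpy as np

def w1_empirical(xs, ys):
    """W1 between two empirical measures (1/n) sum_i delta_{x_i} on R.

    In dimension one the optimal transport plan is monotone, so W1
    reduces to matching order statistics.
    """
    xs = np.sort(np.asarray(xs, dtype=float))
    ys = np.sort(np.asarray(ys, dtype=float))
    assert xs.shape == ys.shape, "equal numbers of atoms are assumed"
    return float(np.mean(np.abs(xs - ys)))
```

In particular $W_{1}(\delta_a,\delta_b)=|a-b|$, consistent with the identity $W_1(\rho,\delta_0)=\int_{\R^d}|x|\rho(dx)$ used in Section~\ref{jump}.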
Then we construct $\Theta _{s,t}\in \mathcal{E}_0(\mathcal{P}_{1}({\mathbb{R}}^{d}))$ in the following way. Given $\rho \in \mathcal{P}_{1}({\mathbb{R}}^{d})$, we construct a Poisson point measure $N_{\rho }$ with state space ${\mathbb{R}}^{d}\times E\times {\mathbb{R}}_{+}$ and with intensity measure
\begin{equation*}
\widehat{N}_{\rho }(dv,dz,du,dr)=\rho (dv)\mu (dz)1_{{\mathbb{R}}_{+}}(u)du1_{(s,\infty )}(r)dr.
\end{equation*}
Moreover, we take a random variable $X$ with law $\rho $ which is
independent of the Poisson measure $N_{\rho }$ and we define
\begin{equation}
X_{s,t}(\rho )=X+b(X,\rho )(t-s)+\int_{s}^{t}\int_{{\mathbb{R}}^{d}\times
E\times {\mathbb{R}}_{+}}c(v,z,X,\rho )1_{\{u\leq \gamma (v,z,X,\rho
)\}}N_{\rho }(dv,dz,du,dr). \label{int4}
\end{equation}
Clearly $X_{s,t}(\rho )$ is the one step Euler scheme for the stochastic
equation (\ref{int3}). We define $\Theta _{s,t}(\rho )$ to be the
probability distribution of $X_{s,t}(\rho ):$
\begin{equation*}
\Theta _{s,t}(\rho )(dv):={\mathbb{P}}(X_{s,t}(\rho )\in dv).
\end{equation*}
Under suitable assumptions, we get that $\Theta _{s,t}$ indeed belongs to~$\mathcal{E}_0(\mathcal{P}_{1}({\mathbb{R}}^{d}))$.
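Concretely, $X_{s,t}(\rho)$ in~\eqref{int4} can be sampled by thinning: candidate jumps arrive with a constant intensity bounding $\gamma$, and a candidate $(v,z,u)$ is kept iff $u\le \gamma(v,z,X,\rho)$. The following sketch assumes a scalar state, $\mu$ uniform on $[0,1]$ with $\mu(E)=1$, and a known bound `gamma_max` on $\gamma$; all names are ours:

```python
import numpy as np

def one_step_euler(x0, sample_rho, b, c, gamma, gamma_max, dt, rng):
    """One-step Euler scheme: drift plus thinned Poisson jumps.

    Candidates with u > gamma_max never contribute, so it suffices to
    draw a Poisson number of candidates with rate gamma_max * dt, each
    carrying v ~ rho, z ~ mu and u ~ U[0, gamma_max].  Coefficients are
    frozen at the initial state x0, as in the one-step scheme (int4).
    """
    x = x0 + b(x0) * dt
    for _ in range(rng.poisson(gamma_max * dt)):
        v = sample_rho(rng)              # v ~ rho(dv)
        z = rng.uniform()                # z ~ mu(dz) (uniform for the sketch)
        u = rng.uniform(0.0, gamma_max)
        if u <= gamma(v, z, x0):         # thinning: accept w.p. gamma/gamma_max
            x += c(v, z, x0)
    return x
```

Averaging many independent draws then gives a particle approximation of $\Theta_{s,t}(\rho)$.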
\begin{definition}
\label{def_flow} A family of endomorphisms $\theta_{s,t}\in \mathcal{E}_0( \mathcal{P}_{1}({\mathbb{R}}^{d}))$ with $0\leq s<t$ is a flow if
\begin{equation*}
\theta _{s,t}=\theta _{r,t}\circ \theta _{s,r}, \text{ for every } 0\leq
s<r<t.
\end{equation*}
It is a stationary flow if $\theta_{s,t}=\theta_{0,t-s}$.
\end{definition}
Our problem is stated as follows: find a flow of endomorphisms such that
\begin{equation}
d_{\ast }(\theta _{s,t},\Theta _{s,t})\leq C(t-s)^{2}. \label{int5}
\end{equation}
We call this $\theta$ a "flow solution" of the equation associated to the
coefficients $b,c,\gamma$ and to the measure~$\mu$. It turns out that
under suitable hypotheses, a flow solution exists and is unique. Moreover the
flow solution is a weak solution of Equation~(\ref{int1}) and admits the
stochastic representation~(\ref{int3}).
This special way to characterize the solution of an equation by means of
the distance in short time to the one step Euler scheme first appears, to our knowledge, in the paper~\cite{[D]} of Davie in the framework of
rough differential equations. Then, Bailleul in~\cite{[B]} and~\cite{[BC]}
coupled this idea with the concept of flows. These ideas appeared in the
framework of rough path integration initiated by Lyons in his seminal paper
\cite{[L]} (we refer to Friz and Victoir~\cite{[FV]} and to Friz and Hairer~\cite{[FH]} for a complete and friendly presentation of this topic).
It is worth mentioning that a central instrument in the rough path theory is
the so-called "sewing lemma", introduced by Feyel and De la Pradelle in \cite{[FP]},\cite{[FPM]} and, at the same time, independently by Gubinelli in
\cite{[G]}. This is a generic and efficient way to treat the convergence
of Euler type schemes. In our paper we give a general abstract variant of
this lemma which plays a crucial part in our approach. In our framework this
lemma reads as follows. We consider an abstract family of endomorphisms $\Theta _{s,t}$ which has the
following two properties. First, we assume the Lipschitz continuity property
\begin{equation}
d_{\ast }(\Theta _{s,t}\circ U ,\Theta _{s,t}\circ \tilde{U} )\leq
C e^{C(t-s)}d_{\ast }(U ,\tilde{U} ),\quad \forall U ,\tilde{U} \in \mathcal{E}_0(\cP_{1}({\mathbb{R}}^{d})). \label{int6}
\end{equation}
Moreover, notice that $\Theta _{s,t}$ does not have the flow property, although we
expect this property to hold for $\theta _{s,t}$. However, we assume that it almost has this
property in the following asymptotic (small time) sense: we assume that for every $s<u<t$,
\begin{equation}
d_{\ast }(\Theta_{s,t},\Theta _{u,t}\circ \Theta _{s,u})\leq C(t-s)^{2}.
\label{int7}
\end{equation}
This is the "sewing property". These two properties essentially allow us to construct, by the sewing lemma, the flow $\theta _{s,t}$ which satisfies (\ref{int5}) as the limit in $d_{\ast }$ of the Euler schemes based on
$\Theta _{s,t}$. More precisely, for a partition $\mathcal{P}=\{s=s_{0}<....<s_{n}=t\}$ one defines the Euler scheme $\Theta _{s,t}^{\mathcal{P}}=\Theta _{s_{n-1},s_{n}}\circ ....\circ \Theta _{s_{0},s_{1}}$
and constructs $\theta _{s,t}$ as a limit, as $\max_{i=1,\dots,n}s_i-s_{i-1}=:\left\vert \mathcal{P}\right\vert \rightarrow 0$, of such Euler schemes. Besides, the following error
estimate holds:
\begin{equation}
d_{\ast }(\Theta _{s,t}^{\mathcal{P}},\theta _{s,t})\leq C\left\vert
\mathcal{P}\right\vert (t-s). \label{int8}
\end{equation}
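The Euler scheme $\Theta^{\mathcal{P}}_{s,t}$ is nothing but the left-to-right composition of the one-step maps along the partition. A generic sketch of this composition (the function names are ours):

```python
def euler_scheme(theta, partition):
    """Return the map Theta^P = Theta_{s_{n-1},s_n} o ... o Theta_{s_0,s_1}.

    `theta(s, t)` is assumed to return the one-step map Theta_{s,t};
    `partition` is the increasing sequence s_0 < ... < s_n.
    """
    steps = [theta(a, b) for a, b in zip(partition[:-1], partition[1:])]

    def composed(x):
        for step in steps:   # earliest step applied first
            x = step(x)
        return x

    return composed
```

For the translation semigroup $\Theta_{s,t}(x)=x+(t-s)$ the scheme is exact; in general the error is of order $\left\vert\mathcal{P}\right\vert(t-s)$ by~\eqref{int8}.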
Section~\ref{abs} presents the abstract framework that allows us to prove Lemma~\ref{Sewing}, a generalized sewing lemma that gives the existence and the uniqueness of a flow satisfying~\eqref{int5}. A pleasant feature is that the
uniqueness of the flow is quite easy to obtain, since it essentially has to coincide with the limit of the Euler schemes. Thus, the flow provides one notable solution of the weak equation~\eqref{int1}: the one which is obtained as the limit of Euler schemes.
In Section~\ref{jump}, we present the framework of our study (i.e. jump type equations) and our main assumptions. We then use this sewing lemma to prove in Theorem~\ref{flow} that the flow $\theta_{s,t}$ defined as the solution of (\ref{int5}) exists and is unique. We further obtain the estimate (\ref{int8}). Besides, we prove that the flow solution $\theta_{s,t}$ constructed
in Theorem \ref{flow} is a weak solution of equation (\ref{int1}) and admits a probabilistic interpretation (see equation (\ref{int3})).
Then, in Section~\ref{particles} we give a numerical approximation scheme for $\theta _{s,t}(\rho )$ based on a particle system. We obtain in Theorem~\ref{theorem_approx} the convergence of the law of any particle towards the flow solution, and give a rate of convergence that is interesting for practical applications. We also obtain a propagation of chaos result for the Wasserstein distance. Last, Section~\ref{sec_Boltz} applies the general results to the homogeneous Boltzmann equation and the inhomogeneous Boltzmann (Enskog) equation. The problem of the uniqueness of the homogeneous Boltzmann equation has been studied in several papers by Fournier~\cite{[F1]}, Desvillettes and Mouhot~\cite{[DM]}, Fournier and Mouhot~\cite{[FMu]}. The study of the Enskog equation is, to our knowledge, much more recent: we mention here contributions concerning existence, uniqueness and particle system approximations by Albeverio, R\"udiger and Sundar~\cite{[ARS]} and Friesen, R\"udiger and Sundar~\cite{[FRS],[FRS1]}. Here, for technical reasons, we only deal with truncated coefficients. The interesting problem of analysing the convergence of the equation with truncated coefficients towards the general equation is not related to our approach based on the sewing lemma and is thus beyond the scope of this paper. We show that the assumptions of Theorem~\ref{flow} are satisfied, which enables us to define the flow $\theta_{s,t}$ that is a weak solution of~\eqref{int1} and admits a probabilistic representation~\eqref{int3}. Interestingly, our approach enables us to study equations that combine interactions of Boltzmann type and mean field interactions of McKean-Vlasov type. To illustrate this, we introduce an alternative equation to the Enskog equation, where we replace the space localization function by a mean-field interaction: collisions are more frequent when the typical particle is in a region with a high density of particles.
Such a problem enters as well in our framework, and we thus obtain the same results for the flow given by this equation.
\section{Abstract sewing lemma}\label{abs}
We consider an abstract set $V$, and we denote by $\mathcal{E}(V)$ the space
of the endomorphisms $\varphi :V\rightarrow V$. Here and in the rest of the paper, we use the multiplicative notation for composition, so that
\begin{equation*}
\varphi \psi(v):=\varphi(\psi(v)).
\end{equation*}
We consider $\mathcal{E}_0(V)\subset \mathcal{E}(V)$ a subgroup of endomorphisms (i.e. $I_d\in \mathcal{E}_0(V)$ and $\varphi,\psi\in \mathcal{E}_0(V)\implies \varphi \psi \in \mathcal{E}_0(V)$). We assume that there is a distance $d_\ast$ on $\mathcal{E}_0(V)$ such that $(\mathcal{E}_0(V),d_\ast)$ is a complete metric space. We assume besides that
\begin{equation}\label{dist_compatibility}
\forall U\in \mathcal{E}_0(V), \exists C(U) \in \R_+,\forall \varphi,\psi\in \mathcal{E}_0(V), \ d_\ast(\varphi U, \psi U)\le C(U) d_\ast(\varphi, \psi),
\end{equation}
and moreover that we can pick the constant $C(U)$ uniformly in the following sense:
\begin{equation}\label{dist_compatibility2}
\forall R>0, \exists \bar{C}_R \in \R_+,\forall U,\varphi,\psi\in \mathcal{E}_0(V), \ d_\ast(U,Id)\le R \implies d_\ast(\varphi U, \psi U)\le \bar{C}_R d_\ast(\varphi, \psi).
\end{equation}
Thanks to~\eqref{dist_compatibility}, we get that $\varphi_n\to \varphi \implies \varphi_n U \to \varphi U$ for any $U\in \mathcal{E}_0(V)$, and~\eqref{dist_compatibility2} ensures that this convergence is uniform on bounded sets.
We now consider a time horizon $T>0$ which will be fixed in the following and a family of endomorphisms $\Theta _{s,t} \in \mathcal{E}_0(V)$ for $0\leq s\le t \le T$, such that $\Theta_{s,t}=Id$ for $s=t$ and
\begin{equation}
D^\Theta(T):= \sup_{0\leq s\le t \le T}d_\ast(\Theta_{s,t} ,Id)<\infty. \tag{$\textbf{H}_{0}$} \label{bis0}
\end{equation}
For a partition $\mathcal{P}=\{s=s_{0}<...<s_{r}=t\}$ of the interval $[s,t]\subset[0,T]$ we
define the corresponding scheme
\begin{equation*}
\Theta_{s,t}^{\mathcal{P}}:=\Theta_{s_{r-1},s_{r}}\dots
\Theta_{s_0,s_1} \in \mathcal{E}_0(V).
\end{equation*}
More generally, for $s<t$ and a partition $\mathcal{P}=\{s_{0}<...<s_{r}\}$
such that $s=s_{i}$ and $t=s_{j}$ with $0\le i<j\le r$, we define
\begin{equation*}
\Theta_{s,t}^{\mathcal{P}}=\Theta _{s_{j-1},s_{j}}\dots
\Theta_{s_i,s_{i+1}}.
\end{equation*}
For $s \in (0,T)$, we define
\begin{equation}\label{endo_Euls}
\mathcal{E}^{\Theta}_{s}=\cup_{r\in[0,s]}\{ \Theta_{r,s}^{\mathcal{P}} : \mathcal{P}=\{r=r_{0}<...<r_{k}=s\} \text{ a partition of }[r,s] \} \subset \mathcal{E}_0(V).
\end{equation}
We assume:
\begin{itemize}
\item (Lipschitz property) There exists $C_{lip}$ such that for any $0\le s \le t<T$, any partition $\mathcal{P}$ of $[s,t]$ and any $U,\tilde{U}\in\mathcal{E}_0(V)$,
\begin{equation}
d_\ast(\Theta_{s,t}^{\mathcal{P}}U,\Theta_{s,t}^{\mathcal{P}}\tilde{U})\le C_{lip}d_\ast(U,\tilde{U}). \tag{$\textbf{H}_{1}$} \label{bis1}
\end{equation}
\item (Sewing property) There exist $C_{sew}$ and $\beta >1$ such that for any $0\le s<u<t <T$ and any $U\in \mathcal{E}^{\Theta}_{s}$,
\begin{equation}
d_{\ast }(\Theta _{s,t} U,\Theta_{u,t}\Theta _{s,u}U )\leq
C_{sew}(t-s)^{\beta }. \tag{$\textbf{H}_{2}$} \label{bis2}
\end{equation}
\end{itemize}
We stress that, for $T$ fixed, the constants $C_{lip}$ and $C_{sew}$ depend neither on $(s,u,t)$ nor on $(U,\tilde{U})$.
A family of endomorphisms $\Theta _{s,t}$ that verifies the
hypotheses $(\mathbf{H}_{0})$, $(\mathbf{H}_{1})$ and $(\mathbf{H}_{2})$ will be called a
"semi-flow". In this general framework the "sewing lemma" can be stated as
follows.
\begin{lemma} (Sewing lemma)
\label{Sewing} Suppose that \eqref{bis0}, \eqref{bis1} and \eqref{bis2} hold. Then, there exists $\theta_{s,t}\in \mathcal{E}_0(V)$, $0\leq s\le t\le T$, which is a flow (see Definition~\ref{def_flow}) and satisfies
\begin{align}
& d_{\ast }(\theta _{s,t},\Theta_{s,t})\leq 2^{\beta }C_{lip}C_{sew}\zeta
(\beta )(t-s)^{\beta }, \label{bis4}
\end{align}
with $\zeta(\beta)=\sum_{n=1}^\infty \frac{1}{n^\beta}$.
Moreover, it satisfies the Lipschitz property
\begin{equation}
d_\ast(\theta_{s,t}U ,\theta_{s,t}\tilde{U} )\leq
C_{lip}d_\ast(U,\tilde{U}) \text{ for } U,\tilde{U} \in \mathcal{E}_0(V). \label{bis4'}
\end{equation}
Besides, we have the approximation estimate
\begin{equation}
d_{\ast }(\Theta _{s,t}^{\mathcal{P}},\theta _{s,t})\leq 2^{\beta
}C_{lip}^{2}C_{sew}\zeta (\beta )(t-s)\left\vert \mathcal{P}\right\vert
^{\beta -1} \text{ for } \mathcal{P} \text{ partition of } [s,t] ,\label{bis5}
\end{equation}
with $\left\vert \mathcal{P}\right\vert :=\max_{i=0,...,r-1}(s_{i+1}-s_{i}).$
Furthermore, this is the unique flow such that $d_{\ast }(\theta _{s,t},\Theta _{s,t} )\leq C (t-s) h(t-s)$ for some constant $C>0$ and nondecreasing function $h:\R_+\to \R_+$ such that $\lim_{t\to 0}h(t)=0$.
\end{lemma}
\begin{proof}
We first prove that for any $U\in \mathcal{E}_s^\Theta$,
\begin{equation}
d_{\ast }(\Theta_{s,t}^{\mathcal{P}} U ,\Theta_{s,t} U )\leq
2^{\beta}C_{lip}C_{sew}\zeta (\beta )(t-s)^{\beta }. \label{bis6}
\end{equation}
We consider $\mathcal{P}=\{s=s_{0}<...<s_{r}=t\}$ and prove~\eqref{bis6} by induction on~$r$. For $r=1$, the
inequality is obvious. Let $r\ge 2$. For a fixed $i\in\{1,\dots, r-1\}$, we
denote by $\mathcal{P}_{i}$ the partition in which we have canceled $s_{i}$.
Then, we have
\begin{equation*}
d_{\ast }(\Theta_{s,t}^{\mathcal{P}}U,\Theta_{s,t}^{\mathcal{P}_{i}}U)=d_{\ast }(\Theta_{s_{i+1},t}^{\mathcal{P}}ZU,\Theta_{s_{i+1},t}^{\mathcal{P}}Z^{\prime }U)
\end{equation*}
with
\begin{equation*}
Y=\Theta_{s,s_{i-1}}^{\mathcal{P}}, \quad Z=\Theta _{s_{i},s_{i+1}}\Theta_{s_{i-1},s_{i}}Y \text{ and } Z^{\prime }=\Theta _{s_{i-1},s_{i+1}}Y.
\end{equation*}
Using (\ref{bis1}) first and (\ref{bis2}) next, we obtain
\begin{align}
d_{\ast }(\Theta_{s,t}^{\mathcal{P}}U,\Theta_{s,t}^{\mathcal{P}_{i}}U)& \leq
C_{lip}d_{\ast }(ZU,Z^{\prime }U)=C_{lip}d_{\ast}(\Theta_{s_{i-1},s_{i+1}}YU,\Theta _{s_{i},s_{i+1}}\Theta _{s_{i-1},s_{i}}YU)
\label{bis6'} \\
& \leq C_{lip}C_{sew}(s_{i+1}-s_{i-1})^{\beta }. \notag
\end{align}
We now give the sewing argument. We choose $i_{0}\in \{1,\dots, r-1\}$ such
that
\begin{equation*}
s_{i_{0}+1}-s_{i_{0}-1}\leq \frac{2}{r-1}(t-s).
\end{equation*}
Such an $i_{0}$ exists: otherwise, we would have $2(t-s)\geq \sum_{i=1}^{r-1}(s_{i+1}-s_{i-1})>2(t-s)$, a contradiction. Using the inequality (\ref{bis6'}) for this $i_{0}$, we obtain
\begin{equation*}
d_{\ast }(\Theta_{s,t}^{\mathcal{P}}U,\Theta_{s,t}^{\mathcal{P}_{i_{0}}}U)\leq \frac{2^{\beta }C_{lip}C_{sew}}{(r-1)^{\beta }}(t-s)^{\beta }.
\end{equation*}
We iterate this procedure up to the trivial partition $\{s<t\}$, and we
obtain (\ref{bis6}).
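Spelling out this iteration: at the stage where $k$ interior points remain, removing a well-chosen point costs at most $2^{\beta }C_{lip}C_{sew}(t-s)^{\beta }/k^{\beta }$, so that

```latex
\begin{equation*}
d_{\ast }(\Theta_{s,t}^{\mathcal{P}}U,\Theta_{s,t}U)
\leq 2^{\beta }C_{lip}C_{sew}(t-s)^{\beta }\sum_{k=1}^{r-1}\frac{1}{k^{\beta }}
\leq 2^{\beta }C_{lip}C_{sew}\zeta (\beta )(t-s)^{\beta },
\end{equation*}
```

which is exactly~\eqref{bis6}.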
We are now in a position to define $\theta_{s,t}$ as the limit of $\Theta_{s,t}^{\mathcal{P}}$
when $\left\vert \mathcal{P}\right\vert $ goes to zero. Since~$(\mathcal{E}_0(V),d_\ast)$ is
complete, it is sufficient to check the Cauchy criterion:
\begin{equation*}
\lim_{\left\vert \mathcal{P}\right\vert \vee \left\vert \overline{\mathcal{P}}\right\vert \rightarrow 0}d_{\ast }(\Theta _{s,t}^{\mathcal{P}},\Theta_{s,t}^{\overline{\mathcal{P}}})=0.
\end{equation*}
Let $\mathcal{P}\cup \overline{\mathcal{P}}$ denote the partition of $[s,t]$
obtained by merging both partitions. Since $d_{\ast }(\Theta _{s,t}^{\mathcal{P}},\Theta _{s,t}^{\overline{\mathcal{P}}})\leq d_{\ast }(\Theta_{s,t}^{\mathcal{P}},\Theta _{s,t}^{\mathcal{P}\cup \overline{\mathcal{P}}})+d_{\ast }(\Theta _{s,t}^{\overline{\mathcal{P}}},\Theta _{s,t}^{\mathcal{P}\cup \overline{\mathcal{P}}})$, we may assume without loss of generality
that $\overline{\mathcal{P}}$ is a refinement of the partition $\mathcal{P}$. Thus, we can write $\mathcal{P}=\{s=s_{0}<...<s_{r}=t\}$ and $\overline{\mathcal{P}}=\cup _{i=1}^{r}\mathcal{P}^{i}$, where $\mathcal{P}^{i}$ is a
partition of $[s_{i-1},s_{i}]$. We now introduce for $l\in \{0,\dots ,r\}$
the partition $\overline{\mathcal{P}}_{l}$ of $[s,t]$ defined by
\begin{equation*}
\overline{\mathcal{P}}_{0}=\mathcal{P} \text{ and } \overline{\mathcal{P}}_{l}=\left( \cup _{i=1}^{l}\mathcal{P}^{i}\right) \cup
\mathcal{P} \text{ for } l\ge 1.
\end{equation*}
So, $\overline{\mathcal{P}}_{l}$ is the partition in which we refine the
intervals $[s_{i-1},s_{i}]$, $i=1,\dots ,l$, according to $\overline{\mathcal{P}}$ but we do not refine the intervals $[s_{i-1},s_{i}]$, $i=l+1,\dots ,r$ (we keep them unchanged, as they are in $\mathcal{P}$). Thus, we have $\overline{\mathcal{P}}_{0}=\mathcal{P}$ and $\overline{\mathcal{P}}_{r}=\overline{\mathcal{P}}$, and we obtain by using the triangle inequality:
\begin{equation*}
d_{\ast }(\Theta _{s,t}^{\mathcal{P}},\Theta _{s,t}^{\overline{\mathcal{P}}})\leq \sum_{l=0}^{r-1}d_{\ast }(\Theta _{s,t}^{\overline{\mathcal{P}}_{l+1}},\Theta _{s,t}^{\overline{\mathcal{P}}_{l}}).
\end{equation*}
We denote $\varphi _{l}=\Theta _{s_{l+1},t}^{\mathcal{P}}$, $\psi_{l}=\Theta _{s,s_{l}}^{\overline{\mathcal{P}}}$ and have
\begin{equation*}
d_{\ast }(\Theta _{s,t}^{\overline{\mathcal{P}}_{l+1}},\Theta _{s,t}^{\overline{\mathcal{P}}_{l}})=d_{\ast }(\varphi _{l}\Theta _{s_{l},s_{l+1}}^{\mathcal{P}^{l+1}}\psi _{l},\varphi _{l}\Theta _{s_{l},s_{l+1}}\psi _{l}).
\end{equation*}
Using first (\ref{bis1}) and then (\ref{bis6}), we obtain
\begin{equation*}
d_{\ast }(\Theta _{s,t}^{\overline{\mathcal{P}}_{l+1}},\Theta _{s,t}^{\overline{\mathcal{P}}_{l}})\leq C_{lip}d_{\ast }(\Theta_{s_{l},s_{l+1}}^{\mathcal{P}^{l+1}}\psi _{l},\Theta _{s_{l},s_{l+1}}\psi_{l})\leq C(s_{l+1}-s_{l})^{\beta },
\end{equation*}
with $C=2^{\beta }C_{lip}^{2}C_{sew}\zeta (\beta )$. This leads to
\begin{equation*}
d_{\ast }(\Theta _{s,t}^{\mathcal{P}},\Theta _{s,t}^{\overline{\mathcal{P}}})\leq C\sum_{l=0}^{r-1}(s_{l+1}-s_{l})^{\beta }\le C(t-s)\left\vert \mathcal{P}\right\vert ^{\beta -1}\underset{|\mathcal{P}|\rightarrow 0}{\longrightarrow }0.
\end{equation*}
This shows the existence of $\theta_{s,t}$ together with~\eqref{bis5}. We then easily get~\eqref{bis4'}: sending $|\cP|\to 0$, the Lipschitz property of~$\theta_{s,t}$ follows from~\eqref{bis1}. Moreover, letting $|\cP|\to 0$ in~\eqref{bis6} gives $d_\ast(\theta_{s,t}U,\Theta_{s,t}U)\le 2^{\beta}C_{lip}C_{sew}\zeta (\beta )(t-s)^{\beta }$, and~\eqref{bis4} follows by taking $U=Id$.
We now prove the flow property. Let $s,u,t$ be such that $0\leq s<u<t\leq T$, $\cP_1$ and $\cP_2$ be respectively a partition of $[s,u]$ and $[u,t]$. We have by using the triangle inequality,~\eqref{bis1} and~\eqref{dist_compatibility}
\begin{align*}
d_{\ast}(\Theta^{\cP_2}_{u,t} \Theta^{\cP_1}_{s,u} , \theta_{u,t}\theta_{s,u})&\le d_{\ast}(\Theta^{\cP_2}_{u,t} \Theta^{\cP_1}_{s,u} , \Theta^{\cP_2}_{u,t} \theta_{s,u})+ d_{\ast}(\Theta^{\cP_2}_{u,t} \theta_{s,u} , \theta_{u,t}\theta_{s,u}) \\
&\le C_{lip}d_{\ast}( \Theta^{\cP_1}_{s,u} , \theta_{s,u})+C(\theta_{s,u})d_{\ast}(\Theta^{\cP_2}_{u,t} , \theta_{u,t} ) \to 0,
\end{align*}
as $|\cP_1|\vee |\cP_2|\to 0$. The concatenation $\cP_1\cup\cP_2$ is a partition of $[s,t]$ and thus $d_{\ast}(\Theta^{\cP_2}_{u,t} \Theta^{\cP_1}_{s,u},\theta_{s,t})\to 0$. We get $d_{\ast}(\theta_{s,t} , \theta_{u,t}\theta_{s,u})=0$ and so $\theta_{s,t} = \theta_{u,t}\theta_{s,u}$.
We finally prove the uniqueness. Let $\tilde{\theta}_{s,t}\in \mathcal{E}_0(V)$, $0\le s\le t \le T$, be a family satisfying the flow property $\tilde{\theta}_{s,t} = \tilde{\theta}_{u,t}\tilde{\theta}_{s,u}$ and the estimate $d_{\ast}(\tilde{\theta}_{s,t},\Theta_{s,t})\le C(t-s)h(t-s)$. We consider a partition $\mathcal{P}=\{s=s_{0}<...<s_{r}=t\}$ and have, by using first the flow property and the triangle inequality, and second the Lipschitz property:
\begin{align*}
d_\ast(\tilde{\theta}_{s,t},\Theta^{\cP}_{s,t})&\le \sum_{i=0}^{r-1}d_\ast(\Theta^{\cP}_{s_i,t}\tilde{\theta}_{s,s_i},\Theta^{\cP}_{s_{i+1},t}\tilde{\theta}_{s,s_{i+1}})\\&
\le C\sum_{i=0}^{r-1}d_\ast(\Theta_{s_i,s_{i+1}} \tilde{\theta}_{s,s_i},\tilde{\theta}_{s_i,s_{i+1}}\tilde{\theta}_{s,s_i}).
\end{align*}
Now, we observe that $d_\ast(\tilde{\theta}_{s,s_i},I_d)\le C T h(T) + d_\ast(\Theta_{s,s_i},I_d)\le C T h(T) +D^\Theta(T)=:R(T)$ by using~\eqref{bis0}. Thanks to the uniform bound~\eqref{dist_compatibility2}, we get $d_\ast(\Theta_{s_i,s_{i+1}} \tilde{\theta}_{s,s_i},\tilde{\theta}_{s_i,s_{i+1}}\tilde{\theta}_{s,s_i})\le \bar{C}_{ R(T)} d_\ast(\Theta_{s_i,s_{i+1}} ,\tilde{\theta}_{s_i,s_{i+1}})$ and thus
$$ d_\ast(\tilde{\theta}_{s,t},\Theta^{\cP}_{s,t})\le C^2 \bar{C}_{R(T)}\sum_{i=0}^{r-1}(s_{i+1}-s_i)h(s_{i+1}-s_i) \le C^2 \bar{C}_{ R(T)}(t-s)h(|\cP|).$$
This yields $\theta_{s,t}=\tilde{\theta}_{s,t}$ by letting $|\cP|\to 0$.
\end{proof}
\begin{remark}
The hypothesis (\ref{bis2}) may be weakened by replacing $(t-s)^{\beta }$ by $(t-s) (1 \wedge \left\vert \ln (t-s)\right\vert ^{-\rho })$ for some $\rho >1$. The
proof is exactly the same, using the fact that the series $\sum_{n} \frac 1{n(\ln n)^{\rho }}$ converges iff $\rho >1$. But in this case, the estimates in (\ref{bis4}) and (\ref{bis5}) are
less explicit. So, we keep (\ref{bis2}), which is verified in our framework of Section~\ref{jump}
with $\beta =2$.
\end{remark}
\begin{remark}\label{Rk_S} We observe from the proof of Lemma~\ref{Sewing} that the uniform bound~\eqref{dist_compatibility2} on the distance and Hypothesis~\eqref{bis0} are only needed for the uniqueness result of Lemma~\ref{Sewing}.
\end{remark}
We now present a typical setting that falls into our framework.
We consider a complete metric space $(V,d)$ and a fixed element $v_0\in V$.
We define
\begin{equation}\label{def_E0_metric}\mathcal{E}_0(V)=\{\Theta\in \mathcal{E}(V): \sup_{v \in V}\frac{d(v_0,\Theta(v))}{1+d(v_0,v)} <\infty \},
\end{equation}
the set of endomorphisms with sublinear growth. We endow $\mathcal{E}_0(V)$ with the following distance
\begin{equation}\label{def_d*_metric}
d_{\ast }(\Theta,\overline{\Theta})=\sup_{v \in V}\frac{d(\Theta(v),\overline{\Theta}(v))}{1+d(v_0,v)}, \ \Theta,\overline{\Theta}\in\mathcal{E}_0(V) .
\end{equation}
It is clear from the definition of~$\mathcal{E}_0(V)$ that $d_{\ast }(\Theta,\overline{\Theta})<\infty$, and the distance properties of $d_\ast$ are directly inherited from those of~$d$. We also remark that if $\Theta_1\in\mathcal{E}_0(V)$ and $\Theta_2\in \mathcal{E}(V)$ is such that $d_\ast(\Theta_1,\Theta_2)<\infty$, then $\Theta_2\in \mathcal{E}_0(V)$. Besides, we easily check that $Id\in\mathcal{E}_0(V)$ and that $\mathcal{E}_0(V)$ is stable under composition: for $\Theta_1,\Theta_2\in\mathcal{E}_0(V)$,
$$ \sup_{v \in V}\frac{d(v_0,\Theta_2(\Theta_1(v)))}{1+d(v_0,v)}\le \sup_{v \in V}\frac{d(v_0,\Theta_2(\Theta_1(v)))}{1+d(v_0,\Theta_1(v))} \sup_{v \in V}\frac{1+d(v_0,\Theta_1(v))}{1+d(v_0,v)} <\infty.$$
\begin{lemma}
\label{endo_complete} $(\mathcal{E}_0(V),d_{\ast })$ defined by~\eqref{def_E0_metric} and~\eqref{def_d*_metric} is a complete metric space. Besides, \eqref{dist_compatibility2} holds.
\end{lemma}
\begin{proof}
Let $\theta_n\in \mathcal{E}_0(V)$ be a sequence such that $\sup_{p,q\ge n}d_{\ast }(\theta_p ,\theta_q)\underset{n\to \infty}\to 0$. Then, for any $v \in V$, $(\theta_n(v))_n$ is a Cauchy sequence in the complete space $(V,d)$, so there exists $\theta_\infty(v)\in V$ such that $d(\theta_n (v ),\theta_\infty(v ))\to 0$. Letting $q\to \infty$ in $d(\theta_n (v ),\theta_q(v ))\le \sup_{q'\ge n}d_{\ast}(\theta_n,\theta_{q'})(1+d(v_0,v))$ gives $d_{\ast }(\theta_n ,\theta_\infty)\le \sup_{q\ge n} d_{\ast }(\theta_n ,\theta_q) \to 0$; in particular, $\theta_\infty\in \mathcal{E}_0(V)$.
We now consider $U,\varphi,\psi \in \mathcal{E}_0(V)$ and have
\begin{align*}
d_\ast(\varphi U, \psi U)&= \sup_{v\in V} \frac{d(\varphi (U(v)),\psi(U(v)))}{1+d(v_0,U(v))} \frac{1+d(v_0,U(v))}{1+d(v_0,v)}\\
&\le C(U)d_\ast(\varphi , \psi ),
\end{align*}
with $C(U)=\sup_{v\in V} \frac{1+d(v_0,U(v))}{1+d(v_0,v)}<\infty$ since $U\in \mathcal{E}_0(V)$. Using that $d(v_0,U(v))\le d(v_0,v)+d(v,U(v))$, we get $C(U)\le 1+d_\ast(U,Id)$ and we therefore obtain \eqref{dist_compatibility2}.
\end{proof}
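As a sanity check, the bound $C(U)\le 1+d_\ast(U,Id)$ from the proof can be tested numerically in the simplest instance $V=\mathbb{R}$, $d(x,y)=|x-y|$, $v_0=0$, with the suprema crudely approximated on a grid; the whole setup below is illustrative:

```python
import numpy as np

grid = np.linspace(-50.0, 50.0, 2001)   # crude discretisation of V = R, v_0 = 0

def d_star(f, g):
    """Approximate sup_v d(f(v), g(v)) / (1 + d(0, v)) on the grid."""
    return float(np.max(np.abs(f(grid) - g(grid)) / (1.0 + np.abs(grid))))

U = lambda v: 2.0 * v + 1.0
phi, psi = np.sin, np.cos
ident = lambda v: v

# d_*(phi U, psi U) versus (1 + d_*(U, Id)) d_*(phi, psi)
lhs = d_star(lambda v: phi(U(v)), lambda v: psi(U(v)))
rhs = (1.0 + d_star(U, ident)) * d_star(phi, psi)
```

Here one finds `lhs <= rhs`, in accordance with~\eqref{dist_compatibility2}.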
Last, it is interesting to discuss in this setting the properties~\eqref{bis1} and~\eqref{bis2}. We first have:
\begin{equation}\label{equivH1}
\eqref{bis1}\iff \forall v_1,v_2\in V, \ d(\Theta^{\cP}_{s,t}(v_1),\Theta^{\cP}_{s,t}(v_2))\le C_{lip}d(v_1,v_2).
\end{equation}
To get the direct implication, we take $U(v)=v_1$, $\tilde{U}(v)=v_2$ and observe that $d_\ast(U,\tilde{U})=\sup_{v\in V} \frac{d(v_1,v_2)}{1+d(v_0,v)}=d(v_1,v_2)$. The other implication is clear from~\eqref{def_d*_metric}.
Besides, we have
\begin{align}
&\exists \tilde{C}_{sew}:V\to \R_+ ,\ d(\Theta_{s,t}(v),\Theta_{u,t}(\Theta_{s,u}(v))) \le \tilde{C}_{sew}(v)(t-s)^\beta \text{ and } \sup_{v\in V} \frac{\tilde{C}_{sew}(v)}{1+d(v_0,v)}<\infty, \notag \\
& \text{ and } \bar{D}:=\sup_{0\leq s \le T} \sup_{U\in \mathcal{E}_s^\Theta} d_\ast(U ,Id)<\infty \implies \eqref{bis2}. \label{implic_H2}
\end{align}
In fact, we then have by~\eqref{dist_compatibility2}
$$ d_\ast(\Theta_{s,t}U,\Theta_{u,t}\Theta_{s,u}U)\le \bar{C}_{\bar{D}} \sup_{v\in V} \frac{\tilde{C}_{sew}(v)}{1+d(v_0,v)} (t-s)^\beta,$$
which gives~\eqref{bis2} with $C_{sew}=\bar{C}_{\bar{D}} \sup_{v\in V} \frac{\tilde{C}_{sew}(v)}{1+d(v_0,v)}$.
\section{Jump type equations\label{jump}}
\subsection{Framework and assumptions}
We recall that $\mathcal{P}_{1}({\mathbb{R}}^{d})$ is the space of probability
measures $\nu $ on ${\mathbb{R}}^{d}$ such that $\int \left\vert x\right\vert d\nu (x)<\infty$, and $W_{1}$ is the $1$-Wasserstein
distance on $\mathcal{P}_{1}({\mathbb{R}}^{d})$ defined by
\begin{equation*}
W_{1}(\mu ,\nu )=\inf_{\pi }\int_{{\mathbb{R}}^{d}}\left\vert x-y\right\vert
\pi (dx,dy)
\end{equation*}
with the infimum taken over all the probability measures $\pi $ on ${\mathbb{R}}^{d}\times {\mathbb{R}}^{d}$ with marginals $\mu $ and~$\nu$. We will
work with endomorphisms $\Theta :\mathcal{P}_{1}({\mathbb{R}}^{d})\rightarrow \mathcal{P}_{1}({\mathbb{R}}^{d})$ and we denote by
$\mathcal{E}(\mathcal{P}_{1}({\mathbb{R}}^{d}))$ the space of these endomorphisms
and
$$\mathcal{E}_0(\mathcal{P}_{1}({\mathbb{R}}^{d}))=\{ \Theta \in \mathcal{E}(\mathcal{P}_{1}({\mathbb{R}}^{d})) : \sup_{\rho \in \mathcal{P}_{1}({\mathbb{R}}^{d})}\frac{ \int \left\vert v\right\vert \Theta(\rho) (dv)}{1+\int \left\vert v\right\vert \rho (dv)}<\infty \}.$$
On
this space we define the distance
\begin{equation*}
d_{\ast }(\Theta ,\overline{\Theta })=\sup_{\rho \in \mathcal{P}_{1}({\mathbb{R}}^{d})}\frac{W_{1}(\Theta (\rho ),\overline{\Theta }(\rho ))}{1+\int_{\R^d} \left\vert v\right\vert \rho (dv)}.
\end{equation*}
We are precisely in the framework presented at the end of Section~\ref{abs} with $V=\cP_1(\R^d)$, $d=W_1$ and $v_0=\delta_0$ the Dirac mass at~$0$, since $W_1(\rho,\delta_0)=\int_{\R^d} \left\vert v\right\vert \rho (dv)$. It is well known that $(\cP_1(\R^d),W_1)$ is a complete metric space (see e.g. Bolley~\cite{Bolley}) and we get from Lemma~\ref{endo_complete} that $(\mathcal{E}_0(\mathcal{P}_{1}({\mathbb{R}}^{d})),d_{\ast })$ is a complete space that satisfies~\eqref{dist_compatibility2}, so we can apply the results of
Section~\ref{abs}. Finally, for a random variable $X$ we denote by $\mathcal{L}(X)$ the probability law of $X$.
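For intuition, in dimension one the coupling realizing $W_1$ between two empirical measures with the same number of atoms is the monotone (sorted) coupling. The following minimal numerical sketch is purely illustrative and not part of the text; the function name is ours.

```python
import numpy as np

def w1_empirical(xs, ys):
    """W1 between two empirical measures on R with the same number
    of atoms: the optimal coupling is the monotone one, so sort both
    samples and average the coordinate-wise gaps."""
    xs = np.sort(np.asarray(xs, dtype=float))
    ys = np.sort(np.asarray(ys, dtype=float))
    return float(np.mean(np.abs(xs - ys)))

print(w1_empirical([0.0, 1.0, 2.0], [0.5, 1.5, 2.5]))  # 0.5
# W1(rho, delta_0) is the first moment of rho:
print(w1_empirical([1.0, -2.0, 3.0], [0.0, 0.0, 0.0]))  # 2.0
```

The second call illustrates the identity $W_1(\rho,\delta_0)=\int_{\R^d}|v|\rho(dv)$ used above.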
We now define, for $s\leq t$, the semi-flow $\Theta _{s,t}$ in the following way. We consider a
measurable space $(E,\mu )$ and three functions $b:{\mathbb{R}}^{d}\times
\mathcal{P}_{1}({\mathbb{R}}^{d})\rightarrow {\mathbb{R}}^{d}$, $c:{\mathbb{R}}^{d}\times E\times {\mathbb{R}
^{d}\times
\mathcal{P}_{1}({\mathbb{R}}^{d})\rightarrow {\mathbb{R}}^{d}$ and $\gamma :{\mathbb{R}}^{d}\times
E\times {\mathbb{R}}^{d}\times \mathcal{P}_{1}({\mathbb{R}}^{d})\rightarrow
{\mathbb{R}}_{+}$ (we give the precise hypotheses below). We denote
\begin{equation}
Q(v,z,u,x,\nu )=c(v,z,x,\nu )1_{\{u\leq \gamma (v,z,x,\nu )\}}, \text{ for } u\ge 0. \label{h5}
\end{equation}
Then, for a probability measure $\rho \in \mathcal{P}_{1}({\mathbb{R}}^{d})$
we take $X\in L^{1}(\Omega )$ with law $\mathcal{L}(X)=\rho $ and we define
\begin{equation}
X_{s,t}(X)=X+b(X,\rho )(t-s)+\int_{s}^{t}\int_{{\mathbb{R}}^{d}\times
E\times {\mathbb{R}}_{+}}Q(v,z,u,X,\rho )N_{\rho }(dv,dz,du,dr). \label{W2}
\end{equation}
Here, $N_{\rho }$ is a Poisson point measure with intensity measure
\begin{equation}
\widehat{N}_{\rho }(dv,dz,du,dr)=\rho (dv)\mu (dz)dudr. \label{W3}
\end{equation}
We stress that the law of $X$ appears in the intensity of the point process.
Moreover, we define
\begin{equation}
\Theta _{s,t}(\rho )=\mathcal{L}(X_{s,t}(X)). \label{W3'}
\end{equation}
So $\Theta _{s,t}(\rho )$ is the law of the solution whose initial value
has distribution $\rho $. Our aim is to construct the flow
corresponding to the semi-flow $\Theta _{s,t}$ by using the sewing
lemma~\ref{Sewing}.
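To fix ideas, one step of~\eqref{W2} can be mimicked on a particle system: $\rho$ is replaced by the empirical measure of $N$ particles, and the $du$-integral is simulated by thinning. The sketch below is illustrative only and assumes, for the illustration, that $\gamma\le\Gamma$ is bounded; all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_step(X, dt, b, c, gamma, Gamma, mu_sampler):
    """One step X -> X_{s,s+dt}(X) of (W2), with rho = L(X) replaced
    by the empirical measure of the particle array X. Assuming
    gamma <= Gamma (bounded rate, for this sketch only), jump
    proposals arrive at rate Gamma and a proposal (v, z, u) is kept
    when u <= gamma(v, z, x, rho): thinning of the du-integral."""
    N = len(X)
    rho = X.copy()                      # frozen empirical law L(X)
    Y = X + b(X, rho) * dt              # drift part b(X, rho)(t - s)
    for i in range(N):
        for _ in range(rng.poisson(Gamma * dt)):   # proposals on [s, s+dt]
            v = rho[rng.integers(N)]               # partner v ~ rho
            z = mu_sampler(rng)                    # mark z ~ mu
            u = rng.uniform(0.0, Gamma)            # thinning variable
            if u <= gamma(v, z, X[i], rho):
                Y[i] += c(v, z, X[i], rho)         # accepted jump Q = c 1_{u <= gamma}
    return Y
```

Note that both the drift and the jump coefficient are evaluated at the frozen initial state, exactly as in~\eqref{W2}.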
Before going on, we make precise the hypotheses that we require on the coefficients. We
make the following three assumptions:
\begin{itemize}
\item The drift coefficient $b$ is globally Lipschitz continuous: we assume
that
\begin{equation}
\exists L_{b}\in {\mathbb{R}}_{+}^{\ast },\ \forall x,y\in {\mathbb{R}}^{d},\ \forall \nu,\rho \in \mathcal{P}_{1}({\mathbb{R}}^{d}),\quad \left\vert b(x,\nu )-b(y,\rho
)\right\vert \leq L_{b}(\left\vert x-y\right\vert +W_{1}(\nu ,\rho )).
\tag{${\textbf{A}}_{1}$} \label{lipb}
\end{equation}
\item For every $(v,x)\in {\mathbb{R}}^{d}\times {\mathbb{R}}^{d}$ there
exists a function $Q_{v,x}:{\mathbb{R}}^{d}\times E\times \R_{+}\times
{\mathbb{R}}^{d}\times \mathcal{P}_{1}({\mathbb{R}}^{d})\rightarrow {\mathbb{R}}^{d}$ such that for every $v,x,v^{\prime },x^{\prime }\in {\mathbb{R}}^{d},\rho \in \mathcal{P}_{1}({\mathbb{R}}^{d})$ and for every $\varphi \in
C_{b}^{0}({\mathbb{R}}^{d})$,
\begin{equation}
\int_{E\times {\mathbb{R}}_{+}}\varphi (Q(v^{\prime },z,u,x^{\prime },\rho
))\mu (dz)du=\int_{E\times {\mathbb{R}}_{+}}\varphi (Q_{v,x}(v^{\prime
},z,u,x^{\prime },\rho ))\mu (dz)du.
\tag{${\textbf{A}}_{2}$} \label{h6e}
\end{equation}
We assume that $(v,x,v^{\prime },z,u,x^{\prime },\rho )\rightarrow Q_{v,x}(v^{\prime
},z,u,x^{\prime },\rho )$ is jointly measurable.
\item We assume that $\int |Q(0,z,u,0,\delta _{0})|\mu (dz)du<\infty $
and that there exists a constant $L_{\mu }(c,\gamma )$ such that for every
$v_{1},x_{1},v_{2},x_{2}\in {\mathbb{R}}^{d}$ and $\rho _{1},\rho _{2}\in
\mathcal{P}_{1}({\mathbb{R}}^{d})$,
\begin{align}
&\int_{E\times {\mathbb{R}}_{+}}\left\vert Q(v_{1},z,u,x_{1},\rho
_{1})-Q_{v_{1},x_{1}}(v_{2},z,u,x_{2},\rho _{2})\right\vert \mu (dz)du \tag{$\textbf{A}_3$} \label{h6d} \\
&\leq L_{\mu }(c,\gamma )(\left\vert x_{1}-x_{2}\right\vert +\left\vert
v_{1}-v_{2}\right\vert +W_{1}(\rho _{1},\rho _{2})). \notag
\end{align}
\end{itemize}
We will simply say that $(\mathbf{A})$ is satisfied when these three
Assumptions~\eqref{lipb},~\eqref{h6e}, and~\eqref{h6d} are fulfilled.
\begin{remark}
The (pseudo-)Lipschitz condition~\eqref{h6d} may look surprising at first sight. In some cases, such as the two-dimensional Boltzmann equation, one may simply take $Q_{v,x}=Q$ since we have a standard Lipschitz property. For some other interesting models, such as the three-dimensional Boltzmann equation, we need to use a non-trivial transformation $Q_{v,x}$ satisfying~\eqref{h6e}. The difficulty comes from the fact that there is no smooth parametrisation of the unit sphere in ${\mathbb{R}}^{3}$, and Tanaka~\cite{[T1]} was able to get around this difficulty by using such a transformation.
\end{remark}
From~\eqref{lipb} and~\eqref{h6d}, we easily deduce the following sublinear
growth estimates for any $x,v\in {\mathbb{R}}^{d}$ and $\rho \in \mathcal{P}_{1}({\mathbb{R}}^{d})$:
\begin{align}
&\left\vert b(x,\rho )\right\vert \leq \left\vert b(0,\delta _{0})\right\vert
+L_{b}(\left\vert x\right\vert +W_{1}(\rho ,\delta _{0}))= \left\vert b(0,\delta _{0})\right\vert
+L_{b}\left(\left\vert x\right\vert +\int_{\R^d}|v|\rho(dv) \right), \label{growthb}
\\
& \int_{E\times {\mathbb{R}}_{+}}\left\vert Q(v,z,u,x,\rho
)\right\vert \mu (dz)du\leq C_{\mu }(c,\gamma )(1+\left\vert v\right\vert
+\left\vert x\right\vert +W_{1}(\rho ,\delta _{0})), \label{h6a}
\end{align}
with $C_{\mu }(c,\gamma )=L_{\mu }(c,\gamma )\vee \left( \int
|Q(0,z,u,0,\delta _{0})|\mu (dz)du\right) $. In particular, (\ref{growthb})
and (\ref{h6a}) imply
\begin{equation}
{\mathbb{E}}(\left\vert X_{s,t}(X)-X\right\vert )\leq \left[ |b(0,\delta_0)|+C_{\mu
}(c,\gamma )+(2L_{b}+3C_{\mu }(c,\gamma ))\int \left\vert v\right\vert \rho
(dv)\right] (t-s). \label{h6b}
\end{equation}
This ensures that $\Theta _{s,t}(\rho )\in \mathcal{P}_{1}({\mathbb{R}}^{d})$ for $\rho \in \mathcal{P}_{1}({\mathbb{R}}^{d})$ and that $\Theta_{s,t}\in \mathcal{E}_0(\mathcal{P}_{1}({\mathbb{R}^d}))$.
\subsection{Preliminary results}
We now give a stability result for the one-step Euler scheme, which is a key ingredient of our approach.
\begin{lemma}\label{STABILITY} Let us assume that the coefficients $b$ and $Q$ satisfy Assumption~$(\mathbf{A})$ with constants $L_b$ and $L_\mu(c,\gamma)$. For $i=1,2$, we consider an $\R^d$-valued random variable $Z^{i}\in L^{1}$
and a family of probability measures $f_{s,t}^{i}(dv) \in \mathcal{P}_1(\R^d)$ for $0\le s\leq t\leq T$ such that $[s,T] \ni t\mapsto f_{s,t}^{i}(dv)$ is continuous in Wasserstein distance. Let $N_{f^{i}}$ be a Poisson point measure independent of~$Z^i$ with intensity measure $f_{s,t}^{i}(dv)\mu (dz)1_{\R_{+}}(u)dudt$ and let $(X_{s,t}^{i},t\ge s)$ be defined by
\begin{equation}
X_{s,t}^{i}(Z^{i},\rho^{i})=Z^{i}+\int_{s}^{t}b(Z^i,\rho^{i})dr+\int_{s}^{t}\int_{\R^{d}\times E\times \R_{+}}Q(v,z,u,Z^i,\rho^{i})N_{f^{i}}(dv,dz,du,dr), \label{NEW1}
\end{equation}
where $\rho^{i}\in \mathcal{P}_{1}({\mathbb{R}}^{d})$. Then, we have:
\begin{align}
W_{1}(\mathcal{L}(X_{s,t}^{1}(Z^{1},\rho^{1}))&,\mathcal{L}
(X_{s,t}^{2}(Z^{2},\rho^{2}))) \leq W_{1}(\mathcal{L}(Z^{1}),\mathcal{L}(Z^{2})) +L_{\mu }(c,\gamma)\int_{s}^{t}W_{1}(f_{s,r}^{1},f_{s,r}^{2})dr\notag \\&+(L_{b}+L_{\mu }(c,\gamma)) (W_{1}(\rho^{1},\rho^{2})+W_{1}(\mathcal{L}(Z^{1}),\mathcal{L}(Z^{2})))(t-s).
\label{NEW2''}
\end{align}
\end{lemma}
\begin{proof}
We first recall the following useful lemma.
\begin{lemma}
\label{lemSK} There exists a measurable map $\psi :[0,1)\times \mathcal{P}_{1}({\mathbb{R}}^{d})\rightarrow {\mathbb{R}}^{d}$ such that
\begin{equation*}
\forall f\in \mathcal{P}_{1}({\mathbb{R}}^{d}), \forall \varphi :{\mathbb{R}}^{d}\rightarrow {\mathbb{R}} \text{ bounded measurable},\ \int_{0}^{1}\varphi(\psi (u,f))du=\int_{{\mathbb{R}}^{d}}\varphi (x)f(dx).
\end{equation*}
\end{lemma}
\noindent This result is stated in~\cite{[CD]} (p.~391, Lemma 5.29) in a $L^{2}$ framework, but their proof works the same in our setting.
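In dimension one, the map $\psi(\cdot,f)$ of Lemma~\ref{lemSK} can be taken to be the quantile (inverse-distribution) function of $f$. The following small sketch for discrete laws is illustrative only and not part of the proof.

```python
import numpy as np

def psi(u, atoms, weights):
    """Quantile-function representation of a discrete law f on R:
    if U ~ Uniform(0,1), then psi(U, f) has law f -- the
    one-dimensional instance of the map psi of the lemma."""
    order = np.argsort(atoms)
    atoms = np.asarray(atoms, dtype=float)[order]
    cdf = np.cumsum(np.asarray(weights, dtype=float)[order])
    return float(atoms[np.searchsorted(cdf, u, side="left")])

# f = (1/2) delta_0 + (1/2) delta_3
print(psi(0.25, [0.0, 3.0], [0.5, 0.5]))  # 0.0
print(psi(0.75, [0.0, 3.0], [0.5, 0.5]))  # 3.0
```

Applying the construction coordinate-wise to an optimal coupling is exactly how $\tau_{s,t}^{f}$ is built below.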
Let $\pi_{0}^{Z} \in \mathcal{P}_1(\mathbb{R}^{d}\times \mathbb{R}^{d})$ be an optimal coupling for $W_1$ of $\mathcal{L}(Z^{1})$ and $\mathcal{L}(Z^{2})$, that is
\begin{equation*}
W_{1}(\mathcal{L}(Z^{1}),\mathcal{L}(Z^{2}))=\int_{{\mathbb{R}}^{d}\times {\mathbb{R}}^{d}}\left\vert z_{1}-z_{2}\right\vert \pi_{0}^{Z}(dz_{1},dz_{2}).
\end{equation*}
Then, we construct a random variable $\overline{Z}=(\overline{Z}^{1},\overline{Z}^{2})$ such that $\overline{Z}\sim \pi _{0}^{Z}.$ In particular
we will have
\begin{equation}
\E(\vert \overline{Z}^{1}-\overline{Z}^{2}\vert )=W_{1}(\mathcal{L}(Z^{1}),\mathcal{L}(Z^{2})). \label{NEW2'}
\end{equation}
Moreover, for every $s\leq t$ we consider a probability measure $\pi_{s,t}^{f}(dv_{1},dv_{2})$ on $\mathbb{R}^{d}\times \mathbb{R}^{d}$ which is
an optimal $W_1$-coupling between $f_{s,t}^{1}(dv_{1})$ and $f_{s,t}^{2}(dv_{2})$,
and we construct $\tau _{s,t}^{f}(w)=(\tau _{s,t}^{f_{1}}(w),\tau_{s,t}^{f_{2}}(w))$ which represents $\pi_{s,t}^{f}$ in the sense of Lemma~\ref{lemSK}, which means
\begin{equation*}
\int_{0}^{1}\varphi (\tau _{s,t}^{f}(w))dw=\int_{{\mathbb{R}}^{d}\times {\mathbb{R}}^{d}}\varphi (v_{1},v_{2})\pi _{s,t}^{f}(dv_{1},dv_{2}).
\end{equation*}
In particular, we have
\begin{equation}\label{Wass_tau}
\int_{0}^{1}\left\vert \tau _{s,t}^{f_{1}}(w)-\tau _{s,t}^{f_{2}}(w)\right\vert
dw=\int_{{\mathbb{R}}^{d}\times {\mathbb{R}}^{d}}\left\vert
v_{1}-v_{2}\right\vert \pi
_{s,t}^{f}(dv_{1},dv_{2})=W_{1}(f_{s,t}^{1},f_{s,t}^{2}).
\end{equation}
Since $t\mapsto f^i_{s,t}$ is continuous in Wasserstein distance (hence measurable), we may
construct $\tau _{s,t}^{f}(w)$ to be jointly measurable in $(s,t,w)$ by using Corollary 5.22~\cite{Villani}.
Now, we consider $N(dw,dz,du,dr)$ a Poisson point measure on $[0,1]\times E \times {\mathbb{R}}_{+}\times {\mathbb{R}}_{+},$ with intensity measure $dw\mu (dz)dudr$. We
stress that $\overline{Z}^{1}$ and $\overline{Z}^{2}$ are independent of the
Poisson point measure $N$. We then consider the equation
\begin{equation*}
x_{s,t}^{i}(\overline{Z}^{i},\rho^{i})=\overline{Z}^{i}+\int_{s}^{t}b(\overline{Z}^{i},\rho^{i}) dr+\int_{s}^{t}\int_{[0,1]\times E\times {\mathbb{R}}_{+}} Q(\tau_{s,r}^{f^{i}}(w),z,u,\overline{Z}^{i},\rho^{i}) N(dw,dz,du,dr)
\end{equation*}
and we notice that the law of $x_{s,t}^{i}(\overline{Z}^{i},\rho^{i})$
coincides with the law of $X_{s,t}^{i}(\overline{Z}^{i},\rho^{i})$ and so
with the law of $X_{s,t}^{i}(Z^{i},\rho^{i})$. We also note that these laws have a first finite moment thanks to Assumption~$(\textbf{A})$ on $(b,Q)$ and~\eqref{h6b}.
Moreover, we define
\begin{equation*}
y_{s,t}(\overline{Z}^{1},\rho^{1})=\overline{Z}^{1}+\int_{s}^{t}b(\overline{Z}^{1},\rho^{1})dr+\int_{s}^{t}\int_{[0,1]\times E\times {\mathbb{R}}_{+}}Q_{\tau_{s,r}^{f^{2}}(w),\overline{Z}^{2}}(\tau_{s,r}^{f^{1}}(w),z,u,\overline{Z}^{1},\rho^{1})N(dw,dz,du,dr).
\end{equation*}
By~(\ref{h6e}), the law of $y_{s,t}(\overline{Z}^{1},\rho^{1})$ coincides with the law of $x_{s,t}^{1}(\overline{Z}^{1},\rho^{1})$, so with
the law of $X_{s,t}^{1}(Z^{1},\rho^{1})$. Therefore, we get by using the triangle inequality together with~(\ref{NEW2'}) and Assumption~\eqref{lipb} for the last inequality:
\begin{align*}
W_{1}(\mathcal{L}(X_{s,t}^{1}(Z^{1},\rho^{1})),\mathcal{L}(X_{s,t}^{2}(Z^{2},\rho^{2}))) &=W_{1}(\mathcal{L}(y_{s,t}(\overline{Z}^{1},\rho^{1})),\mathcal{L}(x_{s,t}^{2}(\overline{Z}^{2},\rho^{2}))) \\
&\leq \E\left(\left\vert y_{s,t}(\overline{Z}^{1},\rho^{1})-x_{s,t}^{2}(\overline{Z}^{2},\rho^{2})\right\vert \right) \\
&\leq W_{1}(\mathcal{L(}Z^{1}),\mathcal{L}(Z^{2}))+L_{b}\left(W_{1}(\rho^{1},\rho^{2})+\E(\vert \overline{Z}^{1}-\overline{Z}^{2}\vert
)\right)(t-s) \\
& \ \ +\Delta _{s,t},
\end{align*}
with
\begin{align*}
\Delta_{s,t} &=\E\int_{s}^{t}\int_{[0,1]\times E\times {\mathbb{R}}_{+}}\left\vert Q_{\tau _{s,r}^{f^{2}}(w),\overline{Z}^{2}}(\tau
_{s,r}^{f^{1}}(w),z,u,\overline{Z}^{1},\rho^{1})-Q(\tau_{s,r}^{f^{2}}(w),z,u,\overline{Z}^{2},\rho^{2}) \right\vert dw\mu (dz)dudr.
\end{align*}
Using the pseudo-Lipschitz property (\ref{h6d}) and~\eqref{Wass_tau}, we obtain
\begin{align*}
\Delta _{s,t} &\leq L_{\mu }(c,\gamma )\left(\int_{s}^{t}\left[W_{1}(\rho^{1},\rho^{2})+\E
(\vert \overline{Z}^{1}-\overline{Z}^{2}\vert )+\int_{0}^{1}\left\vert \tau _{s,r}^{f^{1}}(w)-\tau
_{s,r}^{f^{2}}(w)\right\vert dw\right]dr \right) \\
&=L_{\mu }(c,\gamma )\left( [W_{1}(\rho^{1},\rho^{2})+W_{1}(\mathcal{L}(Z^{1}),\mathcal{L}(Z^{2}))] (t-s)+\int_{s}^{t} W_{1}(f_{s,r}^{1},f_{s,r}^{2})dr \right),
\end{align*}
which leads to (\ref{NEW2''}).
\end{proof}
\subsection{Flows of measures}\label{Flow}
We go on and prove the sewing and Lipschitz properties for the semi-flow
$\Theta_{s,t}$ defined in (\ref{W3'}).
\begin{lemma}\label{lem_sew_lip}
Suppose that $\mathbf{(A)}$ holds. Then, for every $0\le s<u<t\le T$ and for every
$\rho \in \mathcal{P}_{1}({\mathbb{R}}^{d})$,
\begin{equation}
W_{1}(\Theta _{s,t}(\rho ),\Theta _{u,t}(\Theta _{s,u}(\rho )))\leq \tilde{C}_{sew}(\rho) (t-s)^{2} \label{W1}
\end{equation}
with
\begin{equation}
\tilde{C}_{sew}(\rho)=(L_{b}+2L_{\mu }(c,\gamma ))\left[ |b(0,\delta_0)|+C_{\mu }(c,\gamma
)+(L_{b}+2C_{\mu }(c,\gamma ))\int \left\vert v\right\vert \rho (dv)\right].
\label{W1a}
\end{equation}
Moreover, for every $\rho ,\xi \in \mathcal{P}_{1}({\mathbb{R}}^{d})$
\begin{eqnarray}
W_{1}(\Theta _{s,t}(\rho ),\Theta _{s,t}(\xi )) &\leq &C_{lip}(T) W_{1}(\rho
,\xi )\quad \text{ with } \label{W1'} \\
C_{lip}(T) &=&1+(2L_{b}+3L_{\mu }(c,\gamma ))T. \label{W1'a}
\end{eqnarray}
\end{lemma}
\begin{proof} The estimate (\ref{W1'}) is a direct consequence of the estimate~\eqref{NEW2''} obtained in Lemma~\ref{STABILITY}. Let us prove (\ref{W1}). We take $X$ a random variable with law $\rho $ and we consider $X_{s,t}(X)$ defined in (\ref{W2}). So, by
definition, $\Theta _{s,t}(\rho )=\mathcal{L}(X_{s,t}(X)).$ Moreover, we
take $u\in (s,t)$ and we denote $Y=X_{s,u}(X)$. Then we write
\begin{equation*}
X_{s,t}(X)=Y+b(X,\rho )(t-u)+\int_{u}^{t}\int_{\R^{d}\times E\times
\R_{+}}Q(v,z,u^{\prime},X,\rho )N_{\rho }(dv,dz,du^{\prime},dr).
\end{equation*}
We also denote $\overline{\rho }=\mathcal{L}(Y)=\Theta _{s,u}(\rho )$ and we
define
\begin{equation*}
Y_{u,t}(Y)=Y+b(Y,\overline{\rho })(t-u)+\int_{u}^{t}\int_{\R^{d}\times
E\times \R_{+}}Q(v,z,u^{\prime},Y,\overline{\rho })N_{\overline{\rho }}(dv,dz,du^{\prime},dr).
\end{equation*}
We notice that the law of $Y_{u,t}(Y)$ is $\Theta_{u,t}(\overline{\rho })=\Theta _{u,t}(\Theta _{s,u}(\rho ))$. Using again (\ref{NEW2''}) on $[u,t]$, it follows that
\begin{eqnarray*}
W_{1}(\Theta _{s,t}(\rho ),\Theta _{u,t}(\Theta _{s,u}(\rho ))) &=&W_{1}\left(
\mathcal{L}(X_{s,t}(X)),\mathcal{L}(Y_{u,t}(Y))\right) \\
&\leq &(L_{b}+2L_{\mu }(c,\gamma ))(t-u)(W_{1}(\mathcal{L}(X),\mathcal{L}(Y))+W_{1}(\rho ,\overline{\rho }))\\&=&2(L_{b}+2L_{\mu }(c,\gamma ))(t-u)W_{1}(\rho ,\overline{\rho }).
\end{eqnarray*}
By (\ref{h6b}), we get
\begin{equation*}
W_{1}(\rho ,\overline{\rho })=W_{1}(\mathcal{L}(X),\mathcal{L}(Y))\leq
\E(\left\vert X-Y\right\vert )=\E(\left\vert X-X_{s,u}(X)\right\vert )\leq
C(u-s)
\end{equation*}
with $C$ given in (\ref{h6b}). So (\ref{W1}) is proved.
\end{proof}
\medskip
\noindent We are now able to use the abstract sewing lemma in order to construct the solution of our problem.
\begin{theorem}
\label{flow}Suppose that $\mathbf{(A)}$ holds true. Then, there exists a
unique flow $\theta_{s,t} \in \mathcal{E}_0(\mathcal{P}_{1}({\mathbb{R}}^{d}))$ with $0\le s< t\le T$ such tha
\begin{equation}
\exists C_{1}>0, \ d_{\ast }(\theta_{s,t},\Theta_{s,t})\leq C_{1}(t-s)^{2}. \label{W6}
\end{equation}
Moreover, we have for every partition $\mathcal{P}$ of $[s,t]$
\begin{equation}
d_{\ast }(\theta _{s,t},\Theta _{s,t}^{\mathcal{P}})\leq
C_{2}(t-s)\left\vert \mathcal{P}\right\vert, \label{W7}
\end{equation}
and~\eqref{W6} and~\eqref{W7} are satisfied with $C_{1}=4\zeta (2)C_{lip}C_{sew}$ and $C_{2}=C_{lip}C_{1}$ (see (\ref{Csew}) and (\ref{W1'a}) for the values of $C_{sew}$ and $C_{lip}$).
Besides, $\theta $ is stationary in the sense that $\theta_{s,t}=\theta_{0,t-s}$ and the map $t\mapsto \theta _{s,t}$ is Lipschitz continuous for $d_\ast $, i.e. \begin{equation}\label{lipc0_flow}
d_\ast(\theta _{s,t},\theta _{s,t^{\prime }})\le C|t^{\prime }-t|.
\end{equation}
Last, we have the following stability result:
\begin{equation}\forall 0\le s\le t \le T, \rho,\xi \in \cP_1(\R^d),\quad W_1(\theta_{s,t}(\rho),\theta_{s,t}(\xi))\le \exp\left[ (2L_b+3L_\mu(c,\gamma))(t-s) \right] W_1(\rho,\xi).\label{stab_flot}
\end{equation}
\end{theorem}
\begin{proof}
We will use the sewing lemma~\ref{Sewing} and check its different assumptions. Property~\eqref{bis0} is a straightforward consequence of~\eqref{h6b}.
From~\eqref{NEW2''}, we get that $$W_1(\Theta_{s,t}(\rho),\Theta_{s,t}(\xi))\le [1+(2L_b+3L_\mu(c,\gamma))(t-s)]W_1(\rho,\xi)\le \exp\left( (2L_b+3L_\mu(c,\gamma))(t-s)\right) W_1(\rho,\xi).$$ By iterating, this gives for any partition $\cP$ of $[s,t]$
\begin{equation}\label{estim_flot}
W_1(\Theta^\cP_{s,t}(\rho),\Theta^\cP_{s,t}(\xi))\le \exp\left[ (2L_b+3L_\mu(c,\gamma))(t-s)\right] W_1(\rho,\xi).
\end{equation}
Property~\eqref{bis1} thus holds by using~\eqref{equivH1}.
We now check~\eqref{bis2} and will use the recipe given by~\eqref{implic_H2}. We get from~\eqref{W1}:
\begin{align*}
& d_\ast(\Theta_{s,t}\Theta_{r,s}^{\mathcal{P}^{\prime }} , \Theta_{u,t} \Theta_{s,u} \Theta_{r,s}^{\mathcal{P}^{\prime }})\\&\le (L_b+2L_\mu(c,\gamma))(t-s)^2\sup_{\rho \in \mathcal{P}_1(\R^d)} \frac{|b(0,\delta_0)|+ C_\mu(c,\gamma) + (L_b+2C_\mu(c,\gamma))\int_{\R^d}|v|\Theta_{r,s}^{\mathcal{P}^{\prime }}(\rho)(dv) }{1+\int_{\R^d}|v|\rho(dv)}.
\end{align*}
We now observe that for $\cP'=\{r=r_0<\dots<r_m=s\}$, we have by~\eqref{growthb} and~\eqref{h6a}
$$\int_{\R^d}|v|\Theta_{r,r_{i+1}}^{\mathcal{P}^{\prime }}(\rho)(dv)\le\left( 1+(2L_b+3C_\mu(c,\gamma))(r_{i+1}-r_i) \right) \int_{\R^d}|v|\Theta_{r,r_{i}}^{\mathcal{P}^{\prime }}(\rho)(dv)+ (r_{i+1}-r_i)[|b(0,\delta_0)|+C_\mu(c,\gamma)].$$
By iterating this inequality, we get \begin{equation}\label{moment_theta}\int_{\R^d}|v|\Theta_{r,s}^{\mathcal{P}^{\prime }}(\rho)(dv)\le\exp((2L_b+3C_\mu(c,\gamma)) T) \left( \int_{\R^d}|v|\rho(dv) + [|b(0,\delta_0)|+C_\mu(c,\gamma)]T \right).
\end{equation}
Therefore the supremum over~$\rho$ is upper bounded by some constant $\overline{C}>0$, and \eqref{bis2} holds with $\beta=2$ and
\begin{equation}\label{Csew}
C_{sew}= (L_b+2L_\mu(c,\gamma))\overline{C} .
\end{equation}
We can thus apply Lemma~\ref{Sewing} to get the existence and uniqueness of the flow~$\theta$. Then, we easily get~\eqref{stab_flot} from~\eqref{estim_flot} and the stationarity is a simple consequence of the obvious fact that $\Theta_{s,t}=\Theta _{0,t-s}$.
To prove the Lipschitz property~\eqref{lipc0_flow}, we consider $t'\in [s,t]$ and get by iterating the upper bound~\eqref{h6b}
\begin{equation*}
W_1(\Theta _{s,t}^{\mathcal{P}}(\rho ),\Theta _{s,t^{\prime }}^{\mathcal{P}}(\rho ))\le \left[ |b(0,\delta_0)|+C_\mu(c,\gamma) + (L_b+2C_\mu(c,\gamma)) \max_{u \in \mathcal{P}} \int_{\R^d} |v| \Theta_{s,u}^{\mathcal{P}} (\rho)(dv) \right]|t-t^{\prime }|,
\end{equation*}
for any partition $\mathcal{P}$ of $[s,t]$ such that $t'\in \mathcal{P}$, since $\Theta _{s,t}^{\mathcal{P}}(\rho )$ can be obtained from $\Theta_{s,t'}^{\mathcal{P}}(\rho )$ by applying the Euler scheme. Using~\eqref{moment_theta}, we get $d_\ast(\Theta _{s,t}^{\mathcal{P}}(\rho ),\Theta _{s,t^{\prime }}^{\mathcal{P}}(\rho ))\le \overline{C}|t'-t|$, and we conclude by sending $|\mathcal{P}|\to 0$.
\end{proof}
\subsection{The weak equation\label{weak}}
Theorem~\ref{flow} provides a unique solution in terms of flows. Now we
prove that this solution solves an integral equation, in the weak sense.
Namely, for every $s\geq 0$ and $\rho \in \mathcal{P}_{1}({\mathbb{R}}^{d})$ we
associate the weak equation
\begin{align}
\int_{{\mathbb{R}}^{d}}\varphi (x)\theta _{s,t}(\rho )(dx) =&\int_{{\mathbb{R}}^{d}}\varphi (x)\rho (dx)+\int_{s}^{t}\int_{{\mathbb{R}}^{d}}\left\langle
b(x,\theta _{s,r}(\rho )),\nabla \varphi (x)\right\rangle \theta _{s,r}(\rho
)(dx)dr \label{we2} \\
&+\int_{s}^{t}\int_{{\mathbb{R}}^{d}\times {\mathbb{R}}^{d}}\Lambda
_{\varphi }(v,x,\theta _{s,r}(\rho ))\theta _{s,r}(\rho )(dx)\theta
_{s,r}(\rho )(dv)dr,\ \varphi \in C_{b}^{1}({\mathbb{R}}^{d}), \notag
\end{align}
where
\begin{eqnarray}
\Lambda _{\varphi }(v,x,\rho ) &=&\int_{E\times \R_{+}}(\varphi
(x+Q(v,z,u,x,\rho ))-\varphi (x))\mu (dz)du \label{w2'} \\
&=&\int_{0}^{1}d\lambda \int_{E\times \R_{+}}\left\langle \nabla \varphi
(x+\lambda Q(v,z,u,x,\rho )),Q(v,z,u,x,\rho )\right\rangle \mu (dz)du.
\label{w2''}
\end{eqnarray}
Here, we have written equation~\eqref{we2} with a doubly indexed family of probability measures $(\theta_{s,t}(\rho), 0\le s\le t)$. This is to make a direct link with the flow constructed by the sewing lemma, which is naturally doubly indexed. In the literature, one usually rather considers the following equation for a family of
probability measures $(f_{t}(\rho ),t\geq 0)$
\begin{eqnarray}
\int_{{\mathbb{R}}^{d}}\varphi (x)f_{t}(\rho )(dx) &=&\int_{{\mathbb{R}}^{d}}\varphi (x)\rho (dx)+\int_{0}^{t}\int_{{\mathbb{R}}^{d}}\left\langle
b(x,f_{r}(\rho )),\nabla \varphi (x)\right\rangle f_{r}(\rho )(dx)dr
\label{we2bis} \\
&&+\int_{0}^{t}\int_{{\mathbb{R}}^{d}\times {\mathbb{R}}^{d}}\Lambda
_{\varphi }(v,x,f_{r}(\rho ))f_{r}(\rho )(dx)f_{r}(\rho )(dv)dr. \notag
\end{eqnarray}
The link between~\eqref{we2} and~\eqref{we2bis} is clear. If $\theta_{s,t}$ solves~\eqref{we2}, then $\theta_{0,t}$ solves~\eqref{we2bis}. Conversely, if $f_{t}$ solves~\eqref{we2bis}, then $f_{t-s}$ solves~\eqref{we2}.
We need the following preliminary lemma.
\begin{lemma}\label{lem_lambdaphi} Assume that $\mathbf{(A)}$ holds. If $\varphi \in C^1_b(\R^d)$, then we have
\begin{align}
&|\Lambda_\varphi(v,x,\rho)|\le C_\mu(c,\gamma) \|\nabla \varphi\|_\infty\left(1+|v|+|x|+\int_{\R^d} |z|\rho(dz) \right) \label{sub_lin_A4}\\
&|\Lambda_{\varphi }(v,x,\rho)-\Lambda_{\varphi }(v',x,\rho')|\le L_\mu(c,\gamma) \|\nabla \varphi\|_\infty (|v-v'|+W_1(\rho,\rho')), \label{liplambda}
\end{align}
and
\begin{equation}\label{hyp_cont}
\forall v \in \R^d,\rho \in \cP_1(\R^d), \ x\mapsto \Lambda_\varphi(v,x,\rho) \text{ is continuous.}
\end{equation}
\end{lemma}
\begin{proof}
We get the first bound by using~\eqref{w2'}, $|\varphi(x+Q(v,z,u,x,\rho ))-\varphi (x)|\le \|\nabla \varphi\|_\infty |Q(v,z,u,x,\rho )|$ and~\eqref{h6a}.
From~\eqref{h6e} we have
$\Lambda_{\varphi }(v',x,\rho')=\int_{E\times \R_{+}}(\varphi(x+Q_{v,x}(v',z,u,x,\rho' ))-\varphi (x))\mu (dz)du$ and thus
\begin{align*}
|\Lambda_{\varphi }(v,x,\rho)- \Lambda_{\varphi }(v',x,\rho')|&\le\|\nabla \varphi\|_\infty \int_{E\times \R_{+}}|Q(v,z,u,x,\rho )-Q_{v,x}(v',z,u,x,\rho' )| \mu(dz)du \\
&\le L_\mu(c,\gamma)\|\nabla \varphi\|_\infty (|v-v'|+W_1(\rho,\rho')),
\end{align*}
by using~\eqref{h6d}.
\\
We now prove~\eqref{hyp_cont}.
Let $(x_n)_{n\in \N}$ be a sequence in~$\R^d$ such that $x_n\to x$. We write:
\begin{align*}
\Lambda_\varphi(v,x_n,\rho)&= \int_{E\times \R_+}(\varphi(x_n+Q(v,z,u,x_n,\rho))-\varphi(x_n))\mu(dz)du\\
&=\int_{E\times \R_+}(\varphi(x_n+Q(v,z,u,x_n,\rho))-\varphi(x_n+Q_{v,x_n}(v,z,u,x,\rho)))\mu(dz)du\\& + \int_{E\times \R_+}(\varphi(x_n+Q_{v,x_n}(v,z,u,x,\rho))-\varphi(x_n))\mu(dz)du.
\end{align*}
By~\eqref{h6d}, the first integral is upper bounded by $\|\nabla \varphi\|_\infty L_\mu(c,\gamma)|x-x_n|\to 0.$ By~\eqref{h6e}, the second integral is equal to
$$\int_{E\times \R_+}(\varphi(x_n+Q(v,z,u,x,\rho))-\varphi(x_n))\mu(dz)du. $$
We have $\varphi(x_n+Q(v,z,u,x,\rho))-\varphi(x_n) \to \varphi(x+Q(v,z,u,x,\rho))-\varphi(x)$ and $|\varphi(x_n+Q(v,z,u,x,\rho))-\varphi(x_n)|\le \|\nabla\varphi\|_\infty |Q(v,z,u,x,\rho)| $ that is $\mu(dz)du$-integrable.
The dominated convergence theorem gives then $ \Lambda_\varphi(v,x_n,\rho)\to \Lambda_\varphi(v,x,\rho)$.
\end{proof}
\begin{theorem}
\label{Weq}Suppose that $\mathbf{(A)}$ holds.
Then $\theta_{s,t}$, the flow given by Theorem~\ref{flow}, satisfies Equation~(\ref{we2}).
\end{theorem}
\begin{proof}
Let us consider $\rho \in \cP_1(\R^d)$ and the Euler scheme $\Theta _{s,s_{k}}^{\mathcal{P}}(\rho )$ associated to the partition $\mathcal{P}=\{s_{k}=s+\frac{k(t-s)}{n}:k=0,...,n\}$. For $r\in \lbrack s_{k},s_{k+1})$ we denote $\tau(r)=s_{k}$ and associate the stochastic equation
\begin{align}
X_{s,r}^{\mathcal{P}}=&X+\int_{s}^{r}b(X_{s,\tau (r^{\prime })}^{\mathcal{P}},\Theta _{s,\tau (r^{\prime })}^{\mathcal{P}}(\rho ))dr^{\prime} \label{we10}\\&+\int_{s}^{r}\int_{\R^{d}\times E\times \R_{+}}Q(v,z,u,X_{s,\tau (r^{\prime
})-}^{\mathcal{P}},\Theta _{s,\tau (r^{\prime })}^{\mathcal{P}}(\rho
))N_{\Theta ^{\mathcal{P}}}(dv,dz,du,dr^{\prime }), \notag
\end{align}
where $N_{\Theta ^{\mathcal{P}}}$ is a Poisson point measure of intensity $\Theta _{s,\tau (r^{\prime })}^{\mathcal{P}}(\rho )(dv)\mu (dz)dudr^{\prime}$ and $\mathcal{L}(X)=\rho .$ One has $\Theta _{s,s_{k}}^{\mathcal{P}}(\rho)(dx)=\mathcal{L}(X_{s,s_{k}}^{\mathcal{P}})$. Then, using It\^{o}'s formula in order to compute $\E(\varphi (X_{s,r}^{\mathcal{P}}))$ for $\varphi \in C_{b}^{1}({\mathbb{R}}^{d})$ we obtain
\begin{align}
\int_{{\mathbb{R}}^{d}}\varphi (x)\Theta _{s,r}^{\mathcal{P}}(\rho
)(dx) &=\int_{{\mathbb{R}}^{d}}\varphi (x)\rho (dx)+\int_{s}^{
r}\int_{{\mathbb{R}}^{d}}\left\langle b(x,\Theta _{s,\tau (r^{\prime })}^{\mathcal{P}}(\rho )),\nabla \varphi (x)\right\rangle \Theta _{s,\tau (r')}^{\mathcal{P}}(\rho )(dx)dr' \\
&+\int_{s}^{r}\int_{{\mathbb{R}}^{d}\times {\mathbb{R}}^{d}}\Lambda_{\varphi }(v,x,\Theta _{s,\tau (r')}^{\mathcal{P}}(\rho ))\Theta _{s,\tau
(r')}^{\mathcal{P}}(\rho )(dx)\Theta _{s,\tau (r')}^{\mathcal{P}}(\rho
)(dv)dr'. \notag
\end{align}
From~\eqref{h6b}, we get easily that $\E[|X_{s,r}^{\mathcal{P}}-X_{s,\tau(r)}^{\mathcal{P}}|]\le C/n$ for some constant $C\in \R_+^*$. Besides, we have $d_*(\theta_{s,r},\Theta _{s,r}^{\mathcal{P}})\le C(r-s)/n$ by Theorem~\ref{flow}. From~\eqref{lipb} and Lemma~\ref{lem_lambdaphi}, this leads to
\begin{align*}
\int_{{\mathbb{R}}^{d}}\varphi (x)\Theta _{s,r}^{\mathcal{P}}(\rho
)(dx) &=\int_{{\mathbb{R}}^{d}}\varphi (x)\rho (dx)+\int_{s}^{
r}\int_{{\mathbb{R}}^{d}}\left\langle b(x,\theta_{s,r'}(\rho )),\nabla \varphi (x)\right\rangle \Theta _{s,\tau (r')}^{\mathcal{P}}(\rho )(dx)dr' \\
&+\int_{s}^{r}\int_{{\mathbb{R}}^{d}\times {\mathbb{R}}^{d}}\Lambda_{\varphi }(v,x, \theta_{s,r'}(\rho ))\Theta _{s,\tau
(r')}^{\mathcal{P}}(\rho )(dx)\Theta _{s,\tau (r')}^{\mathcal{P}}(\rho
)(dv)dr' +R_n, \notag
\end{align*}
with $|R_n|\le C/n$. Now, let us recall (see e.g.~\cite[Theorem 6.9]{Villani}) that $W_1(\rho_n,\rho_\infty)\to 0$ if, and only if, $\int_{\R^d} f(x) \rho_n(dx)\to\int_{\R^d} f(x) \rho_\infty(dx)$ for any continuous function $f:\R^d\to \R$ with sublinear growth (i.e. such that $\forall x, |f(x)|\le C(1+|x|)$ for some constant $C>0$).
From~\eqref{sub_lin_A4} and~\eqref{hyp_cont} (resp.~\eqref{growthb} and~\eqref{lipb}), the map $(v,x)\mapsto \Lambda_\varphi(v,x,\theta_{s,r'}(\rho ))$ (resp. $x \mapsto \left\langle b(x,\theta_{s,r'}(\rho )),\nabla \varphi (x)\right\rangle$) is, for any $r'$, continuous with sublinear growth since $\varphi\in C^1_b(\R^d)$. Since $W_1(\Theta _{s,\tau (r')}^{\mathcal{P}}(\rho),\theta_{s,r'}(\rho))\to 0$ (and thus $W_1(\Theta _{s,\tau (r')}^{\mathcal{P}}(\rho)\otimes\Theta _{s,\tau (r')}^{\mathcal{P}}(\rho),\theta_{s,r'}(\rho)\otimes\theta_{s,r'}(\rho))\to 0$), this gives the pointwise convergence for any~$r'$:
\begin{align*} &\int_{{\mathbb{R}}^{d}}\left\langle b(x,\theta_{s,r'}(\rho )),\nabla \varphi (x)\right\rangle \Theta _{s,\tau (r')}^{\mathcal{P}}(\rho )(dx) \to \int_{{\mathbb{R}}^{d}}\left\langle b(x,\theta_{s,r'}(\rho )),\nabla \varphi (x)\right\rangle \theta _{s,r'}(\rho )(dx),\\
&\int_{{\mathbb{R}}^{d}\times {\mathbb{R}}^{d}}\Lambda_{\varphi }(v,x, \theta_{s,r'}(\rho ))\Theta _{s,\tau
(r')}^{\mathcal{P}}(\rho )(dx)\Theta _{s,\tau (r')}^{\mathcal{P}}(\rho
)(dv) \to \int_{{\mathbb{R}}^{d}\times {\mathbb{R}}^{d}}\Lambda_{\varphi }(v,x, \theta_{s,r'}(\rho ))\theta _{s,r'}(\rho )(dx)\theta _{s,r'}(\rho
)(dv).\end{align*}
From the standard uniform bounds on the first moment, we can then get the claim by the dominated convergence theorem.
\end{proof}
\subsection{Probabilistic representation} In this subsection, we are interested in the existence and uniqueness of a process $(X_r,r\ge 0)$ on a probability space such that
\begin{equation}\label{Prob_rep}
X_t=X+\int_0^tb(X_r,\cL(X_r))dr+\int_0^t\int_{\R^{d}\times E\times \R_{+}} Q(v,z,u,X_{r-},\cL(X_{r-})) N(dv,dz,du,dr), \end{equation}
where $N$ is a Poisson point measure with intensity $\cL(X_{r-})(dv)\mu(dz)dudr$. If such a process exists, we call it a ``Boltzmann process''. We notice that the sublinear growth properties~\eqref{growthb} and~\eqref{h6a} give $\sup_{t\in[0,T]}\E[|X_t|]<\infty$ for any $T>0$ and then
\begin{equation}\label{lipsch_conti}\forall T>0,\exists C_T>0,\forall 0\le t\le t+h\le T, \E[|X_{t+h}-X_t|]\le C_T h,
\end{equation}
which yields $\cL(X_{t-})=\cL(X_t)$ for any $t\ge 0$. The existence of a solution to~\eqref{Prob_rep} is classically related to martingale problems; see Horowitz and Karandikar~\cite{HoKa} in the context of the Boltzmann equation. Our goal here is to underline some relations between the flow~$\theta$ introduced by Theorem~\ref{flow} and Equation~\eqref{Prob_rep}.
From It\^o's Formula, it is clear that a Boltzmann process leads to a solution of the weak equation~\eqref{we2bis}. In Theorem~\ref{Weq}, we have proved under suitable conditions that the flow constructed by Theorem~\ref{flow} is a weak solution of~\eqref{we2bis}. Here, we prove that there exists a Boltzmann process that has the marginal laws given by the flow~$\theta_{0,t}$.
\begin{theorem}
\label{ExistenceRepr} Suppose that $\mathbf{(A)}$ holds and let $\rho\in\cP_1(\R^d)$. Then, there exists a process~$(X_t,t\ge 0)$ with initial condition $X\sim\rho$ that satisfies~\eqref{Prob_rep} and such that $\cL(X_t)=\theta_{0,t}(\rho)$ for all $t\ge 0$.
\end{theorem}
To prove this result, we consider the Euler scheme, for which we know the convergence of the marginal laws by Theorem~\ref{flow}. We show by classical arguments that it gives a tight sequence in the Skorohod space and that any converging subsequence leads to a solution of the martingale problem associated with~\eqref{Prob_rep}.
\begin{proof}
Let $X\sim \rho $ with $\rho \in \mathcal{P}_{1}({\mathbb{R}}^{d})$.
We consider the time grid $t_{k}=\frac{k}{n}$, $k\in \N$ and we denote $\tau(t)=\frac{k}{n}$ for $\frac{k}{n}\leq t<\frac{k+1}{n}$. For $t\in \R$, we denote \begin{equation}\label{def_theta_n}\Theta^n_{0,t}=\Theta_{\frac{\lfloor nt \rfloor }{n},t} \Theta_{\frac{\lfloor nt \rfloor -1}{n},\frac{\lfloor nt \rfloor }{n}}\dots \Theta_{0,\frac 1n},
\end{equation}
with $\Theta_{s,t}$ defined by~\eqref{W3'}, so that $\Theta^n_{0,t}=\Theta^{\cP}_{0,t}$ with the partition $\cP=\{t_0<\dots<t_{\lfloor nt \rfloor}\le t\}$.
Then, we define the corresponding Euler scheme
\begin{equation}
X_{t}^{n}=X+\int_{0}^{t}b(X_{\tau(r)}^{n},\Theta^n_{0,\tau(r)}(\rho) )dr+\int_{0}^{t}\int_{{\mathbb{R}}^{d}\times E\times {\mathbb{R}}
_{+}}Q(v,z,u,X_{\tau (r)-}^{n},\Theta^n_{0,\tau(r)}(\rho))N_{\Theta^n}(dv,dz,du,dr),
\label{App2.4}
\end{equation}
where $N_{\Theta^n}$ is a Poisson point measure with compensator $\Theta^n_{0,\tau (r)}(\rho)(dv)\mu (dz)dudr$ that is independent of~$X$. By construction, we have $\cL(X^n_t)=\Theta^n_{0,t}(\rho)$ for all $t\ge 0$. Theorem~\ref{flow} gives that there exists a flow $\theta_{s,t}$ corresponding to~$\Theta_{s,t}$, and we have
\begin{equation}\label{estim_unif_thetan}
d_*(\theta_{0,t},\Theta^n_{0,t})\le \frac{C_2t}{n}.
\end{equation}
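As an illustration of the scheme~\eqref{def_theta_n}, the following particle sketch treats a toy Kac-type model with $b=0$, $\gamma\equiv 1$ and $c(v,z,x,\rho)=(v-x)/2$; this choice is ours for the example and is not taken from the text. Each particle jumps at rate $1$ halfway toward a partner drawn from the frozen empirical law of the previous grid point.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(T, n, N):
    """Particle sketch of the Euler scheme (App2.4) for the toy
    Kac-type model b = 0, gamma = 1, c(v, z, x, rho) = (v - x)/2:
    on each step of length 1/n the state and its empirical law are
    frozen, and jumps arrive as a Poisson number of proposals."""
    dt = 1.0 / n
    X = rng.normal(0.0, 1.0, size=N)         # initial law rho = N(0, 1)
    for _ in range(int(T * n)):
        frozen = X.copy()                    # scheme freezes state and law on the step
        for i in range(N):
            for _ in range(rng.poisson(dt)): # jump times on the step (rate gamma = 1)
                v = frozen[rng.integers(N)]  # partner v ~ Theta^n_{0,tau(r)}(rho)
                X[i] += (v - frozen[i]) / 2.0
    return X

X = simulate(T=1.0, n=20, N=2000)
print(abs(X.mean()))  # the midpoint interaction preserves the mean, so this stays small
```

The empirical mean is conserved in expectation by the midpoint jump, which gives a quick sanity check on the simulation.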
We first write the martingale problem associated with~$X^n$.
For $\varphi \in C_{b}^{1}(\R^{d})$, we define
\begin{equation}
M^{n}_\varphi(t):=\varphi (X_{t}^{n})-\varphi
(X)-I_{t}^{n}-J_{t}^{n} \label{App2.5}
\end{equation}
with
\begin{eqnarray*}
I_{t}^{n} &=&\int_{0}^{t} \int_{{\mathbb{R}}^{d}}\Lambda_{\varphi} (v,X_{r}^{n},\Theta^n_{0,\tau (r)}(\rho))\Theta^n_{0,\tau (r)}(\rho)(dv)dr,\\ J_{t}^{n} &=&\int_{0}^{t}\left\langle
b(X_{\tau (r)}^{n},\Theta^n_{0,\tau (r)}(\rho)),\nabla \varphi (X_{r}^{n})\right\rangle dr.
\end{eqnarray*}
This is a martingale, and we have for every $0\le s_{1}<...<s_{m}<t<t^{\prime }$ and every $\psi _{j}\in C_{b}^{0}(\R^{d})$
\begin{equation}
\E \left(\prod_{j=1}^{m}\psi _{j}(X_{\tau (s_{j})}^{n})M_{\varphi }^{n}(\tau(t^{\prime })) \right)=\E\left(\prod_{j=1}^{m}\psi _{j}(X_{\tau (s_{j})}^{n})M_{\varphi}^{n}(\tau (t)) \right). \label{App2.6}
\end{equation}
We now analyse the convergence as $n\to \infty$ and denote by $P_{n}$ the probability measure on the
Skorohod space $\mathbb{D}(\R_{+},\R^{d})$ induced by the law of $X^{n}$. We easily check
Aldous' criterion: from~\eqref{growthb} and~\eqref{h6a}, we get a uniform bound on the first moment
\begin{equation}\label{uni_borne2}
\forall T>0, \E\left[ \sup_{n\ge1} \sup_{t\in [0,T]} |X^n_t| \right]<\infty
\end{equation}
and then \begin{equation}\label{uni_borne3}
\forall T>0, \exists C_T\in \R_+^*, \forall h\in [0,1], \E[\sup_{t\le T}|X^n_{t+h}-X^n_{t}|]\le C_Th.
\end{equation}
Thus, we obtain that the sequence $(P_{n})_{n\in \N}$ is
tight. Let $P$ be any limit point of this sequence. Up to extracting a subsequence, we may assume that $(P_{n})_{n\in \N}$ weakly converges to~$P$. We denote by $X_{t}$ the
canonical projections on $\mathbb{D}(\R_{+},\R^{d})$. We define
\begin{equation}
M_{\varphi }(t):=\varphi (X_{t})-\varphi (X)-J_{t}-I_{t}
\label{App2.7}
\end{equation}
with
\begin{eqnarray*}
I_{t} &=&\int_{0}^{t}\int_{{\mathbb{R}}^{d}}\Lambda _{\varphi}(v,X_{r},\theta_{0,r}(\rho))\theta_{0,r}(\rho)(dv)dr,\\
J_{t} &=&\int_{0}^{t}\left\langle b(X_{r},\theta_{0,r}(\rho)),\nabla \varphi (X_{r})\right\rangle dr.
\end{eqnarray*}
\medskip
We now prove that $\E \left(\prod_{j=1}^{m}\psi _{j}(X_{\tau (s_{j})}^{n})M_{\varphi }^{n}(\tau(t^{\prime })) \right) \to \E_P \left(\prod_{j=1}^{m}\psi _{j}(X_{s_{j}})M_{\varphi }(t^{\prime }) \right)$, where $\E_P$ denotes the integration on $\mathbb{D}(\R_{+},\R^{d})$ with respect to~$P$. This then gives from~\eqref{App2.6}
\begin{equation}
\E\left(\prod_{j=1}^{m}\psi _{j}(X_{s_{j}})M_{\varphi }(t^{\prime})\right)=\E\left(\prod_{j=1}^{m}\psi _{j}(X_{s_{j}}) M_{\varphi }(t) \right).
\label{App2.8}
\end{equation}
We define the intermediate terms
$$\hat{I}^n_t=\int_{0}^{t}\int_{{\mathbb{R}}^{d}}\Lambda _{\varphi}(v,X^n_{r},\theta_{0,r}(\rho))\theta_{0,r}(\rho)(dv)dr \text{ and } \hat{J}^n_t=\int_{0}^{t}\left\langle b(X^n_{\tau(r)},\theta_{0,r}(\rho)),\nabla \varphi (X^n_{r})\right\rangle dr.$$
From the Lipschitz property of $b$ and $\Lambda_\varphi$ (Lemma~\ref{lem_lambdaphi}), we get
$$|I^n_{t}-\hat{I}^n_{t}|+|J^n_{t}-\hat{J}^n_{t}|\le C \int_0^{t}W_1(\theta_{0,r}(\rho),\Theta^n_{0,\tau(r)}(\rho))dr \underset{n\to \infty}\to 0,$$
by~\eqref{estim_unif_thetan} and~\eqref{uni_borne3}.
Thus, it is sufficient to check the convergence of $$\E \left(\prod_{j=1}^{m}\psi _{j}(X_{\tau (s_{j})}^{n})[\varphi(X_{\tau (t')}^{n})-\varphi(X)-\hat{I}^n_{\tau (t')}-\hat{J}^n_{\tau (t')}]\right).$$
From~\eqref{sub_lin_A4}, $\Lambda_\varphi$ has a sublinear growth and is continuous with respect to~$x$. Therefore, $\mathbb{D}(\R_{+},\R^{d}) \ni \bar{x}\mapsto \int_0^{t'} \int_{\R^d} \Lambda_\varphi(v,\bar{x}(r),\theta_{0,r}(\rho))\theta_{0,r}(\rho)(dv)dr$ is continuous and bounded by $C(1+\sup_{r\in[0,t']} |\bar{x}(r)|+\sup_{r\in[0,t']}\int_{\R^d}|z|\theta_{0,r}(\rho)(dz))$ for some $C\in \R_+^*$. Since $ \E\left[ \sup_{n\ge1} \sup_{t\in [0,t']} |X^n_t| \right]<\infty$ by~\eqref{uni_borne2} and $P_n$ weakly converges to~$P$, this gives the desired convergence for $s_1,\dots,s_m,t,t'\in[0,T] \setminus \mathfrak{D}$, where $\mathfrak{D}$ is an at most countable subset of $(0,T)$ (see Billingsley~\cite[p.~138]{Billingsley}). Last, from the right continuity under~$P$, we get that \eqref{App2.8} holds for any $0<s_1<\dots <s_m<t<t'$, which shows that $P$ is a solution of the martingale problem, i.e. for any $\varphi \in C^1_b(\R^d)$, $M_\varphi(t)$ defined by~\eqref{App2.7} is a martingale. Besides, let us notice that for all $t\ge 0$, $\cL(X_t)=\theta_{0,t}(\rho)$ by using~\eqref{estim_unif_thetan}.
\medskip
The classical theory of martingale problems now allows us to obtain
Equation~(\ref{Prob_rep}). Let us be more explicit.
Let us denote by $\mu ^{X}$ the random point measure associated to the jumps of
$X_{t}$.
Then, Theorem 2.42 in \cite{[JS]} guarantees that, as a solution of the
martingale problem, $X$ is a semimartingale with
characteristics $B_{r}=b(X_{r},\theta_{0,r}(\rho))$ and $\nu$ defined by $\nu ((0,t)\times A)=\int_{0}^{t}\int_{{\mathbb{R}}^{d}}\int_{E\times \R_{+}}1_{A}(Q(v,z,X_{r},\theta_{0,r}(\rho)))\mu (dz)du\theta_{0,r}(\rho)(dv)dr$. Then, by
Theorem 2.34 in \cite{[JS]} one has
\begin{equation*}
X_{t}=X+\int_{0}^{t}b(X_{r},\theta_{0,r}(\rho))dr+\int_{0}^{t}\int_{{\mathbb{R}}^{d}}y\,\mu^{X}(dr,dy)
\end{equation*}
and the compensator of $\mu^{X}$ is $\nu$ (in \cite{[JS]} a truncation function $h$ appears, here we take
the truncation function $h(x)=0$, which is possible because we work in the
framework of finite variation $\int \left\vert x\right\vert \theta_{0,r}(\rho)(dx)<\infty $). Then, using the representation given in
\cite[Theorem 7.4, p. 93]{[IW]}, one may construct a probability space and a
Poisson point measure $N_{\theta}$ of compensator $\mu (dz)du \theta_{0,r}(\rho)(dv)dr$ such
that the process
\begin{equation*}
\overline{X}_{t}=X_{0}+\int_{0}^{t}b(\overline{X}_{r},\theta_{0,r}(\rho))dr+\int_{0}^{t}\int_{{\mathbb{R}}^{d}\times E\times \R_{+}}Q(v,z,u,\overline{X}_{r-},\theta_{0,r}(\rho))N_{\theta}(dv,dz,du,dr)
\end{equation*}
has the same law as $X$. Since $\cL(\overline{X}_{t})=\cL(X_t)=\theta_{0,t}(\rho)$ is continuous with respect to~$t$, this produces a solution of~\eqref{Prob_rep}.
\end{proof}
Theorem~\ref{ExistenceRepr} gives the existence of a Boltzmann process~\eqref{Prob_rep} such that $\cL(X_t)=\theta_{0,t}$. It would be interesting to have a uniqueness result and to get, for example, that the marginal laws of any process satisfying~\eqref{Prob_rep} are given by $\theta_{0,t}$, $t\ge 0$. Unfortunately, it does not seem possible to prove such a result in our general framework, and we thus state it under a standard Lipschitz assumption on~$Q$.
\begin{proposition}
Let us assume that $\mathbf{(A)}$ holds and that $Q$ satisfies the following Lipschitz assumption:
\begin{align}\label{std_lip}
&\int_{E\times {\mathbb{R}}_{+}}\left\vert Q(v_{1},z,u,x_{1},\rho
_{1})-Q(v_{2},z,u,x_{2},\rho _{2})\right\vert \mu (dz)du \leq L_\mu(c,\gamma) (\left\vert x_{1}-x_{2}\right\vert +\left\vert
v_{1}-v_{2}\right\vert +W_{1}(\rho _{1},\rho _{2})),
\end{align}
i.e.~\eqref{h6d} is true with $Q_{v,x}=Q$. Then, any process $(X_t, t\ge 0)$ that satisfies~\eqref{Prob_rep} is such that $\cL(X_t)=\theta_{0,t}(\rho)$ for all $t\ge 0$.
\end{proposition}
\begin{proof}
Let $X$ be a solution of~\eqref{Prob_rep}. We denote $f_t=\cL(X_t)$. There exists a Poisson point measure $N$ with intensity $f_r(dv)\mu(dz)dudr$ such that
$$X_t=X+\int_0^t b(X_r,f_r)dr +\int_0^t\int_{\R^d\times E\times \R_+}Q(v,z,u,X_{r-},f_r)N(dv,dz,du,dr).$$
As for the preceding proof, we consider the time grid $t_{k}=\frac{k}{n}$, $k\in \N$ and denote $\tau(t)=\frac{k}{n}$ for $\frac{k}{n}\leq t<\frac{k+1}{n}$. We define the process $X^n$ by:
$$X^n_t=X+\int_0^t b(X^n_{\tau(r)},f_{\tau(r)})dr +\int_0^t\int_{\R^d\times E\times \R_+}Q(v,z,u,X^n_{\tau(r)},f_{\tau(r)})N(dv,dz,du,dr).$$
We have
\begin{align*}
|X_t-X^n_t|\le & \int_0^t |b(X_r,f_r)-b(X^n_{\tau(r)},f_{\tau(r)})|dr \\&+\int_0^t |Q(v,z,u,X_{r-},f_r)-Q(v,z,u,X_{\tau(r)},f_{\tau(r)})|N(dv,dz,du,dr).
\end{align*}
By using~\eqref{lipb} and~\eqref{std_lip}, we get
$$\E[|X_t-X^n_t|]\le \int_0^t (L_b+L_\mu(c,\gamma)) \left(\E[|X_r-X^n_{\tau(r)}|]+W_1(f_r,f_{\tau(r)})\right)dr.$$
Now, we observe that $W_1(f_r,f_{\tau(r)})\le \frac{C_T}{n}$ for $r\in[0,T]$ by using~\eqref{lipsch_conti}. Similarly, we observe from the sublinear growth properties~\eqref{growthb} that for any $T>0$,
$\E\left[ \sup_{n\ge1} \sup_{t\in [0,T]} |X^n_t| \right]<\infty$ and then
\begin{equation}\label{lipW1_eul} \exists C_T\in \R_+^*, \forall h\in [0,1], \E[\sup_{t\le T}|X^n_{t+h}-X^n_{t}|]\le C_Th.
\end{equation}
We therefore get for any $T>0$ the existence of a constant $C_T$ such that
$\E[|X_t-X^n_t|]\le \int_0^t (L_b+L_\mu(c,\gamma)) \E[|X_r-X^n_{r}|]dr+ \frac{C_T}n$, and then
\begin{equation}\label{cv_Euler} \E[|X_t-X^n_t|]\le \frac{C_T}n \exp((L_b+L_\mu(c,\gamma))t), \ t\in [0,T],
\end{equation}
by Gronwall's lemma. This gives $W_1(\cL(X^n_t),f_t)\underset{n\to \infty}\to 0$.
On the other hand, we get by using Lemma~\ref{STABILITY} that
\begin{align*}W_1(\cL(X^n_{t_{k+1}}),\Theta^n_{0,t_{k+1}}(\rho))\le W_1(\cL(X^n_{t_{k}}),\Theta^n_{0,t_{k}}(\rho))&\left(1+2\frac{L_\mu(c,\gamma)+L_b}{n} \right)\\&+L_\mu(c,\gamma)\int_{t_k}^{t_{k+1}} W_1( f_t,\Theta^n_{0,t_{k}}(\rho))dt ,
\end{align*}
where $\Theta^n_{0,t}$ is defined by~\eqref{def_theta_n}. For $t_{k+1}\le T$ and $t\in[t_k,t_{k+1}]$, we have $W_1( f_t,\Theta^n_{0,t_{k}}(\rho))\le W_1( f_t,f_{t_k})+W_1( f_{t_k},\Theta^n_{0,t_{k}}(\rho))\le \frac{C_T}{n}$ for some constant $C_T$ by using~\eqref{lipsch_conti} and~\eqref{cv_Euler}. Therefore, we get for $t_{k+1}\le T$ that
$$W_1(\cL(X^n_{t_{k+1}}),\Theta^n_{0,t_{k+1}}(\rho))\le W_1(\cL(X^n_{t_{k}}),\Theta^n_{0,t_{k}}(\rho))\left(1+2\frac{L_\mu(c,\gamma)+L_b}{n} \right)+\frac{C_T}{n^2},$$
for some constant $C_T>0$. Since $\cL(X^n_0)=\Theta^n_{0,0}(\rho)=\rho$, we get for $t_k \in [0,T]$:
$$ W_1(\cL(X^n_{t_{k}}),\Theta^n_{0,t_{k}}(\rho)) \le \frac{C_T}{n^2} \frac{\left(1+2\frac{L_\mu(c,\gamma)+L_b}{n} \right)^k -1}{2\frac{L_\mu(c,\gamma)+L_b}{n}} \le \frac{C_T}{2(L_\mu(c,\gamma)+L_b)n} \exp( 2(L_\mu(c,\gamma)+L_b)T).$$
Since $T>0$ is arbitrary, we obtain by using this bound together with~\eqref{lipW1_eul} and Theorem~\ref{flow} that $W_1(\cL(X^n_{t}),\theta_{0,t}(\rho))\underset{n\to \infty} \to 0$ for any~$t\ge 0$. This shows that $f_t=\theta_{0,t}(\rho)$ since we have already proven that $W_1(\cL(X^n_{t}),f_t)\underset{n\to \infty} \to 0$.
\end{proof}
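To illustrate the $O(1/n)$ rate~\eqref{cv_Euler} obtained above by Gronwall's lemma, here is a small Python sketch on a jump-free toy example (our choice for illustration only, not the coefficients of the text): $b(x,\rho)=\int v\,\rho(dv)-x$ and $Q=0$, so each particle solves $x'(t)=m-x(t)$ with constant mean $m$.

```python
import numpy as np

# Euler scheme for the toy McKean-Vlasov dynamics b(x, rho) = mean(rho) - x,
# Q = 0 (an assumption for this sketch, not the coefficients of the text).
def euler_error(n, T=1.0, N=50):
    x0 = np.linspace(-1.0, 1.0, N)      # deterministic initial cloud
    m = x0.mean()                        # the mean is preserved by the scheme
    x = x0.copy()
    h = T / n
    for _ in range(n):
        x = x + (x.mean() - x) * h       # Euler step, empirical mean as the measure
    exact = m + (x0 - m) * np.exp(-T)    # exact flow of x'(t) = m - x(t)
    return np.abs(x - exact).max()

err10, err20 = euler_error(10), euler_error(20)
```

Doubling $n$ roughly halves the error, in line with the $C_T/n$ bound of~\eqref{cv_Euler}.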
\section{Particle system approximation} \label{particles}
Particle systems have long been used to show existence results for nonlinear SDEs of McKean-Vlasov and Boltzmann type, see e.g. Sznitman~\cite{Sznitman} or M\'el\'eard~\cite{Meleard}. Formally, the interacting particle system associated with equation~\eqref{Prob_rep} can be written as follows:
$$X^i_t=X^i_0+\int_0^tb\left(X^i_r,\frac 1N \sum_{j=1}^N\delta_{X^j_{r-}}\right)dr+\int_0^t\int_{\R^d\times E \times \R_+}Q\left(v,z,u,X^i_{r-},\frac 1N \sum_{j=1}^N\delta_{X^j_{r-}}\right)N^i(dv,dz,du,dr),$$
where $N^i(dv,dz,du,dr)$, $i=1,\dots,N$, are independent Poisson point measures with intensity $\left(\frac 1N \sum_{j=1}^N\delta_{X^j_{r-}}(dv)\right)\mu(dz)dudr$. In this section, we do not discuss this interacting particle system itself, but focus on its discretization.
More precisely, we are interested in the approximation of the operator $\Theta_{s,t}$ defined by~\eqref{W3'} and of the corresponding Euler scheme. Particle systems then give a tool to approximate~$\Theta_{s,t}$ and thus the flow $\theta_{s,t}$ defined by Theorem~\ref{flow}. Throughout this section,
we will work with the space $\mathcal{P}(\mathcal{P}_{1}({\mathbb{R}}^{d}))$ of probability measures on $\mathcal{P}_{1}({\mathbb{R}}^{d})$ (with the Borel $\sigma$-field associated to the
distance $W_{1}$). We denote by $\mathcal{P}_{1}(\mathcal{P}_{1}({\mathbb{R}}^{d}))$ the space of probability measures $\eta \in \mathcal{P}(\mathcal{P}_{1}({\mathbb{R}}^{d}))$ such that
\begin{equation*}
\int_{\mathcal{P}_{1}({\mathbb{R}}^{d})}W_{1}(\mu ,\delta _{0})\eta (d\mu
)<\infty .
\end{equation*}
On $\mathcal{P}_{1}(\mathcal{P}_{1}({\mathbb{R}}^{d}))$, we take the
Wasserstein distance
\begin{eqnarray*}
\mathcal{W}_{1}(\eta _{1},\eta _{2}) &=&\inf_{\pi \in \Pi (\eta _{1},\eta_{2})}\int_{\mathcal{P}_{1}({\mathbb{R}}^{d})\times \mathcal{P}_{1}({\mathbb{R}}^{d})}W_{1}(\mu ,\nu )\pi (d\mu ,d\nu ) \\
&=&\sup_{L(\Phi )\leq 1}\left\vert \int_{\mathcal{P}_{1}({\mathbb{R}}^{d})}\Phi (\mu )\eta _{1}(d\mu )-\int_{\mathcal{P}_{1}({\mathbb{R}}^{d})}\Phi (\mu )\eta _{2}(d\mu )\right\vert ,
\end{eqnarray*}
where $\Pi (\eta _{1},\eta _{2})$ is the set of probability measures on
$\mathcal{P}_{1}({\mathbb{R}}^{d})\times \mathcal{P}_{1}({\mathbb{R}}^{d})$
with marginals $\eta _{1}$ and $\eta _{2}$, and $L(\Phi )$ is the Lipschitz
constant of $\Phi$, so that $\left\vert \Phi (\mu )-\Phi (\nu )\right\vert
\leq L(\Phi )W_{1}(\mu ,\nu )$. Before going on, we list some basic
properties of $\mathcal{W}_{1}$ which will be used in the following. First,
we notice that $\Pi (\eta ,\delta _{\mu })=\{\eta \otimes \delta _{\mu }\}$
(the product of $\eta $ and $\delta _{\mu }$ is the only
probability measure on the product space which has the marginals $\eta $ and
$\delta _{\mu }$). As an immediate consequence, we have
$$\mathcal{W}_1(\eta,\delta_\mu)=\int_{\mathcal{P}_{1}({\mathbb{R}}^{d})}W_{1}(\nu ,\mu)\eta (d\nu )$$
and $\mathcal{W}_{1}(\delta _{\mu},\delta _{\nu })=W_{1}(\mu ,\nu )$. Another fact, used in the following, is
that for every $\eta \in \mathcal{P}_{1}(\mathcal{P}_{1}({\mathbb{R}}^{d}))$ and $\mu _{0}\in \mathcal{P}_{1}({\mathbb{R}}^{d})$,
\begin{eqnarray}
\int_{\mathcal{P}_{1}({\mathbb{R}}^{d})}\left\vert \int_{{\mathbb{R}}^{d}}f(x)\nu (dx)-\int_{{\mathbb{R}}^{d}}f(x)\mu _{0}(dx)\right\vert \eta
(d\nu ) &\leq &L(f)\int_{\mathcal{P}_{1}({\mathbb{R}}^{d})}W_{1}(\nu ,\mu_{0})\eta (d\nu ) \label{chain1b} \\
&=&L(f)\mathcal{W}_{1}(\eta ,\delta _{\mu _{0}}). \notag
\end{eqnarray}
The main object in this section is a random vector
$X=(X^{1},\dots,X^{N})$, $X^{i}\in {\mathbb{R}}^{d}$, $i=1,\dots,N$, where the
dimension $N$ is given (fixed). We assume that $\E(\left\vert
X^{i}\right\vert )<\infty $ and we associate the (random) empirical measure
on ${\mathbb{R}}^{d}$
\begin{equation*}
\widehat{\rho }(X)(dv)=\frac{1}{N}\sum_{i=1}^{N}\delta _{X^{i}}(dv).
\end{equation*}
Notice that $\widehat{\rho }(X)$ is a random variable with values in
$\mathcal{P}_{1}({\mathbb{R}}^{d})$, so that the law $\mathcal{L}(\widehat{\rho}(X))\in \mathcal{P}(\mathcal{P}_{1}({\mathbb{R}}^{d}))$ and, for every $\Phi
:\mathcal{P}_{1}({\mathbb{R}}^{d})\rightarrow {\mathbb{R}}_{+}$,
\begin{equation*}
\int_{\mathcal{P}_{1}({\mathbb{R}}^{d})}\Phi (\mu ) \mathcal{L}(\widehat{\rho }(X))(d\mu )={\mathbb{E}}(\Phi (\widehat{\rho }(X))).
\end{equation*}
In particular, we have
\begin{eqnarray}
\mathcal{W}_{1}(\mathcal{L}(\widehat{\rho }(X)),\mathcal{L}(\widehat{\rho}(Y))) &=&\sup_{L(\Phi )\leq 1}\left\vert {\mathbb{E}}(\Phi (\widehat{\rho}(X)))-{\mathbb{E}}(\Phi (\widehat{\rho }(Y)))\right\vert \label{chain1a} \\
&\leq &{\mathbb{E}}(W_{1}(\widehat{\rho }(X),\widehat{\rho }(Y))) \notag \\
&\leq &\frac{1}{N}\sum_{i=1}^{N}{\mathbb{E}}(\left\vert
X^{i}-Y^{i}\right\vert ). \notag
\end{eqnarray}
This also proves, by taking $Y^{i}=0$, that $\mathcal{L}(\widehat{\rho}(X))\in \mathcal{P}_{1}(\mathcal{P}_{1}({\mathbb{R}}^{d}))$.
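As a quick numerical aside (an illustration, not part of the proofs), in dimension $d=1$ the Wasserstein distance between two empirical measures with $N$ atoms each is attained by matching the sorted samples, while $(X^{i},Y^{i})_{i}$ is just one admissible coupling. The pathwise bound $W_{1}(\widehat{\rho }(X),\widehat{\rho }(Y))\leq \frac{1}{N}\sum_{i}|X^{i}-Y^{i}|$ behind the last inequality can then be checked directly in Python:

```python
import numpy as np

# d = 1: the optimal coupling of two N-atom empirical measures matches
# sorted samples; the coupling (X^i, Y^i)_i gives an upper bound.
rng = np.random.default_rng(1)
N = 1000
X = rng.normal(size=N)
Y = X + rng.uniform(-1.0, 1.0, size=N)

w1_emp = np.abs(np.sort(X) - np.sort(Y)).mean()  # W_1(rho_hat(X), rho_hat(Y))
bound = np.abs(X - Y).mean()                     # (1/N) sum_i |X^i - Y^i|
```

The inequality `w1_emp <= bound` holds for any realization, by optimality of the sorted matching in dimension one.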
In the following, we consider an initial vector $X_{0}$ and assume that
the components $X_{0}^{1},\dots,X_{0}^{N}$ are identically distributed; we
denote by $\rho \in \mathcal{P}_{1}({\mathbb{R}}^{d})$ the common law:
$X_{0}^{i}\sim \rho$, $i=1,\dots,N$. We consider the uniform grid
$s=s_{0}<\dots<s_{n}=t$, $s_{i}=s+\frac{i}{n}(t-s)$, and we construct two
sequences $X_{k}$ and $\overline{X}_{k}$, $k=0,1,\dots,n$, in the following way.
We start with $\overline{X}_{0}=X_0$. The sequence $X_{k}$ (respectively $\overline{X}_{k}$) is
constructed by using the empirical measures $\widehat{\rho }(X_{k})$
(respectively the measures $\Theta_{s,s_k}^n(\rho):=\Theta_{s_{k-1},s_k}\dots \Theta_{s,s_1}(\rho)$ with $\Theta_{s,t}(\rho)$ defined by~\eqref{W3'}), and we define recursively:
\begin{align}
X_{k+1}^{i} =&X_{k}^{i}+b(X_{k}^{i},\widehat{\rho }(X_{k}))(s_{k+1}-s_{k}) \label{chain5} \\
&+\int_{s_{k}}^{s_{k+1}}\int_{{\mathbb{R}}^{d}\times E\times {\mathbb{R}}_{+}}Q(v,z,u,X_{k}^{i},\widehat{\rho }(X_{k}))N_{\widehat{\rho}(X_{k})}^{i}(dv,dz,du,dr), \notag\\
\overline{X}_{k+1}^{i} =&\overline{X}_{k}^{i}+b(\overline{X}_{k}^{i},\Theta_{s,s_k}^n(\rho))(s_{k+1}-s_{k})\label{chain6}\\&+\int_{s_{k}}^{s_{k+1}}\int_{{\mathbb{R}}^{d}\times E\times {\mathbb{R}}_{+}}Q(v,z,u,\overline{X}_{k}^{i},\Theta_{s,s_k}^n(\rho))N_{\Theta_{s,s_k}^n(\rho)}^{i}(dv,dz,du,dr),\notag
\end{align}
where $N_{\widehat{\rho }(X_{k})}^{i}(dv,dz,du,dr)$, $i=1,\dots ,N$ (resp.
$N_{\Theta_{s,s_k}^n(\rho)}^{i}(dv,dz,du,dr)$) are Poisson point measures
that are independent of each other conditionally on $X_{k}$ (resp. $\overline{X}_{k}$), with intensity $\widehat{\rho }(X_{k})(dv)\mu (dz)dudr$ (resp. $\Theta_{s,s_k}^n(\rho)(dv)\mu (dz)dudr$). Let us observe that the common law of $\overline{X}_{k}^{i}$, $i=1,\dots,N$, is
\begin{equation}
\cL (\overline{X}^i_{k})=\Theta _{s,s_{k}}^{n}(\rho ). \label{chain7}
\end{equation}
Note that $\overline{X}^1_{k},\dots,\overline{X}^N_{k}$ are independent.
\begin{theorem}
\label{theorem_approx} Assume that $(\mathbf{A})$ holds true and that
$X_{0}^{i}$, $i=1,\dots,N$, are independent with law $\rho\in \mathcal{P}_1({\mathbb{R}}^d)$. We assume that $M_q=\left(\int_{{\mathbb{R}}^d} |x|^q\rho(dx)\right)^{1/q}<\infty$ with $q>\frac{d}{d-1}\wedge 2$. We define
\begin{equation*}
V_N=1_{d=1} N^{-1/2}+1_{d=2} N^{-1/2}\log(1+N)+1_{d\ge 3}N^{-1/d}.
\end{equation*}
Then there exists a constant~$C$ depending on $d$, $q$, $(b,c,\gamma)$ and
$(t-s)$ such that for every Lipschitz function $f:{\mathbb{R}}^d\to {\mathbb{R}}$ with $L(f)\leq 1$,
\begin{equation}
{\mathbb{E}}\left(\left\vert \frac{1}{N}\sum_{i=1}^{N}f(X_{n}^{i})-\int_{{\mathbb{R}}^{d}}f(x)\Theta _{s,s_{n}}^{n}(\rho )(dx)\right\vert \right)\leq
C M_q V_N. \label{chain8}
\end{equation}
Besides, we have the propagation of chaos in Wasserstein distance:
$$W_1(\cL(X_n^1,\dots,X_n^m),\Theta _{s,s_{n}}^{n}(\rho )\otimes\dots\otimes \Theta _{s,s_{n}}^{n}(\rho ))\le m C M_q V_N \underset{N\to \infty}{\to}0.$$
Furthermore, if $\theta_{s,t}$ denotes the flow given by Theorem~\ref{flow}, we
have
\begin{equation}
{\mathbb{E}}\left(\left\vert \frac{1}{N}\sum_{i=1}^{N}f(X_{n}^{i})-\int_{{\mathbb{R}}^{d}}f(x)\theta _{s,t}(\rho )(dx)\right\vert \right)\leq CM_qV_N+
\frac{C}{n}. \label{chain8'}
\end{equation}
\end{theorem}
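As a numerical aside (not part of the statement), the rate $V_N$ is the Fournier--Guillin rate for the $W_1$ distance between an empirical measure and its law, and it can be observed directly for $d=1$: if $\rho$ is the uniform law on $[0,1]$, then $W_1(\widehat\rho,\rho)=\int_0^1|F_N(x)-x|\,dx$, where $F_N$ is the empirical cdf. The Python sketch below approximates this integral on a fine grid:

```python
import numpy as np

rng = np.random.default_rng(2)

# W_1 between the empirical measure of N i.i.d. U[0,1] samples and the
# uniform law, computed as int_0^1 |F_N(x) - x| dx on a fine grid.
def w1_to_uniform(N):
    u = np.sort(rng.uniform(size=N))
    grid = np.linspace(0.0, 1.0, 200001)
    F_N = np.searchsorted(u, grid, side="right") / N  # empirical cdf on the grid
    return np.abs(F_N - grid).mean()                  # Riemann approximation

w1_small, w1_large = w1_to_uniform(100), w1_to_uniform(10000)  # W_1 shrinks as N grows
```

Increasing $N$ by a factor $100$ shrinks the distance by roughly a factor $10$, consistent with $V_N=N^{-1/2}$ in dimension one.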
To prove Theorem~\ref{theorem_approx}, we introduce an intermediate sequence
$\widetilde{X}_{k}^{i}$ defined as follows. On the first time step, we
define
\begin{equation}
\widetilde{X}_{1}^{i}=X_{0}^{i}+b(X_{0}^{i},\rho
)(s_{1}-s_{0})+\int_{s_{0}}^{s_{1}}\int_{{\mathbb{R}}^{d}\times E\times
{\mathbb{R}}_{+}}Q(v,z,u,X_{0}^{i},\rho )N_{\rho }^{i}(dv,dz,du,dr),
\label{chain2'}
\end{equation}
so that $\widetilde{X}_{1}^{i}=\overline{X}_{1}^{i}$. Then, for the next
time steps, we define for $k\geq 1$,
\begin{equation*}
\widetilde{X}_{k+1}^{i}=\widetilde{X}_{k}^{i}+b(\widetilde{X}_{k}^{i},\widehat{\rho }(\widetilde{X}_{k}))(s_{k+1}-s_{k})+\int_{s_{k}}^{s_{k+1}}\int_{{\mathbb{R}}^{d}\times E\times {\mathbb{R}}_{+}}Q(v,z,u,\widetilde{X}_{k}^{i},\widehat{\rho }(\widetilde{X}_{k}))N_{\widehat{\rho }(\widetilde{X}_{k})}^{i}(dv,dz,du,dr),
\end{equation*}
where $N_{\widehat{\rho }(\widetilde{X}_{k})}^{i}(dv,dz,du,dr)$ is a Poisson
point measure with intensity $\widehat{\rho }(\widetilde{X}_{k})(dv)\mu (dz)dudr$.
We stress that for $k\geq 1$, the intensity of the Poisson point measures
is, as for $X_{k+1}^{i}$, given by the empirical measure of the vector constructed at
the previous step. However, since $X_{1}\neq \widetilde{X}_{1}$, the two
chains are different.
\begin{lemma}
\label{lem_Xtilde} Let assumption ($\mathbf{A}$) hold. We assume that the
components of $X_{0}=(X_{0}^{1},\dots,X_{0}^{N})$ have the common distribution
$\rho \in \mathcal{P}_{1}({\mathbb{R}}^{d})$. Then, we have
\begin{align*}
&\mathcal{W}_{1}(\mathcal{L}(\widehat{\rho }(X_{n})),\mathcal{L}(\widehat{\rho }(\widetilde{X}_{n})))\leq \frac{e^{2L(t-s)}L(t-s)}{n}\mathcal{W}_{1}(\mathcal{L}(\widehat{ \rho }(X_{0})),\delta _{\rho }),\\
&W_1\left(\cL(X^1_n,\dots,X^m_n), \cL(\widetilde{X}^1_n,\dots,\widetilde{X}^m_n)\right)\leq m\frac{e^{2L(t-s)}L(t-s)}{n}
\mathcal{W}_{1}(\mathcal{L}(\widehat{\rho }(X_{0})),\delta _{\rho }),
\end{align*}
with $L=L_{b}+2L_{\mu}(c,\gamma )$.
\end{lemma}
\begin{proof}
\textbf{Step 1.} We first construct recursively the sequences $x_{k},\widetilde{x}_{k}$, $k=1,\dots,n$, in the following way. We take $\pi _{0}(dv,d\overline{v})$ to be the optimal coupling of $\rho $ and of $\widehat{\rho}(X_{0})$ and we take $\tau _{0}:[0,1]\rightarrow {\mathbb{R}}^{d}\times {\mathbb{R}}^{d}$ that represents $\pi _{0}$. In particular, if $\tau_{0}=(\tau _{0}^{1},\tau _{0}^{2})$, then $\tau _{0}^{1}$ represents $\widehat{\rho }(X_{0})$ and $\tau _{0}^{2}$ represents $\rho$. We
note that the optimality of $\pi _{0}$ gives $W_{1}(\rho ,\widehat{\rho }(X_{0}))=\int_{0}^{1}|\tau _{0}^{2}(u)-\tau _{0}^{1}(u)|du$ and thus
\begin{equation}
\mathcal{W}_{1}(\delta _{\rho},\mathcal{L}(\widehat{\rho }(X_{0})))={\mathbb{E}}[W_{1}(\rho ,\widehat{\rho }(X_{0}))]={\mathbb{E}}\left[
\int_{0}^{1}|\tau _{0}^{2}(u)-\tau _{0}^{1}(u)|du\right] . \label{calW1}
\end{equation}
Then we define
\begin{eqnarray*}
x_{1}^{i} &=&X_{0}^{i}+b(X_{0}^{i},\widehat{\rho }(X_{0}))(s_{1}-s_{0})+\int_{s_{0}}^{s_{1}}\int_{[0,1]\times E\times {\mathbb{R}}_{+}}Q(\tau_{0}^{1}(w),z,u,X_{0}^{i},\widehat{\rho }(X_{0}))N^{i}(dw,dz,du,dr), \\
\widetilde{x}_{1}^{i} &=&X_{0}^{i}+b(X_{0}^{i},\rho
)(s_{1}-s_{0})+\int_{s_{0}}^{s_{1}}\int_{[0,1]\times E\times {\mathbb{R}}_{+}}Q_{\tau _{0}^{1}(w),X_{0}^{i}}(\tau _{0}^{2}(w),z,u,X_{0}^{i},\rho
)N^{i}(dw,dz,du,dr),
\end{eqnarray*}
where $N^{i}$ is a Poisson point measure with intensity $1_{[0,1]}(w)dw\mu
(dz)dudr$. We also assume that the Poisson point measures $N^{i}$, $i=1,\dots,N$,
are independent. Notice that $x_{1}$ has the same law as $X_{1}$ and
$\widetilde{x}_{1}$ has the same law as $\widetilde{X}_{1}$ by (\ref{h6e}).
Then, for $k\geq 1$, if $x_{k},\widetilde{x}_{k}$ are given, we construct
$x_{k+1},\widetilde{x}_{k+1}$ as follows. We consider $\pi _{k}(dv,d\overline{v})$ an optimal coupling of $\widehat{\rho }(x_{k})$ and $\widehat{\rho }(\widetilde{x}_{k})$ and we take $\tau _{k}:[0,1]\rightarrow {\mathbb{R}}^{d}\times {\mathbb{R}}^{d}$ that represents $\pi _{k}$.
Then we define
\begin{eqnarray*}
x_{k+1}^{i} &=&x_{k}^{i}+b(x_{k}^{i},\widehat{\rho }(x_{k}))(s_{k+1}-s_{k})+\int_{s_{k}}^{s_{k+1}}\int_{[0,1]\times E\times {\mathbb{R}}_{+}}Q(\tau_{k}^{1}(w),z,u,x_{k}^{i},\widehat{\rho }(x_{k}))N^{i}(dw,dz,du,dr), \\
\widetilde{x}_{k+1}^{i} &=&\widetilde{x}_{k}^{i}+b(\widetilde{x}_{k}^{i},\widehat{\rho }(\widetilde{x}_{k}))(s_{k+1}-s_{k})+\int_{s_{k}}^{s_{k+1}}\int_{[0,1]\times E\times {\mathbb{R}}_{+}}Q_{\tau_{k}^{1}(w),x_{k}^{i}}(\tau _{k}^{2}(w),z,u,\widetilde{x}_{k}^{i},\widehat{\rho }(\widetilde{x}_{k}))N^{i}(dw,dz,du,dr).
\end{eqnarray*}
Using again (\ref{h6e}), we get by induction on~$k$ that $x_{k}$ has the
same law as $X_{k}$ and $\widetilde{x}_{k}$ has the same law as $\widetilde{X}_{k}$.
\textbf{Step 2.} Suppose that $k\geq 1$. We now use Assumptions~\eqref{lipb}
and~\eqref{h6d} to get
\begin{eqnarray*}
{\mathbb{E}}\left\vert x_{k+1}^{i}-\widetilde{x}_{k+1}^{i}\right\vert &\leq
&{\mathbb{E}}\left\vert x_{k}^{i}-\widetilde{x}_{k}^{i}\right\vert
+(L_{b}+L_{\mu }(c,\gamma ))({\mathbb{E}}\left\vert x_{k}^{i}-\widetilde{x}_{k}^{i}\right\vert +\E W_{1}(\widehat{\rho }(x_{k}),\widehat{\rho }(\widetilde{x}_{k})))(s_{k+1}-s_{k}) \\
&&+L_{\mu }(c,\gamma ){\mathbb{E}}\int_{s_{k}}^{s_{k+1}}\int_{0}^{1}\left\vert \tau _{k}^{1}(w)-\tau _{k}^{2}(w)\right\vert dwds.
\end{eqnarray*}
Since
\begin{equation*}
\int_{0}^{1}\left\vert \tau _{k}^{1}(w)-\tau _{k}^{2}(w)\right\vert
dw=W_{1}(\widehat{\rho }(x_{k}),\widehat{\rho }(\widetilde{x}_{k}))\leq
\frac{1}{N}\sum_{j=1}^{N}\left\vert x_{k}^{j}-\widetilde{x}_{k}^{j}\right\vert ,
\end{equation*}
we obtain
\begin{eqnarray}\label{ineq_rec}
{\mathbb{E}}\left\vert x_{k+1}^{i}-\widetilde{x}_{k+1}^{i}\right\vert &\leq
&{\mathbb{E}}\left\vert x_{k}^{i}-\widetilde{x}_{k}^{i}\right\vert
[1+(L_{b}+L_{\mu }(c,\gamma ))(s_{k+1}-s_{k})] \\
&&+(L_{b}+2L_{\mu }(c,\gamma ))(s_{k+1}-s_{k})\frac{1}{N}\sum_{j=1}^{N}{\mathbb{E}}|x_{k}^{j}-\widetilde{x}_{k}^{j}|. \notag
\end{eqnarray}
Summing over $i=1,\dots,N$, we get
\begin{equation*}
\frac{1}{N}\sum_{i=1}^{N}{\mathbb{E}}\left\vert x_{k+1}^{i}-\widetilde{x}_{k+1}^{i}\right\vert \leq \frac{1}{N}\sum_{i=1}^{N}{\mathbb{E}}\left\vert
x_{k}^{i}-\widetilde{x}_{k}^{i}\right\vert [1+2(L_{b}+2L_{\mu }(c,\gamma
))(s_{k+1}-s_{k})].
\end{equation*}
Using this inequality, we get by induction
\begin{equation*}
\frac{1}{N}\sum_{i=1}^{N}{\mathbb{E}}\left\vert x_{n}^{i}-\widetilde{x}_{n}^{i}\right\vert \leq \frac{1}{N}\sum_{i=1}^{N}{\mathbb{E}}\left\vert
x_{1}^{i}-\widetilde{x}_{1}^{i}\right\vert \left( 1+\frac{2(L_{b}+2L_{\mu}(c,\gamma ))}{n}(t-s)\right) ^{n-1}.
\end{equation*}
\textbf{Step 3.} We now go from $x_{1},\widetilde{x}_{1}$ to $X_{0}$. We have
\begin{eqnarray*}
{\mathbb{E}}\left\vert x_{1}^{i}-\widetilde{x}_{1}^{i}\right\vert &\leq
&L_{\mu }(c,\gamma )\int_{s}^{s_{1}}{\mathbb{E}}\int_{0}^{1}\left\vert \tau_{0}^{1}(w)-\tau _{0}^{2}(w)\right\vert dwds+(L_{b}+L_{\mu }(c,\gamma
))\E W_{1}(\rho ,\widehat{\rho}(X_{0}))(s_{1}-s) \\
&=&(L_{b}+2L_{\mu }(c,\gamma ))(s_{1}-s)\mathcal{W}_{1}(\delta _{\rho },\mathcal{L}(\widehat{\rho }(X_{0})))
\end{eqnarray*}
by using~\eqref{calW1} for the last equality, so that we get by summing over
$i=1,\dots,N$,
\begin{equation*}
\frac{1}{N}\sum_{i=1}^{N}{\mathbb{E}}\left\vert x_{1}^{i}-\widetilde{x}_{1}^{i}\right\vert \leq (L_{b}+2L_{\mu }(c,\gamma ))\frac{t-s}{n}\mathcal{W}_{1}(\delta _{\rho },\mathcal{L}(\widehat{\rho }(X_{0}))).
\end{equation*}
We combine this with the previous inequality and obtain
\begin{eqnarray*}
\frac{1}{N}\sum_{i=1}^{N}\E\left\vert x_{n}^{i}-\widetilde{x}_{n}^{i}\right\vert &\leq &\mathcal{W}_{1}(\delta _{\rho },\mathcal{L}(\widehat{\rho }(X_{0})))\frac{(L_{b}+2L_{\mu }(c,\gamma ))(t-s)}{n}\left( 1+\frac{2(L_{b}+2L_{\mu }(c,\gamma ))}{n}(t-s)\right) ^{n-1} \\
&\leq &\frac{e^{2(L_{b}+2L_{\mu}(c,\gamma ))(t-s)}(L_{b}+2L_{\mu }(c,\gamma ))(t-s)}{n}\mathcal{W}_{1}(\delta _{\rho },\mathcal{L}(\widehat{\rho }(X_{0}))).
\end{eqnarray*}
We notice that the law of $(x^i_n,\widetilde{x}^i_n)_i$ is invariant under permutations of the indices~$i$. In particular, we have $\E\left\vert x_{n}^{i}-\widetilde{x}_{n}^{i}\right\vert=\E\left\vert x_{n}^{1}-\widetilde{x}_{n}^{1}\right\vert$, and therefore
$$ \sum_{i=1}^m \E\left\vert x_{n}^{i}-\widetilde{x}_{n}^{i}\right\vert \le m \frac{e^{2(L_{b}+2L_{\mu}(c,\gamma ))(t-s)}(L_{b}+2L_{\mu }(c,\gamma ))(t-s)}{n}\mathcal{W}_{1}(\delta _{\rho },\mathcal{L}(\widehat{\rho }(X_{0}))).$$
\textbf{Step 4.} Since the law of $X_{n}$ coincides with the law of $x_{n}$,
it follows that $\mathcal{L}(\widehat{\rho }(X_{n}))=\mathcal{L}(\widehat{\rho }(x_{n}))$ and $\cL(X^1_n,\dots,X^m_n)=\cL(x^1_n,\dots,x^m_n)$. The same is true for $\widetilde{X}_{n}$ and $\widetilde{x}_{n}$. So, we have by (\ref{chain1a})
\begin{eqnarray*}
\mathcal{W}_{1}\left(\mathcal{L}(\widehat{\rho }(X_{n})),\mathcal{L}(\widehat{\rho }(\widetilde{X}_{n}))\right) &=&\mathcal{W}_{1}(\mathcal{L}(\widehat{\rho}(x_{n})),\mathcal{L}(\widehat{\rho }(\widetilde{x}_{n}))) \\
&\leq &\frac{1}{N}\sum_{i=1}^{N}\E\left\vert x_{n}^{i}-\widetilde{x}_{n}^{i}\right\vert \\
&\leq &\frac{e^{2(L_{b}+2L_{\mu}(c,\gamma ))(t-s)}(L_{b}+2L_{\mu }(c,\gamma ))(t-s)}{n}\mathcal{W}_{1}(\delta _{\rho},\mathcal{L}(\widehat{\rho}(X_{0}))).
\end{eqnarray*}
We get the second inequality of the lemma by using $W_1(\cL(X^1_n,\dots,X^m_n),\cL(\widetilde{X}^1_n,\dots,\widetilde{X}^m_n))\le\E\left(\sum_{i=1}^m|x^i_n-\widetilde{x}^i_n|\right)$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theorem_approx}]
We use a Lindeberg-type argument. In order to pass
from the sequence $X_{k}$ to the sequence $\overline{X}_{k}$, we construct
intermediate sequences as follows. Given $\kappa \in \{0,\dots,n-1\}$, we
define $X_{\kappa ,k}=\overline{X}_{k}$ for $k\leq \kappa $ and, for $k\geq \kappa $, we define $X_{\kappa ,k}$ by the recurrence formula (\ref{chain5}). So the construction for $k\leq \kappa $ employs the intensity measure
based on the law $\Theta _{s,s_{k}}^{n}(\rho )$, while for $k>\kappa $ we
use the empirical measure. In particular, $X_{\kappa ,\kappa }^{i}$, $i=1,\dots,N$,
are independent and have the common distribution $\Theta _{s,s_{\kappa}}^{n}(\rho)$. Then we write
\begin{equation*}
\mathcal{W}_{1}(\mathcal{L}(\widehat{\rho }(X_{n})),\mathcal{L}(\widehat{\rho }(\overline{X}_{n})))\leq \sum_{\kappa =0}^{n-1}\mathcal{W}_{1}(\mathcal{L}(\widehat{\rho }(X_{\kappa ,n})),\mathcal{L}(\widehat{\rho}(X_{\kappa +1,n}))).
\end{equation*}
Let us compare the sequences $X_{\kappa ,k}$ and $X_{\kappa +1,k}$. Both
sequences start with $\overline{X}_{\kappa }$ at time $s_\kappa$ and then, in the following
step, $\Theta _{s,s_{\kappa }}^{n}(\rho )$ is used to produce $X_{\kappa ,\kappa +1}$ and the empirical measure $\widehat{\rho }(\overline{X}_{\kappa })$ is used to produce $X_{\kappa +1,\kappa +1}$. Afterwards, for $k\geq \kappa +1$,
both sequences use their corresponding empirical measures. This is exactly the framework of Lemma~\ref{lem_Xtilde}, so we get
\begin{equation*}
\mathcal{W}_{1}(\mathcal{L}(\widehat{\rho }(X_{\kappa ,n})),\mathcal{L}(\widehat{\rho }(X_{\kappa +1,n})))\leq \frac{e^{(L_{b}+2L_{\mu }(c,\gamma))(t-s)}L_{\mu }(c,\gamma )(t-s)}{n}\times \mathcal{W}_1(\delta_{\Theta_{s,s_\kappa }^n(\rho)},\mathcal{L}(\widehat{\rho }(\overline{X}_{\kappa}))),
\end{equation*}
and summing over $\kappa $ we obtain
\begin{equation}
\mathcal{W}_{1}(\mathcal{L}(\widehat{\rho }(X_{n})),\mathcal{L}(\widehat{\rho }(\overline{X}_{n})))\leq \frac{e^{(L_{b}+2L_{\mu }(c,\gamma))(t-s)}L_{\mu }(c,\gamma )(t-s)}{n}\times \sum_{\kappa=0}^{n-1} \mathcal{W}_1(\delta_{\Theta_{s,s_\kappa }^n(\rho)},\mathcal{L}(\widehat{\rho }(\overline{X}_{\kappa}))).
\end{equation}
It is well known that the moments of order~$q$ are preserved by the Euler scheme, thanks to~\eqref{growthb} and~\eqref{h6a}. We can therefore use Theorem~1 of Fournier and Guillin~\cite{[FG1]} and get $\mathcal{W}_1(\delta_{\Theta_{s,s_\kappa }^n(\rho)},\mathcal{L}(\widehat{\rho }(\overline{X}_{\kappa})))\le \tilde{C} M_q V_N$, leading to
$$ \mathcal{W}_{1}(\mathcal{L}(\widehat{\rho }(X_{n})),\mathcal{L}(\widehat{\rho }(\overline{X}_{n})))\leq CM_qV_N.$$
Now using (\ref{chain1b}) with $\eta =\mathcal{L}(\widehat{\rho }(X_{n}))$ and $\mu_{0}=\Theta _{s,s_{n}}^{n}(\rho )$, together with the triangle inequality and the two bounds above, we get, for every $f$ with
$L(f)\leq 1$,
\begin{equation*}
\E\left(\left\vert \frac{1}{N}\sum_{i=1}^{N}f(X_{n}^{i})-\int_{\R^{d}}f(x)\Theta
_{s,s_{n}}^{n}(\rho )(dx)\right\vert \right)\leq \mathcal{W}_{1}(\mathcal{L}(\widehat{\rho }(X_{n})),\delta _{\Theta _{s,s_{n}}^{n}(\rho
)})\leq CM_qV_N.
\end{equation*}
Then, \eqref{chain8'} is a consequence of (\ref{W7}).
Last, the propagation of chaos follows by the same arguments, since we have from Lemma~\ref{lem_Xtilde}
\begin{align*}
& W_1\left(\cL(X_{\kappa,n}^1,\dots,X_{\kappa,n}^m),\cL(X_{\kappa+1,n}^1,\dots,X_{\kappa+1,n}^m)\right)\\
&\le m \frac{e^{(L_{b}+2L_{\mu }(c,\gamma))(t-s)}L_{\mu }(c,\gamma )(t-s)}{n}\times \mathcal{W}_1(\delta_{\Theta_{s,s_\kappa }^n(\rho)},\mathcal{L}(\widehat{\rho }(\overline{X}_{\kappa}))).
\end{align*}
\end{proof}
\textbf{Approximating particle system and algorithm.}
We now briefly discuss the problem of sampling the particle system defined
by~\eqref{chain5}. To do so, we will assume that:
\begin{equation}
\mu (E)<\infty \quad \text{ and }\quad \left\vert \gamma (v,z,x)\right\vert
\leq \Gamma ,\quad \forall v,x\in {\mathbb{R}}^{d},z\in E. \label{chain12a}
\end{equation}
The approximation in a more general framework would then require truncation
procedures and a quantification of the corresponding error.
When~\eqref{chain12a} holds, the solution of~\eqref{chain5} is constructed
explicitly as follows. Let us assume that the values of $(X_{k}^{i},i\in \{1,\dots ,N\})$ have been obtained: we explain how to then construct $(X_{k+1}^{i},i\in \{1,\dots ,N\})$. We take $T_{\ell}^{i}$, $\ell \in {\mathbb{N}}$, to be the jump times of a Poisson process of
intensity $\mu (E)\times \Gamma $ and we take $Z_{\ell }^{i}\sim \frac{1}{\mu (E)}\mu (dz)$, $U_{\ell }^{i}\sim \frac{1}{\Gamma }1_{[0,\Gamma ]}(u)du$
and $\varepsilon _{\ell }^{i}$ uniformly distributed on $\{1,\dots,N\}$. For
each $i=1,\dots,N$, this set of random variables is independent. Then one
computes explicitly
\begin{equation*}
X_{k+1}^{i}=X_{k}^{i}+b(X_{k}^{i},\widehat{\rho }(X_{k}))\frac{t-s}{n}
+\sum_{s_{k}\leq T_{\ell }^{i}<s_{k+1}}Q(X_{k}^{\varepsilon _{\ell}^{i}},Z_{\ell }^{i},U_{\ell }^{i},X_{k}^{i},\widehat{\rho }(X_{k})),
\end{equation*}
which gives the desired particle system that satisfies~\eqref{chain8'}.
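For concreteness, one Euler step of this construction can be sketched in code. The sketch below uses toy one-dimensional coefficients $b$, $c$ and a rate $\gamma$ bounded by $\Gamma$ as in~\eqref{chain12a}; these specific coefficients, and the use of the empirical mean in place of $\widehat{\rho}(X_k)$ in the drift, are illustrative assumptions and do not come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy coefficients (assumptions for illustration only):
GAMMA = 1.0   # uniform bound on the jump rate gamma, as in (chain12a)
MU_E = 2.0    # total mass mu(E); here mu is uniform on E = [0, 1] with mass 2

def b(x, m):              # drift, depending on the state and an empirical statistic m
    return -(x - m)

def gamma(v, x):          # jump rate, bounded by GAMMA
    return min(abs(v - x), GAMMA)

def c(v, z, x):           # jump amplitude
    return 0.5 * z * (v - x)

def euler_step(X, h):
    """One step X_k -> X_{k+1} of the particle system, with Poisson thinning."""
    N = len(X)
    m = X.mean()                          # stands in for rho_hat(X_k)
    X_new = X + b(X, m) * h               # drift part of the Euler scheme
    # Each particle sees Poisson(mu(E) * GAMMA * h) candidate collisions on the step.
    K = rng.poisson(MU_E * GAMMA * h, size=N)
    for i in range(N):
        for _ in range(K[i]):
            z = rng.uniform(0.0, 1.0)     # Z ~ mu / mu(E)
            u = rng.uniform(0.0, GAMMA)   # U ~ Unif[0, GAMMA]
            j = rng.integers(N)           # epsilon ~ Unif{1, ..., N}
            if u < gamma(X[j], X[i]):     # thinning: accept with probability gamma/GAMMA
                X_new[i] += c(X[j], z, X[i])
    return X_new

X = rng.normal(size=200)
for _ in range(50):
    X = euler_step(X, h=0.02)
```

Note that all candidate jumps within a step are evaluated against the state $X_k$ at the beginning of the step, as in the explicit scheme above.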
\bigskip
\section{The Boltzmann equation}\label{sec_Boltz}
\subsection{The homogeneous Boltzmann equation}
We consider the following more specific set of coefficients, which
corresponds to the Boltzmann equation with hard potential. We take $a\in (0,1)$ and we define
\begin{equation}\label{gamma_Boltz}
\gamma(v,x)=\left\vert v-x\right\vert ^{a}.
\end{equation}
Moreover we take $b:{\mathbb{R}}^{d}\rightarrow {\mathbb{R}}^{d}$ that is Lipschitz continuous (and thus satisfies (\ref{lipb}))
and $c:{\mathbb{R}}^{d}\times E\times {\mathbb{R}}^{d}\rightarrow {\mathbb{R}}^{d}$ that verifies the following
hypothesis. We assume that for every $(v,x)\in {\mathbb{R}}^{d}\times {\mathbb{R}}^{d}$ there exists a function $c_{v,x}:{\mathbb{R}}^{d}\times E \times {\mathbb{R}}^{d}\rightarrow \R$ such that for every $v',x'\in {\mathbb{R}}^{d}$ and $\varphi \in C_{b}^{1}({\mathbb{R}}^{d})$,
\begin{equation}
\int_{E}\varphi (c(v',z,x')) \mu (dz)=\int_{E}\varphi
(c_{v,x}(v',z,x'))\mu (dz). \label{H1}
\end{equation}
Notice that, since $\gamma $ does not depend on $z$, this guarantees that $Q(v,z,u,x):=c(v,z,x)1_{\{u<\gamma (v,x)\}}$ verifies (\ref{h6e}). Then, we
assume that there exists some function $\alpha :E\rightarrow {\mathbb{R}}_{+}$ such that $\int_{E}\alpha (z)\mu (dz)<\infty $,
\begin{equation}
\left\vert c(v,z,x)\right\vert \leq \alpha (z)\left\vert v-x\right\vert
\quad\text{and}\quad \left\vert c(v,z,x)-c_{v,x}(v',z,x')\right\vert \leq \alpha
(z)(\left\vert v-v'\right\vert +\left\vert x-x'\right\vert ). \label{H2}
\end{equation}
Notice that $Q$ may not satisfy the sublinear growth property~(\ref{h6a}) because~\eqref{H2} only ensures that $\int_{E\times \R_+}|Q(v,z,u,x)| \mu(dz)du = \gamma(v,x) \int_{E}|c(v,z,x)| \mu(dz)\le\int_E \alpha(z)\mu(dz) |v-x|^{1+a}$. Thus~\eqref{h6d} may not hold, so our results do not apply directly to these coefficients. In order to fit our framework, we will use a truncation procedure. For $\Gamma \geq 1$ we define $H_{\Gamma }(v)=v\times \frac{\left\vert v\right\vert \wedge \Gamma }{\left\vert v\right\vert }$ (with $H_\Gamma(0)=0$) and we notice that $\left\vert H_{\Gamma}(v)\right\vert \leq \Gamma $ and $\left\vert H_{\Gamma }(v)-H_{\Gamma}(w)\right\vert \leq \left\vert v-w\right\vert$. Then we define
\begin{align}
&\gamma _{\Gamma }(v,x) =\gamma (H_{\Gamma }(v),H_{\Gamma }(x))=\left\vert
H_{\Gamma }(v)-H_{\Gamma }(x)\right\vert ^{a}, \notag\\
&c_{\Gamma }(v,z,x) =c(H_{\Gamma }(v),z,H_{\Gamma }(x)), \quad c_{\Gamma,(v,x)}(v',z,x')=c_{H_\Gamma(v),H_\Gamma(x)}(H_\Gamma(v'),z,H_\Gamma(x')),\label{H2a}\\
&Q_{\Gamma}(v,z,u,x)=Q(H_{\Gamma }(v),z,u,H_{\Gamma }(x)), \quad Q_{\Gamma,(v,x)}(v',z,u,x')=1_{u<\gamma_\Gamma(v',x')}c_{\Gamma,(v,x)}(v',z,x'). \notag
\end{align}
\begin{lemma}
\label{TR} Let $b:\R^d\to \R^d$ be Lipschitz continuous and assume~\eqref{H1} and~\eqref{H2}. Then, the triplet $(b,c,\gamma)$ satisfies~\eqref{lipb} and \eqref{h6e}.
Besides, for every $\Gamma \geq 1,$ the triplet $(b,c_{\Gamma },\gamma_{\Gamma })$ satisfies~$\mathbf{(A)}$ with $L_{\mu}(c_\Gamma,\gamma_\Gamma )=6 \Gamma ^{a}\int_E \alpha(z)\mu(dz)$.
\end{lemma}
The proof of this Lemma is postponed to Appendix~\ref{App_proofs}. Thanks to this result, we can then apply Theorem~\ref{flow} to construct a flow $\theta _{s,t}^{\Gamma }(\rho)$. By Theorem~\ref{Weq}, this flow solves the weak equation~\eqref{we2} associated with $\gamma_\Gamma$ and $Q_\Gamma$. Besides, by Theorem~\ref{ExistenceRepr} there exists a probabilistic representation of this solution. The natural question is then to know if $\theta _{s,t}^{\Gamma }(\rho)$ converges when $\Gamma \rightarrow \infty $. This would produce a flow that would be a natural candidate for the solution of the Boltzmann equation. We leave this issue for further research.
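The two properties of $H_\Gamma$ used above, boundedness by $\Gamma$ and $1$-Lipschitz continuity (which hold because $H_\Gamma$ is the metric projection onto the closed ball of radius $\Gamma$), are easy to check numerically. A minimal sketch:

```python
import numpy as np

def H(v, Gamma):
    """Radial truncation H_Gamma(v) = v * (|v| /\ Gamma) / |v|, with H_Gamma(0) = 0."""
    r = np.linalg.norm(v)
    if r == 0.0:
        return v
    return v * min(r, Gamma) / r

# Randomized check of |H(v)| <= Gamma and |H(v) - H(w)| <= |v - w|.
rng = np.random.default_rng(1)
Gamma = 1.5
for _ in range(10_000):
    v, w = rng.normal(size=3) * 5, rng.normal(size=3) * 5
    assert np.linalg.norm(H(v, Gamma)) <= Gamma + 1e-12
    assert np.linalg.norm(H(v, Gamma) - H(w, Gamma)) <= np.linalg.norm(v - w) + 1e-12
```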
\subsubsection*{The 3D Boltzmann equation with hard potential} \label{subsec_HP}
We now make precise the coefficients which appear in the homogeneous Boltzmann
equation in dimension three. We follow the parametrization introduced in~\cite{[FM]} and~\cite{[F1]}. For this equation, the space $E$ is $E=[0,\pi ]\times [ 0,2\pi ]$, we write $z=(\zeta ,\varphi )$ and the measure $\mu$ is defined by $\mu(dz)=\zeta ^{-(1+\nu )}d\zeta d\varphi$, for some $\nu \in (0,1)$. The coefficient $\gamma$ is given by~\eqref{gamma_Boltz}. We now define~$c$. Given a vector $X\in {\mathbb{R}}^{3}\setminus\{0\}$, one
may construct $I(X),J(X)\in {\mathbb{R}}^{3}$ such that $X\mapsto(I(X),J(X))$ is measurable and $(\frac{X}{\left\vert X\right\vert },\frac{I(X)}{\left\vert X\right\vert },\frac{J(X)}{\left\vert X\right\vert })$ is an orthonormal basis of ${\mathbb{R}}^{3}$. We define the function $\Delta
(X,\varphi )=\cos (\varphi )I(X)+\sin (\varphi )J(X)$ and then
\begin{equation}\label{def_c_3D}
c(v,(\zeta ,\varphi ),x)=-\frac{1-\cos \zeta }{2}(v-x)+\frac{\sin (\zeta )}{2}\Delta (v-x,\varphi ).
\end{equation}
The specific difficulty in this framework is that $c$ does not satisfy the
standard Lipschitz continuity property. This was circumvented by Tanaka in \cite{[T1]} (see also Lemma 2.6 in \cite{[FM]}), who proved that one may construct a measurable function $\eta :{\mathbb{R}}^{3}\times {\mathbb{R}}^{3}\rightarrow \lbrack 0,2\pi ]$ such that
\begin{equation*}
\left\vert c(v,(\zeta ,\varphi ),x)-c(v',(\zeta
,\varphi +\eta (v'-x',v-x)),x')\right\vert \leq 2\zeta (\left\vert v-v'\right\vert +\left\vert x-x'\right\vert).
\end{equation*}
This means that Hypothesis~(\ref{H2}) holds with $c_{v,x}(v',(\zeta ,\varphi) ,x')=c(v',(\zeta ,\varphi +\eta (v'-x',v-x)),x')$. This function also satisfies (\ref{H1}):
\begin{equation*}
\int_{0}^{\pi }\frac{d\zeta }{\zeta ^{1+\nu }}\int_{0}^{2\pi}f(x+c(v,(\zeta ,\varphi ),x))d\varphi =\int_{0}^{\pi }\frac{d\zeta }{\zeta ^{1+\nu }}\int_{0}^{2\pi }f(x+c(v,(\zeta ,\varphi +\eta (v-x,\overline{v}-\overline{x})),x))d\varphi,
\end{equation*}
since for every $v,x\in {\mathbb{R}}^{3}$ and $\zeta \in (0,\pi )$ the
function $\varphi \mapsto f(x+c(v,(\zeta ,\varphi ),x))$ is $2\pi$-periodic.
We are therefore indeed in the framework of Lemma~\ref{TR}.
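This parametrization admits a simple numerical sanity check. In the sketch below, the cross-product construction of $(I,J)$ is only one possible measurable choice (the text does not prescribe one); the check confirms the identity $|c(v,(\zeta,\varphi),x)|=|\sin(\zeta/2)|\,|v-x|$, which follows since $\Delta(v-x,\varphi)$ is orthogonal to $v-x$ with the same norm, and which is consistent with the bound $|c|\le \alpha(z)|v-x|$ required by~\eqref{H2}.

```python
import numpy as np

def basis(X):
    """One measurable choice of I(X), J(X) such that
    (X/|X|, I(X)/|X|, J(X)/|X|) is an orthonormal basis of R^3."""
    r = np.linalg.norm(X)
    n = X / r
    # any fixed vector not parallel to n
    e = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    i = np.cross(n, e)
    i /= np.linalg.norm(i)
    j = np.cross(n, i)
    return r * i, r * j

def Delta(X, phi):
    I, J = basis(X)
    return np.cos(phi) * I + np.sin(phi) * J

def c(v, zeta, phi, x):
    """Collision kernel of the 3D homogeneous Boltzmann equation."""
    return (-0.5 * (1.0 - np.cos(zeta)) * (v - x)
            + 0.5 * np.sin(zeta) * Delta(v - x, phi))
```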
\subsection{The Boltzmann-Enskog equation}
In this section we consider the non-homogeneous Boltzmann equation, called the Enskog equation, which has been discussed in~\cite{[Ar]}. The study of this equation was initiated in~\cite{[P]}, and more recent contributions concerning existence, uniqueness, probabilistic interpretation and particle system approximations are given in~\cite{[ARS]}, \cite{[FRS]} and~\cite{[FRS1]}. We consider a model in which $\bX=(\bX^{1},...,\bX^{d})\in {\mathbb{R}}^{d}$ with $d=3$
represents the position of the ``typical particle'' and $X=(X^{1},...,X^{d})\in{\mathbb{R}}^{d}$ is its velocity. Throughout this subsection, barred letters refer to positions, and bold letters $\boX=(\bX,X)$ denote the position-velocity pair. Then, the position
follows the dynamics given by the velocity
\begin{equation*}
\bX_{s,t}=\bX_{0}+\int_{s}^{t}X_{s,r}dr,
\end{equation*}
where $\bX_0$ is an integrable random variable. As for the velocity, $X_{s,t}$ follows the equation
\begin{equation*}
X_{s,t}=X_{0}+\int_{s}^{t}\int_{({\mathbb{R}}^{d}\times {\mathbb{R}}^{d})\times E\times {\mathbb{R}}_{+}} c(v,z, X_{s,r-})1_{\{u\leq \gamma(v,X_{s,r-})\}}\times \beta (\bv,\bX_{s,r-}) N(d\bov,dz,du,dr)
\end{equation*}
where $\bov=(\bv,v)$, $\gamma(v,x)=|x-v|^a$, $\beta \in C_{b}^{1}({\mathbb{R}}^{d}\times {\mathbb{R}}^{d})$ and $N$ is a Poisson point measure of intensity
\begin{equation*}
\botheta_{s,r}(d\bov )\mu (dz)dudr\quad \text{with}\quad
\botheta_{s,r}(d \bov)=\P((\bX_{s,r},X_{s,r})\in d\bov).
\end{equation*}
Here, as in the case of the homogeneous equation, $E=[0,\pi ]\times [0,2\pi ]$ and $z=(\zeta ,\varphi )$ and the measure $\mu (dz)=\zeta ^{-(1+a)} d\zeta d\varphi$, $a\in (0,1)$.
We view these dynamics as a system in dimension $2d$ (typically with $d=3$). We denote $\boxx=(\bx,x)$, $\bov=(\bv,v)$ and $\boX=(\bX,X)$. The drift is then given by
\begin{eqnarray}
\bob^{i}(\boxx) &=&\begin{cases} x^{i}\quad \text{for }i=1,...,d, \\
0\quad \text{for }i=d+1,...,2d,
\end{cases}\label{e1}
\end{eqnarray}
and the collision kernel (cross section) is
\begin{equation}
\boc (\bov,z,\boxx)=c (v,z,x) \beta (\bv,\bx) \label{e2},
\end{equation}
where~$c$ is defined as in the Boltzmann equation by~\eqref{def_c_3D} with $z=(\zeta ,\varphi )\in E$, and
$$ \bogamma (\boxx,\bov)=\left\vert v-x\right\vert^{a},$$
with $a\in (0,1)$. The equation~(\ref{we2}) associated to these coefficients is the ($d$-dimensional) Enskog equation. In the particular case $\beta(\bv,\bx)=1$, we recover the homogeneous Boltzmann equation, and $\bar{X}$ is just the time-integral of the process. The specificity of the inhomogeneous case is illustrated by the following example. Let us take $R>0$, let $i_{R}$ be a regularized version of the
indicator function $1_{x<R}$ and define $\beta_{R}(\bv,\bx)=i_{R}(\left\vert \bx-\bv \right\vert )$. Then, the coefficient $\beta_{R}(\bv,\bx)$ means that only particles that are within distance $R$ of each other may collide.
We now define the truncated coefficients $\boc_\Gamma$ and $\bogamma_\Gamma$ for $\Gamma>0$. We still denote, for $v\in\R^d$, $H_{\Gamma }(v)=v\times \frac{\left\vert v\right\vert \wedge \Gamma }{\left\vert v\right\vert }$, and we define $\boc_\Gamma (\bov,z,\boxx)=c_\Gamma(v,z,x) \beta_\Gamma(\bv,\bx)$ with $\beta_\Gamma(\bv,\bx):=\beta(H_\Gamma(\bv),H_\Gamma(\bx))$ and $\bogamma_\Gamma (\boxx,\bov)=\left\vert H_\Gamma(v)-H_\Gamma(x)\right\vert^{a}$.
\begin{lemma}\label{TR2} Assume that $\beta \in C_{b}^{1}({\mathbb{R}}^{d}\times {\mathbb{R}}^{d})$ and that~\eqref{H1} and~\eqref{H2} hold. Then, for every $\Gamma \ge 1$, the triplet $(\bob,\boc_\Gamma,\bogamma_\Gamma)$ satisfies~$\mathbf{(A)}$ with $L_{\mu}(\boc_\Gamma,\bogamma_\Gamma )=C \Gamma ^{a+1}$ for some constant $C>0$.\end{lemma}
The proof is postponed to Appendix~\ref{App_proofs}. As for the Boltzmann equation, this lemma allows by Theorem~\ref{flow} to construct the flow, and then by Theorems~\ref{Weq} and~\ref{ExistenceRepr}, a weak solution and a probabilistic representation for the Enskog-Boltzmann equation with truncated coefficients. The convergence when $\Gamma \to \infty$ remains an open problem.
\subsection{A Boltzmann equation with a mean field interaction on the position}
The fact that our approach allows us to easily mix Boltzmann and McKean-Vlasov interactions gives more flexibility in modelling the behaviour of particles. Here,
we give a very simple example that is derived from the Boltzmann-Enskog
equation discussed above. In that equation, interactions act on both the position and the velocity through a Poisson point measure. More precisely, when the function $\beta$ is an indicator function, collisions only occur when the particle is close enough to the one sampled by the Poisson point measure. Here, we consider an alternative modelling with a mean field interaction on the position. Precisely, we consider the coefficient~$\bob$ defined by~\eqref{e1}, and the coefficients $\boc$ and $\bogamma$ given by
\begin{equation}\label{BE_meanfield}
\boc(v,z,\boxx)=c(v,z,x) \text{ and } \bogamma(v,\boxx,\rho)=|v-x|^a\int_{\R^d}p_{R}(\bx-\bx')\rho (d \bx'),
\end{equation}
where $p_R(x)=\frac 1 {(2\pi R^2)^{d/2}}\exp\left(-\frac{|x|^2}{2R^2}\right)$ is the Gaussian kernel. Thus, the higher the density of particles in the neighbourhood of~$\bx$, the more likely the collisions. This corresponds to the following dynamics for $\boX=(\bX,X)$:
\begin{equation}\label{BE_MFeq}
\begin{cases} \bX_{s,t}=\bX_{0}+\int_{s}^{t}X_{s,r}dr,\\
X_{s,t}=X_{0}+\int_{s}^{t}\int_{ {\mathbb{R}}^{d}\times E\times {\mathbb{R}}_{+}} c(v,z, X_{s,r-})1_{\{u\leq \bogamma(v,\boX_{s,r-},\bar{\theta}_{s,r})\}} N(dv,dz,du,dr),
\end{cases}
\end{equation}
where $N$ is a Poisson point measure with intensity $\theta_{s,r}(dv)\mu(dz)dudr$. Here, $\botheta_{s,r}(d \bov)=\P((\bX_{s,r},X_{s,r})\in d\bov)$ is the probability distribution of $(\bX_{s,r},X_{s,r})$ with marginal laws $\bar{\theta}_{s,r}(d\bv)=\P(\bX_{s,r}\in d\bv)$ and $\theta_{s,r}(dv)=\P(X_{s,r}\in dv)$.
To underline the different types of interaction (Boltzmann and mean-field), we also write the interacting particle system corresponding to this equation: for $i\in\{1,\dots,N\}$,
\begin{equation}\label{BE_MFeq_IPS}
\begin{cases} \bX^i_{t}=\bX^i_{0}+\int_{0}^{t}X^i_{r}dr,\\
X^i_{t}=X^i_{0}+\int_{0}^{t}\int_{ {\mathbb{R}}^{d}\times E\times {\mathbb{R}}_{+}} c(v,z, X^i_{r-})1_{\{u\leq \frac 1N \sum_{j=1}^N |X^i_{r-}-X^j_{r-}|^a p_R(\bX^i_{r}-\bX^j_{r})\}} N^i(dv,dz,du,dr),
\end{cases}
\end{equation}
where $N^i$, $i=1,\dots,N$, are independent Poisson point measures with intensity $\frac 1 N \sum_{j=1}^N\delta_{X^j_{r-}}(dv)\mu(dz) du dr$.
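The empirical rate appearing in the indicator of~\eqref{BE_MFeq_IPS} can be evaluated for all particles at once. The sketch below (an illustration with hypothetical parameter values) computes $\mathrm{rate}_i = \frac1N\sum_{j=1}^N |X^i-X^j|^a\, p_R(\bar{X}^i-\bar{X}^j)$ from the arrays of positions and velocities.

```python
import numpy as np

def p_R(x, R):
    """Gaussian kernel p_R in dimension d = x.shape[-1]."""
    d = x.shape[-1]
    return (np.exp(-np.sum(x ** 2, axis=-1) / (2.0 * R ** 2))
            / (2.0 * np.pi * R ** 2) ** (d / 2.0))

def jump_rates(pos, vel, a, R):
    """Empirical jump rates of the particle system:
    rate_i = (1/N) * sum_j |X_i - X_j|^a * p_R(Xbar_i - Xbar_j)."""
    N = len(vel)
    dv = np.linalg.norm(vel[:, None, :] - vel[None, :, :], axis=-1) ** a
    dp = p_R(pos[:, None, :] - pos[None, :, :], R)
    return (dv * dp).sum(axis=1) / N
```

Particles sitting in a crowded region see larger kernel factors $p_R$, hence collide more often, which is exactly the effect encoded by $\bogamma$ in~\eqref{BE_meanfield}.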
We now define for $\Gamma>0$ the truncated coefficients $\boc_\Gamma(v,z,\boxx)=c(H_\Gamma(v),z,H_\Gamma(x))$ and $\bogamma_\Gamma (v,\boxx,\rho)=|H_\Gamma(v)-H_\Gamma(x)|^a\int_{\R^d}p_{R}(\bx-\bx')\rho (d \bx')$. The proof of next lemma can be found in Appendix~\ref{App_proofs}.
\begin{lemma}\label{TR3}
Assume that~\eqref{H1} and~\eqref{H2} hold. Then, for every $\Gamma \ge 1$, the triplet $(\bob,\boc_\Gamma,\bogamma_\Gamma)$ defined as the truncation of~\eqref{e1} and~\eqref{BE_meanfield} satisfies~$\mathbf{(A)}$ with $L_{\mu}(\boc_\Gamma,\bogamma_\Gamma )=C \Gamma ^{a+1}$ for some constant $C>0$.
\end{lemma}
Thanks to Lemma~\ref{TR3}, we can apply Theorem~\ref{flow} to get the existence and uniqueness of a flow~$\theta_{s,t}^\Gamma$ corresponding to this equation, and Theorem~\ref{ExistenceRepr} to get a probabilistic representation associated to this flow. Again, one needs further assumptions to justify that this produces, when $\Gamma \to \infty$, a solution to~\eqref{BE_MFeq}. This is left for further research.